In my last post, I looked at shortcomings in edtech research, mostly from outside the world of ELT. I made a series of recommendations of ways in which such research could become more useful. In this post, I look at two very recent collections of ELT edtech research. The first of these is Digital Innovations and Research in Language Learning, edited by Mavridi and Saumell, and published this February by the Learning Technologies SIG of IATEFL. I’ll refer to it here as DIRLL. It’s available free to IATEFL LT SIG members, and can be bought for $10.97 as an ebook on Amazon (US). The second is the most recent edition (February 2020) of the Language Learning & Technology journal, which is open access and available here. I’ll refer to it here as LLTJ.

In both of these collections, the focus is not on ‘technology per se, but rather issues related to language learning and language teaching, and how they are affected or enhanced by the use of digital technologies’. However, they are very different kinds of publication. Nobody involved in the production of DIRLL was paid in any way (to the best of my knowledge) and the book, in keeping with its provenance from a teachers’ association, has ‘a focus on the practitioner as teacher-researcher’. Almost all of the contributing authors are university-based, but they are typically involved more in language teaching than in research. With one exception (a grant from the EU), their work was unfunded.

LLTJ, which appears three times a year, is funded by two American universities and published by the University of Hawaii Press. The editors and associate editors are well-known scholars in their fields. The journal’s impact factor is high, close to that of the paywalled ReCALL (published by Cambridge University Press), which is the highest-ranking journal in the field of CALL. The contributing authors are all university-based, many with a string of published articles (in prestige journals), chapters or books behind them. At least six of the studies were funded by national grant-awarding bodies.

I should begin by making clear that there was much in both collections that I found interesting. However, it was not usually the research itself that I found informative, but the literature review that preceded it. Two of the chapters in DIRLL were not really research, anyway: one presented a template for evaluating ICT-mediated tasks in CLIL, the other advocated comics as a resource for language teaching. Both of these were new, useful and interesting to me. LLTJ included a valuable literature review of research into VR in FL learning (but no actual new research). With some exceptions in both collections, though, I felt that I would have been better off curtailing my reading after the reviews. Admittedly, there wouldn’t be much in the way of literature reviews if there were no previous research to report …

It was no surprise to see that the learners who were the subjects of this research were overwhelmingly university students. In fact, only one article (about a high-school project in Israel, reported in DIRLL) was not about university students. The research areas reflected this bias towards tertiary contexts: online academic reading skills, academic writing, online reflective practices in teacher training programmes, etc.

In a couple of cases, the selection of experimental subjects seemed plain bizarre. Why, if you want to find out about the extent to which Moodle use can help EAP students become better academic readers (in DIRLL), would you investigate this with a small volunteer cohort of postgraduate students of linguistics, who had previous experience of using Moodle and experience of teaching? Is a less representative sample imaginable? Why, if you want to investigate the learning potential of the English File Pronunciation app (reported in LLTJ), which is clearly most appropriate for A1–B1 levels, would you do this with a group of C1-level undergraduates following a course in phonetics as part of an English Studies programme?

More problematic, in my view, was the small sample size in many of the research projects. The Israeli virtual high school project (DIRLL), referred to above, started out with only 11 students, but 7 dropped out, primarily, it seems, because of institutional incompetence: ‘the project was probably doomed […] to failure from the start’, according to the author. Interesting as this was as an account of how not to set up a project of this kind, it is simply impossible to draw any conclusions from 4 students about the potential of a VLE for ‘interaction, focus and self-paced learning’. The questionnaire investigating experience of and attitudes towards VR (in DIRLL) was completed by only 7 (out of 36 possible) students and 7 (out of 70+ possible) teachers. The author acknowledges that ‘no great claims can be made’, but then goes on to note the generally ‘positive attitudes to VR’. Perhaps those who did not volunteer had different attitudes? We will never know. The study of motivational videos in tertiary education (DIRLL) started off with 15 subjects, but 5 did not complete the necessary tasks. The research into L1 use in videoconferencing (LLTJ) started off with 10 experimental subjects, all with the same L1 and similar cultural backgrounds, but there was no data available from 4 of them (because they never switched into L1). The author claims that the paper demonstrates ‘how L1 is used by language learners in videoconferencing as a social semiotic resource to support social presence’ – something which, after reading the literature review, we already knew. But the paper also demonstrates quite clearly how L1 is not used by language learners in videoconferencing as a social semiotic resource to support social presence. In all these cases, it is the participants who did not complete, or the potential participants who did not want to take part, that interest me most.
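To make the sample-size point concrete, here is a minimal sketch in Python of how wide a 95% confidence interval is when only a handful of people respond. The numbers are invented for illustration, not taken from any of the studies:

```python
# Back-of-the-envelope check: the Wilson 95% confidence interval for a
# proportion estimated from a tiny sample (illustrative numbers only).
from math import sqrt

def wilson_interval(successes, n, z=1.96):
    """Wilson score interval for a binomial proportion (z=1.96 gives 95%)."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    margin = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - margin, centre + margin

# Suppose 6 of 7 volunteer respondents report a positive attitude to VR:
print(wilson_interval(6, 7))    # ~(0.49, 0.97): consistent with almost anything
print(wilson_interval(60, 70))  # ~(0.76, 0.92): far more informative
```

With 7 respondents, even a near-unanimous result is statistically compatible with anything from a split opinion to total agreement, which is precisely why ‘no great claims can be made’.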

Unsurprisingly, the LLTJ articles had larger sample sizes than those in DIRLL, but in both collections the length of the research was limited. The production of one motivational video (DIRLL) does not really allow us to draw any conclusions about the development of students’ critical thinking skills. Two four-week interventions do not really seem long enough to me to discover anything about learner autonomy and Moodle (DIRLL). An experiment looking at different feedback modes needs more than two written assignments to reach any conclusions about student preferences (LLTJ).

More research might well be needed to compensate for the short-term projects with small sample sizes, but I’m not convinced that this would always help. Too often, I was unable to reach any conclusions because insufficient information was given about the content of the technology-mediated tools being used. A gamified Twitter environment was developed in one project (DIRLL), using principles derived from contemporary literature on gamification. The authors concluded that the game design ‘failed to generate interaction among students’, but without knowing a lot more about the specific details of the activity, it is impossible to say whether the problem was the principles or the particular instantiation of those principles. Another project, looking at the development of pronunciation materials for online learning (LLTJ), came to the conclusion that online pronunciation training was helpful – better than none at all. Claims are then made about the value of the method used (called ‘innovative Cued Pronunciation Readings’), but this is not compared to any other method or materials, and only a very small selection of these materials is illustrated. Basically, the reader of this research has no choice but to take things on trust. The study looking at the use of Alexa to help listening comprehension and speaking fluency (LLTJ) cannot really tell us anything about intelligent personal assistants (IPAs) unless we know more about the particular way that Alexa is being used. Here, it seems that the students were using Alexa in an interactive storytelling exercise, but so little information is given about the exercise itself that I didn’t actually learn anything at all. The author’s own conclusion is that the results, such as they are, need to be treated with caution. Nevertheless, he adds that ‘the current study illustrates that IPAs may have some value to foreign language learners’.

This brings me on to my final gripe. To be told that IPAs like Alexa may have some value to foreign language learners is to be told something that I already know. This wasn’t the only time this happened during my reading of these collections. I appreciate that research cannot always tell us something new and interesting, but a little more often would be nice. I ‘learnt’ that goal-setting plays an important role in motivation and that gamification can boost short-term motivation. I ‘learnt’ that reflective journals can take a long time for teachers to look at, and that reflective video journals are also very time-consuming. I ‘learnt’ that peer feedback can be very useful. I ‘learnt’ from two papers that intercultural difficulties may be exacerbated by online communication. I ‘learnt’ that text-to-speech software is pretty good these days. I ‘learnt’ that multimodal literacy can, most frequently, be divided up into visual and auditory forms.

With the exception of a piece about online safety issues (DIRLL), I did not once encounter anything which hinted that there may be problems in using technology. No mention of the use to which student data might be put. No mention of the costs involved (except for the observation that many students would not be happy to spend money on the English File Pronunciation app) or the cost-effectiveness of digital ‘solutions’. No consideration of the institutional (or other) pressures (or the reasons behind them) that may be applied to encourage teachers to ‘leverage’ edtech. No suggestion that a zero-tech option might actually be preferable. In both collections, the language used is invariably positive, or, at least, technology is associated with positive things: uncovering the possibilities, promoting autonomy, etc. Even if the focus of these publications is not on technology per se (although I think this claim doesn’t really stand up to close examination), it’s a little disingenuous to claim (as LLTJ does) that the interest is in how language learning and language teaching are ‘affected or enhanced by the use of digital technologies’. The reality is that the overwhelming interest is in potential enhancements, not potential negative effects.

I have deliberately not mentioned any names in referring to the articles I have discussed. I would, though, like to take my hat off to the editors of DIRLL, Sophia Mavridi and Vicky Saumell, for attempting to do something a little different. I think that Alicia Artusi and Graham Stanley’s article (DIRLL) about CPD for ‘remote’ teachers was very good and should interest the huge number of teachers working online. Chryssa Themelis and Julie-Ann Sime have kindled my interest in the potential of comics as a learning resource (DIRLL). Yu-Ju Lan’s article about VR (LLTJ) is surely the most up-to-date, go-to article on this topic. There were other pieces, or parts of pieces, that I liked, too. But, to me, it’s clear that what is needed is less ‘more research’ than (1) better and more critical research, and (2) more digestible summaries of research.

Comments
  1. Thanks for your review of the IATEFL LTSIG book, Phil, and your kind words about the chapter I wrote with Alicia Artusi. I haven’t had the chance to read the other chapters of the book yet, so I’ll perhaps come back and comment on those, and on whether I agree with your opinion, after I’ve had the chance to read them. I do, however, agree that there is a general need for more critical research when it comes to the use of educational technology. I would also say that we need to encourage more practitioners to carry out research, and, as an LTSIG committee member (newsletter editor), I’m proud to say this is one thing the LTSIG tries to do. For me, a lot of the research I have seen on language learning and teaching seems to be carried out by people so far divorced from the realities of the classroom that it is of no use whatsoever to teachers actually teaching. Not to mention the research locked away behind paywalled journals that may be of interest to teachers but that most (if any) of them cannot actually afford to read. I think, therefore, that initiatives such as this, which make research accessible and relevant to teachers, are to be applauded. I can also confirm that nobody involved in the writing of this book received payment.

  2. Grzegorz Spiewak says:

    I enjoyed these two new posts immensely, not least because you articulate a point that has always bothered me as a reader of a great many reports on *research* studies in psychology: as a rule, they tend to be performed on small groups of undergrads, and the conclusions routinely take giant leaps, making major claims about intelligence, motivation, or human nature as such … Sympathetic as one might want to be to budget strictures and institutional pressure to publish at all costs (pun intended), it’s hard to escape the conclusion that *a lot more [solid] research is needed* before any of these bombastic pronouncements – in both psych and edTech – could ever be taken seriously. Thanks a lot again for a great read!

  3. Hello Philip,

    Thank you for this review. It certainly made me think, and it also brought to my attention your previous post titled “More research needed”, the one in which you provide a prescriptive “checklist of things” aimed at helping researchers to conduct good research. I read both posts with interest and I’m summing up my thoughts in my capacity as the lead editor of DIRLL, IATEFL LTSIG Coordinator and researcher. I will refer to them as posts 1 and 2.

    You say that very little research considers the downsides of EdTech adoption, e.g. online safety, privacy and data security (post 1). There is, however, a whole chapter in the book which researches and critiques exactly this. I felt that this was rather overlooked in order to move on to (and elaborate on) what you feel is missing. I’m saying this because you spent less than a line on the former and more than 10 lines on the latter (quoting you: “with the exception of a piece about online safety issues, I did not once encounter anything …”). I believe readers would benefit from a more balanced account of this.

    You say (post 2) that only one chapter was not about university students. This is simply not true. There is another chapter exclusively on young learners, more specifically pre-teens and teens, providing age-specific findings. While I agree that research in young learner contexts is much needed, we should not overlook the significant ethical considerations involved in including children in any research process. Access, informed consent from parents/carers, anonymity and safeguarding can all be particularly challenging. In some countries, researchers may need to obtain official government clearance, such as a criminal record check (known as a DBS check in the UK), before research can even begin. This may take a good two months. School or social care authorities may impose requirements that raise other ethical issues, e.g. that research is not carried out during class, restricting data collection to playtime. The researcher then needs to consider the potential issues raised when research is carried out during the child’s free time, when they may want to be with their friends or play. There are many other ethical issues involved in conducting research with minors. This is by no means to say that we should not be asking for more research on young learners. But the above barriers need to be acknowledged and included in any constructive criticism geared towards encouraging teachers to engage in such research.

    Moving on, I found that your tip about sample size – “Ensure that the sample size is big enough to mean something” (post 1) – hypes size too much. Even statistical researchers would add that size alone is of little value unless other criteria are also in place, e.g. the sample needs to be gathered completely at random and needs to have the same demographic composition as the general population. Of course, depending on the methodology, a big, representative, randomly selected sample can determine the validity of a project, but research is not only statistical. Phenomenology, case study, narrative research, etc. may involve a few individuals, but this doesn’t mean the researcher cannot go really deep and actually “say something”. On the contrary, one can conduct a survey with 1000+ participants that may mean absolutely nothing, as the sketch below illustrates.
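    To illustrate that last point, a quick, entirely hypothetical simulation in Python (all numbers invented): a large self-selected sample stays biased however big it gets, while a small random one at least centres on the truth.

```python
# Hypothetical simulation: sample size does not rescue a biased sample.
import random

random.seed(1)

# Imaginary population of 10,000 teachers, 30% of whom like a given tool.
population = [1] * 3000 + [0] * 7000
random.shuffle(population)

# Small but random sample: noisy, yet centred on the true 0.30.
small_random = random.sample(population, 50)
print("random n=50: ", sum(small_random) / 50)

# Large self-selected sample: fans volunteer five times as readily as
# sceptics, so the estimate lands near 0.68 no matter how large n gets.
volunteers = [x for x in population if random.random() < (0.5 if x else 0.1)]
print("biased n=%d:" % len(volunteers), sum(volunteers) / len(volunteers))
```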

    I couldn’t agree more that longitudinal studies ARE needed. But these do not happen overnight, and a project may start small before it moves on to something longer. We should encourage small projects to develop and move on to the next phase, especially when they are initiated by practitioners with no financial motive to present the technology as the silver bullet. I don’t think this happened in your review. In fact, I found the language used quite dismissive, e.g. “we can’t draw any conclusions”, “two months do not really seem long enough to discover anything about Moodle”, “it is simply impossible to draw any conclusions” etc. Indeed, I found the language used in both posts overwhelmingly negative.

    I would like to think that your intent was to provide a truly constructive review – a review that would encourage researcher and reader reflexivity, which would, in turn, potentially lead to more and better research. We most certainly need more research, but we also need to encourage, support and (occasionally) inspire people to do it.

    Thank you again,

    Sophia
