
My attention was recently drawn (thanks to Grzegorz Śpiewak) to a free publication from OUP. It’s called ‘Multimodality in ELT: Communication skills for today’s generation’ (Donaghy et al., 2023) and it’s what OUP likes to call a ‘position paper’: it offers ‘evidence-based recommendations to support educators and learners in their future success’. Its topic is multimodal (or multimedia) literacy, a term used to describe the importance for learners of being able ‘not just to understand but to create multimedia messages, integrating text with images, sounds and video to suit a variety of communicative purposes and reach a range of target audiences’ (Dudeney et al., 2013: 13).

Grzegorz noted that the author of this paper uses ‘positively charged, unhedged language to describe what is arguably a most complex problem area’. As an example, he takes the summary of the first section and circles the questionable and / or unsubstantiated claims. It’s just one example from a text that reads more like a ‘manifesto’ than a balanced piece of evidence-reporting. The verb ‘need’ (in the sense of ‘must’, as in ‘teachers / learners / students need to …’) appears no fewer than 57 times. The modal ‘should’ (as in ‘teachers / learners / students should …’) clocks up 27 appearances.
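
Such counts are easy to reproduce, incidentally. Here is a minimal sketch in Python, assuming the paper’s text has first been extracted from the PDF to a plain-text file (the filename is hypothetical):

```python
import re

# Hypothetical filename: assumes the paper's text has already been
# extracted from the PDF (e.g. with a tool like pdftotext).
text = open("multimodality_position_paper.txt", encoding="utf-8").read().lower()

# 'need(s) to' captures the obligation sense counted above;
# a bare count of 'need' would also sweep up noun uses.
need_to = len(re.findall(r"\bneeds? to\b", text))
should = len(re.findall(r"\bshould\b", text))

print(f"'need(s) to': {need_to}, 'should': {should}")
```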

What is it then that we all need to do? Essentially, the argument is that English language teachers need to develop their students’ multimodal literacy by incorporating more multimodal texts and tasks (videos and images) in all their lessons. The main reason for this appears to be that, in today’s digital age, communication is more often multimodal than monomodal (i.e. written or spoken text alone). As an addendum, we are told that multimodal classroom practices are a ‘fundamental part of inclusive teaching’ in classes with ‘learners with learning difficulties and disabilities’. In case you thought it was ironic that such an argument would be put forward in a flat monomodal pdf, OUP also offers the same content through a multimodal ‘course’ with text, video and interactive tasks.

It might all be pretty persuasive, if it weren’t so overstated. Here are a few of the complex problem areas.

What exactly is multimodal literacy?

We are told in the paper that there are five modes of communication: linguistic, visual, aural, gestural and spatial. Multimodal literacy consists, apparently, of the ability

  • to ‘view’ multimodal texts (noticing the different modes, and, for basic literacy, responding to the text on an emotional level, and, for more advanced literacy, responding to it critically)
  • to ‘represent’ ideas and information in a multimodal way (posters, storyboards, memes, etc.)

I find this frustratingly imprecise. First: ‘viewing’. Noticing modes and reacting emotionally to a multimedia artefact do not take anyone very far on the path towards multimodal literacy, even if they are necessary first steps. It is only when we move towards a critical response (understanding the relative significance of different modes and problematizing our initial emotional response) that we can really talk about literacy (see the ‘critical literacy’ of Pegrum et al., 2018). We’re basically talking about critical thinking, a concept as vague and contested as any out there. Responding to a multimedia artefact ‘critically’ can mean more or less anything and everything.

Next: ‘representing’. What is the relative importance of ‘viewing’ and ‘representing’? What kinds of representations (artefacts) are important, and which are not? Presumably, they are not all of equal importance. And, whichever artefact is chosen as the focus, a whole range of technical skills will be needed to produce the artefact in question. So, precisely what kind of representing are we talking about?

Priorities in the ELT classroom

The Oxford authors write that ‘the main focus as English language teachers should obviously be on language’. I take this to mean that the ‘linguistic mode’ of communication should be our priority. This seems reasonable, since it’s hard to imagine any kind of digital literacy without some reading skills preceding it. But, again, the question of relative importance rears its ugly head. The time available for language learning and teaching is always limited. Time that is devoted to the visual, aural, gestural or spatial modes of communication is time that is not devoted to the linguistic mode.

There are, too, presumably, some language teaching contexts (I’m thinking in particular about some adult, professional contexts) where the teaching of multimodal literacy would be completely inappropriate.

Multimodal literacy is a form of digital literacy. Writers about digital literacies are fond of saying things like ‘digital literacies are as important to language learning as […] reading and writing skills’ or that it is ‘crucial for language teaching to […] encompass the digital literacies which are increasingly central to learners’ […] lives’ (Pegrum et al., 2022). The question then arises: how important, in relative terms, are the various digital literacies? Where does multimodal literacy stand?

The Oxford authors summarise their view as follows:

There is a need for a greater presence of images, videos, and other multimodal texts in ELT coursebooks and a greater focus on using them as a starting point for analysis, evaluation, debate, and discussion.

My question to them is: greater than what? Typical contemporary courseware is already a whizzbang multimodal jamboree. There seem to me to be more pressing concerns with most courseware than supplementing it with visuals or clickables.

Evidence

The Oxford authors’ main interest is unquestionably in the use of video. They recommend extensive video viewing outside the classroom and digital story-telling activities inside. I’m fine with that, so long as classroom time isn’t wasted on getting to grips with a particular digital tool (e.g. a video editor, which, a year from now, will have been replaced by another video editor).

I’m fine with this because it involves learners doing meaningful things with language, and there is ample evidence to indicate that a good way to acquire language is to do meaningful things with it. However, I am less than convinced by the authors’ claim that such activities will strengthen ‘active and critical viewing, and effective and creative representing’. My scepticism derives firstly from my unease about the vagueness of the terms ‘viewing’ and ‘representing’, but I have bigger reservations.

There is much debate about the extent to which general critical thinking can be taught. General critical viewing has the same problems. I can apply critical viewing skills to some topics, because I have reasonable domain knowledge. In my case, it’s domain knowledge that activates my critical awareness of rhetorical devices, layout, choice of images and pull-out quotes, multimodal add-ons and so on. But without the domain knowledge, my critical viewing skills are likely to remain uncritical.

Perhaps most importantly of all, there is a lack of reliable research about ‘the extent to which language instructors should prioritize multimodality in the classroom’ (Kessler, 2022: 552). There are those, like the authors of this paper, who advocate for a ‘strong version’ of multimodality. Others go for a ‘weak version’ ‘in which non-linguistic modes should only minimally support or supplement linguistic instruction’ (Kessler, 2022: 552). And there are others who argue that multimodal activities may actually detract from or stifle L2 development (e.g. Manchón, 2017). In the circumstances, all the talk of ‘needs to’ and ‘should’ is more than a little premature.

Assessment

The authors of this Oxford paper rightly note that, if we are to adopt a multimodal approach, ‘it is important that assessment requirements take into account the multimodal nature of contemporary communication’. The trouble is that there are no widely used assessments (to my knowledge) that do this (including Oxford’s own tests). English language reading tests (like the Oxford Test of English) measure the comprehension of flat printed texts, as a proxy for reading skills. This is not the place to question the validity of such reading tests. Suffice to say that ‘little consensus exists as to what [the ability to read another language] entails, how it develops, and how progress in development can be monitored and fostered’ (Koda, 2021).

No doubt there are many people beavering away at trying to figure out how to assess multimodal literacy, but the challenges they face are not negligible. Twenty-first century digital (multimodal) literacy includes such things as knowing how to change the language of an online text to your own (and vice versa), how to bring up subtitles, how to convert written text to speech, and how to generate transcripts of audio. All such skills may well be very valuable in this digital age, and all of them limit the need to learn another language.

Final thoughts

I can’t help but wonder why Oxford University Press should bring out a ‘position paper’ that is so at odds with their own publishing and assessing practices, and so at odds with the paper recently published in their flagship journal, ELT Journal. There must be some serious disconnect between the Marketing Department, which commissions papers such as these, and other departments within the company. Why did they allow such overstatement, when it is well known that many ELT practitioners (i.e. their customers) have the view that ‘linguistically based forms are (and should be) the only legitimate form of literacy’ (Choi & Yi, 2016)? Was it, perhaps, the second part of the title of this paper that appealed to the marketing people (‘Communication Skills for Today’s Generation’) and they just thought that ‘multimodality’ had a cool, contemporary ring to it? Or does the use of ‘multimodality’ help the marketing of courses like Headway and English File with additional multimedia bells and whistles? As I say, I can’t help but wonder.

If you want to find out more, I’d recommend the ELT Journal article, which you can access freely without giving your details to the marketing people.

Finally, it is perhaps time to question the logical connection between the fact that much reading these days is multimodal and the idea that multimodal literacy should be taught in a language classroom. Much reading that takes place online, especially with multimodal texts, could be called ‘hyper reading’, characterised as ‘sort of a brew of skimming and scanning on steroids’ (Baron, 2021: 12). Is this the kind of reading that should be promoted with language learners? Baron (2021) argues that the answer to this question depends on the level of reading skills of the learner. The lower the level, the less beneficial it is likely to be. But for ‘accomplished readers with high levels of prior knowledge about the topic’, hyper reading may be a valuable approach. For many language learners, monomodal deep reading, which demands ‘slower, time-demanding cognitive and reflective functions’ (Baron, 2021: x – xi), may well be much more conducive to learning.

References

Baron, N. S. (2021) How We Read Now. Oxford: Oxford University Press

Choi, J. & Yi, Y. (2016) Teachers’ Integration of Multimodality into Classroom Practices for English Language Learners. TESOL Journal, 7 (2): 304 – 327

Donaghy, K. (author), Karastathi, S. (consultant), Peachey, N. (consultant), (2023). Multimodality in ELT: Communication skills for today’s generation [PDF]. Oxford University Press. https://elt.oup.com/feature/global/expert/multimodality (registration needed)

Dudeney, G., Hockly, N. & Pegrum, M. (2013) Digital Literacies. Harlow: Pearson Education

Kessler, M. (2022) Multimodality. ELT Journal, 76 (4): 551 – 554

Koda, K. (2021) Assessment of Reading. In Chapelle, C. A. (Ed.) The Encyclopedia of Applied Linguistics. Wiley. https://doi.org/10.1002/9781405198431.wbeal0051.pub2

Manchón, R. M. (2017) The Potential Impact of Multimodal Composition on Language Learning. Journal of Second Language Writing, 38: 94 – 95

Pegrum, M., Dudeney, G. & Hockly, N. (2018) Digital Literacies Revisited. The European Journal of Applied Linguistics and TEFL, 7 (2): 3 – 24

Pegrum, M., Hockly, N. & Dudeney, G. (2022) Digital Literacies 2nd Edition. New York: Routledge

Last September, Cambridge published a ‘Sustainability Framework for ELT’, which attempts to bring together environmental, social and economic sustainability. It’s a kind of 21st century skills framework and is designed to help teachers ‘to integrate sustainability skills development’ into their lessons. Among the sub-skills that are listed, a handful grabbed my attention:

  • Identifying and understanding obstacles to sustainability
  • Broadening discussion and including underrepresented voices
  • Understanding observable and hidden consequences
  • Critically evaluating sustainability claims
  • Understanding the bigger picture

Hoping to brush up my skills in these areas, I decided to take a look at the upcoming BETT show in London, which describes itself as ‘the biggest Education Technology exhibition in the world’. BETT and its parent company, Hyve, ‘are committed to redefining sustainability within the event industry and within education’. They are doing this by reducing their ‘onsite printing and collateral’. (‘Event collateral’ is an interesting event-industry term that refers to all the crap that is put into delegate bags, intended to ‘enhance their experience of the event’.) BETT and Hyve are encouraging all sponsors to go paperless, too, ‘switching from seat-drop collateral to QR codes’, and delegate bags will no longer be offered. They are partnering with various charities to donate ‘surplus food and furniture’ to local community projects, they are donating to other food charities that support families in need, and they are recycling all of the aisle banners into tote bags. Keynote speakers will include people like Sally Uren, CEO of ‘Forum for the Future’, who will talk about ‘Transforming carbon neutral education for a just and regenerative future’.

BETT and Hyve want us to take their corporate and social responsibility very seriously. All of these initiatives are very commendable, even though I wouldn’t go so far as to say that they will redefine sustainability within the event industry and education. But there is a problem – and it’s not that the world is already over-saturated with recycled tote bags. As the biggest jamboree of this kind in the world, the show attracts over 600 vendors and over 30,000 visitors, with over 120 countries represented. Quite apart from all the collateral and surplus furniture, the carbon and material footprint of the event cannot be negligible. Think of all those start-up solution-providers flying and driving into town, Airbnb-ing for the duration, and Ubering around town after hours, for a start.

But this is not really the problem, either. Much as the event likes to talk about ‘driving impact and improving outcomes for teachers and learners’, the clear and only purpose of the event is to sell stuff. It is to enable the investors in the 600+ edtech solution-providers in the exhibition area to move towards making a return on their investment. If we want to talk seriously about sustainability, the question that needs to be asked is: to what extent does all the hardware and software on sale contribute in any positive and sustainable way to education? Is there any meaningful social benefit to be derived from all this hardware and software, or is it all primarily just a part of a speculative, financial game? Is the corporate social responsibility of BETT / Hyve a form of green-washing to disguise the stimulation of more production and consumption? Is it all just a kind of ‘environmentalism of the rich’ (Dauvergne, 2016)?

Edtech is not the most pressing of environmental problems – indeed, there are examples of edtech that are likely more sustainable than the non-tech alternatives – but the sustainability question remains. There are at least four environmental costs to edtech:

  • The energy-greedy data infrastructures that lie behind digital transactions
  • The raw ingredients of digital devices
  • The environmentally destructive manufacture and production of digital devices
  • The environmental cost of dismantling and disposing of digital hardware (Selwyn, 2018)

Some forms of edtech are more environmentally costly than others. First, we might consider the material costs. Going back to pre-internet days, think of the countless tonnes of audio cassettes, VCR tapes, DVDs and CD-ROMs. Think of the discarded playback devices, language laboratories and IWBs. None of these are easily recyclable and most have ended up in landfill, mostly in countries that never used these products. These days the hardware that is used for edtech is more often a device that serves other non-educational purposes, but the planned obsolescence of our phones, tablets and laptops is a huge problem for sustainability.

More important now are probably the energy costs of edtech. Audio and video streaming might seem more environmentally friendly than CDs and DVDs, but, depending on how often the CD or DVD is used, the energy cost of streaming (especially high-quality video) can be much higher than using the physical format. AI ups the ante significantly (Brevini, 2022). Five years ago, a standard AI training model in linguistics was found to emit ‘more than 284 tonnes of carbon dioxide equivalent’ (Strubell et al., 2019). With exponentially greater volumes of data now being used, the environmental cost is much, much higher. Whilst VR vendors will tout the environmental benefits of cutting down on travel, getting learners together in a physical room may well have a much lower carbon footprint than meeting in the Metaverse.

When doing the calculus of edtech, we need to evaluate the use-value of the technology. Does the tech actually have any clear educational (or other social) benefit, or is its value primarily in terms of its exchange-value?

To illustrate the difference between use-value and exchange-value, I’d like to return to the beginnings of modern edtech in ELT. As the global market for ELT materials mushroomed in the 1990s, coursebook publishers realised that, for a relatively small investment, they could boost their sales by bringing out ‘new editions’ of best-selling titles. This meant a new cover, replacing a few texts and topics, making minor modifications to other content, and, crucially, adding extra features. As the years went by, these extra features became digital: CD-ROMs, DVDs, online workbooks and downloadables of various kinds. The publishers knew that sales depended on the existence of these shiny new things, even if many buyers made minimal or zero use of them. But they gave the marketing departments and sales reps a pitch, and justified an increase in unit price. Did these enhanced coursebooks actually represent any increase in use-value? Did learners make better or faster progress in English as a result? On the whole, the answer has to be an unsurprising and resounding no. We should not be surprised if hundreds of megabytes of drag-and-drop grammar practice fail to have much positive impact on learning outcomes. From the start, it was the impact on the exchange-value (sales and profits) of these products that was the driving force.

Edtech vendors have always wanted to position themselves to potential buyers as ‘solution providers’, trumpeting the use-value of what they are selling. When it comes to attracting investors, it’s a different story, one that is all about minimum viable products, scalability and return on investment.

There are plenty of technologies that have undisputed educational use-value in language learning and teaching. Google Docs, Word, Zoom and YouTube come immediately to mind. Not coincidentally, they are not technologies that were designed for educational purposes. But when you look at specifically educational technology, it becomes much harder (though not impossible) to identify unambiguous gains in use-value. Most commonly, the technology holds out the promise of improved learning, but evidence that it has actually achieved this is extremely rare. Sure, a bells-and-whistles LMS offers exciting possibilities for flipped or blended learning, but research that demonstrates the effectiveness of these approaches in the real world is sadly lacking. Sure, VR might seem to offer a glimpse of motivated learners interacting meaningfully in the Metaverse, but I wouldn’t advise you to bet on it.

And betting is what most edtech is all about. An eye-watering $16.1 billion of venture capital was invested in global edtech in 2020. What matters is not that any of these products or services have any use-value, but that they are perceived to have one. Central to this investment is the further commercialisation and privatisation of education (Williamson & Hogan, 2020). BETT is a part of this.

Returning to the development of my sustainability skills, I still need to consider the bigger picture. I’ve suggested that it is difficult to separate edtech from a consideration of capitalism, a system that needs to manufacture consumption and to expand production and markets in order to survive (Dauvergne, 2016: 48). Economic growth is the sine qua non of this system, and it is this that makes the British government (and others) so keen on BETT. Education, and edtech in particular, are rapidly growing markets. But growth is only sustainable, in environmental terms, if it is premised on things that we actually need, rather than things which are less necessary and ecologically destructive (Hickel, 2020). At the very least, as Selwyn (2021) noted, we need more diverse thinking: ‘What if environmental instability cannot be ‘solved’ simply through the expanded application of digital technologies but is actually exacerbated through increased technology use?’

References

Brevini, B. (2022) Is AI Good for the Planet? Cambridge: Polity Press

Dauvergne, P. (2016) Environmentalism of the Rich. Cambridge, Mass.: MIT Press

Hickel, J. (2020) Less Is More. London: William Heinemann

Selwyn, N. (2018) EdTech is killing us all: facing up to the environmental consequences of digital education. EduResearch Matters 22 October, 2018. https://www.aare.edu.au/blog/?p=3293

Selwyn, N. (2021) Ed-Tech Within Limits: Anticipating educational technology in times of environmental crisis. E-Learning and Digital Media, 18 (5): 496 – 510. https://journals.sagepub.com/doi/pdf/10.1177/20427530211022951

Strubell, E., Ganesh, A. & McCallum, A. (2019) Energy and Policy Considerations for Deep Learning in NLP. Cornell University: https://arxiv.org/pdf/1906.02243.pdf

Williamson, B. & Hogan, A. (2020) Commercialisation and privatisation in / of education in the context of Covid-19. Education International

This post is a piece of mediation – an attempt to help you understand the concept of mediation itself. In order to mediate this concept, I am engaging in an act of linguistic mediation, helping you to understand the language of the discourse of mediation, which may, at times, seem obscure. See, for example, the last sentence in this paragraph, a sentence which should not be taken too seriously. This is also an act of cultural mediation, a bridge between you, as reader, and the micro-culture of people who write earnestly about mediation. And, finally, since one can also mediate a text for oneself, it could also be argued that I am adopting an action-oriented approach in which I am myself a social agent and a lifelong learner, using all my plurilingual resources to facilitate pluricultural space in our multidiverse society.

Mediation has become a topic du jour since the publication of the Companion Volume of the CEFR (North et al., 2018). Since then, it has been the subject of over 20 Erasmus+ funded projects, one of which, MiLLaT (2021) – funded to the tune of €80,672 and a collaboration between universities in Poland, Czechia, Lithuania and Finland – offers a practical guide for teachers, which I’ll draw on heavily here.

This guide describes mediation as a ‘complex matter’, but I beg to differ. The guide says that ‘mediation involves facilitating understanding and communication and collaborating to construct new meaning through languaging or plurilanguaging both on the individual and social level’. Since this definition is packed with jargon, I will employ three of the six key mediation strategies to make it less opaque. These are streamlining (or restructuring) text, breaking down complicated information, and adjusting language (North & Piccardo, 2016: 457). Basically, mediation means helping to understand, in a very wide variety of ways and in the broadest possible sense. The mediation pie is big and can be sliced up in many ways: the number of categories and sub-categories makes it seem like something bigger than it is. The idea is ‘not something new or unknown’ in language teaching (MiLLaT, 2021).

What is relatively new is the language in which mediation is talked about and the way in which it is associated with other concepts, plurilingualism and pluricultural competence in particular. (Both these concepts require a separate mediating blog post to deconstruct them.) Here, though, I’ll focus briefly on the kinds of language that are used to talk about mediation. A quick glossary:

  • facilitating collaborative interaction with peers = communicative pair / group work
  • facilitating pluricultural space = texts / discussion with cultural content
  • collaborating in a group: collaborating to construct meaning = group work
  • facilitating communication in delicate situations and disagreements = more group work
  • relaying specific information in writing = writing a report
  • processing text in writing = writing a summary

See? It’s not all that complex, after all.

Neither, it must be said, is there anything new about the activities that have been proposed to promote mediation skills. MiLLaT offers 39 classroom activities, divided up into those suitable for synchronous and asynchronous classes. Some are appropriate for polysynchronous classes – which simply means a mixture of synchronous and asynchronous, in case you were wondering.

To make things clearer still, here is a selection of the activities suggested in MiLLaT. I’ll spare you the lengthy explanations of precisely which mediation skills and strategies these activities are supposed to develop.

  • Students read texts and watch videos about malaria, before working in groups to develop a strategy to eradicate malaria from a particular village.
  • Students do a jigsaw reading or video viewing, discuss the information they have come across and do a follow-up task (e.g. express their own opinions, make a presentation).
  • Students read an article / watch a video (about Easter in another country), do some ‘lexical and comprehension activities’, then post messages on a discussion forum about how they will spend Easter.
  • Students read a text about Easter in Spain from an authentic source in Spanish, complete a fill-in-the-blanks exercise using the information and practising the vocabulary they learned from the text, then describe a local event / holiday themselves.
  • Students read a text about teachers, discuss the features of good/bad educators and create a portrait of an ideal teacher.
  • Students read extracts from the CEFR, interview a teacher (in L1) about the school’s plurilingual practices, then make a presentation on the topic in L2.
  • One student shows the others some kind of visual presentation. The rest discuss it in groups, before the original student tells the others about it and leads a discussion.
  • Students analyse a text on Corporate Social Responsibility, focusing on the usage of relevant vocabulary.
  • Students working in groups ‘teach’ a topic to their group members using figures/diagrams.
  • Students read a text about inclusive writing, then identify examples of inclusive language from a ‘Politically Correct Bedtime Story’, reflect on these examples, posting their thoughts in a forum.
  • Students watch a TED talk and write down the top five areas they paid attention to when watching the talk, share a summary of their observations with the rest of their group, and give written feedback to the speaker.
  • Students read a text and watch a video about note-taking and mindmapping, before reading an academic text and rendering it as a mindmap.
  • Students explore a range of websites and apps that may be helpful for self-study.
  • Students practise modal verbs by completing a gapped transcript of an extract from ‘Schindler’s List’.
  • Students practise regular and irregular pasts by gap-filling the song ‘Don’t Cry for Me Argentina’.
  • Students practise the present continuous by giving a running commentary on an episode of ‘Mr Bean’.

You could be forgiven for wondering what some of this has to do with mediation. Towards the end of this list, some of the examples are not terribly communicative or real-world, but they could legitimately be described as pedagogical mediation. Or ‘teaching’, for short.

Much could be said about the quality of some of the MiLLaT activities, the lack of originality, the (lack of) editing, topics that are already dated, copyright issues, and even the value of the activities. Was this really worth €80,000? However, the main point I’d like to make is that, when it comes down to classroom practicalities, you won’t find anything new. Rather than trawling through the MiLLaT documents, I’d recommend you splash out on Chiappini and Mansur’s (2021) ‘Activities for Mediation’ if you’re looking for some ready-made mediation ideas. Alternatively, take any tried and tested communicative classroom task, and describe it using some mediation jargon. If you do this, you’ll have the added bonus of practising your own mediation strategies (you could, for example, read the CEFR Companion Volume in a language other than your own, mentally translate into another language, and then amplify the text using the jargon from the CEFR CV). It will do wonders for your sociolinguistic, pragmatic, plurilingual and pluricultural competences.

Now that we have mediation etherized upon a table, there is an overwhelming question that cannot be avoided. Is the concept of mediation worth it, after all? I like the fact that mediation between two or more languages (Stathopoulou, 2015) has helped to legitimize interlingual activities in the ELT classroom, but such legitimization does not really require the notion of mediation. This is more than amply provided for by research into L1 use in L2 learning, as well as by the theoretical framework of translanguaging. But beyond that? I’m certainly not the first person to have asked the question. Bart Deygers (2019), for example, argues that the CEFR CV ‘does not truly engage with well-founded criticism’, and neither does it ‘refer to the many empirical studies that have been conducted since 2001’ that could have helped it. He refers to a ‘hermetic writing style’ and its use of ‘vague and impressionistic language’. Mediation, he argues, would be better seen ‘as a value statement rather than as a real theoretical–conceptual innovation’. From the list above of practical activities, it would also be hard to argue that there is anything innovative in its classroom implementation. Mediation advocates will respond by saying ‘that is not what we meant at all, that is not it, at all’ as they come and go, talking of North and Piccardo. Mediation may offer rich pickings for grants of various kinds, and it may seem to be a compelling topic for conference presentations, training courses and publications, but I’m not convinced it has much else going for it.

References

Chiappini, R. & Mansur, E. (2021). Activities for Mediation. Stuttgart: Delta Publishing

Deygers, B. (2019). The CEFR Companion Volume: Between Research-Based Policy and Policy-Based Research. Applied Linguistics 2019: 0/0: 1–7

MiLLaT (Mediation in Language Learning and Teaching). (2021). Guide for Language Teachers: Traditional and Synchronous Tasks https://ec.europa.eu/programmes/erasmus-plus/project-result-content/2d9860e2-96ee-46aa-9bc6-1595cfcd1893/MiLLaT_Guide_for_Teachers_IO_03.pdf and Guide for Language Teachers: Asynchronous and Polysynchronous Tasks https://ec.europa.eu/programmes/erasmus-plus/project-result-content/3d819e5a-35d7-4137-a2c8-697d22bf6b79/Materials_Developing_Mediation_for_Asynchronous_and_Polysynchronous_Online_Courses_1_.pdf

North, B. & Piccardo, E. (2016). Developing illustrative descriptors of aspects of mediation for the Common European Framework of Reference (CEFR): A Council of Europe Project. Language Teaching, 49 (3): 455 – 459

North, B., Goodier, T., Piccardo, E. et al. (2018). Common European Framework of Reference for Languages: Learning, Teaching, Assessment. Companion Volume With New Descriptors. Strasbourg: Council of Europe

Stathopoulou, M. (2015). Cross-Language Mediation in Foreign Language Teaching and Testing. Bristol: Multilingual Matters

The paragraph above was written by an AI-powered text generator called neuroflash https://app.neuro-flash.com/home which I told to produce a text on the topic ‘AI and education’. As texts on this topic go, it is both remarkable (in that it was not written by a human) and entirely unremarkable (in that it is practically indistinguishable from hundreds of human-written texts on the same subject). Neuroflash uses a neural network technology called GPT-3 – ‘a large language model’ – and ‘one of the most interesting and important AI systems ever produced’ (Chalmers, 2020). Basically, it generates text by predicting, word after word, likely continuations of a sequence, based on patterns in the vast quantities of text it was trained on. The nature of the paragraph above tells you all you need to know about the kinds of content that are usually found in texts about AI and education.
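
To make ‘predicting sequences of words’ a little more concrete, here is a toy sketch of the idea. Everything in it is invented for illustration, and GPT-3 uses a neural network with billions of parameters rather than a lookup table of counts, but the basic principle of next-word prediction is the same:

```python
import random
from collections import Counter, defaultdict

# A toy 'training corpus', standing in for the vast text collections
# real language models are trained on.
corpus = ("ai will transform education . "
          "ai will personalise learning . "
          "technology will transform learning .").split()

# Count, for each word, how often each other word follows it.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def generate(start, length=6):
    """Generate text by repeatedly sampling a likely next word."""
    words = [start]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break
        choices, counts = zip(*candidates.items())
        words.append(random.choices(choices, weights=counts)[0])
    return " ".join(words)

print(generate("ai"))  # e.g. 'ai will transform learning .'
```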

Not dissimilar from the neuroflash paragraph, educational commentary on uses of AI is characterised by (1) descriptions of AI tools already in use (e.g. speech recognition and machine translation) and (2) vague predictions which invariably refer to ‘the promise of personalised learning, adjusting what we give learners according to what they need to learn and keeping them motivated by giving them content that is of interest to them’ (Hughes, 2022). The question of what precisely will be personalised goes unanswered: providing learners with optimal sets of resources (but which ones?), or providing counselling services, recommendations or feedback for learners and teachers (but of what kind?) (Luckin, 2022). Nearly four years ago, I wrote https://adaptivelearninginelt.wordpress.com/2018/08/13/ai-and-language-teaching/ about the reasons why these questions remain unanswered. The short answer is that AI in language learning requires a ‘domain knowledge model’. This specifies what is to be learnt and includes an analysis of the steps that must be taken to reach that learning goal. Such a model is lacking in SLA, or, at least, there is no general agreement on what it is. Worse, the models that are most commonly adopted in AI-driven programs (e.g. the deliberate learning of discrete items of grammar and vocabulary) are not supported by either current theory or research (see, for example, VanPatten & Smith, 2022).
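
To make the idea of a domain knowledge model concrete, here is a purely illustrative sketch. In intelligent tutoring systems, the domain model is typically represented as a set of knowledge components linked by prerequisites, which the software traverses to decide what to present next. Every item and link below is invented for the example; as noted above, SLA research offers no agreed inventory of this kind, which is precisely the problem:

```python
# A toy domain knowledge model: knowledge components with prerequisite
# links. The items and links are invented for illustration; nothing in
# SLA research licenses a neat sequence like this.
domain_model = {
    "present_simple":   {"prerequisites": []},
    "past_simple":      {"prerequisites": ["present_simple"]},
    "present_perfect":  {"prerequisites": ["past_simple"]},
    "narrative_tenses": {"prerequisites": ["past_simple", "present_perfect"]},
}

def next_items(mastered):
    """Return items whose prerequisites have all been mastered."""
    return [item for item, spec in domain_model.items()
            if item not in mastered
            and all(p in mastered for p in spec["prerequisites"])]

print(next_items({"present_simple"}))  # ['past_simple']
```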

In 2021, the IATEFL Learning Technologies SIG organised an event dedicated to AI in education. Unsurprisingly, there was a fair amount of input on AI in assessment, but my interest is in how AI might revolutionize how we learn and teach, not how we assess. What concrete examples did speakers provide?

Rose Luckin, probably the best-known British expert on AI in education, kicked things off by mentioning three tools. One of these, Carnegie Learning, is a digital language course that looks very much like any of the ELT courses on offer from the big publishers – a fully blendable, multimedia (e.g. flashcards and videos) synthetic syllabus. This ‘blended learning solution’ is personalizable, since ‘no two students learn alike’, and, it claims, will develop a ‘lifelong love of language’. It appears to be premised on the ideas that language learning is a matter of optimizing the delivery of ‘content’, that this content consists primarily of discrete items, and that input can be equated with uptake. Been there, done that.

A second was Alelo Enskill https://www.alelo.com/about-us/ a chatbot / avatar roleplay program, first developed by the US military to teach Iraqi Arabic and aspects of Iraqi culture to Marines. I looked at the limitations of chatbot technology for language learning here https://adaptivelearninginelt.wordpress.com/2016/12/01/chatbots/ . The third tool mentioned by Luckin was Duolingo. Enough said.

Another speaker at this event was the founder and CEO of Edugo.AI https://www.edugo.ai/ , an AI-powered LMS which uses GPT-3. It allows schools to ‘create and upload on the platform any kind of language material (audio, video, text…). Our AI algorithms process and convert it in gamified exercises, which engage different parts of the brain, and gets students eager to practice’. Does this speaker know anything about gamification (for a quick read, I’d recommend Paul Driver (2012)) or neuroscience, I wonder. What, for that matter, does he know about language learning? Apparently, ‘language is not just about words, language is about sentences’ (Tomasello, 2022). Hmm, this doesn’t inspire confidence.

When you look at current uses of AI in language learning, there is very little (outside of testing, translation and speech ↔ text applications) that could justify enthusiastic claims that AI has any great educational potential. Scepticism seems to me a more reasonable and scientific response: de omnibus dubitandum.

Education is not the only field where AI has been talked up. When Covid hit us, AI was seen as the game-changing technology. It ‘could be deployed to make predictions, enhance efficiencies, and free up staff through automation; it could help rapidly process vast amounts of information and make lifesaving decisions’ (Chakravorti, 2022). The contribution of AI to the development of vaccines has been huge, but its role in diagnosing and triaging patients has been another matter altogether. Hundreds of predictive tools were developed: ‘none of them made a real difference, and some were potentially harmful’ (Heaven, 2021). Expectations were unrealistic and led to the deployment of tools before they were properly trialled. Thirty months down the line, a much more sober understanding of the potential of AI has emerged. Here, then, are the main lessons that have been learnt (I draw particularly on Engler, 2020, and Chakravorti, 2022) that are also relevant to education and language learning.

  • Anticipate what could go wrong before anticipating what might go right. Engler (2020) writes that ‘a poorly kept secret of AI practitioners is that 96% accuracy is suspiciously high for any machine learning problem’. In language learning, it is highly unlikely that personalized recommendations will ever reach anything even approaching this level of reliability. What are the implications for individual learners whose learning is inappropriately personalised?
  • We also know that a significant problem with AI systems is bias (O’Neil, 2016). There is a well-documented history of discriminatory outcomes because of people’s race, gender, social class or disability profile. Bias needs to be addressed proactively, not reactively.
  • Acknowledge from the outset that, for AI to work, huge amounts of data related to prior outcomes will be needed. In the cases of both Covid and language learning, much of this data will be personal. This raises immediate questions of privacy and consent, especially for learners who are children. Don’t minimise this issue.
  • Common by-products of AI use include insecure data storage, digital surveillance and consolidation of market power (Engler, 2020). Since such issues are so pervasive, assume they will be a problem and will need to be considered in evaluating the potential of any piece of AI edtech.
  • Find out what the subject matter experts have to say: ‘AI is only helpful when applied judiciously by subject-matter experts—people with long-standing experience with the problem that they are trying to solve […] Effectively predicting a badly defined problem is worse than doing nothing at all’ (Engler, 2020). It may be that the problem the AI is trying to help solve is the wrong problem and not worth solving at all (e.g. more efficient ‘delivery’ of explicit grammatical knowledge, gamifying gapfills).

To bring this post to an end, I returned to the AI / GPT-3 driven neuroflash program. This time, I asked it to write a longer piece entitled ‘The dangers of artificial intelligence in education: what could go wrong?’. Here’s what it came up with:

There are a few potential dangers of artificial intelligence in education. One is that AI could be used to replace human teachers. This could lead to a decline in the quality of education, as machines are not yet capable of replicating the full range of human emotions and social interactions that are necessary for teaching. Additionally, AI could be used to create biased or “racist” educational materials, as has already happened with some commercial software products. Finally, AI could be used to track and monitor students in ways that invade their privacy, or that could be used to manipulate their behavior.

All of these dangers are real and should be taken seriously. However, it’s important to remember that AI is still a new technology, and its impact on education is not yet fully understood. As such, we should tread cautiously when implementing AI into our educational systems.

When it comes to education, artificial intelligence could be a double-edged sword. While AI has the potential to revolutionize the field of education and help students learn in more efficient ways, there are also dangers that need to be considered.

One of the dangers of AI in education is that it could lead to a more standardized and homogenized form of learning. If all students are relying on AI-based learning tools, they may all end up learning the same things in the same way. This could lead to a loss of creativity and individuality in education.

Another danger is that AI-based learning tools could become too reliant on data and algorithms. This could lead to a situation where students are simply being taught what the machines think they should know, rather than being allowed to explore and discover new things for themselves.

Finally, there is the danger that AI could eventually replace human teachers altogether. While this may seem like a far-fetched scenario, it is not outside the realm of possibility. If AI-based learning tools become sophisticated enough, they may one day be able to do the job of teaching just as well as humans can.

These are just some of the dangers that need to be considered when it comes to artificial intelligence in education. While AI has the potential to revolutionize education, we need to be careful about how we implement it so that we don’t end up doing more harm than good.

References

Chakravorti, B. (2022) Why AI Failed to Live Up to Its Potential During the Pandemic. Harvard Business Review, March 17, 2022. https://hbr.org/2022/03/why-ai-failed-to-live-up-to-its-potential-during-the-pandemic

Chalmers, D. (2020) GPT-3 and General Intelligence. In Weinberg, J. (Ed.) Philosophers On GPT-3 (updated with replies by GPT-3). Daily Nous, July 30, 2020. https://dailynous.com/2020/07/30/philosophers-gpt-3/#chalmers

Driver, P. (2012) The Irony of Gamification. In English Digital Magazine 3, British Council Portugal, pp. 21 – 24 http://digitaldebris.info/digital-debris/2011/12/31/the-irony-of-gamification-written-for-ied-magazine.html

Engler, A. (2020) A guide to healthy skepticism of artificial intelligence and coronavirus. Washington D.C.: Brookings Institution https://www.brookings.edu/research/a-guide-to-healthy-skepticism-of-artificial-intelligence-and-coronavirus/

Heaven, W. D. (2021) Hundreds of AI tools have been built to catch covid. None of them helped. MIT Technology Review, July 30, 2021. https://www.technologyreview.com/2021/07/30/1030329/machine-learning-ai-failed-covid-hospital-diagnosis-pandemic/

Hughes, G. (2022) What lies at the end of the AI rainbow? IATEFL LTSIG Newsletter Issue April 2022

Luckin, R. (2022) The implications of AI for language learning and teaching. IATEFL LTSIG Newsletter Issue April 2022

O’Neil, C. (2016) Weapons of Math Destruction. London: Allen Lane

Tomasello, G. (2022) Next Generation of AI-Language Education Software: NLP & Language Modules (GPT-3). IATEFL LTSIG Newsletter Issue April 2022

VanPatten, B. & Smith, M. (2022) Explicit and Implicit Learning in Second Language Acquisition. Cambridge: Cambridge University Press

On 21 January, I attended the launch webinar of DEFI (the Digital Education Futures Initiative), an initiative of the University of Cambridge, which seeks to work ‘with partners in industry, policy and practice to explore the field of possibilities that digital technology opens up for education’. The opening keynote speaker was Andrea Schleicher, head of education at the OECD. The OECD’s vision of the future of education is outlined in Schleicher’s book, ‘World Class: How to Build a 21st-Century School System’, freely available from the OECD, but his presentation for DEFI offers a relatively short summary. A recording is available here, and this post will take a closer look at some of the things he had to say.

Schleicher is a statistician and the coordinator of the OECD’s PISA programme. Along with other international organisations, such as the World Economic Forum and the World Bank (see my post here), the OECD promotes the global economization and corporatization of education, ‘based on the [human capital] view that developing work skills is the primary purpose of schooling’ (Spring, 2015: 14). In other words, the main proper function of education is seen to be meeting the needs of global corporate interests. In the early days of the COVID-19 pandemic, with the impact of school closures becoming very visible, Schleicher expressed concern about the disruption to human capital development, but thought it was ‘a great moment’: ‘the current wave of school closures offers an opportunity for experimentation and for envisioning new models of education’. Every cloud has a silver lining, and the pandemic has been a godsend for private companies selling digital learning (see my post about this here) and for those who want to reimagine education in a more corporate way.

Schleicher’s presentation for DEFI was a good opportunity to look again at the way in which organisations like the OECD are shaping educational discourse (see my post about the EdTech imaginary and ELT).

He begins by suggesting that, as a result of the development of digital technology (Google, YouTube, etc.), literacy is ‘no longer just about extracting knowledge’. PISA reading scores, he points out, have remained more or less static since 2000, despite the fact that we have invested (globally) more than 15% extra per student in this time. Only 9% of all 15-year-old students in the industrialised world can distinguish between fact and opinion.

To begin with, one might argue about the reliability and validity of the PISA reading scores (Berliner, 2020). One might also argue, as did a collection of 80 education experts in a letter to the Guardian, that the scores themselves are responsible for damaging global education, raising further questions about their validity. One might argue that the increased investment was spent in the wrong way (on hardware and software, rather than teacher training, for example), because the advice of organisations like the OECD has been uncritically followed. And the statistic about critical reading skills is fairly meaningless unless it is compared to comparable metrics over a long time span: there is no reason to believe that susceptibility to fake news is any more of a problem now than it was, say, one hundred years ago. Nor is there any reason to believe that education can solve the fake-news problem (see my post about fake news and critical thinking here). These are more than just quibbles, but the main point that Schleicher is making is that education needs to change.

Schleicher next presents a graph which is designed to show that the amount of time that students spend studying correlates poorly with the amount they learn. His interest is in the (lack of) productivity of educational activities in some contexts. He goes on to argue that there is greater productivity in educational activities when learners have a growth mindset, implying (but not stating) that mindset interventions in schools would lead to a more productive educational environment.

Schleicher appears to confuse what students learn with the things they have learnt that have been measured by PISA. The two are obviously rather different, since PISA is only interested in a relatively small subset of the possible learning outcomes of schooling. His argument for growth mindset interventions hinges on the assumption that such interventions will lead to gains in reading scores. However, his graph demonstrates a correlation between growth mindset and reading scores, not a causal relationship. A causal relationship has not been clearly and empirically demonstrated (see my post about growth mindsets here) and recent work by Carol Dweck and her associates (e.g. Yeager et al., 2016), as well as other researchers (e.g. McPartlan et al, 2020), indicates that the relationship between gains in learning outcomes and mindset interventions is extremely complex.

Schleicher then turns to digitalisation and briefly discusses the positive and negative affordances of technology. He eulogizes platform companies before showing a slide designed to demonstrate that (in the workplace) there is a strong correlation between ICT use and learning. He concludes: ‘the digital world of learning is a hugely empowering world of learning’.

A brief paraphrase of this very disingenuous part of the presentation would be: technology can be good and bad, but I’ll only focus on the former. The discourse appears balanced, but it is anything but.

During this segment, Schleicher argues that technology is empowering, giving as an example ‘the most successful companies these days, they’re not created by a big industry, they’re created by a big idea’. This is plainly at odds with the facts. In the case of Alphabet and Facebook, profits did not follow from a ‘big idea’: the ideas changed as the companies evolved.

Schleicher then sketches a picture of an unpredictable future (pandemics, climate change, AI, cyber wars, etc.) as a way of framing the importance of being open (and resilient) to different futures and how we respond to them. He offers two different kinds of response: maintenance of the status quo, or ‘outsourcing’ of education. The pandemic, he suggests, has made more countries aware that the latter is the way forward.

In his discussion of the maintenance of the status quo, Schleicher talks about the maintenance of educational monopolies. By this, he must be referring to state monopolies on education: this is a favoured way for neoliberals to refer to state-sponsored education. But the extent to which the state has any kind of monopoly on education in many OECD countries in 2021 is very much open to debate. Privatization is advancing fast. Even in 2015, the World Education Forum’s ‘Final Report’ wrote that ‘the scale of engagement of nonstate actors at all levels of education is growing and becoming more diversified’. Schleicher goes on to talk about ‘large, bureaucratic school systems’, suggesting that such systems cannot be sufficiently agile, adaptive or responsive. ‘We should ask this question,’ he says, but his own answer to it is totally transparent: ‘changing education can be like moving graveyards’ is the title of the next slide. Education needs to be more like the health sector, he claims, which was able to develop a COVID vaccine in such a short period of time. We need an education industry that underpins change in the same way as the health industry underpins vaccine development. In case his message isn’t yet clear enough, I’ll spell it out: education needs to be privatized still further.

Schleicher then turns to the ways in which he feels that digital technology can enhance learning. These include the use of AR, VR and AI. Technology, he says, can make learning so much more personalized: ‘the computer can study how you study, and then adapt learning so that it is much more granular, so much more adaptive, so much more responsive to your learning style’. He moves on to the field of assessment, again singing the praises of technology in the ways that it can offer new modes of assessment and ‘increase the reliability of machine rating for essays’. Through technology, we can ‘reunite learning and assessment’. Moving on to learning analytics, he briefly mentions privacy issues, before enthusing at greater length about the benefits of analytics.

Learning styles? Really? The reliability of machine scoring of essays? How reliable exactly? Data privacy as an area worth only a passing mention? The use of sensors to measure learners’ responses to learning experiences? Any pretence of balance appears now to have been shed. This is in-your-face sales talk.

Next up is a graph which purports to show the number of teachers in OECD countries who use technology for learners’ project work. This is followed by another graph showing the number of teachers who have participated in face-to-face and online CPD. The point of this is to argue that online CPD needs to become more common.

I couldn’t understand what point he was trying to make with the first graph. For the second, it is surely the quality of the CPD, rather than the channel, that matters.

Schleicher then turns to two further possible responses of education to unpredictable futures: ‘schools as learning hubs’ and ‘learn-as-you-go’. In the latter, digital infrastructure replaces physical infrastructure. Neither is explored in any detail. The main point appears to be that we should consider these possibilities, weighing up as we do so the risks and the opportunities.

Useful ways to frame questions about the future of education, no doubt, but Schleicher is operating with a set of assumptions about the purpose of education, which he chooses not to explore. His fundamental assumption – that the primary purpose of education is to develop human capital in and for the global economy – is not one that I would share. However, if you do take that view, then privatization, economization, digitalization and the training of social-emotional competences are all reasonable corollaries, and the big question about the future concerns how to go about this in a more efficient way.

Schleicher’s (and the OECD’s) views are very much in accord with the libertarian values of the right-wing philanthro-capitalist foundations of the United States (the Gates Foundation, the Broad Foundation and so on), funded by Silicon Valley and hedge-fund managers. It is to the US that we can trace the spread and promotion of these ideas, but it is also, perhaps, to the US that we can now turn in search of hope for an alternative educational future. The privatization / disruption / reform movement in the US has stalled in recent years, as it has become clear that it failed to deliver on its promise of improved learning. The resistance to privatized and digitalized education is chronicled in Diane Ravitch’s latest book, ‘Slaying Goliath’ (2020). School closures during the pandemic may have been ‘a great moment’ for Schleicher, but for most of us, they have underscored the importance of face-to-face free public schooling. Now, with the electoral victory of Joe Biden and the appointment of a new US Secretary of Education (still to be confirmed), we are likely to see, for the first time in decades, an education policy that is firmly committed to public schools. The US is by far the largest contributor to the budget of the OECD – more than twice that of any other nation. Perhaps a rethink of the OECD’s educational policies will soon be in order?

References

Berliner, D. C. (2020) The Implications of Understanding That PISA Is Simply Another Standardized Achievement Test. In Fan, G. & Popkewitz, T. (Eds.) Handbook of Education Policy Studies. Singapore: Springer. https://doi.org/10.1007/978-981-13-8343-4_13

McPartlan, P., Solanki, S., Xu, D. & Sato, B. (2020) Testing Basic Assumptions Reveals When (Not) to Expect Mindset and Belonging Interventions to Succeed. AERA Open, 6 (4): 1 – 16 https://journals.sagepub.com/doi/pdf/10.1177/2332858420966994

Ravitch, D. (2020) Slaying Goliath: The Passionate Resistance to Privatization and the Fight to Save America’s Public School. New York: Vintage Books

Schleicher, A. (2018) World Class: How to Build a 21st-Century School System. Paris: OECD Publishing https://www.oecd.org/education/world-class-9789264300002-en.htm

Spring, J. (2015) Globalization of Education 2nd Edition. New York: Routledge

Yeager, D. S., et al. (2016) Using design thinking to improve psychological interventions: The case of the growth mindset during the transition to high school. Journal of Educational Psychology, 108(3), 374–391. https://doi.org/10.1037/edu0000098

Since no single definition of critical thinking prevails (Dummett & Hughes, 2019: 2), discussions of the topic invariably begin with attempts to provide a definition. Lai (2011) offers an accessible summary of a range of possible meanings, but points out that, in educational contexts, its meaning is often rather vague and encompasses other concepts (such as higher order thinking skills) which also lack clarity. Paul Dummett and John Hughes (2019: 4) plump for ‘a mindset that involves thinking reflectively, rationally and reasonably’ – a definition which involves a vague noun (that could mean a fixed state of mind, a learned attitude, a disposition or a mood) and three highly subjective adverbs. I don’t think I could do any better. However, instead of looking for a definition, we can reach a sort of understanding by looking at examples of it. Dummett and Hughes’ book is extremely rich in practical examples, and the picture that emerges of critical thinking is complex and multifaceted.

As you might expect of a weasel word like ‘critical thinking’, there appears to be general agreement that it’s a ‘good thing’. Paul Dummett suggests that there are two common reasons for promoting the inclusion of critical thinking activities in the language classroom. The first of these is a desire to get students thinking for themselves. The second is the idea ‘that we live in an age of misinformation in which only the critically minded can avoid manipulation or slavish conformity’. Neither seems contentious at first glance, although he points out that ‘they tend to lead to a narrow application of critical thinking in ELT materials: that is to say, the analysis of texts and evaluation of the ideas expressed in them’. It’s the second of these rationales that I’d like to explore further.

Penny Ur (2020: 9) offers a more extended version of it:

The role of critical thinking in education has become more central in the 21st century, simply because there is far more information readily available to today’s students than there was in previous centuries (mainly, but not only, online), and it is vital for them to be able to deal with such input wisely. They need to be able to distinguish between what is important and what is trivial, between truth and lies, between fact and opinion, between logical argument and specious propaganda […] Without such skills and awareness of the need to exercise them, they are liable to find themselves victims of commercial or political interests, their thinking manipulated by persuasion disguised as information.

In the same edited collection, Olja Milosevic (2020: 18) echoes Ur’s argument:

Critical thinking becomes even more important as communication increasingly moves online. Students find an overwhelming amount of information and need to be taught how to evaluate its relevance, accuracy and quality. If teachers do not teach students how to go beyond surface meaning, students cannot be expected to practise it.

In the passages I’ve quoted, these writers are referring to one particular kind of critical thinking. The ability to critically evaluate the reliability, accuracy, etc. of a text is generally considered to be a part of what is usually called ‘media information literacy’. In these times of fake news, so the argument goes, it is vital for students to develop (with their teachers’ help) the necessary skills to spot fake news when they see it. The most prototypical critical thinking activity in ELT classrooms is probably one in which students analyse some fake news, such as the website about the Pacific Tree Octopus (which is the basis of a lesson in Dudeney et al., 2013: 198 – 203).

Before considering media information literacy in more detail, it’s worth noting in passing that a rationale for critical thinking activities is no rationale at all if it only concerns one aspect of critical thinking: it applies the attributes of a part (media information literacy) to the whole (critical thinking).

There is no shortage of good (free) material available for dealing with fake news in the ELT classroom. Examples include work by James Taylor, Chia Suan Chong and Tyson Seburn. Material of this kind may result in lively, interesting, cognitively challenging, communicative and, therefore, useful lessons. But how likely is it that material of this kind will develop learners’ media information literacy and, by extension, their critical thinking skills? How likely is it that teaching material of this kind will help people identify (and reject) fake news? Is it possible that material of this kind is valuable despite its rationale, rather than because of it? In the spirit of rational, reflective and reasonable thinking, these are questions that seem to be worth exploring.

ELT classes and fake news

James Taylor has suggested that the English language classroom is ‘the perfect venue for [critical thinking] skills to be developed’. Although academic English courses necessarily involve elements of critical thinking, I’m not so sure that media information literacy (and, specifically, the identification of fake news) can be adequately addressed in general English classes. There are so many areas, besides those that are specifically language-focussed, competing for space in language classes (think of all those other 21st century skills), that it is hard to see how sufficient time can be found for real development of this skill. It requires modelling, practice of the skill, feedback on the practice, and more practice (Mulnix, 2010): it needs time. Fake news activities in the language classroom would, of course, be of greater value if they were part of an integrated approach across the curriculum. Unfortunately, this is rarely the case.

Information literacy skills

Training materials for media information literacy usually involve a number of stages: fact-checking and triangulating different sources, considering the web address, analysing the images and other items on the site, checking source citations, and so on. The problem, however, is that news-fakers have become so good at what they do. The tree octopus site is very crude in comparison to what can be produced nowadays by people who have learnt to profit from the online economy of misinformation. Facebook employs an army of algorithmic and human fact-checkers, but still struggles. The bottom line is that background knowledge is needed – as true for media information literacy as it is for critical thinking more generally (Willingham, 2007). With news, the scope of domain knowledge is so vast that it is extremely hard to transfer one’s ability to critically evaluate one particular piece of news to another. We are all fooled from time to time.

Media information literacy interventions: research on effectiveness

With the onset of COVID-19, the ability to identify fake news has become, more than ever, a matter of life and death. There is little question that this ability correlates strongly with analytic thinking (see, for example, Stanley et al., 2020). What is much less clear is how we can go about promoting analytic thinking. Analytic thinking comes in different varieties, and another hot-off-the-press research study into susceptibility to COVID-19 fake news (Roozenbeek et al., 2020) has found that the ability to spot fake news may correlate more strongly with numerical literacy than with reasoning ability. In fact, the research team found that a lack of numerical literacy was the most consistent predictor of susceptibility to misinformation about COVID-19. Perhaps we are attempting to develop the wrong kind of analytic thinking?

In educational contexts, attempts to promote media information literacy typically seek to develop reasoning abilities, and the evidence for their effectiveness is mixed. First of all, it needs to be said that ‘little large-scale evidence exists on the effectiveness of promoting digital media literacy as a response to online misinformation’ (Guess et al., 2020). An early meta-analysis (Jeong et al., 2012) found that such interventions had a positive effect when they were long (not one-off), but that they impacted more on students’ knowledge than on their behaviour. More recently, Huguet et al (2019) were unable to draw ‘definitive conclusions from past research, such as what kinds of media literacy practices work and under what conditions’. And this year, a study by Guess et al (2020) did not generate sufficient evidence ‘to conclude that the [media information literacy] intervention changed real-world consumption of false news’. I am unaware of any robust research in this area in the context of ELT.

It’s all rather disappointing. Why are we not better at it? After all, teachers of media studies have been exploring pathways for many years now. One possible answer is this: media information literacy, like critical thinking more generally, is a skill that can be acquired, but only by those with the disposition to acquire it. The ability to think critically and the disposition to do so are separate entities (Facione, 2000). Training learners to be more critical in their approach to media information may be so much pissing in the wind if the disposition to be sceptical is not there. Shaping dispositions is a much harder task than training skills.

Both of the research studies into susceptibility to COVID-19 misinformation that I referred to earlier in this section underscore the significance of dispositions to analytic thinking. Roozenbeek et al (2020) found, in line with much previous research (for example, Jost et al. 2018), that political conservatism is associated with a slightly higher susceptibility to misinformation. Political views (on either side of the political spectrum) rarely change as a result of exposure to science or reasoned thinking. They also found that ‘self-identifying as a member of a minority predicts susceptibility to misinformation about the virus in all countries surveyed’ (except, interestingly, in the UK). Again, when issues of identity are at stake, emotional responses tend to trump rational ones.

Rational, reflective and reasonable thinking about media information literacy leads to an uncomfortable red-pill rabbit-hole. This is how Bulger and Davidson (2018) put it:

The extent to which media literacy can combat the problematic news environment is an open question. Is denying the existence of climate change a media literacy problem? Is believing that a presidential candidate was running a sex-trafficking ring out of a pizza shop a media literacy problem? Can media literacy combat the intentionally opaque systems of serving news on social media platforms? Or intentional campaigns of disinformation?

Teachers and fake news

The assumption that the critical thinking skills of young people can be developed through the intervention of their teachers is rarely problematized. It should be. A recent study of Spanish pre-service teachers (Fuertes-Prieto et al., 2020) showed that their ‘level of belief in pseudoscientific issues is comparable, or even higher in some cases to those of the general population’. There is no reason to believe that this changes after they have qualified. Teachers are probably no more likely to change their beliefs when presented with empirical evidence (Menz et al., 2020) than people from any other profession. Research has tended to focus on teachers’ lack of critical thinking in areas related to their work, but things may be no different in the wider world. It is estimated that over a quarter of teachers in the US voted for the world’s greatest peddler of fake news in the 2016 presidential election.

It is also interesting to note that the sharing of fake news on social media is much more widespread among older people (including US teachers, whose average age is 42.4) than among those under 30 (Bouygues, 2019).

Institutional contexts and fake news

Cory Doctorow has suggested that the fake news problem is not a problem of identifying what is true and what is fake, but a problem ‘about how we know whether something is true. We’re not disagreeing about facts, we’re disagreeing about epistemology’. In a post-modernist world of ‘Truth Decay’ (Kavanagh & Rich, 2018), where there is ‘a blurring of the line between opinion and fact’, epistemological authority is a rare commodity. Medicine, the social sciences and applied linguistics are all currently experiencing a ‘replication crisis’ (Ioannidis, 2005), and a British education minister has told us that ‘people in this country have had enough of experts’.

News reporting has always relied to some extent on trust in the reliability of the news source. The BBC or CNN might attempt to present themselves as more objective than, say, Fox News or InfoWars, but trust in all news outlets has collapsed globally in recent years. As Michael Schudson has written in the Columbia Journalism Review, ‘all news outlets write from a set of values, not simply from a disinterested effort at truth’. If a particular news channel manifestly shares different values from your own, it is easy to reject the veracity of the news it reports. Believers in COVID conspiracy theories often hold their views precisely because of their rejection of the epistemological authority of mainstream news and the WHO or governments who support lockdown measures.

The training of media information literacy in schools is difficult because, for many people in the US (and elsewhere), education is not dissimilar to mainstream media. They ‘are seen as the enemy — two institutions who are trying to have power over how people think. Two institutions that are trying to assert authority over epistemology’ (boyd, 2018). Schools have always been characterized by imbalances in power (between students and teachers / administrators), and this power dynamic is not conducive to open-minded enquiry. Children are often more aware of the power of their teachers than they are accepting of their epistemological authority. They are enjoined to be critical thinkers, but only about certain things and only up to a certain point. One way for children to redress the power imbalance is to reject the epistemological authority of their teachers. I think this may explain why a group of young children I observed recently coming out of a lesson devoted to environmental issues found such pleasure in joking about Greta ‘Thunfisch’.

Power relationships in schools are reflected and enacted in the interaction patterns between teachers and students. The most common of these is ‘initiation-response-feedback (IRF)’ and it is unlikely that this is particularly conducive to rational, reflective and reasonable thinking. At the same time, as Richard Paul, one of the early advocates of critical thinking in schools, noted, much learning activity is characterised by lower order thinking skills, especially memorization (Paul, 1992: 22). With this kind of backdrop, training in media information literacy is more likely to be effective if it goes beyond the inclusion of a few ‘fake news’ exercises: a transformation in the way that the teaching is done will also be needed. Benesch (1999) describes this as a more ‘dialogic’ approach and there is some evidence that a more dialogic approach can have a positive impact on students’ dispositions (e.g. Hajhosseiny, 2012).

I think that David Buckingham (2019a) captures the educational problem very neatly:

There’s a danger here of assuming that we are dealing with a rational process – or at least one that can, by some pedagogical means, be made rational. But from an educational perspective, we surely have to begin with the question of why people might believe apparently ‘fake’ news in the first place. Where we decide to place our trust is as much to do with fantasy, emotion and desire, as with rational calculation. All of us are inclined to believe what we want to believe.

Fake news: a problem or a symptom of a problem?

There has always been fake news. The big problem now is ‘the speed and ease of its dissemination, and it exists primarily because today’s digital capitalism makes it extremely profitable – look at Google and Facebook – to produce and circulate false but click-worthy narratives’ (Morozov, 2017). Fake news taps into and amplifies broader tendencies and divides in society: the problem is not straightforward and is unlikely to be easy to eradicate (Buckingham, 2019a: 3).

There is increasing discussion of media regulation, and the recent banning by Facebook of Holocaust denial and QAnon is a recognition that some regulation cannot now be avoided. But strict regulation would threaten the ‘basic business model, and the enormous profitability’ of social media companies (Buckingham, 2019b), and there are real practical and ethical problems in working out exactly how regulation would happen. Governments do not know what to do.

Lacking any obvious alternative, media information literacy is often seen as the solution: can’t we ‘fact check and moderate our way out of this conundrum’ (boyd, 2018)? danah boyd’s stark response is, no, this will fail. It’s an inadequate solution to an oversimplified problem (Buckingham, 2019a).

Along with boyd and Buckingham, I’m not trying to argue that we drop media information literacy activities from educational (including ELT) programmes. Quite the opposite. But if we want our students to think reflectively, rationally and reasonably, I think we will need to start by doing the same.

References

Benesch, S. (1999). Thinking critically, thinking dialogically. TESOL Quarterly, 33: pp. 573 – 580

Bouygues, H. L. (2019). Fighting Fake News: Lessons From The Information Wars. Reboot Foundation https://reboot-foundation.org/fighting-fake-news/

boyd, d. (2018). You Think You Want Media Literacy… Do You? Data and Society: Points https://points.datasociety.net/you-think-you-want-media-literacy-do-you-7cad6af18ec2

Buckingham, D. (2019a). Teaching Media in a ‘Post-Truth’ Age: Fake News, Media Bias and the Challenge for Media Literacy Education. Cultura y Educación 31(2): pp. 1-19

Buckingham, D. (2019b). Rethinking digital literacy: Media education in the age of digital capitalism. https://ddbuckingham.files.wordpress.com/2019/12/media-education-in-digital-capitalism.pdf

Bulger, M. & Davidson, P. (2018). The Promises, Challenges and Futures of Media Literacy. Data and Society. https://datasociety.net/pubs/oh/DataAndSociety_Media_Literacy_2018.pdf

Doctorow, C. (2017). Three kinds of propaganda, and what to do about them. boingboing 25th February 2017, https://boingboing.net/2017/02/25/counternarratives-not-fact-che.html

Dudeney, G., Hockly, N. & Pegrum, M. (2013). Digital Literacies. Harlow: Pearson Education

Dummett, P. & Hughes, J. (2019). Critical Thinking in ELT. Boston: National Geographic Learning

Facione, P. A. (2000). The disposition toward critical thinking: Its character, measurement, and relation to critical thinking skill. Informal Logic, 20(1), 61–84.

Fuertes-Prieto, M.Á., Andrés-Sánchez, S., Corrochano-Fernández, D. et al. (2020). Pre-service Teachers’ False Beliefs in Superstitions and Pseudosciences in Relation to Science and Technology. Science & Education 29, 1235–1254 (2020). https://doi.org/10.1007/s11191-020-00140-8

Guess, A. M., Lerner, M., Lyons, B., Montgomery, J. M., Nyhan, B., Reifler, J. & Sircar, N. (2020). A digital media literacy intervention increases discernment between mainstream and false news in the United States and India. Proceedings of the National Academy of Sciences Jul 2020, 117 (27) 15536-15545; DOI: 10.1073/pnas.1920498117

Hajhosseiny, M. (2012). The Effect of Dialogic Teaching on Students’ Critical Thinking Disposition. Procedia – Social and Behavioral Sciences, 69: pp. 1358 – 1368

Huguet, A., Kavanagh, J., Baker, G. & Blumenthal, M. S. (2019). Exploring Media Literacy Education as a Tool for Mitigating Truth Decay. RAND Corporation, https://www.rand.org/content/dam/rand/pubs/research_reports/RR3000/RR3050/RAND_RR3050.pdf

Ioannidis, J. P. A. (2005). Why Most Published Research Findings Are False. PLoS Medicine 2 (8): e124. https://doi.org/10.1371/journal.pmed.0020124

Jeong, S. H., Cho, H., & Hwang, Y. (2012). Media literacy interventions: A meta-analytic review. Journal of Communication, 62, pp. 454–472

Jones-Jang, S. M., Mortensen, T. & Liu, J. (2019). Does media literacy help identification of fake news? Information literacy helps, but other literacies don’t. American Behavioral Scientist, pp. 1 – 18, doi:10.1177/0002764219869406

Jost, J. T., van der Linden, S., Panagopoulos, C. & Hardin, C. D. (2018). Ideological asymmetries in conformity, desire for shared reality, and the spread of misinformation. Current Opinion in Psychology, 23: pp. 77 – 83. doi:10.1016/j.copsyc.2018.01.003

Kavanagh, J. & Rich, M. D. (2018). Truth Decay: An Initial Exploration of the Diminishing Role of Facts and Analysis in American Public Life. RAND Corporation, https://www.rand.org/pubs/research_reports/RR2314.html

Lai, E. R. (2011). Critical Thinking: A Literature Review. Pearson. http://images.pearsonassessments.com/images/tmrs/CriticalThinkingReviewFINAL.pdf

Menz, C., Spinath, B. & Seifried, E. (2020). Misconceptions die hard: prevalence and reduction of wrong beliefs in topics from educational psychology among preservice teachers. European Journal of Psychology of Education https://doi.org/10.1007/s10212-020-00474-5

Milosevic, O. (2020). Promoting critical thinking in the EFL classroom. In Mavridi, S. & Xerri, D. (Eds.) English for 21st Century Skills. Newbury, Berks.: Express Publishing. pp.17 – 22

Morozov, E. (2017). Moral panic over fake news hides the real enemy – the digital giants. The Guardian, 8 January 2017 https://www.theguardian.com/commentisfree/2017/jan/08/blaming-fake-news-not-the-answer-democracy-crisis

Mulnix, J. W. (2010). Thinking critically about critical thinking. Educational Philosophy and Theory.

Paul, R. W. (1992). Critical thinking: What, why, and how? New Directions for Community Colleges, 77: pp. 3–24.

Roozenbeek, J., Schneider, C. R., Dryhurst, S., Kerr, J., Freeman, A. L. J., Recchia, G., van der Bles, A. M. & van der Linden, S. (2020). Susceptibility to misinformation about COVID-19 around the world. Royal Society Open Science, 7 (10) https://doi.org/10.1098/rsos.201199

Stanley, M., Barr, N., Peters, K. & Seli, P. (2020). Analytic-thinking predicts hoax beliefs and helping behaviors in response to the COVID-19 pandemic. PsyArXiv Preprints doi:10.31234/osf.io/m3vt

Ur, P. (2020). Critical Thinking. In Mavridi, S. & Xerri, D. (Eds.) English for 21st Century Skills. Newbury, Berks.: Express Publishing. pp.9 – 16

Willingham, D. T. (2007). Critical Thinking: Why Is It So Hard to Teach? American Educator Summer 2007: pp. 8 – 19

In the first post in this 3-part series, I focussed on data collection practices in a number of ELT websites, as a way of introducing ‘critical data literacy’. Here, I explore the term in more detail.

Although the term ‘big data’ has been around for a while (see this article and infographic), it is less than ten years since it entered everyday language and found its way into the OED (2013). In the same year, Viktor Mayer-Schönberger and Kenneth Cukier published their best-selling ‘Big Data: A Revolution That Will Transform How We Live, Work, and Think’ (2013), and it was hard to avoid enthusiastic references in the media to the transformative potential of big data in every sector of society.

Since then, the use of big data and analytics has become ubiquitous. Massive data collection (and data surveillance) has now become routine and companies like Palantir, which specialise in big data analytics, have become part of everyday life. Palantir’s customers include the LAPD, the CIA, the US Immigration and Customs Enforcement (ICE) and the British Government. Its recent history includes links with Cambridge Analytica, assistance in an operation to arrest the parents of illegal migrant children, and a racial discrimination lawsuit where the company was accused of having ‘routinely eliminated’ Asian job applicants (settled out of court for $1.7 million).

Unsurprisingly, the datafication of society has not gone entirely uncontested. Whilst the vast majority of people seem happy to trade their personal data for convenience and connectivity, a growing number are concerned about who benefits most from this trade-off. On an institutional level, the EU introduced the General Data Protection Regulation (GDPR), which led to Google being fined €50 million for insufficient transparency in their privacy policy and their practices of processing personal data for the purposes of behavioural advertising. In the intellectual sphere, there has been a recent spate of books that challenge the practices of ubiquitous data collection, coining new terms like ‘surveillance capitalism’, ‘digital capitalism’ and ‘data colonialism’. Here are four that I have found particularly interesting.

Beer, D. (2019). The Data Gaze. London: Sage

Couldry, N. & Mejias, U. A. (2019). The Costs of Connection. Stanford: Stanford University Press

Sadowski, J. (2020). Too Smart. Cambridge, Mass.: MIT Press

Zuboff, S. (2019). The Age of Surveillance Capitalism. New York: Public Affairs

The use of big data and analytics in education is also now a thriving industry, with its supporters claiming that these technologies can lead to greater personalization, greater efficiency of instruction and greater accountability. Opponents (myself included) argue that none of these supposed gains have been empirically demonstrated, and that the costs to privacy, equity and democracy outweigh any potential gains. There is a growing critical literature and useful, recent books include:

Bradbury, A. & Roberts-Holmes, G. (2018). The Datafication of Primary and Early Years Education. Abingdon: Routledge

Jarke, J. & Breiter, A. (Eds.) (2020). The Datafication of Education. Abingdon: Routledge

Williamson, B. (2017). Big Data in Education: The digital future of learning, policy and practice. London: Sage

Concomitant with the rapid growth in the use of digital tools for language learning and teaching, and therefore the rapid growth in the amount of data that learners were (mostly unwittingly) giving away, came a growing interest in the need for learners to develop a set of digital competencies, or literacies, which would enable them to use these tools effectively. In the same year that Mayer-Schönberger and Cukier brought out their ‘Big Data’ book, the first book devoted to digital literacies in English language teaching came out (Dudeney et al., 2013). They defined digital literacies as the individual and social skills needed to effectively interpret, manage, share and create meaning in the growing range of digital communication channels (Dudeney et al., 2013: 2). The book contained a couple of activities designed to raise students’ awareness of online identity issues, along with others intended to promote critical thinking about digitally-mediated information (what the authors call ‘information literacy’), but ‘critical literacy’ was missing from the authors’ framework.

Critical thinking and critical literacy are not the same thing. Although there is no generally agreed definition of the former (with a small ‘c’), it is focussed primarily on logic and comprehension (Lee, 2011). Paul Dummett and John Hughes (2019: 4) describe it as ‘a mindset that involves thinking reflectively, rationally and reasonably’. The prototypical critical thinking activity involves the analysis of a piece of fake news (e.g. the task where students look at a website about tree octopuses in Dudeney et al. 2013: 198 – 203). Critical literacy, on the other hand, involves standing back from texts and technologies and viewing them as ‘circulating within a larger social and textual context’ (Warnick, 2002). Consideration of the larger social context necessarily entails consideration of unequal power relationships (Lee, 2011; Darvin, 2017), such as that between Google and the average user of Google. And it follows from this that critical literacy has a socio-political emancipatory function.

Critical digital literacy is now a growing field of enquiry (e.g. Pötzsch, 2019) and there is an awareness that digital competence frameworks, such as the Digital Competence Framework of the European Commission, are incomplete and out of date without the inclusion of critical digital literacy. Dudeney et al. (2013) clearly recognise the importance of including critical literacy in frameworks of digital literacies. In Pegrum et al. (2018, unfortunately paywalled), they update the framework from their 2013 book, and the biggest change is the inclusion of critical literacy. They divide this into the following:

  • critical digital literacy – closely related to information literacy
  • critical mobile literacy – focussing on issues brought to the fore by mobile devices, ranging from protecting privacy through to safeguarding mental and physical health
  • critical material literacy – concerned with the material conditions underpinning the use of digital technologies, ranging from the socioeconomic influences on technological access to the environmental impacts of technological manufacturing and disposal
  • critical philosophical literacy – concerned with the big questions posed to and about humanity as our lives become conjoined with the existence of our smart devices, robots and AI
  • critical academic literacy, which refers to the pressing need to conduct meaningful studies of digital technologies in place of what is at times ‘cookie-cutter’ research

I’m not entirely convinced by the subdivisions, but labelling in this area is still in its infancy. My particular interest here, in critical data literacy, seems to span a number of their sub-divisions. And the term that I am using, ‘critical data literacy’, which I’ve taken from Tygel & Kirsch (2016), is sometimes referred to as ‘critical big data literacy’ (Sander, 2020a) or ‘personal data literacy’ (Pangrazio & Selwyn, 2019). Whatever it is called, it is the development of ‘informed and critical stances toward how and why [our] data are being used’ (Pangrazio & Selwyn, 2018). One of the two practical activities in the Pegrum et al. (2018) article looks at precisely this area (the task requires students to consider the data that is collected by fitness apps). It will be interesting to see, when the new edition of the ‘Digital Literacies’ book comes out (perhaps some time next year), how many other activities take a more overtly critical stance.

In the next post, I’ll be looking at a range of practical activities for developing critical data literacy in the classroom. This involves both bridging the gaps in knowledge (about data, algorithms and online privacy) and learning, practically, how to implement ‘this knowledge for a more empowered internet usage’ (Sander, 2020b).

Without wanting to invalidate the suggestions in the next post, a word of caution is needed. Just as critical thinking activities in the ELT classroom cannot be assumed to lead to any demonstrable increase in critical thinking (although there may be other benefits to the activities), activities to promote critical literacy cannot be assumed to lead to any actual increase in critical literacy. The reaction of many people may well be ‘It’s not like it’s life or death or whatever’ (Pangrazio & Selwyn, 2018). And, perhaps, education is rarely, if ever, a solution to political and social problems, anyway. And perhaps, too, we shouldn’t worry too much about educational interventions not leading to their intended outcomes. Isn’t that almost always the case? But, with those provisos in mind, I’ll come back next time with some practical ideas.

References

Darvin, R. (2017). Language, Ideology, and Critical Digital Literacy. In Thorne, S. & May, S. (Eds.) Language, Education and Technology. Encyclopedia of Language and Education (3rd ed.). Springer, Cham. pp. 17 – 30 https://doi.org/10.1007/978-3-319-02237-6_35

Dudeney, G., Hockly, N. & Pegrum, M. (2013). Digital Literacies. Harlow: Pearson Education

Dummett, P. & Hughes, J. (2019). Critical Thinking in ELT. Boston: National Geographic Learning

Lee, C. J. (2011). Myths about critical literacy: What teachers need to unlearn. Journal of Language and Literacy Education [Online], 7 (1), 95-102. Available at http://www.coa.uga.edu/jolle/2011_1/lee.pdf

Mayer-Schönberger, V. & Cukier, K. (2013). Big Data: A Revolution That Will Transform How We Live, Work, and Think. London: John Murray

Pangrazio, L. & Selwyn, N. (2018). ‘It’s not like it’s life or death or whatever’: young people’s understandings of social media data. Social Media + Society, 4 (3): pp. 1–9. https://journals.sagepub.com/doi/pdf/10.1177/2056305118787808

Pangrazio, L. & Selwyn, N. (2019). ‘Personal data literacies’: A critical literacies approach to enhancing understandings of personal digital data. New Media and Society, 21 (2): pp. 419 – 437

Pegrum, M., Dudeney, G. & Hockly, N. (2018). Digital literacies revisited. The European Journal of Applied Linguistics and TEFL, 7 (2), pp. 3-24

Pötzsch, H. (2019). Critical Digital Literacy: Technology in Education Beyond Issues of User Competence and Labour-Market Qualifications. tripleC: Communication, Capitalism & Critique, 17: pp. 221 – 240 Available at https://www.triple-c.at/index.php/tripleC/article/view/1093

Sander, I. (2020a). What is critical big data literacy and how can it be implemented? Internet Policy Review, 9 (2). DOI: 10.14763/2020.2.1479 https://www.econstor.eu/bitstream/10419/218936/1/2020-2-1479.pdf

Sander, I. (2020b). Critical big data literacy tools – Engaging citizens and promoting empowered internet usage. Data & Policy, 2: DOI: https://doi.org/10.1017/dap.2020.5

Tygel, A. & Kirsch, R. (2016). Contributions of Paulo Freire for a Critical Data Literacy: a Popular Education Approach. The Journal of Community Informatics, 12 (3). Available at http://www.ci-journal.net/index.php/ciej/article/view/1296

Warnick, B. (2002). Critical Literacy in a Digital Era. Mahwah, NJ: Lawrence Erlbaum Associates

In my last post, I looked at shortcomings in edtech research, mostly from outside the world of ELT. I made a series of recommendations of ways in which such research could become more useful. In this post, I look at two very recent collections of ELT edtech research. The first of these is Digital Innovations and Research in Language Learning, edited by Mavridi and Saumell, and published this February by the Learning Technologies SIG of IATEFL. I’ll refer to it here as DIRLL. It’s available free to IATEFL LT SIG members, and can be bought for $10.97 as an ebook on Amazon (US). The second is the most recent edition (February 2020) of the Language Learning & Technology journal, which is open access and available here. I’ll refer to it here as LLTJ.

In both of these collections, the focus is not on ‘technology per se, but rather issues related to language learning and language teaching, and how they are affected or enhanced by the use of digital technologies’. However, they are very different kinds of publication. Nobody involved in the production of DIRLL got paid in any way (to the best of my knowledge) and, in keeping with its provenance in a teachers’ association, the collection has ‘a focus on the practitioner as teacher-researcher’. Almost all of the contributing authors are university-based, but they are typically involved more in language teaching than in research. With one exception (a grant from the EU), their work was unfunded.

The triannual LLTJ is funded by two American universities and published by the University of Hawaii Press. The editors and associate editors are well-known scholars in their fields. The journal’s impact factor is high, close to the impact factor of the paywalled reCALL (published by Cambridge University Press), which is the highest-ranking journal in the field of CALL. The contributing authors are all university-based, many with a string of published articles (in prestige journals), chapters or books behind them. At least six of the studies were funded by national grant-awarding bodies.

I should begin by making clear that there was much in both collections that I found interesting. However, it was not usually the research itself that I found informative, but the literature review that preceded it. Two of the chapters in DIRLL were not really research, anyway. One was the development of a template for evaluating ICT-mediated tasks in CLIL, another was an advocacy of comics as a resource for language teaching. Both of these were new, useful and interesting to me. LLTJ included a valuable literature review of research into VR in FL learning (but no actual new research). With some exceptions in both collections, though, I felt that I would have been better off curtailing my reading after the reviews. Admittedly, there wouldn’t be much in the way of literature reviews if there were no previous research to report …

It was no surprise to see that the learners who were the subjects of this research were overwhelmingly university students. In fact, only one article (about a high-school project in Israel, reported in DIRLL) was not about university students. The research areas focused on reflected this bias towards tertiary contexts: online academic reading skills, academic writing, online reflective practices in teacher training programmes, etc.

In a couple of cases, the selection of experimental subjects seemed plain bizarre. Why, if you want to find out about the extent to which Moodle use can help EAP students become better academic readers (in DIRLL), would you investigate this with a small volunteer cohort of postgraduate students of linguistics, with previous experience of using Moodle and experience of teaching? Is a less representative sample imaginable? Why, if you want to investigate the learning potential of the English File Pronunciation app (reported in LLTJ), which is clearly most appropriate for A1 – B1 levels, would you do this with a group of C1-level undergraduates following a course in phonetics as part of an English Studies programme?

More problematic, in my view, was the small sample size in many of the research projects. The Israeli virtual high school project (DIRLL), previously referred to, started out with only 11 students, but 7 dropped out, primarily, it seems, because of institutional incompetence: ‘the project was probably doomed […] to failure from the start’, according to the author. Interesting as this was as an account of how not to set up a project of this kind, it is simply impossible to draw any conclusions from 4 students about the potential of a VLE for ‘interaction, focus and self-paced learning’. The questionnaire investigating experience of and attitudes towards VR (in DIRLL) was completed by only 7 (out of 36 possible) students and 7 (out of 70+ possible) teachers. The author acknowledges that ‘no great claims can be made’, but then goes on to note the generally ‘positive attitudes to VR’. Perhaps those who did not volunteer had different attitudes? We will never know. The study of motivational videos in tertiary education (DIRLL) started off with 15 subjects, but 5 did not complete the necessary tasks. The research into L1 use in videoconferencing (LLTJ) started off with 10 experimental subjects, all with the same L1 and similar cultural backgrounds, but there was no data available from 4 of them (because they never switched into L1). The author claims that the paper demonstrates ‘how L1 is used by language learners in videoconferencing as a social semiotic resource to support social presence’ – something which, after reading the literature review, we already knew. But the paper also demonstrates quite clearly how L1 is not used by language learners in videoconferencing as a social semiotic resource to support social presence. In all these cases, it is the participants who did not complete or the potential participants who did not want to take part that hold the greatest interest for me.

Unsurprisingly, the LLTJ articles had larger sample sizes than those in DIRLL, but in both collections the length of the research was limited. The production of one motivational video (DIRLL) does not really allow us to draw any conclusions about the development of students’ critical thinking skills. Two four-week interventions do not really seem long enough to me to discover anything about learner autonomy and Moodle (DIRLL). An experiment looking at different feedback modes needs more than two written assignments to reach any conclusions about student preferences (LLTJ).

More research might well be needed to compensate for the short-term projects with small sample sizes, but I’m not convinced that this is always the case. Lacking sufficient information about the content of the technologically-mediated tools being used, I was often unable to reach any conclusions. A gamified Twitter environment was developed in one project (DIRLL), using principles derived from contemporary literature on gamification. The authors concluded that the game design ‘failed to generate interaction among students’, but without knowing a lot more about the specific details of the activity, it is impossible to say whether the problem was the principles or the particular instantiation of those principles. Another project, looking at the development of pronunciation materials for online learning (LLTJ), came to the conclusion that online pronunciation training was helpful – better than none at all. Claims are then made about the value of the method used (called ‘innovative Cued Pronunciation Readings’), but this is not compared to any other method / materials, and only a very small selection of these materials are illustrated. Basically, the reader of this research has no choice but to take things on trust. The study looking at the use of Alexa to help listening comprehension and speaking fluency (LLTJ) cannot really tell us anything about IPAs (intelligent personal assistants) unless we know more about the particular way that Alexa is being used. Here, it seems that the students were using Alexa in an interactive storytelling exercise, but so little information is given about the exercise itself that I didn’t actually learn anything at all. The author’s own conclusion is that the results, such as they are, need to be treated with caution. Nevertheless, he adds ‘the current study illustrates that IPAs may have some value to foreign language learners’.

This brings me onto my final gripe. To be told that IPAs like Alexa may have some value to foreign language learners is to be told something that I already know. This wasn’t the only time this happened during my reading of these collections. I appreciate that research cannot always tell us something new and interesting, but a little more often would be nice. I ‘learnt’ that goal-setting plays an important role in motivation and that gamification can boost short-term motivation. I ‘learnt’ that reflective journals can take a long time for teachers to look at, and that reflective video journals are also very time-consuming. I ‘learnt’ that peer feedback can be very useful. I ‘learnt’ from two papers that intercultural difficulties may be exacerbated by online communication. I ‘learnt’ that text-to-speech software is pretty good these days. I ‘learnt’ that multimodal literacy can, most frequently, be divided up into visual and auditory forms.

With the exception of a piece about online safety issues (DIRLL), I did not once encounter anything which hinted that there may be problems in using technology. No mention of the use to which student data might be put. No mention of the costs involved (except for the observation that many students would not be happy to spend money on the English File Pronunciation app) or the cost-effectiveness of digital ‘solutions’. No consideration of the institutional (or other) pressures (or the reasons behind them) that may be applied to encourage teachers to ‘leverage’ edtech. No suggestion that a zero-tech option might actually be preferable. In both collections, the language used is invariably positive, or, at least, technology is associated with positive things: uncovering the possibilities, promoting autonomy, etc. Even if the focus of these publications is not on technology per se (although I think this claim doesn’t really stand up to close examination), it’s a little disingenuous to claim (as LLTJ does) that the interest is in how language learning and language teaching is ‘affected or enhanced by the use of digital technologies’. The reality is that the overwhelming interest is in potential enhancements, not potential negative effects.

I have deliberately not mentioned any names in referring to the articles I have discussed. I would, though, like to take my hat off to the editors of DIRLL, Sophia Mavridi and Vicky Saumell, for attempting to do something a little different. I think that Alicia Artusi and Graham Stanley’s article (DIRLL) about CPD for ‘remote’ teachers was very good and should interest the huge number of teachers working online. Chryssa Themelis and Julie-Ann Sime have kindled my interest in the potential of comics as a learning resource (DIRLL). Yu-Ju Lan’s article about VR (LLTJ) is surely the most up-to-date, go-to article on this topic. There were other pieces, or parts of pieces, that I liked, too. But, to me, it’s clear that what we need is not so much ‘more research’ as (1) better and more critical research, and (2) more digestible summaries of the research we already have.

Colloquium

At the beginning of March, I’ll be going to Cambridge to take part in a Digital Learning Colloquium (for more information about the event, see here). One of the questions that will be explored is how research might contribute to the development of digital language learning. In this, the first of two posts on the subject, I’ll be taking a broad overview of the current state of play in edtech research.

I try my best to keep up to date with research. Of the main journals, there are Language Learning and Technology, which is open access; CALICO, which offers quite a lot of open access material; and reCALL, which is the most restricted in terms of access of the three. But there is something deeply frustrating about most of this research, and this is what I want to explore in these posts. More often than not, research articles end with a call for more research. And more often than not, I find myself saying ‘Please, no, not more research like this!’

First, though, I would like to turn to a more reader-friendly source of research findings. Systematic reviews are, basically, literature reviews which can save people like me from having to plough through endless papers on similar subjects, all of which contain the same (or similar) literature review in the opening sections. If only there were more of them. Others agree with me: the conclusion of one systematic review of learning and teaching with technology in higher education (Lillejord et al., 2018) was that more systematic reviews were needed.

Last year saw the publication of a systematic review of research on artificial intelligence applications in higher education (Zawacki-Richter, et al., 2019) which caught my eye. The first thing that struck me about this review was that ‘out of 2656 initially identified publications for the period between 2007 and 2018, 146 articles were included for final synthesis’. In other words, only just over 5% of the research was considered worthy of inclusion.

The review did not paint a very pretty picture of the current state of AIEd research. As the second part of the title of this review (‘Where are the educators?’) makes clear, the research, taken as a whole, showed a ‘weak connection to theoretical pedagogical perspectives’. This is not entirely surprising. As Bates (2019) has noted: ‘since AI tends to be developed by computer scientists, they tend to use models of learning based on how computers or computer networks work (since of course it will be a computer that has to operate the AI). As a result, such AI applications tend to adopt a very behaviourist model of learning: present / test / feedback.’ More generally, it is clear that technology adoption (and research) is being driven by technology enthusiasts, with insufficient expertise in education. The danger is that edtech developers ‘will simply ‘discover’ new ways to teach poorly and perpetuate erroneous ideas about teaching and learning’ (Lynch, 2017).

This, then, is the first item on my checklist of things that, collectively, researchers need to do to improve the value of their work. The rest of the list is drawn from observations made mostly, but not exclusively, by the authors of systematic reviews, mostly reviews of general edtech research. In the next blog post, I’ll look more closely at a recent collection of ELT edtech research (Mavridi & Saumell, 2020) to see how it measures up.

1 Make sure your research is adequately informed by educational research outside the field of edtech

Unproblematised behaviourist assumptions about the nature of learning are all too frequent. References to learning styles are still fairly common. The skill most frequently investigated in the context of edtech is critical thinking (Sosa Neira et al., 2017), but this is rarely defined and almost never problematised, despite a broad literature that questions the construct.

2 Adopt a sceptical attitude from the outset

Know your history. Decades of technological innovation in education have shown precious little in the way of educational gains and, more than anything else, have taught us that we need to be sceptical from the outset. ‘Enthusiasm and praise that are directed towards ‘virtual education’, ‘school 2.0’, ‘e-learning’ and the like’ (Selwyn, 2014: vii) are indications that the lessons of the past have not been sufficiently absorbed (Levy, 2016: 102). The phrase ‘exciting potential’, for example, should be banned from all edtech research. See, for example, a ‘state-of-the-art analysis of chatbots in education’ (Winkler & Söllner, 2018), which has nothing to conclude but ‘exciting potential’. Potential is fine (indeed, it is perhaps the only thing that research can unambiguously demonstrate – see section 3 below), but can we try to be a little more grown-up about things?

3 Know what you are measuring

Measuring learning outcomes is tricky, to say the least, but it’s understandable that researchers should try to focus on them. Unfortunately, ‘the vast array of literature involving learning technology evaluation makes it challenging to acquire an accurate sense of the different aspects of learning that are evaluated, and the possible approaches that can be used to evaluate them’ (Lai & Bower, 2019). Metrics such as student grades are hard to interpret, not least because of the large number of variables and the danger of many things being conflated in one score. Equally, or possibly even more, problematic are self-reporting measures, which are rarely robust. It seems that surveys are the most widely used instrument in qualitative research (Sosa Neira et al., 2017), but these will tell us little or nothing when used for short-term interventions (see point 5 below).

4 Ensure that the sample size is big enough to mean something

In most of the research into digital technology in education that was analysed in a literature review carried out for the Scottish government (ICF Consulting Services Ltd, 2015), there were only ‘small numbers of learners or teachers or schools’.

5 Privilege longitudinal studies over short-term projects

The Scottish government literature review (ICF Consulting Services Ltd, 2015) also noted that ‘most studies that attempt to measure any outcomes focus on short and medium term outcomes’. The fact that the use of a particular technology has some sort of impact over the short or medium term tells us very little of value. Unless there is very good reason to suspect the contrary, we should assume that it is a novelty effect that has been captured (Levy, 2016: 102).

6 Don’t forget the content

The starting point of much edtech research is the technology, but most edtech, whether it’s a flashcard app or a full-blown Moodle course, has content. Research reports rarely give details of this content, assuming perhaps that it’s just fine, and all that’s needed is a little tech to ‘present learners with the ‘right’ content at the ‘right’ time’ (Lynch, 2017). It’s a foolish assumption. Take a random educational app from the Play Store, a random MOOC or whatever, and the chances are you’ll find it’s crap.

7 Avoid anecdotal accounts of technology use in quasi-experiments as the basis of a ‘research article’

Control (i.e. technology-free) groups may not always be possible, but without them we’re unlikely to learn much from a single study. What would, however, be extremely useful would be a large, collated collection of such action-research projects, using the same or similar technology, in a variety of settings. There is a marked absence of this kind of work.

8 Enough already of higher education contexts

Researchers typically work in universities where they have captive students who they can carry out research on. But we have a problem here. The systematic review of Lundin et al (2018), for example, found that ‘studies on flipped classrooms are dominated by studies in the higher education sector’ (besides lacking anchors in learning theory or instructional design). With some urgency, primary and secondary contexts need to be investigated in more detail, not just regarding flipped learning.

9 Be critical

Very little edtech research considers the downsides of edtech adoption. Online safety, privacy and data security are hardly peripheral issues, especially with younger learners. Ignoring them won’t make them go away.

More research?

So do we need more research? For me, two things stand out. We might benefit more from, firstly, a different kind of research, and, secondly, more syntheses of the work that has already been done. Although I will probably continue to dip into the pot-pourri of articles published in the main CALL journals, I’m looking forward to a change at the CALICO journal. From September of this year, one issue a year will be thematic, with a lead article written by established researchers which will ‘first discuss in broad terms what has been accomplished in the relevant subfield of CALL. It should then outline which questions have been answered to our satisfaction and what evidence there is to support these conclusions. Finally, this article should pose a “soft” research agenda that can guide researchers interested in pursuing empirical work in this area’. This will be followed by two or three empirical pieces that ‘specifically reflect the research agenda, methodologies, and other suggestions laid out in the lead article’.

But I think I’ll still have a soft spot for some of the other journals that are coyer about their impact factor and that can be freely accessed. How else would I discover (it would be too mean to give the references here) that ‘the effective use of new technologies improves learners’ language learning skills’? Presumably, the ineffective use of new technologies has the opposite effect? Or that ‘the application of modern technology represents a significant advance in contemporary English language teaching methods’?

References

Bates, A. W. (2019). Teaching in a Digital Age Second Edition. Vancouver, B.C.: Tony Bates Associates Ltd. Retrieved from https://pressbooks.bccampus.ca/teachinginadigitalagev2/

ICF Consulting Services Ltd (2015). Literature Review on the Impact of Digital Technology on Learning and Teaching. Edinburgh: The Scottish Government. https://dera.ioe.ac.uk/24843/1/00489224.pdf

Lai, J.W.M. & Bower, M. (2019). How is the use of technology in education evaluated? A systematic review. Computers & Education, 133(1), 27-42. Elsevier Ltd. Retrieved January 14, 2020 from https://www.learntechlib.org/p/207137/

Levy, M. (2016). Researching in language learning and technology. In Farr, F. & Murray, L. (Eds.) The Routledge Handbook of Language Learning and Technology. Abingdon, Oxon.: Routledge. pp. 101–114

Lillejord, S., Børte, K., Nesje, K. & Ruud, E. (2018). Learning and teaching with technology in higher education – a systematic review. Oslo: Knowledge Centre for Education. https://www.forskningsradet.no/siteassets/publikasjoner/1254035532334.pdf

Lundin, M., Bergviken Rensfeldt, A., Hillman, T. et al. (2018). Higher education dominance and siloed knowledge: a systematic review of flipped classroom research. International Journal of Educational Technology in Higher Education, 15, 20. doi:10.1186/s41239-018-0101-6

Lynch, J. (2017). How AI Will Destroy Education. Medium, November 13, 2017. https://buzzrobot.com/how-ai-will-destroy-education-20053b7b88a6

Mavridi, S. & Saumell, V. (Eds.) (2020). Digital Innovations and Research in Language Learning. Faversham, Kent: IATEFL

Selwyn, N. (2014). Distrusting Educational Technology. New York: Routledge

Sosa Neira, E. A., Salinas, J. & de Benito Crosetti, B. (2017). Emerging Technologies (ETs) in Education: A Systematic Review of the Literature Published between 2006 and 2016. International Journal of Emerging Technologies in Learning, 12 (5). https://online-journals.org/index.php/i-jet/article/view/6939

Winkler, R. & Söllner, M. (2018). Unleashing the Potential of Chatbots in Education: A State-Of-The-Art Analysis. In: Academy of Management Annual Meeting (AOM). Chicago, USA. https://www.alexandria.unisg.ch/254848/1/JML_699.pdf

Zawacki-Richter, O., Bond, M., Marín, V. I. & Gouverneur, F. (2019). Systematic review of research on artificial intelligence applications in higher education – where are the educators? International Journal of Educational Technology in Higher Education, 16 (39)

Digital flashcard systems like Memrise and Quizlet remain among the most popular language learning apps. Their focus is on the deliberate learning of vocabulary, an approach described by Paul Nation (Nation, 2005) as ‘one of the least efficient ways of developing learners’ vocabulary knowledge but nonetheless […] an important part of a well-balanced vocabulary programme’. The deliberate teaching of vocabulary also features prominently in most platform-based language courses.

For both vocabulary apps and bigger courses, the lexical items need to be organised into sets for the purposes of both presentation and practice. A common way of doing this, especially at lower levels, is to group the items into semantic clusters (sets with a classifying superordinate, like body part, and a collection of example hyponyms, like arm, leg, head, chest, etc.).
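For illustration, here is a minimal sketch (in Python, with invented field names; no particular app’s data model is implied) of what such a semantic cluster might look like as a flashcard set:

```python
# Purely illustrative: a semantic cluster as a flashcard app might
# store it - one classifying superordinate and a set of co-hyponyms,
# all the same part of speech, presented and practised together.
# The L2 items are those from the paragraph above; the French L1
# glosses are arbitrary.
semantic_cluster = {
    "superordinate": "body part",
    "items": [
        {"l2": "arm", "l1": "bras"},
        {"l2": "leg", "l1": "jambe"},
        {"l2": "head", "l1": "tête"},
        {"l2": "chest", "l1": "poitrine"},
    ],
}
```

The decontextualised, bilingual pairing of items here anticipates the research designs discussed below, which typically present target words in exactly this form.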

The problem, as Keith Folse puts it, is that such clusters ‘are not only unhelpful, they actually hinder vocabulary retention’ (Folse, 2004: 52). Evidence for this claim may be found in Higa (1963), Tinkham (1993, 1997), Waring (1997), Erten & Tekin (2008) and Barcroft (2015), to cite just some of the better-known studies. The results, says Folse, ‘are clear and, I think, very conclusive’. The explanation that is usually given draws on interference theory: semantic similarity may lead to confusion (e.g. when learners mix up days of the week, colour words or adjectives to describe personality).

It appears, then, to be long past time to get rid of semantic clusters in language teaching. Well … not so fast. First of all, although most of the research sides with Folse, not all of it does. Nakata and Suzuki (2019), in their survey of more recent research, found that results were more mixed. They found one study which suggested that there was no significant difference in learning outcomes between presenting words in semantic clusters and in semantically unrelated groups (Ishii, 2015). And they found four studies (Hashemi & Gowdasiaei, 2005; Hoshino, 2010; Schneider, Healy, & Bourne, 1998, 2002) where semantic clusters had a positive effect on learning.

Nakata and Suzuki (2019) offer three reasons why semantic clustering might facilitate vocabulary learning: it (1) ‘reflects how vocabulary is stored in the mental lexicon, (2) introduces desirable difficulty, and (3) leads to extra attention, effort, or engagement from learners’. Finkbeiner and Nicol (2003) make a similar point: ‘although learning semantically related words appears to take longer, it is possible that words learned under these conditions are learned better for the purpose of actual language use (e.g., the retrieval of vocabulary during production and comprehension). That is, the very difficulty associated with learning the new labels may make them easier to process once they are learned’. Both pairs of researchers cited in this paragraph conclude that semantic clusters are best avoided, but their discussion of the possible benefits of this clustering is a recognition that the research (for reasons which I will come on to) cannot lead to categorical conclusions.

The problem, as so often with pedagogical research, is the gap between research conditions and real-world classrooms. Before looking at this in a little more detail, one relatively uncontentious observation can be made. Even those scholars who advise against semantic clustering (e.g. Papathanasiou, 2009) acknowledge that the situation is complicated by other factors, especially the level of proficiency of the learner and whether or not one or more of the hyponyms are already known to the learner. At higher levels (when it is more likely that one or more of the hyponyms are already, even partially, known), semantic clustering is not a problem. I would add that, at higher levels, the deliberate learning of vocabulary is, on the whole, even less efficient than at lower levels, and should be an increasingly small part of a well-balanced vocabulary programme.

So, why is there a problem drawing practical conclusions from the research? In order to have any scientific validity at all, researchers need to control a large number of variables. They need, for example, to be sure that learners do not already know any of the items that are being presented. The only practical way of doing this is to present sets of invented words, and this is what most of the research does (Sarioğlu, 2018). These artificial words solve one problem, but create others, the most significant of which is item difficulty. Many factors impact on item difficulty, including word frequency (obviously a problem with invented words), word length, pronounceability, and the familiarity and length of the corresponding item in L1. None of the studies which support the abandonment of semantic clusters has controlled all of these variables (Nakata and Suzuki, 2019). Indeed, it would be practically impossible to do so. Learning pseudo-words is a very different proposition to learning real words, which a learner may subsequently encounter or want to use.

Take, for example, the days of the week. It’s quite common for learners to muddle up Tuesday and Thursday. The reason for this is not just semantic similarity (Tuesday and Monday are less frequently confused). They are also very similar in terms of both spelling and pronunciation. They are ‘synforms’ (see Laufer, 2009), which, like semantic clusters, can hinder learning of new items. But, now imagine a French-speaking learner of Spanish studying the days of the week. It is much less likely that martes and jueves will be muddled, because of their similarity to the French words mardi and jeudi. There would appear to be no good reason not to teach the complete set of days of the week to a learner like this. All other things being equal, it is probably a good idea to avoid semantic clusters, but all other things are very rarely equal.

Again, in an attempt to control for variables, researchers typically present the target items in isolation (in bilingual pairings). But, again, the real world does not normally conform to this condition. Leo Selivan (2014) suggests that semantic clusters (e.g. colours) be taught as part of collocations. He gives the examples of red dress, green grass and black coffee, and points out that the alliterative patterns can serve as mnemonic devices which will facilitate learning. The suggestion is, I think, a very good one, but, more generally, it’s worth noting that the presentation of lexical items in both digital flashcards and platform courses is rarely context-free. Contexts will inevitably impact on learning and may well mitigate the risks of semantic clustering.

Finally, this kind of research typically gives participants very restricted time to memorise the target words (Sarioğlu, 2018), and they are tested in very controlled recall tasks. In the case of language platform courses, practice of target items is usually spread out over a much longer period of time, with a variety of exposure opportunities (in controlled practice tasks, exposure in texts, personalisation tasks, revision exercises, etc.) both within and across learning units. In this light, it is not unreasonable to argue that laboratory-type research offers only limited insights into what should happen in the real world of language learning and teaching. The choice of learning items, the way they are presented and practised, and the variety of activities in the well-balanced vocabulary programme are probably all more significant than the question of whether items are organised into semantic clusters.
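As a point of contrast with the restricted timescales of the research, here is a toy sketch of the kind of expanding review schedule a platform course might use to spread encounters with an item over weeks or months. The intervals and the function are invented for the sake of the example and do not describe any actual platform’s algorithm:

```python
from datetime import date, timedelta

# Invented expanding intervals: each successful recall pushes the
# next scheduled encounter further into the future.
INTERVALS_DAYS = [1, 3, 7, 21, 60]

def next_review(first_seen: date, successful_recalls: int) -> date:
    """Return the date of the next scheduled encounter with an item."""
    step = min(successful_recalls, len(INTERVALS_DAYS) - 1)
    return first_seen + timedelta(days=sum(INTERVALS_DAYS[:step + 1]))

# An item first met on day 0 would resurface after 1, 4, 11, 32 and
# 92 days - a very different exposure pattern from a single
# memorisation session followed by an immediate recall test.
```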

Although semantic clusters are quite common in language learning materials, much more common are thematic clusters (i.e. groups of words which are topically related, but which include a variety of parts of speech; see below). Researchers, it seems, have no problem with this way of organising lexical sets. By way of conclusion, here’s an extract from a recent book:

‘Introducing new words together that are similar in meaning (synonyms), such as scared and frightened, or forms (synforms), like contain and maintain, can be confusing, and students are less likely to remember them. This problem is known as ‘interference’. One way to avoid this is to choose words that are around the same theme, but which include a mix of different parts of speech. For example, if you want to focus on vocabulary to talk about feelings, instead of picking lots of adjectives (happy, sad, angry, scared, frightened, nervous, etc.) include some verbs (feel, enjoy, complain) and some nouns (fun, feelings, nerves). This also encourages students to use a variety of structures with the vocabulary.’ (Hughes et al., 2019: 25)
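Translating the advice in this extract into the same illustrative sketch format as before makes the contrast with the earlier semantic cluster clear: one topic, mixed parts of speech, and no near-synonyms side by side (field names again invented):

```python
# Purely illustrative: a thematic cluster built from the examples in
# the extract above - topically related items with mixed parts of
# speech, avoiding the synonym pairs that invite interference.
thematic_cluster = {
    "topic": "feelings",
    "items": [
        {"word": "scared", "pos": "adjective"},
        {"word": "feel", "pos": "verb"},
        {"word": "enjoy", "pos": "verb"},
        {"word": "complain", "pos": "verb"},
        {"word": "fun", "pos": "noun"},
        {"word": "nerves", "pos": "noun"},
    ],
}
```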


References

Barcroft, J. 2015. Lexical Input Processing and Vocabulary Learning. Amsterdam: John Benjamins

Erten, I.H., & Tekin, M. 2008. Effects on vocabulary acquisition of presenting new words in semantic sets versus semantically-unrelated sets. System, 36 (3), 407-422

Finkbeiner, M. & Nicol, J. 2003. Semantic category effects in second language word learning. Applied Psycholinguistics, 24, 369–383

Folse, K. S. 2004. Vocabulary Myths. Ann Arbor: University of Michigan Press

Hashemi, M.R., & Gowdasiaei, F. 2005. An attribute-treatment interaction study: Lexical-set versus semantically-unrelated vocabulary instruction. RELC Journal, 36 (3), 341-361

Higa, M. 1963. Interference effects of intralist word relationships in verbal learning. Journal of Verbal Learning and Verbal Behavior, 2, 170-175

Hoshino, Y. 2010. The categorical facilitation effects on L2 vocabulary learning in a classroom setting. RELC Journal, 41, 301–312

Hughes, S. H., Mauchline, F. & Moore, J. 2019. ETpedia Vocabulary. Shoreham-by-Sea: Pavilion Publishing and Media

Ishii, T. 2015. Semantic connection or visual connection: Investigating the true source of confusion. Language Teaching Research, 19, 712–722

Laufer, B. 2009. The concept of ‘synforms’ (similar lexical forms) in vocabulary acquisition. Language and Education, 2 (2), 113–132

Nakata, T. & Suzuki, Y. 2019. Effects of massing and spacing on the learning of semantically related and unrelated words. Studies in Second Language Acquisition, 41 (2), 287–311

Nation, P. 2005. Teaching Vocabulary. Asian EFL Journal. http://www.asian-efl-journal.com/sept_05_pn.pdf

Papathanasiou, E. 2009. An investigation of two ways of presenting vocabulary. ELT Journal, 63 (4), 313–322

Sarioğlu, M. 2018. A Matter of Controversy: Teaching New L2 Words in Semantic Sets or Unrelated Sets. Journal of Higher Education and Science, 8 (1), 172–183

Schneider, V. I., Healy, A. F., & Bourne, L. E. 1998. Contextual interference effects in foreign language vocabulary acquisition and retention. In Healy, A. F. & Bourne, L. E. (Eds.), Foreign language learning: Psycholinguistic studies on training and retention (pp. 77–90). Mahwah, NJ: Erlbaum

Schneider, V. I., Healy, A. F., & Bourne, L. E. 2002. What is learned under difficult conditions is hard to forget: Contextual interference effects in foreign vocabulary acquisition, retention, and transfer. Journal of Memory and Language, 46, 419–440

Selivan, L. 2014. Horizontal alternatives to vertical lists. Blog post: http://leoxicon.blogspot.com/2014/03/horizontal-alternatives-to-vertical.html

Tinkham, T. 1993. The effect of semantic clustering on the learning of second language vocabulary. System, 21 (3), 371-380

Tinkham, T. 1997. The effects of semantic and thematic clustering on the learning of a second language vocabulary. Second Language Research, 13 (2), 138-163

Waring, R. 1997. The negative effects of learning words in semantic sets: a replication. System, 25 (2), 261–274