
In the world of ELT teacher blogs, magazines, webinars and conferences right now, you would be hard pressed to avoid the topic of generative AI. Ten years ago, the hot topic was ‘mobile learning’. Might there be some lessons to be learnt from casting our gaze back a little more than a decade?

One of the first ELT-related conferences about mobile learning took place in Japan in 2006. Reporting on this a year later, Dudeney and Hockly (2007: 156) observed that ‘m-learning appears to be here to stay’. By 2009, Agnes Kukulska-Hulme was asking ‘will mobile learning change language learning?’ Her answer, of course, was yes, but it took a little time for the world of ELT to latch onto this next big thing (besides a few apps). Relatively quick out of the blocks was Caroline Moore with an article in the Guardian (8 March 2011) arguing for wider use of mobile learning in ELT. As is so often the case with early promoters of edtech, Caroline had a vested interest, as a consultant in digital language learning, in advancing her basic argument. This was that the technology was so ubiquitous and so rich in potential that it would be foolish not to make the most of it.

The topic gained traction with an IATEFL LT SIG webinar in December 2011, followed by a full-day pre-conference event at the main IATEFL conference early the following year, along with a ‘Macmillan Education Mobile Learning Debate’. Suddenly, mobile learning was everywhere and, by the end of the year, it was being described as ‘the future of learning’ (Kukulska-Hulme, 2012). In early 2013, ELT Journal published a defining article, ‘Mobile Learning’ (Hockly, 2013). By this point, it wasn’t just a case of recommending that teachers try out a few apps with their learners. The article concludes by saying that ‘the future is increasingly mobile, and it behoves us to reflect this in our teaching practice’ (Hockly, 2013: 83). The rhetorical force was easier to understand than the logical connection.

It wasn’t long before mobile learning was routinely described as the ‘future of language learning’ and apps like Duolingo and Busuu were said to be ‘revolutionising language learning’. Kukulska-Hulme (Kukulska-Hulme et al., 2017) contributed a chapter entitled ‘Mobile Learning Revolution’ to a handbook of technology and second language learning.

In 2017 (books take a while to produce), OUP brought out ‘Mobile Learning’ by Shaun Wilden (2017). Shaun’s book is the place to go for practical ideas: playing around with photos, using QR codes, audio / video recording and so on. The reasons for using mobile learning continue to grow (developing 21st century skills like creativity, critical thinking and digital literacy in ‘student-centred, dynamic, and motivating ways’).

Unlike Nicky Hockly’s article (2013), Shaun acknowledges that there may be downsides to mobile technology in the classroom. The major downside, as everybody who has ever been in a classroom where phones are permitted knows, is that the technology may be a bigger source of distraction than it is of engagement. Shaun offers a page about ‘acceptable use policies’ for mobile phones in classrooms, but does not let (what he describes as) ‘media scare stories’ get in the way of his enthusiasm.

There are undoubtedly countless examples of ways in which mobile phones can (and even should) be used to further language learning, although I suspect that the QR reader would struggle to make the list. The problem is that these positive examples are all we ever hear about. The topic of distraction does not even get a mention in the chapter on mobile language learning in ‘The Routledge Handbook of Language Learning and Technology’ (Stockwell, 2016). Neither does it appear in Li Li’s (2017) ‘New Technologies and Language Learning’.

Glenda Morgan (2023) has described this as ‘Success Porn in EdTech’: success is exaggerated, failures are minimized, and challenges are downplayed to the point that they are pretty much invisible. ‘Success porn’ is a feature of conference presentations and blog posts, genres which require relentless positivity and a ‘constructive sense of hope, optimism and ambition’ (Selwyn, 2016). Edtech Kool-Aid (ibid.) is also a feature of academic writing. Do a Google Scholar search for ‘mobile learning language learning’ to see what I mean. The first article that comes up is entitled ‘Positive effects of mobile learning on foreign language learning’. Scepticism is in very short supply, as it is in most research into edtech. There are a number of reasons for this, one of which (that ‘locating one’s work in the pro-edtech zeitgeist may be a strategic choice to be part of the mainstream of the field’ (Mertala et al., 2022)) will resonate with colleagues who wish to give conference presentations and write blogs for publishers. The discourse around AI is, of course, no different (see Nemorin et al., 2022).

Anyway, back to the downside of mobile learning and the ‘media scare stories’. Most language learning takes place in primary and secondary schools. According to a recent report from Common Sense (Radesky et al., 2023), US teens use their smartphones for a median of 4½ hours per day, checking for notifications a median of 51 times. Almost all of them (97%) use their phones at school, mostly for social media, videos or gaming. Schools have a variety of policies, and widely varying enforcement within those policies. Your country may not be quite the same as the US, but it’s probably heading that way.

Research suggests that excessive (which is to say typical) mobile phone use has a negative impact on learning outcomes and wellbeing, and exacerbates problems such as bullying (see this brief summary of global research). This comes as no surprise to most people – the participants at the 2012 Macmillan debate were aware of these problems. The question that needs to be asked, therefore, is not whether mobile learning can assist language learning, but whether the potential gains outweigh the potential disadvantages. Is language learning a special case?

One in four countries around the world has decided to ban phones in school. A new report from UNESCO (2023) calls for a global smartphone ban in education, pointing out that there is ‘little robust research to demonstrate digital technology inherently added value to education’. The same report delves a little into generative AI, and a summary begins ‘Generative AI may not bring the kind of change in education often discussed. Whether and how AI would be used in education is an open question (Gillani et al., 2023)’ (UNESCO, 2023: 13).

The marketing of edtech has always insisted that ‘this time it’s different’. The claim relies on a certain number of people repeating the mantra, since the more it is repeated, the more likely it is to be perceived as true (Fazio et al., 2019): this is the illusory truth effect, or the ‘Snark rule[1]’. Mobile learning changed things for the better for some learners in some contexts: claims that it was the future of, or would revolutionize, language learning have proved somewhat exaggerated. Indeed, the proliferation of badly-designed language learning apps suggests that much mobile learning reinforces the conventional past of language learning (drilling, gamified rote learning, native-speaker models, etc.) rather than leading to positive change (see Kohn, 2023). The history of edtech is a history of broken promises and unfulfilled potential and there is no good reason why generative AI will be any different.

Perhaps, then, it behoves us to be extremely sceptical about the current discourse surrounding generative AI in ELT. Like mobile technology, it may well be an extremely useful tool, but the chances that it will revolutionize language teaching are extremely slim – much like the radio, TV, audio / video recording and playback, the photocopier, the internet and VR before it. A few people will make some money for a while, but truly revolutionary change in teaching / learning will not come about through technological innovation.

References

Dudeney, G. & Hockly, N. (2007) How to Teach English with Technology. Harlow: Pearson Education

Fazio, L. K., Rand, D. G. & Pennycook, G. (2019) Repetition increases perceived truth equally for plausible and implausible statements. Psychonomic Bulletin and Review 26: 1705–1710. https://doi.org/10.3758/s13423-019-01651-4

Hockly, N. (2013) Mobile Learning. ELT Journal, 67 (1): 80 – 84

Kohn, A. (2023) How ‘Innovative’ Ed Tech Actually Reinforces Convention. Education Week, 19 September 2023.

Kukulska-Hulme, A. (2009) Will Mobile Learning Change Language Learning? reCALL, 21 (2): 157 – 165

Kukulska-Hulme, A. (2012) Mobile Learning and the Future of Learning. International HETL Review, 2: 13 – 18

Kukulska-Hulme, A., Lee, H. & Norris, L. (2017) Mobile Learning Revolution: Implications for Language Pedagogy. In Chapelle, C. A. & Sauro, S. (Eds.) The Handbook of Technology and Second Language Teaching and Learning. John Wiley & Sons

Li, L. (2017) New Technologies and Language Learning. London: Palgrave

Mertala, P., Moens, E. & Teräs, M. (2022) Highly cited educational technology journal articles: a descriptive and critical analysis, Learning, Media and Technology, DOI: 10.1080/17439884.2022.2141253

Nemorin, S., Vlachidis, A., Ayerakwa, H. M. & Andriotis, P. (2022): AI hyped? A horizon scan of discourse on artificial intelligence in education (AIED) and development, Learning, Media and Technology, DOI: 10.1080/17439884.2022.2095568

Radesky, J., Weeks, H.M., Schaller, A., Robb, M., Mann, S., and Lenhart, A. (2023) Constant Companion: A Week in the Life of a Young Person’s Smartphone Use. San Francisco, CA: Common Sense.

Selwyn, N. (2016) Minding our Language: Why Education and Technology is Full of Bullshit … and What Might be Done About it. Learning, Media and Technology, 41 (3): 437–443

Stockwell, G. (2016) Mobile Language Learning. In Farr, F. & Murray, L. (Eds.) The Routledge Handbook of Language Learning and Technology. Abingdon: Routledge. pp. 296 – 307

UNESCO (2023) Global Education Monitoring Report 2023: Technology in Education – A Tool on whose Terms? Paris: UNESCO

Wilden, S. (2017) Mobile Learning. Oxford: OUP


[1] Named after Lewis Carroll’s poem ‘The Hunting of the Snark’ in which the Bellman cries ‘I have said it thrice: What I tell you three times is true.’

When the internet arrived on our desktops in the 1990s, language teachers found themselves able to access huge amounts of authentic texts of all kinds. It was a true game-changer. But when it came to dedicated ELT websites, the pickings were much slimmer. There was a very small number of good ELT resource sites (onestopenglish stood out from the crowd), but more ubiquitous and more enduring were the sites offering downloadable material shared by teachers. One of these, ESLprintables.com, currently has 1,082,522 registered users, compared to the 700,000+ of onestopenglish.

The resources on offer at sites such as these range from texts and scripted dialogues, along with accompanying comprehension questions, to grammar explanations and gapfills, vocabulary matching tasks and gapfills, to lists of prompts for discussions. Almost all of it is unremittingly awful, a terrible waste of the internet’s potential.

Ten years later, interactive online possibilities began to appear. Before long, language teachers found themselves able to use things like blogs, wikis and Google Docs. It was another true game-changer. But when it came to dedicated ELT tools, the pickings were much slimmer. There is some useful stuff (flashcard apps, for example) out there, but more ubiquitous are interactive versions of the downloadable dross that already existed. Learning platforms, which have such rich possibilities, are mostly loaded with gapfills, drag-and-drop, multiple choice, and so on. Again, it seems such a terrible waste of the technology’s potential. And all of this runs counter to what we know about how people learn another language. It’s as if decades of research into second language acquisition had never taken place.

And now we have AI and large language models like GPT. The possibilities are rich and quite a few people, like Sam Gravell and Svetlana Kandybovich, have already started suggesting interesting and creative ways of using the technology for language teaching. Sadly, though, technology has a tendency to bring out the worst in approaches to language teaching, since there’s always a bandwagon to be jumped on. Welcome to Twee, ‘A.I. powered tools for English teachers’, where you can generate your own dross in a matter of seconds. You can generate texts and dialogues, pitched at one of three levels, with or without target vocabulary, and produce comprehension questions (open questions, T / F, or M / C), exercises where vocabulary has to be matched to definitions, word-formation exercises, gapfills. The name of the site has been carefully chosen (Cambridge dictionary defines ‘twee’ as ‘artificially attractive’).

I decided to give it a try. Twee uses the same technology as ChatGPT and the results were unsurprising. I won’t comment in any detail on the intrinsic interest or the accuracy of factual information in the texts. They are what you might expect if you have experimented with ChatGPT. For the same reason, I won’t go into details about the credibility or naturalness of the dialogues. Similarly, the ability of Twee to gauge the appropriacy of texts for particular levels is poor: it hasn’t been trained on a tagged learner corpus. In any case, having only three level bands (A1/A2, B1/B2 and C1/C2) means that levelling is far too approximate. Suffice to say that the comprehension questions, vocabulary-item selection and vocabulary practice activities would all require very heavy editing.

Twee is still in beta, and, no doubt, improvements will come as the large language models on which it draws get bigger and better. Bilingual functionality is a necessary addition, and is doable. More reliable level-matching would be nice, but it’s a huge technological challenge, besides being theoretically problematic. But bigger problems remain and these have nothing to do with technology. Take a look at the examples below of how Twee suggests its reading comprehension tasks (open questions, M / C, T / F) could be used with some Beatles songs.

Is there any point getting learners to look at a ‘dialogue’ (on the topic of yellow submarines) like the one below? Is there any point getting learners to write essays using prompts such as those below?

What possible learning value could tasks such as these have? Is there any credible theory of language learning behind any of this, or is it just stuff that would while away some classroom time? AI meets ESLprintables – what a waste of the technology’s potential!

Edtech vendors like to describe their products as ‘solutions’, but the educational challenges, which these products are supposedly solutions to, often remain unexamined. Poor use of technology can exacerbate these challenges by making inappropriate learning materials more easily available.

Last September, Cambridge published a ‘Sustainability Framework for ELT’, which attempts to bring together environmental, social and economic sustainability. It’s a kind of 21st century skills framework and is designed to help teachers ‘to integrate sustainability skills development’ into their lessons. Among the sub-skills that are listed, a handful grabbed my attention:

  • Identifying and understanding obstacles to sustainability
  • Broadening discussion and including underrepresented voices
  • Understanding observable and hidden consequences
  • Critically evaluating sustainability claims
  • Understanding the bigger picture

Hoping to brush up my skills in these areas, I decided to take a look at the upcoming BETT show in London, which describes itself as ‘the biggest Education Technology exhibition in the world’. BETT and its parent company, Hyve, ‘are committed to redefining sustainability within the event industry and within education’. They are doing this by reducing their ‘onsite printing and collateral’. (‘Event collateral’ is an interesting event-industry term that refers to all the crap that is put into delegate bags, intended to ‘enhance their experience of the event’.) BETT and Hyve are encouraging all sponsors to go paperless, too, ‘switching from seat-drop collateral to QR codes’, and delegate bags will no longer be offered. They are partnering with various charities to donate ‘surplus food and furniture’ to local community projects, they are donating to other food charities that support families in need, and they are recycling all of the aisle banners into tote bags. Keynote speakers will include people like Sally Uren, CEO of ‘Forum for the Future’, who will talk about ‘Transforming carbon neutral education for a just and regenerative future’.

BETT and Hyve want us to take their corporate and social responsibility very seriously. All of these initiatives are very commendable, even though I wouldn’t go so far as to say that they will redefine sustainability within the event industry and education. But there is a problem – and it’s not that the world is already over-saturated with recycled tote bags. As the biggest jamboree of this kind in the world, the show attracts over 600 vendors and over 30,000 visitors, with over 120 countries represented. Quite apart from all the collateral and surplus furniture, the carbon and material footprint of the event is far from negligible. Think of all those start-up solution-providers flying and driving into town, Airbnb-ing for the duration, and Ubering around after hours, for a start.

But this is not really the problem, either. Much as the event likes to talk about ‘driving impact and improving outcomes for teachers and learners’, the clear and only purpose of the event is to sell stuff. It is to enable the investors in the 600+ edtech solution-providers in the exhibition area to move towards making a return on their investment. If we wanted to talk seriously about sustainability, the question that needs to be asked is: to what extent does all the hardware and software on sale contribute in any positive and sustainable way to education? Is there any meaningful social benefit to be derived from all this hardware and software, or is it all primarily just a part of a speculative, financial game? Is the corporate social responsibility of BETT / Hyve a form of greenwashing to disguise the stimulation of more production and consumption? Is it all just a kind of ‘environmentalism of the rich’ (Dauvergne, 2016)?

Edtech is not the most pressing of environmental problems – indeed, there are examples of edtech that are likely more sustainable than the non-tech alternatives – but the sustainability question remains. There are at least four environmental costs to edtech:

  • The energy-greedy data infrastructures that lie behind digital transactions
  • The raw ingredients of digital devices
  • The environmentally destructive manufacture and production of digital devices
  • The environmental cost of dismantling and disposing of digital hardware (Selwyn, 2018)

Some forms of edtech are more environmentally costly than others. First, we might consider the material costs. Going back to pre-internet days, think of the countless tonnes of audio cassettes, VCR tapes, DVDs and CD-ROMs. Think of the discarded playback devices, language laboratories and IWBs. None of these are easily recyclable and most have ended up in landfill, mostly in countries that never used these products. These days the hardware that is used for edtech is more often a device that serves other non-educational purposes, but the planned obsolescence of our phones, tablets and laptops is a huge problem for sustainability.

More important now are probably the energy costs of edtech. Audio and video streaming might seem more environmentally friendly than CDs and DVDs, but, depending on how often the CD or DVD is used, the energy cost of streaming (especially high quality video) can be much higher than using the physical format. AI ups the ante significantly (Brevini, 2022). Five years ago, a standard AI training model in linguistics could ‘emit more than 284 tonnes of carbon dioxide equivalent’ (Strubell et al., 2019). With exponentially greater volumes of data now being used, the environmental cost is much, much higher. Whilst VR vendors will tout the environmental benefits of cutting down on travel, getting learners together in a physical room may well have a much lower carbon footprint than meeting in the Metaverse.

When doing the calculus of edtech, we need to evaluate the use-value of the technology. Does the tech actually have any clear educational (or other social) benefit, or is its value primarily in terms of its exchange-value?

To illustrate the difference between use-value and exchange-value, I’d like to return again to the beginnings of modern edtech in ELT. As the global market for ELT materials mushroomed in the 1990s, coursebook publishers realised that, for a relatively small investment, they could boost their sales by bringing out ‘new editions’ of best-selling titles. This meant a new cover, replacing a few texts and topics, making minor modifications to other content, and, crucially, adding extra features. As the years went by, these extra features became digital: CD-ROMs, DVDs, online workbooks and downloadables of various kinds. The publishers knew that sales depended on the existence of these shiny new things, even if many buyers made minimal use or zero use of them. But they gave the marketing departments and sales reps a pitch, and justified an increase in unit price. Did these enhanced coursebooks actually represent any increase in use-value? Did learners make better or faster progress in English as a result? On the whole, the answer has to be an unsurprising and resounding no. We should not be surprised if hundreds of megabytes of drag-and-drop grammar practice fail to have much positive impact on learning outcomes. From the start, it was the impact on the exchange-value (sales and profits) of these products that was the driving force.

Edtech vendors have always wanted to position themselves to potential buyers as ‘solution providers’, trumpeting the use-value of what they are selling. When it comes to attracting investors, it’s a different story, one that is all about minimum viable products, scalability and return on investment.

There are plenty of technologies that have undisputed educational use-value in language learning and teaching. Google Docs, Word, Zoom and YouTube come immediately to mind. Not coincidentally, they are not technologies that were designed for educational purposes. But when you look at specifically educational technology, it becomes much harder (though not impossible) to identify unambiguous gains in use-value. Most commonly, the technology holds out the promise of improved learning, but evidence that it has actually achieved this is extremely rare. Sure, a bells-and-whistles LMS offers exciting possibilities for flipped or blended learning, but research that demonstrates the effectiveness of these approaches in the real world is sadly lacking. Sure, VR might seem to offer a glimpse of motivated learners interacting meaningfully in the Metaverse, but I wouldn’t advise you to bet on it.

And betting is what most edtech is all about. An eye-watering $16.1 billion of venture capital was invested in global edtech in 2020. What matters is not that any of these products or services have any use-value, but that they are perceived to have a use-value. Central to this investment is the further commercialisation and privatisation of education (Williamson & Hogan, 2020). BETT is a part of this.

Returning to the development of my sustainability skills, I still need to consider the bigger picture. I’ve suggested that it is difficult to separate edtech from a consideration of capitalism, a system that needs to manufacture consumption, to expand production and markets in order to survive (Dauvergne, 2016: 48). Economic growth is the sine qua non of this system, and it is this that makes the British government (and others) so keen on BETT. Education and edtech in particular are rapidly growing markets. But growth is only sustainable, in environmental terms, if it is premised on things that we actually need, rather than things which are less necessary and ecologically destructive (Hickel, 2020). At the very least, as Selwyn (2021) noted, we need more diverse thinking: ‘What if environmental instability cannot be ‘solved’ simply through the expanded application of digital technologies but is actually exacerbated through increased technology use?’

References

Brevini, B. (2022) Is AI Good for the Planet? Cambridge: Polity Press

Dauvergne, P. (2016) Environmentalism of the Rich. Cambridge, Mass.: MIT Press

Hickel, J. (2020) Less Is More. London: William Heinemann

Selwyn, N. (2018) EdTech is killing us all: facing up to the environmental consequences of digital education. EduResearch Matters 22 October, 2018. https://www.aare.edu.au/blog/?p=3293

Selwyn, N. (2021) Ed-Tech Within Limits: Anticipating educational technology in times of environmental crisis. E-Learning and Digital Media, 18 (5): 496 – 510. https://journals.sagepub.com/doi/pdf/10.1177/20427530211022951

Strubell, E., Ganesh, A. & McCallum, A. (2019) Energy and Policy Considerations for Deep Learning in NLP. arXiv preprint: https://arxiv.org/pdf/1906.02243.pdf

Williamson, B. & Hogan, A. (2020) Commercialisation and privatisation in / of education in the context of Covid-19. Education International

Recent years have seen a proliferation of computer-assisted pronunciation trainers (CAPTs), both as stand-alone apps and as a part of broader language courses. The typical CAPT records the learner’s voice, compares this to a model of some kind, detects differences between the learner and the model, and suggests ways that the learner may more closely approximate to the model (Agarwal & Chakraborty, 2019). Most commonly, the focus is on individual phonemes, rather than, as in Richard Cauldwell’s ‘Cool Speech’ (2012), on the features of fluent natural speech (Rogerson-Revell, 2021).

The fact that CAPTs are increasingly available and attractive ‘does not of course ensure their pedagogic value or effectiveness’ … ‘many are technology-driven rather than pedagogy-led’ (Rogerson-Revell, 2021). Rogerson-Revell (2021) points to two common criticisms of CAPTs. Firstly, their pedagogic accuracy sometimes falls woefully short. She gives the example of a unit on intonation in one app, where users are told that ‘when asking questions in English, our voice goes up in pitch’ and ‘we lower the pitch of our voice at the end of questions’. Secondly, she observes that CAPTs often adopt a one-size-fits-all approach, despite the fact that we know that issues of pronunciation are extremely context-sensitive: ‘a set of learners in one context will need certain features that learners in another context do not’ (Levis, 2018: 239).

There are, in addition, technical challenges that are not easy to resolve. Many CAPTs rely on automatic speech recognition (ASR), which can be very accurate with some accents, but much less so with other accents (including many non-native-speaker accents) (Korzekwa et al., 2022). Anyone using a CAPT will experience instances of the software identifying pronunciation problems that are not problems, and failing to identify potentially more problematic issues (Agarwal & Chakraborty, 2019).

We should not, therefore, be too surprised if these apps don’t always work terribly well. Some apps, like the English File Pronunciation app, have been shown to be effective in helping the perception and production of certain phonemes by a very unrepresentative group of Spanish learners of English (Fouz-González, 2020), but this tells us next to nothing about the overall effectiveness of the app. Most CAPTs have not been independently reviewed, and, according to a recent meta-analysis of CAPTs (Mahdi & Al Khateeb, 2019), the small number of studies are ‘all of very low quality’. This, unfortunately, renders their meta-analysis useless.

Even if the studies in the meta-analysis had not been of very low quality, we would need to pause before digesting any findings about CAPTs’ effectiveness. Before anything else, we need to develop a good understanding of what they might be effective at. It’s here that we run headlong into the problem of native-speakerism (Holliday, 2006; Kiczkowiak, 2018).

The pronunciation model that CAPTs attempt to push learners towards is a native-speaker model. In the case of ELSA Speak, for example, this is a particular kind of American accent, although ‘British and other accents’ will apparently soon be added. Xavier Anguera, co-founder and CTO of ELSA Speak, in a fascinating interview with Paul Raine of TILTAL, happily describes his product as ‘an app that is for accent reduction’. Accent reduction is certainly a more accurate way of describing CAPTs than accent promotion.

Accent reduction, or the attempt to mimic an imagined native-speaker pronunciation, is now ‘rarely put forward by teachers or researchers as a worthwhile goal’ (Levis, 2018: 33) because it is only rarely achievable and, in many contexts, inappropriate. In addition, accent reduction cannot easily be separated from accent prejudice. Accent reduction courses and products ‘operate on the assumption that some accents are more legitimate than others’ (Ennser-Kananen, et al., 2021) and there is evidence that they can ‘reinscribe racial inequalities’ (Ramjattan, 2019). Accent reduction is quintessentially native-speakerist.

Rather than striving towards native-speaker accentedness, a growing number of teachers, methodologists and researchers now regard intelligibility as a more appropriate learning goal (Levis, 2018). It has been over 20 years since Jennifer Jenkins (2000) developed her Lingua Franca Core (LFC), a relatively short list of pronunciation features that she considered central to intelligibility in English as a Lingua Franca contexts (i.e. the majority of contexts in which English is used). Intelligibility as the guiding principle of pronunciation teaching continues to grow in influence, spurred on by the work of Walker (2010), Kiczkowiak & Lowe (2018), Patsko & Simpson (2019) and Hancock (2020), among others.

Unfortunately, intelligibility is a deceptively simple concept. What exactly it is, is ‘not an easy question to answer’ writes John Levis (2018) before attempting his own answer in the next 250 pages. As admirable as the LFC may be as an attempt to offer a digestible and actionable list of key pronunciation features, it ‘remains controversial in many of its recommendations. It lacks robust empirical support, assumes that all NNS contexts are similar, and does not take into account the importance of stigma associated with otherwise intelligible pronunciations’ (Levis, 2018: 47). Other attempts to list features of intelligibility fare no better in Levis’s view: they are ‘a mishmash of incomplete and contradictory recommendations’ (Levis, 2018: 49).

Intelligibility is also complex because of the relationship between intelligibility and comprehensibility, or the listener’s willingness to understand – their attitude or stance towards the speaker. Comprehensibility is a mediation concept (Ennser-Kananen, et al., 2021). It is a two-way street, and intelligibility-driven approaches need to take this into account (unlike the accent-reduction approach which places all the responsibility for comprehensibility on the shoulders of the othered speaker).

The problem of intelligibility becomes even more thorny when it comes to designing a pronunciation app. Intelligibility and comprehensibility cannot easily be measured (if at all!), and an app’s algorithms need a concrete numerically-represented benchmark towards which a user / learner can be nudged. Accentedness can be measured (even if the app has to reify a ‘native-speaker accent’ to do so). Intelligibility / Comprehensibility is simply not something, as Xavier Anguera acknowledges, that technology can deal with. In this sense, CAPTs cannot avoid being native-speakerist.

At this point, we might ride off indignantly into the sunset, but a couple of further observations are in order. First, accentedness and intelligibility are not mutually exclusive categories. Anguera notes that intelligibility can be partly improved by reducing accentedness, and some of the research cited by Levis (2018) backs him up on this. But precisely how much and what kind of accent reduction improves intelligibility is not knowable, so the use of CAPTs is something of an optimistic stab in the dark. Like all stabs in the dark, it carries dangers. Second, individual language learners may be forgiven for not wanting to wait for accent prejudice to become a thing of the past: if they feel that they will suffer less from prejudice by attempting, here and now, to reduce their ‘foreign’ accent, it is not for me, I think, to pass judgement. The trouble, of course, is that CAPTs contribute to the perpetuation of those very prejudices.

There is, however, one area where the digital evaluation of accentedness is, I think, unambiguously unacceptable. According to Rogerson-Revell (2021), ‘Australia’s immigration department uses the Pearson Test of English (PTE) Academic as one of five tests. The PTE tests speaking ability using voice recognition technology and computer scoring of test-takers’ audio recordings. However, L1 English speakers and highly proficient L2 English speakers have failed the oral fluency section of the English test, and in some cases it appears that L1 speakers achieve much higher scores if they speak unnaturally slowly and carefully’. Human evaluations are not necessarily any better.

References

Agarwal, C. & Chakraborty, P. (2019) A review of tools and techniques for computer aided pronunciation training (CAPT) in English. Education and Information Technologies, 24: 3731–3743. https://doi.org/10.1007/s10639-019-09955-7

Cauldwell, R. (2012) Cool Speech app. Available at: http://www.speechinaction.org/cool-speech-2

Ennser-Kananen, J., Halonen, M. & Saarinen, T. (2021) “Come Join Us and Lose Your Accent!” Accent Modification Courses as Hierarchization of International Students. Journal of International Students, 11 (2): 322–340

Fouz-González, J. (2020) Using apps for pronunciation training: An empirical evaluation of the English File Pronunciation app. Language Learning & Technology, 24 (1): 62–85

Hancock, M. (2020) 50 Tips for Teaching Pronunciation. Cambridge: Cambridge University Press

Holliday, A. (2006) Native-speakerism. ELT Journal, 60 (4): 385–387

Jenkins, J. (2000) The Phonology of English as a Lingua Franca. Oxford: Oxford University Press

Kiczkowiak, M. (2018) Native Speakerism in English Language Teaching: Voices From Poland. Doctoral dissertation.

Kiczkowiak, M. & Lowe, R. J. (2018) Teaching English as a Lingua Franca. Stuttgart: DELTA Publishing

Korzekwa, D., Lorenzo-Trueba, J., Drugman, T. & Kostek, B. (2022) Computer-assisted pronunciation training – Speech synthesis is almost all you need. Speech Communication, 142: 22–33

Levis, J. M. (2018) Intelligibility, Oral Communication, and the Teaching of Pronunciation. Cambridge: Cambridge University Press

Mahdi, H. S. & Al Khateeb, A. A. (2019) The effectiveness of computer-assisted pronunciation training: A meta-analysis. Review of Education, 7 (3): 733–753

Patsko, L. & Simpson, K. (2019) How to Write Pronunciation Activities. ELT Teacher 2 Writer. https://eltteacher2writer.co.uk/our-books/how-to-write-pronunciation-activities/

Ramjattan, V. A. (2019) Racializing the problem of and solution to foreign accent in business. Applied Linguistics Review, 13 (4). https://doi.org/10.1515/applirev2019-0058

Rogerson-Revell, P. M. (2021) Computer-Assisted Pronunciation Training (CAPT): Current Issues and Future Directions. RELC Journal, 52 (1): 189–205. https://doi.org/10.1177/0033688220977406

Walker, R. (2010) Teaching the Pronunciation of English as a Lingua Franca. Oxford: Oxford University Press

Who can tell where a blog post might lead? Over six years ago I wrote about adaptive professional development for teachers. I imagined the possibility of bite-sized, personalized CPD material. Now my vision is becoming real.

For the last two years, I have been working with a start-up that has been using AI to generate text using GPT-3 large language models. GPT-3 has recently been in the news because of the phenomenal success of the newly released ChatGPT. The technology certainly has a wow factor, but it has been around for a while now. ChatGPT can generate texts of various genres on any topic (with a few exceptions like current affairs) and the results are impressive. Imagine, then, how much more impressive the results can be when the kind of text is limited by genre and topic, allowing the software to be trained much more reliably.

This is what we have been working on. We took as our training corpus a huge collection of English language teaching teacher development texts that we could access online: blogs from all the major publishers, personal blogs, transcriptions from recorded conference presentations and webinars, magazine articles directed at teachers, along with books from publishers such as DELTA and Pavilion ELT, etc. We identified topics that seemed to be of current interest and asked our AI to generate blog posts. Later, we were able to get suggestions of topics from the software itself.

We then contacted a number of teachers and trainers who contribute to the publishers’ blogs and contracted them, first, to act as human trainers for the software, and, second, to agree to their names being used as the ‘authors’ of the blog posts we generated. In one or two cases, the authors thought that they had actually written the posts themselves! Next we submitted these posts to the marketing departments of the publishers (who run the blogs). Over twenty were submitted in this way, including:

  • What do teachers need to know about teaching 21st century skills in the English classroom?
  • 5 top ways of improving the well-being of English teachers
  • Teaching leadership skills in the primary English classroom
  • How can we promote eco-literacy in the English classroom?
  • My 10 favourite apps for English language learners

We couldn’t, of course, tell the companies that AI had been used to write the copy, but once we were sure that nobody had ever spotted the true authorship of this material, we were ready to move to the next stage of the project. We approached the marketing executives of two publishers and showed how we could generate teacher development material at a fraction of the current cost and in a fraction of the time. Partnerships were quickly signed.

Blog posts were just the beginning. We knew that we could use the same technology to produce webinar scripts, using learning design insights to optimise the webinars. The challenge we faced was that webinars need a presenter. We experimented with using animations, but feedback indicated that participants like to see a face. This is eminently doable, using our contracted authors and deep fake technology, but costs are still prohibitive. It remains cheaper and easier to use our authors delivering the scripts we have generated. This will no doubt change before too long.

The next obvious step was to personalize the development material. Large publishers collect huge amounts of data about visitors to their sites using embedded pixels. It is also relatively cheap and easy to triangulate this data with information from the customer databases and from activity on social media (especially Facebook). We know what kinds of classes people teach, and we know which aspects of teacher development they are interested in.

Publishers have long been interested in personalizing marketing material, and the possibility of extending this to the delivery of real development content is clearly exciting. (See below an email I received this week from the good folks at OUP marketing.)

Earlier this year one of our publishing partners began sending links to personalized materials of the kind we were able to produce with AI. The experiment was such a success that we have already taken it one stage further.

One of the most important clients of our main publishing partner employs hundreds of teachers to deliver online English classes using courseware that has been tailored to the needs of the institution. With so many freelance teachers working for them, along with high turnover of staff, there is inevitably a pressing need for teacher training to ensure optimal delivery. Since the classes are all online, it is possible to capture precisely what is going on. Using an AI-driven tool that was inspired by the Visible Classroom app (informed by the work of John Hattie), we can identify the developmental needs of the teachers. What kinds of activities are they using? How well do they exploit the functionalities of the platform? What can be said about the quality of their teacher talk? We combine this data with everything else and our proprietary algorithms determine what kinds of training materials each teacher receives. It doesn’t stop there. We can also now evaluate the effectiveness of these materials by analysing the learning outcomes of the students.

Teaching efficacy can be massively increased, whilst the training budget of the institution can be slashed. If all goes well, there will be no further need for teacher trainers at all. We won’t be stopping there. If results such as these can be achieved in teacher training, there’s no reason why the same technology cannot be leveraged for the teaching itself. Most of our partner’s teaching and testing materials are now quickly and very cheaply generated using GPT-3.5. If you want to see how this is done, check out the work of edugo.AI (a free trial is available), which can generate gapfills and comprehension test questions in a flash. As for replacing the teachers, we’re getting there. For the time being, though, it’s more cost-effective to use freelancers and to train them up.

You could be forgiven for wondering what, precisely, digital literacies are. In the first edition of ‘Digital Literacies’, Dudeney et al. (2013:2) define the term as ‘the individual and social skills needed to effectively interpret, manage, share and create meaning in the growing range of digital communication channels’. This is pretty broad, and would seem to encompass more or less anything that people do with digital technology, including the advanced arts of trolling and scamming. Nine years later, in the new edition of this book (Pegrum et al., 2022:5), the authors modify their definition a little: ‘the individual and social skills needed to effectively manage meaning in an era of digitally networked, often blended, communication’. This is broader still. In the intervening years there has been a massive proliferation of ways of describing specific digital literacies, as well as more frameworks of digital literacies than anyone (bar people writing about the topic) could possibly want. Of course, there is much in common between all these descriptive and taxonomic efforts, but there is also much that differs. What, precisely, ‘digital literacies’ means changes over both time and space. It carries different meanings in Australia, Sweden and Argentina, and, perhaps, it only makes sense to have a local conceptualisation of the term (Pangrazio et al., 2020). By the time you have figured out what these differences are, things will have moved on. Staying literate about ‘digital literacies’ is an ongoing task.

What, precisely, ‘digital literacies’ are only really matters when we are told that it is vital to teach them. It’s easy to agree that digital skills are quite handy in this networked world, but, unless we have a very clear idea of what they are, it’s not going to be easy to know which ones to teach or how to teach them. Before we get caught up in the practical pedagogical details, it might be useful to address three big questions:

  • How useful is it to talk about digital literacies?
  • Can digital literacies be taught?
  • Should digital literacies be taught as part of the English language curriculum?

How useful is it to talk about digital literacies?

Let’s take one example of a framework: the Cambridge Life Competencies Framework (CLC). The CLC lists six key competencies (creative thinking, critical thinking, communication, collaboration, learning to learn, and social responsibilities). Underpinning and informing these six competencies are three ‘foundation layers’: ‘emotional development’, ‘discipline knowledge’ and ‘digital literacy’. Digital literacy is broken down as follows:

It’s a curious amalgam of relatively straightforward skills and much more complex combinations of skills with knowledge, attitudes and dispositions. In the former category (see the first box in the chart above), we would find things like the ability to use tags, hashtags, search engines, and filters. In the latter (see the second box in the chart above), we would find things like the ability to recognise fake news or to understand how and why personally targeted advertising is such a money-spinner.

Another example, this one, from Pegrum et al (2018), is more complex and significantly more detailed. On the more technical side, we see references to the ability to navigate within multimodal gaming, VR and AR environments, or the ability to write and modify computer code. And for more complex combinations of skills, knowledge, attitudes and dispositions, we have things like the ability to develop a reputation and exert influence within online networks, or ‘the ability to exert a positive influence (online) by adopting a stance of intellectual humility and ethical pluralism’.

This is all a far cry from only seven years ago, when ‘digital literacies’ were described as ‘the practices of reading, writing and communication made possible by digital media’ (Hafner et al., 2015) and the kinds of skills required were almost all closely connected to reading and writing. The digital world has changed, and so has our understanding of what it means to operate effectively within that world. Perhaps it is time, too, to change our terminology: ‘literacies’ is still with us, but it seems almost wilfully misleading. ‘Abilities’ or ‘competencies’ would seem to be much more appropriate terms to refer to what we are discussing in these frameworks, but ‘abilities’ probably isn’t sciency enough, and ‘competencies’ has already been done to death.

The problem with lumping all these things together under a single superordinate is that it seems to imply that there is some sort of equivalence between all the subordinate labels, that there is some categorial similarity. Pegrum et al (2022) acknowledge that there are differences of complexity between these ‘literacies’ – they use a star system to indicate degree of complexity. But I think that there is no sufficiently strong reason to put some of these things together in the first place. Dudeney et al. (2013: 14) note that some of their literacies are ‘macroliteracies’ – ‘in other words, a literacy which draws on numerous other literacies – and involves linguistic, multimedia, spatial, kinaesthetic and other skills’. Why, then, call them ‘literacies’ at all? The only connection between knowing how to generate a LOLcat post and knowing how to ‘remain non-judgemental towards new perspectives, multiple viewpoints, and shifting contexts’ is that both are related to our use of digital technology. But since there is very little in our lives that is not now related in some way to digital technology, is this good enough reason to bracket these two abilities together?

Pegrum et al (2022) found that they needed to expand their list of digital literacies in the new edition of their book, and they will no doubt need to do so again nine years from now. But is the fact that something could be included in a taxonomy a good reason for actually including it? ‘Code literacy’, for example, seems rather less urgent now than it did nine years ago. I have never been convinced by gaming literacy or remix literacy. Are these really worth listing alongside the others in the table? Even if they are, nobody (including Pegrum et al.) would disagree that some prioritisation is necessary. However, when we refer to ‘digital literacies’ and how vital it is to teach them, we typically don’t specify a particular literacy and not another. We risk committing the logical error of assuming that something that holds true for a group or category, also holds true for all the members of the group or subordinates of the category.

Can digital literacies be taught?

There is clearly no particular problem in teaching and learning some digital literacies, especially the more technical ones. Unfortunately, the more specific and technical we are (e.g. when we mention a particular digital tool), the more likely it is that its shelf-life will be limited. Hardware comes and goes (I haven’t had to format a floppy disc for a while), as do apps and software. To the risk of wasting time teaching a skill that may soon be worthless, we may add the risk of not including literacies that have not yet found their way into the taxonomies. Examples include knowing how to avoid plagiarism detectors (as opposed to avoiding plagiarism) or how to use GPT-3 (and soon GPT-4) text generators. Handy for students.

The choice of digital tools is crucial when one of the key pieces of advice for teaching digital literacy is to integrate the use of digital tools into lessons (e.g. in the Cambridge Life Competencies Digital Literacy booklet). This advice skates over the key questions of which tool, and which literacy is being targeted (and why). Watching TikTok videos, using Mentimeter in class, or having a go with a VR headset may serve various educational purposes, but it would be stretching a point to argue that these activities will do much for anyone’s digital literacy. Encouraging teachers to integrate technology into their lessons (government policy in some countries) makes absolutely no sense unless the desired outcome – digital literacy – is precisely defined in advance. It rarely is. See here for further discussion.

Encouragement to include technology, any old technology, in lessons is almost never grounded in claims that a particular technical skill (e.g. navigating TikTok) has any pressing value. Rather, the justification usually comes from reference to what might be called ‘higher-order’ skills, like critical thinking: what I referred to earlier as curious amalgams of relatively straightforward skills and much more complex combinations of skills with knowledge, attitudes and dispositions.

The problem here is that it remains very uncertain whether things like ethical literacy or critical digital literacy are likely to be learnt through instruction. They can certainly be practised, and Pegrum et al (2022) have some very nice activities. The aims of these activities are typically described using a vague ‘raise awareness of’ formula, but whether they will lead, for example, to any improved ability ‘to exert a positive influence (online) by adopting a stance of intellectual humility and ethical pluralism’ is debatable. Much as the world might be a better place if classroom activities of this kind did actually work, research evidence is sadly lacking. For a more detailed look at the problems of trying to teach critical digital literacy / media information literacy, see here.

Should digital literacies be part of the English language curriculum?

So, is it ‘crucial for language teaching to […] encompass the digital literacies which are increasingly central to learners’ […] lives’ (Pegrum et al, 2022)? Well, it all depends on which digital literacies we are talking about. It also depends on what kind of learners in what kinds of learning contexts. And it depends on both institutional objectives and the personal objectives of the learners themselves. So, ‘crucial’, no, but we’ll put the choice of adjective down to rhetorical flourish.

Is it true that ‘digital literacies are as important to language learning as […] reading and writing skills […]’ (Pegrum et al., 2022: 1)? Clearly not. Since it’s hard to imagine any kind of digital literacy without some reading skills preceding it, the claim that they are comparable in importance is also best understood as rhetorical flourish.

A modicum of critical (digital) literacy is helpful when it comes to reading literature on digital literacies.

References

Dudeney, G., Hockly, N. & Pegrum, M. (2013) Digital Literacies. Harlow: Pearson Education

Hafner, C. A., Chik, A. & Jones, R. H. (2015) Digital Literacies and Language Learning. Language Learning & Technology, 19 (3): 1–7

Pangrazio, L., Godhe, A.-L., & Ledesma, A. G. L. (2020) What is digital literacy? A comparative review of publications across three language contexts. E-Learning and Digital Media, 17(6), 442–459. https://doi.org/10.1177/204275302094629

Pegrum, M., Hockly, N. & Dudeney, G. (2022) Digital Literacies 2nd Edition. New York: Routledge

Pegrum, M., Dudeney, G. & Hockly, N. (2018) Digital Literacies Revisited. The European Journal of Applied Linguistics and TEFL, 7 (2): 3 – 24

The paragraph above was written by an AI-powered text generator called neuroflash (https://app.neuro-flash.com/home), which I told to produce a text on the topic ‘AI and education’. As texts on this topic go, it is both remarkable (in that it was not written by a human) and entirely unremarkable (in that it is practically indistinguishable from hundreds of human-written texts on the same subject). Neuroflash uses a neural network technology called GPT-3 – ‘a large language model’ – and ‘one of the most interesting and important AI systems ever produced’ (Chalmers, 2020). Basically, it generates text by predicting sequences of words based on huge databases. The nature of the paragraph above tells you all you need to know about the kinds of content that are usually found in texts about AI and education.

Not dissimilar from the neuroflash paragraph, educational commentary on uses of AI is characterised by (1) descriptions of AI tools already in use (e.g. speech recognition and machine translation) and (2) vague predictions which invariably refer to ‘the promise of personalised learning, adjusting what we give learners according to what they need to learn and keeping them motivated by giving them content that is of interest to them’ (Hughes, 2022). The question of what precisely will be personalised goes unanswered: providing learners with optimal sets of resources (but which ones?), providing counselling services, recommendations or feedback for learners and teachers (but of what kind?) (Luckin, 2022). Nearly four years ago (https://adaptivelearninginelt.wordpress.com/2018/08/13/ai-and-language-teaching/), I wrote about the reasons why these questions remain unanswered. The short answer is that AI in language learning requires a ‘domain knowledge model’, which specifies what is to be learnt and includes an analysis of the steps that must be taken to reach that learning goal. Such a model is lacking in SLA, or, at least, there is no general agreement on what it is. Worse, the models that are most commonly adopted in AI-driven programs (e.g. the deliberate learning of discrete items of grammar and vocabulary) are not supported by either current theory or research (see, for example, VanPatten & Smith, 2022).
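The ‘predicting sequences of words’ description can be made concrete with a toy bigram model. This is a drastic simplification (GPT-3 is a transformer with billions of parameters, not a frequency table), and the training sentence below is invented, but the generate-by-sampling-the-next-word loop is the same in spirit.

```python
import random
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """For each word, count which words follow it and how often."""
    follows = defaultdict(Counter)
    words = corpus.split()
    for w1, w2 in zip(words, words[1:]):
        follows[w1][w2] += 1
    return follows

def generate(follows, start, length=8, seed=0):
    """Generate text by repeatedly sampling a likely next word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        counts = follows.get(out[-1])
        if not counts:  # dead end: no observed continuation
            break
        words, weights = zip(*counts.items())
        out.append(rng.choices(words, weights=weights)[0])
    return " ".join(out)

model = train_bigrams("the future of learning is mobile and the future is bright")
print(generate(model, "the"))
```

Every word the toy model emits is one it has seen following the previous word in its ‘database’; scale the table up to hundreds of billions of words and soften it into learned parameters, and you have something closer to the real thing.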

In 2021, the IATEFL Learning Technologies SIG organised an event dedicated to AI in education. Unsurprisingly, there was a fair amount of input on AI in assessment, but my interest is in how AI might revolutionize how we learn and teach, not how we assess. What concrete examples did speakers provide?

Rose Luckin, the best-known British expert on AI in education, kicked things off by mentioning three tools. One of these, Carnegie Learning, is a digital language course that looks very much like any of the ELT courses on offer from the big publishers – a fully blendable, multimedia (e.g. flashcards and videos) synthetic syllabus. This ‘blended learning solution’ is personalizable, since ‘no two students learn alike’, and, it claims, will develop a ‘lifelong love of language’. It appears to be premised on the ideas that language learning is a matter of optimizing the delivery of ‘content’, that this content consists primarily of discrete items, and that input equates to uptake. Been there, done that.

A second was Alelo Enskill (https://www.alelo.com/about-us/), a chatbot / avatar roleplay program, first developed by the US military to teach Iraqi Arabic and aspects of Iraqi culture to Marines. I looked at the limitations of chatbot technology for language learning here: https://adaptivelearninginelt.wordpress.com/2016/12/01/chatbots/. The third tool mentioned by Luckin was Duolingo. Enough said.

Another speaker at this event was the founder and CEO of Edugo.AI (https://www.edugo.ai/), an AI-powered LMS which uses GPT-3. It allows schools to ‘create and upload on the platform any kind of language material (audio, video, text…). Our AI algorithms process and convert it in gamified exercises, which engage different parts of the brain, and gets students eager to practice’. Does this speaker know anything about gamification (for a quick read, I’d recommend Paul Driver (2012)) or neuroscience, I wonder? What, for that matter, does he know about language learning? Apparently, ‘language is not just about words, language is about sentences’ (Tomasello, 2022). Hmm, this doesn’t inspire confidence.

When you look at current uses of AI in language learning, there is very little (outside of testing, translation and speech ↔ text applications) that could justify enthusiastic claims that AI has any great educational potential. Skepticism seems to me a more reasonable and scientific response: de omnibus dubitandum.

Education is not the only field where AI has been talked up. When Covid hit us, AI was seen as the game-changing technology. It ‘could be deployed to make predictions, enhance efficiencies, and free up staff through automation; it could help rapidly process vast amounts of information and make lifesaving decisions’ (Chakravorti, 2022). The contribution of AI to the development of vaccines has been huge, but its role in diagnosing and triaging patients has been another matter altogether. Hundreds of predictive tools were developed: ‘none of them made a real difference, and some were potentially harmful’ (Heaven, 2021). Expectations were unrealistic and led to the deployment of tools before they were properly trialled. Thirty months down the line, a much more sober understanding of the potential of AI has emerged. Here, then, are the main lessons that have been learnt (I draw particularly on Engler, 2020, and Chakravorti, 2022) that are also relevant to education and language learning.

  • Anticipate what could go wrong before anticipating what might go right. Engler (2020) writes that ‘a poorly kept secret of AI practitioners is that 96% accuracy is suspiciously high for any machine learning problem’. In language learning, it is highly unlikely that personalized recommendations will ever reach anything even approaching this level of reliability. What are the implications for individual learners whose learning is inappropriately personalised?
  • We also know that a significant problem with AI systems is bias (O’Neil, 2016). There is a well-documented history of discriminatory outcomes because of people’s race, gender, social class or disability profile. Bias needs to be addressed proactively, not reactively.
  • Acknowledge from the outset that, for AI to work, huge amounts of data related to prior outcomes will be needed. In the cases of both Covid and language learning, much of this data will be personal. This raises immediate questions of privacy and consent, especially for learners who are children. Don’t minimise this issue.
  • Common by-products of AI use include insecure data storage, digital surveillance and consolidation of market power (Engler, 2020). Since such issues are so pervasive, assume they will be a problem and will need to be considered when evaluating the potential of any piece of AI edtech.
  • Find out what the subject matter experts have to say: ‘AI is only helpful when applied judiciously by subject-matter experts—people with long-standing experience with the problem that they are trying to solve […] Effectively predicting a badly defined problem is worse than doing nothing at all’ (Engler, 2020). It may be that the problem the AI is trying to help solve is the wrong problem and not worth solving at all (e.g. more efficient ‘delivery’ of explicit grammatical knowledge, gamifying gapfills).

To bring this post to an end, I returned to the AI / GTP-3 driven neuroflash program. This time, I asked it to write a longer piece entitled ‘The dangers of artificial intelligence in education: what could go wrong?’. Here’s what it came up with:

There are a few potential dangers of artificial intelligence in education. One is that AI could be used to replace human teachers. This could lead to a decline in the quality of education, as machines are not yet capable of replicating the full range of human emotions and social interactions that are necessary for teaching. Additionally, AI could be used to create biased or “racist” educational materials, as has already happened with some commercial software products. Finally, AI could be used to track and monitor students in ways that invade their privacy, or that could be used to manipulate their behavior.

All of these dangers are real and should be taken seriously. However, it’s important to remember that AI is still a new technology, and its impact on education is not yet fully understood. As such, we should tread cautiously when implementing AI into our educational systems.

When it comes to education, artificial intelligence could be a double-edged sword. While AI has the potential to revolutionize the field of education and help students learn in more efficient ways, there are also dangers that need to be considered.

One of the dangers of AI in education is that it could lead to a more standardized and homogenized form of learning. If all students are relying on AI-based learning tools, they may all end up learning the same things in the same way. This could lead to a loss of creativity and individuality in education.

Another danger is that AI-based learning tools could become too reliant on data and algorithms. This could lead to a situation where students are simply being taught what the machines think they should know, rather than being allowed to explore and discover new things for themselves.

Finally, there is the danger that AI could eventually replace human teachers altogether. While this may seem like a far-fetched scenario, it is not outside the realm of possibility. If AI-based learning tools become sophisticated enough, they may one day be able to do the job of teaching just as well as humans can.

These are just some of the dangers that need to be considered when it comes to artificial intelligence in education. While AI has the potential to revolutionize education, we need to be careful about how we implement it so that we don’t end up doing more harm than good.

References

Chakravorti, B. (2022) Why AI Failed to Live Up to Its Potential During the Pandemic. Harvard Business Review, March 17, 2022. https://hbr.org/2022/03/why-ai-failed-to-live-up-to-its-potential-during-the-pandemic

Chalmers, D. (2020) GPT-3 and General Intelligence. In Weinberg, J. (ed.) Philosophers On GPT-3 (updated with replies by GPT-3). Daily Nous, July 30, 2020. https://dailynous.com/2020/07/30/philosophers-gpt-3/#chalmers

Driver, P. (2012) The Irony of Gamification. In English Digital Magazine 3, British Council Portugal, pp. 21–24. http://digitaldebris.info/digital-debris/2011/12/31/the-irony-of-gamification-written-for-ied-magazine.html

Engler, A. (2020) A guide to healthy skepticism of artificial intelligence and coronavirus. Washington D.C.: Brookings Institution https://www.brookings.edu/research/a-guide-to-healthy-skepticism-of-artificial-intelligence-and-coronavirus/

Heaven, W. D. (2021) Hundreds of AI tools have been built to catch covid. None of them helped. MIT Technology Review, July 30, 2021. https://www.technologyreview.com/2021/07/30/1030329/machine-learning-ai-failed-covid-hospital-diagnosis-pandemic/

Hughes, G. (2022) What lies at the end of the AI rainbow? IATEFL LTSIG Newsletter Issue April 2022

Luckin, R. (2022) The implications of AI for language learning and teaching. IATEFL LTSIG Newsletter Issue April 2022

O’Neil, C. (2016) Weapons of Math Destruction. London: Allen Lane

Tomasello, G. (2022) Next Generation of AI-Language Education Software:NLP & Language Modules (GPT3). IATEFL LTSIG Newsletter Issue April 2022

VanPatten, B. & Smith, M. (2022) Explicit and Implicit Learning in Second Language Acquisition. Cambridge: Cambridge University Press

There’s an aspect of language learning which everyone agrees is terribly important, but no one can quite agree on what to call it. I’m talking about combinations of words, including fixed expressions, collocations, phrasal verbs and idioms. These combinations are relatively fixed, and cannot always be predicted from their elements or generated by grammar rules (Laufer, 2022). They are sometimes referred to as formulaic sequences, formulaic expressions, lexical bundles or lexical chunks, among other terms for multiword items. They matter to English language learners because a large part of English consists of such combinations. Hill (2001) suggests this may be up to 70%. More conservative estimates report 58.6% of writing and 52.3% of speech (Erman & Warren, 2000). Some of these combinations (e.g. ‘of course’, ‘at least’) are so common that they fall into lists of the 1000 most frequent lexical items in the language.

By virtue of their ubiquity and frequency, they are important both for comprehension of reading and listening texts and for the speed at which texts can be processed. This is because knowledge of these combinations ‘makes discourse relatively predictable’ (Boers, 2020). Similarly, such knowledge can significantly contribute to spoken fluency because combinations ‘can be retrieved from memory as prefabricated units rather than being assembled at the time of speaking’ (Boers, 2020).

So far, so good, but from here on, the waters get a little muddier. Given their importance, what is the best way for a learner to acquire a decent stock of them? Are they best acquired through incidental learning (through meaning-focused reading and listening) or deliberate learning (e.g. with focused exercises or flashcards)? If the former, how on earth can we help learners to make sure that they get exposure to enough combinations enough times? If the latter, what kind of practice works best and, most importantly, which combinations should be selected? With, at the very least, many tens of thousands of such combinations, life is too short to learn them all in a deliberate fashion. Some sort of triage is necessary, but how should we go about this? Frequency of occurrence would be one obvious criterion, but this merely raises the question of what kind of database should be used to calculate frequency – the spoken discourse of children will reveal very different patterns from the written discourse of, say, applied linguists. On top of that, we cannot avoid consideration of the learners’ reasons for learning the language. If, as is statistically most probable, they are learning English to use as a lingua franca, how important or relevant is it to learn combinations that are frequent, idiomatic and comprehensible in native-speaker cultures, but may be rare and opaque in many English as a Lingua Franca contexts?

There are few, if any, answers to these big questions. Research (e.g. Pellicer-Sánchez, 2020) can give us pointers, but the bottom line is that we are left with a series of semi-informed options (see O’Keeffe et al., 2007: 58 – 99). So, when an approach comes along that claims to use software to facilitate the learning of English formulaic expressions (Lin, 2022), I am intrigued, to say the least.

The program is, slightly misleadingly, called IdiomsTube (https://www.idiomstube.com). A more appropriate title would have been IdiomaticityTube (as it focuses on ‘speech formulae, proverbs, sayings, similes, binomials, collocations, and so on’), but I guess ‘idioms’ is a more idiomatic word than ‘idiomaticity’. IdiomsTube allows learners to choose any English-captioned video from YouTube, which is then automatically analysed to identify from two to six formulaic expressions that are presented to the learner as learning objects. Learners are shown these items; the items are hyperlinked to (good) dictionary entries; learners watch the video and are then presented with a small variety of practice tasks. The system recommends particular videos, based on an automated analysis of their difficulty (speech rate and a frequency count of the lexical items they include) and on recommendations from previous users. The system is gamified and, for class use, teachers can track learner progress.

When an article by the program’s developer, Phoebe Lin, (in my view, more of an advertising piece than an academic one) came out in the ReCALL journal, she tweeted that she’d love feedback. I reached out but didn’t hear back. My response here is partly an evaluation of Dr Lin’s program, partly a reflection on how far technology can go in solving some of the knotty problems of language learning.

Incidental and deliberate learning

Researchers have long been interested in looking for ways of making incidental learning of lexical items more likely to happen (Boers, 2021: 39 ff.); that is, ways of making it more likely that learners will notice lexical items while focusing on the content of a text. Most obviously, texts can be selected, written or modified so they contain multiple instances of a particular item (‘input flooding’). Alternatively, texts can be typographically enhanced so that particular items are highlighted in some way. But these approaches are not possible when learners are given the freedom to select any video from YouTube and when the written presentations are in the form of YouTube captions. Instead, IdiomsTube presents the items before the learner watches the video. Learners are, in effect, told to watch out for these items in advance. They are also given practice tasks after viewing.
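
As an aside, the typographic enhancement technique is simple enough to sketch in a few lines. This is a generic illustration only, not anything IdiomsTube does (as noted, it cannot alter YouTube captions), and the function name and example items are my own invention:

```python
import re

# A minimal sketch of typographic enhancement: wrap pre-selected target
# items in asterisks before a text is shown to the learner.
# Purely illustrative; real caption rendering is not this simple.
def enhance(text, targets):
    for t in targets:
        # A function replacement preserves the original capitalisation.
        text = re.sub(re.escape(t), lambda m: f"*{m.group(0)}*",
                      text, flags=re.IGNORECASE)
    return text

print(enhance("By the way, this is a piece of cake.",
              ["by the way", "a piece of cake"]))
```

The point of enhancement is purely to nudge noticing: the text itself is unchanged, so the learner’s attention can stay on content.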

The distinction between incidental and deliberate vocabulary learning is not always crystal-clear. In this case, it seems fairly clear that the approach is more slanted to deliberate learning, even though the selection of video by the learner is determined by a focus on content. Whether this works or not will depend on (1) the level-appropriacy of the videos that the learner watches, (2) the effectiveness of the program in recommending / identifying appropriate videos, (3) the ability of the program to identify appropriate formulaic expressions as learning targets in each video, and (4) the ability of the program to generate appropriate practice of these items.

Evaluating the level of YouTube videos

What makes a video easy or hard to understand? IdiomsTube attempts this analytical task by calculating (1) the speed of the speech and (2) the difficulty of the lexis, as determined by the corpus frequency of these items. This gives a score out of five for each category (speed and difficulty). I looked at fifteen videos, all of which were recommended by the program. Most were scored at Speed #3 and Difficulty #1. One, ‘Bruno Mars Carpool Karaoke’, had a speed of #2 and a difficulty of #1 (i.e. one of the easiest). The video is 15 minutes long. Here’s an extract from the first 90 seconds:

Let’s set this party off right, put yo’ pinky rings up to the moon, twenty four karat magic in the air, head to toe soul player, second verse for the hustlas, gangstas, bad bitches and ya ugly ass friends, I gotta show how a pimp get it in, and they waking up the rocket why you mad

Whoa! Without going into details, it’s clear that something has gone seriously wrong. Evaluating the difficulty of language, especially spoken language, is extremely complex (not least because there’s no objective measure of such a thing). It’s not completely dissimilar to the challenge of evaluating the accuracy, appropriacy and level of sophistication of a learner’s spoken language, and we’re a long way from being able to do that with any acceptable level of reliability. At least, we’re a long, long way from being able to do it well when there are no constraints on the kind of text (which is the case when taking the whole of YouTube as a potential source). If we significantly restrict topic and text type, we can train software to do a much better job, but this will require human input: the task cannot be fully automated.

The length of these 15 videos ranged from 3.02 to 29.27 minutes, with the mean length being about 10 minutes, and the median 8.32 minutes. Too damn long.
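
For what it’s worth, the two measures described above are easy enough to sketch in code. Everything here is my own reconstruction: the cut-off bands and the toy high-frequency word list are invented, since the program’s actual parameters are not published (Lin, 2022):

```python
# Hypothetical reconstruction of IdiomsTube-style video grading: a speed
# band from words-per-minute, and a difficulty band from the share of
# caption words falling outside a high-frequency list.
# All cut-offs are invented for illustration.

def speed_score(caption_words, duration_minutes):
    """Map words-per-minute onto a 1-5 band (1 = slowest)."""
    wpm = caption_words / duration_minutes
    return 1 + sum(wpm > cutoff for cutoff in (100, 130, 160, 190))

def difficulty_score(words, high_freq):
    """Map the proportion of off-list words onto a 1-5 band (1 = easiest)."""
    rare = sum(w.lower() not in high_freq for w in words) / len(words)
    return 1 + sum(rare > cutoff for cutoff in (0.02, 0.05, 0.10, 0.20))

high_freq = {"put", "up", "to", "the"}            # toy frequency list
extract = "put yo pinky rings up to the moon".split()
print(speed_score(len(extract), 0.05), difficulty_score(extract, high_freq))
```

Notably, even a crude off-list count like this rates the slang-heavy karaoke lyrics as difficult, which makes the program’s Difficulty #1 rating all the harder to explain.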

Selecting appropriate learning items

The automatic identification of formulaic language in a text presents many challenges: it is, as O’Keeffe et al. (2007: 82) note, only partially possible. A starting point is usually a list, and IdiomsTube begins with a list of 53,635 items compiled by the developer (Lin, 2022) over a number of years. The software has to match word combinations in the text to items in the list, and has to recognise variant forms. Formulaic language cannot always be identified just by matching to lists of forms: a piece of cake may just be a piece of cake, and therefore not a piece of cake to analyse. 53,635 items may sound like a lot, but a common estimate of the number of idioms in English is 25,000. The number of multiword units is much, much higher. 53,635 items is not going to be enough for any reliable capture.
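
To see why list-matching is only partially possible, consider a toy matcher. The patterns and the open-slot convention (‘SOMETHING’) are my own invention for illustration; a real system would need lemmatization and POS tagging to catch variant forms like ‘gets in the way’, and even then it could not tell an idiomatic piece of cake from a literal one:

```python
# A toy matcher for formulaic sequences, assuming a (hypothetical) pattern
# list in which "SOMETHING" marks an open slot. Matching is purely
# form-based: no lemmatization, no disambiguation of literal vs idiomatic.

PATTERNS = ["a piece of cake", "get in the way of SOMETHING", "by the way"]

def find_formulaic(text, patterns=PATTERNS):
    tokens = text.lower().split()
    hits = []
    for pat in patterns:
        p = pat.lower().split()
        for i in range(len(tokens) - len(p) + 1):
            # "something" matches any single token; other words must match.
            if all(pw == "something" or pw == tw
                   for pw, tw in zip(p, tokens[i:i + len(p)])):
                hits.append(pat)
                break
    return hits

print(find_formulaic("learning this is a piece of cake and nothing "
                     "will get in the way of progress by the way"))
```

Even this tiny example shows the trap: the matcher happily reports ‘a piece of cake’ whether the text is about idiomatic ease or about baking.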

Since any given text is likely to contain a lot of formulaic language, the next task is to decide how to select for presentation (i.e. as learning objects) from those identified. The challenge is, as Lin (2022) remarks, both technical and theoretical: how can frequency and learnability be measured? There are no easy answers, and the approach of IdiomsTube is, by its own admission, crude. The algorithm prioritises longer items that contain lower frequency single items, and which have a low frequency of occurrence in a corpus of 40,000 randomly-sampled YouTube videos. The aim is to focus on formulaic language that is ‘more challenging in terms of composition (i.e. longer and made up of more difficult words) and, therefore, may be easier to miss due to their infrequent appearance on YouTube’. My immediate reaction is to suspect that this approach will prioritise items that are not worth the bother of deliberate learning in the first place.
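
A hypothetical sketch of this kind of prioritisation might look as follows. The weighting is entirely invented (Lin (2022) does not publish a formula), but it captures the stated logic: longer items, made up of rarer words, that are themselves infrequent in the corpus score highest:

```python
# Invented scoring in the spirit of the description above: length x
# average word rarity x infrequency of the whole item in the corpus.
# The frequency figures below are made up for illustration.

def priority(item, word_freq, corpus_freq):
    words = item.split()
    rarity = sum(1.0 / (1 + word_freq.get(w, 0)) for w in words) / len(words)
    infrequency = 1.0 / (1 + corpus_freq.get(item, 0))
    return len(words) * rarity * infrequency

word_freq = {"we": 9000, "need": 5000, "to": 9900, "talk": 3000,
             "run": 4000, "out": 8000, "of": 9900, "steam": 150}
corpus_freq = {"we need to talk": 80, "run out of steam": 5}

items = ["we need to talk", "run out of steam"]
ranked = sorted(items, key=lambda i: priority(i, word_freq, corpus_freq),
                reverse=True)
print(ranked[0])
```

Here the metric duly promotes ‘run out of steam’ over ‘we need to talk’, which seems reasonable; but the same logic will also promote items that are rare precisely because nobody has much use for them.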

The proof is in the proverbial pudding, so I looked at the learning items that were offered by my sample of 15 recommended videos. Sadly, IdiomsTube does not even begin to cut the mustard. The rest of this section details why the selection was so unsatisfactory: you may want to skip this and rejoin me at the start of the next section.

  • In total, 85 target items were suggested. Of these, 39 (just under half) were not fixed expressions: they were single items. Some of these single items (e.g. ‘blog’ and ‘password’) would be extremely easy for most learners. Of the others, 5 were opaque idioms (the most prototypical kind of idiom); the rest were collocations and fixed (but transparent) phrases and frames.
  • Some items (e.g. ‘I rest my case’) are limited in terms of the contexts in which they can be appropriately used.
  • Some items did not appear to be idiomatic in any way. ‘We need to talk’ and ‘able to do it’, for example, are strange selections, compared to others in their respective lists. They are also very ‘easy’: if you don’t readily understand items like these, you wouldn’t have a hope in hell of understanding the video.
  • There were a number of errors in the recommended target items. Errors included duplication of items within one set (‘get in the way’ + ‘get in the way of something’), misreading of an item (‘the shortest’ misread as ‘the shorts’), mislabelling of an item (‘vend’ instead of ‘vending machine’), linking to the wrong dictionary entry (e.g. ‘mini’ links to ‘miniskirt’, although in the video ‘mini’ = ‘small’, or, in another video, ‘stoke’ links to ‘stoked’, which is rather different!).
  • The selection of fixed expressions is sometimes very odd. In one video, the following items have been selected: get into an argument, vend, from the ground up, shovel, we need to talk, prefecture. The video contains others which would seem to be better candidates, including ‘You can’t tell’ (which appears twice), ‘in charge of’, ‘way too’ (which also appears twice), and ‘by the way’. It would seem, therefore, that some inappropriate items are selected, whilst other more appropriate ones are omitted.
  • There is a wide variation in the kind of target item. One set, for example, included: in order to do, friction, upcoming, run out of steam, able to do it, notification. Cross-checking with Pearson’s Global Scale of English, we have items ranging from A2 to C2+.

The challenges of automation

IdiomsTube comes unstuck on many levels. It fails to recommend appropriate videos to watch. It fails to suggest appropriate language to learn. It fails to provide appropriate practice. You wouldn’t know this from reading the article by Phoebe Lin in the ReCALL journal, which does, however, suggest that ‘further improvements in the design and functions of IdiomsTube are needed’. Necessary they certainly are, but the interesting question is how possible they are.

My interest in IdiomsTube comes from my own experience in an app project which attempted to do something not completely dissimilar. We wanted to be able to evaluate the idiomaticity of learner-generated language, and this entailed identifying formulaic patterns in a large corpus. We wanted to develop a recommendation engine for learning objects (i.e. the lexical items) by combining measures of frequency and learnability. We wanted to generate tasks to practise collocational patterns, by trawling the corpus for contexts that lent themselves to gapfills. With some of these challenges, we failed. With others, we found a stopgap solution in human curation, writing and editing.

IdiomsTube is interesting, not because of what it tells us about how technology can facilitate language learning. It’s interesting because it tells us about the limits of technological applications to learning, and about the importance of sorting out theoretical challenges before the technical ones. It’s interesting as a case study in how not to go about developing an app: its ‘special enhancement features such as gamification, idiom-of-the-day posts, the IdiomsTube Teacher’s interface and IdiomsTube Facebook and Instagram pages’ are pointless distractions when the key questions have not been resolved. It’s interesting as a case study of something that should not have been published in an academic journal. It’s interesting as a case study of how techno-enthusiasm can blind you to the possibility that some learning challenges do not have solutions that can be automated.

References

Boers, F. (2020) Factors affecting the learning of multiword items. In Webb, S. (Ed.) The Routledge Handbook of Vocabulary Studies. Abingdon: Routledge. pp. 143 – 157

Boers, F. (2021) Evaluating Second Language Vocabulary and Grammar Instruction. Abingdon: Routledge

Erman, B. & Warren, B. (2000) The idiom principle and the open choice principle. Text, 20 (1): pp. 29 – 62

Hill, J. (2001) Revising priorities: from grammatical failure to collocational success. In Lewis, M. (Ed.) Teaching Collocation: further development in the Lexical Approach. Hove: LTP. Pp.47- 69

Laufer, B. (2022) Formulaic sequences and second language learning. In Szudarski, P. & Barclay, S. (Eds.) Vocabulary Theory, Patterning and Teaching. Bristol: Multilingual Matters. pp. 89 – 98

Lin, P. (2022) Developing an intelligent tool for computer-assisted formulaic language learning from YouTube videos. ReCALL 34 (2): pp. 185 – 200

O’Keeffe, A., McCarthy, M. & Carter, R. (2007) From Corpus to Classroom. Cambridge: Cambridge University Press

Pellicer-Sánchez, A. (2020) Learning single words vs. multiword items. In Webb, S. (Ed.) The Routledge Handbook of Vocabulary Studies. Abingdon: Routledge. pp. 158 – 173

In May of last year, EL Gazette had a story entitled ‘Your new English language teacher is a robot’ that was accompanied by a stock photo of a humanoid robot, Pepper (built by SoftBank Robotics). The story was pure clickbait and the picture had nothing to do with it. The article actually concerned a chatbot (EAP Talk), currently under development at a Chinese university, for practising EAP. There’s nothing especially new about chatbots: I last blogged about them in 2016 and interest in them, both research and practical, dates back to the 1970s (Lee et al., 2020). There’s nothing, as far as I can see, especially new about the Chinese EAP chatbot project either. The article concludes by saying that the academic behind the project ‘does not believe that AI can ever replace a human teacher’, but that chatbots might offer some useful benefits.

The benefits are, however, limited – a point that is acknowledged even by chatbot enthusiasts like Lee et al (2020). We are some way from having chatbots that we can actually have meaningful conversations with, but they do appear to have some potential as ‘intelligent tutoring systems’ to provide practice of and feedback on pre-designated bits of language (especially vocabulary and phrases). The main benefit that is usually given, as in the EL Gazette article, is that they are non-judgemental and may, therefore, be appropriate for shy or insecure learners.

Social robots, of the kind used in the illustration for the EL Gazette story, are, of course, not the same as chatbots. Chatbots, like EAP Talk, can be incorporated into all sorts of devices (notably phones, tablets and laptops) and all sorts of applications. If social robots are to be used for language learning, they will clearly need to incorporate chatbots, but in what ways could the other features of robots facilitate language acquisition? Pepper (the robot in the picture) has ‘touch sensors, LEDs and microphones for multimodal interactions’, along with ‘infrared sensors, bumpers, an inertial unit, 2D and 3D cameras, and sonars for omnidirectional and autonomous navigation’. How could these features help language acquisition?

Lee and Lee (2022) attempt to provide an answer to this question. Here’s what they have come up with:

By virtue of their physical embodiment, social robots have been suggested to provide language learners with direct and physical interactions, which is considered one of the basic ingredients for language learning. In addition, as social robots are generally humanoids or anthropomorphized animal shapes, they have been valued for their ability to serve as familiar conversational partners, having potential to lower the affective filter of language learners.

Is there any research evidence to back up these claims? The short answer is no. Motivation and engagement may sometimes be positively impacted, but we can’t say any more than that. As far as learning is concerned, Lee and Lee (2022: 121) write that ‘involving social robots led to statistically similar or even higher [English language learning] outcomes compared with traditional ELT contexts (i.e. no social robot)’. In other words, social robots did not, on the whole, have a negative impact on learning outcomes. Hardly grounds for wild enthusiasm … Still, Lee and Lee, in the next line, refer to the ‘positive effectiveness of social robots in English teaching’ before proceeding to enumerate the ways in which these robots could be used in English language learning. Doesn’t ELT Journal have editors to pick up on this kind of thing?

So, how could these robots be used? Lee and Lee suggest (for younger learners) one-on-one vocabulary tutoring, dialogue practice, more vocabulary teaching, and personalized feedback. That’s it. It’s worth noting that all of these functions could equally well be carried out by chatbots as by social robots.

Lee and Lee discuss and describe the social robot, NAO6, also built by SoftBank Robotics. It’s a smaller and cheaper cousin of the Pepper robot that illustrates the EL Gazette article. Among Lee and Lee’s reasons for using social robots is that they ‘have become more accessible due to ever-lower costs’: NAO6 costs around £350 a month to rent. Buying it outright is also an option. Eduporium (‘Empowering the future with technology’) has one on offer for $12,990.00. According to the blurb, it helps ‘teach coding, brings literature to life, enhances special education, and allows for training simulations. Plus, its educational solutions include an intuitive interface, remote learning, and various applications for accessibility!’

It’s easy enough to understand why EL Gazette uses clickbait from time to time. I’m less clear about why ELT Journal would print this kind of nonsense. According to Lee and Lee, further research into social robots ‘would initiate a new era of language learning’ in which the robots will become ‘an important addition to the ELT arsenal’. Yeah, right …

References

Lee, H. & Lee, J. H. (2022) Social robots for English language teaching. ELT Journal 76 (1): 119 – 124

Lee, J. H., Yang, H., Shin, D. & Kim, H. (2020) Chatbots. ELT Journal 74 (3): 338 – 344

The pandemic has affected all learners, but the more vulnerable the learner, the harder they have been hit. The evidence is very clear that Covid and the response of authorities to it have, in the words of UNESCO, ‘increased inequalities and exacerbated a pre-existing education crisis’. Learning poverty (a term coined by UNESCO and the World Bank), which refers to the inability to read and understand a simple text by the age of 10, is just one way of looking at these inequalities. Before the pandemic, 53% of children in low and middle income countries (and 9% in high income countries) were living in learning poverty. According to the World Bank (Azevedo et al., 2021), the pandemic will amplify this crisis, with the figure rising to somewhere between 63% and 70%. The fear is that the recovery from Covid may be ‘similarly inequitable and that the effects of COVID-19 will be long-lasting’ (ibid.).

Inequity was not, of course, the only problem that educational systems faced before the pandemic. Since the turn of the millennium, it has been common to talk about ‘reimagining education’, and use of this phrase peaked in the summer of 2020. Leading the discoursal charge was Andreas Schleicher, head of education at the OECD, who saw the pandemic as ‘a great moment’ for education, since ‘the current wave of school closures offers an opportunity for experimentation and for envisioning new models of education’. Schleicher’s reimagining involves a closely intertwined privatization (by ending state monopolies) and digitalization of education (see this post for more details). Other reimaginings are usually very similar. Yong Zhao, for example, does not share Schleicher’s enthusiasm for standardized tests, but he sees an entrepreneurial, technology-driven, market-oriented approach as the way forward. He outlined this, pre-pandemic, in his book An Education Crisis is a Terrible Thing to Waste (Zhao et al., 2019), and then picked up on the pandemic (Zhao, 2020) to reiterate his ideas and, no doubt, to sell his book – all ‘in the spirit’, he writes, ‘of never wasting a good crisis’.

It was Churchill who first said ‘Never let a good crisis go to waste’, but the line is often attributed to Rahm Emanuel, Obama’s Chief of Staff, who said the same thing in reference to the financial crisis of 2008. As we have seen in the last two years, crises can be good opportunities to push through policy changes. Viktor Orbán provides a good example. Crises can also be a way to make a financial killing, a practice known as ‘disaster capitalism’ (Loewenstein, 2017). Sometimes, it’s possible to change policy and turn a tidy profit at the same time. One example from the recent past shows us the way.

Following Hurricane Katrina in 2005, education was massively disrupted in New Orleans and the surrounding areas. Arne Duncan, who became Obama’s Secretary for Education a few years after Katrina, had this to say about the disaster: “Let me be really honest. I think the best thing that happened to the education system in New Orleans was Hurricane Katrina. That education system was a disaster, and it took Hurricane Katrina to wake up the community to say that ‘We have to do better.’” The reform that followed, inspired by Milton Friedman, involved replacing New Orleans’ public school system with privately run charter schools. The change took place with ‘military speed and precision’, compared to the ‘glacial pace’ with which levees and the electricity grid were repaired (Klein, 2007: 5). Nearly 5000 unionized teachers were fired, although some of the younger ones were rehired on reduced salaries. Most of the city’s poorer residents were still in exile when the changes took place: the impact on the most vulnerable students was entirely predictable. ‘The social and economic situation always bleeds into the school,’ said one researcher into the impact of the catastrophe.

Disaster capitalism may, then, be a useful lens through which to view the current situation (Moore et al., 2021). Betsy DeVos, Trump’s Secretary for Education, stated that the pandemic was an opportunity to ‘look very seriously at the fact that K-12 education for too long has been very static and very stuck in one method of delivering and making instruction available’ (Ferrari, 2020). What DeVos, who was famous for having described public education as a ‘dead end’, meant by this was privatization and digitalization, and privatization through digitalization. Although the pandemic is far from over, we can already begin to ask: has the crisis been wasted?

Turning from the US to Europe, a fascinating report by Zancajo et al (2022) examines the educational policy responses to Covid in a number of European countries. The first point to note is that the recovery plans of these countries are not fundamentally any different from pre-pandemic educational policy. The Covid-19 pandemic has simply ‘served as a catalyst to accelerate preexisting digitization policies in education systems’. Individual states are supported by the European Commission’s Digital Education Action Plan 2021–2027 (European Commission, 2021), which lists three main priorities: making use of technology, the development of digital skills for teachers and learners, and the increased use of data to improve education. The focus of attention of education policy in the national recovery plans of individual EU countries is almost completely monopolized by digitalization. Covid has not led to any reimagining of education: it has simply been ‘a path accelerator contributing to strengthening policy instruments and solutions that were already on the agenda’ (Zancajo et al., 2022). Less overtly obvious than digitalization has been the creeping privatization that occurs when a greater proportion of national education budgets is spent on technology provided by private companies.

Creeping privatization has been especially noticeable in British universities, which, for years, have been focusing on the most profitable ‘revenue streams’ and on cutting the costs of academic labour. The pandemic has been used by some (Leicester and Manchester, for example) as a justification for further restructuring, cost-cutting and the development of new digitally-driven business models (Nehring, 2021). In schools, the private technology providers were able to jump in quickly because the public sector was unprepared, and, in so doing, position themselves as essential services. The lack of preparedness of the public sector is not, of course, surprising, since it has been underfunded for so long. Underfund – create a crisis – privatize the solution: such has long been the ‘Shock Doctrine’ game plan of disaster capitalists. Naomi Klein has observed that where we have ended up in post-Covid education is probably where we would have ended up anyway: Covid accelerated the process by ten years.

Williamson and Hogan (2020) describe the current situation in the following terms:

The pivot to online learning and ‘emergency remote teaching’ has positioned educational technology (edtech) as an integral component of education globally, bringing private sector and commercial organisations into the centre of essential educational services. […] A global education industry of private and commercial organisations has played a significant role in educational provision during the Covid-19 crisis, working at local, national and international scales to insert edtech into educational systems and practices. It has often set the agenda, offered technical solutions for government departments of education to follow, and is actively pursuing long-term reforms whereby private technology companies would be embedded in public education systems during the recovery from the Covid-19 crisis and beyond it in new models of hybrid teaching and learning. […] Supported by multilateral policy influencing organisations and national government departments, these companies have integrated schools, teachers and students into their global cloud systems and online education platforms, raising the prospect of longterm dependencies of public education institutions on private technology infrastructures.

And where is educational equity in all this? Even the OECD is worried – more assessment is needed to identify learning losses, they say! A pandemic tale from California will give us a clue. When schools shut down, 50% of low-income California students lacked the necessary technology to access distance learning (Gutentag, 2020). Big Tech came riding to the rescue: donations from companies like HP, Amazon, Apple, Microsoft and Google made it possible for chromebooks and wifi hotspots to be made available for every student, and California legislators and corporations could congratulate themselves on closing the ‘digital divide’ (ibid.). To compensate for increased problems of homelessness, poverty, hunger, and discrimination, the most vulnerable students now have a laptop or tablet, with which they can generate data to be monetized by the tech vendors (Feathers, 2022).

References

Azevedo, J. P. W., Rogers, F. H., Ahlgren, S. E., Cloutier, M-H., Chakroun, B., Chang, G-C., Mizunoya, S., Reuge, N. J., Brossard, M. & Bergmann, J. L. (2021) The State of the Global Education Crisis: A Path to Recovery. Washington, D.C.: World Bank Group. https://documents1.worldbank.org/curated/en/416991638768297704/pdf/The-State-of-the-Global-Education-Crisis-A-Path-to-Recovery.pdf

European Commission. (2021) Digital education action plan 2021-2027. Resetting Education, Brussels

Feathers, T. (2022) This Private Equity Firm Is Amassing Companies That Collect Data on America’s Children. January 11th, 2022 The Markup https://themarkup.org/machine-learning/2022/01/11/this-private-equity-firm-is-amassing-companies-that-collect-data-on-americas-children

Ferrari, K. (2020) Disaster Capitalism Is Coming for Public Education. Jacobin 14 May 2020 https://jacobinmag.com/2020/05/public-education-schools-covid-coronavirus-charter-teachers

Gutentag, A. (2020) The Virtual Education Shock Doctrine. The Bellows https://www.thebellows.org/the-virtual-education-shock-doctrine/

Klein, N. (2007) The Shock Doctrine. New York: Metropolitan Books

Loewenstein, A. (2017) Disaster Capitalism. London: Verso Books

Moore, S. D. M., Jayme, B. D. & Black, J. (2021) Disaster capitalism, rampant edtech opportunism, and the advancement of online learning in the era of COVID-19. Critical Education, 12 (2): pp. 1 – 21

Nehring, D. (2021) Is COVID-19 Enabling Academic Disaster Capitalism? Social Science Space 21 July 2021 https://www.socialsciencespace.com/2021/07/is-covid-19-enabling-academic-disaster-capitalism/

Williamson, B., & Hogan, A. (2020). Commercialisation and privatisation in/of education in the context of Covid-19. Education International, Brussels.

Zancajo, A., Verger, A. & Bolea, P. (2022) Digitalization and beyond: the effects of Covid-19 on post-pandemic educational policy and delivery in Europe, Policy and Society, puab016, https://doi.org/10.1093/polsoc/puab016

Zhao, Y. (2020) COVID-19 as a catalyst for educational change. Prospects 49: 29–33. https://doi.org/10.1007/s11125-020-09477-y

Zhao, Y., Emler, T. E., Snethen, A. & Yin, D. (2019) An Education Crisis is a Terrible Thing to Waste. New York: Teachers College Press