One of the most common criticisms of schooling is that it typically requires learners to study in lockstep, with everyone expected to use the same learning material at the same pace to achieve the same learning objectives. From everything we know about individual learner differences, this is an unreasonable and unrealisable expectation. It is only natural, therefore, that we should assume that self-paced learning is a better option. Self-paced learning is at the heart of technology-driven personalized learning. Often, it is the only meaningfully personalized aspect of technology-delivered courses.

Unfortunately, almost one hundred years of attempts to introduce elements of self-pacing into formal language instruction have failed to produce conclusive evidence of its benefits. For a more detailed look at the history of these failures, see my blog post on the topic, and for a more detailed look at Programmed Learning, a 1960s attempt to introduce self-pacing, see this post. This is not to say that self-pacing does not have a potentially important role to play. However, history should act as a warning that the simple provision of self-pacing opportunities through technology may be a necessary condition for successful self-pacing, but it is not a sufficient condition.

Of all the different areas of language learning that can be self-paced, I’ve long thought that technology might help the development of listening skills the most. Much contemporary real-world listening is, in any case, self-paced: why should the classroom not be? With online listening, we can use a variety of help options (Cross, 2017) – pause, rewind, speed control, speech-to-text, dictionary look-up, video / visual support – and we control the frequency and timing of this use. Online listening has become a ‘semi-recursive activity, less dependent on transient memory, inching its way closer to reading’ (Robin, 2007: 110). We don’t know which of these help options and which permutations of these options are most likely to lead to gains in listening skills, but it seems reasonable to believe that some of these options have strong potential. It is perhaps unlikely that research could ever provide a definitive answer to the question of optimal help options: different learners have different needs and different preferences (Cárdenas-Claros & Gruba, 2014). But what is clear is that self-pacing is necessary for these options to be used.

Moving away from whole-class lockstep listening practice towards self-paced independent listening has long been advocated by experts. John Field (2008: 47) identified a key advantage of independent listening: a learner ‘can replay the recording as often as she needs (achieving the kind of recursion that reading offers) and can focus upon specific stretches of the input which are difficult for her personally rather than for the class as a whole’. More recently, interest has also turned to the possibility of self-paced listening in assessment practices (Goodwin, 2017).

So, self-paced listening: what’s not to like? I’ve been pushing it with the teachers I work with for some time. But a recent piece of research from Kathrin Eberharter and colleagues (Eberharter et al., 2023) has given me pause for thought. The researchers wanted to know what effect self-pacing would have on the assessment of listening comprehension in a group of young teenage Austrian learners. They were particularly interested in how learners with SpLDs would be affected, and assumed that self-pacing would boost the performance of these learners. Disappointingly, this assumption proved wrong. Not only did self-pacing have, on average, no measurable impact on performance, it also seems that self-pacing may have put learners with shorter working-memory capacity and L1 literacy-related challenges at a disadvantage.

This research concerned self-paced listening in assessment (in this case the TOEFL Junior Standard test), not in learning. But might self-paced listening as part of a learning programme not be quite as beneficial as we might hope? The short answer, as ever, is probably that it depends. Eberharter et al. speculate that young learners ‘might need explicit training and more practice in regulating their strategic listening behaviour in order to be able to improve their performance with the help of self-pacing’. This probably holds true for many older learners, too. In other words, it’s not the possibility of self-pacing in itself that will make a huge difference: it’s what a learner does or does not do while they are self-pacing that matters. To benefit from the technological affordances of online listening, learners need to know which strategies (and which tools) may help them. They may need ‘explicit training in exploiting the benefits of navigational freedom to enhance their metacognitive strategy use’ (Eberharter et al., 2023: 17). This shouldn’t surprise us: the role of metacognition is well established (Goh & Vandergrift, 2021).

As noted earlier, we do not really know which permutations of help options are likely to be of most help, but it is a relatively straightforward matter to encourage learners to experiment with them. We do, however, have a much clearer idea of the kinds of listening strategies that are likely to have a positive impact, and the most obvious way of providing this training is in the classroom. John Field (2008) suggested many approaches; Richard Cauldwell (2013) offers more; and Sheila Thorn’s recent ‘Integrating Authentic Listening into the Language Classroom’ (2021) adds yet more. If learners’ metacognitive knowledge, effective listening and help-option skills are going to develop, the training will need to involve ‘a cyclic approach […] throughout an entire course’ (Cross, 2017: 557).

If, on the other hand, our approach to listening in the classroom continues to be (as it is in so many coursebooks) one of testing listening through comprehension questions, we should not be too surprised when learners have little idea which strategies to apply when technology allows self-pacing. Self-paced self-testing of listening comprehension is likely to be of limited value.

References

Cárdenas-Claros, M. S. & Gruba, P. A. (2014) Listeners’ interactions with help options in CALL. Computer Assisted Language Learning, 27 (3): 228–245

Cauldwell, R. (2013) Phonology for Listening: Teaching the Stream of Speech. Speech in Action

Cross, J. (2017) Help options for L2 listening in CALL: A research agenda. Language Teaching, 50 (4), 544–560. https://doi.org/10.1017/S0261444817000209

Eberharter, K., Kormos, J., Guggenbichler, E., Ebner, V. S., Suzuki, S., Moser-Frötscher, D., Konrad, E. & Kremmel, B. (2023) Investigating the impact of self-pacing on the L2 listening performance of young learner candidates with differing L1 literacy skills. Language Testing. https://doi.org/10.1177/02655322221149642

Field, J. (2008) Listening in the Language Classroom. Cambridge: Cambridge University Press

Goh, C. C. M. & Vandergrift, L. (2021) Teaching and learning second language listening: Metacognition in action (2nd ed.). Routledge. https://doi.org/10.4324/9780429287749

Goodwin, S. J. (2017) Locus of control in L2 English listening assessment [Doctoral dissertation]. Georgia State University. https://scholarworks.gsu.edu/cgi/viewcontent.cgi?article=1037&context=alesl_diss

Robin, R. (2007) Commentary: Learner-based listening and technological authenticity. Language Learning & Technology, 11 (1): 109-115. https://www.lltjournal.org/item/461/

Thorn, S. (2021) Integrating Authentic Listening into the Language Classroom. Shoreham-by-Sea: Pavilion

There has recently been a spate of articles and blog posts about design thinking and English language teaching. You could try ‘Design Thinking in Digital Language Learning’, by Speex, provider of ‘online coaching and assessment solutions’, ‘Design Thinking Activities in the ESL Classroom’, brought to you by Express Publishing, market leaders in bandwagon-jumping, or a podcast on ‘Design thinking’ from LearnJam. Or, if you happen to be going to the upcoming IATEFL conference, there are three presentations to choose from:

  • Design thinking, a sticky side up path to innovators
  • ESP course development for cultural creative design with design thinking
  • Reimagining teacher-centered professional development – can design thinking help?

The term ‘design thinking’ dates back decades, but really took off in popularity around 2005, and the following year, it was a theme at the World Economic Forum (Woudhuysen, 2011). The Harvard Business Review was pushing the idea in 2008 and The Economist ran a conference on the topic two years later. Judging from Google Trends, its popularity appeared to peak about a year ago, but the current dip might only be temporary. It’s especially popular in Peru and Singapore, for some reason. It is now strongly associated with Stanford University, the spiritual home of Silicon Valley, where you can join a three-and-a-half day design thinking bootcamp if you have $14,000 to spare.

What you would probably get for your money is a better understanding of ‘an approach to problem-solving based on a few easy-to-grasp principles that sound obvious: ‘Show Don’t Tell,’ ‘Focus on Human Values,’ ‘Craft Clarity,’ ‘Embrace Experimentation,’ ‘Mindful of Process,’ ‘Bias Toward Action,’ and ‘Radical Collaboration’’ (Miller, 2015). In the Stanford model of design thinking, which is the most commonly cited, this boils down to five processes: empathize, define, ideate, prototype and test.

I appreciate that this must sound a bit vague. I’d make things clearer if I could, but the problem is that ‘the deeper you dig into Design Thinking, the vaguer it becomes’ (Vinsel, 2017). If one thing is clear, however, it’s that things aren’t very clear (Johansson-Sköldberg et al., 2013), and haven’t been since the bandwagon got rolling. Back in 2010, at the 8th Design Thinking Research Symposium, Badke-Schaub et al. (2010) entitled their paper ‘Design thinking: a paradigm on its way from dilution to meaninglessness’. At a more recent conference, Bouwman et al. (2019) reported that the term is ‘becoming more and more vague’. So, is it a five-step process or not? According to Marty Neumeier, author of many books on design thinking, it is not: ‘that’s crap design thinking, of which there is plenty, I agree’.

My first direct experience of design thinking was back in 2015/16 when I took part in a meeting with publishers to discuss a new coursebook project. My main recollection of this was brainstorming various ideas, writing them down on Post-its, and adding them to other Post-its on the walls around the room. I think this was a combination of the empathizing and defining stages, but I could be wrong. Some years later, I took part in an online colloquium where we did something similar, except the Post-its were now digitalized using the Miro collaborative whiteboard. On both these occasions, the scepticism was palpable (except on the part of the facilitators), but we could all console ourselves that we were being cutting-edge in our approach to problem-solving.

Not everyone has been quite so ambivalent. Graphic designer Natasha Jen entitled her talk ‘Design Thinking is Bullsh*t’ and urged design practitioners to avoid the jargon and buzzwords associated with their field, to engage in more self-criticism, to base their ideas on evidence, and to stop assuming that their five-step process is needed for anything and everything (Vinsel, 2017). Vinsel (2017) likens design thinking to syphilis. Even the Harvard Business Review has changed its tune. Iskander (2018) doesn’t mince her words:

When it comes to design thinking, the bloom is off the rose. Billed as a set of tools for innovation, design thinking has been enthusiastically and, to some extent, uncritically adopted by firms and universities alike as an approach for the development of innovative solutions to complex problems. But skepticism about design thinking has now begun to seep out onto the pages of business magazines and educational publications. The criticisms are several: that design thinking is poorly defined; that the case for its use relies more on anecdotes than data; that it is little more than basic commonsense, repackaged and then marketed for a hefty consulting fee. As some of these design thinking concepts have sloshed into the world of policy, and social change efforts have been re-cast as social innovation, the queasiness around the approach has also begun to surface in the field of public policy.

Design thinking meets all the criteria needed to be called a fad (Brindle & Stearns, 2001). And like all fads from the corporate world, it has arrived in ELT past its sell-by date. Its travelling companions are terms like innovation, disruption, agile, iteration, reframing, hubs, thought leaders and so on. As fads go, it is fairly harmless, and there may well be some design-thinking-inspired activities that could be useful in a language classroom. But it’s worth remembering that, for all its associations with ‘innovation’, the driving force has always been commercialization (Vinsel, 2017). In ELT, it’s about new products – courses, coursebooks, apps and so on. Whatever else may be intended, use of the term signals alignment with corporate values, an awareness of what is (was?) hip and hot in the start-up world. It’s a discourse-shaper, reframing our questions and concerns as engineering problems, suggesting that solutions to pretty much everything can be found by thinking in the right kind of corporate way. No wonder it was catnip to my publishers.

References

Badke-Schaub, P.G., Roozenburg, N.F.M., & Cardoso, C. (2010) Design thinking: a paradigm on its way from dilution to meaninglessness? In K. Dorst, S. Stewart, I. Staudinger, B. Paton, & A. Dong (Eds.), Proceedings of the 8th Design Thinking Research Symposium (DTRS8) (pp. 39-49). DAB documents.

Bouwman, S., Voorendt, J., Eisenbart, B. & McKilligan, S. (2019) Design Thinking: An Approach with Various Perceptions. Proceedings of the Design Society: International Conference on Engineering Design. Cambridge: Cambridge University Press

Brindle, M. C. & Stearns, P. N. (2001) Facing up to Management Faddism: A New Look at an Old Force. Westport, CT: Quorum Books

Iskander, N. (2018) Design Thinking Is Fundamentally Conservative and Preserves the Status Quo. Harvard Business Review, September 5, 2018

Johansson-Sköldberg, U., Woodilla, J. & Çetinkaya, M. (2013) Design Thinking: Past, Present and Possible Futures. Creativity and Innovation Management, 22 (2): 121–146

Miller, P. N. (2015) Is ‘Design Thinking’ the New Liberal Arts? The Chronicle of Higher Education March 26, 2015

Vinsel, L. (2017) Design Thinking is Kind of Like Syphilis — It’s Contagious and Rots Your Brains. Medium, Dec 6, 2017

Woudhuysen, J. (2011) The Craze for Design Thinking: Roots, A Critique, and toward an Alternative. Design Principles And Practices: An International Journal, Vol. 5

Last September, Cambridge published a ‘Sustainability Framework for ELT’, which attempts to bring together environmental, social and economic sustainability. It’s a kind of 21st century skills framework and is designed to help teachers ‘to integrate sustainability skills development’ into their lessons. Among the sub-skills that are listed, a handful grabbed my attention:

  • Identifying and understanding obstacles to sustainability
  • Broadening discussion and including underrepresented voices
  • Understanding observable and hidden consequences
  • Critically evaluating sustainability claims
  • Understanding the bigger picture

Hoping to brush up my skills in these areas, I decided to take a look at the upcoming BETT show in London, which describes itself as ‘the biggest Education Technology exhibition in the world’. BETT and its parent company, Hyve, ‘are committed to redefining sustainability within the event industry and within education’. They are doing this by reducing their ‘onsite printing and collateral’. (‘Event collateral’ is an interesting event-industry term that refers to all the crap that is put into delegate bags, intended to ‘enhance their experience of the event’.) BETT and Hyve are encouraging all sponsors to go paperless, too, ‘switching from seat-drop collateral to QR codes’, and delegate bags will no longer be offered. They are partnering with various charities to donate ‘surplus food and furniture’ to local community projects, they are donating to other food charities that support families in need, and they are recycling all of the aisle banners into tote bags. Keynote speakers will include people like Sally Uren, CEO of ‘Forum for the Future’, who will talk about ‘Transforming carbon neutral education for a just and regenerative future’.

BETT and Hyve want us to take their corporate and social responsibility very seriously. All of these initiatives are very commendable, even though I wouldn’t go so far as to say that they will redefine sustainability within the event industry and education. But there is a problem – and it’s not that the world is already over-saturated with recycled tote bags. As the biggest jamboree of this kind in the world, the show attracts over 600 vendors and over 30,000 visitors, with over 120 countries represented. Quite apart from all the collateral and surplus furniture, the carbon and material footprint of the event cannot be negligible. Think of all those start-up solution-providers flying and driving into town, Airbnb-ing for the duration, and Ubering around after hours, for a start.

But this is not really the problem, either. Much as the event likes to talk about ‘driving impact and improving outcomes for teachers and learners’, the clear and only purpose of the event is to sell stuff. It is to enable the investors in the 600+ edtech solution-providers in the exhibition area to move towards making a return on their investment. If we wanted to talk seriously about sustainability, the question that needs to be asked is: to what extent does all the hardware and software on sale contribute in any positive and sustainable way to education? Is there any meaningful social benefit to be derived from all this hardware and software, or is it all primarily just a part of a speculative, financial game? Is the corporate social responsibility of BETT / Hyve a form of green-washing to disguise the stimulation of more production and consumption? Is it all just a kind of ‘environmentalism of the rich’ (Dauvergne, 2016)?

Edtech is not the most pressing of environmental problems – indeed, there are examples of edtech that are likely more sustainable than the non-tech alternatives – but the sustainability question remains. There are at least four environmental costs to edtech:

  • The energy-greedy data infrastructures that lie behind digital transactions
  • The raw ingredients of digital devices
  • The environmentally destructive manufacture and production of digital devices
  • The environmental cost of dismantling and disposing digital hardware (Selwyn, 2018)

Some forms of edtech are more environmentally costly than others. First, we might consider the material costs. Going back to pre-internet days, think of the countless tonnes of audio cassettes, VCR tapes, DVDs and CD-ROMs. Think of the discarded playback devices, language laboratories and IWBs. None of these are easily recyclable and most have ended up in landfill, mostly in countries that never used these products. These days the hardware that is used for edtech is more often a device that serves other non-educational purposes, but the planned obsolescence of our phones, tablets and laptops is a huge problem for sustainability.

More important now are probably the energy costs of edtech. Audio and video streaming might seem more environmentally friendly than CDs and DVDs, but, depending on how often the CD or DVD is used, the energy cost of streaming (especially high quality video) can be much higher than using the physical format. AI ups the ante significantly (Brevini, 2022). Five years ago, a standard AI training model in linguistics could ‘emit more than 284 tonnes of carbon dioxide equivalent’ (Strubell et al., 2019). With exponentially greater volumes of data now being used, the environmental cost is much, much higher. Whilst VR vendors will tout the environmental benefits of cutting down on travel, getting learners together in a physical room may well have a much lower carbon footprint than meeting in the Metaverse.
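The claim that the streaming-versus-disc comparison ‘depends on how often the CD or DVD is used’ can be made concrete with a back-of-envelope break-even calculation. All the figures in the sketch below are illustrative placeholders, not measured data; only the structure of the comparison matters.

```python
# Back-of-envelope break-even: after how many viewings does a physical
# disc become the lower-energy option compared with streaming?
# All numbers are hypothetical placeholders, NOT measured data.

DISC_EMBODIED_KWH = 1.5   # placeholder: manufacture + distribution of one disc
PLAYBACK_KWH = 0.05       # placeholder: energy for one local playback
STREAM_KWH = 0.20         # placeholder: one HD streaming session
                          # (network + data centre + device)

def break_even_plays(disc_kwh=DISC_EMBODIED_KWH,
                     playback_kwh=PLAYBACK_KWH,
                     stream_kwh=STREAM_KWH):
    """Viewings after which the disc's up-front cost is amortised."""
    per_play_saving = stream_kwh - playback_kwh
    if per_play_saving <= 0:
        return None  # streaming never costs more per play
    return disc_kwh / per_play_saving

plays = break_even_plays()
print(f"Disc beats streaming after ~{plays:.0f} viewings")
```

With these placeholder values, the disc wins after about ten viewings; with more efficient streaming (a lower `STREAM_KWH`), it may never win. The point is not the numbers but that the comparison has a crossover, which is why blanket claims about streaming being greener are unreliable.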

When doing the calculus of edtech, we need to evaluate the use-value of the technology. Does the tech actually have any clear educational (or other social) benefit, or is its value primarily in terms of its exchange-value?

To illustrate the difference between use-value and exchange-value, I’d like to return again to the beginnings of modern edtech in ELT. As the global market for ELT materials mushroomed in the 1990s, coursebook publishers realised that, for a relatively small investment, they could boost their sales by bringing out ‘new editions’ of best-selling titles. This meant a new cover, replacing a few texts and topics, making minor modifications to other content, and, crucially, adding extra features. As the years went by, these extra features became digital: CD-ROMs, DVDs, online workbooks and downloadables of various kinds. The publishers knew that sales depended on the existence of these shiny new things, even if many buyers made minimal or zero use of them. But they gave the marketing departments and sales reps a pitch, and justified an increase in unit price. Did these enhanced coursebooks actually represent any increase in use-value? Did learners make better or faster progress in English as a result? On the whole, the answer has to be an unsurprising and resounding no. We should not be surprised if hundreds of megabytes of drag-and-drop grammar practice fail to have much positive impact on learning outcomes. From the start, it was the impact on the exchange-value (sales and profits) of these products that was the driving force.

Edtech vendors have always wanted to position themselves to potential buyers as ‘solution providers’, trumpeting the use-value of what they are selling. When it comes to attracting investors, it’s a different story, one that is all about minimum viable products, scalability and return on investment.

There are plenty of technologies that have undisputed educational use-value in language learning and teaching. Google Docs, Word, Zoom and YouTube come immediately to mind. Not coincidentally, they are not technologies that were designed for educational purposes. But when you look at specifically educational technology, it becomes much harder (though not impossible) to identify unambiguous gains in use-value. Most commonly, the technology holds out the promise of improved learning, but evidence that it has actually achieved this is extremely rare. Sure, a bells-and-whistles LMS offers exciting possibilities for flipped or blended learning, but research that demonstrates the effectiveness of these approaches in the real world is sadly lacking. Sure, VR might seem to offer a glimpse of motivated learners interacting meaningfully in the Metaverse, but I wouldn’t advise you to bet on it.

And betting is what most edtech is all about. An eye-watering $16.1 billion of venture capital was invested in global edtech in 2020. What matters is not that any of these products or services have any use-value, but that they are perceived to have a use-value. Central to this investment is the further commercialisation and privatisation of education (Williamson & Hogan, 2020). BETT is a part of this.

Returning to the development of my sustainability skills, I still need to consider the bigger picture. I’ve suggested that it is difficult to separate edtech from a consideration of capitalism, a system that needs to manufacture consumption, to expand production and markets in order to survive (Dauvergne, 2016: 48). Economic growth is the sine qua non of this system, and it is this that makes the British government (and others) so keen on BETT. Education and edtech in particular are rapidly growing markets. But growth is only sustainable, in environmental terms, if it is premised on things that we actually need, rather than things which are less necessary and ecologically destructive (Hickel, 2020). At the very least, as Selwyn (2021) noted, we need more diverse thinking: ‘What if environmental instability cannot be ‘solved’ simply through the expanded application of digital technologies but is actually exacerbated through increased technology use?’

References

Brevini, B. (2022) Is AI Good for the Planet? Cambridge: Polity Press

Dauvergne, P. (2016) Environmentalism of the Rich. Cambridge, Mass.: MIT Press

Hickel, J. (2020) Less Is More. London: William Heinemann

Selwyn, N. (2018) EdTech is killing us all: facing up to the environmental consequences of digital education. EduResearch Matters 22 October, 2018. https://www.aare.edu.au/blog/?p=3293

Selwyn, N. (2021) Ed-Tech Within Limits: Anticipating educational technology in times of environmental crisis. E-Learning and Digital Media, 18 (5): 496–510. https://journals.sagepub.com/doi/pdf/10.1177/20427530211022951

Strubell, E., Ganesh, A. & McCallum, A. (2019) Energy and Policy Considerations for Deep Learning in NLP. arXiv. https://arxiv.org/pdf/1906.02243.pdf

Williamson, B. & Hogan, A. (2020) Commercialisation and privatisation in / of education in the context of Covid-19. Education International

Recent years have seen a proliferation of computer-assisted pronunciation trainers (CAPTs), both as stand-alone apps and as a part of broader language courses. The typical CAPT records the learner’s voice, compares this to a model of some kind, detects differences between the learner and the model, and suggests ways that the learner may more closely approximate to the model (Agarwal & Chakraborty, 2019). Most commonly, the focus is on individual phonemes, rather than, as in Richard Cauldwell’s ‘Cool Speech’ (2012), on the features of fluent natural speech (Rogerson-Revell, 2021).
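The record-compare-diagnose loop described above can be sketched in a few lines of code. This is a deliberately toy illustration, not how any real CAPT works: the phoneme strings and the `diagnose` helper are hypothetical, and real apps compare acoustic features with statistical models rather than aligning symbol strings.

```python
# Toy sketch of the CAPT comparison step: align the phonemes an ASR
# system (hypothetically) recognised in the learner's speech against a
# target transcription, and flag the differences as 'feedback'.
from difflib import SequenceMatcher

def diagnose(target_phonemes, recognised_phonemes):
    """Return a list of (operation, target, heard) discrepancies."""
    issues = []
    sm = SequenceMatcher(a=target_phonemes, b=recognised_phonemes)
    for op, i1, i2, j1, j2 in sm.get_opcodes():
        if op != "equal":  # keep only insertions, deletions, substitutions
            issues.append((op, target_phonemes[i1:i2],
                           recognised_phonemes[j1:j2]))
    return issues

# 'think' /θɪŋk/ pronounced as /sɪŋk/ -- a commonly flagged substitution
target = ["θ", "ɪ", "ŋ", "k"]
heard = ["s", "ɪ", "ŋ", "k"]
for op, want, got in diagnose(target, heard):
    print(op, want, got)  # prints: replace ['θ'] ['s']
```

Even this toy version makes the native-speakerism problem discussed below visible: everything hinges on what goes into `target_phonemes`, and the app must reify one accent as the benchmark.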

The fact that CAPTs are increasingly available and attractive ‘does not of course ensure their pedagogic value or effectiveness’ … ‘many are technology-driven rather than pedagogy-led’ (Rogerson-Revell, 2021). Rogerson-Revell (2021) points to two common criticisms of CAPTs. Firstly, their pedagogic accuracy sometimes falls woefully short. She gives the example of a unit on intonation in one app, where users are told that ‘when asking questions in English, our voice goes up in pitch’ and ‘we lower the pitch of our voice at the end of questions’. Secondly, she observes that CAPTs often adopt a one-size-fits-all approach, despite the fact that we know that issues of pronunciation are extremely context-sensitive: ‘a set of learners in one context will need certain features that learners in another context do not’ (Levis, 2018: 239).

There are, in addition, technical challenges that are not easy to resolve. Many CAPTs rely on automatic speech recognition (ASR), which can be very accurate with some accents, but much less so with other accents (including many non-native-speaker accents) (Korzekwa et al., 2022). Anyone using a CAPT will experience instances of the software identifying pronunciation problems that are not problems, and failing to identify potentially more problematic issues (Agarwal & Chakraborty, 2019).

We should not, therefore, be too surprised if these apps don’t always work terribly well. Some apps, like the English File Pronunciation app, have been shown to be effective in helping the perception and production of certain phonemes by a very unrepresentative group of Spanish learners of English (Fouz-González, 2020), but this tells us next to nothing about the overall effectiveness of the app. Most CAPTs have not been independently reviewed, and, according to a recent meta-analysis of CAPTs (Mahdi & Al Khateeb, 2019), the small number of studies are ‘all of very low quality’. This, unfortunately, renders their meta-analysis useless.

Even if the studies in the meta-analysis had not been of very low quality, we would need to pause before digesting any findings about CAPTs’ effectiveness. Before anything else, we need to develop a good understanding of what they might be effective at. It’s here that we run headlong into the problem of native-speakerism (Holliday, 2006; Kiczkowiak, 2018).

The pronunciation model that CAPTs attempt to push learners towards is a native-speaker model. In the case of ELSA Speak, for example, this is a particular kind of American accent, although ‘British and other accents’ will apparently soon be added. Xavier Anguera, co-founder and CTO of ELSA Speak, in a fascinating interview with Paul Raine of TILTAL, happily describes his product as ‘an app that is for accent reduction’. Accent reduction is certainly a more accurate way of describing CAPTs than accent promotion.

Accent reduction, or the attempt to mimic an imagined native-speaker pronunciation, is now ‘rarely put forward by teachers or researchers as a worthwhile goal’ (Levis, 2018: 33) because it is only rarely achievable and, in many contexts, inappropriate. In addition, accent reduction cannot easily be separated from accent prejudice. Accent reduction courses and products ‘operate on the assumption that some accents are more legitimate than others’ (Ennser-Kananen, et al., 2021) and there is evidence that they can ‘reinscribe racial inequalities’ (Ramjattan, 2019). Accent reduction is quintessentially native-speakerist.

Rather than striving towards native-speaker accentedness, there is a growing recognition among teachers, methodologists and researchers that intelligibility may be a more appropriate learning goal than accentedness (Levis, 2018). It has been over 20 years since Jennifer Jenkins (2000) developed her Lingua Franca Core (LFC), a relatively short list of pronunciation features that she considered central to intelligibility in English as a Lingua Franca contexts (i.e. the majority of contexts in which English is used). Intelligibility as the guiding principle of pronunciation teaching continues to grow in influence, spurred on by the work of Walker (2010), Kiczkowiak & Lowe (2018), Patsko & Simpson (2019) and Hancock (2020), among others.

Unfortunately, intelligibility is a deceptively simple concept. What exactly it is, is ‘not an easy question to answer’ writes John Levis (2018) before attempting his own answer in the next 250 pages. As admirable as the LFC may be as an attempt to offer a digestible and actionable list of key pronunciation features, it ‘remains controversial in many of its recommendations. It lacks robust empirical support, assumes that all NNS contexts are similar, and does not take into account the importance of stigma associated with otherwise intelligible pronunciations’ (Levis, 2018: 47). Other attempts to list features of intelligibility fare no better in Levis’s view: they are ‘a mishmash of incomplete and contradictory recommendations’ (Levis, 2018: 49).

Intelligibility is also complex because of the relationship between intelligibility and comprehensibility, or the listener’s willingness to understand – their attitude or stance towards the speaker. Comprehensibility is a mediation concept (Ennser-Kananen, et al., 2021). It is a two-way street, and intelligibility-driven approaches need to take this into account (unlike the accent-reduction approach which places all the responsibility for comprehensibility on the shoulders of the othered speaker).

The problem of intelligibility becomes even more thorny when it comes to designing a pronunciation app. Intelligibility and comprehensibility cannot easily be measured (if at all!), and an app’s algorithms need a concrete numerically-represented benchmark towards which a user / learner can be nudged. Accentedness can be measured (even if the app has to reify a ‘native-speaker accent’ to do so). Intelligibility / Comprehensibility is simply not something, as Xavier Anguera acknowledges, that technology can deal with. In this sense, CAPTs cannot avoid being native-speakerist.

At this point, we might ride off indignantly into the sunset, but a couple of further observations are in order. First of all, accentedness and comprehensibility are not mutually exclusive categories. Anguera notes that intelligibility can be partly improved by reducing accentedness, and some of the research cited by Levis (2018) backs him up on this. But precisely how much and what kind of accent reduction improves intelligibility is not knowable, so the use of CAPTs is something of an optimistic stab in the dark, and, like all stabs in the dark, it carries dangers. Secondly, individual language learners may be forgiven for not wanting to wait for accent prejudice to become a thing of the past: if they feel that they will suffer less from prejudice by attempting, here and now, to reduce their ‘foreign’ accent, it is not for me, I think, to pass judgement. The trouble, of course, is that CAPTs contribute to the perpetuation of these prejudices.

There is, however, one area where the digital evaluation of accentedness is, I think, unambiguously unacceptable. According to Rogerson-Revell (2021), ‘Australia’s immigration department uses the Pearson Test of English (PTE) Academic as one of five tests. The PTE tests speaking ability using voice recognition technology and computer scoring of test-takers’ audio recordings. However, L1 English speakers and highly proficient L2 English speakers have failed the oral fluency section of the English test, and in some cases it appears that L1 speakers achieve much higher scores if they speak unnaturally slowly and carefully’. Human evaluations are not necessarily any better.

References

Agarwal, C. & Chakraborty, P. (2019) A review of tools and techniques for computer aided pronunciation training (CAPT) in English. Education and Information Technologies, 24: 3731–3743. https://doi.org/10.1007/s10639-019-09955-7

Cauldwell, R. (2012) Cool Speech app. Available at: http://www.speechinaction.org/cool-speech-2

Ennser-Kananen, J., Halonen, M. & Saarinen, T. (2021) “Come Join Us and Lose Your Accent!” Accent Modification Courses as Hierarchization of International Students. Journal of International Students, 11 (2): 322–340

Fouz-González, J. (2020) Using apps for pronunciation training: An empirical evaluation of the English File Pronunciation app. Language Learning & Technology, 24 (1): 62–85

Hancock, M. (2020) 50 Tips for Teaching Pronunciation. Cambridge: Cambridge University Press

Holliday, A. (2006) Native-speakerism. ELT Journal, 60 (4): 385–387

Jenkins, J. (2000) The Phonology of English as a Lingua Franca. Oxford: Oxford University Press

Kiczkowiak, M. (2018) Native Speakerism in English Language Teaching: Voices From Poland. Doctoral dissertation.

Kiczkowiak, M. & Lowe, R. J. (2018) Teaching English as a Lingua Franca. Stuttgart: DELTA Publishing

Korzekwa, D., Lorenzo-Trueba, J., Drugman, T. & Kostek, B. (2022) Computer-assisted pronunciation training—Speech synthesis is almost all you need. Speech Communication, 142: 22–33

Levis, J. M. (2018) Intelligibility, Oral Communication, and the Teaching of Pronunciation. Cambridge: Cambridge University Press

Mahdi, H. S. & Al Khateeb, A. A. (2019) The effectiveness of computer-assisted pronunciation training: A meta-analysis. Review of Education, 7 (3): 733–753

Patsko, L. & Simpson, K. (2019) How to Write Pronunciation Activities. ELT Teacher 2 Writer. https://eltteacher2writer.co.uk/our-books/how-to-write-pronunciation-activities/

Ramjattan, V. A. (2019) Racializing the problem of and solution to foreign accent in business. Applied Linguistics Review, 13 (4). https://doi.org/10.1515/applirev-2019-0058

Rogerson-Revell, P. M. (2021) Computer-Assisted Pronunciation Training (CAPT): Current Issues and Future Directions. RELC Journal, 52 (1): 189–205. https://doi.org/10.1177/0033688220977406

Walker, R. (2010) Teaching the Pronunciation of English as a Lingua Franca. Oxford: Oxford University Press

Generative AI and ELT Materials

Posted: December 20, 2022 in Uncategorized

I must begin by apologizing for my last, flippant blog post where I spun a tale about using generative AI to produce ‘teacher development content’ and getting rid of teacher trainers. I had just watched a webinar by Giuseppe Tomasello of edugo.ai, ‘Harnessing Generative AI to Supercharge Language Education’, and it felt as if much of the script of this webinar had been generated by ChatGPT: ‘write a relentlessly positive product pitch for a language teaching platform in the style of a typical edtech vendor’.

In the webinar, Tomasello talked about using GPT-3 to generate texts for language learners. Yes, it’s doable: the results are factually and linguistically accurate, but dull in the extreme (see Steve Smith’s experiment with some texts for French learners). André Hedlund summarises: ‘limited by a rigid structure … no big differences in style or linguistic devices. They follow a recipe, a formula.’ Much like many coursebook / courseware texts, in other words.

More interesting texts can be generated with GPT-3 when the prompts are crafted in careful detail, but this requires creative human imagination. Crafting such prompts requires practice, trial and error, and it requires knowledge of the intended readers.

AI can be used to generate dull texts at a certain level (B1, B2, C1, etc.), but reliability is a problem. So many variables (starting with familiarity with the subject matter and the learner’s first language) impact on levels of reading difficulty that automation can never satisfactorily resolve this challenge.
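A toy example makes the point. Readability formulas of the kind that automated levelling tends to rely on (the crude Flesch-style measure below is my own invention, not any specific tool's algorithm) see only surface features such as word and sentence length, and are blind to the variables that matter most for a particular learner:

```python
# A crude Flesch-style difficulty measure (my own invention for
# illustration, not a real CEFR-levelling algorithm). It sees only
# surface features, sentence length and word length, and nothing of
# topic familiarity or the learner's first language.

def crude_difficulty(text):
    """Average sentence length multiplied by average word length."""
    sentences = [s for s in text.replace('!', '.').replace('?', '.').split('.')
                 if s.strip()]
    words = text.split()
    avg_sentence_len = len(words) / len(sentences)
    avg_word_len = sum(len(w.strip('.,!?')) for w in words) / len(words)
    return avg_sentence_len * avg_word_len

# Latinate, cognate-rich sentence: scores as 'difficult'.
print(crude_difficulty("The national constitution guarantees fundamental liberties."))
# Phrasal-verb idioms, opaque to many learners: scores as 'easy'.
print(crude_difficulty("He put up with it. She let him off. They got on well."))
```

The formula rates the cognate-heavy Latinate sentence as far harder than the string of phrasal-verb idioms, though for many learners (an L1-Spanish reader, say) the reverse is true.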

However, the interesting question is not whether we can quickly generate courseware-like texts using GPT-3, but whether we should even want to. What is the point of using such texts? Giuseppe Tomasello’s demonstration made it clear that the way these texts should be exploited is by generating (automatically) a series of comprehension questions and a list of key words, which can also be (automatically) converted into a variety of practice tasks (flashcards, gapfills, etc.). Like courseware texts, then, these texts are language objects (TALOs), to be tested for comprehension and mined for items to be deliberately learnt. They are not, in any meaningful way, texts as vehicles of information (TAVIs) – see here for more about TALOs and TAVIs.

Comprehension testing is a standard feature of language teaching, but there are good reasons to believe that it will do little, if anything, to improve a learner’s reading skills (see, for example, Swan & Walter, 2017; Grabe & Yamashita, 2022) or language competence. It has nothing to do with the kind of extensive reading (see, for example, Leather & Uden, 2021) that is known to be such a powerful force in language acquisition. This use of text is all about deliberate language study. We are talking here about a synthetic approach to language learning.

The problem, of course, is that deliberate language study is far from uncontested. Synthetic approaches wrongly assume that ‘the explicit teaching of declarative knowledge (knowledge about the L2) will lead to the procedural knowledge learners need in order to successfully use the L2 for communicative purpose’ (see Geoff Jordan on this topic). Even if it were true that some explicit teaching was of value, it’s widely agreed that explicit teaching should not form the core of a language learning syllabus. The edugo.ai product, however, is basically all about explicit teaching.

Judging from the edugo.ai presentation on YouTube, the platform offers a wide range of true / false questions, multiple choice questions, gapfills, dictations and so on, all of which can be gamified. The ‘methodology’ is called ‘Flip and Flop the Classroom’. In this approach, learners do some self-study (presumably of some pre-selected discrete language item(s)), practise it in the synchronous lesson, and then have more self-study where this language is reviewed. In the ‘flop’ section, the learner’s spoken contribution to the live lesson is recorded, transcribed and analysed by the system, which identifies aspects of the learner’s language that can be improved through personalized practice.

The focus is squarely on accuracy, and the importance of accuracy is underlined by the presenter’s observation that Communicative Language Teaching does not focus enough on accuracy. Accuracy is also to the fore in another task type, strangely called ‘chunking’ (see below). Apparently, this will lead to fluency: ‘The goal of this template is that they can repeat it a lot of times and reach fluency at the end’.

The YouTube presentation is called ‘How to structure your language course using popular pedagogical approaches’ and suggests that you can mix’n’match bits from ‘Grammar Translation’, ‘Direct Method’ and ‘CLT’. Sorry, Giuseppe, you can mix’n’match all you like, but you can’t be methodologically agnostic. This is a synthetic approach all the way down the line. As such, it’s not going to supercharge language education. It’s basically more of what we already have too much of.

Let’s assume, though, for a moment that what we really want is a kind of supercharged, personalizable, quickly generated combination of vocabulary flashcards and ‘English Grammar in Use’ loaded with TALOs. How does this product stand up? We’ll need to consider two related questions: (1) its reliability, and (2) how time-saving it actually is.

As far as I can judge from the YouTube presentation, reliability is not exactly optimal. There’s the automated key word identifier that identified ‘Denise’, ‘the’ and ‘we’ as ‘key words’ in an extract of learner-produced text (‘Hello, my name is Denise and I founded my makeup company in 2021. We produce skin care products using 100% natural ingredients.’). There’s the automated multiple choice / translation generator which tests your understanding of ‘Denise’ (see below), and there’s the voice recognition software which transcribed ‘it cost’ as ‘they cost’.
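For what it's worth, the 'key word' blooper is easy to reproduce with the most naive possible approach. The sketch below (my own reconstruction, not edugo.ai's actual algorithm) ranks words in the learner's text by raw frequency, with no stopword list and no corpus-relative weighting, and function words and names float straight to the top:

```python
from collections import Counter

# A guess at how a 'key word' identifier could end up picking out
# 'Denise', 'the' and 'we': rank words by raw frequency, with no stopword
# list and no corpus-relative (e.g. tf-idf) weighting. This sketch is my
# own reconstruction, not edugo.ai's actual algorithm.

TEXT = ("Hello, my name is Denise and I founded my makeup company in 2021. "
        "We produce skin care products using 100% natural ingredients.")

def naive_keywords(text, n=5):
    words = [w.strip('.,%').lower() for w in text.split()]
    return [word for word, _count in Counter(words).most_common(n)]

print(naive_keywords(TEXT))  # function words and the name rise to the top
```

Real keyword extraction needs at minimum a stopword list and a reference corpus; without them, the most frequent words in any short text are almost guaranteed to be names and function words.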

In the more recent ‘webinar’ (i.e. commercial presentation) that has not yet been uploaded to YouTube, participants identified a number of other bloopers. In short, reliability is an issue. This shouldn’t surprise anyone. Automation of some of these tasks is extremely difficult (see my post about the automated generation of vocabulary learning materials). Perhaps impossible … but how much error is acceptable?

edugo.ai does not sell content: they sell a platform for content-generation, course-creation and selling. Putative clients are institutions wanting to produce and sell learning content of the synthetic kind. The problem with any lack of reliability is that you immediately need skilled staff to work with the platform: to check for error, to edit, to improve on the inevitable limitations of the AI (starting, perhaps, with the dull texts it has generated). It is disingenuous to suggest that anyone can do this without substantial training / supervision. Generative AI offers a time-saving route that does not sacrifice reliability only if a skilled and experienced writer is working with it.

edugo.ai is a young start-up that raised $345K in first round funding in September of last year. The various technologies they are using are expensive, and a lot more funding will be needed to make the improvements and additions (such as spaced repetition) that are necessary. In both presentations, there was lots of talk that the platform will be able to do this and will be able to do that. For the moment, though, nothing has been proved, and my suspicion is that some of the problems they are trying to solve do not have technological solutions. First of all, they’ll need a better understanding of what these problems are, and, for that, there has to be a coherent and credible theory of second language acquisition. There are all sorts of good uses that GPT-3 / AI could be put to in language teaching. I doubt this is one of them.

To wrap up, here’s a little question. What are the chances that edugo.ai’s claims that the product will lead to ‘+50% student engagement’ and ‘5X Faster creating language courses’ were also generated by GPT-3?

References

Grabe, W. & Yamashita, J. (2022) Reading in a Second Language, 2nd edition. New York: Cambridge University Press

Leather, S. & Uden, J. (2021) Extensive Reading. New York: Routledge

Swan, M. & Walter, C. (2017) Misunderstanding comprehension. ELT Journal, 71 (2): 228–236

Who can tell where a blog post might lead? Over six years ago I wrote about adaptive professional development for teachers. I imagined the possibility of bite-sized, personalized CPD material. Now my vision is becoming real.

For the last two years, I have been working with a start-up that has been using AI to generate text using GPT-3 large language models. GPT-3 has recently been in the news because of the phenomenal success of the newly released ChatGPT. The technology certainly has a wow factor, but it has been around for a while now. ChatGPT can generate texts of various genres on any topic (with a few exceptions like current affairs) and the results are impressive. Imagine, then, how much more impressive the results can be when the kind of text is limited by genre and topic, allowing the software to be trained much more reliably.

This is what we have been working on. We took as our training corpus a huge collection of English language teaching teacher development texts that we could access online: blogs from all the major publishers, personal blogs, transcriptions from recorded conference presentations and webinars, magazine articles directed at teachers, along with books from publishers such as DELTA and Pavilion ELT, etc. We identified topics that seemed to be of current interest and asked our AI to generate blog posts. Later, we were able to get suggestions of topics from the software itself.

We then contacted a number of teachers and trainers who contribute to the publishers’ blogs and contracted them, first, to act as human trainers for the software, and, second, to agree to their names being used as the ‘authors’ of the blog posts we generated. In one or two cases, the authors thought that they had actually written the posts themselves! Next we submitted these posts to the marketing departments of the publishers (who run the blogs). Over twenty were submitted in this way, including:

  • What do teachers need to know about teaching 21st century skills in the English classroom?
  • 5 top ways of improving the well-being of English teachers
  • Teaching leadership skills in the primary English classroom
  • How can we promote eco-literacy in the English classroom?
  • My 10 favourite apps for English language learners

We couldn’t, of course, tell the companies that AI had been used to write the copy, but once we were sure that nobody had ever spotted the true authorship of this material, we were ready to move to the next stage of the project. We approached the marketing executives of two publishers and showed how we could generate teacher development material at a fraction of the current cost and in a fraction of the time. Partnerships were quickly signed.

Blog posts were just the beginning. We knew that we could use the same technology to produce webinar scripts, using learning design insights to optimise the webinars. The challenge we faced was that webinars need a presenter. We experimented with using animations, but feedback indicated that participants like to see a face. This is eminently doable, using our contracted authors and deep fake technology, but costs are still prohibitive. It remains cheaper and easier to use our authors delivering the scripts we have generated. This will no doubt change before too long.

The next obvious step was to personalize the development material. Large publishers collect huge amounts of data about visitors to their sites using embedded pixels. It is also relatively cheap and easy to triangulate this data with information from the customer databases and from activity on social media (especially Facebook). We know what kinds of classes people teach, and we know which aspects of teacher development they are interested in.

Publishers have long been interested in personalizing marketing material, and the possibility of extending this to the delivery of real development content is clearly exciting. (See below an email I received this week from the good folks at OUP marketing.)

Earlier this year one of our publishing partners began sending links to personalized materials of the kind we were able to produce with AI. The experiment was such a success that we have already taken it one stage further.

One of the most important clients of our main publishing partner employs hundreds of teachers to deliver online English classes using courseware that has been tailored to the needs of the institution. With so many freelance teachers working for them, along with high turnover of staff, there is inevitably a pressing need for teacher training to ensure optimal delivery. Since the classes are all online, it is possible to capture precisely what is going on. Using an AI-driven tool that was inspired by the Visible Classroom app (informed by the work of John Hattie), we can identify the developmental needs of the teachers. What kinds of activities are they using? How well do they exploit the functionalities of the platform? What can be said about the quality of their teacher talk? We combine this data with everything else and our proprietary algorithms determine what kinds of training materials each teacher receives. It doesn’t stop there. We can also now evaluate the effectiveness of these materials by analysing the learning outcomes of the students.

Teaching efficacy can be massively increased, whilst the training budget of the institution can be slashed. If all goes well, there will be no further need for teacher trainers at all. We won’t be stopping there. If results such as these can be achieved in teacher training, there’s no reason why the same technology cannot be leveraged for the teaching itself. Most of our partner’s teaching and testing materials are now quickly and very cheaply generated using GPT-3.5. If you want to see how this is done, check out the work of edugo.AI (a free trial is available), which can generate gapfills and comprehension test questions in a flash. As for replacing the teachers, we’re getting there. For the time being, though, it’s more cost-effective to use freelancers and to train them up.

You could be forgiven for wondering what, precisely, digital literacies are. In the first edition of ‘Digital Literacies’, Dudeney et al. (2013:2) define the term as ‘the individual and social skills needed to effectively interpret, manage, share and create meaning in the growing range of digital communication channels’. This is pretty broad, and would seem to encompass more or less anything that people do with digital technology, including the advanced arts of trolling and scamming. Nine years later, in the new edition of this book (Pegrum et al., 2022:5), the authors modify their definition a little: ‘the individual and social skills needed to effectively manage meaning in an era of digitally networked, often blended, communication’. This is broader still. In the intervening years there has been a massive proliferation of ways of describing specific digital literacies, as well as more frameworks of digital literacies than anyone (bar people writing about the topic) could possibly want. Of course, there is much in common between all these descriptive and taxonomic efforts, but there is also much that differs. What, precisely, ‘digital literacies’ means changes over both time and space. It carries different meanings in Australia, Sweden and Argentina, and, perhaps, it only makes sense to have a local conceptualisation of the term (Pangrazio et al., 2020). By the time you have figured out what these differences are, things will have moved on. Being ‘digitally-literate’ literate is an ongoing task.

What, precisely, ‘digital literacies’ are only really matters when we are told that it is vital to teach them. It’s easy to agree that digital skills are quite handy in this networked world, but, unless we have a very clear idea of what they are, it’s not going to be easy to know which ones to teach or how to teach them. Before we get caught up in the practical pedagogical details, it might be useful to address three big questions:

  • How useful is it to talk about digital literacies?
  • Can digital literacies be taught?
  • Should digital literacies be taught as part of the English language curriculum?

How useful is it to talk about digital literacies?

Let’s take one example of a framework: the Cambridge Life Competencies Framework (CLC). The CLC lists six key competencies (creative thinking, critical thinking, communication, collaboration, learning to learn, and social responsibilities). Underpinning and informing these six competencies are three ‘foundation layers’: ‘emotional development’, ‘discipline knowledge’ and ‘digital literacy’. Digital literacy is broken down as follows:

It’s a curious amalgam of relatively straightforward skills and much more complex combinations of skills with knowledge, attitudes and dispositions. In the former category (see the first box in the chart above), we would find things like the ability to use tags, hashtags, search engines, and filters. In the latter (see the second box in the chart above), we would find things like the ability to recognise fake news or to understand how and why personally targeted advertising is such a money-spinner.

Another example, this one, from Pegrum et al (2018), is more complex and significantly more detailed. On the more technical side, we see references to the ability to navigate within multimodal gaming, VR and AR environments, or the ability to write and modify computer code. And for more complex combinations of skills, knowledge, attitudes and dispositions, we have things like the ability to develop a reputation and exert influence within online networks, or ‘the ability to exert a positive influence (online) by adopting a stance of intellectual humility and ethical pluralism’.

This is all a far cry from only seven years ago, when ‘digital literacies’ were described as ‘the practices of reading, writing and communication made possible by digital media’ (Hafner et al., 2015) and the kinds of skills required were almost all closely connected to reading and writing. The digital world has changed, and so has our understanding of what it means to operate effectively within that world. Perhaps it is time, too, to change our terminology: ‘literacies’ is still with us, but it seems almost wilfully misleading. ‘Abilities’ or ‘competencies’ would seem to be much more appropriate terms for what we are discussing in these frameworks, but ‘abilities’ probably isn’t sciency enough, and ‘competencies’ has already been done to death.

The problem with lumping all these things together under a single superordinate is that it seems to imply that there is some sort of equivalence between all the subordinate labels, that there is some categorial similarity. Pegrum et al (2022) acknowledge that there are differences of complexity between these ‘literacies’ – they use a star system to indicate degree of complexity. But I think that there is no sufficiently strong reason to put some of these things together in the first place. Dudeney et al. (2013: 14) note that some of their literacies are ‘macroliteracies’ – ‘in other words, a literacy which draws on numerous other literacies – and involves linguistic, multimedia, spatial, kinaesthetic and other skills’. Why, then, call them ‘literacies’ at all? The only connection between knowing how to generate a LOLcat post and knowing how to ‘remain non-judgemental towards new perspectives, multiple viewpoints, and shifting contexts’ is that both are related to our use of digital technology. But since there is very little in our lives that is not now related in some way to digital technology, is this good enough reason to bracket these two abilities together?

Pegrum et al (2022) found that they needed to expand their list of digital literacies in the new edition of their book, and they will no doubt need to do so again nine years from now. But is the fact that something could be included in a taxonomy a good reason for actually including it? ‘Code literacy’, for example, seems rather less urgent now than it did nine years ago. I have never been convinced by gaming literacy or remix literacy. Are these really worth listing alongside the others in the table? Even if they are, nobody (including Pegrum et al.) would disagree that some prioritisation is necessary. However, when we refer to ‘digital literacies’ and how vital it is to teach them, we typically don’t specify a particular literacy and not another. We risk committing the logical error (the fallacy of division) of assuming that something that holds true for a group or category also holds true for each member of the group or subordinate of the category.

Can digital literacies be taught?

There is clearly no particular problem in teaching and learning some digital literacies, especially the more technical ones. Unfortunately, the more specific and technical we are (e.g. when we mention a particular digital tool), the more likely it is that its shelf-life will be limited. Hardware comes and goes (I haven’t had to format a floppy disc for a while), as do apps and software. To the risk of wasting time teaching a skill that may soon be worthless, we may add the risk of not including literacies that have not yet found their way into the taxonomies. Examples include knowing how to avoid plagiarism detectors (as opposed to avoiding plagiarism) or how to use GPT-3 (and soon GPT-4) text generators. Handy for students.

The choice of digital tools is crucial when one of the key pieces of advice for teaching digital literacy is to integrate the use of digital tools into lessons (e.g. in the Cambridge Life Competencies Digital Literacy booklet). This advice skates over the key questions of which tool, and which literacy is being targeted (and why). Watching TikTok videos, using Mentimeter in class, or having a go with a VR headset may serve various educational purposes, but it would be stretching a point to argue that these activities will do much for anyone’s digital literacy. Encouraging teachers to integrate technology into their lessons (government policy in some countries) makes absolutely no sense unless the desired outcome – digital literacy – is precisely defined in advance. It rarely is. See here for further discussion.

Encouragement to include technology, any old technology, in lessons is almost never grounded in claims that a particular technical skill (e.g. navigating TikTok) has any pressing value. Rather, the justification usually comes from reference to what might be called ‘higher-order’ skills, like critical thinking: what I referred to earlier as curious amalgams of relatively straightforward skills and much more complex combinations of skills with knowledge, attitudes and dispositions.

The problem here is that it remains very uncertain whether things like ethical literacy or critical digital literacy are likely to be learnt through instruction. They can certainly be practised, and Pegrum et al (2022) have some very nice activities. The aims of these activities are typically described using a vague ‘raise awareness of’ formula, but whether they will lead, for example, to any improved ability ‘to exert a positive influence (online) by adopting a stance of intellectual humility and ethical pluralism’ is debatable. Much as the world might be a better place if classroom activities of this kind did actually work, research evidence is sadly lacking. For a more detailed look at the problems of trying to teach critical digital literacy / media information literacy, see here.

Should digital literacies be part of the English language curriculum?

So, is it ‘crucial for language teaching to […] encompass the digital literacies which are increasingly central to learners’ […] lives’ (Pegrum et al, 2022)? Well, it all depends on which digital literacies we are talking about. It also depends on what kind of learners in what kinds of learning contexts. And it depends on both institutional objectives and the personal objectives of the learners themselves. So, ‘crucial’, no, but we’ll put the choice of adjective down to rhetorical flourish.

Is it true that ‘digital literacies are as important to language learning as […] reading and writing skills […]’ (Pegrum et al., 2022: 1)? Clearly not. Since it’s hard to imagine any kind of digital literacy without some reading skills preceding it, the claim that they are comparable in importance is also best understood as rhetorical flourish.

A modicum of critical (digital) literacy is helpful when it comes to reading literature on digital literacies.

References

Dudeney, G., Hockly, N. & Pegrum, M. (2013) Digital Literacies. Harlow: Pearson Education

Hafner, C. A., Chik, A. & Jones, R. H. (2015) Digital Literacies and Language Learning. Language Learning & Technology, 19 (3): 1–7

Pangrazio, L., Godhe, A.-L. & Ledesma, A. G. L. (2020) What is digital literacy? A comparative review of publications across three language contexts. E-Learning and Digital Media, 17 (6): 442–459. https://doi.org/10.1177/204275302094629

Pegrum, M., Dudeney, G. & Hockly, N. (2018) Digital Literacies Revisited. The European Journal of Applied Linguistics and TEFL, 7 (2): 3–24

Pegrum, M., Hockly, N. & Dudeney, G. (2022) Digital Literacies, 2nd Edition. New York: Routledge

Motivation and research

Like quite a few of the topics I have explored in this blog, motivation is something whose importance we can all agree on, without being entirely clear about what it means. It is closely connected to a number of other human attributes – reasons for learning, goal-setting, strength of desire to achieve goals, attitudes towards and interest in English, effort and self-regulation, learner autonomy … (Lamb, 2016) – and the list could be continued. In fact, it means so many things that the American Psychological Association has considered deleting the word as a search term in the main psychological database (Dörnyei & Ushioda, 2013).

In the world of language learning, research into motivation got going over 60 years ago (Gardner & Lambert, 1959), really took off in the 1990s, and has become ‘one of the most popular research topics, showing an exponential increase in quantity year after year’ (Al-Hoorie et al., 2021: 139). The main reason for this is no doubt the widely shared perception of the importance of ‘motivation’ (and demotivation), but it may also be because motivation is seen by novice researchers as an easy topic (Ushioda, 2016), relying, as it typically does, on questionnaires.

However, all is not well in this world of language motivation research. First of all, researchers are questioning whether motivation to learn a language is fundamentally any different from motivation to learn anything else (Al-Hoorie & Hiver, 2020). Some research suggests that it is not, and that the complex network of ‘identity, emotions, social and political factors, both inside and outside of school’ (Al-Hoorie et al., 2021: 141) that are seen as relevant to language learning motivation apply equally to learning maths. Attempts to carve out a particular space for language learning motivation (such as Dörnyei’s (2009) ‘L2 motivational self system’) may have much appeal, but are less convincing when studied more closely (Al-Hoorie, 2018).

All of which leaves us where exactly? The conclusion of Al-Hoorie et al (2021) is that we might gain a better understanding of language learning motivation through the lens of complex dynamic systems theory, but the complexity of any insights gained makes it unlikely that this would lead to ‘workable pedagogical recommendations’. Since the whole point of researching motivation is to generate ‘workable pedagogical recommendations’, Al-Hoorie et al’s inescapable conclusion is that motivation research should be abandoned … and that attention should shift ‘to the more tangible and actionable construct of engagement’. This view would seem to be shared by Mercer and Dörnyei (2020). But is ‘engagement’ really any more tangible and actionable than ‘motivation’? I don’t think so. The concept of ‘engagement’ unifies motivation and its activation, according to Mercer & Dörnyei (2020: 6), which means that ‘engagement’ is an even more complex concept than motivation alone.

The mantra of researchers is ‘more research needed’ (Maley, 2016), so, even when criticising the research, it’s hardly surprising that Al-Hoorie et al argue for more research … just with a different focus. So, besides abandoning ‘motivation’ and looking at ‘engagement’ instead, more research that is ‘interventional in nature’ is needed – as it is a ‘rare commodity’ (Al-Hoorie et al., 2021: 141-2).

Motivation and practice

There’s no shortage of stuff out there telling us how to do motivation in the language classroom. There are books full of practical ideas and tips (e.g. Dörnyei & Hadfield, 2013; Renandya, 2015; Thorner, 2017). There is also any amount of online stuff about motivation, the main purpose of which is to sell something: a brand, a product (such as a coursebook or an app) or an idea (such as coaching). And then there are the ‘pedagogical applications’ that come at the end of the research papers, which ‘more often than not do not logically and unambiguously follow from the results of the research’ (Al-Hoorie et al., 2021: 138).

There are two big problems with all of this. We know that motivational classroom interventions can ‘work’, but we cannot actually measure ‘motivation’. We can only measure proxies for motivation, and the most common of these is self-reported intended effort – which actually tells us very little. Achievement may correlate with intended effort … but it may not! Much of the research literature implies that motivational interventions may be helpful, but fails to demonstrate clearly that they will be (see Howard et al., 2021, as an example). Equally problematic is the fact that we don’t know which kinds of interventions are likely to be most beneficial (see, for example, Lazowski & Hulleman, 2015). In other words, we are in the dark.

This is not to say that some of the tips and practical classroom ideas are not worth trying out. Most tips you come across will strike you as self-evident (e.g. intrinsic beats extrinsic, success breeds motivation, rewards beat punishment) and, like Renandya’s (2015), quite reasonably draw on mainstream motivational theory. Regarding the practical side of things, Al-Hoorie et al (2021: 147) conclude that ‘probably the best advice to give to a novice teacher is not to bury themselves in recently published language motivation research, but to simply rely on experience and trial and error, and perhaps a good mentor’. As for good, experienced teachers, they already know ‘far more about motivating students than the sum of knowledge that can be gained from research’ (Henry et al, 2019: 15).

It is therefore just as well that it wouldn’t cross most teachers’ minds to even think of exploring this research.

Motivation and symbolic power

Al-Hoorie et al (2021: 139 – 142) observe that ‘giving advice to teachers has become de rigueur of late, which strikes us as antithetical to engaging in necessary critical reflection and the limits of available empirical evidence’. Teachers, they note, have to ‘make constant, split-second decisions to adapt to changing and evolving contexts. Asking teachers to learn how to teach from research findings is akin to asking an individual to learn how to drive or swim through reading books sans actual practice. Books might help in some respects, but in the end drivers and swimmers have to refine their skills through sustained practice and by trial and error due to the complex and unpredictable nature of context’.

In this light, it is hard not to view the discourse of language motivation research through the lens of ‘symbolic power’. It is hard not to reflect on the relation of research and practice in solving real-world language-related problems, to wonder whether such problem-solving has been hijacked by ‘professional experts’, and to wonder about the devaluation of the contribution of practitioners (Kramsch, 2021: 201).

References

Al-Hoorie, A.H. (2018) The L2 motivational self system: A meta-analysis. Studies in Second Language Learning and Teaching, 8 (4) https://pressto.amu.edu.pl/index.php/ssllt/article/view/12295

Al-Hoorie, A. H. & Hiver, P. (2020) The fundamental difference hypothesis: expanding the conversation in language learning motivation. SAGE Open, 10 (3) https://journals.sagepub.com/doi/full/10.1177/2158244020945702

Al-Hoorie, A.H., Hiver, P., Kim, T.Y. & De Costa, P. I. (2021) The Identity Crisis in Language Motivation Research. Journal of Language and Social Psychology, 40 (1): 136 – 153

Dörnyei, Z. (2009) The L2 motivational self system. In Z. Dörnyei & E. Ushioda (Eds.) Motivation, language identity and the L2 self (pp. 9-42). Bristol, UK: Multilingual Matters.

Dörnyei, Z. & Hadfield, J. (2013) Motivating Learning. Harlow: Pearson

Dörnyei, Z. & Ushioda, E. (2013) Teaching and Researching Motivation 2nd Edition. Abingdon: Routledge

Gardner, R. C. & Lambert, W. E. (1959) Motivational variables in second-language acquisition. Canadian Journal of Psychology / Revue Canadienne de Psychologie, 13 (4): 266 – 272

Henry, A., Sundqvist, P. & Thorsen, C. (2019) Motivational Practice: Insights from the Classroom. Studentlitteratur

Howard, J. L., Bureau, J. S., Guay, F., Chong, J. X. Y. & Ryan, R. M. (2021) Student Motivation and Associated Outcomes: A Meta-Analysis From Self-Determination Theory. Perspectives on Psychological Science, https://doi.org/10.1177%2F1745691620966789

Kramsch, C. (2021) Language as Symbolic Power. Cambridge: CUP

Lamb, M. (2016) Motivation. In Hall, G. (Ed.) The Routledge Handbook of English Language Teaching. Abingdon: Routledge. pp. 324 – 338

Lazowski, R. & Hulleman, C. (2015) Motivation Interventions in Education: A Meta-Analytic Review. Review of Educational Research, 86 (2)

Maley, A. (2016) ‘More research is needed’ – A Mantra too Far? Humanising Language Teaching, 18 (3)

Mercer, S. & Dörnyei, Z. (2020) Engaging Language Learners in Contemporary Classrooms. Cambridge: CUP

Renandya, W.A. (2015) L2 motivation: Whose responsibility is it? English Language Teaching, 27 (4): 177-189.

Thorner, N. (2017) Motivational Teaching. Oxford: OUP

Ushioda, E. (2016) Language learning motivation through a small lens: a research agenda. Language Teaching, 49 (4): 564 – 577

When I last blogged about teacher wellbeing in August 2020, we were in the early throes of COVID, and Sarah Mercer and Tammy Gregersen had recently published their timely book about wellbeing (Mercer & Gregersen, 2020). Now, over two years later, it seems appropriate to take another look at the topic, to evaluate the status of the concept of ‘wellbeing’ in ELT.

Wellbeing as an object of study

The first thing to be said is that wellbeing is doing just fine. Since 1995, the frequency of use of ‘subjective well-being’ in books has increased by a factor of eight, and, across multiple languages, academic attention to wellbeing and related concepts like ‘happiness’ is growing (Barrington-Leigh, 2022). Interest in teacher wellbeing is no exception to this trend. There are, however, a few problems, according to a recent systematic review of the research literature (Hascher & Waber, 2021). There is, apparently, little consensus on how the term should be defined. There is little in the way of strong evidence that wellbeing correlates with good teaching, and, to my surprise, there is a lack of studies pointing to actual shortfalls in teacher wellbeing. Empirical evidence regarding the effectiveness of programmes aiming to foster teacher wellbeing is, less surprisingly, scarce.

Researchers in English language teacher wellbeing are well aware of all this and are doing their best to fill in the gaps. A ‘research group for wellbeing in language education’ has recently been formed at the University of Graz in Austria, where Sarah Mercer works. This is part of a push to promote positive psychology in language teaching publications, and the output of Sarah Mercer, Tammy Gregersen and their associates has been prodigious.

Next year will see the publication of a book-length treatment of the topic, ‘Teacher Well-Being in English Language Teaching: An Ecological Approach’ (Pentón Herrera et al, 2023). It will be interesting to see to what extent teacher wellbeing is dealt with as a social or political issue, as opposed to something amenable to the interventions of positive psychology.

In the wider world of education, wellbeing is not as frequently seen through the lens of positive psychology as it is in ELT circles. Other perspectives exist: a focus on working conditions or a focus on mental health, for example (Hascher & Waber, 2021). And then there is neuroscience and wellbeing, on which I am eagerly awaiting an ELT perspective. I have learnt that certain brain patterns are related to lower wellbeing (in the medial prefrontal cortex, posterior cingulate cortex / precuneus, and angular gyrus areas, to be gratuitously specific). Lower wellbeing correlates with patterns that are found when the brain is at wakeful rest, such as during daydreaming and mind-wandering (Bartels et al. 2022). All of which sounds, to me, like a strong argument for mindfulness practices. Keep your eye out for ELT publishers’ webinars (see below) and you’ll no doubt hear someone taking this line, along with some nice fMRI images.

Wellbeing and self-help

Academic study of wellbeing proceeds apace, but the ultimate justification for this research can only be found in its ability to help generate solutions to a real-world problem. In this sense, it is no different from the field of applied linguistics in general (from where most of the ELT wellbeing researchers come): it is its ability to solve problems which ‘alone justifies its existence in the first place’ (Widdowson, 2018: 142).

But here we run into something of a brick wall. Whilst it is generally acknowledged that improvements to teacher wellbeing require ‘structural and systemic levels of change’ and that ‘teachers should not have to compensate for fundamental flaws in the system as a whole’ (Mercer & Gregersen, 2020: 9), the ‘solutions’ that are proposed are never primarily about systems, but always about ‘me’. Take a look at any blog post on teacher wellbeing in ELT and you will see what could be called the psychologizing of the political. This process is at the heart of the positive psychology movement which so dominates the current world of wellbeing in ELT.

A look at the Teacher Wellbeing SIG of BRAZ-TESOL (on Facebook or Instagram) gives a good sample of the kind of advice that is on offer: write out a self-appreciation list, respect others, remember you are unique, be grateful, smile, develop emotional intelligence and a growth mindset, start with yourself, take care of yourself, look after your ‘authentic self’, set goals, believe that nothing is impossible, take small steps, pause and breathe, spend time with positive people, learn to say no, and so on. This advice is offered in all seriousness, but is not so very different from the kind of advice offered by @lifeadvicebot on Twitter (‘Are you struggling with the impact of sexism? Consider cultivating a sense of gratitude’ or ‘Worried about racism? Why not try stretching your back and shoulders?’).

I don’t mean to suggest that mindfulness and the other nostrums on offer will be of no benefit to anybody at all, but, however well-intentioned such advice may be, it may be ‘rather better for its promoters than for its putative beneficiaries’ (Widdowson, 2021: 47). The advice is never new or original. It is rutted with the ‘grooves of borrowed thought’, lifting directly from the long tradition of self-help literature, of which it is yet another exemplar. Like all self-improvement literature, it demands no deep commitment from its readers: it is written in an accessible style (in the case of the BRAZ-TESOL SIG, in the form of illustrated inspirational quotes), and if you do decide to dive into it repeatedly, you will quickly discover ‘that it is not such a long way from surface to bottom’ (Lichterman, 1992: 427). Like all self-help literature, it will probably have no effect whatsoever. Csikszentmihalyi (1990) made precisely this observation about self-help books, and there is a delicious irony in the fact that he made it on the back cover of his own best-selling self-help book. Like all positive psychologists, he thought he had something new and scientifically grounded to say.

There are also increasing numbers of wellbeing coaches – a thoroughly unsurprising development. Many of them are positive psychology adepts; some describe themselves as neuroscience-based or have a background in Neuro-Linguistic Programming. In the context of education, expect the phrase ‘life skills’ to be thrown in from time to time. See this article from Humanising Language Teaching as an example.

But self-help literature treads familiar ground. Work on the self may seem like ‘an antidote to the anxiety-provoking uncertainties of [our] economic and social order’ (McGee, 2005: 43), but it has nowhere to go and is doomed to follow its Sisyphean path. If research into teacher wellbeing in ELT cannot shake off its association with positive psychology and self-help, its justification (and interest in it) will soon slip away.

Wellbeing as a marketing tool

Wellbeing is ideally positioned as a marketing trope … as long as the connections between low wellbeing and pay / working conditions are not dwelled on. It’s a ‘new’ and ‘virtuous’ topic that sits comfortably beside inclusivity, sustainability and environmental awareness. Teaching is a caring profession: a marketing focus on wellbeing is intended to be taken as a sign that the marketers care too. They have your best interests at heart. And when the marketing comes in the form of wellbeing tips, the marketers are offering for free something which is known to be appreciated by many teachers. Some teacher wellbeing books, like the self-published ‘The Teacher’s Guide to Self-Care: Build Resilience, Avoid Burnout, and Bring a Happier and Healthier You to the Classroom’ (Forst, 2020), have sold in considerable quantities.

BETT, which organises a global series of education shows whose purpose is to market information technology in education, is a fascinating example of wellbeing marketing. The BETT shows and the website are packed with references to wellbeing, combining the use of wellbeing to market products unrelated to wellbeing, at the same time as marketing wellbeing products. Neat, eh? Most of these uses of ‘wellbeing’ are from the last couple of years. The website has a wellbeing ‘hub’. Click on an article entitled ‘Student Wellbeing Resources’ and you’ll be taken to a list of products you can buy. Other articles, like ‘Fostering well-being and engagement with Microsoft education solutions’, are clearer from the get-go.

All the major ELT publishers have jumped on the bandwagon. Some examples … Macmillan has a ‘wellness space’ (‘a curated playlist of on-demand webinars and practical resources to specifically support your well-being – and for you to return to as often as you like’). They were also ‘delighted to have championed mindfulness at the IATEFL conference this year!’ Pearson has a ‘wellbeing zone’ – ‘packed with free resources to support teachers, parents and young people with mental health and wellbeing – from advice on coping with anxiety and exam stress, to fun activities and mindfulness’. Last year, Express Publishing chose to market one of its readers with the following introductory line: ‘#Reading for pleasure improves #empathy, #socialrelationships and our general #wellbeing’. And on it goes.

Without going as far as to say that these are practices of ‘wellbeing washing’, it is only realistic, not cynical, to wonder just how seriously these organisations take questions of teacher wellbeing. There are certainly few ELT writers who feel that their publishers have the slightest concern about their wellbeing. Similarly, we might consider the British Council, which is ‘committed to supporting policymakers, school leaders and teachers in improving mental wellbeing in schools’. But less committed, it would seem, to their own teachers in Kabul or to their staff who went on strike earlier this year in protest at forced redundancies and outsourcing of jobs.

How long ‘wellbeing’ will continue to be seen as a useful marketing trope in ELT remains to be seen. It will be hard to sustain for very long, since there is so little to say about it without repetition, and since everyone is in on the game. My guess is that ‘wellbeing’ will soon be superseded by ‘sustainability’. ‘Sustainability’ is a better hooray word than ‘wellbeing’, because it combines environmental quality and wellbeing, throwing in ‘lifelong learning’ and ‘social justice’ for good measure (Kapranov, 2022). The wellbeing zones and hubs won’t need to be dismantled just yet, but there may well be a shift towards more sustainable self-care. Here are some top tips taken from How To Self-Care The Sustainable Way on the Wearth website: snooze your way to wellbeing, indulge and preen your body, grab a cuppa, slip into a warming bath, mindfully take care of your mind, retail therapy the wholesome way. All carbon-neutral, vegan and cruelty-free.

References

Barrington-Leigh, C. P. (2022) Trends in Conceptions of Progress and Well-being. In Helliwell, J. F., Layard, R., Sachs, J. D., De Neve, J.-E., Aknin, L. B. & Wang, S. World Happiness Report 2022. https://happiness-report.s3.amazonaws.com/2022/WHR+22.pdf  New York: Sustainable Development Solutions Network.

Bartels, M., Nes, R. B., Armitage, J. M., van de Weijer, M. P., de Vries L. P. & Haworth, C. M. A. (2022) Exploring the Biological Basis for Happiness. In Helliwell, J. F., Layard, R., Sachs, J. D., De Neve, J.-E., Aknin, L. B. & Wang, S. World Happiness Report 2022. https://happiness-report.s3.amazonaws.com/2022/WHR+22.pdf  New York: Sustainable Development Solutions Network.

Csikszentmihalyi, M. (1990) Flow: The Psychology of Optimal Experience. New York: Harper & Row

Forst, S. (2020) The Teacher’s Guide to Self-Care: Build Resilience, Avoid Burnout, and Bring a Happier and Healthier You to the Classroom. The Designer Teacher, LLC

Hascher, T. & Waber, J. (2021) Teacher well-being: A systematic review of the research literature from the year 2000–2019. Educational Research Review, 34 https://www.sciencedirect.com/science/article/pii/S1747938X21000348

Kapranov, O. (2022) The Discourse of Sustainability in English Language Teaching (ELT) at the University of Oxford: Analyzing Discursive Representations. Journal of Teacher Education for Sustainability, 24 (1):35-48 https://sciendo.com/article/10.2478/jtes-2022-0004

Pentón Herrera, L. J., Martínez-Alba, G. & Trinh, E. (Eds.) (2023) Teacher Well-Being in English Language Teaching: An Ecological Approach. Abingdon: Routledge

Lichterman, P. (1992) Self-help reading as a thin culture. Media, Culture and Society, 14: 421 – 447

McGee, M. (2005) Self-Help, Inc. Oxford: OUP

Mercer, S. & Gregersen, T. (2020) Teacher Wellbeing. Oxford: OUP

Widdowson, H. G. (2018) Applied linguistics as a transdisciplinary practice: What’s in a prefix? AILA Review, 31 (1): 135 – 142

Widdowson, H. G. (2021) On the Subject of English. Berlin: De Gruyter

This post is a piece of mediation – an attempt to help you understand the concept of mediation itself. In order to mediate this concept, I am engaging in an act of linguistic mediation, helping you to understand the language of the discourse of mediation, which may, at times, seem obscure. See, for example, the last sentence in this paragraph, a sentence which should not be taken too seriously. This is also an act of cultural mediation, a bridge between you, as reader, and the micro-culture of people who write earnestly about mediation. And, finally, since one can also mediate a text for oneself, it could also be argued that I am adopting an action-oriented approach in which I am myself a social agent and a lifelong learner, using all my plurilingual resources to facilitate pluricultural space in our multidiverse society.

Mediation has become a topic du jour since the publication of the Companion Volume of the CEFR (North et al., 2018). Since then, it has been the subject of over 20 Erasmus+ funded projects, one of which, MiLLaT (2021), a collaboration between universities in Poland, Czechia, Lithuania and Finland funded to the tune of €80,672, offers a practical guide for teachers that I’ll draw on heavily here.

This guide describes mediation as a ‘complex matter’, but I beg to differ. The guide says that ‘mediation involves facilitating understanding and communication and collaborating to construct new meaning through languaging or plurilanguaging both on the individual and social level’. Since this definition is packed with jargon, I will employ three of the six key mediation strategies to make it less opaque: streamlining (or restructuring) text, breaking down complicated information, and adjusting language (North & Piccardo, 2016: 457). Basically, mediation simply means helping to understand, in a very wide variety of ways and in the broadest possible sense. The mediation pie is big and can be sliced up in many ways: the number of categories and sub-categories makes it seem like something bigger than it is. The idea is ‘not something new or unknown’ in language teaching (MiLLaT, 2021).

What is relatively new is the language in which mediation is talked about and the way in which it is associated with other concepts, plurilingualism and pluricultural competence in particular. (Both these concepts require a separate mediating blog post to deconstruct them.) Here, though, I’ll focus briefly on the kinds of language that are used to talk about mediation. A quick glossary:

  • facilitating collaborative interaction with peers = communicative pair / group work
  • facilitating pluricultural space = texts / discussion with cultural content
  • collaborating in a group – collaborating to construct meaning = group work
  • facilitating communication in delicate situations and disagreements = more group work
  • relaying specific information in writing = writing a report
  • processing text in writing = writing a summary

See? It’s not all that complex, after all.

Neither, it must be said, is there anything new about the activities that have been proposed to promote mediation skills. MiLLaT offers 39 classroom activities, divided up into those suitable for synchronous and asynchronous classes. Some are appropriate for polysynchronous classes – which simply means a mixture of synchronous and asynchronous, in case you were wondering.

To make things clearer still, here is a selection of the activities suggested in MiLLaT. I’ll spare you the lengthy explanations of precisely which mediation skills and strategies these activities are supposed to develop.

  • Students read texts and watch videos about malaria, before working in groups to develop a strategy to eradicate malaria from a particular village.
  • Students do a jigsaw reading or video viewing, discuss the information they have come across and do a follow-up task (e.g. express their own opinions, make a presentation).
  • Students read an article / watch a video (about Easter in another country), do some ‘lexical and comprehension activities’, then post messages on a discussion forum about how they will spend Easter.
  • Students read a text about Easter in Spain from an authentic source in Spanish, complete a fill-in-the-blanks exercise using the information and practising the vocabulary they learned from the text, then describe a local event / holiday themselves.
  • Students read a text about teachers, discuss the features of good/bad educators and create a portrait of an ideal teacher.
  • Students read extracts from the CEFR, interview a teacher (in L1) about the school’s plurilingual practices, then make a presentation on the topic in L2.
  • One student shows the others some kind of visual presentation. The rest discuss it in groups, before the original student tells the others about it and leads a discussion.
  • Students analyse a text on Corporate Social Responsibility, focusing on the usage of relevant vocabulary.
  • Students working in groups ‘teach’ a topic to their group members using figures/diagrams.
  • Students read a text about inclusive writing, then identify examples of inclusive language from a ‘Politically Correct Bedtime Story’, reflect on these examples, posting their thoughts in a forum.
  • Students watch a TED talk and write down the top five areas they paid attention to when watching the talk, share a summary of their observations with the rest of their group, and give written feedback to the speaker.
  • Students read a text and watch a video about note-taking and mindmapping, before reading an academic text and rendering it as a mindmap.
  • Students explore a range of websites and apps that may be helpful for self-study.
  • Students practise modal verbs by completing a gapped transcript of an extract from ‘Schindler’s List’.
  • Students practise regular and irregular pasts by gap-filling the song ‘Don’t Cry for Me Argentina’.
  • Students practise the present continuous by giving a running commentary on an episode of ‘Mr Bean’.

You could be forgiven for wondering what some of this has to do with mediation. Towards the end of this list, some of the examples are not terribly communicative or real-world, but they could legitimately be described as pedagogical mediation. Or ‘teaching’, for short.

Much could be said about the quality of some of the MiLLaT activities, the lack of originality, the (lack of) editing, topics that are already dated, copyright issues, and even the value of the activities. Was this really worth €80,000? However, the main point I’d like to make is that, when it comes down to classroom practicalities, you won’t find anything new. Rather than trawling through the MiLLaT documents, I’d recommend you splash out on Chiappini and Mansur’s (2021) ‘Activities for Mediation’ if you’re looking for some ready-made mediation ideas. Alternatively, take any tried and tested communicative classroom task, and describe it using some mediation jargon. If you do this, you’ll have the added bonus of practising your own mediation strategies (you could, for example, read the CEFR Companion Volume in a language other than your own, mentally translate into another language, and then amplify the text using the jargon from the CEFR CV). It will do wonders for your sociolinguistic, pragmatic, plurilingual and pluricultural competences.

Now that we have mediation etherized upon a table, there is an overwhelming question that cannot be avoided. Is the concept of mediation worth it, after all? I like the fact that mediation between two or more languages (Stathopoulou, 2015) has helped to legitimize interlingual activities in the ELT classroom, but such legitimization does not really require the notion of mediation. This is more than amply provided for by research into L1 use in L2 learning, as well as by the theoretical framework of translanguaging. But beyond that? I’m certainly not the first person to have asked the question. Bart Deygers (2019), for example, argues that the CEFR CV ‘does not truly engage with well-founded criticism’, and neither does it ‘refer to the many empirical studies that have been conducted since 2001’ that could have helped it. He refers to a ‘hermetic writing style’ and its use of ‘vague and impressionistic language’. Mediation, he argues, would be better seen ‘as a value statement rather than as a real theoretical-conceptual innovation’. From the list of practical activities above, it would also be hard to argue that there is anything innovative in its classroom implementation. Mediation advocates will respond by saying ‘that is not what we meant at all, that is not it, at all’ as they come and go, talking of North and Piccardo. Mediation may offer rich pickings for grants of various kinds, it may seem to be a compelling topic for conference presentations, training courses and publications, but I’m not convinced it has much else going for it.

References

Chiappini, R. & Mansur, E. (2021). Activities for Mediation. Stuttgart: Delta Publishing

Deygers, B. (2019). The CEFR Companion Volume: Between Research-Based Policy and Policy-Based Research. Applied Linguistics, 0 (0): 1 – 7

MiLLaT (Mediation in Language Learning and Teaching). (2021). Guide for Language Teachers: Traditional and Synchronous Tasks https://ec.europa.eu/programmes/erasmus-plus/project-result-content/2d9860e2-96ee-46aa-9bc6-1595cfcd1893/MiLLaT_Guide_for_Teachers_IO_03.pdf and Guide for Language Teachers: Asynchronous and Polysynchronous Tasks https://ec.europa.eu/programmes/erasmus-plus/project-result-content/3d819e5a-35d7-4137-a2c8-697d22bf6b79/Materials_Developing_Mediation_for_Asynchronous_and_Polysynchronous_Online_Courses_1_.pdf

North, B. & Piccardo, E. (2016). Developing illustrative descriptors of aspects of mediation for the Common European Framework of Reference (CEFR): A Council of Europe Project. Language Teaching, 49 (3): 455 – 459

North, B., Goodier, T., Piccardo, E. et al. (2018). Common European Framework of Reference for Languages: Learning, Teaching, Assessment. Companion Volume With New Descriptors. Strasbourg: Council of Europe

Stathopoulou, M. (2015). Cross-Language Mediation in Foreign Language Teaching and Testing. Bristol: Multilingual Matters