Archive for March, 2023

When the internet arrived on our desktops in the 1990s, language teachers found themselves able to access huge amounts of authentic texts of all kinds. It was a true game-changer. But when it came to dedicated ELT websites, the pickings were much slimmer. There was a small number of good ELT resource sites (onestopenglish stood out from the crowd), but more ubiquitous and more enduring were the sites offering downloadable material shared by teachers. One of these, ESLprintables, currently has 1,082,522 registered users, compared to the 700,000+ of onestopenglish.

The resources on offer at sites such as these range from texts and scripted dialogues with accompanying comprehension questions, to grammar explanations, gapfills and vocabulary matching tasks, to lists of prompts for discussion. Almost all of it is unremittingly awful, a terrible waste of the internet’s potential.

Ten years later, interactive online possibilities began to appear. Before long, language teachers found themselves able to use things like blogs, wikis and Google Docs. It was another true game-changer. But when it came to dedicated ELT tools, the pickings were much slimmer. There is some useful stuff out there (flashcard apps, for example), but more ubiquitous are interactive versions of the downloadable dross that already existed. Learning platforms, which have such rich possibilities, are mostly loaded with gapfills, drag-and-drop, multiple choice, and so on. Again, it seems such a terrible waste of the technology’s potential. And all of this runs counter to what we know about how people learn another language. It’s as if decades of research into second language acquisition had never taken place.

And now we have AI and large language models like GPT. The possibilities are rich, and quite a few people, like Sam Gravell and Svetlana Kandybovich, have already started suggesting interesting and creative ways of using the technology for language teaching. Sadly, though, technology has a tendency to bring out the worst in approaches to language teaching, since there’s always a bandwagon to be jumped on. Welcome to Twee, ‘A.I. powered tools for English teachers’, where you can generate your own dross in a matter of seconds. You can generate texts and dialogues, pitched at one of three levels, with or without target vocabulary, and produce comprehension questions (open questions, true/false, or multiple choice), exercises where vocabulary has to be matched to definitions, word-formation exercises and gapfills. The name of the site has been carefully chosen (the Cambridge Dictionary defines ‘twee’ as ‘artificially attractive’).

I decided to give it a try. Twee uses the same technology as ChatGPT, and the results were unsurprising. I won’t comment in any detail on the intrinsic interest or the factual accuracy of the texts: they are what you might expect if you have experimented with ChatGPT. For the same reason, I won’t go into detail about the credibility or naturalness of the dialogues. Similarly, Twee’s ability to gauge the appropriacy of texts for particular levels is poor: it hasn’t been trained on a tagged learner corpus. In any case, having only three level bands (A1/A2, B1/B2 and C1/C2) means that levelling is far too approximate. Suffice to say that the comprehension questions, vocabulary-item selection and vocabulary practice activities would all require very heavy editing.

Twee is still in beta, and, no doubt, improvements will come as the large language models on which it draws get bigger and better. Bilingual functionality is a necessary addition, and is doable. More reliable level-matching would be nice, but it’s a huge technological challenge, besides being theoretically problematic. But bigger problems remain and these have nothing to do with technology. Take a look at the examples below of how Twee suggests its reading comprehension tasks (open questions, M / C, T / F) could be used with some Beatles songs.

Is there any point getting learners to look at a ‘dialogue’ (on the topic of yellow submarines) like the one below? Is there any point getting learners to write essays using prompts such as those below?

What possible learning value could tasks such as these have? Is there any credible theory of language learning behind any of this, or is it just stuff that would while away some classroom time? AI meets ESLprintables – what a waste of the technology’s potential!

Edtech vendors like to describe their products as ‘solutions’, but the educational challenges to which these products are supposedly solutions often remain unexamined. Poor use of technology can exacerbate these challenges by making inappropriate learning materials more easily available.

One of the most common criticisms of schooling is that it typically requires learners to study in lockstep, with everyone expected to use the same learning material at the same pace to achieve the same learning objectives. From everything we know about individual learner differences, this is an unreasonable and unrealisable expectation. It is only natural, therefore, that we should assume that self-paced learning is a better option. Self-paced learning is at the heart of technology-driven personalized learning. Often, it is the only meaningfully personalized aspect of technology-delivered courses.

Unfortunately, almost one hundred years of attempts to introduce elements of self-pacing into formal language instruction have failed to produce conclusive evidence of its benefits. For a more detailed look at the history of these failures, see my blog post on the topic, and for a more detailed look at Programmed Learning, a 1960s attempt to introduce self-pacing, see this post. This is not to say that self-pacing does not have a potentially important role to play. However, history should act as a warning that the simple provision of self-pacing opportunities through technology may be a necessary condition for successful self-pacing, but it is not a sufficient condition.

Of all the different areas of language learning that can be self-paced, I’ve long thought that technology might help the development of listening skills the most. Much contemporary real-world listening is, in any case, self-paced: why should the classroom not be? With online listening, we can use a variety of help options (Cross, 2017) – pause, rewind, speed control, speech-to-text, dictionary look-up, video / visual support – and we control the frequency and timing of this use. Online listening has become a ‘semi-recursive activity, less dependent on transient memory, inching its way closer to reading’ (Robin, 2007: 110). We don’t know which of these help options and which permutations of these options are most likely to lead to gains in listening skills, but it seems reasonable to believe that some of these options have strong potential. It is perhaps unlikely that research could ever provide a definitive answer to the question of optimal help options: different learners have different needs and different preferences (Cárdenas-Claros & Gruba, 2014). But what is clear is that self-pacing is necessary for these options to be used.

Moving away from whole-class lockstep listening practice towards self-paced independent listening has long been advocated by experts. John Field (2008: 47) identified a key advantage of independent listening: a learner ‘can replay the recording as often as she needs (achieving the kind of recursion that reading offers) and can focus upon specific stretches of the input which are difficult for her personally rather than for the class as a whole’. More recently, interest has also turned to the possibility of self-paced listening in assessment practices (Goodwin, 2017).

So, self-paced listening: what’s not to like? I’ve been pushing it with the teachers I work with for some time. But a recent piece of research from Kathrin Eberharter and colleagues (Eberharter et al., 2023) has given me pause for thought. The researchers wanted to know what effect self-pacing would have on the assessment of listening comprehension in a group of young teenage Austrian learners. They were particularly interested in how learners with SpLDs would be affected, and assumed that self-pacing would boost these learners’ performance. Disappointingly, this assumption proved wrong. Not only did self-pacing have, on average, no measurable impact on performance; it also seems that self-pacing may have put learners with shorter working-memory capacity and L1 literacy-related challenges at a disadvantage.

This research concerned self-paced listening in assessment (in this case the TOEFL Junior Standard test), not in learning. But might self-paced listening as part of a learning programme not be quite as beneficial as we might hope? The short answer, as ever, is probably that it depends. Eberharter et al. speculate that young learners ‘might need explicit training and more practice in regulating their strategic listening behaviour in order to be able to improve their performance with the help of self-pacing’. This probably holds true for many older learners, too. In other words, it’s not the possibility of self-pacing in itself that will make a huge difference: it’s what a learner does or does not do while they are self-pacing that matters. To benefit from the technological affordances of online listening, learners need to know which strategies (and which tools) may help them. They may need ‘explicit training in exploiting the benefits of navigational freedom to enhance their metacognitive strategy use’ (Eberharter et al., 2023: 17). This shouldn’t surprise us: the role of metacognition is well established (Goh & Vandergrift, 2021).

As noted earlier, we do not really know which permutations of help options are likely to be of most help, but it is a relatively straightforward matter to encourage learners to experiment with them. We do, however, have a much clearer idea of the kinds of listening strategies that are likely to have a positive impact, and the most obvious way of providing this training is in the classroom. John Field (2008) suggested many approaches; Richard Cauldwell (2013) offers more; and Sheila Thorn’s recent ‘Integrating Authentic Listening into the Language Classroom’ (2021) adds yet more. If learners’ metacognitive knowledge, effective listening and help-option skills are going to develop, the training will need to involve ‘a cyclic approach […] throughout an entire course’ (Cross, 2017: 557).

If, on the other hand, our approach to listening in the classroom continues to be (as it is in so many coursebooks) one of testing listening through comprehension questions, we should not be too surprised when learners have little idea which strategies to adopt when technology allows self-pacing. Self-paced self-testing of listening comprehension is likely to be of limited value.


Cárdenas-Claros, M. S. & Gruba, P. A. (2014) Listeners’ interactions with help options in CALL. Computer Assisted Language Learning, 27 (3): 228–245.

Cauldwell, R. (2013) Phonology for Listening: Teaching the Stream of Speech. Speech in Action.

Cross, J. (2017) Help options for L2 listening in CALL: A research agenda. Language Teaching, 50 (4): 544–560.

Eberharter, K., Kormos, J., Guggenbichler, E., Ebner, V. S., Suzuki, S., Moser-Frötscher, D., Konrad, E. & Kremmel, B. (2023) Investigating the impact of self-pacing on the L2 listening performance of young learner candidates with differing L1 literacy skills. Language Testing. https://doi.org/10.1177/02655322221149642

Field, J. (2008) Listening in the Language Classroom. Cambridge: Cambridge University Press.

Goh, C. C. M. & Vandergrift, L. (2021) Teaching and Learning Second Language Listening: Metacognition in Action (2nd ed.). Routledge.

Goodwin, S. J. (2017) Locus of control in L2 English listening assessment [Doctoral dissertation]. Georgia State University.

Robin, R. (2007) Commentary: Learner-based listening and technological authenticity. Language Learning & Technology, 11 (1): 109–115.

Thorn, S. (2021) Integrating Authentic Listening into the Language Classroom. Shoreham-by-Sea: Pavilion.