
When the internet arrived on our desktops in the 1990s, language teachers found themselves able to access huge amounts of authentic texts of all kinds. It was a true game-changer. But when it came to dedicated ELT websites, the pickings were much slimmer. There were a very small number of good ELT resource sites (onestopenglish stood out from the crowd), but more ubiquitous and more enduring were the sites offering downloadable material shared by teachers. One of these, ESLprintables.com, currently has 1,082,522 registered users, compared to the 700,000+ of onestopenglish.

The resources on offer at sites such as these range from texts and scripted dialogues with accompanying comprehension questions, to grammar explanations, gapfills and vocabulary-matching tasks, to lists of discussion prompts. Almost all of it is unremittingly awful, a terrible waste of the internet’s potential.

Ten years later, interactive online possibilities began to appear. Before long, language teachers found themselves able to use things like blogs, wikis and Google Docs. It was another true game-changer. But when it came to dedicated ELT tools, the pickings were much slimmer. There is some useful stuff out there (flashcard apps, for example), but more ubiquitous are interactive versions of the downloadable dross that already existed. Learning platforms, which have such rich possibilities, are mostly loaded with gapfills, drag-and-drop, multiple choice, and so on. Again, it seems such a terrible waste of the technology’s potential. And all of this runs counter to what we know about how people learn another language. It’s as if decades of research into second language acquisition had never taken place.

And now we have AI and large language models like GPT. The possibilities are rich, and quite a few people, like Sam Gravell and Svetlana Kandybovich, have already started suggesting interesting and creative ways of using the technology for language teaching. Sadly, though, technology has a tendency to bring out the worst in approaches to language teaching, since there’s always a bandwagon to be jumped on. Welcome to Twee, ‘A.I. powered tools for English teachers’, where you can generate your own dross in a matter of seconds. You can generate texts and dialogues, pitched at one of three levels, with or without target vocabulary, and produce comprehension questions (open questions, T / F or M / C), exercises where vocabulary has to be matched to definitions, word-formation exercises and gapfills. The name of the site has been carefully chosen (the Cambridge Dictionary defines ‘twee’ as ‘artificially attractive’).

I decided to give it a try. Twee uses the same technology as ChatGPT, and the results were unsurprising. I won’t comment in any detail on the intrinsic interest or the accuracy of factual information in the texts: they are what you might expect if you have experimented with ChatGPT. For the same reason, I won’t go into details about the credibility or naturalness of the dialogues. Similarly, Twee’s ability to gauge the appropriacy of texts for particular levels is poor: it hasn’t been trained on a tagged learner corpus. In any case, having only three level bands (A1/A2, B1/B2 and C1/C2) means that levelling is far too approximate. Suffice it to say that the comprehension questions, vocabulary-item selection and vocabulary practice activities would all require very heavy editing.

Twee is still in beta, and, no doubt, improvements will come as the large language models on which it draws get bigger and better. Bilingual functionality is a necessary addition, and is doable. More reliable level-matching would be nice, but it’s a huge technological challenge, besides being theoretically problematic. But bigger problems remain and these have nothing to do with technology. Take a look at the examples below of how Twee suggests its reading comprehension tasks (open questions, M / C, T / F) could be used with some Beatles songs.

Is there any point getting learners to look at a ‘dialogue’ (on the topic of yellow submarines) like the one below? Is there any point getting learners to write essays using prompts such as those below?

What possible learning value could tasks such as these have? Is there any credible theory of language learning behind any of this, or is it just stuff that would while away some classroom time? AI meets ESLprintables – what a waste of the technology’s potential!

Edtech vendors like to describe their products as ‘solutions’, but the educational challenges, which these products are supposedly solutions to, often remain unexamined. Poor use of technology can exacerbate these challenges by making inappropriate learning materials more easily available.

One of the most common criticisms of schooling is that it typically requires learners to study in lockstep, with everyone expected to use the same learning material at the same pace to achieve the same learning objectives. From everything we know about individual learner differences, this is an unreasonable and unrealisable expectation. It is only natural, therefore, that we should assume that self-paced learning is a better option. Self-paced learning is at the heart of technology-driven personalized learning. Often, it is the only meaningfully personalized aspect of technology-delivered courses.

Unfortunately, almost one hundred years of attempts to introduce elements of self-pacing into formal language instruction have failed to produce conclusive evidence of its benefits. For a more detailed look at the history of these failures, see my blog post on the topic, and for a more detailed look at Programmed Learning, a 1960s attempt to introduce self-pacing, see this post. This is not to say that self-pacing does not have a potentially important role to play. However, history should act as a warning that the simple provision of self-pacing opportunities through technology may be a necessary condition for successful self-pacing, but it is not a sufficient condition.

Of all the different areas of language learning that can be self-paced, I’ve long thought that technology might help the development of listening skills the most. Much contemporary real-world listening is, in any case, self-paced: why should the classroom not be? With online listening, we can use a variety of help options (Cross, 2017) – pause, rewind, speed control, speech-to-text, dictionary look-up, video / visual support – and we control the frequency and timing of this use. Online listening has become a ‘semi-recursive activity, less dependent on transient memory, inching its way closer to reading’ (Robin, 2007: 110). We don’t know which of these help options and which permutations of these options are most likely to lead to gains in listening skills, but it seems reasonable to believe that some of these options have strong potential. It is perhaps unlikely that research could ever provide a definitive answer to the question of optimal help options: different learners have different needs and different preferences (Cárdenas-Claros & Gruba, 2014). But what is clear is that self-pacing is necessary for these options to be used.

Moving away from whole-class lockstep listening practice towards self-paced independent listening has long been advocated by experts. John Field (2008: 47) identified a key advantage of independent listening: a learner ‘can replay the recording as often as she needs (achieving the kind of recursion that reading offers) and can focus upon specific stretches of the input which are difficult for her personally rather than for the class as a whole’. More recently, interest has also turned to the possibility of self-paced listening in assessment practices (Goodwin, 2017).

So, self-paced listening: what’s not to like? I’ve been pushing it with the teachers I work with for some time. But a recent piece of research from Kathrin Eberharter and colleagues (Eberharter et al., 2023) has given me pause for thought. The researchers wanted to know what effect self-pacing would have on the assessment of listening comprehension in a group of young teenage Austrian learners. They were particularly interested in how learners with SpLDs would be affected, and assumed that self-pacing would boost the performance of these learners. Disappointingly, they were wrong. Not only did self-pacing have, on average, no measurable impact on performance, it also seems that self-pacing may have put learners with shorter working-memory capacity and L1 literacy-related challenges at a disadvantage.

This research concerned self-paced listening in assessment (in this case the TOEFL Junior Standard test), not in learning. But might self-paced listening as part of a learning programme not be quite as beneficial as we might hope? The short answer, as ever, is probably that it depends. Eberharter et al. speculate that young learners ‘might need explicit training and more practice in regulating their strategic listening behaviour in order to be able to improve their performance with the help of self-pacing’. This probably holds true for many older learners, too. In other words, it’s not the possibility of self-pacing in itself that will make a huge difference: it’s what a learner does or does not do while they are self-pacing that matters. To benefit from the technological affordances of online listening, learners need to know which strategies (and which tools) may help them. They may need ‘explicit training in exploiting the benefits of navigational freedom to enhance their metacognitive strategy use’ (Eberharter et al., 2023: 17). This shouldn’t surprise us: the role of metacognition is well established (Goh & Vandergrift, 2021).

As noted earlier, we do not really know which permutations of help options are likely to be of most help, but it is a relatively straightforward matter to encourage learners to experiment with them. We do, however, have a much clearer idea of the kinds of listening strategies that are likely to have a positive impact, and the most obvious way of providing this training is in the classroom. John Field (2008) suggested many approaches; Richard Cauldwell (2013) offers more; and Sheila Thorn’s recent ‘Integrating Authentic Listening into the Language Classroom’ (2021) adds yet more. If learners’ metacognitive knowledge, effective listening and help-option skills are going to develop, the training will need to involve ‘a cyclic approach […] throughout an entire course’ (Cross, 2017: 557).

If, on the other hand, our approach to listening in the classroom continues to be (as it is in so many coursebooks) one of testing listening through comprehension questions, we should not be too surprised when learners have little idea which strategies to adopt when technology allows self-pacing. Self-paced self-testing of listening comprehension is likely to be of limited value.

References

Cárdenas-Claros, M. S. & Gruba, P. A. (2014) Listeners’ interactions with help options in CALL. Computer Assisted Language Learning, 27 (3): 228–245

Cauldwell, R. (2013) Phonology for Listening: Teaching the Stream of Speech. Speech in Action

Cross, J. (2017) Help options for L2 listening in CALL: A research agenda. Language Teaching, 50 (4), 544–560. https://doi.org/10.1017/S0261444817000209

Eberharter, K., Kormos, J., Guggenbichler, E., Ebner, V. S., Suzuki, S., Moser-Frötscher, D., Konrad, E. & Kremmel, B. (2023) Investigating the impact of self-pacing on the L2 listening performance of young learner candidates with differing L1 literacy skills. Language Testing. https://doi.org/10.1177/02655322221149642

Field, J. (2008) Listening in the Language Classroom. Cambridge: Cambridge University Press

Goh, C. C. M. & Vandergrift, L. (2021) Teaching and learning second language listening: Metacognition in action (2nd ed.). Routledge. https://doi.org/10.4324/9780429287749

Goodwin, S. J. (2017) Locus of control in L2 English listening assessment [Doctoral dissertation]. Georgia State University. https://scholarworks.gsu.edu/cgi/viewcontent.cgi?article=1037&context=alesl_diss

Robin, R. (2007) Commentary: Learner-based listening and technological authenticity. Language Learning & Technology, 11 (1): 109-115. https://www.lltjournal.org/item/461/

Thorn, S. (2021) Integrating Authentic Listening into the Language Classroom. Shoreham-by-Sea: Pavilion

This post is a piece of mediation – an attempt to help you understand the concept of mediation itself. In order to mediate this concept, I am engaging in an act of linguistic mediation, helping you to understand the language of the discourse of mediation, which may, at times, seem obscure. See, for example, the last sentence in this paragraph, a sentence which should not be taken too seriously. This is also an act of cultural mediation, a bridge between you, as reader, and the micro-culture of people who write earnestly about mediation. And, finally, since one can also mediate a text for oneself, it could also be argued that I am adopting an action-oriented approach in which I am myself a social agent and a lifelong learner, using all my plurilingual resources to facilitate pluricultural space in our multidiverse society.

Mediation has become a topic du jour since the publication of the Companion Volume of the CEFR (North et al., 2018). Since then, it has been the subject of over 20 Erasmus+ funded projects, one of which, MiLLaT (2021), funded to the tune of €80,672 and a collaboration between universities in Poland, Czechia, Lithuania and Finland, offers a practical guide for teachers, which I’ll draw on heavily here.

This guide describes mediation as a ‘complex matter’, but I beg to differ. The guide says that ‘mediation involves facilitating understanding and communication and collaborating to construct new meaning through languaging or plurilanguaging both on the individual and social level’. Since this definition is packed with jargon, I will employ three of the six key mediation strategies to make it less opaque: streamlining (or restructuring) text, breaking down complicated information, and adjusting language (North & Piccardo, 2016: 457). Basically, mediation simply means helping to understand, in a very wide variety of ways and in the broadest possible sense. The mediation pie is big and can be sliced up in many ways: the number of categories and sub-categories makes it seem like something bigger than it is. The idea is ‘not something new or unknown’ in language teaching (MiLLaT, 2021).

What is relatively new is the language in which mediation is talked about and the way in which it is associated with other concepts, plurilingualism and pluricultural competence in particular. (Both these concepts require a separate mediating blog post to deconstruct them.) Here, though, I’ll focus briefly on the kinds of language that are used to talk about mediation. A quick glossary:

  • facilitating collaborative interaction with peers = communicative pair / group work
  • facilitating pluricultural space = texts / discussion with cultural content
  • collaborating in a group: collaborating to construct meaning = group work
  • facilitating communication in delicate situations and disagreements = more group work
  • relaying specific information in writing = writing a report
  • processing text in writing = writing a summary

See? It’s not all that complex, after all.

Neither, it must be said, is there anything new about the activities that have been proposed to promote mediation skills. MiLLaT offers 39 classroom activities, divided up into those suitable for synchronous and asynchronous classes. Some are appropriate for polysynchronous classes – which simply means a mixture of synchronous and asynchronous, in case you were wondering.

To make things clearer still, here is a selection of the activities suggested in MiLLaT. I’ll spare you the lengthy explanations of precisely which mediation skills and strategies these activities are supposed to develop.

  • Students read texts and watch videos about malaria, before working in groups to develop a strategy to eradicate malaria from a particular village.
  • Students do a jigsaw reading or video viewing, discuss the information they have come across and do a follow-up task (e.g. express their own opinions, make a presentation).
  • Students read an article / watch a video (about Easter in another country), do some ‘lexical and comprehension activities’, then post messages on a discussion forum about how they will spend Easter.
  • Students read a text about Easter in Spain from an authentic source in Spanish, complete a fill-in-the-blanks exercise using the information and practising the vocabulary they learned from the text, then describe a local event / holiday themselves.
  • Students read a text about teachers, discuss the features of good/bad educators and create a portrait of an ideal teacher.
  • Students read extracts from the CEFR, interview a teacher (in L1) about the school’s plurilingual practices, then make a presentation on the topic in L2.
  • One student shows the others some kind of visual presentation. The rest discuss it in groups, before the original student tells the others about it and leads a discussion.
  • Students analyse a text on Corporate Social Responsibility, focusing on the usage of relevant vocabulary.
  • Students working in groups ‘teach’ a topic to their group members using figures/diagrams.
  • Students read a text about inclusive writing, then identify examples of inclusive language from a ‘Politically Correct Bedtime Story’, reflect on these examples, posting their thoughts in a forum.
  • Students watch a TED talk and write down the top five areas they paid attention to when watching the talk, share a summary of their observations with the rest of their group, and give written feedback to the speaker.
  • Students read a text and watch a video about note-taking and mindmapping, before reading an academic text and rendering it as a mindmap.
  • Students explore a range of websites and apps that may be helpful for self-study.
  • Students practise modal verbs by completing a gapped transcript of an extract from ‘Schindler’s List’.
  • Students practise regular and irregular pasts by gap-filling the song ‘Don’t Cry for Me Argentina’.
  • Students practise the present continuous by giving a running commentary on an episode of ‘Mr Bean’.

You could be forgiven for wondering what some of this has to do with mediation. Towards the end of this list, some of the examples are not terribly communicative or real-world, but they could legitimately be described as pedagogical mediation. Or ‘teaching’, for short.

Much could be said about the quality of some of the MiLLaT activities, the lack of originality, the (lack of) editing, topics that are already dated, copyright issues, and even the value of the activities. Was this really worth €80,000? However, the main point I’d like to make is that, when it comes down to classroom practicalities, you won’t find anything new. Rather than trawling through the MiLLaT documents, I’d recommend you splash out on Chiappini and Mansur’s (2021) ‘Activities for Mediation’ if you’re looking for some ready-made mediation ideas. Alternatively, take any tried and tested communicative classroom task, and describe it using some mediation jargon. If you do this, you’ll have the added bonus of practising your own mediation strategies (you could, for example, read the CEFR Companion Volume in a language other than your own, mentally translate into another language, and then amplify the text using the jargon from the CEFR CV). It will do wonders for your sociolinguistic, pragmatic, plurilingual and pluricultural competences.

Now that we have mediation etherized upon a table, there is an overwhelming question that cannot be avoided. Is the concept of mediation worth it, after all? I like the fact that mediation between two or more languages (Stathopoulou, 2015) has helped to legitimize interlingual activities in the ELT classroom, but such legitimization does not really require the notion of mediation. It is more than amply provided for by research into L1 use in L2 learning, as well as by the theoretical framework of translanguaging. But beyond that? I’m certainly not the first person to have asked the question. Bart Deygers (2019), for example, argues that the CEFR CV ‘does not truly engage with well-founded criticism’, and neither does it ‘refer to the many empirical studies that have been conducted since 2001’ that could have helped it. He refers to a ‘hermetic writing style’ and its use of ‘vague and impressionistic language’. Mediation, he argues, would be better seen ‘as a value statement rather than as a real theoretical-conceptual innovation’. From the list of practical activities above, it would also be hard to argue that there is anything innovative in its classroom implementation. Mediation advocates will respond by saying ‘that is not what we meant at all, that is not it, at all’ as they come and go, talking of North and Piccardo. Mediation may offer rich pickings for grants of various kinds, and it may seem a compelling topic for conference presentations, training courses and publications, but I’m not convinced it has much else going for it.

References

Chiappini, R. & Mansur, E. (2021). Activities for Mediation. Stuttgart: Delta Publishing

Deygers, B. (2019). The CEFR Companion Volume: Between Research-Based Policy and Policy-Based Research. Applied Linguistics: 1–7

MiLLaT (Mediation in Language Learning and Teaching). (2021). Guide for Language Teachers: Traditional and Synchronous Tasks https://ec.europa.eu/programmes/erasmus-plus/project-result-content/2d9860e2-96ee-46aa-9bc6-1595cfcd1893/MiLLaT_Guide_for_Teachers_IO_03.pdf and Guide for Language Teachers: Asynchronous and Polysynchronous Tasks https://ec.europa.eu/programmes/erasmus-plus/project-result-content/3d819e5a-35d7-4137-a2c8-697d22bf6b79/Materials_Developing_Mediation_for_Asynchronous_and_Polysynchronous_Online_Courses_1_.pdf

North, B. & Piccardo, E. (2016). Developing illustrative descriptors of aspects of mediation for the Common European Framework of Reference (CEFR): A Council of Europe Project. Language Teaching, 49 (3): 455–459

North, B., Goodier, T., Piccardo, E. et al. (2018). Common European Framework of Reference for Languages: Learning, Teaching, Assessment. Companion Volume With New Descriptors. Strasbourg: Council of Europe

Stathopoulou, M. (2015). Cross-Language Mediation in Foreign Language Teaching and Testing. Bristol: Multilingual Matters

The world of language learning and teaching is full of theoretical constructs and claims, most of which have their moment of glory in the sun before being eclipsed and disappearing from view. In a recent article looking at the theoretical claims of translanguaging enthusiasts, Jim Cummins (2021) suggests that three criteria might be used to evaluate them:

1 Empirical adequacy – to what extent is the claim consistent with all the relevant empirical evidence?

2 Logical coherence – to what extent is the claim internally consistent and non-contradictory?

3 Consequential validity – to what extent is the claim useful in promoting effective pedagogy and policies?

Take English as a Lingua Franca (ELF), for example. In its early days, there was much excitement about developing databases of ELF usage in order to identify those aspects of pronunciation and lexico-grammar that mattered for intercultural intelligibility. The Lingua Franca Core (a list of pronunciation features that cause intelligibility problems in ELF settings when ELF users get them wrong) proved to be the most lasting product of the early empirical research into ELF (Jenkins, 2000). It made intuitive good sense, was potentially empowering for learners and teachers, was clearly a useful tool in combating native-speakerism, and was relatively easy to implement in educational policy and practice.

But problems with the construct of ELF quickly appeared. ELF was a positive reframing of the earlier notion of interlanguage – an idea that had deficit firmly built in, since interlanguage was a point that a language learner had reached somewhere on the way to being like a native-speaker. Interlanguage contained elements of the L1, and this led to interest in how such elements might become fossilized, a metaphor with very negative connotations. With a strong desire to move away from framings of deficit, ELF recognised and celebrated code-switching as an integral element in ELF interactions (Seidlhofer, 2011: 105). Deviations from idealised native-speaker norms of English were no longer to be seen as errors in need of correction, but as legitimate forms of the language (of ELF) itself.

However, it soon became clear that it was not possible to describe ELF in terms of the particular language forms that its users employed. In response, ELF researchers reframed ELF. The focus shifted to how people of different language backgrounds used English to communicate in particular situations – how they languaged, in other words. ELF was no longer a thing, but an action. This helped in terms of internal consistency, but most teachers remained unclear about how the ELF.2 insight should impact on their classroom practices. If we can’t actually say what ELF looks like, what are teachers supposed to do with the idea? And much as we might like to wish away the idea of native speakers (and their norms), these ideas are very hard to expunge completely (MacKenzie, 2014: 170).

Twenty years after ELF became widely used as a term, ELF researchers lament the absence of any sizable changes in classroom practices (Bayyurt & Dewey, 2020). There are practices that meet the ELF seal of approval (see, for example, Kiczkowiak & Lowe, 2018), and these include an increase in exposure to the diversity of English use worldwide, engagement in critical classroom discussion about the globalisation of the English language, and non-penalisation of innovative, but intelligible forms (Galloway, 2018: 471). It is, however, striking that these practices long pre-date the construct of ELF. They are not direct products of ELF.

Part of the ‘problem’, as ELF researchers see it, has been that ELF has been so hard to define. Less generously, we might suggest that the construct of ELF was flawed from the start. Useful, no doubt, as a heuristic, but time to move on. Jennifer Jenkins, one of the best-known names in ELF, has certainly not been afraid to move on. Her article (Jenkins, 2015) refines ELF.2 into ELF.3, which she now labels ‘English as a Multilingual Franca’. In this reframed model, ELF is not so much concerned with the difference between native speakers and non-native speakers as with the difference between monolinguals and multilinguals. Multilingualism, rather than ‘English’, is now the superordinate attribute. Since ELF.3 is about interactions, rather than a collection of forms, it follows that ELF may not actually contain any English forms at all. There is a logic here, albeit somewhat convoluted, but there’s a problem for ELF as a construct, too. If ELF is fundamentally about multilingual communication, what need is there for the term ‘ELF’? ‘Translanguaging’ will do perfectly well instead. The graph from Google Trends reveals the rises and falls of these two terms in the academic discourse space. After peaking in 2008, the term ‘English as a Lingua Franca’ now appears to be in irreversible decline.

So, let’s now turn to ‘translanguaging’. What do Cummins, and others, have to say about the construct? The word has not been around for long. Most people trace it back to the end of the last century (Baker, 2001) and a set of bilingual pedagogical practices in the context of Welsh-English bilingual programmes intended to revitalise the Welsh language. In the early days, translanguaging was no more than a classroom practice that allowed or encouraged the use (by both learners and teachers) of more than one language for the purposes of study. The object of study might be another language, or it might be another part of the curriculum. When I wrote a book about the use of L1 in the learning and teaching of English (Kerr, 2014), I could have called it ‘Translanguaging Activities’, but the editors and I felt that the word ‘translanguaging’ might be seen as obscure jargon. I defined the word at the time as ‘similar to code-switching, the process of mixing elements from two languages’.

But obscure jargon no longer. There is, for example, a nice little collection of activities that involve L1 for the EFL / ESL classroom put together by Jason Anderson http://www.jasonanderson.org.uk/downloads/Jasons_ideas_for_translanguaging_in_the_EFL_ESL_classroom.pdf that he has chosen to call ‘Ideas for translanguaging’. In practical terms, there’s nothing here that you might not have found twenty or more years ago (e.g. in Duff, 1989; or Deller & Rinvolucri, 2002), long before anyone started using the word ‘translanguaging’. Anderson’s motivation for choosing the word ‘translanguaging’ is that he hopes it will promote a change of mindset in which a spirit of (language) inclusivity prevails (Anderson, 2018). Another example: the different ways that L1 may be used in a language classroom have recently been investigated by Rabbidge (2019) in a book entitled ‘Translanguaging in EFL Contexts’. Rabbidge offers a taxonomy of translanguaging moments. These are a little different from previous classifications (e.g. Ellis, 1994; Kim & Elder, 2005), but only a little. The most significant novelty is that these moments are now framed as ‘translanguaging’, rather than as ‘use of L1’. Example #3: the most well-known and widely-sold book that offers practical ideas that are related to translanguaging is ‘The Translanguaging Classroom’ by García and colleagues (2017). English language teachers working in EFL / ESL / ESOL contexts are unlikely to find much, if anything, new here by way of practical ideas. What they will find, however, is a theoretical reframing. It is the theoretical reframing that Anderson and Rabbidge draw their inspiration from.

The construct of translanguaging, then, like English as a Lingua Franca, has brought little that is new in practical terms. Its consequential validity does not really need to be investigated, since the pedagogical reasons for some use of other languages in the learning / teaching of English were already firmly established (but not, perhaps, widely accepted) a long time ago. How about the theory? Does it stand up to closer scrutiny any better than ELF?

Like ELF, ‘translanguaging’ is generally considered not to be a thing, but an action. And, like ELF, it has a definition problem, so precisely what kind of action this might be is open to debate. For some, it isn’t even an action: Tian et al (2021: 4) refer to it as ‘more like an emerging perspective or lens that could provide new insights to understand and examine language and language (in) education’. Its usage bounces around from user to user, each of whom may appropriate it in different ways. It is in competition with other terms including translingual practice, multilanguaging, and plurilingualism (Li, 2018). It is what has been called a ‘strategically deployable shifter’ (Moore, 2015). It is also unquestionably a word that sets a tone, since ‘translanguaging’ is a key part of the discourse of multilingualism / plurilingualism, which is in clear opposition to the unfavourable images evoked by the term ‘monolingualism’, often presented as a methodological mistake or a kind of subjectivity gone wrong (Gramling, 2016: 4). ‘Translanguaging’ has become a hooray word: criticize it at your peril.

What started as a classroom practice has morphed into a theory (Li, 2018; García, 2009), one that is and is likely to remain unstable. The big questions centre around the difference between ‘strong translanguaging’ (a perspective that insists that ‘named languages’ are socially constructed and have no linguistic or cognitive reality) and ‘weak translanguaging’ (a perspective that acknowledges boundaries between named languages but seeks to soften them). There are discussions, too, about what to call these forms of translanguaging. The ‘strong’ version has been dubbed by Cummins (2021) ‘Unitary Translanguaging Theory’ and by Bonacina-Pugh et al. (2021) ‘Fluid Languaging Approach’. Corresponding terms for the ‘weak’ version are ‘Crosslinguistic Translanguaging Theory’ and ‘Fixed Language Approach’. Subsidiary, related debates centre around code-switching: is it a form of translanguaging or is it a construct better avoided altogether since it assumes separate linguistic systems (Cummins, 2021)?

It’s all very confusing. Cenoz and Gorter (2021) in their short guide to pedagogical translanguaging struggle for clarity, but fail to get there. They ‘completely agree’ with García about the fluid nature of languages as ‘social constructs’ with ‘no clear-cut boundaries’, but still consider named languages as ‘distinct’ and refer to them as such in their booklet. Cutting your way through this thicket of language is a challenge, to put it mildly. It’s also probably a waste of time. As Cummins (2021: 16) notes, the confusion is ‘completely unnecessary’ since ‘there is no difference in the instructional practices that are implied by so-called strong and weak versions of translanguaging’. There are also more important questions to investigate, not least the extent to which the approaches to multilingualism developed by people like García in the United States are appropriate or effective in other contexts with different values (Jaspers, 2018; 2019).

The monolingualism that both ELF and translanguaging stand in opposition to may be a myth, a paradigm or a pathology, but, whatever it is, it is deeply embedded in the ways that our societies are organised, and the ways that we think. It is, writes David Gramling (2016: 3), ‘clearly not yet inclined to be waved off the stage by a university professor, nor even by a “multilingual turn”’. In the end, ELF failed to have much impact. It’s time for translanguaging to have a turn. So, out with the old, in with the new. Or perhaps not really all that new at all.

The king is dead. Long live the king and a happy new year!

References

Anderson, J. (2018) Reimagining English language learners from a translingual perspective. ELT Journal 72 (1): 26 – 37

Baker, C. (2001) Foundations of Bilingual Education and Bilingualism, 3rd edn. Bristol: Multilingual Matters

Bayyurt, Y. & Dewey, M. (2020) Locating ELF in ELT. ELT Journal, 74 (4): 369 – 376

Bonacina-Pugh, F., Da Costa Cabral, I., & Huang, J. (2021) Translanguaging in education. Language Teaching, 54 (4): 439-471

Cenoz, J. & Gorter, D. (2021) Pedagogical Translanguaging. Cambridge: Cambridge University Press

Cummins, J. (2021) Translanguaging: A critical analysis of theoretical claims. In Juvonen, P. & Källkvist, M. (Eds.) Pedagogical Translanguaging: Theoretical, Methodological and Empirical Perspectives. Bristol: Multilingual Matters pp. 7 – 36

Deller, S. & Rinvolucri, M. (2002) Using the Mother Tongue. Peaslake, Surrey: Delta

Duff, A. (1989) Translation. Oxford: OUP

Ellis, R. (1994) Instructed Second Language Acquisition. Oxford: OUP

Galloway, N. (2018) ELF and ELT Teaching Materials. In Jenkins, J., Baker, W. & Dewey, M. (Eds.) The Routledge Handbook of English as a Lingua Franca. Abingdon, Oxon.: Routledge, pp. 468 – 480.

García, O., Ibarra Johnson, S. & Seltzer, K. (2017) The Translanguaging Classroom. Philadelphia: Caslon

García, O. (2009) Bilingual Education in the 21st Century: A Global Perspective. Malden / Oxford: Wiley / Blackwell

Gramling, D. (2016) The Invention of Monolingualism. New York: Bloomsbury

Jaspers, J. (2019) Authority and morality in advocating heteroglossia. Language, Culture and Society, 1: 1, 83 – 105

Jaspers, J. (2018) The transformative limits of translanguaging. Language & Communication, 58: 1 – 10

Jenkins, J. (2000) The Phonology of English as an International Language. Oxford: Oxford University Press

Jenkins, J. (2015) Repositioning English and multilingualism in English as a lingua franca. Englishes in Practice, 2 (3): 49-85

Kerr, P. (2014) Translation and Own-language Activities. Cambridge: Cambridge University Press

Kiczkowiak, M. & Lowe, R. J. (2018) Teaching English as a Lingua Franca. Stuttgart: Delta

Kim, S.-H. & Elder, C. (2005) Language choices and pedagogical functions in the foreign language classroom: A cross-linguistic functional analysis of teacher talk. Language Teaching Research, 9 (4): 355 – 380

Li, W. (2018) Translanguaging as a Practical Theory of Language. Applied Linguistics, 39 (1): 9 – 30

MacKenzie, I. (2014) English as a Lingua Franca. Abingdon, Oxon.: Routledge

Moore, R. (2015) From Revolutionary Monolingualism to Reactionary Multilingualism: Top-Down Discourses of Linguistic Diversity in Europe, 1794 – present. Language and Communication, 44: 19 – 30

Rabbidge, M. (2019) Translanguaging in EFL Contexts. Abingdon, Oxon.: Routledge

Seidlhofer, B. (2011) Understanding English as a Lingua Franca. Oxford: OUP

Tian, Z., Aghai, L., Sayer, P. & Schissel, J. L. (Eds.) (2020) Envisioning TESOL through a translanguaging lens: Global perspectives. Cham, CH: Springer Nature.

‘Pre-teaching’ (of vocabulary) is a widely-used piece of language teaching jargon, but it’s a strange expression. The ‘pre’ indicates that it’s something that comes before something else that is more important, what Chia Suan Chong calls ‘the main event’, which is usually some reading or listening work. The basic idea, it seems, is to lessen the vocabulary load of the subsequent activity. If the focus on vocabulary were the ‘main event’, we might refer to the next activity as ‘post-reading’ or ‘post-listening’ … but we never do.

The term is used in standard training manuals by both Jim Scrivener (2005: 230 – 233) and Jeremy Harmer (2012: 137) and, with a few caveats, the practice is recommended. Now read this from the ELT Nile Glossary:

For many years teachers were recommended to pre-teach vocabulary before working on texts. Nowadays though, some question this, suggesting that the contexts that teachers are able to set up for pre-teaching are rarely meaningful and that pre-teaching in fact prevents learners from developing the attack strategies they need for dealing with challenging texts.

Chia is one of those doing this questioning. She suggests that ‘we cut out pre-teaching altogether and go straight for the main event. After all, if it’s a receptive skills lesson, then shouldn’t the focus be on reading/listening skills and strategies? And most importantly, pre-teaching prevents learners from developing a tolerance of ambiguity – a skill that is vital in language learning.’ Scott Thornbury is another who has expressed doubts about the value of pre-teaching vocabulary (PTV), although he is more circumspect in his opinions. He has argued that working out the meaning of vocabulary from context is probably a better approach and that PTV inadequately prepares learners for the real world. If we have to pre-teach, he argues, get it out of the way ‘as quickly and efficiently as possible’ … or ‘try post-teaching instead’.

Both Chia and Scott touch on the alternatives, and guessing the meaning of unknown words from context is one of them. I’ve discussed this area in an earlier post. Not wanting to rehash the content of that post here, the simple summary is this: it’s complicated. We cannot, with any degree of certainty, say that guessing meaning from context leads to more gains in either reading / listening comprehension or vocabulary development than PTV or one of the other alternatives – encouraging / allowing monolingual or bilingual dictionary look up (see this post on the topic), providing a glossary (see this post) or doing post-text vocabulary work.

In attempting to move towards a better understanding, the first problem is that there is very little research into the relationship between PTV and improved reading / listening comprehension. What there is (e.g. Webb, 2009) suggests that pre-teaching can improve comprehension and speed up reading, but there are other things that a teacher can do (e.g. previous presentation of comprehension questions or the provision of pictorial support) that appear to lead to more gains in these areas (Pellicer-Sánchez et al., 2021). It’s not exactly a ringing endorsement. There is even less research looking at the relationship between PTV and vocabulary development. What there is (Pellicer-Sánchez et al., 2021) suggests that pre-teaching leads to more vocabulary gains than when learners read without any support. But the reading-only condition is unlikely in most real-world learning contexts, where there is a teacher, dictionary or classmate who can be turned to. A more interesting contrast is perhaps between PTV and during-reading vocabulary instruction, which is a common approach in many classrooms. One study (File & Adams, 2010) looked at precisely this area and found little difference between the approaches in terms of vocabulary gains. The limited research does not provide us with any compelling reasons either for or against PTV.

Another problem is, as usual, that the research findings often imply more than was actually demonstrated. The abstract for the study by Pellicer-Sánchez et al (2021) states that pre‐reading instruction led to more vocabulary learning. But this needs to be considered in the light of the experimental details.

The study involved 87 L2 undergraduates and postgraduates studying at a British university. Their level of English was therefore very high, and we can’t really generalise to other learners at other levels in other conditions. The text that they read contained a number of pseudo-words and was 2,290 words long. The text itself, a narrative, was of no intrinsic interest, so the students reading it would treat it as an object of study, and they would notice the pseudo-words, because their level of English was already high, and because they knew that the focus of the research was on ‘new words’. In other words, the students’ behaviour was probably not at all typical of a student in a ‘normal’ classroom. In addition, the pseudo-words were all Anglo-Saxon-looking, and not therefore representative of the kinds of unknown items that students would encounter in authentic (or even pedagogical) texts (which would have a high proportion of words with Latin roots). I’m afraid I don’t think that the study tells us anything of value.

Perhaps research into an area like this, with so many variables that need to be controlled, is unlikely ever to provide teachers with clear answers to what appears to be a simple question: is PTV a good idea or not? However, I think we can get closer to something resembling useful advice if we take another tack. For this, I think two additional questions need to be asked. First, what is the intended main learning opportunity (note that I avoid the term ‘learning outcome’!) of the ‘main event’ – the reading or listening? Second, following on from the first question, what is the point of PTV, i.e. in what ways might it contribute to enriching the learning opportunities of the ‘main event’?

To answer the first question, I think it is useful to go back to a distinction made almost forty years ago in a paper by Tim Johns and Florence Davies (1983). They contrasted the Text as a Linguistic Object (TALO) with the Text as a Vehicle for Information (TAVI). The former (TALO) is something that language students study to learn language from in a direct way. It has typically been written or chosen to illustrate and to contextualise bits of grammar, and to provide opportunities for lexical ‘quarrying’. The latter (TAVI) is a text with intrinsic interest, read for information or pleasure, and therefore more appropriately selected by the learner, rather than the teacher. For an interesting discussion on TALO and TAVI, see this 2015 post from Geoff Jordan.

Johns and Davies wrote their article in pre-Headway days, when texts in almost all coursebooks were unashamedly TALOs, and when what were called top-down reading skills (reading for gist / detail, etc.) were only just beginning to find their way into language teaching materials. TAVIs were kept separate: graded readers, for example. In some parts of the world, TALOs and TAVIs are still separate, often with one teacher dealing with the teaching of discrete items of language through TALOs, and another responsible for ‘skills development’ through TAVIs. But, increasingly, under the influence of British publishers and methodologists, attempts have been made to combine TALOs and TAVIs in a single package. The syllabuses of most contemporary coursebooks, fundamentally driven by a discrete-item grammar plus vocabulary approach, also offer a ‘skills’ strand which requires texts to be intrinsically interesting, meaningful and relevant to today’s 21st century learners. The texts are required to carry out two functions at once.

Recent years have seen an increasingly widespread questioning of this approach. Does the exploitation of reading and listening texts in coursebooks (mostly through comprehension questions) actually lead to gains in reading and listening skills? Is there anything more than testing of comprehension going on? Or do they simply provide practice in strategic approaches to reading / listening, strategies which could probably be transferred from L1? As a result of the work of scholars like William Grabe (reading) and John Field and Richard Cauldwell (listening), there is now little, if any, debate in the world of research about these questions. If we want to develop the reading / listening skills of our students, the approach of most coursebooks is not the way to go about it. For a start, the reading texts are usually too short and the listening texts too long.

Most texts that are found in most contemporary coursebooks are TALOs dressed up to look like TAVIs. Their fundamental purpose is to illustrate and contextualise language that has either been pre-taught or will be explored later. They are first and foremost vehicles for language, and only secondarily vehicles for information. They are written and presented in as interesting a way as possible in order to motivate learners to engage with the TALO. Sometimes, they succeed.

However, there are occasions (even in coursebooks) when texts are TAVIs – used for purely ‘skills’ purposes, language use as opposed to language study. Typically, they (reading or listening texts) are used as springboards for speaking and / or writing practice that follows. It’s the information in the text that matters most.

So, where does all this take us with PTV? Here is my attempt at a breakdown of advice.

1 TALOs where the text contains a set of new lexical items which are a core focus of the lesson

If the text is basically a contextualized illustration of a set of lexical items (and, usually, a particular grammatical structure), there is a strong case for PTV. This is, of course, assuming that these items are of sufficiently high frequency to be suitable candidates for direct vocabulary instruction. If this is so, there is also a strong case to be made for the PTV to be what has been called ‘rich instruction’, which ‘involves (1) spending time on the word; (2) explicitly exploring several aspects of what is involved in knowing a word; and (3) involving learners in thoughtfully and actively processing the word’ (Nation, 2013: 117). In instances like this, PTV is something of a misnomer. It’s just plain teaching, and is likely to need as much, or more, time than exploration of the text (which may be viewed as further practice of / exposure to the lexis).

If the text is primarily intended as lexical input, there is also a good case to be made for making the target items it contains more salient by, for example, highlighting them or putting them in bold (Choi, 2017). At the same time, if ‘PTV’ is to lead to lexical gains, these are likely to be augmented by post-reading tasks which also focus explicitly on the target items (Sonbul & Schmitt, 2010).

2 TALOs which contain a set of lexical items that are necessary for comprehension of the text, but not a core focus of the lesson (e.g. because they are low-frequency)

PTV is often time-consuming, and necessarily so if the instruction is rich. If it is largely restricted to matching items to meanings (e.g. through translation), it is likely to have little impact on vocabulary development, and its short-term impact on comprehension appears to be limited. Research suggests that the use of a glossary is more efficient, since learners will only refer to it when they need to (whereas PTV is likely to devote some time to some items that are known to some learners, and this takes place before the knowledge is required … and may therefore be forgotten in the interim). Glossaries lead to better comprehension (Alessi & Dwyer, 2008).

3 TAVIs

I don’t have any principled objection to the occasional use of texts as TALOs, but it seems fairly clear that a healthy textual diet for language learners will contain substantially more TAVIs than TALOs, substantially more extensive reading than intensive reading of the kind found in most coursebooks. If we focused less often on direct instruction of grammar (a change of emphasis which is long overdue), there would be less need for TALOs, anyway. With TAVIs, there seems to be no good reason for PTV: glossaries or digital dictionary look-up will do just fine.

However, one alternative justification and use of PTV is offered by Scott Thornbury. He suggests identifying a relatively small number of keywords from a text that will be needed for global understanding. Some of them may be unknown to the learners, and for these, learners use dictionaries to check meaning. Then, looking at the list of key words, learners predict what the text will be about. The rationale here is that if learners engage with these words before encountering them in the text, it ‘may be an effective way of activating a learner’s schema for the text, and this may help to support comprehension’ (Ballance, 2018). However, as Ballance notes, describing this kind of activity as PTV would be something of a misnomer: it is a useful addition to a teacher’s repertoire of schema-activation activities (which might be used with both TAVIs and TALOs).

In short …

The big question about PTV, then, is not one of ‘yes’ or ‘no’. It’s about the point of the activity. Ballance (2018) offers a good summary:

‘In sum, for teachers to use PTV effectively, it is essential that they clearly identify a rationale for including PTV within a lesson, select the words to be taught in conjunction with this rationale and also design the vocabulary learning or development exercise in a manner that is commensurate with this rationale. The rationale should be the determining factor in the design of a PTV component within a lesson, and different rationales for using PTV naturally lead to markedly different selections of vocabulary items to be studied and different exercise designs.’

REFERENCES

Alessi, S. & Dwyer, A. (2008). Vocabulary assistance before and during reading. Reading in a Foreign Language, 20 (2): pp. 246 – 263

Ballance, O. J. (2018). Strategies for pre-teaching vocabulary in context. In The TESOL Encyclopedia of English Language Teaching (pp. 1-7). Wiley. https://doi.org/10.1002/9781118784235.eelt0732

Choi, S. (2017). Processing and learning of enhanced English collocations: An eye movement study. Language Teaching Research, 21, 403–426. https://doi.org/10.1177/1362168816653271

File, K. A. & Adams, R. (2010). Should vocabulary instruction be integrated or isolated? TESOL Quarterly, 44 (2): pp. 222 – 249

Harmer, J. (2012). Essential Teacher Knowledge. Harlow: Pearson

Johns, T. & Davies, F. (1983). Text as a vehicle for information: the classroom use of written texts in teaching reading in a foreign language. Reading in a Foreign Language, 1 (1): pp. 1 – 19

Nation, I. S. P. (2013). Learning Vocabulary in Another Language 2nd Edition. Cambridge: Cambridge University Press

Pellicer-Sánchez, A., Conklin, K. & Vilkaitė-Lozdienė, L. (2021). The effect of pre-reading instruction on vocabulary learning: An investigation of L1 and L2 readers’ eye movements. Language Learning. Advance online publication. https://onlinelibrary.wiley.com/doi/full/10.1111/lang.12430

Scrivener, J. (2005). Learning Teaching 2nd Edition. Oxford: Macmillan

Sonbul, S. & Schmitt, N. (2010). Direct teaching of vocabulary after reading: is it worth the effort? ELT Journal 64 (3): pp.253 – 260

Webb, S. (2009). The effects of pre‐learning vocabulary on reading comprehension and writing. The Canadian Modern Language Review, 65 (3): pp. 441–470.

I’ve long felt that the greatest value of technology in language learning is to facilitate interaction between learners, rather than interaction between learners and software. I can’t claim any originality here. Twenty years ago, Kern and Warschauer (2000) described ‘the changing nature of computer use in language teaching’, away from ‘grammar and vocabulary tutorials, drill and practice programs’, towards computer-mediated communication (CMC). This change has even been described as a paradigm shift (Ciftci & Kocoglu, 2012: 62), although I suspect that the shift has affected approaches to research much more than it has actual practices.

However, there is one application of CMC that is probably at least as widespread in actual practice as it is in the research literature: online peer feedback. Online peer feedback on writing, especially in the development of academic writing skills in higher education, is certainly very common. To a much lesser extent, online peer feedback on speaking (e.g. in audio and video blogs) has also been explored (see, for example, Yeh et al., 2019 and Rodríguez-González & Castañeda, 2018).

Peer feedback

Interest in feedback has spread widely since the publication of Hattie and Timperley’s influential ‘The Power of Feedback’, which argued that ‘feedback is one of the most powerful influences on learning and achievement’ (Hattie & Timperley, 2007: 81). Peer feedback, in particular, has generated much optimism in the general educational literature as a formative practice (Double et al., 2019) because of its potential to:

  • ‘promote a sense of ownership, personal responsibility, and motivation,
  • reduce assessee anxiety and improve acceptance of negative feedback,
  • increase variety and interest, activity and interactivity, identification and bonding, self-confidence, and empathy for others’ (Topping, 1998: 256)
  • improve academic performance (Double et al., 2019).

In the literature on language learning, this enthusiasm is mirrored and peer feedback is generally recommended by both methodologists and researchers (Burkert & Wally, 2013). The reasons given, in addition to those listed above, include the following:

  • it can benefit both the receiver and the giver of feedback (Storch & Aldossary, 2019: 124),
  • it requires the givers of feedback to listen to or read attentively the language of their peers, and, in the process, may provide opportunities for them to make improvements in their own speaking and writing (Alshuraidah & Storch, 2019: 166 – 167),
  • it can facilitate a move away from a teacher-centred classroom, and promote independent learning (and the skill of self-correction) as well as critical thinking (Hyland & Hyland, 2019: 7),
  • the target reader is an important consideration in any piece of writing (it is often specified in formal assessment tasks). Peer feedback may be especially helpful in developing the idea of what audience the writer is writing for (Nation, 2009: 139),
  • many learners are very receptive to peer feedback (Biber et al., 2011: 54),
  • it can reduce a teacher’s workload.

The theoretical arguments in support of peer feedback are supported to some extent by research. A recent meta-analysis found ‘an overall small to medium effect of peer assessment on academic performance’ (Double et al., 2019) in general educational settings. In language learning, ‘recent research has provided generally positive evidence to support the use of peer feedback in L2 writing classes’ (Yu & Lee, 2016: 467). However, ‘firm causal evidence is as yet unavailable’ (Yu & Lee, 2016: 466).

Online peer feedback

Taking peer feedback online would seem to offer a number of advantages over traditional face-to-face oral or written channels. These include:

  • a significant reduction of the logistical burden (Double et al., 2019) because there are fewer constraints of time and place (Ho, 2015: 1),
  • the possibility (with many platforms) of monitoring students’ interactions more closely (DiGiovanni & Nagaswami, 2001: 268),
  • the encouragement of ‘greater and more equal member participation than face-to-face feedback’ (Yu & Lee, 2016: 469),
  • the possibility of reducing learners’ anxiety (which may be greater in face-to-face settings and / or when an immediate response to feedback is required) (Yeh et al., 2019: 1).

Given these potential advantages, it is disappointing to find that a meta-analysis of peer assessment in general educational contexts did not find any significant difference between online and offline feedback (Double et al., 2019). Similarly, in language learning contexts, Yu & Lee (2016: 469) report that ‘there is inconclusive evidence about the impact of computer-mediated peer feedback on the quality of peer comments and text revisions’. The rest of this article is an exploration of possible reasons why online peer feedback is not more effective than it is.

The challenges of online peer feedback

Peer feedback is usually of greatest value when it focuses on the content and organization of what has been expressed. Learners, however, have a tendency to focus on formal accuracy, rather than on the communicative success (or otherwise) of their peers’ writing or speaking. Training can go a long way towards remedying this situation (Yu & Lee, 2016: 472 – 473): indeed, ‘the importance of properly training students to provide adequately useful peer comments cannot be over-emphasized’ (Bailey & Cassidy, 2018: 82). In addition, clearly organised rubrics to guide the feedback giver, such as those offered by feedback platforms like Peergrade, may also help to steer feedback in appropriate directions. There are, however, caveats which I will come on to.

A bigger problem occurs when the interaction which takes place when learners are supposedly engaged in peer feedback is completely off-task. In one analysis of students’ online discourse in two writing tasks, ‘meaning negotiation, error correction, and technical actions seldom occurred and […] social talk, task management, and content discussion predominated the chat’ (Liang, 2010: 45). One proposed solution to this is to grade peer comments: ‘reviewers will be more motivated to spend time in their peer review process if they know that their instructors will assess or even grade their comments’ (Choi, 2014: 225). Whilst this may sometimes be an effective strategy, the curtailment of social chat may actually create more problems than it solves, as we will see later.

Other challenges of peer feedback may be even less amenable to solutions. The most common problem concerns learners’ attitudes towards peer feedback: some learners are not receptive to feedback from their peers, preferring feedback from their teachers (Maas, 2017), and some learners may be reluctant to offer peer feedback for fear of giving offence. Attitudinal issues may derive from personal or cultural factors, or a combination of both. Whatever the cause, ‘interpersonal variables play a substantial role in determining the type and quality of peer assessment’ (Double et al., 2019). One proposed solution to this is to anonymise the peer feedback process, since it might be thought that this would lead to greater honesty and fewer concerns about loss of face. Research into this possibility, however, offers only very limited support: two studies out of three found little benefit of anonymity (Double et al., 2019). What is more, as with the curtailment of social chat, the practice must limit the development of the interpersonal relationship, and therefore positive pair / group dynamics (Liang, 2010: 45), that is necessary for effective collaborative work.

Towards solutions?

Online peer feedback is a form of computer-supported collaborative learning (CSCL), and it is to research in this broader field that I will now turn. The claim that CSCL ‘can facilitate group processes and group dynamics in ways that may not be achievable in face-to-face collaboration’ (Dooly, 2007: 64) is not contentious, but, in order for this to happen, a number of ‘motivational or affective perceptions are important preconditions’ (Chen et al., 2018: 801). Collaborative learning presupposes a collaborative pattern of peer interaction, as opposed to expert-novice, dominant-dominant, dominant-passive, or passive-passive patterns (Yu & Lee, 2016: 475).

Simply putting students together into pairs or groups does not guarantee collaboration. Collaboration is less likely to take place when instructional management focusses primarily on cognitive processes, and ‘socio-emotional processes are ignored, neglected or forgotten […] Social interaction is equally important for affiliation, impression formation, building social relationships and, ultimately, the development of a healthy community of learning’ (Kreijns et al., 2003: 336, 348 – 9). This can happen in all contexts, but in online environments, the problem becomes ‘more salient and critical’ (Kreijns et al., 2003: 336). This is why the curtailment of social chat, the grading of peer comments, and the provision of tight rubrics may be problematic.

There is no ‘single learning tool or strategy’ that can be deployed to address the challenges of online peer feedback and CSCL more generally (Chen et al., 2018: 833). In some cases, for personal or cultural reasons, peer feedback may simply not be a sensible option. In others, where effective online peer feedback is a reasonable target, the instructional approach must find ways to train students in the specifics of giving feedback on a peer’s work, to promote mutual support, to show how to work effectively with others, and to develop the language skills needed to do this (assuming that the target language is the language that will be used in the feedback).

So, what can we learn from looking at online peer feedback? I think it’s the same old answer: technology may confer a certain number of potential advantages, but, unfortunately, it cannot provide a ‘solution’ to complex learning issues.

 

Note: Some parts of this article first appeared in Kerr, P. (2020). Giving feedback to language learners. Part of the Cambridge Papers in ELT Series. Cambridge: Cambridge University Press. Available at: https://www.cambridge.org/gb/files/4415/8594/0876/Giving_Feedback_minipaper_ONLINE.pdf

 

References

Alshuraidah, A. and Storch, N. (2019). Investigating a collaborative approach to feedback. ELT Journal, 73 (2), pp. 166–174

Bailey, D. and Cassidy, R. (2018). Online Peer Feedback Tasks: Training for Improved L2 Writing Proficiency, Anxiety Reduction, and Language Learning Strategies. CALL-EJ, 20 (2), pp. 70–88

Biber, D., Nekrasova, T. and Horn, B. (2011). The Effectiveness of Feedback for L1-English and L2-Writing Development: A Meta-Analysis, TOEFL iBT RR-11-05. Princeton: Educational Testing Service. Available at: https://www.ets.org/Media/Research/pdf/RR-11-05.pdf

Burkert, A. and Wally, J. (2013). Peer-reviewing in a collaborative teaching and learning environment. In Reitbauer, M., Campbell, N., Mercer, S., Schumm Fauster, J. and Vaupetitsch, R. (Eds.) Feedback Matters. Frankfurt am Main: Peter Lang, pp. 69–85

Chen, J., Wang, M., Kirschner, P. A. and Tsai, C. C. (2018). The role of collaboration, computer use, learning environments, and supporting strategies in CSCL: A meta-analysis. Review of Educational Research, 88 (6), pp. 799–843

Choi, J. (2014). Online Peer Discourse in a Writing Classroom. International Journal of Teaching and Learning in Higher Education, 26 (2), pp. 217–231

Ciftci, H. and Kocoglu, Z. (2012). Effects of Peer E-Feedback on Turkish EFL Students’ Writing Performance. Journal of Educational Computing Research, 46 (1), pp. 61–84

DiGiovanni, E. and Nagaswami, G. (2001). Online peer review: an alternative to face-to-face? ELT Journal, 55 (3), pp. 263–272

Dooly, M. (2007). Joining forces: Promoting metalinguistic awareness through computer-supported collaborative learning. Language Awareness, 16 (1), pp. 57–74

Double, K. S., McGrane, J. A. and Hopfenbeck, T. N. (2019). The Impact of Peer Assessment on Academic Performance: A Meta-analysis of Control Group Studies. Educational Psychology Review

Hattie, J. and Timperley, H. (2007). The Power of Feedback. Review of Educational Research, 77 (1), pp. 81–112

Ho, M. (2015). The effects of face-to-face and computer-mediated peer review on EFL writers’ comments and revisions. Australasian Journal of Educational Technology, 31 (1)

Hyland, K. and Hyland, F. (2019). Contexts and issues in feedback on L2 writing. In Hyland, K. and Hyland, F. (Eds.) Feedback in Second Language Writing. Cambridge: Cambridge University Press, pp. 1–22

Kern, R. and Warschauer, M. (2000). Theory and practice of network-based language teaching. In Warschauer, M. and Kern, R. (Eds.) Network-Based Language Teaching: Concepts and Practice. New York: Cambridge University Press, pp. 1–19

Kreijns, K., Kirschner, P. A. and Jochems, W. (2003). Identifying the pitfalls for social interaction in computer-supported collaborative learning environments: a review of the research. Computers in Human Behavior, 19 (3), pp. 335–353

Liang, M. (2010). Using Synchronous Online Peer Response Groups in EFL Writing: Revision-Related Discourse. Language Learning and Technology, 14 (1), pp. 45–64

Maas, C. (2017). Receptivity to learner-driven feedback. ELT Journal, 71 (2), pp. 127–140

Nation, I. S. P. (2009). Teaching ESL / EFL Reading and Writing. New York: Routledge

Panadero, E. and Alqassab, M. (2019). An empirical review of anonymity effects in peer assessment, peer feedback, peer review, peer evaluation and peer grading. Assessment & Evaluation in Higher Education, pp. 1–26

Rodríguez-González, E. and Castañeda, M. E. (2018). The effects and perceptions of trained peer feedback in L2 speaking: impact on revision and speaking quality. Innovation in Language Learning and Teaching, 12 (2), pp. 120–136. DOI: 10.1080/17501229.2015.1108978

Storch, N. and Aldossary, K. (2019). Peer Feedback: An activity theory perspective on givers’ and receivers’ stances. In Sato, M. and Loewen, S. (Eds.) Evidence-based Second Language Pedagogy. New York: Routledge, pp. 123–144

Topping, K. (1998). Peer assessment between students in colleges and universities. Review of Educational Research, 68 (3), pp. 249–276

Yeh, H.-C., Tseng, S.-S. and Chen, Y.-S. (2019). Using Online Peer Feedback through Blogs to Promote Speaking Performance. Educational Technology & Society, 22 (1), pp. 1–14

Yu, S. and Lee, I. (2016). Peer feedback in second language writing (2005–2014). Language Teaching, 49 (4), pp. 461–493

From time to time, I have mentioned Programmed Learning (or Programmed Instruction) in this blog (here and here, for example). It felt like time to go into a little more detail about what Programmed Instruction was (and is) and why I think it’s important to know about it.

A brief description

The basic idea behind Programmed Instruction was that subject matter could be broken down into very small parts, which could be organised into an optimal path for presentation to students. Students worked, at their own speed, through a series of micro-tasks, building their mastery of each nugget of learning that was presented, not progressing from one to the next until they had demonstrated they could respond accurately to the previous task.

There were two main types of Programmed Instruction: linear programming and branching programming. In the former, every student would follow the same path, the same sequence of frames, which meant it could be used in classrooms for whole-class instruction. I tracked down a book (illustrated below) called ‘Programmed English Course Student’s Book 1’ (Hill, 1966), an attempt to transfer the ideas behind Programmed Instruction to a zero-tech classroom environment. It is very similar in approach to the material I had to use when working at an Inlingua school in the 1980s.

Programmed English Course

Comparatives strip

An example of how self-paced programming worked is illustrated here, with a section on comparatives.

With branching programming, ‘extra frames (or branches) are provided for students who do not get the correct answer’ (Kay et al., 1968: 19). This was only suitable for self-study, but it was clearly preferable, as it allowed for self-pacing and some personalization. The material could be presented in books (which meant that students had to flick back and forth in their books) or with special ‘teaching machines’, but the latter were preferred.

In the words of an early enthusiast, Programmed Instruction was essentially ‘a device to control a student’s behaviour and help him to learn without the supervision of a teacher’ (Kay et al., 1968: 58). The approach was inspired by the work of Skinner and it was first used as part of a university course in behavioural psychology taught by Skinner at Harvard University in 1957. It moved into secondary schools for teaching mathematics in 1959 (Saettler, 2004: 297).

Enthusiasm and uptake

The parallels between current enthusiasm for the power of digital technology to transform education and the excitement about Programmed Instruction and teaching machines in the 1960s are very striking (McDonald et al., 2005: 90). In 1967, it was reported that ‘we are today on the verge of what promises to be a revolution in education’ (Goodman, 1967: 3) and that ‘tremors of excitement ran through professional journals and conferences and department meetings from coast to coast’ (Kennedy, 1967: 871). The following year, another commentator referred to the way that the field of education had been stirred ‘with an almost Messianic promise of a breakthrough’ (Ornstein, 1968: 401). Programmed Instruction was also seen as an exciting business opportunity: ‘an entire industry is just coming into being and significant sales and profits should not be too long in coming’, wrote one hopeful financial analyst as early as 1961 (Kozlowski, 1961: 47).

The new technology seemed to offer a solution to the ‘problems of education’. Media reports in 1963 in Germany, for example, discussed a shortage of teachers, large classes and inadequate learning progress … ‘an ‘urgent pedagogical emergency’ that traditional teaching methods could not resolve’ (Hof, 2018). Individualised learning, through Programmed Instruction, would equalise educational opportunity and if you weren’t part of it, you would be left behind. In the US, two billion dollars were spent on educational technology by the government in the decade following the passing of the National Defense Education Act, and this was added to by grants from private foundations. As a result, ‘the production of teaching machines began to flourish, accompanied by the marketing of numerous ‘teaching units’ stamped into punch cards as well as less expensive didactic programme books and index cards. The market grew dramatically in a short time’ (Hof, 2018).

In the field of language learning, however, enthusiasm was more muted. In the year in which he completed his doctoral studies[1], the eminent linguist Bernard Spolsky noted that ‘little use is actually being made of the new technique’ (Spolsky, 1966). A year later, a survey of over 600 foreign language teachers at US colleges and universities reported that only about 10% of them had programmed materials in their departments (Valdman, 1968: 1). In most of these cases, the materials ‘were being tried out on an experimental basis under the direction of their developers’. And two years after that, it was reported that ‘programming has not yet been used to any very great extent in language teaching, so there is no substantial body of experience from which to draw detailed, water-tight conclusions’ (Howatt, 1969: 164).

By the early 1970s, Programmed Instruction was already beginning to seem like yesterday’s technology, even though the principles behind it are still very much alive today (Thornbury (2017) refers to Duolingo as ‘Programmed Instruction’). It would be nice to think that language teachers of the day were more sceptical than, for example, their counterparts teaching mathematics. It would be nice to think that, like Spolsky, they had taken on board Chomsky’s (1959) demolition of Skinner. But the widespread popularity of Audiolingual methods suggests otherwise. Audiolingualism, based essentially on the same Skinnerian principles as Programmed Instruction, needed less outlay on technology. The machines (a slide projector and a record or tape player) were cheaper than the teaching machines, could be used for other purposes and did not become obsolete so quickly. The method also lent itself more readily to established school systems (i.e. whole-class teaching) and the skills sets of teachers of the day. Significantly, too, there was relatively little investment in Programmed Instruction for language teaching (compared to, say, mathematics), since this was a smallish and more localized market. There was no global market for English language learning as there is today.

Lessons to be learned

1 Shaping attitudes

It was not hard to persuade some educational authorities of the value of Programmed Instruction. As discussed above, it offered a solution to the problem of ‘the chronic shortage of adequately trained and competent teachers at all levels in our schools, colleges and universities’ (Goodman, 1967: 3), who added that ‘there is growing realisation of the need to give special individual attention to handicapped children and to those apparently or actually retarded’. The new teaching machines ‘could simulate the human teacher and carry out at least some of his functions quite efficiently’ (Goodman, 1967: 4). This wasn’t quite the same thing as saying that the machines could replace teachers, although some might have hoped for this. The official line was more often that the machines could ‘be used as devices, actively co-operating with the human teacher as adaptive systems and not just merely as aids’ (Goodman, 1967: 37). But this more nuanced message did not always get through, and ‘the Press soon stated that robots would replace teachers and conjured up pictures of classrooms of students with little iron men in front of them’ (Kay et al., 1968: 161).

For teachers, though, it was one thing to be told that the machines would free their time to perform more meaningful tasks, and quite another to believe it when the message was accompanied by a ‘rhetoric of the instructional inadequacies of the teacher’ (McDonald et al., 2005: 88). Many teachers felt threatened. They ‘reacted against the ‘unfeeling machine’ as a poor substitute for the warm, responsive environment provided by a real, live teacher. Others have seemed to take it more personally, viewing the advent of programmed instruction as the end of their professional career as teachers. To these, even the mention of programmed instruction produces a momentary look of panic followed by the appearance of determination to stave off the ominous onslaught somehow’ (Tucker, 1972: 63).

Some of those who were pushing for Programmed Instruction had a bigger agenda, with their sights set firmly on broader school reform made possible through technology (Hof, 2018). Individualised learning and Programmed Instruction were not just ends in themselves: they were ways of facilitating bigger changes. The trouble was that teachers were necessary for Programmed Instruction to work. On the practical level, it became apparent that a blend of teaching machines and classroom teaching was more effective than the machines alone (Saettler, 2004: 299). But the teachers’ attitudes were crucial: a research study involving over 6000 students of Spanish showed that ‘the more enthusiastic the teacher was about programmed instruction, the better the work the students did, even though they worked independently’ (Saettler, 2004: 299). In other researched cases, too, ‘teacher attitudes proved to be a critical factor in the success of programmed instruction’ (Saettler, 2004: 301).

2 Returns on investment

Pricing a hyped edtech product is a delicate matter. Vendors need to see a relatively quick return on their investment, before a newer technology knocks them out of the market. Developments in computing were fast in the late 1960s, and the first commercially successful personal computer, the Altair 8800, appeared in 1974. But too high a price carried obvious risks. In 1967, the cheapest teaching machine in the UK, the Tutorpack (from Packham Research Ltd), cost £7 12s (equivalent to about £126 today), but machines like these were disparagingly referred to as ‘page-turners’ (Higgins, 1983: 4). A higher-end linear programming machine cost twice this amount. Branching programme machines cost a lot more. The Mark II AutoTutor (from USI Great Britain Limited), for example, cost £31 per month (equivalent to about £558), with eight reels of programmes thrown in (Goodman, 1967: 26). A lower-end branching machine, the Grundytutor, could be bought for £230 (equivalent to about £4,140 today).

Teaching machines (from Goodman)

AutoTutor Mk II (from Goodman)

This was serious money, and any institution splashing out on teaching machines needed to be confident that they would be well used for a long period of time (Nordberg, 1965). The programmes (the software) were specific to individual machines and the content could not be updated easily. At the same time, other technological developments (cine projectors, tape recorders, record players) were arriving in classrooms, and schools found themselves having to pay for technical assistance and maintenance. The average teacher was ‘unable to avail himself fully of existing aids because, to put it bluntly, he is expected to teach for too many hours a day and simply has not the time, with all the administrative chores he is expected to perform, either to maintain equipment, to experiment with it, let alone keeping up with developments in his own and wider fields. The advent of teaching machines which can free the teacher to fulfil his role as an educator will intensify and not diminish the problem’ (Goodman, 1967: 44). Teaching machines, in short, were ‘oversold and underused’ (Cuban, 2001).

3 Research and theory

Looking back twenty years later, B. F. Skinner conceded that ‘the machines were crude, [and] the programs were untested’ (Skinner, 1986: 105). The documentary record suggests that the second part of this statement is not entirely true. Herrick (1966: 695) reported that ‘an overwhelming amount of research time has been invested in attempts to determine the relative merits of programmed instruction when compared to ‘traditional’ or ‘conventional’ methods of instruction. The results have been almost equally overwhelming in showing no significant differences’. In 1968, Kay et al. (1968: 96) noted that ‘there has been a definite effort to examine programmed instruction’. A later meta-analysis of research in secondary education (Kulik et al., 1982) confirmed that ‘Programmed Instruction did not typically raise student achievement […] nor did it make students feel more positively about the subjects they were studying’.

It was not, therefore, the case that research was not being done. It was that many people were preferring not to look at it. The same holds true for theoretical critiques. In relation to language learning, Spolsky (1966) referred to Chomsky’s (1959) rebuttal of Skinner’s arguments, adding that ‘there should be no need to rehearse these inadequacies, but as some psychologists and even applied linguists appear to ignore their existence it might be as well to remind readers of a few’. Programmed Instruction might have had a limited role to play in language learning, but vendors’ claims went further than that and some people believed them: ‘Rather than addressing themselves to limited and carefully specified FL tasks – for example the teaching of spelling, the teaching of grammatical concepts, training in pronunciation, the acquisition of limited proficiency within a restricted number of vocabulary items and grammatical features – most programmers aimed at self-sufficient courses designed to lead to near-native speaking proficiency’ (Valdman, 1968: 2).

4 Content

When learning is conceptualised as purely the acquisition of knowledge, technological optimists tend to believe that machines can convey it more effectively and more efficiently than teachers (Hof, 2018). The corollary of this is the belief that, if you get the materials right (plus the order in which they are presented and appropriate feedback), you can ‘to a great extent control and engineer the quality and quantity of learning’ (Post, 1972: 14). Learning, in other words, becomes an engineering problem, and technology is its solution.

One of the problems was that technology vendors were, first and foremost, technology specialists. Content was almost an afterthought. Materials writers needed to be familiar with the technology and, if not, they were unlikely to be employed. Writers needed to believe in the potential of the technology, so those familiar with current theory and research would clearly not fit in. The result was unsurprising. Kennedy (1967: 872) reported that ‘there are hundreds of programs now available. Many more will be published in the next few years. Watch for them. Examine them critically. They are not all of high quality’. He was being polite.

5 Motivation

As is usually the case with new technologies, there was a positive novelty effect with Programmed Instruction. And, as is always the case, the novelty effect wears off: ‘students quickly tired of, and eventually came to dislike, programmed instruction’ (McDonald et al., 2005: 89). It could not really have been otherwise: ‘human learning and intrinsic motivation are optimized when persons experience a sense of autonomy, competence, and relatedness in their activity. Self-determination theorists have also studied factors that tend to occlude healthy functioning and motivation, including, among others, controlling environments, rewards contingent on task performance, the lack of secure connection and care by teachers, and situations that do not promote curiosity and challenge’ (McDonald et al., 2005: 93). The demotivating experience of using these machines was particularly acute with younger and ‘less able’ students, as was noted at the time (Valdman, 1968: 9).

The unlearned lessons

I hope that you’ll now understand why I think the history of Programmed Instruction is so relevant to us today. In the words of my favourite Yogi-ism, it’s like deja vu all over again. I have quoted repeatedly from the article by McDonald et al (2005) and I would highly recommend it – available here. Hopefully, too, Audrey Watters’ forthcoming book, ‘Teaching Machines’, will appear before too long, and she will, no doubt, have much more of interest to say on this topic.

References

Chomsky, N. 1959. ‘Review of Skinner’s Verbal Behavior’ Language, 35: 26–58

Cuban, L. 2001. Oversold & Underused: Computers in the Classroom. (Cambridge, MA: Harvard University Press)

Goodman, R. 1967. Programmed Learning and Teaching Machines 3rd edition. (London: English Universities Press)

Herrick, M. 1966. ‘Programmed Instruction: A critical appraisal’ The American Biology Teacher, 28 (9), 695–698

Higgins, J. 1983. ‘Can computers teach?’ CALICO Journal, 1 (2)

Hill, L. A. 1966. Programmed English Course Student’s Book 1. (Oxford: Oxford University Press)

Hof, B. 2018. ‘From Harvard via Moscow to West Berlin: educational technology, programmed instruction and the commercialisation of learning after 1957’ History of Education, 47 (4), 445–465

Howatt, A. P. R. 1969. Programmed Learning and the Language Teacher. (London: Longmans)

Kay, H., Dodd, B. & Sime, M. 1968. Teaching Machines and Programmed Instruction. (Harmondsworth: Penguin)

Kennedy, R. H. 1967. ‘Before using Programmed Instruction’ The English Journal, 56 (6), 871–873

Kozlowski, T. 1961. ‘Programmed Teaching’ Financial Analysts Journal, 17 (6), 47–54

Kulik, C.-L., Schwalb, B. & Kulik, J. 1982. ‘Programmed Instruction in Secondary Education: A Meta-analysis of Evaluation Findings’ Journal of Educational Research, 75: 133–138

McDonald, J. K., Yanchar, S. C. & Osguthorpe, R. T. 2005. ‘Learning from Programmed Instruction: Examining Implications for Modern Instructional Technology’ Educational Technology Research and Development, 53 (2), 84–98

Nordberg, R. B. 1965. ‘Teaching machines: six dangers and one advantage’. In Roucek, J. S. (Ed.) Programmed teaching: A symposium on automation in education, pp. 1–8. (New York: Philosophical Library)

Ornstein, J. 1968. ‘Programmed Instruction and Educational Technology in the Language Field: Boon or Failure?’ The Modern Language Journal, 52 (7), 401–410

Post, D. 1972. ‘Up the programmer: How to stop PI from boring learners and strangling results’. Educational Technology, 12 (8), 14–1

Saettler, P. 2004. The Evolution of American Educational Technology. (Greenwich, Conn.: Information Age Publishing)

Skinner, B. F. 1986. ‘Programmed Instruction Revisited’ The Phi Delta Kappan, 68 (2), 103–110

Spolsky, B. 1966. ‘A psycholinguistic critique of programmed foreign language instruction’ International Review of Applied Linguistics in Language Teaching, 4 (1–4), 119–130

Thornbury, S. 2017. Scott Thornbury’s 30 Language Teaching Methods. (Cambridge: Cambridge University Press)

Tucker, C. 1972. ‘Programmed Dictation: An Example of the P.I. Process in the Classroom’ TESOL Quarterly, 6 (1), 61–70

Valdman, A. 1968. ‘Programmed Instruction versus Guided Learning in Foreign Language Acquisition’ Die Unterrichtspraxis / Teaching German, 1 (2), 1–14


[1] Spolsky’s doctoral thesis for the University of Montreal was entitled ‘The psycholinguistic basis of programmed foreign language instruction’.

In my last post, I looked at shortcomings in edtech research, mostly from outside the world of ELT. I made a series of recommendations of ways in which such research could become more useful. In this post, I look at two very recent collections of ELT edtech research. The first of these is Digital Innovations and Research in Language Learning, edited by Mavridi and Saumell, and published this February by the Learning Technologies SIG of IATEFL. I’ll refer to it here as DIRLL. It’s available free to IATEFL LT SIG members, and can be bought for $10.97 as an ebook on Amazon (US). The second is the most recent edition (February 2020) of the Language Learning & Technology journal, which is open access and available here. I’ll refer to it here as LLTJ.

In both of these collections, the focus is not on ‘technology per se, but rather issues related to language learning and language teaching, and how they are affected or enhanced by the use of digital technologies’. However, they are very different kinds of publication. Nobody involved in the production of DIRLL was paid in any way (to the best of my knowledge) and, in keeping with its provenance from a teachers’ association, the collection has ‘a focus on the practitioner as teacher-researcher’. Almost all of the contributing authors are university-based, but they are typically involved more in language teaching than in research. With one exception (a grant from the EU), their work was unfunded.

The triannual LLTJ is funded by two American universities and published by the University of Hawaii Press. The editors and associate editors are well-known scholars in their fields. The journal’s impact factor is high, close to that of the paywalled reCALL (published by Cambridge University Press), which is the highest-ranking journal in the field of CALL. The contributing authors are all university-based, many with a string of published articles (in prestige journals), chapters or books behind them. At least six of the studies were funded by national grant-awarding bodies.

I should begin by making clear that there was much in both collections that I found interesting. However, it was not usually the research itself that I found informative, but the literature review that preceded it. Two of the chapters in DIRLL were not really research, anyway. One was the development of a template for evaluating ICT-mediated tasks in CLIL, another was an advocacy of comics as a resource for language teaching. Both of these were new, useful and interesting to me. LLTJ included a valuable literature review of research into VR in FL learning (but no actual new research). With some exceptions in both collections, though, I felt that I would have been better off curtailing my reading after the reviews. Admittedly, there wouldn’t be much in the way of literature reviews if there were no previous research to report …

It was no surprise to see the learners who were the subjects of this research were overwhelmingly university students. In fact, only one article (about a high-school project in Israel, reported in DIRLL) was not about university students. The research areas focused on reflected this bias towards tertiary contexts: online academic reading skills, academic writing, online reflective practices in teacher training programmes, etc.

In a couple of cases, the selection of experimental subjects seemed plain bizarre. Why, if you want to find out about the extent to which Moodle use can help EAP students become better academic readers (in DIRLL), would you investigate this with a small volunteer cohort of postgraduate students of linguistics, with previous experience of using Moodle and experience of teaching? Is a less representative sample imaginable? Why, if you want to investigate the learning potential of the English File Pronunciation app (reported in LLTJ), which is clearly most appropriate for A1 – B1 levels, would you do this with a group of C1-level undergraduates following a course in phonetics as part of an English Studies programme?

More problematic, in my view, was the small sample size in many of the research projects. The Israeli virtual high school project (DIRLL), previously referred to, started out with only 11 students, but 7 dropped out, primarily, it seems, because of institutional incompetence: ‘the project was probably doomed […] to failure from the start’, according to the author. Interesting as this was as an account of how not to set up a project of this kind, it is simply impossible to draw any conclusions from 4 students about the potential of a VLE for ‘interaction, focus and self-paced learning’. The questionnaire investigating experience of and attitudes towards VR (in DIRLL) was completed by only 7 (out of 36 possible) students and 7 (out of 70+ possible) teachers. As the author acknowledges, ‘no great claims can be made’, but then goes on to note the generally ‘positive attitudes to VR’. Perhaps those who did not volunteer had different attitudes? We will never know. The study of motivational videos in tertiary education (DIRLL) started off with 15 subjects, but 5 did not complete the necessary tasks. The research into L1 use in videoconferencing (LLTJ) started off with 10 experimental subjects, all with the same L1 and similar cultural backgrounds, but there was no data available from 4 of them (because they never switched into L1). The author claims that the paper demonstrates ‘how L1 is used by language learners in videoconferencing as a social semiotic resource to support social presence’ – something which, after reading the literature review, we already knew. But the paper also demonstrates quite clearly how L1 is not used by language learners in videoconferencing as a social semiotic resource to support social presence. In all these cases, it is the participants who did not complete or the potential participants who did not want to take part that have the greatest interest for me.

Unsurprisingly, the LLTJ articles had larger sample sizes than those in DIRLL, but in both collections the length of the research was limited. The production of one motivational video (DIRLL) does not really allow us to draw any conclusions about the development of students’ critical thinking skills. Two four-week interventions do not really seem long enough to me to discover anything about learner autonomy and Moodle (DIRLL). An experiment looking at different feedback modes needs more than two written assignments to reach any conclusions about student preferences (LLTJ).

More research might well be needed to compensate for the short-term projects with small sample sizes, but I’m not convinced that this is always the case. Lacking sufficient information about the content of the technologically-mediated tools being used, I was often unable to reach any conclusions. A gamified Twitter environment was developed in one project (DIRLL), using principles derived from contemporary literature on gamification. The authors concluded that the game design ‘failed to generate interaction among students’, but without knowing a lot more about the specific details of the activity, it is impossible to say whether the problem was the principles or the particular instantiation of those principles. Another project, looking at the development of pronunciation materials for online learning (LLTJ), came to the conclusion that online pronunciation training was helpful – better than none at all. Claims are then made about the value of the method used (called ‘innovative Cued Pronunciation Readings’), but this is not compared to any other method / materials, and only a very small selection of these materials is illustrated. Basically, the reader of this research has no choice but to take things on trust. The study looking at the use of Alexa to help listening comprehension and speaking fluency (LLTJ) cannot really tell us anything about intelligent personal assistants (IPAs) unless we know more about the particular way that Alexa is being used. Here, it seems that the students were using Alexa in an interactive storytelling exercise, but so little information is given about the exercise itself that I didn’t actually learn anything at all. The author’s own conclusion is that the results, such as they are, need to be treated with caution. Nevertheless, he adds that ‘the current study illustrates that IPAs may have some value to foreign language learners’.

This brings me onto my final gripe. To be told that IPAs like Alexa may have some value to foreign language learners is to be told something that I already know. This wasn’t the only time this happened during my reading of these collections. I appreciate that research cannot always tell us something new and interesting, but a little more often would be nice. I ‘learnt’ that goal-setting plays an important role in motivation and that gamification can boost short-term motivation. I ‘learnt’ that reflective journals can take a long time for teachers to look at, and that reflective video journals are also very time-consuming. I ‘learnt’ that peer feedback can be very useful. I ‘learnt’ from two papers that intercultural difficulties may be exacerbated by online communication. I ‘learnt’ that text-to-speech software is pretty good these days. I ‘learnt’ that multimodal literacy can, most frequently, be divided up into visual and auditory forms.

With the exception of a piece about online safety issues (DIRLL), I did not once encounter anything which hinted that there may be problems in using technology. No mention of the use to which student data might be put. No mention of the costs involved (except for the observation that many students would not be happy to spend money on the English File Pronunciation app) or the cost-effectiveness of digital ‘solutions’. No consideration of the institutional (or other) pressures (or the reasons behind them) that may be applied to encourage teachers to ‘leverage’ edtech. No suggestion that a zero-tech option might actually be preferable. In both collections, the language used is invariably positive, or, at least, technology is associated with positive things: uncovering the possibilities, promoting autonomy, etc. Even if the focus of these publications is not on technology per se (although I think this claim doesn’t really stand up to close examination), it’s a little disingenuous to claim (as LLTJ does) that the interest is in how language learning and language teaching is ‘affected or enhanced by the use of digital technologies’. The reality is that the overwhelming interest is in potential enhancements, not potential negative effects.

I have deliberately not mentioned any names in referring to the articles I have discussed. I would, though, like to take my hat off to the editors of DIRLL, Sophia Mavridi and Vicky Saumell, for attempting to do something a little different. I think that Alicia Artusi and Graham Stanley’s article (DIRLL) about CPD for ‘remote’ teachers was very good and should interest the huge number of teachers working online. Chryssa Themelis and Julie-Ann Sime have kindled my interest in the potential of comics as a learning resource (DIRLL). Yu-Ju Lan’s article about VR (LLTJ) is surely the most up-to-date, go-to article on this topic. There were other pieces, or parts of pieces, that I liked, too. But, to me, it’s clear that what is needed is not so much ‘more research’ as (1) better and more critical research, and (2) more digestible summaries of research.

Colloquium

At the beginning of March, I’ll be going to Cambridge to take part in a Digital Learning Colloquium (for more information about the event, see here). One of the questions that will be explored is how research might contribute to the development of digital language learning. In this, the first of two posts on the subject, I’ll be taking a broad overview of the current state of play in edtech research.

I try my best to keep up to date with research. Of the main journals, there are Language Learning and Technology, which is open access; CALICO, which offers quite a lot of open access material; and ReCALL, which is the most restricted of the three in terms of access. But there is something deeply frustrating about most of this research, and this is what I want to explore in these posts. More often than not, research articles end with a call for more research. And more often than not, I find myself saying ‘Please, no, not more research like this!’

First, though, I would like to turn to a more reader-friendly source of research findings. Systematic reviews are, basically, literature reviews which can save people like me from having to plough through endless papers on similar subjects, all of which contain the same (or similar) literature review in the opening sections. If only there were more of them. Others agree with me: the conclusion of one systematic review of learning and teaching with technology in higher education (Lillejord et al., 2018) was that more systematic reviews were needed.

Last year saw the publication of a systematic review of research on artificial intelligence applications in higher education (Zawacki-Richter, et al., 2019) which caught my eye. The first thing that struck me about this review was that ‘out of 2656 initially identified publications for the period between 2007 and 2018, 146 articles were included for final synthesis’. In other words, only just over 5% of the research was considered worthy of inclusion.

The review did not paint a very pretty picture of the current state of AIEd research. As the second part of the title of this review (‘Where are the educators?’) makes clear, the research, taken as a whole, showed a ‘weak connection to theoretical pedagogical perspectives’. This is not entirely surprising. As Bates (2019) has noted: ‘since AI tends to be developed by computer scientists, they tend to use models of learning based on how computers or computer networks work (since of course it will be a computer that has to operate the AI). As a result, such AI applications tend to adopt a very behaviourist model of learning: present / test / feedback.’ More generally, it is clear that technology adoption (and research) is being driven by technology enthusiasts, with insufficient expertise in education. The danger is that edtech developers ‘will simply ‘discover’ new ways to teach poorly and perpetuate erroneous ideas about teaching and learning’ (Lynch, 2017).

This, then, is the first item on my checklist of things that, collectively, researchers need to do to improve the value of their work. The rest of the list is drawn from observations made mostly, but not exclusively, by the authors of systematic reviews, and mostly comes from reviews of general edtech research. In the next blog post, I’ll look more closely at a recent collection of ELT edtech research (Mavridi & Saumell, 2020) to see how it measures up.

1 Make sure your research is adequately informed by educational research outside the field of edtech

Unproblematised behaviourist assumptions about the nature of learning are all too frequent. References to learning styles are still fairly common. The skill most frequently investigated in the context of edtech is critical thinking (Sosa Neira, et al., 2017), but it is rarely defined and almost never problematised, despite a broad literature that questions the construct.

2 Adopt a sceptical attitude from the outset

Know your history. Decades of technological innovation in education have shown precious little in the way of educational gains and, more than anything else, have taught us that we need to be sceptical from the outset. Enthusiasm and praise directed towards ‘virtual education’, ‘school 2.0’, ‘e-learning’ and the like (Selwyn, 2014: vii) are indications that the lessons of the past have not been sufficiently absorbed (Levy, 2016: 102). The phrase ‘exciting potential’, for example, should be banned from all edtech research. See, for example, a ‘state-of-the-art analysis of chatbots in education’ (Winkler & Söllner, 2018), which has nothing to conclude but ‘exciting potential’. Potential is fine (indeed, it is perhaps the only thing that research can unambiguously demonstrate – see section 3 below), but can we try to be a little more grown-up about things?

3 Know what you are measuring

Measuring learning outcomes is tricky, to say the least, but it’s understandable that researchers should try to focus on them. Unfortunately, ‘the vast array of literature involving learning technology evaluation makes it challenging to acquire an accurate sense of the different aspects of learning that are evaluated, and the possible approaches that can be used to evaluate them’ (Lai & Bower, 2019). Metrics such as student grades are hard to interpret, not least because of the large number of variables and the danger of many things being conflated in one score. Equally, or possibly even more, problematic are self-reporting measures, which are rarely robust. It seems that surveys are the most widely used instrument in qualitative research (Sosa Neira, et al., 2017), but these will tell us little or nothing when used for short-term interventions (see point 5 below).

4 Ensure that the sample size is big enough to mean something

In most of the research into digital technology in education that was analysed in a literature review carried out for the Scottish government (ICF Consulting Services Ltd, 2015), there were only ‘small numbers of learners or teachers or schools’.

5 Privilege longitudinal studies over short-term projects

The Scottish government literature review (ICF Consulting Services Ltd, 2015) also noted that ‘most studies that attempt to measure any outcomes focus on short and medium term outcomes’. The fact that the use of a particular technology has some sort of impact over the short or medium term tells us very little of value. Unless there is very good reason to suspect the contrary, we should assume that it is a novelty effect that has been captured (Levy, 2016: 102).

6 Don’t forget the content

The starting point of much edtech research is the technology, but most edtech, whether it’s a flashcard app or a full-blown Moodle course, has content. Research reports rarely give details of this content, assuming perhaps that it’s just fine, and all that’s needed is a little tech to ‘present learners with the ‘right’ content at the ‘right’ time’ (Lynch, 2017). It’s a foolish assumption. Take a random educational app from the Play Store, a random MOOC or whatever, and the chances are you’ll find it’s crap.

7 Avoid anecdotal accounts of technology use in quasi-experiments as the basis of a ‘research article’

Control (i.e. technology-free) groups may not always be possible, but without them we’re unlikely to learn much from a single study. What would, however, be extremely useful would be a large, collated collection of such action-research projects, using the same or similar technology, in a variety of settings. There is a marked absence of this kind of work.

8 Enough already of higher education contexts

Researchers typically work in universities, where they have captive students on whom they can carry out research. But we have a problem here. The systematic review of Lundin et al. (2018), for example, found that ‘studies on flipped classrooms are dominated by studies in the higher education sector’ (besides lacking anchors in learning theory or instructional design). With some urgency, primary and secondary contexts need to be investigated in more detail, not just regarding flipped learning.

9 Be critical

Very little edtech research considers the downsides of edtech adoption. Online safety, privacy and data security are hardly peripheral issues, especially with younger learners. Ignoring them won’t make them go away.

More research?

So do we need more research? For me, two things stand out. We might benefit more from, firstly, a different kind of research, and, secondly, more syntheses of the work that has already been done. Although I will probably continue to dip into the pot-pourri of articles published in the main CALL journals, I’m looking forward to a change at the CALICO journal. From September of this year, one issue a year will be thematic, with a lead article written by established researchers which will ‘first discuss in broad terms what has been accomplished in the relevant subfield of CALL. It should then outline which questions have been answered to our satisfaction and what evidence there is to support these conclusions. Finally, this article should pose a “soft” research agenda that can guide researchers interested in pursuing empirical work in this area’. This will be followed by two or three empirical pieces that ‘specifically reflect the research agenda, methodologies, and other suggestions laid out in the lead article’.

But I think I’ll still have a soft spot for some of the other journals that are coyer about their impact factor and that can be freely accessed. How else would I discover (it would be too mean to give the references here) that ‘the effective use of new technologies improves learners’ language learning skills’? Presumably, the ineffective use of new technologies has the opposite effect? Or that ‘the application of modern technology represents a significant advance in contemporary English language teaching methods’?

References

Bates, A. W. (2019). Teaching in a Digital Age Second Edition. Vancouver, B.C.: Tony Bates Associates Ltd. Retrieved from https://pressbooks.bccampus.ca/teachinginadigitalagev2/

ICF Consulting Services Ltd (2015). Literature Review on the Impact of Digital Technology on Learning and Teaching. Edinburgh: The Scottish Government. https://dera.ioe.ac.uk/24843/1/00489224.pdf

Lai, J.W.M. & Bower, M. (2019). How is the use of technology in education evaluated? A systematic review. Computers & Education, 133(1), 27-42. Elsevier Ltd. Retrieved January 14, 2020 from https://www.learntechlib.org/p/207137/

Levy, M. (2016). Researching in language learning and technology. In Farr, F. & Murray, L. (Eds.) The Routledge Handbook of Language Learning and Technology. Abingdon, Oxon.: Routledge. pp. 101–114

Lillejord S., Børte K., Nesje K. & Ruud E. (2018). Learning and teaching with technology in higher education – a systematic review. Oslo: Knowledge Centre for Education https://www.forskningsradet.no/siteassets/publikasjoner/1254035532334.pdf

Lundin, M., Bergviken Rensfeldt, A., Hillman, T. et al. (2018). Higher education dominance and siloed knowledge: a systematic review of flipped classroom research. International Journal of Educational Technology in Higher Education 15, 20. doi:10.1186/s41239-018-0101-6

Lynch, J. (2017). How AI Will Destroy Education. Medium, November 13, 2017. https://buzzrobot.com/how-ai-will-destroy-education-20053b7b88a6

Mavridi, S. & Saumell, V. (Eds.) (2020). Digital Innovations and Research in Language Learning. Faversham, Kent: IATEFL

Selwyn, N. (2014). Distrusting Educational Technology. New York: Routledge

Sosa Neira, E. A., Salinas, J. and de Benito Crosetti, B. (2017). Emerging Technologies (ETs) in Education: A Systematic Review of the Literature Published between 2006 and 2016. International Journal of Emerging Technologies in Learning (iJET), 12(5). https://online-journals.org/index.php/i-jet/article/view/6939

Winkler, R. & Söllner, M. (2018): Unleashing the Potential of Chatbots in Education: A State-Of-The-Art Analysis. In: Academy of Management Annual Meeting (AOM). Chicago, USA. https://www.alexandria.unisg.ch/254848/1/JML_699.pdf

Zawacki-Richter, O., Bond, M., Marin, V. I. and Gouverneur, F. (2019). Systematic review of research on artificial intelligence applications in higher education – where are the educators? International Journal of Educational Technology in Higher Education, 2019

by Philip Kerr & Andrew Wickham

from IATEFL 2016 Birmingham Conference Selections (ed. Tania Pattison) Faversham, Kent: IATEFL pp. 75 – 78

ELT publishing, international language testing and private language schools are all industries: products are produced, bought and sold for profit. English language teaching (ELT) is not. It is an umbrella term that is used to describe a range of activities, some of which are industries, and some of which (such as English teaching in high schools around the world) might better be described as public services. ELT, like education more generally, is, nevertheless, often referred to as an ‘industry’.

Education in a neoliberal world

The framing of ELT as an industry is both a reflection of how we understand the term and a force that shapes our understanding. Associated with the idea of ‘industry’ is a constellation of other ideas and words (such as efficacy, productivity, privatization, marketization, consumerization, digitalization and globalization) which become a part of ELT once it is framed as an industry. Repeated often enough, ‘ELT as an industry’ can become a metaphor that we think and live by. Those activities that fall under the ELT umbrella, but which are not industries, become associated with the desirability of industrial practices through such discourse.

The shift from education, seen as a public service, to educational managerialism (where education is seen in industrial terms with a focus on efficiency, free market competition, privatization and a view of students as customers) can be traced to the 1980s and 1990s (Gewirtz, 2001). In 1999, under pressure from developed economies, the General Agreement on Trade in Services (GATS) transformed education into a commodity that could be traded like any other in the marketplace (Robertson, 2006). The global industrialisation and privatization of education continues to be promoted by transnational organisations (such as the World Bank and the OECD), well-funded free-market think-tanks (such as the Cato Institute), philanthro-capitalist foundations (such as the Gates Foundation) and educational businesses (such as Pearson) (Ball, 2012).

Efficacy and learning outcomes

Managerialist approaches to education require educational products and services to be measured and compared. In ELT, the most visible manifestation of this requirement is the current ubiquity of learning outcomes. Contemporary coursebooks are full of ‘can-do’ statements, although these are not necessarily of any value to anyone. Examples from one unit of one best-selling course include ‘Now I can understand advice people give about hotels’ and ‘Now I can read an article about unique hotels’ (McCarthy et al. 2014: 74). However, in a world where accountability is paramount, they are deemed indispensable. The problem from a pedagogical perspective is that teaching input does not necessarily equate with learning uptake. Indeed, there is no reason why it should.

Drawing on the Common European Framework of Reference for Languages (CEFR) for inspiration, new performance scales have emerged in recent years. These include the Cambridge English Scale and the Pearson Global Scale of English. Moving away from the broad six categories of the CEFR, such scales permit finer-grained measurement and we now see individual vocabulary and grammar items tagged to levels. Whilst such initiatives undoubtedly support measurements of efficacy, the problem from a pedagogical perspective is that they assume that language learning is linear and incremental, as opposed to complex and jagged.

Given the importance accorded to the measurement of language learning (or what might pass for language learning), it is unsurprising that attention is shifting towards the measurement of what is probably the most important factor impacting on learning: the teaching. Teacher competency scales have been developed by Cambridge Assessment, the British Council and EAQUALS (Evaluation and Accreditation of Quality Language Services), among others.

The backwash effects of the deployment of such scales are yet to be fully experienced, but the likely increase in the perception of both language learning and teacher learning as the synthesis of granularised ‘bits of knowledge’ is cause for concern.

Digital technology

Digital technology may offer advantages to both English language teachers and learners, but its rapid growth in language learning is the result, primarily but not exclusively, of the way it has been promoted by those who stand to gain financially. In education, generally, and in English language teaching, more specifically, advocacy of the privatization of education is always accompanied by advocacy of digitalization. The global market for digital English language learning products was reported to be $2.8 billion in 2015 and is predicted to reach $3.8 billion by 2020 (Ambient Insight, 2016).

In tandem with the increased interest in measuring learning outcomes, there is fierce competition in the market for high-stakes examinations, and these are increasingly digitally delivered and marked. In the face of this competition and in a climate of digital disruption, companies like Pearson and Cambridge English are developing business models of vertical integration where they can provide and sell everything from placement testing, to courseware (either print or delivered through an LMS), teaching, assessment and teacher training. Huge investments are being made in pursuit of such models. Pearson, for example, recently bought GlobalEnglish and Wall Street English, and set up a partnership with Busuu, thus covering all aspects of language learning from resources provision and publishing to off- and online training delivery.

As regards assessment, the most recent adult coursebook from Cambridge University Press (in collaboration with Cambridge English Language Assessment), ‘Empower’ (Doff et al., 2015), sells itself on a combination of course material with integrated, validated assessment.

Besides its potential for scalability (and therefore greater profit margins), the appeal (to some) of platform-delivered English language instruction is that it facilitates assessment that is much finer-grained and actionable in real time. Digitization and testing go hand in hand.

Few English language teachers have been unaffected by the move towards digital. In the state sectors, large-scale digitization initiatives (such as the distribution of laptops for educational purposes, the installation of interactive whiteboards, the move towards blended models of instruction or the move away from printed coursebooks) are becoming commonplace. In the private sectors, online (or partially online) language schools are taking market share from the traditional bricks-and-mortar institutions.

These changes have entailed modifications to the skill-sets that teachers need to have. Two announcements at this conference reflect this shift. First of all, Cambridge English launched their ‘Digital Framework for Teachers’, a matrix of six broad competency areas organised into four levels of proficiency. Secondly, Aqueduto, the Association for Quality Education and Training Online, was launched, setting itself up as an accreditation body for online or blended teacher training courses.

Teachers’ pay and conditions

In the United States, and likely soon in the UK, the move towards privatization is accompanied by an overt attack on teachers’ unions, rights, pay and conditions (Selwyn, 2014). As English language teaching in both public and private sectors is commodified and marketized, it is no surprise to find that the drive to bring down costs has a negative impact on teachers worldwide. Gwynt (2015), for example, catalogues cuts in funding, large-scale redundancies, a narrowing of the curriculum, intensified workloads (including the need to comply with ‘quality control measures’), the deskilling of teachers, dilapidated buildings, minimal resources and low morale in an ESOL department in one British further education college. In France, a large-scale study by Wickham, Cagnol, Wright and Oldmeadow (Linguaid, 2015; Wright, 2016) found that EFL teachers in the very competitive private sector typically had multiple employers, limited or no job security, limited sick pay and holiday pay, very little training and low hourly rates that were deteriorating. One of the principal drivers of the pressure on salaries is the rise of online training delivery through Skype and other online platforms, using offshore teachers in low-cost countries such as the Philippines. This type of training represents 15% in value and up to 25% in volume of all language training in the French corporate sector and is developing fast in emerging countries. These examples are illustrative of a broad global trend.

Implications

Given the current climate, teachers will benefit from closer networking with fellow professionals in order, not least, to be aware of the rapidly changing landscape. It is likely that they will need to develop and extend their skill sets (especially their online skills and visibility and their specialised knowledge), to differentiate themselves from competitors and to be able to demonstrate that they are in tune with current demands. More generally, it is important to recognise that current trends have yet to run their full course. Conditions for teachers are likely to deteriorate further before they improve. More than ever before, teachers who want to have any kind of influence on the way that marketization and industrialization are shaping their working lives will need to do so collectively.

References

Ambient Insight. 2016. The 2015-2020 Worldwide Digital English Language Learning Market. http://www.ambientinsight.com/Resources/Documents/AmbientInsight_2015-2020_Worldwide_Digital_English_Market_Sample.pdf

Ball, S. J. 2012. Global Education Inc. Abingdon, Oxon.: Routledge

Doff, A., Thaine, C., Puchta, H., Stranks, J. and P. Lewis-Jones 2015. Empower. Cambridge: Cambridge University Press

Gewirtz, S. 2001. The Managerial School: Post-welfarism and Social Justice in Education. Abingdon, Oxon.: Routledge

Gwynt, W. 2015. ‘The effects of policy changes on ESOL’. Language Issues 26 / 2: 58 – 60

McCarthy, M., McCarten, J. and H. Sandiford 2014. Touchstone 2 Student’s Book Second Edition. Cambridge: Cambridge University Press

Linguaid, 2015. Le Marché de la Formation Langues à l’Heure de la Mondialisation. Guildford: Linguaid

Robertson, S. L. 2006. ‘Globalisation, GATS and trading in education services.’ published by the Centre for Globalisation, Education and Societies, University of Bristol, Bristol BS8 1JA, UK at http://www.bris.ac.uk/education/people/academicStaff/edslr/publications/04slr

Selwyn, N. 2014. Distrusting Educational Technology. New York: Routledge

Wright, R. 2016. ‘My teacher is rich … or not!’ English Teaching Professional 103: 54 – 56