Archive for the ‘research’ Category

My attention was recently drawn (thanks to Grzegorz Śpiewak) to a free publication from OUP. It’s called ‘Multimodality in ELT: Communication skills for today’s generation’ (Donaghy et al., 2023) and it’s what OUP likes to call a ‘position paper’: it offers ‘evidence-based recommendations to support educators and learners in their future success’. Its topic is multimodal (or multimedia) literacy, a term used to describe learners’ ability ‘not just to understand but to create multimedia messages, integrating text with images, sounds and video to suit a variety of communicative purposes and reach a range of target audiences’ (Dudeney et al., 2013: 13).

Grzegorz noted the paper’s ‘positively charged, unhedged language to describe what is arguably a most complex problem area’. As an example, he takes the summary of the first section and circles the questionable and / or unsubstantiated claims. It’s just one example from a text that reads more like a ‘manifesto’ than a balanced piece of evidence-reporting. The verb ‘need’ (in the sense of ‘must’, as in ‘teachers / learners / students need to …’) appears no fewer than 57 times. The modal ‘should’ (as in ‘teachers / learners / students should …’) clocks up 27 appearances.
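(Counts like these are easy to check, incidentally. Here is a minimal Python sketch, assuming the paper’s text has been extracted to a plain-text file – the filename is invented for illustration – and bearing in mind that a crude word count, unlike a careful reading, cannot distinguish the verb ‘need’ from the noun.)

```python
import re

# Assumes the paper's text has been extracted to a plain-text file;
# the filename is hypothetical.
with open("multimodality_position_paper.txt", encoding="utf-8") as f:
    text = f.read().lower()

# Whole-word counts for the two items (crude: it can't tell the
# verb 'need' from the noun, and ignores inflected forms).
for word in ("need", "should"):
    count = len(re.findall(rf"\b{word}\b", text))
    print(word, count)
```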

What is it, then, that we all need to do? Essentially, the argument is that English language teachers need to develop their students’ multimodal literacy by incorporating more multimodal texts and tasks (videos and images) in all their lessons. The main reason for this appears to be that, in today’s digital age, communication is more often multimodal than monomodal (i.e. written or spoken text alone). As an addendum, we are told that multimodal classroom practices are a ‘fundamental part of inclusive teaching’ in classes with ‘learners with learning difficulties and disabilities’. In case you thought it ironic that such an argument would be put forward in a flat, monomodal PDF, OUP also offers the same content as a multimodal ‘course’ with text, video and interactive tasks.

It might all be pretty persuasive, if it weren’t so overstated. Here are a few of the complex problem areas.

What exactly is multimodal literacy?

We are told in the paper that there are five modes of communication: linguistic, visual, aural, gestural and spatial. Multimodal literacy consists, apparently, of the ability

  • to ‘view’ multimodal texts (noticing the different modes, and, for basic literacy, responding to the text on an emotional level, and, for more advanced literacy, responding to it critically)
  • to ‘represent’ ideas and information in a multimodal way (posters, storyboards, memes, etc.)

I find this frustratingly imprecise. First: ‘viewing’. Noticing modes and reacting emotionally to a multimedia artefact do not take anyone very far on the path towards multimodal literacy, even if they are necessary first steps. It is only when we move towards a critical response (understanding the relative significance of different modes and problematizing our initial emotional response) that we can really talk about literacy (see the ‘critical literacy’ of Pegrum et al., 2018). We’re basically talking about critical thinking, a concept as vague and contested as any out there. Responding to a multimedia artefact ‘critically’ can mean more or less anything and everything.

Next: ‘representing’. What is the relative importance of ‘viewing’ and ‘representing’? What kinds of representations (artefacts) are important, and which are not? Presumably, they are not all of equal importance. And, whichever artefact is chosen as the focus, a whole range of technical skills will be needed to produce the artefact in question. So, precisely what kind of representing are we talking about?

Priorities in the ELT classroom

The Oxford authors write that ‘the main focus as English language teachers should obviously be on language’. I take this to mean that the ‘linguistic mode’ of communication should be our priority. This seems reasonable, since it’s hard to imagine any kind of digital literacy without some reading skills preceding it. But, again, the question of relative importance rears its ugly head. The time available for language learning and teaching is always limited. Time that is devoted to the visual, aural, gestural or spatial modes of communication is time that is not devoted to the linguistic mode.

There are also, presumably, some language teaching contexts (I’m thinking in particular of some adult, professional contexts) where the teaching of multimodal literacy would be completely inappropriate.

Multimodal literacy is a form of digital literacy. Writers about digital literacies like to say things like ‘digital literacies are as important to language learning as […] reading and writing skills’ or that it is ‘crucial for language teaching to […] encompass the digital literacies which are increasingly central to learners’ […] lives’ (Pegrum et al., 2022). The question then arises: how important, in relative terms, are the various digital literacies? Where does multimodal literacy stand?

The Oxford authors summarise their view as follows:

There is a need for a greater presence of images, videos, and other multimodal texts in ELT coursebooks and a greater focus on using them as a starting point for analysis, evaluation, debate, and discussion.

My question to them is: greater than what? Typical contemporary courseware is already a whizzbang multimodal jamboree. There seem to me to be more pressing concerns with most courseware than supplementing it with visuals or clickables.

Evidence

The Oxford authors’ main interest is unquestionably in the use of video. They recommend extensive video viewing outside the classroom and digital story-telling activities inside. I’m fine with that, so long as classroom time isn’t wasted on getting to grips with a particular digital tool (e.g. a video editor, which, a year from now, will have been replaced by another video editor).

I’m fine with this because it involves learners doing meaningful things with language, and there is ample evidence to indicate that a good way to acquire language is to do meaningful things with it. However, I am less than convinced by the authors’ claim that such activities will strengthen ‘active and critical viewing, and effective and creative representing’. My scepticism derives firstly from my unease about the vagueness of the terms ‘viewing’ and ‘representing’, but I have bigger reservations.

There is much debate about the extent to which general critical thinking can be taught. General critical viewing has the same problems. I can apply critical viewing skills to some topics, because I have reasonable domain knowledge. In my case, it’s domain knowledge that activates my critical awareness of rhetorical devices, layout, choice of images and pull-out quotes, multimodal add-ons and so on. But without the domain knowledge, my critical viewing skills are likely to remain uncritical.

Perhaps most importantly of all, there is a lack of reliable research about ‘the extent to which language instructors should prioritize multimodality in the classroom’ (Kessler, 2022: 552). There are those, like the authors of this paper, who advocate for a ‘strong version’ of multimodality. Others go for a ‘weak version’ ‘in which non-linguistic modes should only minimally support or supplement linguistic instruction’ (Kessler, 2022: 552). And there are others who argue that multimodal activities may actually detract from or stifle L2 development (e.g. Manchón, 2017). In the circumstances, all the talk of ‘needs to’ and ‘should’ is more than a little premature.

Assessment

The authors of this Oxford paper rightly note that, if we are to adopt a multimodal approach, ‘it is important that assessment requirements take into account the multimodal nature of contemporary communication’. The trouble is that, to my knowledge, there are no widely used assessments that do this (Oxford’s own tests included). English language reading tests (like the Oxford Test of English) measure the comprehension of flat printed texts, as a proxy for reading skills. This is not the place to question the validity of such reading tests. Suffice it to say that ‘little consensus exists as to what [the ability to read another language] entails, how it develops, and how progress in development can be monitored and fostered’ (Koda, 2021).

No doubt there are many people beavering away at trying to figure out how to assess multimodal literacy, but the challenges they face are not negligible. Twenty-first century digital (multimodal) literacy includes such things as knowing how to change the language of an online text to your own (and vice versa), how to bring up subtitles, how to convert written text to speech, how to generate audio scripts. All such skills may well be very valuable in this digital age, and all of them limit the need to learn another language.

Final thoughts

I can’t help but wonder why Oxford University Press should bring out a ‘position paper’ that is so at odds with their own publishing and assessing practices, and so at odds with the paper recently published in their flagship journal, ELT Journal. There must be some serious disconnect between the Marketing Department, which commissions papers such as these, and other departments within the company. Why did they allow such overstatement, when it is well known that many ELT practitioners (i.e. their customers) have the view that ‘linguistically based forms are (and should be) the only legitimate form of literacy’ (Choi & Yi, 2016)? Was it, perhaps, the second part of the title of this paper that appealed to the marketing people (‘Communication Skills for Today’s Generation’) and they just thought that ‘multimodality’ had a cool, contemporary ring to it? Or does the use of ‘multimodality’ help the marketing of courses like Headway and English File with additional multimedia bells and whistles? As I say, I can’t help but wonder.

If you want to find out more, I’d recommend the ELT Journal article, which you can access freely without giving your details to the marketing people.

Finally, it is perhaps time to question the logical connection between the fact that much reading these days is multimodal and the idea that multimodal literacy should be taught in a language classroom. Much reading that takes place online, especially with multimodal texts, could be called ‘hyper reading’, characterised as ‘sort of a brew of skimming and scanning on steroids’ (Baron, 2021: 12). Is this the kind of reading that should be promoted with language learners? Baron (2021) argues that the answer to this question depends on the level of reading skills of the learner. The lower the level, the less beneficial it is likely to be. But for ‘accomplished readers with high levels of prior knowledge about the topic’, hyper-reading may be a valuable approach. For many language learners, monomodal deep reading, which demands ‘slower, time-demanding cognitive and reflective functions’ (Baron, 2021: x – xi) may well be much more conducive to learning.

References

Baron, N. S. (2021) How We Read Now. Oxford: Oxford University Press

Choi, J. & Yi, Y. (2016) Teachers’ Integration of Multimodality into Classroom Practices for English Language Learners. TESOL Journal, 7 (2): 304 – 327

Donaghy, K. (author), Karastathi, S. (consultant), Peachey, N. (consultant), (2023). Multimodality in ELT: Communication skills for today’s generation [PDF]. Oxford University Press. https://elt.oup.com/feature/global/expert/multimodality (registration needed)

Dudeney, G., Hockly, N. & Pegrum, M. (2013) Digital Literacies. Harlow: Pearson Education

Kessler, M. (2022) Multimodality. ELT Journal, 76 (4): 551 – 554

Koda, K. (2021) Assessment of Reading. https://doi.org/10.1002/9781405198431.wbeal0051.pub2

Manchón, R. M. (2017) The Potential Impact of Multimodal Composition on Language Learning. Journal of Second Language Writing, 38: 94 – 95

Pegrum, M., Dudeney, G. & Hockly, N. (2018) Digital Literacies Revisited. The European Journal of Applied Linguistics and TEFL, 7 (2): 3 – 24

Pegrum, M., Hockly, N. & Dudeney, G. (2022) Digital Literacies 2nd Edition. New York: Routledge

I’ve written about the relationship (or, rather, the lack of one) between language teachers and language teaching research before. I’m talking about the kind of research that is primarily of the ‘what-works’ variety, since that is likely to be of most relevance to teachers. It’s the kind of research that asks questions like: can correction be beneficial to language learners? Or: can spaced repetition be helpful in vocabulary acquisition? Whether teachers find this relevant or not, there is ample evidence that the vast majority rarely look at it (Borg, 2009).

See here, for example, for a discussion of calls from academic researchers for more dialogue between researchers and teachers. The desire, on the part of researchers, for teachers to engage more (or even a little) with research continues to grow, as two examples show. The first is the development of TESOLgraphics, which aims to make research ‘easy to read and understand to ESL, EFL, EAP, ESP, ESOL, EAL, TEFL teachers’ by producing infographic summaries. The second is a proposed special issue of the journal ‘System’ devoted to ‘the nexus of research and practice in and for language teacher education’, which hopes to find ways of promoting more teacher engagement with research. Will either of these initiatives have much impact? I doubt it, and to explain why, I need to take you on a little detour.

The map and the territory

Riffing off an ultra-short story by Jorge Luis Borges (‘On Exactitude in Science’, 1946), the corpus linguist Michael Stubbs (2013) wrote a piece entitled ‘Of Exactitude in Linguistics’, which marked his professional retirement. In it, he described a world where

the craft of Descriptive Linguistics attained such Perfection that the Transcription of a single short Conversation covered the floor of an entire University seminar room, and the Transcription of a Representative Sample of a single Text-Type covered the floor area of a small department to a depth of several feet. In the course of time, especially after the development of narrow phonetic transcription with intonational and proxemic annotation, even these extensive Records were found somehow wanting, and with the advent of fully automatic voice-to-orthography transcription, the weight of the resulting Text Collections threatened severe structural damage to University buildings.

As with all humour, there’s more than a grain of truth behind this Borgesian fantasy. The joke picks up on what is known as the Richardson Effect, named after a British mathematician who noted that the length of the coastline of Great Britain varies according to the size of the units that are used to measure it – the smaller the unit, the longer the coastline. But at what point does increasing exactitude cease to tell us anything of value?
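The effect is easy to see with a toy example. The sketch below (mine, not Richardson’s – Python, with the Koch curve standing in for a coastline) shows the measured length growing without limit as the measuring unit shrinks:

```python
# Toy illustration of the Richardson Effect, using the Koch curve
# as a stand-in coastline: each time the ruler shrinks by a factor
# of 3, the measured length grows by a factor of 4/3.
for n in range(6):
    ruler = (1 / 3) ** n       # size of the measuring unit
    length = (4 / 3) ** n      # measured length at that scale
    print(f"ruler = {ruler:.5f}   measured length = {length:.3f}")
```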

Both Borges and Lewis Fry Richardson almost certainly knew Lewis Carroll’s novel ‘Sylvie and Bruno Concluded’ (1893) which features a map that has the scale of a mile to a mile. This extraordinarily accurate map is, however, never used, since it is too large to spread out. The cost of increasing exactitude is practical usefulness.

The map of language

Language is rather like a coastline when it comes to drilling down in order to capture its features with smaller and smaller units of measurement. Before very long, you are forced into making decisions about the variety of the language and the contexts of use that you are studying. Precisely what kind of English are you measuring? At some point, you get down to the level of idiolect, but idiolects can be broken down further as they vary depending on the contexts of use. The trouble, of course, is that idiolects tell us little that is of value about the much broader ‘language’ that you set out to measure in the first place. The linguistic map obscures the linguistic terrain.

In ultra close-up, we can no longer distinguish one named language from another just by using linguistic criteria (Makoni & Pennycook, 2007: 1). Extending this logic further, it makes little sense even to talk about named languages like English, to talk about first or second languages, about native speakers or about language errors. The close-up view requires us to redefine the thing – language – that we set out to define and describe. English is no longer a fixed and largely territorial system owned by native speakers, but a dynamic, complex, social, deterritorialized practice owned by its users (May, 2013; Meier, 2017; Li Wei, 2018). In this view, both the purpose and the consequence of describing language in this way are to get away from the social injustice of native-speaker norms, of accentism, and of linguistic prejudice.

A load of Ballungs

Language is a fuzzy and context-dependent concept. It is ‘too multifaceted to be measured on a single metric without loss of meaning, and must be represented by a matrix of indices or by several different measures depending on which goals and values are at play’ (Tal, 2020). In the philosophy of measurement, concepts like these are known as ‘Ballung’ concepts (Cartwright & Bradburn, 2011). Many of the things studied by researchers into language learning are also ‘Ballung’ concepts. Language proficiency and language acquisition are ‘Ballung’ concepts. So are reading and listening skills, mediation, metacognition and motivation. Critical thinking and digital literacies … the list goes on. Research into all these areas is characterised by multiple and ever-more detailed taxonomies, as researchers struggle to define precisely what it is that they are studying. It is in the nature of most academic study that it strives towards exactitude by becoming more and more specialised in its analysis of ‘ever more particular fractions of our world’ (Pardo-Guerra, 2022: 17).

But the perspective on language of Makoni, Pennycook, Li Wei and others is not what we might call the ‘canonical view’, the preferred viewpoint of the majority of people in apprehending the reality of the outside world (Palmer, 1981). Canonical views of language are much less close-up and allow for the unproblematic differentiation of one language from another. Canonical views – whether of social constructs like language or everyday objects like teacups or birds – become canonical because they are more functional for many people for everyday purposes than less familiar perspectives. If you want to know how far it is to walk from A to B along a coastal footpath, the more approximate measure of metres is more useful than one that counts every nook and cranny in microns. Canonical views can, of course, change over time – if the purposes to which they are put change, too.

Language teaching research

There is a clear preference in academia for quantitative, empirical research where as many variables as possible are controlled. Research into language teaching is no different. It’s not enough to ask, in general terms, about the impact on learning of correction or spaced repetition. ‘What works’ is entirely context-dependent (Al-Hoorie et al., 2023: 278). Since all languages, language learners and language learning contexts are ‘ultimately different’ (Widdowson, 2023: 397), there’s never any end to the avenues that researchers can explore: it is a ‘self-generating academic area of inquiry’ (ibid.). So we can investigate the impact of correction on the writing (as opposed to the speaking) of a group of Spanish (as opposed to another nationality) university students (as opposed to another age group) in an online setting (as opposed to face-to-face) where the correction is delayed (as opposed to immediate) and delivered by WhatsApp (as opposed to another medium) (see, for example, Murphy et al., 2023). We could carry on playing around with the variables for as long as we like – this kind of research has already been going on for decades.

When it comes to spaced repetition, researchers need to consider the impact of different algorithms (e.g. the length of the spaces) on different kinds of learners (age, level, motivation, self-regulation, etc.) in their acquisition of different kinds of lexical items (frequency, multi-word units, etc.) and how these items are selected and grouped, the nature of this acquisition (e.g. is it for productive use or is it purely recognition?). And so on (see the work of Tatsuya Nakata, for example).
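To make concrete what ‘different algorithms’ means here, a deliberately minimal scheduler sketch in Python follows. It is not any researcher’s actual algorithm – real systems (SM-2, Leitner boxes, and the like) add many more of exactly the variables listed above – but it shows how ‘the length of the spaces’ boils down to a tunable policy:

```python
from dataclasses import dataclass

@dataclass
class Card:
    item: str
    interval_days: int = 1   # the current 'space' before the next review

def review(card: Card, recalled: bool, multiplier: float = 2.0) -> Card:
    """Update the space after one review.

    `multiplier` stands in for the algorithm's key policy choice:
    raising it stretches the spaces; lowering it compresses them.
    """
    if recalled:
        card.interval_days = round(card.interval_days * multiplier)
    else:
        card.interval_days = 1   # lapse: the spacing starts again
    return card

card = Card("ubiquitous")
for outcome in (True, True, False, True):
    card = review(card, outcome)
    print(card.item, "-> next review in", card.interval_days, "day(s)")
```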

Such attempts to control the variables are a necessary part of scientific enquiry – they are part of the ‘disciplinary agenda’ – but they are unlikely to be of much relevance to most teachers. Researchers need precision, but the more they attempt to ‘approximate the complexities of real life, the more unwieldy [their] theories inevitably become’ (Al-Hoorie et al., 2023). Teachers, on the other hand, are typically more interested in canonical views that can lead to general take-aways that can be easily applied in their lessons. It is only secondary research, in the form of meta-analyses or literature reviews (of the kind that TESOLgraphics produces), that can avoid the Richardson Effect and might offer something of help to the average classroom practitioner. But this secondary research, stripped of the contextual variables, can only be fairly vague. It can only really tell us, for example, that some form of written correction or spaced repetition may be helpful to some learners in some contexts some of the time. Since such generalisations are in need of ‘substantial localization’, it has been argued that they are often closer to ‘pseudo-applications’ (Al-Hoorie et al., 2023) than to anything that is reliably actionable. That is not to say, however, that broad-stroke generalisations are of no value at all.

Finding the right map

Henry Widdowson (e.g. 2023) has declared himself sceptical about the practical relevance of SLA research. Reading journals like ‘Studies in Second Language Acquisition’ or ‘System’, it’s hard not to agree. Attempts to increase the accessibility of research (e.g. open-access or simple summaries) may not have the desired impact since they do not do anything about ‘the tenuous link between research and practice’ (Hwang, 2023). They cannot bridge the ‘gap between two sharply contrasting kinds of knowledge’ (McIntyre, 2006).

There is an alternative: classroom-based action research carried out by teachers. One of the central ideas behind it is that teachers may benefit more from carrying out their own research than from reading someone else’s. Enthusiasm for action research has been around for a long time: it was very fashionable in the 1980s when I trained as a teacher. In the 1990s, there was a series of conferences for English language teachers called ‘Teachers Develop Teachers Research’ (see, for example, Field et al., 1997). Tirelessly promoted by people like Richard Smith, Paula Rebolledo (Smith et al., 2014) and Anne Burns, action research seems to be gaining traction. A recent British Council publication (Burns, 2023) is a fine example of what insights teachers may gain and act on with an exploratory action research approach.

References

Al-Hoorie A. H., Hiver, P., Larsen-Freeman, D. & Lowie, W. (2023) From replication to substantiation: A complexity theory perspective. Language Teaching, 56 (2): pp. 276 – 291

Borg, S. (2009) English language teachers’ conceptions of research. Applied Linguistics, 30 (3): 358 – 88

Burns, A. (Ed.) (2023) Exploratory Action Research in Thai Schools: English teachers identifying problems, taking action and assessing results. Bangkok, Thailand: British Council

Cartwright, N., Bradburn, N. M., & Fuller, J. (2016) A theory of measurement. Working Paper. Centre for Humanities Engaging Science and Society (CHESS), Durham.

Field, J., Graham, A., Griffiths, E. & Head, K. (Eds.) (1997) Teachers Develop Teachers Research 2. Whitstable, Kent: IATEFL

Hwang, H.-B. (2023) Is evidence-based L2 pedagogy achievable? The research–practice dialogue in grammar instruction. The Modern Language Journal, 2023: 1 – 22 https://onlinelibrary.wiley.com/doi/full/10.1111/modl.12864

Li Wei. (2018) Translanguaging as a Practical Theory of Language. Applied Linguistics, 39 (1): 9 – 30

Makoni, S. & Pennycook, A. (Eds.) (2007) Disinventing and Reconstituting Languages. Clevedon: Multilingual Matters

May, S. (Ed.) (2013) The multilingual turn: Implications for SLA, TESOL and Bilingual education. New York: Routledge

McIntyre, D. (2006) Bridging the gap between research and practice. Cambridge Journal of Education 35 (3): 357 – 382

Meier, G. S. (2017) The multilingual turn as a critical movement in education: assumptions, challenges and a need for reflection. Applied Linguistics Review, 8 (1): 131-161

Murphy, B., Mackay, J. & Tragant, E. (2023) ‘(Ok I think I was totally wrong: new try!)’: language learning in WhatsApp through the provision of delayed corrective feedback provided during and after task performance. The Language Learning Journal, DOI: 10.1080/09571736.2023.2223217

Palmer, S. E. et al. (1981) Canonical perspective and the perception of objects. In Long, J. & Baddeley, A. (Eds.) Attention and Performance IX. Hillsdale, NJ: Erlbaum. pp. 135 – 151

Pardo-Guerra, J. P. (2022) The Quantified Scholar. New York: Columbia University Press

Smith, R., Connelly, T. & Rebolledo, P. (2014). Teacher research as CPD: A project with Chilean secondary school teachers. In D. Hayes (Ed.), Innovations in the continuing professional development of English language teachers (pp. 111–128). The British Council.

Tal, E. (2020) Measurement in Science. In Zalta, E. N. (Ed.) The Stanford Encyclopedia of Philosophy (Fall 2020 Edition). https://plato.stanford.edu/archives/fall2020/entries/measurement-science/

Widdowson, H. (2023) Webinar on the subject of English and applied linguistics. Language Teaching, 56 (3): 393 – 401

Motivation and research

Like quite a few of the topics I have explored in this blog, motivation is something whose importance we can all agree on, without being entirely clear about what it means. It is closely connected to a number of other human attributes – reasons for learning, goal-setting, strength of desire to achieve goals, attitudes towards and interest in English, effort and self-regulation, learner autonomy … (Lamb, 2016) – and the list could be continued. In fact, it means so many things that the American Psychological Association has considered deleting the word as a search term in the main psychological database (Dörnyei & Ushioda, 2013).

In the world of language learning, research into motivation got going over 60 years ago (Gardner & Lambert, 1959), really took off in the 1990s, and has become ‘one of the most popular research topics, showing an exponential increase in quantity year after year’ (Al-Hoorie et al., 2021: 139). The main reason for this is no doubt the widely shared perception of the importance of ‘motivation’ (and demotivation), but also, perhaps, because motivation is seen as an easy topic among novice researchers (Ushioda, 2016), relying, as it typically does, on a questionnaire.

However, all is not well in this world of language motivation research. First of all, researchers are questioning whether motivation to learn a language is fundamentally any different from motivation to learn anything else (Al-Hoorie & Hiver, 2020). Some research suggests that it is not, and that the complex network of ‘identity, emotions, social and political factors, both inside and outside of school’ (Al-Hoorie et al., 2021: 141) that are seen as relevant to language learning motivation apply equally to learning maths. Attempts to carve out a particular space for language learning motivation (such as Dörnyei’s (2009) ‘L2 motivational self system’) may have much appeal, but are less convincing when studied more closely (Al-Hoorie, 2018).

All of which leaves us where, exactly? The conclusion of Al-Hoorie et al. (2021) is that we might gain a better understanding of language learning motivation through the lens of complex dynamic systems theory, but the complexity of any insights gained makes it unlikely that this would lead to ‘workable pedagogical recommendations’. Since the whole point of researching motivation is to generate ‘workable pedagogical recommendations’, Al-Hoorie et al.’s inescapable conclusion is that motivation research should be abandoned … and that attention should shift ‘to the more tangible and actionable construct of engagement’. This view would seem to be shared by Mercer and Dörnyei (2020). But is ‘engagement’ really any more tangible and actionable than ‘motivation’? I don’t think so. The concept of ‘engagement’ unifies motivation and its activation, according to Mercer & Dörnyei (2020: 6), which means that ‘engagement’ is an even more complex concept than motivation alone.

The mantra of researchers is ‘more research needed’ (Maley, 2016), so it’s hardly surprising that, even when criticising the research, Al-Hoorie et al. argue for more research … just with a different focus. Besides abandoning ‘motivation’ and looking at ‘engagement’ instead, more research that is ‘interventional in nature’ is needed, since such research is a ‘rare commodity’ (Al-Hoorie et al., 2021: 141 – 2).

Motivation and practice

There’s no shortage of stuff out there telling us how to do motivation in the language classroom. There are books full of practical ideas and tips (e.g. Dörnyei & Hadfield, 2013; Renandya, 2015; Thorner, 2017). There is also any amount of online stuff about motivation, the main purpose of which is to sell something: a brand, a product (such as a coursebook or an app) or an idea (such as coaching). And then there are the ‘pedagogical applications’ that come at the end of the research papers, which ‘more often than not do not logically and unambiguously follow from the results of the research’ (Al-Hoorie et al., 2021: 138).

There are two big problems with all of this. We know that motivational classroom interventions can ‘work’, but we cannot actually measure ‘motivation’. We can only measure proxies for motivation, and the most common of these is self-reported intended effort – which actually tells us very little. Achievement may correlate with intended effort … but it may not! Much of the research literature implies that motivational interventions may be helpful, but fails to demonstrate clearly that they will be (see Howard et al., 2021, as an example). Equally problematic is the fact that we don’t know which kinds of interventions are likely to be most beneficial (see, for example, Lazowski & Hulleman, 2015). In other words, we are in the dark.

This is not to say that some of the tips and practical classroom ideas are not worth trying out. Most tips you come across will strike you as self-evident (e.g. intrinsic beats extrinsic, success breeds motivation, rewards beat punishment) and, like Renandya (2015), they quite reasonably draw on mainstream motivational theory. Regarding the practical side of things, Al-Hoorie et al. (2021: 147) conclude that ‘probably the best advice to give to a novice teacher is not to bury themselves in recently published language motivation research, but to simply rely on experience and trial and error, and perhaps a good mentor’. As for good, experienced teachers, they already know ‘far more about motivating students than the sum of knowledge that can be gained from research’ (Henry et al., 2019: 15).

It is therefore just as well that it wouldn’t cross most teachers’ minds to even think of exploring this research.

Motivation and symbolic power

Al-Hoorie et al. (2021: 139 – 142) observe that ‘giving advice to teachers has become de rigueur of late, which strikes us as antithetical to engaging in necessary critical reflection and the limits of available empirical evidence’. Teachers, they note, have to ‘make constant, split-second decisions to adapt to changing and evolving contexts. Asking teachers to learn how to teach from research findings is akin to asking an individual to learn how to drive or swim through reading books sans actual practice. Books might help in some respects, but in the end drivers and swimmers have to refine their skills through sustained practice and by trial and error due to the complex and unpredictable nature of context’.

In this light, it is hard not to view the discourse of language motivation research through the lens of ‘symbolic power’. It is hard not to reflect on the relation of research and practice in solving real-world language-related problems, to wonder whether such problem-solving has been hijacked by ‘professional experts’, and to wonder about the devaluation of the contribution of practitioners (Kramsch, 2021: 201).

References

Al-Hoorie, A.H. (2018) The L2 motivational self system: A meta-analysis. Studies in Second Language Learning and Teaching, 8 (4) https://pressto.amu.edu.pl/index.php/ssllt/article/view/12295

Al-Hoorie, A. H. & Hiver, P. (2020) The fundamental difference hypothesis: expanding the conversation in language learning motivation. SAGE Open, 10 (3) https://journals.sagepub.com/doi/full/10.1177/2158244020945702

Al-Hoorie, A.H., Hiver, P., Kim, T.Y. & De Costa, P. I. (2021) The Identity Crisis in Language Motivation Research. Journal of Language and Social Psychology, 40 (1): 136 – 153

Dörnyei, Z. (2009) The L2 motivational self system. In Z. Dörnyei & E. Ushioda (Eds.) Motivation, language identity and the L2 self (pp. 9-42). Bristol, UK: Multilingual Matters.

Dörnyei, Z. & Hadfield, J. (2013) Motivating Learning. Harlow: Pearson

Dörnyei, Z. & Ushioda, E. (2013) Teaching and Researching Motivation 2nd Edition. Abingdon: Routledge

Gardner, R. C. & Lambert, W. E. (1959) Motivational variables in second-language acquisition. Canadian Journal of Psychology / Revue Canadienne de Psychologie, 13 (4): 266 – 272

Henry, A., Sundqvist, P. & Thorsen, C. (2019) Motivational Practice: Insights from the Classroom. Studentlitteratur

Howard, J. L., Bureau, J. S., Guay, F., Chong, J. X. Y. & Ryan, R. M. (2021) Student Motivation and Associated Outcomes: A Meta-Analysis From Self-Determination Theory. Perspectives on Psychological Science, https://doi.org/10.1177%2F1745691620966789

Kramsch, C. (2021) Language as Symbolic Power. Cambridge: CUP

Lamb, M. (2016) Motivation. In Hall, G. (Ed.) The Routledge Handbook of English Language Teaching. Abingdon: Routledge. pp. 324 -338

Lazowski, R. & Hulleman, C. (2015) Motivation Interventions in Education: A Meta-Analytic Review. Review of Educational Research, 86 (2)

Maley, A. (2016) ‘More research is needed’ – A Mantra too Far? Humanising Language Teaching, 18 (3)

Mercer, S. & Dörnyei, Z. (2020) Engaging Language Learners in Contemporary Classrooms. Cambridge: CUP

Renandya, W.A. (2015) L2 motivation: Whose responsibility is it? English Language Teaching, 27 (4): 177-189.

Thorner, N. (2017) Motivational Teaching. Oxford: OUP

Ushioda, E. (2016) Language learning motivation through a small lens: a research agenda. Language Teaching, 49 (4): 564 – 577

There’s a video on YouTube from Oxford University Press in which the presenter, the author of a coursebook for primary English language learners (‘Oxford Discover’), describes an activity where students have a short time to write some sentences about a picture they have been shown. Then, working in pairs, they read aloud their partner’s sentences and award themselves points, with more points being given for sentences that others have not come up with. For lower level, young learners, it’s not a bad activity. It provides opportunities for varied skills practice of a limited kind and, if it works, may be quite fun and motivating. However, what I found interesting about the video is that it is entitled ‘How to teach critical thinking skills: speaking’ and the book that is being promoted claims to develop ‘21st Century Skills in critical thinking, communication, collaboration and creativity’. The presenter says that the activity achieves its critical thinking goals by promoting ‘both noticing and giving opinions, […] two very important critical thinking skills.’

Noticing (or observation) and giving opinions are often included in lists of critical thinking skills, but, for this to be the case, they must presumably be exercised in a critical way – some sort of reasoning must be involved. This is not the case here, so only the most uncritical understanding of critical thinking could consider this activity to have any connection to critical thinking. Whatever other benefits might accrue from it, it seems highly unlikely that the students’ ability to notice or express opinions will be developed.

My scepticism is not shared by many users of the book. Oxford University Press carried out a scientific-sounding ‘impact study’: this consisted of a questionnaire (n = 198) in which ‘97% of teachers reported that using Oxford Discover helps their students to improve in the full range of 21st century skills, with critical thinking and communication scoring the highest’.

Enthusiasm for critical thinking activities is extremely widespread. In 2018, TALIS, the OECD Teaching and Learning International Survey (with more than 4000 respondents), found that ‘over 80% of teachers feel confident in their ability to vary instructional strategies in their classroom and help students think critically’ and almost 60% ‘frequently or always’ ‘give students tasks that require students to think critically.’ As with the Oxford ‘impact study’, it’s worth remembering that these are self-reported figures.

This enthusiasm is shared in the world of English language teaching, reflected in at least 17 presentations at the 2021 IATEFL conference that discussed practical ideas for promoting critical thinking. These ranged from the more familiar (e.g. textual analysis in EAP) to the more original – developing critical thinking through the use of reading reaction journals, multicultural literature, fables, creative arts performances, self-expression, escape rooms, and dice games.

In most cases, it would appear that the precise nature of the critical thinking that was ostensibly being developed was left fairly vague. This vagueness is not surprising. Practically the only thing that writers about critical thinking in education can agree on is that there is no general agreement about what, precisely, critical thinking is. Lai (2011) offers an accessible summary of a range of possible meanings, but points out that, in educational contexts, its meaning is often rather vague and encompasses other concepts (such as higher order thinking skills) which also lack clarity. Paul Dummett and John Hughes (2019: 4) plump for ‘a mindset that involves thinking reflectively, rationally and reasonably’ – a vague definition which leaves unanswered two key questions: to what extent is it a skill set or a disposition? Are these skills generic or domain specific?

When ‘critical thinking’ is left undefined, it is impossible to evaluate the claims that a particular classroom activity will contribute to the development of critical thinking. However, irrespective of the definition, there are good reasons to be sceptical about the ability of educational activities to have a positive impact on the generic critical thinking skills of learners in English language classes. There can only be critical-thinking value in the activity described at the beginning of this post if learners somehow transfer the skills they practise in the activity to other domains of their lives. This is, of course, possible, but, if we approach the question with a critical disposition, we have to conclude that it is unlikely. We may continue to believe the opposite, but this would be an uncritical act of faith.

The research evidence on the efficacy of teaching generic critical thinking is not terribly encouraging (Tricot & Sweller, 2014). There’s no shortage of anecdotal support for classroom critical thinking, but ‘education researchers have spent over a century searching for, and failing to find evidence of, transfer to unrelated domains by the use of generic-cognitive skills’ (Sweller, 2022). One recent meta-analysis (Huber & Kuncel, 2016) found insufficient evidence to justify the explicit teaching of generic critical thinking skills at college level. In an earlier blog post (https://adaptivelearninginelt.wordpress.com/2020/10/16/fake-news-and-critical-thinking-in-elt/) looking at the impact of critical thinking activities on our susceptibility to fake news, I noted that research was unable to find much evidence of the value of media literacy training. When considerable time is devoted to generic critical thinking training and little or no impact is found, how likely is it that the kind of occasional, brief one-off activity in the ELT classroom will have the desired impact? Without going as far as to say that critical thinking activities in the ELT classroom have no critical-thinking value, it is uncontentious to say that we still do not know how to define critical thinking, how to assess evidence of it, or how to effectively practise and execute it (Gay & Clark, 2021).

It is ironic that there is so little critical thinking about critical thinking in the world of English language teaching, but it should not be particularly surprising. Teachers are no more immune to fads than anyone else (Fuertes-Prieto et al., 2020). Despite a complete lack of robust evidence to support them, learning styles and multiple intelligences influenced language teaching for many years. Mindfulness, growth mindsets, grit are more contemporary influences and, like critical thinking, will go the way of learning styles when the commercial and institutional forces that currently promote them find the lack of empirical supporting evidence problematic.

Critical thinking is an educational aim shared by educational authorities around the world, promoted by intergovernmental bodies like the OECD, the World Bank, the EU, and the United Nations. In Japan, for example, the ‘Ministry of Education (MEXT) puts critical thinking (CT) at the forefront of its ‘global jinzai’ (human capital for a global society) directive’ (Gay & Clark, 2021). It is taught as an academic discipline in some universities in Russia (Ivlev et al., 2021) and plans are underway to introduce it into schools in Saudi Arabia (https://www.arabnews.com/node/1764601/saudi-arabia). I suspect that it doesn’t mean quite the same thing in all these places.

Critical thinking is also an educational aim that most teachers can share. Few like to think of themselves as Gradgrinds, bashing facts into their pupils’ heads: turning children into critical thinkers is what education is supposed to be all about. It holds an intuitive appeal, and even if we (20% of teachers in the TALIS survey) lack confidence in our ability to promote critical thinking in the classroom, few of us doubt the importance of trying to do so. Like learning styles, multiple intelligences and growth mindsets, it seems possible that, with critical thinking, we are pushing the wrong thing, but for the right reasons. But just how much evidence, or lack of evidence, do we need before we start getting critical about critical thinking?

References

Dummett, P. & Hughes, J. (2019) Critical Thinking in ELT. Boston: National Geographic Learning

Fuertes-Prieto, M. Á., Andrés-Sánchez, S., Corrochano-Fernández, D. et al. (2020) Pre-service Teachers’ False Beliefs in Superstitions and Pseudosciences in Relation to Science and Technology. Science & Education, 29: 1235 – 1254. https://doi.org/10.1007/s11191-020-00140-8

Gay, S. & Clark, G. (2021) Revisiting Critical Thinking Constructs and What This Means for ELT. Critical Thinking and Language Learning, 8 (1): pp. 110 – 147

Huber, C. R. & Kuncel, N. R. (2016) Does College Teach Critical Thinking? A Meta-Analysis. Review of Educational Research, 86 (2): 431 – 468. doi:10.3102/0034654315605917

Ivlev, V. Y., Pozdnyakov, M. V., Inozemtsev, V. A. & Chernyak, A. Z. (2021) Critical Thinking in the Structure of Educational Programs in Russian Universities. Advances in Social Science, Education and Humanities Research, volume 555: 121 – 128

Lai, E. R. (2011) Critical Thinking: A Literature Review. Pearson. http://images.pearsonassessments.com/images/tmrs/CriticalThinkingReviewFINAL.pdf

Sweller, J. (2022) Some Critical Thoughts about Critical and Creative Thinking. Sydney: The Centre for Independent Studies Analysis Paper 32

Tricot, A. & Sweller, J. (2014) Domain-specific knowledge and why teaching generic skills does not work. Educational Psychology Review, 26: 265 – 283

In the latest issue of ‘Language Teaching’, there’s a ‘state-of-the-art’ article by Frank Boers entitled ‘Glossing and vocabulary learning’. The effect of glosses (‘a brief definition or synonym, either in L1 or L2, which is provided with [a] text’ (Nation, 2013: 238)) on reading comprehension and vocabulary acquisition has been well researched over the years. See Kim et al. (2020) for just one recent meta-analysis.

It’s a subject I have written about before on this blog (see here), when I focussed on Plonsky and Ziegler (2016), a critical evaluation of a number of CALL meta-analyses, including a few that investigated glosses. Plonsky and Ziegler found that glosses can have a positive effect on language learning, that digital glosses may be more valuable than paper-based ones, and that both L1 and L2 glosses can be beneficial (clearly, the quality / accuracy of the gloss is as important as the language it is written in). Different learners have different preferences. Boers’ article covers similar ground without, I think, adding any new takeaways. It concludes with a predictable call for further research.

Boers has a short section on the ‘future of glossing’ in which he notes that (1) ‘onscreen reading [is] becoming the default mode’, and (2) that ‘materials developers no longer need to create glosses themselves, but can insert hyperlinks to online resources’. This is not the future, but the present. In my last blog post on glossing (August 2017), I discussed Lingro, a digital dictionary tool that you can have running in the background, allowing you to click on any word on any website and bring up L1 or L2 glosses. My reservation about Lingro was that the quality of the glosses left much to be desired, relying as they did on Wiktionary. Things would be rather different if it used decent content – sourced, for example, from Oxford dictionaries, Robert (for French) or Duden (for German).

And this is where the content for the Google Dictionary for Chrome extension comes from. It’s free, and takes only seconds to install. It allows you to double-click on a word to bring up translations or English definitions. One more click will take you to a more extensive dictionary page. It also allows you to select a phrase or longer passage and bring up translations generated by Google Translate. It allows you to keep track of the items you have looked up, and to download these on a spreadsheet, which can then be converted to flashcards (e.g. Quizlet) if you wish. If you use the Safari browser, a similar tool is already installed. It has similar features to the Google extension, but also offers you the possibility of linking to examples of the targeted word in web sources like Wikipedia.
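The spreadsheet-to-flashcards step is easy to automate, too. Here is a sketch of how the conversion might look in Python – the filename and the column names are my guesses for illustration, not the extension’s documented export format:

```python
import csv

# Assumes a CSV export of looked-up items; the filename and the
# column names ('word', 'definition') are hypothetical, so adjust
# them to match the actual download.
with open("lookups.csv", newline="", encoding="utf-8") as src, \
     open("flashcards.tsv", "w", encoding="utf-8") as dst:
    for row in csv.DictReader(src):
        # Quizlet's import dialog accepts tab-separated
        # term / definition pairs.
        dst.write(f"{row['word']}\t{row['definition']}\n")
```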

Boers was thinking of the provision of hyperlinks, but with these browser extensions it is entirely up to the reader of a text to decide how many and which items to look up, what kind of items (single words, phrases or longer passages) they want to look up, how far they want to explore the information available to them, and what they want to do with the information (e.g. store / record it).

It’s extraordinary that a ‘state-of-the-art’ article in an extremely reputable journal should be so out of date. The value of glossing in language learning is in content-focussed reading, and these tools mean that any text on the web can be glossed. I think this means that further research of the kind that Boers calls for would be a waste of time and effort. The availability of free technology does not, of course, solve all our problems. Learners will continue to benefit from guidance, support and motivation in selecting appropriate texts to read. They will likely benefit from training in optimal ways of using these browser extensions. And they may need help in finding a balance between reading purely for content and content-focussed reading that also delivers a language learning payoff.

References

Boers, F. (2022). Glossing and vocabulary learning. Language Teaching, 55 (1), 1 – 23

Kim, H.S., Lee, J.H. & Lee, H. (2020). The relative effects of L1 and L2 glosses on L2 learning: A meta-analysis. Language Teaching Research. December 2020.

Nation, I.S.P. (2013). Learning Vocabulary in Another Language. Cambridge: Cambridge University Press

Plonsky, L. & Ziegler, N. (2016). The CALL–SLA interface: insights from a second-order synthesis. Language Learning & Technology 20 / 2: 17 – 37

The world of language learning and teaching is full of theoretical constructs and claims, most of which have their moment of glory in the sun before being eclipsed and disappearing from view. In a recent article looking at the theoretical claims of translanguaging enthusiasts, Jim Cummins (2021) suggests that three criteria might be used to evaluate them:

1 Empirical adequacy – to what extent is the claim consistent with all the relevant empirical evidence?

2 Logical coherence – to what extent is the claim internally consistent and non-contradictory?

3 Consequential validity – to what extent is the claim useful in promoting effective pedagogy and policies?

Take English as a Lingua Franca (ELF), for example. In its early days, there was much excitement about developing databases of ELF usage in order to identify those aspects of pronunciation and lexico-grammar that mattered for intercultural intelligibility. The Lingua Franca Core (a list of pronunciation features that are problematic in ELF settings when ELF users mix them up) proved to be the most lasting product of the early empirical research into ELF (Jenkins, 2000). It made intuitive good sense, was potentially empowering for learners and teachers, was clearly a useful tool in combating native-speakerism, and was relatively easy to implement in educational policy and practice.

But problems with the construct of ELF quickly appeared. ELF was a positive reframing of the earlier notion of interlanguage – an idea that had deficit firmly built in, since interlanguage was a point that a language learner had reached somewhere on the way to being like a native-speaker. Interlanguage contained elements of the L1, and this led to interest in how such elements might become fossilized, a metaphor with very negative connotations. With a strong desire to move away from framings of deficit, ELF recognised and celebrated code-switching as an integral element in ELF interactions (Seidlhofer, 2011: 105). Deviations from idealised native-speaker norms of English were no longer to be seen as errors in need of correction, but as legitimate forms of the language (of ELF) itself.

However, it soon became clear that it was not possible to describe ELF in terms of the particular language forms that its users employed. In response, ELF researchers reframed ELF. The focus shifted to how people of different language backgrounds used English to communicate in particular situations – how they languaged, in other words. ELF was no longer a thing, but an action. This helped in terms of internal consistency, but most teachers remained unclear about how the ELF.2 insight should impact on their classroom practices. If we can’t actually say what ELF looks like, what are teachers supposed to do with the idea? And much as we might like to wish away the idea of native speakers (and their norms), these ideas are very hard to expunge completely (MacKenzie, 2014: 170).

Twenty years after ELF became widely used as a term, ELF researchers lament the absence of any sizable changes in classroom practices (Bayyurt & Dewey, 2020). There are practices that meet the ELF seal of approval (see, for example, Kiczkowiak & Lowe, 2018), and these include an increase in exposure to the diversity of English use worldwide, engagement in critical classroom discussion about the globalisation of the English language, and non-penalisation of innovative, but intelligible forms (Galloway, 2018: 471). It is, however, striking that these practices long pre-date the construct of ELF. They are not direct products of ELF.

Part of the ‘problem’, as ELF researchers see it, has been that ELF has been so hard to define. Less generously, we might suggest that the construct of ELF was flawed from the start. Useful, no doubt, as a heuristic, but time to move on. Jennifer Jenkins, one of the most well-known names in ELF, has certainly not been afraid to move on. Her article (Jenkins, 2015) refines ELF.2 into ELF.3, which she now labels ‘English as a Multilingual Franca’. In this reframed model, ELF is not so much concerned with the difference between native speakers and non-native speakers, as with the difference between monolinguals and multilinguals. Multilingual, rather than ‘English’, is now the superordinate attribute. Since ELF.3 is about interactions, rather than a collection of forms, it follows that ELF may not actually contain any English forms at all. There is a logic here, albeit somewhat convoluted, but there’s a problem for ELF as a construct, too. If ELF is fundamentally about multilingual communication, what need is there for the term ‘ELF’? ‘Translanguaging’ will do perfectly well instead. The graph from Google Trends reveals the rises and falls of these two terms in the academic discourse space. After peaking in 2008, the term ‘English as a Lingua Franca’ now appears to be in irreversible decline.
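(If you want to re-run the Trends comparison yourself, the sketch below uses pytrends, an unofficial Python client for Google Trends – with the caveat that, since it scrapes endpoints Google may change at any time, I am assuming it still works.)

```python
from pytrends.request import TrendReq

# pytrends (pip install pytrends) is an unofficial Google Trends
# client; it scrapes endpoints that Google may change at any time.
pytrends = TrendReq(hl="en-US", tz=0)
pytrends.build_payload(
    ["English as a Lingua Franca", "translanguaging"],
    timeframe="all",   # the full history Google Trends offers
)
trends = pytrends.interest_over_time()  # returns a pandas DataFrame
print(trends.tail())
```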

So, let’s now turn to ‘translanguaging’. What do Cummins, and others, have to say about the construct? The word has not been around for long. Most people trace it back to the end of the last century (Baker, 2001) and a set of bilingual pedagogical practices in the context of Welsh-English bilingual programmes intended to revitalise the Welsh language. In the early days, translanguaging was no more than a classroom practice that allowed or encouraged the use (by both learners and teachers) of more than one language for the purposes of study. The object of study might be another language, or it might be another part of the curriculum. When I wrote a book about the use of L1 in the learning and teaching of English (Kerr, 2014), I could have called it ‘Translanguaging Activities’, but the editors and I felt that the word ‘translanguaging’ might be seen as obscure jargon. I defined the word at the time as ‘similar to code-switching, the process of mixing elements from two languages’.

But obscure jargon no longer. There is, for example, a nice little collection of activities that involve L1 for the EFL / ESL classroom put together by Jason Anderson (http://www.jasonanderson.org.uk/downloads/Jasons_ideas_for_translanguaging_in_the_EFL_ESL_classroom.pdf) that he has chosen to call ‘Ideas for translanguaging’. In practical terms, there’s nothing here that you might not have found twenty or more years ago (e.g. in Duff, 1989; or Deller & Rinvolucri, 2002), long before anyone started using the word ‘translanguaging’. Anderson’s motivation for choosing the word ‘translanguaging’ is that he hopes it will promote a change of mindset in which a spirit of (language) inclusivity prevails (Anderson, 2018). Another example: the different ways that L1 may be used in a language classroom have recently been investigated by Rabbidge (2019) in a book entitled ‘Translanguaging in EFL Contexts’. Rabbidge offers a taxonomy of translanguaging moments. These are a little different from previous classifications (e.g. Ellis, 1994; Kim & Elder, 2005), but only a little. The most significant novelty is that these moments are now framed as ‘translanguaging’, rather than as ‘use of L1’. Example #3: the most well-known and widely-sold book that offers practical ideas related to translanguaging is ‘The Translanguaging Classroom’ by García and colleagues (2017). English language teachers working in EFL / ESL / ESOL contexts are unlikely to find much, if anything, new here by way of practical ideas. What they will find, however, is a theoretical reframing. It is this theoretical reframing that Anderson and Rabbidge draw their inspiration from.

The construct of translanguaging, then, like English as a Lingua Franca, has brought little that is new in practical terms. Its consequential validity does not really need to be investigated, since the pedagogical reasons for some use of other languages in the learning / teaching of English were already firmly established (but not, perhaps, widely accepted) a long time ago. How about the theory? Does it stand up to closer scrutiny any better than ELF?

Like ELF, ‘translanguaging’ is generally considered to be not a thing but an action. And, like ELF, it has a definition problem, so precisely what kind of action it might be is open to debate. For some, it isn’t even an action: Tian et al. (2020: 4) refer to it as ‘more like an emerging perspective or lens that could provide new insights to understand and examine language and language (in) education’. Its usage bounces around from user to user, each of whom may appropriate it in different ways. It is in competition with other terms, including translingual practice, multilanguaging and plurilingualism (Li, 2018). It is what has been called a ‘strategically deployable shifter’ (Moore, 2015). It is also unquestionably a word that sets a tone: ‘translanguaging’ is a key part of the discourse of multilingualism / plurilingualism, which stands in clear opposition to the unfavourable images evoked by the term ‘monolingualism’, often presented as a methodological mistake or a kind of subjectivity gone wrong (Gramling, 2016: 4). ‘Translanguaging’ has become a hooray word: criticize it at your peril.

What started as a classroom practice has morphed into a theory (Li, 2018; García, 2009), one that is unstable and likely to remain so. The big questions centre on the difference between ‘strong translanguaging’ (a perspective that insists that ‘named languages’ are socially constructed and have no linguistic or cognitive reality) and ‘weak translanguaging’ (a perspective that acknowledges boundaries between named languages but seeks to soften them). There are discussions, too, about what to call these forms of translanguaging. The ‘strong’ version has been dubbed ‘Unitary Translanguaging Theory’ by Cummins (2021) and ‘Fluid Languaging Approach’ by Bonacina-Pugh et al. (2021). Corresponding terms for the ‘weak’ version are ‘Crosslinguistic Translanguaging Theory’ and ‘Fixed Language Approach’. Subsidiary, related debates centre on code-switching: is it a form of translanguaging, or is it a construct better avoided altogether, since it assumes separate linguistic systems (Cummins, 2021)?

It’s all very confusing. Cenoz and Gorter (2021), in their short guide to pedagogical translanguaging, struggle for clarity but fail to get there. They ‘completely agree’ with García about the fluid nature of languages as ‘social constructs’ with ‘no clear-cut boundaries’, but still consider named languages as ‘distinct’ and refer to them as such in their booklet. Cutting your way through this thicket of language is a challenge, to put it mildly. It’s also probably a waste of time. As Cummins (2021: 16) notes, the confusion is ‘completely unnecessary’, since ‘there is no difference in the instructional practices that are implied by so-called strong and weak versions of translanguaging’. There are also more important questions to investigate, not least the extent to which the approaches to multilingualism developed by people like García in the United States are appropriate or effective in other contexts with different values (Jaspers, 2018; 2019).

The monolingualism that both ELF and translanguaging stand in opposition to may be a myth, a paradigm or a pathology, but, whatever it is, it is deeply embedded in the ways that our societies are organised, and the ways that we think. It is, writes David Gramling (2016: 3), ‘clearly not yet inclined to be waved off the stage by a university professor, nor even by a ‘multilingual turn’.’ In the end, ELF failed to have much impact. It’s time for translanguaging to have a turn. So, out with the old, in with the new. Or perhaps not really all that new at all.

The king is dead. Long live the king and a happy new year!

References

Anderson, J. (2018) Reimagining English language learners from a translingual perspective. ELT Journal 72 (1): 26 – 37

Baker, C. (2001) Foundations of Bilingual Education and Bilingualism, 3rd edn. Bristol: Multilingual Matters

Bayyurt, Y. & Dewey, M. (2020) Locating ELF in ELT. ELT Journal, 74 (4): 369 – 376

Bonacina-Pugh, F., Da Costa Cabral, I., & Huang, J. (2021) Translanguaging in education. Language Teaching, 54 (4): 439-471

Cenoz, J. & Gorter, D. (2021) Pedagogical Translanguaging. Cambridge: Cambridge University Press

Cummins, J. (2021) Translanguaging: A critical analysis of theoretical claims. In Juvonen, P. & Källkvist, M. (Eds.) Pedagogical Translanguaging: Theoretical, Methodological and Empirical Perspectives. Bristol: Multilingual Matters pp. 7 – 36

Deller, S. & Rinvolucri, M. (2002) Using the Mother Tongue. Peaslake, Surrey: Delta

Duff, A. (1989) Translation. Oxford: OUP

Ellis, R. (1994) Instructed Second Language Acquisition. Oxford: OUP

Galloway, N. (2018) ELF and ELT Teaching Materials. In Jenkins, J., Baker, W. & Dewey, M. (Eds.) The Routledge Handbook of English as a Lingua Franca. Abingdon, Oxon.: Routledge, pp. 468 – 480.

García, O., Ibarra Johnson, S. & Seltzer, K. (2017) The Translanguaging Classroom. Philadelphia: Caslon

García, O. (2009) Bilingual Education in the 21st Century: A Global Perspective. Malden / Oxford: Wiley / Blackwell

Gramling, D. (2016) The Invention of Monolingualism. New York: Bloomsbury

Jaspers, J. (2019) Authority and morality in advocating heteroglossia. Language, Culture and Society, 1: 1, 83 – 105

Jaspers, J. (2018) The transformative limits of translanguaging. Language & Communication, 58: 1 – 10

Jenkins, J. (2000) The Phonology of English as an International Language. Oxford: Oxford University Press

Jenkins, J. (2015) Repositioning English and multilingualism in English as a lingua franca. Englishes in Practice, 2 (3): 49-85

Kerr, P. (2014) Translation and Own-language Activities. Cambridge: Cambridge University Press

Kiczkowiak, M. & Lowe, R. J. (2018) Teaching English as a Lingua Franca. Stuttgart: Delta

Kim, S.-H. & Elder, C. (2005) Language choices and pedagogical functions in the foreign language classroom: A cross-linguistic functional analysis of teacher talk. Language Teaching Research, 9 (4): 355 – 380

Li, W. (2018) Translanguaging as a Practical Theory of Language. Applied Linguistics, 39 (1): 9 – 30

MacKenzie, I. (2014) English as a Lingua Franca. Abingdon, Oxon.: Routledge

Moore, R. (2015) From Revolutionary Monolingualism to Reactionary Multilingualism: Top-Down Discourses of Linguistic Diversity in Europe, 1794 – present. Language and Communication, 44: 19 – 30

Rabbidge, M. (2019) Translanguaging in EFL Contexts. Abingdon, Oxon.: Routledge

Seidlhofer, B. (2011) Understanding English as a Lingua Franca. Oxford: OUP

Tian, Z., Aghai, L., Sayer, P. & Schissel, J. L. (Eds.) (2020) Envisioning TESOL through a translanguaging lens: Global perspectives. Cham, CH: Springer Nature.

We need to talk

Posted: December 13, 2021 in Discourse, research

In 1994, in a well-known TESOL Quarterly article entitled ‘The dysfunctions of theory/practice discourse’, Mark A. Clarke explored the imbalance in the relationship between TESOL researchers and English language teachers, and the way in which the former typically frame the latter as being less expert than themselves. In the last 25 years, the topic has regularly resurfaced, most recently with the latest issue of the Modern Language Journal, a special issue devoted entirely to ‘the Research-Practice Dialogue in Second Language Learning and Teaching’ (Sato & Loewen, 2022). At the heart of the matter is the fact that most teachers are just not terribly interested in research and rarely, if ever, read it (Borg, 2009). Much has been written on whether or not this matters, but that is not my concern here.

Sato and Loewen’s introductory article reviews the reasons behind the lack of dialogue between researchers and teachers and, in an unintentionally comic meta move, argues that more research is needed into teachers’ lack of interest in research. This is funny because one of the reasons for the lack of dialogue is that ‘teachers have been researched too much ON and not enough WITH’ (Farrell, 2016): most research has not been carried out ‘for the teacher’s benefit and needs’, with the consequence that ‘the result is purely academic’. Sato and Loewen’s primary focus in the article is on ‘classroom teachers’, with whom they would like to see more ‘dialogue’, but, as they acknowledge, they publish in a research journal whose ‘most likely readers are researchers’. They do not appear to have read Alan Maley’s ‘‘More Research is Needed’ – A Mantra Too Far?’ (Maley, 2016). Perhaps the article (and the Humanising Language Teaching magazine it appeared in) passed under their radar because it’s written for teachers (not researchers), it’s free, and it does not have an impact factor?

I wasn’t entirely convinced by the argument that more research about research is needed, not least because Sato and Loewen provide a fairly detailed analysis of the obstacles that exist to dialogue between researchers and teachers. They divide these into two categories:

Epistemological obstacles: the framing of researchers as generators of knowledge and teachers as consumers of knowledge; teachers’ scepticism about the relevance of some research findings to real-world teaching situations; the different discourse communities inhabited by researchers and teachers, as evidenced by the academic language choices of the former.

Practical obstacles: institutional expectations for researchers to publish in academic journals and a lack of time for researchers to engage in dialogue with teachers; teachers’ lack of time and lack of access to research.

Nothing new here, and nothing contentious. Nor is there anything new in their argument that more dialogue between researchers and teachers would be of mutual benefit. They acknowledge that ‘In the current status of the research-practice relationship, it is researchers who are concerned about transferability of their findings to classrooms. Practitioners may not have burning motivation or urgent needs to reach out to researchers’. Consequently, it is researchers who should ‘shoulder the lion’s share of responsibility in this initiative’. This implies that, while the benefit could be mutual, it is not mutually proportionate, since researchers have both more to lose and more to gain.

They argue that it would be helpful to scrutinize closely the relationship between researchers and teachers (they prefer the word ‘practitioners’), and that researchers need to reflect on their own beliefs and practices, in particular on the way that researchers are stakeholders in the research-practice relationship. I was disappointed that they didn’t go into more detail here, and I would like to suggest one angle of the ‘cui bono’ question that is worth exploring. The work of TESOL researchers is mostly funded by TESOL teaching. It is funded, in other words, by selling a commodity – TESOL – to a group of consumers … who are teachers. If we frame researchers as vendors and teachers as (potential) clients, [1] a rather different light is shone on pleas for more dialogue.

The first step, Sato and Loewen claim, towards achieving such a dialogue would be ‘nurturing a collaborative mindset in both researchers and teachers’. And the last of four steps to removing the obstacles to dialogue would be ‘institutional support’ for both teachers and researchers. But without institutional support, mindsets are unlikely to become more collaborative, and the suggestions for institutional support (e.g. time release and financial support for teachers) are just pie in the sky. Perhaps sensing this, Sato and Loewen conclude the article by asking whether their desire to see a more collaborative mindset (and, therefore, more dialogue) is just a dream. Back in 1994, Mark Clarke had this to say:

The only real solution to the problems I have identified would be to turn the hierarchy on its head, putting teachers on the top and arraying others – pundits, professors, administrators, researchers, and so forth – below them. This would require a major change in our thinking and in our behavior and, however reasonable it may appear to be, I do not see this happening. (Clarke, 1994: 18)

In 2017, ELT Journal published an entertaining piece of trolling by Péter Medgyes, ‘The (ir)relevance of academic research for the language teacher’, in which he refers to the expert status of researchers as related to the ‘orthodox and mistaken belief that by virtue of having churned out tons of academic papers and books, they must be performing an essential service for language education’. It is not hard to imagine the twinkle in his eye as he wrote it. In the same volume, Amos Paran (2017) picks up the bait, arguing for more dialogue between researchers and teachers. In response to Paran’s plea, Medgyes points out that there is an irony in preaching the importance of dialogue in a top-down manner. ‘As long as the playing field is uneven, it is absurd to talk about dialogue, if a dialogue is at all necessary’, he writes. The same holds true for Sato and Loewen. They acknowledge (Sato & Loewen, 2018) that ‘researchers’ top-down attitudes will not facilitate the dialogue’, but, try as they might, their own mindset is seemingly inescapable. In one article that was attempting to reach out to teachers (Sato, Loewen & Kim, 2021), they managed to make one teacher trainer, Sandy Millin, feel that teachers were being unfairly attacked.

The phrase ‘We need to talk’ has been described as, perhaps, the most dreaded four words in the English language. When you hear it, you know (1) that someone wants to talk to you (and not the other way round), (2) that, whether you want to talk or not, the other person will have their say, (3) that the discussion will almost certainly involve some criticism of you, and this may be merited, and (4) whatever happens next, it is unlikely that your relationship will improve.

References

Borg, S. (2009). English language teachers’ conceptions of research. Applied Linguistics, 30 (3): 358 – 88

Clarke, M. (1994). The dysfunctions of theory/practice discourse. TESOL Quarterly, 28: 9-26.

Farrell, T. (2016). Reflection, reflection, reflection. Responses to the Chapter: More Research is Needed – A Mantra Too Far? Humanising Language Teaching, 18 (3) http://old.hltmag.co.uk/jun16/mart.htm

Maley, A. (2016). ‘More Research is Needed’ – A Mantra Too Far? Humanising Language Teaching, 18 (3) http://old.hltmag.co.uk/jun16/mart.htm

Medgyes, P. (2017). The (ir)relevance of academic research for the language teacher. ELT Journal, 71 (4): 491–498

Paran, A. (2017). ‘Only connect’: researchers and teachers in dialogue. ELT Journal, 71 (4): 499 – 508

Sato, M., & Loewen, S. (2022). The research-practice dialogue in second language learning and teaching: Past, present, and future. The Modern Language Journal, 106 (3)

Sato, M. & Loewen, S. (Eds.) (2019) Evidence-Based Second Language Pedagogy. New York: Routledge

Sato, M. & Loewen, S. (2018). Do teachers care about research? The research–pedagogy dialogue. ELT Journal 73 (1): 1 – 10

Sato, M., Loewen, S. & Kim, Y. J. (2021) The role and value of researchers for teachers: five principles for mutual benefit. TESOL AL Forum September 2021. http://newsmanager.commpartners.com/tesolalis/issues/2021-08-30/email.html#4


[1] I am actually a direct customer of Sato and Loewen, having bought for £35 last year a copy of their edited volume ‘Evidence-Based Second Language Pedagogy’. According to the back cover, it is a ‘cutting-edge collection of empirical research [which closes] the gap between research and practice’. In reality, it’s a fairly random collection of articles of very mixed quality, many of which are co-authored by ‘top scholars’ and the PhD students they are supervising. It does nothing to close any gaps between research and practice and I struggle to see how it could be of any conceivable benefit to teachers.

I’ve written about mindset before (here), but a recent publication caught my eye, and I thought it was worth sharing.

Earlier this year, the OECD produced a report on its 2018 PISA assessments. This was significant because it was the first time that the OECD had attempted to measure mindsets and correlate them with academic achievement. Surveying some 600,000 15-year-old students in 78 countries and economies, it is, to date, the biggest and most global attempt to study the question. Before going any further, a caveat is in order. The main focus of PISA 2018 was on reading, so any correlations found between mindsets and achievement can only be interpreted in the context of gains in reading skills. This is important to bear in mind, as previous research indicates that mindsets may have different impacts on different school subjects.

There has been much debate about how best to measure mindsets and, indeed, whether they can be measured at all. The OECD approached the question by asking students to respond to the statement ‘Your intelligence is something about you that you can’t change very much’ by choosing “strongly disagree”, “disagree”, “agree”, or “strongly agree”. Disagreeing with the statement was considered a precursor of a growth mindset, as it is more likely that someone who thinks intelligence can change will challenge him/herself to improve it. Across the sample, almost two-thirds of students showed a growth mindset, but there were big differences between countries, with students in Estonia, Denmark, and Germany being much more growth-oriented than those in Greece, Mexico or Poland (among OECD countries) and the Philippines, Panama, Indonesia or Kosovo (among the non-OECD countries). In line with previous research, students from socio-economically advantaged backgrounds presented a growth mindset more often than those from socio-economically disadvantaged backgrounds.
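To make the classification rule concrete, here is a toy sketch of how responses to the statement are turned into a headline share of growth-minded students. The data and country labels are invented; the point is only the counting rule.

```python
# A toy sketch of the OECD classification rule described above: disagreeing
# with the fixed-intelligence statement counts as a growth-mindset precursor.
# The response data and country labels below are invented for illustration.
GROWTH_RESPONSES = {"strongly disagree", "disagree"}

def growth_mindset_share(responses):
    """Return the proportion of respondents classified as growth-minded."""
    return sum(r in GROWTH_RESPONSES for r in responses) / len(responses)

# Invented mini-samples; the real survey covered some 600,000 students.
samples = {
    "Country A": ["disagree", "strongly disagree", "agree", "disagree"],
    "Country B": ["agree", "strongly agree", "disagree", "agree"],
}

for country, responses in samples.items():
    print(f"{country}: {growth_mindset_share(responses):.0%} growth mindset")
```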

I have my problems with the research methodology. A 15-year-old in a wealthy country is much more likely than peers elsewhere to have experienced mindset interventions in school: motivational we-can-do-it posters, workshops on neuroplasticity, biographical explorations of success stories and the like. In some places, students have been so exposed to this kind of thing that school leaders have realised that growth mindset interventions should be much more subtle, avoiding the kind of crude, explicit proselytising that simply makes many students roll their eyes. In contexts such as these, most students now know what they are supposed to believe about the malleability of intelligence, irrespective of what they actually believe. Asking them, in a formal context, to respond to statements which are obviously digging at mindsets is therefore an invitation to provide what they know is the ‘correct response’. Others, who have not been so fortunate in receiving mindset training, are less likely to know the correct answer. The research results probably tell us as much about educational practices as they do about mindsets. There are other issues with the chosen measurement tool, discussed in the report, including acquiescence bias and the fact that the cognitive load of the question increases the likelihood of a random response. Still, let’s move on.

The report found that growth mindsets correlated with academic achievement in some (typically wealthier) countries, but not in others. Wisely, the report cautions that the findings do not establish cause-and-effect relations. This is wise because a growth mindset may, to some extent, be the result of academic success, rather than the cause. As the report observes, students performing well may attribute their success to internal characteristics of effort and perseverance, while those performing poorly may attribute it to immutable characteristics in order to preserve their self-esteem.

However, the report does list the ways in which a growth mindset can lead to better achievement. These include valuing school more, setting more ambitious learning goals, higher levels of self-efficacy, higher levels of motivation and lower levels of fear of failure. This is a very circular kind of logic. These attributes are the attributes of a growth mindset, but are they the results of a growth mindset or simply its constituent parts? Incidentally, they were measured in the same way as mindset itself, by asking students to respond to statements like “I find satisfaction in working as hard as I can” or “My goal is to learn as much as possible”. The questions are so loaded that we need to be very sceptical about the meaning of the results. The concluding remarks to this section of the report clearly indicate the bias of the research. The question that is asked is not “Can growth mindset lead to better results?” but “How can growth mindset lead to better results?”

Astonishingly, the research did not investigate the impact of growth mindset interventions in schools on growth mindset. Perhaps this is too hard to do in any reliable way. After all, what counts as a growth mindset intervention? A little homily from the teacher about how we can all learn from our mistakes, or some nice posters on the walls? Or a more full-blooded workshop about neural plasticity with follow-up tasks? Instead, the research investigated more general teaching practices. The results were interesting. The greatest impacts on growth mindset come when students perceive their teachers as being supportive in a safe learning environment, and when teachers adapt their teaching to the needs of the class, as opposed to simply following a fixed syllabus. The findings about teacher feedback were less clear: “Whether teacher feedback influences students’ growth mindset development or the other way around, further research is required to investigate this relationship, and why it could differ according to students’ proficiency in reading”.

The final chapter of this report does not include any references to data from the PISA 2018 exercise. Instead, it repeats, in a very selective way, previous research findings such as:

  • Growth mindset interventions yield modest average treatment effects, but larger effects for specific subgroups.
  • Growth-mindset interventions fare well in both scalability and cost-effectiveness dimensions.

It ignores any discussion about whether we should be bothering with growth mindsets at all. It tells us something we already know (about the importance of teacher support and of adapting teaching to the needs of the class), but somehow concludes that “growth mindset interventions […] can be cost-effective ways to raise students’ outcomes on a large scale”. It is, to my mind, a classic example of ‘research’ that is looking to prove a point, rather than critically investigate a phenomenon. In that sense, it is the very opposite of science.

OECD (2021) Sky’s the Limit: Growth Mindset, Students, and Schools in PISA. https://www.oecd.org/pisa/growth-mindset.pdf

‘Pre-teaching’ (of vocabulary) is a widely-used piece of language teaching jargon, but it’s a strange expression. The ‘pre’ indicates that it’s something that comes before something else that is more important, what Chia Suan Chong calls ‘the main event’, which is usually some reading or listening work. The basic idea, it seems, is to lessen the vocabulary load of the subsequent activity. If the focus on vocabulary were the ‘main event’, we might refer to the next activity as ‘post-reading’ or ‘post-listening’ … but we never do.

The term is used in standard training manuals by both Jim Scrivener (2005: 230 – 233) and Jeremy Harmer (2012: 137) and, with a few caveats, the practice is recommended. Now read this from the ELT Nile Glossary:

For many years teachers were recommended to pre-teach vocabulary before working on texts. Nowadays though, some question this, suggesting that the contexts that teachers are able to set up for pre-teaching are rarely meaningful and that pre-teaching in fact prevents learners from developing the attack strategies they need for dealing with challenging texts.

Chia is one of those doing this questioning. She suggests that ‘we cut out pre-teaching altogether and go straight for the main event. After all, if it’s a receptive skills lesson, then shouldn’t the focus be on reading/listening skills and strategies? And most importantly, pre-teaching prevents learners from developing a tolerance of ambiguity – a skill that is vital in language learning.’ Scott Thornbury is another who has expressed doubts about the value of PTV, although he is more circumspect in his opinions. He has argued that working out the meaning of vocabulary from context is probably a better approach and that PTV inadequately prepares learners for the real world. If we have to pre-teach, he argues, get it out of the way ‘as quickly and efficiently as possible’ … or ‘try post-teaching instead’.

Both Chia and Scott touch on the alternatives, and guessing the meaning of unknown words from context is one of them. I’ve discussed this area in an earlier post, and I don’t want to rehash its content here. The simple summary is this: it’s complicated. We cannot, with any degree of certainty, say that guessing meaning from context leads to more gains in either reading / listening comprehension or vocabulary development than PTV or one of the other alternatives – encouraging / allowing monolingual or bilingual dictionary look-up (see this post on the topic), providing a glossary (see this post) or doing post-text vocabulary work.

The first problem in attempting to move towards a better understanding is that there is very little research into the relationship between PTV and improved reading / listening comprehension. What there is (e.g. Webb, 2009) suggests that pre-teaching can improve comprehension and speed up reading, but there are other things that a teacher can do (e.g. prior presentation of comprehension questions or the provision of pictorial support) that appear to lead to more gains in these areas (Pellicer-Sánchez et al., 2021). It’s not exactly a ringing endorsement. There is even less research looking at the relationship between PTV and vocabulary development. What there is (Pellicer-Sánchez et al., 2021) suggests that pre-teaching leads to more vocabulary gains than when learners read without any support. But the reading-only condition is unlikely in most real-world learning contexts, where there is a teacher, dictionary or classmate who can be turned to. A more interesting contrast is perhaps between PTV and during-reading vocabulary instruction, which is a common approach in many classrooms. One study (File & Adams, 2010) looked at precisely this area and found little difference between the approaches in terms of vocabulary gains. The limited research does not provide us with any compelling reasons either for or against PTV.

Another problem is, as usual, that the research findings often imply more than was actually demonstrated. The abstract for the study by Pellicer-Sánchez et al (2021) states that pre‐reading instruction led to more vocabulary learning. But this needs to be considered in the light of the experimental details.

The study involved 87 L2 undergraduates and postgraduates studying at a British university. Their level of English was therefore very high, and we can’t really generalise to other learners at other levels in other conditions. The text that they read was a 2,290-word narrative containing a number of pseudo-words. It was of no intrinsic interest, so the students reading it would treat it as an object of study, and they would notice the pseudo-words, both because their level of English was already high and because they knew that the focus of the research was on ‘new words’. In other words, the students’ behaviour was probably not at all typical of a student in a ‘normal’ classroom. In addition, the pseudo-words were all Anglo-Saxon-looking, and not therefore representative of the kinds of unknown items that students would encounter in authentic (or even pedagogical) texts (which would have a high proportion of words with Latin roots). I’m afraid I don’t think that the study tells us anything of value.

Perhaps research into an area like this, with so many variables that need to be controlled, is unlikely ever to provide teachers with clear answers to what appears to be a simple question: is PTV a good idea or not? However, I think we can get closer to something resembling useful advice if we take another tack. For this, I think two additional questions need to be asked. First, what is the intended main learning opportunity (note that I avoid the term ‘learning outcome’!) of the ‘main event’ – the reading or listening? Second, following on from the first, what is the point of PTV, i.e. in what ways might it contribute to enriching the learning opportunities of the ‘main event’?

To answer the first question, I think it is useful to go back to a distinction made almost forty years ago in a paper by Tim Johns and Florence Davies (1983). They contrasted the Text as a Linguistic Object (TALO) with the Text as a Vehicle for Information (TAVI). The former (TALO) is something that language students study to learn language from in a direct way. It has typically been written or chosen to illustrate and to contextualise bits of grammar, and to provide opportunities for lexical ‘quarrying’. The latter (TAVI) is a text with intrinsic interest, read for information or pleasure, and therefore more appropriately selected by the learner, rather than the teacher. For an interesting discussion on TALO and TAVI, see this 2015 post from Geoff Jordan.

Johns and Davies wrote their article in pre-Headway days, when texts in almost all coursebooks were unashamedly TALOs, and when what were called top-down reading skills (reading for gist / detail, etc.) were only just beginning to find their way into language teaching materials. TAVIs were separate – graded readers, for example. In some parts of the world, TALOs and TAVIs are still separate, often with one teacher dealing with the teaching of discrete items of language through TALOs, and another responsible for ‘skills development’ through TAVIs. But, increasingly, under the influence of British publishers and methodologists, attempts have been made to combine TALOs and TAVIs in a single package. The syllabus of most contemporary coursebooks, fundamentally driven by a discrete-item grammar plus vocabulary approach, also offers a ‘skills’ strand, which requires texts to be intrinsically interesting, meaningful and relevant to today’s 21st-century learners. The texts are required to carry out two functions.

Recent years have seen an increasingly widespread questioning of this approach. Does the exploitation of reading and listening texts in coursebooks (mostly through comprehension questions) actually lead to gains in reading and listening skills? Is there anything more than testing of comprehension going on? Or do they simply provide practice in strategic approaches to reading / listening, strategies which could probably be transferred from L1? As a result of the work of scholars like William Grabe (reading) and John Field and Richard Cauldwell (listening), there is now little, if any, debate in the world of research about these questions. If we want to develop the reading / listening skills of our students, the approach of most coursebooks is not the way to go about it. For a start, the reading texts are usually too short and the listening texts too long.

Most texts that are found in most contemporary coursebooks are TALOs dressed up to look like TAVIs. Their fundamental purpose is to illustrate and contextualise language that has either been pre-taught or will be explored later. They are first and foremost vehicles for language, and only secondarily vehicles for information. They are written and presented in as interesting a way as possible in order to motivate learners to engage with the TALO. Sometimes, they succeed.

However, there are occasions (even in coursebooks) when texts are TAVIs – used for purely ‘skills’ purposes, language use as opposed to language study. Typically, they (reading or listening texts) are used as springboards for speaking and / or writing practice that follows. It’s the information in the text that matters most.

So, where does all this take us with PTV? Here is my attempt at a breakdown of advice.

1 TALOs where the text contains a set of new lexical items which are a core focus of the lesson

If the text is basically a contextualized illustration of a set of lexical items (and, usually, a particular grammatical structure), there is a strong case for PTV. This is, of course, assuming that these items are of sufficiently high frequency to be suitable candidates for direct vocabulary instruction. If this is so, there is also a strong case to be made for the PTV to be what has been called ‘rich instruction’, which ‘involves (1) spending time on the word; (2) explicitly exploring several aspects of what is involved in knowing a word; and (3) involving learners in thoughtfully and actively processing the word’ (Nation, 2013: 117). In instances like this, PTV is something of a misnomer. It’s just plain teaching, and is likely to need as much, or more, time than exploration of the text (which may be viewed as further practice of / exposure to the lexis).

If the text is primarily intended as lexical input, there is also a good case to be made for making the target items it contains more salient by, for example, highlighting them or putting them in bold (Choi, 2017). At the same time, if ‘PTV’ is to lead to lexical gains, these are likely to be augmented by post-reading tasks which also focus explicitly on the target items (Sonbul & Schmitt, 2010).

2 TALOs which contain a set of lexical items that are necessary for comprehension of the text, but not a core focus of the lesson (e.g. because they are low-frequency)

PTV is often time-consuming, and necessarily so if the instruction is rich. If it is largely restricted to matching items to meanings (e.g. through translation), it is likely to have little impact on vocabulary development, and its short-term impact on comprehension appears to be limited. Research suggests that the use of a glossary is more efficient, since learners will only refer to it when they need to (whereas PTV is likely to devote some time to some items that are known to some learners, and this takes place before the knowledge is required … and may therefore be forgotten in the interim). Glossaries lead to better comprehension (Alessi & Dwyer, 2008).

3 TAVIs

I don’t have any principled objection to the occasional use of texts as TALOs, but it seems fairly clear that a healthy textual diet for language learners will contain substantially more TAVIs than TALOs, substantially more extensive reading than intensive reading of the kind found in most coursebooks. If we focused less often on direct instruction of grammar (a change of emphasis which is long overdue), there would be less need for TALOs, anyway. With TAVIs, there seems to be no good reason for PTV: glossaries or digital dictionary look-up will do just fine.

However, one alternative justification and use of PTV is offered by Scott Thornbury. He suggests identifying a relatively small number of keywords from a text that will be needed for global understanding. Some of them may be unknown to the learners, and, for these, learners use dictionaries to check meaning. Then, looking at the list of keywords, learners predict what the text will be about. The rationale here is that if learners engage with these words before encountering them in the text, it ‘may be an effective way of activating a learner’s schema for the text, and this may help to support comprehension’ (Ballance, 2018). However, as Ballance notes, describing this kind of activity as PTV would be something of a misnomer: it is a useful addition to a teacher’s repertoire of schema-activation activities (which might be used with both TAVIs and TALOs).

In short …

The big question about PTV, then, is not one of ‘yes’ or ‘no’. It’s about the point of the activity. Ballance (2018) offers a good summary:

‘In sum, for teachers to use PTV effectively, it is essential that they clearly identify a rationale for including PTV within a lesson, select the words to be taught in conjunction with this rationale and also design the vocabulary learning or development exercise in a manner that is commensurate with this rationale. The rationale should be the determining factor in the design of a PTV component within a lesson, and different rationales for using PTV naturally lead to markedly different selections of vocabulary items to be studied and different exercise designs.’

REFERENCES

Alessi, S. & Dwyer, A. (2008). Vocabulary assistance before and during reading. Reading in a Foreign Language, 20 (2): pp. 246 – 263

Ballance, O. J. (2018). Strategies for pre-teaching vocabulary in context. In The TESOL Encyclopedia of English Language Teaching (pp. 1-7). Wiley. https://doi.org/10.1002/9781118784235.eelt0732

Choi, S. (2017). Processing and learning of enhanced English collocations: An eye movement study. Language Teaching Research, 21, 403–426. https://doi.org/10.1177/1362168816653271

File, K. A. & Adams, R. (2010). Should vocabulary instruction be integrated or isolated? TESOL Quarterly, 44 (2): pp. 222 – 249

Harmer, J. (2012). Essential Teacher Knowledge. Harlow: Pearson

Johns, T. & Davies, F. (1983). Text as a vehicle for information: the classroom use of written texts in teaching reading in a foreign language. Reading in a Foreign Language, 1 (1): pp. 1 – 19

Nation, I. S. P. (2013). Learning Vocabulary in Another Language 2nd Edition. Cambridge: Cambridge University Press

Pellicer-Sánchez, A., Conklin, K. & Vilkaitė-Lozdienė, L. (2021). The effect of pre-reading instruction on vocabulary learning: An investigation of L1 and L2 readers’ eye movements. Language Learning (advance online publication). https://onlinelibrary.wiley.com/doi/full/10.1111/lang.12430

Scrivener, J. (2005). Learning Teaching 2nd Edition. Oxford: Macmillan

Sonbul, S. & Schmitt, N. (2010). Direct teaching of vocabulary after reading: is it worth the effort? ELT Journal, 64 (3): pp. 253 – 260

Webb, S. (2009). The effects of pre‐learning vocabulary on reading comprehension and writing. The Canadian Modern Language Review, 65 (3): pp. 441–470.

A week or so ago, someone in the Macmillan marketing department took it upon themselves to send out this tweet. What grabbed my attention was the claim that it is ‘a well-known fact’ that teaching students a growth mindset makes them perform better academically over time. The easily demonstrable reality (which I’ll come on to) is that this is not a fact. It’s fake news, being used for marketing purposes. The tweet links to a blog post of over a year ago. In it, Chia Suan Chong offers five tips for developing a growth mindset in students: educating students about neuroplasticity, delving deeper into success stories, celebrating challenges and mistakes, encouraging students to go outside their comfort zones, and giving ‘growth-mindset-feedback’. All of which, she suggests, might help our students. Indeed, it might, and, even if it doesn’t, it might be worth a try anyway. Chia doesn’t make any claims beyond the potential of the suggested strategies, so I wonder where the Macmillan Twitter account person got the ‘well-known fact’.

If you google ‘mindset ELT’, you will find webpage after webpage offering tips about how to promote growth mindset in learners. It’s rare for the writers of these pages to claim that the positive effects of mindset interventions are a ‘fact’, but it’s even rarer to come across anyone who suggests that mindset interventions might be an à la mode waste of time and effort. Even in more serious literature (e.g. Mercer, S. & Ryan, S. (2010). A mindset for EFL: learners’ beliefs about the role of natural talent. ELT Journal, 64 (4): 436 – 444), the approach is fundamentally enthusiastic, with no indication that there might be a problem with mindset theory. Given that this enthusiasm is repeated so often, perhaps we should not blame the Macmillan tweeter for falling victim to the illusory truth effect. After all, it appears that 98% of teachers in the US feel that growth mindset approaches should be adopted in schools (Hendrick, 2019).

Chia suggests that we can all have fixed mindsets in certain domains (e.g. I know all about that, there’s nothing more I can learn). One domain where it seems that fixed mindsets are prevalent is mindset theory itself. This post is an attempt to nudge towards more ‘growth’ and, in trying to persuade you to be more sceptical, I will quote as much as possible from Carol Dweck, the founder of mindset theory, and her close associates.

Carol Dweck’s book ‘Mindset: The New Psychology of Success’ appeared in 2006. In it, she argued that people can be placed on a continuum between those who have ‘a fixed mindset – those who believe that abilities are fixed – [and who] are less likely to flourish [and] those with a growth mindset – those who believe that abilities can be developed’ (from the back cover of the updated (2007) version of the book). There was nothing especially new about the idea. It is very close to Bandura’s (1982) theory of self-efficacy, which will be familiar to anyone who has read Zoltán Dörnyei’s more recent work on motivation in language learning. It is closely related to Carl Rogers’ (1969) ideas about self-concept, and it is not a million miles removed, either, from Maslow’s (1943) theory of self-actualization. The work of Rogers and Maslow was at the heart of the ‘humanistic turn’ in ELT in the latter part of the 20th century (see, for example, Early, 1981), so mindset theory is likely to resonate with anyone who was inspired by the humanistic work of people like Moskowitz, Stevick or Rinvolucri. The appeal of mindset theory is easy to see. Besides its novelty value, it resonates emotionally with the values that many teachers share, writes Tom Bennett: it feels right that you don’t criticise the person, but invite them to believe that, through hard work and persistence, they can achieve.

We might even trace interest in the importance of self-belief back to the Stoics (who, incidentally but not coincidentally, are experiencing a revival of interest), but Carol Dweck introduced a more modern flavour to the old wine and packaged it skilfully and accessibly in shiny new bottles. Her book was a runaway bestseller, with sales in the millions, and her TED Talk has now had over 11 million views. It was in education that mindset theory became particularly popular. As a mini-industry it is now worth millions and millions. Just one research project into the efficacy of one mindset product has received 3.5 million dollars in US federal funding.

But, as with other ideas that have done a roaring trade in popular psychology (Howard Gardner’s ‘multiple intelligences’ theory, for example) and that seem to offer simple solutions to complex problems, there was soon pushback. It wasn’t hard for critics to scoff at motivational ‘yes-you-can’ posters in classrooms or accounts of well-meaning but misguided teacher interventions, like this one reported by Carl Hendrick:

One teacher [took] her children out into the pristine snow covering the school playground. She instructed them to walk around, taking note of their footprints. “Look at these paths you’ve been creating,” the teacher said. “In the same way that you’re creating new pathways in the snow, learning creates new pathways in your brain.”

Carol Dweck was sympathetic to the critics. She has described the early reaction to her book as ‘uncontrollable’. She freely admits that she and her colleagues had underestimated the issues around mindset interventions in the classrooms and that such interventions were ‘not yet evidence-based’. She identified two major areas where mindset interventions have gone awry. The first of these is when a teacher teaches the concept of mindsets to students, but does not change other policies and practices in the classroom. The second is that some teachers have focussed too much on praising their learners’ efforts. Teachers have taken mindset recipes and tips, without due consideration. She says:

Teachers have to ask, what exactly is the evidence suggesting? They have to realise it takes deep thought and deep experimentation on their part in the classroom to see how best the concept can be implemented there. This should be a group enterprise, in which they share what worked, what did not work, for whom and when. People need to recognise we are researchers, we have produced a body of evidence that says under these conditions this is what happened. We have not explored all the conditions that are possible. Teacher feedback on what is working and not working is hugely valuable to us to tell us what we have not done and what we need to do.

Critics like Dylan Wiliam, Carl Hendrick and Timothy Bates found that it was impossible to replicate Dweck’s findings, and that there were, at best, weak correlations between growth mindset and academic achievement, and between mindset interventions and academic gains. They were happy to concede that typical mindset interventions would do no harm, but asked whether the huge amounts of money being spent on mindset would not be better invested elsewhere.

Carol Dweck seems to like the phrase ‘not yet’. She argues, in her TED Talk, that simply using the words ‘not yet’ can build students’ confidence, and her tip is often repeated by others. She also talks about mindset interventions being ‘not yet evidence-based’, which is a way of declaring her confidence that they soon will be. But, with huge financial backing, Dweck and her colleagues have recently been carrying out a lot of research and the results are now coming in. There are a small number of recent investigations that advocates of mindset interventions like to point to. For reasons of space, I’ll refer to two of them.

The first of these (Outes-Leon et al., 2020) looked at an intervention with first-grade students in a few hundred Peruvian secondary schools. The intervention consisted of students individually reading a text designed to introduce them to the concept of a growth mindset. This was followed by a group debate about the text, before each student wrote a reflective letter to a friend or relative describing what they had learned. In total, this amounted to about 90 minutes of activity. Subsequently, teachers made a subjective assessment of the ‘best’ letters and attached these to the classroom wall, along with a growth mindset poster, for the rest of the school year. Teachers were also asked to take a picture of the students alongside the letters and the poster, and to share this picture by email.

Academic progress was measured 2 and 14 months after the intervention and compared to that of a large control group. The short-term (2-month) impact of the intervention was positive for mathematics, but less so for reading comprehension. (Why?) These gains were only visible in regional schools, not at all in metropolitan schools. Similar results were found when looking at the medium-term (14-month) impact. The reasons for this are unclear. It is hypothesized that the lower-achieving students in regional schools might benefit more from the intervention. Smaller class sizes in regional schools might also be a factor. But, of course, many other explanations are possible.

The paper is entitled The Power of Believing You Can Get Smarter. The authors make it clear that they were looking for positive evidence of the intervention, and they were supported by mindset advocates (e.g. David Yeager) from the start. The research was funded by the World Bank, a long-standing advocate of growth mindset interventions. (Rather jumping the gun, the World Bank’s Mindset Team wrote in 2014 that teaching growth mindset is ‘not just another policy fad. It is backed by a burgeoning body of empirical research’.) The paper’s authors conclude that ‘the benefits of the intervention were relevant and long-lasting in the Peruvian context’, and they focus strongly on the low costs of the intervention. They acknowledge that the way the tool is introduced (the design of the intervention) and the context in which this occurs (i.e. school and teacher characteristics) both matter to understanding potential gains. But without understanding the role of the context, we haven’t really learned anything practical that we can take away from the research. Our understanding of the power of believing you can get smarter has not been meaningfully advanced.

The second of these studies (Yeager et al., 2019) took many thousands of lower-achieving American 9th graders from a representative sample of schools. It is a very well-designed and thoroughly reported piece of research. The intervention consisted of two 25-minute online sessions, 20 days apart, which sought to reduce students’ negative effort beliefs (the belief that having to try hard or ask for help means you lack ability), fixed-trait attributions (the attribution of failure to low ability) and performance-avoidance goals (the goal of never looking stupid). An analysis of academic achievement at the end of the school year indicated clearly that the intervention led to improved performance. These results provide very clear grounds for optimism about the potential of growth mindset interventions, but the report is careful to avoid overstatement. We have learnt about one particular demographic with one particular intervention, but it would be wrong to generalise beyond that. The researchers had hoped that the intervention would help to compensate for unsupportive school norms, but found that this was not the case. Instead, they found that it was when the peer norm supported the adoption of intellectual challenges that the intervention promoted sustained benefits. Context, as in the Peruvian study, was crucial. The authors write:

We emphasize that not all forms of growth mindset interventions can be expected to increase grades or advanced course-taking, even in the targeted subgroups. New growth mindset interventions that go beyond the module and population tested here will need to be subjected to rigorous development and validation processes.

I think that a reasonable conclusion from reading this research is that it may well be worth experimenting with growth mindset interventions in English language classes, but without any firm expectation of any positive impact. If nothing else, the interventions might provide useful, meaningful practice of the four skills. First, though, it would make sense to read two other pieces of research (Sisk et al., 2018; Burgoyne et al., 2020). Unlike the projects I have just discussed, these were not carried out by researchers with an a priori enthusiasm for growth-mindset interventions. And the results were rather different.

The first of these (Sisk et al., 2018) was a meta-analysis of the literature. It found only weak correlations between mindset and academic achievement, and between mindset interventions and academic gains. It did, however, lend support to one of the conclusions of Yeager et al. (2019): that such interventions may benefit students who are academically at risk.

The second (Burgoyne et al., 2020) found that the foundations of mind-set theory are not firm and that bold claims about mind-set appear to be overstated: other constructs, such as self-efficacy and need for achievement, were found to correlate much more strongly with presumed associates of mind-set.
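For anyone unsure what a meta-analytic ‘weak correlation’ amounts to in practice, here is a minimal sketch of the inverse-variance pooling that underlies estimates of this kind. The effect sizes and variances are invented, and published meta-analyses like Sisk et al.’s use more sophisticated models; the sketch shows only the basic arithmetic.

```python
# Minimal inverse-variance (fixed-effect) pooling: the basic arithmetic
# behind meta-analytic estimates such as Sisk et al.'s. All numbers are
# invented for illustration; real meta-analyses use more complex models.
import math

# (effect size, sampling variance) for four invented studies
studies = [(0.10, 0.004), (0.05, 0.002), (0.20, 0.010), (0.02, 0.001)]

weights = [1.0 / var for _, var in studies]  # more precise studies get more weight
pooled = sum(w * es for (es, _), w in zip(studies, weights)) / sum(weights)
se = math.sqrt(1.0 / sum(weights))           # standard error of the pooled estimate

print(f"pooled effect = {pooled:.3f}, 95% CI ± {1.96 * se:.3f}")
# An overall effect this close to zero is what 'weak' means in this context.
```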

So, where does this leave us? We are clearly a long way from ‘facts’; mindset interventions are ‘not yet evidence-based’. Carl Hendrick (2019) provides a useful summary:

The truth is we simply haven’t been able to translate the research on the benefits of a growth mindset into any sort of effective, consistent practice that makes an appreciable difference in student academic attainment. In many cases, growth mindset theory has been misrepresented and miscast as simply a means of motivating the unmotivated through pithy slogans and posters. […] Recent evidence would suggest that growth mindset interventions are not the elixir of student learning that many of its proponents claim it to be. The growth mindset appears to be a viable construct in the lab, which, when administered in the classroom via targeted interventions, doesn’t seem to work at scale. It is hard to dispute that having a self-belief in their own capacity for change is a positive attribute for students. Paradoxically, however, that aspiration is not well served by direct interventions that try to instil it.

References

Bandura, A. (1982). Self-efficacy mechanism in human agency. American Psychologist, 37 (2): pp. 122–147. doi:10.1037/0003-066X.37.2.122.

Burgoyne, A. P., Hambrick, D. Z., & Macnamara, B. N. (2020). How Firm Are the Foundations of Mind-Set Theory? The Claims Appear Stronger Than the Evidence. Psychological Science, 31(3), 258–267. https://doi.org/10.1177/0956797619897588

Dweck, C. S. (2006). Mindset: The New Psychology of Success. New York: Ballantine Books

Early, P. (Ed.) (1981). ELT Documents 113 – Humanistic Approaches: An Empirical View. London: The British Council

Hendrick, C. (2019). The growth mindset problem. Aeon, 11 March 2019.

Maslow, A. (1943). A Theory of Human Motivation. Psychological Review, 50: pp. 370-396.

Outes-Leon, I., Sanchez, A. & Vakis, R. (2020). The Power of Believing You Can Get Smarter: The Impact of a Growth-Mindset Intervention on Academic Achievement in Peru (English). Policy Research Working Paper WPS 9141. Washington, D.C.: World Bank Group. http://documents.worldbank.org/curated/en/212351580740956027/The-Power-of-Believing-You-Can-Get-Smarter-The-Impact-of-a-Growth-Mindset-Intervention-on-Academic-Achievement-in-Peru

Rogers, C. R. (1969). Freedom to Learn: A View of What Education Might Become. Columbus, Ohio: Charles Merill

Sisk, V. F., Burgoyne, A. P., Sun, J., Butler, J. L., Macnamara, B. N. (2018). To what extent and under which circumstances are growth mind-sets important to academic achievement? Two meta-analyses. Psychological Science, 29, 549–571. doi:10.1177/0956797617739704

Yeager, D.S., Hanselman, P., Walton, G.M. et al. (2019). A national experiment reveals where a growth mindset improves achievement. Nature 573, 364–369. https://doi.org/10.1038/s41586-019-1466-y