Seven years ago, the British Council brought out a report (Dearden, 2014), entitled ‘English as a medium of instruction – a growing global phenomenon’. The report noted the ‘rapid expansion’ of EMI provision, but observed that in many countries ‘there is a shortage of linguistically qualified teachers; there are no stated expectations of English language proficiency; there appear to be few organisational or pedagogical guidelines which might lead to effective EMI teaching and learning; there is little or no EMI content in initial teacher education (teacher preparation) programmes and continuing professional development (in-service) courses’.

Given issues such as these, we should not expect research findings about the efficacy of EMI to be unequivocally positive, and the picture that emerges from EMI research is decidedly mixed. In some countries, learning of academic content has deteriorated, and drop-out rates have been high, but we do not have enough information to make global generalisations. Improvements in English language skills are also often disappointing, although a number of research reports indicate gains in listening. We cannot, however, assume that following EMI studies will lead to greater language gains than, say, attending fewer hours of an intensive English course. The idea that two birds can be killed with one stone remains speculative.

The widespread rolling-out of EMI programmes in higher education has led to concerns about a negative effect on the status of other languages. There is also a danger that EMI may exacerbate social inequalities. Those who are most likely to benefit from the approach are ‘those whose life chances have already placed them in a position to benefit from education’ (Macaro, 2018). It is clear that EMI has spread globally without sufficient consideration of either its benefits or its costs.

This year, the British Council brought out another report on EMI (Sahan et al., 2021), looking at EMI in ODA-categorised countries, i.e. receivers of foreign aid, mostly in the Global South. What has changed in the intervening seven years? The short answer is not a lot. Unabated growth continues; problematic issues remain problematic. Support for EMI lecturers remains limited and, when it is offered, usually takes the form of improving teachers’ general English proficiency. The idea that EMI lecturers might benefit from ‘training in appropriate materials selection, bilingual teaching pedagogies, strategies for teaching in multilingual or multicultural classrooms, [or] an awareness of their students’ disciplinary language needs’ does not seem to have taken root. The insight that EMI requires a shift in methodology in order to be effective has not really got through either, and this despite the fact that it is well-known that many lecturers perceive EMI as a challenge. The growing body of research evidence showing the positive potential of plurilingual practices in higher education EMI (e.g. Duarte & van der Ploeg, 2019) is not, it would appear, widely known to universities around the world offering EMI classes. The only mention of ‘plurilingualism’ that I could find in this report is in the context of a discussion about how the internationalization (aka Englishization) of higher education acts as a counter-force to the plurilingualism promoted by bodies like the Council of Europe.

The home of the Council of Europe’s ‘European Centre for Modern Languages (ECML)’ is in Austria, where I happen to live. Here’s what the ECML’s website has to say about itself:

Developing every individual’s language repertoire and cultural identities and highlighting the social value of linguistic and cultural diversity lie at the core of ECML work. Plurilingual education embraces all language learning, e.g. home language/s, language/s of schooling, foreign languages, and regional and minority languages.

To support plurilingual education, a ‘Framework of Reference for Pluralistic Approaches to Languages and Cultures’ has been developed, along with a bank of resources and teaching materials that are linked to the descriptors in the framework. Plurilingualism is clearly taken very seriously, and, across the country, there are many interesting plurilingual initiatives in primary and secondary schools.

But not at universities. There is steady growth in EMI, especially at master’s level. Almost a quarter of all master’s programmes at the University of Vienna, for example, are EMI. However, this has not been accompanied by any real thought about how EMI changes things or how EMI could best be implemented. It has simply been assumed that the only thing that differentiates teaching in German from using EMI is the choice of language itself (Dannerer et al., 2021). Only when things go wrong and are perceived as problematic (e.g. severe student dropout rates) ‘does the realization follow that there is so much more to teaching in another medium than language proficiency alone’ (ibid.). Even language proficiency is not deemed especially worthy of serious consideration. Dannerer et al. (2021) note that ‘the skills of teachers […] are neither tested nor required before they begin to offer courses in English. Although there are English language courses for students, academic, and administrative staff, they are mainly voluntary.’ There are no clear policies ‘as to when English or other languages should be employed, by whom, and for what’ (ibid.). In summary, ‘linguistic and cultural plurality in Austrian higher education is not considered an asset that brings added value in terms of institutional diversity or internationalization at home’. Rather, in the context of EMI, it is something that can be Englishized and ignored.

Higher education EMI in Austria, then, is, in some ways, not so very different from EMI in the countries that feature in the recent British Council report. Or, for that matter, anywhere else in the world, with just a few exceptions (such as a number of universities in bilingual parts of Spain). My question is: why is this the case? Why would universities not actively pursue and promote plurilingual approaches as part of their EMI provision, if, as seems highly probable, this would result in learning gains? Are they really unaware of the potential benefits of plurilingual approaches in EMI? Is the literature out there (e.g. Paulsrud, et al., 2021) beyond their budgets? Have they, perhaps, just not got round to it yet? Is there, perhaps, some sort of problem (contracts? pay? time?) in training the lecturers? Or, as the British Council report seems to suggest, is there some irreconcilable tension between plurilingualism and the Englishizing world of most EMI? And, if this is the case, could it be that plurilingualism is fighting a losing battle?

References

Dannerer, M., Gaisch, M. & Smit, U. (2021) Englishization ‘under the radar’: Facts, policies, and trends in Austrian higher education. In Wilkinson, R. & Gabriëls, R. (Eds.) The Englishization of Higher Education in Europe. Amsterdam University Press, pp. 281–306

Dearden, J. (2014) English as a medium of instruction – a growing global phenomenon. London: British Council

Duarte, J. & van der Ploeg, M. (2019) Plurilingual lecturers in English medium instruction in the Netherlands: the key to plurilingual approaches in higher education? European Journal of Higher Education, 9 (3) https://www.tandfonline.com/doi/full/10.1080/21568235.2019.1602476

Macaro, E. (2018) English Medium Instruction. Oxford: Oxford University Press

Paulsrud, B., Tian, Z. & Toth, J. (Eds.) (2021) English-Medium Instruction and Translanguaging. Bristol: Multilingual Matters

Sahan, K., Mikolajewska, A., Rose, H., Macaro, E., Searle, M., Aizawa, I., Zhou, S. & Veitch, A. (2021) Global mapping of English as a medium of instruction in higher education: 2020 and beyond. London: British Council

Wilkinson, R. & Gabriëls, R. (2021) The untapped potentials of EMI programmes: the Dutch case. System, 103, 102639

Wilkinson, R. & Gabriëls, R. (Eds.) (2021) The Englishization of Higher Education in Europe. Amsterdam University Press.

Innovation and ELT

Next week sees the prize ceremony of the nineteenth edition of the British Council’s ELTons awards, celebrating ‘innovation in English language teaching and learning … the newest and most original courses, books, publications, apps, platforms, projects, and more.’ Since the Council launched the ELTons in 2003, it hasn’t been entirely clear what is meant by ‘innovation’. But, reflecting the use of the term in the wider (business) world, ‘innovation’ was seen as a positive value, an inherently good thing, and almost invariably connected to technological innovation. One of the award categories in the ELTons is for ‘digital innovation’, but many of the winners and shortlisted nominations in other categories have been primarily innovative in their use of technology (at first, CD-ROMs, before web-based applications became standard).

Historian Jill Lepore, among others, has traced the mantra of innovation at the start of this century back to renewed interest in the work of mid-20th century Austrian economist, Joseph Schumpeter, in the 1990s. Schumpeter wrote about ‘creative destruction’, and his ideas gained widespread traction with the publication in 1997 of Clayton Christensen’s ‘The Innovator’s Dilemma: The Revolutionary Book that Will Change the Way You Do Business’. Under Christensen, ‘creative destruction’ morphed into ‘disruptive innovation’. The idea was memorably expressed in Facebook’s motto of ‘Move fast and break things’. Disruptive innovation was always centrally concerned with expanding the market for commercial products by leveraging technology to gain access to more customers. Innovation, then, was and is a commercial strategy, and could be used either in product development or simply as an advertising slogan.

From the start of the innovation wave, the British Council has been keen to position itself in the vanguard. It does this for two reasons. Firstly, it needs to promote its own products and, with the cuts to British Council funding, its need to generate more income is increasingly urgent: ELT products are the main source of this income. Secondly, as part of the Council’s role in pushing British ‘soft power’, it seeks to promote Britain as an innovative, and therefore desirable, place to do business or study. This is wonderfully reflected in a series of videos for the Council’s LearnEnglish website called ‘Britain is Great’, subsets of which are entitled ‘Entrepreneurs are GREAT’ and ‘Innovation is GREAT’ with films celebrating the work of people like Richard Branson and James Dyson. For a while, the Council had a ‘Director, English Language Innovation’, and the current senior management team includes a ‘Director Digital, Partnerships and Innovation’ and a ‘Director Transformation’. With such a focus on innovation at the heart of its organisation, it is hardly surprising that the British Council should celebrate the idea in its ELTons awards. The ELTons celebrate the Council itself, and its core message, as much as they do the achievements of the award winners. Finalists in the ELTons receive a ‘promotional kit’ which includes ‘assets for the promotion of products or publications’. These assets (badges, banners, and so on) serve to promote the Council brand at the same time as advertising the shortlisted products themselves.

Innovation and a better world

Innovation, especially ‘disruptive innovation’, is not, however, what it used to be. The work of Clayton Christensen has been largely discredited (Russell & Vinsel, 2016). The Facebook motto has been changed and ‘the Era of “Move Fast and Break Things” Is Over’ (Taneja, 2019). The interest in ‘minimum viable products’ has shifted to an interest in ‘minimum virtuous products’. This is reflected in the marketing of edtech with the growing focus on how product X or Y will make the world a better place in some way. The ELTons introduced ‘Judges’ Commendations’ for ‘Equality, Diversity and Inclusion’ and, this year, a new commendation for ‘Environmental Sustainability and Climate Action’. Innovation is still celebrated, but ‘disruption’ has undergone a slide of meaning, so that it is more likely now to refer to disruption caused by the Covid pandemic, and our responses to it. For example, TESOL Italy’s upcoming annual conference, entitled ‘Disruptive Innovations in ELT’, encourages contributions not only about online study and ‘interactive e-learning platforms’, but also about ‘sustainable development and social justice’, ‘resilience, collaboration, empathy, digital literacy, soft skills, and global competencies’. Innovation is still presented as a good, even necessary, thing.

I am not suggesting that the conflation of innovation with positive social good is purely virtue-signalling, although it is sometimes clearly that. However, the rhetorical shift makes it harder for anyone to criticise innovations, when they are presented as solutions to problems that need to be solved. Allen et al. (2021) argue that ‘those who propose solutions are always virtuous because they clearly care about a problem we must solve. Those who suggest the solution will not work, and who have no better solution, are denying the problem the opportunity of the resolution it so desperately needs’.

There are, though, good reasons to be wary of ‘innovation’ in education. First among these are the lessons of history, which teach us that today’s ‘next big thing’ is usually tomorrow’s ‘last next big thing’ (Allen et al., 2021). On the technology front, from programmed instruction to interactive whiteboards, educational history is littered with artefacts that have been oversold and underused (Cuban, 2001). Away from technology, from Multiple Intelligences to personalized learning, we see the same waves of enthusiasm and widespread adoption, followed by waning interest and abandonment. The waste of money and effort along the way has been colossal, although that is not to say that there have not been some, sometimes significant, gains.

The second big reason to be wary of technological innovations in education is that they focus our attention on products of various kinds. But products are not at the heart of schooling: it is labour, especially the work of teachers, which occupies that place. It is not Zoom that made possible the continuation of education during the pandemic lockdowns. Indeed, in many parts of the world, lower-tech or zero-tech solutions had to be found. It was teachers’ readiness to adapt to the new circumstances that allowed education to stumble onwards during the crisis. Vinsel and Russell’s recent book, ‘The Innovation Delusion’ (2020), compellingly argues that the focus on innovation has led us to ‘devalue the work that underpins modern life’. They point out (Russell and Vinsel, 2016) that ‘feminist theorists have long argued that obsessions with technological novelty obscures all of the labour, including housework, that women, disproportionately, do to keep life on track’. Parallels with the relationship between teachers and technology are hard to avoid. The presentation of innovation as an inherently desirable value ‘rarely asks who benefits, to what end?’

The ‘ELT’ in the ELTons

It’s time to consider the ‘ELT’ part of the ELTons. ‘ELT’ is a hypothetical construct that is often presented as a concrete reality, rather than a loosely-bound constellation of a huge number of different practices and attitudes, many of which have very little in common with each other. This reification of ‘ELT’ can serve a number of purposes, one of which is to frame discourse in particular ways. In a post from a few years ago, Andrew Wickham and I discussed how the framing of ‘ELT’ (and education, more generally) as an industry serves particular interests, but may be detrimental to the interests of others.

Perhaps a useful way of viewing ‘ELT’ is as a discourse community. Borg (2003) argues that ‘membership of a discourse community is usually a matter of choice’. That is to say that you are part of ‘ELT’ if you choose to identify yourself as such. In Europe, huge numbers of English language teachers do not choose to identify themselves primarily as an ‘ELT teacher’: they may see this label as relevant to them, but a more immediate and primary self-identification is often as a ‘school teacher’, a ‘primary school teacher’, a ‘(modern) languages teacher’, a ‘CLIL teacher’, and so on. They work in the state / public sectors. The concerns and interests of those who do not self-identify as ‘ELT practitioners’ are most likely to revolve around their local contexts and issues. Those of us who self-identify as ‘ELT practitioners’ are more likely to be interested in what we share with others who self-identify in the same way in different parts of the globe. The relevance of local contexts and issues is mostly to be found in how they may shed light on more global concerns. If you prioritise the local over the global, your participation in the ‘ELT’ discourse community is likely to be limited. Things like the ELTons are simply off your radar.

Borg (2003) also points out that discourse communities typically have ‘experts who perform gatekeeping roles’. The discourse of ‘ELT’ is enacted in magazines, blogs, videos, webinars and conferences aimed at English language teachers. I exclude from this list academic journals and books which are known to be consulted only rarely by the vast majority of teachers. Similarly, I exclude the more accessible books that have been written specifically for English language teachers, which are mostly sold in minuscule quantities, except for those that are required reading for training courses. The greatest number of contributors to the discourse of ‘ELT’ are authors, developers and publishers of language teaching materials and tools, teachers representing product vendors or (directly or indirectly) promoting their own products, representatives of private teaching / training schools and organisations, representatives of international examination bodies, and representatives of universities (which, in some countries, essentially function as private institutions (Chowdhury & Ha, 2014)).

In other words, the discourse of ‘ELT’ is shaped to a very significant extent by gatekeepers who have a product to sell. Their customers are often those who do not self-identify in the same way as members of the ‘ELT’ discourse community. The British Council is a key gatekeeper in this discourse and it is a private sector operator par excellence.

The lack of interest in the workers of ‘ELT’ is well documented – see for example the Teachers as Workers blog. It is hardly unexpected, especially in the private sector. The British Council has a long history of labour disputes. At the present time, the Public and Commercial Services Union in the UK is balloting members about strike action against forced redundancies, which ‘are disproportionately targeted at middle to lower graded staff, while at the same time new management positions and a new deputy chief executive officer post are to be created’. One of the aims of the union is to stop the privatisation / outsourcing of Council jobs. The British government’s recent failure to relocate British Council employees in Afghanistan led to over 100,000 people signing a petition demanding action. The public silence of the British Council did little to inspire confidence in their interest in their workers.

The Council is a many-headed beast, and some of these heads do very admirable work in sponsoring or supporting a large variety of valuable projects. I don’t think the ELTons are one of these. The ideology behind them is highly questionable, and their ‘best before’ date has long expired. And given the financial constraints that the Council is now operating under, the money might be better spent elsewhere.

References

Allen, R., Evans, M. & White, B. (2021) The Next Big Thing in School Improvement. Woodbridge: John Catt Educational

Borg, E. (2003) Discourse Community. ELT Journal 57 (4): 398-400

Chowdhury, R. & Ha, P. L. (2014) Desiring TESOL and International Education. Bristol: Multilingual Matters

Christensen, C. M. (1997) The Innovator’s Dilemma: The Revolutionary Book that Will Change the Way You Do Business. Cambridge: Harvard Business Review Press

Cuban, L. (2001) Oversold and Underused: Computers in the Classroom. Cambridge: Harvard University Press

Lepore, J. (2014) The Disruption Machine. The New Yorker, June 23, 2014. https://www.newyorker.com/magazine/2014/06/23/the-disruption-machine

Russell, A. L. & Vinsel, L. (2016) Hail the Maintainers. Aeon, 7 April 2016 https://aeon.co/essays/innovation-is-overvalued-maintenance-often-matters-more

Taneja, H. (2019) The Era of “Move Fast and Break Things” Is Over. Harvard Business Review, January 22, 2019, https://hbr.org/2019/01/the-era-of-move-fast-and-break-things-is-over

Vinsel, L. & Russell, A. L. (2020) The Innovation Delusion. New York: Currency Books

NB This is an edited version of the original review.

Words & Monsters is a new vocabulary app that has caught my attention. There are three reasons for this. Firstly, because it’s free. Secondly, because I was led to believe (falsely, as it turns out) that two of the people behind it are Charles Browne and Brent Culligan, eminently respectable linguists, who were also behind the development of the New General Service List (NGSL), based on data from the Cambridge English Corpus. And thirdly, because a lot of thought, effort and investment have clearly gone into the gamification of Words & Monsters (WAM). It’s to the last of these that I’ll turn my attention first.

WAM teaches vocabulary in the context of a battle between a player’s avatar and a variety of monsters. If users can correctly match a set of target items to definitions or translations in the available time, they ‘defeat’ the monster and accumulate points. The more points you have, the higher you advance through a series of levels and ranks. There are bonuses for meeting daily and weekly goals, there are leaderboards, and trophies and medals can be won. In addition to points, players also win ‘crystals’ after successful battles, and these crystals can be used to buy accessories which change the appearance of the avatar and give the player added ‘powers’. I was never able to fully understand precisely how these ‘powers’ affected the number of points I could win in battle. It remained as baffling to me as the whole system of values with Pokémon cards, which is presumably a large part of the inspiration here. Perhaps others, more used to games like Pokémon, would find it all much more transparent.

The system of rewards is all rather complicated, but perhaps this doesn’t matter too much. In fact, it might be the case that working out how reward systems work is part of what motivates people to play games. But there is another aspect to this: the app’s developers refer in their bumf to research by Howard-Jones and Jay (2016), which suggests that when rewards are uncertain, more dopamine is released in the mid-brain and this may lead to reinforcement of learning, and, possibly, enhancement of declarative memory function. Possibly … but Howard-Jones and Jay point out that ‘the science required to inform the manipulation of reward schedules for educational benefit is very incomplete.’ So, WAM’s developers may be jumping the gun a little and overstating the applicability of the neuroscientific research, but they’re not alone in that!

If you don’t understand a reward system, it’s certain that the rewards are uncertain. But WAM takes this further in at least two ways. Firstly, when you win a ‘battle’, you have to click on a plain treasure bag to collect your crystals, and you don’t know whether you’ll get one, two, three, or zero, crystals. You are given a semblance of agency, but, essentially, the whole thing is random. Secondly, when you want to convert your crystals into accessories for your avatar, random selection determines which accessory you receive, even though, again, there is a semblance of agency. Different accessories have different power values. This extended use of what the developers call ‘the thrill of uncertain rewards’ is certainly interesting, but how effective it is is another matter. My own reaction, after quite some time spent ‘studying’, to getting no crystals or an avatar accessory that I didn’t want was primarily frustration, rather than motivation to carry on. I have no idea how typical my reaction (more ‘treadmill’ than ‘thrill’) might be.
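Mechanically, this kind of uncertain reward is just a draw from a hidden probability distribution: the click on the treasure bag is fixed, but the payoff is not. The sketch below is purely illustrative; the outcome values and weights are my invention, not anything published by WAM’s developers.

```python
import random

# A hypothetical uncertain-reward draw. The player always performs the
# same action (clicking the bag), but the payout is sampled from a
# distribution they cannot see. Weights here are invented for illustration.
CRYSTAL_OUTCOMES = [0, 1, 2, 3]
CRYSTAL_WEIGHTS = [0.25, 0.40, 0.25, 0.10]

def open_treasure_bag(rng: random.Random) -> int:
    """Return a crystal payout; the click itself is only a semblance of agency."""
    return rng.choices(CRYSTAL_OUTCOMES, weights=CRYSTAL_WEIGHTS, k=1)[0]

rng = random.Random(42)
payouts = [open_treasure_bag(rng) for _ in range(1000)]
print(sum(payouts) / len(payouts))  # long-run average payout per battle
```

The point of such a schedule, on the Howard-Jones and Jay account, is precisely that the player cannot predict any single draw, even though the long-run average is fixed by the designer.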

Unsurprisingly, for an app that has so obviously thought carefully about gamification, players are encouraged to interact with each other. As part of the early promotion, WAM is running, from 15 November to 19 December, a free ‘team challenge tournament’, allowing teams of up to 8 players to compete against each other. Ingeniously, it would appear to allow teams and players of varying levels of English to play together, with the app’s algorithms determining each individual’s level of lexical knowledge and therefore the items that will be presented / tested. Social interaction is known to be an important component of successful games (Dehghanzadeh et al., 2019), but for vocabulary apps there’s a huge challenge. In order to learn vocabulary from an app, learners need to put in time – on a regular basis. Team challenge tournaments may help with initial on-boarding of players, but, in the end, learning from a vocabulary app is inevitably and largely a solitary pursuit. Over time, social interaction is unlikely to be maintained, and it is, in any case, of a very limited nature. The other features of successful games – playful freedom and intrinsically motivating tasks (Driver, 2012) – are also absent from vocabulary apps. Playful freedom is mostly incompatible with points, badges and leaderboards. And flashcard tasks, however intrinsically motivating they may be at the outset, will always become repetitive after a while. In the end, what’s left, for those users who hang around long enough, is the reward system.

It’s also worth noting that this free challenge is of limited duration: it is a marketing device attempting to push you towards the non-free use of the app, once the initial promotion is over.

Gamified motivation tools are only of value, of course, if they motivate learners to spend their time doing things that are of clear learning value. To evaluate the learning potential of WAM, then, we need to look at the content (the ‘learning objects’) and the learning tasks that supposedly lead to acquisition of these items.

When you first use WAM, you need to play for about 20 minutes, at which point algorithms determine ‘how many words [you] know and [you can] see scores for English tests such as; TOEFL, TOEIC, IELTS, EIKEN, Kyotsu Shiken, CEFR, SAT and GRE’. The developers claim that these scores correlate pretty highly with actual test scores: ‘they are about as accurate as the tests themselves’, they say. If Browne and Culligan had been behind the app, I would have been tempted to accept the claim – with reservations: after all, it still allows for one item out of 5 to be wrongly identified. But, what is this CEFR test score that is referred to? There is no CEFR test, although many tests are correlated with CEFR. The two tools that I am most familiar with which allocate CEFR levels to individual words – Cambridge’s English Vocabulary Profile and Pearson’s Global Scale of English – often conflict in their results. I suspect that ‘CEFR’ was just thrown into the list of tests as an attempt to broaden the app’s appeal.
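For what it’s worth, the standard logic behind estimates of this kind is to sample items from word-frequency bands and extrapolate from the proportion the learner gets right in each band. The sketch below is an assumption about how such an estimate might work in general; WAM’s actual algorithms are not public, and the band size here is invented.

```python
# A toy version of the frequency-band sampling logic that vocabulary-size
# tests rely on. The band size and sampling scheme are illustrative
# assumptions, not WAM's (undisclosed) method.
BAND_SIZE = 1000  # assumed number of words per frequency band

def estimate_vocab_size(known_flags_by_band: dict[int, list[bool]]) -> int:
    """Extrapolate total known words from sampled yes/no results per band.

    known_flags_by_band maps a band index (0 = most frequent 1000 words)
    to a list of booleans: did the learner get each sampled item right?
    """
    total = 0
    for flags in known_flags_by_band.values():
        proportion_known = sum(flags) / len(flags)
        total += round(proportion_known * BAND_SIZE)
    return total

# A learner who knows 90% of band 0, 60% of band 1 and 20% of band 2:
sample = {
    0: [True] * 9 + [False],
    1: [True] * 6 + [False] * 4,
    2: [True] * 2 + [False] * 8,
}
print(estimate_vocab_size(sample))  # 900 + 600 + 200 = 1700
```

Note how sensitive the extrapolation is to the sample: with only a handful of items per band, one lucky guess shifts the estimate by a hundred words, which is one reason to treat the ‘as accurate as the tests themselves’ claim with caution.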

English target words are presented and practised with their translation ‘equivalents’ in Japanese. For the moment, Japanese is the only language available, which means the app is of little use to learners who don’t know any Japanese. It’s now well-known that bilingual pairings are more effective in deliberate language learning than using definitions in the same language as the target items. This becomes immediately apparent when, for example, a word like ‘something’ is defined (by WAM) as ‘a thing not known or specified’ and ‘anything’ as ‘a thing of whatever kind’. But although I’m in no position to judge the Japanese translations, there are reasons why I would want to check the spreadsheet before recommending the app. ‘Lady’ is defined as ‘polite word for a woman’; ‘missus’ is defined as ‘wife’; and ‘aye’ is defined as ‘yes’. All of these definitions are, at best, problematic; at worst, they are misleading. Are the Japanese translations more helpful? I wonder … Perhaps these are simply words that do not lend themselves to flashcard treatment?

Because I tested into the app at C1 level, I was not able to evaluate the selection of words at lower levels. A pity. Instead, I was presented with words like ‘ablution’, ‘abrade’, ‘anode’, and ‘auspice’. The app claims to be suitable ‘for both second-language learners and native speakers’. For lower levels of the former, this may be true (but without looking at the lexical spreadsheets, I can’t tell). But for higher levels, however much fun this may be for some people, it seems unlikely that you’ll learn very much of any value. Outside of words in, say, the top 8000 frequency band, it is practically impossible to differentiate the ‘surrender value’ of words in any meaningful way. Deliberate learning of vocabulary only makes sense with high frequency words that you have a chance of encountering elsewhere. You’d be better off reading, extensively, rather than learning random words from an app. Words which, for reasons I’ll come on to, you probably won’t actually learn anyway.

With very few exceptions, the learning objects in WAM are single words, rather than phrases, even when the item is of little or no value outside its use in a phrase. ‘Betide’ is defined as ‘to happen to; befall’ but this doesn’t tell a learner much that is useful. It’s practically only ever used following ‘woe’ (but what does ‘woe’ mean?!). Learning items can be checked in the ‘study guide’, which will show that ‘betide’ typically follows ‘woe’, but unless you choose to refer to the study guide (and there’s no reason, in a case like this, that you would know that you need to check things out more fully), you’ll be none the wiser. In other words, checking the study guide is unlikely to betide you. ‘Wee’, as another example, is treated as two items: (1) meaning ‘very small’ as in ‘wee baby’, and (2) meaning ‘very early in the morning’ as in ‘in the wee hours’. For the latter, ‘wee’ can only collocate with ‘in the’ and ‘hours’, so it makes little sense to present it as a single word. This is also an example of how, in some cases, different meanings of particular words are treated as separate learning objects, even when the two meanings are very close and, in my view, are hardly worth learning separately. Examples include ‘czar’ and ‘assonance’. Sometimes, cognates are treated as separate learning objects (e.g. ‘adulterate’ and ‘adulteration’ or ‘dolor’ and ‘dolorous’); with other words (e.g. ‘effulgence’), only one grammatical form appears to be given. I could not begin to figure out any rationale behind any of this.

All in all, then, there are reasons to be a little sceptical about some of the content. Up to level B2 – which, in my view, is the highest level at which it makes sense to use vocabulary flashcards – it may be of value, so long as your first language is Japanese. But given the claim that it can help you prepare for the ‘CEFR test’, I have to wonder …

The learning tasks require players to match target items to translations / definitions (in both directions), with the target item sometimes in written form, sometimes spoken. Users do not, as far as I can tell, ever have to produce the target item: they only have to select. The learning relies on spaced repetition, but there is no generation effect (known to enhance memorisation). When I was experimenting, there were a few words that I did not know, but I was usually able to get the correct answer by eliminating the distractors (a choice of one from three gives players a reasonable chance of guessing correctly). WAM does not teach users how to produce words; its focus is on receptive knowledge (of a limited kind). I learn, for example, what a word like ‘aye’ or ‘missus’ kind of means, but I learn nothing about how to use it appropriately. Contrary to the claims in WAM’s bumf (that ‘all senses and dimensions of each word are fully acquired’), reading and listening comprehension speeds may be improved, but appropriate and accurate use of these words in speaking and writing is much less likely to follow. Does WAM really ‘strengthen and expand the foundation levels of cognition that support all higher level thinking’, as is claimed?
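The arithmetic of guessing is worth making explicit. On a pick-one-of-three item, a player with no knowledge of the target still scores a third of the time, and every distractor they can rule out raises the odds further, which is why ‘correct’ answers in a recognition-only task can overstate what has actually been learned. A minimal calculation:

```python
from fractions import Fraction

def p_correct_by_guessing(options: int, distractors_eliminated: int) -> Fraction:
    """Chance of a 'correct' answer by guessing among the remaining options."""
    remaining = options - distractors_eliminated
    return Fraction(1, remaining)

print(p_correct_by_guessing(3, 0))  # 1/3: no knowledge of the target at all
print(p_correct_by_guessing(3, 1))  # 1/2: one distractor recognised
print(p_correct_by_guessing(3, 2))  # 1: answer found without knowing the word
```

In other words, a run of right answers in this format is compatible with knowing the distractors rather than the target, which is exactly what happened in my own experimenting.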

Perhaps it’s unfair to mention some of the more dubious claims of WAM’s promotional material, but here is a small selection, anyway: ‘WAM unleashes the full potential of natural motivation’. ‘WAM promotes Flow by carefully managing the ratio of unknown words. Your mind moves freely in the channel below frustration and above boredom’.

WAM is certainly an interesting project, but, like all the vocabulary apps I have ever looked at, it involves trade-offs: between optimal task design and what will fit on a mobile screen, between freedoms and flexibility for the user and the requirements of gamified points systems, between the amount of linguistic information that is desirable and the amount that spaced repetition can deal with, and between attempting to make the app suitable for the greatest number of potential users and making it especially appropriate for particular kinds of users. Design considerations are always a mix of the pedagogical and the practical / commercial. And, of course, the financial. And, like most edtech products, the claims for its efficacy need to be treated with a bucket of salt.


Five years ago, in 2016, there was an interesting debate in the pages of the journal ‘Psychological Review’. It began with an article by Jeffrey Bowers (2016a), a psychologist at the University of Bristol, who argued that neuroscience (as opposed to psychology) has little or nothing to offer us in terms of improving classroom instruction, and is unlikely ever to do so. He wasn’t the first to question the relevance of neuroscience to education (see, for example, Willingham, 2009), but this was a full-frontal attack. Bowers argued that ‘neuroscience rarely offers insights into instruction above and beyond psychology’ and that neuroscientific evidence that the brain changes in response to instruction is irrelevant. His article was followed by two counter-arguments (Gabrieli, 2016; Howard-Jones, et al., 2016), which took him to task for too narrowly limiting the scope of education to classroom instruction (neglecting, for example, educational policy), for ignoring the predictive power of neuroimaging on neurodevelopmental differences (and, therefore, its potential value in individualising curricula), and for failing to take account of the progress that neuroscience, in collaboration with educators, has already made. Bowers’ main argument, that educational neuroscience had little to tell us about teaching, was not really addressed in the counter-arguments, and Bowers (2016b) came back with a counter-rebuttal of his own.

The brain responding to seductive details

In some ways, the debate, like so many of the kind, suffered from the different priorities of the participants. For Gabrieli and Howard-Jones et al., Bowers had certainly overstated his case, but they weren’t entirely in disagreement with him. Paul Howard-Jones has been quoted by André Hedlund as saying that ‘all neuroscience can do is confirm what we’ve been doing all along and give us new insights into a couple of new things’. One of Howard-Jones’ co-authors, Usha Goswami, director of the Centre for Neuroscience in Education at the University of Cambridge, has said that ‘there is a gulf between current science and classroom applications’ (Goswami, 2006).

For teachers, though, it is the classroom applications that are of interest. Claims for the relevance of neuroscience to ELT have been made by many. We [in ESL / EFL] need it, writes Curtis Kelly (2017). Insights from neuroscience can, apparently, make textbooks more ‘brain friendly’ (Helgesen & Kelly, 2015). Herbert Puchta’s books are advertised by Cambridge University Press as ‘based on the latest insights into how the brain works fresh from the field of neuroscience’. You can watch a British Council talk by Rachael Roberts, entitled ‘Using your brain: what neuroscience can teach us about learning’. And, in the year following the Bowers debate, Carol Lethaby and Patricia Harries gave a presentation at IATEFL Glasgow (Lethaby & Harries, 2018) entitled ‘Research and teaching: What has neuroscience ever done for us?’ – a title that I have lifted for this blog post. Lethaby and Harries provide a useful short summary of the relevance of neuroscience to ELT, and I will begin my discussion with that. They expand on this in their recent book (Lethaby, Mayne & Harries, 2021), a book I highly recommend.

So what, precisely, does neuroscience have to tell English language teachers? Lethaby and Harries put forward three main arguments. Firstly, neuroscience can help us to bust neuromyths (the examples they give are right / left brain dominance and learning styles). Secondly, it can provide information that informs teaching (the examples given are the importance of prior knowledge and the value of translation). Finally, it can validate existing best practice (the example given is the importance of prior knowledge). Let’s take a closer look.

I have always enjoyed a bit of neuromyth busting and I wrote about ‘Left brains and right brains in English language teaching’ a long time ago. It is certainly true that neuroscience has helped to dispel this myth: it is ‘simplistic at best and utter hogwash at worst’ (Dörnyei, 2009: 49). However, we did not need neuroscience to rubbish the practical teaching applications of this myth, which found their most common expression in Neuro-Linguistic Programming (NLP) and Brain Gym. Neuroscience simply hammered the final nail into the coffin of these trends. The same is true for learning styles and the meshing hypothesis. It’s also worth noting that, despite the neuroscientific evidence, such myths are taking a long time to die … a point I will return to at the end of this post.

Lethaby and Harries’s second and third arguments are essentially the same, unless, in their second point they are arguing that neuroscience can provide new information. I struggle, however, to see anything that is new. Neuroimaging apparently shows that the medial prefrontal cortex is activated when prior knowledge is accessed, but we have long known (since Vygotsky, at least!) that effective learning builds on previous knowledge. Similarly, the amygdala (known to be associated with the processing of emotions) may play an important role in learning, but we don’t need to know about the amygdala to understand the role of affect in learning. Lastly, the neuroscientific finding that different languages are not ‘stored’ in separate parts of the brain (Spivey & Hirsch, 2003) is useful to substantiate arguments that translation can have a positive role to play in learning another language, but convincing arguments predate findings such as these by many, many years. This would all seem to back up Howard-Jones’s observation about confirming what we’ve been doing and giving us new insights into a couple of new things. It isn’t the most compelling case for the relevance of neuroscience to ELT.

Chapter 2 of Carol Lethaby’s new book, ‘An Introduction to Evidence-based Teaching in the English Language Classroom’ is devoted to ‘Science and neuroscience’. The next chapter is called ‘Psychology and cognitive science’ and practically all the evidence for language teaching approaches in the rest of the book is drawn from cognitive (rather than neuro-) science. I think the same is true for the work of Kelly, Helgesen, Roberts and Puchta that I mentioned earlier.

It is perhaps the case these days that educationalists prefer to refer to ‘Mind, Brain, and Education Science’ (MBE) – the ‘intersection of neuroscience, education, and psychology’ – rather than educational neuroscience, but, looking at the literature of MBE, there’s a lot more education and psychology than there is neuroscience (although the latter always gets a mention). Probably the most comprehensive and well-known volume of practical ideas deriving from MBE is ‘Making Classrooms Better’ (Tokuhama-Espinosa, 2014). Of the 50 practical applications listed, most are either inspired by the work of John Hattie (2009) or the work of cognitive psychologists. Neuroscience hardly gets a look in.

To wrap up, I’d like to return to the question of neuroscience’s role in busting neuromyths. References to neuroscience, especially when accompanied by fMRI images, have a seductive appeal to many: they confer a sense of ‘scientific’ authority (McCabe & Castel, 2008). Many teachers, it seems, are keen to hear about neuroscience (Pickering & Howard-Jones, 2007). Even when the discourse contains irrelevant neuroscientific information (diagrams of myelination come to mind), it seems that many of us find this satisfying (Weisberg et al., 2015; Weisberg et al., 2008). Neuroscientific explanations give an illusion of explanatory depth (Rozenblit & Keil, 2002): this is what Weisberg et al. (2008) call their ‘seductive allure’. You are far more likely to see conference presentations, blog posts and magazine articles extolling the virtues of neuroscientific findings than you are to come across things like I am writing here. But is it possible that the much-touted idea that neuroscience can bust neuromyths is itself a myth?

Sadly, we have learnt in recent times that scientific explanations have only very limited impact on the beliefs of large swathes of the population (including teachers, of course). Think of climate change and COVID. Why should neuroscience be any different? It probably isn’t. Scurich & Shniderman (2014) found that ‘neuroscience is more likely to be accepted and credited when it confirms prior beliefs’. We are more likely to accept neuroscientific findings because we ‘find them intuitively satisfying, not because they are accurate’ (Weisberg, et al. 2008). Teaching teachers about educational neuroscience may not make much, if any, difference (Tham et al., 2019). I think there is a danger in using educational neuroscience, seductive details and all, to validate what we already do (as opposed to questioning what we do). And those who don’t already do these things will probably ignore such findings as there are, anyway.

References

Bowers, J. (2016a) The practical and principled problems with educational Neuroscience. Psychological Review 123 (5) 600 – 612

Bowers, J.S. (2016b) Psychology, not educational neuroscience, is the way forward for improving educational outcomes for all children: Reply to Gabrieli (2016) and Howard-Jones et al. (2016). Psychological Review. 123 (5):628-35.

Dörnyei, Z. (2009) The Psychology of Second Language Acquisition. Oxford: Oxford University Press

Gabrieli, J.D. (2016) The promise of educational neuroscience: Comment on Bowers (2016). Psychological Review. 123 (5):613-9

Goswami, U. (2006) Neuroscience and education: From research to practice? Nature Reviews Neuroscience, 7: 406 – 413

Hattie, J. (2009) Visible Learning: A synthesis of over 800 meta-analyses relating to achievement. London: Routledge

Helgesen, M. & Kelly, C. (2015) ‘Do-it-yourself: Ways to make your textbook more brain-friendly’. SPELT Quarterly, 30 (3): 32 – 37

Howard-Jones, P.A., Varma. S., Ansari, D., Butterworth, B., De Smedt, B., Goswami, U., Laurillard, D. & Thomas, M. S. (2016) The principles and practices of educational neuroscience: Comment on Bowers (2016). Psychological Review. 123 (5):620-7

Kelly, C. (2017) The Brain Studies Boom: Using Neuroscience in ESL/EFL Teacher Training. In Gregersen, T. S. & MacIntyre, P. D. (Eds.) Innovative Practices in Language Teacher Education pp.79-99 Springer

Lethaby, C. & Harries, P. (2018) ‘Research and teaching: What has neuroscience ever done for us?’ In Pattison, T. (Ed.) IATEFL Glasgow Conference Selections 2017. Faversham, Kent, UK: IATEFL pp. 36 – 37

Lethaby, C., Mayne, R. & Harries, P. (2021) An Introduction to Evidence-Based Teaching in the English Language Classroom. Shoreham-by-Sea: Pavilion Publishing

McCabe, D.P. & Castel, A.D. (2008) Seeing is believing: The effect of brain images on judgments of scientific reasoning. Cognition 107: 343–352.

Pickering, S. J. & Howard-Jones, P. (2007) Educators’ views on the role of neuroscience in education: findings from a study of UK and international perspectives. Mind Brain Education 1: 109–113.

Rozenblit, L., & Keil, F. (2002). The misunderstood limits of folk science: an illusion of explanatory depth. Cognitive science, 26(5), 521–562.

Scurich, N., & Shniderman, A. (2014) The selective allure of neuroscientific explanations. PLOS One, 9 (9), e107529. http://dx.doi.org/10.1371/journal.pone. 0107529.

Spivey, M. V. & Hirsch, J. (2003) ‘Shared and separate systems in bilingual language processing: Converging evidence from eyetracking and brain imaging’ Brain and Language, 86: 70 – 82

Tham, R., Walker, Z., Tan, S.H.D., Low, L.T. & Annabel Chan, S.H. (2019) Translating educational neuroscience for teachers. Learning: Research and Practice, 5 (2): 149-173 Singapore: National Institute of Education

Tokuhama-Espinosa, T. (2014) Making Classrooms Better. New York: Norton

Weisberg, D. S., Taylor, J. C. V. & Hopkins, E.J. (2015) Deconstructing the seductive allure of neuroscience explanations. Judgment and Decision Making, Vol. 10, No. 5, September 2015, pp. 429–441

Weisberg, D. S., Keil, F. C., Goodstein, J., Rawson, E., & Gray, J. R. (2008). The seductive allure of neuroscience explanations. Journal of cognitive neuroscience, 20 (3): 470–477.

Willingham, D. T. (2009). Three problems in the marriage of neuroscience and education. Cortex, 45: 54-55.

I’ve written about mindset before (here), but a recent publication caught my eye, and I thought it was worth sharing.

Earlier this year, the OECD produced a report on its 2018 PISA assessments. This was significant because it was the first time that the OECD had attempted to measure mindsets and correlate them with academic achievement. Surveying some 600,000 15-year-old students in 78 countries and economies, it is, to date, the biggest, most global attempt to study the question. Before going any further, a caveat is in order. The main focus of PISA 2018 was on reading, so any correlations that are found between mindsets and achievement can only be interpreted in the context of gains in reading skills. This is important to bear in mind, as previous research indicates that mindsets may have different impacts on different school subjects.

There has been much debate about how best to measure mindsets and, indeed, whether they can be measured at all. The OECD approached the question by asking students to respond to the statement ‘Your intelligence is something about you that you can’t change very much’ by choosing “strongly disagree”, “disagree”, “agree”, or “strongly agree”. Disagreeing with the statement was considered a precursor of a growth mindset, as it is more likely that someone who thinks intelligence can change will challenge him/herself to improve it. Across the sample, almost two-thirds of students showed a growth mindset, but there were big differences between countries, with students in Estonia, Denmark, and Germany being much more growth-oriented than those in Greece, Mexico or Poland (among OECD countries) and the Philippines, Panama, Indonesia or Kosovo (among the non-OECD countries). In line with previous research, students from socio-economically advantaged backgrounds presented a growth mindset more often than those from socio-economically disadvantaged backgrounds.
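The scoring logic here is as blunt as it sounds: one four-point item, collapsed into a binary label. A few lines of Python make the point (an illustration of the coding described in the report, not the OECD’s actual analysis code):

```python
# Responses to: 'Your intelligence is something about you that
# you can't change very much.' Disagreement is coded as 'growth'.
GROWTH = {"strongly disagree", "disagree"}
FIXED = {"agree", "strongly agree"}

def classify(response):
    """Collapse a four-point Likert response into a binary mindset label."""
    r = response.strip().lower()
    if r in GROWTH:
        return "growth"
    if r in FIXED:
        return "fixed"
    raise ValueError(f"unexpected response: {response!r}")

def growth_share(responses):
    """Proportion of a sample coded as having a growth mindset."""
    labels = [classify(r) for r in responses]
    return labels.count("growth") / len(labels)
```

Everything the report says about mindset prevalence (the ‘almost two-thirds of students’) rests on this single collapsed item, which is worth remembering when interpreting the cross-country comparisons.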

I have my problems with the research methodology. A 15-year-old from a wealthy country is much more likely than peers in other countries to have experienced mindset interventions in school: motivational we-can-do-it posters, workshops on neuroplasticity, biographical explorations of success stories and the like. In some places, some students have been so exposed to this kind of thing that school leaders have realised that growth mindset interventions should be much more subtle, avoiding the kind of crude, explicit proselytising that simply makes many students roll their eyes. In contexts such as these, most students now know what they are supposed to believe concerning the malleability of intelligence, irrespective of what they actually believe. Therefore, asking them, in a formal context, to respond to statements which are obviously digging at mindsets is an invitation to provide what they know is the ‘correct response’. Others, who have not been so fortunate in receiving mindset training, are less likely to know the correct answer. Therefore, the research results probably tell us as much about educational practices as they do about mindsets. There are other issues with the chosen measurement tool, discussed in the report, including acquiescent bias and the fact that the cognitive load required by the question increases the likelihood of a random response. Still, let’s move on.

The report found that growth mindsets correlated with academic achievement in some (typically wealthier) countries, but not in others. Wisely, the report cautions that the findings do not establish cause-and-effect relations. This is wise because a growth mindset may, to some extent, be the result of academic success, rather than the cause. As the report observes, students performing well may attribute their success to internal characteristics of effort and perseverance, while those performing poorly may attribute their failure to immutable characteristics in order to preserve their self-esteem.

However, the report does list the ways in which a growth mindset can lead to better achievement. These include valuing school more, setting more ambitious learning goals, higher levels of self-efficacy, higher levels of motivation and lower levels of fear of failure. This is a very circular kind of logic. These attributes are the attributes of growth mindset, but are they the results of a growth mindset or simply the constituent parts of it? Incidentally, they were measured in the same way as the measurement of mindset, by asking students to respond to statements like “I find satisfaction in working as hard as I can” or “My goal is to learn as much as possible”. The questions are so loaded that we need to be very sceptical about the meaning of the results. The concluding remarks to this section of the report clearly indicate the bias of the research. The question that is asked is not “Can growth mindset lead to better results?” but “How can growth mindset lead to better results?”

Astonishingly, the research did not investigate the impact of growth mindset interventions in schools on growth mindset. Perhaps, this is too hard to do in any reliable way. After all, what counts as a growth mindset intervention? A little homily from the teacher about how we can all learn from our mistakes or some nice posters on the walls? Or a more full-blooded workshop about neural plasticity with follow-up tasks? Instead, the research investigated more general teaching practices. The results were interesting. The greatest impacts on growth mindset come when students perceive their teachers as being supportive in a safe learning environment, and when teachers adapt their teaching to the needs of the class, as opposed to simply following a fixed syllabus. The findings about teacher feedback were less clear: “Whether teacher feedback influences students’ growth mindset development or the other way around, further research is required to investigate this relationship, and why it could differ according to students’ proficiency in reading”.

The final chapter of this report does not include any references to data from the PISA 2018 exercise. Instead, it repeats, in a very selective way, previous research findings such as:

  • Growth mindset interventions yield modest average treatment effects, but larger effects for specific subgroups.
  • Growth-mindset interventions fare well in both scalability and cost-effectiveness dimensions.

It ignores any discussion about whether we should be bothering with growth mindsets at all. It tells us something we already know (about the importance of teacher support and adapting teaching to the needs of the class), but somehow concludes that “growth mindset interventions […] can be cost-effective ways to raise students’ outcomes on a large scale”. It is, to my mind, a classic example of ‘research’ that is looking to prove a point, rather than critically investigate a phenomenon. In that sense, it is the very opposite of science.

OECD (2021) Sky’s the Limit: Growth Mindset, Students, and Schools in PISA. https://www.oecd.org/pisa/growth-mindset.pdf

I came across this course description the other day. The course motto is ‘Honour the Learner’.

Discussions and assigned work covered in the curriculum includes multiple intelligence, positive deviance, brain science during stressful situations, how people learn, PBL, Failing Forward, Bloom’s Taxonomy, de-escalation, cognitive load and courageous conversations, with the interwoven golden threads of leadership theory and emotional intelligence.

I love the idea of positive deviance and the interwoven golden threads!

The description is of a coaching course for police professionals in Ontario. ‘The intense, week-long program follows a robust agenda that embraces a modified PBL-approach rather than a traditional, lecture-based format’. Participants write a personal mission statement and, in the process, they have ‘the opportunity to reflect on their own contributions and commitment to the effective, efficient and values-based delivery of policing’. As opposed to violence-based, for example.

The course began in 2017 and has been considered a success. But what sort of real impact has it had? And how has it adapted to going online? Is there anything people involved in language teaching can learn about coaching from the Ontario Police approach?

‘Coach’ (as in ‘life coach’) is, of course, a slightly tricky word. There are people who think it reflects an important reality in our lives, and others who struggle to take the word seriously. The former will write blog posts or give talks about coaching, the latter probably won’t read them.

The cause of coaching is not really helped by the lack of any broadly recognised certification, nor by people who think they can charge more just by claiming they are ‘coaches’. In the language teaching world, as elsewhere, some coaches are attempting to set up little trademarked enclaves, sprinkled with acronyms, pyramids and lightbulb illustrations, in order to differentiate just anyone who claims to be a coach from ‘proper coaches’ with certificates.

If you want to be certified, it’s not always easy to choose from the possibilities out there. I have recently read ‘Neurolanguage Coaching: Brain Friendly Language Learning’ by Rachel Paling. I’m normally very suspicious of anything with a ‘brain-friendly’ label. Worse still, I collocate ‘neuro’ more strongly with ‘bollox’ than with ‘language’. So it was an intriguing read. Without wanting to give too much away, I can tell you that it’s all to do with motivation (the limbic system, no less), being non-judgemental of the coachee, and breaking down language into manageable chunks: ‘from present tenses to future, to conditional etc.’ The key, continues the author, ‘is to start with grammar that gets the learner speaking the fastest. In English this would necessarily be the verbs ‘to be’, ‘to have’ and the impersonal ‘there is’ and ‘there are’, and the formulation of questions and negatives of these. Then it would be a step-by-step building the language: introducing present continuous as the real present and the present simple as the facts and habits tense’. (Paling, 2017: 83) And, hey presto, it’s as simple as that, when you’ve mastered the necessary skills. To get a firmer understanding of this trademarked approach to language coaching, you’d probably have to follow one of the many courses that are certified by Efficient Language Coaching® (online, prices on request).

The International Language Coaching Association (https://internationallanguagecoaching.com/) would seem to be a competitor to Neurolanguage Coaching®. They, too, run courses: $450 for the Foundation Course, but the price includes ‘12 month membership in the pioneering ILCA community’. Rather a lot for 4 live sessions and 4 supplementary study videos. I suppose if you were really keen, you could do both. But times are tight, and instead of splashing out, I invested in ‘Coaching for Language Learning’ by Emmanuelle Betham, another self-published book (only $18.73 on Kindle). The author’s skills, according to LinkedIn, include, besides life coaching, Neuro Linguistic Programming, Mindfulness, and Clean Language. Again, I was intrigued.

As with ‘Neurolanguage Coaching’, there were quite a few slogans in ‘Coaching for Language Learning (CFLL)’, not a lot of awareness of SLA research (the work of Krashen seems to be the limit of the reading informing the CFLL approach), and a crude stereotyping of teachers and trainers as ‘informative’, and coaches as ‘transformative’. But, unlike Rachel Paling, Emmanuelle Betham seems to think that grammar instruction (even when delivered through coaching questions) is less helpful. Instead, learners need to learn to think in English, and the best way of doing that is by having mindful conversations with their coach. Nevertheless, there is a section on ‘coaching grammar rules’. Some standard teaching activities are also recommended. Running dictations can be used. The English language is not melodious like Roman languages. If you’re struggling to make sense of all this, you’re not alone. CFLL, you see, is ‘a new paradigm that needs to be appreciated in practice, as defined in its context, and which cannot be comprehended within the wisdom of previous hypotheses’ (Betham, 2018 – 2020: 45). What’s more, CFLL is primarily interested in ‘what works’: the concepts in the book ‘are not to be agreed or disagreed with, they are just examples of visualizations that have worked well for some learners in practice before’ (Betham, 34). You see, ‘truth is relative […] the rationale for our new paradigm, CFLL rests on the assumption that we are free to interpret and construct our own truth’ (Betham, 35). It’s all heady stuff.

Coming down to ground, the most useful thing I’ve read about language teaching and coaching is ‘From English Teacher to Learner Coach’ by Daniel Barber and Duncan Foord (2014). If coaching is ultimately about helping people to become more autonomous, then, in combination with education, it’s all about learner autonomy. At least, that was the message I took from this book. It’s very cheap, and it also comes in a ‘Student’s Book’ version, which is very handy if you want your students to try out a pile of suggestions for becoming more autonomous learners. I thought the suggestions were good and plentiful, but it all seemed to be more about learner autonomy than about language coaching. Perhaps it’s not unreasonable to claim they are the same thing?

But could AI do away with real-life coaches altogether? I’m very interested in the idea of coaching bots. How easy / hard would it be to fool people that they were chatting with a real-life coach, rather than an algorithm? It wouldn’t be too hard to load up a corpus of coaching conversational strategies, hedges and questions and automate a linkage between key words produced by the coachee (e.g. stress, frustration, work, COVID, resisting arrest) and a range of conversation prompts. Computers are getting better and better at doing empathy. How long before a coachbot passes the Turing test? Maybe this is what the Ontario Police are toying with?
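A keyword-to-prompt linkage of the kind I’m imagining really is trivial to mock up. Here’s a toy sketch (all keywords and prompts invented for illustration; a real coachbot would obviously need much more than string matching):

```python
import random

# Toy coachbot: match keywords in the coachee's message to canned,
# open-ended coaching prompts, with generic hedges as a fallback.
PROMPTS = {
    "stress": ["What does that stress look like day to day?",
               "When did you first notice the stress building up?"],
    "work": ["What would a good day at work look like for you?"],
    "frustration": ["What do you think is behind that frustration?"],
}
FALLBACKS = ["Tell me more about that.", "And how does that make you feel?"]

def respond(message, rng=None):
    """Return a coaching prompt triggered by the first matching keyword."""
    rng = rng or random.Random()
    words = message.lower().split()
    for keyword, prompts in PROMPTS.items():
        if keyword in words:
            return rng.choice(prompts)
    return rng.choice(FALLBACKS)
```

Whether exchanges generated like this could pass for empathy is, of course, exactly the Turing-test question.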

There are, of course, plenty of coaching apps out there. Things like HabitBull, Coach.me, Symbifly, Mindsail … Mostly for sport, health and getting rich. They’re cheaper than paying for a coach, but the bonding experience is a bit different. Would they work with language learners or teachers? Somehow, I doubt it, but you never know. Do gamification and coaching fit together?

I’m not sure what I was hoping to learn from my exploration of coaching. I’m not sure what questions I was looking for answers to. Perhaps I needed a coach to guide me? But I have learned from Emmanuelle Betham that learning is a seed and I am a gardener. Or something like that.

Barber, D. & Foord, D. (2014) From English Teacher to Learner Coach.

Betham, E. (2018 – 2020) An Introduction to Coaching for Language Learning.

Paling, R. (2017) Neurolanguage Coaching: Brain Friendly Language Learning.

I’d never felt any need for a QR reader on my phone until one day, a few lockdowns ago, I had to scan a code in order to be allowed to sit down outside my nearest bakery, Anker, to eat a sandwich. Although I replaced my phone a week or so ago, it was only this morning that I felt the need to install a new reader. It will come as no surprise to learn that I have never used QR codes in a classroom, and probably never will.

A book that I co-authored a few years ago included QR codes on some pages, and these take you to video recordings of ‘real students’ carrying out tasks from the book. We don’t learn much about these students’ lives, but we can assume that they are learning English in pre-Covid days, when they went into a physical classroom from time to time. But now that the physical classroom is becoming a receding memory, I have to fear for the future of QR codes in language teaching. Who needs a barcode web link when you’re online already?

I’ve seen some fun suggestions for using QR codes in the classroom. Placing QR codes in prominent places around the school, so that scanning them reveals a set of questions or clues in a treasure hunt. Getting learners to prepare their own multimedia material to upload to an interactive map of their school / town / whatever. Other suggestions involve things like sticking QR codes around the walls of the classroom, or walking around with a QR code stuck on your back or your forehead. But they all require physical space and presuppose face-to-face contact. And they all require that phones are allowed, which, in turn, requires a whole lot of administration in some places (e.g. with kids). The activities also tend to be a bit juvenile.

Some suggestions for using QR codes are decidedly less fun, in my view. Notifying students of their homework assignments by sending them a QR code, for example. Or giving the answers to an exercise when they click on the link.

More ideas can be found here and in ETpedia Technology (Hockly, 2017) and no doubt some other places, too.

(Image from https://www.teacherspayteachers.com/Product/Pirate-Joke-QR-Codes-1262320 )

You can evaluate your own affective response to QR codes in education by pointing your phone at the image above. That’s tricky, of course, if you’re reading this on your phone, and not another device. (Someone is selling this for a dollar.)

So why are they used? According to Cruse and Brereton (2018), they ‘can make classroom activities more engaging and allow students to perform previously impossible or impractical tasks’. Those previously impossible or impractical tasks are, of course, no longer impossible or impractical when the whole class is online. And this leaves us with the main claim of QR advocates: use of these codes leads to more learner engagement. How well does the claim hold up?

With a little encouragement, most people would rather scan a code than manually type in a link. But we don’t really have any evidence that English language learners would be more motivated and engaged if they pointed their phones at codes. Perhaps there are some like me who don’t really want to get their phone out. Eye-rollers who find it hard to suppress a groan when someone suggests using Mentimeter. Of course, the way you feel about using your phone for activities like these may also depend on how good your wifi is (or whether you have any wifi).

Cruse and Brereton’s (2018) first ‘Principle of Good Practice’ is that QR use ‘should not be a gimmick’. If you’re not convinced by the engagement argument, what other reasons could there be? To promote learner autonomy and differentiation? To facilitate asynchronous learning? To support constructivist learning by providing multiple representations of reality and enabling ‘context- and content-dependent knowledge construction’ (Alizadeh, 2019)? To develop digital literacies? Evidence is lacking.

QR codes have soared in global reach since the start of the pandemic, especially for payments and advertising. I also came across a novel use: QR code stickers designed for tombstones (‘bringing monuments into the 21st century’). I imagine, with a little more investment, scanning the code could generate a realistic hologram of the deceased. But someone still needs to come up with a convincing way of using them in online language learning.

References

Alizadeh, M. (2019) Augmented/virtual reality promises for ELT practitioners. In P. Clements, A. Krause, & P. Bennett (Eds.), Diversity and inclusion. Tokyo: JALT.

Cruse, D. T. H., & Brereton, P. (2018) Integrating QR codes into ELT materials. In P. Clements, A. Krause, & P. Bennett (Eds.), Language teaching in a global age: Shaping the classroom, shaping the world. Tokyo: JALT.

Hockly, N. (2017) ETpedia Technology. Hove: Pavilion Publishing.

On 21 January, I attended the launch webinar of DEFI (the Digital Education Futures Initiative), an initiative of the University of Cambridge, which seeks to work ‘with partners in industry, policy and practice to explore the field of possibilities that digital technology opens up for education’. The opening keynote speaker was Andreas Schleicher, head of education at the OECD. The OECD’s vision of the future of education is outlined in Schleicher’s book, ‘World Class: How to Build a 21st-Century School System’, freely available from the OECD, but his presentation for DEFI offers a relatively short summary. A recording is available here, and this post will take a closer look at some of the things he had to say.

Schleicher is a statistician and the coordinator of the OECD’s PISA programme. Along with other international organisations, such as the World Economic Forum and the World Bank (see my post here), the OECD promotes the global economization and corporatization of education, ‘based on the [human capital] view that developing work skills is the primary purpose of schooling’ (Spring, 2015: 14). In other words, the primary function of education is seen as meeting the needs of global corporate interests. In the early days of the COVID-19 pandemic, with the impact of school closures becoming very visible, Schleicher expressed concern about the disruption to human capital development, but thought it was ‘a great moment’: ‘the current wave of school closures offers an opportunity for experimentation and for envisioning new models of education’. Every cloud has a silver lining, and the pandemic has been a godsend for private companies selling digital learning (see my post about this here) and for those who want to reimagine education in a more corporate way.

Schleicher’s presentation for DEFI was a good opportunity to look again at the way in which organisations like the OECD are shaping educational discourse (see my post about the EdTech imaginary and ELT).

He begins by suggesting that, as a result of the development of digital technology (Google, YouTube, etc.) literacy is ‘no longer just about extracting knowledge’. PISA reading scores, he points out, have remained more or less static since 2000, despite the fact that we have invested (globally) more than 15% extra per student in this time. Only 9% of all 15-year-old students in the industrialised world can distinguish between fact and opinion.

To begin with, one might argue about the reliability and validity of the PISA reading scores (Berliner, 2020). One might also argue, as did a collection of 80 education experts in a letter to the Guardian, that the scores themselves are responsible for damaging global education, raising further questions about their validity. One might argue that the increased investment was spent in the wrong way (on hardware and software rather than teacher training, for example), because the advice of organisations like the OECD has been uncritically followed. And the statistic about critical reading skills is fairly meaningless unless it is compared to comparable metrics over a long time span: there is no reason to believe that susceptibility to fake news is any more of a problem now than it was, say, one hundred years ago. Nor is there any reason to believe that education can solve the fake-news problem (see my post about fake news and critical thinking here). These are more than just quibbles, but the main point that Schleicher is making is that education needs to change.

Schleicher next presents a graph which is designed to show that the amount of time that students spend studying correlates poorly with the amount they learn. His interest is in the (lack of) productivity of educational activities in some contexts. He goes on to argue that there is greater productivity in educational activities when learners have a growth mindset, implying (but not stating) that mindset interventions in schools would lead to a more productive educational environment.

Schleicher appears to confuse what students learn with the things they have learnt that have been measured by PISA. The two are obviously rather different, since PISA is only interested in a relatively small subset of the possible learning outcomes of schooling. His argument for growth mindset interventions hinges on the assumption that such interventions will lead to gains in reading scores. However, his graph demonstrates a correlation between growth mindset and reading scores, not a causal relationship. A causal relationship has not been clearly and empirically demonstrated (see my post about growth mindsets here) and recent work by Carol Dweck and her associates (e.g. Yeager et al., 2016), as well as other researchers (e.g. McPartlan et al, 2020), indicates that the relationship between gains in learning outcomes and mindset interventions is extremely complex.

Schleicher then turns to digitalisation and briefly discusses the positive and negative affordances of technology. He eulogizes platform companies before showing a slide designed to demonstrate that (in the workplace) there is a strong correlation between ICT use and learning. He concludes: ‘the digital world of learning is a hugely empowering world of learning’.

A brief paraphrase of this very disingenuous part of the presentation would be: technology can be good and bad, but I’ll only focus on the former. The discourse appears balanced, but it is anything but.

During this segment, Schleicher argues that technology is empowering, giving as an example ‘the most successful companies these days, they’re not created by a big industry, they’re created by a big idea’. This is plainly counterfactual. In the case of Alphabet and Facebook, profits did not follow from a ‘big idea’: the ideas changed as the companies evolved.

Schleicher then sketches a picture of an unpredictable future (pandemics, climate change, AI, cyber wars, etc.) as a way of framing the importance of being open (and resilient) to different futures and how we respond to them. He offers two different kinds of response: maintenance of the status quo, or ‘outsourcing’ of education. The pandemic, he suggests, has made more countries aware that the latter is the way forward.

In his discussion of the maintenance of the status quo, Schleicher talks about the maintenance of educational monopolies. By this, he must be referring to state monopolies on education: this is a favoured way for neoliberals to refer to state-sponsored education. But the extent to which, in 2021 in many OECD countries, the state has any kind of monopoly on education is very open to debate. Privatization is advancing fast. Even in 2015, the World Education Forum’s ‘Final Report’ noted that ‘the scale of engagement of nonstate actors at all levels of education is growing and becoming more diversified’. Schleicher goes on to talk about ‘large, bureaucratic school systems’, suggesting that such systems cannot be sufficiently agile, adaptive or responsive. ‘We should ask this question,’ he says, but his own answer to it is totally transparent: ‘changing education can be like moving graveyards’ is the title of the next slide. Education needs to be more like the health sector, he claims, which has been able to develop a COVID vaccine in such a short period of time. We need an education industry that underpins change in the same way as the health industry underpins vaccine development. In case his message isn’t yet clear enough, I’ll spell it out: education needs to be privatized still further.

Schleicher then turns to the ways in which he feels that digital technology can enhance learning. These include the use of AR, VR and AI. Technology, he says, can make learning so much more personalized: ‘the computer can study how you study, and then adapt learning so that it is much more granular, so much more adaptive, so much more responsive to your learning style’. He moves on to the field of assessment, again singing the praises of technology in the ways that it can offer new modes of assessment and ‘increase the reliability of machine rating for essays’. Through technology, we can ‘reunite learning and assessment’. Moving on to learning analytics, he briefly mentions privacy issues, before enthusing at greater length about the benefits of analytics.

Learning styles? Really? The reliability of machine scoring of essays? How reliable exactly? Data privacy as an area worth only a passing mention? The use of sensors to measure learners’ responses to learning experiences? Any pretence of balance appears now to have been shed. This is in-your-face sales talk.

Next up is a graph which purports to show the number of teachers in OECD countries who use technology for learners’ project work. This is followed by another graph showing the number of teachers who have participated in face-to-face and online CPD. The point of this is to argue that online CPD needs to become more common.

I couldn’t understand what point he was trying to make with the first graph. For the second, it is surely the quality of the CPD, rather than the channel, that matters.

Schleicher then turns to two further possible responses of education to unpredictable futures: ‘schools as learning hubs’ and ‘learn-as-you-go’. In the latter, digital infrastructure replaces physical infrastructure. Neither is explored in any detail. The main point appears to be that we should consider these possibilities, weighing up as we do so the risks and the opportunities (see slide below).

Useful ways to frame questions about the future of education, no doubt, but Schleicher is operating with a set of assumptions about the purpose of education, which he chooses not to explore. His fundamental assumption – that the primary purpose of education is to develop human capital in and for the global economy – is not one that I would share. However, if you do take that view, then privatization, economization, digitalization and the training of social-emotional competences are all reasonable corollaries, and the big question about the future concerns how to go about this in a more efficient way.

Schleicher’s (and the OECD’s) views are very much in accord with the libertarian values of the right-wing philanthro-capitalist foundations of the United States (the Gates Foundation, the Broad Foundation and so on), funded by Silicon Valley and hedge-fund managers. It is to the US that we can trace the spread and promotion of these ideas, but it is also, perhaps, to the US that we can now turn in search of hope for an alternative educational future. The privatization / disruption / reform movement in the US has stalled in recent years, as it has become clear that it failed to deliver on its promise of improved learning. The resistance to privatized and digitalized education is chronicled in Diane Ravitch’s latest book, ‘Slaying Goliath’ (2020). School closures during the pandemic may have been ‘a great moment’ for Schleicher, but for most of us, they have underscored the importance of face-to-face free public schooling. Now, with the electoral victory of Joe Biden and the appointment of a new US Secretary of Education (still to be confirmed), we are likely to see, for the first time in decades, an education policy that is firmly committed to public schools. The US is by far the largest contributor to the budget of the OECD – more than twice any other nation. Perhaps a rethink of the OECD’s educational policies will soon be in order?

References

Berliner D.C. (2020) The Implications of Understanding That PISA Is Simply Another Standardized Achievement Test. In Fan G., Popkewitz T. (Eds.) Handbook of Education Policy Studies. Springer, Singapore. https://doi.org/10.1007/978-981-13-8343-4_13

McPartlan, P., Solanki, S., Xu, D. & Sato, B. (2020) Testing Basic Assumptions Reveals When (Not) to Expect Mindset and Belonging Interventions to Succeed. AERA Open, 6 (4): 1 – 16 https://journals.sagepub.com/doi/pdf/10.1177/2332858420966994

Ravitch, D. (2020) Slaying Goliath: The Passionate Resistance to Privatization and the Fight to Save America’s Public Schools. New York: Vintage Books

Schleicher, A. (2018) World Class: How to Build a 21st-Century School System. Paris: OECD Publishing https://www.oecd.org/education/world-class-9789264300002-en.htm

Spring, J. (2015) Globalization of Education 2nd Edition. New York: Routledge

Yeager, D. S., et al. (2016) Using design thinking to improve psychological interventions: The case of the growth mindset during the transition to high school. Journal of Educational Psychology, 108(3), 374–391. https://doi.org/10.1037/edu0000098

All things told, it’s been a pretty good year for thought leaders. The face-to-face gigs have dried up, but there’s no shortage of online demand. Despite being identified, back in 2013, as one of the year’s most “insufferable” business buzzwords and clichés, thought leaders have hung on and are going strong. In fact, their numbers are increasing, or at least references to them are increasing. Ten years ago there was a tussle on Google Trends between ‘thought leader’ and ‘edtech’. The latter long ago zoomed into the stratosphere of search terms, but ‘thought leader’ has been chugging along quite nicely, despite a certain amount of flak that the term has taken. Concern about the precise nature of what is and what is not thought has been raised. There was a merciless parody-deconstruction of a TED talk by a comic pretending to be a thought leader (2.3 million views). Anand Giridharadas (2019) devoted a whole chapter of his best-selling ‘Winners Take All’ to the difference between thought leaders and critics. The former, Giridharadas scoffs, love ‘an easy idea that goes down like gelato, an idea that gives hope while challenging nothing’. Elsewhere, in the New York Times, another writer jokes about thought leaders as a sort of wannabe highflying, good-doing yacht-to-yacht concept peddler. Thought leadership, in the withering words of one new book (Daub, 2020), is what some people in tech think is thinking.

But thought leadership is not rolling over and going away just yet. If you think you may have spotted a thought leader, the probability is that they have something about their thought leadership skills in the first line of their bio. You can double check someone’s aspiration to being a thought leader by their use of phrases like ‘reimagining’, ‘innovation’, ‘inclusivity’ and ‘disruption’.

The last of these is a real shibboleth and has to be used carefully. Everyone knows it is a nonsense of sorts: for every Uber there is a Hutzler 5711 banana slicer (I highly recommend the customer reviews on Amazon!). Still, you can get away with talking about ‘disruption’ if you’re in the right group of people.

We don’t have enough thought leaders in ELT. I’ve checked and there don’t seem to be too many of them out there. Broadly speaking, they can be divided into two types. There are those who are sometimes referred to by others as a ‘thought leader’ and there are those who only get referred to in that way when they’re talking about themselves. A good place to look for them is the British Council, whose remit includes thought leadership: it’s part of their ‘what we do’. But when you investigate more closely, it’s hard to identify who exactly is a ‘thought leader’ and who is just a ‘leading expert’. There’s a certain coyness about naming particular thought leaders. Not long ago, I saw a job advert for OUP which required ‘thought leadership on the exploitation of data science to drive the innovations in Assessment products and services’. I hope they filled the post satisfactorily. And Cambridge English has a Director of ‘Research and Thought Leadership’, but you can’t blame him for the job title.

Pearson offers webinars where you can find out about ‘what’s being discussed amongst our Thought Leaders’, but the presenters don’t come labelled ‘thought leader’, so you don’t know who’s a thought leader and who’s not. It’s all very tricky. TESOL is also quite oblique, promoting TESOL partnerships where you can reach ‘fellow thought leaders’ … who are never further identified.

There’s a clear need for these thought leaders to be made more visible. Who exactly are they? What’s their typical profile? ‘Who pays them’ would also be an interesting question.

Unfortunately, the BETT Show, which is a good place to spot a thought leader in the flesh, has been covid-cancelled. BETT has the laudable-sounding goal of ‘Creating a better future by transforming education’, but the future has been postponed and the transformation will be technological, enabling ‘educators and learners to thrive’. In March 2021, though, you can catch up with thought leaders, new and old, at BETT’s replacement event: Learnit Live. It’s ‘a five-day, global online event featuring global education leaders’ where you can acquire ‘the tools [needed] to thrive in a rapidly changing world’. Yes, the Future of Learning is Now.

The image is worth deconstructing a little. We’ve got measurement / accountability in the bar chart at the top. We’ve got inclusive collaboration in the handshake, insights with the electric bulb and an all-seeing eye, which I don’t think is meant to refer to data privacy issues. I’m not sure what the money icon is meant to represent, either, but perhaps I’m being obtuse. One thing is clear. The future of learning is on a screen banged down on a UK-centred globe. The event also guarantees no Zoom fatigue, and a refund is offered if you find the whole thing tedious. A General Ticket costs £160.00: thought leaders don’t come cheap.

Thought leaders are interlopers in the world of education. They really belong in the discourse of business, as reflected in the webpage of Global Thought Leaders. The adjectives say it all: changing, efficient, financial, forward-thinking, sustainable, technological, transparent. Education, however, sits a little uneasily with some of these attributes, and, for that reason, I, personally, find it hard to use the term without irony.

You can check out the list of the World’s Top 30 Education Gurus for 2020 here and it includes some of the usual suspects: Salman Khan, Sugata Mitra, the late Ken Robinson, John Hattie and Dylan Wiliam. White men, mostly. For more specifically ELT thought leaders, perhaps we should let them stay anonymous. Guruism, as Paula Rebolledo has reminded us, can be detrimental to our professional health. ‘Become your own guru,’ she urges, and I would add, ‘Become your own thought leader’.

You can do this by reading Ayn Rand and ‘Talk like TED’ by Carmine Gallo. You might consider an online course on ‘Becoming a Thought Leader’ (the price includes a shareable certificate) to help you develop a compelling message, build influence, maximize your visibility, and track your impact. Or save money and buy ‘The Thought Leadership Manual: How to Grab Your Clients ….’ (I’ll leave you to complete the title). Find your niche, but focus on tech, that’s my advice.

Happy new year!

Philip

Daub, A. (2020) What Tech Calls Thinking. New York: Farrar, Straus and Giroux.

Giridharadas, A. (2019) Winners Take All. New York: Knopf

Since I wrote my book of language-learning / teaching activities that involve the use of the learners’ own language (Kerr, 2014), one significant change has taken place. Some of these activities focused on machine translation tools, like Google Translate. The main concern at the time was the lack of reliability of these tools, and many teachers were strongly opposed to their students using them. It was easy to find examples of bad translation and to laugh at them. My favourite was an image of a crowd welcoming Pope Francis to Cuba, where a banner saying ‘Welcome Potato’ was supposedly a mistranslation of the Spanish ‘papa’, which can mean both ‘pope’ and ‘potato’. It’s a pity the image was Photoshopped.

Since it seemed impracticable and counter-productive to ban Google Translate altogether, my approach was to exploit the poor quality of many of the translations as a way of training learners to use such tools more critically and more effectively. But, in the intervening years, the accuracy of online translation has improved considerably. One study (Aiken, 2019) found that Google Translate had improved by 34% over an 8-year period, although there were still significant differences in the accuracy of particular language pairings. Improvements will continue, and there are new services like DeepL Translator, which was launched in 2017 and, in my view, generally outperforms Google Translate, although fewer language pairings are available. 100% translation accuracy (if such a thing actually exists) may never be achievable, but for some kinds of texts with some language pairings, we are effectively there.

Training in using online translation is, however, still needed for some language pairings. There are two good ways of starting this.

1 Take a text in the learners’ L1 and machine-translate it into English. Highlight the errors and give it to the learners along with the original and a list of common error types (see below). The learners work together, looking at the highlighted errors and attempting to match them to one of the error types on the list.

2 Take a text in English and machine-translate it into the learners’ L1. The learners work together, first identifying and highlighting the errors they find, then comparing the translation with the original and attempting to identify the reasons for the error having happened.

At the time that I wrote this book, I would have advised against using Google Translate as a dictionary to look up single words, on the grounds that (1) the tool worked better the more context / co-text it had, and (2) there were usually better bilingual dictionaries available. My position has shifted somewhat, primarily because the features that Google Translate now offers have improved. There’s a video by Russell Stannard, called ‘Using Google Translate in Language Teaching – Tips and Ideas’, where Russell basically uses the software as a dictionary tool, and enthuses about the possibilities for pronunciation and listening work, for using the ‘favourites’ feature, and for exporting selected wordlists, via a spreadsheet, so that they can be used with a spaced-repetition memory trainer.

You can find more ideas for using Google Translate as a pronunciation training tool in Minh Trang (2019).

One of the most common uses of machine translation by learners is undoubtedly in the production of written work. One recent piece of research (Tsai, 2019) came to the less than surprising conclusion that learners produced better drafts when using machine translation, and were happy to use it. Whether or not more learning took place when machine translation was used is another matter. O’Neill (2019) came to a similar conclusion, but found that students performed better with prior training. This training consisted of two 20-minute sessions, in which students tested the tool with examples before reviewing its strengths and weaknesses. More ideas for machine translation literacy training can be found in Bowker (2020).

I’d like to suggest a couple of further activities where Google Translate or DeepL can be used in the preparation of activities. In both cases, I’ll illustrate with the short original text from a newspaper (Der Standard) below:

Eine Passage in der neuen Covid-19-Verordnung erregt seit letzter Nacht besondere Aufmerksamkeit: das Alkoholverbot nach der Sperrstunde im Umfeld von Bars. Weil kein Ende definiert ist, sind manche in Sorge: Sind wir auf dem Weg in eine Prohibition? Konkret heißt es in der Novelle, die am Sonntag in Kraft tritt: „Nach der Sperrstunde dürfen im Umkreis von 50 Metern um Betriebsstätten der Gastgewerbe (sic!) keine alkoholischen Getränke konsumiert werden.“ Die Sperrstunde liegt in den meisten Lokalen bei 1.00 Uhr.

For the first activity, the students’ task is to translate this into English. Beforehand, translate the text using DeepL, and scramble the words, giving a copy of this scramble to the students.

1.00 am   50 meters   a   a   after   after   alcohol   alcoholic   amendment   are   are   around   attention   attracting   ban   bars   be   because   been   beverages   closing   come   consumed   Covid 19   curfew   curfew   defined   end   establishments   establishments   force   has   hospitality   in   in   in   into   is   is   last   may   most   new   night   no   no   of   of   on   on   on   one   passage   prohibition   radius   regulation   sic!   since   some   special   specifically   states   Sunday   the   the   the   the   the   the   time   to   vicinity   way   we   which   will   within   worried

The translation becomes a kind of jigsaw.
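If you’d like to prepare the scramble automatically rather than by hand, a few lines of Python will do it. This is just a minimal sketch: the alphabetical (case-insensitive) ordering and the triple-space separator simply mimic the layout of the word list above, and the function name is my own.

```python
def scramble_words(translation: str) -> str:
    """Sort the words of a (machine-)translated text alphabetically,
    ignoring case, so students can reassemble the translation as a jigsaw."""
    words = translation.split()
    return "   ".join(sorted(words, key=str.lower))

print(scramble_words("The thing that bothers me most is how long it will take."))
```

Paste in the DeepL output, and the scrambled version is ready to copy into a worksheet.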

The second activity, only appropriate for more advanced learners, takes a text in the L1. Use two different translation tools to create separate translations, and correct any obvious errors (if there are any). Distribute these, along with the original, to the students. Their task is, first, to identify and highlight any differences between the two versions. After that, they discuss each difference, saying which version they prefer (and why) or whether they have no preference.

Google Translate: One passage in the new Covid-19 regulation has been attracting special attention since last night: the ban on alcohol after the curfew in the vicinity of bars. Because no end is defined, some are concerned: are we on the way to prohibition? Specifically, the amendment, which comes into force on Sunday, says: “After the curfew, alcoholic beverages may not be consumed within 50 meters of the hospitality industry (sic!).” The curfew is at 1.00 a.m. in most restaurants.

Deepl: One passage in the new Covid 19 regulation has been attracting special attention since last night: the ban on alcohol after curfew in the vicinity of bars. Because no end is defined, some are worried: Are we on the way to a prohibition? Specifically, the amendment, which will come into force on Sunday, states: “After curfew, no alcoholic beverages may be consumed within a radius of 50 meters around hospitality establishments (sic!). The closing time is 1.00 am in most establishments.
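Spotting every difference between two machine translations by eye is fiddly for the teacher preparing the task. A word-level diff does it mechanically. Here is a minimal sketch using Python’s standard difflib; the two short strings stand in for the closing sentences of the Google Translate and DeepL versions above, and the function name is my own.

```python
import difflib

def translation_differences(version_a: str, version_b: str):
    """Return a list of (words_in_a, words_in_b) pairs where two
    translations of the same text diverge."""
    a_words, b_words = version_a.split(), version_b.split()
    matcher = difflib.SequenceMatcher(None, a_words, b_words)
    diffs = []
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        if tag != "equal":  # 'replace', 'delete' or 'insert'
            diffs.append((" ".join(a_words[i1:i2]), " ".join(b_words[j1:j2])))
    return diffs

for a, b in translation_differences(
        "The curfew is at 1.00 a.m. in most restaurants.",
        "The closing time is 1.00 am in most establishments."):
    print(repr(a), "->", repr(b))
```

The output gives you a ready-made checklist of the differences the students should find, so you can see at a glance whether the two versions diverge enough to make the discussion worthwhile.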

One further activity that I would like to suggest makes use of the way that Google Translate translates each word as it goes, but amends previously translated words in the light of what follows. This is only suitable when Google Translate is accurate! The cleft example below (The thing that bothers me most is how long it will take) neatly illustrates the process. The following is a game-like exploitation. Project (or screen-share) Google Translate, set to translate from English into the learners’ own language. Tell the students that you are going to do a translation together. Tell them that the first word will be ‘the’, and ask them to predict how Google will translate it. Then, type in the word and everyone can see how Google translates it. Tell the students the next word (‘thing’) and again ask for their suggestions before typing it in. Carry on in the same way.

The

Das

The thing

Die Sache

The thing that

Die Sache, die

The thing that bothers

Das, was stört

The thing that bothers me

Das, was mich stört

The thing that bothers me most

Das, was mich am meisten stört

The thing that bothers me most is

Das, was mich am meisten stört, ist

The thing that bothers me most is how

Was mich am meisten stört, ist wie

The thing that bothers me most is how long

Was mich am meisten stört, ist wie lange

The thing that bothers me most is how long it

Das, was mich am meisten stört, ist, wie lange es dauert

The thing that bothers me most is how long it will

Was mich am meisten stört, ist, wie lange es dauern wird

The thing that bothers me most is how long it will take.

Was mich am meisten stört, ist, wie lange es dauern wird.
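If you want to rehearse the sequence before the lesson (for example, to check that Google Translate handles your chosen sentence accurately at every step), the successive word-by-word prompts are easy to generate. A minimal Python sketch, with a function name of my own; the translations themselves still have to come from the tool:

```python
def prefixes(sentence: str):
    """Yield the successive word-by-word prefixes of a sentence,
    i.e. the strings to type into Google Translate one step at a time."""
    words = sentence.split()
    for i in range(1, len(words) + 1):
        yield " ".join(words[:i])

for step in prefixes("The thing that bothers me most is how long it will take."):
    print(step)
```

Any sentence will work, but, as noted above, clefts and other structures where late words force a reanalysis of earlier ones make the game most interesting.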

References

Aiken, M. (2019). An Updated Evaluation of Google Translate Accuracy. Studies in Linguistics and Literature, 3 (3) http://dx.doi.org/10.22158/sll.v3n3p253

Bowker, L. (2020) Machine translation literacy instruction for international business students and business English instructors. Journal of Business & Finance Librarianship 25 (1):1-19 https://www.researchgate.net/publication/343410145_Machine_translation_literacy_instruction_for_international_business_students_and_business_English_instructors

Kerr, P. (2014) Translation and Own-Language Activities. Cambridge: Cambridge University Press

Minh Trang, N. (2019) Using Google Translate as a Pronunciation Training Tool. LangLit, 5 (4), May 2019 https://www.researchgate.net/publication/333808794_USING_GOOGLE_TRANSLATE_AS_A_PRONUNCIATION_TRAINING_TOOL

O’Neill, E. M. (2019) Training students to use online translators and dictionaries: The impact on second language writing scores. International Journal of Research Studies in Language Learning, 8(2), 47-65

Tsai, S. (2019) Using google translate in EFL drafts: a preliminary investigation. Computer Assisted Language Learning, 32 (5-6): pp. 510–526. https://doi.org/10.1080/09588221.2018.1527361