Posts Tagged ‘translation’

The paragraph above was written by an AI-powered text generator called neuroflash https://app.neuro-flash.com/home which I told to produce a text on the topic ‘AI and education’. As texts on this topic go, it is both remarkable (in that it was not written by a human) and entirely unremarkable (in that it is practically indistinguishable from hundreds of human-written texts on the same subject). Neuroflash uses a neural network technology called GPT-3 – ‘a large language model’ – and ‘one of the most interesting and important AI systems ever produced’ (Chalmers, 2020). Basically, it generates text by predicting likely sequences of words, based on patterns learned from huge quantities of text. The nature of the paragraph above tells you all you need to know about the kinds of content that are usually found in texts about AI and education.
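To make ‘predicting sequences of words’ a little more concrete, here is a toy sketch in Python. It is emphatically not how GPT-3 works (GPT-3 is a neural network with billions of parameters, trained on a huge slice of the web), but it illustrates the same basic principle of next-word prediction, here with nothing more than bigram counts over a three-sentence corpus:

```python
import random
from collections import defaultdict

# A tiny 'corpus'; a large language model is trained on billions of words.
corpus = ("ai will transform education . "
          "ai will personalise learning . "
          "ai will transform assessment .").split()

# Record which words have been observed to follow which.
follows = defaultdict(list)
for w1, w2 in zip(corpus, corpus[1:]):
    follows[w1].append(w2)

def generate(seed, length=8):
    words = [seed]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))  # sample an observed continuation
    return " ".join(words)

print(generate("ai"))  # e.g. 'ai will transform education . ai will personalise'
```

The output is a fluent-sounding recombination of what the model has already seen, which is one (contested) way of thinking about what the much larger models do, too.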

Not dissimilar from the neuroflash paragraph, educational commentary on uses of AI is characterised by (1) descriptions of AI tools already in use (e.g. speech recognition and machine translation) and (2) vague predictions which invariably refer to ‘the promise of personalised learning, adjusting what we give learners according to what they need to learn and keeping them motivated by giving them content that is of interest to them’ (Hughes, 2022). The question of what precisely will be personalised is unanswered: providing learners with optimal sets of resources (but which ones?), providing counselling services, recommendations or feedback for learners and teachers (but of what kind?) (Luckin, 2022). Nearly four years ago, I wrote https://adaptivelearninginelt.wordpress.com/2018/08/13/ai-and-language-teaching/ about the reasons why these questions remain unanswered. The short answer is that AI in language learning requires a ‘domain knowledge model’. This specifies what is to be learnt and includes an analysis of the steps that must be taken to reach that learning goal. This is lacking in SLA, or, at least, there is no general agreement on what it is. Worse, the models that are most commonly adopted in AI-driven programs (e.g. the deliberate learning of discrete items of grammar and vocabulary) are not supported by either current theory or research (see, for example, VanPatten & Smith, 2022).

In 2021, the IATEFL Learning Technologies SIG organised an event dedicated to AI in education. Unsurprisingly, there was a fair amount of input on AI in assessment, but my interest is in how AI might revolutionize how we learn and teach, not how we assess. What concrete examples did speakers provide?

Rose Luckin, the best-known British expert on AI in education, kicked things off by mentioning three tools. One of these, Carnegie Learning, is a digital language course that looks very much like any of the ELT courses on offer from the big publishers – a fully blendable, multimedia (e.g. flashcards and videos) synthetic syllabus. This ‘blended learning solution’ is personalizable, since ‘no two students learn alike’, and, it claims, will develop a ‘lifelong love of language’. It appears to be premised on the ideas that language learning is a matter of optimising the delivery of ‘content’, that this content consists primarily of discrete items, and that input can be equated with uptake. Been there, done that.

A second was Alelo Enskill https://www.alelo.com/about-us/, a chatbot / avatar roleplay program, first developed by the US military to teach Iraqi Arabic and aspects of Iraqi culture to Marines. I looked at the limitations of chatbot technology for language learning here https://adaptivelearninginelt.wordpress.com/2016/12/01/chatbots/ . The third tool mentioned by Luckin was Duolingo. Enough said.

Another speaker at this event was the founder and CEO of Edugo.AI https://www.edugo.ai/ , an AI-powered LMS which uses GPT-3. It allows schools to ‘create and upload on the platform any kind of language material (audio, video, text…). Our AI algorithms process and convert it in gamified exercises, which engage different parts of the brain, and gets students eager to practice’. Does this speaker know anything about gamification (for a quick read, I’d recommend Paul Driver (2012)) or neuroscience, I wonder. What, for that matter, does he know about language learning? Apparently, ‘language is not just about words, language is about sentences’ (Tomasello, 2022). Hmm, this doesn’t inspire confidence.

When you look at current uses of AI in language learning, there is very little (outside of testing, translation and speech ↔ text applications) that could justify enthusiastic claims that AI has any great educational potential. Skepticism seems to me a more reasonable and scientific response: de omnibus dubitandum.

Education is not the only field where AI has been talked up. When Covid hit us, AI was seen as a game-changing technology. It ‘could be deployed to make predictions, enhance efficiencies, and free up staff through automation; it could help rapidly process vast amounts of information and make lifesaving decisions’ (Chakravorti, 2022). The contribution of AI to the development of vaccines has been huge, but its role in diagnosing and triaging patients has been another matter altogether. Hundreds of predictive tools were developed: ‘none of them made a real difference, and some were potentially harmful’ (Heaven, 2021). Expectations were unrealistic and led to the deployment of tools before they were properly trialled. Thirty months down the line, a much more sober understanding of the potential of AI has emerged. Here, then, are the main lessons that have been learnt (I draw particularly on Engler, 2020, and Chakravorti, 2022), lessons that are also relevant to education and language learning.

  • Anticipate what could go wrong before anticipating what might go right. Engler (2020) writes that ‘a poorly kept secret of AI practitioners is that 96% accuracy is suspiciously high for any machine learning problem’. In language learning, it is highly unlikely that personalised recommendations will ever reach anything even approaching this level of reliability. What are the implications for individual learners whose learning is inappropriately personalised?
  • We also know that a significant problem with AI systems is bias (O’Neil, 2016). There is a well-documented history of discriminatory outcomes because of people’s race, gender, social class or disability profile. Bias needs to be addressed proactively, not reactively.
  • Acknowledge from the outset that, for AI to work, huge amounts of data related to prior outcomes will be needed. In the cases of both Covid and language learning, much of this data will be personal. This raises immediate questions of privacy and consent, especially for learners who are children. Don’t minimise this issue.
  • Common by-products of AI use include insecure data storage, digital surveillance and consolidation of market power (Engler, 2020). Since such issues are so pervasive, assume they will be a problem and will need to be considered when evaluating the potential of any piece of AI edtech.
  • Find out what the subject matter experts have to say: ‘AI is only helpful when applied judiciously by subject-matter experts—people with long-standing experience with the problem that they are trying to solve […] Effectively predicting a badly defined problem is worse than doing nothing at all’ (Engler, 2020). It may be that the problem the AI is trying to help solve is the wrong problem and not worth solving at all (e.g. more efficient ‘delivery’ of explicit grammatical knowledge, gamifying gapfills).

To bring this post to an end, I returned to the AI / GPT-3 driven neuroflash program. This time, I asked it to write a longer piece entitled ‘The dangers of artificial intelligence in education: what could go wrong?’. Here’s what it came up with:

There are a few potential dangers of artificial intelligence in education. One is that AI could be used to replace human teachers. This could lead to a decline in the quality of education, as machines are not yet capable of replicating the full range of human emotions and social interactions that are necessary for teaching. Additionally, AI could be used to create biased or “racist” educational materials, as has already happened with some commercial software products. Finally, AI could be used to track and monitor students in ways that invade their privacy, or that could be used to manipulate their behavior.

All of these dangers are real and should be taken seriously. However, it’s important to remember that AI is still a new technology, and its impact on education is not yet fully understood. As such, we should tread cautiously when implementing AI into our educational systems.

When it comes to education, artificial intelligence could be a double-edged sword. While AI has the potential to revolutionize the field of education and help students learn in more efficient ways, there are also dangers that need to be considered.

One of the dangers of AI in education is that it could lead to a more standardized and homogenized form of learning. If all students are relying on AI-based learning tools, they may all end up learning the same things in the same way. This could lead to a loss of creativity and individuality in education.

Another danger is that AI-based learning tools could become too reliant on data and algorithms. This could lead to a situation where students are simply being taught what the machines think they should know, rather than being allowed to explore and discover new things for themselves.

Finally, there is the danger that AI could eventually replace human teachers altogether. While this may seem like a far-fetched scenario, it is not outside the realm of possibility. If AI-based learning tools become sophisticated enough, they may one day be able to do the job of teaching just as well as humans can.

These are just some of the dangers that need to be considered when it comes to artificial intelligence in education. While AI has the potential to revolutionize education, we need to be careful about how we implement it so that we don’t end up doing more harm than good.

References

Chakravorti, B. (2022) Why AI Failed to Live Up to Its Potential During the Pandemic. Harvard Business Review, March 17, 2022. https://hbr.org/2022/03/why-ai-failed-to-live-up-to-its-potential-during-the-pandemic

Chalmers, D. (2020) GPT-3 and General Intelligence. In Weinberg, J. (Ed.) Philosophers On GPT-3 (updated with replies by GPT-3). Daily Nous, July 30, 2020. https://dailynous.com/2020/07/30/philosophers-gpt-3/#chalmers

Driver, P. (2012) The Irony of Gamification. In English Digital Magazine 3, British Council Portugal, pp. 21 – 24 http://digitaldebris.info/digital-debris/2011/12/31/the-irony-of-gamification-written-for-ied-magazine.html

Engler, A. (2020) A guide to healthy skepticism of artificial intelligence and coronavirus. Washington D.C.: Brookings Institution https://www.brookings.edu/research/a-guide-to-healthy-skepticism-of-artificial-intelligence-and-coronavirus/

Heaven, W. D. (2021) Hundreds of AI tools have been built to catch covid. None of them helped. MIT Technology Review, July 30, 2021. https://www.technologyreview.com/2021/07/30/1030329/machine-learning-ai-failed-covid-hospital-diagnosis-pandemic/

Hughes, G. (2022) What lies at the end of the AI rainbow? IATEFL LTSIG Newsletter, April 2022

Luckin, R. (2022) The implications of AI for language learning and teaching. IATEFL LTSIG Newsletter, April 2022

O’Neil, C. (2016) Weapons of Math Destruction. London: Allen Lane

Tomasello, G. (2022) Next Generation of AI-Language Education Software: NLP & Language Modules (GPT3). IATEFL LTSIG Newsletter, April 2022

VanPatten, B. & Smith, M. (2022) Explicit and Implicit Learning in Second Language Acquisition. Cambridge: Cambridge University Press

In the latest issue of ‘Language Teaching’, there’s a ‘state-of-the-art’ article by Frank Boers entitled ‘Glossing and vocabulary learning’. The effect of glosses (‘a brief definition or synonym, either in L1 or L2, which is provided with [a] text’ (Nation, 2013: 238)) on reading comprehension and vocabulary acquisition has been well researched over the years. See Kim et al. (2020) for just one recent meta-analysis.

It’s a subject I have written about before on this blog (see here), when I focussed on Plonsky and Ziegler (2016), a critical evaluation of a number of CALL meta-analyses, including a few that investigated glosses. Plonsky and Ziegler found that glosses can have a positive effect on language learning, that digital glosses may be more valuable than paper-based ones, and that both L1 and L2 glosses can be beneficial (clearly, the quality / accuracy of the gloss is as important as the language it is written in). Different learners have different preferences. Boers’ article covers similar ground, without, I think, adding any new takeaways. It concludes with a predictable call for further research.

Boers has a short section on the ‘future of glossing’ in which he notes that (1) ‘onscreen reading [is] becoming the default mode’, and (2) that ‘materials developers no longer need to create glosses themselves, but can insert hyperlinks to online resources’. This is not the future, but the present. In my last blog post on glossing (August 2017), I discussed Lingro, a digital dictionary tool that you can have running in the background, allowing you to click on any word on any website and bring up L1 or L2 glosses. My reservation about Lingro was that the quality of the glosses left much to be desired, relying as they did on Wiktionary. Things would be rather different if it used decent content – sourced, for example, from Oxford dictionaries, Robert (for French) or Duden (for German).

And this is where the content for the Google Dictionary for Chrome extension comes from. It’s free, and takes only seconds to install. It allows you to double-click on a word to bring up translations or English definitions. One more click will take you to a more extensive dictionary page. It also allows you to select a phrase or longer passage and bring up translations generated by Google Translate. It allows you to keep track of the items you have looked up, and to download these on a spreadsheet, which can then be converted to flashcards (e.g. Quizlet) if you wish. If you use the Safari browser, a similar tool is already installed. It has similar features to the Google extension, but also offers you the possibility of linking to examples of the targeted word in web sources like Wikipedia.
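If you want to automate the spreadsheet-to-flashcards step mentioned above, a minimal sketch follows. The file name and column headers are assumptions (check what your extension actually exports); the output is plain tab-separated term / definition pairs, a format that Quizlet’s import box accepts:

```python
import csv

# Assumed input: a CSV export of your lookup history with 'word' and
# 'definition' columns. Adjust the names to match the real file.
with open("lookup_history.csv", newline="", encoding="utf-8") as src, \
     open("flashcards.txt", "w", encoding="utf-8") as dst:
    for row in csv.DictReader(src):
        # One tab-separated pair per line, ready for pasting into Quizlet.
        dst.write(f"{row['word']}\t{row['definition']}\n")
```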

Boers was thinking of the provision of hyperlinks, but with these browser extensions it is entirely up to the reader of a text to decide how many and which items to look up, what kind of items (single words, phrases or longer passages) they want to look up, how far they want to explore the information available to them, and what they want to do with the information (e.g. store / record it).

It’s extraordinary that a ‘state-of-the-art article’ in an extremely reputable journal should be so out of date. The value of glossing in language learning is in content-focussed reading, and these tools mean that any text on the web can be glossed. I think this means that further research of the kind that Boers calls for would be a waste of time and effort. The availability of free technology does not, of course, solve all our problems. Learners will continue to benefit from guidance, support and motivation in selecting appropriate texts to read. They will likely benefit from training in optimal ways of using these browser extensions. They may need help in finding a balance between content-focussed reading for its own sake and content-focussed reading with a language learning payoff.

References

Boers, F. (2022). Glossing and vocabulary learning. Language Teaching, 55 (1), 1 – 23

Kim, H.S., Lee, J.H. & Lee, H. (2020). The relative effects of L1 and L2 glosses on L2 learning: A meta-analysis. Language Teaching Research, December 2020.

Nation, I.S.P. (2013). Learning Vocabulary in Another Language. Cambridge: Cambridge University Press

Plonsky, L. & Ziegler, N. (2016). The CALL–SLA interface: insights from a second-order synthesis. Language Learning & Technology, 20 (2): 17 – 37

The world of language learning and teaching is full of theoretical constructs and claims, most of which have their moment of glory in the sun before being eclipsed and disappearing from view. In a recent article looking at the theoretical claims of translanguaging enthusiasts, Jim Cummins (2021) suggests that three criteria might be used to evaluate them:

1 Empirical adequacy – to what extent is the claim consistent with all the relevant empirical evidence?

2 Logical coherence – to what extent is the claim internally consistent and non-contradictory?

3 Consequential validity – to what extent is the claim useful in promoting effective pedagogy and policies?

Take English as a Lingua Franca (ELF), for example. In its early days, there was much excitement about developing databases of ELF usage in order to identify those aspects of pronunciation and lexico-grammar that mattered for intercultural intelligibility. The Lingua Franca Core (a list of the pronunciation features that cause intelligibility problems in ELF settings when speakers get them wrong) proved to be the most lasting product of the early empirical research into ELF (Jenkins, 2000). It made intuitive good sense, was potentially empowering for learners and teachers, was clearly a useful tool in combating native-speakerism, and was relatively easy to implement in educational policy and practice.

But problems with the construct of ELF quickly appeared. ELF was a positive reframing of the earlier notion of interlanguage – an idea that had deficit firmly built in, since interlanguage was a point that a language learner had reached somewhere on the way to being like a native-speaker. Interlanguage contained elements of the L1, and this led to interest in how such elements might become fossilized, a metaphor with very negative connotations. With a strong desire to move away from framings of deficit, ELF recognised and celebrated code-switching as an integral element in ELF interactions (Seidlhofer, 2011: 105). Deviations from idealised native-speaker norms of English were no longer to be seen as errors in need of correction, but as legitimate forms of the language (of ELF) itself.

However, it soon became clear that it was not possible to describe ELF in terms of the particular language forms that its users employed. In response, ELF researchers reframed ELF. The focus shifted to how people of different language backgrounds used English to communicate in particular situations – how they languaged, in other words. ELF was no longer a thing, but an action. This helped in terms of internal consistency, but most teachers remained unclear about how the ELF.2 insight should impact on their classroom practices. If we can’t actually say what ELF looks like, what are teachers supposed to do with the idea? And much as we might like to wish away the idea of native speakers (and their norms), these ideas are very hard to expunge completely (MacKenzie, 2014: 170).

Twenty years after ELF became widely used as a term, ELF researchers lament the absence of any sizable changes in classroom practices (Bayyurt & Dewey, 2020). There are practices that meet the ELF seal of approval (see, for example, Kiczkowiak & Lowe, 2018), and these include an increase in exposure to the diversity of English use worldwide, engagement in critical classroom discussion about the globalisation of the English language, and non-penalisation of innovative, but intelligible forms (Galloway, 2018: 471). It is, however, striking that these practices long pre-date the construct of ELF. They are not direct products of ELF.

Part of the ‘problem’, as ELF researchers see it, has been that ELF has been so hard to define. Less generously, we might suggest that the construct of ELF was flawed from the start. Useful, no doubt, as a heuristic, but time to move on. Jennifer Jenkins, one of the best-known names in ELF, has certainly not been afraid to move on. Her article (Jenkins, 2015) refines ELF.2 into ELF.3, which she now labels ‘English as a Multilingual Franca’. In this reframed model, ELF is not so much concerned with the difference between native speakers and non-native speakers as with the difference between monolinguals and multilinguals. Multilingual, rather than ‘English’, is now the superordinate attribute. Since ELF.3 is about interactions, rather than a collection of forms, it follows that ELF may not actually contain any English forms at all. There is a logic here, albeit a somewhat convoluted one, but there is also a problem for ELF as a construct. If ELF is fundamentally about multilingual communication, what need is there for the term ‘ELF’? ‘Translanguaging’ will do perfectly well instead. The graph from Google Trends reveals the rises and falls of these two terms in the academic discourse space. After peaking in 2008, the term ‘English as a Lingua Franca’ now appears to be in irreversible decline.
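For anyone who wants to check or update this comparison, it can be reproduced programmatically. A minimal sketch, assuming the third-party pytrends package (an unofficial Python wrapper for Google Trends; the keyword strings are my guesses at sensible queries, and Trends measures search interest rather than academic usage in any strict sense):

```python
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-US", tz=0)
pytrends.build_payload(
    kw_list=["English as a Lingua Franca", "translanguaging"],
    timeframe="all",  # full history, 2004 to the present
)
interest = pytrends.interest_over_time()  # relative search interest, 0-100
print(interest.resample("Y").mean())      # yearly averages show the rise and fall
```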

So, let’s now turn to ‘translanguaging’. What do Cummins, and others, have to say about the construct? The word has not been around for long. Most people trace it back to the end of the last century (Baker, 2001) and a set of bilingual pedagogical practices in the context of Welsh-English bilingual programmes intended to revitalise the Welsh language. In the early days, translanguaging was no more than a classroom practice that allowed or encouraged the use (by both learners and teachers) of more than one language for the purposes of study. The object of study might be another language, or it might be another part of the curriculum. When I wrote a book about the use of L1 in the learning and teaching of English (Kerr, 2014), I could have called it ‘Translanguaging Activities’, but the editors and I felt that the word ‘translanguaging’ might be seen as obscure jargon. I defined the word at the time as ‘similar to code-switching, the process of mixing elements from two languages’.

But obscure jargon no longer. There is, for example, a nice little collection of activities that involve L1 for the EFL / ESL classroom put together by Jason Anderson http://www.jasonanderson.org.uk/downloads/Jasons_ideas_for_translanguaging_in_the_EFL_ESL_classroom.pdf that he has chosen to call ‘Ideas for translanguaging’. In practical terms, there’s nothing here that you might not have found twenty or more years ago (e.g. in Duff, 1989; or Deller & Rinvolucri, 2002), long before anyone started using the word ‘translanguaging’. Anderson’s motivation for choosing the word ‘translanguaging’ is that he hopes it will promote a change of mindset in which a spirit of (language) inclusivity prevails (Anderson, 2018). Another example: the different ways that L1 may be used in a language classroom have recently been investigated by Rabbidge (2019) in a book entitled ‘Translanguaging in EFL Contexts’. Rabbidge offers a taxonomy of translanguaging moments. These are a little different from previous classifications (e.g. Ellis, 1994; Kim & Elder, 2005), but only a little. The most significant novelty is that these moments are now framed as ‘translanguaging’, rather than as ‘use of L1’. Example #3: the most well-known and widely-sold book that offers practical ideas that are related to translanguaging is ‘The Translanguaging Classroom’ by García and colleagues (2017). English language teachers working in EFL / ESL / ESOL contexts are unlikely to find much, if anything, new here by way of practical ideas. What they will find, however, is a theoretical reframing. It is the theoretical reframing that Anderson and Rabbidge draw their inspiration from.

The construct of translanguaging, then, like English as a Lingua Franca, has brought little that is new in practical terms. Its consequential validity does not really need to be investigated, since the pedagogical reasons for some use of other languages in the learning / teaching of English were already firmly established (but not, perhaps, widely accepted) a long time ago. How about the theory? Does it stand up to closer scrutiny any better than ELF?

Like ELF, ‘translanguaging’ is generally considered not to be a thing, but an action. And, like ELF, it has a definition problem, so precisely what kind of action this might be is open to debate. For some, it isn’t even an action: Tian et al. (2020: 4) refer to it as ‘more like an emerging perspective or lens that could provide new insights to understand and examine language and language (in) education’. Its usage bounces around from user to user, each of whom may appropriate it in different ways. It is in competition with other terms including translingual practice, multilanguaging, and plurilingualism (Li, 2018). It is what has been called a ‘strategically deployable shifter’ (Moore, 2015). It is also unquestionably a word that sets a tone, since ‘translanguaging’ is a key part of the discourse of multilingualism / plurilingualism, which is in clear opposition to the unfavourable images evoked by the term ‘monolingualism’, often presented as a methodological mistake or a kind of subjectivity gone wrong (Gramling, 2016: 4). ‘Translanguaging’ has become a hooray word: criticize it at your peril.

What started as a classroom practice has morphed into a theory (Li, 2018; García, 2009), one that is and is likely to remain unstable. The big questions centre around the difference between ‘strong translanguaging’ (a perspective that insists that ‘named languages’ are socially constructed and have no linguistic or cognitive reality) and ‘weak translanguaging’ (a perspective that acknowledges boundaries between named languages but seeks to soften them). There are discussions, too, about what to call these forms of translanguaging. The ‘strong’ version has been dubbed by Cummins (2021) ‘Unitary Translanguaging Theory’ and by Bonacina-Pugh et al. (2021) ‘Fluid Languaging Approach’. Corresponding terms for the ‘weak’ version are ‘Crosslinguistic Translanguaging Theory’ and ‘Fixed Language Approach’. Subsidiary, related debates centre around code-switching: is it a form of translanguaging or is it a construct better avoided altogether since it assumes separate linguistic systems (Cummins, 2021)?

It’s all very confusing. Cenoz and Gorter (2021) in their short guide to pedagogical translanguaging struggle for clarity, but fail to get there. They ‘completely agree’ with García about the fluid nature of languages as ‘social constructs’ with ‘no clear-cut boundaries’, but still consider named languages as ‘distinct’ and refer to them as such in their booklet. Cutting your way through this thicket of language is a challenge, to put it mildly. It’s also probably a waste of time. As Cummins (2021: 16) notes, the confusion is ‘completely unnecessary’ since ‘there is no difference in the instructional practices that are implied by so-called strong and weak versions of translanguaging’. There are also more important questions to investigate, not least the extent to which the approaches to multilingualism developed by people like García in the United States are appropriate or effective in other contexts with different values (Jaspers, 2018; 2019).

The monolingualism that both ELF and translanguaging stand in opposition to may be a myth, a paradigm or a pathology, but, whatever it is, it is deeply embedded in the ways that our societies are organised, and the ways that we think. It is, writes David Gramling (2016: 3), ‘clearly not yet inclined to be waved off the stage by a university professor, nor even by a ‘multilingual turn’.’ In the end, ELF failed to have much impact. It’s time for translanguaging to have a turn. So, out with the old, in with the new. Or perhaps not really all that new at all.

The king is dead. Long live the king and a happy new year!

References

Anderson, J. (2018) Reimagining English language learners from a translingual perspective. ELT Journal 72 (1): 26 – 37

Baker, C. (2001) Foundations of Bilingual Education and Bilingualism, 3rd edn. Bristol: Multilingual Matters

Bayyurt, Y. & Dewey, M. (2020) Locating ELF in ELT. ELT Journal, 74 (4): 369 – 376

Bonacina-Pugh, F., Da Costa Cabral, I., & Huang, J. (2021) Translanguaging in education. Language Teaching, 54 (4): 439-471

Cenoz, J. & Gorter, D. (2021) Pedagogical Translanguaging. Cambridge: Cambridge University Press

Cummins, J. (2021) Translanguaging: A critical analysis of theoretical claims. In Juvonen, P. & Källkvist, M. (Eds.) Pedagogical Translanguaging: Theoretical, Methodological and Empirical Perspectives. Bristol: Multilingual Matters pp. 7 – 36

Deller, S. & Rinvolucri, M. (2002) Using the Mother Tongue. Peaslake, Surrey: Delta

Duff, A. (1989) Translation. Oxford: OUP

Ellis, R. (1994) Instructed Second Language Acquisition. Oxford: OUP

Galloway, N. (2018) ELF and ELT Teaching Materials. In Jenkins, J., Baker, W. & Dewey, M. (Eds.) The Routledge Handbook of English as a Lingua Franca. Abingdon, Oxon.: Routledge, pp. 468 – 480.

García, O. (2009) Bilingual Education in the 21st Century: A Global Perspective. Malden / Oxford: Wiley / Blackwell

García, O., Ibarra Johnson, S. & Seltzer, K. (2017) The Translanguaging Classroom. Philadelphia: Caslon

Gramling, D. (2016) The Invention of Monolingualism. New York: Bloomsbury

Jaspers, J. (2018) The transformative limits of translanguaging. Language & Communication, 58: 1 – 10

Jaspers, J. (2019) Authority and morality in advocating heteroglossia. Language, Culture and Society, 1 (1): 83 – 105

Jenkins, J. (2000) The Phonology of English as an International Language. Oxford: Oxford University Press

Jenkins, J. (2015) Repositioning English and multilingualism in English as a lingua franca. Englishes in Practice, 2 (3): 49-85

Kerr, P. (2014) Translation and Own-language Activities. Cambridge: Cambridge University Press

Kiczkowiak, M. & Lowe, R. J. (2018) Teaching English as a Lingua Franca. Stuttgart: Delta

Kim, S.-H. & Elder, C. (2005) Language choices and pedagogical functions in the foreign language classroom: A cross-linguistic functional analysis of teacher talk. Language Teaching Research, 9 (4): 355 – 380

Li, W. (2018) Translanguaging as a Practical Theory of Language. Applied Linguistics, 39 (1): 9 – 30

MacKenzie, I. (2014) English as a Lingua Franca. Abingdon, Oxon.: Routledge

Moore, R. (2015) From Revolutionary Monolingualism to Reactionary Multilingualism: Top-Down Discourses of Linguistic Diversity in Europe, 1794 – present. Language and Communication, 44: 19 – 30

Rabbidge, M. (2019) Translanguaging in EFL Contexts. Abingdon, Oxon.: Routledge

Seidlhofer, B. (2011) Understanding English as a Lingua Franca. Oxford: OUP

Tian, Z., Aghai, L., Sayer, P. & Schissel, J. L. (Eds.) (2020) Envisioning TESOL through a translanguaging lens: Global perspectives. Cham, CH: Springer Nature.

NB This is an edited version of the original review.

Words & Monsters is a new vocabulary app that has caught my attention. There are three reasons for this. Firstly, because it’s free. Secondly, because I was led to believe (falsely, as it turns out) that two of the people behind it are Charles Browne and Brent Culligan, eminently respectable linguists, who were also behind the development of the New General Service List (NGSL), based on data from the Cambridge English Corpus. And thirdly, because a lot of thought, effort and investment have clearly gone into the gamification of Words & Monsters (WAM). It’s to the last of these that I’ll turn my attention first.

WAM teaches vocabulary in the context of a battle between a player’s avatar and a variety of monsters. If users can correctly match a set of target items to definitions or translations in the available time, they ‘defeat’ the monster and accumulate points. The more points you have, the higher you advance through a series of levels and ranks. There are bonuses for meeting daily and weekly goals, there are leaderboards, and trophies and medals can be won. In addition to points, players also win ‘crystals’ after successful battles, and these crystals can be used to buy accessories which change the appearance of the avatar and give the player added ‘powers’. I was never able to fully understand precisely how these ‘powers’ affected the number of points I could win in battle. It remained as baffling to me as the whole system of values with Pokémon cards, which is presumably a large part of the inspiration here. Perhaps others, more used to games like Pokémon, would find it all much more transparent.

The system of rewards is all rather complicated, but perhaps this doesn’t matter too much. In fact, it might be the case that working out how reward systems work is part of what motivates people to play games. But there is another aspect to this: the app’s developers refer in their bumf to research by Howard-Jones and Jay (2016), which suggests that when rewards are uncertain, more dopamine is released in the mid-brain and this may lead to reinforcement of learning, and, possibly, enhancement of declarative memory function. Possibly … but Howard-Jones and Jay point out that ‘the science required to inform the manipulation of reward schedules for educational benefit is very incomplete.’ So, WAM’s developers may be jumping the gun a little and overstating the applicability of the neuroscientific research, but they’re not alone in that!

If you don’t understand a reward system, it’s certain that the rewards are uncertain. But WAM takes this further in at least two ways. Firstly, when you win a ‘battle’, you have to click on a plain treasure bag to collect your crystals, and you don’t know whether you’ll get one, two, three, or zero crystals. You are given a semblance of agency, but, essentially, the whole thing is random. Secondly, when you want to convert your crystals into accessories for your avatar, random selection determines which accessory you receive, even though, again, there is a semblance of agency. Different accessories have different power values. This extended use of what the developers call ‘the thrill of uncertain rewards’ is certainly interesting, but how effective it is is another matter. After quite some time spent ‘studying’, my own reaction to getting no crystals, or an accessory I didn’t want, was primarily frustration (more ‘treadmill’ than ‘thrill’), rather than motivation to carry on. I have no idea how typical my reaction might be.
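Mechanically, the ‘thrill of uncertain rewards’ is easy to sketch. The probabilities below are pure invention on my part (WAM does not publish its drop rates), but the logic of the treasure bag and the accessory draw, as I experienced them, amounts to something like this:

```python
import random

def open_treasure_bag():
    # The player clicks a bag, but the outcome is random: agency is a semblance.
    # The weights are hypothetical; WAM's real drop rates are unknown.
    return random.choices([0, 1, 2, 3], weights=[25, 40, 25, 10])[0]

def redeem_crystals(accessories):
    # Which accessory you receive is also randomly selected, whatever you click.
    return random.choice(accessories)

print(open_treasure_bag())                        # 0-3 crystals
print(redeem_crystals(["hat", "sword", "cape"]))  # a random accessory
```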

Unsurprisingly, for an app that has so obviously thought carefully about gamification, players are encouraged to interact with each other. As part of the early promotion, WAM is running, from 15 November to 19 December, a free ‘team challenge tournament’, allowing teams of up to 8 players to compete against each other. Ingeniously, it would appear to allow teams and players of varying levels of English to play together, with the app’s algorithms determining each individual’s level of lexical knowledge and therefore the items that will be presented / tested. Social interaction is known to be an important component of successful games (Dehghanzadeh et al., 2019), but for vocabulary apps there’s a huge challenge. In order to learn vocabulary from an app, learners need to put in time – on a regular basis. Team challenge tournaments may help with initial on-boarding of players, but, in the end, learning from a vocabulary app is inevitably and largely a solitary pursuit. Over time, social interaction is unlikely to be maintained, and it is, in any case, of a very limited nature. The other features of successful games – playful freedom and intrinsically motivating tasks (Driver, 2012) – are also absent from vocabulary apps. Playful freedom is mostly incompatible with points, badges and leaderboards. And flashcard tasks, however intrinsically motivating they may be at the outset, will always become repetitive after a while. In the end, what’s left, for those users who hang around long enough, is the reward system.

It’s also worth noting that this free challenge is of limited duration: it is a marketing device attempting to push you towards the non-free use of the app, once the initial promotion is over.

Gamified motivation tools are only of value, of course, if they motivate learners to spend their time doing things that are of clear learning value. To evaluate the learning potential of WAM, then, we need to look at the content (the ‘learning objects’) and the learning tasks that supposedly lead to acquisition of these items.

When you first use WAM, you need to play for about 20 minutes, at which point algorithms determine ‘how many words [you] know and [you can] see scores for English tests such as; TOEFL, TOEIC, IELTS, EIKEN, Kyotsu Shiken, CEFR, SAT and GRE’. The developers claim that these scores correlate pretty highly with actual test scores: ‘they are about as accurate as the tests themselves’, they say. If Browne and Culligan had been behind the app, I would have been tempted to accept the claim – with reservations: after all, it still allows for one item out of 5 to be wrongly identified. But, what is this CEFR test score that is referred to? There is no CEFR test, although many tests are correlated with CEFR. The two tools that I am most familiar with which allocate CEFR levels to individual words – Cambridge’s English Vocabulary Profile and Pearson’s Global Scale of English – often conflict in their results. I suspect that ‘CEFR’ was just thrown into the list of tests as an attempt to broaden the app’s appeal.
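WAM doesn’t explain how its algorithms arrive at these estimates. A common approach to vocabulary size estimation, sketched hypothetically below, is to test samples of words from successive frequency bands and extrapolate from the proportion known in each band:

```python
def estimate_vocab_size(results_per_band, band_size=1000):
    """results_per_band: (correct, tested) pairs, one per 1,000-word
    frequency band, most frequent band first."""
    return round(sum(correct / tested * band_size
                     for correct, tested in results_per_band))

# Near-perfect on the highest-frequency bands, tailing off lower down:
print(estimate_vocab_size([(10, 10), (9, 10), (8, 10), (5, 10), (2, 10)]))
# -> 3400, i.e. an estimated vocabulary of around 3,400 words
```

A score on a scale like this can then be mapped onto the scales of the various tests, which is presumably where the correlation claims come in.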

English target words are presented and practised with their translation ‘equivalents’ in Japanese. For the moment, Japanese is the only language available, which means the app is of little use to learners who don’t know any Japanese. It’s now well-known that bilingual pairings are more effective in deliberate language learning than using definitions in the same language as the target items. This becomes immediately apparent when, for example, a word like ‘something’ is defined (by WAM) as ‘a thing not known or specified’ and ‘anything’ as ‘a thing of whatever kind’. But although I’m in no position to judge the Japanese translations, there are reasons why I would want to check the spreadsheet before recommending the app. ‘Lady’ is defined as ‘polite word for a woman’; ‘missus’ is defined as ‘wife’; and ‘aye’ is defined as ‘yes’. All of these definitions are, at best, problematic; at worst, they are misleading. Are the Japanese translations more helpful? I wonder … Perhaps these are simply words that do not lend themselves to flashcard treatment?

Because I tested into the app at C1 level, I was not able to evaluate the selection of words at lower levels. A pity. Instead, I was presented with words like ‘ablution’, ‘abrade’, ‘anode’, and ‘auspice’. The app claims to be suitable ‘for both second-language learners and native speakers’. For lower levels of the former, this may be true (but without looking at the lexical spreadsheets, I can’t tell). But at higher levels, however much fun this may be for some people, it seems unlikely that you’ll learn very much of any value. Outside of words in, say, the top 8,000 frequency band, it is practically impossible to differentiate the ‘surrender value’ of words in any meaningful way. Deliberate learning of vocabulary only makes sense with high-frequency words that you have a chance of encountering elsewhere. You’d be better off reading extensively, rather than learning random words from an app. Words which (for reasons I’ll come on to) you probably won’t actually learn anyway.

With very few exceptions, the learning objects in WAM are single words, rather than phrases, even when the item is of little or no value outside its use in a phrase. ‘Betide’ is defined as ‘to happen to; befall’ but this doesn’t tell a learner much that is useful. It’s practically only ever used following ‘woe’ (but what does ‘woe’ mean?!). Learning items can be checked in the ‘study guide’, which will show that ‘betide’ typically follows ‘woe’, but unless you choose to refer to the study guide (and there’s no reason, in a case like this, that you would know that you need to check things out more fully), you’ll be none the wiser. In other words, checking the study guide is unlikely to betide you. ‘Wee’, as another example, is treated as two items: (1) meaning ‘very small’ as in ‘wee baby’, and (2) meaning ‘very early in the morning’ as in ‘in the wee hours’. For the latter, ‘wee’ can only collocate with ‘in the’ and ‘hours’, so it makes little sense to present it as a single word. This is also an example of how, in some cases, different meanings of particular words are treated as separate learning objects, even when the two meanings are very close and, in my view, are hardly worth learning separately. Examples include ‘czar’ and ‘assonance’. Sometimes, cognates are treated as separate learning objects (e.g. ‘adulterate’ and ‘adulteration’ or ‘dolor’ and ‘dolorous’); with other words (e.g. ‘effulgence’), only one grammatical form appears to be given. I could not begin to figure out any rationale behind any of this.

All in all, then, there are reasons to be a little skeptical about some of the content. Up to level B2 – which, in my view, is the highest level at which it makes sense to use vocabulary flashcards – it may be of value, so long as your first language is Japanese. But given the claim that it can help you prepare for the ‘CEFR test’, I have to wonder …

The learning tasks require players to match target items to translations / definitions (in both directions), with the target item sometimes in written form, sometimes spoken. Users do not, as far as I can tell, ever have to produce the target item: they only have to select. The learning relies on spaced repetition, but there is no generative effect (known to enhance memorisation). When I was experimenting, there were a few words that I did not know, but I was usually able to get the correct answer by eliminating the distractors (a choice of one from three gives players a reasonable chance of guessing correctly). WAM does not teach users how to produce words; its focus is on receptive knowledge (of a limited kind). I learn, for example, what a word like ‘aye’ or ‘missus’ kind of means, but I learn nothing about how to use it appropriately. Contrary to the claims in WAM’s bumf (that ‘all senses and dimensions of each word are fully acquired’), reading and listening comprehension speeds may be improved, but appropriate and accurate use of these words in speaking and writing is much less likely to follow. Does WAM really ‘strengthen and expand the foundation levels of cognition that support all higher level thinking’, as is claimed?
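‘Spaced repetition’ here presumably means something like the Leitner system, sketched minimally below (WAM’s actual scheduling algorithm isn’t public): items answered correctly move up a box and reappear at longer intervals, while mistakes send them back to the start.

```python
from datetime import date, timedelta

INTERVALS = [1, 2, 4, 8, 16]  # days until the next review, one per box

class Card:
    def __init__(self, item):
        self.item = item
        self.box = 0
        self.due = date.today()

    def review(self, correct):
        # A correct answer promotes the card (longer interval);
        # a mistake demotes it to box 0 for review the next day.
        self.box = min(self.box + 1, len(INTERVALS) - 1) if correct else 0
        self.due = date.today() + timedelta(days=INTERVALS[self.box])

card = Card("ablution")
card.review(correct=True)   # box 1: due again in 2 days
card.review(correct=False)  # box 0: due again tomorrow
print(card.box, card.due)
```

Note that nothing in a schedule like this requires the learner to produce the word: the scheduling and the task type are separate design decisions.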

Perhaps it’s unfair to mention some of the more dubious claims of WAM’s promotional material, but here is a small selection, anyway: ‘WAM unleashes the full potential of natural motivation’. ‘WAM promotes Flow by carefully managing the ratio of unknown words. Your mind moves freely in the channel below frustration and above boredom’.

WAM is certainly an interesting project, but, like all the vocabulary apps I have ever looked at, there have to be trade-offs between optimal task design and what will fit on a mobile screen, between freedoms and flexibility for the user and the requirements of gamified points systems, between the amount of linguistic information that is desirable and the amount that spaced repetition can deal with, between attempting to make the app suitable for the greatest number of potential users and making it especially appropriate for particular kinds of users. Design considerations are always a mix of the pedagogical and the practical / commercial. And, of course, the financial. And, like most edtech products, the claims for its efficacy need to be treated with a bucket of salt.

References

Dehghanzadeh, H., Fardanesh, H., Hatami, J., Talaee, E. & Noroozi, O. (2019) Using gamification to support learning English as a second language: a systematic review, Computer Assisted Language Learning, DOI: 10.1080/09588221.2019.1648298

Driver, P. (2012) The Irony of Gamification. In English Digital Magazine 3, British Council Portugal, pp. 21 – 24 http://digitaldebris.info/digital-debris/2011/12/31/the-irony-of-gamification-written-for-ied-magazine.html

Howard-Jones, P. & Jay, T. (2016) Reward, learning and games. Current Opinion in Behavioral Sciences, 10: 65 – 72

Five years ago, in 2016, there was an interesting debate in the pages of the journal ‘Psychological Review’. It began with an article by Jeffrey Bowers (2016a), a psychologist at the University of Bristol, who argued that neuroscience (as opposed to psychology) has little, or nothing, to offer us, and is unlikely ever to be able to do so, in terms of improving classroom instruction. He wasn’t the first to question the relevance of neuroscience to education (see, for example, Willingham, 2009), but this was a full-frontal attack. Bowers argued that ‘neuroscience rarely offers insights into instruction above and beyond psychology’ and that neuroscientific evidence that the brain changes in response to instruction is irrelevant. His article was followed by two counter-arguments (Gabrieli, 2016; Howard-Jones, et al., 2016), which took him to task for too narrowly limiting the scope of education to classroom instruction (neglecting, for example, educational policy), for ignoring the predictive power of neuroimaging on neurodevelopmental differences (and, therefore, its potential value in individualising curricula), and for failing to take account of the progress that neuroscience, in collaboration with educators, has already made. Bowers’ main argument, that educational neuroscience had little to tell us about teaching, was not really addressed in the counter-arguments, and Bowers (2016b) came back with a rebuttal of the rebuttals.

The brain responding to seductive details

In some ways, the debate, like so many of the kind, suffered from the different priorities of the participants. For Gabrieli and Howard-Jones et al., Bowers had certainly overstated his case, but they weren’t entirely in disagreement with him. Paul Howard-Jones has been quoted by André Hedlund as saying that ‘all neuroscience can do is confirm what we’ve been doing all along and give us new insights into a couple of new things’. One of Howard-Jones’ co-authors, Usha Goswami, director of the Centre for Neuroscience in Education at the University of Cambridge, has said that ‘there is a gulf between current science and classroom applications’ (Goswami, 2006).

For teachers, though, it is the classroom applications that are of interest. Claims for the relevance of neuroscience to ELT have been made by many. We [in ESL / EFL] need it, writes Curtis Kelly (2017). Insights from neuroscience can, apparently, make textbooks more ‘brain friendly’ (Helgesen & Kelly, 2015). Herbert Puchta’s books are advertised by Cambridge University Press as ‘based on the latest insights into how the brain works fresh from the field of neuroscience’. You can watch a British Council talk by Rachael Roberts, entitled ‘Using your brain: what neuroscience can teach us about learning’. And, in the year following the Bowers debate, Carol Lethaby and Patricia Harries gave a presentation at IATEFL Glasgow (Lethaby & Harries, 2018) entitled ‘Research and teaching: What has neuroscience ever done for us?’ – a title that I have lifted for this blog post. Lethaby and Harries provide a useful short summary of the relevance of neuroscience to ELT, and I will begin my discussion with that. They expand on this in their recent book (Lethaby, Mayne & Harries, 2021), a book I highly recommend.

So what, precisely, does neuroscience have to tell English language teachers? Lethaby and Harries put forward three main arguments. Firstly, neuroscience can help us to bust neuromyths (the examples they give are right / left brain dominance and learning styles). Secondly, it can provide information that informs teaching (the examples given are the importance of prior knowledge and the value of translation). Finally, it can validate existing best practice (the example given is the importance of prior knowledge). Let’s take a closer look.

I have always enjoyed a bit of neuromyth busting and I wrote about ‘Left brains and right brains in English language teaching’ a long time ago. It is certainly true that neuroscience has helped to dispel this myth: it is ‘simplistic at best and utter hogwash at worst’ (Dörnyei, 2009: 49). However, we did not need neuroscience to rubbish the practical teaching applications of this myth, which found their most common expression in Neuro-Linguistic Programming (NLP) and Brain Gym. Neuroscience simply banged the final nail into the coffin of these trends. The same is true for learning styles and the meshing hypothesis. It’s also worth noting that, despite the neuroscientific evidence, such myths are taking a long time to die … a point I will return to at the end of this post.

Lethaby and Harries’s second and third arguments are essentially the same, unless, in their second point they are arguing that neuroscience can provide new information. I struggle, however, to see anything that is new. Neuroimaging apparently shows that the medial prefrontal cortex is activated when prior knowledge is accessed, but we have long known (since Vygotsky, at least!) that effective learning builds on previous knowledge. Similarly, the amygdala (known to be associated with the processing of emotions) may play an important role in learning, but we don’t need to know about the amygdala to understand the role of affect in learning. Lastly, the neuroscientific finding that different languages are not ‘stored’ in separate parts of the brain (Spivey & Hirsch, 2003) is useful to substantiate arguments that translation can have a positive role to play in learning another language, but convincing arguments predate findings such as these by many, many years. This would all seem to back up Howard-Jones’s observation about confirming what we’ve been doing and giving us new insights into a couple of new things. It isn’t the most compelling case for the relevance of neuroscience to ELT.

Chapter 2 of Carol Lethaby’s new book, ‘An Introduction to Evidence-based Teaching in the English Language Classroom’ is devoted to ‘Science and neuroscience’. The next chapter is called ‘Psychology and cognitive science’ and practically all the evidence for language teaching approaches in the rest of the book is drawn from cognitive (rather than neuro-) science. I think the same is true for the work of Kelly, Helgesen, Roberts and Puchta that I mentioned earlier.

It is perhaps the case these days that educationalists prefer to refer to ‘Mind, Brain, and Education Science’ (MBE) – the ‘intersection of neuroscience, education, and psychology’ – rather than educational neuroscience, but, looking at the literature of MBE, there’s a lot more education and psychology than there is neuroscience (although the latter always gets a mention). Probably the most comprehensive and well-known volume of practical ideas deriving from MBE is ‘Making Classrooms Better’ (Tokuhama-Espinosa, 2014). Of the 50 practical applications listed, most are either inspired by the work of John Hattie (2009) or the work of cognitive psychologists. Neuroscience hardly gets a look in.

To wrap up, I’d like to return to the question of neuroscience’s role in busting neuromyths. References to neuroscience, especially when accompanied by fMRI images, have a seductive appeal to many: they confer a sense of ‘scientific’ authority (McCabe & Castel, 2008). Many teachers, it seems, are keen to hear about neuroscience (Pickering & Howard-Jones, 2007). Even when the discourse contains irrelevant neuroscientific information (diagrams of myelination come to mind), it seems that many of us find this satisfying (Weisberg et al., 2015; Weisberg et al., 2008): the so-called ‘seductive allure’ of neuroscience explanations. Such explanations also give an illusion of explanatory depth (Rozenblit & Keil, 2002). You are far more likely to see conference presentations, blog posts and magazine articles extolling the virtues of neuroscientific findings than you are to come across pieces like the one you are reading now. But is it possible that the much-touted idea that neuroscience can bust neuromyths is itself a myth?

Sadly, we have learnt in recent times that scientific explanations have only very limited impact on the beliefs of large swathes of the population (including teachers, of course). Think of climate change and COVID. Why should neuroscience be any different? It probably isn’t. Scurich & Shniderman (2014) found that ‘neuroscience is more likely to be accepted and credited when it confirms prior beliefs’. We are more likely to accept neuroscientific findings because we ‘find them intuitively satisfying, not because they are accurate’ (Weisberg, et al. 2008). Teaching teachers about educational neuroscience may not make much, if any, difference (Tham et al., 2019). I think there is a danger in using educational neuroscience, seductive details and all, to validate what we already do (as opposed to questioning what we do). And for those who don’t already do these things, they’ll probably ignore such findings as there are, anyway.

References

Bowers, J. S. (2016a) The practical and principled problems with educational neuroscience. Psychological Review, 123 (5): 600 – 612

Bowers, J. S. (2016b) Psychology, not educational neuroscience, is the way forward for improving educational outcomes for all children: Reply to Gabrieli (2016) and Howard-Jones et al. (2016). Psychological Review, 123 (5): 628 – 635

Dörnyei, Z. (2009) The Psychology of Second Language Acquisition. Oxford: Oxford University Press

Gabrieli, J. D. (2016) The promise of educational neuroscience: Comment on Bowers (2016). Psychological Review, 123 (5): 613 – 619

Goswami, U. (2006) Neuroscience and education: From research to practice? Nature Reviews Neuroscience, 7: 406 – 413

Hattie, J. (2009) Visible Learning: A synthesis of over 800 meta-analyses relating to achievement. London: Routledge

Helgesen, M. & Kelly, C. (2015) Do-it-yourself: Ways to make your textbook more brain-friendly. SPELT Quarterly, 30 (3): 32 – 37

Howard-Jones, P. A., Varma, S., Ansari, D., Butterworth, B., De Smedt, B., Goswami, U., Laurillard, D. & Thomas, M. S. (2016) The principles and practices of educational neuroscience: Comment on Bowers (2016). Psychological Review, 123 (5): 620 – 627

Kelly, C. (2017) The Brain Studies Boom: Using Neuroscience in ESL/EFL Teacher Training. In Gregersen, T. S. & MacIntyre, P. D. (Eds.) Innovative Practices in Language Teacher Education. Springer, pp. 79 – 99

Lethaby, C. & Harries, P. (2018) Research and teaching: What has neuroscience ever done for us? In Pattison, T. (Ed.) IATEFL Glasgow Conference Selections 2017. Faversham, Kent: IATEFL, pp. 36 – 37

Lethaby, C., Mayne, R. & Harries, P. (2021) An Introduction to Evidence-Based Teaching in the English Language Classroom. Shoreham-by-Sea: Pavilion Publishing

McCabe, D.P. & Castel, A.D. (2008) Seeing is believing: The effect of brain images on judgments of scientific reasoning. Cognition 107: 343–352.

Pickering, S. J. & Howard-Jones, P. (2007) Educators’ views on the role of neuroscience in education: findings from a study of UK and international perspectives. Mind Brain Education 1: 109–113.

Rozenblit, L. & Keil, F. (2002) The misunderstood limits of folk science: an illusion of explanatory depth. Cognitive Science, 26 (5): 521 – 562

Scurich, N. & Shniderman, A. (2014) The selective allure of neuroscientific explanations. PLOS One, 9 (9): e107529. http://dx.doi.org/10.1371/journal.pone.0107529

Spivey, M. V. & Hirsch, J. (2003) Shared and separate systems in bilingual language processing: Converging evidence from eyetracking and brain imaging. Brain and Language, 86: 70 – 82

Tham, R., Walker, Z., Tan, S. H. D., Low, L. T. & Annabel Chan, S. H. (2019) Translating educational neuroscience for teachers. Learning: Research and Practice, 5 (2): 149 – 173

Tokuhama-Espinosa, T. (2014) Making Classrooms Better. New York: Norton

Weisberg, D. S., Taylor, J. C. V. & Hopkins, E. J. (2015) Deconstructing the seductive allure of neuroscience explanations. Judgment and Decision Making, 10 (5): 429 – 441

Weisberg, D. S., Keil, F. C., Goodstein, J., Rawson, E. & Gray, J. R. (2008) The seductive allure of neuroscience explanations. Journal of Cognitive Neuroscience, 20 (3): 470 – 477

Willingham, D. T. (2009). Three problems in the marriage of neuroscience and education. Cortex, 45: 54-55.

Since I wrote my book of language-learning / teaching activities that involve the use of the learners’ own language (Kerr, 2014), one significant change has taken place. Some of these activities focused on machine translation tools, like Google Translate. The main concern at the time was the lack of reliability of these tools, and many teachers were strongly opposed to their students using them. It was easy to find examples of bad translation and to laugh at them. My favourite was an image of a crowd welcoming Pope Francis to Cuba, where a banner saying ‘Welcome Potato’ was supposedly a mistranslation of the Spanish ‘papa’, which can mean both ‘pope’ and ‘potato’. It’s a pity the image was Photoshopped.

Since I felt it was impracticable and counter-productive to ban Google Translate altogether, my approach was to exploit the poor quality of many of the translations as a way of training learners to use these tools more critically and more effectively. But, in the intervening years, the accuracy of online translation has improved considerably. One study (Aiken, 2019) found that Google Translate had improved by 34% over an eight-year period, although there were still significant differences in the accuracy of particular language pairings. Improvements will continue, and there are newer services like DeepL Translator, launched in 2017, which, in my view, generally outperforms Google Translate, although it offers fewer language pairings. 100% translation accuracy (if such a thing actually exists) may never be achievable, but, for some kinds of texts with some language pairings, we are effectively there.

Training in using online translation is, however, still needed for some language pairings. There are two good ways of starting this.

1 Take a text in the learners’ L1 and machine-translate it into English. Highlight the errors and give it to the learners along with the original and a list of common error types (see below). The learners work together, looking at the highlighted errors and attempting to match them to one of the error types on the list.

2 Take a text in English and machine-translate it into the learners’ L1. The learners work together, first identifying and highlighting the errors they find, then comparing the translation with the original and attempting to identify the reasons for the errors. (A short script for preparing the translations for both activities is sketched below.)
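For teachers who want to prepare these materials quickly, the translations themselves can be generated with a few lines of code. Below is a minimal sketch, assuming the third-party Python package deep-translator (which wraps the free Google Translate web service); the language codes and sample sentences are illustrative only, and the error-highlighting still has to be done by hand.

```python
# Sketch: generate machine translations for activities 1 and 2 above.
# Assumes the third-party 'deep-translator' package: pip install deep-translator
from deep_translator import GoogleTranslator

def machine_translate(text, source, target):
    """Return a machine translation of text from source to target language."""
    return GoogleTranslator(source=source, target=target).translate(text)

# Activity 1: L1 text -> English (German is assumed to be the learners' L1 here)
print(machine_translate("Die Sperrstunde liegt in den meisten Lokalen bei 1.00 Uhr.",
                        source="de", target="en"))

# Activity 2: English text -> learners' L1
print(machine_translate("The closing time is 1.00 am in most establishments.",
                        source="en", target="de"))
```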

At the time that I wrote this book, I would have advised against using Google Translate as a dictionary to look up single words, on the grounds that (1) the tool worked better the more context / co-text it had, and (2) there were usually better bilingual dictionaries available. My position has shifted somewhat, primarily because Google Translate’s features have improved. There’s a video by Russell Stannard, called ‘Using Google Translate in Language Teaching – Tips and Ideas’, where Russell basically uses the software as a dictionary tool, and enthuses about the possibilities for pronunciation and listening work, for using the ‘favourites’ feature, and for exporting, via a spreadsheet, selected wordlists so that they can be used with a spaced-repetition memory trainer.

You can find more ideas for using Google Translate as a pronunciation training tool in Minh Trang (2019).

One of the most common uses of machine translation by learners is undoubtedly in the production of written work. One recent piece of research (Tsai, 2019) came to the less than surprising conclusion that learners produced better drafts when using machine translation, and were happy to use it. Whether or not more learning took place when machine translation was used is another matter. O’Neill (2019) came to a similar conclusion, but found that students performed better with prior training. This training consisted of two 20-minute sessions, in which students tested the tool with examples before reviewing its strengths and weaknesses. More ideas for machine translation literacy training can be found in Bowker (2020).

I’d like to suggest a couple of further activities in which Google Translate or DeepL can be used to prepare classroom materials. In both cases, I’ll illustrate with the short original text below, taken from a newspaper (Der Standard):

Eine Passage in der neuen Covid-19-Verordnung erregt seit letzter Nacht besondere Aufmerksamkeit: das Alkoholverbot nach der Sperrstunde im Umfeld von Bars. Weil kein Ende definiert ist, sind manche in Sorge: Sind wir auf dem Weg in eine Prohibition? Konkret heißt es in der Novelle, die am Sonntag in Kraft tritt: „Nach der Sperrstunde dürfen im Umkreis von 50 Metern um Betriebsstätten der Gastgewerbe (sic!) keine alkoholischen Getränke konsumiert werden.“ Die Sperrstunde liegt in den meisten Lokalen bei 1.00 Uhr.

For the first activity, the students’ task is to translate this text into English. Beforehand, translate the text using DeepL and scramble the words (sorting them alphabetically works well), giving a copy of this scramble to the students.

1.00 am   50 meters   a   a   after   after   alcohol   alcoholic   amendment   are   are   around   attention   attracting   ban   bars   be   because   been   beverages   closing   come   consumed   Covid 19   curfew   curfew   defined   end   establishments   establishments   force   has   hospitality   in   in   in   into   is   is   last   may   most   new   night   no   no   of   of   on   on   on   one   passage   prohibition   radius   regulation   sic!   since   some   special   specifically   states   Sunday   the   the   the   the   the   the   time   to   vicinity   way   we   which   will   within   worried

The translation becomes a kind of jigsaw.
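If you’d rather not scramble the words by hand, a few lines of code will do it. The sketch below (plain Python, no external libraries) sorts the words of a translation alphabetically, which is how the list above appears to have been produced; multi-word items such as ‘1.00 am’ or ‘Covid 19’ would still need to be joined together manually afterwards.

```python
# Sketch: turn a machine translation into an alphabetically sorted word scramble.
import re

def scramble(translation):
    """Split a translation into words and return them in alphabetical order."""
    words = re.findall(r"[A-Za-z0-9.!']+", translation)
    return "   ".join(sorted(words, key=str.lower))

translation = ("One passage in the new Covid 19 regulation has been attracting "
               "special attention since last night.")
print(scramble(translation))
```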

The second activity, only appropriate for more advanced learners, takes a text in the L1. Use two different translation tools to create separate translations, and correct any obvious errors (if there are any). Distribute these, along with the original, to the students. Their task is, first, to identify and highlight any differences between the two versions. After that, they discuss each difference, saying which version they prefer (and why), or whether they have no preference.

Google Translate: One passage in the new Covid-19 regulation has been attracting special attention since last night: the ban on alcohol after the curfew in the vicinity of bars. Because no end is defined, some are concerned: are we on the way to prohibition? Specifically, the amendment, which comes into force on Sunday, says: “After the curfew, alcoholic beverages may not be consumed within 50 meters of the hospitality industry (sic!).” The curfew is at 1.00 a.m. in most restaurants.

DeepL: One passage in the new Covid 19 regulation has been attracting special attention since last night: the ban on alcohol after curfew in the vicinity of bars. Because no end is defined, some are worried: Are we on the way to a prohibition? Specifically, the amendment, which will come into force on Sunday, states: “After curfew, no alcoholic beverages may be consumed within a radius of 50 meters around hospitality establishments (sic!).” The closing time is 1.00 am in most establishments.
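Identifying the points at which two machine translations diverge can also be automated when preparing the materials. The sketch below uses Python’s standard difflib module to flag the differences between the two versions above (only short extracts are shown here, for brevity).

```python
# Sketch: flag the points at which two machine translations diverge.
import difflib

google = "Because no end is defined, some are concerned: are we on the way to prohibition?"
deepl = "Because no end is defined, some are worried: Are we on the way to a prohibition?"

a, b = google.split(), deepl.split()
for tag, i1, i2, j1, j2 in difflib.SequenceMatcher(None, a, b).get_opcodes():
    if tag != "equal":  # 'replace', 'insert' or 'delete'
        print(f"{tag}: {' '.join(a[i1:i2])!r} -> {' '.join(b[j1:j2])!r}")
```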

One further activity that I would like to suggest makes use of the way that Google Translate translates each word as it is typed, but amends previously translated words in the light of what follows. This is only suitable when Google Translate is accurate! The cleft example below (‘The thing that bothers me most is how long it will take’) neatly illustrates the process. The following is a game-like exploitation. Project (or screen-share) Google Translate, set to translate from English into the learners’ own language. Tell the students that you are going to do a translation together. Tell them that the first word will be ‘the’, and ask them to predict how Google will translate it. Then, type in the word so that everyone can see how Google translates it. Tell the students the next word (‘thing’) and again ask for their suggestions before typing it in. Carry on in the same way.

The

Das

The thing

Die Sache

The thing that

Die Sache, die

The thing that bothers

Das, was stört

The thing that bothers me

Das, was mich stört

The thing that bothers me most

Das, was mich am meisten stört

The thing that bothers me most is

Das, was mich am meisten stört, ist

The thing that bothers me most is how

Was mich am meisten stört, ist wie

The thing that bothers me most is how long

Was mich am meisten stört, ist wie lange

The thing that bothers me most is how long it

Das, was mich am meisten stört, ist, wie lange es dauert

The thing that bothers me most is how long it will

Was mich am meisten stört, ist, wie lange es dauern wird

The thing that bothers me most is how long it will take.

Was mich am meisten stört, ist, wie lange es dauern wird.
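For teachers who want to rehearse this game before class, the same progressive behaviour can be simulated by sending successively longer prefixes of the sentence to a translation service. Here is a minimal sketch, again assuming the third-party deep-translator package; the web interface of Google Translate may behave slightly differently from what the package returns.

```python
# Sketch: simulate the word-by-word translation game by translating
# successively longer prefixes of the sentence.
from deep_translator import GoogleTranslator

sentence = "The thing that bothers me most is how long it will take."
words = sentence.split()
translator = GoogleTranslator(source="en", target="de")  # use the learners' L1

for n in range(1, len(words) + 1):
    prefix = " ".join(words[:n])
    print(f"{prefix}\n -> {translator.translate(prefix)}\n")
```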

References

Aiken, M. (2019). An Updated Evaluation of Google Translate Accuracy. Studies in Linguistics and Literature, 3 (3) http://dx.doi.org/10.22158/sll.v3n3p253

Bowker, L. (2020) Machine translation literacy instruction for international business students and business English instructors. Journal of Business & Finance Librarianship, 25 (1): 1 – 19 https://www.researchgate.net/publication/343410145_Machine_translation_literacy_instruction_for_international_business_students_and_business_English_instructors

Kerr, P. (2014) Translation and Own-Language Activities. Cambridge: Cambridge University Press

Minh Trang, N. (2019) Using Google Translate as a Pronunciation Training Tool. LangLit, 5 (4), May 2019 https://www.researchgate.net/publication/333808794_USING_GOOGLE_TRANSLATE_AS_A_PRONUNCIATION_TRAINING_TOOL

O’Neill, E. M. (2019) Training students to use online translators and dictionaries: The impact on second language writing scores. International Journal of Research Studies in Language Learning, 8(2), 47-65

Tsai, S. (2019) Using Google Translate in EFL drafts: a preliminary investigation. Computer Assisted Language Learning, 32 (5-6): pp. 510 – 526. https://doi.org/10.1080/09588221.2018.1527361

As both a language learner and a teacher, I have a number of questions about the value of watching subtitled videos for language learning. My interest is in watching extended videos, rather than short clips for classroom use, so I am concerned with incidental, rather than intentional, learning, mostly of vocabulary. My questions include:

  • Is it better to watch a video that is subtitled or unsubtitled?
  • Is it better to watch a video with L1 or L2 subtitles?
  • If a video is watched more than once, what is the best way to start and proceed? In which order (no subtitles, L1 subtitles and L2 subtitles) is it best to watch?

For help, I turned to three recent books about video and language learning: Ben Goldstein and Paul Driver’s Language Learning with Digital Video (CUP, 2015), Kieran Donaghy’s Film in Action (Delta, 2015) and Jamie Keddie’s Bringing Online Video into the Classroom (OUP, 2014). I was surprised to find no advice on these questions, but, as I explored further, I discovered that there may be a good reason for these authors’ silence.

There is now a huge literature out there on subtitles and language learning, and I cannot claim to have read it all. But I think I have read enough to understand that I am not going to find clear-cut answers to my questions.

The learning value of subtitles

It has been known for some time that the use of subtitles during extensive viewing of video in another language can help in the acquisition of that language. The main gains are in vocabulary acquisition and the development of listening skills (Montero Perez et al., 2013). This is true both of L1 subtitles (with an L2 audio track), sometimes called interlingual subtitles (Incalcaterra McLoughlin et al., 2011), and of L2 subtitles (with an L2 audio track), sometimes called intralingual subtitles or captions (Vanderplank, 1988). Somewhat more surprisingly, vocabulary gains may also come from what are called reversed subtitles (L2 subtitles and an L1 audio track) (Burczyńska, 2015). Of course, certain conditions apply for subtitled video to be beneficial, and I’ll come on to these. But there is general research agreement (an exception is Karakaş & Sariçoban, 2012) that more learning is likely to take place from watching a subtitled video in a target language than an unsubtitled one.

Opposition to the use of subtitles as a tool for language learning has mostly come from three angles. The first of these, which concerns L1 subtitles, is an antipathy to any use at all of L1. Although such an attitude remains entrenched in some quarters, there is no evidence to support it (Hall & Cook, 2012; Kerr, 2016). Researchers and, increasingly, teachers have moved on.

The second reservation that is sometimes expressed is that learners may not attend to either the audio track or the subtitles if they do not need to. They may, for example, ignore the subtitles in the case of reversed subtitles, or ignore the L2 audio track when there are L1 subtitles. This can, of course, happen, but, on the whole, it seems that it does not. In an eye-tracking study by Bisson et al. (2012), for example, it was found that most people followed the subtitles, irrespective of what kind they were. Unsurprisingly, they followed the subtitles more closely when the audio track was in a language that was less familiar. When conditions are right (see below), reading subtitles becomes a very efficient and partly automatized cognitive activity, which does not prevent people from processing the audio track at the same time (d’Ydewalle & Pavakanun, 1997).

Related to the second reservation is the concern that the two sources of information (audio and subtitles), combined with other information (images, music and sound effects), may be in competition and lead to cognitive overload, impacting negatively on both comprehension and learning. Recent research suggests that this concern is unfounded (Kruger et al., 2014). L1 subtitles generate less cognitive load than L2 subtitles, but overload is not normally reached, and mental resources remain available for learning (Baranowska, 2020). The absence of subtitles generates more cognitive load.

Conditions for learning

Before looking at the differences between L1 and L2 subtitles, it’s a good idea to look at the conditions under which learning is more likely to take place with subtitles. Some of these are obvious, others less so.

First of all, the video material must be of sufficient intrinsic interest to the learner. Secondly, the subtitles must be of a sufficiently high quality. This is not always the case with automatically generated captions, especially if the speech-to-text software struggles with the audio accent. It is also not always the case with professionally produced L1 subtitles, especially when the ‘translations are non-literal and made at the phrase level, making it hard to find connections between the subtitle text and the words in the video’ (Kovacs, 2013, cited by Zabalbeascoa et al., 2015: 112). As a minimum, standard subtitling guidelines, such as those produced for the British Channel 4, should be followed. These limit, for example, the number of characters per line to about 40 and a maximum of two lines.
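The two formal constraints mentioned above are easy to check mechanically. The following sketch is a toy checker for those two limits only (roughly 40 characters per line, at most two lines per subtitle); real subtitling guidelines also cover timing, line-breaking and much else, so this is an illustration rather than a substitute for them.

```python
# Sketch: check a subtitle block against two simple guideline limits.
MAX_LINES = 2       # maximum lines per subtitle
MAX_CHARS = 40      # approximate maximum characters per line

def check_subtitle(block):
    """Return a list of guideline violations for one subtitle block."""
    problems = []
    lines = block.strip().splitlines()
    if len(lines) > MAX_LINES:
        problems.append(f"too many lines: {len(lines)} (max {MAX_LINES})")
    for line in lines:
        if len(line) > MAX_CHARS:
            problems.append(f"line too long ({len(line)} chars): {line!r}")
    return problems

print(check_subtitle("After the curfew, alcoholic beverages may not\nbe consumed near bars."))
```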

For reasons that I’ll come on to, learners should be able to switch easily between L1 and L2 subtitles. They are also likely to benefit if reliably accurate glosses or hyperlinks are ‘embedded in the subtitles, making it possible for a learner to simply click for additional verbal, auditory or even pictorial glosses’ (Danan, 2015: 49).

At least as important as considerations of the materials or tools, is a consideration of what the learner brings to the activity (Frumuselu, 2019: 104). Vanderplank (2015) describes these different kinds of considerations as the ‘effects of’ subtitles on a learner and the ‘effects with’ subtitles on learner behaviour.

In order to learn from subtitles, you need to be able to read fast enough to process them. Anyone with a slow reading speed in their own language (e.g. some dyslexic readers) is going to struggle. Even with L1 subtitles, Vanderplank (2015: 24) estimates that it is only around the age of 10 that children can do this with confidence. Familiarity with both the subject matter and with subtitle use will also affect the ability to read subtitles fast enough.

With L2 subtitles, the learner’s language proficiency relative to the difficulty (especially the lexical difficulty) of the subtitles will clearly be significant. It is unlikely that L2 subtitles will be of much benefit to beginners (Taylor, 2005). This also suggests that, at lower levels, materials need to be chosen carefully. On the whole, researchers have found that higher proficiency levels correlate with greater learning gains (Pujadas & Muñoz, 2019; Suárez & Gesa, 2019), but one earlier meta-analysis (Montero Perez et al., 2013) did not find proficiency levels to be significant.

Measures of general language proficiency may be too blunt an instrument to help us all of the time. I can learn more from Portuguese than from Arabic subtitles, even though I am a beginner in both languages. The degree of proximity between two languages, especially the script (Winke et al., 2010), is also likely to be significant.

But a wide range of other individual learner differences will also impact on the learning from subtitles. It is known that learners approach subtitles in varied and idiosyncratic ways (Pujolá, 2002), with some using L2 subtitles only as a ‘back-up’ and others relying on them more. Vanderplank (2019) grouped learners into three broad categories: minimal users who were focused throughout on enjoying films as they would in their L1, evolving users who showed marked changes in their viewing behaviour over time, and maximal users who tended to be experienced at using films to enhance their language learning.

Categories like these are only the tip of the iceberg. Sensory preferences, personality types, types of motivation, the impact of subtitles on anxiety levels and metacognitive strategy awareness are all likely to be important. For the last of these, Danan (2015: 47) asks whether learners should be taught ‘techniques to make better use of subtitles and compensate for weaknesses: techniques such as a quick reading of subtitles before listening, confirmation of word recognition or meaning after listening, as well as focus on form for spelling or grammatical accuracy?’

In short, it is, in practice, virtually impossible to determine optimal conditions for learning from subtitles, because we cannot ‘take into account all the psycho-social, cultural and pedagogic parameters’ (Gambier, 2015). With that said, it’s time to take a closer look at the different potential of L1 and L2 subtitles.

L1 vs L2 subtitles

Since all other things are almost never equal, it is not possible to say that one kind of subtitles offers greater potential for learning than another. As regards gains in vocabulary acquisition and listening comprehension, there is no research consensus (Baranowska, 2020: 107). Research does, however, offer us a number of pointers.

Extensive viewing of subtitled video (both L1 and L2) can offer ‘massive quantities of authentic and comprehensible input’ (Vanderplank, 1988: 273). With lower level learners, the input is likely to be more comprehensible with L1 subtitles, and, therefore, more enjoyable and motivating. This makes them often more suitable for what Caimi (2015: 11) calls ‘leisure viewing’. Vocabulary acquisition may be better served with L2 subtitles, because they can help viewers to recognize the words that are being spoken, increase their interaction with the target language, provide further language context, and increase the redundancy of information, thereby enhancing the possibility of this input being stored in long-term memory (Frumuselu et al., 2015). These effects are much more likely with Vanderplank’s (2019) motivated, ‘maximal’ users than with ‘minimal’ users.

There is one further area where L2 subtitles may have the edge over L1. One of the values of extended listening in a target language is the improvement in phonetic retuning (see, for example, Reinisch & Holt, 2013), the ability to adjust the phonetic boundaries in your own language to the boundaries that exist in the target language. Learning how to interpret unusual speech-sounds, learning how to deal with unusual mappings between sounds and words and learning how to deal with the acoustic variations of different speakers of the target language are all important parts of acquiring another language. Research by Mitterer and McQueen (2009) suggests that L2 subtitles help in this process, but L1 subtitles hinder it.

Classroom implications?

The literature on subtitles and language learning echoes with the refrain of ‘more research needed’, but I’m not sure that further research will lead to less ambiguous, practical conclusions. One of my initial questions concerned the optimal order of use of different kinds of subtitles. In most extensive viewing contexts, learners are unlikely to watch something more than twice. If they do (watching a recorded academic lecture, for example), they are likely to be more motivated by a desire to learn from the content than to learn language from the content. L1 subtitles will probably be preferred, and will have the added bonus of facilitating note-taking in the L1. For learners who are more motivated to learn the target language (Vanderplank’s ‘maximal’ users), a sequence of subtitle use, starting with the least cognitively challenging and moving to greater challenge, probably makes sense. Danan (2015: 46) suggests starting with an L1 soundtrack and reversed (L2) subtitles, then moving on to an L2 soundtrack and L2 subtitles, and ending with an L2 soundtrack and no subtitles. I would replace her first stage with an L2 soundtrack and L1 subtitles, but this is based on hunch rather than research.

This sequencing of subtitle use is common practice in language classrooms, but, here, (1) the video clips are usually short, and (2) the aim is often not incidental learning of vocabulary. Typically, the video clip has been selected as a tool for deliberate teaching of language items, so different conditions apply. At least one study has confirmed the value of the common teaching practice of pre-teaching target vocabulary items before viewing (Pujadas & Muñoz, 2019). The drawback is that, by getting learners to focus on particular items, less incidental learning of other language features is likely to take place. Perhaps this doesn’t matter too much. In a short clip of a few minutes, the opportunities for incidental learning are limited, anyway. With short clips and a deliberate learning aim, it seems reasonable to use L2 subtitles for a first viewing, and no subtitles thereafter.

An alternative frequent use of short video clips in classrooms is to use them as a springboard for speaking. In these cases, Baranowska (2020: 113) suggests that teachers may opt for L1 subtitles first, and follow up with L2 subtitles. Of course, with personal viewing devices or in online classes, teachers may want to exploit the possibilities of differentiating the subtitle condition for different learners.

REFERENCES

Baranowska, K. (2020). Learning most with least effort: subtitles and cognitive load. ELT Journal 74 (2): pp.105 – 115

Bisson, M.-J., Van Heuven, W.J.B., Conklin, K. & Tunney, R.J. (2012). Processing of native and foreign language subtitles in films: An eye tracking study. Applied Psycholinguistics, 35 (2): pp. 399 – 418

Burczyńska, P. (2015). Reversed Subtitles as a Powerful Didactic Tool in SLA. In Gambier, Y., Caimi, A. & Mariotti, C. (Eds.), Subtitles and Language Learning. Principles, strategies and practical experiences. Bern: Peter Lang (pp. 221 – 244)

Caimi, A. (2015). Introduction. In Gambier, Y., Caimi, A. & Mariotti, C. (Eds.), Subtitles and Language Learning. Principles, strategies and practical experiences. Bern: Peter Lang (pp. 9 – 18)

Danan, M. (2015). Subtitling as a Language Learning Tool: Past Findings, Current Applications, and Future Paths. In Gambier, Y., Caimi, A. & Mariotti, C. (Eds.), Subtitles and Language Learning. Principles, strategies and practical experiences. Bern: Peter Lang (pp. 41 – 61)

d’Ydewalle, G. & Pavakanun, U. (1997). Could Enjoying a Movie Lead to Language Acquisition?. In: Winterhoff-Spurk, P., van der Voort, T.H.A. (Eds.) New Horizons in Media Psychology. VS Verlag für Sozialwissenschaften, Wiesbaden. https://doi.org/10.1007/978-3-663-10899-3_10

Frumuselu, A.D., de Maeyer, S., Donche, V. & Gutierrez Colon Plana, M. (2015). Television series inside the EFL classroom: bridging the gap between teaching and learning informal language through subtitles. Linguistics and Education, 32: pp. 107 – 17

Frumuselu, A. D. (2019). ‘A Friend in Need is a Film Indeed’: Teaching Colloquial Expressions with Subtitled Television Series. In Herrero, C. & Vanderschelden, I. (Eds.) Using Film and Media in the Language Classroom. Bristol: Multilingual Matters. pp. 92 – 107

Gambier, Y. (2015). Subtitles and Language Learning (SLL): Theoretical background. In Gambier, Y., Caimi, A. & Mariotti, C. (Eds.), Subtitles and Language Learning. Principles, strategies and practical experiences. Bern: Peter Lang (pp. 63 – 82)

Hall, G. & Cook, G. (2012). Own-language Use in Language Teaching and Learning. Language Teaching, 45 (3): pp. 271 – 308

Incalcaterra McLoughlin, L., Biscio, M. & Ní Mhainnín, M. A. (Eds.) (2011). Audiovisual Translation, Subtitles and Subtitling. Theory and Practice. Bern: Peter Lang

Karakaş, A. & Sariçoban, A. (2012). The impact of watching subtitled animated cartoons on incidental vocabulary learning of ELT students. Teaching English with Technology, 12 (4): pp. 3 – 15

Kerr, P. (2016). Questioning ‘English-only’ Classrooms: Own-language Use in ELT. In Hall, G. (Ed.) The Routledge Handbook of English Language Teaching (pp. 513 – 526)

Kruger, J. L., Hefer, E. & Matthew, G. (2014). Attention distribution and cognitive load in a subtitled academic lecture: L1 vs. L2. Journal of Eye Movement Research, 7: pp. 1 – 15

Mitterer, H. & McQueen, J. M. (2009). Foreign Subtitles Help but Native-Language Subtitles Harm Foreign Speech Perception. PLoS ONE 4 (11): e7785.doi:10.1371/journal.pone.0007785

Montero Perez, M., Van Den Noortgate, W., & Desmet, P. (2013). Captioned video for L2 listening and vocabulary learning: A meta-analysis. System, 41, pp. 720–739 doi:10.1016/j.system.2013.07.013

Pujadas, G. & Muñoz, C. (2019). Extensive viewing of captioned and subtitled TV series: a study of L2 vocabulary learning by adolescents, The Language Learning Journal, 47:4, 479-496, DOI: 10.1080/09571736.2019.1616806

Pujolá, J.- T. (2002). CALLing for help: Researching language learning strategies using help facilities in a web-based multimedia program. ReCALL, 14 (2): pp. 235 – 262

Reinisch, E. & Holt, L. L. (2013). Lexically Guided Phonetic Retuning of Foreign-Accented Speech and Its Generalization. Journal of Experimental Psychology: Human Perception and Performance. Advance online publication. doi: 10.1037/a0034409

Suárez, M. & Gesa, F. (2019) Learning vocabulary with the support of sustained exposure to captioned video: do proficiency and aptitude make a difference? The Language Learning Journal, 47:4, 497-517, DOI: 10.1080/09571736.2019.1617768

Taylor, G. (2005). Perceived processing strategies of students watching captioned video. Foreign Language Annals, 38(3), pp. 422-427

Vanderplank, R. (1988). The value of teletext subtitles in language learning. ELT Journal, 42 (4): pp. 272 – 281

Vanderplank, R. (2015). Thirty Years of Research into Captions / Same Language Subtitles and Second / Foreign Language Learning: Distinguishing between ‘Effects of’ Subtitles and ‘Effects with’ Subtitles for Future Research. In Gambier, Y., Caimi, A. & Mariotti, C. (Eds.), Subtitles and Language Learning. Principles, strategies and practical experiences. Bern: Peter Lang (pp. 19 – 40)

Vanderplank, R. (2019). ‘Gist watching can only take you so far’: attitudes, strategies and changes in behaviour in watching films with captions, The Language Learning Journal, 47:4, 407-423, DOI: 10.1080/09571736.2019.1610033

Winke, P., Gass, S. M., & Sydorenko, T. (2010). The Effects of Captioning Videos Used for Foreign Language Listening Activities. Language Learning & Technology, 14 (1): pp. 66 – 87

Zabalbeascoa, P., González-Casillas, S. & Pascual-Herce, R. (2015). In Gambier, Y., Caimi, A. & Mariotti, C. (Eds.), Subtitles and Language Learning. Principles, strategies and practical experiences Bern: Peter Lang (pp. 105–126)

In my last post, I looked at the use of digital dictionaries. This post is a sort of companion piece to that one.

I noted in that post that teachers are typically less keen on bilingual dictionaries (preferring monolingual versions) than their students. More generally, it seems that teachers are less keen on any kind of dictionary, preferring their students to attempt to work out the meaning of unknown words from context. Coursebooks invariably promote the skill of guessing meaning from context (also known as ‘lexical inferencing’) and some suggest that dictionary work should be banned from the classroom (Haynes & Baker, 1993, cited in Folse, 2004: 112). Teacher educators usually follow suit. Scott Thornbury, for example, has described guessing from context as ‘probably one of the most useful skills learners can acquire and apply both inside and outside the classroom’ (Thornbury, 2002: 148) and offers a series of steps to train learners in this skill before adding ‘when all else fails, consult a dictionary’. Dictionary use, then, is a last resort.

These steps are fairly well known, and a typical example (from Clarke & Nation, 1980, cited in Webb & Nation, 2017: 169) is:

1 Determine the part of speech of the unknown word

2 Analyse the immediate context to try to determine the meaning of the unknown word

3 Analyse the wider context to try to determine the meaning of the unknown word

4 Guess the meaning of the unknown word

5 Check the guess against the information that was found in the first four steps

It has been suggested that training in the use of this skill should be started at low levels, so that learners have a general strategy for dealing with unknown words. As proficiency develops, more specific instruction in the recognition and interpretation of context clues can be provided (Walters, 2006: 188). Training may include a demonstration by the teacher using a marked-up text, perhaps followed by ‘think-aloud’ sessions, where learners say out loud the step-by-step process they are going through when inferring meaning. It may also include a progression from, first, cloze exercises to, second, texts where highlighted words are provided with multiple choice definitions to, finally, texts with no support.

Although research has not established what kind of training is likely to be most effective, or whether specific training is more valuable than the provision of lots of opportunities to practise the skill, it would seem that this kind of work is likely to lead to gains in reading comprehension.

Besides the obvious value of this skill in helping learners to decode the meaning of unknown items in a text, it has been hypothesized that learners are ‘more likely to remember the form and meaning of a word when they have inferred its meaning by themselves than when the meaning has been given to them’ (Hulstijn, 1992). This is because memorisation is likely to be enhanced when mental effort has been exercised. The hypothesis was confirmed by Hulstijn in his 1992 study.

Unfortunately, Hulstijn’s study is not, in itself, sufficient evidence to prove the hypothesis. Other studies have shown the opposite. Keith Folse (2004: 112) cites a study by Knight (1994) which ‘found that subjects who used a bilingual dictionary while reading a passage not only learned more words but also achieved higher reading comprehension scores than subjects who did not have a dictionary and therefore had to rely on guessing from context clues’. More recently, Mokhtar & Rawian (2012) entitled their paper ‘Guessing Word Meaning from Context Has Its Limit: Why?’ They argue that ‘though it is not impossible for ESL learners to derive vocabulary meanings from context, guessing strategy by itself does not foster retention of meanings’.

What, then, are the issues here?

  • First of all, Liu and Nation (1985) have estimated that learners need to know at least 95 per cent of the words in the surrounding context in order to be able to infer meaning from it (a simple way of calculating this kind of lexical coverage is sketched after this list). Whilst this figure may not be totally accurate, it is clear that, because ‘the more words you know, the more you are able to acquire new words’ (Prince, 1996), guessing from context is likely to work better with students at higher levels of proficiency than with those at lower levels.
  • Although exercises in coursebooks which require students to guess meaning from context have usually been written in such a way that it is actually possible to do so, ‘such a nicely packaged contextual environment is rare’ in the real world (Folse, 2004: 115). The skill of guessing from context may not be as useful as was previously assumed.
  • There is clearly a risk that learners will guess wrong and, therefore, learn the wrong meaning. Nassaji (2003: 664) found in one study that learners guessed wrong more than half the time.
  • Lastly, it appears that many learners do not like to employ this strategy, believing that using a dictionary is more useful to them and, possibly as a result of this attitude, fail to devote sufficient mental effort to it (Prince, 1996: 480).
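As promised above, here is a minimal sketch of the lexical-coverage calculation that lies behind Liu and Nation’s 95% figure: the proportion of running words in a text that the learner already knows. The tiny known-word set is purely illustrative.

```python
# Sketch: compute the lexical coverage of a text for a given set of known words.
import re

def coverage(text, known_words):
    """Return the proportion of tokens in text that appear in known_words."""
    tokens = re.findall(r"[a-z']+", text.lower())
    if not tokens:
        return 0.0
    return sum(token in known_words for token in tokens) / len(tokens)

known = {"the", "cat", "sat", "on", "a", "mat", "and", "it", "was"}
print(f"{coverage('The cat sat on the mat and it was comfortable.', known):.0%}")
# Prints 90%: below the 95% threshold, so 'comfortable' is hard to infer.
```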

Perhaps the most forceful critique of the promotion of guessing meaning from context has come from Catherine Walter and Michael Swan (2009), who referred to it as an ‘alleged “skill”’ and considered it, along with skimming and scanning, to be ‘mostly a waste of time’. Scott Thornbury (2006), in a marked departure from his comments (from a number of years earlier) quoted at the start of this post, also questioned the relevance of ‘guessing from context’ activities, arguing that, if students can employ a strategy such as inferring when reading their own language, they can transfer it to another language … so teachers are at risk of teaching their students what they already know.

To summarize, then, we might say that (1) the skill of guessing from context may not be as helpful in the real world as previously imagined, (2) it may not be as useful in acquiring vocabulary items as previously imagined. When a teacher is asked by a student for the meaning of a word in a text, the reflex response of ‘try to work it out from the context’ may also not be as helpful as previously imagined. Translations and / or dictionary advice may well, at times, be more appropriate.

References

Clarke, D.F. & Nation, I.S.P. 1980. ‘Guessing the meanings of words from context: Strategy and techniques.’ System, 8 (3): 211 -220

Folse, K. 2004. Vocabulary Myths. Ann Arbor: University of Michigan Press

Haynes, M. & Baker, I. 1993. ‘American and Chinese readers learning from lexical familiarization in English texts.’ In Huckin, T., Haynes, M. & Coady, J. (Eds.) Second Language Reading and Vocabulary Acquisition. Norwood, NJ.: Ablex. pp. 130 – 152

Hulstijn, J. 1992. ‘Retention of inferred and given word meanings: experiments in incidental vocabulary learning.’ In Arnaud, P. & Bejoint, H. (Eds.) Vocabulary and Applied Linguistics. London: Macmillan Academic and Professional Limited, pp. 113 – 125

Liu, N. & Nation, I. S. P. 1985. ‘Factors affecting guessing vocabulary in context.’ RELC Journal 16 (1): 33–42

Mokhtar, A. A. & Rawian, R. M. 2012. ‘Guessing Word Meaning from Context Has Its Limit: Why?’ International Journal of Linguistics 4 (2): 288 – 305

Nassaji, H. 2003. ‘L2 vocabulary learning from context: Strategies, knowledge sources, and their relationship with success in L2 lexical inferencing.’ TESOL Quarterly, 37(4): 645-670

Prince, P. 1996. ‘Second Language vocabulary Learning: The Role of Context versus Translations as a Function of Proficiency.’ The Modern Language Journal, 80(4): 478-493

Thornbury, S. 2002. How to Teach Vocabulary. Harlow: Pearson Education

Thornbury, S. 2006. ‘The End of Reading?’ One Stop English

Walter, C. & Swan, M. 2009. ‘Teaching reading skills: mostly a waste of time?’ In Beaven B. (Ed.) IATEFL 2008 Exeter Conference Selections. Canterbury: IATEFL, pp. 70-71

Walters, J.M. 2004. ‘Teaching the use of context to infer meaning: A longitudinal survey of L1 and L2 vocabulary research.’ Language Teaching, 37(4), pp. 243-252

Walters, J.D. 2006. ‘Methods of teaching inferring meaning from context.’ RELC Journal, 37(2), pp. 176-190

Webb, S. & Nation, P. 2017. How Vocabulary is Learned. Oxford: Oxford University Press


The most widely-used and popular tool for language learners is the bilingual dictionary (Levy & Steel, 2015), and the first of its kind appeared about 4,000 years ago (2,000 years earlier than the first monolingual dictionaries), offering wordlists in Sumerian and Akkadian (Wheeler, 2013: 9 -11). Technology has come a long way since the clay tablets of the Bronze Age. Good online dictionaries now contain substantially more information (in particular audio recordings) than their print equivalents of a few decades ago. In addition, they are usually quicker and easier to use, more popular, and lead to retention rates that are comparable to, or better than, those achieved with print (Töpel, 2014). The future of dictionaries is likely to be digital, and paper dictionaries may well disappear before very long (Granger, 2012: 2).

English language learners are better served than learners of other languages, and the number of free, online bilingual dictionaries is now enormous. Speakers of less widely-spoken languages may still struggle to find a good quality service, but speakers of, for example, Polish (with approximately 40 million speakers, and a ranking of #33 in the list of the world’s most widely spoken languages) will find over twenty free, online dictionaries to choose from (Lew & Szarowska, 2017). Speakers of languages that are more widely spoken (Chinese, Spanish or Portuguese, for example) will usually find an even greater range. The choice can be bewildering and neither search engine results nor rankings from app stores can be relied on to suggest the product of the highest quality.

Language teachers are not always as enthusiastic about bilingual dictionaries as their learners. Folse (2004: 114 – 120) reports on an informal survey of English teachers which indicated that 11% did not allow any dictionaries in class at all, 37% allowed monolingual dictionaries and only 5% allowed bilingual dictionaries. Other researchers (e.g. Boonmoh & Nesi, 2008), have found a similar situation, with teachers overwhelmingly recommending the use of a monolingual learner’s dictionary: almost all of their students bought one, but the great majority hardly ever used it, preferring instead a digital bilingual version.

Teachers’ preferences for monolingual dictionaries are usually motivated in part by a fear that their students will become too reliant on translation. Whilst this concern remains widespread, much recent research suggests that this fear is misguided (Nation, 2013: 424) and that monolingual dictionaries do not actually lead to greater learning gains than their bilingual counterparts. This is, in part, due to the fact that learners typically use these dictionaries in very limited ways – to see if a word exists, to check spelling or to look up meaning (Harvey & Yuill, 1997). If they made fuller use of the information (about frequency, collocations, syntactic patterns, etc.) on offer, it is likely that learning gains would be greater: ‘it is accessing multiplicity of information that is likely to enhance retention’ (Laufer & Hill, 2000: 77). Without training, however, this is rarely the case. With lower-level learners, a monolingual learner’s dictionary (even one designed for Elementary level students) can be a frustrating experience, because, until they have reached a vocabulary size of around 2,000 – 3,000 words, they will struggle to understand the definitions (Webb & Nation, 2017: 119).

The second reason for teachers’ preference for monolingual dictionaries is that the quality of many bilingual dictionaries is undoubtedly very poor, compared to monolingual learner’s dictionaries such as those produced by Oxford University Press, Cambridge University Press, Longman Pearson, Collins Cobuild, Merriam-Webster and Macmillan, among others. The situation has changed, however, with the rapid growth of bilingualized dictionaries. These contain all the features of a monolingual learner’s dictionary, but also include translations into the learner’s own language. Because of the wealth of information provided by a good bilingualized dictionary, researchers (e.g. Laufer & Hadar, 1997; Chen, 2011) generally consider them preferable to monolingual or normal bilingual dictionaries. They are also popular with learners. Good bilingualized online dictionaries (such as the Oxford Advanced Learner’s English-Chinese Dictionary) are not always free, but many are, and with some language pairings free software can be of a higher quality than services that incur a subscription charge.

If a good bilingualized dictionary is available, there is no longer any compelling reason to use a monolingual learner’s dictionary, unless it contains features which cannot be found elsewhere. In order to compete in a crowded marketplace, many of the established monolingual learner’s dictionaries do precisely that, and most of those mentioned above can be consulted online free of charge.

Students need help in selecting a dictionary that is right for them. Without this, many end up using as a dictionary a tool such as Google Translate, which, for all its value, is of very limited use as a dictionary. They need to understand that the most appropriate dictionary will depend on what they want to use it for (receptive, reading purposes or productive, writing purposes). Teachers can help in this decision-making process by addressing the issue in class (see the activity below).

In addition to the problem of selecting an appropriate dictionary, it appears that many learners have inadequate dictionary skills (Niitemaa & Pietilä, 2018). In one experiment (Tono, 2011), only one third of the vocabulary searches in a dictionary that were carried out by learners were successful. The reasons for failure include focussing on only the first meaning (or translation) of a word that is provided, difficulty in finding the relevant information in long word entries, an inability to find the lemma that is needed, and spelling errors (when words had to be typed in) (Töpel, 2014). As with monolingual dictionaries, learners often check only the meaning of a word in a bilingual dictionary and fail to explore the wider range of information (e.g. collocation, grammatical patterns, example sentences, synonyms) that is available (Laufer & Kimmel, 1997; Laufer & Hill, 2000; Chen, 2010). This information is useful in itself and may also lead to improved retention.

Most learners receive no training in dictionary skills, but would clearly benefit from it. Nation (2013: 333) suggests that at least four or five hours, spread out over a few weeks, would be appropriate. He suggests (ibid.: 419 – 421) that training should encourage learners, first, to look closely at the context in which an unknown word is encountered (in order to identify the part of speech, the lemma that needs to be looked up and its possible meaning, and to decide whether it is worth looking up at all); then to find the relevant entry or sub-entry (with the help of information about common dictionary abbreviations, e.g. for parts of speech, style and register); and, finally, to check this information against the original context.

Two good resource books full of practical activities for dictionary training are available: ‘Dictionary Activities’ by Cindy Leaney (Cambridge: Cambridge University Press, 2007) and ‘Dictionaries’ by Jon Wright (Oxford: Oxford University Press, 1998). Many of the good monolingual dictionaries offer activity guides to promote effective dictionary use and I have suggested a few activities here.

Activity: Understanding a dictionary

Outline: Students explore the use of different symbols in good online dictionaries.

Level: All levels, but not appropriate for very young learners. The activity ‘Choosing a dictionary’ is a good follow-up to this activity.

1 Distribute the worksheet and ask students to follow the instructions.

[Worksheet: act_1]

2 Check the answers.

[Answer key: Act_1_key]

Activity: Choosing a dictionary

Outline: Students explore and evaluate the features of different free, online bilingual dictionaries.

Level: All levels, but not appropriate for very young learners. The text in stage 3 is appropriate for use with levels A2 and B1. For some groups of learners, you may want to adapt (or even translate) the list of features. It may be useful to do the activity ‘Understanding a dictionary’ before this activity.

1 Ask the class which free, online bilingual dictionaries they like to use. Write some of their suggestions on the board.

2 Distribute the list of features. Ask students to work individually and tick the boxes that are important for them. Then ask them to compare their answers with a partner.

[Worksheet: Act_2]

3 Give students a list of free, online bilingual (English and the students’ own language) dictionaries. You can use suggestions from the list below, add the suggestions that your students made in stage 1, or add your own ideas. (For many language pairings, better resources are available than those in the list below.) Give the students the following short text and ask them to use two of these dictionaries to look up the underlined words. Ask them to decide which dictionary they found most useful and / or easiest to use.

[Text: act_2_text]

[Dictionary list: dict_list]

4 Conduct feedback with the whole class.

Activity: Getting more out of a dictionary

Outline: Students use a dictionary to help them to correct a text.

Level: Levels B1 and B2, but not appropriate for very young learners. For higher levels, a more complex text (with less obvious errors) would be appropriate.

1 Distribute the worksheet below and ask students to follow the instructions.

[Worksheet: act_3]

2 Check answers with the whole class. Ask how easy it was to find the information in the dictionary that they were using.

[Answer key]

When you are reading, you probably only need a dictionary when you don’t know the meaning of a word and you want to look it up. For this, a simple bilingual dictionary is good enough. But when you are writing or editing your writing, you will need something that gives you more information about a word: grammatical patterns, collocations (the words that usually go with other words), how formal the word is, and so on. For this, you will need a better dictionary. Many of the better dictionaries are monolingual (see the box), but there are also some good bilingual ones.

Use one (or more) of the online dictionaries in the box (or a good bilingual dictionary) and make corrections to this text. There are eleven mistakes (they have been underlined) in total.

References

Boonmoh, A. & Nesi, H. 2008. ‘A survey of dictionary use by Thai university staff and students with special reference to pocket electronic dictionaries’ Horizontes de Linguística Aplicada , 6(2), 79 – 90

Chen, Y. 2011. ‘Studies on Bilingualized Dictionaries: The User Perspective’. International Journal of Lexicography, 24 (2): 161–197

Folse, K. 2004. Vocabulary Myths. Ann Arbor: University of Michigan Press

Granger, S. 2012. Electronic Lexicography. Oxford: Oxford University Press

Harvey, K. & Yuill, D. 1997. ‘A study of the use of a monolingual pedagogical dictionary by learners of English engaged in writing’ Applied Linguistics, 51 (1): 253 – 78

Laufer, B. & Hadar, L. 1997. ‘Assessing the effectiveness of monolingual, bilingual and ‘bilingualized’ dictionaries in the comprehension and production of new words’. Modern Language Journal, 81 (2): 189 – 96

Laufer, B. & M. Hill 2000. ‘What lexical information do L2 learners select in a CALL dictionary and how does it affect word retention?’ Language Learning & Technology 3 (2): 58–76

Laufer, B. & Kimmel, M. 1997. ‘Bilingualised dictionaries: How learners really use them’, System, 25 (3): 361 -369

Leaney, C. 2007. Dictionary Activities. Cambridge: Cambridge University Press

Levy, M. and Steel, C. 2015. ‘Language learner perspectives on the functionality and use of electronic language dictionaries’. ReCALL, 27(2): 177–196

Lew, R. & Szarowska, A. 2017. ‘Evaluating online bilingual dictionaries: The case of popular free English-Polish dictionaries’ ReCALL 29(2): 138–159

Nation, I.S.P. 2013. Learning Vocabulary in Another Language 2nd edition. Cambridge: Cambridge University Press

Niitemaa, M.-L. & Pietilä, P. 2018. ‘Vocabulary Skills and Online Dictionaries: A Study on EFL Learners’ Receptive Vocabulary Knowledge and Success in Searching Electronic Sources for Information’, Journal of Language Teaching and Research, 9 (3): 453-462

Tono, Y. 2011. ‘Application of eye-tracking in EFL learners’ dictionary look-up process research’, International Journal of Lexicography 24 (1): 124–153

Töpel, A. 2014. ‘Review of research into the use of electronic dictionaries’ in Müller-Spitzer, C. (Ed.) 2014. Using Online Dictionaries. Berlin: De Gruyter, pp. 13 – 54

Webb, S. & Nation, P. 2017. How Vocabulary is Learned. Oxford: Oxford University Press

Wheeler, G. 2013. Language Teaching through the Ages. New York: Routledge

Wright, J. 1998. Dictionaries. Oxford: Oxford University Press

At a recent ELT conference, a plenary presentation entitled ‘Getting it right with edtech’ (sponsored by a vendor of – increasingly digital – ELT products) began with the speaker suggesting that technology was basically neutral, that what you do with educational technology matters far more than the nature of the technology itself. The idea that technology is a ‘neutral tool’ has a long pedigree and often accompanies exhortations to embrace edtech in one form or another (see for example Fox, 2001). It is an idea that is supported by no less a luminary than Chomsky, who, in a 2012 video entitled ‘The Purpose of Education’ (Chomsky, 2012), said that:

As far as […] technology […] and education is concerned, technology is basically neutral. It’s kind of like a hammer. I mean, […] the hammer doesn’t care whether you use it to build a house or whether a torturer uses it to crush somebody’s skull; a hammer can do either. The same with the modern technology; say, the Internet, and so on.

Although hammers are not usually classic examples of educational technology, they are worthy of a short discussion. Hammers come in all shapes and sizes, and when you choose one, you need to consider its head weight (usually between 16 and 20 ounces), the length of the handle, the shape of the grip, etc. Appropriate specifications for particular hammering tasks have been calculated in great detail. The data on which these specifications are based comes from an analysis of the hand size and upper body strength of the typical user. The typical user is a man, and the typical hammer has been designed for a man. The average male hand length is 177.9 mm; that of the average woman is 10 mm shorter (Wang & Cai, 2017). Women typically have about half the upper body strength of men (Miller et al., 1993). It’s possible, but not easy, to find hammers designed for women (they are referred to as ‘Ladies hammers’ on Amazon). They have a much lighter head weight, a shorter handle length, and many come in pink or floral designs. Hammers, in other words, are far from neutral: they are highly gendered.

Moving closer to educational purposes and ways in which we might ‘get it right with edtech’, it is useful to look at the smartphone. The average size of these devices has risen in recent years, and is now 5.5 inches, with the market for 6 inch screens growing fast. Why is this an issue? Well, as Caroline Criado Perez (2019: 159) notes, ‘while we’re all admittedly impressed by the size of your screen, it’s a slightly different matter when it comes to fitting into half the population’s hands. The average man can fairly comfortably use his device one-handed – but the average woman’s hand is not much bigger than the handset itself’. This is despite the fact that women are more likely to own an iPhone than men.

It is not, of course, just technological artefacts that are gendered. Voice-recognition software is also very biased. One researcher (Tatman, 2017) has found that Google’s speech recognition tool is 13% more accurate for men than it is for women. There are also significant biases for race and social class. The reason lies in the dataset that the tool is trained on: the algorithms may be gender- and socio-culturally-neutral, but the dataset is not. It would not be difficult to redress this bias by training the tool on a different dataset.

The same bias can be found in automatic translation software. Because corpora such as the BNC or COCA have twice as many male pronouns as female ones (as a result of the kinds of text that are selected for the corpora), translation software reflects the bias. With Google Translate, a sentence in a language with a gender-neutral pronoun, such as ‘S/he is a doctor’ is rendered into English as ‘He is a doctor’. Meanwhile, ‘S/he is a nurse’ is translated as ‘She is a nurse’ (Criado Perez, 2019: 166).
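This kind of bias is easy to probe for yourself. The sketch below translates Turkish sentences containing the gender-neutral third-person pronoun ‘o’ into English and prints the gender the system assigns. It assumes the third-party deep-translator package, and, since the underlying models are regularly retrained, the outputs may differ from Criado Perez’s examples.

```python
# Sketch: probe a translation service for gender bias, using Turkish,
# whose third-person pronoun 'o' is gender-neutral.
from deep_translator import GoogleTranslator

translator = GoogleTranslator(source="tr", target="en")
for sentence in ["O bir doktor.", "O bir hemşire."]:  # '(S)he is a doctor / nurse.'
    print(f"{sentence} -> {translator.translate(sentence)}")
```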

Datasets, then, are often very far from neutral. Algorithms are not necessarily any more neutral than the datasets, and Cathy O’Neil’s best-seller ‘Weapons of Math Destruction’ catalogues the many, many ways in which algorithms, posing as neutral mathematical tools, can increase racial, social and gender inequalities.

It would not be hard to provide many more examples, but the selection above is probably enough. Technology, as Langdon Winner (Winner, 1980) observed almost forty years ago, is ‘deeply interwoven in the conditions of modern politics’. Technology cannot be neutral: it has politics.

So far, I have focused primarily on the non-neutrality of technology in terms of gender (and, in passing, race and class). Before returning to broader societal issues, I would like to make a relatively brief mention of another kind of non-neutrality: the pedagogic. Language learning materials necessarily contain content of some kind: texts, topics, the choice of values or role models, language examples, and so on. These cannot be value-free. In the early days of educational computer software, one researcher (Biraimah, 1993) found that it was ‘at least, if not more, biased than the printed page it may one day replace’. My own impression is that this remains true today.

Equally interesting to my mind is the fact that all educational technologies, ranging from the writing slate to the blackboard (see Buzbee, 2014), from the overhead projector to the interactive whiteboard, always privilege a particular kind of teaching (and learning). ‘Technologies are inherently biased because they are built to accomplish certain very specific goals which means that some technologies are good for some tasks while not so good for other tasks’ (Zhao et al., 2004: 25). Digital flashcards, for example, inevitably encourage a focus on rote learning. Contemporary LMSs have impressive multi-functionality (i.e. they often could be used in a very wide variety of ways), but, in practice, most teachers use them in very conservative ways (Laanpere et al., 2004). This may be a result of teacher and institutional preferences, but it is almost certainly due, at least in part, to the way that LMSs are designed. They are usually ‘based on traditional approaches to instruction dating from the nineteenth century: presentation and assessment [and] this can be seen in the selection of features which are most accessible in the interface, and easiest to use’ (Lane, 2009).

The argument that educational technology is neutral because it could be put to many different uses, good or bad, is problematic because the likelihood of one particular use is usually much greater than another. There is, however, another way of looking at technological neutrality, and that is to look at its origins. Elsewhere on this blog, in post after post, I have given examples of the ways in which educational technology has been developed, marketed and sold primarily for commercial purposes. Educational values, if indeed there are any, are often an afterthought. The research literature in this area is rich and growing: Stephen Ball, Larry Cuban, Neil Selwyn, Joel Spring, Audrey Watters, etc.

Rather than revisit old ground here, this is an opportunity to look at a slightly different origin of educational technology: the US military. The close connection of the early history of the internet and the Advanced Research Projects Agency (now DARPA) of the United States Department of Defense is fairly well-known. Much less well-known are the very close connections between the US military and educational technologies, which are catalogued in the recently reissued ‘The Classroom Arsenal’ by Douglas D. Noble.

Following the twin shocks of the Soviet Sputnik 1 (in 1957) and Yuri Gagarin (in 1961), the United States launched a massive programme of investment in the development of high-tech weaponry. This included ‘computer systems design, time-sharing, graphics displays, conversational programming languages, heuristic problem-solving, artificial intelligence, and cognitive science’ (Noble, 1991: 55), all of which are now crucial components in educational technology. But it also quickly became clear that more sophisticated weapons required much better trained operators, hence the US military’s huge (and continuing) interest in training. Early interest focused on teaching machines and programmed instruction (branches of the US military were by far the biggest purchasers of programmed instruction products). It was essential that training was effective and efficient, and this led to a wide interest in the mathematical modelling of learning and instruction.

What was then called computer-based education (CBE) was developed as a response to military needs. The first experiments in computer-based training took place at the Systems Research Laboratory of the Air Force’s RAND Corporation think tank (Noble, 1991: 73). Research and development in this area accelerated in the 1960s and 1970s, and CBE (which has morphed into the platforms of today) ‘assumed particular forms because of the historical, contingent, military contexts for which and within which it was developed’ (Noble, 1991: 83). It is possible to imagine computer-based education having developed in very different directions. Between the 1960s and 1980s, for example, the PLATO (Programmed Logic for Automatic Teaching Operations) project at the University of Illinois focused heavily on computer-mediated social interaction (forums, message boards, email, chat rooms and multi-player games). PLATO was also significantly funded by a variety of US military agencies, but proved to be of much less interest to the generals than the work taking place in other laboratories. As Noble observes, ‘some technologies get developed while others do not, and those that do are shaped by particular interests and by the historical and political circumstances surrounding their development’ (Noble, 1991: 4).

According to Noble, however, the influence of the military reached far beyond the development of particular technologies. Alongside the investment in technologies, the military were the prime movers in a campaign to promote computer literacy in schools.

Computer literacy was an ideological campaign rather than an educational initiative – a campaign designed, at bottom, to render people ‘comfortable’ with the ‘inevitable’ new technologies. Its basic intent was to win the reluctant acquiescence of an entire population in a brave new world sculpted in silicon.

The computer campaign also succeeded in getting people in front of that screen and used to having computers around; it made people ‘computer-friendly’, just as computers were being rendered ‘user-friendly’. It also managed to distract the population, suddenly propelled by the urgency of learning about computers, from learning about other things, such as how computers were being used to erode the quality of their working lives, or why they, supposedly the citizens of a democracy, had no say in technological decisions that were determining the shape of their own futures.

Third, it made possible the successful introduction of millions of computers into schools, factories and offices, even homes, with minimal resistance. The nation’s public schools have by now spent over two billion dollars on over a million and a half computers, and this trend still shows no signs of abating. At this time, schools continue to spend one-fifth as much on computers, software, training and staffing as they do on all books and other instructional materials combined. Yet the impact of this enormous expenditure is a stockpile of often idle machines, typically used for quite unimaginative educational applications. Furthermore, the accumulated results of three decades of research on the effectiveness of computer-based instruction remain ‘inconclusive and often contradictory’. (Noble, 1991: x – xi)

It seems more reasonable to argue, along with (I think) most contemporary researchers, that, far from being neutral in any way, edtech is profoundly value-laden because it has the potential to (i) influence certain values in students; (ii) change educational values in [various] ways; and (iii) change national values (Omotoyinbo & Omotoyinbo, 2016: 173). Most importantly, the growth in the use of educational technology has been accompanied by a change in the way that education itself is viewed: ‘as a tool, a sophisticated supply system of human cognitive resources, in the service of a computerized, technology-driven economy’ (Noble, 1991: 1). These two trends (the spread of the technology, and the instrumental view of education) are inextricably linked.

References

Biraimah, K. 1993. The non-neutrality of educational computer software. Computers and Education 20 / 4: 283 – 290

Buzbee, L. 2014. Blackboard: A Personal History of the Classroom. Minneapolis: Graywolf Press

Chomsky, N. 2012. The Purpose of Education (video). Learning Without Frontiers Conference. https://www.youtube.com/watch?v=DdNAUJWJN08

Criado Perez, C. 2019. Invisible Women. London: Chatto & Windus

Fox, R. 2001. Technological neutrality and practice in higher education. In A. Herrmann and M. M. Kulski (Eds), Expanding Horizons in Teaching and Learning. Proceedings of the 10th Annual Teaching Learning Forum, 7-9 February 2001. Perth: Curtin University of Technology. http://clt.curtin.edu.au/events/conferences/tlf/tlf2001/fox.html

Laanpere, M., Poldoja, H. & Kikkas, K. 2004. The second thoughts about pedagogical neutrality of LMS. Proceedings of IEEE International Conference on Advanced Learning Technologies, 2004. https://ieeexplore.ieee.org/abstract/document/1357664

Lane, L. 2009. Insidious pedagogy: How course management systems impact teaching. First Monday 14 / 10. https://firstmonday.org/ojs/index.php/fm/article/view/2530/2303

Miller, A. E., MacDougall, J. D., Tarnopolsky, M. A. & Sale, D. G. 1993. Gender differences in strength and muscle fiber characteristics. European Journal of Applied Physiology and Occupational Physiology 66 / 3: 254 – 262 https://www.ncbi.nlm.nih.gov/pubmed/8477683

Noble, D. D. 1991. The Classroom Arsenal. Abingdon, Oxon.: Routledge

Omotoyinbo, D. W. & Omotoyinbo, F. R. 2016. Educational Technology and Value Neutrality. Societal Studies, 8 / 2: 163 – 179 https://www3.mruni.eu/ojs/societal-studies/article/view/4652/4276

O’Neil, C. 2016. Weapons of Math Destruction. London: Penguin

Sundström, P. 1998. Interpreting the Notion that Technology is Value Neutral. Medicine, Health Care and Philosophy 1: 42 – 44

Tatman, R. 2017. Gender and Dialect Bias in YouTube’s Automatic Captions. Proceedings of the First Workshop on Ethics in Natural Language Processing: 53 – 59 http://www.ethicsinnlp.org/workshop/pdf/EthNLP06.pdf

Wang, C. & Cai, D. 2017. Hand tool handle design based on hand measurements. MATEC Web of Conferences 119: 01044 https://www.matec-conferences.org/articles/matecconf/pdf/2017/33/matecconf_imeti2017_01044.pdf

Winner, L. 1980. Do Artifacts have Politics? Daedalus 109 / 1: 121 – 136

Zhao, Y., Alvarez-Torres, M. J., Smith, B. & Tan, H. S. 2004. The Non-neutrality of Technology: a Theoretical Analysis and Empirical Study of Computer Mediated Communication Technologies. Journal of Educational Computing Research 30 / 1-2: 23 – 55