Posts Tagged ‘multimodality’

My attention was recently drawn (thanks to Grzegorz Śpiewak) to a recent free publication from OUP. It’s called ‘Multimodality in ELT: Communication skills for today’s generation’ (Donaghy et al., 2023) and it’s what OUP likes to call a ‘position paper’: it offers ‘evidence-based recommendations to support educators and learners in their future success’. Its topic is multimodal (or multimedia) literacy, a term used to describe the importance for learners of being able ‘not just to understand but to create multimedia messages, integrating text with images, sounds and video to suit a variety of communicative purposes and reach a range of target audiences’ (Dudeney et al., 2013: 13).

Grzegorz noted the paper’s ‘positively charged, unhedged language to describe what is arguably a most complex problem area’. As an example, he takes the summary of the first section and circles questionable and / or unsubstantiated claims. It’s just one example from a text that reads more like a ‘manifesto’ than a balanced piece of evidence-reporting. The verb ‘need’ (in the sense of ‘must’, as in ‘teachers / learners / students need to …’) appears no fewer than 57 times. The modal ‘should’ (as in ‘teachers / learners / students should …’) clocks up 27 appearances.

What is it then that we all need to do? Essentially, the argument is that English language teachers need to develop their students’ multimodal literacy by incorporating more multimodal texts and tasks (videos and images) in all their lessons. The main reason for this appears to be that, in today’s digital age, communication is more often multimodal than monomodal (i.e. written or spoken text alone). As an addendum, we are told that multimodal classroom practices are a ‘fundamental part of inclusive teaching’ in classes with ‘learners with learning difficulties and disabilities’. In case you thought it was ironic that such an argument would be put forward in a flat monomodal pdf, OUP also offers the same content through a multimodal ‘course’ with text, video and interactive tasks.

It might all be pretty persuasive, if it weren’t so overstated. Here are a few of the complex problem areas.

What exactly is multimodal literacy?

We are told in the paper that there are five modes of communication: linguistic, visual, aural, gestural and spatial. Multimodal literacy consists, apparently, of the ability

  • to ‘view’ multimodal texts (noticing the different modes, and, for basic literacy, responding to the text on an emotional level, and, for more advanced literacy, responding to it critically)
  • to ‘represent’ ideas and information in a multimodal way (posters, storyboards, memes, etc.)

I find this frustratingly imprecise. First: ‘viewing’. Noticing modes and reacting emotionally to a multimedia artefact do not take anyone very far on the path towards multimodal literacy, even if they are necessary first steps. It is only when we move towards a critical response (understanding the relative significance of different modes and problematizing our initial emotional response) that we can really talk about literacy (see the ‘critical literacy’ of Pegrum et al., 2018). We’re basically talking about critical thinking, a concept as vague and contested as any out there. Responding to a multimedia artefact ‘critically’ can mean more or less anything and everything.

Next: ‘representing’. What is the relative importance of ‘viewing’ and ‘representing’? What kinds of representations (artefacts) are important, and which are not? Presumably, they are not all of equal importance. And, whichever artefact is chosen as the focus, a whole range of technical skills will be needed to produce the artefact in question. So, precisely what kind of representing are we talking about?

Priorities in the ELT classroom

The Oxford authors write that ‘the main focus as English language teachers should obviously be on language’. I take this to mean that the ‘linguistic mode’ of communication should be our priority. This seems reasonable, since it’s hard to imagine any kind of digital literacy without some reading skills preceding it. But, again, the question of relative importance rears its ugly head. The time available for language learning and teaching is always limited. Time that is devoted to the visual, aural, gestural or spatial modes of communication is time that is not devoted to the linguistic mode.

There are, too, presumably, some language teaching contexts (I’m thinking in particular about some adult, professional contexts) where the teaching of multimodal literacy would be completely inappropriate.

Multimodal literacy is a form of digital literacy. Writers about digital literacies like to say things like ‘digital literacies are as important to language learning as […] reading and writing skills’ or it is ‘crucial for language teaching to […] encompass the digital literacies which are increasingly central to learners’ […] lives’ (Pegrum et al., 2022). The question then arises: how important, in relative terms, are the various digital literacies? Where does multimodal literacy stand?

The Oxford authors summarise their view as follows:

There is a need for a greater presence of images, videos, and other multimodal texts in ELT coursebooks and a greater focus on using them as a starting point for analysis, evaluation, debate, and discussion.

My question to them is: greater than what? Typical contemporary courseware is already a whizzbang multimodal jamboree. There seem to me to be more pressing concerns with most courseware than supplementing it with visuals or clickables.

Evidence

The Oxford authors’ main interest is unquestionably in the use of video. They recommend extensive video viewing outside the classroom and digital story-telling activities inside. I’m fine with that, so long as classroom time isn’t wasted on getting to grips with a particular digital tool (e.g. a video editor, which, a year from now, will have been replaced by another video editor).

I’m fine with this because it involves learners doing meaningful things with language, and there is ample evidence to indicate that a good way to acquire language is to do meaningful things with it. However, I am less than convinced by the authors’ claim that such activities will strengthen ‘active and critical viewing, and effective and creative representing’. My scepticism derives firstly from my unease about the vagueness of the terms ‘viewing’ and ‘representing’, but I have bigger reservations.

There is much debate about the extent to which general critical thinking can be taught. General critical viewing has the same problems. I can apply critical viewing skills to some topics, because I have reasonable domain knowledge. In my case, it’s domain knowledge that activates my critical awareness of rhetorical devices, layout, choice of images and pull-out quotes, multimodal add-ons and so on. But without the domain knowledge, my critical viewing skills are likely to remain uncritical.

Perhaps most importantly of all, there is a lack of reliable research about ‘the extent to which language instructors should prioritize multimodality in the classroom’ (Kessler, 2022: 552). There are those, like the authors of this paper, who advocate for a ‘strong version’ of multimodality. Others go for a ‘weak version’ ‘in which non-linguistic modes should only minimally support or supplement linguistic instruction’ (Kessler, 2022: 552). And there are others who argue that multimodal activities may actually detract from or stifle L2 development (e.g. Manchón, 2017). In the circumstances, all the talk of ‘needs to’ and ‘should’ is more than a little premature.

Assessment

The authors of this Oxford paper rightly note that, if we are to adopt a multimodal approach, ‘it is important that assessment requirements take into account the multimodal nature of contemporary communication’. The trouble is that there are no widely used assessments (to my knowledge) that do this (including Oxford’s own tests). English language reading tests (like the Oxford Test of English) measure the comprehension of flat printed texts, as a proxy for reading skills. This is not the place to question the validity of such reading tests. Suffice to say that ‘little consensus exists as to what [the ability to read another language] entails, how it develops, and how progress in development can be monitored and fostered’ (Koda, 2021).

No doubt there are many people beavering away at trying to figure out how to assess multimodal literacy, but the challenges they face are not negligible. Twenty-first century digital (multimodal) literacy includes such things as knowing how to change the language of an online text to your own (and vice versa), how to bring up subtitles, how to convert written text to speech, how to generate audio scripts. All such skills may well be very valuable in this digital age, and all of them limit the need to learn another language.

Final thoughts

I can’t help but wonder why Oxford University Press should bring out a ‘position paper’ that is so at odds with their own publishing and assessing practices, and so at odds with the paper recently published in their flagship journal, ELT Journal. There must be some serious disconnect between the Marketing Department, which commissions papers such as these, and other departments within the company. Why did they allow such overstatement, when it is well known that many ELT practitioners (i.e. their customers) have the view that ‘linguistically based forms are (and should be) the only legitimate form of literacy’ (Choi & Yi, 2016)? Was it, perhaps, the second part of the title of this paper that appealed to the marketing people (‘Communication Skills for Today’s Generation’) and they just thought that ‘multimodality’ had a cool, contemporary ring to it? Or does the use of ‘multimodality’ help the marketing of courses like Headway and English File with additional multimedia bells and whistles? As I say, I can’t help but wonder.

If you want to find out more, I’d recommend the ELT Journal article, which you can access freely without giving your details to the marketing people.

Finally, it is perhaps time to question the logical connection between the fact that much reading these days is multimodal and the idea that multimodal literacy should be taught in a language classroom. Much reading that takes place online, especially with multimodal texts, could be called ‘hyper reading’, characterised as ‘sort of a brew of skimming and scanning on steroids’ (Baron, 2021: 12). Is this the kind of reading that should be promoted with language learners? Baron (2021) argues that the answer to this question depends on the level of reading skills of the learner. The lower the level, the less beneficial it is likely to be. But for ‘accomplished readers with high levels of prior knowledge about the topic’, hyper-reading may be a valuable approach. For many language learners, monomodal deep reading, which demands ‘slower, time-demanding cognitive and reflective functions’ (Baron, 2021: x – xi) may well be much more conducive to learning.

References

Baron, N. S. (2021) How We Read Now. Oxford: Oxford University Press

Choi, J. & Yi, Y. (2016) Teachers’ Integration of Multimodality into Classroom Practices for English Language Learners. TESOL Journal, 7 (2): 304 – 327

Donaghy, K. (author), Karastathi, S. (consultant) & Peachey, N. (consultant) (2023) Multimodality in ELT: Communication skills for today’s generation [PDF]. Oxford University Press. https://elt.oup.com/feature/global/expert/multimodality (registration needed)

Dudeney, G., Hockly, N. & Pegrum, M. (2013) Digital Literacies. Harlow: Pearson Education

Kessler, M. (2022) Multimodality. ELT Journal, 76 (4): 551 – 554

Koda, K. (2021) Assessment of Reading. https://doi.org/10.1002/9781405198431.wbeal0051.pub2

Manchón, R. M. (2017) The Potential Impact of Multimodal Composition on Language Learning. Journal of Second Language Writing, 38: 94 – 95

Pegrum, M., Dudeney, G. & Hockly, N. (2018) Digital Literacies Revisited. The European Journal of Applied Linguistics and TEFL, 7 (2): 3 – 24

Pegrum, M., Hockly, N. & Dudeney, G. (2022) Digital Literacies 2nd Edition. New York: Routledge

The idea of ‘digital natives’ emerged at the turn of the century, was popularized by Marc Prensky (2001), and rapidly caught the public imagination, especially the imagination of technology marketers. Its popularity has dwindled a little since then, but the term is still widely used. Alternative terms include ‘Generation Y’, ‘Millennials’ and the ‘Net Generation’, definitions of which vary slightly from writer to writer. Two examples of the continued currency of the term ‘digital native’ are a 2019 article on the Pearson Global Scale of English website entitled ‘Teaching digital natives to become more human’ and an article in The Pie News (a trade magazine for ‘professionals in international education’), extolling the virtues of online learning for digital natives in times of Covid-19.

Key to understanding ‘digital natives’, according to users of the term, is their fundamental differences from previous generations. They have grown up immersed in technology, have shorter attention spans, and are adept at multitasking. They ‘are no longer the people our educational system was designed to teach’ (Prensky, 2001), so educational systems must change in order to accommodate their needs.

The problem is that ‘digital natives’ are a myth. Prensky’s ideas were not based on any meaningful research: his observations and conclusions, seductive though they might be, were no more than opinions. Kirschner and De Bruyckere (2017) state the research consensus:

There is no such thing as a digital native who is information-skilled simply because (s)he has never known a world that was not digital. […] One of the alleged abilities of students in this generation, the ability to multitask, does not exist and that designing education that assumes the presence of this ability hinders rather than helps learning.

This is neither new (see Bennett et al., 2008) nor contentious. Almost ten years ago, Thomas (2011:3) reported that ‘some researchers have been asked to remove all trace of the term from academic papers submitted to conferences in order to be seriously considered for inclusion’. There are reasons, he added, to consider some uses of the term nothing more than technoevangelism (Thomas, 2011:4). Perhaps someone should tell Pearson and the Pie News? Then, again, perhaps, they wouldn’t care.

The attribution of particular characteristics to ‘digital natives’ / ‘Generation Y’ / ‘Millennials’ is an application of Generation Theory. This can be traced back to a 1928 paper by Karl Mannheim, called ‘Das Problem der Generationen’ which grew in popularity after being translated into English in the 1950s. According to Jauregui et al (2019), the theory was extensively debated in the 1960s and 1970s, but then disappeared from academic study. The theory was not supported by empirical research, was considered to be overly schematised and too culturally-bound, and led inexorably to essentialised and reductive stereotypes.

But Generation Theory gained a new lease of life in the 1990s, following the publication of ‘Generations’ by William Strauss and Neil Howe. The book was so successful that it spawned a slew of other titles leading up to ‘Millennials Rising’ (Howe & Strauss, 2000). This popularity has continued to the present, with fans including Steve Bannon (Kaiser, 2016) who made an ‘apocalyptical and polemical’ documentary film about the 2007 – 2008 financial crisis entitled ‘Generation Zero’. The work of Strauss and Howe has been dismissed as ‘more popular culture than social science’ (Jauregui et al., 2019: 63) and in much harsher terms in two fascinating articles in Jacobin (Hart, 2018) and Aeon (Onion, 2015). The sub-heading of the latter is ‘generational labels are lazy, useless and just plain wrong’. Although dismissed by scholars as pseudo-science, the popularity of such Generation Theory helps explain why Prensky’s paper about ‘digital natives’ fell on such fertile ground. The saying, often falsely attributed to Mark Twain, that we should ‘never let the truth get in the way of a good story’ comes to mind.

But by the end of the first decade of this century, ‘digital natives’ had become problematic in two ways: not only did the term not stand up to close analysis, but it also no longer referred to the generational cohort that pundits and marketers wanted to talk about.

Around January 2018, use of the term ‘Generation Z’ began to soar, and is currently at its highest point ever in the Google Trends graph. As with ‘digital natives’, the precise birth dates of Generation Z vary from writer to writer. After 2001, according to the Cambridge dictionary; slightly earlier according to other sources. The cut-off point is somewhere between the mid and late 2010s. Other terms for this cohort have been proposed, but ‘Generation Z’ is the most popular.

William Strauss died in 2007 and Neil Howe was in his late 60s when ‘Generation Z’ became a thing, so there was space for others to take up the baton. The most successful have probably been Corey Seemiller and Meghan Grace, who, since 2016, have been churning out a book a year devoted to ‘Generation Z’. In the first of these (Seemiller & Grace, 2016), they were clearly keen to avoid some of the criticisms that had been levelled at Strauss and Howe, and they carried out research. This consisted of 1143 responses to a self-reporting questionnaire by students at US institutions of higher education. The survey also collected information about Kolb’s learning styles and multiple intelligences. With refreshing candour, they admit that the sample is not entirely representative of higher education in the US. And, since it only looked at students in higher education, it told us nothing at all about those who weren’t.

In August 2018, Pearson joined the party, bringing out a report entitled ‘Beyond Millennials: The Next Generation of Learners’. Conducted by the Harris Poll, the survey looked at 2,587 US respondents, aged between 14 and 40. The results were weighted for age, gender, race/ethnicity, marital status, household income, and education, so were rather more representative than the Seemiller & Grace research.

In ELT and educational references to ‘Generation Z’, research, of even the very limited kind mentioned above, is rarely cited. When it is, Seemiller and Grace feature prominently (e.g. Mohr & Mohr, 2017). Alternatively, even less reliable sources are used. In an ELT webinar entitled ‘Engaging Generation Z’, for example, information about the characteristics of ‘Generation Z’ learners is taken from an infographic produced by an American office furniture company.

But putting aside quibbles about the reliability of the information, and the fact that it most commonly[1] refers to Americans (who are not, perhaps, the most representative group in global terms), what do the polls tell us?

Despite claims that Generation Z are significantly different from their Millennial predecessors, the general picture that emerges suggests that the differences are more a question of degree than of substance. They include:

  • A preference for visual / video information over text
  • A preference for a variety of bite-sized, entertaining educational experiences
  • Short attention spans and zero tolerance for delay

All of these were identified in 2008 (Williams et al., 2008) as characteristics of the ‘Google Generation’ (a label which usually seems to span Millennials and Generation Z). There is nothing fundamentally different from Prensky’s description of ‘digital natives’. The Pearson report claimed that ‘Generation Z expects experiences both inside and outside the classroom that are more rewarding, more engaging and less time consuming. Technology is no longer a transformative phenomena for this generation, but rather a normal, integral part of life’. However, there is no clear disjuncture or discontinuity between Generation Z and Millennials, any more than there was between ‘digital natives’ and previous generations (Selwyn, 2009: 375). What has really changed is that the technology has moved on (e.g. YouTube was founded in 2005 and the first iPhone was released in 2007).

The discourse surrounding ‘Generation Z’ is now steadily finding its way into the world of English language teaching. The 2nd TESOL Turkey International ELT Conference took place last November with ‘Teaching Generation Z: Passing on the baton from K12 to University’ as its theme. A further gloss explained that the theme was ‘in reference to the new digital generation of learners with outstanding multitasking skills; learners who can process and absorb information within mere seconds and yet possess the shortest attention span ever’.

A few more examples … Cambridge University Press ran an ELT webinar entitled ‘Engaging Generation Z’ and Macmillan Education has a coursebook series called ‘Exercising English for Generation Z’. EBC, a TEFL training provider, ran a blog post in November last year, ‘Teaching English to generation Z students’. And EFL Magazine had an article, ‘Critical Thinking – The Key Competence For The Z Generation’, in February of this year.

The pedagogical advice that results from this interest in Generation Z seems to boil down to: ‘Accept the digital desires of the learners, use lots of video (i.e. use more technology in the classroom) and encourage multi-tasking’.

No one, I suspect, would suggest that teachers should not make use of topics and technologies that appeal to their learners. But recommendations to change approaches to language teaching, ‘based solely on the supposed demands and needs of a new generation of digital natives must be treated with caution’ (Bennett et al., 2008: 782). It is far from clear that generational differences (even if they really exist) are important enough ‘to be considered during the design of instruction or the use of different educational technologies – at this time, the weight of the evidence is negative’ (Reeves, 2008: 21).

Perhaps, it would be more useful to turn away from surveys of attitudes and towards more fact-based research. Studies in both the US and the UK have found that myopia and other problems with the eyes are rising fast among the Generation Z cohort, and that there is a link with increased screen time, especially with handheld devices. At the same time, Generation Zers are much more likely than their predecessors to be diagnosed with anxiety disorder and depression. While the connection between technology use and mental health is far from clear, it is possible that ‘the rise of the smartphone and social media have at least something to do with [the rise in mental health issues]’ (Twenge, 2017).

Should we be using more technology in class because learners say they want or need it? If we follow that logic, perhaps we should also be encouraging the consumption of fast food, energy drinks and Ritalin before and after lessons?

[1] Studies have been carried out in other geographical settings, including Europe (e.g. Triple-a-Team AG, 2016) and China (Tang, 2019).

References

Bennett S., Maton K., & Kervin, L. (2008). The ‘digital natives’ debate: a critical review of the evidence. British Journal of Educational Technology, 39 (5): pp. 775 – 786.

Hart, A. (2018). Against Generational Politics. Jacobin, 28 February 2018. https://jacobinmag.com/2018/02/generational-theory-millennials-boomers-age-history

Howe, N. & Strauss, W. (2000). Millennials Rising: The Next Great Generation. New York, NY: Vintage Books.

Jauregui, J., Watsjold, B., Welsh, L., Ilgen, J. S. & Robins, L. (2019). Generational “othering”: The myth of the Millennial learner. Medical Education, 54: pp. 60 – 65. https://onlinelibrary.wiley.com/doi/pdf/10.1111/medu.13795

Kaiser, D. (2016). Donald Trump, Stephen Bannon and the Coming Crisis in American National Life. Time, 18 November 2016. https://time.com/4575780/stephen-bannon-fourth-turning/

Kirschner, P.A. & De Bruyckere P. (2017). The myths of the digital native and the multitasker. Teaching and Teacher Education, 67: pp. 135-142. https://www.sciencedirect.com/science/article/pii/S0742051X16306692

Mohr, K. A. J. & Mohr, E. S. (2017). Understanding Generation Z Students to Promote a Contemporary Learning Environment. Journal on Empowering Teacher Excellence, 1 (1), Article 9 DOI: https://doi.org/10.15142/T3M05T

Onion, R. (2015). Against generations. Aeon, 19 May, 2015. https://aeon.co/essays/generational-labels-are-lazy-useless-and-just-plain-wrong

Pearson (2018). Beyond Millennials: The Next Generation of Learners. https://www.pearson.com/content/dam/one-dot-com/one-dot-com/global/Files/news/news-annoucements/2018/The-Next-Generation-of-Learners_final.pdf

Prensky, M. (2001). Digital natives, digital immigrants. On the Horizon, 9: pp. 1 – 6

Reeves, T.C. (2008). Do Generational Differences Matter in Instructional Design? Athens, GA: University of Georgia, Department of Educational Psychology and Instructional Technology

Seemiller, C. & Grace, M. (2016). Generation Z Goes to College. San Francisco: Jossey-Bass

Selwyn, N. (2009). The digital native – myth and reality. Perspectives, 61: pp. 364 – 379

Strauss W. & Howe, N. (1991). Generations: The History of America’s Future, 1584 to 2069. New York, New York: HarperCollins.

Tang F. (2019). A critical review of research on the work-related attitudes of Generation Z in China. Social Psychology and Society, 10 (2): pp. 19 – 28. Available at: https://psyjournals.ru/files/106927/sps_2019_n2_Tang.pdf

Thomas, M. (2011). Technology, Education, and the Discourse of the Digital Native: Between Evangelists and Dissenters. In Thomas, M. (ed). (2011). Deconstructing Digital Natives: Young people, technology and the new literacies. London: Routledge. pp. 1 – 13

Triple-a-Team AG. (2016). Generation Z Metastudie über die kommende Generation. Biglen, Switzerland. Available at: http://www.sprachenrat.bremen.de/files/aktivitaeten/Generation_Z_Metastudie.pdf

Twenge, J. M. (2017). iGen. New York: Atria Books

Williams, P., Rowlands, I. & Fieldhouse, M. (2008). The ‘Google Generation’ – myths and realities about young people’s digital information behaviour. In Nicholas, D. & Rowlands, I. (eds.) (2008). Digital Consumers. London: Facet Publishers.

I’m a sucker for meta-analyses, those aggregates of multiple studies that generate an effect size, and I am even fonder of meta-meta analyses. I skip over the boring stuff about inclusion criteria and statistical procedures and zoom in on the results and discussion. I’ve pored over Hattie (2009) and, more recently, Dunlosky et al (2013), and quoted both more often than is probably healthy. Hardly surprising, then, that I was eager to read Luke Plonsky and Nicole Ziegler’s ‘The CALL–SLA interface: insights from a second-order synthesis’ (Plonsky & Ziegler, 2016), an analysis of nearly 30 meta-analyses (later whittled down to 14) looking at the impact of technology on L2 learning. The big question they were looking to find an answer to? How effective is computer-assisted language learning compared to face-to-face contexts?
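For readers unfamiliar with the machinery, the aggregation at the heart of a meta-analysis can be sketched in a few lines. This is a toy illustration only, using simple sample-size weighting (real syntheses such as Plonsky and Ziegler’s use inverse-variance weights, moderator analyses and much else), and all the study names and figures below are invented:

```python
# Toy sketch of meta-analytic aggregation: a weighted mean effect size.
# Sample-size weighting is used for simplicity; the studies and numbers
# are invented for illustration.

studies = [
    {"name": "Study A", "d": 0.40, "n": 60},    # d = Cohen's d effect size
    {"name": "Study B", "d": 0.75, "n": 30},    # n = number of participants
    {"name": "Study C", "d": 0.10, "n": 110},
]

def weighted_mean_effect(studies):
    """Average the effect sizes, weighting each study by its sample size."""
    total_n = sum(s["n"] for s in studies)
    return sum(s["d"] * s["n"] for s in studies) / total_n

print(f"Aggregate effect size: {weighted_mean_effect(studies):.2f}")
```

The point of the sketch is simply that a single headline number hides the spread of the individual studies, which is one reason why a statistical soundbite like ‘positive effects of technology’ tells us so little on its own.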

Plonsky & Ziegler

Plonsky and Ziegler found that there are unequivocally ‘positive effects of technology on language learning’. In itself, this doesn’t really tell us anything, simply because there are too many variables. It’s a statistical soundbite, ripe for plucking by anyone with an edtech product to sell. Much more useful is to understand which technologies used in which ways are likely to have a positive effect on learning. It appears from Plonsky and Ziegler’s work that the use of CALL glosses (to develop reading comprehension and vocabulary development) provides the strongest evidence of technology’s positive impact on learning. The finding is reinforced by the fact that this particular technology was the most well-represented research area in the meta-analyses under review.

What we know about glosses

A gloss is ‘a brief definition or synonym, either in L1 or L2, which is provided with [a] text’ (Nation, 2013: 238). Glosses can take many forms (e.g. annotations in the margin or at the foot of a printed page), but electronic or CALL glossing is ‘an instant look-up capability – dictionary or linked’ (Taylor, 2006; 2009) which is becoming increasingly standard in on-screen reading. One of the most widely used is probably the translation function in Microsoft Word: here’s the French gloss for the word ‘gloss’.

Language learning tools and programs are making increasing use of glosses. Here are two examples. The first is Lingro, a dictionary tool that learners can have running alongside any webpage: clicking on a word brings up a dictionary entry, and the word can then be exported into a wordlist which can be practised with spaced repetition software. The example here is using the English-English dictionary, but a number of bilingual pairings are available. The second is from Bliu Bliu, a language learning app that I unkindly reviewed here.

So, what did Plonsky and Ziegler discover about glosses? There were two key takeaways:

  • both L1 and L2 CALL glossing can be beneficial to learners’ vocabulary development (Taylor, 2006, 2009, 2013)
  • CALL / electronic glosses lead to more learning gains than paper-based glosses (p.22)

On the surface, this might seem uncontroversial, but if you take a good look at the three examples (above) of online glosses, you’ll see that something is not quite right here. Lingro’s gloss is a fairly full dictionary entry: it contains too much information for the purpose of a gloss. Cognitive Load Theory suggests that ‘new information be provided concisely so as not to overwhelm the learner’ (Khezrlou et al, 2017: 106): working out which definition is relevant here (the appropriate definition is actually the sixth in this list) will overwhelm many learners and interfere with the process of reading … which the gloss is intended to facilitate. In addition, the language of the definitions is more difficult than the defined item. Cognitive load is, therefore, further increased. Lingro needs to use a decent learner’s dictionary (with a limited defining vocabulary), rather than relying on the free Wiktionary.

Nation (2013: 240) cites research which suggests that a gloss is most effective when it provides a ‘core meaning’ which users will have to adapt to what is in the text. This is relatively unproblematic, from a technological perspective, but few glossing tools actually do this. The alternative is to use NLP tools to identify the context-specific meaning: our ability to do this is improving all the time but remains some way short of total accuracy. At the very least, NLP tools are needed to identify part of speech (which will increase the probability of hitting the right meaning). Bliu Bliu gets things completely wrong, confusing the verb and the adjective ‘own’.
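To make the part-of-speech point concrete, here is a deliberately minimal sketch. The two-entry dictionary and the single possessive-pronoun rule are invented for illustration; a real glossing tool would use a trained POS tagger (spaCy, for instance) and a proper learner’s dictionary rather than a one-rule heuristic:

```python
# Minimal, rule-based sketch of POS-aware gloss selection for 'own'.
# The glosses and the disambiguation rule are invented for illustration.

GLOSSES = {
    ("own", "VERB"): "to have something that belongs to you",
    ("own", "ADJ"): "belonging to the person mentioned (used after a possessive)",
}

POSSESSIVES = {"my", "your", "his", "her", "its", "our", "their"}

def guess_pos(word, previous_word):
    """One-rule heuristic: 'own' directly after a possessive is an adjective."""
    if word == "own":
        return "ADJ" if previous_word in POSSESSIVES else "VERB"
    return "UNKNOWN"

def gloss(sentence, target):
    """Return the gloss for `target` that matches its guessed part of speech."""
    tokens = sentence.lower().strip(".!?").split()
    i = tokens.index(target)
    pos = guess_pos(target, tokens[i - 1] if i > 0 else "")
    return GLOSSES.get((target, pos))

print(gloss("They own a small flat in Lyon.", "own"))   # verb sense
print(gloss("She started her own business.", "own"))    # adjective sense
```

Even this crude rule separates the two senses of ‘own’ that Bliu Bliu confuses, which is precisely the kind of disambiguation a glossing tool needs before it can serve up the right ‘core meaning’.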

Both Lingro and Bliu Bliu fail to meet the first requirement of a gloss: ‘that it should be understood’ (Nation, 2013: 239). Neither is likely to contribute much to the vocabulary development of learners. We will need to modify Plonsky and Ziegler’s conclusions somewhat: they are contingent on the quality of the glosses. This is not, however, something that can be assumed … as will be clear from even the most cursory look at the language learning tools that are available.

Nation (2013: 447) also cites research that ‘learning is generally better if the meaning is written in the learner’s first language. This is probably because the meaning can be easily understood and the first language meaning already has many rich associations for the learner. Laufer and Shmueli (1997) found that L1 glosses are superior to L2 glosses in both short-term and long-term (five weeks) retention and irrespective of whether the words are learned in lists, sentences or texts’. Not everyone agrees, and a firm conclusion either way is probably not possible: learner variables (especially learner preferences) preclude anything conclusive, which is why I’ve highlighted Nation’s use of the word ‘generally’. If we have a look at Lingro’s bilingual gloss, I think you’ll agree that the monolingual and bilingual glosses are equally unhelpful, equally unlikely to lead to better learning, whether it’s vocabulary acquisition or reading comprehension.

The issues I’ve just discussed illustrate the complexity of the ‘glossing’ question, but they only scratch the surface. I’ll dig a little deeper.

1 Glosses are only likely to be of value to learning if they are used selectively. Nation (2013: 242) suggests that ‘it is best to assume that the highest density of glossing should be no more than 5% and preferably around 3% of the running words’. Online glosses make the process of look-up extremely easy. This is an obvious advantage over look-ups in a paper dictionary, but there is a real risk, too, that the ease of online look-up encourages unnecessary look-ups. More clicks do not always lead to more learning. The value of glosses cannot, therefore, be considered independently of the level (i.e. appropriacy) of the text they are being used with.
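
Nation’s density figure is at least straightforward to check mechanically. The sketch below is purely illustrative (the function name, tokenisation and example text are my own, not taken from any of the tools discussed): it computes the proportion of running words in a text that would carry a gloss, which could then be compared against the suggested 3–5% ceiling.

```python
def gloss_density(text: str, glossed_words: set) -> float:
    """Proportion of running words in the text that would carry a gloss."""
    # Naive tokenisation: split on whitespace, strip surrounding punctuation.
    tokens = [w.strip(".,;:!?'\"()").lower() for w in text.split()]
    tokens = [t for t in tokens if t]
    if not tokens:
        return 0.0
    glossed = sum(1 for t in tokens if t in glossed_words)
    return glossed / len(tokens)

text = "The old lighthouse keeper trimmed the wick and waited for the storm"
density = gloss_density(text, {"lighthouse", "wick", "trimmed"})
print(f"{density:.1%}")  # 3 of 12 running words = 25.0%
assert density > 0.05    # well above Nation's suggested 5% ceiling
```

A tool that glossed every clickable word would, by this measure, be operating at a density far beyond anything the research supports.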

2 A further advantage of online glosses is that they can offer a wide range of information, e.g. pronunciation, L1 translation, L2 definition, visuals, example sentences. The review of literature by Khezrlou et al (2017: 107) suggests that ‘multimedia glosses can promote vocabulary learning but uncertainty remains as to whether they also facilitate reading comprehension’. Barcroft (2015), however, warns that pictures may help learners with meaning, but at the cost of retention of word form, and the research of Boers et al (2017) did not find evidence to support the use of pictures. Even if we were to accept the proposition that pictures might be helpful, we would need to bear two caveats in mind. First, the amount of multimodal support should not lead to cognitive overload. Second, pictures need to be clear and appropriate: a condition that is rarely met in online learning programs. The quality of multimodal glosses is more important than their inclusion or exclusion.

3 It’s a commonplace to state that learners will learn more if they are actively engaged or involved in the learning, rather than simply (receptively) looking up a gloss. It has therefore been suggested that cognitive engagement can be stimulated by turning glosses into multiple-choice tasks, and a fair amount of research has investigated this possibility. Barcroft (2015: 143) reports research suggesting that ‘multiple-choice glosses [are] more effective than single glosses’, but Nation (2013: 246) argues that ‘multiple choice glosses are not strongly supported by research’. Basically, we don’t know, and even if we have replication studies to re-assess the benefits of multimodal glosses (as advocated by Boers et al, 2017), it is again likely that learner variables will make it impossible to reach a firm conclusion.
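
Turning a gloss into a multiple-choice item is, computationally, trivial; the unresolved question is pedagogical, not technical. A minimal sketch (the item format, function name and example definitions are my own invention, not taken from any published tool) shows how little machinery is involved:

```python
import random

def make_mc_gloss(word, correct_def, distractor_defs, rng=None):
    """Build a multiple-choice gloss item: shuffled options plus answer key."""
    rng = rng or random.Random(0)  # fixed seed so items are reproducible
    options = [correct_def] + list(distractor_defs)
    rng.shuffle(options)
    return {"word": word,
            "options": options,
            "answer": options.index(correct_def)}

item = make_mc_gloss(
    "wick",
    "the cord in a candle or lamp that draws up fuel to the flame",
    ["a small fast-moving boat", "a narrow path between fields"],
)
print(item["word"], "->", item["options"][item["answer"]])
```

The hard part, which no amount of code solves, is choosing distractors that generate useful cognitive engagement rather than confusion, which is precisely where the research remains inconclusive.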

Learning from meta-analyses

Discussion of glosses is not new. Back in the late 19th century, ‘most of the Reform Movement teachers took the view that glossing was a sensible technique’ (Howatt, 2004: 191). Sensible, but probably not all that important in the broader scheme of language learning and teaching. Online glosses offer a number of potential advantages, but there is a huge number of variables that need to be considered if that potential is to be realised. In essence, I have been arguing that asking whether online glosses are more effective than print glosses is the wrong question. It’s not a question that can provide us with a useful answer. When you look at the details of the research brought together in the meta-analysis, you simply cannot conclude that technology has unequivocally positive effects on language learning if the most positive effects are to be found in the digital variation of an old, sensible technique.

Interesting and useful as Plonsky and Ziegler’s study is, I think it needs to be treated with caution. More generally, we need to be cautious about using meta-analyses and effect sizes. Mura Nava has a useful summary of an article by Adrian Simpson (Simpson, 2017), which looks at inclusion criteria and statistical procedures and warns us that we cannot necessarily assume that the findings of meta-meta-analyses are educationally significant. More directly related to technology and language learning, Boulton’s paper (Boulton, 2016) makes a similar point: ‘Meta-analyses need interpreting with caution: in particular, it is tempting to seize on a single figure as the ultimate answer to the question: Does it work? […] More realistically, we need to look at variation in what works’.

For me, the greatest value in Plonsky and Ziegler’s paper was nothing to do with effect sizes and big answers to big questions. It was the bibliography … and the way it forced me to be rather more critical about meta-analyses.

References

Barcroft, J. 2015. Lexical Input Processing and Vocabulary Learning. Amsterdam: John Benjamins

Boers, F., Warren, P., He, L. & Deconinck, J. 2017. ‘Does adding pictures to glosses enhance vocabulary uptake from reading?’ System 66: 113 – 129

Boulton, A. 2016. ‘Quantifying CALL: significance, effect size and variation’ in S. Papadima-Sophocleus, L. Bradley & S. Thouësny (eds.) CALL Communities and Culture – short papers from Eurocall 2016 pp.55 – 60 http://files.eric.ed.gov/fulltext/ED572012.pdf

Dunlosky, J., Rawson, K.A., Marsh, E.J., Nathan, M.J. & Willingham, D.T. 2013. ‘Improving Students’ Learning With Effective Learning Techniques’ Psychological Science in the Public Interest 14 / 1: 4 – 58

Hattie, J.A.C. 2009. Visible Learning. Abingdon, Oxon.: Routledge

Howatt, A.P.R. 2004. A History of English Language Teaching 2nd edition. Oxford: Oxford University Press

Khezrlou, S., Ellis, R. & Sadeghi, K. 2017. ‘Effects of computer-assisted glosses on EFL learners’ vocabulary acquisition and reading comprehension in three learning conditions’ System 65: 104 – 116

Laufer, B. & Shmueli, K. 1997. ‘Memorizing new words: Does teaching have anything to do with it?’ RELC Journal 28 / 1: 89 – 108

Nation, I.S.P. 2013. Learning Vocabulary in Another Language. Cambridge: Cambridge University Press

Plonsky, L. & Ziegler, N. 2016. ‘The CALL–SLA interface: insights from a second-order synthesis’ Language Learning & Technology 20 / 2: 17 – 37

Simpson, A. 2017. ‘The misdirection of public policy: Comparing and combining standardised effect sizes’ Journal of Education Policy, 32 / 4: 450-466

Taylor, A. M. 2006. ‘The effects of CALL versus traditional L1 glosses on L2 reading comprehension’ CALICO Journal 23: 309 – 318

Taylor, A. M. 2009. ‘CALL-based versus paper-based glosses: Is there a difference in reading comprehension?’ CALICO Journal 23: 147 – 160

Taylor, A. M. 2013. ‘CALL versus paper: In which context are L1 glosses more effective?’ CALICO Journal 30: 63 – 8