
The idea of ‘digital natives’ emerged at the turn of the century, was popularized by Marc Prensky (2001), and rapidly caught the public imagination, especially the imagination of technology marketers. Its popularity has dwindled a little since then, but the term is still widely used. Alternative terms include ‘Generation Y’, ‘Millennials’ and the ‘Net Generation’, definitions of which vary slightly from writer to writer. Two examples of the continued currency of the term ‘digital native’ are a 2019 article on the Pearson Global Scale of English website entitled ‘Teaching digital natives to become more human’ and an article in The Pie News (a trade magazine for ‘professionals in international education’), extolling the virtues of online learning for digital natives in times of Covid-19.

Key to understanding ‘digital natives’, according to users of the term, is their fundamental differences from previous generations. They have grown up immersed in technology, have shorter attention spans, and are adept at multitasking. They ‘are no longer the people our educational system was designed to teach’ (Prensky, 2001), so educational systems must change in order to accommodate their needs.

The problem is that ‘digital natives’ are a myth. Prensky’s ideas were not based on any meaningful research: his observations and conclusions, seductive though they might be, were no more than opinions. Kirschner and De Bruyckere (2017) state the research consensus:

There is no such thing as a digital native who is information-skilled simply because (s)he has never known a world that was not digital. […] One of the alleged abilities of students in this generation, the ability to multitask, does not exist and that designing education that assumes the presence of this ability hinders rather than helps learning.

This is neither new (see Bennett et al., 2008) nor contentious. Almost ten years ago, Thomas (2011: 3) reported that ‘some researchers have been asked to remove all trace of the term from academic papers submitted to conferences in order to be seriously considered for inclusion’. There are reasons, he added, to consider some uses of the term nothing more than technoevangelism (Thomas, 2011: 4). Perhaps someone should tell Pearson and the Pie News? Then again, perhaps they wouldn’t care.

The attribution of particular characteristics to ‘digital natives’ / ‘Generation Y’ / ‘Millennials’ is an application of Generation Theory. This can be traced back to a 1928 paper by Karl Mannheim, ‘Das Problem der Generationen’ (‘The Problem of Generations’), which grew in popularity after being translated into English in the 1950s. According to Jauregui et al. (2019), the theory was extensively debated in the 1960s and 1970s, but then disappeared from academic study. The theory was not supported by empirical research, was considered to be overly schematised and too culturally bound, and led inexorably to essentialised and reductive stereotypes.

But Generation Theory gained a new lease of life in the 1990s, following the publication of ‘Generations’ by William Strauss and Neil Howe (1991). The book was so successful that it spawned a slew of other titles, leading up to ‘Millennials Rising’ (Howe & Strauss, 2000). This popularity has continued to the present, with fans including Steve Bannon (Kaiser, 2016), who made an ‘apocalyptical and polemical’ documentary film about the 2007-2008 financial crisis entitled ‘Generation Zero’. The work of Strauss and Howe has been dismissed as ‘more popular culture than social science’ (Jauregui et al., 2019: 63), and in much harsher terms in two fascinating articles in Jacobin (Hart, 2018) and Aeon (Onion, 2015). The sub-heading of the latter is ‘generational labels are lazy, useless and just plain wrong’. Although scholars have dismissed this brand of Generation Theory as pseudo-science, its popularity helps explain why Prensky’s paper about ‘digital natives’ fell on such fertile ground. The saying, often falsely attributed to Mark Twain, that we should ‘never let the truth get in the way of a good story’ comes to mind.

But by the end of the first decade of this century, ‘digital natives’ had become problematic in two ways: not only did the term not stand up to close analysis, but it also no longer referred to the generational cohort that pundits and marketers wanted to talk about.

Around January 2018, use of the term ‘Generation Z’ began to soar, and it is currently at its highest point ever on the Google Trends graph. As with ‘digital natives’, the precise birth dates of Generation Z vary from writer to writer: after 2001, according to the Cambridge Dictionary; slightly earlier, according to other sources. The cut-off point is somewhere between the mid- and late 2010s. Other terms for this cohort have been proposed, but ‘Generation Z’ is the most popular.

William Strauss died in 2007 and Neil Howe was in his late 60s when ‘Generation Z’ became a thing, so there was space for others to take up the baton. The most successful have probably been Corey Seemiller and Meghan Grace, who, since 2016, have been churning out a book a year devoted to ‘Generation Z’. In the first of these (Seemiller & Grace, 2016), they were clearly keen to avoid some of the criticisms that had been levelled at Strauss and Howe, and they carried out research. This consisted of 1,143 responses to a self-reporting questionnaire by students at US institutions of higher education. The survey also collected information about Kolb’s learning styles and multiple intelligences. With refreshing candour, they admit that the sample is not entirely representative of higher education in the US. And, since it only looked at students in higher education, it told us nothing at all about those who weren’t.

In August 2018, Pearson joined the party, bringing out a report entitled ‘Beyond Millennials: The Next Generation of Learners’. Conducted by the Harris Poll, the survey looked at 2,587 US respondents, aged between 14 and 40. The results were weighted for age, gender, race/ethnicity, marital status, household income, and education, so were rather more representative than the Seemiller & Grace research.

In ELT and educational references to ‘Generation Z’, research, of even the very limited kind mentioned above, is rarely cited. When it is, Seemiller and Grace feature prominently (e.g. Mohr & Mohr, 2017). Alternatively, even less reliable sources are used. In an ELT webinar entitled ‘Engaging Generation Z’, for example, information about the characteristics of ‘Generation Z’ learners is taken from an infographic produced by an American office furniture company.

But putting aside quibbles about the reliability of the information, and the fact that it most commonly[1] refers to Americans (who are not, perhaps, the most representative group in global terms), what do the polls tell us?

Despite claims that Generation Z are significantly different from their Millennial predecessors, the general picture that emerges suggests that differences are more a question of degree than substance. These include:

  • A preference for visual / video information over text
  • A preference for a variety of bite-sized, entertaining educational experiences
  • Short attention spans and zero tolerance for delay

All of these were identified in 2008 (Williams et al., 2008) as characteristics of the ‘Google Generation’ (a label which usually seems to span Millennials and Generation Z). There is nothing fundamentally different from Prensky’s description of ‘digital natives’. The Pearson report claimed that ‘Generation Z expects experiences both inside and outside the classroom that are more rewarding, more engaging and less time consuming. Technology is no longer a transformative phenomena for this generation, but rather a normal, integral part of life’. However, there is no clear disjuncture or discontinuity between Generation Z and Millennials, any more than there was between ‘digital natives’ and previous generations (Selwyn, 2009: 375). What has really changed is that the technology has moved on (e.g. YouTube was founded in 2005 and the first iPhone was released in 2007).

The discourse surrounding ‘Generation Z’ is now steadily finding its way into the world of English language teaching. The 2nd TESOL Turkey International ELT Conference took place last November with ‘Teaching Generation Z: Passing on the baton from K12 to University’ as its theme. A further gloss explained that the theme was ‘in reference to the new digital generation of learners with outstanding multitasking skills; learners who can process and absorb information within mere seconds and yet possess the shortest attention span ever’.

A few more examples … Cambridge University Press ran an ELT webinar entitled ‘Engaging Generation Z’ and Macmillan Education has a coursebook series called ‘Exercising English for Generation Z’. EBC, a TEFL training provider, published a blog post in November last year, ‘Teaching English to generation Z students’. And EFL Magazine had an article, ‘Critical Thinking – The Key Competence For The Z Generation’, in February of this year.

The pedagogical advice that results from this interest in Generation Z seems to boil down to: ‘Accept the digital desires of the learners, use lots of video (i.e. use more technology in the classroom) and encourage multi-tasking’.

No one, I suspect, would suggest that teachers should not make use of topics and technologies that appeal to their learners. But recommendations to change approaches to language teaching ‘based solely on the supposed demands and needs of a new generation of digital natives must be treated with caution’ (Bennett et al., 2008: 782). It is far from clear that generational differences (even if they really exist) are important enough ‘to be considered during the design of instruction or the use of different educational technologies – at this time, the weight of the evidence is negative’ (Reeves, 2008: 21).

Perhaps it would be more useful to turn away from surveys of attitudes and towards more fact-based research. Studies in both the US and the UK have found that myopia and other eye problems are rising fast among the Generation Z cohort, and that there is a link with increased screen time, especially with handheld devices. At the same time, Generation Zers are much more likely than their predecessors to be diagnosed with anxiety disorder and depression. While the connection between technology use and mental health is far from clear, it is possible that ‘the rise of the smartphone and social media have at least something to do with [the rise in mental health issues]’ (Twenge, 2017).

Should we be using more technology in class because learners say they want or need it? If we follow that logic, perhaps we should also be encouraging the consumption of fast food, energy drinks and Ritalin before and after lessons?

[1] Studies have been carried out in other geographical settings, including Europe (e.g. Triple-a-Team AG, 2016) and China (Tang, 2019).

References

Bennett, S., Maton, K. & Kervin, L. (2008). The ‘digital natives’ debate: a critical review of the evidence. British Journal of Educational Technology, 39 (5): pp. 775-786.

Hart, A. (2018). Against Generational Politics. Jacobin, 28 February 2018. https://jacobinmag.com/2018/02/generational-theory-millennials-boomers-age-history

Howe, N. & Strauss, W. (2000). Millennials Rising: The Next Great Generation. New York, NY: Vintage Books.

Jauregui, J., Watsjold, B., Welsh, L., Ilgen, J. S. & Robins, L. (2019). Generational “othering”: The myth of the Millennial learner. Medical Education, 54: pp. 60-65. https://onlinelibrary.wiley.com/doi/pdf/10.1111/medu.13795

Kaiser, D. (2016). Donald Trump, Stephen Bannon and the Coming Crisis in American National Life. Time, 18 November 2016. https://time.com/4575780/stephen-bannon-fourth-turning/

Kirschner, P.A. & De Bruyckere P. (2017). The myths of the digital native and the multitasker. Teaching and Teacher Education, 67: pp. 135-142. https://www.sciencedirect.com/science/article/pii/S0742051X16306692

Mohr, K. A. J. & Mohr, E. S. (2017). Understanding Generation Z Students to Promote a Contemporary Learning Environment. Journal on Empowering Teacher Excellence, 1 (1), Article 9 DOI: https://doi.org/10.15142/T3M05T

Onion, R. (2015). Against generations. Aeon, 19 May, 2015. https://aeon.co/essays/generational-labels-are-lazy-useless-and-just-plain-wrong

Pearson (2018). Beyond Millennials: The Next Generation of Learners. https://www.pearson.com/content/dam/one-dot-com/one-dot-com/global/Files/news/news-annoucements/2018/The-Next-Generation-of-Learners_final.pdf

Prensky, M. (2001). Digital natives, digital immigrants. On the Horizon, 9: pp. 1-6.

Reeves, T.C. (2008). Do Generational Differences Matter in Instructional Design? Athens, GA: University of Georgia, Department of Educational Psychology and Instructional Technology

Seemiller, C. & Grace, M. (2016). Generation Z Goes to College. San Francisco: Jossey-Bass.

Selwyn, N. (2009). The digital native – myth and reality. Perspectives, 61: pp. 364-379.

Strauss W. & Howe, N. (1991). Generations: The History of America’s Future, 1584 to 2069. New York, New York: HarperCollins.

Tang, F. (2019). A critical review of research on the work-related attitudes of Generation Z in China. Social Psychology and Society, 10 (2): pp. 19-28. Available at: https://psyjournals.ru/files/106927/sps_2019_n2_Tang.pdf

Thomas, M. (2011). Technology, Education, and the Discourse of the Digital Native: Between Evangelists and Dissenters. In Thomas, M. (ed.) (2011). Deconstructing Digital Natives: Young people, technology and the new literacies. London: Routledge. pp. 1-13.

Triple-a-Team AG. (2016). Generation Z Metastudie über die kommende Generation. Biglen, Switzerland. Available at: http://www.sprachenrat.bremen.de/files/aktivitaeten/Generation_Z_Metastudie.pdf

Twenge, J. M. (2017). iGen. New York: Atria Books

Williams, P., Rowlands, I. & Fieldhouse, M. (2008). The ‘Google Generation’ – myths and realities about young people’s digital information behaviour. In Nicholas, D. & Rowlands, I. (eds.) (2008). Digital Consumers. London: Facet Publishers.

I’m a sucker for meta-analyses, those aggregates of multiple studies that generate an effect size, and I am even fonder of meta-meta-analyses. I skip over the boring stuff about inclusion criteria and statistical procedures and zoom in on the results and discussion. I’ve pored over Hattie (2009) and, more recently, Dunlosky et al. (2013), and quoted both more often than is probably healthy. Hardly surprising, then, that I was eager to read Luke Plonsky and Nicole Ziegler’s ‘The CALL–SLA interface: insights from a second-order synthesis’ (Plonsky & Ziegler, 2016), an analysis of nearly 30 meta-analyses (later whittled down to 14) looking at the impact of technology on L2 learning. The big question they were looking to find an answer to? How effective is computer-assisted language learning compared to face-to-face contexts?

Plonsky & Ziegler

Plonsky and Ziegler found that there are unequivocally ‘positive effects of technology on language learning’. In itself, this doesn’t really tell us anything, simply because there are too many variables. It’s a statistical soundbite, ripe for plucking by anyone with an edtech product to sell. Much more useful is to understand which technologies, used in which ways, are likely to have a positive effect on learning. It appears from Plonsky and Ziegler’s work that the use of CALL glosses (to support reading comprehension and vocabulary development) provides the strongest evidence of technology’s positive impact on learning. The finding is reinforced by the fact that this particular technology was the best-represented research area in the meta-analyses under review.

What we know about glosses

A gloss is ‘a brief definition or synonym, either in L1 or L2, which is provided with [a] text’ (Nation, 2013: 238). Glosses can take many forms (e.g. annotations in the margin or at the foot of a printed page), but electronic or CALL glossing is ‘an instant look-up capability – dictionary or linked’ (Taylor, 2006; 2009), which is becoming increasingly standard in on-screen reading. One of the most widely used is probably the translation function in Microsoft Word: here’s the French gloss for the word ‘gloss’.

Language learning tools and programs are making increasing use of glosses. Here are two examples. The first is Lingro, a dictionary tool that learners can have running alongside any webpage: clicking on a word brings up a dictionary entry, and the word can then be exported into a wordlist which can be practised with spaced repetition software. The example here is using the English-English dictionary, but a number of bilingual pairings are available. The second is from Bliu Bliu, a language learning app that I unkindly reviewed here.


So, what did Plonsky and Ziegler discover about glosses? There were two key takeaways:

  • both L1 and L2 CALL glossing can be beneficial to learners’ vocabulary development (Taylor, 2006, 2009, 2013)
  • CALL / electronic glosses lead to more learning gains than paper-based glosses (p.22)

On the surface, this might seem uncontroversial, but if you take a good look at the three examples (above) of online glosses, you’ll be thinking that something is not quite right here. Lingro’s gloss is a fairly full dictionary entry: it contains too much information for the purpose of a gloss. Cognitive Load Theory suggests that ‘new information be provided concisely so as not to overwhelm the learner’ (Khezrlou et al., 2017: 106): working out which definition is relevant here (the appropriate definition is actually the sixth in this list) will overwhelm many learners and interfere with the process of reading … which the gloss is intended to facilitate. In addition, the language of the definitions is more difficult than the defined item. Cognitive load is, therefore, further increased. Lingro needs to use a decent learner’s dictionary (with a limited defining vocabulary), rather than relying on the free Wiktionary.

Nation (2013: 240) cites research which suggests that a gloss is most effective when it provides a ‘core meaning’ which users will have to adapt to what is in the text. This is relatively unproblematic from a technological perspective, but few glossing tools actually do this. The alternative is to use NLP tools to identify the context-specific meaning: our ability to do this is improving all the time, but remains some way short of total accuracy. At the very least, NLP tools are needed to identify the part of speech (which will increase the probability of hitting the right meaning). Bliu Bliu gets things completely wrong, confusing the verb and the adjective ‘own’.
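To make this concrete, here is a minimal sketch, using the open-source spaCy library, of the kind of part-of-speech check that could steer a glossing tool towards the right entry. It is my own illustration, not code from Lingro, Bliu Bliu or any of the studies cited here, and the example sentences are invented.

# A minimal sketch (my own illustration, not any tool's actual code) of using
# part-of-speech tagging to choose between gloss entries for an ambiguous word.
# Assumes spaCy and its small English model are installed:
#   pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

def gloss_key(sentence: str, target: str) -> str:
    """Return a lemma + part-of-speech key for the target word in its sentence."""
    for token in nlp(sentence):
        if token.text.lower() == target.lower():
            return f"{token.lemma_.lower()}|{token.pos_}"
    return target.lower()

# The verb and the adjective 'own' should point to different glosses.
print(gloss_key("They own a small flat near the station.", "own"))           # e.g. own|VERB
print(gloss_key("Learners should form their own view of the text.", "own"))  # e.g. own|ADJ

Even this crude lemma-plus-POS key would be enough to keep the verb and the adjective ‘own’ apart; full word sense disambiguation, of course, is a much harder problem.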

Both Lingro and Bliu Bliu fail to meet the first requirement of a gloss: ‘that it should be understood’ (Nation, 2013: 239). Neither is likely to contribute much to the vocabulary development of learners. We will need to modify Plonsky and Ziegler’s conclusions somewhat: they are contingent on the quality of the glosses. This is not, however, something that can be assumed … as will be clear from even the most cursory look at the language learning tools that are available.

Nation (2013: 447) also cites research that ‘learning is generally better if the meaning is written in the learner’s first language. This is probably because the meaning can be easily understood and the first language meaning already has many rich associations for the learner. Laufer and Shmueli (1997) found that L1 glosses are superior to L2 glosses in both short-term and long-term (five weeks) retention and irrespective of whether the words are learned in lists, sentences or texts’. Not everyone agrees, and a firm conclusion either way is probably not possible: learner variables (especially learner preferences) preclude anything conclusive, which is why I’ve highlighted Nation’s use of the word ‘generally’. If we have a look at Lingro’s bilingual gloss, I think you’ll agree that the monolingual and bilingual glosses are equally unhelpful, equally unlikely to lead to better learning, whether it’s vocabulary acquisition or reading comprehension.

The issues I’ve just discussed illustrate the complexity of the ‘glossing’ question, but they only scratch the surface. I’ll dig a little deeper.

1 Glosses are only likely to be of value to learning if they are used selectively. Nation (2013: 242) suggests that ‘it is best to assume that the highest density of glossing should be no more than 5% and preferably around 3% of the running words’. Online glosses make the process of look-up extremely easy. This is an obvious advantage over look-ups in a paper dictionary, but there is a real risk, too, that the ease of online look-up encourages unnecessary look-ups. More clicks do not always lead to more learning. The value of glosses cannot, therefore, be considered independently of the level (i.e. appropriacy) of the text that they are being used with.

2 A further advantage of online glosses is that they can offer a wide range of information, e.g. pronunciation, L1 translation, L2 definition, visuals, example sentences. The review of literature by Khezrlou et al. (2017: 107) suggests that ‘multimedia glosses can promote vocabulary learning but uncertainty remains as to whether they also facilitate reading comprehension’. Barcroft (2015), however, warns that pictures may help learners with meaning, but at the cost of retention of word form, and the research of Boers et al. (2017) did not find evidence to support the use of pictures. Even if we were to accept the proposition that pictures might be helpful, we would need to enter two caveats. First, the amount of multimodal support should not lead to cognitive overload. Second, pictures need to be clear and appropriate: a condition that is rarely met in online learning programs. The quality of multimodal glosses is more important than their inclusion / exclusion.

3 It’s a commonplace to state that learners will learn more if they are actively engaged or involved in the learning, rather than simply (receptively) looking up a gloss. So, it has been suggested that cognitive engagement can be stimulated by turning the glosses into a multiple-choice task, and a fair amount of research has investigated this possibility. Barcroft (2015: 143) reports research that suggests that ‘multiple-choice glosses [are] more effective than single glosses’, but Nation (2013: 246) argues that ‘multiple choice glosses are not strongly supported by research’. Basically, we don’t know, and even if we have replication studies to re-assess the benefits of multimodal glosses (as advocated by Boers et al., 2017), it is again likely that learner variables will make it impossible to reach a firm conclusion.
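For the sake of concreteness, here is a minimal sketch of what a gloss presented as a multiple-choice task might look like. The mini-dictionary and the wording of the glosses are invented, and the sketch makes no claim to reproduce the designs used in the research just mentioned.

# A minimal sketch (my own illustration) of a gloss presented as a multiple-choice
# task: the learner picks the definition that fits the clicked word, rather than
# passively reading a single gloss.
import random

# Invented mini-dictionary of core-meaning glosses.
GLOSSES = {
    "gloss": "a brief definition or synonym provided with a text",
    "baton": "a short stick passed from one runner to the next in a relay race",
    "myopia": "short-sightedness; difficulty seeing distant things clearly",
    "cohort": "a group of people who share a characteristic, such as year of birth",
}

def multiple_choice_gloss(target: str, n_options: int = 3):
    """Return the correct gloss for the target word plus shuffled distractors."""
    correct = GLOSSES[target]
    distractors = random.sample(
        [g for word, g in GLOSSES.items() if word != target], n_options - 1
    )
    options = distractors + [correct]
    random.shuffle(options)
    return correct, options

correct, options = multiple_choice_gloss("gloss")
for number, option in enumerate(options, 1):
    print(f"{number}. {option}")
# The learner's choice is then checked against `correct`.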

Learning from meta-analyses

Discussion of glosses is not new. Back in the late 19th century, ‘most of the Reform Movement teachers, took the view that glossing was a sensible technique’ (Howatt, 2004: 191). Sensible, but probably not all that important in the broader scheme of language learning and teaching. Online glosses offer a number of potential advantages, but there is a huge number of variables that need to be considered if the potential is to be realised. In essence, I have been arguing that asking whether online glosses are more effective than print glosses is the wrong question. It’s not a question that can provide us with a useful answer. When you look at the details of the research that has been brought together in the meta-analysis, you simply cannot conclude that there are unequivocally positive effects of technology on language learning, if the most positive effects are to be found in the digital variation of an old sensible technique.

Interesting and useful as Plonsky and Ziegler’s study is, I think it needs to be treated with caution. More generally, we need to be cautious about using meta-analyses and effect sizes. Mura Nava has a useful summary of an article by Adrian Simpson (Simpson, 2017) that looks at inclusion criteria and statistical procedures and warns us that we cannot necessarily assume that the findings of meta-meta-analyses are educationally significant. More directly related to technology and language learning, Boulton’s paper (Boulton, 2016) makes a similar point: ‘Meta-analyses need interpreting with caution: in particular, it is tempting to seize on a single figure as the ultimate answer to the question: Does it work? […] More realistically, we need to look at variation in what works’.
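For readers who, like me, usually skip the statistical procedures: the ‘single figure’ in question is typically a standardised effect size such as Cohen’s d. The sketch below, with invented scores, shows how such a figure is produced, and how much variation it compresses into one number.

# A minimal sketch (my own illustration, with invented data) of Cohen's d, the kind
# of single-figure effect size that meta-analyses aggregate: the difference between
# two group means expressed in units of their pooled standard deviation.
from math import sqrt
from statistics import mean, stdev

def cohens_d(treatment: list, control: list) -> float:
    """Standardised mean difference between two groups, using the pooled SD."""
    n1, n2 = len(treatment), len(control)
    s1, s2 = stdev(treatment), stdev(control)
    pooled_sd = sqrt(((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2) / (n1 + n2 - 2))
    return (mean(treatment) - mean(control)) / pooled_sd

# Invented post-test vocabulary scores for a CALL-gloss group and a paper-gloss group.
call_glosses = [14, 16, 15, 18, 17, 13, 16]
paper_glosses = [13, 15, 14, 16, 14, 12, 15]
print(round(cohens_d(call_glosses, paper_glosses), 2))  # one number, many hidden variables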

For me, the greatest value in Plonsky and Ziegler’s paper was nothing to do with effect sizes and big answers to big questions. It was the bibliography … and the way it forced me to be rather more critical about meta-analyses.

References

Barcroft, J. 2015. Lexical Input Processing and Vocabulary Learning. Amsterdam: John Benjamins

Boers, F., Warren, P., He, L. & Deconinck, J. 2017. ‘Does adding pictures to glosses enhance vocabulary uptake from reading?’ System 66: 113 – 129

Boulton, A. 2016. ‘Quantifying CALL: significance, effect size and variation’ in S. Papadima-Sophocleus, L. Bradley & S. Thouësny (eds.) CALL Communities and Culture – short papers from Eurocall 2016 pp.55 – 60 http://files.eric.ed.gov/fulltext/ED572012.pdf

Dunlosky, J., Rawson, K.A., Marsh, E.J., Nathan, M.J. & Willingham, D.T. 2013. ‘Improving Students’ Learning With Effective Learning Techniques’ Psychological Science in the Public Interest 14 / 1: 4 – 58

Hattie, J.A.C. 2009. Visible Learning. Abingdon, Oxon.: Routledge

Howatt, A.P.R. 2004. A History of English Language Teaching 2nd edition. Oxford: Oxford University Press

Khezrlou, S., Ellis, R. & Sadeghi, K. 2017. ‘Effects of computer-assisted glosses on EFL learners’ vocabulary acquisition and reading comprehension in three learning conditions’ System 65: 104 – 116

Laufer, B. & Shmueli, K. 1997. ‘Memorizing new words: Does teaching have anything to do with it?’ RELC Journal 28 / 1: 89 – 108

Nation, I.S.P. 2013. Learning Vocabulary in Another Language. Cambridge: Cambridge University Press

Plonsky, L. & Ziegler, N. 2016. ‘The CALL–SLA interface: insights from a second-order synthesis’ Language Learning & Technology 20 / 2: 17 – 37

Simpson, A. 2017. ‘The misdirection of public policy: Comparing and combining standardised effect sizes’ Journal of Education Policy, 32 / 4: 450-466

Taylor, A. M. 2006. ‘The effects of CALL versus traditional L1 glosses on L2 reading comprehension’. CALICO Journal, 23, 309–318.

Taylor, A. M. 2009. ‘CALL-based versus paper-based glosses: Is there a difference in reading comprehension?’ CALICO Journal, 23, 147–160.

Taylor, A. M. 2013. ‘CALL versus paper: In which context are L1 glosses more effective?’ CALICO Journal, 30, 63-8