Posts Tagged ‘Pearson’

I noted in a recent post about current trends in ELT that mindfulness has been getting a fair amount of attention recently. Here are three recent examples:

  • Pearson recently produced the Pearson Experiences: A Pocket Guide to Mindfulness, written by Amy Malloy. Amy has also written a series of blog posts for Pearson on the topic, and she is a Pearson-sponsored speaker (Why use mindfulness in the classroom?) at the English Australia Ed-Tech SIG Online Symposium this week.
  • Russell Stannard has written two posts for Express Publishing (here and here).
  • Sarah Mercer and Tammy Gregersen’s new book, ‘Teacher Wellbeing’ (OUP, 2020) includes a section in which they recommend mindfulness practices to teachers as a buffer against stress and as a way to promote wellbeing.

The claims

Definitions of mindfulness often vary slightly, but the following from Amy Malloy is typical: ‘mindfulness is about the awareness that comes from consciously focussing on the present moment’. Claims for the value of mindfulness practices also vary slightly. Besides the general improvements to wellbeing suggested by Sarah and Tammy, attention, concentration and resilience are also commonly mentioned.

Amy: [Mindfulness] develops [children’s] brains, which in turn helps them find it easier to calm down and stay calm. … It changes our brains for the better. …. [Mindfulness] helps children concentrate more easily on classroom activities.

Russell: Students going through mindfulness training have increased levels of determination and willpower, they are less likely to give up if they find something difficult … Mindfulness has been shown to improve concentration. Students are able to study for longer periods of time and are more focused … Studies have shown that practicing mindfulness can lead to reduced levels of anxiety and stress.

In addition to the behavioural changes that mindfulness can supposedly bring about, both Amy and Russell refer to neurological changes:

Amy: Studies have shown that the people who regularly practise mindfulness develop the areas of the brain associated with patience, compassion, focus, concentration and emotional regulation.

Russell: At the route of our current understanding of mindfulness is brain plasticity. … in probably the most famous neuroimaging research project, scientists took a group of people and found that by doing a programme of 8 weeks of mindfulness training based around gratitude, they could actually increase the size of the areas of the brain generally associated with happiness.

Supporting evidence

In her pocket guide for Pearson, Amy provides no references to support her claims.

In Russell’s first post, he links to a piece of research which looked at the self-reported psychological impact of a happiness training programme developed by a German cabaret artist and talk show host. The programme wasn’t specifically mindfulness-oriented, so tells us nothing about mindfulness, but it is also highly suspect as a piece of research, not least because one of the co-authors is the cabaret artist himself. His second link is to an article about human attention, a long-studied area of psychology, but this has nothing to do with mindfulness, although Russell implies that there is a connection. His third link is to a very selective review of research into mindfulness, written by two mindfulness enthusiasts. It’s not so much a review of research as a selection of articles which support mindfulness advocacy.

In his second post, Russell links to a review of mindfulness-based interventions (MBIs) in education. Appearing in the ‘Mindfulness’ journal, it is obviously in broad support of MBIs, but its conclusions are hedged: ‘Research on the neurobiology of mindfulness in adults suggests that sustained mindfulness practice can ….’ and ‘mindfulness training holds promise for being one such intervention for teachers.’ His second link is to a masterpiece of pseudo-science delivered by Joe Dispenza, author of many titles including ‘Becoming Supernatural: How Common People are Doing the Uncommon’ and ‘Breaking the Habit of Being Yourself’. Russell’s third link is to an interview with Matthieu Ricard, one of the Dalai Lama’s translators. Interestingly, though not in this interview, Ricard is very dismissive of secular mindfulness (‘Buddhist meditation without the Buddhism’). His fourth link is to a video presentation about mindfulness from Diana Winston of UCLA. The presenter doesn’t give citations for the research she mentions (so I can’t follow them up): instead, she plugs her own book.

Sarah and Tammy’s three references are not much better. The first is to a self-help book called ‘Every Teacher Matters: Inspiring Well-Being through Mindfulness’ by K. Lovewell (2012), whose other work includes ‘The Little Book of Self-Compassion: Learn to be your own Best Friend’. The second (Creswell, J. D. & Lindsay, E. K. (2014). How does mindfulness training affect health? A mindfulness stress buffering account. Current Directions in Psychological Science 23 (6): pp. 401-407) is more solid, but a little dated now. The third (Garland, E., Gaylord, S.A. & Fredrickson, B. L. (2011). Positive Reappraisal Mediates the Stress-Reductive Effects of Mindfulness: An Upward Spiral Process. Mindfulness 2 (1): pp. 59-67) is an interesting piece, but of limited value since there was no control group in the research and it tells us nothing about MBIs per se.

The supporting evidence provided by these writers for the claims they make is thin, to say the least. It is almost as if the truth of the claims were self-evident; for these writers (all of whom use mindfulness practices themselves), personal experience clearly provides authentication enough. But, not having had an epiphany myself and being somewhat reluctant to roll a raisin around my mouth, concentrating on its texture and flavours, fully focussing on the process of eating it (as recommended by Sarah and Tammy), I will, instead, consciously focus on the present moment of research.

Mindfulness and research

The first thing to know is that there has been a lot of research into mindfulness in recent years. The second thing to know is that much of it is poor quality. Here’s why:

  • There is no universally accepted technical definition of ‘mindfulness’ nor any broad agreement about detailed aspects of the underlying concept to which it refers (Van Dam, N. T. et al. (2018). Mind the Hype: A Critical Evaluation and Prescriptive Agenda for Research on Mindfulness and Meditation. Perspectives on Psychological Science 13: pp. 36 – 61)
  • To date, there are at least nine different psychometric questionnaires, all of which define and measure mindfulness differently (Purser, R.E. (2019). McMindfulness. Repeater Books. p.128)
  • Mindfulness research tends to rely on self-reporting, which is notoriously unreliable.
  • The majority of studies did not utilize randomized control groups (Goyal, M., Singh, S., Sibinga, E.S., et al. (2014). Meditation Programs for Psychological Stress and Well-being: A Systematic Review and Meta-analysis. JAMA Internal Medicine. doi:10.1001/jamainternmed.2013.13018).
  • Early meditation studies were mostly cross-sectional studies: that is, they compared data from a group of meditators with data from a control group at one point in time. A cross-sectional study design precludes causal attribution. (Tang, Y., Hölzel, B. & Posner, M. (2015). The neuroscience of mindfulness meditation. Nature Reviews Neuroscience 16, 213–225)
  • Sample sizes tend to be small and there is often no active control group. There are few randomized controlled trials (Dunning, D.L., Griffiths, K., Kuyken, W., Crane, C., Foulkes, L., Parker, J. & Dalgleish, T. (2019). Research Review: The effects of mindfulness-based interventions on cognition and mental health in children and adolescents – a meta-analysis of randomized controlled trials. Journal of Child Psychology and Psychiatry, 60: pp. 244-258. doi:10.1111/jcpp.12980).
  • There is a relatively strong bias towards the publication of positive or significant results (Coronado-Montoya, S., Levis, A.W., Kwakkenbos, L., Steele, R.J., Turner, E.H. & Thombs, B.D. (2016). Reporting of Positive Results in Randomized Controlled Trials of Mindfulness-Based Mental Health Interventions. PLoS ONE 11(4): e0153220. https://doi.org/10.1371/journal.pone.0153220)
  • Research methodology has not become significantly more rigorous in recent years (Goldberg, S.B., Tucker, R.P., Greene, P.A., Simpson, T.L., Kearney, D.J. & Davidson, R.J. (2017). Is mindfulness research methodology improving over time? A systematic review. PLoS ONE 12(10): e0187298).


The overall quality of the research into mindfulness is so poor that a group of fifteen researchers came together to write a paper entitled ‘Mind the Hype: A Critical Evaluation and Prescriptive Agenda for Research on Mindfulness and Meditation’ (Van Dam, N. T. et al. (2018). Perspectives on Psychological Science 13: pp. 36-61).

So, the research is problematic and replication is needed, but it does broadly support the claim that mindfulness meditation exerts beneficial effects on physical and mental health, and cognitive performance (Tang, Y., Hölzel, B. & Posner, M. (2015). The neuroscience of mindfulness meditation. Nature Reviews Neuroscience 16, 213–225). The word ‘broadly’ is important here. As one of the leaders of the British Mindfulness in Schools Project (which has trained thousands of teachers in the UK) puts it, ‘research on mindfulness in schools is still in its infancy, particularly in relation to impacts on behaviour, academic performance and physical health. It can best be described as “promising” and “worth trying”’ (Weare, K. (2018). Evidence for the Impact of Mindfulness on Children and Young People. The Mindfulness in Schools Project). We don’t know what kinds of MBIs are most effective, what kind of ‘dosage’ should be administered, what kinds of students it is (and is not) appropriate for, whether instructor training is significant or what cost-benefits it might bring. In short, there is more that we do not know than we know.

One systematic review, for example, found that MBIs had ‘small, positive effects on cognitive and socioemotional processes but these effects were not seen for behavioral or academic outcomes’. What happened to the promises of improved concentration, calmer behaviour and willpower? The review concludes that ‘the evidence from this review urges caution in the widespread adoption of MBIs and encourages rigorous evaluation of the practice should schools choose to implement it’ (Maynard, B. R., Solis, M., Miller, V. & Brendel, K. E. (2017). Mindfulness-based interventions for improving cognition, academic achievement, behavior and socio-emotional functioning of primary and secondary students. A Campbell Systematic Review 2017:5).

What about the claims for neurological change? As a general rule, references to neuroscience by educators should be treated with skepticism. Whilst it appears that ‘mindfulness meditation might cause neuroplastic changes in the structure and function of brain regions involved in regulation of attention, emotion and self-awareness’ (Tang, Y., Hölzel, B. & Posner, M. (2015). The neuroscience of mindfulness meditation. Nature Reviews Neuroscience 16, 213–225), this doesn’t really tell us very much. A complex mental state like mindfulness ‘is likely to be supported by the large-scale brain networks’ (ibid) and insights derived from fMRI scans of particular parts of the brain provide us with, at best, only a trivial understanding of what is going on. Without a clear definition of what mindfulness actually is, it is going to be some time before we unravel the neural mechanisms underpinning it. If, in fact, we ever do. By way of comparison, you might be interested in reading about neuroscientific studies into prayer, which also appears to correlate with enhanced wellbeing.

Rather than leaving things with the research, I’d like to leave you with a few more short mindfulness raisins to chew on.

Mindfulness and money

As Russell says in his blog post, ‘research in science doesn’t come out of a vacuum’. Indeed, it tends to follow the money. It is estimated that mindfulness is now ‘a $4 billion industry’ (Purser, R.E. (2019). McMindfulness. Repeater Books. p.13): ‘More than 100,000 books for sale on Amazon have a variant of ‘mindfulness’ in their title, touting the benefits of Mindful Parenting, Mindful Eating, Mindful Teaching, Mindful Therapy, Mindful Leadership, Mindful Finance, a Mindful Nation, and Mindful Dog Owners, to name just a few. There is also The Mindfulness Coloring Book, a bestselling genre in itself. Besides books, there are workshops, online courses, glossy magazines, documentary films, smartphone apps, bells, cushions, bracelets, beauty products and other paraphernalia, as well as a lucrative and burgeoning conference circuit’.

It is precisely because so much money is at stake that so much research has been funded. More proof is desperately needed, but it is sadly unforthcoming. Meanwhile, in the immortal words of Kayleigh McEnany, ‘science should not stand in the way.’

Mindfulness and the individual

Mindfulness may be aptly described as a ‘technology of the self’. Ronald Purser, the author of ‘McMindfulness’, puts it like this: ‘Rather than discussing how attention is monetized and manipulated by corporations such as Google, Facebook, Twitter and Apple, [mindfulness advocates] locate crisis in our minds. It is not the nature of the capitalist system that is inherently problematic; rather, it is the failure of individuals to be mindful and resilient in a precarious and uncertain economy. Then they sell us solutions that make us contented mindful capitalists’.

It is this focus on the individual that makes it so appealing to right-wing foundations (e.g. the Templeton Foundation) that fund the research into mindfulness. For more on this topic, see my post about grit.

Mindfulness and religion

It is striking how often mindfulness advocates, like Amy, feel the need to insist that mindfulness is not a religious practice. Historically, of course, mindfulness comes directly from a Buddhist tradition, but in its present Western incarnation, it is a curious hybrid. Jon Kabat-Zinn, who, more than anyone else, has transformed mindfulness into a marketable commodity, is profoundly ambiguous on the topic. Buddhists, like Matthieu Ricard or David Forbes (author of ‘Mindfulness and its Discontents’, Fernwood Publishing, 2019), have little time for the cultural appropriation of the Pali term ‘Sati’, especially when mindfulness is employed by the American military for sniper training. Others, like Goldie Hawn, whose MindUP programme sells well in the US, are quite clear about their religious affiliation and their desire to bring Buddhism into schools through the back door.

I personally find it hard to see the banging of Tibetan bowls as anything other than a religious act, but I am less bothered by this than those American school districts who saw MBIs as ‘covert religious indoctrination’ and banned them. Having said that, why not promote more prayer in schools if the ‘neuroscience’ supports it?

Image: ‘Clare is a busy teacher’, from Andrew Percival, inspired by The Ladybird Book of Mindfulness and similar titles.

Around 25 years ago, when I worked at International House London, I used to teach a course called ‘Current Trends in ELT’. I no longer have records of the time so I can’t be 100% sure what was included in the course, but task-based learning, the ‘Lexical Approach’, the use of corpora, English as a Lingua Franca, learner autonomy / centredness, reflective practice and technology (CALL and CD-ROMs) were all probably part of it. I see that IH London still offers this course (next available course in January 2021) and I am struck by how similar the list of contents is. Only ‘emerging language’, CLIL, DOGME and motivation are clearly different from the menu of 25 years ago.

The term ‘current trends’ has always been a good hook to sell a product. Each year, any number of ELT conferences choose it as their theme. Coursebooks, like ‘Cutting Edge’ or ‘Innovations’, suggest in their titles something fresh and appealing. And, since 2003, the British Council has used its English Language Teaching Innovation Awards to position itself as forward-thinking and innovative.

You could be forgiven for wondering what is especially innovative about many of the ELTon award-winners, or indeed, why neophilia actually matters at all. The problem, in a relatively limited world like language teaching, is that only so much innovation is either possible or desirable.

A year after the ELTons appeared, Adrian Underhill wrote an article about ‘Trends in English Language Teaching Today’. Almost ten years after I was teaching ‘current trends’, Adrian’s list included the use of corpora, English as a Lingua Franca, reflective practice and learner-centredness. His main guess was that practitioners would be working more with ‘the fuzzy, the unclear, the unfinished’. He hadn’t reckoned on the influence of the CEFR, Pearson’s Global Scale of English and our current obsession with measuring everything!

Jump just over ten years and Chia Suan Chong offered a listicle of ‘Ten innovations that have changed English language teaching’ for the British Council. Most of these were technological developments (platforms, online CPD, mobile learning) but a significant newcomer to the list was ‘soft skills’ (especially critical thinking).

Zooming forward nearer to the present, Chia then offered her list of ‘Ten trends and innovations in English language teaching for 2018’ in another post for the British Council. English as a Lingua Franca was still there, but gone were task-based learning and the ‘Lexical Approach’, corpora, learner-centredness and reflective practice. In their place came SpLNs, multi-literacies, inquiry-based learning and, above all, more about technology – platforms, mobile and blended learning, gamification.

I decided to explore current ‘current trends’ by taking a look at the last twelve months of blog posts from the four biggest UK ELT publishers. Posts such as these are interesting in two ways: (1) they are an attempt to capture what is perceived as ‘new’ and therefore more likely to attract clicks, and (2) they are also an attempt to set an agenda – they reflect what these commercial organisations would like us to be talking and thinking about. The posts reflect reasonably well the sorts of topics that are chosen for webinars, whether directly hosted or sponsored.

The most immediate and unsurprising observation is that technology is ubiquitous. No longer one among a number of topics, technology now informs (almost) all other topics. Before I draw a few conclusions, here are some more detailed notes.

Pearson English blog

Along with other publishers, Pearson were keen to show how supportive of teachers they were, and the months following the start of the pandemic saw more blog posts than usual that did not focus on particular Pearson products. Over the last twelve months as a whole, Pearson made strenuous efforts to draw attention to their Global Scale of English and the Pearson Test of English. Assessment of one kind or another was never far away. But the other big themes of the last twelve months have been ‘soft’ / 21st century skills (creativity, critical thinking, collaboration, leadership, etc.), and aspects of social and emotional learning (especially engagement / motivation, anxiety and mindfulness). Three other topics also featured more than once: mediation, personalization and SpLN (dyslexia).

OUP ELT Global blog

The OUP blog has, on the whole, longer, rather more informative posts than Pearson’s. They also tend to be less obviously product-oriented, and fewer are written by in-house marketing people. The main message that comes across is the putative importance of ‘soft / 21st century skills’, which Oxford likes to call ‘global skills’ (along with the assessment of these skills). One post manages to pack three buzzwords into one title: ‘Global Skills – Create Empowered 21st Century Learners’. As with Pearson, ‘engagement / engaging’ is probably the most over-used word in the last twelve months. In the social and emotional area, OUP focuses on teacher well-being, rather than mindfulness (although, of course, mindfulness is a path to this well-being). There is also an interest in inquiry-based learning, literacies (digital and assessment), formative assessment and blended learning.

Macmillan English blog

The Macmillan English ‘Advancing Learning’ blog is a much less corporate beast than the Pearson and OUP blogs. There have been relatively few posts in the last twelve months, and no clear message emerges. The last year has seen posts on the Image Conference, preparing for IELTS, student retention, extensive reading, ELF pronunciation, drama, mindfulness, Zoom, EMI, and collaboration skills.

CUP World of Better Learning blog

The CUP blog, like Macmillan’s, is an eclectic affair, with no clearly discernible messages beyond supporting teachers with tips and tools to deal with the shift to online teaching. Motivation and engagement are fairly prominent (with Sarah Mercer contributing both here and at OUP). Well-being (and the inevitable nod to mindfulness) gets a look-in. Other topics include SpLNs, video and ELF pronunciation (with Laura Patsko contributing both here and at the Macmillan site).

Macro trends

My survey has certainly not been ‘scientific’, but I think it allows us to note a few macro-trends. Here are my thoughts:

  • Measurement of language and skills (both learning and teaching skills) has become central to many of our current concerns.
  • We are now much less interested in issues which are unique to language learning and teaching (e.g. task-based learning, the ‘Lexical Approach’, corpora) than we used to be.
  • Current concerns reflect much more closely the major concerns of general education (measurement, 21st century skills, social-emotional learning) than they used to. It is no coincidence that these reflect the priorities of those who shape global educational policy (OECD, World Bank, etc.).
  • 25 years ago, current trends were more like zones of interest. They were areas to explore, research and critique further. As such, we might think of them as areas of exploratory practice (‘Exploratory Practice’ itself was a ‘current trend’ in the mid 1990s). Current ‘current trends’ are much more enshrined. They are things to be implemented, and exploration of them concerns the ‘how’, not the ‘whether’.

The idea of ‘digital natives’ emerged at the turn of the century, was popularized by Marc Prensky (2001), and rapidly caught the public imagination, especially the imagination of technology marketers. Its popularity has dwindled a little since then, but the term is still widely used. Alternative terms include ‘Generation Y’, ‘Millennials’ and the ‘Net Generation’, definitions of which vary slightly from writer to writer. Two examples of the continued currency of the term ‘digital native’ are a 2019 article on the Pearson Global Scale of English website entitled ‘Teaching digital natives to become more human’ and an article in The Pie News (a trade magazine for ‘professionals in international education’), extolling the virtues of online learning for digital natives in times of Covid-19.

Key to understanding ‘digital natives’, according to users of the term, is their fundamental differences from previous generations. They have grown up immersed in technology, have shorter attention spans, and are adept at multitasking. They ‘are no longer the people our educational system was designed to teach’ (Prensky, 2001), so educational systems must change in order to accommodate their needs.

The problem is that ‘digital natives’ are a myth. Prensky’s ideas were not based on any meaningful research: his observations and conclusions, seductive though they might be, were no more than opinions. Kirschner and De Bruyckere (2017) state the research consensus:

There is no such thing as a digital native who is information-skilled simply because (s)he has never known a world that was not digital. […] One of the alleged abilities of students in this generation, the ability to multitask, does not exist and that designing education that assumes the presence of this ability hinders rather than helps learning.

This is neither new (see Bennett et al., 2008) nor contentious. Almost ten years ago, Thomas (2011:3) reported that ‘some researchers have been asked to remove all trace of the term from academic papers submitted to conferences in order to be seriously considered for inclusion’. There are reasons, he added, to consider some uses of the term nothing more than technoevangelism (Thomas, 2011:4). Perhaps someone should tell Pearson and the Pie News? Then again, perhaps they wouldn’t care.

The attribution of particular characteristics to ‘digital natives’ / ‘Generation Y’ / ‘Millennials’ is an application of Generation Theory. This can be traced back to a 1928 paper by Karl Mannheim, called ‘Das Problem der Generationen’ which grew in popularity after being translated into English in the 1950s. According to Jauregui et al (2019), the theory was extensively debated in the 1960s and 1970s, but then disappeared from academic study. The theory was not supported by empirical research, was considered to be overly schematised and too culturally-bound, and led inexorably to essentialised and reductive stereotypes.

But Generation Theory gained a new lease of life in the 1990s, following the publication of ‘Generations’ by William Strauss and Neil Howe. The book was so successful that it spawned a slew of other titles leading up to ‘Millennials Rising’ (Howe & Strauss, 2000). This popularity has continued to the present, with fans including Steve Bannon (Kaiser, 2016), who made an ‘apocalyptical and polemical’ documentary film about the 2007-2008 financial crisis entitled ‘Generation Zero’. The work of Strauss and Howe has been dismissed as ‘more popular culture than social science’ (Jauregui et al., 2019: 63), and in much harsher terms in two fascinating articles in Jacobin (Hart, 2018) and Aeon (Onion, 2015). The sub-heading of the latter is ‘generational labels are lazy, useless and just plain wrong’. Although scholars dismiss Generation Theory as pseudo-science, its popularity helps explain why Prensky’s paper about ‘digital natives’ fell on such fertile ground. The saying, often falsely attributed to Mark Twain, that we should ‘never let the truth get in the way of a good story’ comes to mind.

But by the end of the first decade of this century, ‘digital natives’ had become problematic in two ways: not only did the term not stand up to close analysis, but it also no longer referred to the generational cohort that pundits and marketers wanted to talk about.

Around January 2018, use of the term ‘Generation Z’ began to soar, and is currently at its highest point ever in the Google Trends graph. As with ‘digital natives’, the precise birth dates of Generation Z vary from writer to writer. After 2001, according to the Cambridge dictionary; slightly earlier according to other sources. The cut-off point is somewhere between the mid and late 2010s. Other terms for this cohort have been proposed, but ‘Generation Z’ is the most popular.

William Strauss died in 2007 and Neil Howe was in his late 60s when ‘Generation Z’ became a thing, so there was space for others to take up the baton. The most successful have probably been Corey Seemiller and Meghan Grace, who, since 2016, have been churning out a book a year devoted to ‘Generation Z’. In the first of these (Seemiller & Grace, 2016), they were clearly keen to avoid some of the criticisms that had been levelled at Strauss and Howe, and they carried out research. This consisted of 1143 responses to a self-reporting questionnaire by students at US institutions of higher education. The survey also collected information about Kolb’s learning styles and multiple intelligences. With refreshing candour, they admit that the sample is not entirely representative of higher education in the US. And, since it only looked at students in higher education, it told us nothing at all about those who weren’t.

In August 2018, Pearson joined the party, bringing out a report entitled ‘Beyond Millennials: The Next Generation of Learners’. Conducted by the Harris Poll, the survey looked at 2,587 US respondents, aged between 14 and 40. The results were weighted for age, gender, race/ethnicity, marital status, household income, and education, so were rather more representative than the Seemiller & Grace research.

In ELT and educational references to ‘Generation Z’, research, of even the very limited kind mentioned above, is rarely cited. When it is, Seemiller and Grace feature prominently (e.g. Mohr & Mohr, 2017). Alternatively, even less reliable sources are used. In an ELT webinar entitled ‘Engaging Generation Z’, for example, information about the characteristics of ‘Generation Z’ learners is taken from an infographic produced by an American office furniture company.

But putting aside quibbles about the reliability of the information, and the fact that it most commonly[1] refers to Americans (who are not, perhaps, the most representative group in global terms), what do the polls tell us?

Despite claims that Generation Z are significantly different from their Millennial predecessors, the general picture that emerges suggests that differences are more a question of degree than substance. These include:

  • A preference for visual / video information over text
  • A preference for a variety of bite-sized, entertaining educational experiences
  • Short attention spans and zero tolerance for delay

All of these were identified in 2008 (Williams et al., 2008) as characteristics of the ‘Google Generation’ (a label which usually seems to span Millennials and Generation Z). There is nothing fundamentally different from Prensky’s description of ‘digital natives’. The Pearson report claimed that ‘Generation Z expects experiences both inside and outside the classroom that are more rewarding, more engaging and less time consuming. Technology is no longer a transformative phenomena for this generation, but rather a normal, integral part of life’. However, there is no clear disjuncture or discontinuity between Generation Z and Millennials, any more than there was between ‘digital natives’ and previous generations (Selwyn, 2009: 375). What has really changed is that the technology has moved on (e.g. YouTube was founded in 2005 and the first iPhone was released in 2007).

The discourse surrounding ‘Generation Z’ is now steadily finding its way into the world of English language teaching. The 2nd TESOL Turkey International ELT Conference took place last November with ‘Teaching Generation Z: Passing on the baton from K12 to University’ as its theme. A further gloss explained that the theme was ‘in reference to the new digital generation of learners with outstanding multitasking skills; learners who can process and absorb information within mere seconds and yet possess the shortest attention span ever’.


A few more examples … Cambridge University Press ran an ELT webinar entitled ‘Engaging Generation Z’ and Macmillan Education has a coursebook series called ‘Exercising English for Generation Z’. EBC, a TEFL training provider, ran a blog post in November last year, ‘Teaching English to generation Z students’. And EFL Magazine had an article, ‘Critical Thinking – The Key Competence For The Z Generation’, in February of this year.

The pedagogical advice that results from this interest in Generation Z seems to boil down to: ‘Accept the digital desires of the learners, use lots of video (i.e. use more technology in the classroom) and encourage multi-tasking’.

No one, I suspect, would suggest that teachers should not make use of topics and technologies that appeal to their learners. But recommendations to change approaches to language teaching, ‘based solely on the supposed demands and needs of a new generation of digital natives must be treated with caution’ (Bennett et al., 2008: 782). It is far from clear that generational differences (even if they really exist) are important enough ‘to be considered during the design of instruction or the use of different educational technologies – at this time, the weight of the evidence is negative’ (Reeves, 2008: 21).

Perhaps it would be more useful to turn away from surveys of attitudes and towards more fact-based research. Studies in both the US and the UK have found that myopia and other eye problems are rising fast among the Generation Z cohort, and that there is a link with increased screen time, especially with handheld devices. At the same time, Generation Zers are much more likely than their predecessors to be diagnosed with anxiety disorder and depression. While the connection between technology use and mental health is far from clear, it is possible that ‘the rise of the smartphone and social media have at least something to do with [the rise in mental health issues]’ (Twenge, 2017).

Should we be using more technology in class because learners say they want or need it? If we follow that logic, perhaps we should also be encouraging the consumption of fast food, energy drinks and Ritalin before and after lessons?

[1] Studies have been carried out in other geographical settings, including Europe (e.g. Triple-a-Team AG, 2016) and China (Tang, 2019).

References

Bennett, S., Maton, K. & Kervin, L. (2008). The ‘digital natives’ debate: a critical review of the evidence. British Journal of Educational Technology, 39 (5): pp. 775-786.

Hart, A. (2018). Against Generational Politics. Jacobin, 28 February 2018. https://jacobinmag.com/2018/02/generational-theory-millennials-boomers-age-history

Howe, N. & Strauss, W. (2000). Millennials Rising: The Next Great Generation. New York, NY: Vintage Books.

Jauregui, J., Watsjold, B., Welsh, L., Ilgen, J. S. & Robins, L. (2019). Generational “othering”: The myth of the Millennial learner. Medical Education, 54: pp. 60-65. https://onlinelibrary.wiley.com/doi/pdf/10.1111/medu.13795

Kaiser, D. (2016). Donald Trump, Stephen Bannon and the Coming Crisis in American National Life. Time, 18 November 2016. https://time.com/4575780/stephen-bannon-fourth-turning/

Kirschner, P.A. & De Bruyckere P. (2017). The myths of the digital native and the multitasker. Teaching and Teacher Education, 67: pp. 135-142. https://www.sciencedirect.com/science/article/pii/S0742051X16306692

Mohr, K. A. J. & Mohr, E. S. (2017). Understanding Generation Z Students to Promote a Contemporary Learning Environment. Journal on Empowering Teacher Excellence, 1 (1), Article 9 DOI: https://doi.org/10.15142/T3M05T

Onion, R. (2015). Against generations. Aeon, 19 May, 2015. https://aeon.co/essays/generational-labels-are-lazy-useless-and-just-plain-wrong

Pearson (2018). Beyond Millennials: The Next Generation of Learners. https://www.pearson.com/content/dam/one-dot-com/one-dot-com/global/Files/news/news-annoucements/2018/The-Next-Generation-of-Learners_final.pdf

Prensky, M. (2001). Digital natives, digital immigrants. On the Horizon, 9: pp. 1-6.

Reeves, T.C. (2008). Do Generational Differences Matter in Instructional Design? Athens, GA: University of Georgia, Department of Educational Psychology and Instructional Technology

Seemiller, C. & Grace, M. (2016). Generation Z Goes to College. San Francisco: Jossey-Bass.

Selwyn, N. (2009). The digital native-myth and reality. Perspectives, 61: pp. 364-379

Strauss W. & Howe, N. (1991). Generations: The History of America’s Future, 1584 to 2069. New York, New York: HarperCollins.

Tang, F. (2019). A critical review of research on the work-related attitudes of Generation Z in China. Social Psychology and Society, 10 (2): pp. 19-28. Available at: https://psyjournals.ru/files/106927/sps_2019_n2_Tang.pdf

Thomas, M. (2011). Technology, Education, and the Discourse of the Digital Native: Between Evangelists and Dissenters. In Thomas, M. (ed.) (2011). Deconstructing Digital Natives: Young people, technology and the new literacies. London: Routledge. pp. 1-13.

Triple-a-Team AG. (2016). Generation Z Metastudie über die kommende Generation. Biglen, Switzerland. Available at: http://www.sprachenrat.bremen.de/files/aktivitaeten/Generation_Z_Metastudie.pdf

Twenge, J. M. (2017). iGen. New York: Atria Books

Williams, P., Rowlands, I. & Fieldhouse, M. (2008). The ‘Google Generation’ – myths and realities about young people’s digital information behaviour. In Nicholas, D. & Rowlands, I. (eds.) (2008). Digital Consumers. London: Facet Publishers.

Images: a definition of ‘grit’ from Quartz at Work magazine, and the ‘Grit’ book cover.


Grit is on the up. You may have come across articles like ‘How to Be Gritty in the Time of COVID-19’ or ‘Rediscovering the meaning of grit during COVID-19’. If you still want more, there are new videos from Angela Duckworth herself where we can learn how to find our grit in the face of the pandemic.

Schools and educational authorities love grit. Its simple, upbeat message (‘Yes, you can’) has won over hearts and minds. Back in 2014, the British minister for education announced a £5 million plan to encourage teaching ‘character and resilience’ in schools – specifically looking at making Britain’s pupils ‘grittier’. The spending on grit hasn’t stopped since.

The publishers of Duckworth’s book paid a seven-figure sum to acquire the US rights, and sales have proved the wisdom of the investment. Her TED talk has had over 6.5 million views on YouTube, although it’s worth looking at the comments to see why many people have been watching it.

Image: YouTube comments on the TED talk.

The world of English language teaching, always on the lookout for a new bandwagon to jump onto, is starting to catch up with the wider world of education. Luke Plonsky, an eminent SLA scholar, specialist in meta-analyses and grit enthusiast, has a bibliography of grit studies related to L2 learning that he deems worthy of consideration. Here’s a summary, by year, of those publications. More details will follow in the next section.

Image: the publications in Plonsky’s bibliography, summarized by year.

We can expect interest in ‘grit’ to continue growing, and this may be accelerated by the publication this year of Engaging Language Learners in Contemporary Classrooms by Sarah Mercer and Zoltán Dörnyei. In this book, the authors argue that a ‘facilitative mindset’ is required for learner engagement. They enumerate five interrelated principles for developing a ‘facilitative mindset’: promote a sense of competence, foster a growth mindset, promote learners’ sense of ownership and control, develop proactive learners, and develop gritty learners. After a brief discussion of grit, they write: ‘Thankfully, grit can be learnt and developed’ (p.38).

Unfortunately, they don’t provide any evidence at all for this. Unfortunately, too, this oversight is easy to explain. Such evidence as there is does not lend unequivocal support to the claim. Two studies that should have been mentioned in this book are ‘Much ado about grit: A meta-analytic synthesis of the grit literature’ (Credé et al, 2017) and ‘What shall we do about grit? A critical review of what we know and what we don’t know’ (Credé, 2018). The authors found that ‘grit as it is currently measured does not appear to be particularly predictive of success and performance’ (Credé et al, 2017) and that there is no support for the claim that ‘grit is likely to be responsive to interventions’ (Credé, 2018). In the L2 learning context, Teimouri et al (2020) concluded that more research in SLA substantiating the role of grit in L2 contexts was needed before any grit interventions can be recommended.

It has to be said that such results are hardly surprising. If, as Duckworth claims, ‘grit’ is a combination of passion and persistence, how on earth can the passion part of it be susceptible to educational interventions? ‘If there is one thing that cannot be learned, it’s passion. A person can have it and develop it, but learn it? Sadly, not’. (De Bruyckere et al., 2020: 83)

Even Duckworth herself is not convinced. In an interview on a Freakonomics podcast, she states that she hopes it’s something people can learn, but also admits not having enough proof to confirm that they can (Kirschner & Neelen, 2016)!

Is ‘grit’ a thing?

Marc Jones, in a 2016 blog post entitled ‘Gritty Politti: Grit, Growth Mindset and Neoliberal Language Teaching’, writes that ‘Grit is so difficult to define that it takes Duckworth (2016) the best part of a book to describe it adequately’. Yes, ‘grit’ is passion and persistence (or perseverance), but it’s also conscientiousness, practice and hope. Credé et al (2017) found that ‘grit is very strongly correlated with conscientiousness’ (which has already been widely studied in the educational literature). Why lump this together with passion? Another study (Muenks et al., 2017) found that ‘Students’ grit overlapped empirically with their concurrently reported self-control, self-regulation, and engagement. Students’ perseverance of effort (but not their consistency of interests) predicted their later grades, although other self-regulation and engagement variables were stronger predictors of students’ grades than was grit’. Credé (2018) concluded that ‘there appears to be no reason to accept the combination of perseverance and passion for long-term goals into a single grit construct’.

The L2 learning research listed in Plonsky’s bibliography does not offer much in support of ‘grit’, either. Many of the studies identified problems with ‘grit’ as a construct, but, even when accepting it, did not find it to be of much value. Wei et al. (2019) found a positive but weak correlation between grit and English language course grades. Yamashita (2018) found no relationship between learners’ grit and their course grades. Taşpinar & Külekçi (2018) found that students’ grit levels and academic achievement scores did not relate to each other (but still found that ‘grit, perseverance, and tenacity are the essential elements that impact learners’ ability to succeed to be prepared for the demands of today’s world’!).

There are, then, grounds for suspecting that Duckworth and her supporters have fallen foul of the ‘jangle fallacy’ – the erroneous assumption that two identical or almost identical things are different because they are labelled differently. This would also help to explain the lack of empirical support for the notion of ‘grit’. Not only are the numerous variables insufficiently differentiated, but the measures of ‘grit’ (such as Duckworth’s Grit-S measure) do not adequately target some of these variables (e.g. long-term goals, where ‘long-term’ is not defined) (Muenks et al., 2017). In addition, these measures are self-reporting and not, therefore, terribly reliable.

Referring to more general approaches to character education, one report (Gutman & Schoon, 2013) has argued that there is little empirical evidence of a causal relationship between self-concept and educational outcomes. Taking this one step further, Kathryn Ecclestone (Ecclestone, 2012) suggests that, at best, the concepts and evidence that serve as the basis of these interventions are inconclusive and fragmented; ‘at worst, [they are] prey to ‘advocacy science’ or, in [their] worst manifestations, to simple entrepreneurship that competes for publicly funded interventions’ (cited in Cabanas & Illouz, 2019: 80).

Criticisms of ‘grit’

Given the lack of supporting research, any practical application of ‘grit’ ideas is premature. Duckworth herself, in an article entitled ‘Don’t Believe the Hype About Grit, Pleads the Scientist Behind the Concept’ (Dahl, 2016), cautions against hasty applications:

[By placing too much emphasis on grit, the danger is] that grit becomes a scapegoat — another reason to blame kids for not doing well, or to say that we don’t have a responsibility as a society to help them. [She worries that some interpretations of her work might make a student’s failure seem like his problem, as if he just didn’t work hard enough.] I think to separate and pit against each other character strengths on the one hand — like grit — and situational opportunities on the other is a false dichotomy […] Kids need to develop character, and they need our support in doing so.

Marc Jones, in the blog mentioned above, writes that ‘to me, grit is simply another tool for attacking the poor and the other’. You won’t win any prizes for guessing which kinds of students are most likely to be the targets of grit interventions. A clue: think of the ‘no-nonsense’ charters in the US and academies in the UK. This is what Kenneth Saltman has to say:

‘Grit’ is a pedagogy of control that is predicated upon a promise made to poor children that if they learnt the tools of self-control and learnt to endure drudgery, then they can compete with rich children for scarce economic resources. (Saltman, 2017: 38)

[It] is a behaviourist form of learned self-control targeting poor students of color and has been popularized post-crisis in the wake of educational privatization and defunding as the cure for poverty. [It] is designed to suggest that individual resilience and self-reliance can overcome social violence and unsupportive social contexts in the era of the shredded social state. (Saltman, 2017: 15)

Grit is misrepresented by proponents as opening a world of individual choices rather than discussed as a mode of educational and social control in the austere world of work defined by fewer and fewer choices as secure public sector work is scaled back, unemployment continuing at high levels. (Saltman, 2017: 49)

Whilst ‘grit’ is often presented as a way of dealing with structural inequalities in schools, critics see it as more of a problem than a solution: ‘It’s the kids who are most impacted by, rebel against, or criticize the embedded racism and classism of their institutions that are being told to have more grit, that school is hard for everyone’ (EquiTEA, 2018). A widely cited article by Nicholas Tampio (2016) points out that ‘Duckworth celebrates educational models such as Beast at West Point that weed out people who don’t obey orders’. He continues ‘that is a disastrous model for education in a democracy. US schools ought to protect dreamers, inventors, rebels and entrepreneurs – not crush them in the name of grit’.

If you’re interested in reading more criticism of grit, the blog ‘Debunked!’ is an excellent source of links.

Measuring grit

Analyses of emotional behaviour have become central to economic analysis and, beginning in the 1990s, there have been constant efforts to create ‘formal instruments of classification of emotional behaviour and the elaboration of the notion of emotional competence’ (Illouz, 2007: 64). The measurement and manipulation of various aspects of ‘emotional intelligence’ have become crucial as ways ‘to control, predict, and boost performance’ (Illouz, 2007: 65). An article in the Journal of Benefit-Cost Analysis (Belfield et al., 2015) makes the economic importance of emotions very clear. Entitled ‘The Economic Value of Social and Emotional Learning’, it examines the economic value of these skills within a benefit-cost analysis (BCA) framework, and finds that the benefits of [social and emotional learning] interventions substantially outweigh the costs.

In recent years, the OECD has commissioned a number of reports on social and emotional learning and, as with everything connected with the OECD, is interested in measuring ‘non-cognitive skills such as perseverance (“grit”), conscientiousness, self-control, trust, attentiveness, self-esteem and self-efficacy, resilience to adversity, openness to experience, empathy, humility, tolerance of diverse opinions and the ability to engage productively in society’ (Kautz et al., 2014: 9). The measurement of personality factors will feature in the OECD’s PISA programme. Elsewhere, Ben Williamson reports that ‘US schools [are] now under pressure—following the introduction of the Every Student Succeeds Act in 2015—to provide measurable evidence of progress on the development of students’ non-academic learning’ (Williamson, 2017).

Grit, which ‘starts and ends with the lone individual as economic actor, worker, and consumer’ (Saltman, 2017: 50), is a recent addition to the categories of emotional competence, and it should come as no surprise that educational authorities have so wholeheartedly embraced it. It is the claim that something (i.e. ‘grit’) can be taught and developed that leads directly to the desire to measure it. In a world where everything must be accountable, we need to know how effective and cost-effective our grit interventions have been.

The idea of measuring personality constructs like ‘grit’ worries even Angela Duckworth. She writes (Duckworth, 2016):

These days, however, I worry I’ve contributed, inadvertently, to an idea I vigorously oppose: high-stakes character assessment. New federal legislation can be interpreted as encouraging states and schools to incorporate measures of character into their accountability systems. This year, nine California school districts will begin doing this. But we’re nowhere near ready — and perhaps never will be — to use feedback on character as a metric for judging the effectiveness of teachers and schools. We shouldn’t be rewarding or punishing schools for how students perform on these measures.

Diane Ravitch (Ravitch, 2016) makes the point rather more forcefully: ‘The urge to quantify the unmeasurable must be recognized for what it is: stupid; arrogant; harmful; foolish, yet another way to standardize our beings’. But, like it or not, attempts to measure ‘grit’ and ‘grit’ interventions are unlikely to go away any time soon.

‘Grit’ and technology

Whenever there is talk about educational measurement and metrics, we are never far away from the world of edtech. It may not have escaped your notice that the OECD and the US Department of Education, enthusiasts for promoting ‘grit’, are also major players in the promotion of the datafication of education. The same holds true for organisations like the World Education Forum, the World Bank and the various philanthro-capitalist foundations to which I have referred so often in this blog. Advocacy of social and emotional learning goes hand in hand with edtech advocacy.

Two fascinating articles by Ben Williamson (2017; 2019) focus on ClassDojo, which, according to company information, reaches more than 10 million children globally every day. The founding directors of ClassDojo, writes Ben Williamson (2017), ‘explicitly describe its purpose as promoting ‘character development’ in schools and it is underpinned by particular psychological concepts from character research. Its website approvingly cites the journalist Paul Tough, author of two books on promoting ‘grit’ and ‘character’ in children, and is informed by character research conducted with the US network of KIPP charter schools (Knowledge is Power Program)’. In a circular process, ClassDojo has also ‘helped distribute and popularise concepts such as growth mindset, grit and mindfulness’ (Williamson, 2019).

The connections between ‘grit’ and edtech are especially visible when we focus on Stanford and Silicon Valley. ClassDojo was born in Palo Alto. Duckworth was a consulting scholar at Stanford in 2014-15, where Carol Dweck is a Professor of Psychology. Dweck is the big name behind growth mindset theory, which, as Sarah Mercer and Zoltán Dörnyei indicate, is closely related to ‘grit’. Dweck is also the co-founder of MindsetWorks, whose ‘Brainology’ product is ‘an online interactive program in which middle school students learn about how the brain works, how to strengthen their own brains, and how to ….’. Stanford is also home to the Stanford Lytics Lab, ‘which has begun applying new data analytics techniques to the measurement of non-cognitive learning factors including perseverance, grit, emotional state, motivation and self-regulation’, as well as the Persuasive Technologies Lab, ‘which focuses on the development of machines designed to influence human beliefs and behaviors across domains including health, business, safety, and education’ (Williamson, 2017). The Professor of Education Emeritus at Stanford is Linda Darling-Hammond, one of the most influential educators in the US. Darling-Hammond is known, among many other things, for collaborating with Pearson to develop the edTPA, ‘a nationally available, performance-based assessment for measuring the effectiveness of teacher candidates’. She is also a strong advocate of social-emotional learning initiatives and extols the virtues of ‘developing grit and a growth mindset’ (Hamedani & Darling-Hammond, 2015).

The funding of grit

Angela Duckworth’s Character Lab (‘Our mission is to advance scientific insights that help kids thrive’) is funded by, among others, the Chan Zuckerberg Initiative, the Bezos Family Foundation and Stanford’s Mindset Scholars Network. Precisely how much money Character Lab has is difficult to ascertain, but the latest grant from the Chan Zuckerberg Initiative was worth $1,912,000 to cover the period 2018 – 2021. Covering the same period, the John Templeton Foundation donated $3,717,258, the purpose of the grant being to ‘make character development fast, frictionless, and fruitful’.

In an earlier period (2015 – 2018), the Walton Family Foundation pledged $6.5 million ‘to promote and measure character education, social-emotional learning, and grit’, with part of this sum going to Character Lab and part going to similar research at Harvard Graduate School of Education. Character Lab also received $1,300,000 from the Overdeck Family Foundation for the same period.

It is not, therefore, an overstatement to say that ‘grit’ is massively funded. The funders, by and large, are the same people who have spent huge sums promoting personalized learning through technology (see my blog post Personalized learning: Hydra and the power of ambiguity). Whatever else it might be, ‘grit’ is certainly ‘a commercial tech interest’ (as Ben Williamson put it in a recent tweet).

Postscript

In the 2010 Coen brothers’ film, ‘True Grit’, the delinquent ‘kid’, Moon, is knifed by his partner, Quincy. Turning to Rooster Cogburn, the man of true grit, Moon begs for help. In response, Cogburn looks at the dying kid and deadpans ‘I can do nothing for you, son’.

References

Belfield, C., Bowden, A., Klapp, A., Levin, H., Shand, R., & Zander, S. (2015). The Economic Value of Social and Emotional Learning. Journal of Benefit-Cost Analysis, 6(3), pp. 508-544. doi:10.1017/bca.2015.55

Cabanas, E. & Illouz, E. (2019). Manufacturing Happy Citizens. Cambridge: Polity Press.

Chaykowski, K. (2017). How ClassDojo Built One Of The Most Popular Classroom Apps By Listening To Teachers. Forbes, 22 May, 2017. https://www.forbes.com/sites/kathleenchaykowski/2017/05/22/how-classdojo-built-one-of-the-most-popular-classroom-apps-by-listening-to-teachers/#ea93d51e5ef5

Credé, M. (2018). What shall we do about grit? A critical review of what we know and what we don’t know. Educational Researcher, 47(9), 606-611.

Credé, M., Tynan, M. C., & Harms, P. D. (2017). Much ado about grit: A meta-analytic synthesis of the grit literature. Journal of Personality and Social Psychology, 113(3), 492. doi:10.1037/pspp0000102

Dahl, M. (2016). Don’t Believe the Hype About Grit, Pleads the Scientist Behind the Concept. The Cut, May 9, 2016. https://www.thecut.com/2016/05/dont-believe-the-hype-about-grit-pleads-the-scientist-behind-the-concept.html

De Bruyckere, P., Kirschner, P. A. & Hulshof, C. (2020). More Urban Myths about Learning and Education. Routledge.

Duckworth, A. (2016). Don’t Grade Schools on Grit. New York Times, March 26, 2016 https://www.nytimes.com/2016/03/27/opinion/sunday/dont-grade-schools-on-grit.html?auth=login-google&smid=nytcore-ipad-share&smprod=nytcore-ipad

Ecclestone, K. (2012). From emotional and psychological well-being to character education: Challenging policy discourses of behavioural science and ‘vulnerability’. Research Papers in Education, 27 (4), pp. 463-480

EquiTEA (2018). The Problem with Teaching ‘Grit’. Medium, 11 December 2018. https://medium.com/@eec/the-problem-with-teaching-grit-8b37ce43a87e

Gutman, L. M. & Schoon, I. (2013). The impact of non-cognitive skills on outcomes for young people: Literature review. London: Institute of Education, University of London

Hamedani, M. G. & Darling-Hammond, L. (2015). Social Emotional Learning in High School: How Three Urban High Schools Engage, Educate, and Empower Youth. Stanford Center for Opportunity Policy in Education

Kirschner, P.A. & Neelen, M. (2016). To Grit Or Not To Grit: That’s The Question. 3-Star Learning Experiences, July 5, 2016 https://3starlearningexperiences.wordpress.com/2016/07/05/to-grit-or-not-to-grit-thats-the-question/

Illouz, E. (2007). Cold Intimacies: The making of emotional capitalism. Cambridge: Polity Press

Kautz, T., Heckman, J. J., Diris, R., ter Weel, B & Borghans, L. (2014). Fostering and Measuring Skills: Improving Cognitive and Non-cognitive Skills to Promote Lifetime Success. OECD Education Working Papers 110, OECD Publishing.

Mercer, S. & Dörnyei, Z. (2020). Engaging Language Learners in Contemporary Classrooms. Cambridge University Press.

Muenks, K., Wigfield, A., Yang, J. S., & O’Neal, C. R. (2017). How true is grit? Assessing its relations to high school and college students’ personality characteristics, self-regulation, engagement, and achievement. Journal of Educational Psychology, 109, pp. 599–620.

Ravitch, D. (2016). Angela Duckworth, please don’t assess grit. Blog post, 27 March 2016, https://dianeravitch.net/2016/03/27/angela-duckworth-please-dont-assess-grit/

Saltman, K. J. (2017). Scripted Bodies. Routledge.

Tampio, N. (2016). Teaching ‘grit’ is bad for children, and bad for democracy. Aeon, 2 June: https://aeon.co/ideas/teaching-grit-is-bad-for-children-and-bad-for-democracy

Taşpinar, K., & Külekçi, G. (2018). GRIT: An Essential Ingredient of Success in the EFL Classroom. International Journal of Languages’ Education and Teaching, 6, 208-226.

Teimouri, Y., Plonsky, L., & Tabandeh, F. (2020). L2 Grit: Passion and perseverance for second-language learning. Language Teaching Research.

Wei, H., Gao, K., & Wang, W. (2019). Understanding the relationship between grit and foreign language performance among middle school students: The roles of foreign language enjoyment and classroom environment. Frontiers in Psychology, 10, 1508. doi: 10.3389/fpsyg.2019.01508

Williamson, B. (2017). Decoding ClassDojo: psycho-policy, social-emotional learning and persuasive educational technologies. Learning, Media and Technology, 42 (4): pp. 440-453, DOI: 10.1080/17439884.2017.1278020

Williamson, B. (2019). ‘Killer Apps for the Classroom? Developing Critical Perspectives on ClassDojo and the ‘Ed-tech’ Industry. Journal of Professional Learning, 2019 (Semester 2) https://cpl.asn.au/journal/semester-2-2019/killer-apps-for-the-classroom-developing-critical-perspectives-on-classdojo

Yamashita, T. (2018). Grit and second language acquisition: Can passion and perseverance predict performance in Japanese language learning? Unpublished MA thesis, University of Massachusetts, Amherst.

 

If you cast your eye over the English language teaching landscape, you can’t help noticing a number of prominent features that weren’t there, or at least were much less visible, twenty years ago. I’d like to highlight three. First, there is the interest in life skills (aka 21st century skills). Second, there is the use of digital technology to deliver content. And third, there is a concern with measuring educational outputs through frameworks such as the Pearson GSE. In this post, I will focus primarily on the last of these, with a closer look at measuring teacher performance.

Recent years have seen the development of a number of frameworks for evaluating teacher competence in ELT. These include the British Council’s CPD Framework for Teachers and the Cambridge English Teaching Framework, among others.

TESOL has also produced a set of guidelines for developing professional teaching standards for EFL.

Frameworks such as these were not always intended as tools to evaluate teachers. The British Council’s framework, for example, was apparently designed for teachers to understand and plan their own professional development. Similarly, the Cambridge framework says that it is for teachers to see where they are in their development – and think about where they want to go next. But much like the CEFR for language competence, frameworks can be used for purposes rather different from their designers’ intentions. I think it is likely that frameworks such as these are more often used to evaluate teachers than for teachers to evaluate themselves.

But where did the idea for such frameworks come from? Was there a suddenly perceived need for things like this to aid in self-directed professional development? Were teachers’ associations calling out for frameworks to help their members? Even if that were the case, it would still be useful to know why, and why now.

One possibility is that the interests in life skills, digital technology and the measurement of educational outputs have all come about as a result of what has been called the Global Educational Reform Movement, or GERM (Sahlberg, 2016). GERM dates back to the 1980s and the shifts (especially in the United States under Reagan and the United Kingdom under Thatcher) in education policy towards more market-led approaches which emphasize (1) greater competition between educational providers, (2) greater autonomy from the state for educational providers (and therefore a greater role for private suppliers), (3) greater choice of educational provider for students and their parents, and (4) standardized tests and measurements which allow consumers of education to make more informed choices. One of the most significant GERM vectors is the World Bank.

The interest in incorporating the so-called 21st century skills as part of the curriculum can be traced back to the early 1980s when the US National Commission on Excellence in Education recommended the inclusion of a range of skills, which eventually crystallized into the four Cs of communication, collaboration, critical thinking and creativity. The labelling of this skill set as ‘life skills’ or ‘21st century skills’ was always something of a misnomer: the reality was that these were the soft skills required by the world of work. The key argument for their inclusion in the curriculum was that they were necessary for the ‘competitiveness and wealth of corporations and countries’ (Trilling & Fadel, 2009: 7). Unsurprisingly, the World Bank, whose interest in education extends only so far as its economic value, embraced the notion of ‘life skills’ with enthusiasm. Its document ‘Life skills : what are they, why do they matter, and how are they taught?’ (World Bank, 2013) makes the case very clearly. It took a while for the world of English language teaching to get on board, but by 2012, Pearson was already sponsoring a ‘signature event’ at IATEFL Glasgow entitled ‘21st Century Skills for ELT’. Since then, the currency of ‘life skills’ as an ELT buzz phrase has not abated.

Just as the World Bank’s interest in ‘life skills’ is motivated by the perceived need to prepare students for the world of work (for participation in the ‘knowledge economy’), the Bank emphasizes the classroom use of computers and resources from the internet: ‘Information and communication technology (ICT) allows the adaptation of globally available information to local learning situations. […] A large percentage of the World Bank’s education funds are used for the purchase of educational technology. […] According to the Bank’s figures, 40 per cent of their education budget in 2000 and 27 per cent in 2001 was used to purchase technology’ (Spring, 2015: 50).

Digital technology is also central to capturing data, which will allow for the measurement of educational outputs. As befits an organisation of economists interested in the cost-effectiveness of investment in education, the Bank accords enormous importance to what are thought to be empirical measures of accountability. So intrinsic to the Bank’s approach is this concern with measurement that ‘the Bank’s implicit message to national governments seems to be: “improve your data collection capacity so that we can run more reliable cross-country analysis and regressions”’ (Verger & Bonal, 2012: 131).

Measuring the performance of teachers is, of course, a part of assessing educational outputs. The World Bank, which sees global education as fundamentally ‘broken’, has, quite recently, turned more of its attention to the role of teachers. A World Bank blog from 2019 explains the reasons:

A growing body of evidence suggests the learning crisis is, at its core, a teaching crisis. For students to learn, they need good teachers—but many education systems pay little attention to what teachers know, what they do in the classroom, and in some cases whether they even show up. Rapid technological change is raising the stakes. Technology is already playing a crucial role in providing support to teachers, students, and the learning process more broadly. It can help teachers better manage the classroom and offer different challenges to different students. And technology can allow principals, parents, and students to interact seamlessly.

A key plank in the World Bank’s attempts to implement its educational vision is its Systems Approach for Better Education Results (SABER), which I will return to in due course. As part of its SABER efforts, last year the World Bank launched its ‘Teach’ tool. This tool is basically an evaluation framework. Videos of lessons are recorded and coded for indicators of teacher efficiency by coders who can be ‘90% reliable’ after only four days of training. The coding system focuses on the time that students spend on-task, but also on ‘life skills’ like collaboration and critical thinking (see below).

[Image: Teach framework]
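The ‘90% reliable’ figure presumably refers to inter-rater agreement: how often a newly trained coder assigns the same code to a lesson segment as a reference coder does. How the figure was calculated is not spelled out here, but the simplest version of such a check is raw percent agreement. A minimal sketch, with entirely hypothetical codes (this is not the Teach tool’s actual procedure or data):

```python
def percent_agreement(coder_a, coder_b):
    """Share of lesson segments to which two observers assigned the same code."""
    assert len(coder_a) == len(coder_b)
    matches = sum(1 for a, b in zip(coder_a, coder_b) if a == b)
    return matches / len(coder_a)

# Hypothetical codes ('L' = low, 'M' = medium, 'H' = high) for ten lesson segments
reference = ['H', 'M', 'H', 'L', 'M', 'M', 'H', 'L', 'M', 'H']
trainee   = ['H', 'M', 'M', 'L', 'M', 'M', 'H', 'L', 'M', 'L']
print(percent_agreement(reference, trainee))  # 0.8
```

Raw agreement is a flattering statistic, since it takes no account of the agreement two coders would reach by chance alone (chance-corrected measures such as Cohen’s kappa are usually lower), which is one more reason to treat the headline figure with some caution.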

Like the ELT frameworks, it can be used as a professional development tool, but, like them, it may also be used for summative evaluation.

The connections between those landmarks on the ELT landscape and the concerns of the World Bank are not, I would suggest, coincidental. The World Bank is, of course, not the only player in GERM, but it is a very special case. It is the largest single source of external financing in ‘developing countries’ (Beech, 2009: 345), managing a portfolio of $8.9 billion, with operations in 70 countries as of August 2013 (Spring, 2015: 32). Its loans come with conditions attached which tie the borrowing countries to GERM objectives. Arguably of even greater importance than its influence through funding is the Bank’s direct entry into the world of ideas:

The Bank yearns for a deeper and more comprehensive impact through avenues of influence transcending both project and program loans. Not least in education, the World Bank is investing much in its quest to shape global opinion about economic, developmental, and social policy. Rather than imposing views through specific loan negotiations, Bank style is broadening in attempts to lead borrower country officials to its preferred way of thinking. (Jones, 2007: 259).

The World Bank sees itself as a Knowledge Bank and acts accordingly. Rizvi and Lingard (2010: 48) observe that ‘in many nations of the Global South, the only extant education policy analysis is research commissioned by donor agencies such as the World Bank […] with all the implications that result in relation to problem setting, theoretical frameworks and methodologies’. Hundreds of academics are engaged to do research related to the Bank’s areas of educational interest, and ‘the close links with the academic world give a strong credibility to the ideas disseminated by the Bank […] In fact, many ideas that acquired currency and legitimacy were originally proposed by them. This is the case of testing students and using the results to evaluate progress in education’ (Castro, 2009: 472).

Through a combination of substantial financial clout and relentless marketing (Selwyn, 2013: 50), the Bank has succeeded in shaping global academic discourse. In partnership with similar institutions, it has introduced a way of classifying and thinking about education (Beech, 2009: 352). It has become, in short, a major site ‘for the organization of knowledge about education’ (Rizvi & Lingard, 2010: 79), wielding ‘a degree of power that has arguably enabled it to shape the educational agendas of nations throughout the Global South’ and beyond (Menashy, 2012).

So, is there any problem in the world of ELT taking up the inclusion of ‘life skills’? I think there is. The first problem is one of definition. Creativity and critical thinking are very poorly defined, meaning very different things to different people, so it is not always clear what is being taught. Following on from this, there is substantial debate about whether such skills can actually be taught at all, and, if they can, how they should be taught. It seems highly unlikely that the tokenistic way in which they are ‘taught’ in most published ELT courses can have any positive impact. But this is not my main reservation, which is that, by and large, we have come to uncritically accept the idea that English language learning is mostly concerned with preparation for the workplace (see my earlier post ‘The EdTech Imaginary in ELT’).

Is there any problem with the promotion of digital technologies in ELT? Again, I think there is, and a good proportion of the posts on this blog have argued for the need for circumspection in rolling out more technology in language learning and teaching. My main reason is that while it is clear that this trend is beneficial to technology vendors, it is much less clear that advantages will necessarily accrue to learners. Beyond this, there must be serious concerns about data ownership, privacy, and the way in which the datafication of education, led by businesses and governments in the Global North, is changing what counts as good education, a good student or an effective teacher, especially in the Global South. ‘Data and metrics,’ observe Williamson et al. (2020: 353), ‘do not just reflect what they are designed to measure, but actively loop back into action that can change the very thing that was measured in the first place’.

And what about tools for evaluating teacher competences? Here I would like to provide a little more background. There is, first of all, a huge question mark about how accurately such tools measure what they are supposed to measure. This may not matter too much if the tool is only used for self-evaluation or self-development, but ‘once smart systems of data collection and social control are available, they are likely to be widely applied for other purposes’ (Sadowski, 2020: 138). Jaime Saavedra, head of education at the World Bank, insists that the World Bank’s ‘Teach’ tool is not for evaluation and is not useful for firing teachers who perform badly.

Saavedra needs teachers to buy into the tool, so he obviously doesn’t want to scare them off. However, ‘Teach’ clearly is an evaluation tool (if not, what is it?) and, as with other tools (I’m thinking of the CEFR and teacher competency frameworks in ELT), its purposes will evolve. Eric Hanushek, an education economist at Stanford University, has commented that ‘this is a clear evaluation tool at the probationary stage … It provides a basis for counseling new teachers on how they should behave … but then again if they don’t change over the first few years you also have information you should use’.

At this point, it is useful to take a look at the World Bank’s attitudes towards teachers. Teachers are seen to be at the heart of the ‘learning crisis’. However, the greatest focus in World Bank documents is on (1) teacher absenteeism in some countries, (2) unskilled and demotivated teachers, and (3) the reluctance of teachers and their unions to back World Bank-sponsored reforms. As real as these problems are, it is important to understand that the Bank has been complicit in them:

For decades, the Bank has criticised pre-service and in-service teacher training as not cost-effective. For decades, the Bank has been pushing the hiring of untrained contract teachers as a cheap fix and a way to get around teacher unions – and contract teachers are again praised in the World Bank Development Report (WDR). This contradicts the occasional places in the WDR in which the Bank argues that developing countries need to follow the lead of the few countries that attract the best students to teaching, improve training, and improve working conditions. There is no explicit evidence offered at all for the repeated claim that teachers are unmotivated and need to be controlled and monitored to do their job. The Bank has a long history of blaming teachers and teacher unions for educational failures. The Bank implicitly argues that the problem of teacher absenteeism, referred to throughout the report, means teachers are unmotivated, but that simply is not true. Teacher absenteeism is not a sign of low motivation. Teacher salaries are abysmally low, as is the status of teaching. Because of this, teaching in many countries has become an occupation of last resort, yet it still attracts dedicated teachers. Once again, the Bank has been very complicit in this state of affairs as it, and the IMF, for decades have enforced neoliberal, Washington Consensus policies which resulted in government cutbacks and declining real salaries for teachers around the world. It is incredible that economists at the Bank do not recognise that the deterioration of salaries is the major cause of teacher absenteeism and that all the Bank is willing to peddle are ineffective and insulting pay-for-performance schemes. (Klees, 2017).

The SABER framework (referred to above) focuses very clearly on policies for hiring, rewarding and firing teachers.

[The World Bank] places the private sector’s methods of dealing with teachers as better than those of the public sector, because it is more ‘flexible’. In other words, it is possible to say that teachers can be hired and fired more easily; that is, hired without the need of organizing a public competition and fired if they do not achieve the expected outcomes as, for example, students’ improvements in international test scores. Further, the SABER document states that ‘Flexibility in teacher contracting is one of the primary motivations for engaging the private sector’ (World Bank, 2011: 4). This affirmation seeks to reduce expenditures on teachers while fostering other expenses such as the creation of testing schemes and spending more on ICTs, as well as making room to expand the hiring of private sector providers to design curriculum, evaluate students, train teachers, produce education software, and books. (De Siqueira, 2012).

The World Bank has argued consistently for a reduction of education costs by driving down teachers’ salaries. One of the authors of the World Bank Development Report 2018 notes that ‘in most countries, teacher salaries consume the lion’s share of the education budget, so there are already fewer resources to implement other education programs’. Another World Bank report (2007) makes the importance of ‘flexible’ hiring and lower salaries very clear:

In particular, recent progress in primary education in Francophone countries resulted from reduced teacher costs, especially through the recruitment of contractual teachers, generally at about 50% the salary of civil service teachers. (cited in Compton & Weiner, 2008: 7).

Merit pay (or ‘pay for performance’) is another of the Bank’s preferred wheezes. Despite enormous problems in reaching fair evaluations of teachers’ work, and a distinct lack of convincing evidence that merit pay leads to anything positive (it may actually be counter-productive) (De Bruyckere et al., 2018: 143 – 147), the Bank is fully committed to the idea. Could this be connected to the usefulness of merit pay in keeping teachers on their toes, compliant and fearful of losing their jobs, rather than to any desire to improve teacher effectiveness?

There is evidence that this may be the case. Yet another World Bank report (Bau & Das, 2017) argues, on the basis of research, that improved TVA (teacher value added) does not correlate with wages in the public sector (where it is hard to fire teachers), but it does in the private sector. The study found that ‘a policy change that shifted public hiring from permanent to temporary contracts, reducing wages by 35 percent, had no adverse impact on TVA’. All of which would seem to suggest that improving the quality of teaching is of less importance to the Bank than flexible hiring and firing. This is very much in line with a more general advocacy of making education fit for the world of work. Lois Weiner of New Jersey City University puts it like this:

The architects of [GERM] policies—imposed first in developing countries—openly state that the changes will make education better fit the new global economy by producing workers who are (minimally) educated for jobs that require no more than a 7th or 8th grade education; while a small fraction of the population receive a high quality education to become the elite who oversee finance, industry, and technology. Since most workers do not need to be highly educated, it follows that teachers with considerable formal education and experience are neither needed nor desired because they demand higher wages, which is considered a waste of government money. Most teachers need only be “good enough”—as one U.S. government official phrased it—to follow scripted materials that prepare students for standardized tests. (Weiner, 2012).

It seems impossible to separate the World Bank’s ‘Teach’ tool from the broader goals of GERM. Teacher evaluation tools, like the teaching of 21st century skills and the datafication of education, need to be understood properly, I think, as means to an end. It’s time to spell out what that end is.

The World Bank’s mission is ‘to end extreme poverty (by reducing the share of the global population that lives in extreme poverty to 3 percent by 2030)’ and ‘to promote shared prosperity (by increasing the incomes of the poorest 40 percent of people in every country)’. Its education activities are part of this broad aim and are driven by subscription to human capital theory (a view of the skills, knowledge and experience of individuals in terms of their ability to produce economic value). This may be described as the ‘economization of education’: a shift in educational concerns away from ‘such things as civic participation, protecting human rights, and environmentalism to economic growth and employment’ (Spring, 2015: xiii). Both students and teachers are seen as human capital. For students, human capital education places an emphasis on the cognitive skills needed to succeed in the workplace and the ‘soft skills’ needed to function in the corporate world (Spring, 2015: 2). Accordingly, World Bank investments require ‘justifications on the basis of manpower demands’ (Heyneman, 2003: 317). One of the Bank’s current strategic priorities is the education of girls: although human rights and equity may also play a part, the Bank’s primary concern is that ‘Not Educating Girls Costs Countries Trillions of Dollars’.
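To make the ‘human capital’ framing more concrete: rate-of-return arguments of this kind are conventionally built on something like the Mincer earnings function, which treats years of schooling as an investment with a measurable wage payoff. In its simplest textbook form (this is the generic equation, not one taken from any particular World Bank document):

```latex
\ln(w_i) = \beta_0 + \beta_1 s_i + \beta_2 x_i + \beta_3 x_i^2 + \varepsilon_i
```

Here w is the wage, s years of schooling and x years of work experience, and the coefficient on schooling is read as the private ‘return’ to an extra year of education. Much of what the Bank counts (test scores, teacher ‘value added’, the trillions of dollars said to be lost by not educating girls) is framed, in one way or another, in terms of payoffs of this general shape.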

According to the Bank’s logic, its educational aims can best be achieved through a combination of support for the following:

  • cost accounting and quantification (since returns on investment must be carefully measured)
  • competition and market incentives (since it is believed that the ‘invisible hand’ of the market leads to the greatest benefits)
  • the private sector in education and a rolling back of the role of the state (since it is believed that private ownership improves efficiency)

The package of measures is a straightforward reflection of ‘what Western mainstream economists believe’ (Castro, 2009: 474).

Mainstream Western economics is, however, going through something of a rocky patch right now. Human capital theory is ‘useful when prevailing conditions are right’ (Jones, 2007: 248), but prevailing conditions are not right in much of the world (even in the United States), and the theory ‘for the most part ignores the intersections of poverty, equity and education’ (Menashy, 2012). In poorer countries evidence for the positive effects of markets in education is in very short supply, and even in richer countries it is still not conclusive (Verger & Bonal, 2012: 135). An OECD Education Paper (Waslander et al., 2010: 64) found that the effects of choice and competition between schools were at best small, if indeed any effects were found at all. Similarly, the claim that privatization improves efficiency is not sufficiently supported by evidence. Analyses of PISA data would seem to indicate that, ‘all else being equal (especially when controlling for the socio-economic status of the students), the type of ownership of the school, whether it is a private or a state school, has only modest effects on student achievement or none at all’ (Verger & Bonal, 2012: 133). Educational privatization as a one-size-fits-all panacea to educational problems has little to recommend it.

There are, then, serious limitations in the Bank’s theoretical approach. Its practical track record is also less than illustrious, even by the Bank’s own reckoning. Many of the Bank’s interventions have proved very ‘costly to developing countries. At the Bank’s insistence countries over-invested in vocational and technical education. Because of the narrow definition of recurrent costs, countries ignored investments in reading materials and in maintaining teacher salaries. Later at the Bank’s insistence, countries invested in thousands of workshops and laboratories that, for the most part, became useless “white elephants”’ (Heyneman, 2003: 333).

As a bank, the World Bank is naturally interested in the rate of return on investment in that capital, and is therefore concerned with efficiency and efficacy. This raises the question of ‘Effective for what?’ and, given that what may be effective for one individual or group may not necessarily be effective for another individual or group, one may wish to add a second question: ‘Effective for whom?’ (Biesta, 2020: 31). Critics of the World Bank, of whom there are many, argue that its policies serve ‘the interests of corporations by keeping down wages for skilled workers, cause global brain migration to the detriment of developing countries, undermine local cultures, and ensure corporate domination by not preparing school graduates who think critically and are democratically oriented’ (Spring, 2015: 56). Lest this sound a bit harsh, we can turn to the Bank’s own commissioned history: ‘The way in which [the Bank’s] ideology has been shaped conforms in significant degree to the interests and conventional wisdom of its principal stockholders [i.e. bankers and economists from wealthy nations]. International competitive bidding, reluctance to accord preferences to local suppliers, emphasis on financing foreign exchange costs, insistence on a predominant use of foreign consultants, attitudes toward public sector industries, assertion of the right to approve project managers – all proclaim the Bank to be a Western capitalist institution’ (Mason & Asher, 1973: 478 – 479).

The teaching of ‘life skills’, the promotion of data-capturing digital technologies and the push to evaluate teachers’ performance are, then, all closely linked to the agenda of the World Bank, and owe their existence in the ELT landscape, in no small part, to the way that the World Bank has shaped educational discourse. There is, however, one other connection between ELT and the World Bank which must be mentioned.

The World Bank’s foreign language instructional goals are directly related to English as a global language. The Bank urges, ‘Policymakers in developing countries …to ensure that young people acquire a language with more than just local use, preferably one used internationally.’ What is this international language? First, the World Bank mentions that schools of higher education around the world are offering courses in English. In addition, the Bank states, ‘People seeking access to international stores of knowledge through the internet require, principally, English language skills.’ (Spring, 2015: 48).

Without the World Bank, then, there might be a lot less English language teaching than there is. I have written this piece to encourage people to think more about the World Bank, its policies and particular instantiations of those policies. You might or might not agree that the Bank is an undemocratic, technocratic, neoliberal institution unfit for the necessities of today’s world (Klees, 2017). But whatever you think about the World Bank, you might like to consider the answers to Tony Benn’s ‘five little democratic questions’ (quoted in Sadowski, 2020: 17):

  • What power has it got?
  • Where did it get this power from?
  • In whose interests does it exercise this power?
  • To whom is it accountable?
  • How can we get rid of it?

References

Bau, N. and Das, J. (2017). The Misallocation of Pay and Productivity in the Public Sector : Evidence from the Labor Market for Teachers. Policy Research Working Paper; No. 8050. World Bank, Washington, DC. Retrieved [18 May 2020] from https://openknowledge.worldbank.org/handle/10986/26502

Beech, J. (2009). Who is Strolling Through The Global Garden? International Agencies and Educational Transfer. In Cowen, R. and Kazamias, A. M. (Eds.) Second International Handbook of Comparative Education. Dordrecht: Springer. pp. 341 – 358

Biesta, G. (2020). Educational Research. London: Bloomsbury.

Castro, C. De M., (2009). Can Multilateral Banks Educate The World? In Cowen, R. and Kazamias, A. M. (Eds.) Second International Handbook of Comparative Education. Dordrecht: Springer. pp. 455 – 478

Compton, M. and Weiner, L. (Eds.) (2008). The Global Assault on Teaching, Teachers, and their Unions. New York: Palgrave Macmillan

De Bruyckere, P., Kirschner, P.A. and Hulshof, C. (2020). More Urban Myths about Learning and Education. New York: Routledge.

De Siqueira, A. C. (2012). The 2020 World Bank Education Strategy: Nothing New, or the Same Old Gospel. In Klees, S. J., Samoff, J. and Stromquist, N. P. (Eds.) The World Bank and Education. Rotterdam: Sense Publishers. pp. 69 – 81

Heyneman, S.P. (2003). The history and problems in the making of education policy at the World Bank 1960–2000. International Journal of Educational Development 23 (2003) pp. 315–337. Retrieved [18 May 2020] from https://www.academia.edu/29593153/The_History_and_Problems_in_the_Making_of_Education_Policy_at_the_World_Bank_1960_2000

Jones, P. W. (2007). World Bank Financing of Education. 2nd edition. Abingdon, Oxon.: Routledge.

Klees, S. (2017). A critical analysis of the World Bank’s World Development Report on education. Retrieved [18 May 2020] from: https://www.brettonwoodsproject.org/2017/11/critical-analysis-world-banks-world-development-report-education/

Mason, E. S. & Asher, R. E. (1973). The World Bank since Bretton Woods. Washington, DC: Brookings Institution.

Menashy, F. (2012). Review of Klees, S. J., Samoff, J. & Stromquist, N. P. (Eds) (2012). The World Bank and Education: Critiques and Alternatives. Rotterdam: Sense Publishers. Education Review, 15. Retrieved [18 May 2020] from https://www.academia.edu/7672656/Review_of_The_World_Bank_and_Education_Critiques_and_Alternatives

Rizvi, F. & Lingard, B. (2010). Globalizing Education Policy. Abingdon, Oxon.: Routledge.

Sadowski, J. (2020). Too Smart. Cambridge, MA.: MIT Press.

Sahlberg, P. (2016). The global educational reform movement and its impact on schooling. In K. Mundy, A. Green, R. Lingard, & A. Verger (Eds.), The handbook of global policy and policymaking in education. New York, NY: Wiley-Blackwell. pp.128 – 144

Selwyn, N. (2013). Education in a Digital World. New York: Routledge.

Spring, J. (2015). Globalization of Education 2nd Edition. New York: Routledge.

Trilling, B. & C. Fadel (2009). 21st Century Skills. San Francisco: Wiley

Verger, A. & Bonal, X. (2012). ‘All Things Being Equal?’ In Klees, S. J., Samoff, J. and Stromquist, N. P. (Eds.) The World Bank and Education. Rotterdam: Sense Publishers.

Waslander, S., Pater, C. & van der Weide, M. (2010). Markets in Education: An analytical review of empirical research on market mechanisms in education. OECD EDU Working Paper 52.

Weiner, L. (2012). Social Movement Unionism: Teachers Can Lead the Way. Reimagine, 19 (2) Retrieved [18 May 2020] from: https://www.reimaginerpe.org/19-2/weiner-fletcher

Williamson, B., Bayne, S. & Shay, S. (2020). The datafication of teaching in Higher Education: critical issues and perspectives, Teaching in Higher Education, 25:4, 351-365, DOI: 10.1080/13562517.2020.1748811

World Bank. (2013). Life skills : what are they, why do they matter, and how are they taught? (English). Adolescent Girls Initiative (AGI) learning from practice series. Washington, DC: World Bank. Retrieved [18 May 2020] from: http://documents.worldbank.org/curated/en/569931468331784110/Life-skills-what-are-they-why-do-they-matter-and-how-are-they-taught

In my last post , I asked why it is so easy to believe that technology (in particular, technological innovations) will offer solutions to whatever problems exist in language learning and teaching. A simple, but inadequate, answer is that huge amounts of money have been invested in persuading us. Without wanting to detract from the significance of this, it is clearly not sufficient as an explanation. In an attempt to develop my own understanding, I have been turning more and more to the idea of ‘social imaginaries’. In many ways, this is also an attempt to draw together the various interests that I have had since starting this blog.

The Canadian philosopher, Charles Taylor, describes a ‘social imaginary’ as a ‘common understanding that makes possible common practices and a widely shared sense of legitimacy’ (Taylor, 2004: 23). As a social imaginary develops over time, it ‘begins to define the contours of [people’s] worlds and can eventually come to count as the taken-for-granted shape of things, too obvious to mention’ (Taylor, 2004: 29). It is, however, not just a set of ideas or a shared narrative: it is also a set of social practices that enact those understandings, whilst at the same time modifying or solidifying them. The understandings make the practices possible, and it is the practices that largely carry the understanding (Taylor, 2004: 25). In the process, the language we use is filled with new associations and our familiarity with these associations shapes ‘our perceptions and expectations’ (Worster, 1994, quoted in Moore, 2015: 33). A social imaginary, then, is a complex system that is not technological or economic or social or political or educational, but all of these (Urry, 2016). The image of the patterns of an amorphous mass of moving magma (Castoriadis, 1987), flowing through pre-existing channels, but also, at times, striking out along new paths, may offer a helpful metaphor.

[Image: lava flow, Hawaii]

Technology, of course, plays a key role in contemporary social imaginaries and the term ‘sociotechnical imaginary’ is increasingly widely used. The understandings of the sociotechnical imaginary typically express visions of social progress and a desirable future that is made possible by advances in science and technology (Jasanoff & Kim, 2015: 4). In education, technology is presented as capable of overcoming human failings and the dark ways of the past, of facilitating a ‘pedagogical utopia of natural, authentic teaching and learning’ (Friesen, forthcoming). As such understandings become more widespread and as the educational practices (platforms, apps, etc.) which both shape and are shaped by them become equally widespread, technology has come to be seen as a ‘solution’ to the ‘problem’ of education (Friesen, forthcoming). We need to be careful, however, that having shaped the technology, it does not come to shape us (see Cobo, 2019, for a further exploration of this idea).

As a way of beginning to try to understand what is going on in edtech in ELT, which is not so very different from what is taking place in education more generally, I have sketched a number of what I consider key components of the shared understandings and the social practices that are related to them. These are closely interlocking pieces and each of them is itself embedded in much broader understandings. They evolve over time and their history can be traced quite easily. Taken together, they do, I think, help us to understand a little more why technology in ELT seems so seductive.

1 The main purpose of English language teaching is to prepare people for the workplace

There has always been a strong connection between learning an additional living language (such as English) and preparing for the world of work. The first modern language schools, such as the Berlitz schools at the end of the 19th century with their native-speaker teachers and monolingual methods, positioned themselves as primarily vocational, in opposition to the kinds of language teaching taking place in schools and universities, which were more broadly humanistic in their objectives. Throughout the 20th century, and especially as English grew as a global language, the public sector, internationally, grew closer to the methods and objectives of the private schools. The idea that learning English might serve other purposes (e.g. cultural enrichment or personal development) has never entirely gone away, as witnessed by the Council of Europe’s list of objectives (including the promotion of mutual understanding and European co-operation, and the overcoming of prejudice and discrimination) in the Common European Framework, but it is often forgotten.

The clarion calls from industry to better align education with labour markets, present and future, grow louder all the time, often finding expression in claims that ‘education is unfit for purpose.’ It is invariably assumed that this purpose is to train students in the appropriate skills to enhance their ‘human capital’ in an increasingly competitive and global market (Lingard & Gale, 2007). Educational agendas are increasingly set by the world of business (bodies like the OECD or the World Economic Forum, corporations like Google or Microsoft, and national governments which share their priorities; see my earlier post about neo-liberalism and solutionism).

One way in which this shift is reflected in English language teaching is in the growing emphasis that is placed on ‘21st century skills’ in teaching material. Sometimes called ‘life skills’, they are very clearly concerned with the world of work, rather than the rest of our lives. The World Economic Forum’s 2018 Future of Jobs survey lists the soft skills that are considered important in the near future and they include ‘creativity’, ‘critical thinking’, ‘emotional intelligence’ and ‘leadership’. (The fact that the World Economic Forum is made up of a group of huge international corporations (e.g. J.P. Morgan, HSBC, UBS, Johnson & Johnson) with a very dubious track record of embezzlement, fraud, money-laundering and tax evasion has not resulted in much serious, public questioning of the view of education expounded by the WEF.)

Without exception, the ELT publishers have brought these work / life skills into their courses, and the topic is an extremely popular one in ELT blogs and magazines, and at conferences. Two of the four plenaries at this year’s international IATEFL conference are concerned with these skills. Pearson has a wide range of related products, including ‘a four-level competency-based digital course that provides engaging instruction in the essential work and life skills competencies that adult learners need’. Macmillan ELT made ‘life skills’ the central plank of their marketing campaign and approach to product design, and even won a British Council ELTon award (see below) for ‘Innovation in teacher resources’ in 2015 for their ‘life skills’ marketing campaign. Cambridge University Press has developed a ‘Framework for Life Competencies’ which allows these skills to be assigned numerical values.

The point I am making here is not that these skills do not play an important role in contemporary society, nor that English language learners may not benefit from some training in them. The point, rather, is that the assumption that English language learning is mostly concerned with preparation for the workplace has become so widespread that it becomes difficult to think in another way.

2 Technological innovation is good and necessary

The main reason that soft skills are deemed to be so important is that we live in a rapidly-changing world, where the unsubstantiated claim that 85% (or whatever other figure comes to mind) of current jobs won’t exist 10 years from now is so often repeated that it is taken as fact. Whether or not this is true is perhaps less important to those who make the claim than the present and the future that they like to envisage. The claim is, at least, true-ish enough to resonate widely. Since these jobs will disappear, and new ones will emerge, because of technological innovations, education, too, will need to innovate to keep up.

English language teaching has not been slow to celebrate innovation. There were coursebooks called ‘Cutting Edge’ (1998) and ‘Innovations’ (2005), but more recently the connections between innovation and technology have become much stronger. The title of the recent ‘Language Hub’ (2019) was presumably chosen, in part, to conjure up images of digital whizzkids in fashionable co-working start-up spaces. Technological innovation is explicitly promoted in the Special Interest Groups of IATEFL and TESOL. Despite a singular lack of research that unequivocally demonstrates a positive connection between technology and language learning, the former’s objective is ‘to raise awareness among ELT professionals of the power of learning technologies to assist with language learning’. There is a popular annual conference, called InnovateELT, which has the tagline ‘Be Part of the Solution’, and the first problem that this may be a solution to is that our students need to be ‘ready to take on challenging new careers’.

Last, but by no means least, there are the annual British Council ELTon awards, with a special prize for digital innovation. Among the British Council’s own recent innovations are a range of digitally-delivered resources to develop work / life skills among teens.

Again, my intention (here) is not to criticise any of the things mentioned in the preceding paragraphs. It is merely to point to a particular structure of feeling and the way that it is enacted and strengthened through material practices like books, social groups, conferences and other events.

3 Technological innovations are best driven by the private sector

The vast majority of people teaching English language around the world work in state-run primary and secondary schools. They are typically not native-speakers of English, they hold national teaching qualifications and they are frequently qualified to teach other subjects in addition to English (often another language). They may or may not self-identify as teachers of ‘ELT’ or ‘EFL’, often seeing themselves more as ‘school teachers’ or ‘language teachers’. People who self-identify as part of the world of ‘ELT’ or ‘TEFL’ are more likely to be native speakers and to work in the private sector (including private or semi-private language schools, universities (which, in English-speaking countries, are often indistinguishable from private sector institutions), publishing companies, and freelancers). They are more likely to hold international (TEFL) qualifications or higher degrees, and they are less likely to be involved in the teaching of other languages.

The relationship between these two groups is well illustrated by the practice of training days, where groups of a few hundred state-school teachers participate in workshops organised by publishing companies and delivered by ELT specialists. In this context, state-school teachers are essentially in a client role when they are in contact with the world of ‘ELT’ – as buyers or potential buyers of educational products, training or technology.

Technological innovation is invariably driven by the private sector. This may be in the development of technologies (platforms, apps and so on), in the promotion of technology (through training days and conference sponsorship, for example), or in training for technology (with consultancy companies like ELTjam or The Consultants-E, which offer a wide range of technologically oriented ‘solutions’).

As in education more generally, it is believed that the private sector can be more agile and more efficient than state-run bodies, which continue to decline in importance in educational policy-setting. When state-run bodies are involved in technological innovation in education, it is normal for them to work in partnership with the private sector.

4 Accountability is crucial

Efficacy is vital. It makes no sense to innovate unless the innovations improve something, but for us to know this, we need a way to measure it. In a previous post, I looked at Pearson’s ‘Asking More: the Path to Efficacy’ by CEO John Fallon (who will be stepping down later this year). Efficacy in education, says Fallon, is ‘making a measurable impact on someone’s life through learning’. ‘Measurable’ is the key word, because, as Fallon claims, ‘it is increasingly possible to determine what works and what doesn’t in education, just as in healthcare.’ We need ‘a relentless focus’ on ‘the learning outcomes we deliver’ because it is these outcomes that can be measured in ‘a systematic, evidence-based fashion’. Measurement, of course, is all the easier when education is delivered online, ‘real-time learner data’ can be captured, and the power of analytics can be deployed.

Data is evidence, and it’s as easy to agree on the importance of evidence as it is hard to decide on (1) what it is evidence of, and (2) what kind of data is most valuable. While those questions remain largely unanswered, the data-capturing imperative invades more and more domains of the educational world.

English language teaching is becoming data-obsessed. From language scales, like Pearson’s Global Scale of English, to scales of teacher competences, and from numerically-oriented formative assessment practices (such as those used on many LMSs) to the reporting of effect sizes in meta-analyses (such as those used by John Hattie and colleagues), datafication in ELT accelerates non-stop.
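To take just one of these examples: an effect size such as Cohen’s d is nothing more mysterious than the difference between two group means divided by a pooled standard deviation, which is what allows meta-analysts like Hattie to average results from very different studies onto a single scale. A minimal sketch in Python, with invented scores purely for illustration:

```python
import math

def cohens_d(group_a, group_b):
    """Cohen's d: difference in means divided by the pooled standard deviation."""
    n_a, n_b = len(group_a), len(group_b)
    mean_a = sum(group_a) / n_a
    mean_b = sum(group_b) / n_b
    var_a = sum((x - mean_a) ** 2 for x in group_a) / (n_a - 1)
    var_b = sum((x - mean_b) ** 2 for x in group_b) / (n_b - 1)
    pooled_sd = math.sqrt(((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2))
    return (mean_a - mean_b) / pooled_sd

# Invented test scores for an 'intervention' class and a 'control' class
intervention = [68, 72, 75, 71, 80, 77]
control = [65, 70, 69, 66, 73, 71]
print(round(cohens_d(intervention, control), 2))
```

The arithmetic is trivial; the contested part is everything around it: what was measured, over what period, and whether the groups being compared were comparable in the first place.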

The scales and frameworks are all problematic in a number of ways (see, for example, this post on ‘The Mismeasure of Language’) but they have undeniably shaped the way that we are able to think. Of course, we need measurable outcomes! If, for the present, there are privacy and security issues, it is to be hoped that technology will find solutions to them, too.

References

Castoriadis, C. (1987). The Imaginary Institution of Society. Cambridge: Polity Press.

Cobo, C. (2019). I Accept the Terms and Conditions. Montevideo: International Development Research Centre / Center for Research Ceibal Foundation. https://adaptivelearninginelt.files.wordpress.com/2020/01/41acf-cd84b5_7a6e74f4592c460b8f34d1f69f2d5068.pdf

Friesen, N. (forthcoming) The technological imaginary in education, or: Myth and enlightenment in ‘Personalized Learning’. In M. Stocchetti (Ed.) The Digital Age and its Discontents. University of Helsinki Press. Available at https://www.academia.edu/37960891/The_Technological_Imaginary_in_Education_or_Myth_and_Enlightenment_in_Personalized_Learning_

Jasanoff, S. & Kim, S.-H. (2015). Dreamscapes of Modernity. Chicago: University of Chicago Press.

Lingard, B. & Gale, T. (2007). The emergent structure of feeling: what does it mean for critical educational studies and research?, Critical Studies in Education, 48:1, pp. 1-23

Moore, J. W. (2015). Capitalism in the Web of Life. London: Verso.

Robins, K. & Webster, F. (1989). The Technical Fix. Basingstoke: Macmillan Education.

Taylor, C. (2004). Modern Social Imaginaries. Durham, NC: Duke University Press.

Urry, J. (2016). What is the Future? Cambridge: Polity Press.

 

At the start of the last decade, ELT publishers were worried, Macmillan among them. The financial crash of 2008 led to serious difficulties, not least in their key Spanish market. In 2011, Macmillan’s parent company was fined £11.3 million for corruption. Under new ownership, restructuring was a constant. At the same time, Macmillan ELT was getting ready to move from its Oxford headquarters to new premises in London, a move which would inevitably lead to the loss of a sizable proportion of its staff. On top of that, Macmillan, like the other ELT publishers, was aware that changes in the digital landscape (the first 3G iPhone had appeared in June 2008 and wifi access was spreading rapidly around the world) meant that they needed to shift away from the old print-based model. With her finger on the pulse, Caroline Moore wrote an article in October 2010 entitled ‘No Future? The English Language Teaching Coursebook in the Digital Age’. The publication (at the start of the decade) and runaway success of the online ‘Touchstone’ course, from arch-rivals Cambridge University Press, meant that Macmillan needed to change fast if they were to avoid being left behind.

Macmillan already had a platform, Campus, but it was generally recognised as being clunky and outdated, and something new was needed. In the summer of 2012, Macmillan brought in two new executives – people who could talk the ‘creative-disruption’ talk and who believed in the power of big data to shake up English language teaching and publishing. At the time, the idea of big data was beginning to reach public consciousness, and ‘Big Data: A Revolution that Will Transform how We Live, Work, and Think’ by Viktor Mayer-Schönberger and Kenneth Cukier was a major bestseller in 2013 and 2014. ‘Big data’ was the ‘hottest trend’ in technology and peaked in Google Trends in October 2014. See the graph below.

[Image: Google Trends graph for ‘big data’]

Not long after taking up their positions, the two executives began negotiations with Knewton, an American adaptive learning company. Knewton’s technology promised to gather colossal amounts of data on students using Knewton-enabled platforms. Its founder, Jose Ferreira, bragged that Knewton had ‘more data about our students than any company has about anybody else about anything […] We literally know everything about what you know and how you learn best, everything’. This data would, it was claimed, enable publishers to multiply, by orders of magnitude, the efficacy of learning materials, allowing publishers, like Macmillan, to provide a truly personalized and optimal offering to learners using their platform.
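Knewton never published its algorithms, so any concrete illustration is necessarily speculative, but the kind of ‘personalization’ being sold here generally rests on some form of mastery estimation: after every answer, update an estimated probability that the learner has mastered a given skill, then use that estimate to choose the next item. Below is a minimal sketch of one standard textbook approach, Bayesian Knowledge Tracing, with made-up parameter values; it is offered only to show what this sort of machinery typically boils down to, not as a reconstruction of Knewton’s system.

```python
def bkt_update(p_know, correct, slip=0.1, guess=0.2, learn=0.15):
    """One Bayesian Knowledge Tracing step: revise the estimated probability
    that a learner has mastered a skill, given a single observed answer."""
    if correct:
        posterior = p_know * (1 - slip) / (p_know * (1 - slip) + (1 - p_know) * guess)
    else:
        posterior = p_know * slip / (p_know * slip + (1 - p_know) * (1 - guess))
    # Allow for the possibility that learning happened since the last observation
    return posterior + (1 - posterior) * learn

p = 0.3  # prior estimate of mastery for one skill
for answer in [True, True, False, True]:  # invented sequence of responses
    p = bkt_update(p, answer)
    print(round(p, 2))
```

Estimates like these drive which exercise a platform serves up next. Whether they can bear the weight of claims to know ‘everything about what you know and how you learn best’ is, of course, another question.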

The contract between Macmillan and Knewton was agreed in May 2013 ‘to build next-generation English Language Learning and Teaching materials’. Perhaps fearful of being left behind in what was seen to be a winner-takes-all market (Pearson already had a financial stake in Knewton), Cambridge University Press duly followed suit, signing a contract with Knewton in September of the same year, in order ‘to create personalized learning experiences in [their] industry-leading ELT digital products’. Things moved fast because, by the start of 2014 when Macmillan’s new catalogue appeared, customers were told to ‘watch out for the “Big Tree”’, Macmillan’s new platform, which would be powered by Knewton. ‘The power that will come from this world of adaptive learning takes my breath away’, wrote the international marketing director.

Not a lot happened next, at least outwardly. In the following year, 2015, the Macmillan catalogue again told customers to ‘look out for the Big Tree’ which would offer ‘flexible blended learning models’ which could ‘give teachers much more freedom to choose what they want to do in the class and what they want the students to do online outside of the classroom’.

[Image: Macmillan catalogue, 2015]

But behind the scenes, everything was going wrong. It had become clear that a linear model of language learning, which was a necessary prerequisite of the Knewton system, simply did not lend itself to anything which would be vaguely marketable in established markets. Skills development, not least the development of so-called 21st century skills, which Macmillan was pushing at the time, would not be facilitated by collecting huge amounts of data and algorithms offering personalized pathways. Even if it could, teachers weren’t ready for it, and the projections for platform adoptions were beginning to seem very over-optimistic. Costs were spiralling. Pushed to meet unrealistic deadlines for a product that was totally ill-conceived in the first place, in-house staff were suffering, and this was made worse by what many staffers thought was a toxic work environment. By the end of 2014 (so, before the copy for the 2015 catalogue had been written), the two executives had gone.

For some time previously, skeptics had been joking that Macmillan had been barking up the wrong tree, and by the time that the 2016 catalogue came out, the ‘Big Tree’ had disappeared without trace. The problem was that so much time and money had been thrown at this particular tree that not enough had been left to develop new course materials (for adults). The whole thing had been a huge cock-up of an extraordinary kind.

Cambridge, too, lost interest in their Knewton connection, but were fortunate (or wise) not to have invested so much energy in it. Language learning was only ever a small part of Knewton’s portfolio, and the company had raised over $180 million in venture capital. Its founder, Jose Ferreira, had been a master of marketing hype, but the business model was not delivering any better than the educational side of things. Pearson pulled out. In December 2016, Ferreira stepped down and was replaced as CEO. The company shifted to ‘selling digital courseware directly to higher-ed institutions and students’ but this could not stop the decline. In September of 2019, Knewton was sold for something under $17 million, with investors taking a hit of over $160 million. My heart bleeds.

It was clear, from very early on (see, for example, my posts from 2014 here and here) that Knewton’s product was little more than what Michael Feldstein called ‘snake oil’. Why and how could so many people fall for it for so long? Why and how will so many people fall for it again in the coming decade, although this time it won’t be ‘big data’ that does the seduction, but AI (which kind of boils down to the same thing)? The former Macmillan executives are still at the game, albeit in new companies and talking a slightly modified talk, and Jose Ferreira (whose new venture has already raised $3.7 million) is promising to revolutionize education with a new start-up which ‘will harness the power of technology to improve both access and quality of education’ (thanks to Audrey Watters for the tip). Investors may be desperate to find places to spread their portfolio, but why do the rest of us lap up the hype? It’s a question to which I will return.

 

 

 

 

The most widely-used and popular tool for language learners is the bilingual dictionary (Levy & Steel, 2015), and the first of its kind appeared about 4,000 years ago (2,000 years earlier than the first monolingual dictionaries), offering wordlists in Sumerian and Akkadian (Wheeler, 2013: 9 -11). Technology has come a long way since the clay tablets of the Bronze Age. Good online dictionaries now contain substantially more information (in particular audio recordings) than their print equivalents of a few decades ago. In addition, they are usually quicker and easier to use, more popular, and lead to retention rates that are comparable to, or better than, those achieved with print (Töpel, 2014). The future of dictionaries is likely to be digital, and paper dictionaries may well disappear before very long (Granger, 2012: 2).

English language learners are better served than learners of other languages, and the number of free, online bilingual dictionaries is now enormous. Speakers of less widely-spoken languages may still struggle to find a good quality service, but speakers of, for example, Polish (with approximately 40 million speakers, and a ranking of #33 in the list of the world’s most widely spoken languages) will find over twenty free, online dictionaries to choose from (Lew & Szarowska, 2017). Speakers of languages that are more widely spoken (Chinese, Spanish or Portuguese, for example) will usually find an even greater range. The choice can be bewildering and neither search engine results nor rankings from app stores can be relied on to suggest the product of the highest quality.

Language teachers are not always as enthusiastic about bilingual dictionaries as their learners. Folse (2004: 114 – 120) reports on an informal survey of English teachers which indicated that 11% did not allow any dictionaries in class at all, 37% allowed monolingual dictionaries and only 5% allowed bilingual dictionaries. Other researchers (e.g. Boonmoh & Nesi, 2008) have found a similar situation, with teachers overwhelmingly recommending the use of a monolingual learner’s dictionary: almost all of their students bought one, but the great majority hardly ever used it, preferring instead a digital bilingual version.

Teachers’ preferences for monolingual dictionaries are usually motivated in part by a fear that their students will become too reliant on translation. Whilst this concern remains widespread, much recent research suggests that this fear is misguided (Nation, 2013: 424) and that monolingual dictionaries do not actually lead to greater learning gains than their bilingual counterparts. This is, in part, due to the fact that learners typically use these dictionaries in very limited ways – to see if a word exists, check spelling or look up meaning (Harvey & Yuill, 1997). If they made fuller use of the information (about frequency, collocations, syntactic patterns, etc.) on offer, it is likely that learning gains would be greater: ‘it is accessing multiplicity of information that is likely to enhance retention’ (Laufer & Hill, 2000: 77). Without training, however, this is rarely the case. With lower-level learners, a monolingual learner’s dictionary (even one designed for Elementary level students) can be a frustrating experience, because until they have reached a vocabulary size of around 2,000 – 3,000 words, they will struggle to understand the definitions (Webb & Nation, 2017: 119).

The second reason for teachers’ preference for monolingual dictionaries is that the quality of many bilingual dictionaries is undoubtedly very poor, compared to monolingual learner’s dictionaries such as those produced by Oxford University Press, Cambridge University Press, Longman Pearson, Collins Cobuild, Merriam-Webster and Macmillan, among others. The situation has changed, however, with the rapid growth of bilingualized dictionaries. These contain all the features of a monolingual learner’s dictionary, but also include translations into the learner’s own language. Because of the wealth of information provided by a good bilingualized dictionary, researchers (e.g. Laufer & Hadar, 1997; Chen, 2011) generally consider them preferable to monolingual or normal bilingual dictionaries. They are also popular with learners. Good bilingualized online dictionaries (such as the Oxford Advanced Learner’s English-Chinese Dictionary) are not always free, but many are, and with some language pairings free software can be of a higher quality than services that incur a subscription charge.

If a good bilingualized dictionary is available, there is no longer any compelling reason to use a monolingual learner’s dictionary, unless it contains features which cannot be found elsewhere. In order to compete in a crowded marketplace, many of the established monolingual learner’s dictionaries do precisely that. Examples of good, free online dictionaries include:

Students need help in selecting a dictionary that is right for them. Without this, many end up using a tool such as Google Translate as a dictionary, which, for all its value, is of very limited use for that purpose. They need to understand that the most appropriate dictionary will depend on what they want to use it for (receptive, reading purposes or productive, writing purposes). Teachers can help in this decision-making process by addressing the issue in class (see the activity below).

In addition to the problem of selecting an appropriate dictionary, it appears that many learners have inadequate dictionary skills (Niitemaa & Pietilä, 2018). In one experiment (Tono, 2011), only one third of the vocabulary searches in a dictionary that were carried out by learners resulted in success. The reasons for failure include focussing on only the first meaning (or translation) of a word that is provided, difficulty in finding the relevant information in long word entries, an inability to find the lemma that is needed, and spelling errors (when they had to type in the word) (Töpel, 2014). As with monolingual dictionaries, learners often only check the meaning of a word in a bilingual dictionary and fail to explore the wider range of information (e.g. collocation, grammatical patterns, example sentences, synonyms) that is available (Laufer & Kimmel, 1997; Laufer & Hill, 2000; Chen, 2011). This information is useful in itself and may also lead to improved retention.

Most learners receive no training in dictionary skills, but would clearly benefit from it. Nation (2013: 333) suggests that at least four or five hours, spread out over a few weeks, would be appropriate. He suggests (ibid: 419 – 421) that training should encourage learners, first, to look closely at the context in which an unknown word is encountered (in order to identify the part of speech, the lemma that needs to be looked up and its possible meaning, and to decide whether it is worth looking up at all), then to find the relevant entry or sub-entry (with the help of information about common dictionary abbreviations, e.g. for parts of speech, style and register), and, finally, to check this information against the original context.

Two good resource books full of practical activities for dictionary training are available: ‘Dictionary Activities’ by Cindy Leaney (Cambridge: Cambridge University Press, 2007) and ‘Dictionaries’ by Jon Wright (Oxford: Oxford University Press, 1998). Many of the good monolingual dictionaries offer activity guides to promote effective dictionary use and I have suggested a few activities here.

Activity: Understanding a dictionary

Outline: Students explore the use of different symbols in good online dictionaries.

Level: All levels, but not appropriate for very young learners. The activity ‘Choosing a dictionary’ is a good follow-up to this activity.

1 Distribute the worksheet and ask students to follow the instructions.

[Worksheet: act_1]

2 Check the answers.

[Answer key: Act_1_key]

Activity: Choosing a dictionary

Outline: Students explore and evaluate the features of different free, online bilingual dictionaries.

Level: All levels, but not appropriate for very young learners. The text in stage 3 is appropriate for use with levels A2 and B1. For some groups of learners, you may want to adapt (or even translate) the list of features. It may be useful to do the activity ‘Understanding a dictionary’ before this activity.

1 Ask the class which free, online bilingual dictionaries they like to use. Write some of their suggestions on the board.

2 Distribute the list of features. Ask students to work individually and tick the boxes that are important for them. Ask students to work with a partner to compare their answers.

[Worksheet: Act_2 (list of features)]

3 Give students a list of free, online bilingual (English and the students’ own language) dictionaries. You can use suggestions from the list below, add the suggestions that your students made in stage 1, or add your own ideas. (For many language pairings, better resources are available than those in the list below.) Give the students the following short text and ask them to use two of these dictionaries to look up the underlined words. Ask them to decide which dictionary they found most useful and / or easiest to use.

[Text: act_2_text]

[Dictionary list: dict_list]

4 Conduct feedback with the whole class.

Activity: Getting more out of a dictionary

Outline: Students use a dictionary to help them to correct a text.

Level: Levels B1 and B2, but not appropriate for very young learners. For higher levels, a more complex text (with less obvious errors) would be appropriate.

1 Distribute the worksheet below and ask students to follow the instructions.

[Worksheet: act_3]

2 Check answers with the whole class. Ask how easy it was to find the information in the dictionary that they were using.

[Answer key]

When you are reading, you probably only need a dictionary when you don’t know the meaning of a word and you want to look it up. For this, a simple bilingual dictionary is good enough. But when you are writing or editing your writing, you will need something that gives you more information about a word: grammatical patterns, collocations (the words that usually go with other words), how formal the word is, and so on. For this, you will need a better dictionary. Many of the better dictionaries are monolingual (see the box), but there are also some good bilingual ones.

Use one (or more) of the online dictionaries in the box (or a good bilingual dictionary) and make corrections to this text. There are eleven mistakes (they have been underlined) in total.

References

Boonmoh, A. & Nesi, H. 2008. ‘A survey of dictionary use by Thai university staff and students with special reference to pocket electronic dictionaries’ Horizontes de Linguística Aplicada, 6(2): 79 – 90

Chen, Y. 2011. ‘Studies on Bilingualized Dictionaries: The User Perspective’. International Journal of Lexicography, 24 (2): 161–197

Folse, K. 2004. Vocabulary Myths. Ann Arbor: University of Michigan Press

Granger, S. 2012. Electronic Lexicography. Oxford: Oxford University Press

Harvey, K. & Yuill, D. 1997. ‘A study of the use of a monolingual pedagogical dictionary by learners of English engaged in writing’ Applied Linguistics, 51 (1): 253 – 78

Laufer, B. & Hadar, L. 1997. ‘Assessing the effectiveness of monolingual, bilingual and ‘bilingualized’ dictionaries in the comprehension and production of new words’. Modern Language Journal, 81 (2): 189 – 96

Laufer, B. & M. Hill 2000. ‘What lexical information do L2 learners select in a CALL dictionary and how does it affect word retention?’ Language Learning & Technology 3 (2): 58–76

Laufer, B. & Kimmel, M. 1997. ‘Bilingualised dictionaries: How learners really use them’, System, 25 (3): 361 – 369

Leaney, C. 2007. Dictionary Activities. Cambridge: Cambridge University Press

Levy, M. and Steel, C. 2015. ‘Language learner perspectives on the functionality and use of electronic language dictionaries’. ReCALL, 27(2): 177–196

Lew, R. & Szarowska, A. 2017. ‘Evaluating online bilingual dictionaries: The case of popular free English-Polish dictionaries’ ReCALL 29(2): 138–159

Nation, I.S.P. 2013. Learning Vocabulary in Another Language 2nd edition. Cambridge: Cambridge University Press

Niitemaa, M.-L. & Pietilä, P. 2018. ‘Vocabulary Skills and Online Dictionaries: A Study on EFL Learners’ Receptive Vocabulary Knowledge and Success in Searching Electronic Sources for Information’, Journal of Language Teaching and Research, 9 (3): 453-462

Tono, Y. 2011. ‘Application of eye-tracking in EFL learners’ dictionary look-up process research’, International Journal of Lexicography 24 (1): 124–153

Töpel, A. 2014. ‘Review of research into the use of electronic dictionaries’ in Müller-Spitzer, C. (Ed.) 2014. Using Online Dictionaries. Berlin: De Gruyter, pp. 13 – 54

Webb, S. & Nation, P. 2017. How Vocabulary is Learned. Oxford: Oxford University Press

Wheeler, G. 2013. Language Teaching through the Ages. New York: Routledge

Wright, J. 1998. Dictionaries. Oxford: Oxford University Press

I was intrigued to learn earlier this year that Oxford University Press had launched a new online test of English language proficiency, called the Oxford Test of English (OTE). At the conference where I first heard about it, I was struck by the fact that the presentation of the OUP sponsored plenary speaker was entitled ‘The Power of Assessment’ and dealt with formative assessment / assessment for learning. Oxford clearly want to position themselves as serious competitors to Pearson and Cambridge English in the testing business.

The brochure for the exam kicks off with a gem of a marketing slogan, ‘Smart. Smarter. SmarTest’ (geddit?), and the next few pages give us all the key information.

Faster and more flexible

‘Traditional language proficiency tests’ is presumably intended to refer to the main competition (Pearson and Cambridge English). Cambridge First takes, in total, 3½ hours; the Pearson Test of English Academic takes 3 hours. The OTE takes, in total, 2 hours and 5 minutes. It can be taken, in theory, on any day of the year, although this depends on the individual Approved Test Centres, and, again in theory, it can be booked as little as 14 days in advance. Results should take only two weeks to arrive. Further flexibility is offered in the way that candidates can pick ’n’ choose which of the four skills they want to have tested, just one or all four, although, as an incentive to go the whole hog, they will only get a ‘Certificate of Proficiency’ if they do all four.

A further incentive to do all four skills at the same time can be found in the price structure. One centre in Spain is currently offering the test for one single skill at €41.50, but do the whole lot, and it will only set you back €89. For a high-stakes test, this is cheap. In the UK right now, both Cambridge First and Pearson Academic cost in the region of £150, and IELTS a bit more than that. So, faster, more flexible and cheaper … Oxford means business.

Individual experience

The ‘individual experience’ on the next page of the brochure is pure marketing guff. This is, after all, a high-stakes, standardised test. It may be true that ‘the Speaking and Writing modules provide randomly generated tasks, making the overall test different each time’, but there can only be a certain number of permutations. What’s more, in ‘traditional tests’, like Cambridge First, where there is a live examiner or two, an individualised experience is unavoidable.

More interesting to me is the reference to adaptive technology. According to the brochure, ‘The Listening and Reading modules are adaptive, which means the test difficulty adjusts in response to your answers, quickly finding the right level for each test taker. This means that the questions are at just the right level of challenge, making the test shorter and less stressful than traditional proficiency tests’.

My curiosity piqued, I decided to look more closely at the Reading module. I found one practice test online which is the same as the demo that is available at the OTE website. Unfortunately, this example is not adaptive: it is at B1 level. The actual test records scores between 51 and 140, corresponding to levels A2, B1 and B2.

[Image: test scores]

The tasks in the Reading module are familiar from coursebooks and other exams: multiple choice, multiple matching and gapped texts.

[Image: reading tasks]

According to the exam specifications, these tasks are designed to measure the following skills:

  • Reading to identify main message, purpose, detail
  • Expeditious reading to identify specific information, opinion and attitude
  • Reading to identify text structure, organizational features of a text
  • Reading to identify attitude / opinion, purpose, reference, the meanings of words in context, global meaning

The ability to perform these skills depends, ultimately, on the candidate’s knowledge of vocabulary and grammar, as can be seen in the examples below.

[Images: Task 1, Task 2]

How exactly, I wonder, does the test difficulty adjust in response to the candidate’s answers? The algorithm that is used depends on measures of the difficulty of the test items. If these items are to be made harder or easier, the only significant way that I can see of doing this is by making the key vocabulary lower- or higher-frequency. This, in turn, is only possible if vocabulary and grammar have been tagged as being at a particular level. The best-known tools for doing this have been developed by Pearson (with the GSE Teacher Toolkit) and Cambridge English Profile. To the best of my knowledge, Oxford does not yet have a tool of this kind (at least, none that is publicly available). However, the data that OUP will accumulate from OTE scripts and recordings will be invaluable in building a database which their lexicographers can use in developing such a tool.
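To make the mechanics a little more concrete, here is a minimal sketch (entirely hypothetical, and certainly not OUP’s actual algorithm) of the simplest kind of adaptive logic: a ‘staircase’ that moves to a harder item after a correct answer and to an easier one after an incorrect answer. Commercial adaptive tests generally rely on more sophisticated item-response-theory models, but the point illustrated here stands either way: the adaptivity is only as good as the level-tagging of the items in the bank.

```python
import random

# Hypothetical item bank: items tagged by CEFR band. In a real test the
# tagging would rest on frequency data and piloting, not on invention.
ITEM_BANK = {
    "A2": [f"A2 item {i}" for i in range(1, 11)],
    "B1": [f"B1 item {i}" for i in range(1, 11)],
    "B2": [f"B2 item {i}" for i in range(1, 11)],
}
BANDS = ["A2", "B1", "B2"]


def run_adaptive_test(answer_item, n_items=22, start_band="B1"):
    """Administer n_items, moving up a band after a correct answer and
    down a band after an incorrect one (a simple 'staircase')."""
    band_index = BANDS.index(start_band)
    responses = []
    for _ in range(n_items):
        band = BANDS[band_index]
        item = random.choice(ITEM_BANK[band])
        correct = answer_item(item)  # the candidate's (correct/incorrect) response
        responses.append((band, correct))
        if correct:
            band_index = min(band_index + 1, len(BANDS) - 1)
        else:
            band_index = max(band_index - 1, 0)
    return responses


if __name__ == "__main__":
    # Simulate a candidate who is secure at B1: always right at A2,
    # right 60% of the time at B1, and 20% of the time at B2.
    probs = {"A2": 1.0, "B1": 0.6, "B2": 0.2}
    print(run_adaptive_test(lambda item: random.random() < probs[item[:2]]))
```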

Even when a data-driven (and numerically precise) tool is available for modifying the difficulty of test items, I still find it hard to understand how the adaptivity will impact on the length or the stress of the reading test. The Reading module is only 35 minutes long and contains only 22 items. Anything that is significantly shorter must surely impact on the reliability of the test.
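For what it’s worth, the standard psychometric rule of thumb here is the Spearman-Brown formula, which predicts the reliability of a shortened (or lengthened) test: r_new = k·r / (1 + (k - 1)·r), where r is the reliability of the original test and k is the ratio of the new length to the old. Halve a test with a reliability of 0.9, for example (k = 0.5), and the predicted reliability drops to about 0.82. Adaptive item selection is usually justified as a way of clawing back some of that loss, but with only 22 items there is not much room for manoeuvre.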

My conclusion from this is that the adaptive element of the Reading and Listening modules in the OTE is less important to the test itself than it is to building a sophisticated database (not dissimilar to the GSE Teacher Toolkit or Cambridge English Profile). The value of this will be found, in due course, in calibrating all OUP materials. The OTE has already been aligned to the Oxford Online Placement Test (OOPT) and, presumably, coursebooks will soon follow. This, in turn, will facilitate a vertically integrated business model, like Pearson and CUP, where everything from placement test, to coursework, to formative assessment, to final proficiency testing can be on offer.

The use of big data and analytics in education continues to grow.

A vast apparatus of measurement is being developed to underpin national education systems, institutions and the actions of the individuals who occupy them. […] The presence of digital data and software in education is being amplified through massive financial and political investment in educational technologies, as well as huge growth in data collection and analysis in policymaking practices, extension of performance measurement technologies in the management of educational institutions, and rapid expansion of digital methodologies in educational research. To a significant extent, many of the ways in which classrooms function, educational policy departments and leaders make decisions, and researchers make sense of data, simply would not happen as currently intended without the presence of software code and the digital data processing programs it enacts. (Williamson, 2017: 4)

The most common and successful use of this technology so far has been in the identification of students at risk of dropping out of their courses (Jørno & Gynther, 2018: 204). The kind of analytics used in this context may be called ‘academic analytics’ and focuses on educational processes at the institutional level or higher (Gelan et al, 2018: 3). However, ‘learning analytics’ – the capture and analysis of learner and learning data in order to personalize learning ‘(1) through real-time feedback on online courses and e-textbooks that can “learn” from how they are used and “talk back” to the teacher, and (2) individualization and personalization of the educational experience through adaptive learning systems that enable materials to be tailored to each student’s individual needs through automated real-time analysis’ (Mayer-Schönberger & Cukier, 2014) – has become ‘the main keyword of data-driven education’ (Williamson, 2017: 10). See my earlier posts on this topic here and here and here.
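As a rough illustration of what the ‘academic analytics’ end of this spectrum amounts to in practice, here is a minimal sketch of flagging at-risk students from engagement data. The field names and thresholds are invented for the purpose of illustration; real systems draw on institutional data and usually on statistical or machine-learning models rather than fixed cut-offs.

```python
from dataclasses import dataclass


@dataclass
class StudentActivity:
    # Invented engagement metrics for one student on a learning platform.
    student_id: str
    logins_last_30_days: int
    tasks_completed: int
    tasks_assigned: int
    days_since_last_login: int


def at_risk(a: StudentActivity) -> bool:
    """Flag a student whose engagement looks worryingly low.
    The thresholds are arbitrary and purely illustrative."""
    completion_rate = a.tasks_completed / a.tasks_assigned if a.tasks_assigned else 0.0
    return (
        a.days_since_last_login > 14
        or a.logins_last_30_days < 4
        or completion_rate < 0.5
    )


students = [
    StudentActivity("s001", 12, 18, 20, 2),
    StudentActivity("s002", 2, 5, 20, 21),
]

for s in students:
    print(s.student_id, "at risk" if at_risk(s) else "ok")
```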

Near the start of Mayer-Schönberger and Cukier’s enthusiastic sales pitch (Learning with Big Data: The Future of Education) for the use of big data in education, there is a discussion of Duolingo. They quote Luis von Ahn, the founder of Duolingo, as saying ‘there has been little empirical work on what is the best way to teach a foreign language’. This is so far from the truth as to be laughable. Von Ahn’s comment, along with the Duolingo product itself, is merely indicative of a lack of awareness of the enormous amount of research that has been carried out. But what could the data gleaned from the interactions of millions of users with Duolingo tell us of value? The example that is given is the following. Apparently, ‘in the case of Spanish speakers learning English, it’s common to teach pronouns early on: words like “he,” “she,” and “it”.’ But, Duolingo discovered, ‘the term “it” tends to confuse and create anxiety for Spanish speakers, since the word doesn’t easily translate into their language […] Delaying the introduction of “it” until a few weeks later dramatically improves the number of people who stick with learning English rather than drop out.’ Was von Ahn unaware of the decades of research into language transfer effects? Did von Ahn (who grew up speaking Spanish in Guatemala) need all this data to tell him that English personal pronouns can cause problems for Spanish learners of English? Was von Ahn unaware of the debates concerning the value of teaching isolated words (especially grammar words!)?

The area where little empirical research has been done is not in different ways of learning another language: it is in the use of big data and learning analytics to assist language learning. Claims about the value of these technologies in language learning are almost always speculative – they are based on comparison to other school subjects (especially, mathematics). Gelan et al (2018: 2), who note this lack of research, suggest that ‘understanding language learner behaviour could provide valuable insights into task design for instructors and materials designers, as well as help students with effective learning strategies and personalised learning pathways’ (my italics). Reinders (2018: 81) writes ‘that analysis of prior experiences with certain groups or certain courses may help to identify key moments at which students need to receive more or different support. Analysis of student engagement and performance throughout a course may help with early identification of learning problems and may prompt early intervention’ (italics added). But there is some research out there, and it’s worth having a look at. Most studies that have collected learner-tracking data concern glossary use for reading comprehension and vocabulary retention (Gelan et al, 2018: 5), but a few have attempted to go further in scope.

Volk et al (2015) looked at the behaviour of the 20,000 students per day who used the platform accompanying ‘More!’ (Gerngross et al. 2008), a coursebook for Austrian lower secondary schools, to do their English homework. They discovered that

  • the exercises used least frequently were those that are located further back in the course book
  • usage is highest from Monday to Wednesday, declining from Thursday, with a rise again on Sunday
  • most interaction took place between 3:00 and 5:00 pm.
  • repetition of exercises led to a strong improvement in success rate
  • students performed better on multiple choice and matching exercises than they did where they had to produce some language

The authors of this paper conclude by saying that ‘the results of this study suggest a number of new avenues for research. In general, the authors plan to extend their analysis of exercise results and applied exercises to the population of all schools using the online learning platform more-online.at. This step enables a deeper insight into student’s learning behaviour and allows making more generalizing statements.’ When I shared these research findings with the Austrian lower secondary teachers that I work with, their reaction was one of utter disbelief. People get paid to do this research? Why not just ask us?

More useful, more actionable insights may yet come from other sources. For example, Gu Yueguo, Pro-Vice-Chancellor of the Beijing Foreign Studies University, has announced the intention to set up a national Big Data research center, specializing in big data-related research topics in foreign language education (Yu, 2015). Meanwhile, I’m aware of only one big research project that has published its results. The EC Erasmus+ VITAL project (Visualisation Tools and Analytics to monitor Online Language Learning & Teaching) was carried out between 2015 and 2017 and looked at the learning trails of students from universities in Belgium, Britain and the Netherlands. It was discovered (Gelan et al, 2018) that:

  • students who did online exercises when they were supposed to do them were slightly more successful than those who were late carrying out the tasks
  • successful students logged on more often, spent more time online, attempted and completed more tasks, revisited both exercises and theory pages more frequently, did the work in the order in which it was supposed to be done and did more work in the holidays
  • most students preferred to go straight into the assessed exercises and only used the theory pages when they felt they needed to; successful students referred back to the theory pages more often than unsuccessful students
  • students made little use of the voice recording functionality
  • most online activity took place the day before a class and the day of the class itself

EU funding for this VITAL project amounted to 274,840 Euros[1]. The technology for capturing the data has been around for a long time. In my opinion, nothing of value, or at least nothing new, has been learnt. Publishers like Pearson and Cambridge University Press who have large numbers of learners using their platforms have been capturing learning data for many years. They do not publish their findings and, intriguingly, do not even claim that they have learnt anything useful / actionable from the data they have collected. Sure, an exercise here or there may need to be amended. Both teachers and students may need more support in using the more open-ended functionalities of the platforms (e.g. discussion forums). But are they getting ‘unprecedented insights into what works and what doesn’t’ (Mayer-Schönberger & Cukier, 2014)? Are they any closer to building better pedagogies? On the basis of what we know so far, you wouldn’t want to bet on it.

It may be the case that all the learning / learner data that is captured could be used in some way that has nothing to do with language learning. Show me a language-learning app developer who does not dream of monetizing the ‘behavioural surplus’ (Zuboff, 2019) that they collect! But, for the data and analytics to be of any value in guiding language learning, it must lead to actionable insights. Unfortunately, as Jørno & Gynther (2018: 198) point out, there is very little clarity about what is meant by ‘actionable insights’. There is a danger that data and analytics ‘simply gravitates towards insights that confirm longstanding good practice and insights, such as “students tend to ignore optional learning activities … [and] focus on activities that are assessed”’ (Jørno & Gynther, 2018: 211). While this is happening, the focus on data inevitably shapes the way we look at the object of study (i.e. language learning), ‘thereby systematically excluding other perspectives’ (Mau, 2019: 15; see also Beer, 2019). The belief that tech is always the solution, that all we need is more data and better analytics, remains very powerful: it’s called techno-chauvinism (Broussard, 2018: 7-8).

References

Beer, D. 2019. The Data Gaze. London: Sage

Broussard, M. 2018. Artificial Unintelligence. Cambridge, Mass.: MIT Press

Gelan, A., Fastre, G., Verjans, M., Martin, N., Jansenswillen, G., Creemers, M., Lieben, J., Depaire, B. & Thomas, M. 2018. ‘Affordances and limitations of learning analytics for computer-assisted language learning: a case study of the VITAL project’. Computer Assisted Language Learning, pp. 1-26. http://clok.uclan.ac.uk/21289/

Gerngross, G., Puchta, H., Holzmann, C., Stranks, J., Lewis-Jones, P. & Finnie, R. 2008. More! 1 Cyber Homework. Innsbruck, Austria: Helbling

Jørno, R. L. & Gynther, K. 2018. ‘What Constitutes an “Actionable Insight” in Learning Analytics?’ Journal of Learning Analytics 5 (3): 198 – 221

Mau, S. 2019. The Metric Society. Cambridge: Polity Press

Mayer-Schönberger, V. & Cukier, K. 2014. Learning with Big Data: The Future of Education. New York: Houghton Mifflin Harcourt

Reinders, H. 2018. ‘Learning analytics for language learning and teaching’. JALT CALL Journal 14 / 1: 77 – 86 https://files.eric.ed.gov/fulltext/EJ1177327.pdf

Volk, H., Kellner, K. & Wohlhart, D. 2015. ‘Learning Analytics for English Language Teaching.’ Journal of Universal Computer Science, Vol. 21 / 1: 156-174 http://www.jucs.org/jucs_21_1/learning_analytics_for_english/jucs_21_01_0156_0174_volk.pdf

Williamson, B. 2017. Big Data in Education. London: Sage

Yu, Q. 2015. ‘Learning Analytics: The next frontier for computer assisted language learning in big data age’ SHS Web of Conferences, 17 https://www.shs-conferences.org/articles/shsconf/pdf/2015/04/shsconf_icmetm2015_02013.pdf

Zuboff, S. 2019. The Age of Surveillance Capitalism. London: Profile Books

 

[1] See https://ec.europa.eu/programmes/erasmus-plus/sites/erasmusplus2/files/ka2-2015-he_en.pdf