Digital flashcard systems like Memrise and Quizlet remain among the most popular language learning apps. Their focus is on the deliberate learning of vocabulary, an approach described by Paul Nation (Nation, 2005) as ‘one of the least efficient ways of developing learners’ vocabulary knowledge but nonetheless […] an important part of a well-balanced vocabulary programme’. The deliberate teaching of vocabulary also features prominently in most platform-based language courses.

For both vocabulary apps and bigger courses, the lexical items need to be organised into sets for the purposes of both presentation and practice. A common way of doing this, especially at lower levels, is to group the items into semantic clusters (sets with a classifying superordinate, like body part, and a collection of example hyponyms, like arm, leg, head, chest, etc.).

The problem, as Keith Folse puts it, is that such clusters ‘are not only unhelpful, they actually hinder vocabulary retention’ (Folse, 2004: 52). Evidence for this claim may be found in Higa (1963), Tinkham (1993, 1997), Waring (1997), Erten & Tekin (2008) and Barcroft (2015), to cite just some of the more well-known studies. The results, says Folse, ‘are clear and, I think, very conclusive’. The explanation that is usually given draws on interference theory: semantic similarity may lead to confusion (e.g. when learners mix up days of the week, colour words or adjectives to describe personality).

It appears, then, to be long past time to get rid of semantic clusters in language teaching. Well … not so fast. First of all, although most of the research sides with Folse, not all of it does. Nakata and Suzuki (2019) in their survey of more recent research found that results were more mixed. They found one study which suggested that there was no significant difference in learning outcomes between presenting words in semantic clusters and semantically unrelated groups (Ishii, 2015). And they found four studies (Hashemi & Gowdasiaei, 2005; Hoshino, 2010; Schneider, Healy, & Bourne, 1998, 2002) where semantic clusters had a positive effect on learning.

Nakata and Suzuki (2019) offer three reasons why semantic clustering might facilitate vocabulary learning: it (1) ‘reflects how vocabulary is stored in the mental lexicon, (2) introduces desirable difficulty, and (3) leads to extra attention, effort, or engagement from learners’. Finkbeiner and Nicol (2003) make a similar point: ‘although learning semantically related words appears to take longer, it is possible that words learned under these conditions are learned better for the purpose of actual language use (e.g., the retrieval of vocabulary during production and comprehension). That is, the very difficulty associated with learning the new labels may make them easier to process once they are learned’. Both pairs of researchers cited in this paragraph conclude that semantic clusters are best avoided, but their discussion of the possible benefits of this clustering is a recognition that the research (for reasons which I will come on to) cannot lead to categorical conclusions.

The problem, as so often with pedagogical research, is the gap between research conditions and real-world classrooms. Before looking at this in a little more detail, one relatively uncontentious observation can be made. Even those scholars who advise against semantic clustering (e.g. Papathanasiou, 2009) acknowledge that the situation is complicated by other factors, especially the level of proficiency of the learner and whether or not one or more of the hyponyms are known to the learner. At higher levels (when it is more likely that one or more of the hyponyms are already known, at least partially), semantic clustering is not a problem. I would add that, at higher levels, the deliberate learning of vocabulary is on the whole even less efficient than at lower levels and should be an increasingly small part of a well-balanced vocabulary programme.

So, why is there a problem drawing practical conclusions from the research? In order to have any scientific validity at all, researchers need to control a large number of variables. They need, for example, to be sure that learners do not already know any of the items that are being presented. The only practical way of doing this is to present sets of invented words, and this is what most of the research does (Sarioğlu, 2018). These artificial words solve one problem, but create others, the most significant of which is item difficulty. Many factors impact on item difficulty, and these include word frequency (obviously a problem with invented words), word length, pronounceability and the familiarity and length of the corresponding item in L1. None of the studies which support the abandonment of semantic clusters have controlled all of these variables (Nakata and Suzuki, 2019). Indeed, it would be practically impossible to do so. Learning pseudo-words is a very different proposition to learning real words, which a learner may subsequently encounter or want to use.

Take, for example, the days of the week. It’s quite common for learners to muddle up Tuesday and Thursday. The reason for this is not just semantic similarity (Tuesday and Monday are less frequently confused). They are also very similar in terms of both spelling and pronunciation. They are ‘synforms’ (see Laufer, 1988), which, like semantic clusters, can hinder the learning of new items. But now imagine a French-speaking learner of Spanish studying the days of the week. It is much less likely that martes and jueves will be muddled, because of their similarity to the French words mardi and jeudi. There would appear to be no good reason not to teach the complete set of days of the week to a learner like this. All other things being equal, it is probably a good idea to avoid semantic clusters, but all other things are very rarely equal.

Again, in an attempt to control for variables, researchers typically present the target items in isolation (in bilingual pairings). But, again, the real world does not normally conform to this condition. Leo Selivan (2014) suggests that semantic clusters (e.g. colours) be taught as part of collocations. He gives the examples of red dress, green grass and black coffee, and points out that the alliterative patterns can serve as mnemonic devices which will facilitate learning. The suggestion is, I think, a very good one, but, more generally, it’s worth noting that the presentation of lexical items in both digital flashcards and platform courses is rarely context-free. Contexts will inevitably impact on learning and may well obviate the risks of semantic clustering.

Finally, this kind of research typically gives participants very restricted time to memorize the target words (Sarioğlu, 2018) and they are tested in very controlled recall tasks. In the case of language platform courses, practice of target items is usually spread out over a much longer period of time, with a variety of exposure opportunities (in controlled practice tasks, exposure in texts, personalisation tasks, revision exercises, etc.) both within and across learning units. In this light, it is not unreasonable to argue that laboratory-type research offers only limited insights into what should happen in the real world of language learning and teaching. The choice of learning items, the way they are presented and practised, and the variety of activities in the well-balanced vocabulary programme are probably all more significant than the question of whether items are organised into semantic clusters.

Although semantic clusters are quite common in language learning materials, much more common are thematic clusters (i.e. groups of words which are topically related but include a variety of parts of speech; see below). Researchers, it seems, have no problem with this way of organising lexical sets. By way of conclusion, here’s an extract from a recent book:

‘Introducing new words together that are similar in meaning (synonyms), such as scared and frightened, or forms (synforms), like contain and maintain, can be confusing, and students are less likely to remember them. This problem is known as ‘interference’. One way to avoid this is to choose words that are around the same theme, but which include a mix of different parts of speech. For example, if you want to focus on vocabulary to talk about feelings, instead of picking lots of adjectives (happy, sad, angry, scared, frightened, nervous, etc.) include some verbs (feel, enjoy, complain) and some nouns (fun, feelings, nerves). This also encourages students to use a variety of structures with the vocabulary.’ (Hughes et al., 2019: 25)

 

References

Barcroft, J. 2015. Lexical Input Processing and Vocabulary Learning. Amsterdam: John Benjamins

Erten, I.H., & Tekin, M. 2008. Effects on vocabulary acquisition of presenting new words in semantic sets versus semantically-unrelated sets. System, 36 (3), 407-422

Finkbeiner, M. & Nicol, J. 2003. Semantic category effects in second language word learning. Applied Psycholinguistics 24 (2003), 369–383

Folse, K. S. 2004. Vocabulary Myths. Ann Arbor: University of Michigan Press

Hashemi, M.R., & Gowdasiaei, F. 2005. An attribute-treatment interaction study: Lexical-set versus semantically-unrelated vocabulary instruction. RELC Journal, 36 (3), 341-361

Higa, M. 1963. Interference effects of intralist word relationships in verbal learning. Journal of Verbal Learning and Verbal Behavior, 2, 170-175

Hoshino, Y. 2010. The categorical facilitation effects on L2 vocabulary learning in a classroom setting. RELC Journal, 41, 301–312

Hughes, S. H., Mauchline, F. & Moore, J. 2019. ETpedia Vocabulary. Shoreham-by-Sea: Pavilion Publishing and Media

Ishii, T. 2015. Semantic connection or visual connection: Investigating the true source of confusion. Language Teaching Research, 19, 712–722

Laufer, B. 1988. The concept of ‘synforms’ (similar lexical forms) in vocabulary acquisition. Language and Education, 2 (2): 113 – 132

Nakata, T. & Suzuki, Y. 2019. Effects of massing and spacing on the learning of semantically related and unrelated words. Studies in Second Language Acquisition 41 (2), 287 – 311

Nation, P. 2005. Teaching Vocabulary. Asian EFL Journal. http://www.asian-efl-journal.com/sept_05_pn.pdf

Papathanasiou, E. 2009. An investigation of two ways of presenting vocabulary. ELT Journal 63 (4), 313 – 322

Sarioğlu, M. 2018. A Matter of Controversy: Teaching New L2 Words in Semantic Sets or Unrelated Sets. Journal of Higher Education and Science Vol 8 / 1: 172 – 183

Schneider, V. I., Healy, A. F., & Bourne, L. E. 1998. Contextual interference effects in foreign language vocabulary acquisition and retention. In Healy, A. F. & Bourne, L. E. (Eds.), Foreign language learning: Psycholinguistic studies on training and retention (pp. 77–90). Mahwah, NJ: Erlbaum

Schneider, V. I., Healy, A. F., & Bourne, L. E. 2002. What is learned under difficult conditions is hard to forget: Contextual interference effects in foreign vocabulary acquisition, retention, and transfer. Journal of Memory and Language, 46, 419–440

Selivan, L. 2014. Horizontal alternatives to vertical lists. Blog post: http://leoxicon.blogspot.com/2014/03/horizontal-alternatives-to-vertical.html

Tinkham, T. 1993. The effect of semantic clustering on the learning of second language vocabulary. System 21 (3), 371-380

Tinkham, T. 1997. The effects of semantic and thematic clustering on the learning of a second language vocabulary. Second Language Research, 13 (2),138-163

Waring, R. 1997. The negative effects of learning words in semantic sets: a replication. System, 25 (2), 261 – 274


At a recent ELT conference, a plenary presentation entitled ‘Getting it right with edtech’ (sponsored by a vendor of – increasingly digital – ELT products) began with the speaker suggesting that technology was basically neutral, that what you do with educational technology matters far more than the nature of the technology itself. The idea that technology is a ‘neutral tool’ has a long pedigree and often accompanies exhortations to embrace edtech in one form or another (see for example Fox, 2001). It is an idea that is supported by no less a luminary than Chomsky, who, in a 2012 video entitled ‘The Purpose of Education’ (Chomsky, 2012), said that:

As far as […] technology […] and education is concerned, technology is basically neutral. It’s kind of like a hammer. I mean, […] the hammer doesn’t care whether you use it to build a house or whether a torturer uses it to crush somebody’s skull; a hammer can do either. The same with the modern technology; say, the Internet, and so on.

Although hammers are not usually classic examples of educational technology, they are worthy of a short discussion. Hammers come in all shapes and sizes and when you choose one, you need to consider its head weight (usually between 16 and 20 ounces), the length of the handle, the shape of the grip, etc. Appropriate specifications for particular hammering tasks have been calculated in great detail. The data on which these specifications are based comes from an analysis of the hand size and upper body strength of the typical user. The typical user is a man, and the typical hammer has been designed for a man. The average male hand length is 177.9 mm; that of the average woman is 10 mm shorter (Wang & Cai, 2017). Women typically have about half the upper body strength of men (Miller et al., 1993). It’s possible, but not easy, to find hammers designed for women (they are referred to as ‘Ladies hammers’ on Amazon). They have a much lighter head weight, a shorter handle length, and many come in pink or floral designs. Hammers, in other words, are far from neutral: they are highly gendered.

Moving closer to educational purposes and ways in which we might ‘get it right with edtech’, it is useful to look at the smart phone. The average size of these devices has risen in recent years, and is now 5.5 inches, with the market for 6 inch screens growing fast. Why is this an issue? Well, as Caroline Criado Perez (2019: 159) notes, ‘while we’re all admittedly impressed by the size of your screen, it’s a slightly different matter when it comes to fitting into half the population’s hands. The average man can fairly comfortably use his device one-handed – but the average woman’s hand is not much bigger than the handset itself’. This is despite the fact that women are more likely to own an iPhone than men.

It is not, of course, just technological artefacts that are gendered. Voice-recognition software is also very biased. One researcher (Tatman, 2017) has found that Google’s speech recognition tool is 13% more accurate for men than it is for women. There are also significant biases for race and social class. The reason lies in the dataset that the tool is trained on: the algorithms may be gender- and socio-culturally-neutral, but the dataset is not. It would not be difficult to redress this bias by training the tool on a different dataset.
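
By way of illustration, here is a minimal sketch of how a bias audit of this kind works: compute word error rate (WER) separately for each speaker group and compare. The transcripts and group labels below are invented placeholders, not Tatman’s data; the WER calculation itself is the standard edit-distance formulation.

```python
# Sketch of a speech-recognition bias audit: word error rate (WER) per group.
# The transcripts below are invented placeholders, not real recognizer output.

def wer(reference: str, hypothesis: str) -> float:
    """Word error rate via Levenshtein distance over word tokens."""
    r, h = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between first i reference words and first j hypothesis words
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            substitution = d[i - 1][j - 1] + (r[i - 1] != h[j - 1])
            d[i][j] = min(substitution, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(r)][len(h)] / len(r)

samples = [  # (speaker group, reference transcript, recognizer hypothesis)
    ("men",   "turn the volume up",  "turn the volume up"),
    ("men",   "play the next track", "play the best track"),
    ("women", "turn the volume up",  "turn the volume cup"),
    ("women", "play the next track", "played the near track"),
]

for group in ("men", "women"):
    rates = [wer(ref, hyp) for g, ref, hyp in samples if g == group]
    print(f"{group}: mean WER = {sum(rates) / len(rates):.2%}")
```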

The same bias can be found in automatic translation software. Because corpora such as the BNC or COCA have twice as many male pronouns as female ones (as a result of the kinds of text that are selected for the corpora), translation software reflects the bias. With Google Translate, a sentence in a language with a gender-neutral pronoun, such as ‘S/he is a doctor’ is rendered into English as ‘He is a doctor’. Meanwhile, ‘S/he is a nurse’ is translated as ‘She is a nurse’ (Criado Perez, 2019: 166).
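
The corpus side of this claim is easy to check for yourself. Here is a minimal sketch that counts gendered pronouns in the Brown corpus, which ships with NLTK (the BNC and COCA are licensed, so the exact ratio will differ, but the imbalance is of the same kind):

```python
# Count gendered pronouns in the Brown corpus.
# Requires: pip install nltk, then nltk.download('brown')
from nltk.corpus import brown

male = {"he", "him", "his", "himself"}
female = {"she", "her", "hers", "herself"}

counts = {"male": 0, "female": 0}
for word in brown.words():
    w = word.lower()
    if w in male:
        counts["male"] += 1
    elif w in female:
        counts["female"] += 1

print(counts, "male/female ratio: %.2f" % (counts["male"] / counts["female"]))
```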

Datasets, then, are often very far from neutral. Algorithms are not necessarily any more neutral than the datasets, and Cathy O’Neil’s best-seller ‘Weapons of Math Destruction’ catalogues the many, many ways in which algorithms, posing as neutral mathematical tools, can increase racial, social and gender inequalities.

It would not be hard to provide many more examples, but the selection above is probably enough. Technology, as Langdon Winner (Winner, 1980) observed almost forty years ago, is ‘deeply interwoven in the conditions of modern politics’. Technology cannot be neutral: it has politics.

So far, I have focused primarily on the non-neutrality of technology in terms of gender (and, in passing, race and class). Before returning to broader societal issues, I would like to make a relatively brief mention of another kind of non-neutrality: the pedagogic. Language learning materials necessarily contain content of some kind: texts, topics, the choice of values or role models, language examples, and so on. These cannot be value-free. In the early days of educational computer software, one researcher (Biraimah, 1993) found that it was ‘at least, if not more, biased than the printed page it may one day replace’. My own impression is that this remains true today.

Equally interesting to my mind is the fact that all educational technologies, ranging from the writing slate to the blackboard (see Buzbee, 2014), from the overhead projector to the interactive whiteboard, always privilege a particular kind of teaching (and learning). ‘Technologies are inherently biased because they are built to accomplish certain very specific goals which means that some technologies are good for some tasks while not so good for other tasks’ (Zhao et al., 2004: 25). Digital flashcards, for example, inevitably encourage a focus on rote learning. Contemporary LMSs have impressive multi-functionality (i.e. they often could be used in a very wide variety of ways), but, in practice, most teachers use them in very conservative ways (Laanpere et al., 2004). This may be a result of teacher and institutional preferences, but it is almost certainly due, at least in part, to the way that LMSs are designed. They are usually ‘based on traditional approaches to instruction dating from the nineteenth century: presentation and assessment [and] this can be seen in the selection of features which are most accessible in the interface, and easiest to use’ (Lane, 2009).

The argument that educational technology is neutral because it could be put to many different uses, good or bad, is problematic because the likelihood of one particular use is usually much greater than another. There is, however, another way of looking at technological neutrality, and that is to look at its origins. Elsewhere on this blog, in post after post, I have given examples of the ways in which educational technology has been developed, marketed and sold primarily for commercial purposes. Educational values, if indeed there are any, are often an afterthought. The research literature in this area is rich and growing: Stephen Ball, Larry Cuban, Neil Selwyn, Joel Spring, Audrey Watters, etc.

Rather than revisit old ground here, this is an opportunity to look at a slightly different origin of educational technology: the US military. The close connection of the early history of the internet and the Advanced Research Projects Agency (now DARPA) of the United States Department of Defense is fairly well-known. Much less well-known are the very close connections between the US military and educational technologies, which are catalogued in the recently reissued ‘The Classroom Arsenal’ by Douglas D. Noble.

Following the twin shocks of the Soviet Sputnik 1 (in 1957) and Yuri Gagarin (in 1961), the United States launched a massive programme of investment in the development of high-tech weaponry. This included ‘computer systems design, time-sharing, graphics displays, conversational programming languages, heuristic problem-solving, artificial intelligence, and cognitive science’ (Noble, 1991: 55), all of which are now crucial components in educational technology. But it also quickly became clear that more sophisticated weapons required much better trained operators, hence the US military’s huge (and continuing) interest in training. Early interest focused on teaching machines and programmed instruction (branches of the US military were by far the biggest purchasers of programmed instruction products). It was essential that training was effective and efficient, and this led to a wide interest in the mathematical modelling of learning and instruction.

What was then called computer-based education (CBE) was developed as a response to military needs. The first experiments in computer-based training took place at the Systems Research Laboratory of the Air Force’s RAND Corporation think tank (Noble, 1991: 73). Research and development in this area accelerated in the 1960s and 1970s and CBE (which has morphed into the platforms of today) ‘assumed particular forms because of the historical, contingent, military contexts for which and within which it was developed’ (Noble, 1991: 83). It is possible to imagine computer-based education having developed in very different directions. Between the 1960s and 1980s, for example, the PLATO (Programmed Logic for Automatic Teaching Operations) project at the University of Illinois focused heavily on computer-mediated social interaction (forums, message boards, email, chat rooms and multi-player games). PLATO was also significantly funded by a variety of US military agencies, but proved to be of much less interest to the generals than the work taking place in other laboratories. As Noble observes, ‘some technologies get developed while others do not, and those that do are shaped by particular interests and by the historical and political circumstances surrounding their development’ (Noble, 1991: 4).

According to Noble, however, the influence of the military reached far beyond the development of particular technologies. Alongside the investment in technologies, the military were the prime movers in a campaign to promote computer literacy in schools.

Computer literacy was an ideological campaign rather than an educational initiative – a campaign designed, at bottom, to render people ‘comfortable’ with the ‘inevitable’ new technologies. Its basic intent was to win the reluctant acquiescence of an entire population in a brave new world sculpted in silicon.

The computer campaign also succeeded in getting people in front of that screen and used to having computers around; it made people ‘computer-friendly’, just as computers were being rendered ‘user-friendly’. It also managed to distract the population, suddenly propelled by the urgency of learning about computers, from learning about other things, such as how computers were being used to erode the quality of their working lives, or why they, supposedly the citizens of a democracy, had no say in technological decisions that were determining the shape of their own futures.

Third, it made possible the successful introduction of millions of computers into schools, factories and offices, even homes, with minimal resistance. The nation’s public schools have by now spent over two billion dollars on over a million and a half computers, and this trend still shows no signs of abating. At this time, schools continue to spend one-fifth as much on computers, software, training and staffing as they do on all books and other instructional materials combined. Yet the impact of this enormous expenditure is a stockpile of often idle machines, typically used for quite unimaginative educational applications. Furthermore, the accumulated results of three decades of research on the effectiveness of computer-based instruction remain ‘inconclusive and often contradictory’. (Noble, 1991: x – xi)

Rather than being neutral in any way, it seems more reasonable to argue, along with (I think) most contemporary researchers, that edtech is profoundly value-laden because it has the potential to (i) influence certain values in students; (ii) change educational values in [various] ways; and (iii) change national values (Omotoyinbo & Omotoyinbo, 2016: 173). Most importantly, the growth in the use of educational technology has been accompanied by a change in the way that education itself is viewed: ‘as a tool, a sophisticated supply system of human cognitive resources, in the service of a computerized, technology-driven economy’ (Noble, 1991: 1). These two trends are inextricably linked.

References

Biraimah, K. 1993. The non-neutrality of educational computer software. Computers and Education 20 / 4: 283 – 290

Buzbee, L. 2014. Blackboard: A Personal History of the Classroom. Minneapolis: Graywolf Press

Chomsky, N. 2012. The Purpose of Education (video). Learning Without Frontiers Conference. https://www.youtube.com/watch?v=DdNAUJWJN08

Criado Perez, C. 2019. Invisible Women. London: Chatto & Windus

Fox, R. 2001. Technological neutrality and practice in higher education. In A. Herrmann and M. M. Kulski (Eds), Expanding Horizons in Teaching and Learning. Proceedings of the 10th Annual Teaching Learning Forum, 7-9 February 2001. Perth: Curtin University of Technology. http://clt.curtin.edu.au/events/conferences/tlf/tlf2001/fox.html

Laanpere, M., Poldoja, H. & Kikkas, K. 2004. The second thoughts about pedagogical neutrality of LMS. Proceedings of IEEE International Conference on Advanced Learning Technologies, 2004. https://ieeexplore.ieee.org/abstract/document/1357664

Lane, L. 2009. Insidious pedagogy: How course management systems impact teaching. First Monday, 14(10). https://firstmonday.org/ojs/index.php/fm/article/view/2530/2303

Miller, A.E., MacDougall, J.D., Tarnopolsky, M. A. & Sale, D.G. 1993. ‘Gender differences in strength and muscle fiber characteristics’ European Journal of Applied Physiology and Occupational Physiology. 66(3): 254-62 https://www.ncbi.nlm.nih.gov/pubmed/8477683

Noble, D. D. 1991. The Classroom Arsenal. Abingdon, Oxon.: Routledge

Omotoyinbo, D. W. & Omotoyinbo, F. R. 2016. Educational Technology and Value Neutrality. Societal Studies, 8 / 2: 163 – 179 https://www3.mruni.eu/ojs/societal-studies/article/view/4652/4276

O’Neil, C. 2016. Weapons of Math Destruction. London: Penguin

Sundström, P. 1998. Interpreting the Notion that Technology is Value Neutral. Medicine, Health Care and Philosophy 1: 42-44

Tatman, R. 2017. ‘Gender and Dialect Bias in YouTube’s Automatic Captions’ Proceedings of the First Workshop on Ethics in Natural Language Processing, pp. 53–59 http://www.ethicsinnlp.org/workshop/pdf/EthNLP06.pdf

Wang, C. & Cai, D. 2017. ‘Hand tool handle design based on hand measurements’ MATEC Web of Conferences 119, 01044 (2017) https://www.matec-conferences.org/articles/matecconf/pdf/2017/33/matecconf_imeti2017_01044.pdf

Winner, L. 1980. Do Artifacts have Politics? Daedalus 109 / 1: 121 – 136

Zhao, Y, Alvarez-Torres, M. J., Smith, B. & Tan, H. S. 2004. The Non-neutrality of Technology: a Theoretical Analysis and Empirical Study of Computer Mediated Communication Technologies. Journal of Educational Computing Research 30 (1 &2): 23 – 55

When the startup, AltSchool, was founded in 2013 by Max Ventilla, the former head of personalization at Google, it quickly drew the attention of venture capitalists and within a few years had raised $174 million from the likes of the Zuckerberg Foundation, Peter Thiel, Laurene Powell Jobs and Pierre Omidyar. It garnered gushing articles in a fawning edtech press which enthused about ‘how successful students can be when they learn in small, personalized communities that champion project-based learning, guided by educators who get a say in the technology they use’. It promised ‘a personalized learning approach that would far surpass the standardized education most kids receive’.

Ventilla was an impressive money-raiser who used, and appeared to believe, every cliché in the edTech sales manual. Dressed in regulation jeans, polo shirt and fleece, he claimed that schools in America were ‘stuck in an industrial-age model, [which] has been in steady decline for the last century’. What he offered, instead, was a learner-centred, project-based curriculum providing real-world lessons. There was a focus on social-emotional learning activities, and critical thinking was vital.

The key to the approach was technology. From the start, software developers, engineers and researchers worked alongside teachers every day, ‘constantly tweaking the Personalized Learning Plan, which shows students their assignments for each day and helps teachers keep track of and assess students’ learning’. There were tablets for pre-schoolers, laptops for older kids and wall-mounted cameras to record the lessons. There were, of course, Khan Academy videos. Ventilla explained that “we start with a representation of each child”, and even though “the vast majority of the learning should happen non-digitally”, the child’s habits and preferences get converted into data, “a digital representation of the important things that relate to that child’s learning, not just their academic learning but also their non-academic learning. Everything logistic that goes into setting up the experience for them, whether it’s who has permission to pick them up or their allergy information. You name it.” And just like Netflix matches us to TV shows, “If you have that accurate and actionable representation for each child, now you can start to personalize the whole experience for that child. You can create that kind of loop you described where because we can represent a child well, we can match them to the right experiences.”

AltSchool seemed to offer the possibility of doing something noble, of transforming education, ‘bringing it into the digital age’, and, at the same time, a healthy return on investors’ money. Expanding rapidly, nine AltSchool microschools were opened in New York and the Bay Area, and plans were afoot for further expansion in Chicago. But, by then, it was already clear that something was going wrong. Five of the schools were closed before they had really got started and the attrition rate in some classrooms had reached about 30%. Revenue in 2018 was only $7 million and there were few buyers for the AltSchool platform. Quoting once more from the edTech bible, Ventilla explained the situation: ‘Our whole strategy is to spend more than we make,’ he said. Since software is expensive to develop and cheap to distribute, the losses, he believed, would turn into steep profits once AltSchool refined its product and landed enough customers.

The problems were many and apparent. Some of the buildings were simply not appropriate for schools, with no playgrounds or gyms and malfunctioning toilets, among other issues. Parents were becoming unhappy and accused AltSchool of putting ‘its ambitions as a tech company above its responsibility to teach their children. […] We kind of came to the conclusion that, really, AltSchool as a school was kind of a front for what Max really wants to do, which is develop software that he’s selling,’ a parent of a former AltSchool student told Business Insider. ‘We had really mediocre educators using technology as a crutch,’ said one father who transferred his child to a different private school after two years at AltSchool. ‘[…] We learned that it’s almost impossible to really customize the learning experience for each kid.’ Some parents began to wonder whether AltSchool had enticed families into its program merely to extract data from their children, then toss them aside.

With the benefit of hindsight, it would seem that the accusations were hardly unfair. In June of this year, AltSchool announced that its four remaining schools would be operated by a new partner, Higher Ground Education (a well-funded startup founded in 2016 which promotes and ‘modernises’ Montessori education). Meanwhile, AltSchool has been rebranded as Altitude Learning, focusing its ‘resources on the development and expansion of its personalized learning platform’ for licensing to other schools across the country.

Quoting once more from the edTech sales manual, Ventilla has said that education should drive the tech, not the other way round. Not so many years earlier, before starting AltSchool, Ventilla also said that he had read two dozen books on education and emerged a fan of Sir Ken Robinson. He had no experience as a teacher or as an educational administrator. Instead, he had ‘extensive knowledge of networks, and he understood the kinds of insights that can be gleaned from big data’.

The use of big data and analytics in education continues to grow.

A vast apparatus of measurement is being developed to underpin national education systems, institutions and the actions of the individuals who occupy them. […] The presence of digital data and software in education is being amplified through massive financial and political investment in educational technologies, as well as huge growth in data collection and analysis in policymaking practices, extension of performance measurement technologies in the management of educational institutions, and rapid expansion of digital methodologies in educational research. To a significant extent, many of the ways in which classrooms function, educational policy departments and leaders make decisions, and researchers make sense of data, simply would not happen as currently intended without the presence of software code and the digital data processing programs it enacts. (Williamson, 2017: 4)

The most common and successful use of this technology so far has been in the identification of students at risk of dropping out of their courses (Jørno & Gynther, 2018: 204). The kind of analytics used in this context may be called ‘academic analytics’ and focuses on educational processes at the institutional level or higher (Gelan et al, 2018: 3). However, ‘learning analytics’, the capture and analysis of learner and learning data in order to personalize learning ‘(1) through real-time feedback on online courses and e-textbooks that can ‘learn’ from how they are used and ‘talk back’ to the teacher, and (2) individualization and personalization of the educational experience through adaptive learning systems that enable materials to be tailored to each student’s individual needs through automated real-time analysis’ (Mayer-Schönberger & Cukier, 2014) has become ‘the main keyword of data-driven education’ (Williamson, 2017: 10). See my earlier posts on this topic here and here and here.
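
For a concrete sense of what the ‘academic analytics’ mentioned above usually amounts to in code, here is a minimal sketch of a dropout-risk classifier. All the data is invented and the feature names are hypothetical; real systems differ mainly in scale and feature engineering, not in kind.

```python
# Sketch of an 'academic analytics' at-risk classifier (invented data).
import numpy as np
from sklearn.linear_model import LogisticRegression

# One row per past student: [logins per week, tasks completed, average score]
X = np.array([
    [5, 20, 0.85], [4, 18, 0.80], [6, 25, 0.90], [1, 3, 0.40],
    [0, 1, 0.30], [2, 5, 0.55], [5, 22, 0.75], [1, 2, 0.35],
])
y = np.array([0, 0, 0, 1, 1, 1, 0, 1])  # 1 = dropped out of the course

model = LogisticRegression().fit(X, y)

# Flag current students whose predicted dropout probability exceeds a threshold.
current = np.array([[1, 4, 0.45], [5, 19, 0.82]])
for features, p in zip(current, model.predict_proba(current)[:, 1]):
    if p > 0.5:
        print(f"at risk (p={p:.2f}): {features}")
```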

Near the start of Mayer-Schönberger and Cukier’s enthusiastic sales pitch (Learning with Big Data: The Future of Education) for the use of big data in education, there is a discussion of Duolingo. They quote Luis von Ahn, the founder of Duolingo, as saying ‘there has been little empirical work on what is the best way to teach a foreign language’. This is so far from the truth as to be laughable. Von Ahn’s comment, along with the Duolingo product itself, is merely indicative of a lack of awareness of the enormous amount of research that has been carried out. But what could the data gleaned from the interactions of millions of users with Duolingo tell us of value? The example that is given is the following. Apparently, ‘in the case of Spanish speakers learning English, it’s common to teach pronouns early on: words like “he,” “she,” and “it”.’ But, Duolingo discovered, ‘the term “it” tends to confuse and create anxiety for Spanish speakers, since the word doesn’t easily translate into their language […] Delaying the introduction of “it” until a few weeks later dramatically improves the number of people who stick with learning English rather than drop out.’ Was von Ahn unaware of the decades of research into language transfer effects? Did von Ahn (who grew up speaking Spanish in Guatemala) need all this data to tell him that English personal pronouns can cause problems for Spanish learners of English? Was von Ahn unaware of the debates concerning the value of teaching isolated words (especially grammar words!)?
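
Incidentally, the evidence behind a claim like Duolingo’s is nothing more exotic than a large A/B test on retention. A sketch with invented numbers, using a standard two-proportion z-test:

```python
# Sketch of an A/B test on learner retention (all numbers invented).
from math import sqrt
from statistics import NormalDist

# Group A: 'it' introduced early; Group B: 'it' delayed.
n_a, retained_a = 10000, 5200
n_b, retained_b = 10000, 5600

p_a, p_b = retained_a / n_a, retained_b / n_b
p_pool = (retained_a + retained_b) / (n_a + n_b)
se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
z = (p_b - p_a) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))

print(f"retention A={p_a:.1%}, B={p_b:.1%}, z={z:.2f}, p={p_value:.4f}")
```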

The area where little empirical research has been done is not in different ways of learning another language: it is in the use of big data and learning analytics to assist language learning. Claims about the value of these technologies in language learning are almost always speculative – they are based on comparison to other school subjects (especially, mathematics). Gelan et al (2018: 2), who note this lack of research, suggest that ‘understanding language learner behaviour could provide valuable insights into task design for instructors and materials designers, as well as help students with effective learning strategies and personalised learning pathways’ (my italics). Reinders (2018: 81) writes ‘that analysis of prior experiences with certain groups or certain courses may help to identify key moments at which students need to receive more or different support. Analysis of student engagement and performance throughout a course may help with early identification of learning problems and may prompt early intervention’ (italics added). But there is some research out there, and it’s worth having a look at. Most studies that have collected learner-tracking data concern glossary use for reading comprehension and vocabulary retention (Gelan et al, 2018: 5), but a few have attempted to go further in scope.

Volk et al (2015) looked at the behaviour of the 20,000 students per day who use the platform accompanying ‘More!’ (Gerngross et al. 2008), a coursebook for Austrian lower secondary schools, to do their English homework. They discovered that

  • the exercises used least frequently were those that are located further back in the course book
  • usage is highest from Monday to Wednesday, declining from Thursday, with a rise again on Sunday
  • most interaction took place between 3:00 and 5:00 pm
  • repetition of exercises led to a strong improvement in success rate
  • students performed better on multiple choice and matching exercises than they did where they had to produce some language

The authors of this paper conclude by saying that ‘the results of this study suggest a number of new avenues for research. In general, the authors plan to extend their analysis of exercise results and applied exercises to the population of all schools using the online learning platform more-online.at. This step enables a deeper insight into student’s learning behaviour and allows making more generalizing statements.’ When I shared these research findings with the Austrian lower secondary teachers that I work with, their reaction was one of utter disbelief. People get paid to do this research? Why not just ask us?
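
The teachers have a point: findings like these fall out of a few lines of aggregation over the platform’s logs. A sketch with an invented log table (the column names are hypothetical):

```python
# Sketch: Volk et al.-style findings are simple aggregations over platform logs.
# The log table and its column names are invented for illustration.
import pandas as pd

log = pd.DataFrame({
    "student":  ["a", "a", "b", "b", "c", "c", "c"],
    "exercise": ["ex1", "ex1", "ex1", "ex2", "ex2", "ex2", "ex2"],
    "attempt":  [1, 2, 1, 1, 1, 2, 3],
    "weekday":  ["Mon", "Tue", "Mon", "Sun", "Wed", "Thu", "Sun"],
    "correct":  [0, 1, 1, 0, 0, 1, 1],
})

# Success rate by attempt number ('repetition leads to improvement')
print(log.groupby("attempt")["correct"].mean())

# Activity by day of week ('usage highest Monday to Wednesday')
print(log["weekday"].value_counts())
```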

More useful, more actionable insights may yet come from other sources. For example, Gu Yueguo, Pro-Vice-Chancellor of the Beijing Foreign Studies University, has announced the intention to set up a national Big Data research center, specializing in big data-related research topics in foreign language education (Yu, 2015). Meanwhile, I’m aware of only one big research project that has published its results. The EC Erasmus+ VITAL project (Visualisation Tools and Analytics to monitor Online Language Learning & Teaching) was carried out between 2015 and 2017 and looked at the learning trails of students from universities in Belgium, Britain and the Netherlands. It was discovered (Gelan et al, 2018) that:

  • students who did online exercises when they were supposed to do them were slightly more successful than those who were late carrying out the tasks
  • successful students logged on more often, spent more time online, attempted and completed more tasks, revisited both exercises and theory pages more frequently, did the work in the order in which it was supposed to be done and did more work in the holidays
  • most students preferred to go straight into the assessed exercises and only used the theory pages when they felt they needed to; successful students referred back to the theory pages more often than unsuccessful students
  • students made little use of the voice recording functionality
  • most online activity took place the day before a class and the day of the class itself

EU funding for this VITAL project amounted to 274,840 Euros[1]. The technology for capturing the data has been around for a long time. In my opinion, nothing of value, or at least nothing new, has been learnt. Publishers like Pearson and Cambridge University Press who have large numbers of learners using their platforms have been capturing learning data for many years. They do not publish their findings and, intriguingly, do not even claim that they have learnt anything useful / actionable from the data they have collected. Sure, an exercise here or there may need to be amended. Both teachers and students may need more support in using the more open-ended functionalities of the platforms (e.g. discussion forums). But are they getting ‘unprecedented insights into what works and what doesn’t’ (Mayer-Schönberger & Cukier, 2014)? Are they any closer to building better pedagogies? On the basis of what we know so far, you wouldn’t want to bet on it.

It may be the case that all the learning / learner data that is captured could be used in some way that has nothing to do with language learning. Show me a language-learning app developer who does not dream of monetizing the ‘behavioural surplus’ (Zuboff, 2019) that they collect! But, for the data and analytics to be of any value in guiding language learning, they must lead to actionable insights. Unfortunately, as Jørno & Gynther (2018: 198) point out, there is very little clarity about what is meant by ‘actionable insights’. There is a danger that data and analytics ‘simply gravitates towards insights that confirm longstanding good practice and insights, such as “students tend to ignore optional learning activities … [and] focus on activities that are assessed”’ (Jørno & Gynther, 2018: 211). While this is happening, the focus on data inevitably shapes the way we look at the object of study (i.e. language learning), ‘thereby systematically excluding other perspectives’ (Mau, 2019: 15; see also Beer, 2019). The belief that tech is always the solution, that all we need is more data and better analytics, remains very powerful: it’s called techno-chauvinism (Broussard, 2018: 7-8).

References

Beer, D. 2019. The Data Gaze. London: Sage

Broussard, M. 2018. Artificial Unintelligence. Cambridge, Mass.: MIT Press

Gelan, A., Fastre, G., Verjans, M., Martin, N., Jansenswillen, G., Creemers, M., Lieben, J., Depaire, B. & Thomas, M. 2018. ‘Affordances and limitations of learning analytics for computer-assisted language learning: a case study of the VITAL project’. Computer Assisted Language Learning. pp. 1-26. http://clok.uclan.ac.uk/21289/

Gerngross, G., Puchta, H., Holzmann, C., Stranks, J., Lewis-Jones, P. & Finnie, R. 2008. More! 1 Cyber Homework. Innsbruck, Austria: Helbling

Jørno, R. L. & Gynther, K. 2018. ‘What Constitutes an “Actionable Insight” in Learning Analytics?’ Journal of Learning Analytics 5 (3): 198 – 221

Mau, S. 2019. The Metric Society. Cambridge: Polity Press

Mayer-Schönberger, V. & Cukier, K. 2014. Learning with Big Data: The Future of Education. New York: Houghton Mifflin Harcourt

Reinders, H. 2018. ‘Learning analytics for language learning and teaching’. JALT CALL Journal 14 / 1: 77 – 86 https://files.eric.ed.gov/fulltext/EJ1177327.pdf

Volk, H., Kellner, K. & Wohlhart, D. 2015. ‘Learning Analytics for English Language Teaching.’ Journal of Universal Computer Science, Vol. 21 / 1: 156-174 http://www.jucs.org/jucs_21_1/learning_analytics_for_english/jucs_21_01_0156_0174_volk.pdf

Williamson, B. 2017. Big Data in Education. London: Sage

Yu, Q. 2015. ‘Learning Analytics: The next frontier for computer assisted language learning in big data age’ SHS Web of Conferences, 17 https://www.shs-conferences.org/articles/shsconf/pdf/2015/04/shsconf_icmetm2015_02013.pdf

Zuboff, S. 2019. The Age of Surveillance Capitalism. London: Profile Books

 

[1] See https://ec.europa.eu/programmes/erasmus-plus/sites/erasmusplus2/files/ka2-2015-he_en.pdf

Jargon buster


With the 2019 educational conference show season about to start, here’s a handy guide to gaining a REAL understanding of the words you’re likely to come across. Please feel free to add in the comments anything I’ve omitted.


accountability

Keeping the money-people happy.

AI (artificial intelligence)

Ooh! Aah! Yes, please.

analytics (as in learning analytics)

The analysis of student data to reveal crucial insights such as the fact that students who work more, make more progress. Cf. data

AR (augmented reality)

Out-of-date interactive technology with no convincing classroom value. Cf. interactive

benchmark

A word for standard that makes you sound like you know what you’re talking about.

blended (as in blended learning)

Homework. Or, if you want to sound more knowledgeable, the way e-learning is being combined with traditional classroom methods and independent study to create a new, hybrid teaching methodology that is shown by research to facilitate better learning outcomes.

bot

A non-unionized, cheap teacher for the masses.

brain-friendly

A word used by people who haven’t read enough neuro-science.

collaborative

Getting other people to help you, and getting praised for doing so.

CPD (continuous professional development)

Unpaid training.

creativity

A good excuse to get out your guitar, recite a few poems and show how sensitive you are. Cf. 21st century skills

curated (as in curated learning content)

Stuff nicked from other websites. A way of getting more personalization for less investment.

customer

The correct way to refer to students. Cf. markets

data

Information about students that can be sold to advertising companies.

design (as in learning design)

Used to mean curriculum by people selling edtech products who aren’t sure what curriculum means.

discovery learning

A myth with a long-gone expiry date.

disruptive (as in disruptive innovation in education)

A word used in utter seriousness by people who dream of getting rich from the privatisation of education.

drones

Handy for speaking and writing exercises, according to elearningindustry.com. They open up a new set of opportunities to make classes more relevant and engaging for students. They can in fact enrich students’ imagination and get them more involved into the learning process.

ecosystem (as in learning ecosystem)

All the different ways that data about learners can be captured, sold or hacked.

EdSurge

The go-to site for ‘news’ about edtech. The company’s goal is ‘to promote the smart adoption of education technology through impartial reporting’ … much of which is paid for by investors in edtech start-ups.

edutainment

PowerPoint, for example.

efficacy

A fancy word for efficiency that nobody bothers with much any more.

empowerment

Not connected to power in any way at all.

engagement

Sticking with something.

flipped (as in flipped classrooms)

Watching educational videos at home.

formative assessment

A critically important tool in the iterative process of maximizing the learning environment and customizing instruction to meet students’ needs. Also known as testing.

gamification

Persuading people to push buttons.

global citizens

Nice people.

immersive

Used to describe a learning activity that is less boring than other learning activities.

inclusive (as in inclusive practices)

Not to be confused with virtue-signalling.

innovative

A meaningless word that sounds good to some people. Interchangeable with cutting-edge and state-of-the-art

interactive

With buttons that can be pushed.

interactive whiteboard

A term you won’t hear this year, except when accompanied with a scoff, because everyone has forgotten it and wants to move on. Cf. 60% of the other terms in this glossary by 2025

(the) knowledge economy

Platform capitalism.

leadership

A smokescreen for poor pay and conditions. Cf. 21st century skills

literacy (as in critical literacy, digital literacy, emotional literacy, media literacy, visual literacy)

A jargon word used to mean that someone can do something.

MALL (Mobile assisted language learning)

Chatting or playing games with your phone in class.

markets

Another contemporary way of referring to students. Cf. customer

mediation

Translating, interpreting and things like that.

mindfulness

An ever-growing industry.

motivation

U.S. education technology companies raised $1.45 billion from venture capitalists and private-equity investors in 2018 (according to EdSurge).

outcomes (as in learning outcomes)

‘Learning’, or whatever, that can be measured.

personalized

A meaningless word useful for selling edtech stuff. Interchangeable with differentiated and individualized.

providers

A euphemism for sellers. Cf. solutions

publisher

An obsolete word for providers of educational learning solutions. Cf. solutions

quality

A bit of management jargon from the last century. It doesn’t really matter if you don’t know exactly what it means – you can define it yourself.

research

A slippery word that is meant to elicit a positive response.

resilience

Also known as grit, the ability to suspend your better judgment and plough on.

scaffolding

Something to do with Vygotsky, but it probably doesn’t matter what exactly. It’s a ‘good thing’.

SEL (Social-Emotional Learning)

A VA (value-added) experience needed by students who spend too long in CAL in a VLE with poor UX.

skills (as in 21st century skills)

The abilities that young people will need for an imagined future workplace. These are to be paid for by the state, rather than the companies that might employ a small number of them on zero-hour contracts.

soft skills

Everything you need to be a compliant employee.

solutions (as in learning solutions)

A euphemism for stuff that someone is trying to sell to schools.

teacherpreneur

A teacher in need of a reality check.

thought leaders (as in educational thought leaders)

Effective self-promoters, usually with no background in education.

transformative

Nothing to do with Transformative Learning Theory (Mezirow) … just another buzz word.

VR

Technology that makes you dizzy.

It’s hype time again. Spurred on, no doubt, by the current spate of books and articles about AIED (artificial intelligence in education), the IATEFL Learning Technologies SIG is organising an online event on the topic in November of this year. Currently, the most visible online references to AI in language learning are related to Glossika, basically a language learning system that uses spaced repetition, whose marketing department has realised that references to AI might help sell the product. They’re not alone – see, for example, Knowble, which I reviewed earlier this year.
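
Since ‘spaced repetition’ is carrying most of the weight in that marketing, it is worth seeing how unmysterious it is. Below is a minimal sketch of SM-2, the scheduling algorithm from SuperMemo that Anki and many flashcard apps build on; whether Glossika’s implementation resembles it is not public, so treat this purely as an illustration.

```python
# Sketch of the SM-2 spaced-repetition algorithm (SuperMemo; Anki uses a
# variant). Whether any given commercial app uses exactly this is unknown.

def sm2(quality: int, reps: int, interval: int, ease: float):
    """Return (reps, interval_in_days, ease) after a review graded 0-5."""
    if quality >= 3:                      # successful recall
        if reps == 0:
            interval = 1
        elif reps == 1:
            interval = 6
        else:
            interval = round(interval * ease)
        reps += 1
    else:                                 # failed recall: start the card over
        reps, interval = 0, 1
    ease = max(1.3, ease + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    return reps, interval, ease

# A card recalled well three times in a row gets pushed further out each time.
state = (0, 0, 2.5)
for quality in (5, 5, 4):
    state = sm2(quality, *state)
    print(f"next review in {state[1]} day(s), ease {state[2]:.2f}")
```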

In the wider world of education, where AI has made greater inroads than in language teaching, every day brings more stuff: How artificial intelligence is changing teaching, 32 Ways AI is Improving Education, How artificial intelligence could help teachers do a better job, etc., etc. There’s a full-length book by Anthony Seldon, The Fourth Education Revolution: will artificial intelligence liberate or infantilise humanity? (2018, University of Buckingham Press) – one of the most poorly researched and badly edited books on education I’ve ever read, although that won’t stop it selling – and, no surprises here, there’s a Pearson commissioned report called Intelligence Unleashed: An argument for AI in Education (2016) which is available free.

Common to all these publications is the claim that AI will radically change education. When it comes to language teaching, a similar claim has been made by Donald Clark (described by Anthony Seldon as an education guru but perhaps best-known to many in ELT for his demolition of Sugata Mitra). In 2017, Clark wrote a blog post for Cambridge English (now unavailable) entitled How AI will reboot language learning, and a more recent version of this post, called AI has and will change language learning forever (sic) is available on Clark’s own blog. Given the history of the failure of education predictions, Clark is making bold claims. Thomas Edison (1922) believed that movies would revolutionize education. Radios were similarly hyped in the 1940s and in the 1960s it was the turn of TV. In the 1980s, Seymour Papert predicted the end of schools – ‘the computer will blow up the school’, he wrote. Twenty years later, we had the interactive possibilities of Web 2.0. As each technology failed to deliver on the hype, a new generation of enthusiasts found something else to make predictions about.

But is Donald Clark onto something? Developments in AI and computational linguistics have recently resulted in enormous progress in machine translation. Impressive advances in automatic speech recognition and generation, coupled with the power that can be packed into a handheld device, mean that we can expect some re-evaluation of the value of learning another language. Stephen Heppell, a specialist at Bournemouth University in the use of ICT in Education, has said: ‘Simultaneous translation is coming, making language teachers redundant. Modern languages teaching in future may be more about navigating cultural differences’ (quoted by Seldon, p.263). Well, maybe, but this is not Clark’s main interest.

Less a matter of opinion and much closer to the present day is the issue of assessment. AI is becoming ubiquitous in language testing. Cambridge, Pearson, TELC, Babbel and Duolingo are all using or exploring AI in their testing software, and we can expect to see this increase. Current, paper-based systems of testing subject knowledge are, according to Rosemary Luckin and Kristen Weatherby, outdated, ineffective, time-consuming, the cause of great anxiety and can easily be automated (Luckin, R. & Weatherby, K. 2018. ‘Learning analytics, artificial intelligence and the process of assessment’ in Luckin, R. (ed.) Enhancing Learning and Teaching with Technology, 2018. UCL Institute of Education Press, p.253). By capturing data of various kinds throughout a language learner’s course of study and by using AI to analyse learning development, continuous formative assessment becomes possible in ways that were previously unimaginable. ‘Assessment for Learning (AfL)’ and ‘Learning Oriented Assessment (LOA)’ are two terms used by Cambridge English to refer to the potential that AI offers, as described by Luckin (who is also one of the authors of the Pearson paper mentioned earlier). In practical terms, albeit in a still very limited way, this can be seen in the CUP course ‘Empower’, which combines CUP course content with validated LOA from Cambridge Assessment English.

Will this reboot or revolutionise language teaching? Probably not, and here’s why. AIED systems need to operate with what is called a ‘domain knowledge model’. This specifies what is to be learnt and includes an analysis of the steps that must be taken to reach that learning goal. Some subjects (especially STEM subjects) ‘lend themselves much more readily to having their domains represented in ways that can be automatically reasoned about’ (du Boulay, D. et al., 2018. ‘Artificial intelligences and big data technologies to close the achievement gap’ in Luckin, R. (ed.) Enhancing Learning and Teaching with Technology, 2018. UCL Institute of Education Press, p.258). This is why most AIED systems have been built to teach these areas. Languages are rather different. We simply do not have a domain knowledge model, except perhaps for the very lowest levels of language learning (and even that is highly questionable). Language learning is probably not, or not primarily, about acquiring subject knowledge. Debate still rages about the relationship between explicit language knowledge and language competence. AI-driven formative assessment will likely focus most on explicit language knowledge, as does most current language teaching. This will not reboot or revolutionise anything. It will more likely reinforce what is already happening: a model of language learning that assumes there is a strong interface between explicit knowledge and language competence. It is not a model that is shared by most SLA researchers.
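
To make the ‘domain knowledge model’ point concrete: for a subject like arithmetic, such a model can be as simple as a prerequisite graph that software can reason over automatically. Nobody knows how to write the equivalent artefact for ‘can hold a conversation in English’. A toy, entirely hypothetical sketch:

```python
# Sketch of a toy 'domain knowledge model' as a prerequisite graph.
# Easy for arithmetic; no equivalent exists for conversational competence.
# All content here is hypothetical.
from graphlib import TopologicalSorter  # Python 3.9+

prerequisites = {
    "counting": set(),
    "addition": {"counting"},
    "subtraction": {"counting"},
    "multiplication": {"addition"},
    "division": {"multiplication", "subtraction"},
}

# An AIED tutor can reason automatically over this graph: any topological
# order is a valid teaching sequence, and a wrong answer on 'division'
# points back to a small, known set of candidate gaps.
print(list(TopologicalSorter(prerequisites).static_order()))
```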

So, one thing that AI can do (and is doing) for language learning is to improve the algorithms that determine the way that grammar and vocabulary are presented to individual learners in online programs. AI-optimised delivery of ‘English Grammar in Use’ may lead to some learning gains, but they are unlikely to be significant. It is not, in any case, what language learners need.

AI, Donald Clark suggests, can offer personalised learning. Precisely what kind of personalised learning this might be, and whether or not this is a good thing, remains unclear. A 2015 report funded by the Gates Foundation found that we currently lack evidence about the effectiveness of personalised learning. We do not know which aspects of personalised learning (learner autonomy, individualised learning pathways and instructional approaches, etc.) or which combinations of these will lead to gains in language learning. The complexity of the issues means that we may never have a satisfactory explanation. You can read my own exploration of the problems of personalised learning starting here.

What’s left? Clark suggests that chatbots are one area with ‘huge potential’. I beg to differ, and I explained my reasons eighteen months ago. Chatbots work fine in very specific domains. As Clark says, they can be used for ‘controlled practice’, but ‘controlled practice’ means practice of specific language knowledge – the practice of limited conversational routines, for example. It could certainly be useful, but more than that? Taking things a stage further, Clark then suggests more holistic speaking and listening practice with Amazon Echo, Alexa or Google Home. If and when the day comes that we have general, as opposed to domain-specific, AI, chatting with one of these tools would open up vast new possibilities. Unfortunately, general AI does not exist, and until it does, Alexa and co. will remain a poor substitute for human-human interaction (which is readily available online, anyway). Incidentally, AI could be used to form groups of online language learners to carry out communicative tasks – ‘the aim might be to design a grouping of students all at a similar cognitive level and of similar interests, or one where the participants bring different but complementary knowledge and skills’ (Luckin, R., Holmes, W., Griffiths, M. & Forcier, L.B. 2016. Intelligence Unleashed: An argument for AI in Education. London: Pearson, p. 26).

Predictions about the impact of technology on education have a tendency to be made by people with a vested interest in the technologies. Edison was a businessman who had invested heavily in motion pictures. Donald Clark is an edtech entrepreneur whose company, Wildfire, uses AI in online learning programs. Stephen Heppell is executive chairman of LP+, which is currently developing a Chinese language learning community for 20 million Chinese school students. The reporting of AIED almost invariably appears on websites that are paid for, in one way or another, by edtech companies. Predictions need, therefore, to be treated sceptically. Indeed, the safest prediction we can make about hyped educational technologies is that inflated expectations will be followed by disillusionment, before the technology finds a smaller niche.


Learners are different, the argument goes, so learning paths will be different, too. And, the argument continues, if learners benefit from individualized learning pathways, then instruction should be based on an analysis of the optimal learning pathways for individuals and tailored to match them. In previous posts, I have questioned whether such an analysis is meaningful or reliable, and whether the tailoring leads to any measurable learning gains. In this post, I want to focus primarily on the analysis of learner differences.

Family / social background and previous educational experiences are obvious ways in which learners differ when they embark on any course of study. The way these factors impact on educational success is well researched and well established. Despite this research, there are some who disagree. For example, Dominic Cummings (former adviser to Michael Gove when he was UK Education minister and former campaign director of the pro-Brexit Vote Leave group) has argued that genetic differences, especially in intelligence, account for more than 50% of the differences in educational achievement.

Cummings got his ideas from Robert Plomin, one of the world’s most cited living psychologists. Plomin, in a recent paper in Nature, ‘The New Genetics of Intelligence’, argues that ‘intelligence is highly heritable and predicts important educational, occupational and health outcomes better than any other trait’. In an earlier paper, ‘Genetics affects choice of academic subjects as well as achievement’, Plomin and his co-authors argued that ‘choosing to do A-levels and the choice of subjects show substantial genetic influence, as does performance after two years studying the chosen subjects’. Environment matters, says Plomin, but it’s possible that genes matter more.

All of which leads us to the field known as ‘educational genomics’. In an article of breathless enthusiasm entitled ‘How genetics could help future learners unlock hidden potential’, University of Sussex psychologist Darya Gaysina describes educational genomics as the use of ‘detailed information about the human genome – DNA variants – to identify their contribution to particular traits that are related to education […] it is thought that one day, educational genomics could enable educational organisations to create tailor-made curriculum programmes based on a pupil’s DNA profile’. It could, she writes, ‘enable schools to accommodate a variety of different learning styles – both well-worn and modern – suited to the individual needs of the learner [and] help society to take a decisive step towards the creation of an education system that plays on the advantages of genetic background. Rather than the current system, that penalises those individuals who do not fit the educational mould’.

The goal is not just personalized learning. It is ‘Personalized Precision Education’ where researchers ‘look for patterns in huge numbers of genetic factors that might explain behaviors and achievements in individuals. It also focuses on the ways that individuals’ genotypes and environments interact, or how other “epigenetic” factors impact on whether and how genes become active’. This will require huge amounts of ‘data gathering from learners and complex analysis to identify patterns across psychological, neural and genetic datasets’. Why not, suggests Darya Gaysina, use the same massive databases that are being used to identify health risks and to develop approaches to preventative medicine?

If I had a spare 100 euros, I (or you) could buy Darya Gaysina’s book, ‘Behavioural Genetics for Education’ (Palgrave Macmillan, 2016) and, no doubt, I’d understand the science better as a result. There is much about the science that seems problematic, to say the least (e.g. the definition and measurement of intelligence, and the lack of reference to other research that suggests academic success is linked to non-genetic factors), but it isn’t the science that concerns me most. It’s the ethics. I don’t share Gaysina’s optimism that ‘every child in the future could be given the opportunity to achieve their maximum potential’. Her utopianism is my fear of Gattaca-like dystopias. IQ testing, in its early days, promised something similarly wonderful, but look what became of that. When you already have reporting of educational genomics using terms like ‘dictate’, you have to fear for the future of Gaysina’s brave new world.

Educational genomics could equally well lead to expectations of ‘certain levels of achievement from certain groups of children – perhaps from different socioeconomic or ethnic groups’, and you can be pretty sure it will lead to ‘companies with the means to assess students’ genetic identities [seeking] to create new marketplaces of products to sell to schools, educators and parents’. The very fact that people like Dominic Cummings (described by David Cameron as a ‘career psychopath’) have opted to jump on this particular bandwagon is, for me, more than enough cause for concern.

Underlying my doubts about educational genomics is a much broader concern. It’s the apparent belief of educational genomicists that science can provide technical solutions to educational problems. It’s called ‘solutionism’ and it doesn’t have a pretty history.

On Sunday 17 June I’ll be giving a talk at a conference in London, organised by Regent’s University and Trinity College London. Further information about the conference can be found here.

The talk is entitled ‘Personalized learning: the past, present and future of ELT’ and draws heavily on earlier posts on this blog. For anyone attending the talk, here are links to the references I cite along with further reading.

  1. Personalized learning – attempts to define it and its links to technology: see Personalized learning: Hydra and the power of ambiguity and Evaluating personalization
  2. Goal-setting and standardization: see Personalization and goal-setting
  3. Self-pacing and programmed instruction: see Self-paced language learning
  4. The promotion of personalized learning in ELT: see Personalized learning at IATEFL



Knowble, its developers claim, is a browser extension that will improve English vocabulary and reading comprehension. It also describes itself as an ‘adaptive language learning solution for publishers’. It’s currently in beta and free, and it sounds right up my street, so I decided to give it a run.

[Image: the Knowble reader]

Users are asked to specify a first language (I chose French) and a level (A1 to C2): I chose B1, but this did not seem to impact on anything that subsequently happened. They are then offered a menu of about 30 up-to-date news items, grouped into 5 categories (world, science, business, sport, entertainment). Clicking on one article takes you to the article on the source website. There’s a good selection, including USA Today, CNN, Reuters, the Independent and the Torygraph from Britain, the Times of India, the Independent from Ireland and the Star from Canada. A large number of words are underlined: a single click brings up a translation in the extension box. Double-clicking on all other words will also bring up translations. Apart from that, there is one very short exercise (which has presumably been automatically generated) for each article.

For my trial run, I picked three articles: ‘Woman asks firefighters to help ‘stoned’ raccoon’ (from the BBC, 240 words), ‘Plastic straw and cotton bud ban proposed’ (also from the BBC, 823 words) and ‘London’s first housing market slump since 2009 weighs on UK price growth’ (from the Torygraph, 471 words).

Translations

Research suggests that the use of translations, rather than definitions, may lead to greater learning gains, but the problem with Knowble is that it relies entirely on Google Translate. Google Translate is improving fast, but it is far from error-free. Take the first sentence of the ‘plastic straw and cotton bud’ article, for example. The translation is not bad, but it gets the word ‘bid’ completely wrong, translating it as ‘offre’ (= offer) where ‘tentative’ (= attempt) is needed. So, we can still expect a few problems with Google Translate …

One of the reasons that Google Translate has improved is that it no longer treats individual words as individual lexical items. It analyses groups of words and translates chunks or phrases (see, for example, the way it translates ‘as part of’). It doesn’t do word-for-word translation. Knowble, however, have set their software to ask Google for translations of each word as an individual item, so the phrase ‘as part of’ is translated ‘comme’ + ‘partie’ + ‘de’. Whilst this example is comprehensible, problems arise very quickly. ‘Cotton buds’ (‘cotons-tiges’) become ‘coton’ + ‘bourgeon’ (= botanical shoots of cotton). Phrases like ‘in time’, ‘run into’, ‘sleep it off’, ‘take its course’, ‘fire station’ or ‘going on’ (all from the stoned raccoon text) all cause problems. In addition, Knowble are not using any parsing tools, so the system does not identify parts of speech, and further translation errors inevitably appear. In the short article of 240 words, about 10% are wrongly translated. Knowble claim to be using NLP tools, but there’s no sign of them here. They’re just using Google Translate rather badly.
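
To see why the query strategy matters, here is a toy sketch of the two approaches. The translate() function and its little lexicon are invented for illustration (Knowble’s code is not public, and a real system would be calling the Google Translate API at this point):

```python
# Purely illustrative stand-in for a machine translation call; the
# lexicon below is invented for this example.
TOY_LEXICON = {
    "as": "comme", "part": "partie", "of": "de",
    "as part of": "dans le cadre de",
}

def translate(text: str) -> str:
    return TOY_LEXICON.get(text, text)

phrase = "as part of"

# Word-by-word, which is what Knowble appears to do: each token is
# translated with no context, so fixed expressions fall apart.
print(" ".join(translate(w) for w in phrase.split()))  # comme partie de

# Whole-phrase: the engine sees the chunk and can translate it as a unit.
print(translate(phrase))  # dans le cadre de
```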

Highlighted items

NLP tools of some kind are presumably being used to select the words that get underlined. Exactly how this works is unclear. On the whole, it seems that very high frequency words are ignored and that lower frequency words are underlined. Take, for example, the words that were underlined in the stoned raccoon text: I compared them with (1) the CEFR levels for these words in the English Profile Text Inspector, and (2) the frequency information from the Macmillan dictionary (more stars = more frequent). In the other articles, some extremely high frequency words were underlined (e.g. price, cost, year) while much lower frequency items were not.
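
Knowble does not document its selection method, but the behaviour is consistent with a simple frequency threshold. A sketch of what that might look like, using the open-source wordfreq library (my assumption about the approach, not Knowble’s actual code):

```python
# Requires: pip install wordfreq
from wordfreq import zipf_frequency

def items_to_underline(text: str, threshold: float = 4.0) -> set:
    """Return the lower-frequency word types in a text. On the Zipf
    scale, 4.0 means roughly one occurrence per 100,000 running words;
    anything rarer gets underlined."""
    types = {w.strip(".,!?;:'\"").lower() for w in text.split()}
    return {w for w in types
            if w.isalpha() and zipf_frequency(w, "en") < threshold}

print(items_to_underline("Firefighters helped a stoned raccoon recover"))
```

A pure threshold like this would produce exactly the oddities described above: it knows nothing about the learner, the text, or parts of speech.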

It is, of course, extremely difficult to predict which items of vocabulary a learner will know, even if we have a fairly accurate idea of their level. Personal interests play a significant part, so, for example, some people at even a low level will have no problem with ‘cannabis’, ‘stoned’ and ‘high’, even if these are low frequency. First language, however, is a reasonably reliable indicator, as cognates can be expected to be easy. A French speaker will have no problem with ‘appreciate’, ‘unique’ and ‘symptom’. A recommendation engine that can meaningfully personalize vocabulary suggestions will, at the very least, need to consider cognates.
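
Even a crude approximation would be better than nothing here. The sketch below flags likely cognates by surface similarity between the English word and its L1 translation; a serious implementation would need proper cognate lists or character-mapping rules, so treat this as a starting point only:

```python
from difflib import SequenceMatcher

def is_likely_cognate(english: str, l1_translation: str,
                      threshold: float = 0.7) -> bool:
    """Crude proxy: treat a pair as cognate if their surface forms
    are sufficiently similar."""
    ratio = SequenceMatcher(None, english.lower(),
                            l1_translation.lower()).ratio()
    return ratio >= threshold

print(is_likely_cognate("unique", "unique"))     # True
print(is_likely_cognate("symptom", "symptôme"))  # True (ratio 0.8)
print(is_likely_cognate("stoned", "défoncé"))    # False
```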

In short, the selection and underlining of vocabulary items, as it currently stands in Knowble, appears to serve no clear or useful function.

Vocabulary learning

Knowble offers a very short exercise for each article. They are of three types: word completion, dictation, and drag and drop. The rationale for the selection of the target items is unclear but, in any case, these exercises are tokenistic in the extreme and unlikely to lead to any significant learning gains. More valuable would be the possibility of exporting items into a spaced repetition flashcard system.
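
Such an export would not be hard to build. As a sketch, here is how underlined items could be written to a tab-separated file of the kind that Anki and similar flashcard apps can import (the field layout is my assumption):

```python
import csv

def export_flashcards(items, path="knowble_export.tsv"):
    """Write (word, translation, example) triples to a tab-separated
    file, one card per row: front field, then back field."""
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f, delimiter="\t")
        for word, translation, example in items:
            writer.writerow([word, f"{translation} | {example}"])

export_flashcards([
    ("cotton bud", "coton-tige",
     "A ban on plastic straws and cotton buds has been proposed."),
])
```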

The claim that Knowble’s ‘learning effect is proven scientifically’ seems to me to be without any foundation. If there has been any proper research, it’s not signposted anywhere. Sure, reading lots of news articles (with a look-up function – if it works reliably) can only be beneficial for language learners, but they can do that with any decent dictionary running in the background.

Similar in many ways to en.news, which I reviewed in my last post, Knowble is another example of a technology-driven product that shows little understanding of language learning.

Last month, I wrote a post about the automated generation of vocabulary learning materials. Yesterday, I got an email from Mike Elchik, inviting me to take a look at the product that his company, WeSpeke, has developed in partnership with CNN. Called en.news, it offers a regularly updated and wide selection of video clips and texts from CNN, which are then used to ‘automatically create a pedagogically structured, leveled and game-ified English lesson’. Available on the App Store and Google Play, as well as in a desktop version, it’s free. Revenues will presumably be generated through advertising and later sales to corporate clients.

With 6.2 million dollars in funding so far, WeSpeke can leverage some state-of-the-art NLP and AI tools. Co-founder and chief technical adviser of the company is Jaime Carbonell, Director of the Language Technologies Institute at Carnegie Mellon University, described in Wikipedia as one of the gurus of machine learning. I decided to have a closer look.

[Image: the en.news home page]

Users are presented with a menu of CNN content (there were 38 items from yesterday alone). The items are tagged with broad categories (Politics, Opinions, Money, Technology, Entertainment, etc.) and given a level, ranging from 1 to 5, although the vast majority of the material is at the two highest levels.

[Image: the en.news lesson menu]

I picked two lessons: a reading text about Mark Zuckerberg’s Congressional hearing (level 5) and a 9-minute news programme of mixed items (level 2 – illustrated above). In both cases, the lesson begins with the text. With the reading, you can click on words to bring up dictionary entries from the Collins dictionary. With the video, you can activate captions and again click on words for definitions. You can also slow down the speed. So far, so good.

There then follows a series of exercises which focus primarily on a set of words that have been automatically selected. This is where the problems began.

Level

It’s far from clear what the levels (1–5) refer to. The Zuckerberg text is 930 words long and is rated as B2 by one readability tool. But, according to the English Profile Text Inspector, it contains 19 types at C1 level, 14 at C2, and 98 that are unlisted. That suggests something substantially higher than B2. The CNN10 video is delivered at breakneck speed (as is often the case with US news shows). Yes, it can be slowed down, but that still won’t help with some passages, such as the one below:

A squirrel recently fell out of a tree in Western New York. Why would that make news? Because she bwoke her widdle leg and needed a widdle cast! Yes, there are casts for squirrels, as you can see in this video from the Orphaned Wildlife Center. A windstorm knocked the animal’s nest out of a tree, and when a woman saw that the baby squirrel was injured, she took her to a local vet. Doctors say she’s going to be just fine in a couple of weeks. Well, why ‘rodent’ she be? She’s been ‘whiskered’ away and cast in both a video and a plaster. And as long as she doesn’t get too ‘squirrelly’ before she heals, she’ll have quite a ‘tail’ to tell.

It’s hard to understand how a text like this got through the algorithms. But, as materials writers know, it is extremely hard to find authentic text that lends itself to language learning at anything below C1. On the evidence here, there is still some way to go before the process of selection can be automated. It may well be the case that CNN simply isn’t a particularly appropriate source.

Target learning items

The primary focus of these lessons is vocabulary learning, and it’s vocabulary learning of a very deliberate kind. Applied linguists are in general agreement that it makes sense for learners to approach the building of their L2 lexicon in a deliberate way (i.e. by studying individual words) for high-frequency items or items that can be identified as having a high surrender value (e.g. items from the AWL for students studying in an EMI context). Once you get to items that are less frequent than, say, the top 8,000 most frequent words, the effort expended in studying new words needs to be offset against their usefulness. Why spend a lot of time studying low frequency words when you’re unlikely to come across them again for some time … and will probably forget them before you do? Vocabulary development at higher levels is better served by extensive reading (and listening), possibly accompanied by glosses.

The target items in the Zuckerberg text were: advocacy, grilled, handicapping, sparked, diagnose, testified, hefty, imminent, deliberative and hesitant. One of these, ‘grilled’, is listed as A2 by the English Vocabulary Profile, but that is with its literal, not metaphorical, meaning. Four of them are listed as C2 and the remaining five are off-list. In the CNN10 video, the target items were: strive, humble (verb), amplify, trafficked, enslaved, enacted, algae, trafficking, ink and squirrels. Of these, one is B1, two are C2 and the rest are unlisted. What is the point of studying these essentially random words? Why spend time going through a series of exercises that practise these items? Wouldn’t your time be better spent just doing some more reading? I have no idea how the automated selection of these items takes place, but it’s clear that it’s not working very well.
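
It would not be difficult to vet candidate items automatically before serving them to learners. The sketch below grades a candidate list against a CEFR lookup table and a frequency measure; CEFR_LEVELS is a toy stand-in, since English Vocabulary Profile data cannot simply be bundled into code:

```python
# Requires: pip install wordfreq
from wordfreq import zipf_frequency

# Toy stand-in for a licensed CEFR word list.
CEFR_LEVELS = {"grilled": "A2", "hesitant": "C2", "imminent": "C2"}

def vet_target_items(candidates, max_level="B2", min_zipf=3.5):
    """Flag candidate target items that look unsuitable for deliberate
    study: unlisted or above-level in the CEFR table, or very rare."""
    order = ["A1", "A2", "B1", "B2", "C1", "C2"]
    flagged = []
    for word in candidates:
        level = CEFR_LEVELS.get(word)
        too_high = level is None or order.index(level) > order.index(max_level)
        too_rare = zipf_frequency(word, "en") < min_zipf
        if too_high or too_rare:
            flagged.append((word, level or "unlisted"))
    return flagged

print(vet_target_items(["grilled", "hesitant", "deliberative", "algae"]))
```

Even a filter this simple would have caught most of the items listed above.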

Practice exercises

There is plenty of variety in task type, but there are, I think, two reasons to query the claim that these lessons are ‘pedagogically structured’. The first is the nature of the practice exercises; the second is the sequencing of the exercises. I’ll restrict my observations to a selection of the tasks.

1. Users are presented with a dictionary definition and an anagrammed target item which they must unscramble. For example:

existing for the purpose of discussing or planning something     VLREDBETEIIA

If you can’t solve the problem, you can always scroll through the text to find the answer. But the problem is in the task design. Dictionary definitions have been written to help language users decode a word. They simply don’t work very well when they are used for another purpose (as prompts for encoding).
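
Generating such prompts is, incidentally, trivial, which may be part of their appeal to developers. A sketch:

```python
import random

def anagram(word: str) -> str:
    """Return an uppercase scramble of word, guaranteed to differ
    from the original (unless its letters are all identical)."""
    letters = list(word.upper())
    if len(set(letters)) < 2:
        return "".join(letters)
    scrambled = letters[:]
    while scrambled == letters:
        random.shuffle(scrambled)
    return "".join(scrambled)

print(anagram("deliberative"))  # e.g. VLREDBETEIIA
```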

2. Users are presented with a dictionary definition for which they must choose one of four words. There are many potential problems here, not the least of which is that definitions are often more complex than the word they are defining, or they present other challenges. As an example: ‘cause to be unpretentious’ for ‘to humble’. On top of that, lexicographers often need or choose to embed the target item in the definition. For example:

a hefty amount of something, especially money, is very large

an event that is imminent, especially an unpleasant one, will happen very soon

When this is the case, it makes no sense to present these definitions and ask learners to find the target item from a list of four.

The two key pieces of content in this product – the CNN texts and the Collins dictionaries – are both less than ideal for their purposes.

3. Users are presented with a box of jumbled words which they must unscramble to form sentences that appeared in the text.

[Image: ‘Rearrange the words to make sentences’ task]

The sentences are usually long and hard to reconstruct. You can scroll through the text to find the answer, but I’m unclear what the point of this would be. The example above contains a mistake (‘vie’ instead of ‘vice’), but this was one of only two glitches I encountered.

4. Users are asked to select the word that they hear on an audio recording. For example:

squirreling     squirrel     squirreled     squirrels

Given the high level of challenge of both the text and the target items, this was a rather strange exercise to kick off the practice. The meaning has not yet been presented (in a matching / definition task), so what exactly is the point of this exercise?

5. Users are presented with gapped sentences from the text and asked to choose the correct grammatical form of the missing word. Some of these were hard (e.g. adjective order), others were very easy (e.g. some vs any). The example below struck me as plain weird for a lesson at this level.

________ have zero expectation that this Congress is going to make adequate changes. (I or Me ?)

6. At the end of both lessons, there were a small number of questions that tested your memory of the text. If, like me, you couldn’t remember all that much about the text after twenty minutes of vocabulary activities, you can scroll through the text to find the answers. This is not a task type that will develop reading skills: I am unclear what it could possibly develop.

Overall?

Using the lessons on offer here wouldn’t do a learner (as long as they already had a high level of proficiency) any harm, but it wouldn’t be the most productive use of their time, either. If a learner is motivated to read the text about Zuckerberg, rather than doing lots of ‘busy’ work on a very odd set of words with gap-fills and matching tasks, they’d be better advised just to read the text again once or twice. They could use a look-up for words they want to understand and import them into a flashcard system with spaced repetition (en.news does have flashcards, but there’s no sign of spaced practice yet). What’s more, they could check out another news website and read or watch other articles on the same subject (perhaps choosing websites with a different slant to CNN) and get valuable narrow-reading practice in this way.

My guess is that the technology has driven the product here, but without answering the fundamental questions about which words it’s appropriate for individual learners to study in a deliberate way and how this is best tackled, it doesn’t take learners very far.