Posts Tagged ‘algorithms’

NB This is an edited version of the original review.

Words & Monsters is a new vocabulary app that has caught my attention. There are three reasons for this. Firstly, because it’s free. Secondly, because I was led to believe (falsely, as it turns out) that two of the people behind it are Charles Browne and Brent Culligan, eminently respectable linguists, who were also behind the development of the New General Service List (NGSL), based on data from the Cambridge English Corpus. And thirdly, because a lot of thought, effort and investment have clearly gone into the gamification of Words & Monsters (WAM). It’s to the last of these that I’ll turn my attention first.

WAM teaches vocabulary in the context of a battle between a player’s avatar and a variety of monsters. If users can correctly match a set of target items to definitions or translations in the available time, they ‘defeat’ the monster and accumulate points. The more points you have, the higher you advance through a series of levels and ranks. There are bonuses for meeting daily and weekly goals, there are leaderboards, and trophies and medals can be won. In addition to points, players also win ‘crystals’ after successful battles, and these crystals can be used to buy accessories which change the appearance of the avatar and give the player added ‘powers’. I was never able to understand precisely how these ‘powers’ affected the number of points I could win in battle. It remained as baffling to me as the system of values attached to Pokemon cards, which is presumably a large part of the inspiration here. Perhaps others, more used to games like Pokemon, would find it all much more transparent.

The system of rewards is all rather complicated, but perhaps this doesn’t matter too much. In fact, it might be the case that working out how reward systems work is part of what motivates people to play games. But there is another aspect to this: the app’s developers refer in their bumf to research by Howard-Jones and Jay (2016), which suggests that when rewards are uncertain, more dopamine is released in the mid-brain and this may lead to reinforcement of learning, and, possibly, enhancement of declarative memory function. Possibly … but Howard-Jones and Jay point out that ‘the science required to inform the manipulation of reward schedules for educational benefit is very incomplete.’ So, WAM’s developers may be jumping the gun a little and overstating the applicability of the neuroscientific research, but they’re not alone in that!

If you don’t understand a reward system, it’s certain that the rewards are uncertain. But WAM takes this further in at least two ways. Firstly, when you win a ‘battle’, you have to click on a plain treasure bag to collect your crystals, and you don’t know whether you’ll get one, two, three or zero crystals. You are given a semblance of agency, but, essentially, the whole thing is random. Secondly, when you want to convert your crystals into accessories for your avatar, random selection determines which accessory you receive, even though, again, there is a semblance of agency. Different accessories have different power values. This extended use of what the developers call ‘the thrill of uncertain rewards’ is certainly interesting, but how effective it is is another matter. My own reaction, after quite some time spent ‘studying’, to getting no crystals or an avatar accessory that I didn’t want was primarily frustration, rather than motivation to carry on. I have no idea how typical my reaction (more ‘treadmill’ than ‘thrill’) might be.
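For readers who like to see mechanics spelled out: what is described above is, in effect, a variable-ratio reward schedule, and a toy version of it is easy to sketch in code. WAM’s actual drop rates and logic are not public, so every number and name in this Python sketch is invented:

```python
import random

# Invented drop table for the post-battle treasure bag. The learner
# sometimes gets nothing at all, which is what keeps the reward uncertain.
CRYSTAL_DROPS = [0, 1, 2, 3]
DROP_WEIGHTS = [0.2, 0.4, 0.3, 0.1]  # illustrative probabilities only

def open_treasure_bag() -> int:
    """Return a random number of crystals (a variable-ratio payout)."""
    return random.choices(CRYSTAL_DROPS, weights=DROP_WEIGHTS, k=1)[0]

def buy_accessory(accessories: list[str]) -> str:
    """Spending crystals yields a randomly chosen accessory: the player
    clicks, but the outcome is random - a semblance of agency."""
    return random.choice(accessories)
```

The point of the sketch is simply that the player’s click changes nothing: the ‘agency’ is theatrical, and the payout schedule alone does the motivational work.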

Unsurprisingly, for an app that has so obviously thought carefully about gamification, players are encouraged to interact with each other. As part of the early promotion, WAM is running, from 15 November to 19 December, a free ‘team challenge tournament’, allowing teams of up to 8 players to compete against each other. Ingeniously, it would appear to allow teams and players of varying levels of English to play together, with the app’s algorithms determining each individual’s level of lexical knowledge and therefore the items that will be presented / tested. Social interaction is known to be an important component of successful games (Dehghanzadeh et al., 2019), but for vocabulary apps there’s a huge challenge. In order to learn vocabulary from an app, learners need to put in time – on a regular basis. Team challenge tournaments may help with initial on-boarding of players, but, in the end, learning from a vocabulary app is inevitably and largely a solitary pursuit. Over time, social interaction is unlikely to be maintained, and it is, in any case, of a very limited nature. The other features of successful games – playful freedom and intrinsically motivating tasks (Driver, 2012) – are also absent from vocabulary apps. Playful freedom is mostly incompatible with points, badges and leaderboards. And flashcard tasks, however intrinsically motivating they may be at the outset, will always become repetitive after a while. In the end, what’s left, for those users who hang around long enough, is the reward system.

It’s also worth noting that this free challenge is of limited duration: it is a marketing device attempting to push you towards the non-free use of the app, once the initial promotion is over.

Gamified motivation tools are only of value, of course, if they motivate learners to spend their time doing things that are of clear learning value. To evaluate the learning potential of WAM, then, we need to look at the content (the ‘learning objects’) and the learning tasks that supposedly lead to acquisition of these items.

When you first use WAM, you need to play for about 20 minutes, at which point algorithms determine ‘how many words [you] know and [you can] see scores for English tests such as; TOEFL, TOEIC, IELTS, EIKEN, Kyotsu Shiken, CEFR, SAT and GRE’. The developers claim that these scores correlate pretty highly with actual test scores: ‘they are about as accurate as the tests themselves’, they say. If Browne and Culligan had been behind the app, I would have been tempted to accept the claim – with reservations: after all, it still allows for one item out of 5 to be wrongly identified. But, what is this CEFR test score that is referred to? There is no CEFR test, although many tests are correlated with CEFR. The two tools that I am most familiar with which allocate CEFR levels to individual words – Cambridge’s English Vocabulary Profile and Pearson’s Global Scale of English – often conflict in their results. I suspect that ‘CEFR’ was just thrown into the list of tests as an attempt to broaden the app’s appeal.

English target words are presented and practised with their translation ‘equivalents’ in Japanese. For the moment, Japanese is the only language available, which means the app is of little use to learners who don’t know any Japanese. It’s now well-known that bilingual pairings are more effective in deliberate language learning than using definitions in the same language as the target items. This becomes immediately apparent when, for example, a word like ‘something’ is defined (by WAM) as ‘a thing not known or specified’ and ‘anything’ as ‘a thing of whatever kind’. But although I’m in no position to judge the Japanese translations, there are reasons why I would want to check the spreadsheet before recommending the app. ‘Lady’ is defined as ‘polite word for a woman’; ‘missus’ is defined as ‘wife’; and ‘aye’ is defined as ‘yes’. All of these definitions are, at best, problematic; at worst, they are misleading. Are the Japanese translations more helpful? I wonder … Perhaps these are simply words that do not lend themselves to flashcard treatment?

Because I tested into the app at C1 level, I was not able to evaluate the selection of words at lower levels. A pity. Instead, I was presented with words like ‘ablution’, ‘abrade’, ‘anode’, and ‘auspice’. The app claims to be suitable ‘for both second-language learners and native speakers’. For lower levels of the former, this may be true (but without looking at the lexical spreadsheets, I can’t tell). But for higher levels, however much fun this may be for some people, it seems unlikely that you’ll learn very much of any value. Outside of words in, say, the top 8000 frequency band, it is practically impossible to differentiate the ‘surrender value’ of words in any meaningful way. Deliberate learning of vocabulary only makes sense with high frequency words that you have a chance of encountering elsewhere. You’d be better off reading extensively, rather than learning random words from an app: words which (for reasons I’ll come on to) you probably won’t actually learn anyway.

With very few exceptions, the learning objects in WAM are single words, rather than phrases, even when the item is of little or no value outside its use in a phrase. ‘Betide’ is defined as ‘to happen to; befall’ but this doesn’t tell a learner much that is useful. It’s practically only ever used following ‘woe’ (but what does ‘woe’ mean?!). Learning items can be checked in the ‘study guide’, which will show that ‘betide’ typically follows ‘woe’, but unless you choose to refer to the study guide (and there’s no reason, in a case like this, that you would know that you need to check things out more fully), you’ll be none the wiser. In other words, checking the study guide is unlikely to betide you. ‘Wee’, as another example, is treated as two items: (1) meaning ‘very small’ as in ‘wee baby’, and (2) meaning ‘very early in the morning’ as in ‘in the wee hours’. For the latter, ‘wee’ can only collocate with ‘in the’ and ‘hours’, so it makes little sense to present it as a single word. This is also an example of how, in some cases, different meanings of particular words are treated as separate learning objects, even when the two meanings are very close and, in my view, are hardly worth learning separately. Examples include ‘czar’ and ‘assonance’. Sometimes, cognates are treated as separate learning objects (e.g. ‘adulterate’ and ‘adulteration’ or ‘dolor’ and ‘dolorous’); with other words (e.g. ‘effulgence’), only one grammatical form appears to be given. I could not begin to figure out any rationale behind any of this.

All in all, then, there are reasons to be a little skeptical about some of the content. Up to level B2 – which, in my view, is the highest level at which it makes sense to use vocabulary flashcards – it may be of value, so long as your first language is Japanese. But given the claim that it can help you prepare for the ‘CEFR test’, I have to wonder …

The learning tasks require players to match target items to translations / definitions (in both directions), with the target item sometimes in written form, sometimes spoken. Users do not, as far as I can tell, ever have to produce the target item: they only have to select. The learning relies on spaced repetition, but there is no generative effect (known to enhance memorisation). When I was experimenting, there were a few words that I did not know, but I was usually able to get the correct answer by eliminating the distractors (a choice of one from three gives players a reasonable chance of guessing correctly). WAM does not teach users how to produce words; its focus is on receptive knowledge (of a limited kind). I learn, for example, what a word like ‘aye’ or ‘missus’ kind of means, but I learn nothing about how to use it appropriately. Contrary to the claims in WAM’s bumf (that ‘all senses and dimensions of each word are fully acquired’), reading and listening comprehension speeds may be improved, but appropriate and accurate use of these words in speaking and writing is much less likely to follow. Does WAM really ‘strengthen and expand the foundation levels of cognition that support all higher level thinking’, as is claimed?

Perhaps it’s unfair to mention some of the more dubious claims of WAM’s promotional material, but here is a small selection, anyway: ‘WAM unleashes the full potential of natural motivation’. ‘WAM promotes Flow by carefully managing the ratio of unknown words. Your mind moves freely in the channel below frustration and above boredom’.

WAM is certainly an interesting project, but, like all the vocabulary apps I have ever looked at, there have to be trade-offs between optimal task design and what will fit on a mobile screen, between freedoms and flexibility for the user and the requirements of gamified points systems, between the amount of linguistic information that is desirable and the amount that spaced repetition can deal with, between attempting to make the app suitable for the greatest number of potential users and making it especially appropriate for particular kinds of users. Design considerations are always a mix of the pedagogical and the practical / commercial. And, of course, the financial. And, like most edtech products, the claims for its efficacy need to be treated with a bucket of salt.

References

Dehghanzadeh, H., Fardanesh, H., Hatami, J., Talaee, E. & Noroozi, O. (2019) Using gamification to support learning English as a second language: a systematic review, Computer Assisted Language Learning, DOI: 10.1080/09588221.2019.1648298

Driver, P. (2012) The Irony of Gamification. In English Digital Magazine 3, British Council Portugal, pp. 21 – 24 http://digitaldebris.info/digital-debris/2011/12/31/the-irony-of-gamification-written-for-ied-magazine.html

Howard-Jones, P. & Jay, T. (2016) Reward, learning and games. Current Opinion in Behavioral Sciences, 10: 65 – 72

Back in the middle of the last century, the first interactive machines for language teaching appeared. Previously, there had been phonograph discs and wire recorders (Ornstein, 1968: 401), but these had never really taken off. This time, things were different. Buoyed by a belief in the power of technology, along with the need (following the Soviet Union’s successful Sputnik programme) to demonstrate the pre-eminence of the United States’ technological expertise, the interactive teaching machines that were used in programmed instruction promised to revolutionize language learning (Valdman, 1968: 1). From coast to coast, ‘tremors of excitement ran through professional journals and conferences and department meetings’ (Kennedy, 1967: 871). The new technology was driven by hard science, supported and promoted by one of the most well-known and respected psychologists and public intellectuals of the day (Skinner, 1961).

In classrooms, the machines acted as powerfully effective triggers in generating situational interest (Hidi & Renninger, 2006). Even more exciting than the mechanical teaching machines were the computers that were appearing on the scene. ‘Lick’ Licklider, a pioneer in interactive computing at the Advanced Research Projects Agency in Arlington, Virginia, developed an automated drill routine for learning German by hooking up a computer, two typewriters, an oscilloscope and a light pen (Noble, 1991: 124). Students loved it, and some would ‘go on and on, learning German words until they were forced by scheduling to cease their efforts’. Researchers called the seductive nature of the technology ‘stimulus trapping’, and Licklider hoped that ‘before [the student] gets out from under the control of the computer’s incentives, [they] will learn enough German words’ (Noble, 1991: 125).

With many of the developed economies of the world facing a critical shortage of teachers, ‘an urgent pedagogical emergency’ (Hof, 2018), the new approach was considered extremely efficient and a means of equalising opportunity in schools across the country. It was ‘here to stay: [it] appears destined to make progress that could well go beyond the fondest dreams of its originators […] an entire industry is just coming into being and significant sales and profits should not be too long in coming’ (Kozlowski, 1961: 47).

Unfortunately, however, researchers and entrepreneurs had massively underestimated the significance of novelty effects. The triggered situational interest of the machines did not lead to intrinsic individual motivation. Students quickly tired of, and eventually came to dislike, programmed instruction and the machines that delivered it (McDonald et al., 2005: 89). What’s more, the machines were expensive and ‘research studies conducted on its effectiveness showed that the differences in achievement did not constantly or substantially favour programmed instruction over conventional instruction’ (Saettler, 2004: 303). Newer technologies, with better ‘stimulus trapping’, were appearing. Programmed instruction lost its backing and disappeared, leaving as traces only its interest in clearly defined learning objectives, the measurement of learning outcomes and a concern with the efficiency of learning approaches.

Hot on the heels of programmed instruction came the language laboratory. Futuristic in appearance, not entirely unlike the deck of the starship USS Enterprise which launched at around the same time, language labs captured the public imagination and promised to explore the final frontiers of language learning. As with the earlier teaching machines, students were initially enthusiastic. Even today, when language labs are introduced into contexts where they may be perceived as new technology, they can lead to high levels of initial motivation (e.g. Ramganesh & Janaki, 2017).

Given the huge investments into these labs, it’s unfortunate that initial interest waned fast. By 1969, many of these rooms had turned into ‘“electronic graveyards,” sitting empty and unused, or perhaps somewhat glorified study halls to which students grudgingly repair to don headphones, turn down the volume, and prepare the next period’s history or English lesson, unmolested by any member of the foreign language faculty’ (Turner, 1969: 1, quoted in Roby, 2003: 527). ‘Many second language students shudder[ed] at the thought of entering into the bowels of the “language laboratory” to practice and perfect the acoustical aerobics of proper pronunciation skills. Visions of sterile white-walled, windowless rooms, filled with endless bolted-down rows of claustrophobic metal carrels, and overseen by a humorless, lab director, evoke[d] fear in the hearts of even the most stout-hearted prospective second-language learners (Wiley, 1990: 44).

By the turn of this century, language labs had mostly gone, consigned to oblivion by the appearance of yet newer technology: the internet, laptops and smartphones. Education had been on the brink of being transformed through new learning technologies for decades (Laurillard, 2008: 1), but this time it really was different. It wasn’t just one technology that had appeared, but a whole slew of them: ‘artificial intelligence, learning analytics, predictive analytics, adaptive learning software, school management software, learning management systems (LMS), school clouds. No school was without these and other technologies branded as ‘superintelligent’ by the late 2020s’ (Macgilchrist et al., 2019). The hardware, especially phones, was ubiquitous and, since students already owned it, effectively free. Unlike with teaching machines and language laboratories, students were already used to the technology and expected to use their devices in their studies.

A barrage of publicity, mostly paid for by the industry, surrounded the new technologies. These would ‘meet the demands of Generation Z’, the new generation of students, now cast as consumers, who ‘were accustomed to personalizing everything’. AR, VR, interactive whiteboards, digital projectors and so on made it easier to ‘create engaging, interactive experiences’. The ‘New Age’ technologies made learning fun and easy, ‘bringing enthusiasm among the students, improving student engagement, enriching the teaching process, and bringing liveliness in the classroom’. On top of that, they allowed huge amounts of data to be captured and sold, whilst tracking progress and attendance. In any case, resistance to digital technology, said more than one language teaching expert, was pointless (Styring, 2015).

At the same time, technology companies increasingly took on ‘central roles as advisors to national governments and local districts on educational futures’ and public educational institutions came to be ‘regarded by many as dispensable or even harmful’ (Macgilchrist et al., 2019).

But, as it turned out, the students of Generation Z were not as uniformly enthusiastic about the new technology as had been assumed, and resistance to digital, personalized delivery in education was not long in coming. In November 2018, high school students at Brooklyn’s Secondary School for Journalism staged a walkout in protest at their school’s use of Summit Learning, a web-based platform promoting personalized learning developed by Facebook. They complained that the platform required them to spend much of their day in front of a computer screen, that it made it easy to cheat by looking up answers online, and that some of their teachers didn’t have the proper training for the curriculum (Leskin, 2018). Besides, their school was in a deplorable state of disrepair, especially the toilets. There were similar protests in Kansas, where students staged sit-ins, supported by their parents, one of whom complained that ‘we’re allowing the computers to teach and the kids all looked like zombies’ before pulling his son out of the school (Bowles, 2019). In Pennsylvania and Connecticut, some schools stopped using Summit Learning altogether, following protests.

But the resistance did not last. Protesters were accused of being nostalgic conservatives and educationalists kept largely quiet, fearful of losing their funding from the Chan Zuckerberg Initiative (Facebook) and other philanthro-capitalists. The provision of training in grit, growth mindset, positive psychology and mindfulness (also promoted by the technology companies) was ramped up, and eventually the disaffected students became more quiescent. Before long, the data-intensive, personalized approach, relying on the tools, services and data storage of particular platforms had become ‘baked in’ to educational systems around the world (Moore, 2018: 211). There was no going back (except for small numbers of ultra-privileged students in a few private institutions).

By the middle of the next century (2157), most students, of all ages, studied with interactive screens in the comfort of their homes. Algorithmically-driven content, with personalized, adaptive tests, had become the norm, but the technology occasionally went wrong, leading to some frustration. One day, two young children discovered a book in their attic. Made of paper with yellow, crinkly pages, where ‘the words stood still instead of moving the way they were supposed to’. The book recounted the experience of schools in the distant past, where ‘all the kids from the neighbourhood came’, sitting in the same room with a human teacher, studying the same things ‘so they could help one another on the homework and talk about it’. Margie, the younger of the children at 11 years old, was engrossed in the book when she received a nudge from her personalized learning platform to return to her studies. But Margie was reluctant to go back to her fractions. She ‘was thinking about how the kids must have loved it in the old days. She was thinking about the fun they had’ (Asimov, 1951).

References

Asimov, I. 1951. The Fun They Had. Accessed September 20, 2019. http://web1.nbed.nb.ca/sites/ASD-S/1820/J%20Johnston/Isaac%20Asimov%20-%20The%20fun%20they%20had.pdf

Bowles, N. 2019. ‘Silicon Valley Came to Kansas Schools. That Started a Rebellion’ The New York Times, April 21. Accessed September 20, 2019. https://www.nytimes.com/2019/04/21/technology/silicon-valley-kansas-schools.html

Hidi, S. & Renninger, K.A. 2006. ‘The Four-Phase Model of Interest Development’ Educational Psychologist, 41 (2), 111 – 127

Hof, B. 2018. ‘From Harvard via Moscow to West Berlin: educational technology, programmed instruction and the commercialisation of learning after 1957’ History of Education, 47 (4): 445-465

Kennedy, R.H. 1967. ‘Before using Programmed Instruction’ The English Journal, 56 (6), 871 – 873

Kozlowski, T. 1961. ‘Programmed Teaching’ Financial Analysts Journal, 17 (6): 47 – 54

Laurillard, D. 2008. Digital Technologies and their Role in Achieving our Ambitions for Education. London: Institute of Education.

Leskin, P. 2018. ‘Students in Brooklyn protest their school’s use of a Zuckerberg-backed online curriculum that Facebook engineers helped build’ Business Insider, 12.11.18 Accessed 20 September 2019. https://www.businessinsider.de/summit-learning-school-curriculum-funded-by-zuckerberg-faces-backlash-brooklyn-2018-11?r=US&IR=T

McDonald, J. K., Yanchar, S. C. & Osguthorpe, R.T. 2005. ‘Learning from Programmed Instruction: Examining Implications for Modern Instructional Technology’ Educational Technology Research and Development, 53 (2): 84 – 98

Macgilchrist, F., Allert, H. & Bruch, A. 2019. ‘Students and society in the 2020s. Three future “histories” of education and technology’. Learning, Media and Technology, https://www.tandfonline.com/doi/full/10.1080/17439884.2019.1656235

Moore, M. 2018. Democracy Hacked. London: Oneworld

Noble, D. D. 1991. The Classroom Arsenal. London: The Falmer Press

Ornstein, J. 1968. ‘Programmed Instruction and Educational Technology in the Language Field: Boon or Failure?’ The Modern Language Journal, 52 (7), 401 – 410

Ramganesh, E. & Janaki, S. 2017. ‘Attitude of College Teachers towards the Utilization of Language Laboratories for Learning English’ Asian Journal of Social Science Studies, 2 (1): 103 – 109

Roby, W.B. 2003. ‘Technology in the service of foreign language teaching: The case of the language laboratory’ In D. Jonassen (ed.), Handbook of Research on Educational Communications and Technology, 2nd ed.: 523 – 541. Mahwah, NJ.: Lawrence Erlbaum Associates

Saettler, P. 2004. The Evolution of American Educational Technology. Greenwich, Conn.: Information Age Publishing

Skinner, B. F. 1961. ‘Teaching Machines’ Scientific American, 205(5), 90-107

Styring, J. 2015. Engaging Generation Z. Cambridge English webinar 2015 https://www.youtube.com/watch?time_continue=4&v=XCxl4TqgQZA

Valdman, A. 1968. ‘Programmed Instruction versus Guided Learning in Foreign Language Acquisition’ Die Unterrichtspraxis / Teaching German, 1 (2), 1 – 14.

Wiley, P. D. 1990. ‘Language labs for 1990: User-friendly, expandable and affordable’. Media & Methods, 27(1), 44–47


Jenny Holzer, Protect me from what I want

At a recent ELT conference, a plenary presentation entitled ‘Getting it right with edtech’ (sponsored by a vendor of – increasingly digital – ELT products) began with the speaker suggesting that technology was basically neutral, that what you do with educational technology matters far more than the nature of the technology itself. The idea that technology is a ‘neutral tool’ has a long pedigree and often accompanies exhortations to embrace edtech in one form or another (see for example Fox, 2001). It is an idea that is supported by no less a luminary than Chomsky, who, in a 2012 video entitled ‘The Purpose of Education’ (Chomsky, 2012), said that:

As far as […] technology […] and education is concerned, technology is basically neutral. It’s kind of like a hammer. I mean, […] the hammer doesn’t care whether you use it to build a house or whether a torturer uses it to crush somebody’s skull; a hammer can do either. The same with the modern technology; say, the Internet, and so on.

Although hammers are not usually classic examples of educational technology, they are worthy of a short discussion. Hammers come in all shapes and sizes and when you choose one, you need to consider its head weight (usually between 16 and 20 ounces), the length of the handle, the shape of the grip, etc. Appropriate specifications for particular hammering tasks have been calculated in great detail. The data on which these specifications are based come from an analysis of the hand size and upper body strength of the typical user. The typical user is a man, and the typical hammer has been designed for a man. The average male hand length is 177.9 mm; that of the average woman is 10 mm shorter (Wang & Cai, 2017). Women typically have about half the upper body strength of men (Miller et al., 1993). It’s possible, but not easy, to find hammers designed for women (they are referred to as ‘Ladies hammers’ on Amazon). They have a much lighter head weight, a shorter handle length, and many come in pink or floral designs. Hammers, in other words, are far from neutral: they are highly gendered.

Moving closer to educational purposes and ways in which we might ‘get it right with edtech’, it is useful to look at the smartphone. The average screen size of these devices has risen in recent years, and is now 5.5 inches, with the market for 6 inch screens growing fast. Why is this an issue? Well, as Caroline Criado Perez (2019: 159) notes, ‘while we’re all admittedly impressed by the size of your screen, it’s a slightly different matter when it comes to fitting into half the population’s hands. The average man can fairly comfortably use his device one-handed – but the average woman’s hand is not much bigger than the handset itself’. This is despite the fact that women are more likely than men to own an iPhone.

It is not, of course, just technological artefacts that are gendered. Voice-recognition software is also very biased. One researcher (Tatman, 2017) has found that Google’s speech recognition tool is 13% more accurate for men than it is for women. There are also significant biases for race and social class. The reason lies in the dataset that the tool is trained on: the algorithms may be gender- and socio-culturally-neutral, but the dataset is not. It would not be difficult to redress this bias by training the tool on a different dataset.

The same bias can be found in automatic translation software. Because corpora such as the BNC or COCA have twice as many male pronouns as female ones (as a result of the kinds of text that are selected for the corpora), translation software reflects the bias. With Google Translate, a sentence in a language with a gender-neutral pronoun, such as ‘S/he is a doctor’ is rendered into English as ‘He is a doctor’. Meanwhile, ‘S/he is a nurse’ is translated as ‘She is a nurse’ (Criado Perez, 2019: 166).

Datasets, then, are often very far from neutral. Algorithms are not necessarily any more neutral than the datasets, and Cathy O’Neil’s best-seller ‘Weapons of Math Destruction’ (O’Neil, 2016) catalogues the many, many ways in which algorithms, posing as neutral mathematical tools, can increase racial, social and gender inequalities.

It would not be hard to provide many more examples, but the selection above is probably enough. Technology, as Langdon Winner (Winner, 1980) observed almost forty years ago, is ‘deeply interwoven in the conditions of modern politics’. Technology cannot be neutral: it has politics.

So far, I have focused primarily on the non-neutrality of technology in terms of gender (and, in passing, race and class). Before returning to broader societal issues, I would like to make a relatively brief mention of another kind of non-neutrality: the pedagogic. Language learning materials necessarily contain content of some kind: texts, topics, the choice of values or role models, language examples, and so on. These cannot be value-free. In the early days of educational computer software, one researcher (Biraimah, 1993) found that it was ‘at least, if not more, biased than the printed page it may one day replace’. My own impression is that this remains true today.

Equally interesting to my mind is the fact that all educational technologies, ranging from the writing slate to the blackboard (see Buzbee, 2014), from the overhead projector to the interactive whiteboard, always privilege a particular kind of teaching (and learning). ‘Technologies are inherently biased because they are built to accomplish certain very specific goals which means that some technologies are good for some tasks while not so good for other tasks’ (Zhao et al., 2004: 25). Digital flashcards, for example, inevitably encourage a focus on rote learning. Contemporary LMSs have impressive multi-functionality (i.e. they often could be used in a very wide variety of ways), but, in practice, most teachers use them in very conservative ways (Laanpere et al., 2004). This may be a result of teacher and institutional preferences, but it is almost certainly due, at least in part, to the way that LMSs are designed. They are usually ‘based on traditional approaches to instruction dating from the nineteenth century: presentation and assessment [and] this can be seen in the selection of features which are most accessible in the interface, and easiest to use’ (Lane, 2009).

The argument that educational technology is neutral because it could be put to many different uses, good or bad, is problematic because the likelihood of one particular use is usually much greater than another. There is, however, another way of looking at technological neutrality, and that is to look at its origins. Elsewhere on this blog, in post after post, I have given examples of the ways in which educational technology has been developed, marketed and sold primarily for commercial purposes. Educational values, if indeed there are any, are often an afterthought. The research literature in this area is rich and growing: Stephen Ball, Larry Cuban, Neil Selwyn, Joel Spring, Audrey Watters, etc.

Rather than revisit old ground here, this is an opportunity to look at a slightly different origin of educational technology: the US military. The close connection of the early history of the internet and the Advanced Research Projects Agency (now DARPA) of the United States Department of Defense is fairly well-known. Much less well-known are the very close connections between the US military and educational technologies, which are catalogued in the recently reissued ‘The Classroom Arsenal’ by Douglas D. Noble.

Following the twin shocks of the Soviet Sputnik 1 (in 1957) and Yuri Gagarin (in 1961), the United States launched a massive programme of investment in the development of high-tech weaponry. This included ‘computer systems design, time-sharing, graphics displays, conversational programming languages, heuristic problem-solving, artificial intelligence, and cognitive science’ (Noble, 1991: 55), all of which are now crucial components in educational technology. But it also quickly became clear that more sophisticated weapons required much better trained operators, hence the US military’s huge (and continuing) interest in training. Early interest focused on teaching machines and programmed instruction (branches of the US military were by far the biggest purchasers of programmed instruction products). It was essential that training was effective and efficient, and this led to a wide interest in the mathematical modelling of learning and instruction.

What was then called computer-based education (CBE) was developed as a response to military needs. The first experiments in computer-based training took place at the Systems Research Laboratory of the Air Force’s RAND Corporation think tank (Noble, 1991: 73). Research and development in this area accelerated in the 1960s and 1970s and CBE (which has morphed into the platforms of today) ‘assumed particular forms because of the historical, contingent, military contexts for which and within which it was developed’ (Noble, 1991: 83). It is possible to imagine computer-based education having developed in very different directions. Between the 1960s and 1980s, for example, the PLATO (Programmed Logic for Automatic Teaching Operations) project at the University of Illinois focused heavily on computer-mediated social interaction (forums, message boards, email, chat rooms and multi-player games). PLATO was also significantly funded by a variety of US military agencies, but proved to be of much less interest to the generals than the work taking place in other laboratories. As Noble observes, ‘some technologies get developed while others do not, and those that do are shaped by particular interests and by the historical and political circumstances surrounding their development’ (Noble, 1991: 4).

According to Noble, however, the influence of the military reached far beyond the development of particular technologies. Alongside the investment in technologies, the military were the prime movers in a campaign to promote computer literacy in schools.

Computer literacy was an ideological campaign rather than an educational initiative – a campaign designed, at bottom, to render people ‘comfortable’ with the ‘inevitable’ new technologies. Its basic intent was to win the reluctant acquiescence of an entire population in a brave new world sculpted in silicon.

The computer campaign also succeeded in getting people in front of that screen and used to having computers around; it made people ‘computer-friendly’, just as computers were being rendered ‘user-friendly’. It also managed to distract the population, suddenly propelled by the urgency of learning about computers, from learning about other things, such as how computers were being used to erode the quality of their working lives, or why they, supposedly the citizens of a democracy, had no say in technological decisions that were determining the shape of their own futures.

Third, it made possible the successful introduction of millions of computers into schools, factories and offices, even homes, with minimal resistance. The nation’s public schools have by now spent over two billion dollars on over a million and a half computers, and this trend still shows no signs of abating. At this time, schools continue to spend one-fifth as much on computers, software, training and staffing as they do on all books and other instructional materials combined. Yet the impact of this enormous expenditure is a stockpile of often idle machines, typically used for quite unimaginative educational applications. Furthermore, the accumulated results of three decades of research on the effectiveness of computer-based instruction remain ‘inconclusive and often contradictory’. (Noble, 1991: x – xi)

Rather than being neutral in any way, it seems more reasonable to argue, along with (I think) most contemporary researchers, that edtech is profoundly value-laden because it has the potential to (i) influence certain values in students; (ii) change educational values in [various] ways; and (iii) change national values (Omotoyinbo & Omotoyinbo, 2016: 173). Most importantly, the growth in the use of educational technology has been accompanied by a change in the way that education itself is viewed: ‘as a tool, a sophisticated supply system of human cognitive resources, in the service of a computerized, technology-driven economy’ (Noble, 1991: 1). These two trends are inextricably linked.

References

Biraimah, K. 1993. The non-neutrality of educational computer software. Computers and Education 20 / 4: 283 – 290

Buzbee, L. 2014. Blackboard: A Personal History of the Classroom. Minneapolis: Graywolf Press

Chomsky, N. 2012. The Purpose of Education (video). Learning Without Frontiers Conference. https://www.youtube.com/watch?v=DdNAUJWJN08

Criado Perez, C. 2019. Invisible Women. London: Chatto & Windus

Fox, R. 2001. Technological neutrality and practice in higher education. In A. Herrmann and M. M. Kulski (Eds), Expanding Horizons in Teaching and Learning. Proceedings of the 10th Annual Teaching Learning Forum, 7-9 February 2001. Perth: Curtin University of Technology. http://clt.curtin.edu.au/events/conferences/tlf/tlf2001/fox.html

Laanpere, M., Poldoja, H. & Kikkas, K. 2004. The second thoughts about pedagogical neutrality of LMS. Proceedings of IEEE International Conference on Advanced Learning Technologies, 2004. https://ieeexplore.ieee.org/abstract/document/1357664

Lane, L. 2009. Insidious pedagogy: How course management systems impact teaching. First Monday, 14(10). https://firstmonday.org/ojs/index.php/fm/article/view/2530/2303

Miller, A.E., MacDougall, J.D., Tarnopolsky, M. A. & Sale, D.G. 1993. ‘Gender differences in strength and muscle fiber characteristics’ European Journal of Applied Physiology and Occupational Physiology. 66(3): 254-62 https://www.ncbi.nlm.nih.gov/pubmed/8477683

Noble, D. D. 1991. The Classroom Arsenal. Abingdon, Oxon.: Routledge

Omotoyinbo, D. W. & Omotoyinbo, F. R. 2016. Educational Technology and Value Neutrality. Societal Studies, 8 / 2: 163 – 179 https://www3.mruni.eu/ojs/societal-studies/article/view/4652/4276

O’Neil, C. 2016. Weapons of Math Destruction. London: Penguin

Sundström, P. 1998. Interpreting the Notion that Technology is Value Neutral. Medicine, Health Care and Philosophy 1: 42-44

Tatman, R. 2017. ‘Gender and Dialect Bias in YouTube’s Automatic Captions’ Proceedings of the First Workshop on Ethics in Natural Language Processing, pp. 53–59 http://www.ethicsinnlp.org/workshop/pdf/EthNLP06.pdf

Wang, C. & Cai, D. 2017. ‘Hand tool handle design based on hand measurements’ MATEC Web of Conferences 119, 01044 (2017) https://www.matec-conferences.org/articles/matecconf/pdf/2017/33/matecconf_imeti2017_01044.pdf

Winner, L. 1980. Do Artifacts have Politics? Daedalus 109 / 1: 121 – 136

Zhao, Y, Alvarez-Torres, M. J., Smith, B. & Tan, H. S. 2004. The Non-neutrality of Technology: a Theoretical Analysis and Empirical Study of Computer Mediated Communication Technologies. Journal of Educational Computing Research 30 (1 &2): 23 – 55

In December last year, I posted a wish list for vocabulary (flashcard) apps. At the time, I hadn’t read a couple of key research texts on the subject. It’s time for an update.

First off, there’s an article called ‘Intentional Vocabulary Learning Using Digital Flashcards’ by Hsiu-Ting Hung. It’s available online here. Given the lack of empirical research into the use of digital flashcards, it’s an important article and well worth a read. Its basic conclusion is that digital flashcards are more effective as a learning tool than printed word lists. No great surprises there, but of more interest, perhaps, are the recommendations that (1) ‘students should be educated about the effective use of flashcards (e.g. the amount and timing of practice), and this can be implemented through explicit strategy instruction in regular language courses or additional study skills workshops’ (Hung, 2015: 111), and (2) that digital flashcards can be usefully ‘repurposed for collaborative learning tasks’ (Hung, ibid.).

However, what really grabbed my attention was an article by Tatsuya Nakata. Nakata’s research is of particular interest to anyone interested in vocabulary learning, but especially so to those with an interest in digital possibilities. A number of his research articles can be freely accessed via his page at ResearchGate, but the one I am interested in is called ‘Computer-assisted second language vocabulary learning in a paired-associate paradigm: a critical investigation of flashcard software’. Don’t let the title put you off. It’s a review of a pile of web-based flashcard programs: since the article is already five years old, many of the programs have either changed or disappeared, but the critical approach he takes is more or less as valid now as it was then (whether we’re talking about web-based stuff or apps).

Nakata divides his evaluation criteria into two broad groups.

Flashcard creation and editing

(1) Flashcard creation: Can learners create their own flashcards?

(2) Multilingual support: Can the target words and their translations be created in any language?

(3) Multi-word units: Can flashcards be created for multi-word units as well as single words?

(4) Types of information: Can various kinds of information be added to flashcards besides the word meanings (e.g. parts of speech, contexts, or audios)?

(5) Support for data entry: Does the software support data entry by automatically supplying information about lexical items such as meaning, parts of speech, contexts, or frequency information from an internal database or external resources?

(6) Flashcard set: Does the software allow learners to create their own sets of flashcards?

Learning

(1) Presentation mode: Does the software have a presentation mode, where new items are introduced and learners familiarise themselves with them?

(2) Retrieval mode: Does the software have a retrieval mode, which asks learners to recall or choose the L2 word form or its meaning?

(3) Receptive recall: Does the software ask learners to produce the meanings of target words?

(4) Receptive recognition: Does the software ask learners to choose the meanings of target words?

(5) Productive recall: Does the software ask learners to produce the target word forms corresponding to the meanings provided?

(6) Productive recognition: Does the software ask learners to choose the target word forms corresponding to the meanings provided?

(7) Increasing retrieval effort: For a given item, does the software arrange exercises in the order of increasing difficulty?

(8) Generative use: Does the software encourage generative use of words, where learners encounter or use previously met words in novel contexts?

(9) Block size: Can the number of words studied in one learning session be controlled and altered?

(10) Adaptive sequencing: Does the software change the sequencing of items based on learners’ previous performance on individual items?

(11) Expanded rehearsal: Does the software help implement expanded rehearsal, where the intervals between study trials are gradually increased as learning proceeds? (Nakata, T. (2011): ‘Computer-assisted second language vocabulary learning in a paired-associate paradigm: a critical investigation of flashcard software’ Computer Assisted Language Learning, 24:1, 17-38)

It’s a rather different list from my own (there’s nothing I would disagree with here), because mine is more general and his is exclusively oriented towards learning principles. Nakata makes the point towards the end of the article that it would ‘be useful to investigate learners’ reactions to computer-based flashcards to examine whether they accept flashcard programs developed according to learning principles’ (p. 34). It’s far from clear, he points out, that conformity to learning principles is at the top of learners’ agendas. More than just users’ feelings about computer-based flashcards in general, a key concern will be the fact that there are ‘large individual differences in learners’ perceptions of [any flashcard] program’ (Nakata, T. 2008. ‘English vocabulary learning with word lists, word cards and computers: implications from cognitive psychology research for optimal spaced learning’ ReCALL 20(1), p. 18).

I was trying to make a similar point in another post about motivation and vocabulary apps. In the end, as with any language learning material, research-driven language learning principles can only take us so far. User experience is a far more difficult creature to pin down or to make generalisations about. A user’s reactions to graphics, gamification, loading times and so on are so powerful and so subjective that learning principles will inevitably play second fiddle. That’s not to say, of course, that Nakata’s questions are not important: it’s merely to wonder whether the bigger question is truly answerable.

Nakata’s research identifies plenty of room for improvement in digital flashcards, and although the article is now quite old, not a lot has changed. Key areas to work on are (1) the provision of generative use of target words, (2) the need to increase retrieval effort, (3) the automatic provision of information about meaning, parts of speech, or contexts (in order to facilitate flashcard creation), and (4) the automatic generation of multiple-choice distractors.

In the conclusion of his study, he identifies one flashcard program which is better than all the others. Unsurprisingly, five years down the line, the software he identifies is no longer free, others have changed more rapidly in the intervening period, and who knows what will be out in front next week?


Having spent a lot of time recently looking at vocabulary apps, I decided to put together a Christmas wish list of the features of my ideal vocabulary app. The list is not exhaustive and I’ve given more attention to some features than others. What (apart from testing) have I missed out?

1             Spaced repetition

Since the point of a vocabulary app is to help learners memorise vocabulary items, it is hard to imagine a decent system that does not incorporate spaced repetition. Spaced repetition algorithms offer one well-researched way of counteracting the brain’s ‘forgetting curve’. These algorithms come in different shapes and sizes, and I am not technically competent to judge which is the most efficient. However, as Peter Ellis Jones, the developer of a flashcard system called CardFlash, points out, efficiency is only one half of the rote memorisation problem. If you are not motivated to learn, the cleverness of the algorithm is moot. Fundamentally, learning software needs to be fun, rewarding, and give a solid sense of progression.
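To make the idea concrete, here is a minimal sketch of simplified SM-2, one of the oldest and best-known spaced repetition schedulers (it underlies many flashcard systems). This is a sketch of the published algorithm, not of any particular app’s implementation, and real systems tune the constants and add refinements:

```python
from dataclasses import dataclass

@dataclass
class Card:
    ease: float = 2.5      # SM-2's 'easiness factor'
    interval: int = 0      # days until the next review
    repetitions: int = 0   # consecutive successful recalls

def review(card: Card, quality: int) -> Card:
    """Reschedule a card after a review.
    quality: self-assessed recall from 0 (blackout) to 5 (perfect)."""
    if quality < 3:
        # Failed recall: the repetition sequence starts again tomorrow.
        card.repetitions = 0
        card.interval = 1
    else:
        if card.repetitions == 0:
            card.interval = 1
        elif card.repetitions == 1:
            card.interval = 6
        else:
            # Each successful recall pushes the next review further away.
            card.interval = round(card.interval * card.ease)
        card.repetitions += 1
    # Hard-won answers lower the ease; effortless ones raise it.
    card.ease = max(1.3, card.ease + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    return card
```

The intervals (1 day, then 6 days, then multiplying by the ease factor) are what produce the characteristic expanding rehearsal schedule.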

2             Quantity, balance and timing of new and ‘old’ items

A spaced repetition algorithm determines the optimum interval between repetitions, but further algorithms will be needed to determine when and with what frequency new items will be added to the deck. Once a system knows how many items a learner needs to learn and the time in which they have to do it, it is possible to determine the timing and frequency of the presentation of new items. But the system cannot know in advance how well an individual learner will learn the items (for any individual, some items will be more readily learnable than others) nor the extent to which learners will live up to their own positive expectations of time spent on-app. As most users of flashcard systems know, it is easy to fall behind, feel swamped and, ultimately, give up. An intelligent system needs to be able to respond to individual variables in order to ensure that the learning load is realistic.
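What might ‘responding to individual variables’ look like in practice? One simple approach is to throttle new items whenever the review backlog grows. The function below is a hypothetical sketch (the names and the cap of 100 reviews a day are invented for illustration):

```python
def new_items_today(items_remaining: int, days_remaining: int,
                    reviews_due: int, max_reviews: int = 100) -> int:
    """Decide how many new items to introduce today.

    The baseline pace is whatever is needed to finish on time, but
    new items are withheld when the review backlog leaves no spare
    capacity, so the learner never feels swamped."""
    if days_remaining <= 0:
        return items_remaining
    baseline = -(-items_remaining // days_remaining)  # ceiling division
    spare_capacity = max(0, max_reviews - reviews_due)
    return min(baseline, spare_capacity)
```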

3             Task variety

A standard flashcard system which simply asks learners to indicate whether they ‘know’ a target item before they flip over the card rapidly becomes extremely boring. A system which tests this knowledge soon becomes equally dull. There needs to be a variety of ways in which learners interact with an app, both for reasons of motivation and learning efficiency. It may be the case that, for an individual user, certain task types lead to more rapid gains in learning. An intelligent, adaptive system should be able to capture this information and modify the selection of task types.
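One way a system might capture and act on this information is a bandit-style selector: mostly serve the task type with the best observed learning gains for this learner, but keep sampling the others in case things change. The following is an illustrative sketch, not a description of any existing app:

```python
import random

class TaskTypeSelector:
    """Epsilon-greedy choice among task types for one learner."""

    def __init__(self, task_types: list[str], epsilon: float = 0.1):
        self.epsilon = epsilon
        self.gains = {t: 0.0 for t in task_types}  # running average gain
        self.counts = {t: 0 for t in task_types}

    def choose(self) -> str:
        if random.random() < self.epsilon:          # explore occasionally
            return random.choice(list(self.gains))
        return max(self.gains, key=self.gains.get)  # otherwise exploit

    def record(self, task_type: str, gain: float) -> None:
        """Update the average gain (e.g. later recall success) for a task type."""
        self.counts[task_type] += 1
        n = self.counts[task_type]
        self.gains[task_type] += (gain - self.gains[task_type]) / n
```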

Most younger learners and some adult learners will respond well to the inclusion of games within the range of task types. Examples of such games include the puzzles developed by Oliver Rose in his Phrase Maze app to accompany Quizlet practice.

4             Generative use

Memory researchers have long known about the ‘Generation Effect’ (see for example this piece of research from the Journal of Verbal Learning and Verbal Behavior, 1978). Items are better learnt when the learner has to generate, in some (even small) way, the target item, rather than simply reading it. In vocabulary learning, this could be, for example, typing in the target word or, more simply, inserting some missing letters. Systems which incorporate task types that require generative use are likely to result in greater learning gains than simple, static flashcards with target items on one side and definitions or translations on the other.
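The ‘missing letters’ version of generation is, at least, trivial to implement. A hypothetical helper (the name and the 50% default are my own inventions):

```python
import random

def gap_fill(word: str, proportion: float = 0.5) -> str:
    """Blank out some of a word's letters so the learner has to
    generate them, e.g. 'important' might become 'i_p_rt_nt'."""
    letters = list(word)
    n_gaps = max(1, round(len(letters) * proportion))
    for i in random.sample(range(len(letters)), n_gaps):
        letters[i] = "_"
    return "".join(letters)
```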

5             Receptive and productive practice

The most basic digital flashcard systems require learners to understand a target item, or to generate it from a definition or translation prompt. Valuable as this may be, it won’t help learners much to use these items productively, since these systems focus exclusively on meaning. For productive use, information must be provided about collocation, colligation, register, etc., and these aspects of word knowledge will need to be focused on within the range of task types. At the same time, most vocabulary apps that I have seen focus primarily on the written word. Although any good system will offer an audio recording of the target item, and many will offer the learner the option of recording themselves, learners are invariably asked to type in their answers, rather than say them. For the latter, speech recognition technology will be needed. Ideally, too, an intelligent system will compare learner recordings with the audio models and provide feedback in such a way that the learner is guided towards a closer reproduction of the model.

6             Scaffolding and feedback

Most flashcard systems are basically low-stakes practice self-testing. Research (see, for example, Dunlosky et al.’s meta-study ‘Improving Students’ Learning With Effective Learning Techniques: Promising Directions From Cognitive and Educational Psychology’) suggests that, as a learning strategy, practice testing has high utility – indeed, higher utility than other strategies like keyword mnemonics or highlighting. However, an element of tutoring is likely to enhance practice testing, and, for this, scaffolding and feedback will be needed. If, for example, a learner is unable to produce a correct answer, they will probably benefit from being guided towards it through hints, in the same way as a teacher would elicit in a classroom. Likewise, feedback on why an answer is wrong (as opposed to simply being told that you are wrong), followed by encouragement to try again, is likely to enhance learning. Such feedback might, for example, point out that there is perhaps a spelling problem in the learner’s attempted answer, that the attempted answer is in the wrong part of speech, or that it is semantically close to the correct answer but does not collocate with other words in the text. The incorporation of intelligent feedback of this kind will require a number of NLP tools, since it will never be possible for a human item-writer to anticipate all the possible incorrect answers. A current example of intelligent feedback of this kind can be found in the Oxford English Vocabulary Trainer app.
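A first, crude cut at the spelling part of such feedback could be built on edit distance; everything else (part of speech, semantic closeness, collocation) needs the heavier NLP machinery mentioned above. A sketch, with invented wording for the feedback messages:

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance, computed row by row."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def feedback(attempt: str, target: str) -> str:
    """Guide the learner towards the answer instead of just rejecting it."""
    if attempt == target:
        return "Correct!"
    if edit_distance(attempt.lower(), target.lower()) <= 2:
        return "Close - check your spelling and try again."
    return "Not quite - think again about the meaning and have another go."
```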

7             Content

At the very least, a decent vocabulary app will need good definitions and translations (how many different languages?), and these will need to be tagged to the senses of the target items. These will need to be supplemented with all the other information that you find in a good learner’s dictionary: syntactic patterns, collocations, cognates, an indication of frequency, etc. The only way of getting this kind of high-quality content is by paying to license it from a company with expertise in lexicography. It doesn’t come cheap.

There will also need to be example sentences, both to illustrate meaning / use and for deployment in tasks. Dictionary databases can provide some of these, but they cannot be relied on as a source. This is because the example sentences in dictionaries have been selected and edited to accompany the other information provided in the dictionary, and not as items in practice exercises, which have rather different requirements. Once more, the solution doesn’t come cheap: experienced item writers will be needed.

Dictionaries describe and illustrate how words are typically used. But examples of typical usage tend to be as dull as they are forgettable. Learning is likely to be enhanced if examples are cognitively salient: weird examples with odd collocations, for example. Another thing for the item writers to think about.

A further challenge for an app which is not level-specific is that both the definitions and example sentences need to be level-specific. An A1 / A2 learner will need the kind of content that is found in, say, the Oxford Essential dictionary; B2 learners and above will need content from, say, the OALD.

8             Artwork and design

It’s easy enough to find artwork or photos of concrete nouns, but try to find or commission a pair of pictures that differentiate, for example, the adjectives ‘wild’ and ‘dangerous’ … What kind of pictures might illustrate simple verbs like ‘learn’ or ‘remember’? Will such illustrations be clear enough when squeezed into a part of a phone screen? Animations or very short video clips might provide a solution in some cases, but these are more expensive to produce and video files are much heavier.

With a few notable exceptions, such as the British Council’s MyWordBook 2, design in vocabulary apps has been largely neglected.

9             Importable and personalisable lists

Many learners will want to use a vocabulary app in association with other course material (e.g. coursebooks), which means importing the word lists that accompany that material. Teachers, however, will inevitably want to edit these lists, deleting some items and adding others, and learners will want to do the same. This is a huge headache for app designers. If new items are going to be added to word lists, how will the definitions, example sentences and illustrations be generated? Will the database contain audio recordings of these words? How will these items be added to the practice tasks (if these include task types that go beyond simple double-sided flashcards)? NLP tools are not yet good enough to trawl a large corpus in order to select (and possibly edit) sentences that illustrate the right meaning and which are appropriate for interactive practice exercises (see the sketch below). We can personalise the speed of learning and even the types of learning tasks, so long as the target language is predetermined. But as soon as we allow for personalisation of content, we run into difficulties.
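To see why, consider what the easy part of that corpus-trawling looks like. A crude filter – sketched here with an invented threshold and a toy frequency list – can find short sentences containing the target word whose other words are mostly high-frequency. What it cannot do is check that a sentence illustrates the right sense of the word, or that it works as an exercise item, and that is precisely the hard part.

import re

# A toy stand-in for a real frequency list (e.g. the most frequent 2,000 words).
HIGH_FREQUENCY = {"the", "a", "is", "was", "he", "she", "it", "very",
                  "and", "to", "of", "in", "on", "for", "with"}

def candidate_examples(corpus_sentences, target, max_len=12, min_known=0.7):
    """Keep short sentences containing the target whose other words are
    mostly high-frequency -- a crude proxy for 'suitable for learners'."""
    keep = []
    for sent in corpus_sentences:
        words = re.findall(r"[a-z']+", sent.lower())
        if target not in words or len(words) > max_len:
            continue
        others = [w for w in words if w != target]
        known = sum(w in HIGH_FREQUENCY for w in others)
        if others and known / len(others) >= min_known:
            keep.append(sent)
    return keep

corpus = ["The weather was very heavy with obfuscation.",
          "It was raining, and the rain was heavy.",
          "Heavy isotopes undergo beta decay."]
print(candidate_examples(corpus, "heavy"))
# -> ['It was raining, and the rain was heavy.']

The third sentence is rejected here by the frequency test, but only by luck: nothing in the code knows that ‘heavy isotopes’ is the wrong sense of ‘heavy’ for a general-English learner, and no current tool can be trusted to know it reliably.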

10          Gamification

Maintaining motivation to use a vocabulary app is not easy. Gamification may help. Measuring progress against objectives will be a start. Stars and badges and leaderboards may help some users. Rewards may help others. But gamification features need to be built into the heart of the system, into the design and selection of tasks, rather than simply tacked on as an afterthought. They need to be trialled and tweaked, so analytics will be needed.

11          Teacher support

Although the use of vocabulary flashcards is beginning to catch on in English language teaching, teachers need help with ways to incorporate them into the work they do with their students. What can teachers do in class to encourage use of the app? In what ways does app use require teachers to change their approach to vocabulary work in the classroom? Reporting functions can help teachers know about the progress their students are making and provide very detailed information about words that are causing problems. But, as anyone involved in platform-based course materials knows, teachers need a lot of help.

12          And, of course, …

Apps need to be usable with different operating systems. Ideally, they should be (partially) usable offline. Loading times need to be short. They need to be easy and intuitive to use.

It’s unlikely that I’ll be seeing a vocabulary app with all of these features any time soon. Or, possibly, ever. The cost of developing something that could do all this would be extremely high, and there is no indication that there is a market that would be ready to pay the sort of prices that would be needed to cover the costs of development and turn a profit. We need to bear in mind, too, the fact that vocabulary apps can only ever assist in the initial acquisition of vocabulary: apps alone can’t solve the vocabulary learning problem (despite the silly claims of some app developers). The need for meaningful communicative use, extensive reading and listening, will not go away because a learner has been using an app. So, how far can we go in developing better and better vocabulary apps before users decide that a cheap / free app, with all its shortcomings, is actually good enough?

I posted a follow-up to this post in October 2016.

In ELT circles, ‘behaviourism’ is a boo word. In the standard history of approaches to language teaching (characterised as a ‘procession of methods’ by Hunter & Smith 2012: 432[1]), there were the bad old days of behaviourism until Chomsky came along, savaged the theory in his review of Skinner’s ‘Verbal Behavior’, and we were all able to see the light. In reality, of course, things weren’t quite like that. The debate between Chomsky and the behaviourists is far from over, behaviourism was not the driving force behind the development of audiolingual approaches to language teaching, and audiolingualism is far from dead. For an entertaining and eye-opening account of something much closer to reality, I would thoroughly recommend a post on Russ Mayne’s Evidence Based ELT blog, along with the discussion which follows it. For anyone who would like to understand what behaviourism is, was, and is not (before they throw the term around as an insult), I’d recommend John A. Mills’ ‘Control: A History of Behavioral Psychology’ (New York University Press, 1998) and John Staddon’s ‘The New Behaviorism 2nd edition’ (Psychology Press, 2014).

There is a close connection between behaviourism and adaptive learning. Audrey Watters, no fan of adaptive technology, suggests that ‘any company touting adaptive learning software’ has been influenced by Skinner. In a more extended piece, ‘Education Technology and Skinner’s Box’, Watters explores further her problems with Skinner and the educational technology that has been inspired by behaviourism. But writers much more sympathetic to adaptive learning also see close connections to behaviourism. ‘The development of adaptive learning systems can be considered as a transformation of teaching machines,’ write Kara & Sevim[2] (2013: 114 – 117), although they go on to point out the differences between the two. Vendors of adaptive learning products, like DreamBox Learning©, are not shy of associating themselves with behaviourism: ‘Adaptive learning has been with us for a while, with its history of adaptive learning rooted in cognitive psychology, beginning with the work of behaviorist B.F. Skinner in the 1950s, and continuing through the artificial intelligence movement of the 1970s.’

That there is a strong connection between adaptive learning and behaviourism is indisputable, but I am not interested in attempting to establish the strength of that connection. This would, in any case, be an impossible task without some reductionist definition of both terms. Instead, my interest here is to explore some of the parallels between the two, and, in the spirit of the topic, I’d like to do this by comparing the behaviours of behaviourists and adaptive learning scientists.

Data and theory

Both behaviourism and adaptive learning (in its big data form) are centrally concerned with behaviour – capturing and measuring it in an objective manner. In both, experimental observation and the collection of ‘facts’ (physical, measurable, behavioural occurrences) precede any formulation of theory. John Mills’ description of behaviourists could apply equally well to adaptive learning scientists: ‘theory construction was a seesaw process whereby one began with crude outgrowths from observations and slowly created one’s theory in such a way that one could make more and more precise observations, building those observations into the theory at each stage. No behaviourist ever considered the possibility of taking existing comprehensive theories of mind and testing or refining them.’[3]

Positivism and the panopticon

Both behaviourism and adaptive learning are pragmatically positivist, believing that truth can be established by the study of facts. J. B. Watson, the founding father of behaviourism, whose article ‘Psychology as the Behaviorist Views It’ set the behaviourist ball rolling, believed that experimental observation could ‘reveal everything that can be known about human beings’[4]. Jose Ferreira of Knewton has made similar claims: We get five orders of magnitude more data per user than Google does. We get more data about people than any other data company gets about people, about anything — and it’s not even close. We’re looking at what you know, what you don’t know, how you learn best. […] We know everything about what you know and how you learn best because we get so much data. Digital data analytics offer something that Watson couldn’t have imagined in his wildest dreams, but he would have approved.

The revolutionary science

Big data (and the adaptive learning which is a part of it) is presented as a game-changer: The era of big data challenges the way we live and interact with the world. […] Society will need to shed some of its obsession for causality in exchange for simple correlations: not knowing why but only what. This overturns centuries of established practices and challenges our most basic understanding of how to make decisions and comprehend reality[5]. But the reverence for technology and the ability to reach understandings of human beings by capturing huge amounts of behavioural data was adumbrated by Watson a century before big data became a widely used term. Watson’s 1913 lecture at Columbia University was ‘a clear pitch’[6] for the supremacy of behaviourism, and its potential as a revolutionary science.

Prediction and control

The fundamental point of both behaviourism and adaptive learning is the same. The research practices and the theorizing of American behaviourists until the mid-1950s, writes Mills[7], were driven by the intellectual imperative to create theories that could be used to make socially useful predictions. Predictions are only useful to the extent that they can be used to manipulate behaviour. Watson states this very baldly: the theoretical goal of psychology is the prediction and control of behaviour[8]. Contemporary iterations of behaviourism, such as behavioural economics or nudge theory (see, for example, Thaler & Sunstein’s best-selling ‘Nudge’, Penguin Books, 2008), or the British government’s Behavioural Insights Team, share the same desire to divert individual activity towards goals (selected by those with power), ‘without either naked coercion or democratic deliberation’[9]. Jose Ferreira of Knewton has an identical approach: We can predict failure in advance, which means we can pre-remediate it in advance. We can say, “Oh, she’ll struggle with this, let’s go find the concept from last year’s materials that will help her not struggle with it.” Like the behaviourists, Ferreira makes grand claims about the social usefulness of his predict-and-control technology: The end is a really simple mission. Only 22% of the world finishes high school, and only 55% finish sixth grade. Those are just appalling numbers. As a species, we’re wasting almost four-fifths of the talent we produce. […] I want to solve the access problem for the human race once and for all.

Ethics

Because they rely on capturing large amounts of personal data, both behaviourism and adaptive learning quickly run into ethical problems. Even where informed consent is used, the subjects must remain partly ignorant of exactly what is being tested, or else there is the fear that they might adjust their behaviour accordingly. The goal is to minimise conscious understanding of what is going on[10]. For adaptive learning, the ethical problem is much greater because of the impossibility of ensuring the security of this data. Everything is hackable.

Marketing

Behaviourism was seen as a godsend by the world of advertising. J. B. Watson, after a front-page scandal about his affair with a student, and losing his job at Johns Hopkins University, quickly found employment on Madison Avenue. ‘Scientific advertising’, as practised by the Mad Men from the 1920s onwards, was based on behaviourism. The use of data analytics by Google, Amazon, et al is a direct descendant of scientific advertising, so it is richly appropriate that adaptive learning is the child of data analytics.

[1] Hunter, D. and Smith, R. (2012) ‘Unpacking the past: “CLT” through ELTJ keywords’. ELT Journal, 66/4: 430-439.

[2] Kara, N. & Sevim, N. (2013) ‘Adaptive learning systems: beyond teaching machines’. Contemporary Educational Technology, 4/2: 108-120

[3] Mills, J. A. (1998) Control: A History of Behavioral Psychology. New York: New York University Press, p.5

[4] Davies, W. (2015) The Happiness Industry. London: Verso. p.91

[5] Mayer-Schönberger, V. & Cukier, K. (2013) Big Data. London: John Murray, p.7

[6] Davies, W. (2015) The Happiness Industry. London: Verso. p.87

[7] Mills, J. A. (1998) Control: A History of Behavioral Psychology. New York: New York University Press, p.2

[8] Watson, J. B. (1913) ‘Psychology as the Behaviorist Views It’. Psychological Review, 20: 158

[9] Davies, W. (2015) The Happiness Industry. London: Verso. p.88

[10] Davies, W. (2015) The Happiness Industry. London: Verso. p.92

‘Sticky’ – as in ‘sticky learning’ or ‘sticky content’ (as opposed to ‘sticky fingers’ or a ‘sticky problem’) – is itself fast becoming a sticky word. If you check out ‘sticky learning’ on Google Trends, you’ll see that it suddenly spiked in September 2011, following the slightly earlier appearance of ‘sticky content’. The historical rise in this use of the word coincides with the exponential growth in the number of references to ‘big data’.

I am often asked if adaptive learning really will take off as a big thing in language learning. Will adaptivity itself be a sticky idea? When the question is asked, people mean the big data variety of adaptive learning, rather than the much more limited adaptivity of spaced repetition algorithms, which, I think, is firmly here and here to stay. I can’t answer the question with any confidence, but I recently came across a book which suggests a useful way of approaching the question.
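Before turning to the book, it is worth pausing on just how modest that spaced-repetition sort of adaptivity is. The best-known scheduling rule, SM-2 (the basis of SuperMemo, Anki and many other flashcard apps), fits in a dozen lines. Below is a simplified sketch; real implementations differ in the details.

def sm2_update(quality, repetitions, interval, ease):
    """One review in a simplified SM-2 schedule. quality is the learner's
    0-5 self-rating; returns the updated (repetitions, interval, ease)."""
    if quality < 3:
        return 0, 1, ease              # lapse: see the card again tomorrow
    ease = max(1.3, ease + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    repetitions += 1
    if repetitions == 1:
        interval = 1                   # first success: review in 1 day
    elif repetitions == 2:
        interval = 6                   # second success: review in 6 days
    else:
        interval = round(interval * ease)  # then intervals grow geometrically
    return repetitions, interval, ease

# A card answered well three times in a row:
state = (0, 0, 2.5)
for quality in (5, 4, 5):
    state = sm2_update(quality, *state)
    print(state)   # intervals of 1, 6, then 16 days

The only thing that adapts is the timing of each card’s next review. Content, sequencing and task type all stay fixed, which is a long way from what the big data variety promises.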

‘From the Ivory Tower to the Schoolhouse’ by Jack Schneider (Harvard Education Press, 2014) investigates the reasons why promising ideas from education research fail to get taken up by practitioners, and why other ideas, less promising from a research or theoretical perspective, become sticky quite quickly. As an example of the former, Schneider considers Robert Sternberg’s ‘Triarchic Theory’. As an example of the latter, he devotes a chapter to Howard Gardner’s ‘Multiple Intelligences Theory’.

Schneider argues that educational ideas need to possess four key attributes in order for teachers to sit up, take notice and adopt them.

  1. perceived significance: the idea must answer a question central to the profession – offering a big-picture understanding rather than merely one small piece of a larger puzzle
  2. philosophical compatibility: the idea must clearly jibe with closely held [teacher] beliefs like the idea that teachers are professionals, or that all children can learn
  3. occupational realism: it must be possible for the idea to be put easily into immediate use
  4. transportability: the idea needs to find its practical expression in a form that teachers can access and use at the time that they need it – it needs to have a simple core that can travel through pre-service coursework, professional development seminars, independent study and peer networks

To what extent does big data adaptive learning possess these attributes? It certainly comes up trumps with respect to perceived significance. The big question that it attempts to answer is the question of how we can make language learning personalized / differentiated / individualised. As its advocates never cease to remind us, adaptive learning holds out the promise of moving away from a one-size-fits-all approach. The extent to which it can keep this promise is another matter, of course. For it to do so, it will never be enough just to offer different pathways through a digitalised coursebook (or its equivalent). Much, much more content will be needed: at least five or six times the content of a one-size-fits-all coursebook. At the moment, there is little evidence of the necessary investment into content being made (quite the opposite, in fact), but the idea remains powerful nevertheless.

When it comes to philosophical compatibility, adaptive learning begins to run into difficulties. Despite the decades of edging towards more communicative approaches in language teaching, research (e.g. the research into English teaching in Turkey described in a previous post) suggests that teachers still see explanation and explication as key functions of their jobs. They believe that they know their students best and they know what is best for them. Big data adaptive learning challenges these beliefs head on. It is no doubt for this reason that companies like Knewton make such a point of claiming that their technology is there to help teachers. But Jose Ferreira doth protest too much, methinks. Platform-delivered adaptive learning is a direct threat to teachers’ professionalism, their salaries and their jobs.

Occupational realism is more problematic still. Very, very few language teachers around the world have any experience of truly blended learning, and it’s very difficult to envisage precisely what it is that the teacher should be doing in a classroom. Publishers moving towards larger-scale blended adaptive materials know that this is a big problem, and are actively looking at ways of packaging teacher training / teacher development (with a specific focus on blended contexts) into the learner-facing materials that they sell. But the problem won’t go away. Education ministries have a long history of throwing money at technological ‘solutions’ without thinking about obtaining the necessary buy-in from their employees, and it is safe to predict that this is unlikely to change. Moreover, learning how to become a blended teacher is much harder than learning, say, how to make good use of an interactive whiteboard. Since there are as many different blended adaptive approaches as there are different educational contexts, there cannot be (irony of ironies) a one-size-fits-all approach to training teachers to make good use of this software.

Finally, how transportable is big data adaptive learning? Not very, is the short answer, and for the same reasons that ‘occupational realism’ is highly problematic.

Looking at things through Jack Schneider’s lens, we might be tempted to come to the conclusion that the future for adaptive learning is a rocky path, at best. But Schneider doesn’t take political or economic considerations into account. Sternberg’s ‘Triarchic Theory’ never had the OECD or the Gates Foundation backing it up. It never had millions and millions of dollars of investment behind it. As we know from political elections (and the big data adaptive learning issue is a profoundly political one), big bucks can buy opinions.

It may also prove to be the case that the opinions of teachers don’t actually matter much. If the big adaptive bucks can win the educational debate at the highest policy-making levels, teachers will be the first victims of the ‘creative disruption’ that adaptivity promises. If you don’t believe me, just look at what is going on in the U.S.

There are causes for concern, but I don’t want to sound too alarmist. Nobody really has a clue whether big data adaptivity will actually work in language learning terms. It remains more of a theory than a research-endorsed practice. And to end on a positive note: regardless of how sticky it proves to be, it might just provide a shot in the arm – the realisation that language teachers, at their best, are a lot more than competent explainers of grammar or deliverers of gap-fills.

Jose Ferreira, the fast-talking sales rep-in-chief of Knewton, likes to dazzle with numbers. In a 2012 talk hosted by the US Department of Education, Ferreira rattles off the stats: So Knewton students today, we have about 125,000, 180,000 right now, by December it’ll be 650,000, early next year it’ll be in the millions, and next year it’ll be close to 10 million. And that’s just through our Pearson partnership. For each of these students, Knewton gathers millions of data points every day. That, brags Ferreira, is five orders of magnitude more data about you than Google has. … We literally have more data about our students than any company has about anybody else about anything, and it’s not even close. With just a touch of breathless exaggeration, Ferreira goes on: We literally know everything about what you know and how you learn best, everything.

The data is mined to find correlations between learning outcomes and learning behaviours, and, once correlations have been established, learning programmes can be tailored to individual students. Ferreira explains: We take the combined data problem all hundred million to figure out exactly how to teach every concept to each kid. So the 100 million first shows up to learn the rules of exponents, great let’s go find a group of people who are psychometrically equivalent to that kid. They learn the same ways, they have the same learning style, they know the same stuff, because Knewton can figure out things like you learn math best in the morning between 8:40 and 9:13 am. You learn science best in 42 minute bite sizes the 44 minute mark you click right, you start missing questions you would normally get right.

The basic premise here is that the more data you have, the more accurately you can predict what will work best for any individual learner. But how accurate is it? In the absence of any decent, independent research (or, for that matter, any verifiable claims from Knewton), how should we respond to Ferreira’s contribution to the White House Education Datapalooza?

A new book by Stephen Finlay, Predictive Analytics, Data Mining and Big Data (Palgrave Macmillan, 2014), suggests that predictive analytics are typically about 20 – 30% more accurate than humans attempting to make the same judgements. That’s pretty impressive and perhaps Knewton does better than that, but the key thing to remember is that, however much data Knewton is playing with, and however good their algorithms are, we are still talking about predictions and not certainties. If an adaptive system could predict with 90% accuracy (and the actual figure is typically much lower than that) what learning content and what learning approach would be effective for an individual learner, it would still mean that it was wrong 10% of the time. When this is scaled up to the numbers of students that use Knewton software, it means that millions of students are getting faulty recommendations. Beyond a certain point, further expansion of the data that is mined is unlikely to make any difference to the accuracy of predictions.
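The back-of-envelope arithmetic is worth spelling out. Using Ferreira’s own projection of close to ten million students, and an invented figure for recommendations per student per day, the error counts look like this:

# Back-of-envelope only. The 10 million figure is Ferreira's own projection;
# the recommendations-per-day figure is invented for illustration.
students = 10_000_000
recs_per_day = 20
for accuracy in (0.9, 0.7):
    faulty = students * recs_per_day * (1 - accuracy)
    print(f"{accuracy:.0%} accurate -> {faulty:,.0f} faulty recommendations per day")
# 90% accurate -> 20,000,000 faulty recommendations per day
# 70% accurate -> 60,000,000 faulty recommendations per day

Even granting generous accuracy, the absolute number of bad calls is enormous.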

A further problem identified by Stephen Finlay is the tendency of people in predictive analytics to confuse correlation and causation. Certain students may have learnt maths best between 8.40 and 9.13, but it does not follow that they learnt it best because they studied at that time. If strong correlations do not involve causality, then actionable insights (such as individualised course design) can be no more than an informed gamble.

Knewton’s claim that they know how every student learns best is marketing hyperbole and should set alarm bells ringing. When it comes to language learning, we simply do not know how students learn (we do not have any generally accepted theory of second language acquisition), let alone how they learn best. More data won’t help our theories of learning! Ferreira’s claim that, with Knewton, every kid gets a perfectly optimized textbook, except it’s also video and other rich media dynamically generated in real time is equally preposterous, not least since the content of the textbook will be at least as significant as the way in which it is ‘optimized’. And, as we all know, textbooks have their faults.

Cui bono? Perhaps huge data and predictive analytics will benefit students; perhaps not. We will need to wait and find out. But Stephen Finlay reminds us that in gold rushes (and internet booms and the exciting world of Big Data) the people who sell the tools make a lot of money. Far more strike it rich selling picks and shovels to prospectors than do the prospectors. Likewise, there is a lot of money to be made selling Big Data solutions. Whether the buyer actually gets any benefit from them is not the primary concern of the sales people. (pp. 16-17) Which is, perhaps, one of the reasons that some sales people talk so fast.

Personalization is one of the key leitmotifs in current educational discourse. The message is clear: personalization is good, one-size-fits-all is bad. ‘How to personalize learning and how to differentiate instruction for diverse classrooms are two of the great educational challenges of the 21st century,’ write Trilling and Fadel, leading lights in the Partnership for 21st Century Skills (P21)[1]. Barack Obama has repeatedly sung the praises of personalized learning and stressed the need for it, and his policies are fleshed out by his Secretary of Education, Arne Duncan, in speeches and on the White House blog: ‘President Obama described the promise of personalized learning when he launched the ConnectED initiative last June. Technology is a powerful tool that helps create robust personalized learning environments.’ In the UK, personalized learning has been government mantra for over 10 years. The EU, UNESCO, OECD, the Gates Foundation – everyone, it seems, is singing the same tune.

Personalization, we might all agree, is a good thing. How could it be otherwise? No one these days is going to promote depersonalization or impersonalization in education. What exactly it means, however, is less clear. According to a UNESCO Policy Brief[2], the term was first used in the context of education in the 1970s by Víctor García Hoz, a senior Spanish educationalist and member of Opus Dei at the University of Madrid. This UNESCO document then points out that ‘unfortunately, up to this date there is no single definition of this concept’.

In ELT, the term has been used in a very wide variety of ways. These range from the far-reaching ideas of people like Gertrude Moskowitz, who advocated a fundamentally learner-centred form of instruction, to the much more banal practice of getting students to produce a few personalized examples of an item of grammar they have just studied. See Scott Thornbury’s A-Z blog for an interesting discussion of personalization in ELT.

As with education in general, and ELT in particular, ‘personalization’ is also bandied around the adaptive learning table. Duolingo advertises itself as the opposite of one-size-fits-all, and as an online equivalent of the ‘personalized education you can get from a small classroom teacher or private tutor’. Babbel offers a ‘personalized review manager’ and Rosetta Stone’s Classroom online solution allows educational institutions ‘to shift their language program away from a ‘one-size-fits-all-curriculum’ to a more individualized approach’. As far as I can tell, the personalization in these examples is extremely restricted. The language syllabus is fixed and although users can take different routes up the ‘skills tree’ or ‘knowledge graph’, they are totally confined by the pre-determination of those trees and graphs. This is no more personalized learning than asking students to make five true sentences using the present perfect. Arguably, it is even less!
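It is worth seeing how little machinery is involved. The sketch below (with invented skill names) is roughly what a ‘skills tree’ reduces to: a fixed table of prerequisites, and a rule that unlocks a node once its prerequisites have been mastered. Every learner’s ‘personal’ route is a traversal of the same predetermined graph.

# A minimal 'skills tree': a fixed table of prerequisites and a rule that
# unlocks a skill once its prerequisites are mastered. Skill names invented.
SKILL_TREE = {
    "basics": [],
    "food": ["basics"],
    "animals": ["basics"],
    "past tense": ["food", "animals"],
}

def available_skills(mastered):
    """Skills the learner may attempt next: unmastered nodes whose
    prerequisites have all been mastered."""
    return [skill for skill, prereqs in SKILL_TREE.items()
            if skill not in mastered and all(p in mastered for p in prereqs)]

print(available_skills(set()))                           # ['basics']
print(available_skills({"basics"}))                      # ['food', 'animals']
print(available_skills({"basics", "food", "animals"}))   # ['past tense']

Nothing about the learner, other than which boxes they have ticked, plays any part in what they are offered next.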

This is not, in any case, the kind of personalization that Obama, the Gates Foundation, Knewton, et al have in mind when they conflate adaptive learning with personalization. Their definition is much broader and summarised in the US National Education Technology Plan of 2010: ‘Personalized learning means instruction is paced to learning needs, tailored to learning preferences, and tailored to the specific interests of different learners. In an environment that is fully personalized, the learning objectives and content as well as the method and pace may all vary (so personalization encompasses differentiation and individualization).’ What drives this is the big data generated by the students’ interactions with the technology (see ‘Part 4: big data and analytics’ of ‘The Guide’ on this blog).

What remains unclear is exactly how this might work in English language learning. Adaptive software can only personalize to the extent that the content of an English language learning programme allows it to do so. It may be true that each student using adaptive software ‘gets a more personalised experience no matter whose content the student is consuming’, as Knewton’s David Liu puts it. But the potential for any really meaningful personalization depends crucially on the nature and extent of this content, along with the possibility of variable learning outcomes. For this reason, we are not likely to see any truly personalized large-scale adaptive learning programs for English any time soon.

Nevertheless, technology is now central to personalized language learning. A good learning platform, which allows learners to connect to ‘social networking systems, podcasts, wikis, blogs, encyclopedias, online dictionaries, webinars, online English courses, various apps’, etc (see Alexandra Chistyakova’s eltdiary), means that personalization could be more easily achieved.

For the time being, at least, adaptive learning systems would seem to work best for ‘those things that can be easily digitized and tested like math problems and reading passages’, writes Barbara Bray. Or low-level vocabulary and grammar McNuggets, we might add. Ideal for, say, ‘English Grammar in Use’. But meaningfully personalized language learning?


‘Personalized learning’ sounds very progressive, a utopian educational horizon, and it sounds like it ought to be the future of ELT (as Cleve Miller argues). It also sounds like a pretty good slogan on which to hitch the adaptive bandwagon. But somehow, just somehow, I suspect that when it comes to adaptive learning we’re more likely to see more testing, more data collection and more depersonalization.

[1] Trilling, B. & Fadel, C. (2009) 21st Century Skills. San Francisco: Wiley. p.33

[2] Personalized learning: a new ICT-enabled education approach, UNESCO Institute for Information Technologies in Education, Policy Brief March 2012 iite.unesco.org/pics/publications/en/files/3214716.pdf

 

I mentioned the issue of privacy very briefly in Part 9 of the ‘Guide’, and it seems appropriate to take a more detailed look.

Adaptive learning needs big data. Without the big data, there is nothing for the algorithms to work on, and the bigger the data set, the better the software can work. Adaptive language learning will be delivered via a platform, and the data that is generated by the language learner’s interaction with the English language program on the platform is likely to be only one, very small, part of the data that the system will store and analyse. Full adaptivity requires a psychometric profile for each student.
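What might such a profile look like? A hypothetical sketch, with every field name invented for illustration:

# A hypothetical sketch of the kind of aggregated record that 'full
# adaptivity' presupposes. Every field name is invented; the point is the
# breadth of joined-up data, not any particular schema.
student_profile = {
    "identity": {"name": None, "date_of_birth": None, "address": None},
    "academic": {"grades": [], "test_scores": [], "attendance": []},
    "behavioural": {
        "session_times": [],       # when, and for how long, the learner studies
        "response_latencies": [],  # how long each answer takes
        "error_patterns": [],      # which item types go wrong, and how
    },
    "inferred": {"learning_style": None, "predicted_outcomes": None},
}

The more of this that sits in one place, the more useful it is to the algorithms, and the more attractive it is to everyone else.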

It would make sense, then, to aggregate as much data as possible in one place. Besides the practical value of massively combining different data sources (in order to enhance the usefulness of the personalized learning pathways), such a move would possibly save educational authorities substantial amounts of money and allow educational technology companies to mine the rich seam of student data, along with the standardised platform specifications, to design their products.

And so it has come to pass. The Gates Foundation (yes, them again) provided most of the $100 million funding. A division of Murdoch’s News Corp built the infrastructure. Once everything was ready, a non-profit organization called inBloom was set up to run the thing. The inBloom platform is open source and the database was initially free, although this will change. Preliminary agreements were made with 7 US districts and involved millions of children. The data includes ‘students’ names, birthdates, addresses, social security numbers, grades, test scores, disability status, attendance, and other confidential information’ (Ravitch, D. ‘Reign of Error’ NY: Knopf, 2013, pp. 235-236). Under federal law, this information can be ‘shared’ with private companies selling educational technology and services.

The edtech world rejoiced. ‘This is going to be a huge win for us’, said one educational software provider; ‘it’s a godsend for us,’ said another. Others are not so happy. If the technology actually works, if it can radically transform education and ‘produce game-changing outcomes’ (as its proponents claim so often), the price to be paid might just conceivably be worth paying. But the price is high and the research is not there yet. The price is privacy.

The problem is simple. InBloom itself acknowledges that it ‘cannot guarantee the security of the information stored… or that the information will not be intercepted when it is being transmitted.’ Experience has already shown us that organisations as diverse as the CIA or the British health service cannot protect their data. Hackers like a good challenge. So do businesses.

The anti-privatization (and, by extension, the anti-adaptivity) lobby in the US has found an issue which is resonating with electors (and parents). These dissenting voices are led by Class Size Matters, and their voice is being heard. Of the original partners of inBloom, only one is now left. The others have all pulled out, mostly because of concerns about privacy. The remaining partner, New York, has committed personal data on 2.7 million students, which can be shared without any parental notification or consent.


This might seem like a victory for the anti-privatization / anti-adaptivity lobby, but it is likely to be only temporary. There are plenty of other companies that have their eyes on the data-mining opportunities that will be coming their way, and Obama’s ‘Race to the Top’ program means that the inBloom controversy will be only a temporary setback. ‘The reality is that it’s going to be done. It’s not going to be a little part. It’s going to be a big part. And it’s going to be put in place partly because it’s going to be less expensive than doing professional development,’ says Eva Baker of the Center for the Study of Evaluation at UCLA.

It is in this light that the debate about adaptive learning becomes hugely significant. Class Size Matters, the odd academic like Neil Selwyn or the occasional blogger like myself will not be able to reverse a trend with seemingly unstoppable momentum. But we are, collectively, in a position to influence the way these changes will take place.

If you want to find out more, check out the inBloom and Class Size Matters links. And you might like to read more from the news reports which I have used for information in this post. Of these, the second was originally published by Scientific American (owned by Macmillan, one of the leading players in ELT adaptive learning). The third and fourth are from Education Week, which is funded in part by the Gates Foundation.

http://www.reuters.com/article/2013/03/03/us-education-database-idUSBRE92204W20130303

http://www.salon.com/2013/08/01/big_data_puts_teachers_out_of_work_partner/

http://www.edweek.org/ew/articles/2014/01/08/15inbloom_ep.h33.html

http://blogs.edweek.org/edweek/marketplacek12/2013/12/new_york_battle_over_inBloom_data_privacy_heading_to_court.html