Posts Tagged ‘algorithms’

In December last year, I posted a wish list for vocabulary (flashcard) apps. At the time, I hadn’t read a couple of key research texts on the subject. It’s time for an update.

First off, there’s an article called ‘Intentional Vocabulary Learning Using Digital Flashcards’ by Hsiu-Ting Hung. It’s available online here. Given the lack of empirical research into the use of digital flashcards, it’s an important article and well worth a read. Its basic conclusion is that digital flashcards are more effective as a learning tool than printed word lists. No great surprises there, but of more interest, perhaps, are the recommendations that (1) ‘students should be educated about the effective use of flashcards (e.g. the amount and timing of practice), and this can be implemented through explicit strategy instruction in regular language courses or additional study skills workshops’ (Hung, 2015: 111), and (2) that digital flashcards can be usefully ‘repurposed for collaborative learning tasks’ (Hung, ibid.).

However, what really grabbed my attention was an article by Tatsuya Nakata. Nakata’s research is of interest to anyone concerned with vocabulary learning, but especially to those interested in digital possibilities. A number of his research articles can be freely accessed via his page at ResearchGate, but the one I am interested in is called ‘Computer-assisted second language vocabulary learning in a paired-associate paradigm: a critical investigation of flashcard software’. Don’t let the title put you off. It’s a review of a pile of web-based flashcard programs: since the article is already five years old, many of the programs have either changed or disappeared, but the critical approach he takes is more or less as valid now as it was then (whether we’re talking about web-based stuff or apps).

Nakata divides his evaluation criteria into two broad groups.

Flashcard creation and editing

(1) Flashcard creation: Can learners create their own flashcards?

(2) Multilingual support: Can the target words and their translations be created in any language?

(3) Multi-word units: Can flashcards be created for multi-word units as well as single words?

(4) Types of information: Can various kinds of information be added to flashcards besides the word meanings (e.g. parts of speech, contexts, or audios)?

(5) Support for data entry: Does the software support data entry by automatically supplying information about lexical items such as meaning, parts of speech, contexts, or frequency information from an internal database or external resources?

(6) Flashcard set: Does the software allow learners to create their own sets of flashcards?

Flashcard learning


(1) Presentation mode: Does the software have a presentation mode, where new items are introduced and learners familiarise themselves with them?

(2) Retrieval mode: Does the software have a retrieval mode, which asks learners to recall or choose the L2 word form or its meaning?

(3) Receptive recall: Does the software ask learners to produce the meanings of target words?

(4) Receptive recognition: Does the software ask learners to choose the meanings of target words?

(5) Productive recall: Does the software ask learners to produce the target word forms corresponding to the meanings provided?

(6) Productive recognition: Does the software ask learners to choose the target word forms corresponding to the meanings provided?

(7) Increasing retrieval effort: For a given item, does the software arrange exercises in the order of increasing difficulty?

(8) Generative use: Does the software encourage generative use of words, where learners encounter or use previously met words in novel contexts?

(9) Block size: Can the number of words studied in one learning session be controlled and altered?

(10) Adaptive sequencing: Does the software change the sequencing of items based on learners’ previous performance on individual items?

(11) Expanded rehearsal: Does the software help implement expanded rehearsal, where the intervals between study trials are gradually increased as learning proceeds? (Nakata, T. (2011): ‘Computer-assisted second language vocabulary learning in a paired-associate paradigm: a critical investigation of flashcard software’ Computer Assisted Language Learning, 24:1, 17-38)

It’s a rather different list from my own (there’s nothing I would disagree with here), because mine is more general and his is exclusively oriented towards learning principles. Nakata makes the point towards the end of the article that it would ‘be useful to investigate learners’ reactions to computer-based flashcards to examine whether they accept flashcard programs developed according to learning principles’ (p. 34). It’s far from clear, he points out, that conformity to learning principles is at the top of learners’ agendas. More than just users’ feelings about computer-based flashcards in general, a key concern will be the fact that there are ‘large individual differences in learners’ perceptions of [any flashcard] program’ (Nakata, T. 2008. ‘English vocabulary learning with word lists, word cards and computers: implications from cognitive psychology research for optimal spaced learning’ ReCALL 20(1), p. 18).

I was trying to make a similar point in another post about motivation and vocabulary apps. In the end, as with any language learning material, research-driven language learning principles can only take us so far. User experience is a far more difficult creature to pin down or to make generalisations about. A user’s reactions to graphics, gamification, loading times and so on are so powerful and so subjective that learning principles will inevitably play second fiddle. That’s not to say, of course, that Nakata’s questions are not important: it’s merely to wonder whether the bigger question is truly answerable.

Nakata’s research identifies plenty of room for improvement in digital flashcards, and although the article is now quite old, not a lot has changed. Key areas to work on are (1) the provision of generative use of target words, (2) the need to increase retrieval effort, (3) the automatic provision of information about meaning, parts of speech, or contexts (in order to facilitate flashcard creation), and (4) the automatic generation of multiple-choice distractors.

In the conclusion of his study, he identifies one flashcard program which is better than all the others. Unsurprisingly, five years down the line, the software he identifies is no longer free, others have changed in the intervening period, and who knows what will be out in front next week?


Having spent a lot of time recently looking at vocabulary apps, I decided to put together a Christmas wish list of the features of my ideal vocabulary app. The list is not exhaustive and I’ve given more attention to some features than others. What (apart from testing) have I missed out?

1             Spaced repetition

Since the point of a vocabulary app is to help learners memorise vocabulary items, it is hard to imagine a decent system that does not incorporate spaced repetition. Spaced repetition algorithms offer one well-researched way of counteracting the brain’s ‘forgetting curve’. These algorithms come in different shapes and sizes, and I am not technically competent to judge which is the most efficient. However, as Peter Ellis Jones, the developer of a flashcard system called CardFlash, points out, efficiency is only one half of the rote memorisation problem. If you are not motivated to learn, the cleverness of the algorithm is moot. Fundamentally, learning software needs to be fun, rewarding, and give a solid sense of progression.
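To make the idea concrete, here is a minimal sketch, loosely modelled on the SM-2 family of spaced repetition algorithms (the names `Card` and `review` are my own, not taken from any particular app): intervals grow geometrically while answers are good, and reset when an item is forgotten.

```python
from dataclasses import dataclass

@dataclass
class Card:
    interval: int = 0      # days until the next review
    ease: float = 2.5      # multiplier applied to the interval
    reps: int = 0          # consecutive successful reviews

def review(card: Card, quality: int) -> Card:
    """Update a card after a review; quality runs from 0 (blackout) to 5 (perfect)."""
    if quality < 3:                      # failed: start the item again
        card.reps = 0
        card.interval = 1
    else:
        card.reps += 1
        if card.reps == 1:
            card.interval = 1
        elif card.reps == 2:
            card.interval = 6
        else:
            card.interval = round(card.interval * card.ease)
        # ease drifts up for easy answers, down for hard ones, with a floor
        card.ease = max(1.3, card.ease + 0.1 - (5 - quality) * 0.08)
    return card
```

A card answered well three times in a row ends up with an interval of a couple of weeks; a single failure sends it back to the start. Real systems layer much more on top (interval fuzzing, lateness, per-item difficulty), but the core loop is roughly this.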

2             Quantity, balance and timing of new and ‘old’ items

A spaced repetition algorithm determines the optimum interval between repetitions, but further algorithms will be needed to determine when and with what frequency new items will be added to the deck. Once a system knows how many items a learner needs to learn and the time in which they have to do it, it is possible to determine the timing and frequency of the presentation of new items. But the system cannot know in advance how well an individual learner will learn the items (for any individual, some items will be more readily learnable than others) nor the extent to which learners will live up to their own positive expectations of time spent on-app. As most users of flashcard systems know, it is easy to fall behind, feel swamped and, ultimately, give up. An intelligent system needs to be able to respond to individual variables in order to ensure that the learning load is realistic.
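As a rough illustration of the kind of further algorithm meant here, a system might pace new items against the learner’s deadline and throttle the intake when the review backlog is heavy. The following is a hypothetical pacing rule, not a description of any existing app:

```python
def new_items_today(items_remaining: int, days_remaining: int,
                    reviews_due: int, max_daily_load: int = 50) -> int:
    """Spread the remaining new items evenly over the remaining days,
    but cut the intake back when due reviews already fill most of the
    learner's daily budget."""
    if days_remaining <= 0 or items_remaining <= 0:
        return 0
    target = -(-items_remaining // days_remaining)   # ceiling division
    room = max(0, max_daily_load - reviews_due)      # capacity left after reviews
    return min(target, room)
```

With 100 items to learn in 10 days and no backlog, the learner sees 10 new items; if 45 reviews are already due, only 5. An intelligent system would also adapt `max_daily_load` itself to the learner’s observed behaviour.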

3             Task variety

A standard flashcard system which simply asks learners to indicate whether they ‘know’ a target item before they flip over the card rapidly becomes extremely boring. A system which tests this knowledge soon becomes equally dull. There needs to be a variety of ways in which learners interact with an app, both for reasons of motivation and learning efficiency. It may be the case that, for an individual user, certain task types lead to more rapid gains in learning. An intelligent, adaptive system should be able to capture this information and modify the selection of task types.

Most younger learners and some adult learners will respond well to the inclusion of games within the range of task types. Examples of such games include the puzzles developed by Oliver Rose in his Phrase Maze app to accompany Quizlet practice.

4             Generative use

Memory researchers have long known about the ‘Generation Effect’ (see for example this piece of research from the Journal of Verbal Learning and Verbal Behavior, 1978). Items are better learnt when the learner has to generate, in some (even small) way, the target item, rather than simply reading it. In vocabulary learning, this could be, for example, typing in the target word or, more simply, inserting some missing letters. Systems which incorporate task types that require generative use are likely to result in greater learning gains than simple, static flashcards with target items on one side and definitions or translations on the other.
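A trivial way to build generative tasks of the ‘missing letters’ kind is to mask a proportion of a target word’s letters. The function below is a hypothetical sketch (the name `mask_word` and the 50% default are my own choices):

```python
import random
from typing import Optional

def mask_word(word: str, keep: float = 0.5,
              rng: Optional[random.Random] = None) -> str:
    """Blank out some letters of a target word so the learner has to
    generate the missing ones, e.g. turning 'remember' into something
    like 'r_m_mb_r'. `keep` is the proportion of letters left visible."""
    rng = rng or random.Random()
    letters = list(word)
    n_hide = max(1, round(len(letters) * (1 - keep)))
    for i in rng.sample(range(len(letters)), n_hide):
        letters[i] = "_"
    return "".join(letters)
```

A real app would want to be smarter than this (never hiding the first letter, say, or hiding more letters as the learner improves), but even this crude version forces a small act of generation.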

5             Receptive and productive practice

The most basic digital flashcard systems require learners to understand a target item, or to generate it from a definition or translation prompt. Valuable as this may be, it won’t help learners much to use these items productively, since these systems focus exclusively on meaning. In order to do this, information must be provided about collocation, colligation, register, etc and these aspects of word knowledge will need to be focused on within the range of task types. At the same time, most vocabulary apps that I have seen focus primarily on the written word. Although any good system will offer an audio recording of the target item, and many will offer the learner the option of recording themselves, learners are invariably asked to type in their answers, rather than say them. For the latter, speech recognition technology will be needed. Ideally, too, an intelligent system will compare learner recordings with the audio models and provide feedback in such a way that the learner is guided towards a closer reproduction of the model.

6             Scaffolding and feedback

Most flashcard systems are basically low-stakes, practice self-testing. Research (see, for example, Dunlosky et al.’s metastudy ‘Improving Students’ Learning With Effective Learning Techniques: Promising Directions From Cognitive and Educational Psychology’) suggests that, as a learning strategy, practice testing has high utility – indeed, of higher utility than other strategies like keyword mnemonics or highlighting. However, an element of tutoring is likely to enhance practice testing, and, for this, scaffolding and feedback will be needed. If, for example, a learner is unable to produce a correct answer, they will probably benefit from being guided towards it through hints, in the same way as a teacher would elicit in a classroom. Likewise, feedback on why an answer is wrong (as opposed to simply being told that you are wrong), followed by encouragement to try again, is likely to enhance learning. Such feedback might, for example, point out that there is perhaps a spelling problem in the learner’s attempted answer, that the attempted answer is in the wrong part of speech, or that it is semantically close to the correct answer but does not collocate with other words in the text. The incorporation of intelligent feedback of this kind will require a number of NLP tools, since it will never be possible for a human item-writer to anticipate all the possible incorrect answers. A current example of intelligent feedback of this kind can be found in the Oxford English Vocabulary Trainer app.
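One small piece of such feedback can be approximated without heavy NLP machinery: a surface-similarity check that distinguishes a near-miss spelling from an answer that is wrong altogether. This is only a sketch (the 0.8 threshold and the wording of the messages are arbitrary choices of mine); part-of-speech and collocation feedback would need real NLP tools.

```python
import difflib

def diagnose(attempt: str, answer: str) -> str:
    """Very rough feedback: use surface similarity to separate a
    near-miss spelling from a plain wrong answer."""
    if attempt == answer:
        return "correct"
    ratio = difflib.SequenceMatcher(None, attempt.lower(), answer.lower()).ratio()
    if ratio >= 0.8:
        return "close - check your spelling and try again"
    return "not quite - think about the meaning and try again"
```

So ‘recieve’ for ‘receive’ triggers the spelling hint, while an unrelated answer gets a different prompt, which is already more helpful than a bare ‘wrong’.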

7             Content

At the very least, a decent vocabulary app will need good definitions and translations (how many different languages?), and these will need to be tagged to the senses of the target items. These will need to be supplemented with all the other information that you find in a good learner’s dictionary: syntactic patterns, collocations, cognates, an indication of frequency, etc. The only way of getting this kind of high-quality content is by paying to license it from a company with expertise in lexicography. It doesn’t come cheap.

There will also need to be example sentences, both to illustrate meaning / use and for deployment in tasks. Dictionary databases can provide some of these, but they cannot be relied on as the sole source. This is because the example sentences in dictionaries have been selected and edited to accompany the other information provided in the dictionary, and not as items in practice exercises, which have rather different requirements. Once more, the solution doesn’t come cheap: experienced item writers will be needed.

Dictionaries describe and illustrate how words are typically used. But examples of typical usage tend to be as dull as they are forgettable. Learning is likely to be enhanced if examples are cognitively salient: weird examples with odd collocations, for example. Another thing for the item writers to think about.

A further challenge for an app which is not level-specific is that both the definitions and example sentences need to match the learner’s level. An A1 / A2 learner will need the kind of content that is found in, say, the Oxford Essential dictionary; B2 learners and above will need content from, say, the OALD.

8             Artwork and design

It’s easy enough to find artwork or photos of concrete nouns, but try to find or commission a pair of pictures that differentiate, for example, the adjectives ‘wild’ and ‘dangerous’ … What kind of pictures might illustrate simple verbs like ‘learn’ or ‘remember’? Will such illustrations be clear enough when squeezed into a part of a phone screen? Animations or very short video clips might provide a solution in some cases, but these are more expensive to produce and video files are much heavier.

With a few notable exceptions, such as the British Council’s MyWordBook 2, design in vocabulary apps has been largely forgotten.

9             Importable and personalisable lists

Many learners will want to use a vocabulary app in association with other course material (e.g. coursebooks). Teachers, however, will inevitably want to edit these lists, deleting some items, adding others. Learners will want to do the same. This is a huge headache for app designers. If new items are going to be added to word lists, how will the definitions, example sentences and illustrations be generated? Will the database contain audio recordings of these words? How will these items be added to the practice tasks (if these include task types that go beyond simple double-sided flashcards)? NLP tools are not yet good enough to trawl a large corpus in order to select (and possibly edit) sentences that illustrate the right meaning and which are appropriate for interactive practice exercises. We can personalise the speed of learning and even the types of learning tasks, so long as the target language is predetermined. But as soon as we allow for personalisation of content, we run into difficulties.

10          Gamification

Maintaining motivation to use a vocabulary app is not easy. Gamification may help. Measuring progress against objectives will be a start. Stars and badges and leaderboards may help some users. Rewards may help others. But gamification features need to be built into the heart of the system, into the design and selection of tasks, rather than simply tacked on as an afterthought. They need to be trialled and tweaked, so analytics will be needed.

11          Teacher support

Although the use of vocabulary flashcards is beginning to catch on with English language teachers, teachers need help with ways to incorporate them in the work they do with their students. What can teachers do in class to encourage use of the app? In what ways does app use require teachers to change their approach to vocabulary work in the classroom? Reporting functions can help teachers know about the progress their students are making and provide very detailed information about words that are causing problems. But, as anyone involved in platform-based course materials knows, teachers need a lot of help.

12          And, of course, …

Apps need to be usable with different operating systems. Ideally, they should be (partially) usable offline. Loading times need to be short. They need to be easy and intuitive to use.

It’s unlikely that I’ll be seeing a vocabulary app with all of these features any time soon. Or, possibly, ever. The cost of developing something that could do all this would be extremely high, and there is no indication that there is a market that would be ready to pay the sort of prices that would be needed to cover the costs of development and turn a profit. We need to bear in mind, too, the fact that vocabulary apps can only ever assist in the initial acquisition of vocabulary: apps alone can’t solve the vocabulary learning problem (despite the silly claims of some app developers). The need for meaningful communicative use, extensive reading and listening, will not go away because a learner has been using an app. So, how far can we go in developing better and better vocabulary apps before users decide that a cheap / free app, with all its shortcomings, is actually good enough?

I posted a follow up to this post in October 2016.

In ELT circles, ‘behaviourism’ is a boo word. In the standard history of approaches to language teaching (characterised as a ‘procession of methods’ by Hunter & Smith 2012: 432[1]), there were the bad old days of behaviourism until Chomsky came along, savaged the theory in his review of Skinner’s ‘Verbal Behavior’, and we were all able to see the light. In reality, of course, things weren’t quite like that. The debate between Chomsky and the behaviourists is far from over, behaviourism was not the driving force behind the development of audiolingual approaches to language teaching, and audiolingualism is far from dead. For an entertaining and eye-opening account of something much closer to reality, I would thoroughly recommend a post on Russ Mayne’s Evidence Based ELT blog, along with the discussion which follows it. For anyone who would like to understand what behaviourism is, was, and is not (before they throw the term around as an insult), I’d recommend John A. Mills’ ‘Control: A History of Behavioral Psychology’ (New York University Press, 1998) and John Staddon’s ‘The New Behaviorism 2nd edition’ (Psychology Press, 2014).

There is a close connection between behaviourism and adaptive learning. Audrey Watters, no fan of adaptive technology, suggests that ‘any company touting adaptive learning software’ has been influenced by Skinner. In a more extended piece, ‘Education Technology and Skinner’s Box’, Watters explores further her problems with Skinner and the educational technology that has been inspired by behaviourism. But writers much more sympathetic to adaptive learning also see close connections to behaviourism. ‘The development of adaptive learning systems can be considered as a transformation of teaching machines,’ write Kara & Sevim[2] (2013: 114-117), although they go on to point out the differences between the two. Vendors of adaptive learning products, like DreamBox Learning©, are not shy of associating themselves with behaviourism: ‘Adaptive learning has been with us for a while, with its history of adaptive learning rooted in cognitive psychology, beginning with the work of behaviorist B.F. Skinner in the 1950s, and continuing through the artificial intelligence movement of the 1970s.’

That there is a strong connection between adaptive learning and behaviourism is indisputable, but I am not interested in attempting to establish the strength of that connection. This would, in any case, be an impossible task without some reductionist definition of both terms. Instead, my interest here is to explore some of the parallels between the two, and, in the spirit of the topic, I’d like to do this by comparing the behaviours of behaviourists and adaptive learning scientists.

Data and theory

Both behaviourism and adaptive learning (in its big data form) are centrally concerned with behaviour – capturing and measuring it in an objective manner. In both, experimental observation and the collection of ‘facts’ (physical, measurable, behavioural occurrences) precede any formulation of theory. John Mills’ description of behaviourists could apply equally well to adaptive learning scientists: ‘theory construction was a seesaw process whereby one began with crude outgrowths from observations and slowly created one’s theory in such a way that one could make more and more precise observations, building those observations into the theory at each stage. No behaviourist ever considered the possibility of taking existing comprehensive theories of mind and testing or refining them.’[3]

Positivism and the panopticon

Both behaviourism and adaptive learning are pragmatically positivist, believing that truth can be established by the study of facts. J. B. Watson, the founding father of behaviourism whose article ‘Psychology as the Behaviorist Views It’ set the behaviourist ball rolling, believed that experimental observation could ‘reveal everything that can be known about human beings’[4]. Jose Ferreira of Knewton has made similar claims: ‘We get five orders of magnitude more data per user than Google does. We get more data about people than any other data company gets about people, about anything — and it’s not even close. We’re looking at what you know, what you don’t know, how you learn best. […] We know everything about what you know and how you learn best because we get so much data.’ Digital data analytics offer something that Watson couldn’t have imagined in his wildest dreams, but he would have approved.

The revolutionary science

Big data (and the adaptive learning which is a part of it) is presented as a game-changer: ‘The era of big data challenges the way we live and interact with the world. […] Society will need to shed some of its obsession for causality in exchange for simple correlations: not knowing why but only what. This overturns centuries of established practices and challenges our most basic understanding of how to make decisions and comprehend reality’[5]. But the reverence for technology and the ability to reach understandings of human beings by capturing huge amounts of behavioural data was adumbrated by Watson a century before big data became a widely used term. Watson’s 1913 lecture at Columbia University was ‘a clear pitch’[6] for the supremacy of behaviourism, and its potential as a revolutionary science.

Prediction and control

The fundamental point of both behaviourism and adaptive learning is the same. ‘The research practices and the theorizing of American behaviourists until the mid-1950s’, writes Mills[7], ‘were driven by the intellectual imperative to create theories that could be used to make socially useful predictions.’ Predictions are only useful to the extent that they can be used to manipulate behaviour. Watson states this very baldly: ‘the theoretical goal of psychology is the prediction and control of behaviour’[8]. Contemporary iterations of behaviourism, such as behavioural economics or nudge theory (see, for example, Thaler & Sunstein’s best-selling ‘Nudge’, Penguin Books, 2008), or the British government’s Behavioural Insights Unit, share the same desire to divert individual activity towards goals (selected by those with power), ‘without either naked coercion or democratic deliberation’[9]. Jose Ferreira of Knewton has an identical approach: ‘We can predict failure in advance, which means we can pre-remediate it in advance. We can say, “Oh, she’ll struggle with this, let’s go find the concept from last year’s materials that will help her not struggle with it.”’ Like the behaviourists, Ferreira makes grand claims about the social usefulness of his predict-and-control technology: ‘The end is a really simple mission. Only 22% of the world finishes high school, and only 55% finish sixth grade. Those are just appalling numbers. As a species, we’re wasting almost four-fifths of the talent we produce. […] I want to solve the access problem for the human race once and for all.’

Ethical problems


Because they rely on capturing large amounts of personal data, both behaviourism and adaptive learning quickly run into ethical problems. Even where informed consent is used, the subjects must remain partly ignorant of exactly what is being tested, or else there is the fear that they might adjust their behaviour accordingly. The goal is to minimise conscious understanding of what is going on[10]. For adaptive learning, the ethical problem is much greater because of the impossibility of ensuring the security of this data. Everything is hackable.

Advertising


Behaviourism was seen as a godsend by the world of advertising. J. B. Watson, after a front-page scandal about his affair with a student, and losing his job at Johns Hopkins University, quickly found employment on Madison Avenue. ‘Scientific advertising’, as practised by the Mad Men from the 1920s onwards, was based on behaviourism. The use of data analytics by Google, Amazon, et al is a direct descendant of scientific advertising, so it is richly appropriate that adaptive learning is the child of data analytics.

[1] Hunter, D. and Smith, R. (2012) ‘Unpacking the past: “CLT” through ELTJ keywords’. ELT Journal, 66/4: 430-439.

[2] Kara, N. & Sevim, N. 2013. ‘Adaptive learning systems: beyond teaching machines’, Contemporary Educational Technology, 4(2), 108-120

[3] Mills, J. A. (1998) Control: A History of Behavioral Psychology. New York: New York University Press, p.5

[4] Davies, W. (2015) The Happiness Industry. London: Verso. p.91

[5] Mayer-Schönberger, V. & Cukier, K. (2013) Big Data. London: John Murray, p.7

[6] Davies, W. (2015) The Happiness Industry. London: Verso. p.87

[7] Mills, J. A. (1998) Control: A History of Behavioral Psychology. New York: New York University Press, p.2

[8] Watson, J. B. (1913) ‘Behaviorism as the Psychologist Views it’ Psychological Review 20: 158

[9] Davies, W. (2015) The Happiness Industry. London: Verso. p.88

[10] Davies, W. (2015) The Happiness Industry. London: Verso. p.92

‘Sticky’ – as in ‘sticky learning’ or ‘sticky content’ (as opposed to ‘sticky fingers’ or a ‘sticky problem’) – is itself fast becoming a sticky word. If you check out ‘sticky learning’ on Google Trends, you’ll see that it suddenly spiked in September 2011, following the slightly earlier appearance of ‘sticky content’. The historical rise in this use of the word coincides with the exponential growth in the number of references to ‘big data’.

I am often asked if adaptive learning really will take off as a big thing in language learning. Will adaptivity itself be a sticky idea? When the question is asked, people mean the big data variety of adaptive learning, rather than the much more limited adaptivity of spaced repetition algorithms, which, I think, is firmly here and here to stay. I can’t answer the question with any confidence, but I recently came across a book which suggests a useful way of approaching the question.

‘From the Ivory Tower to the Schoolhouse’ by Jack Schneider (Harvard Education Press, 2014) investigates the reasons why promising ideas from education research fail to get taken up by practitioners, and why other, less-than-promising ideas, from a research or theoretical perspective, become sticky quite quickly. As an example of the former, Schneider considers Robert Sternberg’s ‘Triarchic Theory’. As an example of the latter, he devotes a chapter to Howard Gardner’s ‘Multiple Intelligences Theory’.

Schneider argues that educational ideas need to possess four key attributes in order for teachers to sit up, take notice and adopt them.

  1. perceived significance: the idea must answer a question central to the profession – offering a big-picture understanding rather than merely one small piece of a larger puzzle
  2. philosophical compatibility: the idea must clearly jibe with closely held [teacher] beliefs like the idea that teachers are professionals, or that all children can learn
  3. occupational realism: it must be possible for the idea to be put easily into immediate use
  4. transportability: the idea needs to find its practical expression in a form that teachers can access and use at the time that they need it – it needs to have a simple core that can travel through pre-service coursework, professional development seminars, independent study and peer networks

To what extent does big data adaptive learning possess these attributes? It certainly comes up trumps with respect to perceived significance. The big question that it attempts to answer is the question of how we can make language learning personalized / differentiated / individualised. As its advocates never cease to remind us, adaptive learning holds out the promise of moving away from a one-size-fits-all approach. The extent to which it can keep this promise is another matter, of course. For it to do so, it will never be enough just to offer different pathways through a digitalised coursebook (or its equivalent). Much, much more content will be needed: at least five or six times the content of a one-size-fits-all coursebook. At the moment, there is little evidence of the necessary investment into content being made (quite the opposite, in fact), but the idea remains powerful nevertheless.

When it comes to philosophical compatibility, adaptive learning begins to run into difficulties. Despite the decades of edging towards more communicative approaches in language teaching, research (e.g. the research into English teaching in Turkey described in a previous post) suggests that teachers still see explanation and explication as key functions of their jobs. They believe that they know their students best and they know what is best for them. Big data adaptive learning challenges these beliefs head on. It is no doubt for this reason that companies like Knewton make such a point of claiming that their technology is there to help teachers. But Jose Ferreira doth protest too much, methinks. Platform-delivered adaptive learning is a direct threat to teachers’ professionalism, their salaries and their jobs.

Occupational realism is more problematic still. Very, very few language teachers around the world have any experience of truly blended learning, and it’s very difficult to envisage precisely what it is that the teacher should be doing in a classroom. Publishers moving towards larger-scale blended adaptive materials know that this is a big problem, and are actively looking at ways of packaging teacher training / teacher development (with a specific focus on blended contexts) into the learner-facing materials that they sell. But the problem won’t go away. Education ministries have a long history of throwing money at technological ‘solutions’ without thinking about obtaining the necessary buy-in from their employees. It is safe to predict that this is something that is unlikely to change. Moreover, learning how to become a blended teacher is much harder than learning, say, how to make good use of an interactive whiteboard. Since there are as many different blended adaptive approaches as there are different educational contexts, there cannot be (irony of ironies) a one-size-fits-all approach to training teachers to make good use of this software.

Finally, how transportable is big data adaptive learning? Not very, is the short answer, and for the same reasons that ‘occupational realism’ is highly problematic.

Looking at things through Jack Schneider’s lens, we might be tempted to come to the conclusion that the future for adaptive learning is a rocky path, at best. But Schneider doesn’t take political or economic considerations into account. Sternberg’s ‘Triarchic Theory’ never had the OECD or the Gates Foundation backing it up. It never had millions and millions of dollars of investment behind it. As we know from political elections (and the big data adaptive learning issue is a profoundly political one), big bucks can buy opinions.

It may also prove to be the case that the opinions of teachers don’t actually matter much. If the big adaptive bucks can win the educational debate at the highest policy-making levels, teachers will be the first victims of the ‘creative disruption’ that adaptivity promises. If you don’t believe me, just look at what is going on in the U.S.

There are causes for concern, but I don't want to sound too alarmist. Nobody really has a clue whether big data adaptivity will actually work in language learning terms. It remains more of a theory than a research-endorsed practice. And to end on a positive note, regardless of how sticky it proves to be, it might just provide a much-needed shot in the arm: the realisation that language teachers, at their best, are a lot more than competent explainers of grammar or deliverers of gap-fills.

Jose Ferreira, the fast-talking sales rep-in-chief of Knewton, likes to dazzle with numbers. In a 2012 talk hosted by the US Department of Education, Ferreira rattles off the stats: 'So Knewton students today, we have about 125,000, 180,000 right now, by December it'll be 650,000, early next year it'll be in the millions, and next year it'll be close to 10 million. And that's just through our Pearson partnership.' For each of these students, Knewton gathers millions of data points every day. That, brags Ferreira, is 'five orders of magnitude more data about you than Google has. … We literally have more data about our students than any company has about anybody else about anything, and it's not even close.' With just a touch of breathless exaggeration, Ferreira goes on: 'We literally know everything about what you know and how you learn best, everything.'

The data is mined to find correlations between learning outcomes and learning behaviours, and, once correlations have been established, learning programmes can be tailored to individual students. Ferreira explains: 'We take the combined data problem all hundred million to figure out exactly how to teach every concept to each kid. So the 100 million first shows up to learn the rules of exponents, great let's go find a group of people who are psychometrically equivalent to that kid. They learn the same ways, they have the same learning style, they know the same stuff, because Knewton can figure out things like you learn math best in the morning between 8:40 and 9:13 am. You learn science best in 42 minute bite sizes the 44 minute mark you click right, you start missing questions you would normally get right.'
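Knewton publishes none of its algorithms, but the 'psychometrically equivalent' grouping Ferreira describes sounds a lot like nearest-neighbour matching over student feature vectors. A deliberately crude sketch of the idea (the feature names and numbers below are invented for illustration, not drawn from Knewton):

```python
import math

# Toy student profiles: (accuracy so far, avg response time in seconds,
# sessions per week). Invented features, purely for illustration.
profiles = {
    "ana":   (0.82, 11.0, 5),
    "bilal": (0.79, 12.5, 5),
    "chen":  (0.45, 30.0, 1),
    "dara":  (0.80, 11.5, 4),
}

def nearest_peers(student, k=2):
    """Return the k students whose profiles are closest (Euclidean) to `student`'s."""
    target = profiles[student]
    others = [(math.dist(target, vec), name)
              for name, vec in profiles.items() if name != student]
    return [name for _, name in sorted(others)[:k]]

print(nearest_peers("ana"))  # ['dara', 'bilal'] — 'chen' is far away
```

With thousands of features and millions of students the engineering is harder, but the principle (recommend to you what worked for the students who look like you) is no more mysterious than this.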

The basic premise here is that the more data you have, the more accurately you can predict what will work best for any individual learner. But how accurate is it? In the absence of any decent, independent research (or, for that matter, any verifiable claims from Knewton), how should we respond to Ferreira’s contribution to the White House Education Datapalooza?

A new book by Stephen Finlay, Predictive Analytics, Data Mining and Big Data (Palgrave Macmillan, 2014) suggests that predictive analytics are typically about 20-30% more accurate than humans attempting to make the same judgements. That's pretty impressive and perhaps Knewton does better than that, but the key thing to remember is that, however much data Knewton is playing with, and however good their algorithms are, we are still talking about predictions and not certainties. If an adaptive system could predict with 90% accuracy (and the actual figure is typically much lower than that) what learning content and what learning approach would be effective for an individual learner, it would still mean that it was wrong 10% of the time. When this is scaled up to the numbers of students that use Knewton software, it means that millions of students are getting faulty recommendations. Beyond a certain point, further expansion of the data that is mined is unlikely to make any difference to the accuracy of predictions.
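The arithmetic of error at scale is worth making explicit. A minimal sketch, with illustrative figures (the accuracy and usage numbers here are my assumptions, not Knewton's):

```python
# Expected number of faulty recommendations when a prediction engine
# is applied at scale. All figures below are illustrative assumptions.
def expected_faulty(students, recs_per_student, accuracy):
    """Recommendations expected to be wrong, given per-recommendation accuracy."""
    total = students * recs_per_student
    return total * (1 - accuracy)

# Even a generous 90% accuracy leaves a very large absolute error count:
wrong = expected_faulty(students=1_000_000, recs_per_student=10, accuracy=0.90)
print(f"{wrong:,.0f} faulty recommendations")  # 1,000,000
```

More data can push the accuracy figure up, but it cannot remove the residual error, and at this scale even a small residual is a lot of misdirected learners.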

A further problem identified by Stephen Finlay is the tendency of people in predictive analytics to confuse correlation and causation. Certain students may have learnt maths best between 8:40 and 9:13, but it does not follow that they learnt it best because they studied at that time. If strong correlations do not involve causality, then actionable insights (such as individualised course design) can be no more than an informed gamble.
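The trap is easy to demonstrate. In the toy simulation below, a hidden variable (call it motivation) drives both when students choose to study and how well they score; studying in the morning has, by construction, zero causal effect on the score, yet a healthy correlation appears anyway:

```python
import random

random.seed(0)

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

morning, score = [], []
for _ in range(5000):
    motivation = random.random()
    # Motivated students tend to study in the morning...
    studies_in_morning = 1 if motivation + random.gauss(0, 0.2) > 0.5 else 0
    # ...and score higher. Note: no morning term in the score at all.
    test_score = 50 + 40 * motivation + random.gauss(0, 5)
    morning.append(studies_in_morning)
    score.append(test_score)

print(f"correlation = {pearson(morning, score):.2f}")  # substantial, despite zero causal effect
```

An analytics engine seeing only the two observed columns would happily recommend morning study to everyone, and the recommendation would be worthless.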

Knewton’s claim that they know how every student learns best is marketing hyperbole and should set alarm bells ringing. When it comes to language learning, we simply do not know how students learn (we do not have any generally accepted theory of second language acquisition), let alone how they learn best. More data won’t help our theories of learning! Ferreira’s claim that, with Knewton, every kid gets a perfectly optimized textbook, except it’s also video and other rich media dynamically generated in real time is equally preposterous, not least since the content of the textbook will be at least as significant as the way in which it is ‘optimized’. And, as we all know, textbooks have their faults.

Cui bono? Perhaps huge data and predictive analytics will benefit students; perhaps not. We will need to wait and find out. But Stephen Finlay reminds us that in gold rushes (and internet booms and the exciting world of Big Data) the people who sell the tools make a lot of money. Far more strike it rich selling picks and shovels to prospectors than do the prospectors. Likewise, there is a lot of money to be made selling Big Data solutions. Whether the buyer actually gets any benefit from them is not the primary concern of the sales people. (pp. 16-17) Which is, perhaps, one of the reasons that some sales people talk so fast.

Personalization is one of the key leitmotifs in current educational discourse. The message is clear: personalization is good, one-size-fits-all is bad. 'How to personalize learning and how to differentiate instruction for diverse classrooms are two of the great educational challenges of the 21st century,' write Trilling and Fadel, leading lights in the Partnership for 21st Century Skills (P21)[1]. Barack Obama has repeatedly sung the praises of, and the need for, personalized learning and his policies are fleshed out by his Secretary of Education, Arne Duncan, in speeches and on the White House blog: 'President Obama described the promise of personalized learning when he launched the ConnectED initiative last June. Technology is a powerful tool that helps create robust personalized learning environments.' In the UK, personalized learning has been government mantra for over 10 years. The EU, UNESCO, OECD, the Gates Foundation – everyone, it seems, is singing the same tune.

Personalization, we might all agree, is a good thing. How could it be otherwise? No one these days is going to promote depersonalization or impersonalization in education. What exactly it means, however, is less clear. According to a UNESCO Policy Brief[2], the term was first used in the context of education in the 1970s by Victor García Hoz, a senior Spanish educationalist and member of Opus Dei at the University of Madrid. This UNESCO document then points out that 'unfortunately, up to this date there is no single definition of this concept'.

In ELT, the term has been used in a very wide variety of ways. These range from the far-reaching ideas of people like Gertrude Moskowitz, who advocated a fundamentally learner-centred form of instruction, to the much more banal practice of getting students to produce a few personalized examples of an item of grammar they have just studied. See Scott Thornbury’s A-Z blog for an interesting discussion of personalization in ELT.

As with education in general, and ELT in particular, ‘personalization’ is also bandied around the adaptive learning table. Duolingo advertises itself as the opposite of one-size-fits-all, and as an online equivalent of the ‘personalized education you can get from a small classroom teacher or private tutor’. Babbel offers a ‘personalized review manager’ and Rosetta Stone’s Classroom online solution allows educational institutions ‘to shift their language program away from a ‘one-size-fits-all-curriculum’ to a more individualized approach’. As far as I can tell, the personalization in these examples is extremely restricted. The language syllabus is fixed and although users can take different routes up the ‘skills tree’ or ‘knowledge graph’, they are totally confined by the pre-determination of those trees and graphs. This is no more personalized learning than asking students to make five true sentences using the present perfect. Arguably, it is even less!
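The point about pre-determination can be made concrete. However many routes a learner takes through a 'skills tree', the set of items they can ever meet is fixed in advance by whoever drew the graph. A toy example (the node names are invented for illustration):

```python
# A toy 'skills tree': whatever route a learner takes, the items they
# can ever visit were fixed in advance by the graph's designer.
tree = {
    "basics":          ["present simple", "articles"],
    "present simple":  ["present perfect"],
    "articles":        ["present perfect"],
    "present perfect": [],
}

def all_paths(node, path=()):
    """Enumerate every route from `node` to a leaf of the tree."""
    path = path + (node,)
    children = tree[node]
    if not children:
        return [path]
    return [p for child in children for p in all_paths(child, path)]

paths = all_paths("basics")
print(len(paths))                      # 2 routes...
print({n for p in paths for n in p})  # ...over the same 4 pre-set items
```

Two routes, but the same four pre-selected items either way: choice of path is not choice of content.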

This is not, in any case, the kind of personalization that Obama, the Gates Foundation, Knewton, et al have in mind when they conflate adaptive learning with personalization. Their definition is much broader and summarised in the US National Education Technology Plan of 2010: ‘Personalized learning means instruction is paced to learning needs, tailored to learning preferences, and tailored to the specific interests of different learners. In an environment that is fully personalized, the learning objectives and content as well as the method and pace may all vary (so personalization encompasses differentiation and individualization).’ What drives this is the big data generated by the students’ interactions with the technology (see ‘Part 4: big data and analytics’ of ‘The Guide’ on this blog).

What remains unclear is exactly how this might work in English language learning. Adaptive software can only personalize to the extent that the content of an English language learning programme allows it to do so. It may be true that each student using adaptive software ‘gets a more personalised experience no matter whose content the student is consuming’, as Knewton’s David Liu puts it. But the potential for any really meaningful personalization depends crucially on the nature and extent of this content, along with the possibility of variable learning outcomes. For this reason, we are not likely to see any truly personalized large-scale adaptive learning programs for English any time soon.

Nevertheless, technology is now central to personalized language learning. A good learning platform, which allows learners to connect to ‘social networking systems, podcasts, wikis, blogs, encyclopedias, online dictionaries, webinars, online English courses, various apps’, etc (see Alexandra Chistyakova’s eltdiary), means that personalization could be more easily achieved.

For the time being, at least, adaptive learning systems would seem to work best for 'those things that can be easily digitized and tested like math problems and reading passages', writes Barbara Bray. Or low level vocabulary and grammar McNuggets, we might add. Ideal for, say, 'English Grammar in Use'. But meaningfully personalized language learning?


‘Personalized learning’ sounds very progressive, a utopian educational horizon, and it sounds like it ought to be the future of ELT (as Cleve Miller argues). It also sounds like a pretty good slogan on which to hitch the adaptive bandwagon. But somehow, just somehow, I suspect that when it comes to adaptive learning we’re more likely to see more testing, more data collection and more depersonalization.

[1] Trilling, B. & Fadel, C. 2009 21st Century Skills (San Francisco: Wiley) p.33

[2] Personalized learning: a new ICT­enabled education approach, UNESCO Institute for Information Technologies in Education, Policy Brief March 2012


I mentioned the issue of privacy very briefly in Part 9 of the ‘Guide’, and it seems appropriate to take a more detailed look.

Adaptive learning needs big data. Without the big data, there is nothing for the algorithms to work on, and the bigger the data set, the better the software can work. Adaptive language learning will be delivered via a platform, and the data that is generated by the language learner’s interaction with the English language program on the platform is likely to be only one, very small, part of the data that the system will store and analyse. Full adaptivity requires a psychometric profile for each student.

It would make sense, then, to aggregate as much data as possible in one place. Besides the practical value in massively combining different data sources (in order to enhance the usefulness of the personalized learning pathways), such a move would possibly save educational authorities substantial amounts of money and allow educational technology companies to mine the rich seam of student data, along with the standardised platform specifications, to design their products.

And so it has come to pass. The Gates Foundation (yes, them again) provided most of the $100 million funding. A division of Murdoch’s News Corp built the infrastructure. Once everything was ready, a non-profit organization called inBloom was set up to run the thing. The inBloom platform is open source and the database was initially free, although this will change. Preliminary agreements were made with 7 US districts and involved millions of children. The data includes ‘students’ names, birthdates, addresses, social security numbers, grades, test scores, disability status, attendance, and other confidential information’ (Ravitch, D. ‘Reign of Error’ NY: Knopf, 2013, p. 235-236). Under federal law, this information can be ‘shared’ with private companies selling educational technology and services.

The edtech world rejoiced. ‘This is going to be a huge win for us’, said one educational software provider; ‘it’s a godsend for us,’ said another. Others are not so happy. If the technology actually works, if it can radically transform education and ‘produce game-changing outcomes’ (as its proponents claim so often), the price to be paid might just conceivably be worth paying. But the price is high and the research is not there yet. The price is privacy.

The problem is simple. InBloom itself acknowledges that it ‘cannot guarantee the security of the information stored… or that the information will not be intercepted when it is being transmitted.’ Experience has already shown us that organisations as diverse as the CIA or the British health service cannot protect their data. Hackers like a good challenge. So do businesses.

The anti-privatization (and, by extension, the anti-adaptivity) lobby in the US has found an issue which is resonating with electors (and parents). These dissenting voices are led by Class Size Matters, and their voice is being heard. Of the original partners of inBloom, only one is now left. The others have all pulled out, mostly because of concerns about privacy, although the remaining partner, New York, holds personal data on 2.7 million students, which can be shared without any parental notification or consent.


This might seem like a victory for the anti-privatization / anti-adaptivity lobby, but it is likely to be only temporary. There are plenty of other companies that have their eyes on the data-mining opportunities that will be coming their way, and Obama’s ‘Race to the Top’ program means that the inBloom controversy will be only a temporary setback. ‘The reality is that it’s going to be done. It’s not going to be a little part. It’s going to be a big part. And it’s going to be put in place partly because it’s going to be less expensive than doing professional development,’ says Eva Baker of the Center for the Study of Evaluation at UCLA.

It is in this light that the debate about adaptive learning becomes hugely significant. Class Size Matters, the odd academic like Neil Selwyn or the occasional blogger like myself will not be able to reverse a trend with seemingly unstoppable momentum. But we are, collectively, in a position to influence the way these changes will take place.

If you want to find out more, check out the inBloom and Class Size Matters links. And you might like to read more from the news reports which I have used for information in this post. Of these, the second was originally published by Scientific American (owned by Macmillan, one of the leading players in ELT adaptive learning). The third and fourth are from Education Week, which is funded in part by the Gates Foundation.

One could be forgiven for thinking that there are no problems associated with adaptive learning in ELT. Type the term into a search engine and you’ll mostly come up with enthusiasm or sales talk. There are, however, a number of reasons to be deeply skeptical about the whole business. In the post after this, I will be considering the political background.

1. Learning theory

Jose Ferreira, the CEO of Knewton, spoke, in an interview with Digital Journal[1] in October 2009, about getting down to the 'granular level' of learning. He was referencing, in an original turn of phrase, the commonly held belief that learning is centrally concerned with 'gaining knowledge', knowledge that can be broken down into very small parts that can be put together again. In this sense, the adaptive learning machine is very similar to the 'teaching machine' of B.F. Skinner, the psychologist who believed that learning was a complex process of stimulus and response. But how many applied linguists would agree, firstly, that language can be broken down into atomised parts (rather than viewed as a complex, dynamic system), and, secondly, that these atomised parts can be synthesized in a learning program to reform a complex whole? Human cognitive and linguistic development simply does not work that way, despite the strongly-held contrary views of 'folk' theories of learning (Selwyn, Education and Technology 2011, p.3).


Furthermore, even if an adaptive system delivers language content in personalized and interesting ways, it is still premised on a view of learning where content is delivered and learners receive it. The actual learning program is not personalized in any meaningful way: it is only the way that it is delivered that responds to the algorithms. This is, again, a view of learning which few educationalists (as opposed to educational leaders) would share. Is language learning ‘simply a technical business of well managed information processing’ or is it ‘a continuing process of ‘participation’ (Selwyn, Education and Technology 2011, p.4)?

Finally, adaptive learning is also premised on the idea that learners have particular learning styles, that these can be identified by the analytics (even if they are not given labels), and that actionable insights can be gained from this analysis (i.e. the software can decide on the most appropriate style of content delivery for an individual learner). Although the idea that teaching programs can be modified to cater to individual learning styles continues to have some currency among language teachers (e.g. those who espouse Neuro-Linguistic Programming or Multiple Intelligences Theory), it is not an idea that has much currency in the research community.

It might be the case that adaptive learning programs will work with some, or even many, learners, but it would be wise to carry out more research (see the section on Research below) before making grand claims about its efficacy. If adaptive learning can be shown to be more effective than other forms of language learning, it will be either because our current theories of language learning are all wrong, or because the learning takes place despite the theory (and not because of it).

2. Practical problems

However good technological innovations may sound, they can only be as good, in practice, as the way they are implemented. Language laboratories and interactive whiteboards both sounded like very good ideas at the time, but they both fell out of favour long before they were technologically superseded. The reasons are many, but one of the most important is that classroom teachers did not understand sufficiently the potential of these technologies or, more basically, how to use them. Given the much more radical changes that seem to be implied by the adoption of adaptive learning, we would be wise to be cautious. The following is a short, selected list of questions that have not yet been answered.

  • Language teachers often struggle with mixed ability classes. If adaptive programs (as part of a blended program) allow students to progress at their own speed, the range of abilities in face-to-face lessons may be even more marked. How will teachers cope with this? Teacher – student ratios are unlikely to improve!
  • Who will pay for the training that teachers will need to implement effective blended learning and when will this take place?
  • How will teachers respond to a technology that will be perceived by some as a threat to their jobs and their professionalism and as part of a growing trend towards the accommodation of commercial interests (see the next post)?
  • How will students respond to online (adaptive) learning when it becomes the norm, rather than something ‘different’?

3. Research

Technological innovations in education are rarely, if ever, driven by solidly grounded research, but they are invariably accompanied by grand claims about their potential. Motion pictures, radio, television and early computers were all seen, in their time, as wonder technologies that would revolutionize education (Cuban, Teachers and Machines: The Classroom Use of Technology since 1920 1986). Early research seemed to support the claims, but the passage of time has demonstrated all too clearly the precise opposite. The arrival on the scene of e-learning in general, and adaptive learning in particular, has also been accompanied by much cheer-leading and claims of research support.

Examples of such claims of research support for adaptive learning in higher education in the US and Australia include an increase in pass rates of between 7 and 18%, a decrease of between 14 and 47% in student drop-outs, and an acceleration of 25% in the time needed to complete courses[2]. However, research of this kind needs to be taken with a liberal pinch of salt. First of all, the research has usually been commissioned, and sometimes carried out, by those with vested commercial interests in positive results. Secondly, the design of the research study usually guarantees positive results. Finally, the results cannot be interpreted to have any significance beyond their immediate local context. There is no reason to expect that what happened in a particular study into adaptive learning in, say, the University of Arizona would be replicated in, say, the Universities of Amman, Astana or anywhere else. Very often, when this research is reported, the subject of the students’ study is not even mentioned, as if this were of no significance.

The lack of serious research into the effectiveness of adaptive learning does not lead us to the conclusion that it is ineffective. It is simply too soon to say, and if the examples of motion pictures, radio and television are any guide, it will be a long time before we have any good evidence. By that time, it is reasonable to assume, adaptive learning will be a very different beast from what it is today. Given the recency of this kind of learning, the lack of research is not surprising. For online learning in general, a meta-analysis commissioned by the US Department of Education (Means et al, Evaluation of Evidence-Based Practice in Online Learning 2009, p.9) found that there were only a small number of rigorous published studies, and that it was not possible to attribute any gains in learning outcomes to online or blended learning modes. As the authors of this report were aware, there are too many variables (social, cultural and economic) to compare in any direct way the efficacy of one kind of learning with another. This is as true of attempts to compare adaptive online learning with face-to-face instruction as it is with comparisons of different methodological approaches in purely face-to-face teaching. There is, however, an irony in the fact that advocates of adaptive learning (whose interest in analytics leads them to prioritise correlational relationships over causal ones) should choose to make claims about the causal relationship between learning outcomes and adaptive learning.

Perhaps, as Selwyn (Education and Technology 2011, p.87) suggests, attempts to discover the relative learning advantages of adaptive learning are simply asking the wrong question, not least as there cannot be a single straightforward answer. Perhaps a more useful critique would be to look at the contexts in which the claims for adaptive learning are made, and by whom. Selwyn also suggests that useful insights may be gained from taking a historical perspective. It is worth noting that the technicist claims for adaptive learning (that ‘it works’ or that it is ‘effective’) are essentially the same as those that have been made for other education technologies. They take a universalising position and ignore local contexts, forgetting that ‘pedagogical approach is bound up with a web of cultural assumption’ (Wiske, ‘A new culture of teaching for the 21st century’ in Gordon, D.T. (ed.) The Digital Classroom: How Technology is Changing the Way we teach and Learn 2000, p.72). Adaptive learning might just possibly be different from other technologies, but history advises us to be cautious.

[2] These figures are quoted in Learning to Adapt: A Case for Accelerating Adaptive Learning in Higher Education, a booklet produced in March 2013 by Education Growth Advisors, an education consultancy firm. Their research is available at

Adaptive learning is a product to be sold. How?

1 Individualised learning

In the vast majority of contexts, language teaching is tied to a ‘one-size-fits-all’ model. This is manifested in institutional and national syllabuses which provide lists of structures and / or competences that all students must master within a given period of time. It is usually actualized in the use of coursebooks, often designed for ‘global markets’. Reaction against this model has been common currency for some time, and has led to a range of suggestions for alternative approaches (such as DOGME), none of which have really caught on. The advocates of adaptive learning programs have tapped into this zeitgeist and promise ‘truly personalized learning’. Atomico, a venture capital company that focuses on consumer technologies, and a major investor in Knewton, describes the promise of adaptive learning in the following terms: ‘Imagine lessons that adapt on-the-fly to the way in which an individual learns, and powerful predictive analytics that help teachers differentiate instruction and understand what each student needs to work on and why[1].’

This is a seductive message and is often framed in such a way that disagreement seems impossible. A post on one well-respected blog, eltjam, which focuses on educational technology in language learning, argued the case for adaptive learning very strongly in July 2013: 'Adaptive Learning is a methodology that is geared towards creating a learning experience that is unique to each individual learner through the intervention of computer software. Rather than viewing learners as a homogenous collective with more or less identical preferences, abilities, contexts and objectives who are shepherded through a glossy textbook with static activities/topics, AL attempts to tap into the rich meta-data that is constantly being generated by learners (and disregarded by educators) during the learning process. Rather than pushing a course book at a class full of learners and hoping that it will (somehow) miraculously appeal to them all in a compelling, salubrious way, AL demonstrates that the content of a particular course would be more beneficial if it were dynamic and interactive. When there are as many responses, ideas, personalities and abilities as there are learners in the room, why wouldn't you ensure that the content was able to map itself to them, rather than the other way around?'[2]

Indeed. But it all depends on what, precisely, the content is – a point I will return to in a later post. For the time being, it is worth noting the prominence that this message is given in the promotional discourse. It is a message that is primarily directed at teachers. It is more than a little disingenuous, however, because teachers are not the primary targets of the promotional discourse, for the simple reason that they are not the ones with purchasing power. The slogan on the homepage of the Knewton website shows clearly who the real audience is: ‘Every education leader needs an adaptive learning infrastructure’[3].

2 Learning outcomes and testing

Education leaders, who are more likely these days to come from the world of business and finance than the world of education, are currently very focused on two closely interrelated topics: the need for greater productivity and accountability, and the role of technology. They generally share the assumption of other leaders in the World Economic Forum that ICT is the key to the former and 'the key to a better tomorrow' (Spring, Education Networks, 2012, p.52). 'We're at an important transition point,' said Arne Duncan, the U.S. Secretary of Education in 2010, 'we're getting ready to move from a predominantly print-based classroom to a digital learning environment'. (quoted by Spring, 2012, p.58) Later in the speech, which was delivered at the time of the release of the new National Education Technology Plan, Duncan said 'just as technology has increased productivity in the business world, it is an essential tool to help boost educational productivity'. The plan outlines how this increased productivity could be achieved: we must start 'with being clear about the learning outcomes we expect from the investments we make' (Office of Educational Technology, Transforming American Education: Learning Powered by Technology, U.S. Department of Education, 2010). The greater part of the plan is devoted to discussion of learning outcomes and assessment of them.

Learning outcomes (and their assessment) are also at the heart of ‘Asking More: the Path to Efficacy’ (Barber and Rizvi (eds), Asking More: the Path to Efficacy Pearson, 2013), Pearson’s blueprint for the future of education. According to John Fallon, the CEO of Pearson, ‘our focus should unfalteringly be on honing and improving the learning outcomes we deliver’ (Barber and Rizvi, 2013, p.3). ‘High quality learning’ is associated with ‘a relentless focus on outcomes’ (ibid, p.3) and words like ‘measuring / measurable’, ‘data’ and ‘investment’ are almost as salient as ‘outcomes’. A ‘sister’ publication, edited by the same team, is entitled ‘The Incomplete Guide to Delivering Learning Outcomes’ (Barber and Rizvi (eds), Pearson, 2013) and explores further Pearson’s ambition to ‘become the world’s leading education company’ and to ‘deliver learning outcomes’.

It is no surprise that words like ‘outcomes’, ‘data’ and ‘measure’ feature equally prominently in the language of adaptive software companies like Knewton (see, for example, the quotation from Jose Ferreira, CEO of Knewton, in an earlier post). Adaptive software is premised on the establishment and measurement of clearly defined learning outcomes. If measurable learning outcomes are what you’re after, it’s hard to imagine a better path to follow than adaptive software. If your priorities include standards and assessment, it is again hard to imagine an easier path to follow than adaptive software, which was used in testing long before its introduction into instruction. As David Kuntz, VP of research at Knewton and, before that, a pioneer of algorithms in the design of tests, points out, ‘when a student takes a course powered by Knewton, we are continuously evaluating their performance, what others have done with that material before, and what [they] know’[4]. Knewton’s claim that every education leader needs an adaptive learning infrastructure has a powerful internal logic.
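The testing pedigree shows in how such systems choose the next item. Real engines use item-response theory and richer statistical models; the bare-bones 'staircase' rule below (not Knewton's actual method, just the simplest possible illustration) captures the underlying idea: answer correctly and the difficulty steps up, answer wrongly and it steps down:

```python
def next_difficulty(current, answered_correctly, step=1, lo=1, hi=10):
    """Staircase rule: raise difficulty after a correct answer,
    lower it after a wrong one, clamped to the item bank's range."""
    move = step if answered_correctly else -step
    return max(lo, min(hi, current + move))

# A learner's difficulty level drifts towards their ability level:
level = 5
for correct in [True, True, True, False, True, False]:
    level = next_difficulty(level, correct)
print(level)  # 7
```

Continuous evaluation of performance, in other words, is baked into the item-selection loop itself, which is why adaptive instruction and adaptive testing are so hard to pull apart.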

3 New business models

‘Adapt or die’ (a phrase originally coined by the last prime minister of apartheid South Africa) is a piece of advice that is often given these days to both educational institutions and publishers. British universities must adapt or die, according to Michael Barber, author of ‘An Avalanche is Coming[5]’ (a report commissioned by the Institute for Public Policy Research, a British think tank), Chief Education Advisor to Pearson, and editor of the Pearson ‘Efficacy’ document (see above). ELT publishers ‘must change or die’, reported the eltjam blog[6], and it is a message that is frequently repeated elsewhere. The move towards adaptive learning is seen increasingly often as one of the necessary adaptations for both these sectors.

The problems facing universities in countries like the U.K. are acute. Basically, as the introduction to ‘An Avalanche is Coming’ puts it, ‘the traditional university is being unbundled’. There are a number of reasons for this, including the rising cost of higher education provision, greater global competition for the same students, funding squeezes from central governments, and competition from new educational providers (such as MOOCs). Unsurprisingly, universities (supported by national governments) have turned to technology, especially online course delivery, as an answer to their problems. There are two main reasons for this. Firstly, universities have attempted to reduce operating costs by looking for increases in scale (through mergers, transnational partnerships, international branch campuses and so on). Mega-universities are growing, and there are thirty-three in Asia alone (Selwyn, Education in a Digital World, New York: Routledge, 2013, p.6). Universities like Anadolu University in Turkey, with over one million students, are no longer exceptional in terms of scale. In this world, online educational provision is a key element. Secondly, and not to put too fine a point on it, online instruction is cheaper (Spring, Education Networks, 2012, p.2).

All other things being equal, why would any language department of an institute of higher education not choose an online environment with an adaptive element? Adaptive learning, for the time being at any rate, may be seen as ‘the much needed key to the “Iron Triangle” that poses a conundrum to HE providers: cost, access and quality. Any attempt to improve any one of those conditions impacts negatively on the others. If you want to increase access to a course you run the risk of escalating costs and jeopardising quality, and so on.’[7]

Meanwhile, ELT publishers have been hit by rampant pirating of their materials, spiralling development costs of their flagship products and the growth of open educational resources. An excellent blog post by David Wiley[8] explains why adaptive learning services are a heaven-sent opportunity for publishers to modify their business model. ‘While the broad availability of free content and open educational resources have trained internet users to expect content to be free, many people are still willing to pay for services. Adaptive learning systems exploit this willingness by deeply intermingling content and services so that you cannot access one without using the other. Naturally, because an adaptive learning service is comprised of content plus adaptive services, it will be more expensive than static content used to be. And because it is a service, you cannot simply purchase it like you used to buy a textbook. An adaptive learning service is something you subscribe to, like Netflix. […] In short, why is it in a content company’s interest to enable you to own anything? Put simply, it is not. When you own a copy, the publisher completely loses control over it. When you subscribe to content through a digital service (like an adaptive learning service), the publisher achieves complete and perfect control over you and your use of their content.’

Although the initial development costs of building a suitable learning platform with adaptive capabilities are high, publishers will subsequently be able to produce and modify content (i.e. learning materials) much more efficiently. Since content will be mashed up and delivered in many different ways, author royalties will be cut or eliminated. Production and distribution costs will be much lower, and sales and marketing efforts can be directed more efficiently towards the most significant customers. The days of ELT sales reps trying unsuccessfully to get an interview with the director of studies of a small language school or university department are becoming a thing of the past. As with the universities, scale will be everything.

[2] (last accessed 13 January 2014)

[3] (last accessed 13 January 2014)

[4] MIT Technology Review, November 26, 2012 (last accessed 13 January 2014)

[7] Tim Gifford, Taking it Personally: Adaptive Learning, July 9, 2013 (last accessed 13 January 2014)

[8] David Wiley, Buying our Way into Bondage: the risks of adaptive learning services, March 20, 2013 (last accessed 13 January 2014)

In order to understand more complex models of adaptive learning, it is necessary to take a temporary step sideways away from the world of language learning. Businesses have long used analytics – the analysis of data to find meaningful patterns – in insurance, banking and marketing. With the exponential growth in computer processing power and memory capacity, businesses now have access to volumes of data of almost unimaginable size. This is known as ‘big data’ and has been described as ‘a revolution that will transform how we live, work and think’ (Mayer-Schönberger & Cukier, Big Data, 2013). Frequently cited examples of the potential of big data are Amazon’s success in analyzing and predicting buying patterns and the use of big data analysis in Barack Obama’s 2012 presidential re-election campaign. Business commentators are all singing the same song on the subject. This will be looked at again in later posts. For the time being, it is enough to be aware of the main message. ‘The high-performing organisation of the future will be one that places great value on data and analytical exploration’ (The Economist Intelligence Unit, In Search of Insight and Foresight: Getting More out of Big Data, 2013, p.15). ‘Almost no sphere of business activity will remain untouched by this movement’ (McAfee & Brynjolfsson, ‘Big Data: The Management Revolution’, Harvard Business Review, October 2012, p.65).

The Economist cover

With the growing bonds between business and education (another topic which will be explored later), it is unsurprising that language learning / teaching materials are rapidly going down the big data route. In comparison to what is now being developed for ELT, the data that is analyzed in the adaptive learning models I have described in an earlier post is very limited, and the algorithms used to shape the content are very simple.

The volume and variety of data and the speed of processing are now of an altogether different order. Jose Ferreira, CEO of Knewton, one of the biggest players in adaptive learning in ELT, spells out the kind of data that can be tapped[1]:

At Knewton, we divide educational data into five types: one pertaining to student identity and onboarding, and four student activity-based data sets that have the potential to improve learning outcomes. They’re listed below in order of how difficult they are to attain:

1) Identity Data: Who are you? Are you allowed to use this application? What admin rights do you have? What district are you in? How about demographic info?

2) User Interaction Data: User interaction data includes engagement metrics, click rate, page views, bounce rate, etc. These metrics have long been the cornerstone of internet optimization for consumer web companies, which use them to improve user experience and retention. This is the easiest to collect of the data sets that affect student outcomes. Everyone who creates an online app can and should get this for themselves.

3) Inferred Content Data: How well does a piece of content “perform” across a group, or for any one subgroup, of students? What measurable student proficiency gains result when a certain type of student interacts with a certain piece of content? How well does a question actually assess what it intends to? Efficacy data on instructional materials isn’t easy to generate — it requires algorithmically normed assessment items. However it’s possible now for even small companies to “norm” small quantities of items. (Years ago, before we developed more sophisticated methods of norming items at scale, Knewton did so using Amazon’s “Mechanical Turk” service.)

4) System-Wide Data: Rosters, grades, disciplinary records, and attendance information are all examples of system-wide data. Assuming you have permission (e.g. you’re a teacher or principal), this information is easy to acquire locally for a class or school. But it isn’t very helpful at small scale because there is so little of it on a per-student basis. At very large scale it becomes more useful, and inferences that may help inform system-wide recommendations can be teased out.

5) Inferred Student Data: Exactly what concepts does a student know, at exactly what percentile of proficiency? Was an incorrect answer due to a lack of proficiency, or forgetfulness, or distraction, or a poorly worded question, or something else altogether? What is the probability that a student will pass next week’s quiz, and what can she do right this moment to increase it?
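Knewton’s actual data model is, of course, not public. Purely as an illustration of what Ferreira’s taxonomy implies, his five categories could be sketched as a set of record types (all field names here are invented for the sketch, not Knewton’s):

```python
from dataclasses import dataclass, field

# Illustrative only: a toy schema for the five data categories Ferreira
# describes. Field names are invented, not Knewton's actual data model.

@dataclass
class IdentityData:            # 1) who the student is, access rights, demographics
    student_id: str
    district: str
    admin_rights: list[str] = field(default_factory=list)

@dataclass
class InteractionData:         # 2) engagement metrics (clicks, page views, bounces)
    page_views: int = 0
    clicks: int = 0
    bounced: bool = False

@dataclass
class InferredContentData:     # 3) how well an item "performs", normed across students
    item_id: str
    measured_difficulty: float = 0.5

@dataclass
class SystemWideData:          # 4) rosters, grades, attendance
    grades: dict[str, float] = field(default_factory=dict)
    absences: int = 0

@dataclass
class InferredStudentData:     # 5) estimated proficiency, per concept
    proficiency: dict[str, float] = field(default_factory=dict)
```

The ordering in Ferreira’s list, note, is by difficulty of acquisition: interaction data falls out of any web application for free, whereas inferred student data is the product of the other four plus the analytics discussed below.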

Software of this kind keeps complex personal profiles, with millions of variables per student, on as many students as necessary. The more student profiles (and therefore students) that can be compared, the more useful the data is. Big players in this field, such as Knewton, are aiming for student numbers in the tens to hundreds of millions. Once data volume of this order is achieved, the ‘analytics’, or the algorithms that convert data into ‘actionable insights’ (Spring, Education Networks, 2012, p.55), become much more reliable.
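Knewton’s algorithms are proprietary, but the basic mechanism — turning a stream of right/wrong responses into a continuously updated proficiency estimate — can be illustrated with a toy Elo-style update (an assumption chosen for simplicity, not Knewton’s actual method):

```python
def update_proficiency(proficiency: float, item_difficulty: float,
                       correct: bool, k: float = 0.1) -> float:
    """One toy Elo-style update: nudge the proficiency estimate towards
    the evidence provided by a single right or wrong answer.

    Illustrative only; real adaptive systems model many more variables
    (forgetting, distraction, item quality) than this single number.
    """
    # Probability of a correct answer predicted by the current estimate
    expected = 1.0 / (1.0 + 10 ** (item_difficulty - proficiency))
    # Move the estimate in proportion to how surprising the outcome was
    return proficiency + k * ((1.0 if correct else 0.0) - expected)

# A student estimated at 0.5 answers an item of difficulty 0.5 correctly,
# so the estimate rises; a wrong answer would lower it.
p = update_proficiency(0.5, 0.5, correct=True)
```

The point of the blog’s argument survives the simplification: the more students who have attempted a given item, the better its difficulty can be normed, and the more reliable every subsequent update becomes — which is why scale is everything.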