Posts Tagged ‘Knewton’

At the start of the last decade, ELT publishers were worried, Macmillan among them. The financial crash of 2008 led to serious difficulties, not least in their key Spanish market. In 2011, Macmillan’s parent company was fined £11.3 million for corruption. Under new ownership, restructuring was a constant. At the same time, Macmillan ELT was getting ready to move from its Oxford headquarters to new premises in London, a move which would inevitably lead to the loss of a sizable proportion of its staff. On top of that, Macmillan, like the other ELT publishers, was aware that changes in the digital landscape (the first 3G iPhone had appeared in June 2008 and wifi access was spreading rapidly around the world) meant that they needed to shift away from the old print-based model. With her finger on the pulse, Caroline Moore wrote an article in October 2010 entitled ‘No Future? The English Language Teaching Coursebook in the Digital Age’. The publication (at the start of the decade) and runaway success of the online ‘Touchstone’ course, from arch-rivals Cambridge University Press, meant that Macmillan needed to change fast if they were to avoid being left behind.

Macmillan already had a platform, Campus, but it was generally recognised as being clunky and outdated, and something new was needed. In the summer of 2012, Macmillan brought in two new executives – people who could talk the ‘creative-disruption’ talk and who believed in the power of big data to shake up English language teaching and publishing. At the time, the idea of big data was beginning to reach public consciousness, and ‘Big Data: A Revolution That Will Transform How We Live, Work, and Think’ by Viktor Mayer-Schönberger and Kenneth Cukier was a major bestseller in 2013 and 2014. ‘Big data’ was the ‘hottest trend’ in technology and peaked in Google Trends in October 2014. See the graph below.

[Image: Google Trends graph for ‘big data’]

Not long after taking up their positions, the two executives began negotiations with Knewton, an American adaptive learning company. Knewton’s technology promised to gather colossal amounts of data on students using Knewton-enabled platforms. Its founder, Jose Ferreira, bragged that Knewton had ‘more data about our students than any company has about anybody else about anything […] We literally know everything about what you know and how you learn best, everything’. This data would, it was claimed, multiply the efficacy of learning materials by orders of magnitude, allowing publishers like Macmillan to provide a truly personalized and optimal offering to learners using their platform.

The contract between Macmillan and Knewton was agreed in May 2013 ‘to build next-generation English Language Learning and Teaching materials’. Perhaps fearful of being left behind in what was seen to be a winner-takes-all market (Pearson already had a financial stake in Knewton), Cambridge University Press duly followed suit, signing a contract with Knewton in September of the same year, in order ‘to create personalized learning experiences in [their] industry-leading ELT digital products’. Things moved fast because, by the start of 2014 when Macmillan’s new catalogue appeared, customers were told to ‘watch out for the “Big Tree”’, Macmillan’s new platform, which would be powered by Knewton. ‘The power that will come from this world of adaptive learning takes my breath away’, wrote the international marketing director.

Not a lot happened next, at least outwardly. In the following year, 2015, the Macmillan catalogue again told customers to ‘look out for the Big Tree’ which would offer ‘flexible blended learning models’ which could ‘give teachers much more freedom to choose what they want to do in the class and what they want the students to do online outside of the classroom’.

[Image: Macmillan catalogue, 2015]

But behind the scenes, everything was going wrong. It had become clear that a linear model of language learning, which was a necessary prerequisite of the Knewton system, simply did not lend itself to anything that would be even vaguely marketable in established markets. Skills development, not least the development of so-called 21st century skills, which Macmillan was pushing at the time, would not be facilitated by the collection of huge amounts of data and by algorithms offering personalized pathways. Even if it could have been, teachers weren’t ready for it, and the projections for platform adoption were beginning to seem wildly over-optimistic. Costs were spiralling. Pushed to meet unrealistic deadlines for a product that was totally ill-conceived in the first place, in-house staff were suffering, and this was made worse by what many staffers thought was a toxic work environment. By the end of 2014 (so, before the copy for the 2015 catalogue had been written), the two executives had gone.

For some time previously, skeptics had been joking that Macmillan had been barking up the wrong tree, and by the time that the 2016 catalogue came out, the ‘Big Tree’ had disappeared without trace. The problem was that so much time and money had been thrown at this particular tree that not enough had been left to develop new course materials (for adults). The whole thing had been a huge cock-up of an extraordinary kind.

Cambridge, too, lost interest in their Knewton connection, but were fortunate (or wise) not to have invested so much energy in it. Language learning was only ever a small part of Knewton’s portfolio, and the company had raised over $180 million in venture capital. Its founder, Jose Ferreira, had been a master of marketing hype, but the business model was not delivering any better than the educational side of things. Pearson pulled out. In December 2016, Ferreira stepped down and was replaced as CEO. The company shifted to ‘selling digital courseware directly to higher-ed institutions and students’, but this could not stop the decline. In September of 2019, Knewton was sold for something under $17 million, with investors taking a hit of over $160 million. My heart bleeds.

It was clear, from very early on (see, for example, my posts from 2014 here and here) that Knewton’s product was little more than what Michael Feldstein called ‘snake oil’. Why and how could so many people fall for it for so long? Why and how will so many people fall for it again in the coming decade, although this time it won’t be ‘big data’ that does the seduction, but AI (which kind of boils down to the same thing)? The former Macmillan executives are still at the game, albeit in new companies and talking a slightly modified talk, and Jose Ferreira (whose new venture has already raised $3.7 million) is promising to revolutionize education with a new start-up which ‘will harness the power of technology to improve both access and quality of education’ (thanks to Audrey Watters for the tip). Investors may be desperate to find places to spread their portfolio, but why do the rest of us lap up the hype? It’s a question to which I will return.


About two and a half years ago, when I started writing this blog, there was a lot of hype around adaptive learning and the big data which might drive it. Two and a half years is a long time in technology. A look at Google Trends suggests that interest in adaptive learning has been pretty static for the last couple of years. It’s interesting to note that 3 of the 7 lettered points on this graph are Knewton-related media events (including the most recent, A, which is Knewton’s latest deal with Hachette) and 2 of them concern McGraw-Hill. It would be interesting to know whether these companies follow both parts of Simon Cowell’s dictum of ‘Create the hype, but don’t ever believe it’.

[Image: Google Trends graph]

A look at the Hype Cycle of the IT research and advisory firm Gartner (see here for Wikipedia’s entry on the topic, and for criticism of the hype of Hype Cycles) indicates that both big data and adaptive learning have now slid into the ‘trough of disillusionment’, which means that the market has started to mature, becoming more realistic about how useful the technologies can be for organizations.

A few years ago, the Gates Foundation, one of the leading cheerleaders and financial promoters of adaptive learning, launched its Adaptive Learning Market Acceleration Program (ALMAP) to ‘advance evidence-based understanding of how adaptive learning technologies could improve opportunities for low-income adults to learn and to complete postsecondary credentials’. It’s striking that the program’s aims referred to how such technologies could lead to learning gains, not whether they would. Now, though, with the publication of a report commissioned by the Gates Foundation to analyze the data coming out of the ALMAP Program, things are looking less rosy. The report is inconclusive. There is no firm evidence that adaptive learning systems are leading to better course grades or course completion. ‘The ultimate goal – better student outcomes at lower cost – remains elusive’, the report concludes. Rahim Rajan, a senior program officer for Gates, is clear: ‘There is no magical silver bullet here.’

The same conclusion is being reached elsewhere. A report for the National Education Policy Center (in Boulder, Colorado) concludes: ‘Personalized Instruction, in all its many forms, does not seem to be the transformational technology that is needed, however. After more than 30 years, Personalized Instruction is still producing incremental change. The outcomes of large-scale studies and meta-analyses, to the extent they tell us anything useful at all, show mixed results ranging from modest impacts to no impact. Additionally, one must remember that the modest impacts we see in these meta-analyses are coming from blended instruction, which raises the cost of education rather than reducing it’ (Enyedy, 2014: 15 – see reference at the foot of this post). In the same vein, a recent academic study by Meg Coffin Murray and Jorge Pérez (2015, ‘Informing and Performing: A Study Comparing Adaptive Learning to Traditional Learning’) found that ‘adaptive learning systems have negligible impact on learning outcomes’.

In the latest educational technology plan from the U.S. Department of Education (‘Future Ready Learning: Reimagining the Role of Technology in Education’, 2016), the only mentions of the word ‘adaptive’ are in the context of testing. And the latest OECD report on ‘Students, Computers and Learning: Making the Connection’ (2015) finds, more generally, that information and communication technologies, when they are used in the classroom, have, at best, a mixed impact on student performance.

There is, however, too much money at stake for the earlier hype to disappear completely. Sponsored cheerleading for adaptive systems continues to find its way into blogs and national magazines and newspapers. EdSurge, for example, recently published a report called ‘Decoding Adaptive’ (2016), sponsored by Pearson, that continues to wave the flag. Enthusiastic anecdotes take the place of evidence, but, for all that, it’s a useful read.

In the world of ELT, there are plenty of sales people who want new products which they can call ‘adaptive’ (and gamified, too, please). But it’s striking that three years after I started following the hype, such products are rather thin on the ground. Pearson was the first of the big names in ELT to do a deal with Knewton, and invested heavily in the company. Their relationship remains close. But, to the best of my knowledge, the only truly adaptive ELT product that Pearson offers is the PTE test.

Macmillan signed a contract with Knewton in May 2013 ‘to provide personalized grammar and vocabulary lessons, exam reviews, and supplementary materials for each student’. In December of that year, they talked up their new ‘big tree online learning platform’: ‘Look out for the Big Tree logo over the coming year for more information as to how we are using our partnership with Knewton to move forward in the Language Learning division and create content that is tailored to students’ needs and reactive to their progress.’ I’ve been looking out, but it’s all gone rather quiet on the adaptive / platform front.

In September 2013, it was the turn of Cambridge to sign a deal with Knewton ‘to create personalized learning experiences in its industry-leading ELT digital products for students worldwide’. This year saw the launch of a major new CUP series, ‘Empower’. It has an online workbook with personalized extra practice, but there’s nothing (yet) that anyone would call adaptive. More recently, Cambridge has launched the online version of the 2nd edition of Touchstone. Nothing adaptive there, either.

Earlier this year, Cambridge published The Cambridge Guide to Blended Learning for Language Teaching, edited by Mike McCarthy. It contains a chapter by M.O.Z. San Pedro and R. Baker on ‘Adaptive Learning’. It’s an enthusiastic account of the potential of adaptive learning, but it doesn’t contain a single reference to language learning or ELT!

So, what’s going on? Skepticism is becoming the order of the day. The early hype of people like Knewton’s Jose Ferreira is now understood for what it was. Companies like Macmillan got their fingers badly burnt when they barked up the wrong tree with their ‘Big Tree’ platform.

Noel Enyedy captures a more contemporary understanding when he writes: ‘Personalized Instruction is based on the metaphor of personal desktop computers—the technology of the 80s and 90s. Today’s technology is not just personal but mobile, social, and networked. The flexibility and social nature of how technology infuses other aspects of our lives is not captured by the model of Personalized Instruction, which focuses on the isolated individual’s personal path to a fixed end-point. To truly harness the power of modern technology, we need a new vision for educational technology’ (Enyedy, 2014: 16).

Adaptive solutions aren’t going away, but there is now a much better understanding of what sorts of problems might have adaptive solutions. Testing is certainly one. As the educational technology plan from the U.S. Department of Education (‘Future Ready Learning: Reimagining the Role of Technology in Education’, 2016) puts it: ‘Computer adaptive testing, which uses algorithms to adjust the difficulty of questions throughout an assessment on the basis of a student’s responses, has facilitated the ability of assessments to estimate accurately what students know and can do across the curriculum in a shorter testing session than would otherwise be necessary.’ In ELT, Pearson and EF have adaptive tests that have been well researched and designed.
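For anyone who wants to see what ‘adjusting the difficulty of questions on the basis of a student’s responses’ boils down to, here is a minimal sketch of a staircase-style adaptive test. It is my own toy illustration, not the algorithm used by Pearson, EF or anyone else: the item bank, the difficulty scale and the simulated learner are all invented, and real tests use proper item response theory models rather than a crude staircase.

```python
import random

# Toy item bank: (difficulty on an arbitrary 1-10 scale, label).
# Both the items and their difficulties are invented for illustration.
ITEM_BANK = [
    (1, "present simple"),
    (3, "past simple"),
    (5, "present perfect"),
    (7, "past perfect"),
    (9, "mixed conditionals"),
]

def simulated_response(ability, difficulty):
    """Stand-in for a real learner: the harder the item relative to the
    learner's ability, the less likely a correct answer."""
    p_correct = 1 / (1 + 2 ** (difficulty - ability))
    return random.random() < p_correct

def adaptive_test(true_ability=6.0, n_items=8):
    estimate, step = 5.0, 2.0            # start mid-scale, take big steps first
    for i in range(n_items):
        # Serve the item whose difficulty is closest to the current estimate.
        difficulty, label = min(ITEM_BANK, key=lambda item: abs(item[0] - estimate))
        correct = simulated_response(true_ability, difficulty)
        # Nudge the ability estimate up after a right answer, down after a wrong one.
        estimate += step if correct else -step
        step = max(0.5, step * 0.7)      # smaller adjustments as the test proceeds
        print(f"Q{i + 1}: {label:<18} difficulty={difficulty}  "
              f"{'right' if correct else 'wrong'}  estimate={estimate:.1f}")
    return estimate

if __name__ == "__main__":
    adaptive_test()
```

The select, respond, update loop is the whole trick: the better the learner does, the harder the items served, and the estimate of what they can do converges much faster than it would with a fixed test.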

Vocabulary apps which deploy adaptive technology continue to become more sophisticated, although empirical research is lacking. Automated writing tutors with adaptive corrective feedback are also developing fast, and I’ll be writing a post about these soon. Similarly, as speech recognition software improves, we can expect to see better and better automated adaptive pronunciation tutors. But going beyond such applications, there are bigger questions to ask, and answers to these will impact on whatever direction adaptive technologies take. Large platforms (LMSs), with or without adaptive software, are already beginning to look rather dated. Will they be replaced by integrated apps, or are apps themselves going to be replaced by bots (currently riding high in the Hype Cycle)? In language learning and teaching, the future of bots is likely to be shaped by developments in natural language processing (another topic about which I’ll be blogging soon). Nobody really has a clue where the next two and a half years will take us (if anywhere), but it’s becoming increasingly likely that adaptive learning will be only one very small part of it.


Enyedy, N. 2014. Personalized Instruction: New Interest, Old Rhetoric, Limited Results, and the Need for a New Direction for Computer-Mediated Learning. Boulder, CO: National Education Policy Center. Retrieved 17.07.16 from http://nepc.colorado.edu/publication/personalized-instruction

In ELT circles, ‘behaviourism’ is a boo word. In the standard history of approaches to language teaching (characterised as a ‘procession of methods’ by Hunter & Smith 2012: 432[1]), there were the bad old days of behaviourism until Chomsky came along, savaged the theory in his review of Skinner’s ‘Verbal Behavior’, and we were all able to see the light. In reality, of course, things weren’t quite like that. The debate between Chomsky and the behaviourists is far from over, behaviourism was not the driving force behind the development of audiolingual approaches to language teaching, and audiolingualism is far from dead. For an entertaining and eye-opening account of something much closer to reality, I would thoroughly recommend a post on Russ Mayne’s Evidence Based ELT blog, along with the discussion which follows it. For anyone who would like to understand what behaviourism is, was, and is not (before they throw the term around as an insult), I’d recommend John A. Mills’ ‘Control: A History of Behavioral Psychology’ (New York University Press, 1998) and John Staddon’s ‘The New Behaviorism 2nd edition’ (Psychology Press, 2014).

There is a close connection between behaviourism and adaptive learning. Audrey Watters, no fan of adaptive technology, suggests that ‘any company touting adaptive learning software’ has been influenced by Skinner. In a more extended piece, ‘Education Technology and Skinner’s Box’, Watters explores further her problems with Skinner and the educational technology that has been inspired by behaviourism. But writers much more sympathetic to adaptive learning also see close connections to behaviourism. ‘The development of adaptive learning systems can be considered as a transformation of teaching machines,’ write Kara & Sevim[2] (2013: 114 – 117), although they go on to point out the differences between the two. Vendors of adaptive learning products, like DreamBox Learning©, are not shy of associating themselves with behaviourism: ‘Adaptive learning has been with us for a while, with its history of adaptive learning rooted in cognitive psychology, beginning with the work of behaviorist B.F. Skinner in the 1950s, and continuing through the artificial intelligence movement of the 1970s.’

That there is a strong connection between adaptive learning and behaviourism is indisputable, but I am not interested in attempting to establish the strength of that connection. This would, in any case, be an impossible task without some reductionist definition of both terms. Instead, my interest here is to explore some of the parallels between the two, and, in the spirit of the topic, I’d like to do this by comparing the behaviours of behaviourists and adaptive learning scientists.

Data and theory

Both behaviourism and adaptive learning (in its big data form) are centrally concerned with behaviour – capturing and measuring it in an objective manner. In both, experimental observation and the collection of ‘facts’ (physical, measurable, behavioural occurrences) precede any formulation of theory. John Mills’ description of behaviourists could apply equally well to adaptive learning scientists: ‘theory construction was a seesaw process whereby one began with crude outgrowths from observations and slowly created one’s theory in such a way that one could make more and more precise observations, building those observations into the theory at each stage. No behaviourist ever considered the possibility of taking existing comprehensive theories of mind and testing or refining them.’[3]

Positivism and the panopticon

Both behaviourism and adaptive learning are pragmatically positivist, believing that truth can be established by the study of facts. J. B. Watson, the founding father of behaviourism, whose article ‘Psychology as the Behaviorist Views It’ set the behaviourist ball rolling, believed that experimental observation could ‘reveal everything that can be known about human beings’[4]. Jose Ferreira of Knewton has made similar claims: ‘We get five orders of magnitude more data per user than Google does. We get more data about people than any other data company gets about people, about anything — and it’s not even close. We’re looking at what you know, what you don’t know, how you learn best. […] We know everything about what you know and how you learn best because we get so much data.’ Digital data analytics offer something that Watson couldn’t have imagined in his wildest dreams, but he would have approved.

The revolutionary science

Big data (and the adaptive learning which is a part of it) is presented as a game-changer: ‘The era of big data challenges the way we live and interact with the world. […] Society will need to shed some of its obsession for causality in exchange for simple correlations: not knowing why but only what. This overturns centuries of established practices and challenges our most basic understanding of how to make decisions and comprehend reality’[5]. But the reverence for technology and the ability to reach understandings of human beings by capturing huge amounts of behavioural data was adumbrated by Watson a century before big data became a widely used term. Watson’s 1913 lecture at Columbia University was ‘a clear pitch’[6] for the supremacy of behaviourism, and its potential as a revolutionary science.

Prediction and control

The fundamental point of both behaviourism and adaptive learning is the same. ‘The research practices and the theorizing of American behaviourists until the mid-1950s’, writes Mills[7], ‘were driven by the intellectual imperative to create theories that could be used to make socially useful predictions.’ Predictions are only useful to the extent that they can be used to manipulate behaviour. Watson states this very baldly: ‘the theoretical goal of psychology is the prediction and control of behaviour’[8]. Contemporary iterations of behaviourism, such as behavioural economics or nudge theory (see, for example, Thaler & Sunstein’s best-selling ‘Nudge’, Penguin Books, 2008), or the British government’s Behavioural Insights Unit, share the same desire to divert individual activity towards goals (selected by those with power), ‘without either naked coercion or democratic deliberation’[9]. Jose Ferreira of Knewton has an identical approach: ‘We can predict failure in advance, which means we can pre-remediate it in advance. We can say, “Oh, she’ll struggle with this, let’s go find the concept from last year’s materials that will help her not struggle with it.”’ Like the behaviourists, Ferreira makes grand claims about the social usefulness of his predict-and-control technology: ‘The end is a really simple mission. Only 22% of the world finishes high school, and only 55% finish sixth grade. Those are just appalling numbers. As a species, we’re wasting almost four-fifths of the talent we produce. […] I want to solve the access problem for the human race once and for all.’

Ethics

Because they rely on capturing large amounts of personal data, both behaviourism and adaptive learning quickly run into ethical problems. Even where informed consent is used, the subjects must remain partly ignorant of exactly what is being tested, or else there is the fear that they might adjust their behaviour accordingly. The goal is to minimise conscious understanding of what is going on[10]. For adaptive learning, the ethical problem is much greater because of the impossibility of ensuring the security of this data. Everything is hackable.

Marketing

Behaviourism was seen as a god-send by the world of advertising. J. B. Watson, after a front-page scandal about his affair with a student, and after losing his job at Johns Hopkins University, quickly found employment on Madison Avenue. ‘Scientific advertising’, as practised by the Mad Men from the 1920s onwards, was based on behaviourism. The use of data analytics by Google, Amazon, et al is a direct descendant of scientific advertising, so it is richly appropriate that adaptive learning is the child of data analytics.

[1] Hunter, D. and Smith, R. (2012) ‘Unpacking the past: “CLT” through ELTJ keywords’. ELT Journal, 66/4: 430-439.

[2] Kara, N. & Sevim, N. 2013. ‘Adaptive learning systems: beyond teaching machines’, Contemporary Educational Technology, 4(2), 108-120

[3] Mills, J. A. (1998) Control: A History of Behavioral Psychology. New York: New York University Press, p.5

[4] Davies, W. (2015) The Happiness Industry. London: Verso. p.91

[5] Mayer-Schönberger, V. & Cukier, K. (2013) Big Data. London: John Murray, p.7

[6] Davies, W. (2015) The Happiness Industry. London: Verso. p.87

[7] Mills, J. A. (1998) Control: A History of Behavioral Psychology. New York: New York University Press, p.2

[8] Watson, J. B. (1913) ‘Psychology as the Behaviorist Views It’ Psychological Review 20: 158

[9] Davies, W. (2015) The Happiness Industry. London: Verso. p.88

[10] Davies, W. (2015) The Happiness Industry. London: Verso. p.92

‘The School of Tomorrow will pay far more attention to individuals than the schools of the past. Each child will be studied and measured repeatedly from many angles, both as a basis of prescriptions for treatment and as a means of controlling development. The new education will be scientific in that it will rest on a fact basis. All development of knowledge and skill will be individualized, and classroom practice and recitation as they exist today in conventional schools will largely disappear. […] Experiments in laboratories and in schools of education [will discover] what everyone should know and the best way to learn essential elements.’

This is not, you may be forgiven for thinking, from a Knewton blog post. It was written in 1924 and comes from Otis W. Caldwell & Stuart A. Courtis, Then and Now in Education, 1845-1923 (New York: Appleton), and is cited in Petrina, S. 2002. ‘Getting a Purchase on “The School of Tomorrow” and its Constituent Commodities: Histories and Historiographies of Technologies’ History of Education Quarterly, Vol. 42, No. 1 (Spring, 2002), pp. 75-111.

In the same year that Caldwell and Courtis predicted the School of Tomorrow, Sidney Pressey ‘contrived an intelligence testing machine, which he transformed during 1924-1934 into an “Automatic Teacher.” His machine automated and individualized routine classroom processes such as testing and drilling. It could reduce the burden of testing and scoring for teachers and therapeutically treat students after examination and diagnosis’ (Petrina, p. 99). Six years later, the ‘Automatic Teacher’ was recognised as a commercial failure. For more on Pressey’s machine (including a video of Pressey demonstrating it), see Audrey Watters’ excellent piece.

Caldwell, Courtis and Pressey are worth bearing in mind when you read the predictions of people like Knewton’s Jose Ferreira. Here are a few of his ‘Then and Now’ predictions:

“Online learning” will soon be known simply as “learning.” All of the world’s education content is being digitized right now, and that process will be largely complete within five years. (01.09.2010)

There will soon be lots of wonderful adaptive learning apps: adaptive quizzing apps, flashcard apps, textbook apps, simulation apps — if you can imagine it, someone will make it. In a few years, every education app will be adaptive. Everyone will be an adaptive learning app maker. (23.04.13)

Right now about 22 percent of the people in the world graduate high school or the equivalent. That’s pathetic. In one generation we could get close to 100 percent, almost for free. (19.07.13)

95% of materials (textbooks, software, etc used for classes, tutoring, corp training…) will be purely online in 5-10 years. That’s a $200B global industry. And people predict that 50% of higher ed and 25% of K-12 will eventually be purely online classes. If so, that would create a new, $3 trillion or so industry. (25.11.2013)

Back in December 2013, in an interview with eltjam, David Liu, COO of the adaptive learning company, Knewton, described how his company’s data analysis could help ELT publishers ‘create more effective learning materials’. He focused on what he calls ‘content efficacy’[i] (he uses the word ‘efficacy’ five times in the interview), a term which he explains below:

A good example is when we look at the knowledge graph of our partners, which is a map of how concepts relate to other concepts and prerequisites within their product. There may be two or three prerequisites identified in a knowledge graph that a student needs to learn in order to understand a next concept. And when we have hundreds of thousands of students progressing through a course, we begin to understand the efficacy of those said prerequisites, which quite frankly were made by an author or set of authors. In most cases they’re quite good because these authors are actually good in what they do. But in a lot of cases we may find that one of those prerequisites actually is not necessary, and not proven to be useful in achieving true learning or understanding of the current concept that you’re trying to learn. This is interesting information that can be brought back to the publisher as they do revisions, as they actually begin to look at the content as a whole.

One commenter on the post, Tom Ewens, found the idea interesting. It could, potentially, he wrote, give us new insights into how languages are learned, much in the same way as corpora have given us new insights into how language is used. Did Knewton have any plans to disseminate the information publicly, he asked. His question remains unanswered.

At the time, Knewton had just raised $51 million (bringing their total venture capital funding to over $105 million). Now, 16 months later, Knewton have launched their new product, which they are calling Knewton Content Insights. They describe it as ‘the world’s first and only web-based engine to automatically extract statistics comparing the relative quality of content items — enabling us to infer more information about student proficiency and content performance than ever before possible’.

The software analyses particular exercises within the learning content (and particular items within them). It measures the relative difficulty of individual items by, for example, analysing how often a question is answered incorrectly and how many tries it takes each student to answer correctly. It also looks at what they call ‘exhaustion’ – how much content students are using in a particular area – and whether they run out of content. The software can correlate difficulty with exhaustion. Lastly, it analyses what they call ‘assessment quality’ – how well individual questions assess a student’s understanding of a topic.
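Here, purely as an illustration and emphatically not Knewton’s actual code, is a back-of-an-envelope sketch of the sort of statistics that description implies. The log format, the field names and the numbers are all invented.

```python
from collections import defaultdict

# Invented response log: (student, item, tries_needed, answered_correctly).
# An illustration of the kind of data described above, not Knewton's data model.
LOG = [
    ("s1", "present_perfect_q1", 1, True),
    ("s2", "present_perfect_q1", 3, True),
    ("s3", "present_perfect_q1", 2, False),
    ("s1", "present_simple_q1", 1, True),
    ("s2", "present_simple_q1", 1, True),
    ("s3", "present_simple_q1", 2, True),
]

def item_difficulty(log):
    """Relative difficulty per item: how often it is answered incorrectly
    and how many tries students need on average."""
    stats = defaultdict(lambda: {"tries": [], "wrong": 0})
    for _student, item, tries, correct in log:
        stats[item]["tries"].append(tries)
        stats[item]["wrong"] += 0 if correct else 1
    return {
        item: {
            "error_rate": s["wrong"] / len(s["tries"]),
            "mean_tries": sum(s["tries"]) / len(s["tries"]),
        }
        for item, s in stats.items()
    }

def exhaustion(log, items_available):
    """'Exhaustion': the share of the available content in an area that each
    student has already worked through (items_available is invented metadata)."""
    seen = defaultdict(set)
    for student, item, _tries, _correct in log:
        area = item.rsplit("_", 1)[0]          # e.g. 'present_perfect'
        seen[(student, area)].add(item)
    return {key: len(done) / items_available[key[1]] for key, done in seen.items()}

if __name__ == "__main__":
    print(item_difficulty(LOG))
    print(exhaustion(LOG, {"present_perfect": 4, "present_simple": 6}))
```

Correlating difficulty with exhaustion, or estimating ‘assessment quality’, is more work, but it is the same kind of counting. None of it says anything about why an item is hard or what to do about it.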

Knewton’s approach is premised on the idea that learning (in this case language learning) can be broken down into knowledge graphs, in which the information that needs to be learned can be arranged and presented hierarchically. The ‘granular’ concepts are then ‘delivered’ to the learner, and Knewton’s software can optimise the delivery. The first problem, as I explored in a previous post, is that language is a messy, complex system: it doesn’t lend itself terribly well to granularisation. The second problem is that language learning does not proceed in a linear, hierarchical way: it is also messy and complex. The third is that ‘language learning content’ cannot simply be delivered: a process of mediation is unavoidable. Are the people at Knewton unaware of the extensive literature devoted to the differences between synthetic and analytic syllabuses, or of the differences between product-oriented and process-oriented approaches? It would seem so.
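For readers who haven’t met the jargon, the sketch below shows what a knowledge graph amounts to in practice. The concepts and prerequisite links are my own invention, and real graphs are much bigger, but the logic, and the problem, are the same: once the graph has been drawn, the learner can only ever be routed along its branches.

```python
# A toy 'knowledge graph': each concept lists its prerequisites.
# The concepts and links are invented for illustration only.
KNOWLEDGE_GRAPH = {
    "present simple": [],
    "past simple": ["present simple"],
    "past participles": ["past simple"],
    "present perfect": ["past simple", "past participles"],
    "present perfect continuous": ["present perfect"],
}

def deliverable_next(mastered):
    """Concepts whose prerequisites have all been 'mastered' - the pool from
    which an adaptive engine picks what to 'deliver' next."""
    return [
        concept
        for concept, prereqs in KNOWLEDGE_GRAPH.items()
        if concept not in mastered and all(p in mastered for p in prereqs)
    ]

print(deliverable_next({"present simple"}))
# -> ['past simple']: whatever the learner actually needs, the graph decides.
```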

Knewton’s ‘Content Insights’ can only, at best, provide some sort of insight into the ‘language knowledge’ part of any learning content. It can say nothing about the work that learners do to practise language skills, since these are not susceptible to granularisation: you simply can’t take a piece of material that focuses on reading or listening and analyse its ‘content efficacy at the concept level’. Because of this, I predicted (in the post about Knowledge Graphs) that the likely focus of Knewton’s analytics would be discrete-item, sentence-level grammar (typically tenses). It turns out that I was right.

Knewton illustrate their new product with screen shots such as those below.

[Screenshot: Content Insight – Assessment]

[Screenshot: Content Insight – Exhaustion]

They give a specific example of the sort of questions their software can answer. It is: do students generally find the present simple tense easier to understand than the present perfect tense? Doh!

Knewton Content Insights may well optimise the presentation of this kind of grammar, but optimisation of this presentation and practice is highly unlikely to have any impact on the rate of language acquisition. Students are typically required to study the present perfect at every level from ‘elementary’ upwards. They have to do this not because the presentation in, say, Headway, is not optimised, but because what they really need is to spend a significantly greater proportion of their time on ‘language use’ and less on ‘language knowledge’. This is not just my personal view: it has been extensively researched, and I am unaware of any dissenting voices.

The number-crunching in Knewton Content Insights is unlikely, therefore, to lead to any actionable insights. It is, however, very likely to lead (as writer colleagues at Pearson and other publishers are finding out) to an obsession with measuring the ‘efficacy’ of material which, quite simply, cannot meaningfully be measured in this way. It is likely to distract from much more pressing issues, notably the question of how we can move further and faster away from peddling sentence-level, discrete-item grammar.

In the long run, it is reasonable to predict that the attempt to optimise the delivery of language knowledge will come to be seen as an attempt to tackle the wrong question. It will make no significant difference to language learners and language learning. In the short term, how much time and money will be wasted?

[i] ‘Efficacy’ is the buzzword around which Pearson has built its materials creation strategy, a strategy which was launched around the same time as this interview. Pearson is a major investor in Knewton.

‘Sticky’ – as in ‘sticky learning’ or ‘sticky content’ (as opposed to ‘sticky fingers’ or a ‘sticky problem’) – is itself fast becoming a sticky word. If you check out ‘sticky learning’ on Google Trends, you’ll see that it suddenly spiked in September 2011, following the slightly earlier appearance of ‘sticky content’. The historical rise in this use of the word coincides with the exponential growth in the number of references to ‘big data’.

I am often asked if adaptive learning really will take off as a big thing in language learning. Will adaptivity itself be a sticky idea? When the question is asked, people mean the big data variety of adaptive learning, rather than the much more limited adaptivity of spaced repetition algorithms, which, I think, is firmly here and here to stay. I can’t answer the question with any confidence, but I recently came across a book which suggests a useful way of approaching the question.

‘From the Ivory Tower to the Schoolhouse’ by Jack Schneider (Harvard Education Press, 2014) investigates the reasons why promising ideas from education research fail to get taken up by practitioners, and why other ideas, which are less than promising from a research or theoretical perspective, become sticky quite quickly. As an example of the former, Schneider considers Robert Sternberg’s ‘Triarchic Theory’. As an example of the latter, he devotes a chapter to Howard Gardner’s ‘Multiple Intelligences Theory’.

Schneider argues that educational ideas need to possess four key attributes in order for teachers to sit up, take notice and adopt them.

  1. perceived significance: the idea must answer a question central to the profession – offering a big-picture understanding rather than merely one small piece of a larger puzzle
  2. philosophical compatibility: the idea must clearly jibe with closely held [teacher] beliefs like the idea that teachers are professionals, or that all children can learn
  3. occupational realism: it must be possible for the idea to be put easily into immediate use
  4. transportability: the idea needs to find its practical expression in a form that teachers can access and use at the time that they need it – it needs to have a simple core that can travel through pre-service coursework, professional development seminars, independent study and peer networks

To what extent does big data adaptive learning possess these attributes? It certainly comes up trumps with respect to perceived significance. The big question that it attempts to answer is the question of how we can make language learning personalized / differentiated / individualised. As its advocates never cease to remind us, adaptive learning holds out the promise of moving away from a one-size-fits-all approach. The extent to which it can keep this promise is another matter, of course. For it to do so, it will never be enough just to offer different pathways through a digitalised coursebook (or its equivalent). Much, much more content will be needed: at least five or six times the content of a one-size-fits-all coursebook. At the moment, there is little evidence of the necessary investment into content being made (quite the opposite, in fact), but the idea remains powerful nevertheless.

When it comes to philosophical compatibility, adaptive learning begins to run into difficulties. Despite the decades of edging towards more communicative approaches in language teaching, research (e.g. the research into English teaching in Turkey described in a previous post), suggests that teachers still see explanation and explication as key functions of their jobs. They believe that they know their students best and they know what is best for them. Big data adaptive learning challenges these beliefs head on. It is no doubt for this reason that companies like Knewton make such a point of claiming that their technology is there to help teachers. But Jose Ferreira doth protest too much, methinks. Platform-delivered adaptive learning is a direct threat to teachers’ professionalism, their salaries and their jobs.

Occupational realism is more problematic still. Very, very few language teachers around the world have any experience of truly blended learning, and it’s very difficult to envisage precisely what it is that the teacher should be doing in a classroom. Publishers moving towards larger-scale blended adaptive materials know that this is a big problem, and are actively looking at ways of packaging teacher training / teacher development (with a specific focus on blended contexts) into the learner-facing materials that they sell. But the problem won’t go away. Education ministries have a long history of throwing money at technological ‘solutions’ without thinking about obtaining the necessary buy-in from their employees. It is safe to predict that this is something that is unlikely to change. Moreover, learning how to become a blended teacher is much harder than learning, say, how to make good use of an interactive whiteboard. Since there are as many different blended adaptive approaches as there are different educational contexts, there cannot be (irony of ironies) a one-size-fits-all approach to training teachers to make good use of this software.

Finally, how transportable is big data adaptive learning? Not very, is the short answer, and for the same reasons that ‘occupational realism’ is highly problematic.

Looking at things through Jack Schneider’s lens, we might be tempted to come to the conclusion that the future for adaptive learning is a rocky path, at best. But Schneider doesn’t take political or economic considerations into account. Sternberg’s ‘Triarchic Theory’ never had the OECD or the Gates Foundation backing it up. It never had millions and millions of dollars of investment behind it. As we know from political elections (and the big data adaptive learning issue is a profoundly political one), big bucks can buy opinions.

It may also prove to be the case that the opinions of teachers don’t actually matter much. If the big adaptive bucks can win the educational debate at the highest policy-making levels, teachers will be the first victims of the ‘creative disruption’ that adaptivity promises. If you don’t believe me, just look at what is going on in the U.S.

There are causes for concern, but I don’t want to sound too alarmist. Nobody really has a clue whether big data adaptivity will actually work in language learning terms. It remains more of a theory than a research-endorsed practice. And to end on a positive note, regardless of how sticky it proves to be, it might just provide the shot-in-the-arm realisation that language teachers, at their best, are a lot more than competent explainers of grammar or deliverers of gap-fills.

Jose Ferreira, the fast-talking sales rep-in-chief of Knewton, likes to dazzle with numbers. In a 2012 talk hosted by the US Department of Education, Ferreira rattles off the stats: ‘So Knewton students today, we have about 125,000, 180,000 right now, by December it’ll be 650,000, early next year it’ll be in the millions, and next year it’ll be close to 10 million. And that’s just through our Pearson partnership.’ For each of these students, Knewton gathers millions of data points every day. That, brags Ferreira, is ‘five orders of magnitude more data about you than Google has. … We literally have more data about our students than any company has about anybody else about anything, and it’s not even close.’ With just a touch of breathless exaggeration, Ferreira goes on: ‘We literally know everything about what you know and how you learn best, everything.’

The data is mined to find correlations between learning outcomes and learning behaviours, and, once correlations have been established, learning programmes can be tailored to individual students. Ferreira explains: ‘We take the combined data problem all hundred million to figure out exactly how to teach every concept to each kid. So the 100 million first shows up to learn the rules of exponents, great let’s go find a group of people who are psychometrically equivalent to that kid. They learn the same ways, they have the same learning style, they know the same stuff, because Knewton can figure out things like you learn math best in the morning between 8:40 and 9:13 am. You learn science best in 42 minute bite sizes the 44 minute mark you click right, you start missing questions you would normally get right.’

The basic premise here is that the more data you have, the more accurately you can predict what will work best for any individual learner. But how accurate is it? In the absence of any decent, independent research (or, for that matter, any verifiable claims from Knewton), how should we respond to Ferreira’s contribution to the White House Education Datapalooza?

A new book by Stephen Finlay, Predictive Analytics, Data Mining and Big Data (Palgrave Macmillan, 2014) suggests that predictive analytics are typically about 20 – 30% more accurate than humans attempting to make the same judgements. That’s pretty impressive and perhaps Knewton does better than that, but the key thing to remember is that, however much data Knewton is playing with, and however good their algorithms are, we are still talking about predictions and not certainties. If an adaptive system could predict with 90% accuracy (and the actual figure is typically much lower than that) what learning content and what learning approach would be effective for an individual learner, it would still mean that it was wrong 10% of the time. When this is scaled up to the numbers of students that use Knewton software, it means that millions of students are getting faulty recommendations. Beyond a certain point, further expansion of the data that is mined is unlikely to make any difference to the accuracy of predictions.
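The arithmetic is worth doing explicitly. The sketch below is purely hypothetical: the accuracy figures are the ones discussed above, the ten million students come from Ferreira’s own projections, and the five recommendations per student per week is an invented, deliberately modest number.

```python
# Back-of-an-envelope scaling of prediction error. All figures are
# hypothetical (see the paragraph above); none are measurements of any
# real system.
def faulty_recommendations(n_students, recs_per_student_per_week, accuracy):
    """Recommendations per week that go to the wrong place at a given
    prediction accuracy."""
    total = n_students * recs_per_student_per_week
    return total * (1 - accuracy)

for accuracy in (0.7, 0.8, 0.9):
    wrong = faulty_recommendations(
        n_students=10_000_000, recs_per_student_per_week=5, accuracy=accuracy
    )
    print(f"accuracy {accuracy:.0%}: ~{wrong:,.0f} faulty recommendations a week")
```

Even at the optimistic 90% figure, that is five million misdirected recommendations every week.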

A further problem identified by Stephen Finlay is the tendency of people in predictive analytics to confuse correlation and causation. Certain students may have learnt maths best between 8.40 and 9.13, but it does not follow that they learnt it best because they studied at that time. If strong correlations do not involve causality, then actionable insights (such as individualised course design) can be no more than an informed gamble.

Knewton’s claim that they know how every student learns best is marketing hyperbole and should set alarm bells ringing. When it comes to language learning, we simply do not know how students learn (we do not have any generally accepted theory of second language acquisition), let alone how they learn best. More data won’t help our theories of learning! Ferreira’s claim that, with Knewton, ‘every kid gets a perfectly optimized textbook, except it’s also video and other rich media dynamically generated in real time’ is equally preposterous, not least since the content of the textbook will be at least as significant as the way in which it is ‘optimized’. And, as we all know, textbooks have their faults.

Cui bono? Perhaps huge data and predictive analytics will benefit students; perhaps not. We will need to wait and find out. But Stephen Finlay reminds us that in gold rushes (and internet booms and the exciting world of Big Data) the people who sell the tools make a lot of money. Far more strike it rich selling picks and shovels to prospectors than do the prospectors. Likewise, there is a lot of money to be made selling Big Data solutions. Whether the buyer actually gets any benefit from them is not the primary concern of the sales people. (p.16/17) Which is, perhaps, one of the reasons that some sales people talk so fast.

Back in the Neanderthal days before Web 2.0, iPhones, tablets, the cloud, learning analytics and so on, Chris Bigum and Jane Kenway wrote a paper called ‘New Information Technologies and the Ambiguous Future of Schooling’. Although published in 1998, it remains relevant and can be accessed here.

They analysed the spectrum of discourse that was concerned with new technologies in education. At one end of this spectrum was a discourse community which they termed ‘boosters’. Then, as now, the boosters were far and away the dominant voices. Bigum and Kenway characterized the boosters as having an ‘unswerving faith in the technology’s capacity to improve education and most other things in society’. I discussed the boosterist discourse in my post on this blog, ‘Saving the World (adaptive marketing)’, focussing on the language of Knewton, as a representative example.

At the other end of Bigum and Kenway’s spectrum was what they termed ‘doomsters’ – ‘unqualified opponents of new technologies’ who see inevitable damage to society and education if we uncritically accept these new technologies.

Since starting this blog, I have been particularly struck by two things. The first of these is that I have had to try to restrain my aversion to the excesses of boosterist discourse – not always, it must be said, with complete success. The second is that I have found myself characterized by some people (perhaps those who have only superficially read a post or two) as an anti-technology doomsterist. At the same time, I have noticed that the debate about adaptive learning and educational technology, in general, tends to become polarized into booster and doomster camps.

To some extent, such polarization is inevitable. When a discourse is especially dominant, anyone who questions it risks finding themselves labelled as the extreme opposite. In some parts of the world, for example, any critique of neoliberal doxa is likely to be critiqued, in its turn, as ‘socialist, or worse’: ‘if you’re not with us, you’re against us’.

When it comes to adaptive learning, one can scoff at the adspeak of Knewton or the gapfills of Voxy, without having a problem with the technology per se. But, given the dominance of the booster discourse, one can’t really be neutral. Neil Selwyn (yes, him again!) suggests that the best way of making full sense of educational technology is to adopt a pessimistic perspective. ‘If nothing else,’ he writes, ‘a pessimistic view remains true to the realities of what has actually taken place with regards to higher education and digital technology over the past thirty years (to be blunt, things have clearly not been transformed or improved by digital technology so far, so why should we expect anything different in the near future?)’. This is not an ‘uncompromising pessimism’, but ‘a position akin to Gramsci’s notion of being ‘a pessimist because of intelligence, but an optimist because of will’’.

Note: The quotes from Neil Selwyn here are taken from his new book Digital Technology and the Contemporary University (2014, Abingdon: Routledge). In the autumn of this year, there will be an online conference, jointly organised by the Learning Technologies and Global Issues Special Interest Groups of IATEFL, during which I will be interviewing Neil Selwyn. I’ll keep you posted.

‘Adaptive’ is a buzzword in the marketing of educational products. Chris Dragon, President of Pearson Digital Learning, complained on the Pearson Research blog that there are so many EdTech providers claiming to be ‘adaptive’ that you have to wonder if they are not using the term too loosely. He talks about semantic satiation, the process whereby ‘temporary loss of meaning [is] experienced when one is exposed to the uninterrupted repetition of a word or phrase’. He then goes on to claim that Pearson’s SuccessMaker (‘educational software that differentiates and personalizes K-8 reading and math instruction’) is the real adaptive McCoy.

‘Adaptive’ is also a buzzword in marketing itself. Google the phrase ‘adaptive marketing’ and you’ll quickly come up with things like Adaptive Marketing Set to Become the Next Big Thing or Adaptive marketing changes the name of the game. Adaptive marketing is what you might expect: the use of big data to track customers and enable ‘marketers to truly tailor their activities in rapid and unparalleled ways to meet their customers’ interests and needs’ (Advertising Age, February 2012). It strikes me that this sets up an extraordinary potential loop: students using adaptive learning software that generates a huge amount of data which could then be used by adaptive marketers to sell other products.

I decided it might be interesting to look at the way one adaptive software company markets itself. Knewton, for example, claims its products are more adaptive than anybody else’s.

Knewton clearly spend a lot of time and money on their marketing efforts. There is their blog and a magazine called ‘The Knerd’. There are very regular interviews by senior executives with newspapers, magazines and other blogs. There are very frequent conference presentations. All of these are easily accessible, so it is quite easy to trace Knewton’s marketing message. And even easier when they are so open about it. David Liu, Chief Operating Officer, has given an interview in which he outlines his company’s marketing strategy. Knewton, he says, focuses on driving organic interests and traffic: ‘To that end, we have a digital marketing group that’s highly skilled and focused on creating content marketing so users, influencers and partners alike can understand our product, the value we bring and how to work with us. We also use a lot of advanced digital and online lead generation type of techniques to target potential partners and users to be able to get the right people in those discussions.’

The message consists of four main strands, which I will call EdTech, EduCation, EduBusiness and EdUtopia. Depending on the audience, the marketing message will be adapted, with one or other of these strands given more prominence.

1 EdTech

Hardly surprisingly, Knewton focuses on what they call their ‘heavy duty infrastructure for an adaptive world’. They are very proud of their adaptive credentials, their ‘rigorous data science’. The basic message is that ‘only Knewton provides true personalization for any student, anywhere’. They are not shy of using technical jargon and providing technical details to prove their point.

2 EduCation

The key message here is effectiveness (Knewton also uses the term ‘efficacy’). Statistics about growth in pass rates and reduction in withdrawal rates at institutions are cited. At the same time, teachers are directly appealed to with statements like ‘as a teacher, you get tools you never had before’ and ‘teachers will be able to add their own content, upload it, tag it and seamlessly use it’. Accompanying this fairly direct approach is a focus on buzz words and phrases which can be expected to resonate with teachers. Recent blog posts include in their headlines: ‘supporting creativity’, ‘student-centred learning’, ‘peer mentoring’, ‘formative evaluation’, ‘continuous assessment’, ‘learning styles’, ‘scaffolding instruction’, ‘real-world examples’, ‘enrichment’ or ‘lifelong learning’.

There is an apparent openness in Knewton’s readiness to communicate with the rest of the world. The blog invites readers to start discussions and post comments. Almost no one does. But one blog post by Jose Ferreira called ‘Rebooting Learning Styles’  provoked a flurry of highly critical and well-informed responses. These remain unanswered. A similar thing happened when David Liu did a guest post at eltjam. A flurry of criticism, but no response. My interpretation of this is that Knewton are a little scared of engaging in debate and of having their marketing message hijacked.

3 EduBusiness

Here’s a sample of ways that Knewton speak to potential customers and investors:

an enormous new market of online courses that bring high margin revenue and rapid growth for institutions that start offering them early and declining numbers for those who do not.

Because Knewton is trying to disrupt the traditional industry, we have nothing to lose—we’re not cannibalising ourselves—by partnering.

Unlike other groups dabbling in adaptive learning, Knewton doesn’t force you to buy pre-fabricated products using our own content. Our platform makes it possible for anyone — publishers, instructors, app developers, and others — to build her own adaptive applications using any content she likes.

The data platform industries tend to have a winner-take-all dynamic. You take that and multiply it by a very, very high-stakes product and you get an even more winner-take-all dynamic.

4 EdUtopia

I personally find this fourth strand the most interesting. Knewton are not unique in adopting this line, but it is a sign of their ambition that they choose to do so. All of the quotes that follow are from Jose Ferreira:

We can’t improve education by curing poverty. We have to cure poverty by improving education.

Edtech is our best hope to narrow — at scale — the Achievement Gap between rich and poor. Yet, for a time, it will increase that gap. Society must push past that unfortunate moment and use tech-assisted outcome improvements as the rationale to drive spending in poor schools.

I started Knewton to do my bit to fix the world’s education system. Education is among the most important problems we face, because it’s the ultimate “gateway” problem. That is, it drives virtually every global problem that we face as a species. But there’s a flip-side: if we can fix education, then we’ll dramatically improve the other problems, too. So in fact, I started Knewton not just to help fix education but to try to fix just about everything.

What if the girl who invents the cure for ovarian cancer is growing up in a Cambodian fishing village and otherwise wouldn’t have a chance? As distribution of technology continues to improve, adaptive learning will give her and similar students countless opportunities that they otherwise wouldn’t have.

But our ultimate vision – and what really motivated me to start the company – is to solve the access problem for the human race once and for all. Only 22% of the world finishes high school; only 55% finish sixth grade. This is a preventable tragedy. Adaptive learning can give students around the world access to high-quality education they wouldn’t otherwise have.

Personalization is one of the key leitmotifs in current educational discourse. The message is clear: personalization is good, one-size-fits-all is bad. ‘How to personalize learning and how to differentiate instruction for diverse classrooms are two of the great educational challenges of the 21st century,’ write Trilling and Fadel, leading lights in the Partnership for 21st Century Skills (P21)[1]. Barack Obama has repeatedly sung the praises of, and the need for, personalized learning and his policies are fleshed out by his Secretary of Education, Arne Duncan, in speeches and on the White House blog: ‘President Obama described the promise of personalized learning when he launched the ConnectED initiative last June. Technology is a powerful tool that helps create robust personalized learning environments.’ In the UK, personalized learning has been government mantra for over 10 years. The EU, UNESCO, OECD, the Gates Foundation – everyone, it seems, is singing the same tune.

Personalization, we might all agree, is a good thing. How could it be otherwise? No one these days is going to promote depersonalization or impersonalization in education. What exactly it means, however, is less clear. According to a UNESCO Policy Brief[2], the term was first used in the context of education in the 1970s by Victor García Hoz, a senior Spanish educationalist and member of Opus Dei at the University of Madrid. This UNESCO document then points out that ‘unfortunately, up to this date there is no single definition of this concept’.

In ELT, the term has been used in a very wide variety of ways. These range from the far-reaching ideas of people like Gertrude Moskowitz, who advocated a fundamentally learner-centred form of instruction, to the much more banal practice of getting students to produce a few personalized examples of an item of grammar they have just studied. See Scott Thornbury’s A-Z blog for an interesting discussion of personalization in ELT.

As with education in general, and ELT in particular, ‘personalization’ is also bandied around the adaptive learning table. Duolingo advertises itself as the opposite of one-size-fits-all, and as an online equivalent of the ‘personalized education you can get from a small classroom teacher or private tutor’. Babbel offers a ‘personalized review manager’ and Rosetta Stone’s Classroom online solution allows educational institutions ‘to shift their language program away from a ‘one-size-fits-all-curriculum’ to a more individualized approach’. As far as I can tell, the personalization in these examples is extremely restricted. The language syllabus is fixed and although users can take different routes up the ‘skills tree’ or ‘knowledge graph’, they are totally confined by the pre-determination of those trees and graphs. This is no more personalized learning than asking students to make five true sentences using the present perfect. Arguably, it is even less!

This is not, in any case, the kind of personalization that Obama, the Gates Foundation, Knewton, et al have in mind when they conflate adaptive learning with personalization. Their definition is much broader and summarised in the US National Education Technology Plan of 2010: ‘Personalized learning means instruction is paced to learning needs, tailored to learning preferences, and tailored to the specific interests of different learners. In an environment that is fully personalized, the learning objectives and content as well as the method and pace may all vary (so personalization encompasses differentiation and individualization).’ What drives this is the big data generated by the students’ interactions with the technology (see ‘Part 4: big data and analytics’ of ‘The Guide’ on this blog).

What remains unclear is exactly how this might work in English language learning. Adaptive software can only personalize to the extent that the content of an English language learning programme allows it to do so. It may be true that each student using adaptive software ‘gets a more personalised experience no matter whose content the student is consuming’, as Knewton’s David Liu puts it. But the potential for any really meaningful personalization depends crucially on the nature and extent of this content, along with the possibility of variable learning outcomes. For this reason, we are not likely to see any truly personalized large-scale adaptive learning programs for English any time soon.

Nevertheless, technology is now central to personalized language learning. A good learning platform, which allows learners to connect to ‘social networking systems, podcasts, wikis, blogs, encyclopedias, online dictionaries, webinars, online English courses, various apps’, etc (see Alexandra Chistyakova’s eltdiary), means that personalization could be more easily achieved.

For the time being, at least, adaptive learning systems would seem to work best for ‘those things that can be easily digitized and tested like math problems and reading passages’, writes Barbara Bray. Or low-level vocabulary and grammar McNuggets, we might add. Ideal for, say, ‘English Grammar in Use’. But meaningfully personalized language learning?

[Image: student data and personalization]

‘Personalized learning’ sounds very progressive, a utopian educational horizon, and it sounds like it ought to be the future of ELT (as Cleve Miller argues). It also sounds like a pretty good slogan on which to hitch the adaptive bandwagon. But somehow, just somehow, I suspect that when it comes to adaptive learning we’re more likely to see more testing, more data collection and more depersonalization.

[1] Trilling, B. & Fadel, C. 2009 21st Century Skills (San Francisco: Wiley) p.33

[2] Personalized learning: a new ICT-enabled education approach, UNESCO Institute for Information Technologies in Education, Policy Brief March 2012 iite.unesco.org/pics/publications/en/files/3214716.pdf