Archive for the ‘adaptive’ Category

Who can tell where a blog post might lead? Over six years ago I wrote about adaptive professional development for teachers. I imagined the possibility of bite-sized, personalized CPD material. Now my vision is becoming real.

For the last two years, I have been working with a start-up that uses GPT-3 large language models to generate text. GPT-3 has recently been in the news because of the phenomenal success of the newly released ChatGPT. The technology certainly has a wow factor, but it has been around for a while now. ChatGPT can generate texts of various genres on any topic (with a few exceptions, such as current affairs) and the results are impressive. Imagine, then, how much more impressive the results can be when the kind of text is limited by genre and topic, allowing the software to be trained much more reliably.
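To give a flavour of what ‘limited by genre and topic’ means in practice, here is a minimal sketch of a genre-constrained prompt builder. Everything in it – the template, the function, the commented-out client call – is illustrative, not our actual pipeline:

```python
# Sketch of genre- and topic-constrained text generation.
# The template, the function and the commented-out client call are all
# illustrative: any completion API could consume the resulting prompt.

GENRE_TEMPLATE = (
    "Write a blog post for English language teachers.\n"
    "Topic: {topic}\n"
    "Register: practical and friendly, 800-1000 words.\n"
    "Structure: hook, 3-5 numbered tips, short conclusion.\n"
)

def build_prompt(topic: str) -> str:
    """Pin down genre and register, leaving the model free
    only within the stated constraints."""
    return GENRE_TEMPLATE.format(topic=topic)

prompt = build_prompt("improving the well-being of English teachers")
# post = llm_client.complete(prompt)  # hypothetical API call
print(prompt.splitlines()[1])
# Topic: improving the well-being of English teachers
```

The narrower the template, the less the model can wander off-genre, which is precisely why constrained generation is easier to make reliable than open-ended chat.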

This is what we have been working on. We took as our training corpus a huge collection of teacher development texts in English language teaching that we could access online: blogs from all the major publishers, personal blogs, transcriptions of recorded conference presentations and webinars, magazine articles directed at teachers, along with books from publishers such as DELTA and Pavilion ELT. We identified topics that seemed to be of current interest and asked our AI to generate blog posts. Later, we were able to get suggestions of topics from the software itself.

We then contacted a number of teachers and trainers who contribute to the publishers’ blogs and contracted them, first, to act as human trainers for the software, and, second, to agree to their names being used as the ‘authors’ of the blog posts we generated. In one or two cases, the authors thought that they had actually written the posts themselves! Next we submitted these posts to the marketing departments of the publishers (who run the blogs). Over twenty were submitted in this way, including:

  • What do teachers need to know about teaching 21st century skills in the English classroom?
  • 5 top ways of improving the well-being of English teachers
  • Teaching leadership skills in the primary English classroom
  • How can we promote eco-literacy in the English classroom?
  • My 10 favourite apps for English language learners

We couldn’t, of course, tell the companies that AI had been used to write the copy, but once we were sure that nobody had ever spotted the true authorship of this material, we were ready to move to the next stage of the project. We approached the marketing executives of two publishers and showed how we could generate teacher development material at a fraction of the current cost and in a fraction of the time. Partnerships were quickly signed.

Blog posts were just the beginning. We knew that we could use the same technology to produce webinar scripts, using learning design insights to optimise the webinars. The challenge we faced was that webinars need a presenter. We experimented with using animations, but feedback indicated that participants like to see a face. This is eminently doable, using our contracted authors and deepfake technology, but costs are still prohibitive. It remains cheaper and easier to have our authors deliver the scripts we have generated. This will no doubt change before too long.

The next obvious step was to personalize the development material. Large publishers collect huge amounts of data about visitors to their sites using embedded pixels. It is also relatively cheap and easy to triangulate this data with information from the customer databases and from activity on social media (especially Facebook). We know what kinds of classes people teach, and we know which aspects of teacher development they are interested in.

Publishers have long been interested in personalizing marketing material, and the possibility of extending this to the delivery of real development content is clearly exciting. (See below an email I received this week from the good folks at OUP marketing.)

Earlier this year one of our publishing partners began sending links to personalized materials of the kind we were able to produce with AI. The experiment was such a success that we have already taken it one stage further.

One of the most important clients of our main publishing partner employs hundreds of teachers to deliver online English classes using courseware that has been tailored to the needs of the institution. With so many freelance teachers working for them, along with high turnover of staff, there is inevitably a pressing need for teacher training to ensure optimal delivery. Since the classes are all online, it is possible to capture precisely what is going on. Using an AI-driven tool that was inspired by the Visible Classroom app (informed by the work of John Hattie), we can identify the developmental needs of the teachers. What kinds of activities are they using? How well do they exploit the functionalities of the platform? What can be said about the quality of their teacher talk? We combine this data with everything else and our proprietary algorithms determine what kinds of training materials each teacher receives. It doesn’t stop there. We can also now evaluate the effectiveness of these materials by analysing the learning outcomes of the students.
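In its simplest form, the mapping from observed classroom metrics to training materials can be thought of as a set of threshold rules. The metric names, thresholds and module titles below are invented for illustration; the real system combines many more signals:

```python
# Toy recommender: map observed teaching metrics to CPD materials.
# Metric names, thresholds and module titles are illustrative only.

def recommend_training(metrics: dict) -> list:
    """Return a list of training modules triggered by simple thresholds."""
    recommendations = []
    if metrics.get("teacher_talk_ratio", 0) > 0.6:
        recommendations.append("Reducing teacher talking time")
    if metrics.get("platform_features_used", 0) < 3:
        recommendations.append("Getting more from the platform")
    if metrics.get("activity_variety", 0) < 2:
        recommendations.append("Varying interaction patterns")
    return recommendations

print(recommend_training({"teacher_talk_ratio": 0.75,
                          "platform_features_used": 2,
                          "activity_variety": 4}))
# ['Reducing teacher talking time', 'Getting more from the platform']
```

Closing the loop – evaluating whether the recommended materials actually improved student outcomes – is where the proprietary part of such a system lies.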

Teaching efficacy can be massively increased, whilst the training budget of the institution can be slashed. If all goes well, there will be no further need for teacher trainers at all. We won’t be stopping there. If results such as these can be achieved in teacher training, there’s no reason why the same technology cannot be leveraged for the teaching itself. Most of our partner’s teaching and testing materials are now quickly and very cheaply generated using GPT-3.5. If you want to see how this is done, check out the work of edugo.AI (a free trial is available), which can generate gapfills and comprehension test questions in a flash. As for replacing the teachers, we’re getting there. For the time being, though, it’s more cost-effective to use freelancers and to train them up.

On 21 January, I attended the launch webinar of DEFI (the Digital Education Futures Initiative), an initiative of the University of Cambridge, which seeks to work ‘with partners in industry, policy and practice to explore the field of possibilities that digital technology opens up for education’. The opening keynote speaker was Andreas Schleicher, head of education at the OECD. The OECD’s vision of the future of education is outlined in Schleicher’s book, ‘World Class: How to Build a 21st-Century School System’, freely available from the OECD, but his presentation for DEFI offers a relatively short summary. A recording is available here, and this post will take a closer look at some of the things he had to say.

Schleicher is a statistician and the coordinator of the OECD’s PISA programme. Along with other international organisations, such as the World Economic Forum and the World Bank (see my post here), the OECD promotes the global economization and corporatization of education, ‘based on the [human capital] view that developing work skills is the primary purpose of schooling’ (Spring, 2015: 14). In other words, the proper function of education is seen as meeting the needs of global corporate interests. In the early days of the COVID-19 pandemic, with the impact of school closures becoming very visible, Schleicher expressed concern about the disruption to human capital development, but thought it was ‘a great moment’: ‘the current wave of school closures offers an opportunity for experimentation and for envisioning new models of education’. Every cloud has a silver lining, and the pandemic has been a godsend for private companies selling digital learning (see my post about this here) and for those who want to reimagine education in a more corporate way.

Schleicher’s presentation for DEFI was a good opportunity to look again at the way in which organisations like the OECD are shaping educational discourse (see my post about the EdTech imaginary and ELT).

He begins by suggesting that, as a result of the development of digital technology (Google, YouTube, etc.) literacy is ‘no longer just about extracting knowledge’. PISA reading scores, he points out, have remained more or less static since 2000, despite the fact that we have invested (globally) more than 15% extra per student in this time. Only 9% of all 15-year-old students in the industrialised world can distinguish between fact and opinion.

To begin with, one might argue about the reliability and validity of the PISA reading scores (Berliner, 2020). One might also argue, as did a collection of 80 education experts in a letter to the Guardian, that the scores themselves are responsible for damaging global education, raising further questions about their validity. One might argue that the increased investment was spent in the wrong way (on hardware and software, for example, rather than on teacher training), because the advice of organisations like the OECD has been uncritically followed. And the statistic about critical reading skills is fairly meaningless unless it is compared to comparable metrics over a long time span: there is no reason to believe that susceptibility to fake news is any more of a problem now than it was, say, one hundred years ago. Nor is there any reason to believe that education can solve the fake-news problem (see my post about fake news and critical thinking here). These are more than just quibbles, but the main point that Schleicher is making is that education needs to change.

Schleicher next presents a graph which is designed to show that the amount of time that students spend studying correlates poorly with the amount they learn. His interest is in the (lack of) productivity of educational activities in some contexts. He goes on to argue that there is greater productivity in educational activities when learners have a growth mindset, implying (but not stating) that mindset interventions in schools would lead to a more productive educational environment.

Schleicher appears to confuse what students learn with the things they have learnt that have been measured by PISA. The two are obviously rather different, since PISA is only interested in a relatively small subset of the possible learning outcomes of schooling. His argument for growth mindset interventions hinges on the assumption that such interventions will lead to gains in reading scores. However, his graph demonstrates a correlation between growth mindset and reading scores, not a causal relationship. A causal relationship has not been clearly and empirically demonstrated (see my post about growth mindsets here), and recent work by Carol Dweck and her associates (e.g. Yeager et al., 2016), as well as other researchers (e.g. McPartlan et al., 2020), indicates that the relationship between gains in learning outcomes and mindset interventions is extremely complex.

Schleicher then turns to digitalisation and briefly discusses the positive and negative affordances of technology. He eulogizes platform companies before showing a slide designed to demonstrate that (in the workplace) there is a strong correlation between ICT use and learning. He concludes: ‘the digital world of learning is a hugely empowering world of learning’.

A brief paraphrase of this very disingenuous part of the presentation would be: technology can be good and bad, but I’ll only focus on the former. The discourse appears balanced, but it is anything but.

During this segment, Schleicher argues that technology is empowering, citing ‘the most successful companies these days, they’re not created by a big industry, they’re created by a big idea’. This is plainly at odds with the facts. In the case of Alphabet and Facebook, profits did not follow from a ‘big idea’: the ideas changed as the companies evolved.

Schleicher then sketches a picture of an unpredictable future (pandemics, climate change, AI, cyber wars, etc.) as a way of framing the importance of being open (and resilient) to different futures and how we respond to them. He offers two different kinds of response: maintenance of the status quo, or ‘outsourcing’ of education. The pandemic, he suggests, has made more countries aware that the latter is the way forward.

In his discussion of the maintenance of the status quo, Schleicher talks about the maintenance of educational monopolies. By this, he must be referring to state monopolies on education: a favoured way for neoliberals to refer to state-sponsored education. But the extent to which, in 2021 in many OECD countries, the state has any kind of monopoly on education is very open to debate. Privatization is advancing fast. Even in 2015, the World Education Forum’s ‘Final Report’ wrote that ‘the scale of engagement of nonstate actors at all levels of education is growing and becoming more diversified’. Schleicher goes on to talk about ‘large, bureaucratic school systems’, suggesting that such systems cannot be sufficiently agile, adaptive or responsive. ‘We should ask this question,’ he says, but his own answer to it is totally transparent: ‘changing education can be like moving graveyards’ is the title of the next slide. Education needs to be more like the health sector, he claims, which has been able to develop a COVID vaccine in such a short period of time. We need an education industry that underpins change in the same way as the health industry underpins vaccine development. In case his message isn’t yet clear enough, I’ll spell it out: education needs to be privatized still further.

Schleicher then turns to the ways in which he feels that digital technology can enhance learning. These include the use of AR, VR and AI. Technology, he says, can make learning so much more personalized: ‘the computer can study how you study, and then adapt learning so that it is much more granular, so much more adaptive, so much more responsive to your learning style’. He moves on to the field of assessment, again singing the praises of technology in the ways that it can offer new modes of assessment and ‘increase the reliability of machine rating for essays’. Through technology, we can ‘reunite learning and assessment’. Moving on to learning analytics, he briefly mentions privacy issues, before enthusing at greater length about the benefits of analytics.

Learning styles? Really? The reliability of machine scoring of essays? How reliable exactly? Data privacy as an area worth only a passing mention? The use of sensors to measure learners’ responses to learning experiences? Any pretence of balance appears now to have been shed. This is in-your-face sales talk.

Next up is a graph which purports to show the number of teachers in OECD countries who use technology for learners’ project work. This is followed by another graph showing the number of teachers who have participated in face-to-face and online CPD. The point of this is to argue that online CPD needs to become more common.

I couldn’t understand what point he was trying to make with the first graph. For the second, it is surely the quality of the CPD, rather than the channel, that matters.

Schleicher then turns to two further possible responses of education to unpredictable futures: ‘schools as learning hubs’ and ‘learn-as-you-go’. In the latter, digital infrastructure replaces physical infrastructure. Neither is explored in any detail. The main point appears to be that we should consider these possibilities, weighing up as we do so the risks and the opportunities (see slide below).

Useful ways to frame questions about the future of education, no doubt, but Schleicher is operating with a set of assumptions about the purpose of education, which he chooses not to explore. His fundamental assumption – that the primary purpose of education is to develop human capital in and for the global economy – is not one that I would share. However, if you do take that view, then privatization, economization, digitalization and the training of social-emotional competences are all reasonable corollaries, and the big question about the future concerns how to go about this in a more efficient way.

Schleicher’s (and the OECD’s) views are very much in accord with the libertarian values of the right-wing philanthro-capitalist foundations of the United States (the Gates Foundation, the Broad Foundation and so on), funded by Silicon Valley and hedge-fund managers. It is to the US that we can trace the spread and promotion of these ideas, but it is also, perhaps, to the US that we can now turn in search of hope for an alternative educational future. The privatization / disruption / reform movement in the US has stalled in recent years, as it has become clear that it failed to deliver on its promise of improved learning. The resistance to privatized and digitalized education is chronicled in Diane Ravitch’s latest book, ‘Slaying Goliath’ (2020). School closures during the pandemic may have been ‘a great moment’ for Schleicher, but for most of us, they have underscored the importance of face-to-face free public schooling. Now, with the electoral victory of Joe Biden and the appointment of a new US Secretary of Education (still to be confirmed), we are likely to see, for the first time in decades, an education policy that is firmly committed to public schools. The US is by far the largest contributor to the budget of the OECD – more than twice any other nation. Perhaps a rethink of the OECD’s educational policies will soon be in order?

References

Berliner D.C. (2020) The Implications of Understanding That PISA Is Simply Another Standardized Achievement Test. In Fan G., Popkewitz T. (Eds.) Handbook of Education Policy Studies. Springer, Singapore. https://doi.org/10.1007/978-981-13-8343-4_13

McPartlan, P., Solanki, S., Xu, D. & Sato, B. (2020) Testing Basic Assumptions Reveals When (Not) to Expect Mindset and Belonging Interventions to Succeed. AERA Open, 6 (4): 1–16 https://journals.sagepub.com/doi/pdf/10.1177/2332858420966994

Ravitch, D. (2020) Slaying Goliath: The Passionate Resistance to Privatization and the Fight to Save America’s Public Schools. New York: Vintage Books

Schleicher, A. (2018) World Class: How to Build a 21st-Century School System. Paris: OECD Publishing https://www.oecd.org/education/world-class-9789264300002-en.htm

Spring, J. (2015) Globalization of Education 2nd Edition. New York: Routledge

Yeager, D. S., et al. (2016) Using design thinking to improve psychological interventions: The case of the growth mindset during the transition to high school. Journal of Educational Psychology, 108(3), 374–391. https://doi.org/10.1037/edu0000098

At the start of the last decade, ELT publishers were worried, Macmillan among them. The financial crash of 2008 led to serious difficulties, not least in their key Spanish market. In 2011, Macmillan’s parent company was fined £11.3 million for corruption. Under new ownership, restructuring was a constant. At the same time, Macmillan ELT was getting ready to move from its Oxford headquarters to new premises in London, a move which would inevitably lead to the loss of a sizable proportion of its staff. On top of that, Macmillan, like the other ELT publishers, was aware that changes in the digital landscape (the first 3G iPhone had appeared in June 2008 and wifi access was spreading rapidly around the world) meant that they needed to shift away from the old print-based model. With her finger on the pulse, Caroline Moore wrote an article in October 2010 entitled ‘No Future? The English Language Teaching Coursebook in the Digital Age’. The publication (at the start of the decade) and runaway success of the online ‘Touchstone’ course, from arch-rivals Cambridge University Press, meant that Macmillan needed to change fast if they were to avoid being left behind.

Macmillan already had a platform, Campus, but it was generally recognised as being clunky and outdated, and something new was needed. In the summer of 2012, Macmillan brought in two new executives – people who could talk the ‘creative-disruption’ talk and who believed in the power of big data to shake up English language teaching and publishing. At the time, the idea of big data was beginning to reach public consciousness: ‘Big Data: A Revolution that Will Transform how We Live, Work, and Think’, by Viktor Mayer-Schönberger and Kenneth Cukier, was a major bestseller in 2013 and 2014. ‘Big data’ was the ‘hottest trend’ in technology, peaking in Google Trends in October 2014. See the graph below.

Google Trends: ‘big data’

Not long after taking up their positions, the two executives began negotiations with Knewton, an American adaptive learning company. Knewton’s technology promised to gather colossal amounts of data on students using Knewton-enabled platforms. Its founder, Jose Ferreira, bragged that Knewton had ‘more data about our students than any company has about anybody else about anything […] We literally know everything about what you know and how you learn best, everything’. This data would, it was claimed, enable publishers to multiply, by orders of magnitude, the efficacy of learning materials, allowing publishers, like Macmillan, to provide a truly personalized and optimal offering to learners using their platform.

The contract between Macmillan and Knewton was agreed in May 2013 ‘to build next-generation English Language Learning and Teaching materials’. Perhaps fearful of being left behind in what was seen to be a winner-takes-all market (Pearson already had a financial stake in Knewton), Cambridge University Press duly followed suit, signing a contract with Knewton in September of the same year, in order ‘to create personalized learning experiences in [their] industry-leading ELT digital products’. Things moved fast because, by the start of 2014, when Macmillan’s new catalogue appeared, customers were told to ‘watch out for the Big Tree’, Macmillan’s new platform, which would be powered by Knewton. ‘The power that will come from this world of adaptive learning takes my breath away’, wrote the international marketing director.

Not a lot happened next, at least outwardly. In the following year, 2015, the Macmillan catalogue again told customers to ‘look out for the Big Tree’ which would offer ‘flexible blended learning models’ which could ‘give teachers much more freedom to choose what they want to do in the class and what they want the students to do online outside of the classroom’.

Macmillan catalogue, 2015

But behind the scenes, everything was going wrong. It had become clear that a linear model of language learning, which was a necessary prerequisite of the Knewton system, simply did not lend itself to anything which would be vaguely marketable in established markets. Skills development, not least the development of so-called 21st century skills, which Macmillan was pushing at the time, would not be facilitated by collecting huge amounts of data and algorithms offering personalized pathways. Even if it could, teachers weren’t ready for it, and the projections for platform adoptions were beginning to seem very over-optimistic. Costs were spiralling. Pushed to meet unrealistic deadlines for a product that was totally ill-conceived in the first place, in-house staff were suffering, and this was made worse by what many staffers thought was a toxic work environment. By the end of 2014 (so, before the copy for the 2015 catalogue had been written), the two executives had gone.

For some time previously, skeptics had been joking that Macmillan had been barking up the wrong tree, and by the time that the 2016 catalogue came out, the ‘Big Tree’ had disappeared without trace. The problem was that so much time and money had been thrown at this particular tree that not enough had been left to develop new course materials (for adults). The whole thing had been a huge cock-up of an extraordinary kind.

Cambridge, too, lost interest in their Knewton connection, but were fortunate (or wise) not to have invested so much energy in it. Language learning was only ever a small part of Knewton’s portfolio, and the company had raised over $180 million in venture capital. Its founder, Jose Ferreira, had been a master of marketing hype, but the business model was not delivering any better than the educational side of things. Pearson pulled out. In September of 2019, Knewton was sold for something under $17 million, with investors taking a hit of over $160 million. My heart bleeds.

It was clear, from very early on (see, for example, my posts from 2014 here and here) that Knewton’s product was little more than what Michael Feldstein called ‘snake oil’. Why and how could so many people fall for it for so long? Why and how will so many people fall for it again in the coming decade, although this time it won’t be ‘big data’ that does the seduction, but AI (which kind of boils down to the same thing)? The former Macmillan executives are still at the game, albeit in new companies and talking a slightly modified talk, and Jose Ferreira (whose new venture has already raised $3.7 million) is promising to revolutionize education with a new start-up which ‘will harness the power of technology to improve both access and quality of education’ (thanks to Audrey Watters for the tip). Investors may be desperate to find places to spread their portfolio, but why do the rest of us lap up the hype? It’s a question to which I will return.


I was intrigued to learn earlier this year that Oxford University Press had launched a new online test of English language proficiency, called the Oxford Test of English (OTE). At the conference where I first heard about it, I was struck by the fact that the presentation of the OUP-sponsored plenary speaker was entitled ‘The Power of Assessment’ and dealt with formative assessment / assessment for learning. Oxford clearly want to position themselves as serious competitors to Pearson and Cambridge English in the testing business.

The brochure for the exam kicks off with a gem of a marketing slogan, ‘Smart. Smarter. SmarTest’ (geddit?), and the next few pages give us all the key information.

Faster and more flexible

‘Traditional language proficiency tests’ is presumably intended to refer to the main competition (Pearson and Cambridge English). Cambridge First takes, in total, 3½ hours; the Pearson Test of English Academic takes 3 hours. The OTE takes, in total, 2 hours and 5 minutes. It can be taken, in theory, on any day of the year, although this depends on the individual Approved Test Centres, and, again in theory, it can be booked as little as 14 days in advance. Results should take only two weeks to arrive. Further flexibility is offered in the way that candidates can pick ’n’ choose which of the four skills they want to have tested, just one or all four, although, as an incentive to go the whole hog, they will only get a ‘Certificate of Proficiency’ if they do all four.

A further incentive to do all four skills at the same time can be found in the price structure. One centre in Spain is currently offering the test for one single skill at €41.50, but do the whole lot, and it will only set you back €89. For a high-stakes test, this is cheap. In the UK right now, both Cambridge First and Pearson Academic cost in the region of £150, and IELTS a bit more than that. So, faster, more flexible and cheaper … Oxford means business.

Individual experience

The ‘individual experience’ on the next page of the brochure is pure marketing guff. This is, after all, a high-stakes, standardised test. It may be true that ‘the Speaking and Writing modules provide randomly generated tasks, making the overall test different each time’, but there can only be a certain number of permutations. What’s more, in ‘traditional tests’, like Cambridge First, where there is a live examiner or two, an individualised experience is unavoidable.

More interesting to me is the reference to adaptive technology. According to the brochure, ‘The Listening and Reading modules are adaptive, which means the test difficulty adjusts in response to your answers, quickly finding the right level for each test taker. This means that the questions are at just the right level of challenge, making the test shorter and less stressful than traditional proficiency tests’.

My curiosity piqued, I decided to look more closely at the Reading module. I found one practice test online, which is the same as the demo that is available at the OTE website. Unfortunately, this example is not adaptive: it is at B1 level. The actual test records scores between 51 and 140, corresponding to levels A2, B1 and B2.

Test scores

The tasks in the Reading module are familiar from coursebooks and other exams: multiple choice, multiple matching and gapped texts.

Reading tasks

According to the exam specifications, these tasks are designed to measure the following skills:

  • Reading to identify main message, purpose, detail
  • Expeditious reading to identify specific information, opinion and attitude
  • Reading to identify text structure, organizational features of a text
  • Reading to identify attitude / opinion, purpose, reference, the meanings of words in context, global meaning

The ability to perform these skills depends, ultimately, on the candidate’s knowledge of vocabulary and grammar, as can be seen in the examples below.

Task 1 / Task 2

How exactly, I wonder, does the test difficulty adjust in response to the candidate’s answers? The algorithm that is used depends on measures of the difficulty of the test items. If these items are to be made harder or easier, the only significant way that I can see of doing this is by making the key vocabulary lower- or higher-frequency. This, in turn, is only possible if vocabulary and grammar have been tagged as being at a particular level. The best-known tools for doing this have been developed by Pearson (with the GSE Teacher Toolkit) and Cambridge English Profile. To the best of my knowledge, Oxford does not yet have a tool of this kind (at least, none that is publicly available). However, the data that OUP will accumulate from OTE scripts and recordings will be invaluable in building a database which their lexicographers can use in developing such a tool.
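In outline, the simplest version of such an adjustment mechanism is a staircase: serve an item, move up a level after a correct answer, down after an incorrect one. A minimal sketch (purely illustrative – operational adaptive tests typically use item response theory, and the five-level bank and five-item test here are invented, not how the OTE works):

```python
# Staircase-style adaptive item selection: up a level after a correct
# answer, down after an incorrect one (levels clamped to 1-5).
# Illustrative only; real adaptive tests use item response theory.

def run_adaptive_test(item_bank, answer, start_level=3, n_items=5):
    """item_bank: dict mapping level -> list of items.
    answer: callable taking an item, returning True if answered correctly."""
    level, administered = start_level, []
    for _ in range(n_items):
        item = item_bank[level].pop(0)      # next unused item at this level
        administered.append((level, item))
        correct = answer(item)
        level = min(max(level + (1 if correct else -1), 1), 5)
    return administered

bank = {lvl: [f"item-{lvl}-{i}" for i in range(5)] for lvl in range(1, 6)}
trace = run_adaptive_test(bank, answer=lambda item: True)
print([lvl for lvl, _ in trace])  # [3, 4, 5, 5, 5]
```

A candidate who answers everything correctly is quickly pushed to the hardest items, which is the sense in which adaptivity ‘finds the right level’; the open question raised above is how the item difficulties themselves are calibrated.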

Even when a data-driven (and numerically precise) tool is available for modifying the difficulty of test items, I still find it hard to understand how the adaptivity will impact on the length or the stress of the reading test. The Reading module is only 35 minutes long and contains only 22 items. Anything that is significantly shorter must surely impact on the reliability of the test.
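There is a standard way of quantifying this trade-off between test length and reliability: the Spearman-Brown prophecy formula, which predicts the reliability of a test whose length is changed by a factor n. (The 0.85 starting figure below is an invented illustration, not a published OTE statistic.)

```python
# Spearman-Brown prophecy formula: predicted reliability of a test
# whose length is changed by factor n (n < 1 means a shortened test).

def spearman_brown(reliability: float, n: float) -> float:
    return (n * reliability) / (1 + (n - 1) * reliability)

# If a full-length test had reliability 0.85, halving it (n = 0.5)
# would drop the predicted reliability to roughly 0.74.
print(round(spearman_brown(0.85, 0.5), 2))  # 0.74
```

In other words, shortening a test always costs reliability unless the items administered are more informative for the individual candidate, which is exactly the claim that adaptivity has to make good on.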

My conclusion from this is that the adaptive element of the Reading and Listening modules in the OTE is less important to the test itself than it is to building a sophisticated database (not dissimilar to the GSE Teacher Toolkit or Cambridge English Profile). The value of this will be found, in due course, in calibrating all OUP materials. The OTE has already been aligned to the Oxford Online Placement Test (OOPT) and, presumably, coursebooks will soon follow. This, in turn, will facilitate a vertically integrated business model, like Pearson and CUP, where everything from placement test, to coursework, to formative assessment, to final proficiency testing can be on offer.

Back in the middle of the last century, the first interactive machines for language teaching appeared. Previously, there had been phonograph discs and wire recorders (Ornstein, 1968: 401), but these had never really taken off. This time, things were different. Buoyed by a belief in the power of technology, along with the need (following the Soviet Union’s successful Sputnik programme) to demonstrate the pre-eminence of the United States’ technological expertise, the interactive teaching machines that were used in programmed instruction promised to revolutionize language learning (Valdman, 1968: 1). From coast to coast, ‘tremors of excitement ran through professional journals and conferences and department meetings’ (Kennedy, 1967: 871). The new technology was driven by hard science, supported and promoted by one of the most well-known and respected psychologists and public intellectuals of the day (Skinner, 1961).

In classrooms, the machines acted as powerfully effective triggers in generating situational interest (Hidi & Renninger, 2006). Even more exciting than the mechanical teaching machines were the computers that were appearing on the scene. ‘Lick’ Licklider, a pioneer in interactive computing at the Advanced Research Projects Agency in Arlington, Virginia, developed an automated drill routine for learning German by hooking up a computer, two typewriters, an oscilloscope and a light pen (Noble, 1991: 124). Students loved it, and some would ‘go on and on, learning German words until they were forced by scheduling to cease their efforts’. Researchers called the seductive nature of the technology ‘stimulus trapping’, and Licklider hoped that ‘before [the student] gets out from under the control of the computer’s incentives, [they] will learn enough German words’ (Noble, 1991: 125).

With many of the developed economies of the world facing a critical shortage of teachers, 'an urgent pedagogical emergency' (Hof, 2018), the new approach was considered extremely efficient and a means of equalising opportunity in schools across the country. It was 'here to stay: [it] appears destined to make progress that could well go beyond the fondest dreams of its originators […] an entire industry is just coming into being and significant sales and profits should not be too long in coming' (Kozlowski, 1961: 47).

Unfortunately, however, researchers and entrepreneurs had massively underestimated the significance of novelty effects. The triggered situational interest of the machines did not lead to intrinsic individual motivation. Students quickly tired of, and eventually came to dislike, programmed instruction and the machines that delivered it (McDonald et al., 2005: 89). What's more, the machines were expensive, and 'research studies conducted on its effectiveness showed that the differences in achievement did not constantly or substantially favour programmed instruction over conventional instruction' (Saettler, 2004: 303). Newer technologies, with better 'stimulus trapping', were appearing. Programmed instruction lost its backing and disappeared, leaving as traces only its interest in clearly defined learning objectives, the measurement of learning outcomes and a concern with the efficiency of learning approaches.

Hot on the heels of programmed instruction came the language laboratory. Futuristic in appearance, not entirely unlike the deck of the starship USS Enterprise which launched at around the same time, language labs captured the public imagination and promised to explore the final frontiers of language learning. As with the earlier teaching machines, students were initially enthusiastic. Even today, when language labs are introduced into contexts where they may be perceived as new technology, they can lead to high levels of initial motivation (e.g. Ramganesh & Janaki, 2017).

Given the huge investments into these labs, it's unfortunate that initial interest waned fast. By 1969, many of these rooms had turned into '"electronic graveyards," sitting empty and unused, or perhaps somewhat glorified study halls to which students grudgingly repair to don headphones, turn down the volume, and prepare the next period's history or English lesson, unmolested by any member of the foreign language faculty' (Turner, 1969: 1, quoted in Roby, 2003: 527). 'Many second language students shudder[ed] at the thought of entering into the bowels of the "language laboratory" to practice and perfect the acoustical aerobics of proper pronunciation skills. Visions of sterile white-walled, windowless rooms, filled with endless bolted-down rows of claustrophobic metal carrels, and overseen by a humorless lab director, evoke[d] fear in the hearts of even the most stout-hearted prospective second-language learners' (Wiley, 1990: 44).

By the turn of this century, language labs had mostly gone, consigned to oblivion by the appearance of yet newer technology: the internet, laptops and smartphones. Education had been on the brink of being transformed through new learning technologies for decades (Laurillard, 2008: 1), but this time it really was different. It wasn’t just one technology that had appeared, but a whole slew of them: ‘artificial intelligence, learning analytics, predictive analytics, adaptive learning software, school management software, learning management systems (LMS), school clouds. No school was without these and other technologies branded as ‘superintelligent’ by the late 2020s’ (Macgilchrist et al., 2019). The hardware, especially phones, was ubiquitous and, therefore, free. Unlike teaching machines and language laboratories, students were used to using the technology and expected to use their devices in their studies.

A barrage of publicity, mostly paid for by the industry, surrounded the new technologies. These would 'meet the demands of Generation Z', the new generation of students, now cast as consumers, who 'were accustomed to personalizing everything'. AR, VR, interactive whiteboards, digital projectors and so on made it easier to 'create engaging, interactive experiences'. The 'New Age' technologies made learning fun and easy, 'bringing enthusiasm among the students, improving student engagement, enriching the teaching process, and bringing liveliness in the classroom'. On top of that, they allowed huge amounts of data to be captured and sold, whilst tracking progress and attendance. In any case, resistance to digital technology, said more than one language teaching expert, was pointless (Styring, 2015).

At the same time, technology companies increasingly took on ‘central roles as advisors to national governments and local districts on educational futures’ and public educational institutions came to be ‘regarded by many as dispensable or even harmful’ (Macgilchrist et al., 2019).

But, as it turned out, the students of Generation Z were not as uniformly enthusiastic about the new technology as had been assumed, and resistance to digital, personalized delivery in education was not long in coming. In November 2018, high school students at Brooklyn's Secondary School for Journalism staged a walkout in protest at their school's use of Summit Learning, a web-based platform promoting personalized learning developed by Facebook. They complained that the platform resulted in coursework requiring students to spend much of their day in front of a computer screen, that it made it easy to cheat by looking up answers online, and that some of their teachers didn't have the proper training for the curriculum (Leskin, 2018). Besides, their school was in a deplorable state of disrepair, especially the toilets. There were similar protests in Kansas, where students staged sit-ins, supported by their parents, one of whom complained that 'we're allowing the computers to teach and the kids all looked like zombies' before pulling his son out of the school (Bowles, 2019). In Pennsylvania and Connecticut, some schools stopped using Summit Learning altogether, following protests.

But the resistance did not last. Protesters were accused of being nostalgic conservatives and educationalists kept largely quiet, fearful of losing their funding from the Chan Zuckerberg Initiative (Facebook) and other philanthro-capitalists. The provision of training in grit, growth mindset, positive psychology and mindfulness (also promoted by the technology companies) was ramped up, and eventually the disaffected students became more quiescent. Before long, the data-intensive, personalized approach, relying on the tools, services and data storage of particular platforms had become ‘baked in’ to educational systems around the world (Moore, 2018: 211). There was no going back (except for small numbers of ultra-privileged students in a few private institutions).

By the middle of the twenty-second century (Asimov sets his story in 2157), most students, of all ages, studied with interactive screens in the comfort of their homes. Algorithmically-driven content, with personalized, adaptive tests, had become the norm, but the technology occasionally went wrong, leading to some frustration. One day, two young children discovered a book in their attic. Made of paper with yellow, crinkly pages, it was one where 'the words stood still instead of moving the way they were supposed to'. The book recounted the experience of schools in the distant past, where 'all the kids from the neighbourhood came', sitting in the same room with a human teacher, studying the same things 'so they could help one another on the homework and talk about it'. Margie, the younger of the children at 11 years old, was engrossed in the book when she received a nudge from her personalized learning platform to return to her studies. But Margie was reluctant to go back to her fractions. She 'was thinking about how the kids must have loved it in the old days. She was thinking about the fun they had' (Asimov, 1951).

References

Asimov, I. 1951. The Fun They Had. Accessed September 20, 2019. http://web1.nbed.nb.ca/sites/ASD-S/1820/J%20Johnston/Isaac%20Asimov%20-%20The%20fun%20they%20had.pdf

Bowles, N. 2019. ‘Silicon Valley Came to Kansas Schools. That Started a Rebellion’ The New York Times, April 21. Accessed September 20, 2019. https://www.nytimes.com/2019/04/21/technology/silicon-valley-kansas-schools.html

Hidi, S. & Renninger, K.A. 2006. ‘The Four-Phase Model of Interest Development’ Educational Psychologist, 41 (2), 111 – 127

Hof, B. 2018. ‘From Harvard via Moscow to West Berlin: educational technology, programmed instruction and the commercialisation of learning after 1957’ History of Education, 47 (4): 445-465

Kennedy, R.H. 1967. ‘Before using Programmed Instruction’ The English Journal, 56 (6), 871 – 873

Kozlowski, T. 1961. ‘Programmed Teaching’ Financial Analysts Journal, 17 (6): 47 – 54

Laurillard, D. 2008. Digital Technologies and their Role in Achieving our Ambitions for Education. London: Institute for Education.

Leskin, P. 2018. ‘Students in Brooklyn protest their school’s use of a Zuckerberg-backed online curriculum that Facebook engineers helped build’ Business Insider, 12.11.18 Accessed 20 September 2019. https://www.businessinsider.de/summit-learning-school-curriculum-funded-by-zuckerberg-faces-backlash-brooklyn-2018-11?r=US&IR=T

McDonald, J. K., Yanchar, S. C. & Osguthorpe, R.T. 2005. ‘Learning from Programmed Instruction: Examining Implications for Modern Instructional Technology’ Educational Technology Research and Development, 53 (2): 84 – 98

Macgilchrist, F., Allert, H. & Bruch, A. 2019. 'Students and society in the 2020s. Three future 'histories' of education and technology' Learning, Media and Technology, https://www.tandfonline.com/doi/full/10.1080/17439884.2019.1656235

Moore, M. 2018. Democracy Hacked. London: Oneworld

Noble, D. D. 1991. The Classroom Arsenal. London: The Falmer Press

Ornstein, J. 1968. ‘Programmed Instruction and Educational Technology in the Language Field: Boon or Failure?’ The Modern Language Journal, 52 (7), 401 – 410

Ramganesh, E. & Janaki, S. 2017. 'Attitude of College Teachers towards the Utilization of Language Laboratories for Learning English' Asian Journal of Social Science Studies, 2 (1): 103 – 109

Roby, W.B. 2003. ‘Technology in the service of foreign language teaching: The case of the language laboratory’ In D. Jonassen (ed.), Handbook of Research on Educational Communications and Technology, 2nd ed.: 523 – 541. Mahwah, NJ.: Lawrence Erlbaum Associates

Saettler, P. 2004. The Evolution of American Educational Technology. Greenwich, Conn.: Information Age Publishing

Skinner, B. F. 1961. ‘Teaching Machines’ Scientific American, 205(5), 90-107

Styring, J. 2015. Engaging Generation Z. Cambridge English webinar 2015 https://www.youtube.com/watch?time_continue=4&v=XCxl4TqgQZA

Valdman, A. 1968. ‘Programmed Instruction versus Guided Learning in Foreign Language Acquisition’ Die Unterrichtspraxis / Teaching German, 1 (2), 1 – 14.

Wiley, P. D. 1990. 'Language labs for 1990: User-friendly, expandable and affordable' Media & Methods, 27 (1): 44 – 47


Jenny Holzer, Protect me from what I want

It's hype time again. Spurred on, no doubt, by the current spate of books and articles about AIED (artificial intelligence in education), the IATEFL Learning Technologies SIG is organising an online event on the topic in November of this year. Currently, the most visible online references to AI in language learning are related to Glossika, basically a language learning system that uses spaced repetition, whose marketing department has realised that references to AI might help sell the product. They're not alone – see, for example, Knowble, which I reviewed earlier this year.
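Since spaced repetition is the actual engine behind products like Glossika, it is worth seeing how little 'AI' it requires. Below is a simplified sketch in the spirit of the classic SM-2 scheduling algorithm: an item's review interval stretches after each successful recall and resets after a failure. The function names and the pared-down ease-factor formula are mine, not any vendor's code.

```python
from datetime import date, timedelta

def review(interval_days, ease, quality):
    """One SM-2-style update. `quality` is the learner's recall grade (0-5).
    A failed recall (below 3) resets the interval; a success stretches it
    by the ease factor, which itself drifts with performance."""
    if quality < 3:                        # forgotten: start over, penalise ease
        return 1, max(1.3, ease - 0.2)
    ease = max(1.3, ease + 0.1 - (5 - quality) * 0.08)
    if interval_days == 0:                 # first successful review
        return 1, ease
    if interval_days == 1:                 # second successful review
        return 6, ease
    return round(interval_days * ease), ease

# A new word starts with no interval and the conventional ease factor of 2.5
interval, ease = 0, 2.5
for grade in (5, 5, 4):                    # three successful reviews
    interval, ease = review(interval, ease, grade)
next_due = date.today() + timedelta(days=interval)
```

After three good recalls the word is not due again for over two weeks — the whole 'personalization' consists of this per-item bookkeeping, which is why the AI label on such systems deserves some scepticism.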

In the wider world of education, where AI has made greater inroads than in language teaching, every day brings more stuff: How artificial intelligence is changing teaching, 32 Ways AI is Improving Education, How artificial intelligence could help teachers do a better job, etc., etc. There's a full-length book by Anthony Seldon, The Fourth Education Revolution: will artificial intelligence liberate or infantilise humanity? (2018, University of Buckingham Press) – one of the most poorly researched and badly edited books on education I've ever read, although that won't stop it selling – and, no surprises here, there's a Pearson-commissioned report called Intelligence Unleashed: An argument for AI in Education (2016), which is available free.

Common to all these publications is the claim that AI will radically change education. When it comes to language teaching, a similar claim has been made by Donald Clark (described by Anthony Seldon as an education guru but perhaps best-known to many in ELT for his demolition of Sugata Mitra). In 2017, Clark wrote a blog post for Cambridge English (now unavailable) entitled How AI will reboot language learning, and a more recent version of this post, called AI has and will change language learning forever (sic) is available on Clark’s own blog. Given the history of the failure of education predictions, Clark is making bold claims. Thomas Edison (1922) believed that movies would revolutionize education. Radios were similarly hyped in the 1940s and in the 1960s it was the turn of TV. In the 1980s, Seymour Papert predicted the end of schools – ‘the computer will blow up the school’, he wrote. Twenty years later, we had the interactive possibilities of Web 2.0. As each technology failed to deliver on the hype, a new generation of enthusiasts found something else to make predictions about.

But is Donald Clark onto something? Developments in AI and computational linguistics have recently resulted in enormous progress in machine translation. Impressive advances in automatic speech recognition and generation, coupled with the power that can be packed into a handheld device, mean that we can expect some re-evaluation of the value of learning another language. Stephen Heppell, a specialist in the use of ICT in education at Bournemouth University, has said: 'Simultaneous translation is coming, making language teachers redundant. Modern languages teaching in future may be more about navigating cultural differences' (quoted by Seldon, p.263). Well, maybe, but this is not Clark's main interest.

Less a matter of opinion and much closer to the present day is the issue of assessment. AI is becoming ubiquitous in language testing. Cambridge, Pearson, TELC, Babbel and Duolingo are all using or exploring AI in their testing software, and we can expect to see this increase. Current, paper-based systems of testing subject knowledge are, according to Rosemary Luckin and Kristen Weatherby, outdated, ineffective, time-consuming, the cause of great anxiety and can easily be automated (Luckin, R. & Weatherby, K. 2018. ‘Learning analytics, artificial intelligence and the process of assessment’ in Luckin, R. (ed.) Enhancing Learning and Teaching with Technology, 2018. UCL Institute of Education Press, p.253). By capturing data of various kinds throughout a language learner’s course of study and by using AI to analyse learning development, continuous formative assessment becomes possible in ways that were previously unimaginable. ‘Assessment for Learning (AfL)’ or ‘Learning Oriented Assessment (LOA)’ are two terms used by Cambridge English to refer to the potential that AI offers which is described by Luckin (who is also one of the authors of the Pearson paper mentioned earlier). In practical terms, albeit in a still very limited way, this can be seen in the CUP course ‘Empower’, which combines CUP course content with validated LOA from Cambridge Assessment English.

Will this reboot or revolutionise language teaching? Probably not and here's why. AIED systems need to operate with what is called a 'domain knowledge model'. This specifies what is to be learnt and includes an analysis of the steps that must be taken to reach that learning goal. Some subjects (especially STEM subjects) 'lend themselves much more readily to having their domains represented in ways that can be automatically reasoned about' (du Boulay, D. et al., 2018. 'Artificial intelligences and big data technologies to close the achievement gap' in Luckin, R. (ed.) Enhancing Learning and Teaching with Technology, 2018. UCL Institute of Education Press, p.258). This is why most AIED systems have been built to teach these areas. Languages are rather different. We simply do not have a domain knowledge model, except perhaps for the very lowest levels of language learning (and even that is highly questionable). Language learning is probably not, or not primarily, about acquiring subject knowledge. Debate still rages about the relationship between explicit language knowledge and language competence. AI-driven formative assessment will likely focus most on explicit language knowledge, as does most current language teaching. This will not reboot or revolutionise anything. It will more likely reinforce what is already happening: a model of language learning that assumes there is a strong interface between explicit knowledge and language competence. It is not a model that is shared by most SLA researchers.

So, one thing that AI can do (and is doing) for language learning is to improve the algorithms that determine the way that grammar and vocabulary are presented to individual learners in online programs. AI-optimised delivery of ‘English Grammar in Use’ may lead to some learning gains, but they are unlikely to be significant. It is not, in any case, what language learners need.

AI, Donald Clark suggests, can offer personalised learning. Precisely what kind of personalised learning this might be, and whether or not this is a good thing, remains unclear. A 2015 report funded by the Gates Foundation found that we currently lack evidence about the effectiveness of personalised learning. We do not know which aspects of personalised learning (learner autonomy, individualised learning pathways and instructional approaches, etc.) or which combinations of these will lead to gains in language learning. The complexity of the issues means that we may never have a satisfactory explanation. You can read my own exploration of the problems of personalised learning starting here.

What's left? Clark suggests that chatbots are one area with 'huge potential'. I beg to differ and I explained my reasons eighteen months ago. Chatbots work fine in very specific domains. As Clark says, they can be used for 'controlled practice', but 'controlled practice' means practice of specific language knowledge, the practice of limited conversational routines, for example. It could certainly be useful, but more than that? Taking things a stage further, Clark then suggests more holistic speaking and listening practice with Amazon Echo, Alexa or Google Home. If and when the day comes that we have general, as opposed to domain-specific, AI, chatting with one of these tools would open up vast new possibilities. Unfortunately, general AI does not exist, and until then Alexa and co will remain a poor substitute for human-human interaction (which is readily available online, anyway). Incidentally, AI could be used to form groups of online language learners to carry out communicative tasks – 'the aim might be to design a grouping of students all at a similar cognitive level and of similar interests, or one where the participants bring different but complementary knowledge and skills' (Luckin, R., Holmes, W., Griffiths, M. & Forcier, L.B. 2016. Intelligence Unleashed: An argument for AI in Education. London: Pearson, p.26).
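The grouping idea in that Pearson quotation needs nothing more exotic than sorting. Here is a hypothetical sketch of both groupings the report describes – similar-level and complementary – using a single proficiency score per learner; real systems would presumably work from richer learner profiles, but the logic is the same.

```python
def similar_groups(learners, size):
    """Groups of learners at a similar level: sort by score, then chunk."""
    ranked = sorted(learners, key=lambda x: x[1])
    return [ranked[i:i + size] for i in range(0, len(ranked), size)]

def mixed_groups(learners, n_groups):
    """Complementary groups: deal the ranked learners out round-robin,
    so each group spans the ability range."""
    ranked = sorted(learners, key=lambda x: x[1], reverse=True)
    groups = [[] for _ in range(n_groups)]
    for i, learner in enumerate(ranked):
        groups[i % n_groups].append(learner)
    return groups

# (name, proficiency score) pairs -- invented data for illustration
students = [("A", 90), ("B", 40), ("C", 75), ("D", 55), ("E", 85), ("F", 60)]
similar = similar_groups(students, 2)
mixed = mixed_groups(students, 2)
```

Calling either function on richer data (interests, skill vectors) rather than one score is where 'AI' might genuinely add something; the grouping itself is trivial.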

Predictions about the impact of technology on education have a tendency to be made by people with a vested interest in the technologies. Edison was a businessman who had invested heavily in motion pictures. Donald Clark is an edtech entrepreneur whose company, Wildfire, uses AI in online learning programs. Stephen Heppell is executive chairman of LP+, a company currently developing a Chinese language learning community for 20 million Chinese school students. The reporting of AIED is almost invariably in websites that are paid for, in one way or another, by edtech companies. Predictions need, therefore, to be treated sceptically. Indeed, the safest prediction we can make about hyped educational technologies is that inflated expectations will be followed by disillusionment, before the technology finds a smaller niche.

 

Introduction

Allowing learners to determine the amount of time they spend studying, and, therefore (in theory at least) the speed of their progress is a key feature of most personalized learning programs. In cases where learners follow a linear path of pre-determined learning items, it is often the only element of personalization that the programs offer. In the Duolingo program that I am using, there are basically only two things that can be personalized: the amount of time I spend studying each day, and the possibility of jumping a number of learning items by ‘testing out’.

Self-regulated learning, or self-pacing as it is commonly known, has enormous intuitive appeal. It is clear that different people learn different things at different rates. We've known for a long time that 'the developmental stages of child growth and the individual differences among learners make it impossible to impose a single and 'correct' sequence on all curricula' (Stern, 1983: 439). It therefore follows that it makes even less sense for a group of students (typically determined by age) to be obliged to follow the same curriculum at the same pace in a one-size-fits-all approach. We have probably all experienced, as students, the frustration of being behind, or ahead of, the rest of our colleagues in a class. One student who suffered from the lockstep approach was Sal Khan, founder of the Khan Academy. He has described how he was fed up with having to follow an educational path dictated by his age and how, as a result, individual pacing became an important element in his educational approach (Ferster, 2014: 132-133). As teachers, we have all experienced the challenges of teaching a piece of material that is too hard or too easy for many of the students in the class.

Historical attempts to facilitate self-paced learning

An interest in self-paced learning can be traced back to the growth of mass schooling and age-graded classes in the 19th century. In fact, the 'factory model' of education has never existed without critics who saw the inherent problems of imposing uniformity on groups of individuals. These critics were not marginal characters. Charles Eliot (president of Harvard from 1869 to 1909), for example, described uniformity as 'the curse of American schools' and argued that 'the process of instructing students in large groups is a quite sufficient school evil without clinging to its twin evil, an inflexible program of studies' (Grittner, 1975: 324).

Attempts to develop practical solutions were not uncommon and these are reasonably well-documented. One of the earliest, which ran from 1884 to 1894, was launched in Pueblo, Colorado and was ‘a self-paced plan that required each student to complete a sequence of lessons on an individual basis’ (Januszewski, 2001: 58-59). More ambitious was the Burk Plan (at its peak between 1912 and 1915), named after Frederick Burk of the San Francisco State Normal School, which aimed to allow students to progress through materials (including language instruction materials) at their own pace with only a limited amount of teacher presentations (Januszewski, ibid.). Then, there was the Winnetka Plan (1920s), developed by Carlton Washburne, an associate of Frederick Burk and the superintendent of public schools in Winnetka, Illinois, which also ‘allowed learners to proceed at different rates, but also recognised that learners proceed at different rates in different subjects’ (Saettler, 1990: 65). The Winnetka Plan is especially interesting in the way it presaged contemporary attempts to facilitate individualized, self-paced learning. It was described by its developers in the following terms:

A general technique [consisting] of (a) breaking up the common essentials curriculum into very definite units of achievement, (b) using complete diagnostic tests to determine whether a child has mastered each of these units, and, if not, just where his difficulties lie and, (c) the full use of self-instructive, self corrective practice materials. (Washburne, C., Vogel, M. & W.S. Gray. 1926. A Survey of the Winnetka Public Schools. Bloomington, IL: Public School Press)

Not dissimilar was the Dalton (Massachusetts) Plan in the 1920s which also used a self-paced program to accommodate the different ability levels of the children and deployed contractual agreements between students and teachers (something that remains common educational practice around the world). There were many others, both in the U.S. and other parts of the world.

The personalization of learning through self-pacing was not, therefore, a minor interest. Between 1910 and 1924, nearly 500 articles can be documented on the subject of individualization (Grittner, 1975: 328). In just three years (1929 – 1932) of one publication, The Education Digest, there were fifty-one articles dealing with individual instruction and sixty-three entries treating individual differences (Chastain, 1975: 334). Foreign language teaching did not feature significantly in these early attempts to facilitate self-pacing, but see the Burk Plan described above. Only a handful of references to language learning and self-pacing appeared in articles between 1916 and 1924 (Grittner, 1975: 328).

Disappointingly, none of these initiatives lasted long. Both costs and management issues had been significantly underestimated. Plans such as those described above were seen as progress, but not the hoped-for solution. Problems included the fact that the materials themselves were not individualized and instructional methods were too rigid (Pendleton, 1930: 199). However, concomitant with the interest in individualization (mostly, self-pacing), came the advent of educational technology.

Sidney L. Pressey, the inventor of what was arguably the first teaching machine, was inspired by his experiences with schoolchildren in rural Indiana in the 1920s, where he 'was struck by the tremendous variation in their academic abilities and how they were forced to progress together at a slow, lockstep pace that did not serve all students well' (Ferster, 2014: 52). Although Pressey failed in his attempts to promote his teaching machines, he laid the foundation stones for the synthesis of individualization and technology.

Pressey may be seen as the direct precursor of programmed instruction, now closely associated with B. F. Skinner (see my post on Behaviourism and Adaptive Learning). It is a quintessentially self-paced approach and is described by John Hattie as follows:

Programmed instruction is a teaching method of presenting new subject matter to students in graded sequence of controlled steps. A book version, for example, presents a problem or issue, then, depending on the student’s answer to a question about the material, the student chooses from optional answers which refers them to particular pages of the book to find out why they were correct or incorrect – and then proceed to the next part of the problem or issue. (Hattie, 2009: 231)

Programmed instruction was mostly used for the teaching of mathematics, but it is estimated that 4% of programmed instruction programs were for foreign languages (Saettler, 1990: 297). It flourished in the 1960s and 1970s, but even by 1968 foreign language instructors were sceptical (Valdman, 1968). A survey carried out by the Center for Applied Linguistics revealed then that only about 10% of foreign language teachers at college and university reported the use of programmed materials in their departments (Valdman, 1968: 1).

Research studies had failed to demonstrate the effectiveness of programmed instruction (Saettler, 1990: 303). Teachers were often resistant and students were often bored, finding ‘ingenious ways to circumvent the program, including the destruction of their teaching machines!’ (Saettler, ibid.).

In the case of language learning, there were other problems. For programmed instruction to have any chance of working, it was necessary to specify rigorously the initial and terminal behaviours of the learner so that the intermediate steps leading from the former to the latter could be programmed. As Valdman (1968: 4) pointed out, this is highly problematic when it comes to languages (a point that I have made repeatedly in this blog). In addition, students missed the personal interaction that conventional instruction offered, got bored and lacked motivation (Valdman, 1968: 10).

Programmed instruction worked best when teachers were very enthusiastic, but perhaps the most significant lesson to be learned from the experiments was that it was 'a difficult, time-consuming task to introduce programmed instruction' (Saettler, 1990: 299). It entailed changes to well-established practices and attitudes, and for such changes to succeed there must be consideration of the social, political, and economic contexts. As Saettler (1990: 306) notes, 'without the support of the community and the entire teaching staff, sustained innovation is unlikely'. In this light, Hattie's research finding that 'when comparisons are made between many methods, programmed instruction often comes near the bottom' (Hattie, 2009: 231) comes as no great surprise.

Just as programmed instruction was in its death throes, the world of language teaching discovered individualization. Launched as a deliberate movement in the early 1970s at the Stanford Conference (Altman & Politzer, 1971), it was a ‘systematic attempt to allow for individual differences in language learning’ (Stern, 1983: 387). Inspired, in part, by the work of Carl Rogers, this ‘humanistic turn’ was a recognition that ‘each learner is unique in personality, abilities, and needs. Education must be personalized to fit the individual; the individual must not be dehumanized in order to meet the needs of an impersonal school system’ (Disick, 1975:38). In ELT, this movement found many adherents and remains extremely influential to this day.

In language teaching more generally, the movement lost impetus after a few years, ‘probably because its advocates had underestimated the magnitude of the task they had set themselves in trying to match individual learner characteristics with appropriate teaching techniques’ (Stern, 1983: 387). What precisely was meant by individualization was never adequately defined or agreed (a problem that remains to the present time). What was left was self-pacing. In 1975, it was reported that ‘to date the majority of the programs in second-language education have been characterized by a self-pacing format […]. Practice seems to indicate that ‘individualized’ instruction is being defined in the class room as students studying individually’ (Chastain, 1975: 344).

Lessons to be learned

This brief account shows that historical attempts to facilitate self-pacing have largely been characterised by failure. The starting point of all these attempts remains as valid as ever, but it is clear that practical solutions are less than simple. To avoid the insanity of doing the same thing over and over again and expecting different results, we should perhaps try to learn from the past.

One of the greatest challenges that teachers face is dealing with different levels of ability in their classes. In any blended scenario where the online component has an element of self-pacing, the challenge will be magnified as ability differentials are likely to grow rather than decrease as a result of the self-pacing. Bart Simpson hit the nail on the head in a memorable line: ‘Let me get this straight. We’re behind the rest of the class and we’re going to catch up to them by going slower than they are? Coo coo!’ Self-pacing runs into immediate difficulties when it comes up against standardised tests and national or state curriculum requirements. As Ferster observes, ‘the notion of individual pacing [remains] antithetical to […] a graded classroom system, which has been the model of schools for the past century. Schools are just not equipped to deal with students who do not learn in age-processed groups, even if this system is clearly one that consistently fails its students’ (Ferster, 2014: 90-91).

Ability differences are less problematic if the teacher focusses primarily on communicative tasks in F2F time (as opposed to more teaching of language items), but this is a big ‘if’. Many teachers are unsure of how to move towards a more communicative style of teaching, not least in large classes in compulsory schooling. Since there are strong arguments that students would benefit from a more communicative, less transmission-oriented approach anyway, it makes sense to focus institutional resources on equipping teachers with the necessary skills, as well as providing support, before a shift to a blended, more self-paced approach is implemented.

Such issues are less important in private institutions, which are not age-graded, and in self-study contexts. However, even here there may be reasons to proceed cautiously before buying into self-paced approaches. Self-pacing is closely tied to autonomous goal-setting (which I will look at in more detail in another post). Both require a degree of self-awareness at a cognitive and emotional level (McMahon & Oliver, 2001), but not all students have such self-awareness (Magill, 2008). If students do not have the appropriate self-regulatory strategies and are simply left to pace themselves, there is a chance that they will ‘misregulate their learning, exerting control in a misguided or counterproductive fashion and not achieving the desired result’ (Kirschner & van Merriënboer, 2013: 177). Before launching students on a path of self-paced language study, ‘thought needs to be given to the process involved in users becoming aware of themselves and their own understandings’ (McMahon & Oliver, 2001: 1304). Without training and support provided both before and during the self-paced study, the chances of dropping out are high (as we see from the very high attrition rate in language apps).

However well-intentioned, many past attempts to facilitate self-pacing have also suffered from the poor quality of the learning materials. The focus was more on the technology of delivery, and this remains the case today, as many posts on this blog illustrate. Contemporary companies offering language learning programmes show relatively little interest in the content of the learning (take Duolingo as an example). Few app developers show signs of investing in experienced curriculum specialists or materials writers. Glossy photos, contemporary videos, good UX and clever gamification, all of which become dull and repetitive after a while, do not compensate for poorly designed materials.

Over forty years ago, a review of self-paced learning concluded that the evidence on its benefits was inconclusive (Allison, 1975: 5). Nothing has changed since. For some people, in some contexts, for some of the time, self-paced learning may work. Claims that go beyond that cannot be substantiated.

References

Allison, E. 1975. ‘Self-Paced Instruction: A Review’ The Journal of Economic Education 7 / 1: 5 – 12

Altman, H.B. & Politzer, R.L. (eds.) 1971. Individualizing Foreign Language Instruction: Proceedings of the Stanford Conference, May 6 – 8, 1971. Washington, D.C.: Office of Education, U.S. Department of Health, Education, and Welfare

Chastain, K. 1975. ‘An Examination of the Basic Assumptions of “Individualized” Instruction’ The Modern Language Journal 59 / 7: 334 – 344

Disick, R.S. 1975. Individualizing Language Instruction: Strategies and Methods. New York: Harcourt Brace Jovanovich

Ferster, B. 2014. Teaching Machines. Baltimore: Johns Hopkins University Press

Grittner, F. M. 1975. ‘Individualized Instruction: An Historical Perspective’ The Modern Language Journal 59 / 7: 323 – 333

Hattie, J. 2009. Visible Learning. Abingdon, Oxon.: Routledge

Januszewski, A. 2001. Educational Technology: The Development of a Concept. Englewood, Colorado: Libraries Unlimited

Kirschner, P. A. & van Merriënboer, J. J. G. 2013. ‘Do Learners Really Know Best? Urban Legends in Education’ Educational Psychologist 48 / 3: 169 – 183

Magill, D. S. 2008. ‘What Part of Self-Paced Don’t You Understand?’ University of Wisconsin 24th Annual Conference on Distance Teaching & Learning Conference Proceedings.

McMahon, M. & Oliver, R. 2001. ‘Promoting self-regulated learning in an on-line environment’ in C. Montgomerie & J. Viteli (eds.), Proceedings of World Conference on Educational Multimedia, Hypermedia and Telecommunications 2001 (pp. 1299-1305). Chesapeake, VA: AACE

Pendleton, C. S. 1930. ‘Personalizing English Teaching’ Peabody Journal of Education 7 / 4: 195 – 200

Saettler, P. 1990. The Evolution of American Educational Technology. Denver: Libraries Unlimited

Stern, H.H. 1983. Fundamental Concepts of Language Teaching. Oxford: Oxford University Press

Valdman, A. 1968. ‘Programmed Instruction versus Guided Learning in Foreign Language Acquisition’ Die Unterrichtspraxis / Teaching German 1 / 2: 1 – 14

 

About two and a half years ago when I started writing this blog, there was a lot of hype around adaptive learning and the big data which might drive it. Two and a half years is a long time in technology. A look at Google Trends suggests that interest in adaptive learning has been pretty static for the last couple of years. It’s interesting to note that 3 of the 7 lettered points on this graph are Knewton-related media events (including the most recent, A, which is Knewton’s latest deal with Hachette) and 2 of them concern McGraw-Hill. It would be interesting to know whether these companies follow both parts of Simon Cowell’s dictum of ‘Create the hype, but don’t ever believe it’.


A look at the Hype Cycle (see here for Wikipedia’s entry on the topic and for criticism of the hype of Hype Cycles) of the IT research and advisory firm, Gartner, indicates that both big data and adaptive learning have now slid into the ‘trough of disillusionment’, which means that the market has started to mature, becoming more realistic about how useful the technologies can be for organizations.

A few years ago, the Gates Foundation, one of the leading cheerleaders and financial promoters of adaptive learning, launched its Adaptive Learning Market Acceleration Program (ALMAP) to ‘advance evidence-based understanding of how adaptive learning technologies could improve opportunities for low-income adults to learn and to complete postsecondary credentials’. It’s striking that the program’s aims referred to how such technologies could lead to learning gains, not whether they would. Now, though, with the publication of a report commissioned by the Gates Foundation to analyze the data coming out of the ALMAP Program, things are looking less rosy. The report is inconclusive. There is no firm evidence that adaptive learning systems are leading to better course grades or course completion. ‘The ultimate goal – better student outcomes at lower cost – remains elusive’, the report concludes. Rahim Rajan, a senior program officer for Gates, is clear: ‘There is no magical silver bullet here.’

The same conclusion is being reached elsewhere. A report for the National Education Policy Center (in Boulder, Colorado) concludes: ‘Personalized Instruction, in all its many forms, does not seem to be the transformational technology that is needed, however. After more than 30 years, Personalized Instruction is still producing incremental change. The outcomes of large-scale studies and meta-analyses, to the extent they tell us anything useful at all, show mixed results ranging from modest impacts to no impact. Additionally, one must remember that the modest impacts we see in these meta-analyses are coming from blended instruction, which raises the cost of education rather than reducing it’ (Enyedy, 2014: 15; see reference at the foot of this post). In the same vein, a recent academic study by Meg Coffin Murray and Jorge Pérez (2015, ‘Informing and Performing: A Study Comparing Adaptive Learning to Traditional Learning’) found that ‘adaptive learning systems have negligible impact on learning outcomes’.

In the latest educational technology plan from the U.S. Department of Education (‘Future Ready Learning: Reimagining the Role of Technology in Education’, 2016) the only mentions of the word ‘adaptive’ are in the context of testing. And the latest OECD report on ‘Students, Computers and Learning: Making the Connection’ (2015), finds, more generally, that information and communication technologies, when they are used in the classroom, have, at best, a mixed impact on student performance.

There is, however, too much money at stake for the earlier hype to disappear completely. Sponsored cheerleading for adaptive systems continues to find its way into blogs and national magazines and newspapers. EdSurge, for example, recently published a report called ‘Decoding Adaptive’ (2016), sponsored by Pearson, that continues to wave the flag. Enthusiastic anecdotes take the place of evidence, but, for all that, it’s a useful read.

In the world of ELT, there are plenty of sales people who want new products which they can call ‘adaptive’ (and gamified, too, please). But it’s striking that three years after I started following the hype, such products are rather thin on the ground. Pearson was the first of the big names in ELT to do a deal with Knewton, and invested heavily in the company. Their relationship remains close. But, to the best of my knowledge, the only truly adaptive ELT product that Pearson offers is the PTE test.

Macmillan signed a contract with Knewton in May 2013 ‘to provide personalized grammar and vocabulary lessons, exam reviews, and supplementary materials for each student’. In December of that year, they talked up their new ‘big tree online learning platform’: ‘Look out for the Big Tree logo over the coming year for more information as to how we are using our partnership with Knewton to move forward in the Language Learning division and create content that is tailored to students’ needs and reactive to their progress.’ I’ve been looking out, but it’s all gone rather quiet on the adaptive / platform front.

In September 2013, it was the turn of Cambridge to sign a deal with Knewton ‘to create personalized learning experiences in its industry-leading ELT digital products for students worldwide’. This year saw the launch of a major new CUP series, ‘Empower’. It has an online workbook with personalized extra practice, but there’s nothing (yet) that anyone would call adaptive. More recently, Cambridge has launched the online version of the 2nd edition of Touchstone. Nothing adaptive there, either.

Earlier this year, Cambridge published The Cambridge Guide to Blended Learning for Language Teaching, edited by Mike McCarthy. It contains a chapter by M.O.Z. San Pedro and R. Baker on ‘Adaptive Learning’. It’s an enthusiastic account of the potential of adaptive learning, but it doesn’t contain a single reference to language learning or ELT!

So, what’s going on? Skepticism is becoming the order of the day. The early hype of people like Knewton’s Jose Ferreira is now understood for what it was. Companies like Macmillan got their fingers badly burnt when they barked up the wrong tree with their ‘Big Tree’ platform.

Noel Enyedy captures a more contemporary understanding when he writes: ‘Personalized Instruction is based on the metaphor of personal desktop computers—the technology of the 80s and 90s. Today’s technology is not just personal but mobile, social, and networked. The flexibility and social nature of how technology infuses other aspects of our lives is not captured by the model of Personalized Instruction, which focuses on the isolated individual’s personal path to a fixed end-point. To truly harness the power of modern technology, we need a new vision for educational technology’ (Enyedy, 2014: 16).

Adaptive solutions aren’t going away, but there is now a much better understanding of what sorts of problems might have adaptive solutions. Testing is certainly one. As the educational technology plan from the U.S. Department of Education (‘Future Ready Learning: Reimagining the Role of Technology in Education’, 2016) puts it: ‘Computer adaptive testing, which uses algorithms to adjust the difficulty of questions throughout an assessment on the basis of a student’s responses, has facilitated the ability of assessments to estimate accurately what students know and can do across the curriculum in a shorter testing session than would otherwise be necessary’. In ELT, Pearson and EF have adaptive tests that have been well researched and designed.
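To make the adjust-on-response idea concrete, here is a minimal sketch of the adaptive-testing logic described in the plan. Real computer adaptive tests (including the PTE) use Item Response Theory rather than this simple staircase, and the function and its parameters are invented for illustration only.

```python
# A toy staircase model of computer adaptive testing: serve a harder
# item after a correct answer, an easier one after a mistake, and
# estimate ability from the difficulties visited. This is NOT how
# production CAT engines work (they use Item Response Theory); it is
# only a sketch of the adjust-on-response principle.

def run_adaptive_test(answers_correct, min_level=1, max_level=10, start=5):
    """Walk through a sequence of right/wrong answers, moving the
    item difficulty one step at a time, and return the final level
    plus a crude ability estimate."""
    level = start
    visited = []
    for correct in answers_correct:
        visited.append(level)
        if correct:
            level = min(max_level, level + 1)   # serve a harder item next
        else:
            level = max(min_level, level - 1)   # serve an easier item next
    # crude ability estimate: mean difficulty of the items attempted
    return level, sum(visited) / len(visited)

final_level, estimate = run_adaptive_test([True, True, False, True, False])
```

The point of the sketch is why adaptive tests can be shorter than fixed ones: each response steers the next item towards the candidate’s level, so fewer items are wasted on questions that are far too easy or far too hard.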

Vocabulary apps which deploy adaptive technology continue to become more sophisticated, although empirical research is lacking. Automated writing tutors with adaptive corrective feedback are also developing fast, and I’ll be writing a post about these soon. Similarly, as speech recognition software improves, we can expect to see better and better automated adaptive pronunciation tutors. But going beyond such applications, there are bigger questions to ask, and answers to these will impact on whatever direction adaptive technologies take. Large platforms (LMSs), with or without adaptive software, are already beginning to look rather dated. Will they be replaced by integrated apps, or are apps themselves going to be replaced by bots (currently riding high in the Hype Cycle)? In language learning and teaching, the future of bots is likely to be shaped by developments in natural language processing (another topic about which I’ll be blogging soon). Nobody really has a clue where the next two and a half years will take us (if anywhere), but it’s becoming increasingly likely that adaptive learning will be only one very small part of it.
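Most of the adaptive vocabulary apps mentioned above rest on some variant of spaced repetition. The following is a minimal Leitner-style sketch, in which cards move up through boxes reviewed at growing intervals and a forgotten card drops back to daily review; the class and the interval values are invented for illustration, not taken from any particular app.

```python
# A minimal Leitner-box spaced repetition sketch: remembered cards are
# promoted to a box with a longer review interval; forgotten cards go
# back to box 1 (daily review). The intervals below are illustrative.

REVIEW_INTERVALS = {1: 1, 2: 2, 3: 4, 4: 8, 5: 16}  # days between reviews

class Card:
    def __init__(self, word):
        self.word = word
        self.box = 1  # new cards start in the most frequent box

    def review(self, remembered):
        """Record a review outcome and return days until the next review."""
        if remembered:
            self.box = min(5, self.box + 1)   # promote: seen less often
        else:
            self.box = 1                       # demote: back to daily review
        return REVIEW_INTERVALS[self.box]

card = Card("ubiquitous")
card.review(True)               # promoted to box 2
card.review(True)               # promoted to box 3
next_gap = card.review(False)   # forgotten: back to box 1, review tomorrow
```

The adaptivity here is crude but real: the schedule of each word is driven entirely by the learner’s own recall record, which is the core of what these apps sell.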

 

Enyedy, N. 2014. Personalized Instruction: New Interest, Old Rhetoric, Limited Results, and the Need for a New Direction for Computer-Mediated Learning. Boulder, CO: National Education Policy Center. Retrieved 17.07.16 from http://nepc.colorado.edu/publication/personalized-instruction

Ok, let’s be honest here. This post is about teacher training, but ‘development’ sounds more respectful, more humane, more modern. Teacher development (self-initiated, self-evaluated, collaborative and holistic) could be adaptive, but it’s unlikely that anyone will want to spend the money on developing an adaptive teacher development platform any time soon. Teacher training (top-down, pre-determined syllabus and externally evaluated) is another matter. If you’re not too clear about this distinction, see Penny Ur’s article in The Language Teacher.

The main point of adaptive learning tools is to facilitate differentiated instruction. They are, as Pearson’s latest infomercial booklet describes them, ‘educational technologies that can respond to a student’s interactions in real-time by automatically providing the student with individual support’. Differentiation or personalization (or whatever you call it) is, as I’ve written before, the declared goal of almost everyone in educational power these days. What exactly it is may be open to question (see Michael Feldstein’s excellent article), as may be the question of whether or not it is actually such a desideratum (see, for example, this article). But, for the sake of argument, let’s agree that it’s mostly better than one-size-fits-all.

Teachers around the world are being encouraged to adopt a differentiated approach with their students, and they are being encouraged to use technology to do so. It is technology that can help create ‘robust personalized learning environments’ (says the White House). Differentiation for language learners could be facilitated by ‘social networking systems, podcasts, wikis, blogs, encyclopedias, online dictionaries, webinars, online English courses,’ etc. (see Alexandra Chistyakova’s post on eltdiary).

But here’s the crux. If we want teachers to adopt a differentiated approach, they really need to have experienced it themselves in their training. An interesting post on edweek sums this up: ‘If professional development is supposed to lead to better pedagogy that will improve student learning AND we are all in agreement that modeling behaviors is the best way to show people how to do something, THEN why not ensure all professional learning opportunities exhibit the qualities we want classroom teachers to have?’

Differentiated teacher development / training is rare. According to the Center for Public Education’s Teaching the Teachers report, almost all teachers participate in ‘professional development’ (PD) throughout the year. However, a majority of those teachers find the PD in which they participate ineffective. Typically, the development is characterised by ‘drive-by’ workshops, one-size-fits-all presentations, ‘been there, done that’ topics, little or no modelling of what is being taught, a focus on rotating fads and a lack of follow-up. This report is not specifically about English language teachers, but it will resonate with many who are working in English language teaching around the world.

The promotion of differentiated teacher development is gaining traction: see here or here, for example, or read Cindy A. Strickland’s ‘Professional Development for Differentiating Instruction’.

Remember, though, that it’s really training, rather than development, that we’re talking about. After all, if one of the objectives is to equip teachers with a skills set that will enable them to become more effective instructors of differentiated learning, this is most definitely ‘training’ (notice the transitivity of the verbs ‘enable’ and ‘equip’!). In this context, a necessary starting point will be some sort of ‘knowledge graph’ (which I’ve written about here). For language teachers, these already exist, including the European Profiling Grid, the Eaquals Framework for Language Teacher Training and Development, the Cambridge English Teaching Framework and the British Council’s Continuing Professional Development Framework (CPD) for Teachers. We can expect these to become more refined and more granularised, and a partial move in this direction is the Cambridge English Digital Framework for Teachers. Once a knowledge graph is in place, the next step will be to tag particular pieces of teacher training content (e.g. webinars, tasks, readings, etc.) to locations in the framework that is being used. It would not be too complicated to engineer dynamic frameworks which could be adapted to individual or institutional needs.
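The tagging step described above can be sketched very simply: content items carry competency tags drawn from a framework, and a recommender returns the items that overlap a teacher’s identified needs. The competency names and content items below are invented for illustration; a real system would use descriptors from one of the actual frameworks, such as the EPG or the Cambridge English Teaching Framework.

```python
# Sketch of tagging teacher-training content to a competency framework
# and matching it against an individual teacher's needs. All titles and
# tag names here are hypothetical.

CONTENT = [
    {"title": "Webinar: managing mixed-ability classes",
     "tags": {"differentiation"}},
    {"title": "Reading: formative assessment basics",
     "tags": {"assessment"}},
    {"title": "Task: design a differentiated worksheet",
     "tags": {"differentiation", "materials"}},
]

def recommend(needs):
    """Return titles of content items whose tags overlap the given needs."""
    return [item["title"] for item in CONTENT if item["tags"] & needs]

suggestions = recommend({"differentiation"})
```

Even a toy version like this shows why granularisation matters to the vendors: the finer the tagging, the more plausibly a platform can claim to be serving each teacher a personalised pathway.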

This process will be facilitated by the fact that teacher training content is already being increasingly granularised. Whether it’s an MA in TESOL or a shorter, more practically oriented course, things are getting more and more bite-sized, with credits being awarded to these short bites, as course providers face stiffer competition and respond to market demands.

Classroom practice could also form part of such an adaptive system. One tool that could be deployed would be Visible Classroom, an automated system for providing real-time evaluative feedback for teachers. There is an ‘online dashboard providing teachers with visual information about their teaching for each lesson in real-time. This includes proportion of teacher talk to student talk, number and type of questions, and their talking speed.’ John Hattie, who is behind this project, says that teachers ‘account for about 30% of the variance in student achievement and [are] the largest influence outside of individual student effort.’ Teacher development with a tool like Visible Classroom is ultimately all about measuring teacher performance (against a set of best-practice benchmarks identified by Hattie’s research) in order to improve the learning outcomes of the students.

You may have noticed the direction in which this part of this blog post is going. I began by talking about social networking systems, podcasts, wikis, blogs and so on, and just now I’ve mentioned the summative, credit-bearing possibilities of an adaptive teacher development training programme. It’s a tension that is difficult to resolve. There’s always a paradox in telling anyone that they are going to embark on a self-directed course of professional development. Whoever pays the piper calls the tune and, if an institution decides that it is worth investing significant amounts of money in teacher development, they will want a return for their money. The need for truly personalised teacher development is likely to be overridden by the more pressing need for accountability, which, in turn, typically presupposes pre-determined course outcomes, which can be measured in some way … so that quality (and cost-effectiveness and so on) can be evaluated.

Finally, it’s worth asking if language teaching (any more than language learning) can be broken down into small parts that can be synthesized later into a meaningful and valuable whole. Certainly, there are some aspects of language teaching (such as the ability to use a dashboard on an LMS) which lend themselves to granularisation. But there’s a real danger of losing sight of the forest of teaching if we focus on the individual trees that can be studied and measured.

In ELT circles, ‘behaviourism’ is a boo word. In the standard history of approaches to language teaching (characterised as a ‘procession of methods’ by Hunter & Smith 2012: 432[1]), there were the bad old days of behaviourism until Chomsky came along, savaged the theory in his review of Skinner’s ‘Verbal Behavior’, and we were all able to see the light. In reality, of course, things weren’t quite like that. The debate between Chomsky and the behaviourists is far from over, behaviourism was not the driving force behind the development of audiolingual approaches to language teaching, and audiolingualism is far from dead. For an entertaining and eye-opening account of something much closer to reality, I would thoroughly recommend a post on Russ Mayne’s Evidence Based ELT blog, along with the discussion which follows it. For anyone who would like to understand what behaviourism is, was, and is not (before they throw the term around as an insult), I’d recommend John A. Mills’ ‘Control: A History of Behavioral Psychology’ (New York University Press, 1998) and John Staddon’s ‘The New Behaviorism 2nd edition’ (Psychology Press, 2014).

There is a close connection between behaviourism and adaptive learning. Audrey Watters, no fan of adaptive technology, suggests that ‘any company touting adaptive learning software’ has been influenced by Skinner. In a more extended piece, ‘Education Technology and Skinner’s Box’, Watters explores further her problems with Skinner and the educational technology that has been inspired by behaviourism. But writers much more sympathetic to adaptive learning also see close connections to behaviourism. ‘The development of adaptive learning systems can be considered as a transformation of teaching machines,’ write Kara & Sevim[2] (2013: 114 – 117), although they go on to point out the differences between the two. Vendors of adaptive learning products, like DreamBox Learning©, are not shy of associating themselves with behaviourism: ‘Adaptive learning has been with us for a while, with its history of adaptive learning rooted in cognitive psychology, beginning with the work of behaviorist B.F. Skinner in the 1950s, and continuing through the artificial intelligence movement of the 1970s.’

That there is a strong connection between adaptive learning and behaviourism is indisputable, but I am not interested in attempting to establish the strength of that connection. This would, in any case, be an impossible task without some reductionist definition of both terms. Instead, my interest here is to explore some of the parallels between the two, and, in the spirit of the topic, I’d like to do this by comparing the behaviours of behaviourists and adaptive learning scientists.

Data and theory

Both behaviourism and adaptive learning (in its big data form) are centrally concerned with behaviour – capturing and measuring it in an objective manner. In both, experimental observation and the collection of ‘facts’ (physical, measurable, behavioural occurrences) precede any formulation of theory. John Mills’ description of behaviourists could apply equally well to adaptive learning scientists: ‘theory construction was a seesaw process whereby one began with crude outgrowths from observations and slowly created one’s theory in such a way that one could make more and more precise observations, building those observations into the theory at each stage. No behaviourist ever considered the possibility of taking existing comprehensive theories of mind and testing or refining them.’[3]

Positivism and the panopticon

Both behaviourism and adaptive learning are pragmatically positivist, believing that truth can be established by the study of facts. J. B. Watson, the founding father of behaviourism whose article ‘Psychology as the Behaviorist Views It’ set the behaviourist ball rolling, believed that experimental observation could ‘reveal everything that can be known about human beings’[4]. Jose Ferreira of Knewton has made similar claims: ‘We get five orders of magnitude more data per user than Google does. We get more data about people than any other data company gets about people, about anything — and it’s not even close. We’re looking at what you know, what you don’t know, how you learn best. […] We know everything about what you know and how you learn best because we get so much data.’ Digital data analytics offer something that Watson couldn’t have imagined in his wildest dreams, but he would have approved.

The revolutionary science

Big data (and the adaptive learning which is a part of it) is presented as a game-changer: ‘The era of big data challenges the way we live and interact with the world. […] Society will need to shed some of its obsession for causality in exchange for simple correlations: not knowing why but only what. This overturns centuries of established practices and challenges our most basic understanding of how to make decisions and comprehend reality’[5]. But the reverence for technology and the ability to reach understandings of human beings by capturing huge amounts of behavioural data was adumbrated by Watson a century before big data became a widely used term. Watson’s 1913 lecture at Columbia University was ‘a clear pitch’[6] for the supremacy of behaviourism, and its potential as a revolutionary science.

Prediction and control

The fundamental point of both behaviourism and adaptive learning is the same. ‘The research practices and the theorizing of American behaviourists until the mid-1950s,’ writes Mills[7], ‘were driven by the intellectual imperative to create theories that could be used to make socially useful predictions.’ Predictions are only useful to the extent that they can be used to manipulate behaviour. Watson states this very baldly: ‘the theoretical goal of psychology is the prediction and control of behaviour’[8]. Contemporary iterations of behaviourism, such as behavioural economics or nudge theory (see, for example, Thaler & Sunstein’s best-selling ‘Nudge’, Penguin Books, 2008), or the British government’s Behavioural Insights Unit, share the same desire to divert individual activity towards goals (selected by those with power), ‘without either naked coercion or democratic deliberation’[9]. Jose Ferreira of Knewton has an identical approach: ‘We can predict failure in advance, which means we can pre-remediate it in advance. We can say, “Oh, she’ll struggle with this, let’s go find the concept from last year’s materials that will help her not struggle with it.”’ Like the behaviourists, Ferreira makes grand claims about the social usefulness of his predict-and-control technology: ‘The end is a really simple mission. Only 22% of the world finishes high school, and only 55% finish sixth grade. Those are just appalling numbers. As a species, we’re wasting almost four-fifths of the talent we produce. […] I want to solve the access problem for the human race once and for all.’

Ethics

Because they rely on capturing large amounts of personal data, both behaviourism and adaptive learning quickly run into ethical problems. Even where informed consent is used, the subjects must remain partly ignorant of exactly what is being tested, or else there is the fear that they might adjust their behaviour accordingly. The goal is to minimise conscious understanding of what is going on[10]. For adaptive learning, the ethical problem is much greater because of the impossibility of ensuring the security of this data. Everything is hackable.

Marketing

Behaviourism was seen as a god-send by the world of advertising. J. B. Watson, after a front-page scandal about his affair with a student, and losing his job at Johns Hopkins University, quickly found employment on Madison Avenue. ‘Scientific advertising’, as practised by the Mad Men from the 1920s onwards, was based on behaviourism. The use of data analytics by Google, Amazon, et al is a direct descendant of scientific advertising, so it is richly appropriate that adaptive learning is the child of data analytics.

[1] Hunter, D. and Smith, R. (2012) ‘Unpacking the past: “CLT” through ELTJ keywords’. ELT Journal, 66/4: 430-439.

[2] Kara, N. & Sevim, N. 2013. ‘Adaptive learning systems: beyond teaching machines’, Contemporary Educational Technology, 4(2), 108-120

[3] Mills, J. A. (1998) Control: A History of Behavioral Psychology. New York: New York University Press, p.5

[4] Davies, W. (2015) The Happiness Industry. London: Verso. p.91

[5] Mayer-Schönberger, V. & Cukier, K. (2013) Big Data. London: John Murray, p.7

[6] Davies, W. (2015) The Happiness Industry. London: Verso. p.87

[7] Mills, J. A. (1998) Control: A History of Behavioral Psychology. New York: New York University Press, p.2

[8] Watson, J. B. (1913) ‘Psychology as the Behaviorist Views It’ Psychological Review 20: 158

[9] Davies, W. (2015) The Happiness Industry. London: Verso. p.88

[10] Davies, W. (2015) The Happiness Industry. London: Verso. p.92