
There are a number of reasons why we sometimes need to describe a person’s language competence using a single number. Most of these are connected to the need for a shorthand to differentiate people, in summative testing or in job selection, for example. Numerical (or grade) allocation of this kind is so common (especially in times when accountability is greatly valued) that it is easy to believe that this number is an objective description of a concrete entity, rather than a shorthand description of an abstract concept. In the process, the abstract concept (language competence) becomes reified and there is a tendency to stop thinking about what it actually is.

Language is messy. It’s a complex, adaptive system of communication which has a fundamentally social function. As Diane Larsen-Freeman and others have argued, ‘patterns of use strongly affect how language is acquired, is used, and changes. These processes are not independent of one another but are facets of the same complex adaptive system. […] The system consists of multiple agents (the speakers in the speech community) interacting with one another [and] the structures of language emerge from interrelated patterns of experience, social interaction, and cognitive mechanisms.’

As such, competence in language use is difficult to measure. There are ways of capturing some of it: think of the pages and pages of competency statements in the Common European Framework. But there has always been something deeply unsatisfactory about documents of this kind. How, for example, are we supposed to differentiate, exactly and objectively, between, say, ‘can participate fully in an interview’ (C1) and ‘can carry out an effective, fluent interview’ (B2)? The short answer is that we can’t. There are too many of these descriptors anyway and, even if we did attempt to use such a detailed tool to describe language competence, we would still be left with a very incomplete picture. There is at least one whole book devoted to attempts to test the untestable in language education (edited by Amos Paran and Lies Sercu, Multilingual Matters, 2010).

So, here is another reason why we are tempted to use shorthand numerical descriptors (such as A1, A2, B1, etc.) to describe something which is very complex and abstract (‘overall language competence’) and to reify this abstraction in the process. From there, it is a very short step to making things even more numerical, more scientific-sounding. Number-creep in recent years has brought us the Pearson Global Scale of English which can place you at a precise point on a scale from 10 to 90. Not to be outdone, Cambridge English Language Assessment now has a scale that runs from 80 points to 230, although Cambridge does, at least, allocate individual scores for four language skills.

As the title of this post suggests (in its reference to Stephen Jay Gould’s The Mismeasure of Man), I am suggesting that there are parallels between attempts to measure language competence and the sad history of attempts to measure ‘general intelligence’. Both are guilty of the twin fallacies of reification and ranking – the ordering of complex information as a gradual ascending scale. These conceptual fallacies then lead us, through the way that they push us to think about language, into making further conceptual errors about language learning. We start to confuse language testing with the ways that language learning can be structured.

We begin to granularise language. We move inexorably away from difficult-to-measure, hazy notions of language skills towards what, on the surface at least, seem more readily measurable entities: words and structures. We allocate to them numerical values on our testing scales, so that an individual word can be deemed to be higher or lower on the scale than another word. And then we have a syllabus, a synthetic syllabus, that lends itself to digital delivery and adaptive manipulation. We find ourselves in a situation where materials writers for Pearson, writing for a particular ‘level’, are only allowed to use vocabulary items and grammatical structures that correspond to that ‘level’. We find ourselves, in short, in a situation where the acquisition of a complex and messy system is described as a linear, additive process. Here’s an example from the Pearson website: ‘If you score 29 on the scale, you should be able to identify and order common food and drink from a menu; at 62, you should be able to write a structured review of a film, book or play. And because the GSE is so granular in nature, you can conquer smaller steps more often; and you are more likely to stay motivated as you work towards your goal.’ It’s a nonsense, a nonsense that is dictated by the needs of testing and adaptive software, but the sciency-sounding numbers help to hide the conceptual fallacies that lie beneath.
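To make the point concrete, here is a minimal sketch of the kind of level-gated content filtering this implies. The scale values and vocabulary items are invented for illustration; they are not taken from the GSE.

```python
# A minimal sketch of a 'granularised' syllabus: every vocabulary item is
# pinned to one point on a numerical scale, and a writer (or an adaptive
# system) working at a given scale point may only draw on items at or
# below it. All values and words here are invented for illustration.
lexicon = {
    "menu": 29, "order": 30, "waiter": 33,
    "review": 62, "plot": 64, "critique": 71,
}

def allowed_items(scale_point: int) -> list[str]:
    """Return the vocabulary deemed 'available' at this scale point."""
    return [word for word, level in lexicon.items() if level <= scale_point]

print(allowed_items(29))  # -> ['menu']
print(allowed_items(62))  # -> ['menu', 'order', 'waiter', 'review']
```

Everything above the learner’s number simply does not exist for them; everything below it is assumed to be conquered. That is the linear, additive model in six lines of code.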

Perhaps, though, this doesn’t matter too much for most language learners. In the early stages of language learning (where most language learners are to be found), there are countless millions of people who don’t seem to mind the granularised programmes of Duolingo or Rosetta Stone, or the Grammar McNuggets of coursebooks. In these early stages, anything seems to be better than nothing, and the testing is relatively low-stakes. But as a learner’s interlanguage becomes more complex, and as the language she needs to acquire becomes more complex, attempts to granularise it and to present it in a linearly additive way become more problematic. It is for this reason, I suspect, that the appeal of granularised syllabuses declines so rapidly the more progress a learner makes. It comes as no surprise that, the further up the scale you get, the more both teachers and learners want to get away from pre-determined syllabuses in coursebooks and software.

Adaptive language learning software is continuing to gain traction in the early stages of learning, in the initial acquisition of basic vocabulary and structures and in coming to grips with a new phonological system. It will almost certainly gain even more. But the challenge for the developers and publishers will be to find ways of making adaptive learning work for more advanced learners. Can it be done? Or will the mismeasure of language make it impossible?

It’s a good time to be in Turkey if you have digital ELT products to sell. Not so good if you happen to be an English language learner. This post takes a look at both sides of the Turkish lira.

OUP, probably the most significant of the big ELT publishers in Turkey, recorded ‘an outstanding performance’ in the country in the last financial year, making it their 5th largest ELT market. OUP’s annual report for 2013–2014 describes the particularly strong demand for digital products and services, a demand which is now influencing OUP’s global strategy for digital resources. When asked about the future of ELT, Peter Marshall, Managing Director of OUP’s ELT Division, suggested that Turkey was a country that could point us in the direction of an answer to the question. Marshall and OUP will be hoping that OUP’s recently launched Digital Learning Platform (DLP) ‘for the global distribution of adult and secondary ELT materials’ will be an important part of that future, in Turkey and elsewhere. I can’t think of any good reason for doubting their belief.

OUP aren’t the only ones eagerly checking the pound-lira exchange rates. For the last year, CUP also reported ‘significant sales successes’ in Turkey in their annual report. For CUP, too, it was a year in which digital development has been ‘a top priority’. CUP’s Turkish success story has been primarily driven by a deal with Anadolu University (more about this below) to provide ‘a print and online solution to train 1.7 million students’ using their Touchstone course. This was the biggest single sale in CUP’s history and has inspired publishers, both within CUP and outside, to attempt to emulate the deal. The new blended products will, of course, be adaptive.

Just how big is the Turkish digital ELT pie? According to a 2014 report from Ambient Insight, revenues from digital ELT products reached $32.0 million in 2013. They are forecast to more than double to $72.6 million in 2018. This is a compound annual growth rate of 17.8%, a rate which is practically unbeatable in any large economy, and Turkey is the 17th largest economy in the world, according to World Bank statistics.
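For readers who want to check the arithmetic: assuming the 17.8% is a compound annual growth rate over the five years from 2013 to 2018, the figures are consistent.

```python
# Verify the implied compound annual growth rate (CAGR) from the
# Ambient Insight figures quoted above.
revenue_2013 = 32.0   # $ million (actual, 2013)
revenue_2018 = 72.6   # $ million (forecast, 2018)
years = 5

cagr = (revenue_2018 / revenue_2013) ** (1 / years) - 1
print(f"CAGR: {cagr:.1%}")  # -> CAGR: 17.8%
```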

So, what makes Turkey special?

  • Turkey has a large and young population that is growing by about 1.4% each year, which is equivalent to approximately 1 million people. According to the Turkish Ministry of Education, there are currently about 5.5 million students enrolled in upper-secondary schools. Significant growth in numbers is certain.
  • Turkey is currently in the middle of a government-sponsored $990 million project to increase the level of English proficiency in schools. The government’s target is to position the country as one of the top ten global economies by 2023, the centenary of the Turkish Republic, and it believes that this position will be more attainable with a population that has the requisite foreign language (i.e. English) skills. As part of this project, the government has begun to introduce English in the 1st grade (previously it started in the 4th grade).
  • The level of English in Turkey is famously low and has been described as a ‘national weakness’. In October/November 2011, the Turkish research institute SETA and the Turkish Ministry for Youth and Sports conducted a large survey across Turkey of 10,174 young citizens, aged 15 to 29. The result was sobering: 59 per cent of the young people said they ‘did not know any foreign language’. A recent British Council report (2013) found that the competence level in English of most (90+%) students across Turkey was rudimentary, even after an estimated 1,000+ hours of English classes by the end of Grade 12. This is, of course, good news for vendors of English language learning / teaching materials.
  • Turkey has launched one of the world’s largest educational technology projects: the FATIH Project (The Movement to Enhance Opportunities and Improve Technology). One of its objectives is to provide tablets for every student between grades 5 and 12. At the same time, according to the Ambient report, the intention is to ‘replace all print-based textbooks with digital content (both eTextbooks and online courses)’.
  • Purchasing power in Turkey is concentrated in a relatively small number of hands, with the government as the most important player. Institutions are often very large. Anadolu University, for example, is the second largest university in the world, with over 2 million students, most of whom are studying in virtual classrooms. There are two important consequences of this. Firstly, it makes scalable, big-data-driven LMS-delivered courses with adaptive software a more attractive proposition to purchasers. Secondly, it facilitates the B2B sales model that is now preferred by vendors (including the big ELT publishers).
  • Turkey also has a ‘burgeoning private education sector’, according to Peter Marshall, and a thriving English language school industry. According to Ambient, ‘commercial English language learning in Turkey is a $400 million industry with over 600 private schools across the country’. Many of these are grouped into large chains (see the bullet point above).
  • Turkey is also ‘in the vanguard of the adoption of educational technology in ELT’, according to Peter Marshall. With 36 million internet users (the 5th largest internet population in Europe) and the 3rd highest online engagement in Europe as measured by time spent online (figures reported by Sina Afra), the country’s enthusiasm for educational technology is not surprising. Ambient reports that ‘the growth rate for mobile English educational apps is 27.3%’. This enthusiasm is reflected in Turkey’s thriving ELT conference scene. The most popular conference themes and presentations are concerned with edtech. A keynote speech by Esat Uğurlu at the ISTEK schools 3rd international ELT conference at Yeditepe in April 2013 gives a flavour of the current interests: the talk was entitled ‘E-Learning: There is nothing to be afraid of and plenty to discover’.

All of the above makes Turkey a good place to be if you’re selling digital ELT products, even though the competition is pretty fierce. If your product isn’t adaptive, personalized and gamified, you may as well not bother.

What impact will all this have on Turkey’s English language learners? A report co-produced by TEPAV (the Economic Policy Research Foundation of Turkey) and the British Council in November 2013 suggests some of the answers, at least in the school population. The report is entitled ‘Turkey National Needs Assessment of State School English Language Teaching’ and its Executive Summary is brutally frank in its analysis of the low achievements in English language learning in the country. It states:

The teaching of English as a subject and not a language of communication was observed in all schools visited. This grammar-based approach was identified as the first of five main factors that, in the opinion of this report, lead to the failure of Turkish students to speak/understand English on graduation from High School, despite having received an estimated 1000+ hours of classroom instruction.

In all classes observed, students fail to learn how to communicate and function independently in English. Instead, the present teacher-centric, classroom practice focuses on students learning how to answer teachers’ questions (where there is only one, textbook-type ‘right’ answer), how to complete written exercises in a textbook, and how to pass a grammar-based test. Thus grammar-based exams/grammar tests (with right/wrong answers) drive the teaching and learning process from Grade 4 onwards. This type of classroom practice dominates all English lessons and is presented as the second causal factor with respect to the failure of Turkish students to speak/understand English.

The problem, in other words, is the curriculum and the teaching. In its recommendations, the report makes this crystal clear. Priority needs to be given to developing a revised curriculum and ‘a comprehensive and sustainable system of in-service teacher training for English teachers’. Curriculum renewal and programmes of teacher training / development are the necessary prerequisites for the successful implementation of a programme of educational digitalization. Unfortunately, research has shown again and again that these take a long time and outcomes are difficult to predict in advance.

By going for digitalization first, Turkey is taking a huge risk. What LMSs, adaptive software and most apps do best is the teaching of language knowledge (grammar and vocabulary), not the provision of opportunities for communicative practice (for which there is currently no shortage of opportunity … it is just that these opportunities are not being taken). There is a real danger, therefore, that the technology will push learning priorities in precisely the opposite direction to that which is needed. Without significant investments in curriculum reform and teacher training, how likely is it that the transmission-oriented culture of English language teaching and learning will change?

Even if the money for curriculum reform and teacher training were found, it is also highly unlikely that effective country-wide approaches to blended learning for English would develop before the current generation of tablets and their accompanying content become obsolete.

Sadly, the probability is, once more, that educational technology will be a problem-changer, even a problem-magnifier, rather than a problem-solver. I’d love to be wrong.

Jose Ferreira, the fast-talking sales rep-in-chief of Knewton, likes to dazzle with numbers. In a 2012 talk hosted by the US Department of Education, Ferreira rattles off the stats: ‘So Knewton students today, we have about 125,000, 180,000 right now, by December it’ll be 650,000, early next year it’ll be in the millions, and next year it’ll be close to 10 million. And that’s just through our Pearson partnership.’ For each of these students, Knewton gathers millions of data points every day. That, brags Ferreira, is ‘five orders of magnitude more data about you than Google has. … We literally have more data about our students than any company has about anybody else about anything, and it’s not even close.’ With just a touch of breathless exaggeration, Ferreira goes on: ‘We literally know everything about what you know and how you learn best, everything.’

The data is mined to find correlations between learning outcomes and learning behaviours and, once correlations have been established, learning programmes can be tailored to individual students. Ferreira explains: ‘We take the combined data problem, all hundred million, to figure out exactly how to teach every concept to each kid. So the 100 million first shows up to learn the rules of exponents, great, let’s go find a group of people who are psychometrically equivalent to that kid. They learn the same ways, they have the same learning style, they know the same stuff, because Knewton can figure out things like you learn math best in the morning between 8:40 and 9:13 am. You learn science best in 42-minute bite sizes; at the 44-minute mark you click right, you start missing questions you would normally get right.’
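Knewton has not published its algorithms, but the ‘psychometrically equivalent’ grouping Ferreira describes sounds like a form of nearest-neighbour matching over student response data. Here is a minimal sketch of that general idea, using entirely invented data; this is an illustration of the technique, not Knewton’s actual method.

```python
# Sketch of nearest-neighbour matching over student performance data:
# find the stored profiles most similar to the current student's, on the
# assumption that what worked for them will work for her. All data here
# is randomly generated for illustration; Knewton's models are not public.
import numpy as np

rng = np.random.default_rng(seed=42)

# Rows = students, columns = proportion correct on each of 6 concepts.
profiles = rng.random((1000, 6))
current = rng.random(6)

# Cosine similarity between the current student and every stored profile.
sims = (profiles @ current) / (
    np.linalg.norm(profiles, axis=1) * np.linalg.norm(current)
)

# Indices of the 20 most 'psychometrically equivalent' students.
peer_group = np.argsort(sims)[-20:]
print(peer_group)
```

The sequencing decision (which item to serve next) would then be driven by what this peer group did, which is precisely where the correlation-based reasoning discussed below comes in.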

The basic premise here is that the more data you have, the more accurately you can predict what will work best for any individual learner. But how accurate is it? In the absence of any decent, independent research (or, for that matter, any verifiable claims from Knewton), how should we respond to Ferreira’s contribution to the White House Education Datapalooza?

A new book by Stephen Finlay, Predictive Analytics, Data Mining and Big Data (Palgrave Macmillan, 2014) suggests that predictive analytics are typically about 20–30% more accurate than humans attempting to make the same judgements. That’s pretty impressive, and perhaps Knewton does better than that, but the key thing to remember is that, however much data Knewton is playing with, and however good their algorithms are, we are still talking about predictions and not certainties. If an adaptive system could predict with 90% accuracy (and the actual figure is typically much lower than that) what learning content and what learning approach would be effective for an individual learner, it would still mean that it was wrong 10% of the time. When this is scaled up to the numbers of students that use Knewton software, it means that millions of students are getting faulty recommendations. Beyond a certain point, further expansion of the data that is mined is unlikely to make any difference to the accuracy of predictions.
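The scale of the problem is easy to show with a back-of-the-envelope calculation; the figures below are my own illustrative assumptions, not Knewton’s.

```python
# Even a generously high per-recommendation accuracy leaves a very large
# absolute number of students with faulty recommendations at the scale
# Ferreira projects. Figures are illustrative only.
students = 10_000_000   # the student numbers Ferreira projects
accuracy = 0.90         # an optimistic prediction accuracy

faulty = students * (1 - accuracy)
print(f"{faulty:,.0f} students would be getting faulty recommendations")
# -> 1,000,000 students would be getting faulty recommendations
```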

A further problem identified by Stephen Finlay is the tendency of people in predictive analytics to confuse correlation and causation. Certain students may have learnt maths best between 8:40 and 9:13 am, but it does not follow that they learnt it best because they studied at that time. If strong correlations do not involve causality, then actionable insights (such as individualised course design) can be no more than an informed gamble.

Knewton’s claim that they know how every student learns best is marketing hyperbole and should set alarm bells ringing. When it comes to language learning, we simply do not know how students learn (we do not have any generally accepted theory of second language acquisition), let alone how they learn best. More data won’t help our theories of learning! Ferreira’s claim that, with Knewton, ‘every kid gets a perfectly optimized textbook, except it’s also video and other rich media dynamically generated in real time’ is equally preposterous, not least since the content of the textbook will be at least as significant as the way in which it is ‘optimized’. And, as we all know, textbooks have their faults.

Cui bono? Perhaps huge data and predictive analytics will benefit students; perhaps not. We will need to wait and find out. But Stephen Finlay reminds us that in gold rushes (and internet booms and the exciting world of Big Data) the people who sell the tools make a lot of money: ‘Far more strike it rich selling picks and shovels to prospectors than do the prospectors. Likewise, there is a lot of money to be made selling Big Data solutions. Whether the buyer actually gets any benefit from them is not the primary concern of the sales people’ (pp. 16-17). Which is, perhaps, one of the reasons that some sales people talk so fast.

There is a lot that technology can do to help English language learners develop their reading skills. The internet makes it possible for learners to read an almost limitless number of texts that will interest them, and these texts can be evaluated for readability and, therefore, suitability for level (see here for a useful article). RSS opens up exciting possibilities for narrow reading, and the positive impact of multimedia-enhanced texts was researched many years ago. There are good online bilingual dictionaries and other translation tools. There are apps that go with graded readers (see this review in the Guardian) and there are apps that can force you to read at a certain speed. And there is more. All of this could very effectively be managed on a good learning platform.
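As an aside on what ‘evaluated for readability’ typically means in practice: most tools combine surface measures of sentence and word length. Here is a rough sketch of one long-established measure, the Flesch Reading Ease score; the syllable counter is a crude heuristic for illustration, and real tools are considerably more sophisticated.

```python
# Sketch of the Flesch Reading Ease formula:
#   206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)
# Higher scores mean easier texts. The syllable counter below is a crude
# vowel-group heuristic, good enough only for illustration.
import re

def count_syllables(word: str) -> int:
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / sentences)
            - 84.6 * (syllables / len(words)))

print(round(flesch_reading_ease("The cat sat on the mat. It was happy."), 1))
# -> a high score, i.e. a very easy text
```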

Could adaptive software add another valuable element to reading skills development?

Adaptive reading programs are spreading in primary education in the US and, with some modifications, could be used in ELT courses for younger learners and for those whose first language does not use the Roman alphabet. One of the best-known has been developed by Lexia Learning®, a company that won a $500,000 grant from the Gates Foundation last year. Lexia Learning® was bought by Rosetta Stone® for $22.5 million in June 2013.

One of their products, Lexia Reading Core5, ‘provides explicit, systematic, personalized learning in the six areas of reading instruction, and delivers norm-referenced performance data and analysis without interrupting the flow of instruction to administer a test. Designed specifically to meet the Common Core and the most rigorous state standards, this research-proven, technology-based approach accelerates reading skills development, predicts students’ year-end performance and provides teachers data-driven action plans to help differentiate instruction’.

[Screenshot: Lexia Reading Core5’s six areas of reading instruction]

The predictable claim that it is ‘research-proven’ has not convinced everyone. Richard Allington, a professor of literacy studies at the University of Tennessee and a past president of both the International Reading Association and the National Reading Association, has said that all the companies that have developed this kind of software ‘come up with evidence – albeit potential evidence – that kids could improve their abilities to read by using their product. It’s all marketing. They’re selling a product. Lexia is one of these programs. But there virtually are no commercial programs that have any solid, reliable evidence that they improve reading achievement.’[1] He has argued that the $12 million that has been spent on the Lexia programs would have been better spent on a national program, developed at Ohio State University, that matches specially trained reading instructors with students known to have trouble learning to read.

But what about ELT? For an adaptive program like Lexia’s to work, reading skills need to be broken down in a similar way to the diagram shown above. Let’s get some folk linguistics out of the way first. The sub-skills of reading are not skimming, scanning, inferring meaning from context, etc. These are strategies that readers adopt voluntarily in order to understand a text better. If a reader uses these strategies in their own language, they are likely to transfer them to their English reading. It seems that ELT instruction in strategy use has only limited impact, although this kind of training may be relevant to preparation for exams. This insight is taking a long time to filter down to course and coursebook design, but there really isn’t much debate[2]. Any adaptive ELT reading program that confuses reading strategies with reading sub-skills is going to have big problems.

What, then, are the sub-skills of reading? In what ways could reading be broken down into a skill tree so that it is amenable to adaptive learning? Researchers have provided different answers. Munby (1978), for example, listed 19 reading microskills; Heaton (1988) listed 14. A bigger problem, however, is that other researchers (e.g. Lunzer 1979, Rost 1993) have failed to find evidence that distinct sub-skills actually exist. While it is easier to identify sub-skills for very low-level readers (especially for those whose own language is very different from English), it is simply not possible to do so for higher levels.

Reading in another language is a complex process which involves both top-down and bottom-up strategies, is intimately linked to vocabulary knowledge and requires the activation of background, cultural knowledge. Reading ability, in the eyes of some researchers, is unitary or holistic. Others prefer to separate things into two components: word recognition and comprehension[3]. Either way, a consensus is beginning to emerge that teachers and learners might do better to focus on vocabulary extension (and this would include extensive reading) than to attempt to develop reading programs that assume the multidivisible nature of reading.

All of which means that adaptive learning software and reading skills in ELT are unlikely bedfellows. To be sure, an increased use of technology (as described in the first paragraph of this post) in reading work will generate a lot of data about learner behaviours. Analysis of this data may lead to actionable insights, or it may not! It will be interesting to find out.

 

[1] http://www.khi.org/news/2013/jun/17/budget-proviso-reading-program-raises-questions/

[2] See, for example, Walter, C. & M. Swan. 2008. ‘Teaching reading skills: mostly a waste of time?’ in Beaven, B. (ed.) IATEFL 2008 Exeter Conference Selections. (Canterbury: IATEFL). Or go back further to Alderson, J. C. 1984 ‘Reading in a foreign language: a reading problem or a language problem?’ in J.C. Alderson & A. H. Urquhart (eds.) Reading in a Foreign Language (London: Longman)

[3] For a useful summary of these issues, see ‘Reading abilities and strategies: a short introduction’ by Feng Liu (International Education Studies 3 / 3 August 2010) www.ccsenet.org/journal/index.php/ies/article/viewFile/6790/5321

Adaptive learning is likely to impact on the lives of language teachers very soon. In my work as a writer of education materials, it has already dramatically impacted on mine. This impact has affected the kinds of things I am asked to write, the way in which I write them and my relationship with the editors and publishers I am writing for. I am as dismissive as Steve Jobs[1] was of the idea that technology can radically transform education, but in the short term it can radically disrupt it. Change is not necessarily progress.

Teachers and teacher trainers need to be very alert to what is going on if they don’t want to wake up one morning and find themselves out of work, or in a very different kind of job. The claims for adaptive language learning need to be considered in the bright light of particular, local contexts. Teachers and teacher trainers can even take a lesson from the proponents of adaptive learning who rail against the educational approach of one-size-fits-all. One size, whether it’s face-to-face with a print coursebook or whether it’s a blended adaptive program, will never fit all. We need to be very skeptical of the publishers and software providers who claim in a TED-style, almost evangelical way that they are doing the right thing for students, our society, or our world. There is a real risk that adaptive learning may be leading simply to ‘a more standardised, minimalist product targeted for a mass market, [that] will further ‘box in’ and ‘dumb down’ education’ (Selwyn, Education and Technology 2011, p.101).

There is nothing wrong, per se, with adaptive learning. It could be put to some good uses, but how likely is this? In order to understand how it may impact on our working lives, we need to be better informed. A historical perspective is often a good place to start and Larry Cuban’s Teachers and Machines: The Classroom Use of Technology since 1920 (New York: Teachers College Press, 1986) is still well worth reading.


To get a good picture of where big data and analytics are now and where they are heading, Mayer-Schönberger & Cukier’s Big Data (London: John Murray, 2013) is informative and entertaining reading. If you are ‘an executive looking to integrate analytics in your decision making or a manager seeking to generate better conversations with the quants in your organisation’, I’d recommend Keeping up with the Quants by Thomas H. Davenport and Jinho Kim (Harvard Business School, 2013). Or you could just read ‘The Economist’ for this kind of thing.

If you want to follow up the connections between educational technology and neo-liberalism, the books by Stephen Ball (Global Education Inc., Abingdon, Oxon: Routledge, 2012), Neil Selwyn (Education and Technology, London: Continuum, 2011; Education in a Digital World, New York: Routledge, 2013; Distrusting Educational Technology, New York: Routledge, 2013), Diane Ravitch (Reign of Error, New York: Knopf, 2013) and Joel Spring (Education Networks, New York: Routledge, 2012; The Great American Education-Industrial Complex with Anthony G. Picciano, Routledge, 2013) are all good reads. And keep a look out for anything new from these writers.

Finally, to keep up to date with recent developments, the eltjam blog http://www.eltjam.com/ is a good one to follow, as is Richard Whiteside’s Scoop.it! page http://www.scoop.it/t/elt-publishing-by-richard-whiteside

I’ll be continuing to post things here from time to time! Thanks for following me so far.


[1] Jobs, however, did set his sights ‘on the $8 billion a year textbook industry, which he saw as ‘ripe for digital destruction’. His first instinct seems to have been to relieve kids from having to carry around heavy backpacks crammed with textbooks: ‘The iPad would solve that,’ he said, ever practical’ (Fullan, Stratosphere 2013, p.61).

Given what we know, it is possible to make some predictions about what the next generation of adult ELT materials will be like when they emerge a few years from now. Making predictions is always a hazardous game, but there are a number of reasonable certainties that can be identified, based on the statements and claims of the major publishers and software providers.

1 Major publishers will move gradually away from traditional coursebooks (whether in print or ebook format) towards the delivery of learning content on learning platforms. At its most limited, this will be in the form of workbook-style material with an adaptive element. At its most developed, this will be in the form of courses that can be delivered entirely without traditional coursebooks. These will allow teachers or institutions to decide the extent to which they wish to blend online and face-to-face instruction.

2 The adaptive elements of these courses will focus primarily or exclusively on discrete item grammar, vocabulary, functional language and phonology, since these lend themselves most readily to the software. These courses will be targeted mainly at lower level (B1 and below) learners.

3 The methodological approach of these courses will be significantly influenced by the expectations of the markets where they are predicted to be most popular and most profitable: South and Central America, the Arabian Gulf and Asia.

4 These courses will permit multiple modifications to suit local requirements. They will also allow additional content to be uploaded.

5 Assessment will play an important role in the design of all these courses. Things like discrete item grammar, vocabulary, functional language and phonology, which lend themselves most readily to assessment, will be prioritized over language skills, which are harder to assess.

6 The discrete items of language that are presented will be tagged to level descriptors, using scales like the Common European Framework or English Profile.

7 Language skills work will be included, but only in the more sophisticated (and better-funded) projects will these components be closely tied to the adaptive software.

8 Because of technological differences between different parts of the world, adaptive courses will co-exist with closely related, more traditional print (or ebook) courses.

9 Training for teachers (especially concerning blended learning) will become an increasingly important part of the package sold by the major publishers.

10 These courses will be more than ever driven by the publishers’ perceptions of what the market wants. There will be a concomitant decrease in the extent to which individual authors, or author teams, influence the material.


Adaptive learning is a product to be sold. How?

1 Individualised learning

In the vast majority of contexts, language teaching is tied to a ‘one-size-fits-all’ model. This is manifested in institutional and national syllabuses which provide lists of structures and / or competences that all students must master within a given period of time. It is usually actualized in the use of coursebooks, often designed for ‘global markets’. Reaction against this model has been common currency for some time, and has led to a range of suggestions for alternative approaches (such as DOGME), none of which have really caught on. The advocates of adaptive learning programs have tapped into this zeitgeist and promise ‘truly personalized learning’. Atomico, a venture capital company that focuses on consumer technologies, and a major investor in Knewton, describes the promise of adaptive learning in the following terms: ‘Imagine lessons that adapt on-the-fly to the way in which an individual learns, and powerful predictive analytics that help teachers differentiate instruction and understand what each student needs to work on and why[1].’

This is a seductive message and is often framed in such a way that disagreement seems impossible. A post on one well-respected blog, eltjam, which focuses on educational technology in language learning, argued the case for adaptive learning very strongly in July 2013: ‘Adaptive Learning is a methodology that is geared towards creating a learning experience that is unique to each individual learner through the intervention of computer software. Rather than viewing learners as a homogenous collective with more or less identical preferences, abilities, contexts and objectives who are shepherded through a glossy textbook with static activities/topics, AL attempts to tap into the rich meta-data that is constantly being generated by learners (and disregarded by educators) during the learning process. Rather than pushing a course book at a class full of learners and hoping that it will (somehow) miraculously appeal to them all in a compelling, salubrious way, AL demonstrates that the content of a particular course would be more beneficial if it were dynamic and interactive. When there are as many responses, ideas, personalities and abilities as there are learners in the room, why wouldn’t you ensure that the content was able to map itself to them, rather than the other way around?’[2]

Indeed. But it all depends on what, precisely, the content is – a point I will return to in a later post. For the time being, it is worth noting the prominence that this message is given in the promotional discourse. It is a message that is primarily directed at teachers. It is more than a little disingenuous, however, because teachers are not the primary targets of the promotional discourse, for the simple reason that they are not the ones with purchasing power. The slogan on the homepage of the Knewton website shows clearly who the real audience is: ‘Every education leader needs an adaptive learning infrastructure’[3].

2 Learning outcomes and testing

Education leaders, who are more likely these days to come from the world of business and finance than the world of education, are currently very focused on two closely interrelated topics: the need for greater productivity and accountability, and the role of technology. They generally share the assumption of other leaders in the World Economic Forum that ICT is the key to the former and ‘the key to a better tomorrow’ (Spring, Education Networks, 2012, p.52). ‘We’re at an important transition point,’ said Arne Duncan, the U.S. Secretary of Education in 2010, ‘we’re getting ready to move from a predominantly print-based classroom to a digital learning environment’ (quoted by Spring, 2012, p.58). Later in the speech, which was delivered at the time of the release of the new National Education Technology Plan, Duncan said ‘just as technology has increased productivity in the business world, it is an essential tool to help boost educational productivity’. The plan outlines how this increased productivity could be achieved: we must start ‘with being clear about the learning outcomes we expect from the investments we make’ (Office of Educational Technology, Transforming American Education: Learning Powered by Technology, U.S. Department of Education, 2010). The greater part of the plan is devoted to discussion of learning outcomes and assessment of them.

Learning outcomes (and their assessment) are also at the heart of ‘Asking More: the Path to Efficacy’ (Barber and Rizvi (eds), Asking More: the Path to Efficacy, Pearson, 2013), Pearson’s blueprint for the future of education. According to John Fallon, the CEO of Pearson, ‘our focus should unfalteringly be on honing and improving the learning outcomes we deliver’ (Barber and Rizvi, 2013, p.3). ‘High quality learning’ is associated with ‘a relentless focus on outcomes’ (ibid, p.3), and words like ‘measuring / measurable’, ‘data’ and ‘investment’ are almost as salient as ‘outcomes’. A ‘sister’ publication, edited by the same team, is entitled ‘The Incomplete Guide to Delivering Learning Outcomes’ (Barber and Rizvi (eds), Pearson, 2013) and explores further Pearson’s ambition to ‘become the world’s leading education company’ and to ‘deliver learning outcomes’.

It is no surprise that words like ‘outcomes’, ‘data’ and ‘measure’ feature equally prominently in the language of adaptive software companies like Knewton (see, for example, the quotation from Jose Ferreira, CEO of Knewton, in an earlier post). Adaptive software is premised on the establishment and measurement of clearly defined learning outcomes. If measurable learning outcomes are what you’re after, it’s hard to imagine a better path to follow than adaptive software. If your priorities include standards and assessment, it is again hard to imagine an easier path to follow than adaptive software, which was used in testing long before its introduction into instruction. As David Kuntz, VP of research at Knewton and, before that, a pioneer of algorithms in the design of tests, points out, ‘when a student takes a course powered by Knewton, we are continuously evaluating their performance, what others have done with that material before, and what [they] know’[4]. Knewton’s claim that every education leader needs an adaptive learning infrastructure has a powerful internal logic.

3 New business models

‘Adapt or die’ (a phrase originally coined by the last prime minister of apartheid South Africa) is a piece of advice that is often given these days to both educational institutions and publishers. British universities must adapt or die, according to Michael Barber, author of ‘An Avalanche is Coming[5]’ (a report commissioned by the British Institute for Public Policy Research), Chief Education Advisor to Pearson, and editor of the Pearson ‘Efficacy’ document (see above). ELT publishers ‘must change or die’, reported the eltjam blog[6], and it is a message that is frequently repeated elsewhere. The move towards adaptive learning is seen increasingly often as one of the necessary adaptations for both these sectors.

The problems facing universities in countries like the U.K. are acute. Basically, as the introduction to ‘An Avalanche is Coming’ puts it, ‘the traditional university is being unbundled’. There are a number of reasons for this, including the rising cost of higher education provision, greater global competition for the same students, funding squeezes from central governments, and competition from new educational providers (such as MOOCs). Unsurprisingly, universities (supported by national governments) have turned to technology, especially online course delivery, as an answer to their problems. There are two main reasons for this. Firstly, universities have attempted to reduce operating costs by looking for increases in scale (through mergers, transnational partnerships, international branch campuses and so on). Mega-universities are growing, and there are thirty-three in Asia alone (Selwyn, Education in a Digital World, New York: Routledge, 2013, p.6). Universities like the Turkish Anadolu University, with over one million students, are no longer exceptional in terms of scale. In this world, online educational provision is a key element. Secondly, and not to put too fine a point on it, online instruction is cheaper (Spring, Education Networks, 2012, p.2).

All other things being equal, why would any language department of an institute of higher education not choose an online environment with an adaptive element? Adaptive learning, for the time being at any rate, may be seen as ‘the much needed key to the “Iron Triangle” that poses a conundrum to HE providers: cost, access and quality. Any attempt to improve any one of those conditions impacts negatively on the others. If you want to increase access to a course you run the risk of escalating costs and jeopardising quality, and so on.’[7]

Meanwhile, ELT publishers have been hit by rampant pirating of their materials, spiraling development costs of their flagship products and the growth of open educational resources. An excellent blog post by David Wiley[8] explains why adaptive learning services are a heaven-sent opportunity for publishers to modify their business model: ‘While the broad availability of free content and open educational resources have trained internet users to expect content to be free, many people are still willing to pay for services. Adaptive learning systems exploit this willingness by deeply intermingling content and services so that you cannot access one without using the other. Naturally, because an adaptive learning service is comprised of content plus adaptive services, it will be more expensive than static content used to be. And because it is a service, you cannot simply purchase it like you used to buy a textbook. An adaptive learning service is something you subscribe to, like Netflix. […] In short, why is it in a content company’s interest to enable you to own anything? Put simply, it is not. When you own a copy, the publisher completely loses control over it. When you subscribe to content through a digital service (like an adaptive learning service), the publisher achieves complete and perfect control over you and your use of their content.’

Although the initial development costs of building a suitable learning platform with adaptive capabilities are high, publishers will subsequently be able to produce and modify content (i.e. learning materials) much more efficiently. Since content will be mashed up and delivered in many different ways, author royalties will be cut or eliminated. Production and distribution costs will be much lower, and sales and marketing efforts can be directed more efficiently towards the most significant customers. The days of ELT sales reps trying unsuccessfully to get an interview with the director of studies of a small language school or university department are becoming a thing of the past. As with the universities, scale will be everything.


[2] http://www.eltjam.com/adaptive-learning/ (last accessed 13 January 2014)

[3] http://www.knewton.com/ (last accessed 13 January 2014)

[4] MIT Technology Review, November 26, 2012 http://www.technologyreview.com/news/506366/questions-surround-software-that-adapts-to-students/ (last accessed 13 January 2014)

[7] Tim Gifford Taking it Personally: Adaptive Learning July 9, 2013 http://www.eltjam.com/adaptive-learning/ (last accessed January 13, 2014)

[8] David Wiley, Buying our Way into Bondage: the risks of adaptive learning services, March 20, 2013 http://opencontent.org/blog/archives/2754 (last accessed January 13, 2014)

For some years now, universities and other educational institutions around the world have been using online learning platforms, also known as Learning Management Systems (LMSs) or Virtual Learning Environments (VLEs). Well-known versions of these include Blackboard and Moodle. The latter is used by over 50% of higher education establishments in the UK (Dudeney & Hockly, How to Teach English with Technology, Harlow, Essex: Pearson, 2007, p.53). These platforms allow course content – lectures, videos, activities, etc. – to be stored and delivered, and they allow institutions to modify courses to fit their needs. In addition, they usually have inbuilt mechanisms for assessment, tracking of learners, course administration and communication (email, chat, blogs, etc.). While these platforms can be used for courses that are delivered exclusively online, more commonly they are used to manage blended-learning courses (i.e. a mixture of online and face-to-face teaching). The platforms make the running of such courses relatively easy, as they bring together under one roof everything that the institution or teacher needs: ‘tools that have been designed to work together and have the same design ethos, both pedagogically and visually’ (Sharma & Barrett, Blended Learning, Oxford: Macmillan, 2007, p.108).

The major ELT publishers all have their own LMSs, sometimes developed by themselves, sometimes developed in partnership with specialist companies. One of the most familiar, because it has been around for a long time, is the Macmillan English Campus. Campus offers both ready-made courses and a mix-and-match option drawing on the thousands of resources available (for grammar, vocabulary, pronunciation and language skills development). Other content can also be uploaded. The platform also offers automatic marking and mark recording, ready-made tests and messaging options.

[Screenshot of the Macmillan English Campus]

In the last few years, the situation has changed rapidly. In May 2013, Knewton, the world’s leading adaptive learning technology provider, announced a partnership with Macmillan ‘to build next-generation English Language Learning and Teaching materials’. In September 2013, it was the turn of Cambridge University Press to sign their partnership with Knewton ‘to create personalized learning experiences in [their] industry-leading ELT digital products’. In both cases, Knewton’s adaptive learning technology will be integrated into the publisher’s learning platforms. Pearson, which is also in partnership with Knewton (but not for ELT products), has invested heavily in its MyLab products.

Exactly what will emerge from these new business partnerships and from the continuously evolving technology remains to be seen. The general picture is, however, clearer. We will see an increasing convergence of technologies (administrative systems, educational platforms, communication technologies, big data analytics and adaptive learning) into integrated systems. This will happen first in in-company training departments, universities and colleges of higher education. It is clear already that the ELT divisions of companies like Pearson and Macmillan are beginning to move away from their reliance on printed textbooks for adult learners. This was made graphically clear at the 2013 IATEFL conference in Liverpool, when the Pearson exhibition stand had absolutely no books on it (although Pearson now acknowledge this was a ‘mistake’). In my next post, I will make a number of more specific predictions about what is coming.

There is a good chance that many readers will have only the haziest idea of what adaptive learning is. There is a much better chance that most English language teachers, especially those working in post-secondary education, will feel the impact of adaptive learning on their professional lives in the next few years. According to Time magazine, it is a ‘hot concept, embraced by education reformers’, which is ‘poised to reshape education’[1]. According to the educational news website Education Dive, there is ‘no hotter segment in ed tech right now’[2]. All the major ELT publishers are moving away from traditional printed coursebooks towards the digital delivery of courses that will contain adaptive learning elements. Their investments in the technology are colossal. Universities in many countries, especially the US, are moving in the same direction, again with huge investments. National and regional governments, intergovernmental organisations (such as UNESCO, the OECD, the EU and the World Bank), big business and hugely influential private foundations (such as the Bill and Melinda Gates Foundation) are all lined up in support of the moves towards the digital delivery of education, which (1) will inevitably involve elements of adaptive learning, and (2) will inevitably impact massively on the world of English language teaching.

The next 13 posts will, together, form a guide to adaptive learning in ELT.

1 Introduction

2 Simple models of adaptive learning

3 Gamification

4 Big data, analytics and adaptive learning

5 Platforms and more complex adaptive learning systems

6 The selling points of adaptive learning

7 Ten predictions for the future

8 Theory, research and practice

9 Neo-liberalism and solutionism

10 Learn more