Posts Tagged ‘LMS’

It’s a good time to be in Turkey if you have digital ELT products to sell. Not so good if you happen to be an English language learner. This post takes a look at both sides of the Turkish lira.

OUP, probably the most significant of the big ELT publishers in Turkey, recorded ‘an outstanding performance’ in the country in the last financial year, making it their 5th largest ELT market. OUP’s annual report for 2013 – 2014 describes the particularly strong demand for digital products and services, a demand which is now influencing OUP’s global strategy for digital resources. When asked about the future of ELT, Peter Marshall, Managing Director of OUP’s ELT Division, suggested that Turkey was a country that could point us in the direction of an answer to the question. Marshall and OUP will be hoping that OUP’s recently launched Digital Learning Platform (DLP) ‘for the global distribution of adult and secondary ELT materials’ will be an important part of that future, in Turkey and elsewhere. I can’t think of any good reason for doubting their belief.

OUP aren’t the only ones eagerly checking the pound-lira exchange rates. CUP also reported ‘significant sales successes’ in Turkey in its annual report for the last year. For CUP, too, it was a year in which digital development has been ‘a top priority’. CUP’s Turkish success story has been primarily driven by a deal with Anadolu University (more about this below) to provide ‘a print and online solution to train 1.7 million students’ using their Touchstone course. This was the biggest single sale in CUP’s history and has inspired publishers, both within CUP and outside, to attempt to emulate the deal. The new blended products will, of course, be adaptive.

Just how big is the Turkish digital ELT pie? According to a 2014 report from Ambient Insight, revenues from digital ELT products reached $32.0 million in 2013 and are forecast to more than double to $72.6 million in 2018. This is a compound annual growth rate of 17.8%, a rate which is practically unbeatable in any large economy, and Turkey is the 17th largest economy in the world, according to World Bank statistics.
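That forecast can be sanity-checked as a compound annual growth rate over the five years from 2013 to 2018. This is simply a quick check of the arithmetic behind the Ambient Insight figures quoted above, not new data:

```python
# Sanity check: the compound annual growth rate (CAGR) implied by
# the Ambient Insight figures ($32.0m in 2013 -> $72.6m in 2018).
revenue_2013 = 32.0   # millions of USD
revenue_2018 = 72.6   # millions of USD (forecast)
years = 2018 - 2013

cagr = (revenue_2018 / revenue_2013) ** (1 / years) - 1
print(f"CAGR: {cagr:.1%}")  # -> CAGR: 17.8%
```

The figures are internally consistent: more than doubling over five years works out at the 17.8% annual rate the report cites.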

So, what makes Turkey special?

  • Turkey has a large and young population that is growing by about 1.4% each year, which is equivalent to approximately 1 million people. According to the Turkish Ministry of Education, there are currently about 5.5 million students enrolled in upper-secondary schools. Significant growth in numbers is certain.
  • Turkey is currently in the middle of a government-sponsored $990 million project to increase the level of English proficiency in schools. The government’s target is to position the country as one of the top ten global economies by 2023, the centenary of the Turkish Republic, and it believes that this position will be more reachable if it has a population with the requisite foreign language (i.e. English) skills. As part of this project, the government has begun to introduce English in the 1st grade (previously it was in the 4th grade).
  • The level of English in Turkey is famously low and has been described as a ‘national weakness’. In October/November 2011, the Turkish research institute SETA and the Turkish Ministry for Youth and Sports conducted a large survey across Turkey of 10,174 young citizens, aged 15 to 29. The result was sobering: 59 per cent of the young people said they “did not know any foreign language”. A recent British Council report (2013) found that the competence level in English of most (over 90%) students across Turkey was rudimentary, even after an estimated 1,000+ hours of English classes by the end of Grade 12. This is, of course, good news for vendors of English language learning / teaching materials.
  • Turkey has launched one of the world’s largest educational technology projects: the FATIH Project (The Movement to Enhance Opportunities and Improve Technology). One of its objectives is to provide tablets for every student between grades 5 and 12. At the same time, according to the Ambient report, the intention is to ‘replace all print-based textbooks with digital content (both eTextbooks and online courses).’
  • Purchasing power in Turkey is concentrated in a relatively small number of hands, with the government as the most important player. Institutions are often very large. Anadolu University, for example, is the second largest university in the world, with over 2 million students, most of whom are studying in virtual classrooms. There are two important consequences of this. Firstly, it makes scalable, big-data-driven LMS-delivered courses with adaptive software a more attractive proposition to purchasers. Secondly, it facilitates the B2B sales model that is now preferred by vendors (including the big ELT publishers).
  • Turkey also has a ‘burgeoning private education sector’, according to Peter Marshall, and a thriving English language school industry. According to Ambient ‘commercial English language learning in Turkey is a $400 million industry with over 600 private schools across the country’. Many of these are grouped into large chains (see the bullet point above).
  • Turkey is also ‘in the vanguard of the adoption of educational technology in ELT’, according to Peter Marshall. With 36 million internet users, the 5th largest internet population in Europe, and the 3rd highest online engagement in Europe measured by time spent online (reported by Sina Afra), the country’s enthusiasm for educational technology is not surprising. Ambient reports that ‘the growth rate for mobile English educational apps is 27.3%’. This enthusiasm is reflected in Turkey’s thriving ELT conference scene, where the most popular themes and presentations concern edtech. A keynote speech by Esat Uğurlu at the ISTEK schools 3rd international ELT conference at Yeditepe in April 2013 gives a flavour of the current interests. The talk was entitled ‘E-Learning: There is nothing to be afraid of and plenty to discover’.

All of the above makes Turkey a good place to be if you’re selling digital ELT products, even though the competition is pretty fierce. If your product isn’t adaptive, personalized and gamified, you may as well not bother.

What impact will all this have on Turkey’s English language learners? A report co-produced by TEPAV (the Economic Policy Research Foundation of Turkey) and the British Council in November 2013 suggests some of the answers, at least in the school population. The report is entitled ‘Turkey National Needs Assessment of State School English Language Teaching’ and its Executive Summary is brutally frank in its analysis of the low achievements in English language learning in the country. It states:

The teaching of English as a subject and not a language of communication was observed in all schools visited. This grammar-based approach was identified as the first of five main factors that, in the opinion of this report, lead to the failure of Turkish students to speak/ understand English on graduation from High School, despite having received an estimated 1000+ hours of classroom instruction.

In all classes observed, students fail to learn how to communicate and function independently in English. Instead, the present teacher-centric, classroom practice focuses on students learning how to answer teachers’ questions (where there is only one, textbook-type ‘right’ answer), how to complete written exercises in a textbook, and how to pass a grammar-based test. Thus grammar-based exams/grammar tests (with right/wrong answers) drive the teaching and learning process from Grade 4 onwards. This type of classroom practice dominates all English lessons and is presented as the second causal factor with respect to the failure of Turkish students to speak/understand English.

The problem, in other words, is the curriculum and the teaching. In its recommendations, the report makes this crystal clear. Priority needs to be given to developing a revised curriculum and ‘a comprehensive and sustainable system of in-service teacher training for English teachers’. Curriculum renewal and programmes of teacher training / development are the necessary prerequisites for the successful implementation of a programme of educational digitalization. Unfortunately, research has shown again and again that these take a long time and outcomes are difficult to predict in advance.

By going for digitalization first, Turkey is taking a huge risk. What LMSs, adaptive software and most apps do best is the teaching of language knowledge (grammar and vocabulary), not the provision of opportunities for communicative practice (for which there is currently no shortage of opportunity … it is just that these opportunities are not being taken). There is a real danger, therefore, that the technology will push learning priorities in precisely the opposite direction to that which is needed. Without significant investments in curriculum reform and teacher training, how likely is it that the transmission-oriented culture of English language teaching and learning will change?

Even if the money for curriculum reform and teacher training were found, it is also highly unlikely that effective country-wide approaches to blended learning for English would develop before the current generation of tablets and their accompanying content become obsolete.

Sadly, the probability is, once more, that educational technology will be a problem-changer, even a problem-magnifier, rather than a problem-solver. I’d love to be wrong.


(This post won’t make a lot of sense unless you read the previous one – Researching research: part 1!)

I suggested in the previous post that the research of Jayaprakash et al had confirmed something that we already knew concerning the reasons why some students drop out of college. However, predictive analytics are only part of the story. As the authors of this paper point out, they ‘do not influence course completion and retention rates without being combined with effective intervention strategies aimed at helping at-risk students succeed’. The point of predictive analytics is to facilitate the deployment of effective and appropriate intervention strategies, and to do this sooner than would be possible without the use of the analytics. So, it is to these intervention strategies that I now turn.

Interventions to help at-risk students included the following:

  • Sending students messages to inform them that they are at risk of not completing the course (‘awareness messaging’)
  • Making students more aware of the available academic support services (which could, for example, direct them to a variety of campus-based or online resources)
  • Promoting peer-to-peer engagement (e.g. with an online ‘student lounge’ discussion forum)
  • Providing access to self-assessment tools

The design of these interventions was based on the work that had been done at Purdue, which was, in turn, inspired by the work of Vince Tinto, one of the world’s leading experts on student retention issues.

The work done at Purdue had shown that simple notifications to students that they were at risk could have a significant, and positive, effect on student behaviour. Jayaprakash and the research team took the students who had been identified as at-risk by the analytics and divided them into three groups: the first were issued with ‘awareness messages’, the second were offered a combination of the other three interventions in the bullet point list above, and the third, a control group, had no interventions at all. The results showed that the students who were in treatment groups (of either kind of intervention) showed a statistically significant improvement compared to those who received no treatment at all. However, there seemed to be no difference in the effectiveness of the different kinds of intervention.

So far, so good, but, once again, I was left thinking that I hadn’t really learned very much from all this. But then, in the last five pages, the article suddenly got very interesting. Remember that the primary purpose of this whole research project was to find ways of helping not just at-risk students, but specifically socioeconomically disadvantaged at-risk students (such as those receiving Pell Grants). Accordingly, the researchers then focussed on this group. What did they find?

Once again, interventions proved more effective at raising student scores than no intervention at all. However, the averages of final scores are inevitably affected by drop-out rates (since students who drop out do not have final scores which can be included in the averages). At Purdue, the effect of interventions on drop-out rates had not been found to be significant. Remember that Purdue has a relatively well-off student demographic. However, in this research, which focussed on colleges with a much higher proportion of students on Pell Grants, the picture was very different. Of the Pell Grant students who were identified as at-risk and who were given some kind of treatment, 25.6% withdrew from the course. Of the Pell Grant students who were identified as at-risk but who were not ‘treated’ in any way (i.e. those in the control group), only 14.1% withdrew from the course. I recommend that you read those numbers again!
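Back-of-the-envelope, the gap between those two withdrawal rates can be tested with a standard two-proportion z-test. The 25.6% and 14.1% figures are from the paper; the group sizes below are hypothetical assumptions of mine, since the exact subgroup counts are not reproduced here:

```python
import math

# Two-proportion z-test sketch for the withdrawal rates quoted above.
# Percentages are from Jayaprakash et al; the group sizes (n = 300 each)
# are ILLUSTRATIVE assumptions, not values reported in the paper.
def two_proportion_z(p1, n1, p2, n2):
    """z statistic for the difference between two independent proportions."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

z = two_proportion_z(0.256, 300, 0.141, 300)  # hypothetical group sizes
print(f"z = {z:.2f}")  # -> z = 3.53 (|z| > 1.96 is significant at the 5% level)
```

With groups of that size the difference would be highly significant; with much smaller groups (say, 50 per arm) the same percentages would not reach significance, which is why the actual subgroup counts matter.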

The research programme had resulted in substantially higher drop-out rates for socioeconomically disadvantaged students – the precise opposite of what it had set out to achieve. Jayaprakash et al devote one page of their article to the ethical issues this raises. They suggest that early intervention, resulting in withdrawal, might actually be to the benefit of some students who were going to fail whatever happened. It is better to get a ‘W’ (withdrawal) grade on your transcript than an ‘F’ (fail), and you may avoid wasting your money at the same time. This may be true, but it would be equally true that not allowing at-risk students (who, of course, are disproportionately from socioeconomically disadvantaged backgrounds) into college at all might also be to their ‘benefit’. The question, though, is: who has the right to make these decisions on behalf of other people?

The authors also acknowledge another ethical problem. The predictive analytics which will prompt the interventions are not 100% accurate. 85% accuracy could be considered a pretty good figure. This means that some students who are not at-risk are labelled as at-risk, and others who are at-risk are not identified. Of these two possibilities, I find the first far more worrying. We are talking about the very real possibility of individual students being pushed into making potentially life-changing decisions on the basis of dodgy analytics. How ethical is that? The authors’ conclusion is that the situation forces them ‘to develop the most accurate predictive models possible, as well as to take steps to reduce the likelihood that any intervention would result in the unnecessary withdrawal of a student’.
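A rough Bayes calculation shows why that first possibility is so worrying. The interpretation of ‘85% accuracy’ as both sensitivity and specificity, and the 20% prevalence figure, are my assumptions for the sketch, not numbers from the paper:

```python
# Back-of-the-envelope Bayes calculation of the mislabelling risk.
# ASSUMPTIONS (mine, not the paper's): the '85% accuracy' figure applies
# equally as sensitivity and specificity, and 20% of students are truly at-risk.
sensitivity = 0.85   # P(flagged | at-risk)
specificity = 0.85   # P(not flagged | not at-risk)
prevalence = 0.20    # assumed share of truly at-risk students

flagged_true = sensitivity * prevalence
flagged_false = (1 - specificity) * (1 - prevalence)
ppv = flagged_true / (flagged_true + flagged_false)
print(f"Share of flagged students who are truly at-risk: {ppv:.0%}")
# -> 59%: roughly two in five flagged students would not actually be at risk
```

Under these assumptions, some 40% of the students singled out for intervention would not in fact be at risk, and the lower the true at-risk rate in a cohort, the worse that proportion gets.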

I find this extraordinary. It is premised on the assumption that predictive models can be made much, much more accurate. They seem to be confusing prediction and predeterminism. A predictive model is, by definition, only predictive. There will always be error. How many errors are ethically justifiable? And, the desire to reduce the likelihood of unnecessary withdrawals is a long way from the need to completely eliminate the likelihood of unnecessary withdrawals, which seems to me to be the ethical position. More than anything else in the article, this sentence illustrates that the a priori assumption is that predictive analytics can be a force for good, and that the only real problem is getting the science right. If a number of young lives are screwed up along the way, we can at least say that science is getting better.

In the authors’ final conclusion, they describe the results of their research as ‘promising’. They do not elaborate on who it is promising for. They say that relatively simple intervention strategies can positively impact student learning outcomes, but they could equally well have said that relatively simple intervention strategies can negatively impact learning outcomes. They could have said that predictive analytics and intervention programmes are fine for the well-off, but more problematic for the poor. Remembering once more that the point of the study was to look at the situation of socioeconomically disadvantaged at-risk students, it is striking that there is no mention of this group in the researchers’ eight concluding points. The vast bulk of the paper is devoted to technical descriptions of the design and training of the software; the majority of the conclusions are about the validity of that design and training. The ostensibly intended beneficiaries have got lost somewhere along the way.

How and why is it that a piece of research such as this can so positively slant its results? In the third and final part of this mini-series, I will turn my attention to answering that question.

In the 8th post on this blog (‘Theory, Research and Practice’), I referred to the lack of solid research into learning analytics. Whilst adaptive learning enthusiasts might disagree with much, or even most, of what I have written on this subject, here, at least, was an area of agreement. May of this year, however, saw the launch of the inaugural issue of the Journal of Learning Analytics, the first journal ‘dedicated to research into the challenges of collecting, analysing and reporting data with the specific intent to improve learning’. It is a peer-reviewed, open-access journal, available here, which is published by the Society for Learning Analytics Research (SoLAR), a consortium of academics from 9 universities in the US, Canada, Britain and Australia.

I decided to take a closer look. In this and my next two posts, I will focus on one article from this inaugural issue. It’s called Early Alert of Academically At‐Risk Students: An Open Source Analytics Initiative and it is co-authored by Sandeep M. Jayaprakash, Erik W. Moody, Eitel J.M. Lauría, James R. Regan, and Joshua D. Baron of Marist College in the US. Bear with me, please – it’s more interesting than it might sound!

The background to this paper is the often referred to problem of college drop-outs in the US, and the potential of learning analytics to address what is seen as a ‘national challenge’. The most influential work that has been done in this area to date was carried out at Purdue University. Purdue developed an analytical system, called Course Signals, which identified students at risk of course failure and offered a range of interventions (more about these in the next post) which were designed to improve student outcomes. I will have more to say about the work at Purdue in my third post, but, for the time being, it is enough to say that, in the field, it has been considered very successful, and that the authors of the paper I looked at have based their approach on the work done at Purdue.

Jayaprakash et al developed their own analytical system, based on Purdue’s Course Signals, and used it at their own institution, Marist College. Basically, they wanted to know if they could replicate the good results that had been achieved at Purdue. They then took the same analytical system to four different institutions, of very different kinds (public, as opposed to private; community colleges offering 2-year programmes rather than universities) to see if the results could be replicated there, too. They also wanted to find out if the interventions with students who had been signalled as at-risk would be as effective as they had been at Purdue. So far, so good: it is clearly very important to know if one particular piece of research has any significance beyond its immediate local context.

So, what did Jayaprakash et al find out? Basically, they learnt that their software worked as well at Marist as Course Signals had done at Purdue. They collected data on student demographics and aptitude, course grades and course related data, data on students’ interactions with the LMS they were using and performance data captured by the LMS. Oh, yes, and absenteeism. At the other institutions where they trialled their software, the system was 10% less accurate in predicting drop-outs, but the authors of the research still felt that ‘predictive models developed based on data from one institution may be scalable to other institutions’.

But more interesting than the question of whether or not the predictive analytics worked is the question of which specific features of the data were the most powerful predictors. What they discovered was that absenteeism was highly significant. No surprises there. They also learnt that the other most powerful predictors were (1) the students’ cumulative grade point average (GPA), an average of a student’s academic scores over their entire academic career, and (2) the scores recorded by the LMS of the work that students had done during the course which would contribute to their final grade. No surprises there, either. As the authors point out, ‘given that these two attributes are such fundamental aspects of academic success, it is not surprising that the predictive model has fared so well across these different institutions’.

Agreed, it is not surprising at all that students with lower scores and a history of lower scores are more likely to drop out of college than students with higher scores. But, I couldn’t help wondering, do we really need sophisticated learning analytics to tell us this? Wouldn’t any teacher know this already? They would, of course, if they knew their students, but if the teacher:student ratio is in the order of 1:100 (not unheard of in lower-funded courses delivered primarily through an LMS), many teachers (and their students) might benefit from automated alert systems.

But back to the differences between the results at Purdue and Marist and at the other institutions. Why were the predictive analytics less successful at the latter? The answer is in the nature of the institutions. Essentially, it boils down to this. In institutions with low drop-out rates, the analytics are more reliable than in institutions with high drop-out rates, because the more at-risk students there are, the harder it is to predict the particular individuals who will actually drop out. Jayaprakash et al provide the key information in a useful table. Students at Marist College are relatively well-off (only 16% receive Pell Grants, which are awarded to students in financial need), and only a small number (12%) are from ‘ethnic minorities’. The rate of course non-completion in normal time is relatively low (at 20%). In contrast, at one of the other institutions, the College of the Redwoods in California, 44% of the students receive Pell Grants and 22% of them are from ‘ethnic minorities’. The non-completion rate is a staggering 96%. At Savannah State University, 78% of the students receive Pell Grants, and the non-completion rate is 70%. The table also shows the strong correlation between student poverty and high student:faculty ratios.

In other words, the poorer you are, the less likely you are to complete your course of study, and the less likely you are to know your tutors (these two factors also correlate). In other other words, the whiter you are, the more likely you are to complete your course of study (because of the strong correlations between race and poverty). While we are playing the game of statistical correlations, let’s take it a little further. As the authors point out, ‘there is considerable evidence that students with lower socio-economic status have lower GPAs and graduation rates’. If, therefore, GPAs are one of the most significant predictors of academic success, we can say that socio-economic status (and therefore race) is one of the most significant predictors of academic success … even if the learning analytics do not capture this directly.

Actually, we have known this for a long time. The socio-economic divide in education is frequently cited as one of the big reasons for moving towards digitally delivered courses. This particular piece of research was funded (more about this in the next posts) with the stipulation that it ‘investigated and demonstrated effective techniques to improve student retention in socio-economically disadvantaged populations’. We have also known for some time that digitally delivered education increases the academic divide between socio-economic groups. So what we now have is a situation where a digital technology (learning analytics) is being used as a partial solution to a problem that has always been around, but which has been exacerbated by the increasing use of another digital technology (LMSs) in education. We could say, then, that if we weren’t using LMSs, learning analytics would not be possible … but we would need them less, anyway.

My next post will look at the results of the interventions with students that were prompted by the alerts generated by the learning analytics. Advance warning: it will make what I have written so far seem positively rosy.

Adaptive learning is a product to be sold. How?

1 Individualised learning

In the vast majority of contexts, language teaching is tied to a ‘one-size-fits-all’ model. This is manifested in institutional and national syllabuses which provide lists of structures and / or competences that all students must master within a given period of time. It is usually actualized in the use of coursebooks, often designed for ‘global markets’. Reaction against this model has been common currency for some time, and has led to a range of suggestions for alternative approaches (such as DOGME), none of which have really caught on. The advocates of adaptive learning programs have tapped into this zeitgeist and promise ‘truly personalized learning’. Atomico, a venture capital company that focuses on consumer technologies, and a major investor in Knewton, describes the promise of adaptive learning in the following terms: ‘Imagine lessons that adapt on-the-fly to the way in which an individual learns, and powerful predictive analytics that help teachers differentiate instruction and understand what each student needs to work on and why[1].’

This is a seductive message and is often framed in such a way that disagreement seems impossible. A post on one well-respected blog, eltjam, which focuses on educational technology in language learning, argued the case for adaptive learning very strongly in July 2013: ‘Adaptive Learning is a methodology that is geared towards creating a learning experience that is unique to each individual learner through the intervention of computer software. Rather than viewing learners as a homogenous collective with more or less identical preferences, abilities, contexts and objectives who are shepherded through a glossy textbook with static activities/topics, AL attempts to tap into the rich meta-data that is constantly being generated by learners (and disregarded by educators) during the learning process. Rather than pushing a course book at a class full of learners and hoping that it will (somehow) miraculously appeal to them all in a compelling, salubrious way, AL demonstrates that the content of a particular course would be more beneficial if it were dynamic and interactive. When there are as many responses, ideas, personalities and abilities as there are learners in the room, why wouldn’t you ensure that the content was able to map itself to them, rather than the other way around?’[2]

Indeed. But it all depends on what, precisely, the content is – a point I will return to in a later post. For the time being, it is worth noting the prominence that this message is given in the promotional discourse. It is a message that is primarily directed at teachers. It is more than a little disingenuous, however, because teachers are not the primary targets of the promotional discourse, for the simple reason that they are not the ones with purchasing power. The slogan on the homepage of the Knewton website shows clearly who the real audience is: ‘Every education leader needs an adaptive learning infrastructure’[3].

2 Learning outcomes and testing

Education leaders, who are more likely these days to come from the world of business and finance than the world of education, are currently very focused on two closely interrelated topics: the need for greater productivity and accountability, and the role of technology. They generally share the assumption of other leaders in the World Economic Forum that ICT is the key to the former and ‘the key to a better tomorrow’ (Spring, Education Networks, 2012, p.52). ‘We’re at an important transition point,’ said Arne Duncan, the U.S. Secretary of Education in 2010, ‘we’re getting ready to move from a predominantly print-based classroom to a digital learning environment’ (quoted by Spring, 2012, p.58). Later in the speech, which was delivered at the time of the release of the new National Education Technology Plan, Duncan said ‘just as technology has increased productivity in the business world, it is an essential tool to help boost educational productivity’. The plan outlines how this increased productivity could be achieved: we must start ‘with being clear about the learning outcomes we expect from the investments we make’ (Office of Educational Technology, Transforming American Education: Learning Powered by Technology, U.S. Department of Education, 2010). The greater part of the plan is devoted to discussion of learning outcomes and assessment of them.

Learning outcomes (and their assessment) are also at the heart of ‘Asking More: the Path to Efficacy’ (Barber and Rizvi (eds), Asking More: the Path to Efficacy Pearson, 2013), Pearson’s blueprint for the future of education. According to John Fallon, the CEO of Pearson, ‘our focus should unfalteringly be on honing and improving the learning outcomes we deliver’ (Barber and Rizvi, 2013, p.3). ‘High quality learning’ is associated with ‘a relentless focus on outcomes’ (ibid, p.3) and words like ‘measuring / measurable’, ‘data’ and ‘investment’ are almost as salient as ‘outcomes’. A ‘sister’ publication, edited by the same team, is entitled ‘The Incomplete Guide to Delivering Learning Outcomes’ (Barber and Rizvi (eds), Pearson, 2013) and explores further Pearson’s ambition to ‘become the world’s leading education company’ and to ‘deliver learning outcomes’.

It is no surprise that words like ‘outcomes’, ‘data’ and ‘measure’ feature equally prominently in the language of adaptive software companies like Knewton (see, for example, the quotation from Jose Ferreira, CEO of Knewton, in an earlier post). Adaptive software is premised on the establishment and measurement of clearly defined learning outcomes. If measurable learning outcomes are what you’re after, it’s hard to imagine a better path to follow than adaptive software. If your priorities include standards and assessment, it is again hard to imagine an easier path to follow than adaptive software, which was used in testing long before its introduction into instruction. As David Kuntz, VP of research at Knewton and, before that, a pioneer of algorithms in the design of tests, points out, ‘when a student takes a course powered by Knewton, we are continuously evaluating their performance, what others have done with that material before, and what [they] know’[4]. Knewton’s claim that every education leader needs an adaptive learning infrastructure has a powerful internal logic.

3 New business models

‘Adapt or die’ (a phrase originally coined by the last prime minister of apartheid South Africa) is a piece of advice that is often given these days to both educational institutions and publishers. British universities must adapt or die, according to Michael Barber, author of ‘An Avalanche is Coming[5]’ (a report commissioned by the British Institute for Public Policy Research), Chief Education Advisor to Pearson, and editor of the Pearson ‘Efficacy’ document (see above). ELT publishers ‘must change or die’, reported the eltjam blog[6], and it is a message that is frequently repeated elsewhere. The move towards adaptive learning is seen increasingly often as one of the necessary adaptations for both these sectors.

The problems facing universities in countries like the U.K. are acute. Basically, as the introduction to ‘An Avalanche is Coming’ puts it, ‘the traditional university is being unbundled’. There are a number of reasons for this including the rising cost of higher education provision, greater global competition for the same students, funding squeezes from central governments, and competition from new educational providers (such as MOOCs). Unsurprisingly, universities (supported by national governments) have turned to technology, especially online course delivery, as an answer to their problems. There are two main reasons for this. Firstly, universities have attempted to reduce operating costs by looking for increases in scale (through mergers, transnational partnerships, international branch campuses and so on). Mega-universities are growing, and there are thirty-three in Asia alone (Selwyn Education in a Digital World New York: Routledge 2013, p.6). Universities like the Turkish Anadolu University, with over one million students, are no longer exceptional in terms of scale. In this world, online educational provision is a key element. Secondly, and not to put too fine a point on it, online instruction is cheaper (Spring, Education Networks 2012, p.2).

All other things being equal, why would any language department of an institute of higher education not choose an online environment with an adaptive element? Adaptive learning, for the time being at any rate, may be seen as ‘the much needed key to the “Iron Triangle” that poses a conundrum to HE providers; cost, access and quality. Any attempt to improve any one of those conditions impacts negatively on the others. If you want to increase access to a course you run the risk of escalating costs and jeopardising quality, and so on’.[7]

Meanwhile, ELT publishers have been hit by rampant piracy of their materials, spiralling development costs of their flagship products and the growth of open educational resources. An excellent blog post by David Wiley[8] explains why adaptive learning services are a heaven-sent opportunity for publishers to modify their business model. ‘While the broad availability of free content and open educational resources have trained internet users to expect content to be free, many people are still willing to pay for services. Adaptive learning systems exploit this willingness by deeply intermingling content and services so that you cannot access one without using the other. Naturally, because an adaptive learning service is comprised of content plus adaptive services, it will be more expensive than static content used to be. And because it is a service, you cannot simply purchase it like you used to buy a textbook. An adaptive learning service is something you subscribe to, like Netflix. […] In short, why is it in a content company’s interest to enable you to own anything? Put simply, it is not. When you own a copy, the publisher completely loses control over it. When you subscribe to content through a digital service (like an adaptive learning service), the publisher achieves complete and perfect control over you and your use of their content.’

Although the initial development costs of building a suitable learning platform with adaptive capabilities are high, publishers will subsequently be able to produce and modify content (i.e. learning materials) much more efficiently. Since content will be mashed up and delivered in many different ways, author royalties will be cut or eliminated. Production and distribution costs will be much lower, and sales and marketing efforts can be directed more efficiently towards the most significant customers. The days of ELT sales reps trying unsuccessfully to get an interview with the director of studies of a small language school or university department are numbered. As with the universities, scale will be everything.

[2] (last accessed 13 January 2014)

[3] (last accessed 13 January 2014)

[4] MIT Technology Review, 26 November 2012 (last accessed 13 January 2014)

[7] Tim Gifford, Taking it Personally: Adaptive Learning, 9 July 2013 (last accessed 13 January 2014)

[8] David Wiley, Buying our Way into Bondage: the risks of adaptive learning services, 20 March 2013 (last accessed 13 January 2014)

For some years now, universities and other educational institutions around the world have been using online learning platforms, also known as Learning Management Systems (LMSs) or Virtual Learning Environments (VLEs). Well-known versions of these include Blackboard and Moodle. The latter is used by over 50% of higher education establishments in the UK (Dudeney & Hockly, How to Teach English with Technology Harlow, Essex: Pearson, 2007, p.53). These platforms allow course content – lectures, videos, activities, etc. – to be stored and delivered, and they allow institutions to modify courses to fit their needs. In addition, they usually have inbuilt mechanisms for assessment, tracking of learners, course administration and communication (email, chat, blogs, etc.). While these platforms can be used for courses that are delivered exclusively online, more commonly they are used to manage blended-learning courses (i.e. a mixture of online and face-to-face teaching). The platforms make the running of such courses relatively easy, as they bring together under one roof everything that the institution or teacher needs: ‘tools that have been designed to work together and have the same design ethos, both pedagogically and visually’ (Sharma & Barrett, Blended Learning Oxford: Macmillan, 2007, p.108).

The major ELT publishers all have their own LMSs, sometimes developed by themselves, sometimes developed in partnership with specialist companies. One of the most familiar, because it has been around for a long time, is the Macmillan English Campus, which offers both ready-made courses and a mix-and-match option drawing on the thousands of resources available (for grammar, vocabulary, pronunciation and language skills development). Other content can also be uploaded. The platform also offers automatic marking and mark recording, ready-made tests and messaging options.


In the last few years, the situation has changed rapidly. In May 2013, Knewton, the world’s leading adaptive learning technology provider, announced a partnership with Macmillan ‘to build next-generation English Language Learning and Teaching materials’. In September 2013, it was the turn of Cambridge University Press to sign their partnership with Knewton ‘to create personalized learning experiences in [their] industry-leading ELT digital products’. In both cases, Knewton’s adaptive learning technology will be integrated into the publisher’s learning platforms. Pearson, which is also in partnership with Knewton (but not for ELT products), has invested heavily in its MyLab products.

Exactly what will emerge from these new business partnerships and from the continuously evolving technology remains to be seen. The general picture is, however, clearer. We will see an increasing convergence of technologies (administrative systems, educational platforms, communication technologies, big data analytics and adaptive learning) into integrated systems. This will happen first in in-company training departments, universities and colleges of higher education. It is clear already that the ELT divisions of companies like Pearson and Macmillan are beginning to move away from their reliance on printed textbooks for adult learners. This was made graphically clear at the 2013 IATEFL conference in Liverpool, where the Pearson exhibition stand had absolutely no books on it (although Pearson now acknowledge this was a ‘mistake’). In my next post, I will make a number of more specific predictions about what is coming.