Posts Tagged ‘politics’

‘Sticky’ – as in ‘sticky learning’ or ‘sticky content’ (as opposed to ‘sticky fingers’ or a ‘sticky problem’) – is itself fast becoming a sticky word. If you check out ‘sticky learning’ on Google Trends, you’ll see that it suddenly spiked in September 2011, following the slightly earlier appearance of ‘sticky content’. The historical rise in this use of the word coincides with the exponential growth in the number of references to ‘big data’.

I am often asked if adaptive learning really will take off as a big thing in language learning. Will adaptivity itself be a sticky idea? When the question is asked, people mean the big data variety of adaptive learning, rather than the much more limited adaptivity of spaced repetition algorithms, which, I think, is firmly here and here to stay. I can’t answer the question with any confidence, but I recently came across a book which suggests a useful way of approaching the question.
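The 'much more limited' adaptivity of spaced repetition mentioned above is simple enough to capture in a few lines. Here is a minimal sketch of a Leitner-style scheduler; the box intervals are made-up illustrations, not the schedule of any particular product:

```python
# A minimal sketch of spaced-repetition adaptivity (a Leitner system).
# The intervals below are illustrative, not any real product's schedule.

INTERVALS_DAYS = [1, 2, 5, 10, 30]  # review gap for boxes 0..4

def review(box, answered_correctly):
    """Return (new_box, days_until_next_review) after one review."""
    if answered_correctly:
        box = min(box + 1, len(INTERVALS_DAYS) - 1)  # promote one box
    else:
        box = 0  # any error sends the item back to daily review
    return box, INTERVALS_DAYS[box]

# A learner who keeps getting an item right sees it at growing intervals;
# one mistake resets it.
box = 0
for correct in [True, True, False, True]:
    box, gap = review(box, correct)
print(box, gap)  # -> 1 2
```

The whole of the adaptivity lives in that one promote-or-reset rule, which is precisely why this variety is so much more limited than the big data kind.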

‘From the Ivory Tower to the Schoolhouse’ by Jack Schneider (Harvard Education Press, 2014) investigates the reasons why promising ideas from education research fail to get taken up by practitioners, and why other ideas, less promising from a research or theoretical perspective, become sticky quite quickly. As an example of the former, Schneider considers Robert Sternberg’s ‘Triarchic Theory’. As an example of the latter, he devotes a chapter to Howard Gardner’s ‘Multiple Intelligences Theory’.

Schneider argues that educational ideas need to possess four key attributes in order for teachers to sit up, take notice and adopt them.

  1. perceived significance: the idea must answer a question central to the profession – offering a big-picture understanding rather than merely one small piece of a larger puzzle
  2. philosophical compatibility: the idea must clearly jibe with closely held [teacher] beliefs like the idea that teachers are professionals, or that all children can learn
  3. occupational realism: it must be possible for the idea to be put easily into immediate use
  4. transportability: the idea needs to find its practical expression in a form that teachers can access and use at the time that they need it – it needs to have a simple core that can travel through pre-service coursework, professional development seminars, independent study and peer networks

To what extent does big data adaptive learning possess these attributes? It certainly comes up trumps with respect to perceived significance. The big question that it attempts to answer is the question of how we can make language learning personalised / differentiated / individualised. As its advocates never cease to remind us, adaptive learning holds out the promise of moving away from a one-size-fits-all approach. The extent to which it can keep this promise is another matter, of course. For it to do so, it will never be enough just to offer different pathways through a digitalised coursebook (or its equivalent). Much, much more content will be needed: at least five or six times the content of a one-size-fits-all coursebook. At the moment, there is little evidence of the necessary investment into content being made (quite the opposite, in fact), but the idea remains powerful nevertheless.

When it comes to philosophical compatibility, adaptive learning begins to run into difficulties. Despite the decades of edging towards more communicative approaches in language teaching, research (e.g. the research into English teaching in Turkey described in a previous post) suggests that teachers still see explanation and explication as key functions of their jobs. They believe that they know their students best and they know what is best for them. Big data adaptive learning challenges these beliefs head on. It is no doubt for this reason that companies like Knewton make such a point of claiming that their technology is there to help teachers. But Jose Ferreira doth protest too much, methinks. Platform-delivered adaptive learning is a direct threat to teachers’ professionalism, their salaries and their jobs.

Occupational realism is more problematic still. Very, very few language teachers around the world have any experience of truly blended learning, and it’s very difficult to envisage precisely what it is that the teacher should be doing in a classroom. Publishers moving towards larger-scale blended adaptive materials know that this is a big problem, and are actively looking at ways of packaging teacher training / teacher development (with a specific focus on blended contexts) into the learner-facing materials that they sell. But the problem won’t go away. Education ministries have a long history of throwing money at technological ‘solutions’ without thinking about obtaining the necessary buy-in from their employees. It is safe to predict that this is unlikely to change. Moreover, learning how to become a blended teacher is much harder than learning, say, how to make good use of an interactive whiteboard. Since there are as many different blended adaptive approaches as there are different educational contexts, there cannot be (irony of ironies) a one-size-fits-all approach to training teachers to make good use of this software.

Finally, how transportable is big data adaptive learning? Not very, is the short answer, and for the same reasons that ‘occupational realism’ is highly problematic.

Looking at things through Jack Schneider’s lens, we might be tempted to come to the conclusion that the future for adaptive learning is a rocky path, at best. But Schneider doesn’t take political or economic considerations into account. Sternberg’s ‘Triarchic Theory’ never had the OECD or the Gates Foundation backing it up. It never had millions and millions of dollars of investment behind it. As we know from political elections (and the big data adaptive learning issue is a profoundly political one), big bucks can buy opinions.

It may also prove to be the case that the opinions of teachers don’t actually matter much. If the big adaptive bucks can win the educational debate at the highest policy-making levels, teachers will be the first victims of the ‘creative disruption’ that adaptivity promises. If you don’t believe me, just look at what is going on in the U.S.

There are causes for concern, but I don’t want to sound too alarmist. Nobody really has a clue whether big data adaptivity will actually work in language learning terms. It remains more of a theory than a research-endorsed practice. And to end on a positive note, regardless of how sticky it proves to be, it might just provide the shot-in-the-arm realisation that language teachers, at their best, are a lot more than competent explainers of grammar or deliverers of gap-fills.

(This post won’t make a lot of sense unless you read the previous one – Researching research: part 1!)

I suggested in the previous post that the research of Jayaprakash et al had confirmed something that we already knew concerning the reasons why some students drop out of college. However, predictive analytics are only part of the story. As the authors of this paper point out, they ‘do not influence course completion and retention rates without being combined with effective intervention strategies aimed at helping at-risk students succeed’. The point of predictive analytics is to facilitate the deployment of effective and appropriate intervention strategies, and to do this sooner than would be possible without the use of the analytics. So, it is to these intervention strategies that I now turn.

Interventions to help at-risk students included the following:

  • Sending students messages to inform them that they are at risk of not completing the course (‘awareness messaging’)
  • Making students more aware of the available academic support services (which could, for example, direct them to a variety of campus-based or online resources)
  • Promoting peer-to-peer engagement (e.g. with an online ‘student lounge’ discussion forum)
  • Providing access to self-assessment tools

The design of these interventions was based on the work that had been done at Purdue, which was, in turn, inspired by the work of Vince Tinto, one of the world’s leading experts on student retention issues.

The work done at Purdue had shown that simple notifications to students that they were at risk could have a significant, and positive, effect on student behaviour. Jayaprakash and the research team took the students who had been identified as at-risk by the analytics and divided them into three groups: the first were issued with ‘awareness messages’, the second were offered a combination of the other three interventions in the bullet point list above, and the third, a control group, had no interventions at all. The results showed that the students who were in treatment groups (of either kind of intervention) showed a statistically significant improvement compared to those who received no treatment at all. However, there seemed to be no difference in the effectiveness of the different kinds of intervention.

So far, so good, but, once again, I was left thinking that I hadn’t really learned very much from all this. But then, in the last five pages, the article suddenly got very interesting. Remember that the primary purpose of this whole research project was to find ways of helping not just at-risk students, but specifically socioeconomically disadvantaged at-risk students (such as those receiving Pell Grants). Accordingly, the researchers then focussed on this group. What did they find?

Once again, interventions proved more effective at raising student scores than no intervention at all. However, the averages of final scores are inevitably affected by drop-out rates (since students who drop out do not have final scores which can be included in the averages). At Purdue, the effect of interventions on drop-out rates had not been found to be significant. Remember that Purdue has a relatively well-off student demographic. However, in this research, which focussed on colleges with a much higher proportion of students on Pell Grants, the picture was very different. Of the Pell Grant students who were identified as at-risk and who were given some kind of treatment, 25.6% withdrew from the course. Of the Pell Grant students who were identified as at-risk but who were not ‘treated’ in any way (i.e. those in the control group), only 14.1% withdrew from the course. I recommend that you read those numbers again!
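For readers who want to check just how striking that gap is, a two-proportion z-test can be run on the withdrawal rates. The paper’s exact group sizes are not reproduced in this post, so the sample sizes below are purely illustrative assumptions:

```python
import math

# Two-proportion z-test on the withdrawal rates quoted above.
# n1 and n2 are hypothetical group sizes, not figures from the paper.

def two_proportion_z(p1, n1, p2, n2):
    """z statistic for the difference between two independent proportions."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# 25.6% of 'treated' at-risk Pell students withdrew vs 14.1% of controls.
z = two_proportion_z(0.256, 500, 0.141, 250)
print(round(z, 2))  # -> 3.6, comfortably beyond the 1.96 threshold at p < .05
```

With groups of any realistic size, a difference of this magnitude is very unlikely to be noise, which is what makes the finding so uncomfortable.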

The research programme had resulted in substantially higher drop-out rates for socioeconomically disadvantaged students – the precise opposite of what it had set out to achieve. Jayaprakash et al devote one page of their article to the ethical issues this raises. They suggest that early intervention, resulting in withdrawal, might actually be to the benefit of some students who were going to fail whatever happened. It is better to get a ‘W’ (withdrawal) grade on your transcript than an ‘F’ (fail), and you may avoid wasting your money at the same time. This may be true, but it would be equally true that not allowing at-risk students (who, of course, are disproportionately from socioeconomically disadvantaged backgrounds) into college at all might also be to their ‘benefit’. The question, though, is: who has the right to make these decisions on behalf of other people?

The authors also acknowledge another ethical problem. The predictive analytics which will prompt the interventions are not 100% accurate. 85% accuracy could be considered a pretty good figure. This means that some students who are not at-risk are labelled as at-risk, and others who are at-risk are not identified. Of these two possibilities, I find the first far more worrying. We are talking about the very real possibility of individual students being pushed into making potentially life-changing decisions on the basis of dodgy analytics. How ethical is that? The authors’ conclusion is that the situation forces them ‘to develop the most accurate predictive models possible, as well as to take steps to reduce the likelihood that any intervention would result in the unnecessary withdrawal of a student’.
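A back-of-the-envelope calculation shows why that first possibility matters so much. All the numbers below are illustrative assumptions, not figures from the paper, but they show how an impressive-sounding accuracy figure translates into mislabelled students once base rates are taken into account:

```python
# How '85% accuracy' plays out against a base rate of genuinely at-risk
# students. All figures are illustrative assumptions, not from the paper.

students = 1000
at_risk = 200        # assume a 20% base rate of genuinely at-risk students
sensitivity = 0.85   # share of at-risk students correctly flagged
specificity = 0.85   # share of not-at-risk students correctly cleared

true_positives = round(sensitivity * at_risk)                      # 170 flagged, rightly
false_positives = round((1 - specificity) * (students - at_risk))  # 120 flagged, wrongly
false_negatives = at_risk - true_positives                         # 30 missed entirely

precision = true_positives / (true_positives + false_positives)
print(false_positives, round(precision, 2))  # -> 120 0.59
```

In this sketch, roughly four out of every ten students flagged as at-risk are not at risk at all, and every one of them is a candidate for an ‘intervention’ they never needed.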

I find this extraordinary. It is premised on the assumption that predictive models can be made much, much more accurate. They seem to be confusing prediction and predeterminism. A predictive model is, by definition, only predictive. There will always be error. How many errors are ethically justifiable? And, the desire to reduce the likelihood of unnecessary withdrawals is a long way from the need to completely eliminate the likelihood of unnecessary withdrawals, which seems to me to be the ethical position. More than anything else in the article, this sentence illustrates that the a priori assumption is that predictive analytics can be a force for good, and that the only real problem is getting the science right. If a number of young lives are screwed up along the way, we can at least say that science is getting better.

In the authors’ final conclusion, they describe the results of their research as ‘promising’. They do not elaborate on who it is promising for. They say that relatively simple intervention strategies can positively impact student learning outcomes, but they could equally well have said that relatively simple intervention strategies can negatively impact learning outcomes. They could have said that predictive analytics and intervention programmes are fine for the well-off, but more problematic for the poor. Remembering once more that the point of the study was to look at the situation of socioeconomically disadvantaged at-risk students, it is striking that there is no mention of this group in the researchers’ eight concluding points. The vast bulk of the paper is devoted to technical descriptions of the design and training of the software; the majority of the conclusions are about the validity of that design and training. The ostensibly intended beneficiaries have got lost somewhere along the way.

How and why is it that a piece of research such as this can so positively slant its results? In the third and final part of this mini-series, I will turn my attention to answering that question.

Back in the Neanderthal days before Web 2.0, iPhones, tablets, the cloud, learning analytics and so on, Chris Bigum and Jane Kenway wrote a paper called ‘New Information Technologies and the Ambiguous Future of Schooling’. Although published in 1998, it remains relevant and can be accessed here.

They analysed the spectrum of discourse that was concerned with new technologies in education. At one end of this spectrum was a discourse community which they termed ‘boosters’. Then, as now, the boosters were far and away the dominant voices. Bigum and Kenway characterized the boosters as having an ‘unswerving faith in the technology’s capacity to improve education and most other things in society’. I discussed the boosterist discourse in my post on this blog, ‘Saving the World (adaptive marketing)’, focussing on the language of Knewton, as a representative example.

At the other end of Bigum and Kenway’s spectrum was what they termed ‘doomsters’ – ‘unqualified opponents of new technologies’ who see inevitable damage to society and education if we uncritically accept these new technologies.

Since starting this blog, I have been particularly struck by two things. The first of these is that I have had to try to restrain my aversion to the excesses of boosterist discourse – not always, it must be said, with complete success. The second is that I have found myself characterized by some people (perhaps those who have only superficially read a post or two) as an anti-technology doomsterist. At the same time, I have noticed that the debate about adaptive learning and educational technology, in general, tends to become polarized into booster and doomster camps.

To some extent, such polarization is inevitable. When a discourse is especially dominant, anyone who questions it risks finding themselves labelled as the extreme opposite. In some parts of the world, for example, any critique of neoliberal doxa is likely to be critiqued, in its turn, as ‘socialist, or worse’: ‘if you’re not with us, you’re against us’.

When it comes to adaptive learning, one can scoff at the adspeak of Knewton or the gapfills of Voxy, without having a problem with the technology per se. But, given the dominance of the booster discourse, one can’t really be neutral. Neil Selwyn (yes, him again!) suggests that the best way of making full sense of educational technology is to adopt a pessimistic perspective. ‘If nothing else,’ he writes, ‘a pessimistic view remains true to the realities of what has actually taken place with regards to higher education and digital technology over the past thirty years (to be blunt, things have clearly not been transformed or improved by digital technology so far, so why should we expect anything different in the near future?)’. This is not an ‘uncompromising pessimism’, but ‘a position akin to Gramsci’s notion of being ‘a pessimist because of intelligence, but an optimist because of will’’.

Note: The quotes from Neil Selwyn here are taken from his new book Digital Technology and the Contemporary University (2014, Abingdon: Routledge). In the autumn of this year, there will be an online conference, jointly organised by the Learning Technologies and Global Issues Special Interest Groups of IATEFL, during which I will be interviewing Neil Selwyn. I’ll keep you posted.

In Part 9 of the ‘guide’ on this blog (neo-liberalism and solutionism), I suggested that the major advocates of adaptive learning form a complex network of vested neo-liberal interests. Along with adaptive learning and the digital delivery of educational content, they promote a free-market, for-profit, ‘choice’-oriented (charter schools in the US and academies in the UK) ideology. The discourses of these advocates are explored in a fascinating article by Neil Selwyn, ‘Discourses of digital ‘disruption’ in education: a critical analysis’ which can be accessed here.

Stephen Ball includes a detailed chart of this kind of network in his ‘Global Education Inc.’ (Routledge 2012). I thought it would be interesting to attempt a similar, but less ambitious, chart of my own. Sugata Mitra’s plenary talk at the IATEFL conference yesterday has generated a lot of discussion, so I thought it would be interesting to focus on him. What such charts demonstrate very clearly is that there is a very close interlinking between EdTech advocacy and a wider raft of issues on the neo-liberal wish list. Adaptive learning developments (or, for example, schools in the cloud) need to be understood in a broader context … in the same way that Mitra, Tooley, Gates et al understand these technologies.

In order to understand the chart, you will need to look at the notes below. Many more nodes could be introduced, but I have tried my best to keep things simple. All of the information here is publicly available, but I found Stephen Ball’s work especially helpful.

mitra chart


Bill Gates is the former chief executive and chairman of Microsoft, and co-chair of the Bill and Melinda Gates Foundation.

James Tooley is the Director of the E.G. West Centre. He is a founder of the Educare Trust, founder and chairman of Omega Schools, president of Orient Global, chairman of Rumi School of Excellence, and a former consultant to the International Finance Corporation. He is also a member of the advisory council of the Institute of Economic Affairs and was responsible for creating the Education and Training Unit at the Institute.

Michael Barber is Pearson’s Chief Education Advisor and Chairman of Pearson’s $15 million Affordable Learning Fund. He is also an advisor on ‘deliverology’ to the International Finance Corporation.

Sugata Mitra is Professor of Educational Technology at the E.G. West Centre and he is Chief Scientist, Emeritus, at NIIT. He is best known for his “Hole in the Wall” experiment. In 2013, he won the $1 million TED Prize to develop his idea of a ‘school-in-the-cloud’.


Hiwel (Hole-in-the-Wall Education Limited) is the company behind Mitra’s “Hole in the Wall” experiment. It is a subsidiary of NIIT.

NIIT Limited is an Indian company, based in Gurgaon, that operates several for-profit higher education institutions.

Omega Schools is a privately held chain of affordable, for-profit schools based in Ghana. There are currently 38 schools educating over 20,000 students.

Orient Global is a Singapore-based investment group, which bought a $48 million stake in NIIT.

Pearson is … Pearson. Pearson’s Affordable Learning Fund was set up to invest in private companies committed to innovative approaches. Its first investment was a stake in Omega Schools.

Rumi Schools of Excellence is Orient Global’s chain of low-cost private schools in India, which aims to extend access and improve educational quality through affordable private schooling.

School-in-the-cloud is described by Mitra as ‘a learning lab in India, where children can embark on intellectual adventures by engaging and connecting with information and mentoring online’. Microsoft are the key sponsors.

The E.G. West Centre of the University of Newcastle is dedicated to generating knowledge and understanding about how markets and self-organising systems work in education.

The Educare Trust is a non-profit agency, formed in 2002 by Professor James Tooley of the University of Newcastle Upon Tyne, England, and other members associated with private unaided schools in India. It is advised by an international team from the University of Newcastle. Its services include the running of a loan scheme for schools to improve their infrastructure and facilities.

The Institute of Economic Affairs is a right-wing free market think tank in London whose stated mission is to improve understanding of the fundamental institutions of a free society by analysing and expounding the role of markets in solving economic and social problems.

The International Finance Corporation is an international financial institution which offers investment, advisory, and asset management services to encourage private sector development in developing countries. The IFC is a member of the World Bank Group.

The Templeton Foundation is a philanthropic organization that funds inter-disciplinary research about human purpose and ultimate reality. Described by Barbara Ehrenreich as a ‘right wing venture’, it has a history of supporting the Cato Institute (publishers of Tooley’s most well-known book), a libertarian think-tank, as well as projects at major research centers and universities that explore themes related to free market economics.

Additional connections

Barber is an old friend of Tooley’s from when both men were working in Zimbabwe in the 1990s.

Omega Schools are taking part in Sugata Mitra’s TED Prize Schools in the Cloud project.

Omega Schools use textbooks developed by Pearson.

Orient Global sponsored an Education Development fund at Newcastle University. The project leaders were Tooley and Mitra. They also sponsored the Hole-in-the-Wall experiment.

Pearson, the Pearson Foundation, Microsoft and the Gates Foundation work closely together on a wide variety of projects.

Some of Tooley’s work for the Educare Trust was funded by the Templeton Trust. Tooley was also winner of the 2006 Templeton Freedom Prize for Excellence.

The International Finance Corporation and the Gates Foundation are joint sponsors of a $60 million project to improve health in Nigeria.

The International Finance Corporation was another sponsor of the Hole-in-the-Wall experiment.

Adaptive learning is likely to impact on the lives of language teachers very soon. In my work as a writer of education materials, it has already dramatically impacted on mine. This impact has affected the kinds of things I am asked to write, the way in which I write them and my relationship with the editors and publishers I am writing for. I am as dismissive as Steve Jobs[1] was of the idea that technology can radically transform education, but in the short term it can radically disrupt it. Change is not necessarily progress.

Teachers and teacher trainers need to be very alert to what is going on if they don’t want to wake up one morning and find themselves out of work, or in a very different kind of job. The claims for adaptive language learning need to be considered in the bright light of particular, local contexts. Teachers and teacher trainers can even take a lesson from the proponents of adaptive learning who rail against the educational approach of one-size-fits-all. One size, whether it’s face-to-face with a print coursebook or whether it’s a blended adaptive program, will never fit all. We need to be very skeptical of the publishers and software providers who claim in a TED-style, almost evangelical way that they are doing the right thing for students, our society, or our world. There is a real risk that adaptive learning may be leading simply to ‘a more standardised, minimalist product targeted for a mass market, [that] will further ‘box in’ and ‘dumb down’ education’ (Selwyn, Education and Technology 2011, p.101).

There is nothing wrong, per se, with adaptive learning. It could be put to some good uses, but how likely is this? In order to understand how it may impact on our working lives, we need to be better informed. A historical perspective is often a good place to start and Larry Cuban’s Teachers and Machines: The Classroom Use of Technology since 1920 (New York: Teachers College Press, 1986) is still well worth reading.


To get a good picture of where big data and analytics are now and where they are heading, Mayer-Schonberger & Cukier’s Big Data (London: John Murray, 2013) is informative and entertaining reading. If you are ‘an executive looking to integrate analytics in your decision making or a manager seeking to generate better conversations with the quants in your organisation’, I’d recommend Keeping up with the Quants by Thomas H. Davenport and Jinho Kim (Harvard Business School, 2013). Or you could just read ‘The Economist’ for this kind of thing.

If you want to follow up the connections between educational technology and neo-liberalism, the books by Stephen Ball (Global Education Inc., Abingdon, Oxon: Routledge, 2012), Neil Selwyn (Education and Technology, London: Continuum, 2011; Education in a Digital World, New York: Routledge, 2013; Distrusting Educational Technology, New York: Routledge, 2013), Diane Ravitch (Reign of Error, New York: Knopf, 2013) and Joel Spring (Education Networks, New York: Routledge, 2012; The Great American Education-Industrial Complex with Anthony G. Picciano, Routledge, 2013) are all good reads. And keep a look out for anything new from these writers.

Finally, to keep up to date with recent developments, the eltjam blog is a good one to follow, as is Richard Whiteside’s page.

I’ll be continuing to post things here from time to time! Thanks for following me so far.

[1] Jobs, however, did set his sights ‘on the $8 billion a year textbook industry, which he saw as ‘ripe for digital destruction’. His first instinct seems to have been to relieve kids from having to carry around heavy backpacks crammed with textbooks: ‘The iPad would solve that,’ he said, ever practical’ (Fullan, Stratosphere 2013, p.61).

There is a good chance that many readers will have only the haziest idea of what adaptive learning is. There is a much better chance that most English language teachers, especially those working in post-secondary education, will feel the impact of adaptive learning on their professional lives in the next few years. According to Time magazine, it is a ‘hot concept, embraced by education reformers’, which is ‘poised to reshape education’[1]. According to the educational news website, Education Dive, there is ‘no hotter segment in ed tech right now’[2]. All the major ELT publishers are moving away from traditional printed coursebooks towards the digital delivery of courses that will contain adaptive learning elements. Their investments in the technology are colossal. Universities in many countries, especially the US, are moving in the same direction, again with huge investments. National and regional governments, intergovernmental organisations (such as UNESCO, the OECD, the EU and the World Bank), big business and hugely influential private foundations (such as the Bill and Melinda Gates Foundation) are all lined up in support of the moves towards the digital delivery of education, which (1) will inevitably involve elements of adaptive learning, and (2) will inevitably impact massively on the world of English language teaching.

The next 13 posts will, together, form a guide to adaptive learning in ELT.

1 Introduction

2 Simple models of adaptive learning

3 Gamification

4 Big data, analytics and adaptive learning

5 Platforms and more complex adaptive learning systems

6 The selling points of adaptive learning

7 Ten predictions for the future

8 Theory, research and practice

9 Neo-liberalism and solutionism

10 Learn more