
by Philip Kerr & Andrew Wickham

from IATEFL 2016 Birmingham Conference Selections (ed. Tania Pattison). Faversham, Kent: IATEFL, pp. 75–78

ELT publishing, international language testing and private language schools are all industries: products are produced, bought and sold for profit. English language teaching (ELT) is not. It is an umbrella term that is used to describe a range of activities, some of which are industries, and some of which (such as English teaching in high schools around the world) might better be described as public services. ELT, like education more generally, is, nevertheless, often referred to as an ‘industry’.

Education in a neoliberal world

The framing of ELT as an industry is both a reflection of how we understand the term and a force that shapes our understanding. Associated with the idea of ‘industry’ is a constellation of other ideas and words (such as efficacy, productivity, privatization, marketization, consumerization, digitalization and globalization) which become a part of ELT once it is framed as an industry. Repeated often enough, ‘ELT as an industry’ can become a metaphor that we think and live by. Those activities that fall under the ELT umbrella, but which are not industries, become associated with the desirability of industrial practices through such discourse.

The shift from education, seen as a public service, to educational managerialism (where education is seen in industrial terms with a focus on efficiency, free market competition, privatization and a view of students as customers) can be traced to the 1980s and 1990s (Gewirtz, 2001). In 1999, under pressure from developed economies, the General Agreement on Trade in Services (GATS) transformed education into a commodity that could be traded like any other in the marketplace (Robertson, 2006). The global industrialisation and privatization of education continues to be promoted by transnational organisations (such as the World Bank and the OECD), well-funded free-market think-tanks (such as the Cato Institute), philanthro-capitalist foundations (such as the Gates Foundation) and educational businesses (such as Pearson) (Ball, 2012).

Efficacy and learning outcomes

Managerialist approaches to education require educational products and services to be measured and compared. In ELT, the most visible manifestation of this requirement is the current ubiquity of learning outcomes. Contemporary coursebooks are full of ‘can-do’ statements, although these are not necessarily of any value to anyone. Examples from one unit of one best-selling course include ‘Now I can understand advice people give about hotels’ and ‘Now I can read an article about unique hotels’ (McCarthy et al. 2014: 74). However, in a world where accountability is paramount, they are deemed indispensable. The problem from a pedagogical perspective is that teaching input does not necessarily equate with learning uptake. Indeed, there is no reason why it should.

Drawing on the Common European Framework of Reference for Languages (CEFR) for inspiration, new performance scales have emerged in recent years. These include the Cambridge English Scale and the Pearson Global Scale of English. Moving away from the six broad levels of the CEFR, such scales permit finer-grained measurement and we now see individual vocabulary and grammar items tagged to levels. Whilst such initiatives undoubtedly support measurements of efficacy, the problem from a pedagogical perspective is that they assume that language learning is linear and incremental, as opposed to complex and jagged.

Given the importance accorded to the measurement of language learning (or what might pass for language learning), it is unsurprising that attention is shifting towards the measurement of what is probably the most important factor impacting on learning: the teaching. Teacher competency scales have been developed by Cambridge Assessment, the British Council and EAQUALS (Evaluation and Accreditation of Quality Language Services), among others.

The backwash effects of the deployment of such scales are yet to be fully experienced, but the likely increase in the perception of both language learning and teacher learning as the synthesis of granularised ‘bits of knowledge’ is cause for concern.

Digital technology

Digital technology may offer advantages to both English language teachers and learners, but its rapid growth in language learning is the result, primarily but not exclusively, of the way it has been promoted by those who stand to gain financially. In education, generally, and in English language teaching, more specifically, advocacy of the privatization of education is always accompanied by advocacy of digitalization. The global market for digital English language learning products was reported to be $2.8 billion in 2015 and is predicted to reach $3.8 billion by 2020 (Ambient Insight, 2016).

In tandem with the increased interest in measuring learning outcomes, there is fierce competition in the market for high-stakes examinations, and these are increasingly digitally delivered and marked. In the face of this competition and in a climate of digital disruption, companies like Pearson and Cambridge English are developing business models of vertical integration where they can provide and sell everything from placement testing to courseware (either print or delivered through an LMS), teaching, assessment and teacher training. Huge investments are being made in pursuit of such models. Pearson, for example, recently bought GlobalEnglish and Wall Street English, and set up a partnership with Busuu, thus covering all aspects of language learning from resource provision and publishing to off- and online training delivery.

As regards assessment, the most recent adult coursebook from Cambridge University Press (in collaboration with Cambridge English Language Assessment), ‘Empower’ (Doff et al., 2015), sells itself on a combination of course material with integrated, validated assessment.

Besides its potential for scalability (and therefore greater profit margins), the appeal (to some) of platform-delivered English language instruction is that it facilitates assessment that is much finer-grained and actionable in real time. Digitization and testing go hand in hand.

Few English language teachers have been unaffected by the move towards digital. In the state sectors, large-scale digitization initiatives (such as the distribution of laptops for educational purposes, the installation of interactive whiteboards, the move towards blended models of instruction or the move away from printed coursebooks) are becoming commonplace. In the private sectors, online (or partially online) language schools are taking market share from the traditional bricks-and-mortar institutions.

These changes have entailed modifications to the skill-sets that teachers need to have. Two announcements at this conference reflect this shift. First of all, Cambridge English launched their ‘Digital Framework for Teachers’, a matrix of six broad competency areas organised into four levels of proficiency. Secondly, Aqueduto, the Association for Quality Education and Training Online, was launched, setting itself up as an accreditation body for online or blended teacher training courses.

Teachers’ pay and conditions

In the United States, and likely soon in the UK, the move towards privatization is accompanied by an overt attack on teachers’ unions, rights, pay and conditions (Selwyn, 2014). As English language teaching in both public and private sectors is commodified and marketized, it is no surprise to find that the drive to bring down costs has a negative impact on teachers worldwide. Gwynt (2015), for example, catalogues cuts in funding, large-scale redundancies, a narrowing of the curriculum, intensified workloads (including the need to comply with ‘quality control measures’), the deskilling of teachers, dilapidated buildings, minimal resources and low morale in an ESOL department in one British further education college. In France, a large-scale study by Wickham, Cagnol, Wright and Oldmeadow (Linguaid, 2015; Wright, 2016) found that EFL teachers in the very competitive private sector typically had multiple employers, limited or no job security, limited sick pay and holiday pay, very little training and low hourly rates that were deteriorating. One of the principal drivers of the pressure on salaries is the rise of online training delivery through Skype and other online platforms, using offshore teachers in low-cost countries such as the Philippines. This type of training represents 15% in value and up to 25% in volume of all language training in the French corporate sector and is developing fast in emerging countries. These examples are illustrative of a broad global trend.

Implications

Given the current climate, teachers will benefit from closer networking with fellow professionals in order, not least, to be aware of the rapidly changing landscape. It is likely that they will need to develop and extend their skill sets (especially their online skills and visibility and their specialised knowledge), to differentiate themselves from competitors and to be able to demonstrate that they are in tune with current demands. More generally, it is important to recognise that current trends have yet to run their full course. Conditions for teachers are likely to deteriorate further before they improve. More than ever before, teachers who want to have any kind of influence on the way that marketization and industrialization are shaping their working lives will need to do so collectively.

References

Ambient Insight. 2016. The 2015-2020 Worldwide Digital English Language Learning Market. http://www.ambientinsight.com/Resources/Documents/AmbientInsight_2015-2020_Worldwide_Digital_English_Market_Sample.pdf

Ball, S. J. 2012. Global Education Inc. Abingdon, Oxon.: Routledge

Doff, A., Thaine, C., Puchta, H., Stranks, J. and P. Lewis-Jones 2015. Empower. Cambridge: Cambridge University Press

Gewirtz, S. 2001. The Managerial School: Post-welfarism and Social Justice in Education. Abingdon, Oxon.: Routledge

Gwynt, W. 2015. ‘The effects of policy changes on ESOL’. Language Issues 26 / 2: 58 – 60

McCarthy, M., McCarten, J. and H. Sandiford 2014. Touchstone 2 Student’s Book Second Edition. Cambridge: Cambridge University Press

Linguaid. 2015. Le Marché de la Formation Langues à l’Heure de la Mondialisation. Guildford: Linguaid

Robertson, S. L. 2006. ‘Globalisation, GATS and trading in education services’. Bristol: Centre for Globalisation, Education and Societies, University of Bristol. http://www.bris.ac.uk/education/people/academicStaff/edslr/publications/04slr

Selwyn, N. 2014. Distrusting Educational Technology. New York: Routledge

Wright, R. 2016. ‘My teacher is rich … or not!’ English Teaching Professional 103: 54 – 56

 

 

About two and a half years ago, when I started writing this blog, there was a lot of hype around adaptive learning and the big data which might drive it. Two and a half years is a long time in technology. A look at Google Trends suggests that interest in adaptive learning has been pretty static for the last couple of years. It’s interesting to note that 3 of the 7 lettered points on this graph are Knewton-related media events (including the most recent, A, which is Knewton’s latest deal with Hachette) and 2 of them concern McGraw-Hill. It would be interesting to know whether these companies follow both parts of Simon Cowell’s dictum of ‘Create the hype, but don’t ever believe it’.

[Google Trends graph: interest in ‘adaptive learning’ over time, with lettered media events]

A look at the Hype Cycle (see here for Wikipedia’s entry on the topic and for criticism of the hype of Hype Cycles) of the IT research and advisory firm, Gartner, indicates that both big data and adaptive learning have now slid into the ‘trough of disillusionment’, which means that the market has started to mature, becoming more realistic about how useful the technologies can be for organizations.

A few years ago, the Gates Foundation, one of the leading cheerleaders and financial promoters of adaptive learning, launched its Adaptive Learning Market Acceleration Program (ALMAP) to ‘advance evidence-based understanding of how adaptive learning technologies could improve opportunities for low-income adults to learn and to complete postsecondary credentials’. It’s striking that the program’s aims referred to how such technologies could lead to learning gains, not whether they would. Now, though, with the publication of a report commissioned by the Gates Foundation to analyze the data coming out of the ALMAP Program, things are looking less rosy. The report is inconclusive. There is no firm evidence that adaptive learning systems are leading to better course grades or course completion. ‘The ultimate goal – better student outcomes at lower cost – remains elusive’, the report concludes. Rahim Rajan, a senior program officer for Gates, is clear: ‘There is no magical silver bullet here.’

The same conclusion is being reached elsewhere. A report for the National Education Policy Center (in Boulder, Colorado) concludes: ‘Personalized Instruction, in all its many forms, does not seem to be the transformational technology that is needed, however. After more than 30 years, Personalized Instruction is still producing incremental change. The outcomes of large-scale studies and meta-analyses, to the extent they tell us anything useful at all, show mixed results ranging from modest impacts to no impact. Additionally, one must remember that the modest impacts we see in these meta-analyses are coming from blended instruction, which raises the cost of education rather than reducing it’ (Enyedy, 2014: 15; see reference at the foot of this post). In the same vein, a recent academic study by Meg Coffin Murray and Jorge Pérez (2015, ‘Informing and Performing: A Study Comparing Adaptive Learning to Traditional Learning’) found that ‘adaptive learning systems have negligible impact on learning outcomes’.

In the latest educational technology plan from the U.S. Department of Education (‘Future Ready Learning: Reimagining the Role of Technology in Education’, 2016) the only mentions of the word ‘adaptive’ are in the context of testing. And the latest OECD report on ‘Students, Computers and Learning: Making the Connection’ (2015), finds, more generally, that information and communication technologies, when they are used in the classroom, have, at best, a mixed impact on student performance.

There is, however, too much money at stake for the earlier hype to disappear completely. Sponsored cheerleading for adaptive systems continues to find its way into blogs and national magazines and newspapers. EdSurge, for example, recently published a report called ‘Decoding Adaptive’ (2016), sponsored by Pearson, that continues to wave the flag. Enthusiastic anecdotes take the place of evidence, but, for all that, it’s a useful read.

In the world of ELT, there are plenty of sales people who want new products which they can call ‘adaptive’ (and gamified, too, please). But it’s striking that three years after I started following the hype, such products are rather thin on the ground. Pearson was the first of the big names in ELT to do a deal with Knewton, and invested heavily in the company. Their relationship remains close. But, to the best of my knowledge, the only truly adaptive ELT product that Pearson offers is the PTE test.

Macmillan signed a contract with Knewton in May 2013 ‘to provide personalized grammar and vocabulary lessons, exam reviews, and supplementary materials for each student’. In December of that year, they talked up their new ‘big tree online learning platform’: ‘Look out for the Big Tree logo over the coming year for more information as to how we are using our partnership with Knewton to move forward in the Language Learning division and create content that is tailored to students’ needs and reactive to their progress.’ I’ve been looking out, but it’s all gone rather quiet on the adaptive / platform front.

In September 2013, it was the turn of Cambridge to sign a deal with Knewton ‘to create personalized learning experiences in its industry-leading ELT digital products for students worldwide’. This year saw the launch of a major new CUP series, ‘Empower’. It has an online workbook with personalized extra practice, but there’s nothing (yet) that anyone would call adaptive. More recently, Cambridge has launched the online version of the 2nd edition of Touchstone. Nothing adaptive there, either.

Earlier this year, Cambridge published The Cambridge Guide to Blended Learning for Language Teaching, edited by Mike McCarthy. It contains a chapter by M.O.Z. San Pedro and R. Baker on ‘Adaptive Learning’. It’s an enthusiastic account of the potential of adaptive learning, but it doesn’t contain a single reference to language learning or ELT!

So, what’s going on? Skepticism is becoming the order of the day. The early hype of people like Knewton’s Jose Ferreira is now understood for what it was. Companies like Macmillan got their fingers badly burnt when they barked up the wrong tree with their ‘Big Tree’ platform.

Noel Enyedy captures a more contemporary understanding when he writes: ‘Personalized Instruction is based on the metaphor of personal desktop computers—the technology of the 80s and 90s. Today’s technology is not just personal but mobile, social, and networked. The flexibility and social nature of how technology infuses other aspects of our lives is not captured by the model of Personalized Instruction, which focuses on the isolated individual’s personal path to a fixed end-point. To truly harness the power of modern technology, we need a new vision for educational technology’ (Enyedy, 2014: 16).

Adaptive solutions aren’t going away, but there is now a much better understanding of what sorts of problems might have adaptive solutions. Testing is certainly one. As the educational technology plan from the U.S. Department of Education (‘Future Ready Learning: Reimagining the Role of Technology in Education’, 2016) puts it: ‘Computer adaptive testing, which uses algorithms to adjust the difficulty of questions throughout an assessment on the basis of a student’s responses, has facilitated the ability of assessments to estimate accurately what students know and can do across the curriculum in a shorter testing session than would otherwise be necessary.’ In ELT, Pearson and EF have adaptive tests that have been well researched and designed.
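The basic mechanism of computer adaptive testing is easy enough to sketch. What follows is a toy item-selection loop, not a description of how Pearson’s or EF’s tests (or any operational CAT engine) actually work: real systems typically use Item Response Theory to estimate ability after each response and to choose the most informative next item, whereas this simply steps the difficulty up or down. The simulate_learner function is an invented stand-in for a real test-taker.

```python
import random

def simulate_learner(ability):
    """Invented stand-in for a test-taker: succeeds more often on items
    below their ability level (purely illustrative)."""
    def answer(difficulty):
        p_correct = 1 / (1 + 2 ** (difficulty - ability))
        return random.random() < p_correct
    return answer

def run_adaptive_test(answer, num_items=10, start_level=3, levels=(1, 5)):
    """Toy computer-adaptive test: a one-step up/down 'staircase'.
    Operational CAT engines typically use Item Response Theory to estimate
    ability after each response and to pick the most informative next item;
    here the difficulty just rises after a correct answer and falls after
    an incorrect one."""
    lo, hi = levels
    level = start_level
    history = []
    for _ in range(num_items):
        correct = answer(level)
        history.append((level, correct))
        level = min(level + 1, hi) if correct else max(level - 1, lo)
    # crude ability estimate: average difficulty of correctly answered items
    solved = [lvl for lvl, ok in history if ok]
    return sum(solved) / len(solved) if solved else lo

print(run_adaptive_test(simulate_learner(ability=3.5)))
```

Even in this crude form, the appeal is obvious: fewer items are needed to place a learner than in a fixed-form test, which is precisely the efficiency the Department of Education plan is pointing to.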

Vocabulary apps which deploy adaptive technology continue to become more sophisticated, although empirical research is lacking. Automated writing tutors with adaptive corrective feedback are also developing fast, and I’ll be writing a post about these soon. Similarly, as speech recognition software improves, we can expect to see better and better automated adaptive pronunciation tutors. But going beyond such applications, there are bigger questions to ask, and answers to these will impact on whatever direction adaptive technologies take. Large platforms (LMSs), with or without adaptive software, are already beginning to look rather dated. Will they be replaced by integrated apps, or are apps themselves going to be replaced by bots (currently riding high in the Hype Cycle)? In language learning and teaching, the future of bots is likely to be shaped by developments in natural language processing (another topic about which I’ll be blogging soon). Nobody really has a clue where the next two and a half years will take us (if anywhere), but it’s becoming increasingly likely that adaptive learning will be only one very small part of it.

 

Enyedy, N. 2014. Personalized Instruction: New Interest, Old Rhetoric, Limited Results, and the Need for a New Direction for Computer-Mediated Learning. Boulder, CO: National Education Policy Center. Retrieved 17.07.16 from http://nepc.colorado.edu/publication/personalized-instruction

‘Sticky’ – as in ‘sticky learning’ or ‘sticky content’ (as opposed to ‘sticky fingers’ or a ‘sticky problem’) – is itself fast becoming a sticky word. If you check out ‘sticky learning’ on Google Trends, you’ll see that it suddenly spiked in September 2011, following the slightly earlier appearance of ‘sticky content’. The historical rise in this use of the word coincides with the exponential growth in the number of references to ‘big data’.

I am often asked if adaptive learning really will take off as a big thing in language learning. Will adaptivity itself be a sticky idea? When the question is asked, people mean the big data variety of adaptive learning, rather than the much more limited adaptivity of spaced repetition algorithms, which, I think, is firmly here, and here to stay. I can’t answer the question with any confidence, but I recently came across a book which suggests a useful way of approaching the question.
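As an aside, the spaced repetition variety of adaptivity is simple enough to sketch in a few lines. The snippet below is loosely based on the SM-2 algorithm that many flashcard apps adapt; it is an illustration of the general idea, not any particular product’s scheduler, and real apps tune the constants and layer their own heuristics on top.

```python
def next_interval(quality, interval_days, ease, reps):
    """One review step of an SM-2-style spaced repetition scheduler.
    quality: self-assessed recall on a 0-5 scale.
    Returns (new_interval_days, new_ease_factor, new_repetition_count).
    The constants are the published SM-2 defaults."""
    if quality < 3:                      # forgotten: review again tomorrow
        return 1, ease, 0
    # the 'ease factor' stretches intervals for items the learner finds easy
    ease = max(1.3, ease + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    if reps == 0:
        interval_days = 1
    elif reps == 1:
        interval_days = 6
    else:
        interval_days = round(interval_days * ease)
    return interval_days, ease, reps + 1

# a card recalled well three times in a row drifts out to roughly two weeks
interval, ease, reps = 0, 2.5, 0
for quality in (5, 5, 4):
    interval, ease, reps = next_interval(quality, interval, ease, reps)
    print(interval)   # 1, 6, then 16 days
```

Big data adaptivity is a different beast altogether, and it is that which the book in question helped me to think about.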

‘From the Ivory Tower to the Schoolhouse’ by Jack Schneider (Harvard Education Press, 2014) investigates the reasons why promising ideas from education research fail to get taken up by practitioners, and why other, less-than-promising ideas, from a research or theoretical perspective, become sticky quite quickly. As an example of the former, Schneider considers Robert Sternberg’s ‘Triarchic Theory’. As an example of the latter, he devotes a chapter to Howard Gardner’s ‘Multiple Intelligences Theory’.

Schneider argues that educational ideas need to possess four key attributes in order for teachers to sit up, take notice and adopt them.

  1. perceived significance: the idea must answer a question central to the profession – offering a big-picture understanding rather than merely one small piece of a larger puzzle
  2. philosophical compatibility: the idea must clearly jibe with closely held [teacher] beliefs like the idea that teachers are professionals, or that all children can learn
  3. occupational realism: it must be possible for the idea to be put easily into immediate use
  4. transportability: the idea needs to find its practical expression in a form that teachers can access and use at the time that they need it – it needs to have a simple core that can travel through pre-service coursework, professional development seminars, independent study and peer networks

To what extent does big data adaptive learning possess these attributes? It certainly comes up trumps with respect to perceived significance. The big question that it attempts to answer is the question of how we can make language learning personalized / differentiated / individualised. As its advocates never cease to remind us, adaptive learning holds out the promise of moving away from a one-size-fits-all approach. The extent to which it can keep this promise is another matter, of course. For it to do so, it will never be enough just to offer different pathways through a digitalised coursebook (or its equivalent). Much, much more content will be needed: at least five or six times the content of a one-size-fits-all coursebook. At the moment, there is little evidence of the necessary investment into content being made (quite the opposite, in fact), but the idea remains powerful nevertheless.

When it comes to philosophical compatibility, adaptive learning begins to run into difficulties. Despite the decades of edging towards more communicative approaches in language teaching, research (e.g. the research into English teaching in Turkey described in a previous post), suggests that teachers still see explanation and explication as key functions of their jobs. They believe that they know their students best and they know what is best for them. Big data adaptive learning challenges these beliefs head on. It is no doubt for this reason that companies like Knewton make such a point of claiming that their technology is there to help teachers. But Jose Ferreira doth protest too much, methinks. Platform-delivered adaptive learning is a direct threat to teachers’ professionalism, their salaries and their jobs.

Occupational realism is more problematic still. Very, very few language teachers around the world have any experience of truly blended learning, and it’s very difficult to envisage precisely what it is that the teacher should be doing in a classroom. Publishers moving towards larger-scale blended adaptive materials know that this is a big problem, and are actively looking at ways of packaging teacher training / teacher development (with a specific focus on blended contexts) into the learner-facing materials that they sell. But the problem won’t go away. Education ministries have a long history of throwing money at technological ‘solutions’ without thinking about obtaining the necessary buy-in from their employees. It is safe to predict that this is something that is unlikely to change. Moreover, learning how to become a blended teacher is much harder than learning, say, how to make good use of an interactive whiteboard. Since there are as many different blended adaptive approaches as there are different educational contexts, there cannot be (irony of ironies) a one-size-fits-all approach to training teachers to make good use of this software.

Finally, how transportable is big data adaptive learning? Not very, is the short answer, and for the same reasons that ‘occupational realism’ is highly problematic.

Looking at things through Jack Schneider’s lens, we might be tempted to come to the conclusion that the future for adaptive learning is a rocky path, at best. But Schneider doesn’t take political or economic considerations into account. Sternberg’s ‘Triarchic Theory’ never had the OECD or the Gates Foundation backing it up. It never had millions and millions of dollars of investment behind it. As we know from political elections (and the big data adaptive learning issue is a profoundly political one), big bucks can buy opinions.

It may also prove to be the case that the opinions of teachers don’t actually matter much. If the big adaptive bucks can win the educational debate at the highest policy-making levels, teachers will be the first victims of the ‘creative disruption’ that adaptivity promises. If you don’t believe me, just look at what is going on in the U.S.

There are causes for concern, but I don’t want to sound too alarmist. Nobody really has a clue whether big data adaptivity will actually work in language learning terms. It remains more of a theory than a research-endorsed practice. And to end on a positive note, regardless of how sticky it proves to be, it might just provide the shot-in-the-arm realisation that language teachers, at their best, are a lot more than competent explainers of grammar or deliverers of gap-fills.

(This post won’t make a lot of sense unless you read the previous two – Researching research: part 1 and part 2!)

The work of Jayaprakash et al was significantly informed and inspired by the work done at Purdue University. In the words of these authors, they even ‘relied on [the] work at Purdue with Course Signals’ for parts of the design of their research. They didn’t know when they were doing their research that the Purdue studies were fundamentally flawed. This was, however, common knowledge (since September 2013) before their article (‘Early Alert of Academically At-Risk Students’) was published. This raises the interesting question of why the authors (and the journal in which they published) didn’t pull the article when they could still have done so. I can’t answer that question, but I can suggest some possible reasons. First, though, a little background on the Purdue research.

The Purdue research is important, more than important, because it was the first significant piece of research to demonstrate the efficacy of academic analytics. Except that, in all probability, it doesn’t! Michael Caulfield, director of blended and networked learning at Washington State University at Vancouver, and Alfred Essa, McGraw-Hill Education’s vice-president of research and development and analytics, took a closer look at the data. What they found was that the results were probably an artefact of selection bias rather than a real finding. In other words, as summarized by Carl Straumsheim in Inside Higher Ed in November of last year, there was no causal connection between students who use [Course Signals] and their tendency to stick with their studies. The Times Higher Education and the e-Literate blog contacted Purdue, but, to date, there has been no serious response to the criticism. The research is still on Purdue’s website.

The Purdue research article, ‘Course Signals at Purdue: Using Learning Analytics to Increase Student Success’ by Kimberley Arnold and Matt Pistilli, was first published as part of the proceedings of the Learning Analytics and Knowledge (LAK) conference in May 2012. The LAK conference is organised by the Society for Learning Analytics Research (SoLAR), in partnership with Purdue. SoLAR, you may remember, is the organisation which published the new journal in which Jayaprakash et al’s article appeared. Pistilli happens to be an associate editor of the journal. Jayaprakash et al also presented at the LAK ’12 conference. Small world.

The Purdue research was further publicized by Pistilli and Arnold in the Educause review. Their research had been funded by the Gates Foundation (a grant of $1.2 million in November 2011). Educause, in its turn, is also funded by the Gates Foundation (a grant of $9 million in November 2011). The research of Jayaprakash et al was also funded by Educause, which stipulated that ‘effective techniques to improve student retention be investigated and demonstrated’ (my emphasis). Given the terms of their grant, we can perhaps understand why they felt the need to claim they had demonstrated something.

What exactly is Educause, which plays such an important role in all of this? According to their own website, it is a non-profit association whose mission is to advance higher education through the use of information technology. However, it is rather more than that. It is also a lobbying and marketing umbrella for edtech. The following screenshot from their website makes this abundantly clear.

[Screenshot from the Educause website]

If you’ll bear with me, I’d like to describe one more connection between the various players I’ve been talking about. Purdue’s Course Signals is marketed by a company called Ellucian. Ellucian’s client list includes both Educause and the Gates Foundation. A former Senior Vice President of Ellucian, Anne K Keehn, is currently ‘Senior Fellow - Technology and Innovation, Education, Post-Secondary Success’ at the Gates Foundation – presumably the sort of person to whom you’d have to turn if you wanted funding from the Gates Foundation. Small world.

Personal, academic and commercial networks are intricately intertwined in the high-stakes world of edtech. In such a world (not so very different from the pharmaceutical industry), independent research is practically impossible. The pressure to publish positive research results must be extreme. The temptation to draw conclusions of the kind that your paymasters are looking for must be high. The edtech juggernaut must keep rolling on.

While the big money will continue to go, for the time being, into further attempts to prove that big data is the future of education, there are still some people who are interested in alternatives. Coincidentally (?), a recent survey  has been carried out at Purdue which looks into what students think about their college experience, about what is meaningful to them. Guess what? It doesn’t have much to do with technology.

(This post won’t make a lot of sense unless you read the previous one – Researching research: part 1!)

I suggested in the previous post that the research of Jayaprakash et al had confirmed something that we already knew concerning the reasons why some students drop out of college. However, predictive analytics are only part of the story. As the authors of this paper point out, they ‘do not influence course completion and retention rates without being combined with effective intervention strategies aimed at helping at-risk students succeed’. The point of predictive analytics is to facilitate the deployment of effective and appropriate intervention strategies, and to do this sooner than would be possible without the use of the analytics. So, it is to these intervention strategies that I now turn.

Interventions to help at-risk students included the following:

  • Sending students messages to inform them that they are at risk of not completing the course (‘awareness messaging’)
  • Making students more aware of the available academic support services (which could, for example, direct them to a variety of campus-based or online resources)
  • Promoting peer-to-peer engagement (e.g. with an online ‘student lounge’ discussion forum)
  • Providing access to self-assessment tools

The design of these interventions was based on the work that had been done at Purdue, which was, in turn, inspired by the work of Vince Tinto, one of the world’s leading experts on student retention issues.

The work done at Purdue had shown that simple notifications to students that they were at risk could have a significant, and positive, effect on student behaviour. Jayaprakash and the research team took the students who had been identified as at-risk by the analytics and divided them into three groups: the first were issued with ‘awareness messages’, the second were offered a combination of the other three interventions in the bullet point list above, and the third, a control group, had no interventions at all. The results showed that the students who were in treatment groups (of either kind of intervention) showed a statistically significant improvement compared to those who received no treatment at all. However, there seemed to be no difference in the effectiveness of the different kinds of intervention.

So far, so good, but, once again, I was left thinking that I hadn’t really learned very much from all this. But then, in the last five pages, the article suddenly got very interesting. Remember that the primary purpose of this whole research project was to find ways of helping not just at-risk students, but specifically socioeconomically disadvantaged at-risk students (such as those receiving Pell Grants). Accordingly, the researchers then focussed on this group. What did they find?

Once again, interventions proved more effective at raising student scores than no intervention at all. However, the averages of final scores are inevitably affected by drop-out rates (since students who drop out do not have final scores which can be included in the averages). At Purdue, the effect of interventions on drop-out rates had not been found to be significant. Remember that Purdue has a relatively well-off student demographic. However, in this research, which focussed on colleges with a much higher proportion of students on Pell Grants, the picture was very different. Of the Pell Grant students who were identified as at-risk and who were given some kind of treatment, 25.6% withdrew from the course. Of the Pell Grant students who were identified as at-risk but who were not ‘treated’ in any way (i.e. those in the control group), only 14.1% withdrew from the course. I recommend that you read those numbers again!

The research programme had resulted in substantially higher drop-out rates for socioeconomically disadvantaged students – the precise opposite of what it had set out to achieve. Jayaprakash et al devote one page of their article to the ethical issues this raises. They suggest that early intervention, resulting in withdrawal, might actually be to the benefit of some students who were going to fail whatever happened. It is better to get a ‘W’ (withdrawal) grade on your transcript than an ‘F’ (fail), and you may avoid wasting your money at the same time. This may be true, but it would be equally true that not allowing at-risk students (who, of course, are disproportionately from socioeconomically disadvantaged backgrounds) into college at all might also be to their ‘benefit’. The question, though, is: who has the right to make these decisions on behalf of other people?

The authors also acknowledge another ethical problem. The predictive analytics which will prompt the interventions are not 100% accurate. 85% accuracy could be considered a pretty good figure. This means that some students who are not at-risk are labelled as at-risk, and others who are at-risk are not identified. Of these two possibilities, I find the first far more worrying. We are talking about the very real possibility of individual students being pushed into making potentially life-changing decisions on the basis of dodgy analytics. How ethical is that? The authors’ conclusion is that the situation forces them ‘to develop the most accurate predictive models possible, as well as to take steps to reduce the likelihood that any intervention would result in the necessary withdrawal of a student’.

I find this extraordinary. It is premised on the assumption that predictive models can be made much, much more accurate. They seem to be confusing prediction and predeterminism. A predictive model is, by definition, only predictive. There will always be error. How many errors are ethically justifiable? And, the desire to reduce the likelihood of unnecessary withdrawals is a long way from the need to completely eliminate the likelihood of unnecessary withdrawals, which seems to me to be the ethical position. More than anything else in the article, this sentence illustrates that the a priori assumption is that predictive analytics can be a force for good, and that the only real problem is getting the science right. If a number of young lives are screwed up along the way, we can at least say that science is getting better.
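To put rough numbers on ‘there will always be error’, here is a back-of-the-envelope sketch. The cohort size and at-risk rate are invented, and the 85% figure is treated crudely as both sensitivity and specificity, which the paper does not actually claim; the point is simply that a respectable-sounding accuracy can still mislabel a lot of real individuals.

```python
# Illustrative numbers only: 1,000 students, 20% of whom are genuinely at risk,
# with the model treated as 85% sensitive and 85% specific.
students = 1000
at_risk_rate = 0.20
sensitivity = specificity = 0.85

at_risk = students * at_risk_rate            # 200 genuinely at-risk students
not_at_risk = students - at_risk             # 800 who are not

true_positives = at_risk * sensitivity             # 170 correctly flagged
false_negatives = at_risk - true_positives         # 30 missed entirely
false_positives = not_at_risk * (1 - specificity)  # 120 wrongly flagged

precision = true_positives / (true_positives + false_positives)
print(f"Wrongly flagged as at-risk: {false_positives:.0f}")
print(f"At-risk but never flagged: {false_negatives:.0f}")
print(f"Share of flagged students who are genuinely at-risk: {precision:.0%}")  # about 59%
```

In this invented cohort, roughly two in five of the students flagged as at-risk were never at risk at all, and each of them is a candidate for a potentially life-changing ‘intervention’.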

In the authors’ final conclusion, they describe the results of their research as ‘promising’. They do not elaborate on who it is promising for. They say that relatively simple intervention strategies can positively impact student learning outcomes, but they could equally well have said that relatively simple intervention strategies can negatively impact learning outcomes. They could have said that predictive analytics and intervention programmes are fine for the well-off, but more problematic for the poor. Remembering once more that the point of the study was to look at the situation of socioeconomically disadvantaged at-risk students, it is striking that there is no mention of this group in the researchers’ eight concluding points. The vast bulk of the paper is devoted to technical descriptions of the design and training of the software; the majority of the conclusions are about the validity of that design and training. The ostensibly intended beneficiaries have got lost somewhere along the way.

How and why is it that a piece of research such as this can so positively slant its results? In the third and final part of this mini-series, I will turn my attention to answering that question.

In the 8th post on this blog (‘Theory, Research and Practice’), I referred to the lack of solid research into learning analytics. Whilst adaptive learning enthusiasts might disagree with much, or even most, of what I have written on this subject, here, at least, was an area of agreement. May of this year, however, saw the launch of the inaugural issue of the Journal of Learning Analytics, the first journal ‘dedicated to research into the challenges of collecting, analysing and reporting data with the specific intent to improve learning’. It is a peer-reviewed, open-access journal, available here, which is published by the Society for Learning Analytics Research (SoLAR), a consortium of academics from 9 universities in the US, Canada, Britain and Australia.

I decided to take a closer look. In this and my next two posts, I will focus on one article from this inaugural issue. It’s called Early Alert of Academically At‐Risk Students: An Open Source Analytics Initiative and it is co-authored by Sandeep M. Jayaprakash, Erik W. Moody, Eitel J.M. Lauría, James R. Regan, and Joshua D. Baron of Marist College in the US. Bear with me, please – it’s more interesting than it might sound!

The background to this paper is the often referred to problem of college drop-outs in the US, and the potential of learning analytics to address what is seen as a ‘national challenge’. The most influential work that has been done in this area to date was carried out at Purdue University. Purdue developed an analytical system, called Course Signals, which identified students at risk of course failure and offered a range of interventions (more about these in the next post) which were designed to improve student outcomes. I will have more to say about the work at Purdue in my third post, but, for the time being, it is enough to say that, in the field, it has been considered very successful, and that the authors of the paper I looked at have based their approach on the work done at Purdue.

Jayaprakash et al developed their own analytical system, based on Purdue’s Course Signals, and used it at their own institution, Marist College. Basically, they wanted to know if they could replicate the good results that had been achieved at Purdue. They then took the same analytical system to four different institutions, of very different kinds (public, as opposed to private; community colleges offering 2-year programmes rather than universities) to see if the results could be replicated there, too. They also wanted to find out if the interventions with students who had been signalled as at-risk would be as effective as they had been at Purdue. So far, so good: it is clearly very important to know if one particular piece of research has any significance beyond its immediate local context.

So, what did Jayaprakash et al find out? Basically, they learnt that their software worked as well at Marist as Course Signals had done at Purdue. They collected data on student demographics and aptitude, course grades and course related data, data on students’ interactions with the LMS they were using and performance data captured by the LMS. Oh, yes, and absenteeism. At the other institutions where they trialled their software, the system was 10% less accurate in predicting drop-outs, but the authors of the research still felt that ‘predictive models developed based on data from one institution may be scalable to other institutions’.

But more interesting than the question of whether or not the predictive analytics worked is the question of which specific features of the data were the most powerful predictors. What they discovered was that absenteeism was highly significant. No surprises there. They also learnt that the other most powerful predictors were (1) the students’ cumulative grade point average (GPA), an average of a student’s academic scores over their entire academic career, and (2) the scores recorded by the LMS of the work that students had done during the course which would contribute to their final grade. No surprises there, either. As the authors point out, ‘given that these two attributes are such fundamental aspects of academic success, it is not surprising that the predictive model has fared so well across these different institutions’.

Agreed, it is not surprising at all that students with lower scores and a history of lower scores are more likely to drop out of college than students with higher scores. But, I couldn’t help wondering, do we really need sophisticated learning analytics to tell us this? Wouldn’t any teacher know this already? They would, of course, if they knew their students, but if the teacher:student ratio is in the order of 1:100 (not unheard of in lower-funded courses delivered primarily through an LMS), many teachers (and their students) might benefit from automated alert systems.
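For anyone who wants to picture what sits behind such an alert system, here is a minimal sketch in the spirit of the paper’s approach, not the authors’ actual implementation. The three predictors follow the ones they report (cumulative GPA, partial scores recorded in the LMS, and absenteeism); the data, the model and the threshold are all invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
gpa = rng.uniform(1.0, 4.0, n)        # cumulative grade point average
lms_score = rng.uniform(0, 100, n)    # partial contributions to the final grade
absences = rng.poisson(3, n)          # recorded absenteeism
# invented 'ground truth': low GPA, low LMS scores and more absences raise the risk
risk = 2.5 - 1.0 * gpa - 0.02 * lms_score + 0.3 * absences
dropped_out = (rng.random(n) < 1 / (1 + np.exp(-risk))).astype(int)

X = np.column_stack([gpa, lms_score, absences])
X_train, X_test, y_train, y_test = train_test_split(X, dropped_out, random_state=0)

model = LogisticRegression().fit(X_train, y_train)
print("accuracy on held-out students:", model.score(X_test, y_test))
# students whose predicted drop-out probability exceeds a threshold get an alert
at_risk_flags = model.predict_proba(X_test)[:, 1] > 0.5
print("students flagged for intervention:", int(at_risk_flags.sum()))
```

Nothing in a model like this is mysterious: it simply formalises the common-sense observation that low grades and absences go together with dropping out, and attaches a probability to it.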

But back to the differences between the results at Purdue and Marist and at the other institutions. Why were the predictive analytics less successful at the latter? The answer is in the nature of the institutions. Essentially, it boils down to this. In institutions with low drop-out rates, the analytics are more reliable than in institutions with high drop-out rates, because the more at-risk students there are, the harder it is to predict the particular individuals who will actually drop out. Jayaprakash et al provide the key information in a useful table. Students at Marist College are relatively well-off (only 16% receive Pell Grants, which are awarded to students in financial need), and only a small number (12%) are from ‘ethnic minorities’. The rate of course non-completion in normal time is relatively low (at 20%). In contrast, at one of the other institutions, the College of the Redwoods in California, 44% of the students receive Pell Grants and 22% of them are from ‘ethnic minorities’. The non-completion rate is a staggering 96%. At Savannah State University, 78% of the students receive Pell Grants, and the non-completion rate is 70%. The table also shows the strong correlation between student poverty and high student:faculty ratios.

In other words, the poorer you are, the less likely you are to complete your course of study, and the less likely you are to know your tutors (these two factors also correlate). In other other words, the whiter you are, the more likely you are to complete your course of study (because of the strong correlations between race and poverty). While we are playing the game of statistical correlations, let’s take it a little further. As the authors point out, ‘there is considerable evidence that students with lower socio-economic status have lower GPAs and graduation rates’. If, therefore, GPAs are one of the most significant predictors of academic success, we can say that socio-economic status (and therefore race) is one of the most significant predictors of academic success … even if the learning analytics do not capture this directly.

Actually, we have known this for a long time. The socio-economic divide in education is frequently cited as one of the big reasons for moving towards digitally delivered courses. This particular piece of research was funded (more about this in the next posts) with the stipulation that it ‘investigated and demonstrated effective techniques to improve student retention in socio-economically disadvantaged populations’. We have also known for some time that digitally delivered education increases the academic divide between socio-economic groups. So what we now have is a situation where a digital technology (learning analytics) is being used as a partial solution to a problem that has always been around, but which has been exacerbated by the increasing use of another digital technology (LMSs) in education. We could say, then, that if we weren’t using LMSs, learning analytics would not be possible … but we would need them less, anyway.

My next post will look at the results of the interventions with students that were prompted by the alerts generated by the learning analytics. Advance warning: it will make what I have written so far seem positively rosy.

In Part 9 of the ‘guide’ on this blog (neo-liberalism and solutionism), I suggested that the major advocates of adaptive learning form a complex network of vested neo-liberal interests. Along with adaptive learning and the digital delivery of educational content, they promote a free-market, for-profit, ‘choice’-oriented (charter schools in the US and academies in the UK) ideology. The discourses of these advocates are explored in a fascinating article by Neil Selwyn, ‘Discourses of digital ‘disruption’ in education: a critical analysis’ which can be accessed here.

Stephen Ball includes a detailed chart of this kind of network in his ‘Global Education Inc.’ (Routledge 2012). I thought it would be interesting to attempt a similar, but less ambitious, chart of my own. Sugata Mitra’s plenary talk at the IATEFL conference yesterday has generated a lot of discussion, so I thought it would be interesting to focus on him. What such charts demonstrate very clearly is that there is a very close interlinking between EdTech advocacy and a wider raft of issues on the neo-liberal wish list. Adaptive learning developments (or, for example, schools in the cloud) need to be understood in a broader context … in the same way that Mitra, Tooley, Gates et al understand these technologies.

In order to understand the chart, you will need to look at the notes below. Many more nodes could be introduced, but I have tried my best to keep things simple. All of the information here is publicly available, but I found Stephen Ball’s work especially helpful.

[Chart: the network of people and institutions connected to Sugata Mitra, detailed in the notes below]

People

Bill Gates is the former chief executive and chairman of Microsoft, and co-chair of the Bill and Melinda Gates Foundation.

James Tooley is the Director of the E.G. West Centre. He is a founder of the Educare Trust, founder and chairman of Omega Schools, president of Orient Global, chairman of Rumi Schools of Excellence, and a former consultant to the International Finance Corporation. He is also a member of the advisory council of the Institute of Economic Affairs and was responsible for creating the Education and Training Unit at the Institute.

Michael Barber is Pearson’s Chief Education Advisor and Chairman of Pearson’s $15 million Affordable Learning Fund. He is also an advisor on ‘deliverology’ to the International Finance Corporation.

Sugata Mitra is Professor of Educational Technology at the E.G. West Centre and he is Chief Scientist, Emeritus, at NIIT. He is best known for his “Hole in the Wall” experiment. In 2013, he won the $1 million TED Prize to develop his idea of a ‘school-in-the-cloud’.

Institutions

Hiwel (Hole-in-the-Wall Education Limited) is the company behind Mitra’s “Hole in the Wall” experiment. It is a subsidiary of NIIT.

NIIT Limited is an Indian company based in Gurgaon, India that operates several for-profit higher education institutions.

Omega Schools is a privately held chain of affordable, for-profit schools based in Ghana. There are currently 38 schools educating over 20,000 students.

Orient Global is a Singapore-based investment group, which bought a $48 million stake in NIIT.

Pearson is … Pearson. Pearson’s Affordable Learning Fund was set up to invest in private companies committed to innovative approaches. Its first investment was a stake in Omega Schools.

Rumi Schools of Excellence is Orient Global’s chain of low-cost private schools in India, which aims to extend access and improve educational quality through affordable private schooling.

School-in-the-cloud is described by Mitra as ‘a learning lab in India, where children can embark on intellectual adventures by engaging and connecting with information and mentoring online’. Microsoft are the key sponsors.

The E.G. West Centre of the University of Newcastle is dedicated to generating knowledge and understanding about how markets and self organising systems work in education.

The Educare Trust is a non-profit agency, formed in 2002 by Professor James Tooley of the University of Newcastle upon Tyne, England, and other members associated with private unaided schools in India. It is advised by an international team from the University of Newcastle. Its services include the running of a loan scheme for schools to improve their infrastructure and facilities.

The Institute of Economic Affairs is a right-wing free market think tank in London whose stated mission is to improve understanding of the fundamental institutions of a free society by analysing and expounding the role of markets in solving economic and social problems.

The International Finance Corporation is an international financial institution which offers investment, advisory, and asset management services to encourage private sector development in developing countries. The IFC is a member of the World Bank Group.

The Templeton Foundation is a philanthropic organization that funds inter-disciplinary research about human purpose and ultimate reality. Described by Barbara Ehrenreich as a ‘right wing venture’, it has a history of supporting the Cato Institute (publishers of Tooley’s most well-known book), a libertarian think-tank, as well as projects at major research centers and universities that explore themes related to free market economics.

Additional connections

Barber is an old friend of Tooley’s from when both men were working in Zimbabwe in the 1990s.

Omega Schools are taking part in Sugata Mitra’s TED Prize Schools in the Cloud project.

Omega Schools use textbooks developed by Pearson.

Orient Global sponsored an Education Development fund at Newcastle University. The project leaders were Tooley and Mitra. They also sponsored the Hole-in-the-Wall experiment.

Pearson, the Pearson Foundation, Microsoft and the Gates Foundation work closely together on a wide variety of projects.

Some of Tooley’s work for the Educare Trust was funded by the Templeton Trust. Tooley was also winner of the 2006 Templeton Freedom Prize for Excellence.

The International Finance Corporation and the Gates Foundation are joint sponsors of a $60 million project to improve health in Nigeria.

The International Finance Corporation was another sponsor of the Hole-in-the-Wall experiment.

Personalization is one of the key leitmotifs in current educational discourse. The message is clear: personalization is good, one-size-fits-all is bad. ‘How to personalize learning and how to differentiate instruction for diverse classrooms are two of the great educational challenges of the 21st century,’ write Trilling and Fadel, leading lights in the Partnership for 21st Century Skills (P21)[1]. Barack Obama has repeatedly sung the praises of, and the need for, personalized learning, and his policies are fleshed out by his Secretary of Education, Arne Duncan, in speeches and on the White House blog: ‘President Obama described the promise of personalized learning when he launched the ConnectED initiative last June. Technology is a powerful tool that helps create robust personalized learning environments.’ In the UK, personalized learning has been government mantra for over 10 years. The EU, UNESCO, OECD, the Gates Foundation – everyone, it seems, is singing the same tune.

Personalization, we might all agree, is a good thing. How could it be otherwise? No one these days is going to promote depersonalization or impersonalization in education. What exactly it means, however, is less clear. According to a UNESCO Policy Brief[2], the term was first used in the context of education in the 1970s by Víctor García Hoz, a senior Spanish educationalist and member of Opus Dei at the University of Madrid. This UNESCO document then points out that ‘unfortunately, up to this date there is no single definition of this concept’.

In ELT, the term has been used in a very wide variety of ways. These range from the far-reaching ideas of people like Gertrude Moskowitz, who advocated a fundamentally learner-centred form of instruction, to the much more banal practice of getting students to produce a few personalized examples of an item of grammar they have just studied. See Scott Thornbury’s A-Z blog for an interesting discussion of personalization in ELT.

As with education in general, and ELT in particular, ‘personalization’ is also bandied around the adaptive learning table. Duolingo advertises itself as the opposite of one-size-fits-all, and as an online equivalent of the ‘personalized education you can get from a small classroom teacher or private tutor’. Babbel offers a ‘personalized review manager’ and Rosetta Stone’s Classroom online solution allows educational institutions ‘to shift their language program away from a ‘one-size-fits-all-curriculum’ to a more individualized approach’. As far as I can tell, the personalization in these examples is extremely restricted. The language syllabus is fixed and although users can take different routes up the ‘skills tree’ or ‘knowledge graph’, they are totally confined by the pre-determination of those trees and graphs. This is no more personalized learning than asking students to make five true sentences using the present perfect. Arguably, it is even less!

This is not, in any case, the kind of personalization that Obama, the Gates Foundation, Knewton, et al have in mind when they conflate adaptive learning with personalization. Their definition is much broader and summarised in the US National Education Technology Plan of 2010: ‘Personalized learning means instruction is paced to learning needs, tailored to learning preferences, and tailored to the specific interests of different learners. In an environment that is fully personalized, the learning objectives and content as well as the method and pace may all vary (so personalization encompasses differentiation and individualization).’ What drives this is the big data generated by the students’ interactions with the technology (see ‘Part 4: big data and analytics’ of ‘The Guide’ on this blog).

What remains unclear is exactly how this might work in English language learning. Adaptive software can only personalize to the extent that the content of an English language learning programme allows it to do so. It may be true that each student using adaptive software ‘gets a more personalised experience no matter whose content the student is consuming’, as Knewton’s David Liu puts it. But the potential for any really meaningful personalization depends crucially on the nature and extent of this content, along with the possibility of variable learning outcomes. For this reason, we are not likely to see any truly personalized large-scale adaptive learning programs for English any time soon.

Nevertheless, technology is now central to personalized language learning. A good learning platform, which allows learners to connect to ‘social networking systems, podcasts, wikis, blogs, encyclopedias, online dictionaries, webinars, online English courses, various apps’, etc (see Alexandra Chistyakova’s eltdiary), makes personalization easier to achieve.

For the time being, at least, adaptive learning systems would seem to work best for ‘those things that can be easily digitized and tested like math problems and reading passages’, writes Barbara Bray. Or low-level vocabulary and grammar McNuggets, we might add. Ideal for, say, ‘English Grammar in Use’. But meaningfully personalized language learning?

‘Personalized learning’ sounds very progressive, a utopian educational horizon, and it sounds like it ought to be the future of ELT (as Cleve Miller argues). It also sounds like a pretty good slogan on which to hitch the adaptive bandwagon. But somehow, just somehow, I suspect that when it comes to adaptive learning we’re more likely to see more testing, more data collection and more depersonalization.

[1] Trilling, B. & Fadel, C. 2009. 21st Century Skills (San Francisco: Wiley) p. 33

[2] Personalized learning: a new ICT-enabled education approach, UNESCO Institute for Information Technologies in Education, Policy Brief March 2012 iite.unesco.org/pics/publications/en/files/3214716.pdf

 

There is a lot that technology can do to help English language learners develop their reading skills. The internet makes it possible for learners to read an almost limitless number of texts that will interest them, and these texts can be evaluated for readability and, therefore, suitability for level (see here for a useful article). RSS opens up exciting possibilities for narrow reading, and the positive impact of multimedia-enhanced texts was researched many years ago. There are good online bilingual dictionaries and other translation tools. There are apps that go with graded readers (see this review in the Guardian) and there are apps that can force you to read at a certain speed. And there is more. All of this could very effectively be managed on a good learning platform.
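
How might a text be ‘evaluated for readability’? One widely used measure is the Flesch Reading Ease score, which needs nothing more than sentence, word and syllable counts. The sketch below is a rough illustration of the idea, not the method used by any of the tools mentioned above, and its syllable counter is deliberately naive.

```python
import re

def count_syllables(word):
    """Very rough vowel-group heuristic; real tools use pronunciation data."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text):
    """Flesch Reading Ease: higher scores indicate easier texts."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    n_words = max(1, len(words))
    return 206.835 - 1.015 * (n_words / sentences) - 84.6 * (syllables / n_words)

print(round(flesch_reading_ease("The cat sat on the mat. It was warm."), 1))
```

Scores like these can then be mapped, very approximately, onto levels, which is how a platform might match texts to learners.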

Could adaptive software add another valuable element to reading skills development?

Adaptive reading programs are spreading in US primary education and, with some modifications, could be used in ELT courses for younger learners and for learners whose first language does not use the Roman alphabet. One of the best known has been developed by Lexia Learning®, a company that won a $500,000 grant from the Gates Foundation last year. Lexia Learning® was bought by Rosetta Stone® for $22.5 million in June 2013.

One of their products, Lexia Reading Core5, ‘provides explicit, systematic, personalized learning in the six areas of reading instruction, and delivers norm-referenced performance data and analysis without interrupting the flow of instruction to administer a test. Designed specifically to meet the Common Core and the most rigorous state standards, this research-proven, technology-based approach accelerates reading skills development, predicts students’ year-end performance and provides teachers data-driven action plans to help differentiate instruction’.

[Image: Lexia Reading Core5 screenshot showing reading broken down into skill areas]

The predictable claim that it is ‘research-proven’ has not convinced everyone. Richard Allington, a professor of literacy studies at the University of Tennessee and a past president of both the International Reading Association and the National Reading Conference, has said that all the companies that have developed this kind of software ‘come up with evidence – albeit potential evidence – that kids could improve their abilities to read by using their product. It’s all marketing. They’re selling a product. Lexia is one of these programs. But there virtually are no commercial programs that have any solid, reliable evidence that they improve reading achievement.’[1] He has argued that the $12 million that has been spent on the Lexia programs would have been better spent on a national program, developed at Ohio State University, that matches specially trained reading instructors with students known to have trouble learning to read.

But what about ELT? For an adaptive program like Lexia’s to work, reading skills need to be broken down in a similar way to the diagram shown above. Let’s get some folk linguistics out of the way first. The sub-skills of reading are not skimming, scanning, inferring meaning from context, etc. These are strategies that readers adopt voluntarily in order to understand a text better. If a reader uses these strategies in their own language, they are likely to transfer them to their English reading. It seems that ELT instruction in strategy use has only limited impact, although this kind of training may be relevant to preparation for exams. This insight is taking a long time to filter down to course and coursebook design, but there really isn’t much debate[2]. Any adaptive ELT reading program that confuses reading strategies with reading sub-skills is going to have big problems.

What, then, are the sub-skills of reading? In what ways could reading be broken down into a skill tree so that it is amenable to adaptive learning? Researchers have provided different answers. Munby (1978), for example, listed 19 reading microskills; Heaton (1988) listed 14. A bigger problem, however, is that other researchers (e.g. Lunzer 1979, Rost 1993) have failed to find evidence that distinct sub-skills actually exist. While it is easier to identify sub-skills for very low-level readers (especially for those whose own language is very different from English), it is simply not possible to do so for higher levels.

Reading in another language is a complex process which involves both top-down and bottom-up strategies, is intimately linked to vocabulary knowledge and requires the activation of background and cultural knowledge. Reading ability, in the eyes of some researchers, is unitary or holistic. Others prefer to separate things into two components: word recognition and comprehension[3]. Either way, a consensus is beginning to emerge that teachers and learners might do better to focus on vocabulary extension (and this would include extensive reading) than to attempt to develop reading programs that assume the multidivisible nature of reading.

All of which means that adaptive learning software and reading skills in ELT are unlikely bedfellows. To be sure, an increased use of technology (as described in the first paragraph of this post) in reading work will generate a lot of data about learner behaviours. Analysis of this data may lead to actionable insights, or it may not! It will be interesting to find out.

 

[1] http://www.khi.org/news/2013/jun/17/budget-proviso-reading-program-raises-questions/

[2] See, for example, Walter, C. & M. Swan. 2008. ‘Teaching reading skills: mostly a waste of time?’ in Beaven, B. (ed.) IATEFL 2008 Exeter Conference Selections. (Canterbury: IATEFL). Or go back further to Alderson, J. C. 1984 ‘Reading in a foreign language: a reading problem or a language problem?’ in J.C. Alderson & A. H. Urquhart (eds.) Reading in a Foreign Language (London: Longman)

[3] For a useful summary of these issues, see ‘Reading abilities and strategies: a short introduction’ by Feng Liu (International Education Studies 3 / 3 August 2010) www.ccsenet.org/journal/index.php/ies/article/viewFile/6790/5321

I mentioned the issue of privacy very briefly in Part 9 of the ‘Guide’, and it seems appropriate to take a more detailed look.

Adaptive learning needs big data. Without the big data, there is nothing for the algorithms to work on, and the bigger the data set, the better the software can work. Adaptive language learning will be delivered via a platform, and the data that is generated by the language learner’s interaction with the English language program on the platform is likely to be only one, very small, part of the data that the system will store and analyse. Full adaptivity requires a psychometric profile for each student.
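
As a purely hypothetical sketch of what that data might look like (the field names are invented and do not describe any real platform), every answer a learner gives could be stored as a record like the one below, and the records accumulate into a profile.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class InteractionEvent:
    """One logged interaction: which item was attempted, whether it was
    answered correctly, and how long the response took. Illustrative only."""
    student_id: str
    item_id: str            # e.g. a vocabulary or grammar item
    correct: bool
    response_time_ms: int
    timestamp: datetime = field(default_factory=datetime.utcnow)

@dataclass
class LearnerProfile:
    """Accumulated interaction history for one learner."""
    student_id: str
    events: list = field(default_factory=list)

    def log(self, event: InteractionEvent) -> None:
        self.events.append(event)

    def accuracy(self) -> float:
        if not self.events:
            return 0.0
        return sum(e.correct for e in self.events) / len(self.events)
```

A few such records per learner per lesson soon become millions of rows per school district, which is exactly the scale of aggregation discussed below.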

It would make sense, then, to aggregate as much data as possible in one place. Besides the practical value of combining different data sources at scale (in order to enhance the usefulness of the personalized learning pathways), such a move would possibly save educational authorities substantial amounts of money and allow educational technology companies to mine the goldmine of student data, along with the standardised platform specifications, to design their products.

And so it has come to pass. The Gates Foundation (yes, them again) provided most of the $100 million funding. A division of Murdoch’s News Corp built the infrastructure. Once everything was ready, a non-profit organization called inBloom was set up to run the thing. The inBloom platform is open source and the database was initially free, although this will change. Preliminary agreements were made with 7 US districts and involved millions of children. The data includes ‘students’ names, birthdates, addresses, social security numbers, grades, test scores, disability status, attendance, and other confidential information’ (Ravitch, D. ‘Reign of Error’ NY: Knopf, 2013, pp. 235–236). Under federal law, this information can be ‘shared’ with private companies selling educational technology and services.

The edtech world rejoiced. ‘This is going to be a huge win for us’, said one educational software provider; ‘it’s a godsend for us,’ said another. Others are not so happy. If the technology actually works, if it can radically transform education and ‘produce game-changing outcomes’ (as its proponents claim so often), the price to be paid might just conceivably be worth paying. But the price is high and the research is not there yet. The price is privacy.

The problem is simple. InBloom itself acknowledges that it ‘cannot guarantee the security of the information stored… or that the information will not be intercepted when it is being transmitted.’ Experience has already shown us that organisations as diverse as the CIA or the British health service cannot protect their data. Hackers like a good challenge. So do businesses.

The anti-privatization (and, by extension, the anti-adaptivity) lobby in the US has found an issue which is resonating with electors (and parents). These dissenting voices are led by Class Size Matters, and their voice is being heard. Of the original partners of inBloom, only one is now left. The others have all pulled out, mostly because of concerns about privacy. The remaining partner, New York, is nevertheless contributing personal data on 2.7 million students, which can be shared without any parental notification or consent.

This might seem like a victory for the anti-privatization / anti-adaptivity lobby, but it is likely to be only temporary. There are plenty of other companies that have their eyes on the data-mining opportunities that will be coming their way, and Obama’s ‘Race to the Top’ program means that the inBloom controversy will be only a temporary setback. ‘The reality is that it’s going to be done. It’s not going to be a little part. It’s going to be a big part. And it’s going to be put in place partly because it’s going to be less expensive than doing professional development,’ says Eva Baker of the Center for the Study of Evaluation at UCLA.

It is in this light that the debate about adaptive learning becomes hugely significant. Class Size Matters, the odd academic like Neil Selwyn or the occasional blogger like myself will not be able to reverse a trend with seemingly unstoppable momentum. But we are, collectively, in a position to influence the way these changes will take place.

If you want to find out more, check out the inBloom and Class Size Matters links. And you might like to read more from the news reports which I have used for information in this post. Of these, the second was originally published by Scientific American (owned by Macmillan, one of the leading players in ELT adaptive learning). The third and fourth are from Education Week, which is funded in part by the Gates Foundation.

http://www.reuters.com/article/2013/03/03/us-education-database-idUSBRE92204W20130303

http://www.salon.com/2013/08/01/big_data_puts_teachers_out_of_work_partner/

http://www.edweek.org/ew/articles/2014/01/08/15inbloom_ep.h33.html

http://blogs.edweek.org/edweek/marketplacek12/2013/12/new_york_battle_over_inBloom_data_privacy_heading_to_court.html