Archive for March, 2014

Let’s take a look at the business of adaptive learning from a publisher’s perspective. Not an ELT publisher, but someone a few rungs higher on the ladder with strategic responsibilities. You might not know a great deal about ELT. It is, after all, only one of a number of divisions you are responsible for, and not an especially profitable one at that. You will, however, know a lot about creative destruction, the process by which one industry is replaced by another. The decline and demise of printed magazines, newspapers and books, of book reviewers and traditional booksellers, and their replacement by digital products will be such a part of your professional knowledge that they hardly need to be mentioned. Graphs such as the one below from PricewaterhouseCoopers (PwC) will be terribly familiar. You will also be aware that the gales of creative destruction in publishing are blowing harder than ever before.

[Graph from PricewaterhouseCoopers (PwC)]

In fact, you probably owe your job to your ability to talk convincingly about creative destruction and how to profit from it. Whatever your particular strategy for the future might be, you will have noted the actions of others. You will have evaluated advice, such as the following, from Publishing Perspectives:

  • Do not delay in taking action when there are clear signals of a decline in market value.
  • Trade your low-profit print assets (even though some may have limited digital value) for core product that has a higher likelihood of success and can be exploited digitally.
  • Look for an orderly transition from print to digital product which enables a company to reinvent itself.

You will be looking to invest in technology, and prioritizing the acquisition of technological expertise (through partnerships or the purchase of start-ups) over the development of traditional ELT products. Your company will be restructured, and possibly renamed, to facilitate the necessary changes.

You will also know that big data and analytics have already transformed other industries. And you will know that educational publishing is moving towards a winner-take-all market, where ‘the best performers are able to capture a very large share of the rewards, and the remaining competitors are left with very little’ (Investopedia). Erik Brynjolfsson and Andrew McAfee’s new book, The Second Machine Age (New York: Norton, 2014), argues that ‘each time a market becomes more digital, winner-take-all economics become a little more compelling … Digitization creates winner-take-all markets because [there are] enormous economies of scale, giving the market leader a huge cost advantage and room to beat the price of any competitor while still making a good profit’ (pp.153-155).

[Cover: The Second Machine Age]

It is in this light that we need to understand the way that companies like Pearson and Macmillan are banking everything on a digital future. Laurie Harrison’s excellent blog post at eltjam summarises the Pearson position: ‘the world’s biggest education publisher is spending £150m on a total restructure which involves an immediate move to digital learning, a focus on emerging markets, and a transformation from publisher to education services provider. If the English language learning market is worth $4billion a year, then Pearson still only have a very small chunk of it. And if you’re a company as successful and ambitious as Pearson, that just isn’t good enough – so a change of direction is needed. In order to deliver this change, the company have recently announced their new senior management team.’

Adaptive learning fits the new business paradigm perfectly. If the hype is to be believed, adaptive learning will be a game-changer. ‘The shifting of education from analog to digital is a one-time event in the history of the human race. At scale, it will have as big an effect on the world as indoor plumbing or electricity,’ writes Jose Ferreira of Knewton. ‘True disruption,’ he says elsewhere, ‘happens when entrepreneurs aim big and go after a giant problem, a problem that, if solved, would usher in an era of large-scale transformation across industries and nations. … Education is the last of the information industries to move online,’ he goes on. ‘When it breaks, it breaks fast. And that’s going to happen in the next five years. All the education content will go online in the next 10 years. And textbooks will go away. … Ultimately, all learning materials will be digital and they will all be adaptive.’

Ferreira clearly knows all about creative disruption. He also knows about winner-take-all markets. ‘The question is who is going to power [the] platform,’ he writes. ‘It’s probably going to be one or two companies’. He states his ambition for Knewton very clearly: ‘Knewton’s goal is to be like Amazon Web Services for education’. ‘It’s pretty clear to us,’ he writes, ‘that there’s going to be one dominant data platform for education, the way there’s one dominant data platform for search, social media, etailing. But in education, it’s going to be even more winner-take-all; there will be a number of companies that make up the platform, like Wintel. People might make a perverse choice to use Bing for search because they don’t like Google. But no one’s going to make the choice to subject their kid to the second-best adaptive learning platform, if that means there’s a 23% structural disadvantage. The data platform industries tend to have a winner-take-all dynamic. You take that and multiply it by a very, very high-stakes product and you get an even more winner-take-all dynamic.’

What is at stake in this winner-take-all market? Over to Jose Ferreira one more time: ‘The industry is massive. It’s so massive that virtually nobody I’ve met truly grasps how big it is. It’s beyond their frame of reference. The total amount of money (both public and private) spent annually exceeds all spending, both online and offline, of every other information industry combined: that is, all media, entertainment, games, news, software, Internet and mobile media, e-tailing, etc.’

But, still, a few questions continue to nag away at me. If all of this is so certain, why does Jose Ferreira feel the need to talk about it so much? If all of this is so certain, why don’t all the ELT publishers jump on the bandwagon? What sort of track record does economic forecasting have, anyway?


There is a lot that technology can do to help English language learners develop their reading skills. The internet makes it possible for learners to read an almost limitless number of texts that will interest them, and these texts can be evaluated for readability and, therefore, suitability for level (see here for a useful article). RSS opens up exciting possibilities for narrow reading, and the positive impact of multimedia-enhanced texts was researched many years ago. There are good online bilingual dictionaries and other translation tools. There are apps that go with graded readers (see this review in the Guardian) and there are apps that can force you to read at a certain speed. And there is more. All of this could be managed very effectively on a good learning platform.
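The readability evaluation mentioned above can be sketched very simply. What follows is a minimal, illustrative implementation of the classic Flesch Reading Ease formula; the naive vowel-group syllable counter is a stand-in for the pronunciation dictionaries that serious readability tools use, and the sample sentence is invented.

```python
import re

def count_syllables(word: str) -> int:
    # Naive heuristic: count groups of consecutive vowels.
    # Real tools use pronunciation dictionaries instead.
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_reading_ease(text: str) -> float:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    # Standard Flesch Reading Ease constants; higher scores = easier texts.
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

score = flesch_reading_ease("The cat sat on the mat. It was happy.")
```

A score like this (90+ indicates a very easy text) could then be matched against a learner’s level, which is roughly what the readability services mentioned above do, albeit with more sophisticated formulas.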

Could adaptive software add another valuable element to reading skills development?

Adaptive reading programs are spreading in US primary education and, with some modifications, could be used in ELT courses for younger learners and for those whose first language does not use the Roman alphabet. One of the best-known has been developed by Lexia Learning®, a company that won a $500,000 grant from the Gates Foundation last year. Lexia Learning® was bought by Rosetta Stone® for $22.5 million in June 2013.

One of their products, Lexia Reading Core5, ‘provides explicit, systematic, personalized learning in the six areas of reading instruction, and delivers norm-referenced performance data and analysis without interrupting the flow of instruction to administer a test. Designed specifically to meet the Common Core and the most rigorous state standards, this research-proven, technology-based approach accelerates reading skills development, predicts students’ year-end performance and provides teachers data-driven action plans to help differentiate instruction’.

[Screenshot: Lexia Reading Core5]

The predictable claim that it is ‘research-proven’ has not convinced everyone. Richard Allington, a professor of literacy studies at the University of Tennessee and a past president of both the International Reading Association and the National Reading Association, has said that all the companies that have developed this kind of software ‘come up with evidence – albeit potential evidence — that kids could improve their abilities to read by using their product. It’s all marketing. They’re selling a product. Lexia is one of these programs. But there virtually are no commercial programs that have any solid, reliable evidence that they improve reading achievement.’[1] He has argued that the $12 million that has been spent on the Lexia programs would have been better spent on a national program, developed at Ohio State University, that matches specially trained reading instructors with students known to have trouble learning to read.

But what about ELT? For an adaptive program like Lexia’s to work, reading skills need to be broken down in a similar way to the diagram shown above. Let’s get some folk linguistics out of the way first. The sub-skills of reading are not skimming, scanning, inferring meaning from context, etc. These are strategies that readers adopt voluntarily in order to understand a text better. If a reader uses these strategies in their own language, they are likely to transfer these strategies to their English reading. It seems that ELT instruction in strategy use has only limited impact, although this kind of training may be relevant to preparation for exams. This insight is taking a long time to filter down to course and coursebook design, but there really isn’t much debate[2]. Any adaptive ELT reading program that confuses reading strategies with reading sub-skills is going to have big problems.

What, then, are the sub-skills of reading? In what ways could reading be broken down into a skill tree so that it is amenable to adaptive learning? Researchers have provided different answers. Munby (1978), for example, listed 19 reading microskills; Heaton (1988) listed 14. A bigger problem, however, is that other researchers (e.g. Lunzer 1979, Rost 1993) have failed to find evidence that distinct sub-skills actually exist. While it is easier to identify sub-skills for very low-level readers (especially those whose own language is very different from English), it is simply not possible to do so for higher levels.

Reading in another language is a complex process which involves both top-down and bottom-up strategies, is intimately linked to vocabulary knowledge and requires the activation of background cultural knowledge. Reading ability, in the eyes of some researchers, is unitary or holistic. Others prefer to separate things into two components: word recognition and comprehension[3]. Either way, a consensus is beginning to emerge that teachers and learners might do better to focus on vocabulary extension (and this would include extensive reading) than to attempt to develop reading programs that assume the multidivisible nature of reading.

All of which means that adaptive learning software and reading skills in ELT are unlikely bedfellows. To be sure, an increased use of technology (as described in the first paragraph of this post) in reading work will generate a lot of data about learner behaviours. Analysis of this data may lead to actionable insights, or it may not. It will be interesting to find out.

 

[1] http://www.khi.org/news/2013/jun/17/budget-proviso-reading-program-raises-questions/

[2] See, for example, Walter, C. & M. Swan. 2008. ‘Teaching reading skills: mostly a waste of time?’ in Beaven, B. (ed.) IATEFL 2008 Exeter Conference Selections. (Canterbury: IATEFL). Or go back further to Alderson, J. C. 1984 ‘Reading in a foreign language: a reading problem or a language problem?’ in J.C. Alderson & A. H. Urquhart (eds.) Reading in a Foreign Language (London: Longman)

[3] For a useful summary of these issues, see ‘Reading abilities and strategies: a short introduction’ by Feng Liu (International Education Studies 3 / 3 August 2010) www.ccsenet.org/journal/index.php/ies/article/viewFile/6790/5321

In a recent interesting post on eltjam, Cleve Miller wrote the following:

Knewton asks its publishing partners to organize their courses into a “knowledge graph” where content is mapped to an analyzable form that consists of the smallest meaningful chunks (called “concepts”), organized as prerequisites to specific learning goals. You can see here the influence of general learning theory and not SLA/ELT, but let’s not concern ourselves with nomenclature and just call their “knowledge graph” an “acquisition graph”, and call “concepts” anything else at all, say…“items”. Basically our acquisition graph could be something like the CEFR, and the items are the specifications in a completed English Profile project that detail the grammar, lexis, and functions necessary for each of the can-do’s in the CEFR. Now, even though this is a somewhat plausible scenario, it opens Knewton up to several objections, foremost the degree of granularity and linearity.
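The prerequisite structure Miller describes can be pictured as a small directed graph. The sketch below is purely illustrative: the item names are my own invention (not Knewton’s actual ‘concepts’, nor English Profile specifications), and it shows only the core idea that an item becomes available once all of its prerequisites have been mastered.

```python
# A minimal sketch of a "knowledge graph" as a prerequisite DAG.
# Item names are invented grammar points, purely for illustration.
prerequisites = {
    "present_simple": [],
    "past_simple": ["present_simple"],
    "present_perfect": ["past_simple"],
    "narrative_tenses": ["past_simple", "present_perfect"],
}

def ready_to_learn(item, mastered, graph=prerequisites):
    """An item is 'ready' when every one of its prerequisites is mastered."""
    return all(p in mastered for p in graph[item])

mastered = {"present_simple", "past_simple"}
next_items = [i for i in prerequisites
              if i not in mastered and ready_to_learn(i, mastered)]
# 'present_perfect' is now ready; 'narrative_tenses' is still blocked.
```

The objections that follow (granularity and linearity) are objections to exactly this kind of structure: language acquisition does not obviously decompose into discrete items with clean prerequisite arrows between them.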

In this post, Cleve acknowledges that, for the time being, adaptive learning may be best suited to ‘certain self-study material, some online homework, and exam prep – anywhere the language is fairly defined and the content more amenable to algorithmic micro-adaptation.’ I would agree, but its usefulness will depend on getting the knowledge graph right.

Which knowledge graph, then? Cleve suggests that it could be something like the CEFR, but it couldn’t be the CEFR itself because it is, quite simply, too vague. This was recognized by Pearson when they developed their Global Scale of English (GSE), an instrument which, they claim, can provide ‘for more granular and detailed measurements of learners’ levels than is possible with the CEFR itself, with its limited number of wide levels’. This Global Scale of English will serve as ‘the metric underlying all Pearson English learning, teaching and assessment products’, including, therefore, the adaptive products under development.

[Image: Pearson Global Scale of English]

‘As part of the GSE project, Pearson is creating an associated set of Pearson Syllabuses […]. These will help to link instructional content with assessments and to create a reference for authoring, instruction and testing.’ These syllabuses will contain grammar and vocabulary inventories which ‘will be expressed in the form of can-do statements with suggested sample exponents rather than as the prescriptive lists found in more traditional syllabuses.’ I haven’t been able to get my hands on one of these syllabuses yet: perhaps someone could help me out?

Informal feedback from writer colleagues working for Pearson suggests that, in practice, these inventories are much more prescriptive than Pearson claim, but this is hardly surprising, as the value of an inventory is precisely its more-or-less finite nature.

Until I see more, I will have to limit my observations to two documents in the public domain which are the closest we have to what might become knowledge graphs. The first of these is the British Council / EAQUALS Core Inventory for General English. Scott Thornbury, back in 2011, very clearly set out the problems with this document and, to my knowledge, the reservations he expressed have not yet been adequately answered. To be fair, this inventory was never meant to be used as a knowledge graph: ‘It is a description, not a prescription’, wrote the author (North, 2010). But presumably a knowledge graph would look much like this, and it would have the same problems. The second place where we can find what a knowledge graph might look like is English Profile, which Cleve mentions. Would English Profile work any better? Possibly not. Michael Swan’s critique of English Profile (ELTJ 68/1 January 2014 pp.89-96) asks some big questions that have yet, to my knowledge, to be answered.

Knewton’s Sally Searby has said that, for ELT, knowledge graphing needs to be ‘much more nuanced’. Her comment suggests a belief that knowledge graphing can be much more nuanced, but this is open to debate. Michael Swan quotes Prodeau, Lopez and Véronique (2012): ‘the sum of pragmatic and linguistic skills needed to achieve communicative success at each level makes it difficult, if not impossible, to find lexical and grammatical means that would characterize only one level’. He observes that ‘the problem may, in fact, simply not be soluble’.

So, what kind of knowledge graph are we likely to see? My best bet is that it would look a bit like a Headway syllabus.

I mentioned the issue of privacy very briefly in Part 9 of the ‘Guide’, and it seems appropriate to take a more detailed look.

Adaptive learning needs big data. Without the big data, there is nothing for the algorithms to work on, and the bigger the data set, the better the software can work. Adaptive language learning will be delivered via a platform, and the data that is generated by the language learner’s interaction with the English language program on the platform is likely to be only one, very small, part of the data that the system will store and analyse. Full adaptivity requires a psychometric profile for each student.

It would make sense, then, to aggregate as much data as possible in one place. Besides the practical value of massively combining different data sources (in order to enhance the usefulness of the personalized learning pathways), such a move would possibly save educational authorities substantial amounts of money and allow educational technology companies to mine the rich seam of student data, along with the standardised platform specifications, to design their products.

And so it has come to pass. The Gates Foundation (yes, them again) provided most of the $100 million funding. A division of Murdoch’s News Corp built the infrastructure. Once everything was ready, a non-profit organization called inBloom was set up to run the thing. The inBloom platform is open source and the database was initially free, although this will change. Preliminary agreements were made with 7 US districts and involved millions of children. The data includes ‘students’ names, birthdates, addresses, social security numbers, grades, test scores, disability status, attendance, and other confidential information’ (Ravitch, D. Reign of Error. New York: Knopf, 2013, pp. 235-236). Under federal law, this information can be ‘shared’ with private companies selling educational technology and services.

The edtech world rejoiced. ‘This is going to be a huge win for us’, said one educational software provider; ‘it’s a godsend for us,’ said another. Others are not so happy. If the technology actually works, if it can radically transform education and ‘produce game-changing outcomes’ (as its proponents claim so often), the price to be paid might just conceivably be worth paying. But the price is high and the research is not there yet. The price is privacy.

The problem is simple. inBloom itself acknowledges that it ‘cannot guarantee the security of the information stored… or that the information will not be intercepted when it is being transmitted.’ Experience has already shown us that organisations as diverse as the CIA or the British health service cannot protect their data. Hackers like a good challenge. So do businesses.

The anti-privatization (and, by extension, the anti-adaptivity) lobby in the US has found an issue which is resonating with electors (and parents). These dissenting voices are led by Class Size Matters, and their voice is being heard. Of the original partners of inBloom, only one is now left; the others have all pulled out, mostly because of concerns about privacy. The remaining partner, New York, holds personal data on 2.7 million students, which can be shared without any parental notification or consent.

[Image: inBloom, student data and Bill Gates]

This might seem like a victory for the anti-privatization / anti-adaptivity lobby, but it is likely to be only temporary. There are plenty of other companies that have their eyes on the data-mining opportunities that will be coming their way, and Obama’s ‘Race to the Top’ program means that the inBloom controversy will be only a temporary setback. ‘The reality is that it’s going to be done. It’s not going to be a little part. It’s going to be a big part. And it’s going to be put in place partly because it’s going to be less expensive than doing professional development,’ says Eva Baker of the Center for the Study of Evaluation at UCLA.

It is in this light that the debate about adaptive learning becomes hugely significant. Class Size Matters, the odd academic like Neil Selwyn or the occasional blogger like myself will not be able to reverse a trend with seemingly unstoppable momentum. But we are, collectively, in a position to influence the way these changes will take place.

If you want to find out more, check out the inBloom and Class Size Matters links. And you might like to read more from the news reports which I have used for information in this post. Of these, the second was originally published by Scientific American (owned by Macmillan, one of the leading players in ELT adaptive learning). The third and fourth are from Education Week, which is funded in part by the Gates Foundation.

http://www.reuters.com/article/2013/03/03/us-education-database-idUSBRE92204W20130303

http://www.salon.com/2013/08/01/big_data_puts_teachers_out_of_work_partner/

http://www.edweek.org/ew/articles/2014/01/08/15inbloom_ep.h33.html

http://blogs.edweek.org/edweek/marketplacek12/2013/12/new_york_battle_over_inBloom_data_privacy_heading_to_court.html

Talk the big data talk

Posted: March 7, 2014 in big data

Pearson’s Efficacy document has a chapter called ‘A New Era of Learning Efficacy on a Planet of Smarter Systems’ by Jon Iwata. It’s basically a paean to the potential of big data. I threw the text into a word cloud program to see what would come up. And what we get is a handy little guide for anyone who wants to bluff their way through a discussion about adaptive learning. Alternatively, you could use it for buzzword bingo.

[Word cloud generated from the Efficacy chapter]

Some of the words you won’t be needing at all are ‘teach’, ‘teachers’, ‘classrooms’ or ‘lessons’. Sorry about the blur in the image: it’s my laptop having an emotional response.
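For anyone curious, the counting that sits behind a word cloud program is trivial to reproduce. The sketch below uses Python’s Counter; the stop-word list is my own abbreviated invention and the sample text is made up, not taken from the Efficacy chapter.

```python
import re
from collections import Counter

# A tiny illustrative stop-word list; real word cloud tools use longer ones.
STOP_WORDS = {"the", "of", "and", "a", "to", "in", "is", "that", "it", "on"}

def top_terms(text: str, n: int = 10):
    """Return the n most frequent non-stop-words, i.e. the words a
    word cloud would render largest."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(w for w in words if w not in STOP_WORDS and len(w) > 2)
    return counts.most_common(n)

sample = "data systems data smarter systems learning data"
top = top_terms(sample, 3)
# 'data' dominates the invented sample, just as the buzzwords dominate the chapter.
```

The point, of course, is what such a count leaves near the bottom: run it over a text about learning and see whether ‘teach’ or ‘classroom’ make the cut.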