
All aboard …

The point of adaptive learning is that it can personalize learning. When we talk about personalization, mention of learning styles is rarely far away. Jose Ferreira of Knewton (now ex-CEO of Knewton) made his case for learning styles in a blog post that generated a superb and, for Ferreira, embarrassing discussion in the comments, which were subsequently deleted by Knewton. FluentU (which I reviewed here) clearly approves of learning styles, or at least sees them as a useful way to market their product, even though it is unclear how their product caters to different styles. Busuu claims to be ‘personalised to fit your style of learning’. Voxy, Inc. (according to their company overview) ‘operates a language learning platform that creates custom curricula for English language learners based on their interests, routines, goals, and learning styles’. Bliu Bliu (which I reviewed here) recommended, in a recent blog post, that learners should ‘find out their language learner type and use it to their advantage’ and suggests, as a starter, trying out ‘Bliu Bliu, where pretty much any learner can find what suits them best’. Memrise ‘uses clever science to adapt to your personal learning style’. Duolingo’s learning tree ‘effectively rearranges itself to suit individual learning styles’, according to founder Luis von Ahn. This list could go on and on.

Learning styles are thriving in ELT coursebooks, too. Here are just three recent examples for learners of various ages. Today! by Todd, D. & Thompson, T. (Pearson, 2014) ‘shapes learning around individual students with graded difficulty practice for mixed-ability classes’ and ‘makes testing mixed-ability classes easier with tests that you can personalise to students’ abilities’.

Move it! by Barraclough, C., Beddall, F., Stannett, K. & Wildman, J. (Pearson, 2015) offers ‘personalized pathways [which] allow students to optimize their learning outcomes’ and a ‘complete assessment package to monitor students’ learning process’.

Open Mind Elementary (A2) 2nd edition by Rogers, M., Taylor-Knowles, J. & Taylor-Knowles, S. (Macmillan, 2014) has a whole page devoted to learning styles in the ‘Life Skills’ strand of the course. The scope and sequence describes it in the following terms: ‘Thinking about what you like to do to find your learning style and improve how you learn English’. Here’s the relevant section:

Methodology books offer more tips for ways that teachers can cater to different learning styles. Recent examples include Patrycja Kamińska’s Learning Styles and Second Language Education (Cambridge Scholars, 2014), Tammy Gregersen & Peter D. MacIntyre’s Capitalizing on Language Learners’ Individuality (Multilingual Matters, 2014) and Marjorie Rosenberg’s Spotlight on Learning Styles (Delta Publishing, 2013). Teacher magazines show a continuing interest in the topic. Humanising Language Teaching and English Teaching Professional are particularly keen. The British Council offers courses about learning styles and its Teaching English website has many articles and lesson plans on the subject (my favourite explains that your students will be more successful if you match your teaching style to their learning styles), as do the websites of all the major publishers. Most ELT conferences will also offer something on the topic.

How about language teaching qualifications and frameworks? The Cambridge English Teaching Framework contains a component entitled ‘Understanding learners’ and this specifies, as the first part of the component, knowledge of concepts such as learning styles (e.g., visual, auditory, kinaesthetic), multiple intelligences, learning strategies, special needs, and affect. Unsurprisingly, the Cambridge CELTA qualification requires successful candidates to demonstrate an awareness of the different learning styles and preferences that adults bring to learning English. The Cambridge DELTA requires successful candidates to accommodate learners according to their different abilities, motivations, and learning styles. The Eaquals Framework for Language Teacher Training and Development requires teachers at Development Phase 2 to have the skill of determining and anticipating learners’ language learning needs and learning styles at a range of levels, selecting appropriate ways of finding out about these.

Outside of ELT, learning styles also continue to thrive. Phil Newton (2015. ‘The learning styles myth is thriving in higher education’ Frontiers in Psychology 6: 1908) carried out a survey of educational publications in higher education between 2013 and 2016, and found that an overwhelming majority (89%) implicitly or directly endorse the use of learning styles. He also cites research showing that 93% of UK schoolteachers believe that ‘individuals learn better when they receive information in their preferred Learning Style’, with similar figures in other countries, and that 72% of Higher Education institutions in the US teach ‘learning style theory’ as part of faculty development for online teachers. Advocates of learning styles in English language teaching are not alone.

But, unfortunately, …

In case you weren’t aware of it, there is a rather big problem with learning styles. There is a huge amount of research which suggests that learning styles (and, in particular, teaching attempts to cater to learning styles) need to be approached with extreme scepticism. Much of this research was published long before the blog posts, advertising copy, books and teaching frameworks (listed above) were written. What does this research have to tell us?

The first problem concerns learning styles taxonomies. There are three issues here: many people do not fit one particular style, the information used to assign people to styles is often inadequate, and there are so many different styles that it becomes cumbersome to link particular learners to particular styles (Kirschner, P. A. & van Merriënboer, J. J. G. 2013. ‘Do Learners Really Know Best? Urban Legends in Education’ Educational Psychologist 48 / 3: 169 – 183). To summarise, given the lack of clarity as to which learning styles actually exist, it may be ‘neither viable nor justified’ for learning styles to form the basis of lesson planning (Hall, G. 2011. Exploring English Language Teaching. Abingdon, Oxon.: Routledge, p. 140). More detailed information about these issues can be found in the following sources:

Coffield, F., Moseley, D., Hall, E. & Ecclestone, K. 2004. Learning styles and pedagogy in post-16 learning: a systematic and critical review. London: Learning and Skills Research Centre

Dembo, M. H. & Howard, K. 2007. Advice about the use of learning styles: a major myth in education. Journal of College Reading & Learning 37 / 2: 101 – 109

Kirschner, P. A. 2017. Stop propagating the learning styles myth. Computers & Education 106: 166 – 171

Pashler, H., McDaniel, M., Rohrer, D. & Bjork, R. 2008. Learning styles: concepts and evidence. Psychological Science in the Public Interest 9 / 3: 105 – 119

Riener, C. & Willingham, D. 2010. The myth of learning styles. Change – The Magazine of Higher Learning

The second problem concerns what Pashler et al. refer to as the ‘meshing hypothesis’: the idea that instructional interventions can be effectively tailored to match particular learning styles. Pashler et al. concluded that the available taxonomies of student types do not offer any valid help in deciding what kind of instruction to offer each individual. Even in 2008, their finding was not new. Back in 1978, a review of 15 studies that looked at attempts to match learning styles to approaches to first language reading instruction concluded that modality preference ‘has not been found to interact significantly with the method of teaching’ (Tarver, S. & Dawson, M. M. 1978. Modality preference and the teaching of reading. Journal of Learning Disabilities 11: 17 – 29). The following year, two other researchers concluded that ‘[the assumption that one can improve instruction by matching materials to children’s modality strengths] appears to lack even minimal empirical support’ (Arter, J. A. & Jenkins, J. A. 1979. ‘Differential diagnosis-prescriptive teaching: A critical appraisal’ Review of Educational Research 49: 517 – 555). Fast forward twenty years to 1999, and Stahl (‘Different strokes for different folks?’ American Educator Fall 1999 pp. 1 – 5) was writing that ‘the reason researchers roll their eyes at learning styles is the utter failure to find that assessing children’s learning styles and matching to instructional methods has any effect on learning. The area with the most research has been the global and analytic styles […]. Over the past 30 years, the names of these styles have changed – from “visual” to “global” and from “auditory” to “analytic” – but the research results have not changed.’ For a recent evaluation of the practical applications of learning styles, have a look at Rogowsky, B. A., Calhoun, B. M. & Tallal, P. 2015. ‘Matching Learning Style to Instructional Method: Effects on Comprehension’ Journal of Educational Psychology 107 / 1: 64 – 78.
Even David Kolb, the Big Daddy of learning styles, now concedes that there is no strong evidence that teachers should tailor their instruction to their students’ particular learning styles (reported in Glenn, D. 2009. ‘Matching teaching style to learning style may not help students’ The Chronicle of Higher Education). To summarise, the meshing hypothesis is entirely unsupported in the scientific literature. It is a myth (Howard-Jones, P. A. 2014. ‘Neuroscience and education: myths and messages’ Nature Reviews Neuroscience).

This brings me back to the blog posts, advertising blurb, coursebooks, methodology books and so on that continue to tout learning styles. The writers of these texts typically do not acknowledge that there’s a problem of any kind. Are they unaware of the research? Or are they aware of it, but choose, for their own reasons, not to acknowledge it? I suspect that the former is often the case with the app developers. But if the latter is the case, what might those reasons be? In the case of teacher training specifications, the reason is probably practical: changing a syllabus is an expensive and time-consuming operation. But in the case of some of the ELT writers, I suspect that they hang on in there because they so much want to believe.

As Newton (2015: 2) notes, ‘intuitively, there is much that is attractive about the concept of Learning Styles. People are obviously different and Learning Styles appear to offer educators a way to accommodate individual learner differences.’ Pashler et al. (2008: 107) add that ‘another related factor that may play a role in the popularity of the learning-styles approach has to do with responsibility. If a person or a person’s child is not succeeding or excelling in school, it may be more comfortable for the person to think that the educational system, not the person or the child himself or herself, is responsible. That is, rather than attribute one’s lack of success to any lack of ability or effort on one’s part, it may be more appealing to think that the fault lies with instruction being inadequately tailored to one’s learning style. In that respect, there may be linkages to the self-esteem movement that became so influential, internationally, starting in the 1970s.’ There is no reason to doubt that many of those who espouse learning styles have good intentions.

No one, I think, seriously disputes that learners may benefit from a wide variety of input styles and learning tasks. People are obviously different. MacIntyre et al. (MacIntyre, P. D., Gregersen, T. & Clément, R. 2016. ‘Individual Differences’ in Hall, G. (ed.) The Routledge Handbook of English Language Teaching. Abingdon, Oxon.: Routledge, pp. 310 – 323, p. 319) suggest that teachers might consider instructional methods that allow them to capitalise on both variety and choice, and also help learners find ways to do this for themselves inside and outside the classroom. Jill Hadfield (2006. ‘Teacher Education and Trainee Learning Style’ RELC Journal 37 / 3: 369 – 388) recommends that we design our learning tasks ‘across the range of learning styles so that our trainees can move across the spectrum, experiencing both the comfort of matching and the challenge produced by mismatching’. But this is not the same thing as claiming that identification of a particular learning style can lead to instructional decisions. The value of books like Rosenberg’s Spotlight on Learning Styles lies in the wide range of practical suggestions for varying teaching styles and tasks. They contain ideas of educational value: it is unfortunate that the theoretical background is so thin.

In ELT, things are, perhaps, beginning to change. Russ Mayne’s 2012 blog post, ‘Learning styles: facts and fictions’, got a few heads nodding, and he followed this up two years later with a presentation at IATEFL looking at various aspects of ELT, including learning styles, which have little or no scientific credibility. Carol Lethaby and Patricia Harries gave a talk at IATEFL 2016, ‘Changing the way we approach learning styles in teacher education’, which was also much discussed and shared online. They also had an article in ELT Journal called ‘Learning styles and teacher training: are we perpetuating neuromyths?’ (2016. ELTJ 70 / 1: 16 – 27). Even Pearson, in a blog post of November 2016 (‘Mythbusters: A review of research on learning styles’), acknowledges that there is ‘a shocking lack of evidence to support the core learning styles claim that customizing instruction based on students’ preferred learning styles produces better learning than effective universal instruction’, concluding that ‘it is impossible to recommend learning styles as an effective strategy for improving learning outcomes’.

About two and a half years ago, when I started writing this blog, there was a lot of hype around adaptive learning and the big data which might drive it. Two and a half years is a long time in technology. A look at Google Trends suggests that interest in adaptive learning has been pretty static for the last couple of years. It’s interesting to note that 3 of the 7 lettered points on this graph are Knewton-related media events (including the most recent, A, which is Knewton’s latest deal with Hachette) and 2 of them concern McGraw-Hill. It would be interesting to know whether these companies follow both parts of Simon Cowell’s dictum of ‘Create the hype, but don’t ever believe it’.

[Google Trends graph: interest in ‘adaptive learning’ over time]

A look at the Hype Cycle (see here for Wikipedia’s entry on the topic and for criticism of the hype of Hype Cycles) of the IT research and advisory firm, Gartner, indicates that both big data and adaptive learning have now slid into the ‘trough of disillusionment’, which means that the market has started to mature, becoming more realistic about how useful the technologies can be for organizations.

A few years ago, the Gates Foundation, one of the leading cheerleaders and financial promoters of adaptive learning, launched its Adaptive Learning Market Acceleration Program (ALMAP) to ‘advance evidence-based understanding of how adaptive learning technologies could improve opportunities for low-income adults to learn and to complete postsecondary credentials’. It’s striking that the program’s aims referred to how such technologies could lead to learning gains, not whether they would. Now, though, with the publication of a report commissioned by the Gates Foundation to analyze the data coming out of the ALMAP Program, things are looking less rosy. The report is inconclusive. There is no firm evidence that adaptive learning systems are leading to better course grades or course completion. ‘The ultimate goal – better student outcomes at lower cost – remains elusive’, the report concludes. Rahim Rajan, a senior program officer for Gates, is clear: ‘There is no magical silver bullet here.’

The same conclusion is being reached elsewhere. A report for the National Education Policy Center (in Boulder, Colorado) concludes: ‘Personalized Instruction, in all its many forms, does not seem to be the transformational technology that is needed, however. After more than 30 years, Personalized Instruction is still producing incremental change. The outcomes of large-scale studies and meta-analyses, to the extent they tell us anything useful at all, show mixed results ranging from modest impacts to no impact. Additionally, one must remember that the modest impacts we see in these meta-analyses are coming from blended instruction, which raises the cost of education rather than reducing it’ (Enyedy, 2014: 15; see reference at the foot of this post). In the same vein, a recent academic study by Meg Coffin Murray and Jorge Pérez (2015, ‘Informing and Performing: A Study Comparing Adaptive Learning to Traditional Learning’) found that ‘adaptive learning systems have negligible impact on learning outcomes’.

In the latest educational technology plan from the U.S. Department of Education (‘Future Ready Learning: Reimagining the Role of Technology in Education’, 2016), the only mentions of the word ‘adaptive’ are in the context of testing. And the latest OECD report, ‘Students, Computers and Learning: Making the Connection’ (2015), finds, more generally, that information and communication technologies, when they are used in the classroom, have, at best, a mixed impact on student performance.

There is, however, too much money at stake for the earlier hype to disappear completely. Sponsored cheerleading for adaptive systems continues to find its way into blogs and national magazines and newspapers. EdSurge, for example, recently published a report called ‘Decoding Adaptive’ (2016), sponsored by Pearson, that continues to wave the flag. Enthusiastic anecdotes take the place of evidence, but, for all that, it’s a useful read.

In the world of ELT, there are plenty of sales people who want new products which they can call ‘adaptive’ (and gamified, too, please). But it’s striking that three years after I started following the hype, such products are rather thin on the ground. Pearson was the first of the big names in ELT to do a deal with Knewton, and invested heavily in the company. Their relationship remains close. But, to the best of my knowledge, the only truly adaptive ELT product that Pearson offers is the PTE test.

Macmillan signed a contract with Knewton in May 2013 ‘to provide personalized grammar and vocabulary lessons, exam reviews, and supplementary materials for each student’. In December of that year, they talked up their new ‘big tree online learning platform’: ‘Look out for the Big Tree logo over the coming year for more information as to how we are using our partnership with Knewton to move forward in the Language Learning division and create content that is tailored to students’ needs and reactive to their progress.’ I’ve been looking out, but it’s all gone rather quiet on the adaptive / platform front.

In September 2013, it was the turn of Cambridge to sign a deal with Knewton ‘to create personalized learning experiences in its industry-leading ELT digital products for students worldwide’. This year saw the launch of a major new CUP series, ‘Empower’. It has an online workbook with personalized extra practice, but there’s nothing (yet) that anyone would call adaptive. More recently, Cambridge has launched the online version of the 2nd edition of Touchstone. Nothing adaptive there, either.

Earlier this year, Cambridge published The Cambridge Guide to Blended Learning for Language Teaching, edited by Mike McCarthy. It contains a chapter by M.O.Z. San Pedro and R. Baker on ‘Adaptive Learning’. It’s an enthusiastic account of the potential of adaptive learning, but it doesn’t contain a single reference to language learning or ELT!

So, what’s going on? Scepticism is becoming the order of the day. The early hype of people like Knewton’s Jose Ferreira is now understood for what it was. Companies like Macmillan got their fingers badly burnt when they barked up the wrong tree with their ‘Big Tree’ platform.

Noel Enyedy captures a more contemporary understanding when he writes: Personalized Instruction is based on the metaphor of personal desktop computers—the technology of the 80s and 90s. Today’s technology is not just personal but mobile, social, and networked. The flexibility and social nature of how technology infuses other aspects of our lives is not captured by the model of Personalized Instruction, which focuses on the isolated individual’s personal path to a fixed end-point. To truly harness the power of modern technology, we need a new vision for educational technology (Enyedy, 2014: 16).

Adaptive solutions aren’t going away, but there is now a much better understanding of what sorts of problems might have adaptive solutions. Testing is certainly one. As the educational technology plan from the U.S. Department of Education (‘Future Ready Learning: Reimagining the Role of Technology in Education’, 2016) puts it: ‘Computer adaptive testing, which uses algorithms to adjust the difficulty of questions throughout an assessment on the basis of a student’s responses, has facilitated the ability of assessments to estimate accurately what students know and can do across the curriculum in a shorter testing session than would otherwise be necessary.’ In ELT, Pearson and EF have adaptive tests that have been well researched and designed.
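The adjustment logic that the Department of Education describes can be sketched very simply. The following is an illustrative toy, not the algorithm of any real test (operational computer adaptive tests use item response theory to estimate ability): it just picks the unused question whose difficulty is closest to the current ability estimate and nudges the estimate after each response.

```python
# Toy sketch of computer adaptive testing: select the item closest in
# difficulty to the current ability estimate, then move the estimate up
# or down depending on the response. Real CAT systems use IRT estimators;
# this only illustrates the basic select-respond-update loop.

def run_adaptive_test(item_bank, answer_fn, n_items=5, step=0.5):
    """item_bank: {item_id: difficulty}; answer_fn(item_id) -> bool."""
    ability = 0.0
    unused = dict(item_bank)
    history = []
    for _ in range(min(n_items, len(unused))):
        # Pick the unused item whose difficulty best matches the estimate.
        item = min(unused, key=lambda i: abs(unused[i] - ability))
        correct = answer_fn(item)
        # Crude update: correct answers route the learner to harder items.
        ability += step if correct else -step
        history.append((item, correct))
        del unused[item]
    return ability, history

bank = {"q1": -1.0, "q2": -0.5, "q3": 0.0, "q4": 0.5, "q5": 1.0}
# A learner who answers everything correctly climbs towards the hardest items.
final, hist = run_adaptive_test(bank, lambda i: True, n_items=3)
```

The pay-off the quotation describes falls out of this loop: because each item is chosen near the learner’s estimated level, fewer items are wasted on questions that are far too easy or far too hard.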

Vocabulary apps which deploy adaptive technology continue to become more sophisticated, although empirical research is lacking. Automated writing tutors with adaptive corrective feedback are also developing fast, and I’ll be writing a post about these soon. Similarly, as speech recognition software improves, we can expect to see better and better automated adaptive pronunciation tutors. But going beyond such applications, there are bigger questions to ask, and answers to these will impact on whatever direction adaptive technologies take. Large platforms (LMSs), with or without adaptive software, are already beginning to look rather dated. Will they be replaced by integrated apps, or are apps themselves going to be replaced by bots (currently riding high in the Hype Cycle)? In language learning and teaching, the future of bots is likely to be shaped by developments in natural language processing (another topic about which I’ll be blogging soon). Nobody really has a clue where the next two and a half years will take us (if anywhere), but it’s becoming increasingly likely that adaptive learning will be only one very small part of it.
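To make the vocabulary-app point concrete, here is a minimal sketch of the Leitner system, one of the simplest adaptive schemes that spaced-repetition vocabulary apps build on. This is a generic illustration, not the algorithm of Memrise or any other specific product.

```python
# Leitner system sketch: items recalled correctly move to a higher box
# (reviewed less often); missed items fall back to box 1 (reviewed most
# often). Generic illustration only.

def review(boxes, item, recalled, max_box=3):
    """boxes: {item: box_number}. Returns the item's new box."""
    if recalled:
        boxes[item] = min(boxes.get(item, 1) + 1, max_box)
    else:
        boxes[item] = 1  # missed items return to the most frequent box
    return boxes[item]

boxes = {"cat": 1, "dog": 2}
review(boxes, "cat", recalled=True)   # cat moves up to box 2
review(boxes, "dog", recalled=False)  # dog falls back to box 1
```

More sophisticated apps replace the fixed boxes with continuously adjusted review intervals per item, but the adaptive principle is the same: performance drives the schedule.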


Enyedy, N. 2014. Personalized Instruction: New Interest, Old Rhetoric, Limited Results, and the Need for a New Direction for Computer-Mediated Learning. Boulder, CO: National Education Policy Center. Retrieved 17.07.16 from http://nepc.colorado.edu/publication/personalized-instruction

Ok, let’s be honest here. This post is about teacher training, but ‘development’ sounds more respectful, more humane, more modern. Teacher development (self-initiated, self-evaluated, collaborative and holistic) could be adaptive, but it’s unlikely that anyone will want to spend the money on developing an adaptive teacher development platform any time soon. Teacher training (top-down, pre-determined syllabus and externally evaluated) is another matter. If you’re not too clear about this distinction, see Penny Ur’s article in The Language Teacher.

The main point of adaptive learning tools is to facilitate differentiated instruction. They are, as Pearson’s latest infomercial booklet describes them, ‘educational technologies that can respond to a student’s interactions in real-time by automatically providing the student with individual support’. Differentiation or personalization (or whatever you call it) is, as I’ve written before, the declared goal of almost everyone in educational power these days. What exactly it is may be open to question (see Michael Feldstein’s excellent article), as may be the question of whether or not it is actually such a desideratum (see, for example, this article). But, for the sake of argument, let’s agree that it’s mostly better than one-size-fits-all.

Teachers around the world are being encouraged to adopt a differentiated approach with their students, and they are being encouraged to use technology to do so. It is technology that can help create ‘robust personalized learning environments’ (says the White House). Differentiation for language learners could be facilitated by ‘social networking systems, podcasts, wikis, blogs, encyclopedias, online dictionaries, webinars, online English courses’, etc. (see Alexandra Chistyakova’s post on eltdiary).

But here’s the crux. If we want teachers to adopt a differentiated approach, they really need to have experienced it themselves in their training. An interesting post on edweek sums this up: ‘If professional development is supposed to lead to better pedagogy that will improve student learning AND we are all in agreement that modeling behaviors is the best way to show people how to do something, THEN why not ensure all professional learning opportunities exhibit the qualities we want classroom teachers to have?’

Differentiated teacher development / training is rare. According to the Center for Public Education’s Teaching the Teachers report, almost all teachers participate in ‘professional development’ (PD) throughout the year. However, a majority of those teachers find the PD in which they participate ineffective. Typically, the development is characterised by ‘drive-by’ workshops, one-size-fits-all presentations, ‘been there, done that’ topics, little or no modelling of what is being taught, a focus on rotating fads and a lack of follow-up. This report is not specifically about English language teachers, but it will resonate with many who are working in English language teaching around the world.

The promotion of differentiated teacher development is gaining traction: see here or here, for example, or read Cindy A. Strickland’s ‘Professional Development for Differentiating Instruction’.

Remember, though, that it’s really training, rather than development, that we’re talking about. After all, if one of the objectives is to equip teachers with a skills set that will enable them to become more effective instructors of differentiated learning, this is most definitely ‘training’ (notice the transitivity of the verbs ‘enable’ and ‘equip’!). In this context, a necessary starting point will be some sort of ‘knowledge graph’ (which I’ve written about here). For language teachers, these already exist, including the European Profiling Grid, the Eaquals Framework for Language Teacher Training and Development, the Cambridge English Teaching Framework and the British Council’s Continuing Professional Development (CPD) Framework for Teachers. We can expect these to become more refined and more granularised, and a partial move in this direction is the Cambridge English Digital Framework for Teachers. Once a knowledge graph is in place, the next step will be to tag particular pieces of teacher training content (e.g. webinars, tasks, readings, etc.) to locations in the framework that is being used. It would not be too complicated to engineer dynamic frameworks which could be adapted to individual or institutional needs.
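To see how such tagging might work in practice, here is a minimal sketch. The component names and content items are invented for illustration; no real framework’s categories are reproduced here.

```python
# Hypothetical sketch of tagging training content (webinars, tasks,
# readings) to components of a teacher-competency framework, then pulling
# out content matched to a teacher's development needs. All names invented.

framework = {
    "understanding_learners": [],
    "lesson_planning": [],
    "assessment_literacy": [],
}

def tag(component, content_item):
    """Attach a piece of training content to a framework component."""
    framework.setdefault(component, []).append(content_item)

def recommend(needs):
    """Return the tagged content for the components a teacher needs."""
    return {c: framework.get(c, []) for c in needs}

tag("understanding_learners", "webinar: individual differences")
tag("assessment_literacy", "reading: designing progress tests")

# A 'dynamic' framework is then just a filtered view over the same tags.
plan = recommend(["understanding_learners", "assessment_literacy"])
```

The point of the sketch is how little machinery the tagging step itself needs; the hard part, as the post goes on to argue, is deciding whether teaching competence decomposes into such components at all.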

This process will be facilitated by the fact that teacher training content is already being increasingly granularised. Whether it’s an MA in TESOL or a shorter, more practically oriented course, things are getting more and more bite-sized, with credits being awarded to these short bites, as course providers face stiffer competition and respond to market demands.

Classroom practice could also form part of such an adaptive system. One tool that could be deployed would be Visible Classroom, an automated system for providing real-time evaluative feedback for teachers. There is an ‘online dashboard providing teachers with visual information about their teaching for each lesson in real-time. This includes proportion of teacher talk to student talk, number and type of questions, and their talking speed.’ John Hattie, who is behind this project, says that teachers ‘account for about 30% of the variance in student achievement and [are] the largest influence outside of individual student effort.’ Teacher development with a tool like Visible Classroom is ultimately all about measuring teacher performance (against a set of best-practice benchmarks identified by Hattie’s research) in order to improve the learning outcomes of the students.
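As a rough illustration of the kind of metrics such a dashboard reports, here is a toy calculation over a hand-made transcript. The turn-by-turn segmentation and the word-based measures are my assumptions for the sketch; Visible Classroom’s actual pipeline (which works from live transcription) is not public in this detail.

```python
# Toy calculation of dashboard-style lesson metrics from a transcript:
# teacher-talk ratio, question count, and teacher talking speed.
# Fields and methods are illustrative assumptions only.

def lesson_metrics(turns, lesson_minutes):
    """turns: list of (speaker, utterance) pairs; speaker is 'T' or 'S'."""
    teacher_words = sum(len(u.split()) for s, u in turns if s == "T")
    student_words = sum(len(u.split()) for s, u in turns if s == "S")
    questions = sum(1 for s, u in turns if s == "T" and u.strip().endswith("?"))
    total = teacher_words + student_words
    return {
        "teacher_talk_ratio": teacher_words / total if total else 0.0,
        "teacher_questions": questions,
        "teacher_words_per_minute": teacher_words / lesson_minutes,
    }

turns = [
    ("T", "What did you do at the weekend?"),
    ("S", "I visited my grandmother."),
    ("T", "Nice. Tell us more about that."),
]
m = lesson_metrics(turns, lesson_minutes=1)
```

Note what such a system can and cannot see: counting words and question marks is easy to automate, but whether a question was worth asking is exactly the kind of judgement that resists this sort of measurement.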

You may have noticed the direction in which this part of this blog post is going. I began by talking about social networking systems, podcasts, wikis, blogs and so on, and just now I’ve mentioned the summative, credit-bearing possibilities of an adaptive teacher development / training programme. It’s a tension that is difficult to resolve. There’s always a paradox in telling anyone that they are going to embark on a self-directed course of professional development. Whoever pays the piper calls the tune and, if an institution decides that it is worth investing significant amounts of money in teacher development, it will want a return for its money. The need for truly personalised teacher development is likely to be overridden by the more pressing need for accountability, which, in turn, typically presupposes pre-determined course outcomes, which can be measured in some way … so that quality (and cost-effectiveness and so on) can be evaluated.

Finally, it’s worth asking if language teaching (any more than language learning) can be broken down into small parts that can be synthesized later into a meaningful and valuable whole. Certainly, there are some aspects of language teaching (such as the ability to use a dashboard on an LMS) which lend themselves to granularisation. But there’s a real danger of losing sight of the forest of teaching if we focus on the individual trees that can be studied and measured.

Decent research into adaptive learning remains very thin on the ground. Disappointingly, the Journal of Learning Analytics has only managed one issue so far in 2015, compared to three in 2014. But I recently came across an article in Vol. 18 (pp. 111 – 125) of Informing Science: the International Journal of an Emerging Transdiscipline entitled ‘Informing and performing: A study comparing adaptive learning to traditional learning’ by Murray, M. C. & Pérez, J. of Kennesaw State University.

The article is worth reading, not least because of the authors’ digestible review of adaptive learning theory and their discussion of levels of adaptation, including a handy diagram (see below) which they have reproduced from a white paper by Tyton Partners, ‘Learning to Adapt: Understanding the Adaptive Learning Supplier Landscape’. Murray and Pérez make clear that adaptive learning theory is closely connected to the belief that learning is improved when instruction is personalized — adapted to individual learning styles, but their approach is surprisingly uncritical. They write, for example, that ‘the general acceptance of learning styles is evidenced in recommended teaching strategies in nearly every discipline’ and that ‘learning styles continue to inform the evolution of adaptive learning systems’, and quote from the much-quoted Pashler, H., McDaniel, M., Rohrer, D. & Bjork, R. 2008. ‘Learning styles: concepts and evidence’ Psychological Science in the Public Interest 9: 105 – 119. But Pashler et al. concluded that the evidence supporting the use of learning-style-matched approaches is virtually non-existent (see here for a review of Pashler et al.). And, in the world of ELT, an article in the latest edition of ELTJ by Carol Lethaby and Patricia Harries disses learning styles and other neuromyths. Given the close connection between adaptive learning theory and learning styles, one might reasonably predict that a comparative study of adaptive learning and traditional learning would not come out with much evidence in support of the former.

Murray and Pérez set out, anyway, to explore the hypothesis that adapting instruction to an individual’s learning style results in better learning outcomes. Their study compared adaptive and traditional methods in a university-level digital literacy course. Their conclusion? ‘This study and a few others like it indicate that today’s adaptive learning systems have negligible impact on learning outcomes.’

I was, however, more interested in the comments which followed this general conclusion. They point out that learning outcomes are only one measure of quality. Others, such as student persistence and engagement, they claim, can be positively affected by the employment of adaptive systems. I am not convinced. I think it’s simply far too soon to judge, and we will need to wait quite some time for novelty effects to wear off. Murray and Pérez provide two references in support of their claim. One is an article by Josh Jarrett, ‘Bigfoot, Goldilocks, and Moonshots: A Report from the Frontiers of Personalized Learning’, in Educause. Jarrett is Deputy Director for Postsecondary Success at the Bill & Melinda Gates Foundation, and Educause is significantly funded by the Gates Foundation. Not, therefore, an entirely unbiased and trustworthy source. The other is a journalistic piece in Forbes by Tim Zimmer, entitled ‘Rethinking higher ed: A case for adaptive learning’, and it reads like an advert. Zimmer is a ‘CCAP contributor’. CCAP is the Center for College Affordability and Productivity, a libertarian, conservative foundation with a strong privatization agenda. Not, therefore, a particularly reliable source, either.

Despite their own findings, Murray and Pérez follow up their claim about student persistence and engagement with what they describe as a still more compelling argument for adaptive learning. This, they say, is the intuitively appealing case for adaptive learning systems as engines with which institutions can increase access and reduce costs. Ah, now we’re getting to the point!

The School of Tomorrow will pay far more attention to individuals than the schools of the past. Each child will be studied and measured repeatedly from many angles, both as a basis of prescriptions for treatment and as a means of controlling development. The new education will be scientific in that it will rest on a fact basis. All development of knowledge and skill will be individualized, and classroom practice and recitation as they exist today in conventional schools will largely disappear. […] Experiments in laboratories and in schools of education [will discover] what everyone should know and the best way to learn essential elements.

This is not, you may be forgiven for thinking, from a Knewton blog post. It was written in 1924 and comes from Otis W. Caldwell & Stuart A. Courtis Then and Now in Education, 1845: 1923 (New York: Appleton) and is cited in Petrina, S. 2002. ‘Getting a Purchase on “The School of Tomorrow” and its Constituent Commodities: Histories and Historiographies of Technologies’ History of Education Quarterly, Vol. 42, No. 1 (Spring, 2002), pp. 75-111.

In the same year that Caldwell and Courtis predicted the School of Tomorrow, Sidney Pressey ‘contrived an intelligence testing machine, which he transformed during 1924-1934 into an “Automatic Teacher.” His machine automated and individualized routine classroom processes such as testing and drilling. It could reduce the burden of testing and scoring for teachers and therapeutically treat students after examination and diagnosis’ (Petrina, p. 99). Six years later, the ‘Automatic Teacher’ was recognised as a commercial failure. For more on Pressey’s machine (including a video of Pressey demonstrating it), see Audrey Watters’ excellent piece.

Caldwell, Courtis and Pressey are worth bearing in mind when you read the predictions of people like Knewton’s Jose Ferreira. Here are a few of his ‘Then and Now’ predictions:

“Online learning” will soon be known simply as “learning.” All of the world’s education content is being digitized right now, and that process will be largely complete within five years. (01.09.2010)

There will soon be lots of wonderful adaptive learning apps: adaptive quizzing apps, flashcard apps, textbook apps, simulation apps — if you can imagine it, someone will make it. In a few years, every education app will be adaptive. Everyone will be an adaptive learning app maker. (23.04.13)

Right now about 22 percent of the people in the world graduate high school or the equivalent. That’s pathetic. In one generation we could get close to 100 percent, almost for free. (19.07.13)

95% of materials (textbooks, software, etc used for classes, tutoring, corp training…) will be purely online in 5-10 years. That’s a $200B global industry. And people predict that 50% of higher ed and 25% of K-12 will eventually be purely online classes. If so, that would create a new, $3 trillion or so industry. (25.11.2013)

‘Sticky’ – as in ‘sticky learning’ or ‘sticky content’ (as opposed to ‘sticky fingers’ or a ‘sticky problem’) – is itself fast becoming a sticky word. If you check out ‘sticky learning’ on Google Trends, you’ll see that it suddenly spiked in September 2011, following the slightly earlier appearance of ‘sticky content’. The historical rise in this use of the word coincides with the exponential growth in the number of references to ‘big data’.

I am often asked if adaptive learning really will take off as a big thing in language learning. Will adaptivity itself be a sticky idea? When the question is asked, people mean the big data variety of adaptive learning, rather than the much more limited adaptivity of spaced repetition algorithms, which, I think, is firmly established and here to stay. I can’t answer the question with any confidence, but I recently came across a book which suggests a useful way of approaching it.

‘From the Ivory Tower to the Schoolhouse’ by Jack Schneider (Harvard Education Press, 2014) investigates the reasons why promising ideas from education research fail to get taken up by practitioners, and why other, less-than-promising ideas, from a research or theoretical perspective, become sticky quite quickly. As an example of the former, Schneider considers Robert Sternberg’s ‘Triarchic Theory’. As an example of the latter, he devotes a chapter to Howard Gardner’s ‘Multiple Intelligences Theory’.

Schneider argues that educational ideas need to possess four key attributes in order for teachers to sit up, take notice and adopt them.

  1. perceived significance: the idea must answer a question central to the profession – offering a big-picture understanding rather than merely one small piece of a larger puzzle
  2. philosophical compatibility: the idea must clearly jibe with closely held [teacher] beliefs like the idea that teachers are professionals, or that all children can learn
  3. occupational realism: it must be possible for the idea to be put easily into immediate use
  4. transportability: the idea needs to find its practical expression in a form that teachers can access and use at the time that they need it – it needs to have a simple core that can travel through pre-service coursework, professional development seminars, independent study and peer networks

To what extent does big data adaptive learning possess these attributes? It certainly comes up trumps with respect to perceived significance. The big question that it attempts to answer is the question of how we can make language learning personalized / differentiated / individualised. As its advocates never cease to remind us, adaptive learning holds out the promise of moving away from a one-size-fits-all approach. The extent to which it can keep this promise is another matter, of course. For it to do so, it will never be enough just to offer different pathways through a digitalised coursebook (or its equivalent). Much, much more content will be needed: at least five or six times the content of a one-size-fits-all coursebook. At the moment, there is little evidence of the necessary investment into content being made (quite the opposite, in fact), but the idea remains powerful nevertheless.

When it comes to philosophical compatibility, adaptive learning begins to run into difficulties. Despite the decades of edging towards more communicative approaches in language teaching, research (e.g. the research into English teaching in Turkey described in a previous post), suggests that teachers still see explanation and explication as key functions of their jobs. They believe that they know their students best and they know what is best for them. Big data adaptive learning challenges these beliefs head on. It is no doubt for this reason that companies like Knewton make such a point of claiming that their technology is there to help teachers. But Jose Ferreira doth protest too much, methinks. Platform-delivered adaptive learning is a direct threat to teachers’ professionalism, their salaries and their jobs.

Occupational realism is more problematic still. Very, very few language teachers around the world have any experience of truly blended learning, and it’s very difficult to envisage precisely what it is that the teacher should be doing in a classroom. Publishers moving towards larger-scale blended adaptive materials know that this is a big problem, and are actively looking at ways of packaging teacher training / teacher development (with a specific focus on blended contexts) into the learner-facing materials that they sell. But the problem won’t go away. Education ministries have a long history of throwing money at technological ‘solutions’ without thinking about obtaining the necessary buy-in from their employees. It is safe to predict that this is something that is unlikely to change. Moreover, learning how to become a blended teacher is much harder than learning, say, how to make good use of an interactive whiteboard. Since there are as many different blended adaptive approaches as there are different educational contexts, there cannot be (irony of ironies) a one-size-fits-all approach to training teachers to make good use of this software.

Finally, how transportable is big data adaptive learning? Not very, is the short answer, and for the same reasons that ‘occupational realism’ is highly problematic.

Looking at things through Jack Schneider’s lens, we might be tempted to come to the conclusion that the future for adaptive learning is a rocky path, at best. But Schneider doesn’t take political or economic considerations into account. Sternberg’s ‘Triarchic Theory’ never had the OECD or the Gates Foundation backing it up. It never had millions and millions of dollars of investment behind it. As we know from political elections (and the big data adaptive learning issue is a profoundly political one), big bucks can buy opinions.

It may also prove to be the case that the opinions of teachers don’t actually matter much. If the big adaptive bucks can win the educational debate at the highest policy-making levels, teachers will be the first victims of the ‘creative disruption’ that adaptivity promises. If you don’t believe me, just look at what is going on in the U.S.

There are causes for concern, but I don’t want to sound too alarmist. Nobody really has a clue whether big data adaptivity will actually work in language learning terms. It remains more of a theory than a research-endorsed practice. And to end on a positive note, regardless of how sticky it proves to be, it might just provide the shot-in-the-arm realisation that language teachers, at their best, are a lot more than competent explainers of grammar or deliverers of gap-fills.

FluentU, busuu, Bliu Bliu … what is it with all the ‘u’s? Hong Kong-based FluentU used to be called FluentFlix, but they changed their name a while back. The service for English learners is relatively new. Before that, they focused on Chinese, where the competition is much less fierce.

At the core of FluentU is a collection of short YouTube videos, which are sorted into 6 levels and grouped into 7 topic categories. The videos are accompanied by transcriptions. As learners watch a video, they can click on any word in the transcript. This will temporarily freeze the video and show a pop-up which offers a definition of the word, information about part of speech, a couple of examples of this word in other sentences, and more example sentences of the word from other videos that are linked on FluentU. These can, in turn, be clicked on to bring up a video collage of these sentences. Learners can click on an ‘Add to Vocab’ button, which will add the word to personalised vocabulary lists. These are later studied through spaced repetition.

FluentU describes its approach in the following terms: ‘FluentU selects the best authentic video content from the web, and provides the scaffolding and support necessary to bring that authentic content within reach for your students.’ It seems appropriate, therefore, to look first at the nature of that content. At the moment, there appear to be just under 1,000 clips which are allocated to levels as follows:

Newbie: 123
Elementary: 138
Intermediate: 294
Upper Intermediate: 274
Advanced: 111
Native: 40

It has to be assumed that the amount of content will continue to grow but, for the time being, it’s not unreasonable to say that there isn’t a lot there. I looked at the Upper Intermediate level, where the shortest clip was 32 seconds long, the longest 4 minutes 34 seconds, and most were between 1 and 2 minutes. That means there is the equivalent of about 400 minutes (say, 7 hours) of video for this level.

The actual amount that anyone would want to watch / study can be seen to be significantly less when the topics are considered. These break down as follows:

Arts & entertainment: 105
Business: 34
Culture: 29
Everyday life: 60
Health & lifestyle: 28
Politics & society: 6
Science & tech: 17

The screenshots below give an idea of the videos on offer:

menu1menu2

Perhaps I’m a little hard to please, but there wasn’t much here that appealed. Forget the movie trailers for crap movies, for a start. Forget the low-level business stuff, too. ‘The History of New Year’s Resolutions’ looked promising, but turned out to be a Wikipedia-style piece. FluentU certainly doesn’t have the eye for interesting, original video content of someone like Jamie Keddie or Kieran Donaghy.

But, perhaps, the underwhelming content is of less importance than what you do with it. After all, if you’re really interested in content, you can just go to YouTube and struggle through the transcriptions on your own. The transcripts can be downloaded as pdfs, which, strangely, are marked with a FluentU copyright notice. FluentU doesn’t need to own the copyright of the videos, because they just provide links, but claiming copyright for someone else’s script seemed questionable to me. Anyway, the only real reason to be on this site is to learn some vocabulary. How well does it perform?

fluentu1

Level is self-selected. It wasn’t entirely clear how videos had been allocated to level, but I didn’t find any major discrepancies between FluentU’s allocation and my own, intuitive grading of the content. Clicking on words in the transcript, the look-up / dictionary function wasn’t too bad, compared to some competing products I have looked at. The system could deal with some chunks and phrases (e.g. at your service, figure out) and the definitions were appropriate to the way these had been used in context. The accuracy was far from consistent, though. Some definitions were harder than the word they were explaining (e.g. telephone = an instrument used to call someone) and some were plain silly (e.g. the definition of I is me).

Some chunks were not recognised, so definitions were amusingly wonky. Come out, get through and have been were all wrong. For the phrase talk her into it, the program didn’t recognise the phrasal verb, and offered me communicate using speech for talk, and to the condition, state or form of for into.

For many words, there are pictures to help you with the meaning, but you wonder about some of them, e.g. the picture of someone clutching a suitcase to illustrate the meaning of of, or a woman holding up a finger and thumb to illustrate the meaning of what (as a pronoun).

The example sentences don’t seem to be graded in any way and are not always useful. The example sentences for of, for example, are ‘The pages of the book are ripped’, ‘the lemurs of Madagascar’ and ‘what time of day are you free’. Since the definition is given as ‘belonging to’, there seems to be a problem with, at least, the last of these examples!

With the example sentences that link you to other video examples of this word being used, I found that they took a long time to load … and they really weren’t worth waiting for.

After a catalogue of problems like this, you might wonder how I can say that this function wasn’t too bad, but I’ve seen a lot worse. It was, at least, mostly accurate.

Moving away from the ‘Watch’ options, I explored the ‘Learn’ section. Bearing in mind that I had described myself as ‘Upper Intermediate’, I was surprised to be offered the following words for study: Good morning, may, help, think, so. This then took me to the following screen:

I was getting increasingly confused. After watching another video, I could practise some of the words I had highlighted, but, again, I wasn’t sure quite what was going on. There was a task that asked me to ‘pick the correct translation’, but this was, in fact, a multiple-choice dictation task.

Next, I was asked to study the meaning of the word in, followed by an unhelpful gap-fill task:

Confused? I was. I decided to look for something a little more straightforward, and clicked on a menu of vocabulary flash cards that I could import. These included sets based on copyright material from both CUP and OUP, and I wondered what these publishers might think of their property being used in this way.

FluentU claims that it is based on the following principles:

  1. Individualized scaffolding: FluentU makes language learning easy by teaching new words with vocabulary students already know.
  2. Mastery Learning: FluentU sets students up for success by making sure they master the basics before moving on to more advanced topics.
  3. Gamification: FluentU incorporates the latest game design mechanics to make learning fun and engaging.
  4. Personalization: Each student’s FluentU experience is unlike anyone else’s. Video clips, examples, and quizzes are picked to match their vocabulary and interests.

The ‘individualized scaffolding’ is no more than common sense, dressed up in sciency-sounding language. The reference to ‘Mastery Learning’ is opaque, to say the least, with some confusion between language features and topic. The gamification is rudimentary, and the personalization is pretty limited. It doesn’t come cheap, either.

price table

Lingua.ly is an Israeli start-up which, in its own words, ‘is an innovative new learning solution that helps you learn a language from the open web’. Its platform ‘uses big-data paired with spaced repetition to help users bootstrap their way to fluency’. You can read more of this kind of adspeak at the Lingua.ly blog  or the Wikipedia entry  which seems to have been written by someone from the company.

How does it work? First of all, state the language you want to study (currently there are 10 available) and the language you already speak (currently there are 18 available). Then, there are three possible starting points: insert a word which you want to study, click on a word in any web text or click on a word in one of the suggested reading texts. This then brings up a bilingual dictionary entry which, depending on the word, will offer a number of parts of speech and a number of translated word senses. Click on the appropriate part of speech and the appropriate word sense, and the item will be added to your personal word list. Once you have a handful of words in your word list, you can begin practising these words. Here there are two options. The first is a spaced repetition flashcard system. It presents the target word and 8 different translations in your own language, and you have to click on the correct option. Like most flashcard apps, spaced repetition software determines when and how often you will be re-presented with the item.
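Since spaced repetition comes up again and again in these reviews, it may be worth sketching what such flashcard scheduling typically involves. The following is a minimal Python sketch of an SM-2-style algorithm (the scheme popularised by SuperMemo and used, in some variant, by most flashcard apps); the parameter values are the textbook defaults and are purely illustrative, not Lingua.ly’s actual algorithm.

```python
# A minimal sketch of SM-2-style spaced repetition scheduling.
# Parameter values are illustrative defaults, not any vendor's actual algorithm.

from dataclasses import dataclass

@dataclass
class Card:
    word: str
    easiness: float = 2.5   # how "easy" the item is; adjusted after each review
    interval: int = 1       # days until the next review
    repetitions: int = 0    # consecutive successful reviews

def review(card: Card, quality: int) -> Card:
    """Update scheduling after a review; quality runs from 0 (fail) to 5 (perfect)."""
    if quality < 3:                       # failed: restart the repetition sequence
        card.repetitions = 0
        card.interval = 1
    else:
        card.repetitions += 1
        if card.repetitions == 1:
            card.interval = 1
        elif card.repetitions == 2:
            card.interval = 6
        else:
            card.interval = round(card.interval * card.easiness)
        # nudge easiness up or down depending on how hard the recall was
        card.easiness = max(1.3, card.easiness + 0.1
                            - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    return card

card = Card("bosom")
for q in (5, 5, 4):       # two perfect recalls, then one hesitant one
    review(card, q)
print(card.interval)      # → 16 (reviews get further and further apart)
```

The point to notice is that the adaptivity here is extremely narrow: the only thing being ‘personalized’ is the review date of each item, driven by a handful of numbers per card.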

The second option is to read an authentic web text which contains one or more of your target items. The company calls this ‘digital language immersion, a method of employing a virtual learning environment to simulate the language learning environment’. The app ‘relies on a number of applied linguistics principles, including the Natural Approach and Krashen’s Input Hypothesis’, according to the Wikipedia entry. Apparently, the more you use the app, the more it knows about you as a learner, and the better able it is to select texts that are appropriate for you. As you read these texts, of course, you can click on more words and add them to your word list.

I tried out Lingua.ly, logging on as a French speaker wanting to learn English, and clicking on words as the fancy took me. I soon had a selection of texts to read. Users are offered a topic menu consisting of the following: arts, business, education, entertainment, food, weird, beginners, green, health, living, news, politics, psychology, religion, science, sports, style. The sources are varied and not at all bad – Christian Science Monitor, The Grauniad, Huffington Post, Time, for example – and there are many very recent articles. Some texts were interesting; others seemed very niche. I began clicking on more words that I thought would be interesting to explore, and here my problems began.

I quickly discovered that the system could only deal with single words, so phrasal verbs were off limits. One text I looked at had the phrasal verb ‘ripping off’, and although I could get translations for ‘ripping’ and ‘off’, this was obviously not terribly helpful. Learners who don’t know the phrasal verb ‘ripped off’ do not necessarily know that it is a phrasal verb, so the translations offered for the two parts of the verb are worse than unhelpful; they are actually misleading. Proper nouns were also a problem. Although some of the more common ones were recognised, the system failed to recognise many proper nouns for what they were, and offered me translations of homonymous nouns. With some words (e.g. ‘stablemate’), the dictionary offered only one translation (in this case, the literal one), but not the much more common idiomatic translation that was needed in the context in which I came across the word. With others (e.g. ‘pertain’), I was offered a list of translations which included the one that was appropriate in the context, but, unfortunately, this was the French word ‘porter’, which has so many possible meanings that, if you genuinely didn’t know the word, you would be none the wiser.

Once you’ve clicked on an appropriate part of speech and translation (if you can find one), the dictionary look-up function offers both photos and example sentences. Here again there were problems. I’d clicked on the verb ‘pan’ which I’d encountered in the context of a critic panning a book they’d read. I was able to select an appropriate translation, but when I got to the photos, I was offered only multiple pictures of frying pans. There were no example sentences for my meaning of ‘pan’: instead, I was offered multiple sentences about cooking pans, and one about Peter Pan. In other cases, the example sentences were either unhelpful (e.g. the example for ‘deal’ was ‘I deal with that’) or bizarre (e.g. the example sentence for ‘deemed’ was ‘The boy deemed that he cheated in the examination’). For some words, there were no example sentences at all.

Primed in this way, I was intrigued to see how the system would deal with the phrase ‘heaving bosoms’ which came up in one text. ‘Heaving bosoms’ is an interesting case. It’s a strong collocation, and, statistically, ‘heaving bosoms’ plural are much more frequent than ‘a heaving bosom’ singular. ‘Heaving’, as an adjective, only really collocates with ‘bosoms’. You don’t find ‘heaving’ collocating with any of the synonyms for ‘bosoms’. The phrase is also heavily connoted, strongly associated with romance novels, and often used with humorous intent. Finally, there is also a problem of usage with ‘bosom’ / ‘bosoms’: men or women, one or two – all in all, it’s a tricky word.

Lingua.ly was no help at all. There was no dictionary entry for an adjectival ‘heaving’, and the translations for the verb ‘heave’ were amusing, but less than appropriate. As for ‘bosom’, there were appropriate translations (‘sein’ and ‘poitrine’), but absolutely no help with how the word is actually used. Example sentences, which are clearly not tagged to the translation which has been chosen, included ‘Or whether he shall die in the bosom of his family or neglected and despised in a foreign land’ and ‘Can a man take fire in his bosom, and his clothes not be burned?’

Lingua.ly has a number of problems. First off, its software hinges on a dictionary (it’s a Babylon dictionary) which can only deal with single words, is incomplete, and does not deal with collocation, connotation, style or register. As such, it can only be of limited value for receptive use, and of no value whatsoever for productive use. Secondly, the web corpus that it is using simply isn’t big enough. Thirdly, it doesn’t seem to have any Natural Language Processing tool which could enable it to deal with meanings in context. It can’t disambiguate words automatically. Such software does now exist, and Lingua.ly desperately needs it.
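To illustrate what that missing disambiguation involves, here is a toy Python sketch of a simplified Lesk algorithm: pick the sense whose dictionary gloss shares the most words with the sentence the target word appears in. The two-sense inventory for ‘pan’ is invented for illustration; real systems use full dictionaries and far more sophisticated statistical models, but the principle is the same.

```python
# A toy simplified-Lesk disambiguator: choose the sense whose gloss
# overlaps most with the words surrounding the target word.
# The two-sense inventory for "pan" is hand-made for illustration only.

SENSES = {
    "pan (cookware)": "a metal container used for cooking food on a stove",
    "pan (criticise)": "to criticise a book film or performance harshly in a review",
}

def tokenize(text: str) -> set:
    """Crude word set: lowercase, strip basic punctuation, split on spaces."""
    return set(text.lower().replace(",", " ").replace(".", " ").split())

def lesk(context: str, senses: dict) -> str:
    """Return the sense label whose gloss shares the most words with the context."""
    ctx = tokenize(context)
    return max(senses, key=lambda label: len(ctx & tokenize(senses[label])))

print(lesk("the critic panned the book in a harsh review", SENSES))
# prints "pan (criticise)"
```

Even something this crude would have steered the app away from frying pans in my example; without any such mechanism, the software can only serve up every sense and every picture indiscriminately.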

Unfortunately, there are other problems, too. The flashcard practice is very repetitive and soon becomes boring. With eight translations to choose from, you have to scroll down the page to see them all. But there’s a timer mechanism, and I frequently timed out before being able to select the correct translation (partly because words are presented with no context, so you have to remember the meaning which you clicked in an earlier study session). The texts do not seem to be graded for level. There is no indication of word frequency or word sense frequency. There is just one gamification element (a score card), but there is no indication of how scores are achieved. Last, but certainly not least, the system is buggy. My word list disappeared into the cloud earlier today, and has not been seen since.

I think it’s a pity that Lingua.ly is not better. The idea behind it is good – even if the references to Krashen are a little unfortunate. The company says that they have raised $800,000 in funding, but with their freemium model they’ll be desperately needing more, and they’ve gone to market too soon. One reviewer, Language Surfer, wrote a withering review of Lingua.ly’s Arabic program (‘it will do more harm than good to the Arabic student’), and Brendan Wightman, commenting at eltjam, called it ‘dull as dish water, […] still very crude, limited and replete with multiple flaws’. But, at least, it’s free.

Personalization is one of the key leitmotifs in current educational discourse. The message is clear: personalization is good, one-size-fits-all is bad. ‘How to personalize learning and how to differentiate instruction for diverse classrooms are two of the great educational challenges of the 21st century,’ write Trilling and Fadel, leading lights in the Partnership for 21st Century Skills (P21)[1]. Barack Obama has repeatedly sung the praises of, and the need for, personalized learning, and his policies are fleshed out by his Secretary of Education, Arne Duncan, in speeches and on the White House blog: ‘President Obama described the promise of personalized learning when he launched the ConnectED initiative last June. Technology is a powerful tool that helps create robust personalized learning environments.’ In the UK, personalized learning has been government mantra for over 10 years. The EU, UNESCO, OECD, the Gates Foundation – everyone, it seems, is singing the same tune.

Personalization, we might all agree, is a good thing. How could it be otherwise? No one these days is going to promote depersonalization or impersonalization in education. What exactly it means, however, is less clear. According to a UNESCO Policy Brief[2], the term was first used in the context of education in the 1970s by Víctor García Hoz, a senior Spanish educationalist and member of Opus Dei at the University of Madrid. This UNESCO document then points out that ‘unfortunately, up to this date there is no single definition of this concept’.

In ELT, the term has been used in a very wide variety of ways. These range from the far-reaching ideas of people like Gertrude Moskowitz, who advocated a fundamentally learner-centred form of instruction, to the much more banal practice of getting students to produce a few personalized examples of an item of grammar they have just studied. See Scott Thornbury’s A-Z blog for an interesting discussion of personalization in ELT.

As with education in general, and ELT in particular, ‘personalization’ is also bandied around the adaptive learning table. Duolingo advertises itself as the opposite of one-size-fits-all, and as an online equivalent of the ‘personalized education you can get from a small classroom teacher or private tutor’. Babbel offers a ‘personalized review manager’ and Rosetta Stone’s Classroom online solution allows educational institutions ‘to shift their language program away from a ‘one-size-fits-all-curriculum’ to a more individualized approach’. As far as I can tell, the personalization in these examples is extremely restricted. The language syllabus is fixed and although users can take different routes up the ‘skills tree’ or ‘knowledge graph’, they are totally confined by the pre-determination of those trees and graphs. This is no more personalized learning than asking students to make five true sentences using the present perfect. Arguably, it is even less!

This is not, in any case, the kind of personalization that Obama, the Gates Foundation, Knewton, et al have in mind when they conflate adaptive learning with personalization. Their definition is much broader and summarised in the US National Education Technology Plan of 2010: ‘Personalized learning means instruction is paced to learning needs, tailored to learning preferences, and tailored to the specific interests of different learners. In an environment that is fully personalized, the learning objectives and content as well as the method and pace may all vary (so personalization encompasses differentiation and individualization).’ What drives this is the big data generated by the students’ interactions with the technology (see ‘Part 4: big data and analytics’ of ‘The Guide’ on this blog).

What remains unclear is exactly how this might work in English language learning. Adaptive software can only personalize to the extent that the content of an English language learning programme allows it to do so. It may be true that each student using adaptive software ‘gets a more personalised experience no matter whose content the student is consuming’, as Knewton’s David Liu puts it. But the potential for any really meaningful personalization depends crucially on the nature and extent of this content, along with the possibility of variable learning outcomes. For this reason, we are not likely to see any truly personalized large-scale adaptive learning programs for English any time soon.

Nevertheless, technology is now central to personalized language learning. A good learning platform, which allows learners to connect to ‘social networking systems, podcasts, wikis, blogs, encyclopedias, online dictionaries, webinars, online English courses, various apps’, etc (see Alexandra Chistyakova’s eltdiary), means that personalization could be more easily achieved.

For the time being, at least, adaptive learning systems would seem to work best for ‘those things that can be easily digitized and tested like math problems and reading passages’, writes Barbara Bray. Or low-level vocabulary and grammar McNuggets, we might add. Ideal for, say, ‘English Grammar in Use’. But meaningfully personalized language learning?


‘Personalized learning’ sounds very progressive, a utopian educational horizon, and it sounds like it ought to be the future of ELT (as Cleve Miller argues). It also sounds like a pretty good slogan on which to hitch the adaptive bandwagon. But somehow, just somehow, I suspect that when it comes to adaptive learning we’re more likely to see more testing, more data collection and more depersonalization.



I mentioned the issue of privacy very briefly in Part 9 of the ‘Guide’, and it seems appropriate to take a more detailed look.

Adaptive learning needs big data. Without the big data, there is nothing for the algorithms to work on, and the bigger the data set, the better the software can work. Adaptive language learning will be delivered via a platform, and the data that is generated by the language learner’s interaction with the English language program on the platform is likely to be only one, very small, part of the data that the system will store and analyse. Full adaptivity requires a psychometric profile for each student.

It would make sense, then, to aggregate as much data as possible in one place. Besides the practical value of massively combining different data sources (in order to enhance the usefulness of the personalized learning pathways), such a move would possibly save educational authorities substantial amounts of money and allow educational technology companies to mine the goldmine of student data, using the standardised platform specifications to design their products.

And so it has come to pass. The Gates Foundation (yes, them again) provided most of the $100 million funding. A division of Murdoch’s News Corp built the infrastructure. Once everything was ready, a non-profit organization called inBloom was set up to run the thing. The inBloom platform is open source and the database was initially free, although this was due to change. Preliminary agreements were made with seven US districts, involving millions of children. The data includes ‘students’ names, birthdates, addresses, social security numbers, grades, test scores, disability status, attendance, and other confidential information’ (Ravitch, D. Reign of Error, New York: Knopf, 2013, pp. 235-236). Under federal law, this information can be ‘shared’ with private companies selling educational technology and services.

The edtech world rejoiced. ‘This is going to be a huge win for us,’ said one educational software provider; ‘it’s a godsend for us,’ said another. Others were not so happy. If the technology actually works, if it can radically transform education and ‘produce game-changing outcomes’ (as its proponents so often claim), the price to be paid might just conceivably be worth paying. But the price is high and the research is not there yet. The price is privacy.

The problem is simple. inBloom itself acknowledges that it ‘cannot guarantee the security of the information stored… or that the information will not be intercepted when it is being transmitted.’ Experience has already shown us that organisations as diverse as the CIA and the British health service cannot protect their data. Hackers like a good challenge. So do businesses.

The anti-privatization (and, by extension, anti-adaptivity) lobby in the US has found an issue which resonates with electors (and parents). These dissenting voices are led by Class Size Matters, and they are being heard. Of inBloom’s original partners, only one, New York, now remains; the others have all pulled out, mostly because of concerns about privacy. New York’s participation alone involves personal data on 2.7 million students, which can be shared without any parental notification or consent.


This might seem like a victory for the anti-privatization / anti-adaptivity lobby, but it is likely to be only temporary. There are plenty of other companies that have their eyes on the data-mining opportunities that will be coming their way, and Obama’s ‘Race to the Top’ program means that the inBloom controversy will be only a temporary setback. ‘The reality is that it’s going to be done. It’s not going to be a little part. It’s going to be a big part. And it’s going to be put in place partly because it’s going to be less expensive than doing professional development,’ says Eva Baker of the Center for the Study of Evaluation at UCLA.

It is in this light that the debate about adaptive learning becomes hugely significant. Class Size Matters, the odd academic like Neil Selwyn or the occasional blogger like myself will not be able to reverse a trend with seemingly unstoppable momentum. But we are, collectively, in a position to influence the way these changes will take place.

If you want to find out more, check out the inBloom and Class Size Matters links. You might also like to read the news reports I have drawn on for this post. Of these, the second was originally published by Scientific American (owned by Macmillan, one of the leading players in ELT adaptive learning). The third and fourth are from Education Week, which is funded in part by the Gates Foundation.

http://www.reuters.com/article/2013/03/03/us-education-database-idUSBRE92204W20130303

http://www.salon.com/2013/08/01/big_data_puts_teachers_out_of_work_partner/

http://www.edweek.org/ew/articles/2014/01/08/15inbloom_ep.h33.html

http://blogs.edweek.org/edweek/marketplacek12/2013/12/new_york_battle_over_inBloom_data_privacy_heading_to_court.html