Archive for the ‘Online learning’ Category

by Philip Kerr & Andrew Wickham

from IATEFL 2016 Birmingham Conference Selections (ed. Tania Pattison) Faversham, Kent: IATEFL pp. 75 – 78

ELT publishing, international language testing and private language schools are all industries: products are produced, bought and sold for profit. English language teaching (ELT) is not. It is an umbrella term that is used to describe a range of activities, some of which are industries, and some of which (such as English teaching in high schools around the world) might better be described as public services. ELT, like education more generally, is, nevertheless, often referred to as an ‘industry’.

Education in a neoliberal world

The framing of ELT as an industry is both a reflection of how we understand the term and a force that shapes our understanding. Associated with the idea of ‘industry’ is a constellation of other ideas and words (such as efficacy, productivity, privatization, marketization, consumerization, digitalization and globalization) which become a part of ELT once it is framed as an industry. Repeated often enough, ‘ELT as an industry’ can become a metaphor that we think and live by. Those activities that fall under the ELT umbrella, but which are not industries, become associated with the desirability of industrial practices through such discourse.

The shift from education, seen as a public service, to educational managerialism (where education is seen in industrial terms with a focus on efficiency, free market competition, privatization and a view of students as customers) can be traced to the 1980s and 1990s (Gewirtz, 2001). In 1999, under pressure from developed economies, the General Agreement on Trade in Services (GATS) transformed education into a commodity that could be traded like any other in the marketplace (Robertson, 2006). The global industrialisation and privatization of education continues to be promoted by transnational organisations (such as the World Bank and the OECD), well-funded free-market think-tanks (such as the Cato Institute), philanthro-capitalist foundations (such as the Gates Foundation) and educational businesses (such as Pearson) (Ball, 2012).

Efficacy and learning outcomes

Managerialist approaches to education require educational products and services to be measured and compared. In ELT, the most visible manifestation of this requirement is the current ubiquity of learning outcomes. Contemporary coursebooks are full of ‘can-do’ statements, although these are not necessarily of any value to anyone. Examples from one unit of one best-selling course include ‘Now I can understand advice people give about hotels’ and ‘Now I can read an article about unique hotels’ (McCarthy et al. 2014: 74). However, in a world where accountability is paramount, they are deemed indispensable. The problem from a pedagogical perspective is that teaching input does not necessarily equate with learning uptake. Indeed, there is no reason why it should.

Drawing on the Common European Framework of Reference for Languages (CEFR) for inspiration, new performance scales have emerged in recent years. These include the Cambridge English Scale and the Pearson Global Scale of English. Moving away from the broad six categories of the CEFR, such scales permit finer-grained measurement and we now see individual vocabulary and grammar items tagged to levels. Whilst such initiatives undoubtedly support measurements of efficacy, the problem from a pedagogical perspective is that they assume that language learning is linear and incremental, as opposed to complex and jagged.

Given the importance accorded to the measurement of language learning (or what might pass for language learning), it is unsurprising that attention is shifting towards the measurement of what is probably the most important factor impacting on learning: the teaching. Teacher competency scales have been developed by Cambridge Assessment, the British Council and EAQUALS (Evaluation and Accreditation of Quality Language Services), among others.

The backwash effects of the deployment of such scales are yet to be fully experienced, but the likely increase in the perception of both language learning and teacher learning as the synthesis of granularised ‘bits of knowledge’ is cause for concern.

Digital technology

Digital technology may offer advantages to both English language teachers and learners, but its rapid growth in language learning is the result, primarily but not exclusively, of the way it has been promoted by those who stand to gain financially. In education, generally, and in English language teaching, more specifically, advocacy of the privatization of education is always accompanied by advocacy of digitalization. The global market for digital English language learning products was reported to be $2.8 billion in 2015 and is predicted to reach $3.8 billion by 2020 (Ambient Insight, 2016).

In tandem with the increased interest in measuring learning outcomes, there is fierce competition in the market for high-stakes examinations, and these are increasingly digitally delivered and marked. In the face of this competition and in a climate of digital disruption, companies like Pearson and Cambridge English are developing business models of vertical integration in which they can provide and sell everything from placement testing to courseware (either in print or delivered through an LMS), teaching, assessment and teacher training. Huge investments are being made in pursuit of such models. Pearson, for example, recently bought GlobalEnglish and Wall Street English, and set up a partnership with Busuu, thus covering all aspects of language learning from resource provision and publishing to offline and online training delivery.

As regards assessment, the most recent adult coursebook from Cambridge University Press (in collaboration with Cambridge English Language Assessment), ‘Empower’ (Doff et al., 2015), sells itself on a combination of course material with integrated, validated assessment.

Besides its potential for scalability (and therefore greater profit margins), the appeal (to some) of platform-delivered English language instruction is that it facilitates assessment that is much finer-grained and actionable in real time. Digitization and testing go hand in hand.

Few English language teachers have been unaffected by the move towards digital. In the state sectors, large-scale digitization initiatives (such as the distribution of laptops for educational purposes, the installation of interactive whiteboards, the move towards blended models of instruction or the move away from printed coursebooks) are becoming commonplace. In the private sectors, online (or partially online) language schools are taking market share from the traditional bricks-and-mortar institutions.

These changes have entailed modifications to the skill-sets that teachers need to have. Two announcements at this conference reflect this shift. First of all, Cambridge English launched their ‘Digital Framework for Teachers’, a matrix of six broad competency areas organised into four levels of proficiency. Secondly, Aqueduto, the Association for Quality Education and Training Online, was launched, setting itself up as an accreditation body for online or blended teacher training courses.

Teachers’ pay and conditions

In the United States, and likely soon in the UK, the move towards privatization is accompanied by an overt attack on teachers’ unions, rights, pay and conditions (Selwyn, 2014). As English language teaching in both public and private sectors is commodified and marketized, it is no surprise to find that the drive to bring down costs has a negative impact on teachers worldwide. Gwynt (2015), for example, catalogues cuts in funding, large-scale redundancies, a narrowing of the curriculum, intensified workloads (including the need to comply with ‘quality control measures’), the deskilling of teachers, dilapidated buildings, minimal resources and low morale in an ESOL department in one British further education college. In France, a large-scale study by Wickham, Cagnol, Wright and Oldmeadow (Linguaid, 2015; Wright, 2016) found that EFL teachers in the very competitive private sector typically had multiple employers, limited or no job security, limited sick pay and holiday pay, very little training and low hourly rates that were deteriorating. One of the principal drivers of the pressure on salaries is the rise of online training delivery through Skype and other online platforms, using offshore teachers in low-cost countries such as the Philippines. This type of training represents 15% by value and up to 25% by volume of all language training in the French corporate sector, and is developing fast in emerging countries. These examples are illustrative of a broad global trend.

Implications

Given the current climate, teachers will benefit from closer networking with fellow professionals in order, not least, to be aware of the rapidly changing landscape. It is likely that they will need to develop and extend their skill sets (especially their online skills and visibility and their specialised knowledge), to differentiate themselves from competitors and to be able to demonstrate that they are in tune with current demands. More generally, it is important to recognise that current trends have yet to run their full course. Conditions for teachers are likely to deteriorate further before they improve. More than ever before, teachers who want to have any kind of influence on the way that marketization and industrialization are shaping their working lives will need to do so collectively.

References

Ambient Insight. 2016. The 2015-2020 Worldwide Digital English Language Learning Market. http://www.ambientinsight.com/Resources/Documents/AmbientInsight_2015-2020_Worldwide_Digital_English_Market_Sample.pdf

Ball, S. J. 2012. Global Education Inc. Abingdon, Oxon.: Routledge

Doff, A., Thaine, C., Puchta, H., Stranks, J. and P. Lewis-Jones 2015. Empower. Cambridge: Cambridge University Press

Gewirtz, S. 2001. The Managerial School: Post-welfarism and Social Justice in Education. Abingdon, Oxon.: Routledge

Gwynt, W. 2015. ‘The effects of policy changes on ESOL’. Language Issues 26 / 2: 58 – 60

McCarthy, M., McCarten, J. and H. Sandiford 2014. Touchstone 2 Student’s Book Second Edition. Cambridge: Cambridge University Press

Linguaid. 2015. Le Marché de la Formation Langues à l’Heure de la Mondialisation. Guildford: Linguaid

Robertson, S. L. 2006. ‘Globalisation, GATS and trading in education services’. Bristol: Centre for Globalisation, Education and Societies, University of Bristol. http://www.bris.ac.uk/education/people/academicStaff/edslr/publications/04slr

Selwyn, N. 2014. Distrusting Educational Technology. New York: Routledge

Wright, R. 2016. ‘My teacher is rich … or not!’ English Teaching Professional 103: 54 – 56

 

 

All aboard …

The point of adaptive learning is that it can personalize learning. When we talk about personalization, mention of learning styles is rarely far away. Jose Ferreira of Knewton (now the company’s ex-CEO) made his case for learning styles in a blog post that generated a superb and, for Ferreira, embarrassing discussion in the comments, which were subsequently deleted by Knewton. FluentU (which I reviewed here) clearly approves of learning styles, or at least sees them as a useful way to market their product, even though it is unclear how the product caters to different styles. Busuu claims to be ‘personalised to fit your style of learning’. Voxy, Inc. (according to their company overview) ‘operates a language learning platform that creates custom curricula for English language learners based on their interests, routines, goals, and learning styles’. Bliu Bliu (which I also reviewed here) recommended, in a recent blog post, that learners should ‘find out their language learner type and use it to their advantage’ and suggests, as a starter, trying out ‘Bliu Bliu, where pretty much any learner can find what suits them best’. Memrise ‘uses clever science to adapt to your personal learning style’. Duolingo’s learning tree ‘effectively rearranges itself to suit individual learning styles’, according to founder Luis von Ahn. This list could go on and on.

Learning styles are thriving in ELT coursebooks, too. Here are just three recent examples for learners of various ages. Today! by Todd, D. & Thompson, T. (Pearson, 2014) ‘shapes learning around individual students with graded difficulty practice for mixed-ability classes’ and ‘makes testing mixed-ability classes easier with tests that you can personalise to students’ abilities’.

Move it! by Barraclough, C., Beddall, F., Stannett, K. & Wildman, J. (Pearson, 2015) offers ‘personalized pathways [which] allow students to optimize their learning outcomes’ and a ‘complete assessment package to monitor students’ learning process’.

Open Mind Elementary (A2) 2nd edition by Rogers, M., Taylor-Knowles, J. & Taylor-Knowles, S. (Macmillan, 2014) has a whole page devoted to learning styles in the ‘Life Skills’ strand of the course. The scope and sequence describes it in the following terms: ‘Thinking about what you like to do to find your learning style and improve how you learn English’.

Methodology books offer more tips for ways that teachers can cater to different learning styles. Recent examples include Patrycja Kamińska’s Learning Styles and Second Language Education (Cambridge Scholars, 2014), Tammy Gregersen & Peter D. MacIntyre’s Capitalizing on Language Learners’ Individuality (Multilingual Matters, 2014) and Marjorie Rosenberg’s Spotlight on Learning Styles (Delta Publishing, 2013). Teacher magazines show a continuing interest in the topic. Humanising Language Teaching and English Teaching Professional are particularly keen. The British Council offers courses about learning styles, and its Teaching English website has many articles and lesson plans on the subject (my favourite explains that your students will be more successful if you match your teaching style to their learning styles), as do the websites of all the major publishers. Most ELT conferences will also offer something on the topic.

How about language teaching qualifications and frameworks? The Cambridge English Teaching Framework contains a component entitled ‘Understanding learners’, and this specifies, as the first part of the component, a knowledge of concepts such as learning styles (e.g. visual, auditory, kinaesthetic), multiple intelligences, learning strategies, special needs and affect. Unsurprisingly, the Cambridge CELTA qualification requires successful candidates to demonstrate an awareness of the different learning styles and preferences that adults bring to learning English. The Cambridge DELTA requires successful candidates to accommodate learners according to their different abilities, motivations and learning styles. The Eaquals Framework for Language Teacher Training and Development requires teachers at Development Phase 2 to have the skill of determining and anticipating learners’ language learning needs and learning styles at a range of levels, selecting appropriate ways of finding out about these.

Outside of ELT, learning styles also continue to thrive. Phil Newton (2015. ‘The learning styles myth is thriving in higher education’. Frontiers in Psychology 6: 1908) carried out a survey of higher education research publications between 2013 and 2016, and found that an overwhelming majority (89%) implicitly or directly endorsed the use of learning styles. He also cites research showing that 93% of UK schoolteachers believe that ‘individuals learn better when they receive information in their preferred Learning Style’, with similar figures in other countries, and that 72% of higher education institutions in the US teach ‘learning style theory’ as part of faculty development for online teachers. Advocates of learning styles in English language teaching are not alone.

But, unfortunately, …

In case you weren’t aware of it, there is a rather big problem with learning styles. There is a huge amount of research which suggests that learning styles (and, in particular, teaching attempts to cater to learning styles) need to be approached with extreme scepticism. Much of this research was published long before the blog posts, advertising copy, books and teaching frameworks (listed above) were written. What does this research have to tell us?

The first problem concerns learning styles taxonomies. There are three issues here: many people do not fit one particular style, the information used to assign people to styles is often inadequate, and there are so many different styles that it becomes cumbersome to link particular learners to particular styles (Kirschner, P. A. & van Merriënboer, J. J. G. 2013. ‘Do Learners Really Know Best? Urban Legends in Education’. Educational Psychologist 48 / 3: 169 – 183). To summarise, given the lack of clarity as to which learning styles actually exist, it may be ‘neither viable nor justified’ for learning styles to form the basis of lesson planning (Hall, G. 2011. Exploring English Language Teaching. Abingdon, Oxon.: Routledge, p. 140). More detailed information about these issues can be found in the following sources:

Coffield, F., Moseley, D., Hall, E. & Ecclestone, K. 2004. Learning styles and pedagogy in post-16 learning: a systematic and critical review. London: Learning and Skills Research Centre

Dembo, M. H. & Howard, K. 2007. Advice about the use of learning styles: a major myth in education. Journal of College Reading & Learning 37 / 2: 101 – 109

Kirschner, P. A. 2017. Stop propagating the learning styles myth. Computers & Education 106: 166 – 171

Pashler, H., McDaniel, M., Rohrer, D. & Bjork, E. 2008. Learning styles concepts and evidence. Psychological Science in the Public Interest 9 / 3: 105 – 119

Riener, C. & Willingham, D. 2010. The myth of learning styles. Change – The Magazine of Higher Learning

The second problem concerns what Pashler et al. refer to as the ‘meshing hypothesis’: the idea that instructional interventions can be effectively tailored to match particular learning styles. Pashler et al. concluded that the available taxonomies of student types do not offer any valid help in deciding what kind of instruction to offer each individual. Even in 2008, their finding was not new. Back in 1978, a review of 15 studies that looked at attempts to match learning styles to approaches to first language reading instruction concluded that modality preference ‘has not been found to interact significantly with the method of teaching’ (Tarver, S. & Dawson, M. M. 1978. ‘Modality preference and the teaching of reading’. Journal of Learning Disabilities 11: 17 – 29). The following year, two other researchers concluded that the assumption that one can improve instruction by matching materials to children’s modality strengths ‘appears to lack even minimal empirical support’ (Arter, J. A. & Jenkins, J. A. 1979. ‘Differential diagnosis-prescriptive teaching: a critical appraisal’. Review of Educational Research 49: 517 – 555). Fast forward twenty years to 1999, and Stahl (‘Different strokes for different folks?’ American Educator Fall 1999: 1 – 5) was writing that ‘the reason researchers roll their eyes at learning styles is the utter failure to find that assessing children’s learning styles and matching to instructional methods has any effect on learning. The area with the most research has been the global and analytic styles […]. Over the past 30 years, the names of these styles have changed – from “visual” to “global” and from “auditory” to “analytic” – but the research results have not changed’. For a recent evaluation of the practical applications of learning styles, have a look at Rogowsky, B. A., Calhoun, B. M. & Tallal, P. 2015. ‘Matching Learning Style to Instructional Method: Effects on Comprehension’. Journal of Educational Psychology 107 / 1: 64 – 78.
Even David Kolb, the Big Daddy of learning styles, now concedes that there is no strong evidence that teachers should tailor their instruction to their students’ particular learning styles (reported in Glenn, D. 2009. ‘Matching teaching style to learning style may not help students’. The Chronicle of Higher Education). To summarise, the meshing hypothesis is entirely unsupported in the scientific literature. It is a myth (Howard-Jones, P. A. 2014. ‘Neuroscience and education: myths and messages’. Nature Reviews Neuroscience).

This brings me back to the blog posts, advertising blurb, coursebooks, methodology books and so on that continue to tout learning styles. The writers of these texts typically do not acknowledge that there’s a problem of any kind. Are they unaware of the research? Or are they aware of it, but choose not to acknowledge it? I suspect that the former is often the case with the app developers. But if the latter is the case, what might the reasons for staying silent be? In the case of teacher training specifications, the reason is probably practical: changing a syllabus is an expensive and time-consuming operation. But in the case of some of the ELT writers, I suspect that they hang on in there because they so much want to believe.

As Newton (2015: 2) notes, ‘intuitively, there is much that is attractive about the concept of Learning Styles. People are obviously different and Learning Styles appear to offer educators a way to accommodate individual learner differences’. Pashler et al. (2008: 107) add that ‘another related factor that may play a role in the popularity of the learning-styles approach has to do with responsibility. If a person or a person’s child is not succeeding or excelling in school, it may be more comfortable for the person to think that the educational system, not the person or the child himself or herself, is responsible. That is, rather than attribute one’s lack of success to any lack of ability or effort on one’s part, it may be more appealing to think that the fault lies with instruction being inadequately tailored to one’s learning style. In that respect, there may be linkages to the self-esteem movement that became so influential, internationally, starting in the 1970s’. There is no reason to doubt that many of those who espouse learning styles have good intentions.

No one, I think, seriously doubts that learners might benefit from a wide variety of input styles and learning tasks. People are obviously different. MacIntyre et al. (MacIntyre, P. D., Gregersen, T. & Clément, R. 2016. ‘Individual Differences’ in Hall, G. (ed.) The Routledge Handbook of English Language Teaching. Abingdon, Oxon.: Routledge, pp. 310 – 323, p. 319) suggest that teachers might consider instructional methods that allow them to ‘capitalise on both variety and choice’ and also help learners find ways to do this for themselves inside and outside the classroom. Jill Hadfield (2006. ‘Teacher Education and Trainee Learning Style’. RELC Journal 37 / 3: 369 – 388) recommends that we ‘design our learning tasks across the range of learning styles so that our trainees can move across the spectrum, experiencing both the comfort of matching and the challenge produced by mismatching’. But this is not the same thing as claiming that identification of a particular learning style can lead to instructional decisions. The value of books like Rosenberg’s Spotlight on Learning Styles lies in their wide range of practical suggestions for varying teaching styles and tasks. They contain ideas of educational value: it is unfortunate that the theoretical background is so thin.

In ELT things are, perhaps, beginning to change. Russ Mayne’s 2012 blog post ‘Learning styles: facts and fictions’ got a few heads nodding, and he followed it up two years later with a presentation at IATEFL looking at various aspects of ELT, including learning styles, which have little or no scientific credibility. Carol Lethaby and Patricia Harries gave a talk at IATEFL 2016, ‘Changing the way we approach learning styles in teacher education’, which was also much discussed and shared online. They also had an article in ELT Journal called ‘Learning styles and teacher training: are we perpetuating neuromyths?’ (2016. ELTJ 70 / 1: 16 – 27). Even Pearson, in a blog post of November 2016 (‘Mythbusters: A review of research on learning styles’), acknowledges that there is ‘a shocking lack of evidence to support the core learning styles claim that customizing instruction based on students’ preferred learning styles produces better learning than effective universal instruction’, concluding that ‘it is impossible to recommend learning styles as an effective strategy for improving learning outcomes’.

 

 

About two and a half years ago, when I started writing this blog, there was a lot of hype around adaptive learning and the big data which might drive it. Two and a half years is a long time in technology. A look at Google Trends suggests that interest in adaptive learning has been pretty static for the last couple of years. It’s interesting to note that three of the seven lettered points on this graph are Knewton-related media events (including the most recent, A, which is Knewton’s latest deal with Hachette) and two of them concern McGraw-Hill. It would be interesting to know whether these companies follow both parts of Simon Cowell’s dictum: ‘Create the hype, but don’t ever believe it’.

[Google Trends graph]

A look at the Hype Cycle (see here for Wikipedia’s entry on the topic and for criticism of the hype of Hype Cycles) of the IT research and advisory firm, Gartner, indicates that both big data and adaptive learning have now slid into the ‘trough of disillusionment’, which means that the market has started to mature, becoming more realistic about how useful the technologies can be for organizations.

A few years ago, the Gates Foundation, one of the leading cheerleaders and financial promoters of adaptive learning, launched its Adaptive Learning Market Acceleration Program (ALMAP) to ‘advance evidence-based understanding of how adaptive learning technologies could improve opportunities for low-income adults to learn and to complete postsecondary credentials’. It’s striking that the program’s aims referred to how such technologies could lead to learning gains, not whether they would. Now, though, with the publication of a report commissioned by the Gates Foundation to analyze the data coming out of the ALMAP program, things are looking less rosy. The report is inconclusive: there is no firm evidence that adaptive learning systems are leading to better course grades or course completion. ‘The ultimate goal – better student outcomes at lower cost – remains elusive’, the report concludes. Rahim Rajan, a senior program officer for Gates, is clear: ‘There is no magical silver bullet here.’

The same conclusion is being reached elsewhere. A report for the National Education Policy Center (in Boulder, Colorado) concludes: ‘Personalized Instruction, in all its many forms, does not seem to be the transformational technology that is needed, however. After more than 30 years, Personalized Instruction is still producing incremental change. The outcomes of large-scale studies and meta-analyses, to the extent they tell us anything useful at all, show mixed results ranging from modest impacts to no impact. Additionally, one must remember that the modest impacts we see in these meta-analyses are coming from blended instruction, which raises the cost of education rather than reducing it’ (Enyedy, 2014: 15; see reference at the foot of this post). In the same vein, a recent academic study by Meg Coffin Murray and Jorge Pérez (2015. ‘Informing and Performing: A Study Comparing Adaptive Learning to Traditional Learning’) found that ‘adaptive learning systems have negligible impact on learning outcomes’.

In the latest educational technology plan from the U.S. Department of Education (‘Future Ready Learning: Reimagining the Role of Technology in Education’, 2016), the only mentions of the word ‘adaptive’ are in the context of testing. And the latest OECD report, ‘Students, Computers and Learning: Making the Connection’ (2015), finds, more generally, that information and communication technologies, when they are used in the classroom, have, at best, a mixed impact on student performance.

There is, however, too much money at stake for the earlier hype to disappear completely. Sponsored cheerleading for adaptive systems continues to find its way into blogs and national magazines and newspapers. EdSurge, for example, recently published a report called ‘Decoding Adaptive’ (2016), sponsored by Pearson, that continues to wave the flag. Enthusiastic anecdotes take the place of evidence, but, for all that, it’s a useful read.

In the world of ELT, there are plenty of sales people who want new products which they can call ‘adaptive’ (and gamified, too, please). But it’s striking that three years after I started following the hype, such products are rather thin on the ground. Pearson was the first of the big names in ELT to do a deal with Knewton, and invested heavily in the company. Their relationship remains close. But, to the best of my knowledge, the only truly adaptive ELT product that Pearson offers is the PTE test.

Macmillan signed a contract with Knewton in May 2013 ‘to provide personalized grammar and vocabulary lessons, exam reviews, and supplementary materials for each student’. In December of that year, they talked up their new ‘big tree online learning platform’: ‘Look out for the Big Tree logo over the coming year for more information as to how we are using our partnership with Knewton to move forward in the Language Learning division and create content that is tailored to students’ needs and reactive to their progress.’ I’ve been looking out, but it’s all gone rather quiet on the adaptive / platform front.

In September 2013, it was the turn of Cambridge to sign a deal with Knewton ‘to create personalized learning experiences in its industry-leading ELT digital products for students worldwide’. This year saw the launch of a major new CUP series, ‘Empower’. It has an online workbook with personalized extra practice, but there’s nothing (yet) that anyone would call adaptive. More recently, Cambridge has launched the online version of the 2nd edition of Touchstone. Nothing adaptive there, either.

Earlier this year, Cambridge published The Cambridge Guide to Blended Learning for Language Teaching, edited by Mike McCarthy. It contains a chapter by M.O.Z. San Pedro and R. Baker on ‘Adaptive Learning’. It’s an enthusiastic account of the potential of adaptive learning, but it doesn’t contain a single reference to language learning or ELT!

So, what’s going on? Skepticism is becoming the order of the day. The early hype of people like Knewton’s Jose Ferreira is now understood for what it was. Companies like Macmillan got their fingers badly burnt when they barked up the wrong tree with their ‘Big Tree’ platform.

Noel Enyedy captures a more contemporary understanding when he writes: ‘Personalized Instruction is based on the metaphor of personal desktop computers – the technology of the 80s and 90s. Today’s technology is not just personal but mobile, social, and networked. The flexibility and social nature of how technology infuses other aspects of our lives is not captured by the model of Personalized Instruction, which focuses on the isolated individual’s personal path to a fixed end-point. To truly harness the power of modern technology, we need a new vision for educational technology’ (Enyedy, 2014: 16).

Adaptive solutions aren’t going away, but there is now a much better understanding of what sorts of problems might have adaptive solutions. Testing is certainly one. As the educational technology plan from the U.S. Department of Education (‘Future Ready Learning: Reimagining the Role of Technology in Education’, 2016) puts it: ‘Computer adaptive testing, which uses algorithms to adjust the difficulty of questions throughout an assessment on the basis of a student’s responses, has facilitated the ability of assessments to estimate accurately what students know and can do across the curriculum in a shorter testing session than would otherwise be necessary.’ In ELT, Pearson and EF have adaptive tests that have been well researched and designed.
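To make the mechanism concrete, here is a minimal, hypothetical sketch of the kind of difficulty adjustment the Department of Education plan describes: a simple staircase procedure that raises item difficulty after a correct answer and lowers it after an incorrect one. Real computer adaptive tests (including Pearson’s and EF’s) rest on far more sophisticated psychometric models, such as item response theory; the function names and numbers here are my own invention for illustration.

```python
def adaptive_test(answer_correctly, n_questions=10, start=3,
                  min_level=1, max_level=10, step=1):
    """Staircase sketch of computer adaptive testing: difficulty goes
    up after a correct answer and down after an incorrect one."""
    level = start
    history = []
    for _ in range(n_questions):
        correct = answer_correctly(level)  # present an item at this level
        history.append((level, correct))
        if correct:
            level = min(max_level, level + step)
        else:
            level = max(min_level, level - step)
    return level, history

# Simulated learner who copes with items up to difficulty 5
final, history = adaptive_test(answer_correctly=lambda level: level <= 5)
print(final)  # 5 – the staircase settles at the learner's ability boundary
```

Because the difficulty keeps rising until the test-taker starts making mistakes, the procedure homes in on the ability boundary within a handful of items, which is why adaptive tests can be shorter than fixed-form ones.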

Vocabulary apps which deploy adaptive technology continue to become more sophisticated, although empirical research is lacking. Automated writing tutors with adaptive corrective feedback are also developing fast, and I’ll be writing a post about these soon. Similarly, as speech recognition software improves, we can expect to see better and better automated adaptive pronunciation tutors. But going beyond such applications, there are bigger questions to ask, and answers to these will impact on whatever direction adaptive technologies take. Large platforms (LMSs), with or without adaptive software, are already beginning to look rather dated. Will they be replaced by integrated apps, or are apps themselves going to be replaced by bots (currently riding high in the Hype Cycle)? In language learning and teaching, the future of bots is likely to be shaped by developments in natural language processing (another topic about which I’ll be blogging soon). Nobody really has a clue where the next two and a half years will take us (if anywhere), but it’s becoming increasingly likely that adaptive learning will be only one very small part of it.

 

Enyedy, N. 2014. Personalized Instruction: New Interest, Old Rhetoric, Limited Results, and the Need for a New Direction for Computer-Mediated Learning. Boulder, CO: National Education Policy Center. Retrieved 17.07.16 from http://nepc.colorado.edu/publication/personalized-instruction

Adaptive learning providers make much of their ability to provide learners with personalised feedback and to provide teachers with dashboard feedback on the performance of both individuals and groups. All well and good, but my interest here is in the automated feedback that software could provide on very specific learning tasks. Scott Thornbury, in a recent talk, ‘Ed Tech: The Mouse that Roared?’, listed six ‘problems’ of language acquisition that educational technology for language learning needs to address. One of these he framed as follows: ‘The feedback problem, i.e. how does the learner get optimal feedback at the point of need?’, and suggested that technological applications ‘have some way to go.’ He was referring, not to the kind of feedback that dashboards can provide, but to the kind of feedback that characterises a good language teacher: corrective feedback (CF) – the way that teachers respond to learner utterances (typically those containing errors, but not necessarily restricted to these) in what Ellis and Shintani call ‘form-focused episodes’[1]. These responses may include a direct indication that there is an error, a reformulation, a request for repetition, a request for clarification, an echo with questioning intonation, etc. Basically, they are correction techniques.

These days, there isn’t really any debate about the value of CF. There is a clear research consensus that it can aid language acquisition. Discussing learning in more general terms, Hattie[2] claims that ‘the most powerful single influence enhancing achievement is feedback’. The debate now centres around the kind of feedback, and when it is given. Interestingly, evidence[3] has been found that CF is more effective in the learning of discrete items (e.g. some grammatical structures) than in communicative activities. Since it is precisely this kind of approach to language learning that we are more likely to find in adaptive learning programs, it is worth exploring further.

What do we know about CF in the learning of discrete items? First of all, it works better when it is explicit than when it is implicit (Li, 2010), although this needs to be nuanced. In immediate post-tests, explicit CF is better than implicit variations. But over a longer period of time, implicit CF provides better results. Secondly, formative feedback (as opposed to right / wrong testing-style feedback) strengthens retention of the learning items: this typically involves the learner repairing their error, rather than simply noticing that an error has been made. This is part of what cognitive scientists[4] sometimes describe as the ‘generation effect’. Whilst learners may benefit from formative feedback without repairing their errors, Ellis and Shintani (2014: 273) argue that the repair may result in ‘deeper processing’ and, therefore, assist learning. Thirdly, there is evidence that some delay in receiving feedback aids subsequent recall, especially over the longer term. Ellis and Shintani (2014: 276) suggest that immediate CF may ‘benefit the development of learners’ procedural knowledge’, while delayed CF is ‘perhaps more likely to foster metalinguistic understanding’. You can read a useful summary of a meta-analysis of feedback effects in online learning here, or you can buy the whole article here.

I have yet to see an online language learning program which can do CF well, but I think it’s a matter of time before things improve significantly. First of all, at the moment, feedback is usually immediate, or almost immediate. This is unlikely to change, for a number of reasons – foremost among them being the pride that ed tech takes in providing immediate feedback, and the fact that online learning is increasingly being conceptualised and consumed in bite-sized chunks, something you do on your phone between doing other things. What will change in better programs, however, is that feedback will become more formative. As things stand, tasks are usually of a very closed variety, with drag-and-drop being one of the most popular. Only one answer is possible and feedback is usually of the right / wrong-and-here’s-the-correct-answer kind. But tasks of this kind are limited in their value, and, at some point, tasks are needed where more than one answer is possible.

Here’s an example of a translation task from Duolingo, where a simple sentence could be translated into English in quite a large number of ways.

[Screenshot: Duolingo’s response to the translation ‘I am doing a basket’]

Decontextualised as it is, the sentence could be translated in the way that I have done it, although it’s unlikely. The feedback, however, is of relatively little help to the learner, who would benefit from guidance of some sort. The simple reason that Duolingo doesn’t offer useful feedback is that the programme is static. It has been programmed to accept certain answers (e.g. in this case both the present simple and the present continuous are acceptable), but everything else will be rejected. Why? Because it would take too long and cost too much to anticipate and enter in all the possible answers. Why doesn’t it offer formative feedback? Because in order to do so, it would need to identify the kind of error that has been made. If we can identify the kind of error, we can make a reasonable guess about the cause of the error, and select appropriate CF … this is what good teachers do all the time.

Analysing the kind of error that has been made is the first step in providing appropriate CF, and it can be done, with increasing accuracy, by current technology, but it requires a lot of computing. Let’s take spelling as a simple place to start. If you enter ‘I am makeing a basket for my mother’ in the Duolingo translation above, the program tells you ‘Nice try … there’s a typo in your answer’. Given the configuration of keyboards, it is highly unlikely that this is a typo. It’s a simple spelling mistake and teachers recognise it as such because they see it so often. For software to achieve the same insight, it would need, as a start, to trawl a large English dictionary database and a large tagged database of learner English. The process is quite complicated, but it’s perfectly do-able, and learners could be provided with CF in the form of a ‘spelling hint’.

[Screenshot: Duolingo’s response to ‘I am makeing a basket’]
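The kind of reasoning described above can be sketched with a toy word list and a keyboard-adjacency map. Everything here is illustrative: the tiny dictionary and adjacency table stand in for the large dictionary and tagged learner-corpus databases a real system would need.

```python
# A rough sketch of the 'spelling hint' idea: a slip of the finger tends
# to substitute a key adjacent on a QWERTY keyboard, whereas an error
# like 'makeing' (keeping the silent 'e' before '-ing') is a genuine
# spelling mistake. Word list and adjacency map are illustrative only.

QWERTY_NEIGHBOURS = {
    "m": "njk", "n": "bmhj", "a": "qwsz", "k": "jlmio", "e": "wrsd",
}

DICTIONARY = ("making", "taking", "basket", "mother")

def one_substitution(word, target):
    """Return the (typed, intended) pair if word differs from target
    by exactly one substituted letter, else None."""
    if len(word) != len(target):
        return None
    diffs = [(w, t) for w, t in zip(word, target) if w != t]
    return diffs[0] if len(diffs) == 1 else None

def diagnose(word):
    if word in DICTIONARY:
        return "correct"
    for target in DICTIONARY:
        sub = one_substitution(word, target)
        if sub:
            typed, intended = sub
            if typed in QWERTY_NEIGHBOURS.get(intended, ""):
                return f"typo for '{target}'"
            return f"spelling hint: did you mean '{target}'?"
        # one inserted letter (e.g. 'makeing' -> 'making') is usually
        # a spelling error, not a slip of the finger
        if len(word) == len(target) + 1:
            for i in range(len(word)):
                if word[:i] + word[i + 1:] == target:
                    return f"spelling hint: did you mean '{target}'?"
    return "unknown word"
```

On this logic, ‘naking’ is flagged as a probable typo (n and m are neighbours) while ‘makeing’ earns a spelling hint, which is roughly the distinction the teacher makes at a glance.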

Rather more difficult is the error illustrated in my first screen shot. What’s the cause of this ‘error’? Teachers know immediately that this is probably a classic confusion of ‘do’ and ‘make’. They know that the French verb ‘faire’ can be translated into English as ‘make’ or ‘do’ (among other possibilities), and the error is a common language transfer problem. Software could do the same thing. It would need a large corpus (to establish that ‘make’ collocates with ‘a basket’ more often than ‘do’), a good bilingualised dictionary (plenty of these now exist), and a tagged database of learner English. Again, appropriate automated feedback could be provided in the form of some sort of indication that ‘faire’ is only sometimes translated as ‘make’.
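The corpus look-up described above might be sketched like this. The collocation counts are invented for illustration; a real system would query a large corpus, and would need the bilingualised dictionary and learner-English database to confirm the ‘faire’ transfer diagnosis.

```python
# A toy illustration of the collocation check: compare how often each
# candidate verb co-occurs with the noun in a corpus, and flag the rarer
# pairing as a likely transfer error. All counts are invented.

from collections import Counter

COLLOCATION_COUNTS = Counter({
    ("make", "basket"): 412,
    ("do", "basket"): 3,
    ("do", "homework"): 980,
    ("make", "homework"): 12,
})

def preferred_verb(noun, candidates=("make", "do")):
    """The candidate verb that collocates most often with the noun."""
    return max(candidates, key=lambda v: COLLOCATION_COUNTS[(v, noun)])

def collocation_feedback(verb, noun, l1_verb="faire"):
    """Return a hint if the learner's verb is not the usual collocate."""
    best = preferred_verb(noun)
    if verb == best:
        return None  # no error detected
    return (f"'{l1_verb}' is only sometimes translated as '{verb}': "
            f"with '{noun}', English prefers '{best}'.")
```

So ‘I am doing a basket’ triggers a hint pointing at ‘make’, while ‘I am doing my homework’ passes silently; the same mechanism generalises to any verb-noun pairing the corpus covers.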

These are both relatively simple examples, but it’s easy to think of others that are much more difficult to analyse automatically. Duolingo rejects ‘I am making one basket for my mother’: it’s not very plausible, but it’s not wrong. Teachers know why learners do this (again, it’s probably a transfer problem) and know how to respond (perhaps by saying something like ‘Only one?’). Duolingo also rejects ‘I making a basket for my mother’ (a common enough error), but is unable to provide any help beyond the correct answer. Automated CF could, however, be provided in both cases if more tools are brought into play. Multiple parsing machines (one is rarely accurate enough on its own) and semantic analysis will be needed. Both the range and the complexity of the available tools are increasing so rapidly (see here for the sort of research that Google is doing and here for an insight into current applications of this research in language learning) that Duolingo-style right / wrong feedback will very soon seem positively antediluvian.

One further development is worth mentioning here, and it concerns feedback and gamification. Teachers know from the way that most learners respond to written CF that they are usually much more interested in knowing what they got right or wrong, rather than the reasons for this. Most students are more likely to spend more time looking at the score at the bottom of a corrected piece of written work than at the laborious annotations of the teacher throughout the text. Getting students to pay close attention to the feedback we provide is not easy. Online language learning systems with gamification elements, like Duolingo, typically reward learners for getting things right, and getting things right in the fewest attempts possible. They encourage learners to look for the shortest or cheapest route to finding the correct answers: learning becomes a sexed-up form of test. If, however, the automated feedback is good, this sort of gamification encourages the wrong sort of learning behaviour. Gamification designers will need to shift their attention away from the current concern with right / wrong, and towards ways of motivating learners to look at and respond to feedback. It’s tricky, because you want to encourage learners to take more risks (and reward them for doing so), but it makes no sense to penalise them for getting things right. The probable solution is to have a dual points system: one set of points for getting things right, another for employing positive learning strategies.
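In outline, the dual points system suggested above might look something like this. The point values and the particular behaviours rewarded are arbitrary illustrations, not a tested design.

```python
# A minimal sketch of a dual points system: accuracy points for correct
# answers, and separate strategy points for positive learning behaviours
# such as taking risks and actually engaging with the feedback.

class DualScore:
    def __init__(self):
        self.accuracy = 0
        self.strategy = 0

    def record_answer(self, correct, risky=False):
        if correct:
            self.accuracy += 10
        if risky:  # attempting a harder or more open-ended task
            self.strategy += 5  # reward the risk, even when the answer is wrong

    def record_feedback_viewed(self, seconds):
        # reward reading the feedback, not just glancing at the score
        if seconds >= 5:
            self.strategy += 3

score = DualScore()
score.record_answer(correct=False, risky=True)   # wrong, but a brave try
score.record_feedback_viewed(seconds=12)         # read the explanation
score.record_answer(correct=True)
```

The point is that a wrong but risky answer, followed by time spent on the feedback, still moves the strategy score forward, so the shortest-route-to-the-right-answer strategy stops being the only rational way to play.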

The provision of automated ‘optimal feedback at the point of need’ may not be quite there yet, but it seems we’re on the way for some tasks in discrete-item learning. There will probably always be some teachers who can outperform computers in providing appropriate feedback, in the same way that a few top chess players can beat ‘Deep Blue’ and its scions. But the rest of us had better watch our backs: in the provision of some kinds of feedback, computers are catching up with us fast.

[1] Ellis, R. & N. Shintani (2014) Exploring Language Pedagogy through Second Language Acquisition Research. Abingdon: Routledge p. 249

[2] Hattie, J. (2009) Visible Learning. Abingdon: Routledge p. 12

[3] Li, S. (2010) ‘The effectiveness of corrective feedback in SLA: a meta-analysis’ Language Learning 60/2: 309-365

[4] Brown, P.C., Roediger, H.L. & McDaniel, M.A. (2014) Make It Stick. Cambridge, MA: Belknap Press

busuu is an online language learning service. I did not refer to it in the ‘guide’ because it does not seem to use any adaptive learning software yet, but this is set to change. According to founder Bernhard Niesner, the company is already working on incorporating adaptive software.

A few statistics will show the significance of busuu. The site currently has over 40 million users (El Pais, 8 February 2014) and is growing by 40,000 a day. The basic service is free, but the premium service costs Euro 69.99 a year. The company will not give detailed user statistics, but say that ‘hundreds of thousands’ are paying for the premium service, that turnover was a 7-figure number last year and will rise to 8 figures this year.

It is easy to understand why traditional publishers might be worried about competition like busuu and why they are turning away from print-based courses.

busuu offers 12 languages, but, as a translation-based service, any one of these languages can only be studied if you speak one of the other languages on offer. The levels of the different courses are tagged to the CEFR.


In some ways, busuu is not so different from competitors like Duolingo. Students are presented with bilingual vocabulary sets, accompanied by pictures, which are tested in a variety of ways. As with Duolingo, some of this is a little strange. For German at level A1, I did a vocabulary set on ‘pets’ which presented the German words for a ferret, a tortoise and a guinea-pig, among others. There are dialogues, which are both written and recorded, that are sometimes surreal.

Child: Mum, look over there, there’s a dog without a collar, can we take it?

Mother: No, darling, our house is too small to have a dog.

Child: Mum your bedroom is very big, it can sleep with dad and you.

Mother: Come on, I’ll buy you a toy dog.

The dialogues are followed up by multiple choice questions which test your memory of the dialogue. There are also writing exercises where you are given a picture from National Geographic and asked to write about it. It’s not always clear what one is supposed to write. What would you say about a photo that showed a large number of parachutes in the sky, beyond ‘I can see a lot of parachutes’?

There are also many gamification elements. There is a learning carrot where you can set your own learning targets and users can earn ‘busuuberries’ which can then be traded in for animations in a ‘language garden’.


But in one significant respect, busuu differs from its competitors. It combines the usual vocabulary, grammar and dialogue work with social networking. Users can interact with text or video, and feedback on written work comes from other users. My own experience with this was mixed, but the potential is clear. Feedback on other learners’ work is encouraged by the awarding of ‘busuuberries’.

We will have to wait and see what busuu does with adaptive software and what it will do with the big data it is generating. For the moment, its interest lies in illustrating what could be done with a learning platform and adaptive software. The big ELT publishers know they have a new kind of competition and, with a lot more money to invest than busuu, we have to assume that what they launch a few years from now will do everything that busuu does, and more. Meanwhile, busuu are working on site redesign and adaptivity. They would do well, too, to sort out their syllabus!