One could be forgiven for thinking that there are no problems associated with adaptive learning in ELT. Type the term into a search engine and you’ll mostly turn up enthusiasm or sales talk. There are, however, a number of reasons to be deeply skeptical about the whole business. In the post after this, I will be considering the political background.
1. Learning theory
Jose Ferreira, the CEO of Knewton, spoke, in an interview with Digital Journal[1] in October 2009, about getting down to the ‘granular level’ of learning. He was referencing, in an original turn of phrase, the commonly held belief that learning is centrally concerned with ‘gaining knowledge’, knowledge that can be broken down into very small parts that can be put together again. In this sense, the adaptive learning machine is very similar to the ‘teaching machine’ of B.F. Skinner, the psychologist who believed that learning was a process of stimulus, response and reinforcement. But how many applied linguists would agree, firstly, that language can be broken down into atomised parts (rather than viewed as a complex, dynamic system), and, secondly, that these atomised parts can be synthesized in a learning program to reform a complex whole? Human cognitive and linguistic development simply does not work that way, despite the strongly held contrary views of ‘folk’ theories of learning (Selwyn, Education and Technology 2011, p.3).
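To see what this ‘granular’ premise amounts to in practice, here is a deliberately minimal sketch. To be clear, this is not Knewton’s actual algorithm: the item names, the update rule and the selection rule are all invented for illustration only. The point is simply that a ‘granular’ engine of this general kind keeps an independent mastery estimate for each atomised item and serves up whichever item currently looks weakest:

```python
# A crude caricature of the 'granular' premise: one mastery estimate per
# atomised item, nudged up or down after each answer. Item names and the
# update rule are invented for illustration; this is nobody's real product.

items = {"present_perfect": 0.5, "some_vs_any": 0.5, "third_conditional": 0.5}

def update(item, correct, rate=0.2):
    """Move a single item's mastery estimate towards 1 (right) or 0 (wrong)."""
    target = 1.0 if correct else 0.0
    items[item] += rate * (target - items[item])

def next_item():
    """Serve whichever atomised item currently looks weakest."""
    return min(items, key=items.get)

update("present_perfect", correct=True)
update("some_vs_any", correct=False)
print(next_item())  # -> 'some_vs_any'
```

Notice that the items in such a model are, by construction, independent of one another, which is precisely the assumption that applied linguists would dispute.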
Furthermore, even if an adaptive system delivers language content in personalized and interesting ways, it is still premised on a view of learning where content is delivered and learners receive it. The actual learning program is not personalized in any meaningful way: it is only the way that it is delivered that responds to the algorithms. This is, again, a view of learning which few educationalists (as opposed to educational leaders) would share. Is language learning ‘simply a technical business of well managed information processing’ or is it ‘a continuing process of “participation”’ (Selwyn, Education and Technology 2011, p.4)?
Finally, adaptive learning is also premised on the idea that learners have particular learning styles, that these can be identified by the analytics (even if they are not given labels), and that actionable insights can be gained from this analysis (i.e. the software can decide on the most appropriate style of content delivery for an individual learner). Although the idea that teaching programs can be modified to cater to individual learning styles continues to have some currency among language teachers (e.g. those who espouse Neuro-Linguistic Programming or Multiple Intelligences Theory), it is not an idea that has much currency in the research community.
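A caricature may help to show how little is actually involved in this premise. In the sketch below, everything (the log fields, the threshold, the style labels, the function names) is invented for illustration and drawn from no real product: the ‘identification’ of a learning style reduces to a crude classification step, and the ‘actionable insight’ to a routing step.

```python
# A toy illustration of the learning-styles premise criticised above: infer
# a 'style' label from interaction logs, then route content accordingly.
# All names and thresholds here are invented; no real system is described.

def infer_style(log):
    """Guess a delivery 'style' from crude usage counts."""
    if log["video_seconds"] > log["text_seconds"]:
        return "visual"
    return "verbal"

def select_delivery(style, item):
    """Route the same atomised item to a style-specific format."""
    formats = {"visual": "animated_explainer", "verbal": "written_explanation"}
    return (item, formats[style])

log = {"video_seconds": 480, "text_seconds": 120}
print(select_delivery(infer_style(log), "present_perfect"))
# -> ('present_perfect', 'animated_explainer')
```

The objection from the research community is not that such code cannot be written (obviously it can), but that the ‘style’ label it produces has no demonstrated predictive validity for learning outcomes.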
It might be the case that adaptive learning programs will work with some, or even many, learners, but it would be wise to carry out more research (see the section on Research below) before making grand claims about their efficacy. If adaptive learning can be shown to be more effective than other forms of language learning, it will be either because our current theories of language learning are all wrong, or because the learning takes place despite the theory (and not because of it).
2. Practical problems
However good technological innovations may sound, they can only be as good, in practice, as the way they are implemented. Language laboratories and interactive whiteboards both sounded like very good ideas at the time, but they both fell out of favour long before they were technologically superseded. The reasons are many, but one of the most important is that classroom teachers did not sufficiently understand the potential of these technologies or, more basically, how to use them. Given the much more radical changes that seem to be implied by the adoption of adaptive learning, we would be wise to be cautious. The following is a short, selected list of questions that have not yet been answered.
- Language teachers often struggle with mixed-ability classes. If adaptive programs (as part of a blended program) allow students to progress at their own speed, the range of abilities in face-to-face lessons may be even more marked. How will teachers cope with this? Teacher-student ratios are unlikely to improve!
- Who will pay for the training that teachers will need to implement effective blended learning and when will this take place?
- How will teachers respond to a technology that will be perceived by some as a threat to their jobs and their professionalism and as part of a growing trend towards the accommodation of commercial interests (see the next post)?
- How will students respond to online (adaptive) learning when it becomes the norm, rather than something ‘different’?
3. Research
Technological innovations in education are rarely, if ever, driven by solidly grounded research, but they are invariably accompanied by grand claims about their potential. Motion pictures, radio, television and early computers were all seen, in their time, as wonder technologies that would revolutionize education (Cuban, Teachers and Machines: The Classroom Use of Technology since 1920, 1986). Early research seemed to support the claims, but the passage of time has demonstrated all too clearly the precise opposite. The arrival on the scene of e-learning in general, and adaptive learning in particular, has also been accompanied by much cheer-leading and claims of research support.
Examples of such claims of research support for adaptive learning in higher education in the US and Australia include an increase in pass rates of between 7 and 18%, a decrease of between 14 and 47% in student drop-outs, and an acceleration of 25% in the time needed to complete courses[2]. However, research of this kind needs to be taken with a liberal pinch of salt. First of all, the research has usually been commissioned, and sometimes carried out, by those with vested commercial interests in positive results. Secondly, the design of the research study usually guarantees positive results. Finally, the results cannot be interpreted to have any significance beyond their immediate local context. There is no reason to expect that what happened in a particular study into adaptive learning at, say, the University of Arizona would be replicated at, say, universities in Amman, Astana or anywhere else. Very often, when this research is reported, the subject of the students’ study is not even mentioned, as if this were of no significance.
The lack of serious research into the effectiveness of adaptive learning does not lead us to the conclusion that it is ineffective. It is simply too soon to say, and if the examples of motion pictures, radio and television are any guide, it will be a long time before we have any good evidence. By that time, it is reasonable to assume, adaptive learning will be a very different beast from what it is today. Given the recency of this kind of learning, the lack of research is not surprising. For online learning in general, a meta-analysis commissioned by the US Department of Education (Means et al., Evaluation of Evidence-Based Practice in Online Learning, 2009, p.9) found that there were only a small number of rigorous published studies, and that it was not possible to attribute any gains in learning outcomes to online or blended learning modes. As the authors of this report were aware, there are too many variables (social, cultural and economic) to compare in any direct way the efficacy of one kind of learning with another. This is as true of attempts to compare adaptive online learning with face-to-face instruction as it is of comparisons of different methodological approaches in purely face-to-face teaching. There is, however, an irony in the fact that advocates of adaptive learning (whose interest in analytics leads them to prioritise correlational relationships over causal ones) should choose to make claims about the causal relationship between learning outcomes and adaptive learning.
Perhaps, as Selwyn (Education and Technology 2011, p.87) suggests, attempts to discover the relative learning advantages of adaptive learning are simply asking the wrong question, not least as there cannot be a single straightforward answer. Perhaps a more useful critique would be to look at the contexts in which the claims for adaptive learning are made, and by whom. Selwyn also suggests that useful insights may be gained from taking a historical perspective. It is worth noting that the technicist claims for adaptive learning (that ‘it works’ or that it is ‘effective’) are essentially the same as those that have been made for other education technologies. They take a universalising position and ignore local contexts, forgetting that ‘pedagogical approach is bound up with a web of cultural assumption’ (Wiske, ‘A new culture of teaching for the 21st century’ in Gordon, D.T. (ed.) The Digital Classroom: How Technology is Changing the Way We Teach and Learn 2000, p.72). Adaptive learning might just possibly be different from other technologies, but history advises us to be cautious.
[1] http://www.digitaljournal.com/article/279940?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+frc+(First+Round+Capital+News)
[2] These figures are quoted in Learning to Adapt: A Case for Accelerating Adaptive Learning in Higher Education, a booklet produced in March 2013 by Education Growth Advisors, an education consultancy firm. Their research is available at http://edgrowthadvisors.com/research/
You are absolutely right to draw a parallel between research into educational technologies and comparative methods research (‘there are too many variables’). In the face of the (disappointing) fact that there is ‘no best method’, Prabhu (1990) advised practitioners to evaluate methodological claims in terms of their ‘sense of plausibility’. Applying this ‘test’ to ed tech might suggest that there is no best technology, either. Moreover, is it really plausible that the co-adaptive complexity of language emergence can be nurtured by algorithms? Well, only if you believe it is neither co-adaptive nor complex.
I have finally got round to reading parts 1-8, Philip. What a terrific piece of work this is, and really important for everyone to read. I had a hard time with a publisher’s marketing person last weekend who was starry-eyed about adaptive learning (and of course, what big sales it would generate).
If I had to express my worries about adaptive learning and the analytics it is premised on, I would say that:
1 You can only measure what you can measure – and that, as you say, is discrete points, ticked boxes, things you can COUNT. Hence, for example, the Pearson Global Scale of English, which is/will be wired into everything they do.
2 Data analytics may help Amazon to mail me and keep telling me what I want to read/buy. But they are often wrong and, worse, they really piss me off, which is totally counter-productive.
3 Adaptive learning relies on the learner being happy to be tied into all this stuff. Teachers have traditionally seen their role (rightly, I think) as motivators, focusers etc.
4 Speaking personally, I like being an individual (having things tailored to me me me), but I’m crazy about the groups I live in too. I believe many other students are like that, and that whether you want to use generalised coursebooks, or go all naked and unplugged, the group experience is part of the magic.
Anyway, I’m really looking forward to 9-13. Thank you very much.
Jeremy
Actually, there will only be 2 more posts. I decided to condense some sections. Thanks for your comments, Jeremy.
A newsletter from one of my publishers has just arrived in which it trumpets a partnership with a ‘much-talked about start-up offering adaptive technology that’s set to light the world of education on fire’.
Unfortunate choice of metaphor, I’d venture.
Hi Philip – I’m suddenly and distressingly afraid that I subscribe to “folk” theories
I don’t really follow what you mean by:
“But how many applied linguists would agree, firstly, that language can be broken down into atomised parts (rather than viewed as a complex, dynamic system), and, secondly, that these atomised parts can be synthesized in a learning program to reform a complex whole? Human cognitive and linguistic development simply does not work that way, despite the strongly-held contrary views of ‘folk’ theories of learning.”
Maybe you are using a very narrow definition of “atomised”? What exactly do you mean?
And if we follow the usual definition of “atomize” as “to treat as made up of many discrete units” (i.e. a range of granularity levels), would a possible reformulation be:
“Language is a complex and dynamic system, which can be broken down into atomised parts for descriptive, analytic or pedagogical purposes” ?
Or maybe I’m missing something fundamental?
Hi Cleve
I’m sorry if I haven’t been clear enough. Language is, of course, regularly broken down into atomised parts for all sorts of purposes. Dictionaries are a clear example. But when we break down the lexical system into single-word lexical items, we run into a problem. Words do not operate independently of other words. They do not have essential, Platonic meanings. Both their meanings and the way they are used can only be understood when we look at the way they operate in relation to other words. We have known, at least since Pawley & Syder’s 1983 article “Two puzzles for linguistic theory: Nativelike selection and nativelike fluency”, that lexis is organised into and processed in chunks. Small, pocket dictionaries which only list single-word items do not, therefore, reflect how language works terribly well. And they do not help users put these atomised items together to form the chunks they will need to use a language fluently. This may not be an issue at the very lowest levels of language learning, but as soon as a learner is at B1, an ‘atomised’ vocabulary syllabus of single items is not enough.
Grammar is the same. In commercial ELT materials, grammar is almost always treated as a collection of what Scott Thornbury has called ‘grammar McNuggets’ – bite-sized pieces of grammar, at or below the sentence level, which are organised and presented in self-contained categories, such as ‘present perfect continuous’. But, again, neither language nor language learning is quite as simple as that. A learner cannot effectively and accurately use this verb form without, simultaneously, having a grasp of the way that aspect works in English, of the way that associated adverbs operate, of the distinction between stative and dynamic verbs, etc, etc. Dave Willis summarised the consensus view of applied linguists when he wrote ‘it is actually impossible to separate one [bit] and say, ‘This is an item’. You may do it for the purposes of syllabus specification, but it is a very artificial exercise, because [language] only has meaning when in relation with other ‘items’.’
Breaking down a language into atomized parts assumes that they can be learnt in a linear fashion (and this is very much the assumption of adaptive learning). Dave Willis, again, ‘All that we know about the way people learn languages may not be a great deal, but we know how people don’t learn languages, and they don’t learn them like that . . . they don’t learn them by adding on one little bit at a time.’
When it comes to grammar, this may, again, be not such an issue at the very lowest levels. The presentation of simple ‘rules’ (which are not really rules) about ‘discrete’ items of language (which are not really discrete) might be justified in a classroom context because of the overriding need to build a learner’s confidence. But language is not made up of discrete items, and I am unaware of a single researcher who would claim that it is.
The question of how we relate our descriptions or models of language to conclusions about teaching / classroom practice is, however, very complicated. Quite an old article (but still a useful one, in my opinion) which looks at this issue can be found at http://www.nuis.ac.jp/~hadley/publication/relc/relc.htm
Best wishes
Philip
Thanks Philip for your very helpful reply, and the article link. I very much appreciate those who reply seriously and in depth.
I’m afraid I’m still in the “folk theories of learning” camp, however. Let me try to explain where I feel uncomfortable, but first some clarification of terms.
By “atomized” I’m not referring to single words or phonemes but rather a range of smaller or larger units that have been broken out of natural language, and to a variable extent have been decontextualized. This would include lexical chunks and McNuggets and even larger units, and both discrete items and integrative items (following the testing definitions), along with the various drills and other similar activities. The reason I was thinking of “atomized” this way is that these are the types of items that adaptive learning uses, since they can be objectively scored. This is what Jose Ferreira was referring to with the term “granular”. Knewton isn’t suggesting that their customers (the publishers) will hook Knewton’s engine up to word lists, but rather that they will use Knewton for activities or exercises that can be objectively scored.
You write: “Breaking down a language into atomized parts assumes that they can be learnt in a linear fashion (and this is very much the assumption of adaptive learning).”
Not sure I’m convinced of this. Let’s take a typical dogme class as the antithesis of the linear McNugget-laden coursebook. Students drive the topics and the teacher/facilitator guides, clarifies, and points out/teaches language that emerges from the student-led discussion. It’s not at all linear and there are no materials. But at the end of the class the whiteboard is absolutely filled up with atomized parts (relevant forms, or vocab, or chunks, etc.) that the teacher has dropped into the discussion, and often expanded on, or even semi-drilled briefly.
Also, I’m not quite sure that “linear” is necessarily “the assumption of adaptive learning”. All Knewton does is hook their infrastructure up to what the publishers put out. Let’s say a publisher were to go topic-based, or task-based, or a teacher creates a blended component to a dogme course, where the language that emerged from the students in class is later documented, reviewed, and practiced online. As long as some of the activities were objectively scored, then adaptive learning can be gainfully employed. Now, I agree that the publishers will put out the same linear McNugget titles, but that is the publishers’ responsibility, not that of adaptive learning per se. In fact, I bet you could find value for adaptive learning in almost any course or method, except perhaps something like a full-on Krashen i+1 approach.
If we go back to the original sentence that stood out for me…
“But how many applied linguists would agree, firstly, that language can be broken down into atomised parts (rather than viewed as a complex, dynamic system), and, secondly, that these atomised parts can be synthesized in a learning program to reform a complex whole? Human cognitive and linguistic development simply does not work that way, despite the strongly-held contrary views of ‘folk’ theories of learning.”
…then I think there are only three possible options here:
1) if the definition of “atomized parts” is restricted to words and phonemes, then this is a straw man argument, because today no one holds the view being argued against,
2) or, the definition of “atomized parts” follows the more extended version of the definition, as a range of smaller or larger units such as chunks or McNuggets or discrete or integrative items and activities. If this is the definition, and if “synthesized into a learning program” means that *only* the atomized parts can be used for the program, and nothing else, no readings, L/C, discussion activities, etc, then again it’s a straw man argument. No one is suggesting this – certainly not the publishers who are creating the material,
3) or, it could be that the definition of “atomized parts” follows the more extended version of the definition, and the definition of “synthesized in a learning program” allows the typical readings, L/C, discussion activities, etc. If this is the definition, then I’d question it strongly, and propose as a replacement, “Language is a complex and dynamic system, which can be broken down into atomised parts for descriptive, analytic or pedagogical purposes”
Hi Cleve
I don’t think we’re going to see eye to eye on this, but your comment deserves a response!
You refer to a Dogme class where, at the end of the lesson, the board is covered with atomized parts. I’m no Dogmetist, so I may be wrong, but my understanding of this approach is that the language that emerges in the course of a lesson is not considered to be the learning outcome of that lesson. The point of a Dogme lesson is to create windows of opportunity for learning, but there is no assumption that learning of specific items will occur. In a similar vein, task-based approaches do not assume specific learning outcomes. The focus on the atomized bits of language is conceptualized as ‘consciousness-raising’, not ‘knowledge-acquiring’.
Dogme and TBL are process-oriented (as opposed to product-oriented) approaches because they recognise the difficulty of pre-selecting and ordering the language syllabus. They do not lend themselves to the measurement of learning outcomes because the classroom activities cannot, typically, be objectively scored. It is very hard, therefore, to imagine a marriage of adaptive learning and approaches such as these.
I think you are right when you say that it is not adaptive learning per se that will be responsible for the adaptive, linear McNugget courses that we will see. But technological choices inevitably influence syllabus design, and adaptive learning lends itself particularly well to an approach to language learning that takes an atomized syllabus of McNuggets and vocabulary as its starting point, and prioritizes this syllabus strand over skills development. Clearly defined learning outcomes and tasks that can be objectively scored will push course design in only one direction.
Skills work can and will obviously be part of the new courses being developed. The software could track all sorts of things while learners are involved in skills work, and, at least as far as reading and listening are concerned, could objectively score task performance. But, crucially, it cannot objectively score a learner’s reading or listening skill, which is very different from performance on a particular task or group of tasks. Comprehension tasks for reading and listening texts cannot be related to definable learning outcomes.
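The distinction can be made concrete with a deliberately trivial sketch. Everything in it, including the scoring function, is invented for illustration: what software can compute is a score on a task; the move from task scores to an underlying ‘reading skill’ is exactly the step that cannot be made objectively.

```python
# Invented for illustration: a comprehension task can be objectively scored.
def task_score(answers, key):
    """Proportion of comprehension items answered correctly: task performance."""
    return sum(a == k for a, k in zip(answers, key)) / len(key)

print(task_score(["b", "a", "c", "d"], ["b", "a", "c", "a"]))  # -> 0.75
# But 0.75 is a fact about this text, these items and this sitting.
# There is no objective reading_skill(learner) function; inferring one
# from a pile of task scores is the contested step.
```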
In your option 2, you say that the publishers are not envisaging a separation of the syllabus into two strands: one (grammar and vocabulary) that lends itself to the adaptive software, and another (skills, especially speaking and writing, and especially at higher levels) that does not lend itself to adaptivity. But I think you’ll find that some publishers are envisaging precisely this, at least for the time being.
We do not know what correlations the adaptive software may find between particular learner behaviours (when involved in skills work) and their objectively scored language knowledge. It may be that interesting insights emerge. In the meantime, however, we cannot expect anything other than a primary focus on McNuggets.
But to return to the main bone of contention … If, as you seem to agree, language is a complex and dynamic system, it does not, by definition, have atomized parts! The idea that language is made up of items (words, chunks, structures) which are discrete (in the sense that their meaning and use can be described and learnt independently of other items) has no place in linguistic theory. For pedagogical convenience, we might pretend otherwise, but that is another matter!
Best wishes
Philip
There is a fascinating debate about learning styles going on at the Knewton blog, following a post by CEO Jose Ferreira, in which he insists that ‘it’s pretty obvious that different learning styles exist’, and conflates learning styles with learning strategies. Poor old Jose is way, way out of his depth, but all credit to Knewton for attempting to enter the debate. Check out, in particular, the response by Johan van Eeden. http://www.knewton.com/blog/ceo-jose-ferreira/rebooting-learning-styles/