Posts Tagged ‘learning theory’

All aboard …

The point of adaptive learning is that it can personalize learning. When we talk about personalization, mention of learning styles is rarely far away. Jose Ferreira of Knewton (now the company's ex-CEO) made his case for learning styles in a blog post that generated a superb and, for Ferreira, embarrassing discussion in the comments, which were subsequently deleted by Knewton. FluentU (which I reviewed here) clearly approves of learning styles, or at least sees them as a useful way to market their product, even though it is unclear how their product caters to different styles. Busuu claims to be ‘personalised to fit your style of learning’. Voxy, Inc. (according to their company overview) ‘operates a language learning platform that creates custom curricula for English language learners based on their interests, routines, goals, and learning styles’. Bliu Bliu (which I reviewed here) recommended, in a recent blog post, that learners should ‘find out their language learner type and use it to their advantage’, suggesting, as a starter, trying out ‘Bliu Bliu, where pretty much any learner can find what suits them best’. Memrise ‘uses clever science to adapt to your personal learning style’. Duolingo’s learning tree ‘effectively rearranges itself to suit individual learning styles’, according to founder Luis von Ahn. This list could go on and on.

Learning styles are thriving in ELT coursebooks, too. Here are just three recent examples for learners of various ages. Today! by Todd, D. & Thompson, T. (Pearson, 2014) ‘shapes learning around individual students with graded difficulty practice for mixed-ability classes’ and ‘makes testing mixed-ability classes easier with tests that you can personalise to students’ abilities’.

Move it! by Barraclough, C., Beddall, F., Stannett, K. & Wildman, J. (Pearson, 2015) offers ‘personalized pathways [which] allow students to optimize their learning outcomes’ and a ‘complete assessment package to monitor students’ learning process’.

Open Mind Elementary (A2) 2nd edition by Rogers, M., Taylore-Knowles, J. & Taylore-Knowles, S. (Macmillan, 2014) has a whole page devoted to learning styles in the ‘Life Skills’ strand of the course. The scope and sequence describes it in the following terms: ‘Thinking about what you like to do to find your learning style and improve how you learn English’.

Methodology books offer more tips for ways that teachers can cater to different learning styles. Recent examples include Patrycja Kamińska’s Learning Styles and Second Language Education (Cambridge Scholars, 2014), Tammy Gregersen & Peter D. MacIntyre’s Capitalizing on Language Learners’ Individuality (Multilingual Matters, 2014) and Marjorie Rosenberg’s Spotlight on Learning Styles (Delta Publishing, 2013). Teacher magazines show a continuing interest in the topic. Humanising Language Teaching and English Teaching Professional are particularly keen. The British Council offers courses about learning styles, and its Teaching English website has many articles and lesson plans on the subject (my favourite explains that your students will be more successful if you match your teaching style to their learning styles), as do the websites of all the major publishers. Most ELT conferences will also offer something on the topic.

How about language teaching qualifications and frameworks? The Cambridge English Teaching Framework contains a component entitled ‘Understanding learners’, and the first part of this component specifies knowledge of concepts such as ‘learning styles (e.g., visual, auditory, kinaesthetic), multiple intelligences, learning strategies, special needs, affect’. Unsurprisingly, the Cambridge CELTA qualification requires successful candidates to demonstrate an awareness of ‘the different learning styles and preferences that adults bring to learning English’. The Cambridge DELTA requires successful candidates to accommodate learners according to their ‘different abilities, motivations, and learning styles’. The Eaquals Framework for Language Teacher Training and Development requires teachers at Development Phase 2 to have the skill of ‘determining and anticipating learners’ language learning needs and learning styles at a range of levels, selecting appropriate ways of finding out about these’.

Outside of ELT, learning styles also continue to thrive. Phil Newton (2015. ‘The learning styles myth is thriving in higher education’ Frontiers in Psychology 6: 1908) carried out a survey of educational publications (higher education) between 2013 and 2016, and found that an overwhelming majority (89%) implicitly or directly endorse the use of learning styles. He also cites research showing that 93% of UK schoolteachers believe that ‘individuals learn better when they receive information in their preferred Learning Style’, with similar figures in other countries, and that 72% of Higher Education institutions in the US teach ‘learning style theory’ as part of faculty development for online teachers. Advocates of learning styles in English language teaching are not alone.

But, unfortunately, …

In case you weren’t aware of it, there is a rather big problem with learning styles. There is a huge amount of research which suggests that learning styles (and, in particular, teaching attempts to cater to learning styles) need to be approached with extreme scepticism. Much of this research was published long before the blog posts, advertising copy, books and teaching frameworks listed above were written. What does this research have to tell us?

The first problem concerns learning styles taxonomies. There are three issues here: many people do not fit one particular style, the information used to assign people to styles is often inadequate, and there are so many different styles that it becomes cumbersome to link particular learners to particular styles (Kirschner, P. A. & van Merriënboer, J. J. G. 2013. ‘Do Learners Really Know Best? Urban Legends in Education’ Educational Psychologist, 48 / 3, 169-183). To summarise, given the lack of clarity as to which learning styles actually exist, it may be ‘neither viable nor justified’ for learning styles to form the basis of lesson planning (Hall, G. 2011. Exploring English Language Teaching. Abingdon, Oxon.: Routledge p.140). More detailed information about these issues can be found in the following sources:

Coffield, F., Moseley, D., Hall, E. & Ecclestone, K. 2004. Learning styles and pedagogy in post-16 learning: a systematic and critical review. London: Learning and Skills Research Centre

Dembo, M. H. & Howard, K. 2007. Advice about the use of learning styles: a major myth in education. Journal of College Reading & Learning 37 / 2: 101 – 109

Kirschner, P. A. 2017. Stop propagating the learning styles myth. Computers & Education 106: 166 – 171

Pashler, H., McDaniel, M., Rohrer, D. & Bjork, R. 2008. Learning styles: concepts and evidence. Psychological Science in the Public Interest 9 / 3: 105 – 119

Riener, C. & Willingham, D. 2010. The myth of learning styles. Change – The Magazine of Higher Learning

The second problem concerns what Pashler et al. refer to as the ‘meshing hypothesis’: the idea that instructional interventions can be effectively tailored to match particular learning styles. Pashler et al. concluded that the available taxonomies of student types do not offer any valid help in deciding what kind of instruction to offer each individual. Even in 2008, their finding was not new. Back in 1978, a review of 15 studies that looked at attempts to match learning styles to approaches to first language reading instruction concluded that modality preference ‘has not been found to interact significantly with the method of teaching’ (Tarver, S. & Dawson, M. M. 1978. Modality preference and the teaching of reading. Journal of Learning Disabilities 11: 17 – 29). The following year, two other researchers concluded that the assumption that one can improve instruction by matching materials to children’s modality strengths ‘appears to lack even minimal empirical support’ (Arter, J. A. & Jenkins, J. R. 1979. Differential diagnosis-prescriptive teaching: a critical appraisal. Review of Educational Research 49: 517 – 555). Fast forward 20 years to 1999, and Stahl (‘Different strokes for different folks?’ American Educator Fall 1999 pp. 1 – 5) was writing that ‘the reason researchers roll their eyes at learning styles is the utter failure to find that assessing children’s learning styles and matching to instructional methods has any effect on learning. The area with the most research has been the global and analytic styles […]. Over the past 30 years, the names of these styles have changed – from “visual” to “global” and from “auditory” to “analytic” – but the research results have not changed.’ For a recent evaluation of the practical applications of learning styles, have a look at Rogowsky, B. A., Calhoun, B. M. & Tallal, P. 2015. ‘Matching Learning Style to Instructional Method: Effects on Comprehension’ Journal of Educational Psychology 107 / 1: 64 – 78. Even David Kolb, the Big Daddy of learning styles, now concedes that there is no strong evidence that teachers should tailor their instruction to their students’ particular learning styles (reported in Glenn, D. 2009. ‘Matching teaching style to learning style may not help students’ The Chronicle of Higher Education). To summarise, the meshing hypothesis is entirely unsupported in the scientific literature. It is a myth (Howard-Jones, P. A. 2014. ‘Neuroscience and education: myths and messages’ Nature Reviews Neuroscience 15: 817 – 824).

This brings me back to the blog posts, advertising blurb, coursebooks, methodology books and so on that continue to tout learning styles. The writers of these texts typically do not acknowledge that there’s a problem of any kind. Are they unaware of the research, or are they aware of it but choosing not to acknowledge it? I suspect that the former is often the case with the app developers. But if the latter is the case, what might their reasons be? In the case of teacher training specifications, the reason is probably practical: changing a syllabus is an expensive and time-consuming operation. But in the case of some of the ELT writers, I suspect that they hang on in there because they so much want to believe.

As Newton (2015: 2) notes, ‘intuitively, there is much that is attractive about the concept of Learning Styles. People are obviously different and Learning Styles appear to offer educators a way to accommodate individual learner differences.’ Pashler et al. (2008: 107) add that ‘another related factor that may play a role in the popularity of the learning-styles approach has to do with responsibility. If a person or a person’s child is not succeeding or excelling in school, it may be more comfortable for the person to think that the educational system, not the person or the child himself or herself, is responsible. That is, rather than attribute one’s lack of success to any lack of ability or effort on one’s part, it may be more appealing to think that the fault lies with instruction being inadequately tailored to one’s learning style. In that respect, there may be linkages to the self-esteem movement that became so influential, internationally, starting in the 1970s.’ There is no reason to doubt that many of those who espouse learning styles have good intentions.

No one, I think, seriously doubts that learners might benefit from a wide variety of input styles and learning tasks. People are obviously different. MacIntyre et al. (MacIntyre, P. D., Gregersen, T. & Clément, R. 2016. ‘Individual Differences’ in Hall, G. (ed.) The Routledge Handbook of English Language Teaching. Abingdon, Oxon.: Routledge, pp. 310 – 323, p. 319) suggest that teachers might consider instructional methods that allow them to ‘capitalise on both variety and choice’ and also help learners find ways to do this for themselves inside and outside the classroom. Jill Hadfield (2006. ‘Teacher Education and Trainee Learning Style’ RELC Journal 37 / 3: 369 – 388) recommends that we design our learning tasks ‘across the range of learning styles so that our trainees can move across the spectrum, experiencing both the comfort of matching and the challenge produced by mismatching’. But this is not the same thing as claiming that identification of a particular learning style can lead to instructional decisions. The value of books like Rosenberg’s Spotlight on Learning Styles lies in their wide range of practical suggestions for varying teaching styles and tasks. They contain ideas of educational value: it is unfortunate that the theoretical background is so thin.

In ELT, things are, perhaps, beginning to change. Russ Mayne’s 2012 blog post ‘Learning styles: facts and fictions’ got a few heads nodding, and he followed it up two years later with a presentation at IATEFL looking at various aspects of ELT, including learning styles, which have little or no scientific credibility. Carol Lethaby and Patricia Harries gave a talk at IATEFL 2016, ‘Changing the way we approach learning styles in teacher education’, which was also much discussed and shared online. They also had an article in ELT Journal called ‘Learning styles and teacher training: are we perpetuating neuromyths?’ (2016. ELTJ 70 / 1: 16 – 27). Even Pearson, in a blog post of November 2016 (‘Mythbusters: A review of research on learning styles’), acknowledges that there is ‘a shocking lack of evidence to support the core learning styles claim that customizing instruction based on students’ preferred learning styles produces better learning than effective universal instruction’, concluding that ‘it is impossible to recommend learning styles as an effective strategy for improving learning outcomes’.

In December last year, I posted a wish list for vocabulary (flashcard) apps. At the time, I hadn’t read a couple of key research texts on the subject. It’s time for an update.

First off, there’s an article called ‘Intentional Vocabulary Learning Using Digital Flashcards’ by Hsiu-Ting Hung. It’s available online here. Given the lack of empirical research into the use of digital flashcards, it’s an important article and well worth a read. Its basic conclusion is that digital flashcards are more effective as a learning tool than printed word lists. No great surprises there, but of more interest, perhaps, are the recommendations that (1) ‘students should be educated about the effective use of flashcards (e.g. the amount and timing of practice), and this can be implemented through explicit strategy instruction in regular language courses or additional study skills workshops’ (Hung, 2015: 111), and (2) that digital flashcards can be usefully ‘repurposed for collaborative learning tasks’ (Hung, ibid.).

However, what really grabbed my attention was an article by Tatsuya Nakata. Nakata’s research is of particular interest to anyone interested in vocabulary learning, but especially so to those with an interest in digital possibilities. A number of his research articles can be freely accessed via his page at ResearchGate, but the one I am interested in is called ‘Computer-assisted second language vocabulary learning in a paired-associate paradigm: a critical investigation of flashcard software’. Don’t let the title put you off. It’s a review of a pile of web-based flashcard programs: since the article is already five years old, many of the programs have either changed or disappeared, but the critical approach he takes is more or less as valid now as it was then (whether we’re talking about web-based stuff or apps).

Nakata divides his evaluation criteria into two broad groups.

Flashcard creation and editing

(1) Flashcard creation: Can learners create their own flashcards?

(2) Multilingual support: Can the target words and their translations be created in any language?

(3) Multi-word units: Can flashcards be created for multi-word units as well as single words?

(4) Types of information: Can various kinds of information be added to flashcards besides the word meanings (e.g. parts of speech, contexts, or audios)?

(5) Support for data entry: Does the software support data entry by automatically supplying information about lexical items such as meaning, parts of speech, contexts, or frequency information from an internal database or external resources?

(6) Flashcard set: Does the software allow learners to create their own sets of flashcards?

Learning

(1) Presentation mode: Does the software have a presentation mode, where new items are introduced and learners familiarise themselves with them?

(2) Retrieval mode: Does the software have a retrieval mode, which asks learners to recall or choose the L2 word form or its meaning?

(3) Receptive recall: Does the software ask learners to produce the meanings of target words?

(4) Receptive recognition: Does the software ask learners to choose the meanings of target words?

(5) Productive recall: Does the software ask learners to produce the target word forms corresponding to the meanings provided?

(6) Productive recognition: Does the software ask learners to choose the target word forms corresponding to the meanings provided?

(7) Increasing retrieval effort: For a given item, does the software arrange exercises in the order of increasing difficulty?

(8) Generative use: Does the software encourage generative use of words, where learners encounter or use previously met words in novel contexts?

(9) Block size: Can the number of words studied in one learning session be controlled and altered?

(10) Adaptive sequencing: Does the software change the sequencing of items based on learners’ previous performance on individual items?

(11) Expanded rehearsal: Does the software help implement expanded rehearsal, where the intervals between study trials are gradually increased as learning proceeds? (Nakata, T. (2011): ‘Computer-assisted second language vocabulary learning in a paired-associate paradigm: a critical investigation of flashcard software’ Computer Assisted Language Learning, 24:1, 17-38)
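
Criteria 10 and 11 are the ones that flashcard software most often gets wrong. As a rough illustration of what expanded rehearsal involves, here is a minimal sketch in Python; the base interval and the multiplier are invented values for illustration, not figures from Nakata’s article:

```python
from datetime import datetime, timedelta

def next_review(successes: int, base_hours: float = 4.0, factor: float = 2.5) -> datetime:
    """Expanded rehearsal: each successful retrieval pushes the next
    review further into the future (4h, 10h, 25h, ...)."""
    return datetime.now() + timedelta(hours=base_hours * factor ** successes)

for n in range(4):
    print(n, next_review(n))
```

Real schedulers (Anki’s SM-2 variant, for example) also adapt the multiplier per item and shrink the interval again after a failed retrieval, which is where adaptive sequencing (criterion 10) comes in.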

It’s a rather different list from my own (there’s nothing I would disagree with here), because mine is more general and his is exclusively oriented towards learning principles. Nakata makes the point towards the end of the article that it would ‘be useful to investigate learners’ reactions to computer-based flashcards to examine whether they accept flashcard programs developed according to learning principles’ (p. 34). It’s far from clear, he points out, that conformity to learning principles is at the top of learners’ agendas. More than just users’ feelings about computer-based flashcards in general, a key concern will be the fact that there are ‘large individual differences in learners’ perceptions of [any flashcard] program’ (Nakata, T. 2008. ‘English vocabulary learning with word lists, word cards and computers: implications from cognitive psychology research for optimal spaced learning’ ReCALL 20 / 1, p. 18).

I was trying to make a similar point in another post about motivation and vocabulary apps. In the end, as with any language learning material, research-driven language learning principles can only take us so far. User experience is a far more difficult creature to pin down or to make generalisations about. A user’s reactions to graphics, gamification, uploading time and so on are so powerful and so subjective that learning principles will inevitably play second fiddle. That’s not to say, of course, that Nakata’s questions are not important: it’s merely to wonder whether the bigger question is truly answerable.

Nakata’s research identifies plenty of room for improvement in digital flashcards, and although the article is now quite old, not a lot has changed. Key areas to work on are (1) the provision of generative use of target words, (2) the need to increase retrieval effort, (3) the automatic provision of information about meaning, parts of speech, or contexts (in order to facilitate flashcard creation), and (4) the automatic generation of multiple-choice distractors.

In the conclusion of his study, he identifies one flashcard program which is better than all the others. Unsurprisingly, five years down the line, the software he identifies is no longer free, others have changed more rapidly in the intervening period, and who knows what will be out in front next week?

If you’re going to teach vocabulary, you need to organise it in some way. Almost invariably, this organisation is topical, with words grouped into what are called semantic sets. In coursebooks, the example below (from Rogers, M., Taylore-Knowles, J. & Taylore-Knowles, S. 2010. Open Mind Level 1. London: Macmillan, p. 68) is fairly typical.

Coursebooks are almost always organised in a topical way. The example above comes in a unit (of 10 pages) entitled ‘You have talent!’, which contains two main vocabulary sections. It’s unsurprising to find a section called ‘personality adjectives’ in such a unit. What’s more, such an approach lends itself to the requisite, but largely spurious, ‘can-do’ statement in the self-evaluation section: I can talk about people’s positive qualities. We must have clearly identifiable learning outcomes, after all.

There is, undeniably, a certain intuitive logic in this approach. An alternative might entail a radical overhaul of coursebook architecture – this might not be such a bad thing, but might not go down too well in the markets. How else, after all, could the vocabulary strand of the syllabus be organised?

Well, there are a number of ways in which a vocabulary syllabus could be organised. Including the standard approach described above, here are four possibilities:

1 semantic sets (e.g. bee, butterfly, fly, mosquito, etc.)

2 thematic sets (e.g. ‘pets’: cat, hate, flea, feed, scratch, etc.)

3 unrelated sets

4 sets determined by a group of words’ occurrence in a particular text

Before reading further, you might like to guess what research has to say about the relative effectiveness of these four approaches.

The answer depends, to some extent, on the level of the learner. For advanced learners, it appears to make little or no difference (Al-Jabri, 2005, cited by Ellis & Shintani, 2014: 106). But, for the vast majority of English language learners (i.e. those at or below B2 level), the research is clear: the most effective way of organising vocabulary items to be learnt is by grouping them into thematic sets (2) or by mixing words together in a semantically unrelated way (3) – not by teaching sets like ‘personality adjectives’. It is surprising how surprising this finding is to so many teachers and materials writers. It goes back at least to 1988 and West’s article on ‘Catenizing’ in ELTJ, which argued that semantic grouping made little sense from a psycho-linguistic perspective. Since then, a large amount of research has taken place. This is succinctly summarised by Paul Nation (2013: 128) in the following terms: ‘Avoid interference from related words. Words which are similar in form (Laufer, 1989) or meaning (Higa, 1963; Nation, 2000; Tinkham, 1993; Tinkham, 1997; Waring, 1997) are more difficult to learn together than they are to learn separately.’ For anyone who is interested, the most up-to-date review of this research that I can find is in chapter 11 of Barcroft (2015).
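
To make the difference concrete, here is a toy Python sketch of how an app might assemble sets of types 2 and 3 rather than a semantic set; the mini-lexicon and its theme tags are invented for illustration:

```python
import random

# Hypothetical mini-lexicon tagged by theme; a real app would draw on a
# frequency-ranked wordlist. Note that a 'pets' theme mixes word classes.
LEXICON = {
    "cat": "pets", "flea": "pets", "feed": "pets", "scratch": "pets",
    "bee": "insects", "butterfly": "insects", "mosquito": "insects",
    "receipt": "shopping", "luggage": "travel", "motorway": "travel",
}

def thematic_set(theme, size=4):
    """Option 2: words linked by a theme, not by semantic class."""
    return [w for w, t in LEXICON.items() if t == theme][:size]

def unrelated_set(size=4):
    """Option 3: a semantically mixed selection."""
    return random.sample(list(LEXICON), size)

print(thematic_set("pets"))
print(unrelated_set())
```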

The message is clear. So clear that you have to wonder why it is not getting through to materials designers. Perhaps coursebooks are different: they regularly eschew research findings for commercial reasons. But vocabulary apps? There is rarely, if ever, any pressure on the content-creation side of vocabulary apps (except those that are tied to coursebooks) to follow the popular misconceptions that characterise so many coursebooks. It wouldn’t be too hard to organise vocabulary into thematic sets (like, for example, the approach in the A2 level of Memrise German that I’m currently using). Is it simply because the developers of so many vocabulary apps just don’t know much about language learning?

References

Barcroft, J. 2015. Lexical Input Processing and Vocabulary Learning. Amsterdam: John Benjamins

Nation, I. S. P. 2013. Learning Vocabulary in Another Language 2nd edition. Cambridge: Cambridge University Press

Ellis, R. & Shintani, N. 2014. Exploring Language Pedagogy through Second Language Acquisition Research. Abingdon, Oxon: Routledge

West, M. 1988. ‘Catenizing’ English Language Teaching Journal 6: 147 – 151

Decent research into adaptive learning remains very thin on the ground. Disappointingly, the Journal of Learning Analytics has only managed one issue so far in 2015, compared to three in 2014. But I recently came across an article in Vol. 18 (pp. 111 – 125) of Informing Science: the International Journal of an Emerging Transdiscipline entitled ‘Informing and performing: A study comparing adaptive learning to traditional learning’ by Murray, M. C. & Pérez, J. of Kennesaw State University.

The article is worth reading, not least because of the authors’ digestible review of adaptive learning theory and their discussion of levels of adaptation, including a handy diagram which they have reproduced from a white paper by Tyton Partners, ‘Learning to Adapt: Understanding the Adaptive Learning Supplier Landscape’. Murray and Pérez make clear that adaptive learning theory is closely connected to the belief that learning is improved when instruction is personalized – adapted to individual learning styles – but their approach is surprisingly uncritical. They write, for example, that ‘the general acceptance of learning styles is evidenced in recommended teaching strategies in nearly every discipline’ and that ‘learning styles continue to inform the evolution of adaptive learning systems’, and quote from the much-quoted Pashler, H., McDaniel, M., Rohrer, D. & Bjork, R. (2008) ‘Learning styles: concepts and evidence’, Psychological Science in the Public Interest, 9, 105 – 119. But Pashler et al. concluded that the current evidence supporting the use of learning style-matched approaches is virtually non-existent (see here for a review of Pashler et al.). And, in the world of ELT, an article in the latest edition of ELTJ by Carol Lethaby and Patricia Harries disses learning styles and other neuromyths. Given the close connection between adaptive learning theory and learning styles, one might reasonably predict that a comparative study of adaptive learning and traditional learning would not come out with much evidence in support of the former.

Murray and Pérez set out, anyway, to explore the hypothesis that adapting instruction to an individual’s learning style results in better learning outcomes. Their study compared adaptive and traditional methods in a university-level digital literacy course. Their conclusion? ‘This study and a few others like it indicate that today’s adaptive learning systems have negligible impact on learning outcomes.’

I was, however, more interested in the comments which followed this general conclusion. They point out that learning outcomes are only one measure of quality. Others, such as student persistence and engagement, they claim, can be positively affected by the employment of adaptive systems. I am not convinced. I think it’s simply far too soon to be able to judge this, and we need to wait quite some time for novelty effects to wear off. Murray and Pérez provide two references in support of their claim. One is an article by Josh Jarrett, ‘Bigfoot, Goldilocks, and Moonshots: A Report from the Frontiers of Personalized Learning’, in Educause. Jarrett is Deputy Director for Postsecondary Success at the Bill & Melinda Gates Foundation, and Educause is significantly funded by the Gates Foundation. Not, therefore, an entirely unbiased and trustworthy source. The other is a journalistic piece in Forbes. It’s by Tim Zimmer, entitled ‘Rethinking higher ed: A case for adaptive learning’, and it reads like an advert. Zimmer is a ‘CCAP contributor’. CCAP is the Center for College Affordability and Productivity, a libertarian, conservative foundation with a strong privatization agenda. Not, therefore, a particularly reliable source, either.

Despite their own findings, Murray and Pérez follow up their claim about student persistence and engagement with an argument for adaptive learning that they describe as ‘more compelling still’. This, they say, is ‘the intuitively appealing case for adaptive learning systems as engines with which institutions can increase access and reduce costs’. Ah, now we’re getting to the point!

Adaptive learning providers make much of their ability to provide learners with personalised feedback and to provide teachers with dashboard feedback on the performance of both individuals and groups. All well and good, but my interest here is in the automated feedback that software could provide on very specific learning tasks. Scott Thornbury, in a recent talk, ‘Ed Tech: The Mouse that Roared?’, listed six ‘problems’ of language acquisition that educational technology for language learning needs to address. One of these he framed as follows: ‘The feedback problem, i.e. how does the learner get optimal feedback at the point of need?’, and suggested that technological applications ‘have some way to go.’ He was referring, not to the kind of feedback that dashboards can provide, but to the kind of feedback that characterises a good language teacher: corrective feedback (CF) – the way that teachers respond to learner utterances (typically those containing errors, but not necessarily restricted to these) in what Ellis and Shintani call ‘form-focused episodes’[1]. These responses may include a direct indication that there is an error, a reformulation, a request for repetition, a request for clarification, an echo with questioning intonation, etc. Basically, they are correction techniques.

These days, there isn’t really any debate about the value of CF. There is a clear research consensus that it can aid language acquisition. Discussing learning in more general terms, Hattie[2] claims that ‘the most powerful single influence enhancing achievement is feedback’. The debate now centres around the kind of feedback, and when it is given. Interestingly, evidence[3] has been found that CF is more effective in the learning of discrete items (e.g. some grammatical structures) than in communicative activities. Since it is precisely this kind of approach to language learning that we are more likely to find in adaptive learning programs, it is worth exploring further.

What do we know about CF in the learning of discrete items? First of all, it works better when it is explicit than when it is implicit (Li, 2010), although this needs to be nuanced. In immediate post-tests, explicit CF is better than implicit variations. But over a longer period of time, implicit CF provides better results. Secondly, formative feedback (as opposed to right / wrong testing-style feedback) strengthens retention of the learning items: this typically involves the learner repairing their error, rather than simply noticing that an error has been made. This is part of what cognitive scientists[4] sometimes describe as the ‘generation effect’. Whilst learners may benefit from formative feedback without repairing their errors, Ellis and Shintani (2014: 273) argue that the repair may result in ‘deeper processing’ and, therefore, assist learning. Thirdly, there is evidence that some delay in receiving feedback aids subsequent recall, especially over the longer term. Ellis and Shintani (2014: 276) suggest that immediate CF may ‘benefit the development of learners’ procedural knowledge’, while delayed CF is ‘perhaps more likely to foster metalinguistic understanding’. You can read a useful summary of a meta-analysis of feedback effects in online learning here, or you can buy the whole article here.

I have yet to see an online language learning program which can do CF well, but I think it’s a matter of time before things improve significantly. First of all, at the moment, feedback is usually immediate, or almost immediate. This is unlikely to change, for a number of reasons – foremost among them being the pride that ed tech takes in providing immediate feedback, and the fact that online learning is increasingly being conceptualised and consumed in bite-sized chunks, something you do on your phone between doing other things. What will change in better programs, however, is that feedback will become more formative. As things stand, tasks are usually of a very closed variety, with drag-and-drop being one of the most popular. Only one answer is possible and feedback is usually of the right / wrong-and-here’s-the-correct-answer kind. But tasks of this kind are limited in their value, and, at some point, tasks are needed where more than one answer is possible.

Here’s an example of a translation task from Duolingo, where a simple sentence could be translated into English in quite a large number of ways.

Decontextualised as it is, the sentence could be translated in the way that I have done it, although it’s unlikely. The feedback, however, is of relatively little help to the learner, who would benefit from guidance of some sort. The simple reason that Duolingo doesn’t offer useful feedback is that the programme is static. It has been programmed to accept certain answers (e.g. in this case both the present simple and the present continuous are acceptable), but everything else will be rejected. Why? Because it would take too long and cost too much to anticipate and enter in all the possible answers. Why doesn’t it offer formative feedback? Because in order to do so, it would need to identify the kind of error that has been made. If we can identify the kind of error, we can make a reasonable guess about the cause of the error, and select appropriate CF … this is what good teachers do all the time.

Analysing the kind of error that has been made is the first step in providing appropriate CF, and it can be done, with increasing accuracy, by current technology, but it requires a lot of computing. Let’s take spelling as a simple place to start. If you enter ‘I am makeing a basket for my mother’ in the Duolingo translation above, the program tells you ‘Nice try … there’s a typo in your answer’. Given the configuration of keyboards, it is highly unlikely that this is a typo. It’s a simple spelling mistake, and teachers recognise it as such because they see it so often. For software to achieve the same insight, it would need, as a start, to trawl a large English dictionary database and a large tagged database of learner English. The process is quite complicated, but it’s perfectly doable, and learners could be provided with CF in the form of a ‘spelling hint’.
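
As a flavour of that first step, here is a minimal sketch in Python. The four-word dictionary is a stand-in for a real wordlist, and the one-edit threshold is my own simplification; a production system would also consult keyboard-adjacency data and a learner corpus to tell typos from spelling errors:

```python
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

DICTIONARY = {"make", "makes", "making", "baking"}  # stand-in for a full wordlist

def spelling_hint(word: str):
    """Suggest dictionary words one edit away from the learner's form."""
    candidates = [w for w in DICTIONARY if edit_distance(word, w) == 1]
    return f"Spelling hint: did you mean '{', '.join(candidates)}'?" if candidates else None

print(spelling_hint("makeing"))  # -> Spelling hint: did you mean 'making'?
```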

Rather more difficult is the error illustrated in my first screen shot. What’s the cause of this ‘error’? Teachers know immediately that this is probably a classic confusion of ‘do’ and ‘make’. They know that the French verb ‘faire’ can be translated into English as ‘make’ or ‘do’ (among other possibilities), and the error is a common language transfer problem. Software could do the same thing. It would need a large corpus (to establish that ‘make’ collocates with ‘a basket’ more often than ‘do’), a good bilingualised dictionary (plenty of these now exist), and a tagged database of learner English. Again, appropriate automated feedback could be provided in the form of some sort of indication that ‘faire’ is only sometimes translated as ‘make’.
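
A sketch of the collocation side might look like the following. The counts are made up, standing in for queries against a large corpus, and the hard-coded verb alternatives are a placeholder for what a real system would derive from a bilingualised dictionary entry for ‘faire’:

```python
# Hypothetical verb + object frequencies, standing in for corpus queries.
VERB_OBJECT_COUNTS = {
    ("make", "basket"): 312,
    ("do", "basket"): 3,
}

def collocation_hint(verb: str, noun: str, alternatives=("make", "do")):
    """Flag a verb choice when an alternative collocates far more strongly."""
    chosen = VERB_OBJECT_COUNTS.get((verb, noun), 0)
    for alt in alternatives:
        if alt != verb and VERB_OBJECT_COUNTS.get((alt, noun), 0) > 10 * max(chosen, 1):
            return f"Hint: '{alt} a {noun}' is far more common than '{verb} a {noun}'."
    return None

print(collocation_hint("do", "basket"))
```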

These are both relatively simple examples, but it’s easy to think of others that are much more difficult to analyse automatically. Duolingo rejects ‘I am making one basket for my mother’: it’s not very plausible, but it’s not wrong. Teachers know why learners do this (again, it’s probably a transfer problem) and know how to respond (perhaps by saying something like ‘Only one?’). Duolingo also rejects ‘I making a basket for my mother’ (a common enough error), but is unable to provide any help beyond the correct answer. Automated CF could, however, be provided in both cases if more tools are brought into play. Multiple parsing machines (one is rarely accurate enough on its own) and semantic analysis will be needed. Both the range and the complexity of the available tools are increasing so rapidly (see here for the sort of research that Google is doing and here for an insight into current applications of this research in language learning) that Duolingo-style right / wrong feedback will very soon seem positively antediluvian.

One further development is worth mentioning here, and it concerns feedback and gamification. Teachers know from the way that most learners respond to written CF that they are usually much more interested in knowing what they got right or wrong, rather than the reasons for this. Most students are more likely to spend more time looking at the score at the bottom of a corrected piece of written work than at the laborious annotations of the teacher throughout the text. Getting students to pay close attention to the feedback we provide is not easy. Online language learning systems with gamification elements, like Duolingo, typically reward learners for getting things right, and getting things right in the fewest attempts possible. They encourage learners to look for the shortest or cheapest route to finding the correct answers: learning becomes a sexed-up form of test. If, however, the automated feedback is good, this sort of gamification encourages the wrong sort of learning behaviour. Gamification designers will need to shift their attention away from the current concern with right / wrong, and towards ways of motivating learners to look at and respond to feedback. It’s tricky, because you want to encourage learners to take more risks (and reward them for doing so), but it makes no sense to penalise them for getting things right. The probable solution is to have a dual points system: one set of points for getting things right, another for employing positive learning strategies.
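
A dual system of this kind is easy enough to sketch; the point values below are arbitrary, chosen only to show the separation of the two tallies:

```python
class DualScore:
    """One tally for accuracy, a separate tally for learning behaviour."""
    def __init__(self):
        self.accuracy = 0
        self.strategy = 0

    def record(self, correct: bool, viewed_feedback: bool, retried: bool):
        if correct:
            self.accuracy += 10
        if viewed_feedback:
            self.strategy += 5   # reward engaging with the CF, not just being right
        if retried:
            self.strategy += 5   # reward repair / risk-taking after an error

s = DualScore()
s.record(correct=False, viewed_feedback=True, retried=True)
print(s.accuracy, s.strategy)    # 0 10
```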

The provision of automated ‘optimal feedback at the point of need’ may not be quite there yet, but it seems we’re on the way for some tasks in discrete-item learning. There will probably always be some teachers who can outperform computers in providing appropriate feedback, in the same way that a few top chess players can beat ‘Deep Blue’ and its scions. But the rest of us had better watch our backs: in the provision of some kinds of feedback, computers are catching up with us fast.

[1] Ellis, R. & Shintani, N. (2014) Exploring Language Pedagogy through Second Language Acquisition Research. Abingdon: Routledge p. 249

[2] Hattie, J. (2009) Visible Learning. Abingdon: Routledge p. 12

[3] Li, S. (2010) ‘The effectiveness of corrective feedback in SLA: a meta-analysis’ Language Learning 60 / 2: 309 – 365

[4] Brown, P. C., Roediger, H. L. & McDaniel, M. A. (2014) Make It Stick. Cambridge, Mass.: Belknap Press

There are a number of reasons why we sometimes need to describe a person’s language competence using a single number. Most of these are connected to the need for a shorthand to differentiate people, in summative testing or in job selection, for example. Numerical (or grade) allocation of this kind is so common (and especially in times when accountability is greatly valued) that it is easy to believe that this number is an objective description of a concrete entity, rather than a shorthand description of an abstract concept. In the process, the abstract concept (language competence) becomes reified and there is a tendency to stop thinking about what it actually is.

Language is messy. It’s a complex, adaptive system of communication which has a fundamentally social function. As Diane Larsen-Freeman and others have argued, ‘patterns of use strongly affect how language is acquired, is used, and changes. These processes are not independent of one another but are facets of the same complex adaptive system. […] The system consists of multiple agents (the speakers in the speech community) interacting with one another [and] the structures of language emerge from interrelated patterns of experience, social interaction, and cognitive mechanisms.’

As such, competence in language use is difficult to measure. There are ways of capturing some of it: think of the pages and pages of competency statements in the Common European Framework. But there has always been something deeply unsatisfactory about documents of this kind. How, for example, are we supposed to differentiate, exactly and objectively, between, say, ‘can participate fully in an interview’ (C1) and ‘can carry out an effective, fluent interview’ (B2)? The short answer is that we can’t. There are too many of these descriptors anyway and, even if we did attempt to use such a detailed tool to describe language competence, we would still be left with a very incomplete picture. There is at least one whole book devoted to attempts to test the untestable in language education (edited by Amos Paran and Lies Sercu, Multilingual Matters, 2010).

So, here is another reason why we are tempted to use shorthand numerical descriptors (such as A1, A2, B1, etc.) to describe something which is very complex and abstract (‘overall language competence’) and to reify this abstraction in the process. From there, it is a very short step to making things even more numerical, more scientific-sounding. Number-creep in recent years has brought us the Pearson Global Scale of English which can place you at a precise point on a scale from 10 to 90. Not to be outdone, Cambridge English Language Assessment now has a scale that runs from 80 points to 230, although Cambridge does, at least, allocate individual scores for four language skills.

As the title of this post suggests (in its reference to Stephen Jay Gould’s The Mismeasure of Man), I am suggesting that there are parallels between attempts to measure language competence and the sad history of attempts to measure ‘general intelligence’. Both are guilty of the twin fallacies of reification and ranking – the ordering of complex information as a gradual ascending scale. These conceptual fallacies then lead us, through the way that they push us to think about language, into making further conceptual errors about language learning. We start to confuse language testing with the ways that language learning can be structured.

We begin to granularise language. We move inexorably away from difficult-to-measure, hazy notions of language skills towards what, on the surface at least, seem more readily measurable entities: words and structures. We allocate to them numerical values on our testing scales, so that an individual word can be deemed to be higher or lower on the scale than another word. And then we have a syllabus, a synthetic syllabus, that lends itself to digital delivery and adaptive manipulation. We find ourselves in a situation where materials writers for Pearson, writing for a particular ‘level’, are only allowed to use vocabulary items and grammatical structures that correspond to that ‘level’. We find ourselves, in short, in a situation where the acquisition of a complex and messy system is described as a linear, additive process. Here’s an example from the Pearson website: ‘If you score 29 on the scale, you should be able to identify and order common food and drink from a menu; at 62, you should be able to write a structured review of a film, book or play. And because the GSE is so granular in nature, you can conquer smaller steps more often; and you are more likely to stay motivated as you work towards your goal.’ It’s a nonsense, a nonsense that is dictated by the needs of testing and adaptive software, but the sciency-sounding numbers help to hide the conceptual fallacies that lie beneath.

Perhaps, though, this doesn’t matter too much for most language learners. In the early stages of language learning (where most language learners are to be found), there are countless millions of people who don’t seem to mind the granularised programmes of Duolingo or Rosetta Stone, or the Grammar McNuggets of coursebooks. In these early stages, anything seems to be better than nothing, and the testing is relatively low-stakes. But as a learner’s interlanguage becomes more complex, and as the language she needs to acquire becomes more complex, attempts to granularise it and to present it in a linearly additive way become more problematic. It is for this reason, I suspect, that the appeal of granularised syllabuses declines so rapidly the more progress a learner makes. It comes as no surprise that, the further up the scale you get, the more that both teachers and learners want to get away from pre-determined syllabuses in coursebooks and software.

Adaptive language learning software is continuing to gain traction in the early stages of learning, in the initial acquisition of basic vocabulary and structures and in coming to grips with a new phonological system. It will almost certainly gain even more. But the challenge for the developers and publishers will be to find ways of making adaptive learning work for more advanced learners. Can it be done? Or will the mismeasure of language make it impossible?

In the words of its founder and CEO, self-declared ‘visionary’ Claudio Santori, Bliu Bliu is ‘the only company in the world that teaches languages we don’t even know’. This claim, which was made during a pitch for funding in October 2014, tells us a lot about the Bliu Bliu approach. It assumes that there exists a system by which all languages can be learnt / taught, and that the particular features of any given language are not of any great importance. It’s questionable, to say the least, and Santori fails to inspire confidence when he says, in the same pitch, ‘you join Bliu Bliu, you use it, we make something magical, and after a few weeks you can understand the language’.

The basic idea behind Bliu Bliu is that a language is learnt by using it (e.g. by reading or listening to texts), but that the texts need to be selected so that you know the great majority of words within them. The technological challenge, therefore, is to find (online) texts that contain the vocabulary that is appropriate for you. After that, Santori explains, ‘you progress, you input more words and you will get more text that you can understand. Hours and hours of conversations you can fully understand and listen. Not just stupid exercise from stupid grammar book. Real conversation. And in all of them you know 100% of the words. […] So basically you will have the same opportunity that a kid has when learning his native language. Listen hours and hours of native language being naturally spoken at you…at a level he/she can understand plus some challenge, everyday some more challenge, until he can pick up words very very fast’ (sic).

On entering the site, you are invited to take a test. In this, you are shown a series of words and asked to say if you find them ‘easy’ or ‘difficult’. There were 12 words in total, and each time I clicked ‘easy’. The system then tells you how many words it thinks you know, and offers you one or more further words to click on. Here are the words I was presented with and, to the right, the number of words that Bliu Bliu thought I knew after I had clicked ‘easy’ on the preceding word(s):

hello – 4,145
teenager – 5,960
soap, grape – 7,863
receipt, washing, skateboard – 9,638
motorway, tram, luggage, footballer, weekday – 11,061

Finally, I was asked about my knowledge of other languages. I said that my French was advanced and that my Spanish and German were intermediate. On the basis of this answer, I was now told that Bliu Bliu thinks that I know 11,073 words.

Eight of the words in the test are starred in the Macmillan dictionaries, meaning they are within the most frequent 7,500 words in English. Of the other four, skateboard, footballer and tram are very international words. The last, weekday, is a readily understandable compound made up of two extremely high frequency words. How could Bliu Bliu know, with such uncanny precision, that I knew 11,073 words from a test like this? I decided to try the test for French. Again, I clicked ‘easy’ for each of the twelve words that were offered. This time, I was offered a very different set of words, with low frequency items like polynôme, toponymie, diaspora and vectoriel (all of which are cognate with English words), along with the rather surprising vichy (which should have had a capital letter, as it is a proper noun). Despite finding all these words easy, I was mortified to be told that I only knew 6,546 words in French.
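
Bliu Bliu’s algorithm is not public, but yes/no vocabulary tests of this genre typically sample a word or two from each frequency band and then credit whole bands on the strength of your answers. Here is a purely hypothetical Python sketch of that logic; if something like it is at work, it would explain both the spurious precision of a figure like 11,073 and the wild miss in French:

```python
# Invented frequency bands (rank ranges), one sampled word per band.
BANDS = [(1, 2000), (2001, 4000), (4001, 6000), (6001, 8000), (8001, 12000)]

def estimate_vocab(band_known: list[bool]) -> int:
    """Credit the full band whenever its sampled word was marked 'easy'."""
    return sum(hi - lo + 1 for (lo, hi), known in zip(BANDS, band_known) if known)

print(estimate_vocab([True, True, True, True, False]))   # -> 8000
```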

I needn’t have bothered with the test, anyway. Irrespective of level, you are offered vocabulary sets of high frequency words. Examples of sets I was offered included [the, be, of, and, to], [way, state, say, world, two], [may, man, hear, said, call] and [life, down, any, show, t]. Bliu Bliu then gives you a series of short texts that include the target words. You can click on any word you don’t know and you are given either a definition or a translation (I opted for French translations). There is no task beyond simply reading these texts. Putting aside for the moment the question of why I was being offered these particular words when my level is advanced, how does the software perform?

The vast majority of the texts are short quotes from brainyquote.com, and here is the first problem. Quotes tend to be pithy and often play with words: their comprehensibility is not always a function of the frequency of the words they contain. For the word ‘say’, for example, the texts included the Shakespearean quote It will have blood, they say; blood will have blood. For the word ‘world’, I was offered this line from Alexander Pope: The world forgetting, by the world forgot. Not, perhaps, the best way of learning a couple of very simple, high-frequency words. But this was the least of the problems.

The system operates on a word level. It doesn’t recognise phrases or chunks, or even phrasal verbs. So, a word like ‘down’ (in one of the lists above) is presented without consideration of its multiple senses. The first set of sentences I was asked to read for ‘down’ included: I never regretted what I turned down, You get old, you slow down, I’m Creole, and I’m down to earth, I never fall down. I always fight, I like seeing girls throw down and I don’t take criticism lying down. Not exactly the best way of getting to grips with the word ‘down’ if you don’t know it!

You may have noticed the inclusion of the word ‘t’ in one of the lists above. Here are the example sentences for practising this word: (1) Knock the ‘t’ off the ‘can’t’, (2) Sometimes reality T.V. can be stressful, (3) Argentina Debt Swap Won’t Avoid Default, (4) OK, I just don’t understand Nethanyahu, (5) Venezuela: Hell on Earth by Walter T Molano and (6) Work will win when wishy washy wishing won t. I paid €7.99 for one month of this!

The translation function is equally awful. With high frequency words with multiple meanings, you get a long list of possible translations, but no indication of which one is appropriate for the context you are looking at. With other words, it is sometimes, simply, wrong. For example, in the sentence, Heaven lent you a soul, Earth will lend a grave, the translation for ‘grave’ was only for the homonymous adjective. In the sentence There’s a bright spot in every dark cloud, the translation for ‘spot’ was only for verbs. And the translation for ‘but’ in We love but once, for once only are we perfectly equipped for loving was ‘mais’ (not at all what it means here!). The translation tool couldn’t handle the first ‘for’ in this sentence, either.

Bliu Bliu’s claim that ‘Bliu Bliu knows you very well, every single word you know or don’t know’ is manifest nonsense and reveals a serious lack of understanding about what it means to know a word. However, as you spend more time on the system, a picture of your vocabulary knowledge is certainly built up. The texts that are offered begin to move away from the one-liners from brainyquote.com. As reading (or listening to recorded texts) is the only learning task that is offered, the intrinsic interest of the texts is crucial. Here, again, I was disappointed. Texts that I was offered were sourced from IEEE Spectrum (‘The World’s Largest Professional Association for the Advancement of Technology’), infowars.com (the home of the ‘#1 Internet News Show in the World’), Latin America News and Analysis, the official Google blog (‘Meet 15 Finalists and Science in Action Winner for the 2013 GoogleScience Fair’), MLB Trade Rumors (‘a clearinghouse for relevant, legitimate baseball rumors’), and a long text entitled ‘Robert Waldmann: Policy-Relevant Macro Is All in Samuelson and Solow (1960)’ from a blog called Brad DeLong’s Grasping Reality…with the Neural Network of a Moderately-Intelligent Cephalopod.

There is more curated content (selected from a menu which includes sections entitled ‘18+’ and ‘Controversial Jokes’). In these texts, words that the system thinks you won’t know (most of the proper nouns for example) are highlighted. And there is a small library of novels, again, where predicted unknown words are highlighted in pink. These include Dostoyevsky, Kafka, Oscar Wilde, Gogol, Conan Doyle, Joseph Conrad, Oblomov, H.P. Lovecraft, Joyce, and Poe. You can also upload your own texts if you wish.

But, by this stage, I’d had enough and I clicked on the button to cancel my subscription. I shouldn’t have been surprised when the system crashed and a message popped up saying the system had encountered an error.

Like so many ‘language learning’ start-ups, Bliu Bliu seems to know a little, but not a lot about language learning. The Bliu Bliu blog has a video of Stephen Krashen talking about comprehensible input (it is misleadingly captioned ‘Stephen Krashen on Bliu Bliu’) in which he says that we all learn languages the same way, and that is when we get comprehensible input in a low anxiety environment. Influential though it has been, Krashen’s hypothesis remains a hypothesis, and it is generally accepted now that comprehensible input may be necessary, but it is not sufficient for language learning to take place.

The hypothesis hinges, anyway, on a definition of what is meant by ‘comprehensible’ and no one has come close to defining what precisely this means. Bliu Bliu has falsely assumed that comprehensibility can be determined by self-reporting of word knowledge, and this assumption is made even more problematic by the confusion of words (as sequences of letters) with lexical items. Bliu Bliu takes no account of lexical grammar or collocation (fundamental to any real word knowledge).

The name ‘Bliu Bliu’ was inspired by an episode from ‘Friends’ where Joey tries and fails to speak French. In the episode, according to the ‘Friends’ wiki, ‘Phoebe helps Joey prepare for an audition by teaching him how to speak French. Joey does not progress well and just speaks gibberish, thinking he’s doing a great job. Phoebe explains to the director in French that Joey is her mentally disabled younger brother so he’ll take pity on Joey.’ Bliu Bliu was an unfortunately apt choice of name.

I suggested in my last post that vocabulary flashcard systems can have a useful role to play in blended learning contexts. However, for their potential to be exploited, teachers will need to devote classroom time to the things that the apps, on their own, cannot do. This post looks in some detail at what teachers can do.

Spaced repetition may be important to long-term memorization of new vocabulary items, but it will not be enough on its own. Memory researchers refer to three techniques that will improve speed of retention and long-term recall. The first of these is called the ‘generation effect’ – the use of even a little cognitive effort in generating the answer in flashcard practice. A simple example is provided by Brown, Roediger and McDaniel[1]: ‘simply asking a subject to fill in a word’s missing letters resulted in better memory of the word. […] For a pair like foot-shoe, those who studied the pair intact had lower subsequent recall than those who studied the pair from a clue as obvious as foot-s _ _ e.’ In vocabulary learning, there is much that learners need to know beyond the meaning or translation equivalent: pronunciation, collocation, and associated grammatical patterns, for example. A focus on these aspects of word knowledge will deepen that knowledge and can enhance memorization at the same time.
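
A flashcard app could exploit the generation effect very cheaply. Here is a minimal Python sketch of the kind of gapped prompt Brown et al. describe:

```python
def gapped(word: str) -> str:
    """Keep the first and last letters, gap the rest: 'shoe' -> 's _ _ e'."""
    if len(word) <= 2:
        return word
    return " ".join([word[0], *["_"] * (len(word) - 2), word[-1]])

print("foot –", gapped("shoe"))   # foot – s _ _ e
```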

The second of these techniques is called ‘elaboration’ – the process of giving new material meaning by expressing it in your own words and connecting it with what you already know. The more you can explain about the way your new learning relates to your prior knowledge, the stronger your grasp of the new learning will be, and the more connections you create that will help you remember it later[2]. Explaining the meaning or rules of use of a target vocabulary item to a fellow student, or explaining how this word has significance in your life outside the classroom are simple examples of elaboration. Whilst elaboration is important in any kind of memorization, it is probably especially important in vocabulary learning. If the mental lexicon is a network of associations (and we don’t really have a better way of describing it right now!), the fostering of multiple associations or connections will be a vital part of building up this lexicon: ‘When students are asked to manipulate words, relate them to other words and to their own experiences, and then to justify their choices, these word associations are reinforced’[3].
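To make the network metaphor concrete, here is a toy sketch – nothing more than an illustration of the underlying data structure – in which each act of elaboration adds another link, and therefore another retrieval route, to a word:

```python
from collections import defaultdict

# Toy model of the mental lexicon as a network of associations: each
# elaboration a learner makes (a related word, a collocation, a personal
# experience) adds another bidirectional link to the target word.
lexicon = defaultdict(set)

def associate(word, *connections):
    for other in connections:
        lexicon[word].add(other)
        lexicon[other].add(word)  # associations run in both directions

associate("shoe", "foot", "lace", "my worn-out trainers")
associate("lace", "curtain")

print(sorted(lexicon["shoe"]))  # the more links, the more retrieval routes
```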

The third of these techniques is getting the right kind of feedback. Feedback on flashcard software is typically of the right / wrong variety. At some point, this is obviously necessary, but it has its limitations. First of all, it is usually immediate, and research[4] suggests that a slight delay in getting feedback aids recall; with immediate feedback, learners can also easily come to over-rely on it. Secondly, intelligent, scaffolded feedback (e.g. with hints and cues, rather than simple provision of the correct answer) contributes to the ‘generation effect’ (see above). Thirdly, positive feedback (e.g. where a learner sees that she can accurately and appropriately use new items, especially in new contexts) will enhance both learning and motivation. Flashcard software almost invariably presents and practises vocabulary in one context only, and rarely requires learners to produce the language in a communicative context.
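Again purely as an illustration (the cues themselves are my invention, not a feature of any existing app), scaffolded feedback might look something like this in code, offering progressively more revealing hints before surrendering the answer:

```python
def scaffolded_cues(answer):
    """Yield progressively more revealing hints instead of the full answer,
    so the learner still has to generate most of the word themselves."""
    yield f"It has {len(answer)} letters."
    yield f"It starts with '{answer[0]}'."
    half = max(1, len(answer) // 2)
    yield f"It begins '{answer[:half]}...'."
    yield f"The answer is '{answer}'."

for cue in scaffolded_cues("collocation"):
    print(cue)
```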

The practical classroom suggestions that follow are all attempts to address the issues raised above. This is not in any way a complete list, and, in the ‘Practice activities’ section, I have prioritized tasks that offer more than the simple re-exposure provided by activities such as ‘Hangman’, word quizzes, word squares and definition games. But I hope that it will be a useful starting point.

Preparation activities

  • Put students into pairs and give them a few minutes (at any moment in a lesson, but this is often done at the start) to test each other on the words they are studying.
  • On a regular basis, allocate some classroom time for students to edit / improve their flashcards. This is best done in pairs. Tasks that you could set include: (1) students find example sentences to add to their cards; (2) students find more memorable / amusing example sentences to add to their cards; (3) students research and find useful phrases which include their target items, and add these to their cards; (4) students research and find common collocations of their target words and add these to their cards; (5) students research and find pictures (from an online image search) which they can use to replace their own-language translations; (6) students research, find and add to their cards other parts of speech; (7) students find recordings (via online dictionaries) of their target items and add them to their cards; (8) students record themselves saying the target items and add these to their cards; (9) students gap (or anagrammatize) some of the letters on the English sides of their cards; (10) students compare cards, discuss which are more memorable, and edit their own if they think this is useful.
  • The ultimate hope is that learners will become more autonomous in their vocabulary learning. To this end, I’d thoroughly endorse Daniel Barber’s suggestion in a comment on my previous post: get the class to use and review the various wordcard apps and feed back to their classmates, i.e. to discover the relative merits of digital vs. hand-written cards, or Anki vs. Quizlet, and decide for themselves what’s best.

Practice activities

  • Ask students to flip through their flashcard set and make a list of the words that they are finding hardest to remember. They should do this with a partner and, together, should come up with a list of twelve or more words. Ask the pairs to put their words into groups. Initially, it will probably be best to suggest the kinds of groupings they could use. For example: (1) words they think they would probably need to use in their first week in an English-speaking country vs. words they think they are unlikely to need in their first week in an English-speaking country; (2) words they like (for whatever reason) vs. words they dislike; (3) words they can associate with good things vs. words they can associate with bad things. When students are familiar with this activity type, they can choose their own categories. Once students have completed the task with their partner, they should change partners and exchange ideas. All of this can be done orally.
  • Ask students to flip through their flashcard set and make a list of the words that they are finding hardest to remember. They should do this with a partner and, together, should come up with a list of twelve or more words. Tell them to write these words in a circle on a sheet of paper. Then tell the students to choose, at random, one word in their circle. Next, they must find another word in the circle which they can associate in some way with the first word that they chose. They must explain this association to their partner. They must then find another word which they can associate with their second word. Again they must explain the association. They should continue in this way until they have connected all the words in their circle. Once students have completed the task with their partner, they should change partners and exchange ideas. All of this can be done orally.
  • Using the same kind of circle of words (as in the activity above), students again work with a partner. Starting with any word, they must find and explain an association with another word. Next, beginning with the word they first chose, they must find and explain an association with another word from the circle. They continue in this way until they have found connections between their first word and all the other words in the circle. Once students have completed the task with their partner, they should change partners and exchange ideas. All of this can be done orally.
  • Ask the students to flick through their coursebooks and find four or five images that they find interesting or attractive. Tell them to note the page numbers. Then, ask the students to flip through their flashcard set and make a list of the words that they are finding hardest to remember. They should do this with a partner and, together, should come up with a list of twelve or more words. The students should then find an association between each of the words on their list and one of the pictures they have selected. They discuss their ideas with their partner, before comparing their ideas with a new partner.
  • Using the pictures and word lists (as in the activity above), students should select one picture, without telling their partner which picture they have selected. They should then look at the word list and choose four words from this list which they can associate with that picture. They then tell their four words to their partner, whose task is to guess which picture the other student was thinking of.
  • Ask students to flip through their flashcard set and make a list of the words that they are finding hardest to remember. Individually, they should then write a series of sentences which contain these words: the sentences can contain one, two, or more of their target words. Half of the sentences should contain true personal information; the other half should contain false personal information. Students then work with a partner, read their sentences aloud, and the partner must decide which sentences are true and which are false.
  • Ask students to flip through their flashcard set and make a list of the words that they are finding hardest to remember. They should do this with a partner and, together, should come up with a list of twelve or more words. Still in pairs, they should prepare a short story which contains at least seven of the items in their list. After preparing their story, they should rehearse it before exchanging stories with another student / pair of students.
  • There’s a fun question-and-answer game, ‘Any Which Way Matching’, from Alex Case, which can be used with any set of vocabulary. It can be found here.
  • Play a class game which recycles the vocabulary that students are having difficulty remembering. You can find the rules for one game, ‘Words in sentences’, which can be used with any set of vocabulary, here.

[1] Brown, P.C., Roediger, H.L. & McDaniel, M. A. Make It Stick (Cambridge, Mass.: Belknap Press, 2014) p.32

[2] ibid p.5

[3] Sökmen, A.J. ‘Current trends in teaching second language vocabulary,’ in Schmitt, N. & McCarthy, M. (eds.) Vocabulary: Description, Acquisition and Pedagogy (Cambridge: CUP, 1997) pp.241-242

[4] Brown, P.C., Roediger, H.L. & McDaniel, M. A. Make It Stick (Cambridge, Mass.: Belknap Press, 2014) pp.39-40

(This post was originally published at eltjam.)

We now have young learners and very young learners, learner differences and learner profiles, learning styles, learner training, learner independence and autonomy, learning technologies, life-long learning, learning management systems, virtual learning environments, learning outcomes, learning analytics and adaptive learning. Much, but perhaps not all, of this is to the good, but it’s easy to forget that it wasn’t always like this.

The rise in the use of the terms ‘learner’ and ‘learning’ can be seen in policy documents, educational research and everyday speech, and it really got going in the mid 1980s[1]. Duncan Hunter and Richard Smith[2] have identified a similar trend in ELT after analysing a corpus of articles from the English Language Teaching Journal. They found that ‘learner’ had risen to near the top of the key-word pile in the mid 1980s, but had been practically invisible 15 years previously. Accompanying this rise has been a relative decline of words like ‘teacher’, ‘teaching’, ‘pupil’ and, even, ‘education’. Gert Biesta has described this shift in discourse as a ‘new language of learning’ and the ‘learnification of education’.

It’s not hard to see the positive side of this change in focus towards the ‘learner’ and away from the syllabus, the teachers and the institution in which the ‘learning’ takes place. We can, perhaps, be proud of our preference for learner-centred approaches over teacher-centred ones. We can see something liberating (for our students) in the change of language that we use. But, as Bingham and Biesta[3] have pointed out, this gain is also a loss.

The language of ‘learners’ and ‘learning’ focusses our attention on process – how something is learnt. This was a much-needed corrective after an uninterrupted history of focussing on end-products, but the corollary is that it has become very easy to forget not only about the content of language learning, but also its purposes and the social relationships through which it takes place.

There has been some recent debate about the content of language learning, most notably in the work of the English as a Lingua Franca scholars. But there has been much more attention paid to the measurement of the learners’ acquisition of that content (through the use of tools like the Pearson Global Scale of English). There is a growing focus on ‘granularized’ content – lists of words and structures, and to a lesser extent language skills, that can be easily measured. It looks as though other things that we might want our students to be learning – critical thinking skills and intercultural competence, for example – are being sidelined.

More significant is the neglect of the purposes of language learning. The discourse of ELT is massively dominated by the paying sector of private language schools and semi-privatised universities. In these contexts, questions of purpose are not, perhaps, terribly important, as the whole point of the enterprise can be assumed to be primarily instrumental. But the vast majority of English language learners around the world are studying in state-funded institutions as part of a broader educational programme, which is as much social and political as it is to do with ‘learning’. The point of English lessons in these contexts is usually stated in much broader terms. The Council of Europe’s Common European Framework of Reference, for example, states that its ultimate aim is to facilitate better intercultural understanding. It is very easy to forget this when we are caught up in the business of levels and scales and measuring learning outcomes.

Lastly, a focus on ‘learners’ and ‘learning’ distracts attention away from the social roles that are enacted in classrooms. Twenty-five years ago, Henry Widdowson[4] pointed out that there are two quite different kinds of role. The first of these is concerned with occupation (student / pupil vs teacher / master / mistress) and is identifying. The second (the learning role) is actually incidental and cannot be guaranteed. He reminds us that the success of the language learning / teaching enterprise depends on ‘recognizing and resolving the difficulties inherent in the dual functioning of roles in the classroom encounter’[5]. Again, this may not matter too much in the private sector but, elsewhere, any attempt to tackle the learning / teaching conundrum through an exclusive focus on learning processes is unlikely to succeed.

The ‘learnification’ of education has been accompanied by two related developments: the casting of language learners as consumers of a ‘learning experience’ and the rise of digital technologies in education. For reasons of space, I will limit myself to commenting on the second of these[6]. Research by Geir Haugsbakk and Yngve Nordkvelle[7] has documented a clear and critical link between the new ‘language of learning’ and the rhetoric of edtech advocacy. These researchers suggest that these discourses are mutually reinforcing, that both contribute to the casting of the ‘learner’ as a consumer, and that the coupling of learning and digital tools is often purely rhetorical.

One of the net results of ‘learnification’ is the transformation of education into a technical or technological problem to be solved. It suggests, wrongly, that approaches to education can be derived purely from theories of learning. By adopting an ahistorical and apolitical standpoint, it hides ‘the complex nexus of political and economic power and resources that lies behind a considerable amount of curriculum organization and selection’[8]. The very real danger, as Biesta[9] has observed, is that ‘if we fail to engage with the question of good education head-on – there is a real risk that data, statistics and league tables will do the decision-making for us’.

[1] Biesta, G.J.J. (2004) ‘Against learning. Reclaiming a language for education in an age of learning’ Nordisk Pedagogik 24 (1), 70-82 & Biesta, G.J.J. (2010) Good Education in an Age of Measurement (Boulder, Colorado: Paradigm Publishers)

[2] Hunter, D. & Smith, R. (2012) ‘Unpackaging the past: ‘CLT’ through ELTJ keywords’ ELTJ 66/4 430-439

[3] Bingham, C. & Biesta, G.J.J. (2010) Jacques Rancière: Education, Truth, Emancipation (London: Continuum) p.134

[4] Widdowson, H.G. (1990) Aspects of Language Teaching (Oxford: OUP) pp.182 ff

[5] Widdowson, H.G. (1987) ‘The roles of teacher and learner’ ELTJ 41/2

[6] A compelling account of the way that students have become ‘consumers’ can be found in Williams, J. (2013) Consuming Higher Education (London: Bloomsbury)

[7] Haugsbakk, G. & Nordkvelle, Y. (2007) ‘The Rhetoric of ICT and the New Language of Learning: a critical analysis of the use of ICT in the curricular field’ European Educational Research Journal 6/1 1-12

[8] Apple, M. W. (2004) Ideology and Curriculum 3rd edition (New York: Routledge) p.28

[9] Biesta, G.J.J. (2010) Good Education in an Age of Measurement (Boulder, Colorado: Paradigm Publishers) p.27


There is a lot that technology can do to help English language learners develop their reading skills. The internet makes it possible for learners to read an almost limitless number of texts that will interest them, and these texts can be evaluated for readability and, therefore, suitability for level (see here for a useful article). RSS opens up exciting possibilities for narrow reading, and the positive impact of multimedia-enhanced texts was researched many years ago. There are good online bilingual dictionaries and other translation tools. There are apps that go with graded readers (see this review in the Guardian) and there are apps that can force you to read at a certain speed. And there is more. All of this could very effectively be managed on a good learning platform.
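For illustration only, here is a rough sketch of the classic Flesch Reading Ease formula – one of the oldest readability measures, and not necessarily what any tool mentioned here uses – with a crude vowel-group heuristic standing in for proper syllable counting:

```python
import re

def flesch_reading_ease(text):
    """Rough Flesch Reading Ease score: higher scores mean easier texts.
    Syllables are estimated by counting vowel groups, which is approximate."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    if not words:
        return 0.0
    syllables = sum(max(1, len(re.findall(r"[aeiouy]+", w.lower()))) for w in words)
    return 206.835 - 1.015 * (len(words) / sentences) - 84.6 * (syllables / len(words))

print(round(flesch_reading_ease("The cat sat on the mat. It was happy."), 1))
```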

Could adaptive software add another valuable element to reading skills development?

Adaptive reading programs are spreading in primary education in the US and, with some modifications, could be used in ELT courses for younger learners and for learners whose first languages do not use the Roman alphabet. One of the best-known has been developed by Lexia Learning®, a company that won a $500,000 grant from the Gates Foundation last year. Lexia Learning® was bought by Rosetta Stone® for $22.5 million in June 2013.

One of their products, Lexia Reading Core5, ‘provides explicit, systematic, personalized learning in the six areas of reading instruction, and delivers norm-referenced performance data and analysis without interrupting the flow of instruction to administer a test. Designed specifically to meet the Common Core and the most rigorous state standards, this research-proven, technology-based approach accelerates reading skills development, predicts students’ year-end performance and provides teachers data-driven action plans to help differentiate instruction’.

[Image: screenshot of Lexia Reading Core5, showing its breakdown of reading skills]

The predictable claim that it is ‘research-proven’ has not convinced everyone. Richard Allington, a professor of literacy studies at the University of Tennessee and a past president of both the International Reading Association and the National Reading Conference, has said that all the companies that have developed this kind of software ‘come up with evidence – albeit potential evidence – that kids could improve their abilities to read by using their product. It’s all marketing. They’re selling a product. Lexia is one of these programs. But there virtually are no commercial programs that have any solid, reliable evidence that they improve reading achievement.’[1] He has argued that the $12 million that has been spent on the Lexia programs would have been better spent on a national program, developed at Ohio State University, that matches specially trained reading instructors with students known to have trouble learning to read.

But what about ELT? For an adaptive program like Lexia’s to work, reading skills need to be broken down in a similar way to the diagram shown above. Let’s get some folk linguistics out of the way first. The sub-skills of reading are not skimming, scanning, inferring meaning from context, etc. These are strategies that readers adopt voluntarily in order to understand a text better, and readers who use them in their own language are likely to transfer them to their reading in English. ELT instruction in strategy use seems to have only limited impact, although this kind of training may be relevant to preparation for exams. This insight is taking a long time to filter down to course and coursebook design, but there really isn’t much debate[2]. Any adaptive ELT reading program that confuses reading strategies with reading sub-skills is going to have big problems.

What, then, are the sub-skills of reading? In what ways could reading be broken down into a skill tree so that it is amenable to adaptive learning? Researchers have provided different answers: Munby (1978), for example, listed 19 reading microskills, while Heaton (1988) listed 14. A bigger problem, however, is that other researchers (e.g. Lunzer 1979, Rost 1993) have failed to find evidence that distinct sub-skills actually exist. While it is easier to identify sub-skills for very low-level readers (especially those whose own language is very different from English), it is simply not possible to do so at higher levels.

Reading in another language is a complex process which involves both top-down and bottom-up strategies, is intimately linked to vocabulary knowledge, and requires the activation of background and cultural knowledge. Reading ability, in the eyes of some researchers, is unitary or holistic; others prefer to separate it into two components: word recognition and comprehension[3]. Either way, a consensus is beginning to emerge that teachers and learners might do better to focus on vocabulary extension (which would include extensive reading) than to attempt to develop reading programs that assume the multidivisible nature of reading.

All of which means that adaptive learning software and reading skills in ELT are unlikely bedfellows. To be sure, an increased use of technology (as described in the first paragraph of this post) in reading work will generate a lot of data about learner behaviours. Analysis of this data may lead to actionable insights – or it may not! It will be interesting to find out.


[1] http://www.khi.org/news/2013/jun/17/budget-proviso-reading-program-raises-questions/

[2] See, for example, Walter, C. & M. Swan. 2008. ‘Teaching reading skills: mostly a waste of time?’ in Beaven, B. (ed.) IATEFL 2008 Exeter Conference Selections. (Canterbury: IATEFL). Or go back further to Alderson, J. C. 1984 ‘Reading in a foreign language: a reading problem or a language problem?’ in J.C. Alderson & A. H. Urquhart (eds.) Reading in a Foreign Language (London: Longman)

[3] For a useful summary of these issues, see ‘Reading abilities and strategies: a short introduction’ by Feng Liu (International Education Studies 3 / 3 August 2010) www.ccsenet.org/journal/index.php/ies/article/viewFile/6790/5321