Posts Tagged ‘translation’

It’s hype time again. Spurred on, no doubt, by the current spate of books and articles about AIED (artificial intelligence in education), the IATEFL Learning Technologies SIG is organising an online event on the topic in November of this year. Currently, the most visible online references to AI in language learning are related to Glossika, basically a language learning system that uses spaced repetition, whose marketing department has realised that references to AI might help sell the product. They’re not alone – see, for example, Knowble, which I reviewed earlier this year.

In the wider world of education, where AI has made greater inroads than in language teaching, every day brings more stuff: How artificial intelligence is changing teaching, 32 Ways AI is Improving Education, How artificial intelligence could help teachers do a better job, etc., etc. There’s a full-length book by Anthony Seldon, The Fourth Education Revolution: will artificial intelligence liberate or infantilise humanity? (2018, University of Buckingham Press) – one of the most poorly researched and badly edited books on education I’ve ever read, although that won’t stop it selling – and, no surprises here, there’s a Pearson-commissioned report called Intelligence Unleashed: An argument for AI in Education (2016), which is available free.

Common to all these publications is the claim that AI will radically change education. When it comes to language teaching, a similar claim has been made by Donald Clark (described by Anthony Seldon as an education guru but perhaps best-known to many in ELT for his demolition of Sugata Mitra). In 2017, Clark wrote a blog post for Cambridge English (now unavailable) entitled How AI will reboot language learning, and a more recent version of this post, called AI has and will change language learning forever (sic), is available on Clark’s own blog. Given the history of failed predictions in education, Clark is making bold claims. Thomas Edison (1922) believed that movies would revolutionize education. Radios were similarly hyped in the 1940s, and in the 1960s it was the turn of TV. In the 1980s, Seymour Papert predicted the end of schools – ‘the computer will blow up the school’, he wrote. Twenty years later, we had the interactive possibilities of Web 2.0. As each technology failed to deliver on the hype, a new generation of enthusiasts found something else to make predictions about.

But is Donald Clark onto something? Developments in AI and computational linguistics have recently resulted in enormous progress in machine translation. Impressive advances in automatic speech recognition and generation, coupled with the power that can be packed into a handheld device, mean that we can expect some re-evaluation of the value of learning another language. Stephen Heppell, a specialist in the use of ICT in education at Bournemouth University, has said: ‘Simultaneous translation is coming, making language teachers redundant. Modern languages teaching in future may be more about navigating cultural differences’ (quoted by Seldon, p. 263). Well, maybe, but this is not Clark’s main interest.

Less a matter of opinion and much closer to the present day is the issue of assessment. AI is becoming ubiquitous in language testing. Cambridge, Pearson, TELC, Babbel and Duolingo are all using or exploring AI in their testing software, and we can expect to see this increase. Current, paper-based systems of testing subject knowledge are, according to Rosemary Luckin and Kristen Weatherby, outdated, ineffective, time-consuming, the cause of great anxiety and can easily be automated (Luckin, R. & Weatherby, K. 2018. ‘Learning analytics, artificial intelligence and the process of assessment’ in Luckin, R. (ed.) Enhancing Learning and Teaching with Technology. UCL Institute of Education Press, p. 253). By capturing data of various kinds throughout a language learner’s course of study and by using AI to analyse learning development, continuous formative assessment becomes possible in ways that were previously unimaginable. ‘Assessment for Learning (AfL)’ and ‘Learning Oriented Assessment (LOA)’ are two terms used by Cambridge English to refer to this potential, which is described by Luckin (who is also one of the authors of the Pearson paper mentioned earlier). In practical terms, albeit in a still very limited way, this can be seen in the CUP course ‘Empower’, which combines CUP course content with validated LOA from Cambridge Assessment English.

Will this reboot or revolutionise language teaching? Probably not, and here’s why. AIED systems need to operate with what is called a ‘domain knowledge model’. This specifies what is to be learnt and includes an analysis of the steps that must be taken to reach that learning goal. Some subjects (especially STEM subjects) ‘lend themselves much more readily to having their domains represented in ways that can be automatically reasoned about’ (du Boulay, D. et al. 2018. ‘Artificial intelligences and big data technologies to close the achievement gap’ in Luckin, R. (ed.) Enhancing Learning and Teaching with Technology. UCL Institute of Education Press, p. 258). This is why most AIED systems have been built to teach these areas. Languages are rather different. We simply do not have a domain knowledge model, except perhaps for the very lowest levels of language learning (and even that is highly questionable). Language learning is probably not, or not primarily, about acquiring subject knowledge. Debate still rages about the relationship between explicit language knowledge and language competence. AI-driven formative assessment will likely focus most on explicit language knowledge, as does most current language teaching. This will not reboot or revolutionise anything. It will more likely reinforce what is already happening: a model of language learning that assumes there is a strong interface between explicit knowledge and language competence. It is not a model that is shared by most SLA researchers.
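
To see what’s at stake here, it helps to picture what a domain knowledge model actually looks like. For a STEM-ish domain, it can be as simple as a prerequisite graph that software can reason over automatically. Here’s a toy sketch in Python (the skill names and dependencies are invented, not taken from any real AIED system); the point is that nobody can draw the equivalent graph for ‘knowing a language’.

```python
# Illustrative only: a toy 'domain knowledge model' of the kind AIED systems
# use for STEM subjects -- a directed graph of skills with prerequisites.
# All names and dependencies are invented for this sketch.

PREREQUISITES = {
    "counting": [],
    "addition": ["counting"],
    "subtraction": ["counting"],
    "multiplication": ["addition"],
    "division": ["multiplication", "subtraction"],
}

def next_skills(mastered):
    """Skills that are unmastered but whose prerequisites are all mastered."""
    return [
        skill
        for skill, prereqs in PREREQUISITES.items()
        if skill not in mastered and all(p in mastered for p in prereqs)
    ]

print(next_skills({"counting", "addition"}))
# ['subtraction', 'multiplication']
```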

So, one thing that AI can do (and is doing) for language learning is to improve the algorithms that determine the way that grammar and vocabulary are presented to individual learners in online programs. AI-optimised delivery of ‘English Grammar in Use’ may lead to some learning gains, but they are unlikely to be significant. It is not, in any case, what language learners need.

AI, Donald Clark suggests, can offer personalised learning. Precisely what kind of personalised learning this might be, and whether or not this is a good thing, remains unclear. A 2015 report funded by the Gates Foundation found that we currently lack evidence about the effectiveness of personalised learning. We do not know which aspects of personalised learning (learner autonomy, individualised learning pathways and instructional approaches, etc.) or which combinations of these will lead to gains in language learning. The complexity of the issues means that we may never have a satisfactory explanation. You can read my own exploration of the problems of personalised learning starting here.

What’s left? Clark suggests that chatbots are one area with ‘huge potential’. I beg to differ, and I explained my reasons eighteen months ago. Chatbots work fine in very specific domains. As Clark says, they can be used for ‘controlled practice’, but ‘controlled practice’ means practice of specific language knowledge – the practice of limited conversational routines, for example. It could certainly be useful, but more than that? Taking things a stage further, Clark then suggests more holistic speaking and listening practice with Amazon Echo, Alexa or Google Home. If and when the day comes that we have general, as opposed to domain-specific, AI, chatting with one of these tools would open up vast new possibilities. Unfortunately, general AI does not exist, and until then Alexa and co will remain a poor substitute for human-human interaction (which is readily available online, anyway). Incidentally, AI could be used to form groups of online language learners to carry out communicative tasks – ‘the aim might be to design a grouping of students all at a similar cognitive level and of similar interests, or one where the participants bring different but complementary knowledge and skills’ (Luckin, R., Holmes, W., Griffiths, M. & Forcier, L.B. 2016. Intelligence Unleashed: An argument for AI in Education. London: Pearson, p. 26).

Predictions about the impact of technology on education have a tendency to be made by people with a vested interest in the technologies. Edison was a businessman who had invested heavily in motion pictures. Donald Clark is an edtech entrepreneur whose company, Wildfire, uses AI in online learning programs. Stephen Heppell is executive chairman of LP+, which is currently developing a Chinese language learning community for 20 million Chinese school students. The reporting of AIED is almost invariably on websites that are paid for, in one way or another, by edtech companies. Predictions need, therefore, to be treated sceptically. Indeed, the safest prediction we can make about hyped educational technologies is that inflated expectations will be followed by disillusionment, before the technology finds a smaller niche.

 


Knowble, claim its developers, is a browser extension that will improve English vocabulary and reading comprehension. It also describes itself as an ‘adaptive language learning solution for publishers’. It’s currently in beta and free, and sounds right up my street, so I decided to give it a run.

Knowble reader

Users are asked to specify a first language (I chose French) and a level (A1 to C2): I chose B1, but this did not seem to impact on anything that subsequently happened. They are then offered a menu of about 30 up-to-date news items, grouped into 5 categories (world, science, business, sport, entertainment). Clicking on one article takes you to the article on the source website. There’s a good selection, including USA Today, CNN, Reuters, the Independent and the Torygraph from Britain, the Times of India, the Independent from Ireland and the Star from Canada. A large number of words are underlined: a single click brings up a translation in the extension box. Double-clicking on all other words will also bring up translations. Apart from that, there is one very short exercise (which has presumably been automatically generated) for each article.

For my trial run, I picked three articles: ‘Woman asks firefighters to help ‘stoned’ raccoon’ (from the BBC, 240 words), ‘Plastic straw and cotton bud ban proposed’ (also from the BBC, 823 words) and ‘London’s first housing market slump since 2009 weighs on UK price growth’ (from the Torygraph, 471 words).

Translations

Research suggests that the use of translations, rather than definitions, may lead to more learning gains, but the problem with Knowble is that it relies entirely on Google Translate. Google Translate is fast improving. Take the first sentence of the ‘plastic straw and cotton bud’ article, for example. The French rendering is not bad, but it gets the word ‘bid’ completely wrong, translating it as ‘offre’ (= offer), where ‘tentative’ (= attempt) is needed. So, we can still expect a few problems with Google Translate…

One of the reasons that Google Translate has improved is that it no longer treats individual words as individual lexical items. It analyses groups of words and translates chunks or phrases (see, for example, the way it translates ‘as part of’). It doesn’t do word-for-word translation. Knowble, however, have set their software to ask Google for translations of each word as individual items, so the phrase ‘as part of’ is translated ‘comme’ + ‘partie’ + ‘de’. Whilst this example is comprehensible, problems arise very quickly. ‘Cotton buds’ (‘cotons-tiges’) become ‘coton’ + ‘bourgeon’ (= botanical shoots of cotton). Phrases like ‘in time’, ‘run into’, ‘sleep it off’, ‘take its course’, ‘fire station’ or ‘going on’ (all from the stoned raccoon text) all cause problems. In addition, Knowble are not using any parsing tools, so the system does not identify parts of speech, and further translation errors inevitably appear. In the short article of 240 words, about 10% are wrongly translated. Knowble claim to be using NLP tools, but there’s no sign of it here. They’re just using Google Translate rather badly.
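
To make the problem concrete, here’s a toy illustration in Python (not Knowble’s actual code; the mini-lexicons are invented) of the difference between word-by-word lookup and even the crudest phrase-first matching:

```python
# A toy illustration of why word-by-word translation breaks multi-word units.
# Neither function is Knowble's real code; the mini-lexicons are invented.

WORD_LEXICON = {"cotton": "coton", "buds": "bourgeon", "as": "comme",
                "part": "partie", "of": "de"}
PHRASE_LEXICON = {"cotton buds": "cotons-tiges", "as part of": "dans le cadre de"}

def translate_word_by_word(text):
    # What Knowble appears to do: one isolated lookup per token.
    return " ".join(WORD_LEXICON.get(w, w) for w in text.lower().split())

def translate_longest_match(text):
    # Greedy longest-match: try multi-word phrases first, then single words.
    tokens = text.lower().split()
    out, i = [], 0
    while i < len(tokens):
        for length in (3, 2, 1):
            chunk = " ".join(tokens[i:i + length])
            if length > 1 and chunk in PHRASE_LEXICON:
                out.append(PHRASE_LEXICON[chunk])
                i += length
                break
            if length == 1:
                out.append(WORD_LEXICON.get(chunk, chunk))
                i += 1
    return " ".join(out)

print(translate_word_by_word("cotton buds"))   # 'coton bourgeon' (wrong)
print(translate_longest_match("cotton buds"))  # 'cotons-tiges'
```

Even this naive longest-match pass gets ‘cotton buds’ right; querying Google one word at a time never can.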

Highlighted items

NLP tools of some kind are presumably being used to select the words that get underlined. Exactly how this works is unclear. On the whole, it seems that very high frequency words are ignored and that lower frequency words are underlined. Here, for example, is the list of words that were underlined in the stoned raccoon text. I’ve compared them with (1) the CEFR levels for these words in the English Profile Text Inspector, and (2) the frequency information from the Macmillan dictionary (more stars = more frequent). In the other articles, some extremely high frequency words were underlined (e.g. price, cost, year) while much lower frequency items were not.

It is, of course, extremely difficult to predict which items of vocabulary a learner will know, even if we have a fairly accurate idea of their level. Personal interests play a significant part, so, for example, some people at even a low level will have no problem with ‘cannabis’, ‘stoned’ and ‘high’, even if these are low frequency. First language, however, is a reasonably reliable indicator as cognates can be expected to be easy. A French speaker will have no problem with ‘appreciate’, ‘unique’ and ‘symptom’. A recommendation engine that can meaningfully personalize vocabulary suggestions will, at the very least, need to consider cognates.
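
For what it’s worth, a first-pass cognate check is easy to sketch. The snippet below guesses at cognates from surface similarity alone; the 0.7 threshold and the word pairs are my own illustrations, and a real recommendation engine would want proper etymological or statistical data:

```python
# A rough cognate detector: flag an English word as probably easy for a
# learner if it looks similar to its L1 translation. Threshold is a guess.
from difflib import SequenceMatcher

def likely_cognate(english, translation, threshold=0.7):
    ratio = SequenceMatcher(None, english.lower(), translation.lower()).ratio()
    return ratio >= threshold

for en, fr in [("unique", "unique"), ("symptom", "symptôme"),
               ("appreciate", "apprécier"), ("raccoon", "raton laveur")]:
    print(en, "/", fr, "->", likely_cognate(en, fr))
# The first three pairs clear the threshold; 'raccoon' does not.
```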

In short, the selection and underlining of vocabulary items, as it currently stands in Knowble, appears to serve no clear or useful function.

Vocabulary learning

Knowble offers a very short exercise for each article. They are of three types: word completion, dictation and drag and drop (see the example). The rationale for the selection of the target items is unclear, but, in any case, these exercises are tokenistic in the extreme and are unlikely to lead to any significant learning gains. More valuable would be the possibility of exporting items into a spaced repetition flash card system.

The claim that Knowble’s ‘learning effect is proven scientifically’ seems to me to be without any foundation. If there has been any proper research, it’s not signposted anywhere. Sure, reading lots of news articles (with a look-up function – if it works reliably) can only be beneficial for language learners, but they can do that with any decent dictionary running in the background.

Similar in many ways to en.news, which I reviewed in my last post, Knowble is another example of a technology-driven product that shows little understanding of language learning.

MosaLingua (with the obligatory capital letter in the middle) is a vocabulary app, available for iOS and Android. There are packages for a number of languages and English variations include general English, business English, vocabulary for TOEFL and vocabulary for TOEIC. The company follows the freemium model, with free ‘Lite’ versions and fuller content selling for €4.99. I tried the ‘Lite’ general English app, opting for French as my first language. Since the app is translation-based, you need to have one of the language pairings that are on offer (the other languages are currently Italian, Spanish, Portuguese and German).

The app I looked at is basically a phrase book with spaced repetition. Even though this particular app was general English, it appeared to be geared towards the casual business traveller. It uses the same algorithm as Anki, and users are taken through a sequence of (1) listening to an audio recording of the target item (word or phrase), with the possibility of comparing a recording of yourself with the recording provided, (2) standard bilingual flashcard practice, (3) a practice stage where you are given the word or phrase in your own language and have to unscramble words or letters to form the equivalent in English, and (4) a self-evaluation stage where you select one of four options (‘review’, ‘hard’, ‘good’, ‘perfect’); the choice made influences the re-presentation of the item within the spaced repetition cycle.
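
For readers unfamiliar with Anki’s scheduling, here’s a minimal sketch of the classic SM-2 algorithm on which it is based. The interval and ease numbers are the standard SM-2 defaults; the mapping of MosaLingua’s four buttons onto SM-2 grades is my guess, not documented behaviour.

```python
# A minimal SM-2-style scheduler. The GRADE mapping of MosaLingua's four
# buttons onto SM-2 quality scores is an assumption for this sketch.
from dataclasses import dataclass

GRADE = {"review": 0, "hard": 3, "good": 4, "perfect": 5}  # assumed mapping

@dataclass
class Card:
    interval: int = 0      # days until next review
    repetitions: int = 0
    ease: float = 2.5      # SM-2 'easiness factor'

def schedule(card, button):
    q = GRADE[button]
    if q < 3:                      # failed: start the cycle again
        card.repetitions, card.interval = 0, 1
    else:
        card.repetitions += 1
        if card.repetitions == 1:
            card.interval = 1
        elif card.repetitions == 2:
            card.interval = 6
        else:
            card.interval = round(card.interval * card.ease)
        card.ease = max(1.3, card.ease + 0.1 - (5 - q) * (0.08 + (5 - q) * 0.02))
    return card

card = Card()
for press in ["good", "good", "perfect", "hard"]:
    card = schedule(card, press)
    print(press, "-> next review in", card.interval, "day(s)")
```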

In addition to these words and phrases, there are a number of dialogues where you (1) listen to the dialogue (‘without worrying about understanding everything’), (2) are re-exposed to the dialogue with English subtitles, (3) see it again with subtitles in your own language, (4) practise it with standard flashcards.

The developers seem to be proud of their Mosa Learning Method®: they’ve registered this as a trademark. At its heart is spaced repetition. This is supplemented by what they refer to as ‘Active Recall’, the notion that things are better memorised if the learner has to make some sort of cognitive effort, however minimal, in recalling the target items. The principle is, at least to me, unquestionable, but the realisation (unjumbling words or letters) becomes rather repetitive and, ultimately, tedious. Then, there is what they call ‘metacognition’. Again, this is informed by research, even if the realisation (self-evaluation of learning difficulty into four levels) is extremely limited. Then there is the Pareto principle – the 80-20 rule. I couldn’t understand the explanation of what this has to do with the trademarked method. Here’s the MosaLingua explanation – figure it out for yourself:

Did you know that the 100 most common words in English account for half of the written corpus?

Evidently, you shouldn’t quit after learning only 100 words. Instead, you should concentrate on the most frequently used words and you’ll make spectacular progress. What’s more, globish (global English) has shown that it’s possible to express yourself using only 1500 well-chosen words (which would take less than 3 months with only 10 minutes per day with MosaLingua). Once you’ve acquired this base, MosaLingua proposes specialized vocabulary suited to your needs (the application has over 3000 words).

Finally, there’s some stuff about motivation and learner psychology. This boils down to: ‘That’s why we offer free learning help via email, presenting the Web’s best resources, as well as tips through bonus material or the learning community on the MosaLingua blog. We’ll give you all the tools you need to develop your own personalized learning method that is adapted to your needs.’ Some of these tips are not at all bad, but there’s precious little in the way of gamification or other forms of easy motivation.

In short, it’s all reasonably respectable, despite the predilection for sciency language in the marketing blurb. But what really differentiates this product from Anki, as the founder, Samuel Michelot, points out, is the content. MosaLingua has lists of vocabulary and phrases that were created by professors. The word ‘professors’ set my alarm bells ringing, and I wasn’t overly reassured when all I could find out about these ‘professors’ was the information about the MosaLingua team.

Despite what some people claim, content is, actually, rather important when it comes to language learning. I’ll leave you with some examples of MosaLingua content (one dialogue and a selection of words / phrases organised by level) and you can make up your own mind.

Dialogue

Hi there, have a seat. What seems to be the problem?

I haven’t been feeling well since this morning. I have a very bad headache and I feel sick.

Do you feel tired? Have you had cold sweats?

Yes, I’m very tired and have had cold sweats. I have been feeling like that since this morning.

Have you been out in the sun?

Yes, this morning I was at the beach with my friends for a couple hours.

OK, it’s nothing serious. It’s just a bad case of sunstroke. You must drink lots of water and rest. I’ll prescribe you something for the headache and some after sun lotion.

Great, thank you, doctor. Bye.

You’re welcome. Bye.

Level 1: could you help me, I would like a …, I need to …, I don’t know, it’s okay, I (don’t) agree, do you speak English, to drink, to sleep, bank, I’m going to call the police

Level 2: I’m French, cheers, can you please repeat that, excuse me how can I get to …, map, turn left, corner, far (from), distance, thief, can you tell me where I can find …

Level 3: what does … mean, I’m learning English, excuse my English, famous, there, here, until, block, from, to turn, street corner, bar, nightclub, I have to be at the airport tomorrow morning

Level 4: OK, I’m thirty (years old), I love this country, how do you say …, what is it, it’s a bit like …, it’s a sort of …, it’s as small / big as …, is it far, where are we, where are we going, welcome, thanks but I can’t, how long have you been here, is this your first trip to England, take care, district / neighbourhood, in front (of)

Level 5: of course, can I ask you a question, you speak very well, I can’t find the way, David this is Julia, we meet at last, I would love to, where do you want to go, maybe another day, I’ll miss you, leave me alone, don’t touch me, what’s you email

Level 6: I’m here on a business trip, I came with some friends, where are the nightclubs, I feel like going to a bar, I can pick you up at your house, let’s go to see a movie, we had a lot of fun, come again, thanks for the invitation

Adaptive learning providers make much of their ability to provide learners with personalised feedback and to provide teachers with dashboard feedback on the performance of both individuals and groups. All well and good, but my interest here is in the automated feedback that software could provide on very specific learning tasks. Scott Thornbury, in a recent talk, ‘Ed Tech: The Mouse that Roared?’, listed six ‘problems’ of language acquisition that educational technology for language learning needs to address. One of these he framed as follows: ‘The feedback problem, i.e. how does the learner get optimal feedback at the point of need?’, and suggested that technological applications ‘have some way to go.’ He was referring, not to the kind of feedback that dashboards can provide, but to the kind of feedback that characterises a good language teacher: corrective feedback (CF) – the way that teachers respond to learner utterances (typically those containing errors, but not necessarily restricted to these) in what Ellis and Shintani call ‘form-focused episodes’[1]. These responses may include a direct indication that there is an error, a reformulation, a request for repetition, a request for clarification, an echo with questioning intonation, etc. Basically, they are correction techniques.

These days, there isn’t really any debate about the value of CF. There is a clear research consensus that it can aid language acquisition. Discussing learning in more general terms, Hattie[2] claims that ‘the most powerful single influence enhancing achievement is feedback’. The debate now centres around the kind of feedback, and when it is given. Interestingly, evidence[3] has been found that CF is more effective in the learning of discrete items (e.g. some grammatical structures) than in communicative activities. Since it is precisely this kind of approach to language learning that we are more likely to find in adaptive learning programs, it is worth exploring further.

What do we know about CF in the learning of discrete items? First of all, it works better when it is explicit than when it is implicit (Li, 2010), although this needs to be nuanced. In immediate post-tests, explicit CF is better than implicit variations. But over a longer period of time, implicit CF provides better results. Secondly, formative feedback (as opposed to right / wrong testing-style feedback) strengthens retention of the learning items: this typically involves the learner repairing their error, rather than simply noticing that an error has been made. This is part of what cognitive scientists[4] sometimes describe as the ‘generation effect’. Whilst learners may benefit from formative feedback without repairing their errors, Ellis and Shintani (2014: 273) argue that the repair may result in ‘deeper processing’ and, therefore, assist learning. Thirdly, there is evidence that some delay in receiving feedback aids subsequent recall, especially over the longer term. Ellis and Shintani (2014: 276) suggest that immediate CF may ‘benefit the development of learners’ procedural knowledge’, while delayed CF is ‘perhaps more likely to foster metalinguistic understanding’. You can read a useful summary of a meta-analysis of feedback effects in online learning here, or you can buy the whole article here.

I have yet to see an online language learning program which can do CF well, but I think it’s a matter of time before things improve significantly. First of all, at the moment, feedback is usually immediate, or almost immediate. This is unlikely to change, for a number of reasons – foremost among them being the pride that ed tech takes in providing immediate feedback, and the fact that online learning is increasingly being conceptualised and consumed in bite-sized chunks, something you do on your phone between doing other things. What will change in better programs, however, is that feedback will become more formative. As things stand, tasks are usually of a very closed variety, with drag-and-drop being one of the most popular. Only one answer is possible and feedback is usually of the right / wrong-and-here’s-the-correct-answer kind. But tasks of this kind are limited in their value, and, at some point, tasks are needed where more than one answer is possible.

Here’s an example of a translation task from Duolingo, where a simple sentence could be translated into English in quite a large number of ways.

Decontextualised as it is, the sentence could be translated in the way that I have done it, although it’s unlikely. The feedback, however, is of relatively little help to the learner, who would benefit from guidance of some sort. The simple reason that Duolingo doesn’t offer useful feedback is that the programme is static. It has been programmed to accept certain answers (e.g. in this case both the present simple and the present continuous are acceptable), but everything else will be rejected. Why? Because it would take too long and cost too much to anticipate and enter in all the possible answers. Why doesn’t it offer formative feedback? Because in order to do so, it would need to identify the kind of error that has been made. If we can identify the kind of error, we can make a reasonable guess about the cause of the error, and select appropriate CF … this is what good teachers do all the time.

Analysing the kind of error that has been made is the first step in providing appropriate CF, and it can be done, with increasing accuracy, by current technology, but it requires a lot of computing. Let’s take spelling as a simple place to start. If you enter ‘I am makeing a basket for my mother’ in the Duolingo translation above, the program tells you ‘Nice try … there’s a typo in your answer’. Given the configuration of keyboards, it is highly unlikely that this is a typo. It’s a simple spelling mistake, and teachers recognise it as such because they see it so often. For software to achieve the same insight, it would need, as a start, to trawl a large English dictionary database and a large tagged database of learner English. The process is quite complicated, but it’s perfectly do-able, and learners could be provided with CF in the form of a ‘spelling hint’.
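
Here’s a minimal sketch of that ‘spelling hint’ idea, with a toy word list standing in for the large dictionary and learner-corpus databases that would really be needed:

```python
# If the learner's word is one edit away from a dictionary word, treat it as
# a spelling slip rather than an unknown word. Toy dictionary, invented hint.

DICTIONARY = {"i", "am", "a", "for", "my", "make", "making", "basket", "mother"}

def edit_distance(a, b):
    # Standard dynamic-programming Levenshtein distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def spelling_hint(word):
    if word in DICTIONARY:
        return None
    close = [w for w in DICTIONARY if edit_distance(word, w) == 1]
    return f"Check your spelling: did you mean '{close[0]}'?" if close else None

print(spelling_hint("makeing"))  # suggests 'making'
```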

Rather more difficult is the error illustrated in my first screen shot. What’s the cause of this ‘error’? Teachers know immediately that this is probably a classic confusion of ‘do’ and ‘make’. They know that the French verb ‘faire’ can be translated into English as ‘make’ or ‘do’ (among other possibilities), and the error is a common language transfer problem. Software could do the same thing. It would need a large corpus (to establish that ‘make’ collocates with ‘a basket’ more often than ‘do’), a good bilingualised dictionary (plenty of these now exist), and a tagged database of learner English. Again, appropriate automated feedback could be provided in the form of some sort of indication that ‘faire’ is only sometimes translated as ‘make’.
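
Again, a sketch makes the idea concrete. The collocation counts below are invented; a real system would query a large corpus:

```python
# Choose between 'make' and 'do' for a noun by corpus frequency, and turn a
# mismatch into transfer-aware feedback. Counts and wording are invented.

VERB_NOUN_COUNTS = {("make", "basket"): 1200, ("do", "basket"): 40,
                    ("make", "homework"): 90, ("do", "homework"): 5100}

def better_verb(noun, candidates=("make", "do")):
    return max(candidates, key=lambda v: VERB_NOUN_COUNTS.get((v, noun), 0))

def feedback(verb, noun, l1_verb="faire"):
    best = better_verb(noun)
    if verb != best:
        return (f"'{l1_verb}' is only sometimes translated as '{verb}': "
                f"with '{noun}', English prefers '{best}'.")
    return None

print(feedback("do", "basket"))    # flags the classic make / do transfer error
print(feedback("do", "homework"))  # None: 'do homework' is fine
```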

These are both relatively simple examples, but it’s easy to think of others that are much more difficult to analyse automatically. Duolingo rejects ‘I am making one basket for my mother’: it’s not very plausible, but it’s not wrong. Teachers know why learners do this (again, it’s probably a transfer problem) and know how to respond (perhaps by saying something like ‘Only one?’). Duolingo also rejects ‘I making a basket for my mother’ (a common enough error), but is unable to provide any help beyond the correct answer. Automated CF could, however, be provided in both cases if more tools are brought into play. Multiple parsing machines (one is rarely accurate enough on its own) and semantic analysis will be needed. Both the range and the complexity of the available tools are increasing so rapidly (see here for the sort of research that Google is doing and here for an insight into current applications of this research in language learning) that Duolingo-style right / wrong feedback will very soon seem positively antediluvian.

One further development is worth mentioning here, and it concerns feedback and gamification. Teachers know from the way that most learners respond to written CF that they are usually much more interested in knowing what they got right or wrong, rather than the reasons for this. Most students are more likely to spend more time looking at the score at the bottom of a corrected piece of written work than at the laborious annotations of the teacher throughout the text. Getting students to pay close attention to the feedback we provide is not easy. Online language learning systems with gamification elements, like Duolingo, typically reward learners for getting things right, and getting things right in the fewest attempts possible. They encourage learners to look for the shortest or cheapest route to finding the correct answers: learning becomes a sexed-up form of test. If, however, the automated feedback is good, this sort of gamification encourages the wrong sort of learning behaviour. Gamification designers will need to shift their attention away from the current concern with right / wrong, and towards ways of motivating learners to look at and respond to feedback. It’s tricky, because you want to encourage learners to take more risks (and reward them for doing so), but it makes no sense to penalise them for getting things right. The probable solution is to have a dual points system: one set of points for getting things right, another for employing positive learning strategies.
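
A dual points system might look something like this (the point values and the behaviour signals are invented for illustration):

```python
# One score for accuracy, a separate one for learning behaviour, so that a
# wrong but thoughtful attempt still earns something. All values invented.

def score_attempt(correct, viewed_feedback, attempted_harder_variant):
    accuracy_points = 10 if correct else 0
    strategy_points = 0
    if viewed_feedback:
        strategy_points += 5   # reward engaging with the CF, right or wrong
    if attempted_harder_variant:
        strategy_points += 5   # reward risk-taking, even when it fails
    return {"accuracy": accuracy_points, "strategy": strategy_points}

print(score_attempt(correct=False, viewed_feedback=True,
                    attempted_harder_variant=True))
# {'accuracy': 0, 'strategy': 10} -- a wrong answer can still earn points
```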

The provision of automated ‘optimal feedback at the point of need’ may not be quite there yet, but it seems we’re on the way for some tasks in discrete-item learning. There will probably always be some teachers who can outperform computers in providing appropriate feedback, in the same way that a few top chess players can beat ‘Deep Blue’ and its scions. But the rest of us had better watch our backs: in the provision of some kinds of feedback, computers are catching up with us fast.

[1] Ellis, R. & Shintani, N. (2014) Exploring Language Pedagogy through Second Language Acquisition Research. Abingdon: Routledge, p. 249

[2] Hattie, J. (2009) Visible Learning. Abingdon: Routledge, p. 12

[3] Li, S. (2010) ‘The effectiveness of corrective feedback in SLA: a meta-analysis’. Language Learning 60/2: 309-365

[4] Brown, P.C., Roediger, H.L. & McDaniel, M.A. (2014) Make It Stick. Cambridge, Mass.: Belknap Press

In the words of its founder and CEO, self-declared ‘visionary’ Claudio Santori, Bliu Bliu is ‘the only company in the world that teaches languages we don’t even know’. This claim, which was made during a pitch for funding in October 2014, tells us a lot about the Bliu Bliu approach. It assumes that there exists a system by which all languages can be learnt / taught, and that the particular features of any given language are not of any great importance. It’s questionable, to say the least, and Santori fails to inspire confidence when he says, in the same pitch, ‘you join Bliu Bliu, you use it, we make something magical, and after a few weeks you can understand the language’.

The basic idea behind Bliu Bliu is that a language is learnt by using it (e.g. by reading or listening to texts), but that the texts need to be selected so that you know the great majority of words within them. The technological challenge, therefore, is to find (online) texts that contain the vocabulary that is appropriate for you. After that, Santori explains, ‘you progress, you input more words and you will get more text that you can understand. Hours and hours of conversations you can fully understand and listen. Not just stupid exercise from stupid grammar book. Real conversation. And in all of them you know 100% of the words. […] So basically you will have the same opportunity that a kid has when learning his native language. Listen hours and hours of native language being naturally spoken at you…at a level he/she can understand plus some challenge, everyday some more challenge, until he can pick up words very very fast’ (sic).
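
Stripped of the magic, the core mechanism Santori describes is easy to sketch: score each candidate text by the proportion of its running words that the learner claims to know, and keep only the texts above some threshold. The 0.95 figure below is my illustration; Bliu Bliu doesn’t publish theirs.

```python
# Select texts by known-word coverage: the learner's 'known' set comes from
# their clicks; the threshold is an invented stand-in for Bliu Bliu's.
import re

def coverage(text, known):
    tokens = re.findall(r"[a-z']+", text.lower())
    return sum(t in known for t in tokens) / max(len(tokens), 1)

def comprehensible(texts, known, threshold=0.95):
    return [t for t in texts if coverage(t, known) >= threshold]

known = {"the", "world", "forgetting", "by", "forgot"}
texts = ["The world forgetting, by the world forgot.",
         "Polysemy undermines naive coverage counts."]
print(comprehensible(texts, known))  # only the first text passes
```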


On entering the site, you are invited to take a test. In this, you are shown a series of words and asked to say if you find them ‘easy’ or ‘difficult’. There were 12 words in total, and each time I clicked ‘easy’. The system then tells you how many words it thinks you know, and offers you one or more words to click on. Here are the words I was presented with and, to the right, the number of words that Bliu Bliu thinks I know, after clicking ‘easy’ on the preceding word.

hello 4145
teenager 5960
soap, grape 7863
receipt, washing, skateboard 9638
motorway, tram, luggage, footballer, weekday 11061


Finally, I was asked about my knowledge of other languages. I said that my French was advanced and that my Spanish and German were intermediate. On the basis of this answer, I was now told that Bliu Bliu thinks that I know 11,073 words.

Eight of the words in the test are starred in the Macmillan dictionaries, meaning they are within the most frequent 7,500 words in English. Of the other four, skateboard, footballer and tram are very international words. The last, weekday, is a readily understandable compound made up of two extremely high frequency words. How could Bliu Bliu know, with such uncanny precision, that I know 11,073 words from a test like this? I decided to try the test for French. Again, I clicked ‘easy’ for each of the twelve words that was offered. This time, I was offered a very different set of words, with low frequency items like polynôme, toponymie, diaspora, vectoriel (all of which are cognate with English words), along with the rather surprising vichy (which should have had a capital letter, as it is a proper noun). Despite finding all these words easy, I was mortified to be told that I only knew 6,546 words in French.
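
My guess is that something like frequency-band extrapolation is going on behind the scenes: sample a few words per frequency band, then scale each band’s result up to the whole band. The sketch below (band sizes invented; Bliu Bliu’s real method is not published) shows how a handful of clicks can produce a precise-looking total that mostly reflects the bands tested, not the learner:

```python
# Extrapolate a vocabulary size from per-band 'known' fractions. The bands
# and fractions are invented for illustration.

BANDS = [(1, 2000), (2001, 4000), (4001, 8000), (8001, 16000)]  # freq. ranks

def estimate_vocab(known_fraction_per_band):
    total = 0.0
    for (lo, hi), frac in zip(BANDS, known_fraction_per_band):
        total += (hi - lo + 1) * frac
    return round(total)

print(estimate_vocab([1.0, 1.0, 1.0, 1.0]))  # 16000: 'easy' on everything
print(estimate_vocab([1.0, 0.9, 0.6, 0.2]))  # 7800: a more plausible profile
```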

I needn’t have bothered with the test, anyway. Irrespective of level, you are offered vocabulary sets of high frequency words. Examples of sets I was offered included [the, be, of, and, to], [way, state, say, world, two], [may, man, hear, said, call] and [life, down, any, show, t]. Bliu Bliu then gives you a series of short texts that include the target words. You can click on any word you don’t know and you are given either a definition or a translation (I opted for French translations). There is no task beyond simply reading these texts. Putting aside for the moment the question of why I was being offered these particular words when my level is advanced, how does the software perform?

The vast majority of the texts are short quotes from brainyquote.com, and here is the first problem. Quotes tend to be pithy and often play with words: their comprehensibility is not always a function of the frequency of the words they contain. For the word ‘say’, for example, the texts included the Shakespearean quote It will have blood, they say; blood will have blood. For the word ‘world’, I was offered this line from Alexander Pope: The world forgetting, by the world forgot. Not, perhaps, the best way of learning a couple of very simple, high-frequency words. But this was the least of the problems.

The system operates on a word level. It doesn’t recognise phrases or chunks, or even phrasal verbs. So, a word like ‘down’ (in one of the lists above) is presented without consideration of its multiple senses. The first set of sentences I was asked to read for ‘down’ included: I never regretted what I turned down, You get old, you slow down, I’m Creole, and I’m down to earth, I never fall down. I always fight, I like seeing girls throw down and I don’t take criticism lying down. Not exactly the best way of getting to grips with the word ‘down’ if you don’t know it!

You may have noticed the inclusion of the word ‘t’ in one of the lists above. Here are the example sentences for practising this word: (1) Knock the ‘t’ off the ‘can’t’, (2) Sometimes reality T.V. can be stressful, (3) Argentina Debt Swap Won’t Avoid Default, (4) OK, I just don’t understand Nethanyahu, (5) Venezuela: Hell on Earth by Walter T Molano and (6) Work will win when wishy washy wishing won t. I paid €7.99 for one month of this!

The translation function is equally awful. With high frequency words with multiple meanings, you get a long list of possible translations, but no indication of which one is appropriate for the context you are looking at. With other words, it is sometimes, simply, wrong. For example, in the sentence, Heaven lent you a soul, Earth will lend a grave, the translation for ‘grave’ was only for the homonymous adjective. In the sentence There’s a bright spot in every dark cloud, the translation for ‘spot’ was only for verbs. And the translation for ‘but’ in We love but once, for once only are we perfectly equipped for loving was ‘mais’ (not at all what it means here!). The translation tool couldn’t handle the first ‘for’ in this sentence, either.

Bliu Bliu’s claim that ‘Bliu Bliu knows you very well, every single word you know or don’t know’ is manifest nonsense and reveals a serious lack of understanding about what it means to know a word. However, as you spend more time on the system, a picture of your vocabulary knowledge is certainly built up. The texts that are offered begin to move away from the one-liners from brainyquote.com. As reading (or listening to recorded texts) is the only learning task that is offered, the intrinsic interest of the texts is crucial. Here, again, I was disappointed. Texts that I was offered were sourced from IEEE Spectrum (The World’s Largest Professional Association for the Advancement of Technology), infowars.com (the home of the #1 Internet News Show in the World), Latin America News and Analysis, the Google official blog (Meet 15 Finalists and Science in Action Winner for the 2013 Google Science Fair), MLB Trade Rumors (a clearinghouse for relevant, legitimate baseball rumors), and a long text entitled Robert Waldmann: Policy-Relevant Macro Is All in Samuelson and Solow (1960) from a blog called Brad DeLong’s Grasping Reality… with the Neural Network of a Moderately-Intelligent Cephalopod.

There is more curated content (selected from a menu which includes sections entitled ‘18+’ and ‘Controversial Jokes’). In these texts, words that the system thinks you won’t know (most of the proper nouns for example) are highlighted. And there is a small library of novels, again, where predicted unknown words are highlighted in pink. These include Dostoyevsky, Kafka, Oscar Wilde, Gogol, Conan Doyle, Joseph Conrad, Oblomov, H.P. Lovecraft, Joyce, and Poe. You can also upload your own texts if you wish.

But, by this stage, I’d had enough and I clicked on the button to cancel my subscription. I shouldn’t have been surprised when the system crashed and a message popped up saying the system had encountered an error.

Like so many ‘language learning’ start-ups, Bliu Bliu seems to know a little, but not a lot about language learning. The Bliu Bliu blog has a video of Stephen Krashen talking about comprehensible input (it is misleadingly captioned ‘Stephen Krashen on Bliu Bliu’) in which he says that we all learn languages the same way, and that is when we get comprehensible input in a low anxiety environment. Influential though it has been, Krashen’s hypothesis remains a hypothesis, and it is generally accepted now that comprehensible input may be necessary, but it is not sufficient for language learning to take place.

The hypothesis hinges, anyway, on a definition of what is meant by ‘comprehensible’ and no one has come close to defining what precisely this means. Bliu Bliu has falsely assumed that comprehensibility can be determined by self-reporting of word knowledge, and this assumption is made even more problematic by the confusion of words (as sequences of letters) with lexical items. Bliu Bliu takes no account of lexical grammar or collocation (fundamental to any real word knowledge).

The name ‘Bliu Bliu’ was inspired by an episode from ‘Friends’ where Joey tries and fails to speak French. In the episode, according to the ‘Friends’ wiki, ‘Phoebe helps Joey prepare for an audition by teaching him how to speak French. Joey does not progress well and just speaks gibberish, thinking he’s doing a great job. Phoebe explains to the director in French that Joey is her mentally disabled younger brother so he’ll take pity on Joey.’ Bliu Bliu was an unfortunately apt choice of name.


Lingua.ly is an Israeli start-up which, in its own words, ‘is an innovative new learning solution that helps you learn a language from the open web’. Its platform ‘uses big-data paired with spaced repetition to help users bootstrap their way to fluency’. You can read more of this kind of adspeak at the Lingua.ly blog or the Wikipedia entry, which seems to have been written by someone from the company.

How does it work? First of all, state the language you want to study (currently there are 10 available) and the language you already speak (currently there are 18 available). Then, there are three possible starting points: insert a word which you want to study, click on a word in any web text or click on a word in one of the suggested reading texts. This then brings up a bilingual dictionary entry which, depending on the word, will offer a number of parts of speech and a number of translated word senses. Click on the appropriate part of speech and the appropriate word sense, and the item will be added to your personal word list. Once you have a handful of words in your word list, you can begin practising these words. Here there are two options. The first is a spaced repetition flashcard system. It presents the target word and 8 different translations in your own language, and you have to click on the correct option. Like most flashcard apps, spaced repetition software determines when and how often you will be re-presented with the item.
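
The flashcard mechanic itself is simple enough to sketch (the word list below is invented):

```python
# Build one multiple-choice card: the target word, its correct translation,
# and distractors sampled from the rest of the learner's word list.
import random

WORD_LIST = {"pan": "éreinter", "deal": "traiter", "deem": "juger",
             "pertain": "porter", "heave": "soulever", "grave": "tombe",
             "spot": "tache", "bid": "tentative", "slump": "effondrement"}

def make_card(target, n_options=8):
    correct = WORD_LIST[target]
    distractors = random.sample(
        [v for k, v in WORD_LIST.items() if k != target], n_options - 1)
    options = distractors + [correct]
    random.shuffle(options)
    return target, options, correct

word, options, answer = make_card("pan")
print(word, options)
print("correct:", answer)
```

As the rest of this post shows, the hard part is not the card mechanics but attaching the right translation to the right word sense.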

The second option is to read an authentic web text which contains one or more of your target items. The company calls this ‘digital language immersion, a method of employing a virtual learning environment to simulate the language learning environment’. The app ‘relies on a number of applied linguistics principles, including the Natural Approach and Krashen’s Input Hypothesis’, according to the Wikipedia entry. Apparently, the more you use the app, the more it knows about you as a learner, and the better able it is to select texts that are appropriate for you. As you read these texts, of course, you can click on more words and add them to your word list.

I tried out Lingua.ly, logging on as a French speaker wanting to learn English, and clicking on words as the fancy took me. I soon had a selection of texts to read. Users are offered a topic menu consisting of the following: arts, business, education, entertainment, food, weird, beginners, green, health, living, news, politics, psychology, religion, science, sports, style. The sources are varied and not at all bad – Christian Science Monitor, The Grauniad, Huffington Post, Time, for example – and there are many very recent articles. Some texts were interesting; others seemed very niche. I began clicking on more words that I thought would be interesting to explore, and here my problems began.

I quickly discovered that the system could only deal with single words, so phrasal verbs were off limits. One text I looked at had the phrasal verb ‘ripping off’, and although I could get translations for ‘ripping’ and ‘off’, this was obviously not terribly helpful. Learners who don’t know the phrasal verb ‘ripped off’ do not necessarily know that it is a phrasal verb, so the translations offered for the two parts of the verb are worse than unhelpful; they are actually misleading. Proper nouns were also a problem, although some of the more common ones were recognised. But the system failed to recognise many proper nouns for what they were, and offered me translations of homonymous nouns. With some words (e.g. ‘stablemate’), the dictionary offered only one translation (in this case, the literal translation), but not the translation (the much more common idiomatic one) that was needed in the context in which I came across the word. With others (e.g. ‘pertain’), I was offered a list of translations which included the one that was appropriate in the context, but, unfortunately, this is the French word ‘porter’, which has so many possible meanings that, if you genuinely didn’t know the word, you would be none the wiser.

Once you’ve clicked on an appropriate part of speech and translation (if you can find one), the dictionary look-up function offers both photos and example sentences. Here again there were problems. I’d clicked on the verb ‘pan’ which I’d encountered in the context of a critic panning a book they’d read. I was able to select an appropriate translation, but when I got to the photos, I was offered only multiple pictures of frying pans. There were no example sentences for my meaning of ‘pan’: instead, I was offered multiple sentences about cooking pans, and one about Peter Pan. In other cases, the example sentences were either unhelpful (e.g. the example for ‘deal’ was ‘I deal with that’) or bizarre (e.g. the example sentence for ‘deemed’ was ‘The boy deemed that he cheated in the examination’). For some words, there were no example sentences at all.

Primed in this way, I was intrigued to see how the system would deal with the phrase ‘heaving bosoms’ which came up in one text. ‘Heaving bosoms’ is an interesting case. It’s a strong collocation, and, statistically, ‘heaving bosoms’ plural are much more frequent than ‘a heaving bosom’ singular. ‘Heaving’, as an adjective, only really collocates with ‘bosoms’. You don’t find ‘heaving’ collocating with any of the synonyms for ‘bosoms’. The phrase is also heavily connoted, strongly associated with romance novels, and often used with humorous intent. Finally, there is also a problem of usage with ‘bosom’ / ‘bosoms’: men or women, one or two – all in all, it’s a tricky word.

Lingua.ly was no help at all. There was no dictionary entry for an adjectival ‘heaving’, and the translations for the verb ‘heave’ were amusing, but less than appropriate. As for ‘bosom’, there were appropriate translations (‘sein’ and ‘poitrine’), but absolutely no help with how the word is actually used. Example sentences, which are clearly not tagged to the translation which has been chosen, included ‘Or whether he shall die in the bosom of his family or neglected and despised in a foreign land’ and ‘Can a man take fire in his bosom, and his clothes not be burned?’

Lingua.ly has a number of problems. First off, its software hinges on a dictionary (it’s a Babylon dictionary) which can only deal with single words, is incomplete, and does not deal with collocation, connotation, style or register. As such, it can only be of limited value for receptive use, and of no value whatsoever for productive use. Secondly, the web corpus that it is using simply isn’t big enough. Thirdly, it doesn’t seem to have any Natural Language Processing tool which could enable it to deal with meanings in context. It can’t disambiguate words automatically. Such software does now exist, and Lingua.ly desperately needs it.

Unfortunately, there are other problems, too. The flashcard practice is very repetitive and soon becomes boring. With eight translations to choose from, you have to scroll down the page to see them all. But there’s a timer mechanism, and I frequently timed out before being able to select the correct translation (partly because words are presented with no context, so you have to remember the meaning which you clicked in an earlier study session). The texts do not seem to be graded for level. There is no indication of word frequency or word sense frequency. There is just one gamification element (a score card), but there is no indication of how scores are achieved. Last, but certainly not least, the system is buggy. My word list disappeared into the cloud earlier today, and has not been seen since.

I think it’s a pity that Lingua.ly is not better. The idea behind it is good – even if the references to Krashen are a little unfortunate. The company says that they have raised $800,000 in funding, but with their freemium model they’ll be desperately needing more, and they’ve gone to market too soon. One reviewer, Language Surfer, wrote a withering review of Lingua.ly’s Arabic program (‘it will do more harm than good to the Arabic student’), and Brendan Wightman, commenting at eltjam, called it ‘dull as dish water, […] still very crude, limited and replete with multiple flaws’. But, at least, it’s free.

There is a lot that technology can do to help English language learners develop their reading skills. The internet makes it possible for learners to read an almost limitless number of texts that will interest them, and these texts can be evaluated for readability and, therefore, suitability for level (see here for a useful article). RSS opens up exciting possibilities for narrow reading, and the positive impact of multimedia-enhanced texts was researched many years ago. There are good online bilingual dictionaries and other translation tools. There are apps that go with graded readers (see this review in the Guardian) and there are apps that can force you to read at a certain speed. And there is more. All of this could very effectively be managed on a good learning platform.
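
As an example of the kind of readability evaluation mentioned above, here’s a crude Flesch Reading Ease scorer. The formula is standard; the syllable counter is deliberately naive, and real tools use dictionaries and much better heuristics:

```python
# Flesch Reading Ease: 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)
# Higher scores mean easier texts. Syllable counting here is a rough heuristic.
import re

def count_syllables(word):
    vowel_groups = re.findall(r"[aeiouy]+", word.lower())
    n = len(vowel_groups)
    if word.lower().endswith("e") and n > 1:  # drop most silent final e's
        n -= 1
    return max(n, 1)

def flesch_reading_ease(text):
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (len(words) / sentences) - 84.6 * (syllables / len(words))

print(round(flesch_reading_ease("The cat sat on the mat."), 1))   # high = easy
print(round(flesch_reading_ease(
    "Readability formulas approximate comprehensibility imperfectly."), 1))
```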

Could adaptive software add another valuable element to reading skills development?

Adaptive reading programs are spreading in primary education in the US and, with some modifications, could be used in ELT courses for younger learners and for those whose first languages do not use the Roman alphabet. One of the most well-known has been developed by Lexia Learning®, a company that won a $500,000 grant from the Gates Foundation last year. Lexia Learning® was bought by Rosetta Stone® for $22.5 million in June 2013.

One of their products, Lexia Reading Core5, ‘provides explicit, systematic, personalized learning in the six areas of reading instruction, and delivers norm-referenced performance data and analysis without interrupting the flow of instruction to administer a test. Designed specifically to meet the Common Core and the most rigorous state standards, this research-proven, technology-based approach accelerates reading skills development, predicts students’ year-end performance and provides teachers data-driven action plans to help differentiate instruction’.


The predictable claim that it is ‘research-proven’ has not convinced everyone. Richard Allington, a professor of literacy studies at the University of Tennessee and a past president of both the International Reading Association and the National Reading Association, has said that all the companies that have developed this kind of software ‘come up with evidence – albeit potential evidence – that kids could improve their abilities to read by using their product. It’s all marketing. They’re selling a product. Lexia is one of these programs. But there virtually are no commercial programs that have any solid, reliable evidence that they improve reading achievement.’[1] He has argued that the $12 million that has been spent on the Lexia programs would have been better spent on a national program, developed at Ohio State University, that matches specially trained reading instructors with students known to have trouble learning to read.

But what about ELT? For an adaptive program like Lexia’s to work, reading skills need to be broken down in a similar way to the diagram shown above. Let’s get some folk linguistics out of the way first. The sub-skills of reading are not skimming, scanning, inferring meaning from context, etc. These are strategies that readers adopt voluntarily in order to understand a text better. If a reader uses these strategies in their own language, they are likely to transfer these strategies to their English reading. It seems that ELT instruction in strategy use has only limited impact, although this kind of training may be relevant to preparation for exams. This insight is taking a long time to filter down to course and coursebook design, but there really isn’t much debate[2]. Any adaptive ELT reading program that confuses reading strategies with reading sub-skills is going to have big problems.

What, then, are the sub-skills of reading? In what ways could reading be broken down into a skill tree so that it is amenable to adaptive learning? Researchers have provided different answers. Munby (1978), for example, listed 19 reading microskills; Heaton (1988) listed 14. However, a bigger problem is that other researchers (e.g. Lunzer 1979, Rost 1993) have failed to find evidence that distinct sub-skills actually exist. While it is easier to identify sub-skills for very low level readers (especially for those whose own language is very different from English), it is simply not possible to do so for higher levels.

Reading in another language is a complex process which involves both top-down and bottom-up strategies, is intimately linked to vocabulary knowledge and requires the activation of background, cultural knowledge. Reading ability, in the eyes of some researchers, is unitary or holistic. Others prefer to separate things into two components: word recognition and comprehension[3]. Either way, a consensus is beginning to emerge that teachers and learners might do better to focus on vocabulary extension (and this would include extensive reading) than to attempt to develop reading programs that assume the multidivisible nature of reading.

All of which means that adaptive learning software and reading skills in ELT are unlikely bedfellows. To be sure, an increased use of technology (as described in the first paragraph of this post) in reading work will generate a lot of data about learner behaviours. Analysis of this data may lead to actionable insights – or it may not! It will be interesting to find out.

 

[1] http://www.khi.org/news/2013/jun/17/budget-proviso-reading-program-raises-questions/

[2] See, for example, Walter, C. & M. Swan. 2008. ‘Teaching reading skills: mostly a waste of time?’ in Beaven, B. (ed.) IATEFL 2008 Exeter Conference Selections. (Canterbury: IATEFL). Or go back further to Alderson, J. C. 1984 ‘Reading in a foreign language: a reading problem or a language problem?’ in J.C. Alderson & A. H. Urquhart (eds.) Reading in a Foreign Language (London: Longman)

[3] For a useful summary of these issues, see ‘Reading abilities and strategies: a short introduction’ by Feng Liu (International Education Studies 3 / 3 August 2010) www.ccsenet.org/journal/index.php/ies/article/viewFile/6790/5321

busuu is an online language learning service. I did not refer to it in the ‘guide’ because it does not seem to use any adaptive learning software yet, but this is set to change. According to founder Bernhard Niesner, the company is already working on incorporating adaptive software.

A few statistics will show the significance of busuu. The site currently has over 40 million users (El Pais, 8 February 2014) and is growing by 40,000 a day. The basic service is free, but the premium service costs €69.99 a year. The company will not give detailed user statistics, but say that ‘hundreds of thousands’ are paying for the premium service, that turnover was a 7-figure number last year and will rise to 8 figures this year.

It is easy to understand why traditional publishers might be worried about competition like busuu and why they are turning away from print-based courses.

Busuu offers 12 languages but, as a translation-based service, any one of them can only be studied if you speak one of the others on offer. The levels of the different courses are tagged to the CEFR.


In some ways, busuu is not so different from competitors like Duolingo. Students are presented with bilingual vocabulary sets, accompanied by pictures, which are tested in a variety of ways. As with Duolingo, some of this is a little strange. For German at level A1, I did a vocabulary set on ‘pets’ which presented the German words for a ferret, a tortoise and a guinea-pig, among others. There are also written and recorded dialogues, some of which are surreal.

Child: Mum, look over there, there’s a dog without a collar, can we take it?

Mother: No, darling, our house is too small to have a dog.

Child: Mum your bedroom is very big, it can sleep with dad and you.

Mother: Come on, I’ll buy you a toy dog.

The dialogues are followed up by multiple-choice questions which test your memory of the dialogue. There are also writing exercises in which you are given a picture from National Geographic and asked to write about it. It is not always clear what you are supposed to write: what would you say about a photo showing a large number of parachutes in the sky, beyond ‘I can see a lot of parachutes’?

There are also many gamification elements. There is a learning ‘carrot’ where you can set your own learning targets, and users can earn ‘busuuberries’ which can then be traded in for animations in a ‘language garden’.


But in one significant respect, busuu differs from its competitors: it combines the usual vocabulary, grammar and dialogue work with social networking. Users can interact through text or video, and feedback on written work comes from other users, who are encouraged to provide it by the award of ‘busuuberries’. My own experience of this was mixed, but the potential is clear.

We will have to wait and see what busuu does with adaptive software and with the big data it is generating. For the moment, its interest lies in illustrating what could be done with a learning platform and adaptive software. The big ELT publishers know they have a new kind of competition and, since they have a lot more money to invest than busuu, we have to assume that what they launch a few years from now will do everything that busuu does, and more. Meanwhile, busuu are working on site redesign and adaptivity. They would do well, too, to sort out their syllabus!

‘Adaptive learning’ can mean slightly different things to different people. According to one provider of adaptive learning software, Smart Sparrow (https://www.smartsparrow.com/adaptive-elearning), it is ‘an online learning and teaching medium that uses an Intelligent Tutoring System to adapt online learning to the student’s level of knowledge. Adaptive eLearning provides students with customised educational content and the unique feedback that they need, when they need it.’ Essentially, it is software that analyses the work a student does online and tailors further learning tasks to the individual learner’s needs (as analysed by the software).
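To make that loop concrete, here is a minimal sketch in Python: record each response, update an estimate of the learner’s mastery of each topic, and serve the weakest topic next. This is not Smart Sparrow’s (or anyone else’s) actual algorithm; the topics, the update rule and the learning rate are all invented for illustration.

    from dataclasses import dataclass, field

    @dataclass
    class LearnerModel:
        # Estimated mastery (0.0 to 1.0) per topic; unseen topics default to 0.5.
        mastery: dict = field(default_factory=dict)

        def update(self, topic: str, correct: bool, rate: float = 0.2) -> None:
            # Nudge the estimate towards 1 after a correct answer, towards 0 otherwise.
            current = self.mastery.get(topic, 0.5)
            target = 1.0 if correct else 0.0
            self.mastery[topic] = current + rate * (target - current)

        def next_topic(self, topics: list) -> str:
            # Serve the topic with the lowest estimated mastery first.
            return min(topics, key=lambda t: self.mastery.get(t, 0.5))

    model = LearnerModel()
    model.update('past simple', correct=False)
    model.update('articles', correct=True)
    print(model.next_topic(['past simple', 'articles']))  # -> past simple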

A relatively simple example of adaptive language learning is Duolingo, a free online service that currently offers seven languages, including English (www.duolingo.com/ ), and had over 10 million users in November 2013. Learners progress through a series of translation, dictation and multiple-choice exercises that are organised into a ‘skill tree’ of vocabulary and grammar areas. Because translation plays such a central role, the program is only suitable for speakers of one of the languages on offer who are learning another of those languages. Duolingo’s own blog describes the approach in the following terms: ‘Every time you finish a Duolingo lesson, translation, test, or practice session, you provide valuable data about what you know and what you’re struggling with. Our system uses this info to plan future lessons and select translation tasks specifically for your skills and needs. Similar to how an online store uses your previous purchases to customize your shopping experience, Duolingo uses your learning history to customize your learning experience’ (http://blog.duolingo.com/post/41960192602/duolingos-data-driven-approach-to-education).

Example of a ‘skill tree’ from http://www.duolingo.com
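For readers who like to see the mechanics, a ‘skill tree’ is essentially a dependency graph: a skill becomes available only once all of its prerequisites have been passed. The Python sketch below is a guess at the general idea, not Duolingo’s actual data structure, and the skill names and prerequisites are made up.

    # Each skill maps to the skills that must be passed before it unlocks.
    SKILL_TREE = {
        'Basics 1': [],
        'Basics 2': ['Basics 1'],
        'Food':     ['Basics 2'],
        'Animals':  ['Basics 2'],
        'Plurals':  ['Food', 'Animals'],
    }

    def unlocked(passed: set) -> list:
        # A skill is available if it is not yet passed and all its prerequisites are.
        return [skill for skill, prereqs in SKILL_TREE.items()
                if skill not in passed and all(p in passed for p in prereqs)]

    print(unlocked({'Basics 1'}))              # -> ['Basics 2']
    print(unlocked({'Basics 1', 'Basics 2'}))  # -> ['Food', 'Animals']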

For anyone with a background in communicative language teaching, the experience can be slightly surreal. Examples of sentences that need to be translated include ‘The dog eats the bird’, ‘the boy has a cow’ and ‘the fly is eating bread’. The system allows you to compete and communicate with other learners, and to win points and rewards (see ‘Gamification’ in the next post).

Duolingo describes its crowd-sourced, free, adaptive approach as ‘pretty unique’, but uniquely unique it is not. It is essentially a kind of memory trainer, and there are a number of these on the market. One of the best-known is Cerego’s cloud-based iKnow!, which describes itself as a ‘memory management platform’. Particularly strong in Japan, it charges corporate and individual customers a monthly subscription for access to its English, Chinese and Japanese language programs. A free trial of some of the products is available at http://iknow.jp/ and I experimented with their ‘Erudite English’ program. This presented a series of words, including ‘defalcate’, ‘fleer’ and ‘kvetch’, through English-only definitions, followed by multiple-choice and dictated gap-fill exercises. As with Duolingo, there seemed to be no obvious principle behind the choice of items, and example sentences included things like ‘Michael arrogates a slice of carrot cake, unbeknownst to his sister’ and ‘She found a place in which to posit the flowerpot.’ Based on a user’s performance, Cerego’s algorithms decide which items will be presented, and select the frequency and timing of opportunities for review. The program can be accessed through ordinary computers, as well as iPhone and Android apps. The platform has been designed so that other content can be imported, then presented and practised in the same way.
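The scheduling side of a ‘memory management platform’ can be sketched very simply: each item carries a date when it is next due for review, and whatever is most overdue gets shown first. The Python below is purely illustrative (Cerego’s real model is proprietary), and the three-day interval in the usage lines is arbitrary.

    import datetime

    class ReviewQueue:
        def __init__(self):
            self.due = {}  # item -> datetime when its next review is due

        def add(self, item: str) -> None:
            self.due[item] = datetime.datetime.now()  # new items are due immediately

        def record_review(self, item: str, interval_days: float) -> None:
            # After a review, push the item's next appearance into the future.
            self.due[item] = datetime.datetime.now() + datetime.timedelta(days=interval_days)

        def next_item(self):
            # The most overdue item is the best candidate for review.
            now = datetime.datetime.now()
            ready = [i for i, t in self.due.items() if t <= now]
            return min(ready, key=self.due.get) if ready else None

    queue = ReviewQueue()
    queue.add('defalcate')
    queue.add('kvetch')
    item = queue.next_item()                    # -> 'defalcate' (added first, most overdue)
    queue.record_review(item, interval_days=3)  # not shown again for three days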

In a similar vein, the Rosetta Stone software also uses spaced repetition to teach grammar and vocabulary. It describes its adaptive learning as ‘Adaptive Recall™’. According to the company’s website, this provides review activities for each lesson ‘at intervals that are determined by your performance in that review. Exceed the program’s expectations for you and the review gets pushed out further. Fall short and you’ll see it sooner. The program gives you a likely date and automatically notifies you when it’s time to take the review again’. Rosetta Stone has won numerous awards and claims that over 20,000 educational institutions around the world have formed partnerships with it, including the US military, the University of Barcelona and Harrogate Grammar School in the UK (http://www.rosettastone.co.uk/faq ).
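A toy version of that ‘push out / pull in’ rule is easy to write down. In the Python sketch below, the doubling and halving multipliers are invented: the quoted description does not reveal how Rosetta Stone actually computes its intervals or its ‘likely date’.

    import datetime

    def next_review(interval_days: float, passed: bool) -> tuple:
        # Pass the review and the interval doubles; fail and it is halved
        # (with a one-day floor). Returns the new interval and a likely date.
        new_interval = interval_days * 2.0 if passed else max(1.0, interval_days * 0.5)
        due_date = datetime.date.today() + datetime.timedelta(days=round(new_interval))
        return new_interval, due_date

    interval, due = next_review(4.0, passed=True)        # pushed out to ~8 days
    interval, due = next_review(interval, passed=False)  # pulled back to ~4 days
    print(interval, due)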

Slightly more sophisticated than the memory trainers described above is the online preparation program for the GRE (the Graduate Record Examinations, a test for admission to many graduate schools in the US) produced by Barron’s (www.barronstestprep.com//gre ). Although this is not an English language course, it provides a useful example of how simple adaptive learning programs can be taken a few steps further. At the time of writing, a free trial is available, and this gives a good taste of adaptive learning. Barron’s highlights the way its software delivers individualized study programs: it is not, they say, a case of ‘one size fits all’. After you enter your intended test date, your intended number of hours of study and a simple self-evaluation of different reasoning skills, a diagnostic test completes the information required to set up a personalized ‘prep plan’. This determines the lessons you will be given. As you progress through the course, the ‘prep plan’ adapts to the work that you do, comparing your performance to that of other students who have taken the course. As the program measures your progress and modifies your ‘skill profile’, the order of the lessons and the selection of the 1000+ practice questions can change.
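The logic of such a ‘prep plan’ can be approximated in a few lines: a diagnostic test seeds a profile of skill scores, and lessons tagged with the weakest skills are scheduled first, with the plan re-sorted whenever the profile changes. The skill names and scores in this Python sketch are invented; Barron’s does not publish its algorithm.

    # Hypothetical skill profile produced by a diagnostic test (0.0 to 1.0).
    diagnostic_scores = {'verbal': 0.7, 'quantitative': 0.4, 'writing': 0.6}

    # Each lesson is tagged with the skill it trains.
    lessons = [
        ('Algebra refresher', 'quantitative'),
        ('Reading comprehension', 'verbal'),
        ('Essay structure', 'writing'),
        ('Geometry basics', 'quantitative'),
    ]

    def prep_plan(profile: dict, lessons: list) -> list:
        # Order lessons so that the weakest skills come first; call again
        # whenever the profile is updated to get a revised plan.
        return sorted(lessons, key=lambda lesson: profile[lesson[1]])

    for title, skill in prep_plan(diagnostic_scores, lessons):
        print(f'{title} ({skill}: {diagnostic_scores[skill]:.0%})')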