Posts Tagged ‘gamification’

About two and a half years ago, when I started writing this blog, there was a lot of hype around adaptive learning and the big data that might drive it. Two and a half years is a long time in technology. A look at Google Trends suggests that interest in adaptive learning has been pretty static for the last couple of years. It’s interesting to note that 3 of the 7 lettered points on this graph are Knewton-related media events (including the most recent, A, which is Knewton’s latest deal with Hachette) and 2 of them concern McGraw-Hill. It would be interesting to know whether these companies follow both parts of Simon Cowell’s dictum: ‘Create the hype, but don’t ever believe it’.

[Google Trends graph: ‘adaptive learning’]

A look at the Hype Cycle (see here for Wikipedia’s entry on the topic and for criticism of the hype of Hype Cycles) of the IT research and advisory firm, Gartner, indicates that both big data and adaptive learning have now slid into the ‘trough of disillusionment’, which means that the market has started to mature, becoming more realistic about how useful the technologies can be for organizations.

A few years ago, the Gates Foundation, one of the leading cheerleaders and financial promoters of adaptive learning, launched its Adaptive Learning Market Acceleration Program (ALMAP) to ‘advance evidence-based understanding of how adaptive learning technologies could improve opportunities for low-income adults to learn and to complete postsecondary credentials’. It’s striking that the program’s aims referred to how such technologies could lead to learning gains, not whether they would. Now, though, with the publication of a report commissioned by the Gates Foundation to analyze the data coming out of the ALMAP Program, things are looking less rosy. The report is inconclusive. There is no firm evidence that adaptive learning systems are leading to better course grades or course completion. ‘The ultimate goal – better student outcomes at lower cost – remains elusive’, the report concludes. Rahim Rajan, a senior program officer for Gates, is clear: ‘There is no magical silver bullet here.’

The same conclusion is being reached elsewhere. A report for the National Education Policy Center (in Boulder, Colorado) concludes: ‘Personalized Instruction, in all its many forms, does not seem to be the transformational technology that is needed, however. After more than 30 years, Personalized Instruction is still producing incremental change. The outcomes of large-scale studies and meta-analyses, to the extent they tell us anything useful at all, show mixed results ranging from modest impacts to no impact. Additionally, one must remember that the modest impacts we see in these meta-analyses are coming from blended instruction, which raises the cost of education rather than reducing it’ (Enyedy, 2014: 15; see reference at the foot of this post). In the same vein, a recent academic study by Meg Coffin Murray and Jorge Pérez (2015, ‘Informing and Performing: A Study Comparing Adaptive Learning to Traditional Learning’) found that ‘adaptive learning systems have negligible impact on learning outcomes’.

In the latest educational technology plan from the U.S. Department of Education (‘Future Ready Learning: Reimagining the Role of Technology in Education’, 2016), the only mentions of the word ‘adaptive’ are in the context of testing. And the latest OECD report on ‘Students, Computers and Learning: Making the Connection’ (2015) finds, more generally, that information and communication technologies, when they are used in the classroom, have, at best, a mixed impact on student performance.

There is, however, too much money at stake for the earlier hype to disappear completely. Sponsored cheerleading for adaptive systems continues to find its way into blogs and national magazines and newspapers. EdSurge, for example, recently published a report called ‘Decoding Adaptive’ (2016), sponsored by Pearson, that continues to wave the flag. Enthusiastic anecdotes take the place of evidence, but, for all that, it’s a useful read.

In the world of ELT, there are plenty of sales people who want new products which they can call ‘adaptive’ (and gamified, too, please). But it’s striking that three years after I started following the hype, such products are rather thin on the ground. Pearson was the first of the big names in ELT to do a deal with Knewton, and invested heavily in the company. Their relationship remains close. But, to the best of my knowledge, the only truly adaptive ELT product that Pearson offers is the PTE test.

Macmillan signed a contract with Knewton in May 2013 ‘to provide personalized grammar and vocabulary lessons, exam reviews, and supplementary materials for each student’. In December of that year, they talked up their new ‘big tree online learning platform’: ‘Look out for the Big Tree logo over the coming year for more information as to how we are using our partnership with Knewton to move forward in the Language Learning division and create content that is tailored to students’ needs and reactive to their progress.’ I’ve been looking out, but it’s all gone rather quiet on the adaptive / platform front.

In September 2013, it was the turn of Cambridge to sign a deal with Knewton ‘to create personalized learning experiences in its industry-leading ELT digital products for students worldwide’. This year saw the launch of a major new CUP series, ‘Empower’. It has an online workbook with personalized extra practice, but there’s nothing (yet) that anyone would call adaptive. More recently, Cambridge has launched the online version of the 2nd edition of Touchstone. Nothing adaptive there, either.

Earlier this year, Cambridge published The Cambridge Guide to Blended Learning for Language Teaching, edited by Mike McCarthy. It contains a chapter by M.O.Z. San Pedro and R. Baker on ‘Adaptive Learning’. It’s an enthusiastic account of the potential of adaptive learning, but it doesn’t contain a single reference to language learning or ELT!

So, what’s going on? Skepticism is becoming the order of the day. The early hype of people like Knewton’s Jose Ferreira is now understood for what it was. Companies like Macmillan got their fingers badly burnt when they barked up the wrong tree with their ‘Big Tree’ platform.

Noel Enyedy captures a more contemporary understanding when he writes: ‘Personalized Instruction is based on the metaphor of personal desktop computers—the technology of the 80s and 90s. Today’s technology is not just personal but mobile, social, and networked. The flexibility and social nature of how technology infuses other aspects of our lives is not captured by the model of Personalized Instruction, which focuses on the isolated individual’s personal path to a fixed end-point. To truly harness the power of modern technology, we need a new vision for educational technology’ (Enyedy, 2014: 16).

Adaptive solutions aren’t going away, but there is now a much better understanding of what sorts of problems might have adaptive solutions. Testing is certainly one. As the educational technology plan from the U.S. Department of Education (‘Future Ready Learning: Reimagining the Role of Technology in Education’, 2016) puts it: ‘Computer adaptive testing, which uses algorithms to adjust the difficulty of questions throughout an assessment on the basis of a student’s responses, has facilitated the ability of assessments to estimate accurately what students know and can do across the curriculum in a shorter testing session than would otherwise be necessary.’ In ELT, Pearson and EF have adaptive tests that have been well researched and designed.

Vocabulary apps which deploy adaptive technology continue to become more sophisticated, although empirical research is lacking. Automated writing tutors with adaptive corrective feedback are also developing fast, and I’ll be writing a post about these soon. Similarly, as speech recognition software improves, we can expect to see better and better automated adaptive pronunciation tutors. But going beyond such applications, there are bigger questions to ask, and answers to these will impact on whatever direction adaptive technologies take. Large platforms (LMSs), with or without adaptive software, are already beginning to look rather dated. Will they be replaced by integrated apps, or are apps themselves going to be replaced by bots (currently riding high in the Hype Cycle)? In language learning and teaching, the future of bots is likely to be shaped by developments in natural language processing (another topic about which I’ll be blogging soon). Nobody really has a clue where the next two and a half years will take us (if anywhere), but it’s becoming increasingly likely that adaptive learning will be only one very small part of it.


Enyedy, N. 2014. Personalized Instruction: New Interest, Old Rhetoric, Limited Results, and the Need for a New Direction for Computer-Mediated Learning. Boulder, CO: National Education Policy Center. Retrieved 17.07.16 from http://nepc.colorado.edu/publication/personalized-instruction


I have been putting in a lot of time studying German vocabulary with Memrise lately, but this is not a review of the Memrise app. For that, I recommend you read Marek Kiczkowiak’s second post on this app. Like me, he’s largely positive, although I am less enthusiastic about Memrise’s USP, the use of mnemonics. It’s not that mnemonics don’t work (there’s a lot of evidence that they do); it’s just that there is little or no evidence that they’re worth the investment of time.

Time … as I say, I have been putting in the hours: every day for over a month, averaging a couple of hours a day. That’s enough to get me very near the top of the leader board (which I keep a very close eye on), and it means that I am doing more work than 99% of other users. And, yes, my German is improving.

Putting in the time is the sine qua non of any language learning and a well-designed app must motivate users to do this. Relevant content will be crucial, as will satisfactory design, both visual and interactive. But here I’d like to focus on the two other key elements: task design / variety and gamification.

Memrise offers a limited range of task types: presentation cards (a word, phrase or sentence, with translation and audio recording), multiple choice (target item with four options), unscrambling letters or words, and dictation (see below).

[Screenshots of Memrise task types]

As Marek writes, it does get a bit repetitive after a while (although less so than thumbing through a pack of cardboard flashcards). The real problem, though, is that there are only so many things an app designer can do with standard flashcards, if they are to contribute to learning. True, there could be a few more game-like tasks (as with Quizlet), races against the clock as you pop word balloons or something of the sort, but, while these might, just might, help with motivation, these games rarely, if ever, contribute much to learning.

What’s more, you’ll get fed up with the games sooner or later if you’re putting in serious study hours. Even if Memrise were to double the number of activity types, I’d have got bored with them by now, in the same way I got bored with the Quizlet games. Bear in mind, too, that I’ve only done a month: I have at least another two months to go before I finish the level I’m working on. There’s another issue with ‘fun’ activities / games which I’ll come on to later.

The options for task variety in vocabulary / memory apps are therefore limited. Let’s look at gamification. Memrise has leader boards (weekly, monthly, ‘all time’), streak badges, daily goals, email reminders and (in the laptop and premium versions) a variety of graphs that allow you to analyse your study patterns. Your degree of mastery of learning items is represented by a flower that sprouts, grows leaves, blooms and withers. None of this is especially original or different from similar apps.

The trouble with all of this is that it can only work for a certain time and, for some people, never. There’s always going to be someone like me who can put in a couple of hours a day more than you can. Or someone, like ‘Nguyenduyha’ in my case, who must be doing about four hours a day, and who, I know, is out of my league. I can’t compete, and the realisation slowly dawns that my life would be immeasurably sadder if I tried to.

Having said that, I have tried to compete, and the way to do so is by putting in the time on the ‘speed review’. This is the closest that Memrise comes to a game. One hundred items are flashed up, each with four multiple-choice options, against the clock. The quicker you are, the more points you get, and if you’re too slow, or you make a mistake, you lose a life. That’s how you gain lots of points with Memrise. The problem is that, at best, this task only promotes receptive knowledge of the items, which is not what I need by this stage. At worst, it serves no useful learning function at all, because I have learnt ways of doing it well which do not really involve me processing meaning at all. As Marek says in his post (in reference to Quizlet), ‘I had the feeling that sometimes I was paying more attention to “winning” the game and scoring points, rather than to the words on the screen.’ In my case, it is not just a feeling: it’s an absolute certainty.

[Screenshot: Memrise desktop dashboard]

Sadly, the gamification is working against me. The more time I spend on the U-Bahn doing Memrise, the less time I spend reading the free German-language newspapers, and the less time I spend eavesdropping on conversations. Two hours a day is all I have for my German study, and Memrise is eating it all up. I know that there are other, and better, ways of learning. In order to do what I know I should be doing, I need to ignore the gamification. For those more reasonable students, who can regularly do their fifteen minutes a day, day in, day out, the points and leader boards serve no real function at all.

Cheating at gamification, or gaming the system, is common in app-land. A few years ago, Memrise had to take down their leader board when they realised that cheating was taking place. There’s an inexorable logic to this: gamification is an attempt to motivate by rewarding through points, rather than the reward coming from the learning experience. The logic of the game overtakes itself. Is ‘Nguyenduyha’ cheating, or do they simply have nothing else to do all day? Am I cheating by finding time to do pointless ‘speed reviews’ that earn me lots of points?

For users like myself, then, gamification design needs to be a delicate balancing act. For others, it may be largely an irrelevance. I’ve been working recently on a general model of vocabulary app design that looks at two very different kinds of user. On the one hand, there are the self-motivated learners like myself or the millions of others who have chosen to use self-study apps. On the other, there are the millions of students in schools and colleges, studying English among other subjects, some of whom are now being told to use the vocabulary apps that are beginning to appear packaged with their coursebooks (or other learning material). We’ve never found entirely satisfactory ways of making these students do their homework, and the fact that this homework is now digital will change nothing (except, perhaps, in the very, very short term). The incorporation of games and gamification is unlikely to change much either: there will always be something more interesting and motivating (and unconnected with language learning) elsewhere.

Teachers and college principals may like the idea of gamification (without having really experienced it themselves) for their students. But more important for most of them is likely to be the teacher dashboard: the means by which they can check that their students are putting the time in. Likewise, they will see the utility of automated email reminders that a student is not working hard enough to meet their learning objectives, more and more regular tests that contribute to overall course evaluation, comparisons with college, regional or national benchmarks. Technology won’t solve the motivation issue, but it does offer efficient means of control.

I call Lern Deutsch a vocabulary app, although it’s more of a game than anything else. Developed by the Goethe Institute, the free app was probably designed primarily as a marketing tool rather than a serious attempt to develop an educational language app. It’s available for speakers of Arabic, English, Spanish, Italian, French, Portuguese and Russian. It’s aimed at A1 learners.

Users of the app create an avatar and roam around a virtual city, learning new vocabulary and practising situational language. They can interact in language challenges with other players. As they explore, they earn Goethe coins, collect accessories for their avatars and progress up a leader board.

As they explore the virtual city, populated by other avatars, they find objects that can be clicked on to add to their vocabulary list. They hear a recording of an example sentence containing the target word, with the word gapped and three multiple choice possibilities. They are then required to type the missing word (see the image below). After collecting a certain number of words, they complete exercises which include the following task types:

  • Jumbled sentences
  • Audio recording of individual words and multiple choice selection
  • Gapped sentences with multiple choice answers
  • Dictation
  • Example sentences containing target item and multiple choice pictures
  • Typing sentences which are buried in a string of random letters

[Screenshots of Lern Deutsch tasks]
The developers have focused their attention on providing variety: engagement and ‘fun’ override other considerations. But how does the app stand up as a language learning tool? Surprisingly, for something developed by the Goethe Institute, it’s less than impressive.

The words that you collect as you navigate the virtual city are all nouns (Hotel, Auto, Mann, Banane, etc.), but some (e.g. Sehenswürdigkeit) seem out of level. Any app that uses illustrations as the basic means of conveying meaning runs into problems when it moves away from concrete nouns, but a diet of nouns only (as here) is necessarily of limited value. Other parts of speech are introduced via the example sentences, but no help with meaning is provided, so when you come across the word for ‘egg’, for example, your example sentence is ‘Ich möchte das Frühstück mit Ei.’ It’s all very well embedding the target vocabulary in example sentences that have a functional value, but example sentences are only of value if they are understandable: the app badly needs a look-up function for the surrounding language.

The practice exercises are varied, too, but they also vary in their level of difficulty. It makes sense to do receptive / recognition tasks before productive ones, but there is no evidence that I could see of pedagogical considerations of this kind. Neither does there seem to be any spaced repetition at work: the app is driven by the needs of the game design rather than any learning principles.

It’s unclear to me who the app is for. The functional language that is presented is adult: the situations are adult situations (buying a bed, booking a hotel room, ordering a beer). However, the graphic design and the gamification features are juvenile (adding a pirate patch to your avatar, for example).

The lack of attention to the business of learning is especially striking in the English of the English-language version that I used. The number of examples of dodgy English that I came across does not inspire confidence.

  • Quite alright! You win your first Goethe coin.
  • What sightseeings do you spot in the city center and the train station?
  • Have a picknick in the park. You now have a picnic in the park with the musician.
  • You still search for your teacher. Whom do you meet in the park? What do they work?


All in all, it’s an interesting example of a gamified approach to language, and other app developers may find ideas here that they could do something with. It’s of less interest, though, to anyone who wants to learn a bit of German.

Having spent a lot of time recently looking at vocabulary apps, I decided to put together a Christmas wish list of the features of my ideal vocabulary app. The list is not exhaustive and I’ve given more attention to some features than others. What (apart from testing) have I missed out?

1             Spaced repetition

Since the point of a vocabulary app is to help learners memorise vocabulary items, it is hard to imagine a decent system that does not incorporate spaced repetition. Spaced repetition algorithms offer one well-researched way of working against the brain’s ‘forgetting curve’. These algorithms come in different shapes and sizes, and I am not technically competent to judge which is the most efficient. However, as Peter Ellis Jones, the developer of a flashcard system called CardFlash, points out, efficiency is only one half of the rote memorisation problem. If you are not motivated to learn, the cleverness of the algorithm is moot. Fundamentally, learning software needs to be fun, rewarding, and give a solid sense of progression.
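To make this concrete, here is a minimal sketch of the kind of scheduling logic involved, loosely based on the classic SM-2 scheme (the ancestor of Anki’s scheduler). The constants are SM-2’s published ones; the Card structure and function names are my own invention, purely for illustration.

```python
from dataclasses import dataclass

@dataclass
class Card:
    interval: int = 1     # days until the next review
    repetitions: int = 0  # consecutive successful reviews
    ease: float = 2.5     # multiplier used to stretch the interval

def review(card: Card, quality: int) -> Card:
    """Reschedule a card after a review.
    quality: 0 (complete blackout) to 5 (perfect, instant recall)."""
    if quality < 3:
        # Failed recall: start the item again from scratch
        card.repetitions = 0
        card.interval = 1
    else:
        card.repetitions += 1
        if card.repetitions == 1:
            card.interval = 1
        elif card.repetitions == 2:
            card.interval = 6
        else:
            card.interval = round(card.interval * card.ease)
    # Nudge the ease factor up or down, never below SM-2's floor of 1.3
    card.ease = max(1.3, card.ease + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    return card
```

The ‘spacing’ is visible in the final else branch: every successful recall multiplies the interval, so reviews become exponentially rarer as an item becomes more secure.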

2             Quantity, balance and timing of new and ‘old’ items

A spaced repetition algorithm determines the optimum interval between repetitions, but further algorithms will be needed to determine when and with what frequency new items will be added to the deck. Once a system knows how many items a learner needs to learn and the time in which they have to do it, it is possible to determine the timing and frequency of the presentation of new items. But the system cannot know in advance how well an individual learner will learn the items (for any individual, some items will be more readily learnable than others) nor the extent to which learners will live up to their own positive expectations of time spent on-app. As most users of flashcard systems know, it is easy to fall behind, feel swamped and, ultimately, give up. An intelligent system needs to be able to respond to individual variables in order to ensure that the learning load is realistic.
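What might such a throttling algorithm look like? Here is one possible heuristic, invented purely for illustration (the function name, parameters and default load are mine, not any real app’s):

```python
def new_items_today(due_reviews: int, items_left: int, days_left: int,
                    max_daily_load: int = 80) -> int:
    """Decide how many new items to introduce today: keep pace with
    the remaining syllabus, but back off when the review backlog
    already fills the learner's daily capacity."""
    pace = -(-items_left // max(days_left, 1))      # ceiling division
    headroom = max(max_daily_load - due_reviews, 0)
    return min(pace, headroom)
```

A learner who falls behind accumulates due reviews, which automatically suppresses new items until the backlog is cleared – a crude but effective way of keeping the load realistic.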

3             Task variety

A standard flashcard system which simply asks learners to indicate whether they ‘know’ a target item before they flip over the card rapidly becomes extremely boring. A system which tests this knowledge soon becomes equally dull. There needs to be a variety of ways in which learners interact with an app, both for reasons of motivation and learning efficiency. It may be the case that, for an individual user, certain task types lead to more rapid gains in learning. An intelligent, adaptive system should be able to capture this information and modify the selection of task types.
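One plausible way of implementing that adaptivity is a bandit-style selector: mostly serve the task type that has produced the best measured gains for this learner, but keep sampling the others. The epsilon-greedy sketch below is only one of many possibilities, and everything in it is my own assumption.

```python
import random

def pick_task_type(gains: dict, epsilon: float = 0.1) -> str:
    """gains maps a task type (e.g. 'mcq', 'gap_fill', 'dictation')
    to a list of measured learning gains for this user; at least one
    task type is assumed. With probability epsilon, explore a random
    task type; otherwise exploit the one with the best average gain."""
    def mean(xs):
        return sum(xs) / len(xs) if xs else 0.0
    if random.random() < epsilon or not any(gains.values()):
        return random.choice(list(gains))
    return max(gains, key=lambda t: mean(gains[t]))
```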

Most younger learners and some adult learners will respond well to the inclusion of games within the range of task types. Examples of such games include the puzzles developed by Oliver Rose in his Phrase Maze app to accompany Quizlet practice.

4             Generative use

Memory researchers have long known about the ‘Generation Effect’ (see, for example, this piece of research from the Journal of Verbal Learning and Verbal Behavior, 1978). Items are better learnt when the learner has to generate, in some (even small) way, the target item, rather than simply reading it. In vocabulary learning, this could be, for example, typing in the target word or, more simply, inserting some missing letters. Systems which incorporate task types that require generative use are likely to result in greater learning gains than simple, static flashcards with target items on one side and definitions or translations on the other.
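In code, the cheapest form of generative practice is trivial to produce. A toy sketch (the function is hypothetical):

```python
import random

def gap_prompt(word: str, n_gaps: int = 2) -> str:
    """Blank out a couple of mid-word letters so the learner has to
    generate part of the item, e.g. 'remember' -> 'r_me_ber'."""
    if len(word) < 3:
        return word
    gaps = random.sample(range(1, len(word) - 1), min(n_gaps, len(word) - 2))
    return ''.join('_' if i in gaps else c for i, c in enumerate(word))
```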

5             Receptive and productive practice

The most basic digital flashcard systems require learners to understand a target item, or to generate it from a definition or translation prompt. Valuable as this may be, it won’t help learners much to use these items productively, since these systems focus exclusively on meaning. In order to do this, information must be provided about collocation, colligation, register, etc and these aspects of word knowledge will need to be focused on within the range of task types. At the same time, most vocabulary apps that I have seen focus primarily on the written word. Although any good system will offer an audio recording of the target item, and many will offer the learner the option of recording themselves, learners are invariably asked to type in their answers, rather than say them. For the latter, speech recognition technology will be needed. Ideally, too, an intelligent system will compare learner recordings with the audio models and provide feedback in such a way that the learner is guided towards a closer reproduction of the model.

6             Scaffolding and feedback

Most flashcard systems are basically low-stakes practice self-testing. Research (see, for example, Dunlosky et al’s metastudy ‘Improving Students’ Learning With Effective Learning Techniques: Promising Directions From Cognitive and Educational Psychology’) suggests that, as a learning strategy, practice testing has high utility – indeed, higher utility than other strategies like keyword mnemonics or highlighting. However, an element of tutoring is likely to enhance practice testing, and, for this, scaffolding and feedback will be needed. If, for example, a learner is unable to produce a correct answer, they will probably benefit from being guided towards it through hints, in the same way as a teacher would elicit in a classroom. Likewise, feedback on why an answer is wrong (as opposed to simply being told that you are wrong), followed by encouragement to try again, is likely to enhance learning. Such feedback might, for example, point out that there is perhaps a spelling problem in the learner’s attempted answer, that the attempted answer is in the wrong part of speech, or that it is semantically close to the correct answer but does not collocate with other words in the text. The incorporation of intelligent feedback of this kind will require a number of NLP tools, since it will never be possible for a human item-writer to anticipate all the possible incorrect answers. A current example of intelligent feedback of this kind can be found in the Oxford English Vocabulary Trainer app.
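The hint side of this is the easy part. A graded hint ladder of the kind a teacher uses when eliciting takes a few lines to sketch (illustrative only; good feedback on why an answer is wrong is where the NLP tools come in):

```python
def hint_ladder(answer: str):
    """Yield progressively stronger prompts after failed attempts:
    first letter only, then half the word, then the full answer."""
    yield answer[0] + '_' * (len(answer) - 1)
    half = max(1, len(answer) // 2)
    yield answer[:half] + '_' * (len(answer) - half)
    yield answer

# e.g. for 'collocation': 'c__________', 'collo______', 'collocation'
```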

7             Content

At the very least, a decent vocabulary app will need good definitions and translations (how many different languages?), and these will need to be tagged to the senses of the target items. These will need to be supplemented with all the other information that you find in a good learner’s dictionary: syntactic patterns, collocations, cognates, an indication of frequency, etc. The only way of getting this kind of high-quality content is by paying to license it from a company with expertise in lexicography. It doesn’t come cheap.

There will also need to be example sentences, both to illustrate meaning / use and for deployment in tasks. Dictionary databases can provide some of these, but they cannot be relied on as a source. This is because the example sentences in dictionaries have been selected and edited to accompany the other information provided in the dictionary, and not as items in practice exercises, which have rather different requirements. Once more, the solution doesn’t come cheap: experienced item writers will be needed.

Dictionaries describe and illustrate how words are typically used. But examples of typical usage tend to be as dull as they are forgettable. Learning is likely to be enhanced if examples are cognitively salient: weird examples with odd collocations, for example. Another thing for the item writers to think about.

A further challenge for an app which is not tied to a single level is that both the definitions and example sentences need to be level-specific. An A1 / A2 learner will need the kind of content that is found in, say, the Oxford Essential dictionary; B2 learners and above will need content from, say, the OALD.

8             Artwork and design

It’s easy enough to find artwork or photos of concrete nouns, but try to find or commission a pair of pictures that differentiate, for example, the adjectives ‘wild’ and ‘dangerous’ … What kind of pictures might illustrate simple verbs like ‘learn’ or ‘remember’? Will such illustrations be clear enough when squeezed into a part of a phone screen? Animations or very short video clips might provide a solution in some cases, but these are more expensive to produce and video files are much heavier.

With a few notable exceptions, such as the British Council’s MyWordBook 2, design in vocabulary apps has been largely forgotten.

9             Importable and personalisable lists

Many learners will want to use a vocabulary app in association with other course material (e.g. coursebooks). Teachers, however, will inevitably want to edit these lists, deleting some items, adding others. Learners will want to do the same. This is a huge headache for app designers. If new items are going to be added to word lists, how will the definitions, example sentences and illustrations be generated? Will the database contain audio recordings of these words? How will these items be added to the practice tasks (if these include task types that go beyond simple double-sided flashcards)? NLP tools are not yet good enough to trawl a large corpus in order to select (and possibly edit) sentences that illustrate the right meaning and which are appropriate for interactive practice exercises. We can personalise the speed of learning and even the types of learning tasks, so long as the target language is predetermined. But as soon as we allow for personalisation of content, we run into difficulties.
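To see why the sentence-selection problem is so hard, consider even a naive version of it. The sketch below is loosely inspired by the GDEX (‘good dictionary examples’) heuristics used in lexicography, but every name and threshold here is invented, and a real system would also need lemmatisation, sense disambiguation and much more:

```python
def candidate_examples(corpus, target, known_words,
                       max_len=12, max_unknown=2):
    """Keep short corpus sentences that contain the target word and
    little vocabulary the learner is unlikely to know."""
    keep = []
    for sentence in corpus:
        words = [w.strip('.,!?;:"').lower() for w in sentence.split()]
        if target.lower() not in words or len(words) > max_len:
            continue
        unknown = [w for w in words
                   if w not in known_words and w != target.lower()]
        if len(unknown) <= max_unknown:
            keep.append(sentence)
    return keep
```

Filters like this find sentences that are short and lexically easy; they say nothing about whether an example illustrates the right sense, collocates naturally, or works as a practice item – which is why experienced item writers remain necessary.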

10          Gamification

Maintaining motivation to use a vocabulary app is not easy. Gamification may help. Measuring progress against objectives will be a start. Stars and badges and leaderboards may help some users. Rewards may help others. But gamification features need to be built into the heart of the system, into the design and selection of tasks, rather than simply tacked on as an afterthought. They need to be trialled and tweaked, so analytics will be needed.

11          Teacher support

Although the use of vocabulary flashcards is beginning to catch on with English language teachers, teachers need help with ways to incorporate them in the work they do with their students. What can teachers do in class to encourage use of the app? In what ways does app use require teachers to change their approach to vocabulary work in the classroom? Reporting functions can help teachers know about the progress their students are making and provide very detailed information about words that are causing problems. But, as anyone involved in platform-based course materials knows, teachers need a lot of help.

12          And, of course, …

Apps need to be usable with different operating systems. Ideally, they should be (partially) usable offline. Loading times need to be short. They need to be easy and intuitive to use.

It’s unlikely that I’ll be seeing a vocabulary app with all of these features any time soon. Or, possibly, ever. The cost of developing something that could do all this would be extremely high, and there is no indication that there is a market that would be ready to pay the sort of prices that would be needed to cover the costs of development and turn a profit. We need to bear in mind, too, the fact that vocabulary apps can only ever assist in the initial acquisition of vocabulary: apps alone can’t solve the vocabulary learning problem (despite the silly claims of some app developers). The need for meaningful communicative use, extensive reading and listening, will not go away because a learner has been using an app. So, how far can we go in developing better and better vocabulary apps before users decide that a cheap / free app, with all its shortcomings, is actually good enough?

I posted a follow up to this post in October 2016.

MosaLingua (with the obligatory capital letter in the middle) is a vocabulary app, available for iOS and Android. There are packages for a number of languages; the English options include general English, business English, vocabulary for TOEFL and vocabulary for TOEIC. The company follows the freemium model, with free ‘Lite’ versions and fuller content selling for €4.99. I tried the ‘Lite’ general English app, opting for French as my first language. Since the app is translation-based, you need to have one of the language pairings that are on offer (the other languages are currently Italian, Spanish, Portuguese and German).

The app I looked at is basically a phrase book with spaced repetition. Even though this particular app was general English, it appeared to be geared towards the casual business traveller. It uses the same algorithm as Anki, and users are taken through a sequence of (1) listening to an audio recording of the target item (word or phrase) along with the possibility of comparing a recording of yourself with the recording provided, (2) standard bilingual flashcard practice, (3) a practice stage where you are given the word or phrase in your own language and you have to unscramble words or letters to form the equivalent in English, and (4) a self-evaluation stage where users select from one of four options (“review”, “hard”, “good”, “perfect”) where the choice made will influence the re-presentation of the item within the spaced repetition.

In addition to these words and phrases, there are a number of dialogues where you (1) listen to the dialogue (‘without worrying about understanding everything’), (2) are re-exposed to the dialogue with English subtitles, (3) see it again with subtitles in your own language, (4) practise it with standard flashcards.

The developers seem to be proud of their Mosa Learning Method®: they’ve registered this as a trademark. At its heart is spaced repetition. This is supplemented by what they refer to as ‘Active Recall’, the notion that things are better memorised if the learner has to make some sort of cognitive effort, however minimal, in recalling the target items. The principle is, at least to me, unquestionable, but the realisation (unjumbling words or letters) becomes rather repetitive and, ultimately, tedious. Then there is what they call ‘metacognition’. Again, this is informed by research, even if the realisation (self-evaluation of learning difficulty into four levels) is extremely limited. Then there is the Pareto principle – the 80-20 rule. I couldn’t understand the explanation of what this has to do with the trademarked method. Here’s the MosaLingua explanation – figure it out for yourself:

Did you know that the 100 most common words in English account for half of the written corpus?

Evidently, you shouldn’t quit after learning only 100 words. Instead, you should concentrate on the most frequently used words and you’ll make spectacular progress. What’s more, globish (global English) has shown that it’s possible to express yourself using only 1500 well-chosen words (which would take less than 3 months with only 10 minutes per day with MosaLingua). Once you’ve acquired this base, MosaLingua proposes specialized vocabulary suited to your needs (the application has over 3000 words).

Finally, there’s some stuff about motivation and learner psychology. This boils down to: ‘That’s why we offer free learning help via email, presenting the Web’s best resources, as well as tips through bonus material or the learning community on the MosaLingua blog. We’ll give you all the tools you need to develop your own personalized learning method that is adapted to your needs.’ Some of these tips are not at all bad, but there’s precious little in the way of gamification or other forms of easy motivation.

In short, it’s all reasonably respectable, despite the predilection for sciency language in the marketing blurb. But what really differentiates this product from Anki, as the founder, Samuel Michelot, points out, is the content: MosaLingua has lists of vocabulary and phrases that were created by ‘professors’. The word ‘professors’ set my alarm bells ringing, and I wasn’t overly reassured when all I could find out about these ‘professors’ was the information about the MosaLingua team.

Despite what some people claim, content is, actually, rather important when it comes to language learning. I’ll leave you with some examples of MosaLingua content (one dialogue and a selection of words / phrases organised by level) and you can make up your own mind.

Dialogue

Hi there, have a seat. What seems to be the problem?

I haven’t been feeling well since this morning. I have a very bad headache and I feel sick.

Do you feel tired? Have you had cold sweats?

Yes, I’m very tired and have had cold sweats. I have been feeling like that since this morning.

Have you been out in the sun?

Yes, this morning I was at the beach with my friends for a couple hours.

OK, it’s nothing serious. It’s just a bad case of sunstroke. You must drink lots of water and rest. I’ll prescribe you something for the headache and some after sun lotion.

Great, thank you, doctor. Bye.

You’re welcome. Bye.

Level 1: could you help me, I would like a …, I need to …, I don’t know, it’s okay, I (don’t) agree, do you speak English, to drink, to sleep, bank, I’m going to call the police

Level 2: I’m French, cheers, can you please repeat that, excuse me how can I get to …, map, turn left, corner, far (from), distance, thief, can you tell me where I can find …

Level 3: what does … mean, I’m learning English, excuse my English, famous, there, here, until, block, from, to turn, street corner, bar, nightclub, I have to be at the airport tomorrow morning

Level 4: OK, I’m thirty (years old), I love this country, how do you say …, what is it, it’s a bit like …, it’s a sort of …, it’s as small / big as …, is it far, where are we, where are we going, welcome, thanks but I can’t, how long have you been here, is this your first trip to England, take care, district / neighbourhood, in front (of)

Level 5: of course, can I ask you a question, you speak very well, I can’t find the way, David this is Julia, we meet at last, I would love to, where do you want to go, maybe another day, I’ll miss you, leave me alone, don’t touch me, what’s you email

Level 6: I’m here on a business trip, I came with some friends, where are the nightclubs, I feel like going to a bar, I can pick you up at your house, let’s go to see a movie, we had a lot of fun, come again, thanks for the invitation

Adaptive learning providers make much of their ability to provide learners with personalised feedback and to provide teachers with dashboard feedback on the performance of both individuals and groups. All well and good, but my interest here is in the automated feedback that software could provide on very specific learning tasks. Scott Thornbury, in a recent talk, ‘Ed Tech: The Mouse that Roared?’, listed six ‘problems’ of language acquisition that educational technology for language learning needs to address. One of these he framed as follows: ‘The feedback problem, i.e. how does the learner get optimal feedback at the point of need?’, and suggested that technological applications ‘have some way to go.’ He was referring, not to the kind of feedback that dashboards can provide, but to the kind of feedback that characterises a good language teacher: corrective feedback (CF) – the way that teachers respond to learner utterances (typically those containing errors, but not necessarily restricted to these) in what Ellis and Shintani call ‘form-focused episodes’[1]. These responses may include a direct indication that there is an error, a reformulation, a request for repetition, a request for clarification, an echo with questioning intonation, etc. Basically, they are correction techniques.

These days, there isn’t really any debate about the value of CF. There is a clear research consensus that it can aid language acquisition. Discussing learning in more general terms, Hattie[2] claims that ‘the most powerful single influence enhancing achievement is feedback’. The debate now centres around the kind of feedback, and when it is given. Interestingly, evidence[3] has been found that CF is more effective in the learning of discrete items (e.g. some grammatical structures) than in communicative activities. Since it is precisely this kind of approach to language learning that we are more likely to find in adaptive learning programs, it is worth exploring further.

What do we know about CF in the learning of discrete items? First of all, it works better when it is explicit than when it is implicit (Li, 2010), although this needs to be nuanced. In immediate post-tests, explicit CF is better than implicit variations. But over a longer period of time, implicit CF provides better results. Secondly, formative feedback (as opposed to right / wrong testing-style feedback) strengthens retention of the learning items: this typically involves the learner repairing their error, rather than simply noticing that an error has been made. This is part of what cognitive scientists[4] sometimes describe as the ‘generation effect’. Whilst learners may benefit from formative feedback without repairing their errors, Ellis and Shintani (2014: 273) argue that the repair may result in ‘deeper processing’ and, therefore, assist learning. Thirdly, there is evidence that some delay in receiving feedback aids subsequent recall, especially over the longer term. Ellis and Shintani (2014: 276) suggest that immediate CF may ‘benefit the development of learners’ procedural knowledge’, while delayed CF is ‘perhaps more likely to foster metalinguistic understanding’. You can read a useful summary of a meta-analysis of feedback effects in online learning here, or you can buy the whole article here.

I have yet to see an online language learning program which can do CF well, but I think it’s a matter of time before things improve significantly. First of all, at the moment, feedback is usually immediate, or almost immediate. This is unlikely to change, for a number of reasons – foremost among them being the pride that ed tech takes in providing immediate feedback, and the fact that online learning is increasingly being conceptualised and consumed in bite-sized chunks, something you do on your phone between doing other things. What will change in better programs, however, is that feedback will become more formative. As things stand, tasks are usually of a very closed variety, with drag-and-drop being one of the most popular. Only one answer is possible and feedback is usually of the right / wrong-and-here’s-the-correct-answer kind. But tasks of this kind are limited in their value, and, at some point, tasks are needed where more than one answer is possible.

Here’s an example of a translation task from Duolingo, where a simple sentence could be translated into English in quite a large number of ways.

[Screenshot: Duolingo feedback on ‘I am doing a basket for my mother’]

Decontextualised as it is, the sentence could be translated in the way that I have done it, although it’s unlikely. The feedback, however, is of relatively little help to the learner, who would benefit from guidance of some sort. The simple reason that Duolingo doesn’t offer useful feedback is that the programme is static. It has been programmed to accept certain answers (e.g. in this case both the present simple and the present continuous are acceptable), but everything else will be rejected. Why? Because it would take too long and cost too much to anticipate and enter in all the possible answers. Why doesn’t it offer formative feedback? Because in order to do so, it would need to identify the kind of error that has been made. If we can identify the kind of error, we can make a reasonable guess about the cause of the error, and select appropriate CF … this is what good teachers do all the time.

Analysing the kind of error that has been made is the first step in providing appropriate CF, and it can be done, with increasing accuracy, by current technology, but it requires a lot of computing. Let’s take spelling as a simple place to start. If you enter ‘I am makeing a basket for my mother’ in the Duolingo translation above, the program tells you ‘Nice try … there’s a typo in your answer’. Given the configuration of keyboards, it is highly unlikely that this is a typo. It’s a simple spelling mistake, and teachers recognise it as such because they see it so often. For software to achieve the same insight, it would need, as a start, to trawl a large English dictionary database and a large tagged database of learner English. The process is quite complicated, but it’s perfectly do-able, and learners could be provided with CF in the form of a ‘spelling hint’.

[Screenshot: Duolingo feedback on ‘I am makeing a basket for my mother’]
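As a toy illustration of that first step, the Python standard library’s difflib is already enough to separate ‘close misses’ from outright rejections; a real system, as noted, would also consult dictionary and learner-corpus databases, and keyboard adjacency, to distinguish slips from misspellings:

```python
import difflib

def spelling_hint(attempt: str, expected: str):
    """If the learner's word is close to the expected one, return a
    'spelling hint' instead of a flat rejection."""
    similarity = difflib.SequenceMatcher(None, attempt.lower(),
                                         expected.lower()).ratio()
    if attempt.lower() != expected.lower() and similarity >= 0.8:
        return f"Check your spelling of '{attempt}'."
    return None

# spelling_hint('makeing', 'making') -> "Check your spelling of 'makeing'."
```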

Rather more difficult is the error illustrated in my first screen shot. What’s the cause of this ‘error’? Teachers know immediately that this is probably a classic confusion of ‘do’ and ‘make’. They know that the French verb ‘faire’ can be translated into English as ‘make’ or ‘do’ (among other possibilities), and the error is a common language transfer problem. Software could do the same thing. It would need a large corpus (to establish that ‘make’ collocates with ‘a basket’ more often than ‘do’), a good bilingualised dictionary (plenty of these now exist), and a tagged database of learner English. Again, appropriate automated feedback could be provided in the form of some sort of indication that ‘faire’ is only sometimes translated as ‘make’.
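The corpus part of this is, in principle, a simple frequency comparison. A toy version, with invented counts standing in for a real corpus:

```python
from collections import Counter

def better_collocate(bigram_counts: Counter, verbs, noun: str) -> str:
    """Pick the verb that most often collocates with the noun
    in a reference corpus of (verb, noun) lemma pairs."""
    return max(verbs, key=lambda v: bigram_counts[(v, noun)])

# Hypothetical counts:
counts = Counter({('make', 'basket'): 412, ('do', 'basket'): 7})
better_collocate(counts, ['make', 'do'], 'basket')  # -> 'make'
```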

These are both relatively simple examples, but it’s easy to think of others that are much more difficult to analyse automatically. Duolingo rejects ‘I am making one basket for my mother’: it’s not very plausible, but it’s not wrong. Teachers know why learners do this (again, it’s probably a transfer problem) and know how to respond (perhaps by saying something like ‘Only one?’). Duolingo also rejects ‘I making a basket for my mother’ (a common enough error), but is unable to provide any help beyond the correct answer. Automated CF could, however, be provided in both cases if more tools are brought into play. Multiple parsing machines (one is rarely accurate enough on its own) and semantic analysis will be needed. Both the range and the complexity of the available tools are increasing so rapidly (see here for the sort of research that Google is doing and here for an insight into current applications of this research in language learning) that Duolingo-style right / wrong feedback will very soon seem positively antediluvian.
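For the missing-auxiliary error (‘I making a basket’), a single off-the-shelf parser can already make a crude guess, though, as noted above, one parser is rarely accurate enough on its own. A sketch using spaCy (assuming the en_core_web_sm model is installed; the check is mine, not a production rule):

```python
import spacy

nlp = spacy.load("en_core_web_sm")

def missing_auxiliary(sentence: str) -> bool:
    """Crude check: a progressive verb form (VBG) acting as the
    root of the sentence with no auxiliary attached to it."""
    for token in nlp(sentence):
        if token.tag_ == "VBG" and token.dep_ == "ROOT":
            if not any(child.dep_ == "aux" for child in token.children):
                return True
    return False

# missing_auxiliary("I making a basket for my mother")     # likely True
# missing_auxiliary("I am making a basket for my mother")  # False
```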

One further development is worth mentioning here, and it concerns feedback and gamification. Teachers know from the way that most learners respond to written CF that they are usually much more interested in knowing what they got right or wrong, rather than the reasons for this. Most students are more likely to spend more time looking at the score at the bottom of a corrected piece of written work than at the laborious annotations of the teacher throughout the text. Getting students to pay close attention to the feedback we provide is not easy. Online language learning systems with gamification elements, like Duolingo, typically reward learners for getting things right, and getting things right in the fewest attempts possible. They encourage learners to look for the shortest or cheapest route to finding the correct answers: learning becomes a sexed-up form of test. If, however, the automated feedback is good, this sort of gamification encourages the wrong sort of learning behaviour. Gamification designers will need to shift their attention away from the current concern with right / wrong, and towards ways of motivating learners to look at and respond to feedback. It’s tricky, because you want to encourage learners to take more risks (and reward them for doing so), but it makes no sense to penalise them for getting things right. The probable solution is to have a dual points system: one set of points for getting things right, another for employing positive learning strategies.
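A dual points system is easy enough to sketch; deciding what counts as a ‘positive learning strategy’, and how to detect it, is the hard part. Purely illustrative:

```python
def award_points(correct: bool, read_feedback: bool,
                 retried_after_feedback: bool) -> dict:
    """Keep accuracy points separate from strategy points, so that
    risk-taking and engaging with feedback are rewarded without
    penalising correct answers."""
    return {
        'accuracy': 10 if correct else 0,
        'strategy': (5 if retried_after_feedback else 0)
                    + (2 if read_feedback else 0),
    }
```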

The provision of automated ‘optimal feedback at the point of need’ may not be quite there yet, but it seems we’re on the way for some tasks in discrete-item learning. There will probably always be some teachers who can outperform computers in providing appropriate feedback, in the same way that a few top chess players can beat ‘Deep Blue’ and its scions. But the rest of us had better watch our backs: in the provision of some kinds of feedback, computers are catching up with us fast.

[1] Ellis, R. & Shintani, N. (2014) Exploring Language Pedagogy through Second Language Acquisition Research. Abingdon: Routledge, p. 249

[2] Hattie, J. (2009) Visible Learning. Abingdon: Routledge, p. 12

[3] Li, S. (2010) ‘The effectiveness of corrective feedback in SLA: a meta-analysis’ Language Learning 60/2: 309-365

[4] Brown, P. C., Roediger, H. L. & McDaniel, M. A. (2014) Make It Stick. Cambridge, Mass.: Belknap Press

FluentU, busuu, Bliu Bliu … what is it with all the ‘u’s? Hong Kong-based FluentU used to be called FluentFlix, but they changed their name a while back. The service for English learners is relatively new. Before that, they focused on Chinese, where the competition is much less fierce.

At the core of FluentU is a collection of short YouTube videos, which are sorted into 6 levels and grouped into 7 topic categories. The videos are accompanied by transcriptions. As learners watch a video, they can click on any word in the transcript. This will temporarily freeze the video and show a pop-up which offers a definition of the word, information about part of speech, a couple of examples of this word in other sentences, and more example sentences of the word from other videos that are linked on FluentU. These can, in turn, be clicked on to bring up a video collage of these sentences. Learners can click on an ‘Add to Vocab’ button, which will add the word to personalised vocabulary lists. These are later studied through spaced repetition.

FluentU describes its approach in the following terms: ‘FluentU selects the best authentic video content from the web, and provides the scaffolding and support necessary to bring that authentic content within reach for your students.’ It seems appropriate, therefore, to look first at the nature of that content. At the moment, there appear to be just under 1,000 clips, which are allocated to levels as follows:

Newbie: 123; Elementary: 138; Intermediate: 294; Upper Intermediate: 274; Advanced: 111; Native: 40

It has to be assumed that the amount of content will continue to grow but, for the time being, it’s not unreasonable to say that there isn’t a lot there. I looked at the Upper Intermediate level, where the shortest clip was 32 seconds long and the longest 4 minutes 34 seconds, but most were between 1 and 2 minutes. That means that there is the equivalent of about 400 minutes (say, 7 hours) of video for this level.

The actual amount that anyone would want to watch / study can be seen to be significantly less when the topics are considered. These break down as follows:

Arts & entertainment: 105; Business: 34; Culture: 29; Everyday life: 60; Health & lifestyle: 28; Politics & society: 6; Science & tech: 17

The screenshots below give an idea of the videos on offer:

[Screenshots: FluentU video menus]

I may be a little difficult to please, but there wasn’t much here that appealed. Forget the movie trailers for crap movies, for a start. Forget the low-level business stuff, too. ‘The History of New Year’s Resolutions’ looked promising, but turned out to be a Wikipedia-style piece. FluentU certainly doesn’t have the eye for interesting, original video content of someone like Jamie Keddie or Kieran Donaghy.

But, perhaps, the underwhelming content is of less importance than what you do with it. After all, if you’re really interested in content, you can just go to YouTube and struggle through the transcriptions on your own. The transcripts can be downloaded as pdfs, which, strangely, are marked with a FluentU copyright notice. FluentU doesn’t need to own the copyright of the videos, because they just provide links, but claiming copyright for someone else’s script seemed questionable to me. Anyway, the only real reason to be on this site is to learn some vocabulary. How well does it perform?


Level is self-selected. It wasn’t entirely clear how videos had been allocated to level, but I didn’t find any major discrepancies between FluentU’s allocation and my own, intuitive grading of the content. Clicking on words in the transcript, the look-up / dictionary function wasn’t too bad, compared to some competing products I have looked at. The system could deal with some chunks and phrases (e.g. at your service, figure out) and the definitions were appropriate to the way these had been used in context. The accuracy was far from consistent, though. Some definitions were harder than the word they were explaining (e.g. telephone = an instrument used to call someone) and some were plain silly (e.g. the definition of I is me).

Some chunks were not recognised, so definitions were amusingly wonky. Come out, get through and have been were all wrong. For the phrase talk her into it, the program didn’t recognise the phrasal verb, and offered me communicate using speech for talk, and to the condition, state or form of for into.

For many words, there are pictures to help you with the meaning, but you wonder about some of them, e.g. the picture of someone clutching a suitcase to illustrate the meaning of of, or a woman holding up a finger and thumb to illustrate the meaning of what (as a pronoun).

The example sentences don’t seem to be graded in any way and are not always useful. The example sentences for of, for example, are The pages of the book are ripped, the lemurs of Madagascar and what time of day are you free. Since the definition is given as belonging to, there seems to be a problem with, at least, the last of these examples!

With the example sentences that link you to other video examples of a word being used, I found that they took a long time to load … and they really weren’t worth waiting for.

After a catalogue of problems like this, you might wonder how I can say that this function wasn’t too bad, but I’ve seen a lot worse. It was, at least, mostly accurate.

Moving away from the ‘Watch’ options, I explored the ‘Learn’ section. Bearing in mind that I had described myself as ‘Upper Intermediate’, I was surprised to be offered the following words for study: Good morning, may, help, think, so. This then took me to the following screen:

[Screenshot: FluentU ‘Great job!’ screen]

I was getting increasingly confused. After watching another video, I could practise some of the words I had highlighted, but, again, I wasn’t sure quite what was going on. There was a task that asked me to ‘pick the correct translation’, but this was, in fact, a multiple-choice dictation task.

Next, I was asked to study the meaning of the word in, followed by an unhelpful gap-fill task:

[Screenshot: FluentU gap-fill task]

Confused? I was. I decided to look for something a little more straightforward, and clicked on a menu of vocabulary flashcards that I could import. These included sets based on copyright material from both CUP and OUP, and I wondered what these publishers might think of their property being used in this way.

FluentU claims that it is based on the following principles:

  1. Individualized scaffolding: FluentU makes language learning easy by teaching new words with vocabulary students already know.
  2. Mastery Learning: FluentU sets students up for success by making sure they master the basics before moving on to more advanced topics.
  3. Gamification: FluentU incorporates the latest game design mechanics to make learning fun and engaging.
  4. Personalization: Each student’s FluentU experience is unlike anyone else’s. Video clips, examples, and quizzes are picked to match their vocabulary and interests.

The ‘individualized scaffolding’ is no more than common sense, dressed up in sciency-sounding language. The reference to ‘Mastery Learning’ is opaque, to say the least, with some confusion between language features and topic. The gamification is rudimentary, and the personalization is pretty limited. It doesn’t come cheap, either.

[image: FluentU price table]

Lingua.ly is an Israeli start-up which, in its own words, ‘is an innovative new learning solution that helps you learn a language from the open web’. Its platform ‘uses big-data paired with spaced repetition to help users bootstrap their way to fluency’. You can read more of this kind of adspeak at the Lingua.ly blog or the Wikipedia entry, which seems to have been written by someone from the company.

How does it work? First of all, you state the language you want to study (currently there are 10 available) and the language you already speak (currently there are 18 available). Then, there are three possible starting points: insert a word which you want to study, click on a word in any web text or click on a word in one of the suggested reading texts. This then brings up a bilingual dictionary entry which, depending on the word, will offer a number of parts of speech and a number of translated word senses. Click on the appropriate part of speech and the appropriate word sense, and the item will be added to your personal word list. Once you have a handful of words in your word list, you can begin practising these words. Here there are two options. The first is a spaced repetition flashcard system: it presents the target word and 8 different translations in your own language, and you have to click on the correct option. Like most flashcard apps, it uses spaced repetition software to determine when and how often you will be re-presented with the item.

The second option is to read an authentic web text which contains one or more of your target items. The company calls this ‘digital language immersion, a method of employing a virtual learning environment to simulate the language learning environment’. The app ‘relies on a number of applied linguistics principles, including the Natural Approach and Krashen’s Input Hypothesis’, according to the Wikipedia entry. Apparently, the more you use the app, the more it knows about you as a learner, and the better able it is to select texts that are appropriate for you. As you read these texts, of course, you can click on more words and add them to your word list.

I tried out Lingua.ly, logging on as a French speaker wanting to learn English, and clicking on words as the fancy took me. I soon had a selection of texts to read. Users are offered a topic menu consisting of the following: arts, business, education, entertainment, food, weird, beginners, green, health, living, news, politics, psychology, religion, science, sports, style. The sources are varied and not at all bad – Christian Science Monitor, The Grauniad, Huffington Post, Time, for example – and there are many very recent articles. Some texts were interesting; others seemed very niche. I began clicking on more words that I thought would be interesting to explore, and here my problems began.

I quickly discovered that the system could only deal with single words, so phrasal verbs were off limits. One text I looked at had the phrasal verb ‘ripping off’, and although I could get translations for ‘ripping’ and ‘off’, this was obviously not terribly helpful. Learners who don’t know the phrasal verb ‘ripped off’ do not necessarily know that it is a phrasal verb, so the translations offered for the two parts of the verb are worse than unhelpful; they are actually misleading. Proper nouns were also a problem, although some of the more common ones were recognised. But the system failed to recognise many proper nouns for what they were, and offered me translations of homonymous nouns. With some words (e.g. ‘stablemate’), the dictionary offered only one translation (in this case, the literal translation), but not the translation (the much more common idiomatic one) that was needed in the context in which I came across the word. With others (e.g. ‘pertain’), I was offered a list of translations which included the one that was appropriate in the context, but, unfortunately, this is the French word ‘porter’, which has so many possible meanings that, if you genuinely didn’t know the word, you would be none the wiser.
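The fix for this is well understood, even if Lingua.ly hasn’t implemented it: before falling back on single-word look-up, scan the text for known multi-word expressions, longest match first. Here is a minimal sketch of the idea in Python; the toy dictionaries and the crude lemmatizer are invented purely for illustration, and a real system would need a large MWE lexicon and proper lemmatization.

```python
# Minimal sketch: longest-match lookup of multi-word expressions (MWEs)
# before falling back to single words. The tiny dictionaries below are
# invented for illustration only.

MWE_DICT = {
    ("rip", "off"): "arnaquer (to swindle)",
    ("talk", "into"): "persuader (to persuade someone to do something)",
}
WORD_DICT = {"rip": "déchirer", "off": "de", "talk": "parler", "into": "dans"}

def lemma(token):
    # Crude stand-in for a real lemmatizer.
    return {"ripping": "rip", "ripped": "rip"}.get(token.lower(), token.lower())

def lookup(tokens):
    """Return (span, translation) pairs, preferring the longest MWE match."""
    results, i = [], 0
    while i < len(tokens):
        lemmas = [lemma(t) for t in tokens[i:]]
        # Try the longest possible expression first, then shrink.
        for length in range(len(lemmas), 1, -1):
            if tuple(lemmas[:length]) in MWE_DICT:
                results.append((tokens[i:i + length], MWE_DICT[tuple(lemmas[:length])]))
                i += length
                break
        else:
            results.append(([tokens[i]], WORD_DICT.get(lemmas[0], "?")))
            i += 1
    return results

print(lookup(["ripping", "off", "customers"]))
# [(['ripping', 'off'], 'arnaquer (to swindle)'), (['customers'], '?')]
```

Even this naive version would have returned a single, correct entry for ‘ripping off’ rather than two misleading ones, though it would still fail on separated phrasal verbs like ‘talk her into it’.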

Once you’ve clicked on an appropriate part of speech and translation (if you can find one), the dictionary look-up function offers both photos and example sentences. Here again there were problems. I’d clicked on the verb ‘pan’ which I’d encountered in the context of a critic panning a book they’d read. I was able to select an appropriate translation, but when I got to the photos, I was offered only multiple pictures of frying pans. There were no example sentences for my meaning of ‘pan’: instead, I was offered multiple sentences about cooking pans, and one about Peter Pan. In other cases, the example sentences were either unhelpful (e.g. the example for ‘deal’ was ‘I deal with that’) or bizarre (e.g. the example sentence for ‘deemed’ was ‘The boy deemed that he cheated in the examination’). For some words, there were no example sentences at all.

Primed in this way, I was intrigued to see how the system would deal with the phrase ‘heaving bosoms’, which came up in one text. ‘Heaving bosoms’ is an interesting case. It’s a strong collocation, and, statistically, the plural ‘heaving bosoms’ is much more frequent than the singular ‘a heaving bosom’. ‘Heaving’, as an adjective, only really collocates with ‘bosoms’: you don’t find ‘heaving’ collocating with any of the synonyms for ‘bosoms’. The phrase also carries heavy connotations, being strongly associated with romance novels and often used with humorous intent. Finally, there is also a problem of usage with ‘bosom’ / ‘bosoms’: men or women, one or two – all in all, it’s a tricky word.
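Collocational strength of this kind isn’t mysterious, by the way: it can be measured. Here is a minimal sketch using pointwise mutual information (PMI), which compares how often two words occur together with how often chance alone would predict. The four-line ‘corpus’ is deliberately silly and invented for illustration; a real measurement needs millions of words.

```python
# Minimal sketch: scoring a collocation with pointwise mutual information
# (PMI) over a toy corpus. Invented data, for illustration only.

import math
from collections import Counter

corpus = ("her heaving bosoms rose and fell "
          "the heaving bosoms of the heroine "
          "a heaving sea and a heaving deck "
          "bosoms were heaving in every chapter").split()

unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))
n = len(corpus)

def pmi(w1, w2):
    """PMI = log2( P(w1, w2) / (P(w1) * P(w2)) )."""
    p_joint = bigrams[(w1, w2)] / (n - 1)
    p_w1, p_w2 = unigrams[w1] / n, unigrams[w2] / n
    return math.log2(p_joint / (p_w1 * p_w2))

# A positive score means the pair co-occurs more than chance predicts.
print(round(pmi("heaving", "bosoms"), 2))
```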

Lingua.ly was no help at all. There was no dictionary entry for an adjectival ‘heaving’, and the translations for the verb ‘heave’ were amusing, but less than appropriate. As for ‘bosom’, there were appropriate translations (‘sein’ and ‘poitrine’), but absolutely no help with how the word is actually used. Example sentences, which are clearly not tagged to the translation which has been chosen, included ‘Or whether he shall die in the bosom of his family or neglected and despised in a foreign land’ and ‘Can a man take fire in his bosom, and his clothes not be burned?’

Lingua.ly has a number of problems. First off, its software hinges on a dictionary (it’s a Babylon dictionary) which can only deal with single words, is incomplete, and does not deal with collocation, connotation, style or register. As such, it can only be of limited value for receptive use, and of no value whatsoever for productive use. Secondly, the web corpus that it is using simply isn’t big enough. Thirdly, it doesn’t seem to have any Natural Language Processing tool which could enable it to deal with meanings in context. It can’t disambiguate words automatically. Such software does now exist, and Lingua.ly desperately needs it.
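To give an idea of what ‘such software’ looks like, here is a minimal sketch using NLTK’s implementation of the simplified Lesk algorithm, which picks the WordNet sense whose dictionary gloss overlaps most with the surrounding words. I’m not suggesting this is what Lingua.ly should ship; the point is only that basic word sense disambiguation is available off the shelf.

```python
# Minimal sketch: word sense disambiguation with NLTK's simplified Lesk
# implementation, which picks the WordNet sense whose gloss overlaps
# most with the context words.
# One-off setup: pip install nltk; then nltk.download('wordnet').

from nltk.wsd import lesk

context = "the critic panned the book in a withering review".split()
sense = lesk(context, "panned", pos="v")  # WordNet maps 'panned' to 'pan'

if sense is not None:
    # Lesk is crude and may well pick the wrong sense, but it shows that
    # sense selection can be automated rather than left to the learner.
    print(sense.name(), "-", sense.definition())
```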

Unfortunately, there are other problems, too. The flashcard practice is very repetitive and soon becomes boring. With eight translations to choose from, you have to scroll down the page to see them all. But there’s a timer mechanism, and I frequently timed out before being able to select the correct translation (partly because words are presented with no context, so you have to remember the sense you clicked on in an earlier study session). The texts do not seem to be graded for level. There is no indication of word frequency or word sense frequency. There is just one gamification element (a score card), but there is no indication of how scores are achieved. Last, but certainly not least, the system is buggy. My word list disappeared into the cloud earlier today, and has not been seen since.

I think it’s a pity that Lingua.ly is not better. The idea behind it is good – even if the references to Krashen are a little unfortunate. The company says that they have raised $800,000 in funding, but with their freemium model they’ll desperately need more, and they’ve gone to market too soon. One reviewer, Language Surfer, wrote a withering review of Lingua.ly’s Arabic program (‘it will do more harm than good to the Arabic student’), and Brendan Wightman, commenting at eltjam, called it ‘dull as dish water, […] still very crude, limited and replete with multiple flaws’. But, at least, it’s free.

After my second aborted attempt to learn some German through Duolingo, I decided to try something a little different. I started using word cards with my students many years ago, but when I say ‘word cards’, I mean word cards (i.e. on pieces of card). Although more recently I’ve encouraged students to use digital word cards with adaptive elements, I’d never seriously experimented with them myself. What I’ve learnt is that, whilst digital word cards are superior in many ways to the old-fashioned cards on card, the problems and limitations remain more or less the same.

Deliberate learning of vocabulary through the use of word cards is well supported by research: ‘Every piece of research comparing deliberate learning with incidental learning has shown that deliberate word learning easily beats incidental vocabulary learning in terms of the time taken to learn and the amount learnt. The deliberate learning studies also show that such learning lasts for a very long time’ (Nation, I.S.P. 2008, Teaching Vocabulary: Strategies and Techniques, Boston, MA: Heinle Cengage Learning, p. 104). The current crop of digital word cards simplifies the learner’s task enormously by allowing sets of words to be imported into the programs, by automatically calculating the intervals between repetitions / exposures, and by offering a range of task types and gamification elements to help motivation. I can’t imagine going back to old-fashioned dog-eared cards stuffed into a ‘vocab bag’.

I’ve been using Anki, but I didn’t choose it in preference to one of the many other free systems, such as Quizlet, for any particular reason. I’ve looked at a number of these systems, and, frankly, I don’t have any strong preference. Some have games, which are fun for a few minutes. Some have better gamification features than others. Some seem easier than others to use. It’s a fiercely competitive world, and new features are constantly being added. For any teacher wanting to try these word cards (or flash cards) for the first time – either with their students, or for themselves – I’d probably recommend Quizlet, for the simple reason that there’s a very good step-by-step guide to using these cards at Lizzie Pinard’s blog, ‘Reflections of an English Language Teacher’.
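For anyone curious what the ‘automatic calculation of intervals’ mentioned above actually involves: most of these apps use some variant of the SuperMemo SM-2 algorithm (Anki’s scheduler, for instance, is derived from it). Here is a minimal sketch of the core update rule; the constants are the published SM-2 ones, and real apps tweak them and add features on top.

```python
# Minimal sketch of an SM-2-style spaced repetition update. After each
# review, the learner grades recall from 0 (blackout) to 5 (perfect);
# the scheduler adjusts an 'ease factor' and the next review interval.

def sm2_update(ease, interval, repetition, grade):
    """Return (ease, interval_in_days, repetition) after one review."""
    if grade < 3:                      # failed: start the item again
        return ease, 1, 0
    # Standard SM-2 ease adjustment, clamped at a floor of 1.3
    ease = max(1.3, ease + 0.1 - (5 - grade) * (0.08 + (5 - grade) * 0.02))
    if repetition == 0:
        interval = 1                   # first successful review: 1 day
    elif repetition == 1:
        interval = 6                   # second: 6 days
    else:
        interval = round(interval * ease)  # then intervals grow geometrically
    return ease, interval, repetition + 1

# A card answered well four times in a row:
ease, interval, rep = 2.5, 0, 0
for grade in [5, 4, 5, 4]:
    ease, interval, rep = sm2_update(ease, interval, rep, grade)
    print(f"next review in {interval} day(s), ease {ease:.2f}")
```

The intervals stretch out quickly (1 day, 6 days, then weeks), which is exactly why reviewing on the bus is feasible: on any given day only a small fraction of your cards are due.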

Learning vocabulary – the task at the heart of language learning – necessarily entails a lot of memorization, and it makes sense for this to be done, as much as possible, outside the classroom. In fact, it has to be done outside the classroom, as there will simply never be enough time to do it in the classroom. Here is the first big problem. Even when my students, back in the 1990s, were equipped with their sets of cards, and had been instructed how to make the best use of them while sitting on the bus or the train (there were some excellent tips in Stuart Redman et al’s A Way with Words, CUP 1990), the majority just never managed to find the time. Despite all their protestations to the contrary, sufficient motivation was lacking. There is no reason to suppose that things will be any different with word card apps, even with all their gamification and games. It will remain the job of the teacher to push the motivation.

In addition to the central problem of motivation, there are a number of other areas in which digital word cards are no different from their cardboard predecessors. The first of these is that the majority of word cards do not contain enough information. Typically, there is just a translation; possibly a key to the part of speech, an example sentence and access to a recording of the word. There is only very rarely information about collocations, connotations or cultural background. Lexical priming is not going to happen this way! I have learnt, for example, from my Anki cards that die Ansiedlung means ‘location’ or ‘settlement’, but I’m still not too sure how to use the word. Word cards work best for receptive knowledge, for translating from the target language into your own language. They are less useful for learners who want or need to build their productive vocabulary. Learners can be helped by their teachers to produce or edit fuller, more useful cards, but this entails training. Training, in turn, usually entails classroom time.
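To make the point concrete, here is a minimal sketch of what a ‘fuller, more useful’ digital card might record, using my Ansiedlung example. The schema and its field names are my own invention, not any app’s format, and the German examples are illustrative.

```python
# Minimal sketch: a richer word card schema. The fields go beyond the
# typical translation-only card to cover collocation, connotation and a
# prompt for generative (productive) use. Illustrative only.

from dataclasses import dataclass, field

@dataclass
class WordCard:
    headword: str
    translation: str
    part_of_speech: str = ""
    example: str = ""                    # ideally from a context the learner met
    collocations: list = field(default_factory=list)
    connotation_notes: str = ""          # register, typical contexts
    generative_prompt: str = ""          # cue for re-using the word in a new context

card = WordCard(
    headword="die Ansiedlung",
    translation="settlement; (business) location",
    part_of_speech="noun (feminine)",
    example="Die Ansiedlung neuer Unternehmen wird gefördert.",
    collocations=["die Ansiedlung von Unternehmen", "eine Ansiedlung fördern"],
    connotation_notes="formal; common in planning and business contexts",
    generative_prompt="Describe a town trying to attract new companies.",
)
print(card.headword, "->", card.translation)
```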

Time (and motivation) is also needed to prepare the cards. All the digital apps allow lexical sets and ready-made cards to be imported, just as it used to be possible to buy sets of laminated cards and filing boxes. But there are three problems with taking this short-cut. Firstly, the ready-made sets are not usually very good (see the paragraph above), however glossy they may look. Secondly, and more importantly, ready-made sets are highly unlikely to match precisely the needs of individual classes, let alone individual learners. Finally, the effort involved in producing (and subsequently editing) one’s own cards will have a pay-off in long-term memorization. For all of these reasons, digital word card use is likely to be more effective if the teacher addresses these issues in the classroom.

Word cards are also static. Once the card has been prepared with a translation and an example sentence and so on, this tends to remain fixed. The problem here is that ‘learning is strengthened if the learner meets or uses the input again in a way that involves some change to the form and use of the word (Joe, 1998). That is, the new word is put into a slightly different context from the original meeting. This is called “generative use”’ (Nation, I.S.P. 2008, Teaching Vocabulary: Strategies and Techniques, Boston, MA: Heinle Cengage Learning, p. 27). Once again, there is useful classroom work that teachers can do to deal with this issue.

Repeated exposure to a vocabulary item through spaced repetition is likely to help that item end up in long-term memory. But frequency of repetition (what Patrick Hanks, in his book Lexical Analysis, describes as social salience) is not the end of the story. Long-term memorization is more likely to take place when there is what Hanks calls cognitive salience … and this is much more likely when the item is embedded and encountered in some sort of memorable (e.g. weird) context. Teachers can encourage their students to illustrate target items in cognitively salient ways, and they can also exploit the dynamics of the classroom environment to the same effect.

Despite the claims of word card enthusiasts like ‘Benny the Irish polyglot’, the blogger behind Fluent in 3 Months, no one is going to learn a language just by using this kind of software. ‘It should not be assumed that learning from word lists or word cards means that the words are learned forever, nor does it mean that all knowledge of a word has been learned, even though word cards can be designed to include a wide range of information about a word (Schmitt and Schmitt, 1995). Learning from lists or word cards is only an initial stage of learning a particular word. It is, however, a learning tool for use at any level of language proficiency’ (Nation, P. & Waring, R. ‘Vocabulary size, text coverage and word lists’ in Schmitt, N. & McCarthy, M. (eds) 1997, Vocabulary, Cambridge University Press, pp. 12–13).

In order to be able to use the words of a target language, confidently and fluently, learners will need opportunities to use them, meaningfully and communicatively. They will also benefit from feedback on how they are using them. Gamified gap-fills and matching tasks, score cards and progress charts cannot do this. Word card apps are a valuable tool for language learners, and can be very usefully exploited in blended contexts. If (and it’s a big ‘if’) students can be motivated to do this kind of self-study, classroom time can be freed up to spend on meaning-focused language practice and learning strategy training. In the second part of this post, I’ll be looking at specific, practical examples of what teachers can do in the classroom.

busuu is an online language learning service. I did not refer to it in the ‘guide’ because it does not seem to use any adaptive learning software yet, but this is set to change. According to founder Bernhard Niesner, the company is already working on incorporating adaptive software.

A few statistics will show the significance of busuu. The site currently has over 40 million users (El Pais, 8 February 2014) and is growing by 40,000 a day. The basic service is free, but the premium service costs €69.99 a year. The company will not give detailed user statistics, but say that ‘hundreds of thousands’ are paying for the premium service, that turnover was a seven-figure sum last year and will rise to eight figures this year.

It is easy to understand why traditional publishers might be worried about competition like busuu and why they are turning away from print-based courses.

Busuu offers 12 languages, but, as a translation-based service, any one of these languages can only be studied if you speak one of the other languages on offer. The levels of the different courses are tagged to the CEFR.

[image: busuu screenshot]

In some ways, busuu is not so different from competitors like Duolingo. Students are presented with bilingual vocabulary sets, accompanied by pictures, which are tested in a variety of ways. As with Duolingo, some of this is a little strange. For German at level A1, I did a vocabulary set on ‘pets’ which presented the German words for a ferret, a tortoise and a guinea-pig, among others. There are dialogues, both written and recorded, which are sometimes surreal:

Child: Mum, look over there, there’s a dog without a collar, can we take it?

Mother: No, darling, our house is too small to have a dog.

Child: Mum your bedroom is very big, it can sleep with dad and you.

Mother: Come on, I’ll buy you a toy dog.

The dialogues are followed up by multiple choice questions which test your memory of the dialogue. There are also writing exercises where you are given a picture from National Geographic and asked to write about it. It’s not always clear what one is supposed to write. What would you say about a photo that showed a large number of parachutes in the sky, beyond ‘I can see a lot of parachutes’?

There are also many gamification elements. There is a ‘learning carrot’, where you can set your own learning targets, and users can earn ‘busuuberries’ which can then be traded in for animations in a ‘language garden’.


But in one significant respect, busuu differs from its competitors. It combines the usual vocabulary, grammar and dialogue work with social networking. Users can interact with text or video, and feedback on written work comes from other users. My own experience with this was mixed, but the potential is clear. Feedback on other learners’ work is encouraged by the awarding of ‘busuuberries’.

We will have to wait and see what busuu does with adaptive software and what it will do with the big data it is generating. For the moment, its interest lies in illustrating what could be done with a learning platform and adaptive software. The big ELT publishers know they have a new kind of competition and, since they have a lot more money to invest than busuu, we have to assume that what they launch a few years from now will do everything that busuu does, and more. Meanwhile, busuu are working on site redesign and adaptivity. They would do well, too, to sort out their syllabus!