
From time to time, I have mentioned Programmed Learning (or Programmed Instruction) in this blog (here and here, for example). It felt like time to go into a little more detail about what Programmed Instruction was (and is) and why I think it’s important to know about it.

A brief description

The basic idea behind Programmed Instruction was that subject matter could be broken down into very small parts, which could be organised into an optimal path for presentation to students. Students worked, at their own speed, through a series of micro-tasks, building their mastery of each nugget of learning that was presented, not progressing from one to the next until they had demonstrated they could respond accurately to the previous task.

There were two main types of Programmed Instruction: linear programming and branching programming. In the former, every student would follow the same path, the same sequence of frames. This could be used in classrooms for whole-class instruction and I tracked down a book (illustrated below) called ‘Programmed English Course Student’s Book 1’ (Hill, 1966), which was an attempt to transfer the ideas behind Programmed Instruction to a zero-tech classroom environment. This is very similar in approach to the material I had to use when working at an Inlingua school in the 1980s.

Programmed English Course

Comparatives strip

An example of how self-paced programming worked is illustrated here, with a section on comparatives.

With branching programming, ‘extra frames (or branches) are provided for students who do not get the correct answer’ (Kay et al., 1968: 19). This was only suitable for self-study, but it was clearly preferable, as it allowed for self-pacing and some personalization. The material could be presented in books (which meant that students had to flick back and forth in their books) or with special ‘teaching machines’, but the latter were preferred.

In the words of an early enthusiast, Programmed Instruction was essentially ‘a device to control a student’s behaviour and help him to learn without the supervision of a teacher’ (Kay et al., 1968: 58). The approach was inspired by the work of Skinner and it was first used as part of a university course in behavioural psychology taught by Skinner at Harvard University in 1957. It moved into secondary schools for teaching mathematics in 1959 (Saettler, 2004: 297).

Enthusiasm and uptake

The parallels between current enthusiasm for the power of digital technology to transform education and the excitement about Programmed Instruction and teaching machines in the 1960s are very striking (McDonald et al., 2005: 90). In 1967, it was reported that ‘we are today on the verge of what promises to be a revolution in education’ (Goodman, 1967: 3) and that ‘tremors of excitement ran through professional journals and conferences and department meetings from coast to coast’ (Kennedy, 1967: 871). The following year, another commentator referred to the way that the field of education had been stirred ‘with an almost Messianic promise of a breakthrough’ (Ornstein, 1968: 401). Programmed instruction was also seen as an exciting business opportunity: ‘an entire industry is just coming into being and significant sales and profits should not be too long in coming’, wrote one hopeful financial analyst as early as 1961 (Kozlowski, 1961: 47).

The new technology seemed to offer a solution to the ‘problems of education’. Media reports in 1963 in Germany, for example, discussed a shortage of teachers, large classes and inadequate learning progress: an ‘urgent pedagogical emergency’ that traditional teaching methods could not resolve (Hof, 2018). Individualised learning, through Programmed Instruction, would equalise educational opportunity and, if you weren’t part of it, you would be left behind. In the US, two billion dollars were spent on educational technology by the government in the decade following the passing of the National Defense Education Act, supplemented by grants from private foundations. As a result, ‘the production of teaching machines began to flourish, accompanied by the marketing of numerous ‘teaching units’ stamped into punch cards as well as less expensive didactic programme books and index cards. The market grew dramatically in a short time’ (Hof, 2018).

In the field of language learning, however, enthusiasm was more muted. In the year in which he completed his doctoral studies[1], the eminent linguist Bernard Spolsky noted that ‘little use is actually being made of the new technique’ (Spolsky, 1966). A year later, a survey of over 600 foreign language teachers at US colleges and universities reported that only about 10% of them had programmed materials in their departments (Valdman, 1968: 1). In most of these cases, the materials ‘were being tried out on an experimental basis under the direction of their developers’. And two years after that, it was reported that ‘programming has not yet been used to any very great extent in language teaching, so there is no substantial body of experience from which to draw detailed, water-tight conclusions’ (Howatt, 1969: 164).

By the early 1970s, Programmed Instruction was already beginning to seem like yesterday’s technology, even though the principles behind it are still very much alive today (Thornbury (2017) refers to Duolingo as ‘Programmed Instruction’). It would be nice to think that language teachers of the day were more sceptical than, for example, their counterparts teaching mathematics. It would be nice to think that, like Spolsky, they had taken on board Chomsky’s (1959) demolition of Skinner. But the widespread popularity of Audiolingual methods suggests otherwise. Audiolingualism, based essentially on the same Skinnerian principles as Programmed Instruction, needed less outlay on technology. The machines (a slide projector and a record or tape player) were cheaper than the teaching machines, could be used for other purposes and did not become obsolete so quickly. The method also lent itself more readily to established school systems (i.e. whole-class teaching) and the skills sets of teachers of the day. Significantly, too, there was relatively little investment in Programmed Instruction for language teaching (compared to, say, mathematics), since this was a smallish and more localized market. There was no global market for English language learning as there is today.

Lessons to be learned

1 Shaping attitudes

It was not hard to persuade some educational authorities of the value of Programmed Instruction. As discussed above, it offered a solution to ‘the chronic shortage of adequately trained and competent teachers at all levels in our schools, colleges and universities’, wrote Goodman (1967: 3), who added that ‘there is growing realisation of the need to give special individual attention to handicapped children and to those apparently or actually retarded’. The new teaching machines ‘could simulate the human teacher and carry out at least some of his functions quite efficiently’ (Goodman, 1967: 4). This wasn’t quite the same thing as saying that the machines could replace teachers, although some might have hoped for this. The official line was more often that the machines could ‘be used as devices, actively co-operating with the human teacher as adaptive systems and not just merely as aids’ (Goodman, 1967: 37). But this more nuanced message did not always get through, and ‘the Press soon stated that robots would replace teachers and conjured up pictures of classrooms of students with little iron men in front of them’ (Kay et al., 1968: 161).

For teachers, though, it was one thing to be told that the machines would free their time for more meaningful tasks, and quite another to believe it when the message was accompanied by a ‘rhetoric of the instructional inadequacies of the teacher’ (McDonald et al., 2005: 88). Many teachers felt threatened. Some ‘reacted against the ‘unfeeling machine’ as a poor substitute for the warm, responsive environment provided by a real, live teacher. Others have seemed to take it more personally, viewing the advent of programmed instruction as the end of their professional career as teachers. To these, even the mention of programmed instruction produces a momentary look of panic followed by the appearance of determination to stave off the ominous onslaught somehow’ (Tucker, 1972: 63).

Some of those who were pushing for Programmed Instruction had a bigger agenda, with their sights set firmly on broader school reform made possible through technology (Hof, 2018). Individualised learning and Programmed Instruction were not just ends in themselves: they were ways of facilitating bigger changes. The trouble was that teachers were necessary for Programmed Instruction to work. On the practical level, it became apparent that a blend of teaching machines and classroom teaching was more effective than the machines alone (Saettler, 2004: 299). But the teachers’ attitudes were crucial: a research study involving over 6000 students of Spanish showed that ‘the more enthusiastic the teacher was about programmed instruction, the better the work the students did, even though they worked independently’ (Saettler, 2004: 299). In other researched cases, too, ‘teacher attitudes proved to be a critical factor in the success of programmed instruction’ (Saettler, 2004: 301).

2 Returns on investment

Pricing a hyped edtech product is a delicate matter. Vendors need to see a relatively quick return on their investment, before a newer technology knocks them out of the market. Developments in computing were fast in the late 1960s, and the first commercially successful personal computer, the Altair 8800, appeared in 1974. But too high a price carried obvious risks. In 1967, the cheapest teaching machine in the UK, the Tutorpack (from Packham Research Ltd), cost £7 12s (equivalent to about £126 today), but machines like these were disparagingly referred to as ‘page-turners’ (Higgins, 1983: 4). A higher-end linear programming machine cost twice this amount. Branching programme machines cost a lot more. The Mark II AutoTutor (from USI Great Britain Limited), for example, cost £31 per month (equivalent to £558), with eight reels of programmes thrown in (Goodman, 1967: 26). A lower-end branching machine, the Grundytutor, could be bought for £230 (worth about £4140 today).
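For readers unfamiliar with pre-decimal British currency (20 shillings to the pound, 12 pence to the shilling), the sums above can be sketched in a few lines. The inflation multipliers here are not official indices: they are simply back-calculated from the modern equivalents quoted in the text, for illustration only.

```python
def to_decimal_pounds(pounds, shillings=0, pence=0):
    # Pre-decimal British currency: 20 shillings to the pound,
    # 12 pence to the shilling (so 240 pence to the pound).
    return pounds + shillings / 20 + pence / 240

# The cheapest machine, the Tutorpack, cost seven pounds and twelve shillings.
tutorpack = to_decimal_pounds(7, 12)
print(round(tutorpack, 2))  # 7.6

# Inflation multipliers implied by the modern equivalents quoted above
# (a back-calculation for illustration, not an official price index).
for price_then, price_now in [(tutorpack, 126), (31, 558), (230, 4140)]:
    print(round(price_now / price_then, 1))
```

The implied multipliers come out at roughly 16.6 to 18, so the figures quoted in the text are at least internally consistent.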

Teaching machines (from Goodman); AutoTutor Mk II (from Goodman)

This was serious money, and any institution splashing out on teaching machines needed to be confident that they would be well used for a long period of time (Nordberg, 1965). The programmes (the software) were specific to individual machines and the content could not be updated easily. At the same time, other technological developments (cine projectors, tape recorders, record players) were arriving in classrooms, and schools found themselves having to pay for technical assistance and maintenance. The average teacher was ‘unable to avail himself fully of existing aids because, to put it bluntly, he is expected to teach for too many hours a day and simply has not the time, with all the administrative chores he is expected to perform, either to maintain equipment, to experiment with it, let alone keeping up with developments in his own and wider fields. The advent of teaching machines which can free the teacher to fulfil his role as an educator will intensify and not diminish the problem’ (Goodman, 1967: 44). Teaching machines, in short, were ‘oversold and underused’ (Cuban, 2001).

3 Research and theory

Looking back twenty years later, B. F. Skinner conceded that ‘the machines were crude, [and] the programs were untested’ (Skinner, 1986: 105). The documentary record suggests that the second part of this statement is not entirely true. Herrick (1966: 695) reported that ‘an overwhelming amount of research time has been invested in attempts to determine the relative merits of programmed instruction when compared to ‘traditional’ or ‘conventional’ methods of instruction. The results have been almost equally overwhelming in showing no significant differences’. In 1968, Kay et al. (1968: 96) noted that ‘there has been a definite effort to examine programmed instruction’. A later meta-analysis of research in secondary education (Kulik et al., 1982) confirmed that ‘Programmed Instruction did not typically raise student achievement […] nor did it make students feel more positively about the subjects they were studying’.

It was not, therefore, the case that research was not being done. It was that many people preferred not to look at it. The same holds true for theoretical critiques. In relation to language learning, Spolsky (1966) referred to Chomsky’s (1959) rebuttal of Skinner’s arguments, adding that ‘there should be no need to rehearse these inadequacies, but as some psychologists and even applied linguists appear to ignore their existence it might be as well to remind readers of a few’. Programmed Instruction might have had a limited role to play in language learning, but vendors’ claims went further than that and some people believed them: ‘Rather than addressing themselves to limited and carefully specified FL tasks – for example the teaching of spelling, the teaching of grammatical concepts, training in pronunciation, the acquisition of limited proficiency within a restricted number of vocabulary items and grammatical features – most programmers aimed at self-sufficient courses designed to lead to near-native speaking proficiency’ (Valdman, 1968: 2).

4 Content

When learning is conceptualised as purely the acquisition of knowledge, technological optimists tend to believe that machines can convey it more effectively and more efficiently than teachers (Hof, 2018). The corollary of this is the belief that, if you get the materials right (plus the order in which they are presented and appropriate feedback), you can ‘to a great extent control and engineer the quality and quantity of learning’ (Post, 1972: 14). Learning, in other words, becomes an engineering problem, and technology is its solution.

One of the problems was that technology vendors were, first and foremost, technology specialists. Content was almost an afterthought. Materials writers needed to be familiar with the technology and, if not, they were unlikely to be employed. Writers needed to believe in the potential of the technology, so those familiar with current theory and research would clearly not fit in. The result was unsurprising. Kennedy (1967: 872) reported that ‘there are hundreds of programs now available. Many more will be published in the next few years. Watch for them. Examine them critically. They are not all of high quality’. He was being polite.

5 Motivation

As is usually the case with new technologies, there was a positive novelty effect with Programmed Instruction. And, as is always the case, the novelty effect wears off: ‘students quickly tired of, and eventually came to dislike, programmed instruction’ (McDonald et al., 2005: 89). It could not really have been otherwise: ‘human learning and intrinsic motivation are optimized when persons experience a sense of autonomy, competence, and relatedness in their activity. Self-determination theorists have also studied factors that tend to occlude healthy functioning and motivation, including, among others, controlling environments, rewards contingent on task performance, the lack of secure connection and care by teachers, and situations that do not promote curiosity and challenge’ (McDonald et al., 2005: 93). The demotivating experience of using these machines was particularly acute with younger and ‘less able’ students, as was noted at the time (Valdman, 1968: 9).

The unlearned lessons

I hope that you’ll now understand why I think the history of Programmed Instruction is so relevant to us today. In the words of my favourite Yogi-ism, it’s like déjà vu all over again. I have quoted repeatedly from the article by McDonald et al. (2005) and I would highly recommend it – available here. Hopefully, too, Audrey Watters’ forthcoming book, ‘Teaching Machines’, will appear before too long, and she will, no doubt, have much more of interest to say on this topic.

References

Chomsky, N. 1959. ‘Review of Skinner’s Verbal Behavior’. Language, 35, 26-58

Cuban, L. 2001. Oversold & Underused: Computers in the Classroom. (Cambridge, MA: Harvard University Press)

Goodman, R. 1967. Programmed Learning and Teaching Machines 3rd edition. (London: English Universities Press)

Herrick, M. 1966. ‘Programmed Instruction: A critical appraisal’ The American Biology Teacher, 28 (9), 695-698

Higgins, J. 1983. ‘Can computers teach?’ CALICO Journal, 1 (2)

Hill, L. A. 1966. Programmed English Course Student’s Book 1. (Oxford: Oxford University Press)

Hof, B. 2018. ‘From Harvard via Moscow to West Berlin: educational technology, programmed instruction and the commercialisation of learning after 1957’ History of Education, 47 (4), 445-465

Howatt, A. P. R. 1969. Programmed Learning and the Language Teacher. (London: Longmans)

Kay, H., Dodd, B. & Sime, M. 1968. Teaching Machines and Programmed Instruction. (Harmondsworth: Penguin)

Kennedy, R. H. 1967. ‘Before using Programmed Instruction’ The English Journal, 56 (6), 871-873

Kozlowski, T. 1961. ‘Programmed Teaching’ Financial Analysts Journal, 17 (6), 47-54

Kulik, C.-L., Schwalb, B. & Kulik, J. 1982. ‘Programmed Instruction in Secondary Education: A Meta-analysis of Evaluation Findings’ Journal of Educational Research, 75, 133-138

McDonald, J. K., Yanchar, S. C. & Osguthorpe, R. T. 2005. ‘Learning from Programmed Instruction: Examining Implications for Modern Instructional Technology’ Educational Technology Research and Development, 53 (2), 84-98

Nordberg, R. B. 1965. ‘Teaching machines: six dangers and one advantage’. In J. S. Roucek (Ed.), Programmed Teaching: A Symposium on Automation in Education (pp. 1-8). (New York: Philosophical Library)

Ornstein, J. 1968. ‘Programmed Instruction and Educational Technology in the Language Field: Boon or Failure?’ The Modern Language Journal, 52 (7), 401-410

Post, D. 1972. ‘Up the programmer: How to stop PI from boring learners and strangling results’. Educational Technology, 12 (8), 14-1

Saettler, P. 2004. The Evolution of American Educational Technology. (Greenwich, Conn.: Information Age Publishing)

Skinner, B. F. 1986. ‘Programmed Instruction Revisited’ The Phi Delta Kappan, 68 (2), 103-110

Spolsky, B. 1966. ‘A psycholinguistic critique of programmed foreign language instruction’ International Review of Applied Linguistics in Language Teaching, 4 (1-4), 119-130

Thornbury, S. 2017. Scott Thornbury’s 30 Language Teaching Methods. (Cambridge: Cambridge University Press)

Tucker, C. 1972. ‘Programmed Dictation: An Example of the P.I. Process in the Classroom’. TESOL Quarterly, 6 (1), 61-70

Valdman, A. 1968. ‘Programmed Instruction versus Guided Learning in Foreign Language Acquisition’ Die Unterrichtspraxis / Teaching German, 1 (2), 1-14

[1] Spolsky’s doctoral thesis for the University of Montreal was entitled ‘The psycholinguistic basis of programmed foreign language instruction’.

In my last post, I asked why it is so easy to believe that technology (in particular, technological innovations) will offer solutions to whatever problems exist in language learning and teaching. A simple, but inadequate, answer is that huge amounts of money have been invested in persuading us. Without wanting to detract from the significance of this, it is clearly not sufficient as an explanation. In an attempt to develop my own understanding, I have been turning more and more to the idea of ‘social imaginaries’. In many ways, this is also an attempt to draw together the various interests that I have had since starting this blog.

The Canadian philosopher, Charles Taylor, describes a ‘social imaginary’ as a ‘common understanding that makes possible common practices and a widely shared sense of legitimacy’ (Taylor, 2004: 23). As a social imaginary develops over time, it ‘begins to define the contours of [people’s] worlds and can eventually come to count as the taken-for-granted shape of things, too obvious to mention’ (Taylor, 2004: 29). It is, however, not just a set of ideas or a shared narrative: it is also a set of social practices that enact those understandings, whilst at the same time modifying or solidifying them. The understandings make the practices possible, and it is the practices that largely carry the understanding (Taylor, 2004: 25). In the process, the language we use is filled with new associations and our familiarity with these associations shapes ‘our perceptions and expectations’ (Worster, 1994, quoted in Moore, 2015: 33). A social imaginary, then, is a complex system that is not technological or economic or social or political or educational, but all of these (Urry, 2016). The image of the patterns of an amorphous mass of moving magma (Castoriadis, 1987), flowing through pre-existing channels, but also, at times, striking out along new paths, may offer a helpful metaphor.

Lava flow Hawaii

Technology, of course, plays a key role in contemporary social imaginaries and the term ‘sociotechnical imaginary’ is increasingly widely used. The understandings of the sociotechnical imaginary typically express visions of social progress and a desirable future that is made possible by advances in science and technology (Jasanoff & Kim, 2015: 4). In education, technology is presented as capable of overcoming human failings and the dark ways of the past, of facilitating a ‘pedagogical utopia of natural, authentic teaching and learning’ (Friesen, forthcoming). As such understandings become more widespread and as the educational practices (platforms, apps, etc.) which both shape and are shaped by them become equally widespread, technology has come to be seen as a ‘solution’ to the ‘problem’ of education (Friesen, forthcoming). We need to be careful, however, that the technology we have shaped does not come to shape us (see Cobo, 2019, for a further exploration of this idea).

As a way of beginning to try to understand what is going on in edtech in ELT, which is not so very different from what is taking place in education more generally, I have sketched a number of what I consider key components of the shared understandings and the social practices that are related to them. These are closely interlocking pieces and each of them is itself embedded in much broader understandings. They evolve over time and their history can be traced quite easily. Taken together, they do, I think, help us to understand a little more why technology in ELT seems so seductive.

1 The main purpose of English language teaching is to prepare people for the workplace

There has always been a strong connection between learning an additional living language (such as English) and preparing for the world of work. The first modern language schools, such as the Berlitz schools at the end of the 19th century with their native-speaker teachers and monolingual methods, positioned themselves as primarily vocational, in opposition to the kinds of language teaching taking place in schools and universities, which were more broadly humanistic in their objectives. Throughout the 20th century, and especially as English grew as a global language, the public sector, internationally, grew closer to the methods and objectives of the private schools. The idea that learning English might serve other purposes (e.g. cultural enrichment or personal development) has never entirely gone away, as witnessed by the Council of Europe’s list of objectives (including the promotion of mutual understanding and European co-operation, and the overcoming of prejudice and discrimination) in the Common European Framework, but it is often forgotten.

The clarion calls from industry to better align education with labour markets, present and future, grow louder all the time, often finding expression in claims that ‘education is unfit for purpose.’ It is invariably assumed that this purpose is to train students in the appropriate skills to enhance their ‘human capital’ in an increasingly competitive and global market (Lingard & Gale, 2007). Educational agendas are increasingly set by the world of business: bodies like the OECD or the World Economic Forum, corporations like Google or Microsoft, and national governments which share their priorities (see my earlier post about neo-liberalism and solutionism).

One way in which this shift is reflected in English language teaching is in the growing emphasis that is placed on ‘21st century skills’ in teaching material. Sometimes called ‘life skills’, they are very clearly concerned with the world of work, rather than the rest of our lives. The World Economic Forum’s 2018 Future of Jobs survey lists the soft skills that are considered important in the near future and they include ‘creativity’, ‘critical thinking’, ‘emotional intelligence’ and ‘leadership’. (The fact that the World Economic Forum is made up of a group of huge international corporations (e.g. J.P. Morgan, HSBC, UBS, Johnson & Johnson) with a very dubious track record of embezzlement, fraud, money-laundering and tax evasion has not resulted in much serious, public questioning of the view of education expounded by the WEF.)

Without exception, the ELT publishers have brought these work / life skills into their courses, and the topic is an extremely popular one in ELT blogs and magazines, and at conferences. Two of the four plenaries at this year’s international IATEFL conference are concerned with these skills. Pearson has a wide range of related products, including ‘a four-level competency-based digital course that provides engaging instruction in the essential work and life skills competencies that adult learners need’. Macmillan ELT made ‘life skills’ the central plank of their marketing campaign and approach to product design, and even won a British Council ELTon award (see below) for ‘Innovation in teacher resources’ in 2015 for their ‘life skills’ marketing campaign. Cambridge University Press has developed a ‘Framework for Life Competencies’ which allows these skills to be assigned numerical values.

The point I am making here is not that these skills do not play an important role in contemporary society, nor that English language learners may not benefit from some training in them. The point, rather, is that the assumption that English language learning is mostly concerned with preparation for the workplace has become so widespread that it becomes difficult to think in another way.

2 Technological innovation is good and necessary

The main reason that soft skills are deemed to be so important is that we live in a rapidly-changing world, where the unsubstantiated claim that 85% (or whatever other figure comes to mind) of current jobs won’t exist 10 years from now is so often repeated that it is taken as fact. Whether or not this is true is perhaps less important to those who make the claim than the present and the future that they like to envisage. The claim is, at least, true-ish enough to resonate widely. Since these jobs will disappear, and new ones will emerge, because of technological innovations, education, too, will need to innovate to keep up.

English language teaching has not been slow to celebrate innovation. There were coursebooks called ‘Cutting Edge’ (1998) and ‘Innovations’ (2005), but more recently the connections between innovation and technology have become much stronger. The title of the recent ‘Language Hub’ (2019) was presumably chosen, in part, to conjure up images of digital whizzkids in fashionable co-working start-up spaces. Technological innovation is explicitly promoted in the Special Interest Groups of IATEFL and TESOL. Despite a singular lack of research that unequivocally demonstrates a positive connection between technology and language learning, the objective of the IATEFL group is ‘to raise awareness among ELT professionals of the power of learning technologies to assist with language learning’. There is a popular annual conference, called InnovateELT, which has the tagline ‘Be Part of the Solution’, and the first problem that this may be a solution to is that our students need to be ‘ready to take on challenging new careers’.

Last, but by no means least, there are the annual British Council ELTon awards, with a special prize for digital innovation. Among the British Council’s own recent innovations are a range of digitally-delivered resources to develop work / life skills among teens.

Again, my intention (here) is not to criticise any of the things mentioned in the preceding paragraphs. It is merely to point to a particular structure of feeling and the way that it is enacted and strengthened through material practices like books, social groups, conferences and other events.

3 Technological innovations are best driven by the private sector

The vast majority of people teaching English language around the world work in state-run primary and secondary schools. They are typically not native-speakers of English, they hold national teaching qualifications and they are frequently qualified to teach other subjects in addition to English (often another language). They may or may not self-identify as teachers of ‘ELT’ or ‘EFL’, often seeing themselves more as ‘school teachers’ or ‘language teachers’. People who self-identify as part of the world of ‘ELT’ or ‘TEFL’ are more likely to be native speakers and to work in the private sector (including private or semi-private language schools, universities (which, in English-speaking countries, are often indistinguishable from private sector institutions), publishing companies, and freelancers). They are more likely to hold international (TEFL) qualifications or higher degrees, and they are less likely to be involved in the teaching of other languages.

The relationship between these two groups is well illustrated by the practice of training days, where groups of a few hundred state-school teachers participate in workshops organised by publishing companies and delivered by ELT specialists. In this context, state-school teachers are essentially in a client role when they are in contact with the world of ‘ELT’ – as buyers or potential buyers of educational products, training or technology.

Technological innovation is invariably driven by the private sector. This may be in the development of technologies (platforms, apps and so on), in the promotion of technology (through training days and conference sponsorship, for example), or in training for technology (with consultancy companies like ELTjam or The Consultants-E, which offer a wide range of technologically oriented ‘solutions’).

As in education more generally, it is believed that the private sector can be more agile and more efficient than state-run bodies, which continue to decline in importance in educational policy-setting. When state-run bodies are involved in technological innovation in education, it is normal for them to work in partnership with the private sector.

4 Accountability is crucial

Efficacy is vital. It makes no sense to innovate unless the innovations improve something, but for us to know this, we need a way to measure it. In a previous post, I looked at Pearson’s ‘Asking More: the Path to Efficacy’ by CEO John Fallon (who will be stepping down later this year). Efficacy in education, says Fallon, is ‘making a measurable impact on someone’s life through learning’. ‘Measurable’ is the key word, because, as Fallon claims, ‘it is increasingly possible to determine what works and what doesn’t in education, just as in healthcare.’ We need ‘a relentless focus’ on ‘the learning outcomes we deliver’ because it is these outcomes that can be measured in ‘a systematic, evidence-based fashion’. Measurement, of course, is all the easier when education is delivered online, ‘real-time learner data’ can be captured, and the power of analytics can be deployed.

Data is evidence, and it’s as easy to agree on the importance of evidence as it is hard to decide on (1) what it is evidence of, and (2) what kind of data is most valuable. While those questions remain largely unanswered, the data-capturing imperative invades more and more domains of the educational world.

English language teaching is becoming data-obsessed. From language scales, like Pearson's Global Scale of English, to scales of teacher competences, from numerically-oriented formative assessment practices (such as those used on many LMSs) to the reporting of effect sizes in meta-analyses (such as those used by John Hattie and colleagues), datafication in ELT accelerates non-stop.
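The effect sizes that Hattie aggregates are just standardised mean differences. A minimal sketch of the arithmetic (illustrative only: Hattie's figures are pooled from published studies, not computed from raw scores like this):

```python
from statistics import mean, stdev

def cohens_d(treatment, control):
    """Cohen's d: difference between group means in pooled-SD units."""
    n1, n2 = len(treatment), len(control)
    s1, s2 = stdev(treatment), stdev(control)
    # Pooled standard deviation of the two samples
    pooled = (((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2) / (n1 + n2 - 2)) ** 0.5
    return (mean(treatment) - mean(control)) / pooled
```

A d of 0.4 is Hattie's much-quoted 'hinge point'; the controversy is less about the arithmetic than about what gets pooled into it.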

The scales and frameworks are all problematic in a number of ways (see, for example, this post on ‘The Mismeasure of Language’) but they have undeniably shaped the way that we are able to think. Of course, we need measurable outcomes! If, for the present, there are privacy and security issues, it is to be hoped that technology will find solutions to them, too.


 

At the start of the last decade, ELT publishers were worried, Macmillan among them. The financial crash of 2008 led to serious difficulties, not least in their key Spanish market. In 2011, Macmillan's parent company was fined £11.3 million for corruption. Under new ownership, restructuring was a constant. At the same time, Macmillan ELT was getting ready to move from its Oxford headquarters to new premises in London, a move which would inevitably lead to the loss of a sizable proportion of its staff. On top of that, Macmillan, like the other ELT publishers, was aware that changes in the digital landscape (the first 3G iPhone had appeared in June 2008 and wifi access was spreading rapidly around the world) meant that they needed to shift away from the old print-based model. With her finger on the pulse, Caroline Moore wrote an article in October 2010 entitled 'No Future? The English Language Teaching Coursebook in the Digital Age'. The publication (at the start of the decade) and runaway success of the online 'Touchstone' course from arch-rivals Cambridge University Press meant that Macmillan needed to change fast if they were to avoid being left behind.

Macmillan already had a platform, Campus, but it was generally recognised as being clunky and outdated, and something new was needed. In the summer of 2012, Macmillan brought in two new executives – people who could talk the 'creative-disruption' talk and who believed in the power of big data to shake up English language teaching and publishing. At the time, the idea of big data was beginning to reach public consciousness, and 'Big Data: A Revolution that Will Transform how We Live, Work, and Think' by Viktor Mayer-Schönberger and Kenneth Cukier was a major bestseller in 2013 and 2014. 'Big data' was the 'hottest trend' in technology and peaked in Google Trends in October 2014. See the graph below.

Big_data_Google_Trend

Not long after taking up their positions, the two executives began negotiations with Knewton, an American adaptive learning company. Knewton’s technology promised to gather colossal amounts of data on students using Knewton-enabled platforms. Its founder, Jose Ferreira, bragged that Knewton had ‘more data about our students than any company has about anybody else about anything […] We literally know everything about what you know and how you learn best, everything’. This data would, it was claimed, enable publishers to multiply, by orders of magnitude, the efficacy of learning materials, allowing publishers, like Macmillan, to provide a truly personalized and optimal offering to learners using their platform.

The contract between Macmillan and Knewton was agreed in May 2013 'to build next-generation English Language Learning and Teaching materials'. Perhaps fearful of being left behind in what was seen to be a winner-takes-all market (Pearson already had a financial stake in Knewton), Cambridge University Press duly followed suit, signing a contract with Knewton in September of the same year, in order 'to create personalized learning experiences in [their] industry-leading ELT digital products'. Things moved fast because, by the start of 2014 when Macmillan's new catalogue appeared, customers were told to 'watch out for the "Big Tree"', Macmillan's new platform, which would be powered by Knewton. 'The power that will come from this world of adaptive learning takes my breath away', wrote the international marketing director.

Not a lot happened next, at least outwardly. In the following year, 2015, the Macmillan catalogue again told customers to 'look out for the Big Tree', which would offer 'flexible blended learning models' that could 'give teachers much more freedom to choose what they want to do in the class and what they want the students to do online outside of the classroom'.

Macmillan_catalogue_2015

But behind the scenes, everything was going wrong. It had become clear that a linear model of language learning, which was a necessary prerequisite of the Knewton system, simply did not lend itself to anything which would be remotely marketable in established markets. Skills development, not least the development of so-called 21st century skills, which Macmillan was pushing at the time, would not be facilitated by collecting huge amounts of data and algorithms offering personalized pathways. Even if it could have been, teachers weren't ready for it, and the projections for platform adoptions were beginning to seem very over-optimistic. Costs were spiralling. Pushed to meet unrealistic deadlines for a product that was totally ill-conceived in the first place, in-house staff were suffering, and this was made worse by what many staffers thought was a toxic work environment. By the end of 2014 (so, before the copy for the 2015 catalogue had been written), the two executives had gone.

For some time previously, skeptics had been joking that Macmillan had been barking up the wrong tree, and by the time that the 2016 catalogue came out, the ‘Big Tree’ had disappeared without trace. The problem was that so much time and money had been thrown at this particular tree that not enough had been left to develop new course materials (for adults). The whole thing had been a huge cock-up of an extraordinary kind.

Cambridge, too, lost interest in their Knewton connection, but were fortunate (or wise) not to have invested so much energy in it. Language learning was only ever a small part of Knewton's portfolio, and the company had raised over $180 million in venture capital. Its founder, Jose Ferreira, had been a master of marketing hype, but the business model was not delivering any better than the educational side of things. Pearson pulled out. In December 2016, Ferreira stepped down and was replaced as CEO. The company shifted to 'selling digital courseware directly to higher-ed institutions and students' but this could not stop the decline. In September of 2019, Knewton was sold for something under $17 million, with investors taking a hit of over $160 million. My heart bleeds.

It was clear, from very early on (see, for example, my posts from 2014 here and here) that Knewton's product was little more than what Michael Feldstein called 'snake oil'. Why and how could so many people fall for it for so long? Why and how will so many people fall for it again in the coming decade, although this time it won't be 'big data' that does the seduction, but AI (which kind of boils down to the same thing)? The former Macmillan executives are still in the game, albeit in new companies and talking a slightly modified talk, and Jose Ferreira (whose new venture has already raised $3.7 million) is promising to revolutionize education with a new start-up which 'will harness the power of technology to improve both access and quality of education' (thanks to Audrey Watters for the tip). Investors may be desperate to find places to spread their portfolio, but why do the rest of us lap up the hype? It's a question to which I will return.

 

 

 

 

Back in the middle of the last century, the first interactive machines for language teaching appeared. Previously, there had been phonograph discs and wire recorders (Ornstein, 1968: 401), but these had never really taken off. This time, things were different. Buoyed by a belief in the power of technology, along with the need (following the Soviet Union's successful Sputnik programme) to demonstrate the pre-eminence of the United States' technological expertise, the interactive teaching machines that were used in programmed instruction promised to revolutionize language learning (Valdman, 1968: 1). From coast to coast, 'tremors of excitement ran through professional journals and conferences and department meetings' (Kennedy, 1967: 871). The new technology was driven by hard science, supported and promoted by one of the most well-known and respected psychologists and public intellectuals of the day (Skinner, 1961).

In classrooms, the machines acted as powerfully effective triggers in generating situational interest (Hidi & Renninger, 2006). Even more exciting than the mechanical teaching machines were the computers that were appearing on the scene. ‘Lick’ Licklider, a pioneer in interactive computing at the Advanced Research Projects Agency in Arlington, Virginia, developed an automated drill routine for learning German by hooking up a computer, two typewriters, an oscilloscope and a light pen (Noble, 1991: 124). Students loved it, and some would ‘go on and on, learning German words until they were forced by scheduling to cease their efforts’. Researchers called the seductive nature of the technology ‘stimulus trapping’, and Licklider hoped that ‘before [the student] gets out from under the control of the computer’s incentives, [they] will learn enough German words’ (Noble, 1991: 125).

With many of the developed economies of the world facing a critical shortage of teachers, ‘an urgent pedagogical emergency’ (Hof, 2018), the new approach was considered to be extremely efficient and could equalise opportunity in schools across the country. It was ‘here to stay: [it] appears destined to make progress that could well go beyond the fondest dreams of its originators […] an entire industry is just coming into being and significant sales and profits should not be too long in coming’ (Kozlowski, 1961: 47).

Unfortunately, however, researchers and entrepreneurs had massively underestimated the significance of novelty effects. The triggered situational interest of the machines did not lead to intrinsic individual motivation. Students quickly tired of, and eventually came to dislike, programmed instruction and the machines that delivered it (McDonald et al., 2005: 89). What's more, the machines were expensive and 'research studies conducted on its effectiveness showed that the differences in achievement did not constantly or substantially favour programmed instruction over conventional instruction' (Saettler, 2004: 303). Newer technologies, with better 'stimulus trapping', were appearing. Programmed instruction lost its backing and disappeared, leaving as traces only its interest in clearly defined learning objectives, the measurement of learning outcomes and a concern with the efficiency of learning approaches.

Hot on the heels of programmed instruction came the language laboratory. Futuristic in appearance, not entirely unlike the deck of the starship USS Enterprise which launched at around the same time, language labs captured the public imagination and promised to explore the final frontiers of language learning. As with the earlier teaching machines, students were initially enthusiastic. Even today, when language labs are introduced into contexts where they may be perceived as new technology, they can lead to high levels of initial motivation (e.g. Ramganesh & Janaki, 2017).

Given the huge investments in these labs, it's unfortunate that initial interest waned fast. By 1969, many of these rooms had turned into '"electronic graveyards," sitting empty and unused, or perhaps somewhat glorified study halls to which students grudgingly repair to don headphones, turn down the volume, and prepare the next period's history or English lesson, unmolested by any member of the foreign language faculty' (Turner, 1969: 1, quoted in Roby, 2003: 527). 'Many second language students shudder[ed] at the thought of entering into the bowels of the "language laboratory" to practice and perfect the acoustical aerobics of proper pronunciation skills. Visions of sterile white-walled, windowless rooms, filled with endless bolted-down rows of claustrophobic metal carrels, and overseen by a humorless lab director, evoke[d] fear in the hearts of even the most stout-hearted prospective second-language learners' (Wiley, 1990: 44).

By the turn of this century, language labs had mostly gone, consigned to oblivion by the appearance of yet newer technology: the internet, laptops and smartphones. Education had been on the brink of being transformed through new learning technologies for decades (Laurillard, 2008: 1), but this time it really was different. It wasn’t just one technology that had appeared, but a whole slew of them: ‘artificial intelligence, learning analytics, predictive analytics, adaptive learning software, school management software, learning management systems (LMS), school clouds. No school was without these and other technologies branded as ‘superintelligent’ by the late 2020s’ (Macgilchrist et al., 2019). The hardware, especially phones, was ubiquitous and, therefore, free. Unlike teaching machines and language laboratories, students were used to using the technology and expected to use their devices in their studies.

A barrage of publicity, mostly paid for by the industry, surrounded the new technologies. These would 'meet the demands of Generation Z', the new generation of students, now cast as consumers, who 'were accustomed to personalizing everything'. AR, VR, interactive whiteboards, digital projectors and so on made it easier to 'create engaging, interactive experiences'. The 'New Age' technologies made learning fun and easy, 'bringing enthusiasm among the students, improving student engagement, enriching the teaching process, and bringing liveliness in the classroom'. On top of that, they allowed huge amounts of data to be captured and sold, whilst tracking progress and attendance. In any case, resistance to digital technology, said more than one language teaching expert, was pointless (Styring, 2015).

At the same time, technology companies increasingly took on ‘central roles as advisors to national governments and local districts on educational futures’ and public educational institutions came to be ‘regarded by many as dispensable or even harmful’ (Macgilchrist et al., 2019).

But, as it turned out, the students of Generation Z were not as uniformly enthusiastic about the new technology as had been assumed, and resistance to digital, personalized delivery in education was not long in coming. In November 2018, high school students at Brooklyn's Secondary School for Journalism staged a walkout in protest at their school's use of Summit Learning, a web-based platform promoting personalized learning developed by Facebook. They complained that the platform resulted in coursework requiring students to spend much of their day in front of a computer screen, that it made it easy to cheat by looking up answers online, and that some of their teachers didn't have the proper training for the curriculum (Leskin, 2018). Besides, their school was in a deplorable state of disrepair, especially the toilets. There were similar protests in Kansas, where students staged sit-ins, supported by their parents, one of whom complained that 'we're allowing the computers to teach and the kids all looked like zombies' before pulling his son out of the school (Bowles, 2019). In Pennsylvania and Connecticut, some schools stopped using Summit Learning altogether, following protests.

But the resistance did not last. Protesters were accused of being nostalgic conservatives and educationalists kept largely quiet, fearful of losing their funding from the Chan Zuckerberg Initiative (Facebook) and other philanthro-capitalists. The provision of training in grit, growth mindset, positive psychology and mindfulness (also promoted by the technology companies) was ramped up, and eventually the disaffected students became more quiescent. Before long, the data-intensive, personalized approach, relying on the tools, services and data storage of particular platforms had become ‘baked in’ to educational systems around the world (Moore, 2018: 211). There was no going back (except for small numbers of ultra-privileged students in a few private institutions).

By the middle of the twenty-second century (2155), most students, of all ages, studied with interactive screens in the comfort of their homes. Algorithmically-driven content, with personalized, adaptive tests had become the norm, but the technology occasionally went wrong, leading to some frustration. One day, two young children discovered a book in their attic. Made of paper with yellow, crinkly pages, where 'the words stood still instead of moving the way they were supposed to'. The book recounted the experience of schools in the distant past, where 'all the kids from the neighbourhood came', sitting in the same room with a human teacher, studying the same things 'so they could help one another on the homework and talk about it'. Margie, the younger of the children at 11 years old, was engrossed in the book when she received a nudge from her personalized learning platform to return to her studies. But Margie was reluctant to go back to her fractions. She 'was thinking about how the kids must have loved it in the old days. She was thinking about the fun they had' (Asimov, 1951).

References

Asimov, I. 1951. The Fun They Had. Accessed September 20, 2019. http://web1.nbed.nb.ca/sites/ASD-S/1820/J%20Johnston/Isaac%20Asimov%20-%20The%20fun%20they%20had.pdf

Bowles, N. 2019. ‘Silicon Valley Came to Kansas Schools. That Started a Rebellion’ The New York Times, April 21. Accessed September 20, 2019. https://www.nytimes.com/2019/04/21/technology/silicon-valley-kansas-schools.html

Hidi, S. & Renninger, K.A. 2006. ‘The Four-Phase Model of Interest Development’ Educational Psychologist, 41 (2), 111 – 127

Hof, B. 2018. ‘From Harvard via Moscow to West Berlin: educational technology, programmed instruction and the commercialisation of learning after 1957’ History of Education, 47 (4): 445-465

Kennedy, R.H. 1967. ‘Before using Programmed Instruction’ The English Journal, 56 (6), 871 – 873

Kozlowski, T. 1961. ‘Programmed Teaching’ Financial Analysts Journal, 17 (6): 47 – 54

Laurillard, D. 2008. Digital Technologies and their Role in Achieving our Ambitions for Education. London: Institute for Education.

Leskin, P. 2018. ‘Students in Brooklyn protest their school’s use of a Zuckerberg-backed online curriculum that Facebook engineers helped build’ Business Insider, 12.11.18 Accessed 20 September 2019. https://www.businessinsider.de/summit-learning-school-curriculum-funded-by-zuckerberg-faces-backlash-brooklyn-2018-11?r=US&IR=T

McDonald, J. K., Yanchar, S. C. & Osguthorpe, R.T. 2005. ‘Learning from Programmed Instruction: Examining Implications for Modern Instructional Technology’ Educational Technology Research and Development, 53 (2): 84 – 98

Macgilchrist, F., Allert, H. & Bruch, A. 2019. 'Students and society in the 2020s. Three future 'histories' of education and technology'. Learning, Media and Technology, https://www.tandfonline.com/doi/full/10.1080/17439884.2019.1656235

Moore, M. 2018. Democracy Hacked. London: Oneworld

Noble, D. D. 1991. The Classroom Arsenal. London: The Falmer Press

Ornstein, J. 1968. ‘Programmed Instruction and Educational Technology in the Language Field: Boon or Failure?’ The Modern Language Journal, 52 (7), 401 – 410

Ramganesh, E. & Janaki, S. 2017. 'Attitude of College Teachers towards the Utilization of Language Laboratories for Learning English' Asian Journal of Social Science Studies, 2 (1): 103 – 109

Roby, W.B. 2003. ‘Technology in the service of foreign language teaching: The case of the language laboratory’ In D. Jonassen (ed.), Handbook of Research on Educational Communications and Technology, 2nd ed.: 523 – 541. Mahwah, NJ.: Lawrence Erlbaum Associates

Saettler, P. 2004. The Evolution of American Educational Technology. Greenwich, Conn.: Information Age Publishing

Skinner, B. F. 1961. ‘Teaching Machines’ Scientific American, 205(5), 90-107

Styring, J. 2015. Engaging Generation Z. Cambridge English webinar 2015 https://www.youtube.com/watch?time_continue=4&v=XCxl4TqgQZA

Valdman, A. 1968. ‘Programmed Instruction versus Guided Learning in Foreign Language Acquisition’ Die Unterrichtspraxis / Teaching German, 1 (2), 1 – 14.

Wiley, P. D. 1990. 'Language labs for 1990: User-friendly, expandable and affordable' Media & Methods, 27 (1): 44 – 47


Jenny Holzer, Protect me from what I want

When the startup, AltSchool, was founded in 2013 by Max Ventilla, the former head of personalization at Google, it quickly drew the attention of venture capitalists and within a few years had raised $174 million from the likes of the Zuckerberg Foundation, Peter Thiel, Laurene Powell Jobs and Pierre Omidyar. It garnered gushing articles in a fawning edtech press which enthused about ‘how successful students can be when they learn in small, personalized communities that champion project-based learning, guided by educators who get a say in the technology they use’. It promised ‘a personalized learning approach that would far surpass the standardized education most kids receive’.

Ventilla was an impressive money-raiser who used, and appeared to believe, every cliché in the edTech sales manual. Dressed in regulation jeans, polo shirt and fleece, he claimed that schools in America were 'stuck in an industrial-age model, [which] has been in steady decline for the last century'. What he offered, instead, was a learner-centred, project-based curriculum providing real-world lessons. There was a focus on social-emotional learning activities, and critical thinking was vital.

The key to the approach was technology. From the start, software developers, engineers and researchers worked alongside teachers every day, 'constantly tweaking the Personalized Learning Plan, which shows students their assignments for each day and helps teachers keep track of and assess students' learning'. There were tablets for pre-schoolers, laptops for older kids and wall-mounted cameras to record the lessons. There were, of course, Khan Academy videos. Ventilla explained that 'we start with a representation of each child', and even though 'the vast majority of the learning should happen non-digitally', the child's habits and preferences get converted into data, 'a digital representation of the important things that relate to that child's learning, not just their academic learning but also their non-academic learning. Everything logistic that goes into setting up the experience for them, whether it's who has permission to pick them up or their allergy information. You name it.' And just like Netflix matches us to TV shows, 'If you have that accurate and actionable representation for each child, now you can start to personalize the whole experience for that child. You can create that kind of loop you described where because we can represent a child well, we can match them to the right experiences.'

AltSchool seemed to offer the possibility of doing something noble, of transforming education, 'bringing it into the digital age', and, at the same time, a healthy return on investors' money. Expanding rapidly, nine AltSchool microschools were opened in New York and the Bay Area, and plans were afoot for further expansion in Chicago. But, by then, it was already clear that something was going wrong. Five of the schools were closed before they had really got started and the attrition rate in some classrooms had reached about 30%. Revenue in 2018 was only $7 million and there were few buyers for the AltSchool platform. Quoting once more from the edTech bible, Ventilla explained the situation: 'Our whole strategy is to spend more than we make.' Since software is expensive to develop and cheap to distribute, the losses, he believed, would turn into steep profits once AltSchool refined its product and landed enough customers.

The problems were many and apparent. Some of the buildings were simply not appropriate for schools, with no playgrounds or gyms and malfunctioning toilets, among other issues. Parents were becoming unhappy and accused AltSchool of putting 'its ambitions as a tech company above its responsibility to teach their children. […] We kind of came to the conclusion that, really, AltSchool as a school was kind of a front for what Max really wants to do, which is develop software that he's selling,' a parent of a former AltSchool student told Business Insider. 'We had really mediocre educators using technology as a crutch,' said one father who transferred his child to a different private school after two years at AltSchool. '[…] We learned that it's almost impossible to really customize the learning experience for each kid.' Some parents began to wonder whether AltSchool had enticed families into its program merely to extract data from their children, then tossed them aside.

With the benefit of hindsight, it would seem that the accusations were hardly unfair. In June of this year, AltSchool announced that its four remaining schools would be operated by a new partner, Higher Ground Education (a well-funded startup founded in 2016 which promotes and ‘modernises’ Montessori education). Meanwhile, AltSchool has been rebranded as Altitude Learning, focusing its ‘resources on the development and expansion of its personalized learning platform’ for licensing to other schools across the country.

Quoting once more from the edTech sales manual, Ventilla has said that education should drive the tech, not the other way round. Not so many years earlier, before starting AltSchool, Ventilla also said that he had read two dozen books on education and emerged a fan of Sir Ken Robinson. He had no experience as a teacher or as an educational administrator. Instead, he had ‘extensive knowledge of networks, and he understood the kinds of insights that can be gleaned from big data’.

ltsigIt’s hype time again. Spurred on, no doubt, by the current spate of books and articles  about AIED (artificial intelligence in education), the IATEFL Learning Technologies SIG is organising an online event on the topic in November of this year. Currently, the most visible online references to AI in language learning are related to Glossika , basically a language learning system that uses spaced repetition, whose marketing department has realised that references to AI might help sell the product. GlossikaThey’re not alone – see, for example, Knowble which I reviewed earlier this year .

In the wider world of education, where AI has made greater inroads than in language teaching, every day brings more stuff: How artificial intelligence is changing teaching, 32 Ways AI is Improving Education, How artificial intelligence could help teachers do a better job, etc., etc. There's a full-length book by Anthony Seldon, The Fourth Education Revolution: will artificial intelligence liberate or infantilise humanity? (2018, University of Buckingham Press) – one of the most poorly researched and badly edited books on education I've ever read, although that won't stop it selling – and, no surprises here, there's a Pearson-commissioned report called Intelligence Unleashed: An argument for AI in Education (2016), which is available free.

Common to all these publications is the claim that AI will radically change education. When it comes to language teaching, a similar claim has been made by Donald Clark (described by Anthony Seldon as an education guru but perhaps best-known to many in ELT for his demolition of Sugata Mitra). In 2017, Clark wrote a blog post for Cambridge English (now unavailable) entitled How AI will reboot language learning, and a more recent version of this post, called AI has and will change language learning forever (sic) is available on Clark’s own blog. Given the history of the failure of education predictions, Clark is making bold claims. Thomas Edison (1922) believed that movies would revolutionize education. Radios were similarly hyped in the 1940s and in the 1960s it was the turn of TV. In the 1980s, Seymour Papert predicted the end of schools – ‘the computer will blow up the school’, he wrote. Twenty years later, we had the interactive possibilities of Web 2.0. As each technology failed to deliver on the hype, a new generation of enthusiasts found something else to make predictions about.

But is Donald Clark onto something? Developments in AI and computational linguistics have recently resulted in enormous progress in machine translation. Impressive advances in automatic speech recognition and generation, coupled with the power that can be packed into a handheld device, mean that we can expect some re-evaluation of the value of learning another language. Stephen Heppell, a specialist in the use of ICT in education at Bournemouth University, has said: 'Simultaneous translation is coming, making language teachers redundant. Modern languages teaching in future may be more about navigating cultural differences' (quoted by Seldon, p.263). Well, maybe, but this is not Clark's main interest.

Less a matter of opinion and much closer to the present day is the issue of assessment. AI is becoming ubiquitous in language testing. Cambridge, Pearson, TELC, Babbel and Duolingo are all using or exploring AI in their testing software, and we can expect to see this increase. Current, paper-based systems of testing subject knowledge are, according to Rosemary Luckin and Kristen Weatherby, outdated, ineffective, time-consuming, the cause of great anxiety and can easily be automated (Luckin, R. & Weatherby, K. 2018. 'Learning analytics, artificial intelligence and the process of assessment' in Luckin, R. (ed.) Enhancing Learning and Teaching with Technology, 2018. UCL Institute of Education Press, p.253). By capturing data of various kinds throughout a language learner's course of study and by using AI to analyse learning development, continuous formative assessment becomes possible in ways that were previously unimaginable. 'Assessment for Learning (AfL)' and 'Learning Oriented Assessment (LOA)' are two terms used by Cambridge English to refer to the potential that AI offers, as described by Luckin (who is also one of the authors of the Pearson paper mentioned earlier). In practical terms, albeit in a still very limited way, this can be seen in the CUP course 'Empower', which combines CUP course content with validated LOA from Cambridge Assessment English.

Will this reboot or revolutionise language teaching? Probably not, and here’s why. AIED systems need to operate with what is called a ‘domain knowledge model’. This specifies what is to be learnt and includes an analysis of the steps that must be taken to reach that learning goal. Some subjects (especially STEM subjects) ‘lend themselves much more readily to having their domains represented in ways that can be automatically reasoned about’ (du Boulay, D. et al. 2018. ‘Artificial intelligences and big data technologies to close the achievement gap’ in Luckin, R. (ed.) Enhancing Learning and Teaching with Technology. UCL Institute of Education Press, p.258). This is why most AIED systems have been built to teach these areas. Languages are rather different. We simply do not have a domain knowledge model, except perhaps for the very lowest levels of language learning (and even that is highly questionable). Language learning is probably not, or not primarily, about acquiring subject knowledge. Debate still rages about the relationship between explicit language knowledge and language competence. AI-driven formative assessment will likely focus most on explicit language knowledge, as does most current language teaching. This will not reboot or revolutionise anything. It will more likely reinforce what is already happening: a model of language learning that assumes there is a strong interface between explicit knowledge and language competence. It is not a model that is shared by most SLA researchers.

So, one thing that AI can do (and is doing) for language learning is to improve the algorithms that determine the way that grammar and vocabulary are presented to individual learners in online programs. AI-optimised delivery of ‘English Grammar in Use’ may lead to some learning gains, but they are unlikely to be significant. It is not, in any case, what language learners need.

AI, Donald Clark suggests, can offer personalised learning. Precisely what kind of personalised learning this might be, and whether or not this is a good thing, remains unclear. A 2015 report funded by the Gates Foundation found that we currently lack evidence about the effectiveness of personalised learning. We do not know which aspects of personalised learning (learner autonomy, individualised learning pathways and instructional approaches, etc.) or which combinations of these will lead to gains in language learning. The complexity of the issues means that we may never have a satisfactory explanation. You can read my own exploration of the problems of personalised learning starting here.

What’s left? Clark suggests that chatbots are one area with ‘huge potential’. I beg to differ, and I explained my reasons eighteen months ago. Chatbots work fine in very specific domains. As Clark says, they can be used for ‘controlled practice’, but ‘controlled practice’ means practice of specific language knowledge, the practice of limited conversational routines, for example. It could certainly be useful, but more than that? Taking things a stage further, Clark then suggests more holistic speaking and listening practice with Amazon Echo, Alexa or Google Home. If and when the day comes that we have general, as opposed to domain-specific, AI, chatting with one of these tools would open up vast new possibilities. Unfortunately, general AI does not exist, and until then Alexa and co will remain a poor substitute for human-human interaction (which is readily available online, anyway). Incidentally, AI could be used to form groups of online language learners to carry out communicative tasks – ‘the aim might be to design a grouping of students all at a similar cognitive level and of similar interests, or one where the participants bring different but complementary knowledge and skills’ (Luckin, R., Holmes, W., Griffiths, M. & Forcier, L.B. 2016. Intelligence Unleashed: An argument for AI in Education. London: Pearson, p.26).

Predictions about the impact of technology on education have a tendency to be made by people with a vested interest in the technologies. Edison was a businessman who had invested heavily in motion pictures. Donald Clark is an edtech entrepreneur whose company, Wildfire, uses AI in online learning programs. Stephen Heppell is executive chairman of LP+ who are currently developing a Chinese language learning community for 20 million Chinese school students. The reporting of AIED is almost invariably in websites that are paid for, in one way or another, by edtech companies. Predictions need, therefore, to be treated sceptically. Indeed, the safest prediction we can make about hyped educational technologies is that inflated expectations will be followed by disillusionment, before the technology finds a smaller niche.


Like the mythical monster, the ancient Hydra organisation of Marvel Comics grows two more heads if one is cut off, becoming more powerful in the process. With the most advanced technology on the planet and with a particular focus on data gathering, Hydra operates through international corporations and highly-placed individuals in national governments.
Personalized learning has also been around for centuries. Its present incarnation can be traced to the individualized instructional programmes of the late 19th century which ‘focused on delivering specific subject matter […] based on the principles of scientific management. The intent was to solve the practical problems of the classroom by reducing waste and increasing efficiency, effectiveness, and cost containment in education’ (Januszewski, 2001: 58). Since then, personalized learning has adopted many different names, including differentiated instruction, individualized instruction, individually guided education, programmed instruction, personalized instruction, and individually prescribed instruction.
Disambiguating the terms has never been easy. In the world of language learning / teaching, it was observed back in the early 1970s ‘that there is little agreement on the description and definition of individualized foreign language instruction’ (Garfinkel, 1971: 379). The point was echoed a few years later by Grittner (1975: 323): it ‘means so many things to so many different people’. A UNESCO document (Chaix & O’Neil, 1978: 6) complained that ‘the term ‘individualization’ and the many expressions using the same root, such as ‘individualized learning’, are much too ambiguous’. Zoom forward to the present day and nothing has changed. Critiquing the British government’s focus on personalized learning, the Institute for Public Policy Research (Johnson, 2004: 17) wrote that it ‘remains difficult to be certain what the Government means by personalised learning’. In the U.S. context, a piece by Sean Cavanagh (2014) in Education Week (which is financially supported by the Gates Foundation) noted that although ‘the term “personalized learning” seems to be everywhere, there is not yet a shared understanding of what it means’. In short, as Arthur Levine has put it, the words personalized learning ‘generate more heat than light’.
Despite the lack of clarity about what precisely personalized learning actually is, it has been in the limelight of language teaching and learning since before the 1930s, when Pendleton (1930: 195) described the idea as being more widespread than ever before. Zoom forward to the 1970s and we find it described as ‘one of the major movements in second-language education at the present time’ (Chastain, 1975: 334). In 1971, it was described as ‘a bandwagon onto which foreign language teachers at all levels are jumping’ (Altman & Politzer, 1971: 6). A little later, in the 1980s, ‘words or phrases such as ‘learner-centered’, ‘student-centered’, ‘personalized’, ‘individualized’, and ‘humanized’’ appear as the most frequent modifiers of ‘instruction’ in the journals and conferences of foreign language education (Altman & James, 1980). Continue to the present day, and we find that personalized learning is at the centre of the educational policies of governments across the world. Between 2012 and 2015, the U.S. Department of Education threw over half a billion dollars at personalized learning initiatives (Bulger, 2016: 22). At the same time, there is massive sponsorship of personalized learning from some of the biggest philanthropic foundations (the William and Flora Hewlett Foundation, the Rogers Family Foundation, the Susan and Michael Dell Foundation, and the Eli and Edythe Broad Foundation) (Bulger, 2016: 22). The Bill & Melinda Gates Foundation has invested nearly $175 million in personalized learning development and Facebook’s Mark Zuckerberg is ploughing billions of dollars into it.
There has, however, been one constant: the belief that technology can facilitate the process of personalization (whatever that might be). Technology appears to offer the potential to realise the goal of personalized learning. We have come a long way from Sidney Pressey’s attempts in the 1920s to use teaching machines to individualize instruction. At that time, the machines were just one part of the programme (and not the most important part). But each new technology has offered a new range of possibilities to be exploited and each new technology, its advocates argue, ‘will solve the problems better than previous efforts’ (Ferster, 2014: xii). With the advent of data-capturing learning technologies, it has now become virtually impossible to separate advocacy of personalized instruction from advocacy of digitalization in education. As the British Department for Education has put it, ‘central to personalised learning is schools’ use of data’ (DfES. 2005. White Paper: Higher Standards, Better Schools for All. London: Department for Education and Skills, para 4.50). When the U.S. Department of Education threw half a billion dollars at personalized learning initiatives, the condition was that these projects ‘use collaborative, data-based strategies and 21st century tools to deliver instruction’ (Bulger, 2016: 22).
Is it just a coincidence that the primary advocates of personalized learning are either vendors of technology or are very close to them in the higher echelons of Hydra (World Economic Forum, World Bank, IMF, etc.)? ‘Personalized learning’ has ‘almost no descriptive value’: it is ‘a term that sounds good without the inconvenience of having any obviously specific pedagogical meaning’ (Feldstein & Hill, 2016: 30). It evokes positive responses, with its ‘nod towards more student-centered learning […], a move that honors the person learning not just the learning institution’ (Watters, 2014). As such, it is ‘a natural for marketing purposes’ since nobody in their right mind would want unpersonalized or depersonalized learning (Feldstein & Hill, 2016: 25). It’s ‘a slogan that nobody’s going to be against, and everybody’s going to be for. Nobody knows what it means, because it doesn’t mean anything. Its crucial value is that it diverts your attention from a question that does mean something: Do you support our policy?’ (Chomsky, 1997).
None of the above is intended to suggest that there might not be goals that come under the ‘personalized learning’ umbrella that are worth working towards. But that’s another story – one I will return to in another post. For the moment, it’s just worth remembering that, in one of the Marvel Comics stories, Captain America, who appeared to be fighting the depersonalized evils of the world, was actually a deep sleeper agent for Hydra.

References
Altman, H.B. & James, C.V. (eds.) 1980. Foreign Language Teaching: Meeting Individual Needs. Oxford: Pergamon Press
Altman, H.B. & Politzer, R.L. (eds.) 1971. Individualizing Foreign Language Instruction: Proceedings of the Stanford Conference, May 6 – 8, 1971. Washington, D.C.: Office of Education, U.S. Department of Health, Education, and Welfare
Bulger, M. 2016. Personalized Learning: The Conversations We’re Not Having. New York: Data and Society Research Institute.
Cavanagh, S. 2014. ‘What Is ‘Personalized Learning’? Educators Seek Clarity’ Education Week
Chaix, P. & O’Neil, C. 1978. A Critical Analysis of Forms of Autonomous Learning (Autodidaxy and Semi-autonomy) in the Field of Foreign Language Learning. Final Report. UNESCO Doc Ed 78/WS/58
Chastain, K. 1975. ‘An Examination of the Basic Assumptions of “Individualized” Instruction’ The Modern Language Journal 59 / 7: 334 – 344
Chomsky, N. 1997. Media Control: The Spectacular Achievements of Propaganda. New York: Seven Stories Press
Feldstein, M. & Hill, P. 2016. ‘Personalized Learning: What it Really is and why it Really Matters’ EduCause Review March / April 2016: 25 – 35
Ferster, B. 2014. Teaching Machines. Baltimore: Johns Hopkins University Press
Garfinkel, A. 1971. ‘Stanford University Conference on Individualizing Foreign Language Instruction, May 6-8, 1971.’ The Modern Language Journal Vol. 55, No. 6 (Oct., 1971), pp. 378-381
Grittner, F. M. 1975. ‘Individualized Instruction: An Historical Perspective’ The Modern Language Journal 59 / 7: 323 – 333
Januszewski, A. 2001. Educational Technology: The Development of a Concept. Englewood, Colorado: Libraries Unlimited
Johnson, M. 2004. Personalised Learning – an Emperor’s Outfit? London: Institute for Public Policy Research
Pendleton, C. S. 1930. ‘Personalizing English Teaching’ Peabody Journal of Education 7 / 4: 195 – 200
Watters, A. 2014. The problem with ‘personalization’ Hack Education

by Philip Kerr & Andrew Wickham

from IATEFL 2016 Birmingham Conference Selections (ed. Tania Pattison) Faversham, Kent: IATEFL pp. 75 – 78

ELT publishing, international language testing and private language schools are all industries: products are produced, bought and sold for profit. English language teaching (ELT) is not. It is an umbrella term that is used to describe a range of activities, some of which are industries, and some of which (such as English teaching in high schools around the world) might better be described as public services. ELT, like education more generally, is, nevertheless, often referred to as an ‘industry’.

Education in a neoliberal world

The framing of ELT as an industry is both a reflection of how we understand the term and a force that shapes our understanding. Associated with the idea of ‘industry’ is a constellation of other ideas and words (such as efficacy, productivity, privatization, marketization, consumerization, digitalization and globalization) which become a part of ELT once it is framed as an industry. Repeated often enough, ‘ELT as an industry’ can become a metaphor that we think and live by. Those activities that fall under the ELT umbrella, but which are not industries, become associated with the desirability of industrial practices through such discourse.

The shift from education, seen as a public service, to educational managerialism (where education is seen in industrial terms with a focus on efficiency, free market competition, privatization and a view of students as customers) can be traced to the 1980s and 1990s (Gewirtz, 2001). In 1999, under pressure from developed economies, the General Agreement on Trade in Services (GATS) transformed education into a commodity that could be traded like any other in the marketplace (Robertson, 2006). The global industrialisation and privatization of education continues to be promoted by transnational organisations (such as the World Bank and the OECD), well-funded free-market think-tanks (such as the Cato Institute), philanthro-capitalist foundations (such as the Gates Foundation) and educational businesses (such as Pearson) (Ball, 2012).

Efficacy and learning outcomes

Managerialist approaches to education require educational products and services to be measured and compared. In ELT, the most visible manifestation of this requirement is the current ubiquity of learning outcomes. Contemporary coursebooks are full of ‘can-do’ statements, although these are not necessarily of any value to anyone. Examples from one unit of one best-selling course include ‘Now I can understand advice people give about hotels’ and ‘Now I can read an article about unique hotels’ (McCarthy et al. 2014: 74). However, in a world where accountability is paramount, they are deemed indispensable. The problem from a pedagogical perspective is that teaching input does not necessarily equate with learning uptake. Indeed, there is no reason why it should.

Drawing on the Common European Framework of Reference for Languages (CEFR) for inspiration, new performance scales have emerged in recent years. These include the Cambridge English Scale and the Pearson Global Scale of English. Moving away from the broad six categories of the CEFR, such scales permit finer-grained measurement and we now see individual vocabulary and grammar items tagged to levels. Whilst such initiatives undoubtedly support measurements of efficacy, the problem from a pedagogical perspective is that they assume that language learning is linear and incremental, as opposed to complex and jagged.

Given the importance accorded to the measurement of language learning (or what might pass for language learning), it is unsurprising that attention is shifting towards the measurement of what is probably the most important factor impacting on learning: the teaching. Teacher competency scales have been developed by Cambridge Assessment, the British Council and EAQUALS (Evaluation and Accreditation of Quality Language Services), among others.

The backwash effects of the deployment of such scales are yet to be fully experienced, but the likely increase in the perception of both language learning and teacher learning as the synthesis of granularised ‘bits of knowledge’ is cause for concern.

Digital technology

Digital technology may offer advantages to both English language teachers and learners, but its rapid growth in language learning is the result, primarily but not exclusively, of the way it has been promoted by those who stand to gain financially. In education, generally, and in English language teaching, more specifically, advocacy of the privatization of education is always accompanied by advocacy of digitalization. The global market for digital English language learning products was reported to be $2.8 billion in 2015 and is predicted to reach $3.8 billion by 2020 (Ambient Insight, 2016).

In tandem with the increased interest in measuring learning outcomes, there is fierce competition in the market for high-stakes examinations, and these are increasingly digitally delivered and marked. In the face of this competition and in a climate of digital disruption, companies like Pearson and Cambridge English are developing business models of vertical integration where they can provide and sell everything from placement testing, to courseware (either print or delivered through an LMS), teaching, assessment and teacher training. Huge investments are being made in pursuit of such models. Pearson, for example, recently bought GlobalEnglish and Wall Street English, and set up a partnership with Busuu, thus covering all aspects of language learning from resources provision and publishing to off- and online training delivery.

As regards assessment, the most recent adult coursebook from Cambridge University Press (in collaboration with Cambridge English Language Assessment), ‘Empower’ (Doff et al., 2015), sells itself on a combination of course material with integrated, validated assessment.

Besides its potential for scalability (and therefore greater profit margins), the appeal (to some) of platform-delivered English language instruction is that it facilitates assessment that is much finer-grained and actionable in real time. Digitization and testing go hand in hand.

Few English language teachers have been unaffected by the move towards digital. In the state sectors, large-scale digitization initiatives (such as the distribution of laptops for educational purposes, the installation of interactive whiteboards, the move towards blended models of instruction or the move away from printed coursebooks) are becoming commonplace. In the private sectors, online (or partially online) language schools are taking market share from the traditional bricks-and-mortar institutions.

These changes have entailed modifications to the skill-sets that teachers need to have. Two announcements at this conference reflect this shift. First of all, Cambridge English launched their ‘Digital Framework for Teachers’, a matrix of six broad competency areas organised into four levels of proficiency. Secondly, Aqueduto, the Association for Quality Education and Training Online, was launched, setting itself up as an accreditation body for online or blended teacher training courses.

Teachers’ pay and conditions

In the United States, and likely soon in the UK, the move towards privatization is accompanied by an overt attack on teachers’ unions, rights, pay and conditions (Selwyn, 2014). As English language teaching in both public and private sectors is commodified and marketized, it is no surprise to find that the drive to bring down costs has a negative impact on teachers worldwide. Gwynt (2015), for example, catalogues cuts in funding, large-scale redundancies, a narrowing of the curriculum, intensified workloads (including the need to comply with ‘quality control measures’), the deskilling of teachers, dilapidated buildings, minimal resources and low morale in an ESOL department in one British further education college. In France, a large-scale study by Wickham, Cagnol, Wright and Oldmeadow (Linguaid, 2015; Wright, 2016) found that EFL teachers in the very competitive private sector typically had multiple employers, limited or no job security, limited sick pay and holiday pay, very little training and low hourly rates that were deteriorating. One of the principal drivers of the pressure on salaries is the rise of online training delivery through Skype and other online platforms, using offshore teachers in low-cost countries such as the Philippines. This type of training represents 15% in value and up to 25% in volume of all language training in the French corporate sector and is developing fast in emerging countries. These examples are illustrative of a broad global trend.

Implications

Given the current climate, teachers will benefit from closer networking with fellow professionals in order, not least, to be aware of the rapidly changing landscape. It is likely that they will need to develop and extend their skill sets (especially their online skills and visibility and their specialised knowledge), to differentiate themselves from competitors and to be able to demonstrate that they are in tune with current demands. More generally, it is important to recognise that current trends have yet to run their full course. Conditions for teachers are likely to deteriorate further before they improve. More than ever before, teachers who want to have any kind of influence on the way that marketization and industrialization are shaping their working lives will need to do so collectively.

References

Ambient Insight. 2016. The 2015-2020 Worldwide Digital English Language Learning Market. http://www.ambientinsight.com/Resources/Documents/AmbientInsight_2015-2020_Worldwide_Digital_English_Market_Sample.pdf

Ball, S. J. 2012. Global Education Inc. Abingdon, Oxon.: Routledge

Doff, A., Thaine, C., Puchta, H., Stranks, J. and P. Lewis-Jones 2015. Empower. Cambridge: Cambridge University Press

Gewirtz, S. 2001. The Managerial School: Post-welfarism and Social Justice in Education. Abingdon, Oxon.: Routledge

Gwynt, W. 2015. ‘The effects of policy changes on ESOL’. Language Issues 26 / 2: 58 – 60

McCarthy, M., McCarten, J. and H. Sandiford 2014. Touchstone 2 Student’s Book Second Edition. Cambridge: Cambridge University Press

Linguaid, 2015. Le Marché de la Formation Langues à l’Heure de la Mondialisation. Guildford: Linguaid

Robertson, S. L. 2006. ‘Globalisation, GATS and trading in education services.’ published by the Centre for Globalisation, Education and Societies, University of Bristol, Bristol BS8 1JA, UK at http://www.bris.ac.uk/education/people/academicStaff/edslr/publications/04slr

Selwyn, N. 2014. Distrusting Educational Technology. New York: Routledge

Wright, R. 2016. ‘My teacher is rich … or not!’ English Teaching Professional 103: 54 – 56


Having spent a lot of time recently looking at vocabulary apps, I decided to put together a Christmas wish list of the features of my ideal vocabulary app. The list is not exhaustive and I’ve given more attention to some features than others. What (apart from testing) have I missed out?

1             Spaced repetition

Since the point of a vocabulary app is to help learners memorise vocabulary items, it is hard to imagine a decent system that does not incorporate spaced repetition. Spaced repetition algorithms offer one well-researched way of counteracting the brain’s ‘forgetting curve’. These algorithms come in different shapes and sizes, and I am not technically competent to judge which is the most efficient. However, as Peter Ellis Jones, the developer of a flashcard system called CardFlash, points out, efficiency is only one half of the rote memorisation problem. If you are not motivated to learn, the cleverness of the algorithm is moot. Fundamentally, learning software needs to be fun, rewarding, and give a solid sense of progression.
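The best-known family of these algorithms descends from SuperMemo’s SM-2. To give a flavour of what is going on under the hood (this is a rough sketch of the published SM-2 update rules, not the scheduler of any particular app):

```python
from dataclasses import dataclass

@dataclass
class Card:
    interval: float = 1.0   # days until the next review
    ease: float = 2.5       # multiplier, adjusted by answer quality
    reps: int = 0           # consecutive successful reviews

def review(card: Card, quality: int) -> Card:
    """Update a card after a review, SM-2 style.
    quality: 0 (complete blackout) .. 5 (perfect recall)."""
    if quality < 3:                 # failed: restart the repetition cycle
        card.reps = 0
        card.interval = 1.0
    else:
        card.reps += 1
        if card.reps == 1:
            card.interval = 1.0
        elif card.reps == 2:
            card.interval = 6.0
        else:
            card.interval *= card.ease
        # ease rises with confident answers, falls with hesitant ones,
        # but never drops below the SM-2 floor of 1.3
        card.ease = max(1.3, card.ease + 0.1
                        - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    return card
```

Each card carries its own ease factor, so items a particular learner finds hard resurface more often than items they find easy, which is the self-pacing idea in a nutshell.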

2             Quantity, balance and timing of new and ‘old’ items

A spaced repetition algorithm determines the optimum interval between repetitions, but further algorithms will be needed to determine when and with what frequency new items will be added to the deck. Once a system knows how many items a learner needs to learn and the time in which they have to do it, it is possible to determine the timing and frequency of the presentation of new items. But the system cannot know in advance how well an individual learner will learn the items (for any individual, some items will be more readily learnable than others) nor the extent to which learners will live up to their own positive expectations of time spent on-app. As most users of flashcard systems know, it is easy to fall behind, feel swamped and, ultimately, give up. An intelligent system needs to be able to respond to individual variables in order to ensure that the learning load is realistic.
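One simple way of keeping the load realistic, sketched here with invented names and thresholds purely for illustration, is to pace new items evenly against the deadline but cap them by the size of the day’s review backlog:

```python
def new_items_today(remaining_new: int, days_left: int,
                    due_reviews: int, daily_budget: int = 50) -> int:
    """Decide how many new items to introduce today.

    Pace new material evenly across the remaining days, but throttle
    it whenever the review backlog already eats into the daily budget,
    so the learner is never swamped.
    """
    if days_left <= 0 or remaining_new <= 0:
        return 0
    target_pace = -(-remaining_new // days_left)   # ceiling division
    spare_capacity = max(0, daily_budget - due_reviews)
    return min(target_pace, spare_capacity)
```

A real system would also need to respond to the variables mentioned above (how learnable each item turns out to be, how far actual time-on-app falls short of the learner’s intentions), but even this crude throttle prevents the swamped-then-give-up spiral.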

3             Task variety

A standard flashcard system which simply asks learners to indicate whether they ‘know’ a target item before they flip over the card rapidly becomes extremely boring. A system which tests this knowledge soon becomes equally dull. There needs to be a variety of ways in which learners interact with an app, both for reasons of motivation and learning efficiency. It may be the case that, for an individual user, certain task types lead to more rapid gains in learning. An intelligent, adaptive system should be able to capture this information and modify the selection of task types.
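Capturing this information is, at heart, a multi-armed bandit problem: keep mostly serving the task type that seems to work best for this learner, while occasionally trying the others. A toy epsilon-greedy sketch (the task-type names and the success measure are placeholders of my own):

```python
import random

class TaskSelector:
    """Epsilon-greedy choice among task types, favouring the type
    that has produced the best later recall for this learner."""

    def __init__(self, task_types, epsilon=0.2):
        self.epsilon = epsilon
        self.stats = {t: {"trials": 0, "successes": 0} for t in task_types}

    def choose(self):
        if random.random() < self.epsilon:          # keep exploring
            return random.choice(list(self.stats))
        # exploit: pick the best success rate; untried types score 1.0
        # so every task type gets at least one trial
        return max(self.stats, key=lambda t: (
            self.stats[t]["successes"] / self.stats[t]["trials"]
            if self.stats[t]["trials"] else 1.0))

    def record(self, task_type, recalled_later: bool):
        s = self.stats[task_type]
        s["trials"] += 1
        s["successes"] += int(recalled_later)
```

The interesting design question is what counts as ‘success’: a correct answer in the task itself is easy to log, but recall at the next spaced repetition is a much better proxy for learning.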

Most younger learners and some adult learners will respond well to the inclusion of games within the range of task types. Examples of such games include the puzzles developed by Oliver Rose in his Phrase Maze app to accompany Quizlet practice.

4             Generative use

Memory researchers have long known about the ‘Generation Effect’ (see for example this piece of research from the Journal of Verbal Learning and Verbal Behavior, 1978). Items are better learnt when the learner has to generate, in some (even small) way, the target item, rather than simply reading it. In vocabulary learning, this could be, for example, typing in the target word or, more simply, inserting some missing letters. Systems which incorporate task types that require generative use are likely to result in greater learning gains than simple, static flashcards with target items on one side and definitions or translations on the other.
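The ‘missing letters’ idea is trivial to implement. A small illustrative helper (the function name is my own invention):

```python
import random

def letter_gap_prompt(word, gaps=2, seed=None):
    """Blank out a few letters so the learner has to generate part of
    the target item rather than simply read it, e.g. 'a_p_e' for 'apple'."""
    rng = random.Random(seed)  # seedable for reproducible prompts
    positions = set(rng.sample(range(len(word)), min(gaps, len(word))))
    return "".join("_" if i in positions else c
                   for i, c in enumerate(word))
```

Even this much generation has been shown to beat passive reading; typing the whole word from a definition prompt pushes the effect further still.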

5             Receptive and productive practice

The most basic digital flashcard systems require learners to understand a target item, or to generate it from a definition or translation prompt. Valuable as this may be, it won’t help learners much to use these items productively, since these systems focus exclusively on meaning. In order to do this, information must be provided about collocation, colligation, register, etc and these aspects of word knowledge will need to be focused on within the range of task types. At the same time, most vocabulary apps that I have seen focus primarily on the written word. Although any good system will offer an audio recording of the target item, and many will offer the learner the option of recording themselves, learners are invariably asked to type in their answers, rather than say them. For the latter, speech recognition technology will be needed. Ideally, too, an intelligent system will compare learner recordings with the audio models and provide feedback in such a way that the learner is guided towards a closer reproduction of the model.

6             Scaffolding and feedback

Most flashcard systems are basically low-stakes practice self-testing. Research (see, for example, Dunlosky et al.’s metastudy ‘Improving Students’ Learning With Effective Learning Techniques: Promising Directions From Cognitive and Educational Psychology’) suggests that, as a learning strategy, practice testing has high utility – indeed, higher utility than other strategies like keyword mnemonics or highlighting. However, an element of tutoring is likely to enhance practice testing, and, for this, scaffolding and feedback will be needed. If, for example, a learner is unable to produce a correct answer, they will probably benefit from being guided towards it through hints, in the same way as a teacher would elicit in a classroom. Likewise, feedback on why an answer is wrong (as opposed to simply being told that you are wrong), followed by encouragement to try again, is likely to enhance learning. Such feedback might, for example, point out that there is perhaps a spelling problem in the learner’s attempted answer, that the attempted answer is in the wrong part of speech, or that it is semantically close to the correct answer but does not collocate with other words in the text. The incorporation of intelligent feedback of this kind will require a number of NLP tools, since it will never be possible for a human item-writer to anticipate all the possible incorrect answers. A current example of intelligent feedback of this kind can be found in the Oxford English Vocabulary Trainer app.
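Long before heavyweight NLP tools are involved, even a crude string-similarity check can separate a spelling near-miss from a wrong word. A hypothetical first tier of such feedback, using Python’s standard-library SequenceMatcher (the messages and the 0.8 threshold are my own placeholders):

```python
from difflib import SequenceMatcher

def feedback(attempt: str, answer: str) -> str:
    """A crude first tier of feedback: distinguish a near-miss
    spelling from a wrong word before handing over to NLP tools."""
    if attempt == answer:
        return "Correct!"
    if attempt.lower() == answer.lower():
        return "Check your capitalisation."
    similarity = SequenceMatcher(None, attempt.lower(),
                                 answer.lower()).ratio()
    if similarity >= 0.8:           # close in form: probably a typo
        return "Close: check your spelling and try again."
    return "Not quite: think about the meaning and try again."
```

Part-of-speech and collocation feedback of the kind described above would sit in later tiers, and genuinely does need NLP tooling (a tagger, a collocation database) rather than string matching.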

7             Content

At the very least, a decent vocabulary app will need good definitions and translations (how many different languages?), and these will need to be tagged to the senses of the target items. These will need to be supplemented with all the other information that you find in a good learner’s dictionary: syntactic patterns, collocations, cognates, an indication of frequency, etc. The only way of getting this kind of high-quality content is by paying to license it from a company with expertise in lexicography. It doesn’t come cheap.

There will also need to be example sentences, both to illustrate meaning / use and for deployment in tasks. Dictionary databases can provide some of these, but they cannot be relied on as a source. This is because the example sentences in dictionaries have been selected and edited to accompany the other information provided in the dictionary, and not as items in practice exercises, which have rather different requirements. Once more, the solution doesn’t come cheap: experienced item writers will be needed.

Dictionaries describe and illustrate how words are typically used. But examples of typical usage tend to be as dull as they are forgettable. Learning is likely to be enhanced if examples are cognitively salient: weird examples with odd collocations, for example. Another thing for the item writers to think about.

A further challenge for an app which is not level-specific is that both the definitions and example sentences need to be level-specific. An A1 / A2 learner will need the kind of content that is found in, say, the Oxford Essential dictionary; B2 learners and above will need content from, say, the OALD.

8             Artwork and design

It’s easy enough to find artwork or photos of concrete nouns, but try to find or commission a pair of pictures that differentiate, for example, the adjectives ‘wild’ and ‘dangerous’ … What kind of pictures might illustrate simple verbs like ‘learn’ or ‘remember’? Will such illustrations be clear enough when squeezed into a part of a phone screen? Animations or very short video clips might provide a solution in some cases, but these are more expensive to produce and video files are much heavier.

With a few notable exceptions, such as the British Council’s MyWordBook 2, design in vocabulary apps has been largely forgotten.

9             Importable and personalisable lists

Many learners will want to use a vocabulary app in association with other course material (e.g. coursebooks), so ready-made word lists will need to be importable. Teachers, however, will inevitably want to edit these lists, deleting some items, adding others. Learners will want to do the same. This is a huge headache for app designers. If new items are going to be added to word lists, how will the definitions, example sentences and illustrations be generated? Will the database contain audio recordings of these words? How will these items be added to the practice tasks (if these include task types that go beyond simple double-sided flashcards)? NLP tools are not yet good enough to trawl a large corpus in order to select (and possibly edit) sentences that illustrate the right meaning and which are appropriate for interactive practice exercises. We can personalise the speed of learning and even the types of learning tasks, so long as the target language is predetermined. But as soon as we allow for personalisation of content, we run into difficulties.
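A rough sketch makes the difficulty concrete. A naive first pass at harvesting example sentences for a newly added word might simply filter a corpus by token match and sentence length (the function name and length band below are hypothetical). Everything this sketch cannot do – sense disambiguation, level checking, collocation filtering – is exactly where the real NLP problem lies.

```python
import re

def candidate_examples(word, corpus_sentences, min_len=6, max_len=14):
    """Naively shortlist corpus sentences that contain the target word
    as a whole token and fall within a learner-friendly length band.
    A real system would also need sense disambiguation, CEFR-level
    checks, and filtering for exercise suitability."""
    pattern = re.compile(r"\b" + re.escape(word) + r"\b", re.IGNORECASE)
    shortlist = []
    for sent in corpus_sentences:
        tokens = sent.split()
        if min_len <= len(tokens) <= max_len and pattern.search(sent):
            shortlist.append(sent)
    return shortlist

corpus = [
    "The wild horses ran across the open plain at dawn.",
    "Wild!",
    "It is dangerous to underestimate how wild and unpredictable the "
    "weather in the high mountains can become at this time of year.",
]
print(candidate_examples("wild", corpus))  # only the first sentence survives
```

Note that the surviving sentence may still illustrate the wrong sense of the word, or be unusable as a gap-fill item – which is why the shortlist would still need human (or much smarter automated) review.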

10          Gamification

Maintaining motivation to use a vocabulary app is not easy. Gamification may help. Measuring progress against objectives will be a start. Stars and badges and leaderboards may help some users. Rewards may help others. But gamification features need to be built into the heart of the system, into the design and selection of tasks, rather than simply tacked on as an afterthought. They need to be trialled and tweaked, so analytics will be needed.

11          Teacher support

Although the use of vocabulary flashcards is beginning to catch on, English language teachers need help with ways to incorporate them into the work they do with their students. What can teachers do in class to encourage use of the app? In what ways does app use require teachers to change their approach to vocabulary work in the classroom? Reporting functions can help teachers know about the progress their students are making and provide very detailed information about words that are causing problems. But, as anyone involved in platform-based course materials knows, teachers need a lot of help.

12          And, of course, …

Apps need to be usable with different operating systems. Ideally, they should be (partially) usable offline. Loading times need to be short. They need to be easy and intuitive to use.

It’s unlikely that I’ll be seeing a vocabulary app with all of these features any time soon. Or, possibly, ever. The cost of developing something that could do all this would be extremely high, and there is no indication that there is a market that would be ready to pay the sort of prices that would be needed to cover the costs of development and turn a profit. We need to bear in mind, too, the fact that vocabulary apps can only ever assist in the initial acquisition of vocabulary: apps alone can’t solve the vocabulary learning problem (despite the silly claims of some app developers). The need for meaningful communicative use, extensive reading and listening, will not go away because a learner has been using an app. So, how far can we go in developing better and better vocabulary apps before users decide that a cheap / free app, with all its shortcomings, is actually good enough?

I posted a follow up to this post in October 2016.

‘Sticky’ – as in ‘sticky learning’ or ‘sticky content’ (as opposed to ‘sticky fingers’ or a ‘sticky problem’) – is itself fast becoming a sticky word. If you check out ‘sticky learning’ on Google Trends, you’ll see that it suddenly spiked in September 2011, following the slightly earlier appearance of ‘sticky content’. The historical rise in this use of the word coincides with the exponential growth in the number of references to ‘big data’.

I am often asked if adaptive learning really will take off as a big thing in language learning. Will adaptivity itself be a sticky idea? When the question is asked, people mean the big data variety of adaptive learning, rather than the much more limited adaptivity of spaced repetition algorithms, which, I think, is firmly here and here to stay. I can’t answer the question with any confidence, but I recently came across a book which suggests a useful way of approaching the question.
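For concreteness, the ‘limited adaptivity’ of spaced repetition usually amounts to an interval-update rule such as SM-2, the ancestor of the scheduling algorithms in many flashcard apps. A minimal sketch, using the standard published constants (the simplification of resetting a failed card without adjusting its ease is mine):

```python
def sm2_update(interval_days, ease, quality):
    """One review step of the SM-2 spaced-repetition algorithm.
    quality: 0-5 self-assessed recall; below 3 resets the card.
    Returns the next review interval (days) and updated ease factor."""
    if quality < 3:
        return 1, ease  # relearn tomorrow; ease left unchanged in this sketch
    # Ease factor drifts with performance, floored at 1.3
    ease = max(1.3, ease + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    if interval_days == 0:
        interval_days = 1        # first successful review
    elif interval_days == 1:
        interval_days = 6        # second successful review
    else:
        interval_days = round(interval_days * ease)
    return interval_days, ease

# A consistently well-remembered card: the gaps between reviews stretch out
interval, ease = 0, 2.5
for quality in (5, 5, 5):
    interval, ease = sm2_update(interval, ease, quality)
print(interval)  # intervals grow 1 -> 6 -> 17 days
```

The point of the sketch is how little ‘adaptivity’ this involves: the algorithm personalises only the timing of review, not the content – which is precisely the gap that big data adaptive learning claims to fill.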

‘From the Ivory Tower to the Schoolhouse’ by Jack Schneider (Harvard Education Press, 2014) investigates the reasons why promising ideas from education research fail to get taken up by practitioners, and why other, less-than-promising ideas, from a research or theoretical perspective, become sticky quite quickly. As an example of the former, Schneider considers Robert Sternberg’s ‘Triarchic Theory’. As an example of the latter, he devotes a chapter to Howard Gardner’s ‘Multiple Intelligences Theory’.

Schneider argues that educational ideas need to possess four key attributes in order for teachers to sit up, take notice and adopt them.

  1. perceived significance: the idea must answer a question central to the profession – offering a big-picture understanding rather than merely one small piece of a larger puzzle
  2. philosophical compatibility: the idea must clearly jibe with closely held [teacher] beliefs like the idea that teachers are professionals, or that all children can learn
  3. occupational realism: it must be possible for the idea to be put easily into immediate use
  4. transportability: the idea needs to find its practical expression in a form that teachers can access and use at the time that they need it – it needs to have a simple core that can travel through pre-service coursework, professional development seminars, independent study and peer networks

To what extent does big data adaptive learning possess these attributes? It certainly comes up trumps with respect to perceived significance. The big question that it attempts to answer is the question of how we can make language learning personalized / differentiated / individualised. As its advocates never cease to remind us, adaptive learning holds out the promise of moving away from a one-size-fits-all approach. The extent to which it can keep this promise is another matter, of course. For it to do so, it will never be enough just to offer different pathways through a digitalised coursebook (or its equivalent). Much, much more content will be needed: at least five or six times the content of a one-size-fits-all coursebook. At the moment, there is little evidence of the necessary investment into content being made (quite the opposite, in fact), but the idea remains powerful nevertheless.

When it comes to philosophical compatibility, adaptive learning begins to run into difficulties. Despite the decades of edging towards more communicative approaches in language teaching, research (e.g. the research into English teaching in Turkey described in a previous post), suggests that teachers still see explanation and explication as key functions of their jobs. They believe that they know their students best and they know what is best for them. Big data adaptive learning challenges these beliefs head on. It is no doubt for this reason that companies like Knewton make such a point of claiming that their technology is there to help teachers. But Jose Ferreira doth protest too much, methinks. Platform-delivered adaptive learning is a direct threat to teachers’ professionalism, their salaries and their jobs.

Occupational realism is more problematic still. Very, very few language teachers around the world have any experience of truly blended learning, and it’s very difficult to envisage precisely what it is that the teacher should be doing in a classroom. Publishers moving towards larger-scale blended adaptive materials know that this is a big problem, and are actively looking at ways of packaging teacher training / teacher development (with a specific focus on blended contexts) into the learner-facing materials that they sell. But the problem won’t go away. Education ministries have a long history of throwing money at technological ‘solutions’ without thinking about obtaining the necessary buy-in from their employees. It is safe to predict that this is something that is unlikely to change. Moreover, learning how to become a blended teacher is much harder than learning, say, how to make good use of an interactive whiteboard. Since there are as many different blended adaptive approaches as there are different educational contexts, there cannot be (irony of ironies) a one-size-fits-all approach to training teachers to make good use of this software.

Finally, how transportable is big data adaptive learning? Not very, is the short answer, and for the same reasons that ‘occupational realism’ is highly problematic.

Looking at things through Jack Schneider’s lens, we might be tempted to come to the conclusion that the future for adaptive learning is a rocky path, at best. But Schneider doesn’t take political or economic considerations into account. Sternberg’s ‘Triarchic Theory’ never had the OECD or the Gates Foundation backing it up. It never had millions and millions of dollars of investment behind it. As we know from political elections (and the big data adaptive learning issue is a profoundly political one), big bucks can buy opinions.

It may also prove to be the case that the opinions of teachers don’t actually matter much. If the big adaptive bucks can win the educational debate at the highest policy-making levels, teachers will be the first victims of the ‘creative disruption’ that adaptivity promises. If you don’t believe me, just look at what is going on in the U.S.

There are causes for concern, but I don’t want to sound too alarmist. Nobody really has a clue whether big data adaptivity will actually work in language learning terms. It remains more of a theory than a research-endorsed practice. And to end on a positive note, regardless of how sticky it proves to be, it might just provide the shot-in-the-arm realisation that language teachers, at their best, are a lot more than competent explainers of grammar or deliverers of gap-fills.