Archive for the ‘Personalization’ Category

Take the Cambridge Assessment English website, for example. When you connect to the site, you will see, at the bottom of the screen, a familiar (to people in Europe, at least) notification about the site’s use of cookies: the cookie consent notice.

You probably trust the site, so you ignore the notification and quickly move on to find the resource you are looking for. But if you did click on the hyperlinked ‘set cookies’, what would you find? The first link takes you to the ‘Cookie policy’, where you will be told that ‘We use cookies principally because we want to make our websites and mobile applications user-friendly, and we are interested in anonymous user behaviour. Generally our cookies don’t store sensitive or personally identifiable information such as your name and address or credit card details’. Scroll down, and you will find out more about the kinds of cookies that are used. Besides the cookies that are necessary to the functioning of the site, you will see that there are also ‘third party cookies’. These are explained as follows: ‘Cambridge Assessment works with third parties who serve advertisements or present offers on our behalf and personalise the content that you see. Cookies may be used by those third parties to build a profile of your interests and show you relevant adverts on other sites. They do not store personal information directly but use a unique identifier in your browser or internet device. If you do not allow these cookies, you will experience less targeted content’.

This is not factually inaccurate: personal information is not stored directly. However, it is extremely easy for this information to be triangulated with other information to identify you personally. In addition to the data that you generate by having cookies on your device, Cambridge Assessment will also directly collect data about you. Depending on your interactions with Cambridge Assessment, this will include ‘your name, date of birth, gender, contact data including your home/work postal address, email address and phone number, transaction data including your credit card number when you make a payment to us, technical data including internet protocol (IP) address, login data, browser type and technology used to access this website’. They say they may share this data ‘with other people and/or businesses who provide services on our behalf or at our request’ and ‘with social media platforms, including but not limited to Facebook, Google, Google Analytics, LinkedIn, in pseudonymised or anonymised forms’.

In short, Cambridge Assessment may hold a huge amount of data about you and they can, basically, do what they like with it.

The cookie and privacy policies are fairly standard, as is the lack of transparency in their phrasing. Rather more transparency would include, for example, information about which particular ad trackers you are giving your consent to. This information can be found with a browser extension tool like Ghostery, which can also block the trackers. As you’ll see below, there are 5 ad trackers on this site. This is rather more than on other sites that English language teachers are likely to visit: ETS-TOEFL has 4, Macmillan English and Pearson have 3, CUP ELT and the British Council Teaching English have 1, and OUP ELT, IATEFL, BBC Learning English and Trinity College have none. Of the sites I checked, only TESOL, with 6 ad trackers, has more. The blogs of all these organisations invariably have more trackers than their websites.

The use of numerous ad trackers is probably a reflection of the importance that Cambridge Assessment gives to social media marketing. There is a research paper, produced by Cambridge Assessment, which outlines the significance of big data and social media analytics. They have far more Facebook followers (and nearly 6 million likes) than any other ELT page, and they are proud of their #1 ranking in the education category of social media. The amount of data that can be collected here is enormous and it can be analysed in myriad ways using tools like Ubervu, Yomego and Hootsuite.

A little more transparency, however, would not go amiss. According to a report in Vox, Apple has announced that some time next year ‘iPhone users will start seeing a new question when they use many of the apps on their devices: Do they want the app to follow them around the internet, tracking their behavior?’ Obviously, Google and Facebook are none too pleased about this and will be fighting back. The implications for ad trackers and online advertising, more generally, are potentially huge. I wrote to Cambridge Assessment about this and was pleased to hear that ‘Cambridge Assessment are currently reviewing the process by which we obtain users consent for the use of cookies with the intention of moving to a much more transparent model in the future’. Let’s hope that other ELT organisations are doing the same.

You may be less bothered than I am by the thought of dozens of ad trackers following you around the net so that you can be served with more personalized ads. But the digital profile about you, to which these cookies contribute, may include information about your ethnicity, disabilities and sexual orientation. This profile is auctioned to advertisers when you visit some sites, allowing them to show you ‘personalized’ adverts based on the categories in your digital profile. Contrary to EU regulations, these categories may include whether you have cancer, a substance-abuse problem, your politics and religion (as reported in Fortune https://fortune.com/2019/01/28/google-iab-sensitive-profiles/ ).

But it’s not these cookies that are the most worrying aspect of our lack of digital privacy. It’s the sheer quantity of personal data that is stored about us. Every time we ask our students to use an app or a platform, we are asking them to divulge huge amounts of data. With ClassDojo, for example, this includes names, usernames, passwords, age, addresses, photographs, videos, documents, drawings or audio files, IP addresses and browser details, clicks, referring URLs, time spent on site, and page views (Manolev et al., 2019; see also Williamson, 2019).

It is now widely recognized that the ‘consent’ obtained through cookie policies and other end-user agreements is largely spurious. These consent agreements, as Sadowski (2019) observes, are non-negotiated and non-negotiable; you either agree or you are denied access. What’s more, he adds, citing one study, it would take 76 days, working 8 hours a day, to read the privacy policies a person typically encounters in a year. As a result, most of us choose not to choose when we accept online services (Cobo, 2019: 25). We have little, if any, control over how the data that is collected is used (Birch et al., 2020). More importantly, perhaps, when we ask our students to sign up to an educational app, we are asking / telling them to give away their personal data, not just our own. They are unlikely to fully understand the consequences of doing so.

The extent of this ignorance is also now widely recognized. In the UK, for example, two reports (cited by Sander, 2020a) indicate that ‘only a third of people know that data they have not actively chosen to share has been collected’ (Doteveryone, 2018: 5), and that ‘less than half of British adult internet users are aware that apps collect their location and information on their personal preferences’ (Ofcom, 2019: 14).

The main problem with this has been expressed by the programmer and activist Richard Stallman, in an interview with New York magazine (Kulwin, 2018): ‘Companies are collecting data about people. The data that is collected will be abused. That’s not an absolute certainty, but it’s a practical, extreme likelihood, which is enough to make collection a problem’.

The abuse that Stallman is referring to can come in a variety of forms. At the relatively trivial end is personalized advertising. Much more serious is the way that data aggregation companies scrape data from a variety of sources, building up individual data profiles which can be used to make significant life-impacting decisions, such as final academic grades or whether one is offered a job, insurance or credit (Manolev et al., 2019). Cathy O’Neil’s (2016) best-selling ‘Weapons of Math Destruction’ spells out in detail how this abuse of data increases racial, gender and class inequalities. And after the revelations of Edward Snowden, we all know about the routine collection by states of huge amounts of data about, well, everyone. Whether it’s used for predictive policing, straightforward repression or something else, it is simply not possible for younger people, our students, to know what personal data they may regret divulging at a later date.

Digital educational providers may try to reassure us that they will keep data private and not use it for advertising purposes, but the reassurances are hollow. These companies may change their terms and conditions further down the line, and there are examples of this happening (Moore, 2018: 210). But even if this does not happen, the data can never be fully secure. Illegal data breaches and cyber attacks are relentless, and education ranked worst at cybersecurity out of 17 major industries in one recent analysis (Foresman, 2018). One report suggests that one in five US schools and colleges has fallen victim to cyber-crime. Two weeks ago, I learnt (by chance, as I happened to be looking at my security settings in Chrome) that my passwords for Quizlet, Future Learn, Elsevier and Science Direct had been compromised by a data breach. To get a better understanding of the scale of data breaches, you might like to look at the UK’s IT Governance site, which lists detected and publicly disclosed data breaches and cyber attacks each month (36.6 million records breached in August 2020). If you scroll through the list, you’ll see how many of them are educational sites. You’ll also see a comment about how leaky organisations have been throughout lockdown … because they weren’t prepared for the sudden shift online.
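Incidentally, breached-password checks like the Chrome one mentioned above can be done without sending your actual password anywhere, using a k-anonymity scheme: only the first five characters of a hash of the password leave your machine. The public Pwned Passwords API works this way (the post doesn’t mention this service; I use it here purely as a concrete illustration of the mechanism):

```python
# Sketch of a k-anonymity breached-password check, as used by the
# public Pwned Passwords range API. Only the 5-character hash prefix
# is ever sent to the server; the suffix comparison happens locally.
import hashlib

def hash_parts(password: str) -> tuple[str, str]:
    """Split the uppercase SHA-1 hex digest into a 5-char prefix
    (sent to the server) and the remaining suffix (kept local)."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def count_in_response(suffix: str, response_text: str) -> int:
    """Parse the 'SUFFIX:COUNT' lines the server returns for a prefix,
    and report how often our password appears in known breaches."""
    for line in response_text.splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix:
            return int(count)
    return 0
```

Fetching `https://api.pwnedpasswords.com/range/<prefix>` and passing the response body to `count_in_response` completes the check; a non-zero count means the password has appeared in a breach.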

Recent years have seen a growing consensus that ‘it is crucial for language teaching to […] encompass the digital literacies which are increasingly central to learners’ […] lives’ (Dudeney et al., 2013). Most of the focus has been on the skills that are needed to use digital media. There also appears to be growing interest in developing critical thinking skills in the context of digital media (e.g. Peachey, 2016) – identifying fake news and so on. To a much lesser extent, there has been some focus on ‘issues of digital identity, responsibility, safety and ethics when students use these technologies’ (Mavridi, 2020a: 172). Mavridi (2020b: 91) also briefly discusses the personal risks of digital footprints, but she does not have the space to explore more fully the notion of critical data literacy. This literacy involves an understanding of not just the personal risks of using ‘free’ educational apps and platforms, but of why they are ‘free’ in the first place. Sander (2020b) suggests that this literacy entails ‘an understanding of datafication, recognizing the risks and benefits of the growing prevalence of data collection, analytics, automation, and predictive systems, as well as being able to critically reflect upon these developments. This includes, but goes beyond the skills of, for example, changing one’s social media settings, and rather constitutes an altered view on the pervasive, structural, and systemic levels of changing big data systems in our datafied societies’.

In my next two posts, I will, first of all, explore in more detail the idea of critical data literacy, before suggesting a range of classroom resources.

(I posted about privacy in March 2014, when I looked at the connections between big data and personalized / adaptive learning. In another post, in September 2014, I looked at the claims of the CEO of Knewton, who bragged that his company had ‘five orders of magnitude more data about you than Google has. … We literally have more data about our students than any company has about anybody else about anything, and it’s not even close’. You might find both of these posts interesting.)

References

Birch, K., Chiappetta, M. & Artyushina, A. (2020). ‘The problem of innovation in technoscientific capitalism: data rentiership and the policy implications of turning personal digital data into a private asset’ Policy Studies, 41:5, 468-487, DOI: 10.1080/01442872.2020.1748264

Cobo, C. (2019). I Accept the Terms and Conditions. https://adaptivelearninginelt.files.wordpress.com/2020/01/41acf-cd84b5_7a6e74f4592c460b8f34d1f69f2d5068.pdf

Doteveryone. (2018). People, Power and Technology: The 2018 Digital Attitudes Report. https://attitudes.doteveryone.org.uk

Dudeney, G., Hockly, N. & Pegrum, M. (2013). Digital Literacies. Harlow: Pearson Education

Foresman, B. (2018). Education ranked worst at cybersecurity out of 17 major industries. Edscoop, December 17, 2018. https://edscoop.com/education-ranked-worst-at-cybersecurity-out-of-17-major-industries/

Kulwin, N. (2018). ‘F*ck Them. We Need a Law’: A Legendary Programmer Takes on Silicon Valley. New York Intelligencer, April 2018. https://nymag.com/intelligencer/2018/04/richard-stallman-rms-on-privacy-data-and-free-software.html

Manolev, J., Sullivan, A. & Slee, R. (2019). ‘Vast amounts of data about our children are being harvested and stored via apps used by schools’ EduResearch Matters, February 18, 2019. https://www.aare.edu.au/blog/?p=3712

Mavridi, S. (2020a). Fostering Students’ Digital Responsibility, Ethics and Safety Skills (Dress). In Mavridi, S. & Saumell, V. (Eds.) Digital Innovations and Research in Language Learning. Faversham, Kent: IATEFL. pp. 170 – 196

Mavridi, S. (2020b). Digital literacies and the new digital divide. In Mavridi, S. & Xerri, D. (Eds.) English for 21st Century Skills. Newbury, Berks.: Express Publishing. pp. 90 – 98

Moore, M. (2018). Democracy Hacked. London: Oneworld

Ofcom. (2019). Adults: Media use and attitudes report [Report]. https://www.ofcom.org.uk/__data/assets/pdf_file/0021/149124/adults-media-use-and-attitudes-report.pdf

O’Neil, C. (2016). Weapons of Math Destruction. London: Allen Lane

Peachey, N. (2016). Thinking Critically through Digital Media. http://peacheypublications.com/

Sadowski, J. (2019). ‘When data is capital: Datafication, accumulation, and extraction’ Big Data and Society 6 (1) https://doi.org/10.1177%2F2053951718820549

Sander, I. (2020a). What is critical big data literacy and how can it be implemented? Internet Policy Review, 9 (2). DOI: 10.14763/2020.2.1479 https://www.econstor.eu/bitstream/10419/218936/1/2020-2-1479.pdf

Sander, I. (2020b). Critical big data literacy tools—Engaging citizens and promoting empowered internet usage. Data & Policy, 2: e5 doi:10.1017/dap.2020.5

Williamson, B. (2019). ‘Killer Apps for the Classroom? Developing Critical Perspectives on ClassDojo and the ‘Ed-tech’ Industry’ Journal of Professional Learning, 2019 (Semester 2) https://cpl.asn.au/journal/semester-2-2019/killer-apps-for-the-classroom-developing-critical-perspectives-on-classdojo

From time to time, I have mentioned Programmed Learning (or Programmed Instruction) in this blog (here and here, for example). It felt like time to go into a little more detail about what Programmed Instruction was (and is) and why I think it’s important to know about it.

A brief description

The basic idea behind Programmed Instruction was that subject matter could be broken down into very small parts, which could be organised into an optimal path for presentation to students. Students worked, at their own speed, through a series of micro-tasks, building their mastery of each nugget of learning that was presented, not progressing from one to the next until they had demonstrated they could respond accurately to the previous task.

There were two main types of Programmed Instruction: linear programming and branching programming. In the former, every student would follow the same path, the same sequence of frames. This could be used in classrooms for whole-class instruction, and I tracked down a book (illustrated below) called ‘Programmed English Course Student’s Book 1’ (Hill, 1966), which was an attempt to transfer the ideas behind Programmed Instruction to a zero-tech classroom environment. This is very similar in approach to the material I had to use when working at an Inlingua school in the 1980s.

Programmed English Course

Comparatives strip

An example of how self-paced programming worked is illustrated here, with a section on comparatives.

With branching programming, ‘extra frames (or branches) are provided for students who do not get the correct answer’ (Kay et al., 1968: 19). This was only suitable for self-study, but it was clearly preferable, as it allowed for self-pacing and some personalization. The material could be presented in books (which meant that students had to flick back and forth in their books) or with special ‘teaching machines’, but the latter were preferred.
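The mechanics of the two program types can be sketched in a few lines of code. The frame format and attempt-counting below are my own invention, purely to illustrate the logic; real teaching machines presented frames on paper rolls or film:

```python
# A toy model of Programmed Instruction. A 'frame' is one micro-task;
# the student does not move on until they answer it correctly.

def run_linear(frames, answer_fn):
    """Linear programming: one fixed sequence of frames for every
    student; each frame is repeated until answered correctly.
    Returns the total number of attempts made."""
    attempts = 0
    for prompt, correct in frames:
        while True:
            attempts += 1
            if answer_fn(prompt) == correct:
                break
    return attempts

def run_branching(frames, answer_fn):
    """Branching programming: a wrong answer diverts the student to a
    remedial frame (a 'branch') before rejoining the main sequence."""
    attempts = 0
    for prompt, correct, (r_prompt, r_correct) in frames:
        attempts += 1
        if answer_fn(prompt) != correct:
            while True:
                attempts += 1
                if answer_fn(r_prompt) == r_correct:
                    break
    return attempts
```

So a student who got a comparatives frame (‘big → ?’) wrong would, under linear programming, simply be shown the same frame again, whereas under branching programming they would first be shown a remedial hint frame before rejoining the main path.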

In the words of an early enthusiast, Programmed Instruction was essentially ‘a device to control a student’s behaviour and help him to learn without the supervision of a teacher’ (Kay et al., 1968: 58). The approach was inspired by the work of Skinner, and it was first used as part of a university course in behavioural psychology taught by Skinner at Harvard University in 1957. It moved into secondary schools for the teaching of mathematics in 1959 (Saettler, 2004: 297).

Enthusiasm and uptake

The parallels between current enthusiasm for the power of digital technology to transform education and the excitement about Programmed Instruction and teaching machines in the 1960s are very striking (McDonald et al., 2005: 90). In 1967, it was reported that ‘we are today on the verge of what promises to be a revolution in education’ (Goodman, 1967: 3) and that ‘tremors of excitement ran through professional journals and conferences and department meetings from coast to coast’ (Kennedy, 1967: 871). The following year, another commentator referred to the way that the field of education had been stirred ‘with an almost Messianic promise of a breakthrough’ (Ornstein, 1968: 401). Programmed instruction was also seen as an exciting business opportunity: ‘an entire industry is just coming into being and significant sales and profits should not be too long in coming’, wrote one hopeful financial analyst as early as 1961 (Kozlowski, 1961: 47).

The new technology seemed to offer a solution to the ‘problems of education’. Media reports in 1963 in Germany, for example, discussed a shortage of teachers, large classes and inadequate learning progress … ‘an ‘urgent pedagogical emergency’ that traditional teaching methods could not resolve’ (Hof, 2018). Individualised learning, through Programmed Instruction, would equalise educational opportunity and if you weren’t part of it, you would be left behind. In the US, two billion dollars were spent on educational technology by the government in the decade following the passing of the National Defense Education Act, and this was added to by grants from private foundations. As a result, ‘the production of teaching machines began to flourish, accompanied by the marketing of numerous ‘teaching units’ stamped into punch cards as well as less expensive didactic programme books and index cards. The market grew dramatically in a short time’ (Hof, 2018).

In the field of language learning, however, enthusiasm was more muted. In the year in which he completed his doctoral studies, the eminent linguist Bernard Spolsky noted that ‘little use is actually being made of the new technique’ (Spolsky, 1966). A year later, a survey of over 600 foreign language teachers at US colleges and universities reported that only about 10% of them had programmed materials in their departments (Valdman, 1968: 1). In most of these cases, the materials ‘were being tried out on an experimental basis under the direction of their developers’. And two years after that, it was reported that ‘programming has not yet been used to any very great extent in language teaching, so there is no substantial body of experience from which to draw detailed, water-tight conclusions’ (Howatt, 1969: 164).

By the early 1970s, Programmed Instruction was already beginning to seem like yesterday’s technology, even though the principles behind it are still very much alive today (Thornbury (2017) refers to Duolingo as ‘Programmed Instruction’). It would be nice to think that language teachers of the day were more sceptical than, for example, their counterparts teaching mathematics. It would be nice to think that, like Spolsky, they had taken on board Chomsky’s (1959) demolition of Skinner. But the widespread popularity of Audiolingual methods suggests otherwise. Audiolingualism, based essentially on the same Skinnerian principles as Programmed Instruction, needed less outlay on technology. The machines (a slide projector and a record or tape player) were cheaper than the teaching machines, could be used for other purposes and did not become obsolete so quickly. The method also lent itself more readily to established school systems (i.e. whole-class teaching) and the skills sets of teachers of the day. Significantly, too, there was relatively little investment in Programmed Instruction for language teaching (compared to, say, mathematics), since this was a smallish and more localized market. There was no global market for English language learning as there is today.

Lessons to be learned

1 Shaping attitudes

It was not hard to persuade some educational authorities of the value of Programmed Instruction. As discussed above, it offered a solution to the problem of ‘the chronic shortage of adequately trained and competent teachers at all levels in our schools, colleges and universities’, wrote Goodman (1967: 3), who added that ‘there is growing realisation of the need to give special individual attention to handicapped children and to those apparently or actually retarded’. The new teaching machines ‘could simulate the human teacher and carry out at least some of his functions quite efficiently’ (Goodman, 1967: 4). This wasn’t quite the same thing as saying that the machines could replace teachers, although some might have hoped for this. The official line was more often that the machines could ‘be used as devices, actively co-operating with the human teacher as adaptive systems and not just merely as aids’ (Goodman, 1967: 37). But this more nuanced message did not always get through, and ‘the Press soon stated that robots would replace teachers and conjured up pictures of classrooms of students with little iron men in front of them’ (Kay et al., 1968: 161).

For teachers, though, it was one thing to be told that the machines would free their time for more meaningful tasks, but harder to believe this when it was accompanied by a ‘rhetoric of the instructional inadequacies of the teacher’ (McDonald et al., 2005: 88). Many teachers felt threatened. They ‘reacted against the ‘unfeeling machine’ as a poor substitute for the warm, responsive environment provided by a real, live teacher. Others have seemed to take it more personally, viewing the advent of programmed instruction as the end of their professional career as teachers. To these, even the mention of programmed instruction produces a momentary look of panic followed by the appearance of determination to stave off the ominous onslaught somehow’ (Tucker, 1972: 63).

Some of those who were pushing for Programmed Instruction had a bigger agenda, with their sights set firmly on broader school reform made possible through technology (Hof, 2018). Individualised learning and Programmed Instruction were not just ends in themselves: they were ways of facilitating bigger changes. The trouble was that teachers were necessary for Programmed Instruction to work. On the practical level, it became apparent that a blend of teaching machines and classroom teaching was more effective than the machines alone (Saettler, 2004: 299). But the teachers’ attitudes were crucial: a research study involving over 6000 students of Spanish showed that ‘the more enthusiastic the teacher was about programmed instruction, the better the work the students did, even though they worked independently’ (Saettler, 2004: 299). In other researched cases, too, ‘teacher attitudes proved to be a critical factor in the success of programmed instruction’ (Saettler, 2004: 301).

2 Returns on investment

Pricing a hyped edtech product is a delicate matter. Vendors need to see a relatively quick return on their investment, before a newer technology knocks them out of the market. Developments in computing were fast in the late 1960s, and the Altair 8800, often considered the first commercially successful personal computer, appeared in 1974. But too high a price carried obvious risks. In 1967, the cheapest teaching machine in the UK, the Tutorpack (from Packham Research Ltd), cost £7 12s (equivalent to about £126 today), but machines like these were disparagingly referred to as ‘page-turners’ (Higgins, 1983: 4). A higher-end linear programming machine cost twice this amount. Branching programme machines cost a lot more. The Mark II AutoTutor (from USI Great Britain Limited), for example, cost £31 per month (equivalent to £558), with eight reels of programmes thrown in (Goodman, 1967: 26). A lower-end branching machine, the Grundytutor, could be bought for £230 (worth about £4,140 today).
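For readers unfamiliar with pre-decimal British currency, £7 12s means 7 pounds and 12 shillings, with 20 shillings to the pound (and 12 pence to the shilling). A quick sketch of the conversion, with the inflation multiplier backed out of the £7 12s ≈ £126 figure above:

```python
# Pre-decimal sterling: 1 pound = 20 shillings, 1 shilling = 12 pence.
def old_to_decimal(pounds, shillings=0, pence=0):
    """Convert a pounds/shillings/pence amount to decimal pounds."""
    return pounds + shillings / 20 + pence / 240

tutorpack = old_to_decimal(7, 12)   # £7 12s = £7.60
multiplier = 126 / tutorpack        # ≈ 16.6, implied by the £126 estimate
```

By the same multiplier, the higher-end linear machine, at twice the Tutorpack’s price (£15 4s), comes out at roughly £250 in today’s money.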

Teaching machines (from Goodman)

AutoTutor Mk II (from Goodman)

This was serious money, and any institution splashing out on teaching machines needed to be confident that they would be well used for a long period of time (Nordberg, 1965). The programmes (the software) were specific to individual machines and the content could not be updated easily. At the same time, other technological developments (cine projectors, tape recorders, record players) were arriving in classrooms, and schools found themselves having to pay for technical assistance and maintenance. The average teacher was ‘unable to avail himself fully of existing aids because, to put it bluntly, he is expected to teach for too many hours a day and simply has not the time, with all the administrative chores he is expected to perform, either to maintain equipment, to experiment with it, let alone keeping up with developments in his own and wider fields. The advent of teaching machines which can free the teacher to fulfil his role as an educator will intensify and not diminish the problem’ (Goodman, 1967: 44). Teaching machines, in short, were ‘oversold and underused’ (Cuban, 2001).

3 Research and theory

Looking back twenty years later, B. F. Skinner conceded that ‘the machines were crude, [and] the programs were untested’ (Skinner, 1986: 105). The documentary record suggests that the second part of this statement is not entirely true. Herrick (1966: 695) reported that ‘an overwhelming amount of research time has been invested in attempts to determine the relative merits of programmed instruction when compared to ‘traditional’ or ‘conventional’ methods of instruction. The results have been almost equally overwhelming in showing no significant differences’. In 1968, Kay et al. (1968: 96) noted that ‘there has been a definite effort to examine programmed instruction’. A later meta-analysis of research in secondary education (Kulik et al., 1982) confirmed that ‘Programmed Instruction did not typically raise student achievement […] nor did it make students feel more positively about the subjects they were studying’.

It was not, therefore, the case that research was not being done. It was that many people were preferring not to look at it. The same holds true for theoretical critiques. In relation to language learning, Spolsky (1966) referred to Chomsky’s (1959) rebuttal of Skinner’s arguments, adding that ‘there should be no need to rehearse these inadequacies, but as some psychologists and even applied linguists appear to ignore their existence it might be as well to remind readers of a few’. Programmed Instruction might have had a limited role to play in language learning, but vendors’ claims went further than that and some people believed them: ‘Rather than addressing themselves to limited and carefully specified FL tasks – for example the teaching of spelling, the teaching of grammatical concepts, training in pronunciation, the acquisition of limited proficiency within a restricted number of vocabulary items and grammatical features – most programmers aimed at self-sufficient courses designed to lead to near-native speaking proficiency’ (Valdman, 1968: 2).

4 Content

When learning is conceptualised as purely the acquisition of knowledge, technological optimists tend to believe that machines can convey it more effectively and more efficiently than teachers (Hof, 2018). The corollary of this is the belief that, if you get the materials right (plus the order in which they are presented and appropriate feedback), you can ‘to a great extent control and engineer the quality and quantity of learning’ (Post, 1972: 14). Learning, in other words, becomes an engineering problem, and technology is its solution.

One of the problems was that technology vendors were, first and foremost, technology specialists. Content was almost an afterthought. Materials writers needed to be familiar with the technology and, if not, they were unlikely to be employed. Writers needed to believe in the potential of the technology, so those familiar with current theory and research would clearly not fit in. The result was unsurprising. Kennedy (1967: 872) reported that ‘there are hundreds of programs now available. Many more will be published in the next few years. Watch for them. Examine them critically. They are not all of high quality’. He was being polite.

5 Motivation

As is usually the case with new technologies, there was a positive novelty effect with Programmed Instruction. And, as is always the case, the novelty effect wears off: ‘students quickly tired of, and eventually came to dislike, programmed instruction’ (McDonald et al., 2005: 89). It could not really have been otherwise: ‘human learning and intrinsic motivation are optimized when persons experience a sense of autonomy, competence, and relatedness in their activity. Self-determination theorists have also studied factors that tend to occlude healthy functioning and motivation, including, among others, controlling environments, rewards contingent on task performance, the lack of secure connection and care by teachers, and situations that do not promote curiosity and challenge’ (McDonald et al., 2005: 93). The demotivating experience of using these machines was particularly acute with younger and ‘less able’ students, as was noted at the time (Valdman, 1968: 9).

The unlearned lessons

I hope that you’ll now understand why I think the history of Programmed Instruction is so relevant to us today. In the words of my favourite Yogi-ism, it’s déjà vu all over again. I have quoted repeatedly from the article by McDonald et al. (2005), which I would highly recommend (available here). Hopefully, too, Audrey Watters’ forthcoming book, ‘Teaching Machines’, will appear before too long, and she will, no doubt, have much more of interest to say on this topic.

References

Chomsky, N. 1959. ‘Review of Skinner’s Verbal Behavior’. Language, 35: 26–58.

Cuban, L. 2001. Oversold & Underused: Computers in the Classroom. (Cambridge, MA: Harvard University Press)

Goodman, R. 1967. Programmed Learning and Teaching Machines 3rd edition. (London: English Universities Press)

Herrick, M. 1966. ‘Programmed Instruction: A critical appraisal’ The American Biology Teacher, 28 (9), 695–698

Higgins, J. 1983. ‘Can computers teach?’ CALICO Journal, 1 (2)

Hill, L. A. 1966. Programmed English Course Student’s Book 1. (Oxford: Oxford University Press)

Hof, B. 2018. ‘From Harvard via Moscow to West Berlin: educational technology, programmed instruction and the commercialisation of learning after 1957’ History of Education, 47:4, 445-465

Howatt, A. P. R. 1969. Programmed Learning and the Language Teacher. (London: Longmans)

Kay, H., Dodd, B. & Sime, M. 1968. Teaching Machines and Programmed Instruction. (Harmondsworth: Penguin)

Kennedy, R.H. 1967. ‘Before using Programmed Instruction’ The English Journal, 56 (6), 871 – 873

Kozlowski, T. 1961. ‘Programmed Teaching’ Financial Analysts Journal, 17 / 6, 47 – 54

Kulik, C.-L., Schwalb, B. & Kulik, J. 1982. ‘Programmed Instruction in Secondary Education: A Meta-analysis of Evaluation Findings’ Journal of Educational Research, 75: 133 – 138

McDonald, J. K., Yanchar, S. C. & Osguthorpe, R.T. 2005. ‘Learning from Programmed Instruction: Examining Implications for Modern Instructional Technology’ Educational Technology Research and Development, 53 / 2, 84 – 98

Nordberg, R. B. 1965. Teaching machines-six dangers and one advantage. In J. S. Roucek (Ed.), Programmed teaching: A symposium on automation in education (pp. 1–8). (New York: Philosophical Library)

Ornstein, J. 1968. ‘Programmed Instruction and Educational Technology in the Language Field: Boon or Failure?’ The Modern Language Journal, 52 / 7, 401 – 410

Post, D. 1972. ‘Up the programmer: How to stop PI from boring learners and strangling results’. Educational Technology, 12(8), 14–1

Saettler, P. 2004. The Evolution of American Educational Technology. (Greenwich, Conn.: Information Age Publishing)

Skinner, B. F. 1986. ‘Programmed Instruction Revisited’ The Phi Delta Kappan, 68 (2), 103 – 110

Spolsky, B. 1966. ‘A psycholinguistic critique of programmed foreign language instruction’ International Review of Applied Linguistics in Language Teaching, Volume 4, Issue 1-4: 119–130

Thornbury, S. 2017. Scott Thornbury’s 30 Language Teaching Methods. (Cambridge: Cambridge University Press)

Tucker, C. 1972. ‘Programmed Dictation: An Example of the P.I. Process in the Classroom’. TESOL Quarterly, 6(1), 61-70

Valdman, A. 1968. ‘Programmed Instruction versus Guided Learning in Foreign Language Acquisition’ Die Unterrichtspraxis / Teaching German, 1 (2), 1 – 14

 

 

 

[1] Spolsky’s doctoral thesis for the University of Montreal was entitled ‘The psycholinguistic basis of programmed foreign language instruction’.

At the start of the last decade, ELT publishers were worried, Macmillan among them. The financial crash of 2008 led to serious difficulties, not least in their key Spanish market. In 2011, Macmillan’s parent company was fined £11.3 million for corruption. Under new ownership, restructuring was a constant. At the same time, Macmillan ELT was getting ready to move from its Oxford headquarters to new premises in London, a move which would inevitably lead to the loss of a sizable proportion of its staff. On top of that, Macmillan, like the other ELT publishers, was aware that changes in the digital landscape (the first 3G iPhone had appeared in June 2008 and wifi access was spreading rapidly around the world) meant that they needed to shift away from the old print-based model. With her finger on the pulse, Caroline Moore wrote an article in October 2010 entitled ‘No Future? The English Language Teaching Coursebook in the Digital Age’. The publication (at the start of the decade) and runaway success of the online ‘Touchstone’ course, from arch-rivals Cambridge University Press, meant that Macmillan needed to change fast if they were to avoid being left behind.

Macmillan already had a platform, Campus, but it was generally recognised as being clunky and outdated, and something new was needed. In the summer of 2012, Macmillan brought in two new executives – people who could talk the ‘creative-disruption’ talk and who believed in the power of big data to shake up English language teaching and publishing. At the time, the idea of big data was beginning to reach public consciousness and ‘Big Data: A Revolution that Will Transform how We Live, Work, and Think’ by Viktor Mayer-Schönberger and Kenneth Cukier, was a major bestseller in 2013 and 2014. ‘Big data’ was the ‘hottest trend’ in technology and peaked in Google Trends in October 2014. See the graph below.

[Graph: Google Trends interest in ‘big data’, peaking in October 2014]

Not long after taking up their positions, the two executives began negotiations with Knewton, an American adaptive learning company. Knewton’s technology promised to gather colossal amounts of data on students using Knewton-enabled platforms. Its founder, Jose Ferreira, bragged that Knewton had ‘more data about our students than any company has about anybody else about anything […] We literally know everything about what you know and how you learn best, everything’. This data would, it was claimed, multiply, by orders of magnitude, the efficacy of learning materials, allowing publishers like Macmillan to provide a truly personalized and optimal offering to learners using their platform.

The contract between Macmillan and Knewton was agreed in May 2013 ‘to build next-generation English Language Learning and Teaching materials’. Perhaps fearful of being left behind in what was seen as a winner-takes-all market (Pearson already had a financial stake in Knewton), Cambridge University Press duly followed suit, signing a contract with Knewton in September of the same year, in order ‘to create personalized learning experiences in [their] industry-leading ELT digital products’. Things moved fast: by the start of 2014, when Macmillan’s new catalogue appeared, customers were told to ‘watch out for the ‘Big Tree’’, Macmillan’s new platform, which would be powered by Knewton. ‘The power that will come from this world of adaptive learning takes my breath away’, wrote the international marketing director.

Not a lot happened next, at least outwardly. In the following year, 2015, the Macmillan catalogue again told customers to ‘look out for the Big Tree’ which would offer ‘flexible blended learning models’ which could ‘give teachers much more freedom to choose what they want to do in the class and what they want the students to do online outside of the classroom’.

[Image: Macmillan catalogue, 2015]

But behind the scenes, everything was going wrong. It had become clear that a linear model of language learning, a necessary prerequisite of the Knewton system, simply did not lend itself to anything remotely marketable in established markets. Skills development, not least the development of so-called 21st century skills, which Macmillan was pushing at the time, would not be facilitated by collecting huge amounts of data and offering algorithmically personalized pathways. Even if it could have been, teachers weren’t ready for it, and the projections for platform adoption were beginning to seem wildly over-optimistic. Costs were spiralling. Pushed to meet unrealistic deadlines for a product that had been ill-conceived in the first place, in-house staff were suffering, and this was made worse by what many staffers saw as a toxic work environment. By the end of 2014 (before the copy for the 2015 catalogue had been written), the two executives had gone.

For some time previously, skeptics had been joking that Macmillan had been barking up the wrong tree, and by the time that the 2016 catalogue came out, the ‘Big Tree’ had disappeared without trace. The problem was that so much time and money had been thrown at this particular tree that not enough had been left to develop new course materials (for adults). The whole thing had been a huge cock-up of an extraordinary kind.

Cambridge, too, lost interest in their Knewton connection, but were fortunate (or wise) not to have invested so much energy in it. Language learning was only ever a small part of Knewton’s portfolio, and the company had raised over $180 million in venture capital. Its founder, Jose Ferreira, had been a master of marketing hype, but the business side was not delivering any better than the educational side of things. Pearson pulled out. In December 2016, Ferreira stepped down and was replaced as CEO. The company shifted to ‘selling digital courseware directly to higher-ed institutions and students’, but this could not stop the decline. In September of 2019, Knewton was sold for something under $17 million, with investors taking a hit of over $160 million. My heart bleeds.

It was clear, from very early on (see, for example, my posts from 2014 here and here) that Knewton’s product was little more than what Michael Feldstein called ‘snake oil’. Why and how could so many people fall for it for so long? Why and how will so many people fall for it again in the coming decade, although this time it won’t be ‘big data’ that does the seducing, but AI (which kind of boils down to the same thing)? The former Macmillan executives are still in the game, albeit in new companies and talking a slightly modified talk, and Jose Ferreira (whose new venture has already raised $3.7 million) is promising to revolutionize education with a start-up which ‘will harness the power of technology to improve both access and quality of education’ (thanks to Audrey Watters for the tip). Investors may be desperate to find places to spread their portfolios, but why do the rest of us lap up the hype? It’s a question to which I will return.

Back in the middle of the last century, the first interactive machines for language teaching appeared. Previously, there had been phonograph discs and wire recorders (Ornstein, 1968: 401), but these had never really taken off. This time, things were different. Buoyed by a belief in the power of technology, along with the need (following the Soviet Union’s successful Sputnik programme) to demonstrate the pre-eminence of the United States’ technological expertise, the interactive teaching machines that were used in programmed instruction promised to revolutionize language learning (Valdman, 1968: 1). From coast to coast, ‘tremors of excitement ran through professional journals and conferences and department meetings’ (Kennedy, 1967: 871). The new technology was driven by hard science, supported and promoted by one of the most well-known and respected psychologists and public intellectuals of the day (Skinner, 1961).

In classrooms, the machines acted as powerfully effective triggers in generating situational interest (Hidi & Renninger, 2006). Even more exciting than the mechanical teaching machines were the computers that were appearing on the scene. ‘Lick’ Licklider, a pioneer in interactive computing at the Advanced Research Projects Agency in Arlington, Virginia, developed an automated drill routine for learning German by hooking up a computer, two typewriters, an oscilloscope and a light pen (Noble, 1991: 124). Students loved it, and some would ‘go on and on, learning German words until they were forced by scheduling to cease their efforts’. Researchers called the seductive nature of the technology ‘stimulus trapping’, and Licklider hoped that ‘before [the student] gets out from under the control of the computer’s incentives, [they] will learn enough German words’ (Noble, 1991: 125).

With many of the developed economies of the world facing a critical shortage of teachers, ‘an urgent pedagogical emergency’ (Hof, 2018), the new approach was considered to be extremely efficient and could equalise opportunity in schools across the country. It was ‘here to stay: [it] appears destined to make progress that could well go beyond the fondest dreams of its originators […] an entire industry is just coming into being and significant sales and profits should not be too long in coming’ (Kozlowski, 1961: 47).

Unfortunately, however, researchers and entrepreneurs had massively underestimated the significance of novelty effects. The triggered situational interest of the machines did not lead to intrinsic individual motivation. Students quickly tired of, and eventually came to dislike, programmed instruction and the machines that delivered it (McDonald et al., 2005: 89). What’s more, the machines were expensive, and ‘research studies conducted on its effectiveness showed that the differences in achievement did not constantly or substantially favour programmed instruction over conventional instruction’ (Saettler, 2004: 303). Newer technologies, with better ‘stimulus trapping’, were appearing. Programmed instruction lost its backing and disappeared, leaving as traces only its interest in clearly defined learning objectives, the measurement of learning outcomes and a concern with the efficiency of learning approaches.

Hot on the heels of programmed instruction came the language laboratory. Futuristic in appearance, not entirely unlike the deck of the starship USS Enterprise which launched at around the same time, language labs captured the public imagination and promised to explore the final frontiers of language learning. As with the earlier teaching machines, students were initially enthusiastic. Even today, when language labs are introduced into contexts where they may be perceived as new technology, they can lead to high levels of initial motivation (e.g. Ramganesh & Janaki, 2017).

Given the huge investments into these labs, it’s unfortunate that initial interest waned fast. By 1969, many of these rooms had turned into ‘“electronic graveyards,” sitting empty and unused, or perhaps somewhat glorified study halls to which students grudgingly repair to don headphones, turn down the volume, and prepare the next period’s history or English lesson, unmolested by any member of the foreign language faculty’ (Turner, 1969: 1, quoted in Roby, 2003: 527). ‘Many second language students shudder[ed] at the thought of entering into the bowels of the “language laboratory” to practice and perfect the acoustical aerobics of proper pronunciation skills. Visions of sterile white-walled, windowless rooms, filled with endless bolted-down rows of claustrophobic metal carrels, and overseen by a humorless, lab director, evoke[d] fear in the hearts of even the most stout-hearted prospective second-language learners’ (Wiley, 1990: 44).

By the turn of this century, language labs had mostly gone, consigned to oblivion by the appearance of yet newer technology: the internet, laptops and smartphones. Education had been on the brink of being transformed through new learning technologies for decades (Laurillard, 2008: 1), but this time it really was different. It wasn’t just one technology that had appeared, but a whole slew of them: ‘artificial intelligence, learning analytics, predictive analytics, adaptive learning software, school management software, learning management systems (LMS), school clouds. No school was without these and other technologies branded as ‘superintelligent’ by the late 2020s’ (Macgilchrist et al., 2019). The hardware, especially phones, was ubiquitous and, therefore, free. Unlike teaching machines and language laboratories, students were used to using the technology and expected to use their devices in their studies.

A barrage of publicity, mostly paid for by the industry, surrounded the new technologies. These would ‘meet the demands of Generation Z’, the new generation of students, now cast as consumers, who ‘were accustomed to personalizing everything’. AR, VR, interactive whiteboards, digital projectors and so on made it easier to ‘create engaging, interactive experiences’. The ‘New Age’ technologies made learning fun and easy, ‘bringing enthusiasm among the students, improving student engagement, enriching the teaching process, and bringing liveliness in the classroom’. On top of that, they allowed huge amounts of data to be captured and sold, whilst tracking progress and attendance. In any case, resistance to digital technology, said more than one language teaching expert, was pointless (Styring, 2015).

At the same time, technology companies increasingly took on ‘central roles as advisors to national governments and local districts on educational futures’ and public educational institutions came to be ‘regarded by many as dispensable or even harmful’ (Macgilchrist et al., 2019).

But, as it turned out, the students of Generation Z were not as uniformly enthusiastic about the new technology as had been assumed, and resistance to digital, personalized delivery in education was not long in coming. In November 2018, high school students at Brooklyn’s Secondary School for Journalism staged a walkout in protest at their school’s use of Summit Learning, a web-based platform promoting personalized learning developed by Facebook. They complained that the platform resulted in coursework requiring students to spend much of their day in front of a computer screen, that it made it easy to cheat by looking up answers online, and that some of their teachers didn’t have the proper training for the curriculum (Leskin, 2018). Besides, their school was in a deplorable state of disrepair, especially the toilets. There were similar protests in Kansas, where students staged sit-ins, supported by their parents, one of whom complained that ‘we’re allowing the computers to teach and the kids all looked like zombies’ before pulling his son out of the school (Bowles, 2019). In Pennsylvania and Connecticut, some schools stopped using Summit Learning altogether, following protests.

But the resistance did not last. Protesters were accused of being nostalgic conservatives and educationalists kept largely quiet, fearful of losing their funding from the Chan Zuckerberg Initiative (Facebook) and other philanthro-capitalists. The provision of training in grit, growth mindset, positive psychology and mindfulness (also promoted by the technology companies) was ramped up, and eventually the disaffected students became more quiescent. Before long, the data-intensive, personalized approach, relying on the tools, services and data storage of particular platforms had become ‘baked in’ to educational systems around the world (Moore, 2018: 211). There was no going back (except for small numbers of ultra-privileged students in a few private institutions).

By the middle of the century (2155), most students, of all ages, studied with interactive screens in the comfort of their homes. Algorithmically-driven content, with personalized, adaptive tests had become the norm, but the technology occasionally went wrong, leading to some frustration. One day, two young children discovered a book in their attic. Made of paper with yellow, crinkly pages, where ‘the words stood still instead of moving the way they were supposed to’. The book recounted the experience of schools in the distant past, where ‘all the kids from the neighbourhood came’, sitting in the same room with a human teacher, studying the same things ‘so they could help one another on the homework and talk about it’. Margie, the younger of the children at 11 years old, was engrossed in the book when she received a nudge from her personalized learning platform to return to her studies. But Margie was reluctant to go back to her fractions. She ‘was thinking about how the kids must have loved it in the old days. She was thinking about the fun they had’ (Asimov, 1951).

References

Asimov, I. 1951. ‘The Fun They Had’. Accessed September 20, 2019. http://web1.nbed.nb.ca/sites/ASD-S/1820/J%20Johnston/Isaac%20Asimov%20-%20The%20fun%20they%20had.pdf

Bowles, N. 2019. ‘Silicon Valley Came to Kansas Schools. That Started a Rebellion’ The New York Times, April 21. Accessed September 20, 2019. https://www.nytimes.com/2019/04/21/technology/silicon-valley-kansas-schools.html

Hidi, S. & Renninger, K. A. 2006. ‘The Four-Phase Model of Interest Development’ Educational Psychologist, 41 (2), 111–127

Hof, B. 2018. ‘From Harvard via Moscow to West Berlin: educational technology, programmed instruction and the commercialisation of learning after 1957’ History of Education, 47 (4), 445–465

Kennedy, R. H. 1967. ‘Before using Programmed Instruction’ The English Journal, 56 (6), 871–873

Kozlowski, T. 1961. ‘Programmed Teaching’ Financial Analysts Journal, 17 (6), 47–54

Laurillard, D. 2008. Digital Technologies and their Role in Achieving our Ambitions for Education. London: Institute of Education

Leskin, P. 2018. ‘Students in Brooklyn protest their school’s use of a Zuckerberg-backed online curriculum that Facebook engineers helped build’ Business Insider, November 12. Accessed September 20, 2019. https://www.businessinsider.de/summit-learning-school-curriculum-funded-by-zuckerberg-faces-backlash-brooklyn-2018-11?r=US&IR=T

McDonald, J. K., Yanchar, S. C. & Osguthorpe, R. T. 2005. ‘Learning from Programmed Instruction: Examining Implications for Modern Instructional Technology’ Educational Technology Research and Development, 53 (2), 84–98

Macgilchrist, F., Allert, H. & Bruch, A. 2019. ‘Students and society in the 2020s: Three future ‘histories’ of education and technology’ Learning, Media and Technology. https://www.tandfonline.com/doi/full/10.1080/17439884.2019.1656235

Moore, M. 2018. Democracy Hacked. London: Oneworld

Noble, D. D. 1991. The Classroom Arsenal. London: The Falmer Press

Ornstein, J. 1968. ‘Programmed Instruction and Educational Technology in the Language Field: Boon or Failure?’ The Modern Language Journal, 52 (7), 401–410

Ramganesh, E. & Janaki, S. 2017. ‘Attitude of College Teachers towards the Utilization of Language Laboratories for Learning English’ Asian Journal of Social Science Studies, 2 (1), 103–109

Roby, W. B. 2003. ‘Technology in the service of foreign language teaching: The case of the language laboratory’ in D. Jonassen (ed.), Handbook of Research on Educational Communications and Technology, 2nd edition, 523–541. Mahwah, NJ: Lawrence Erlbaum Associates

Saettler, P. 2004. The Evolution of American Educational Technology. Greenwich, Conn.: Information Age Publishing

Skinner, B. F. 1961. ‘Teaching Machines’ Scientific American, 205 (5), 90–107

Styring, J. 2015. Engaging Generation Z. Cambridge English webinar. https://www.youtube.com/watch?time_continue=4&v=XCxl4TqgQZA

Valdman, A. 1968. ‘Programmed Instruction versus Guided Learning in Foreign Language Acquisition’ Die Unterrichtspraxis / Teaching German, 1 (2), 1–14

Wiley, P. D. 1990. ‘Language labs for 1990: User-friendly, expandable and affordable’ Media & Methods, 27 (1), 44–47

[Image: Jenny Holzer, ‘Protect me from what I want’, text displayed in Times Square, NYC, 1982]

When the startup, AltSchool, was founded in 2013 by Max Ventilla, the former head of personalization at Google, it quickly drew the attention of venture capitalists and within a few years had raised $174 million from the likes of the Zuckerberg Foundation, Peter Thiel, Laurene Powell Jobs and Pierre Omidyar. It garnered gushing articles in a fawning edtech press which enthused about ‘how successful students can be when they learn in small, personalized communities that champion project-based learning, guided by educators who get a say in the technology they use’. It promised ‘a personalized learning approach that would far surpass the standardized education most kids receive’.

Ventilla was an impressive money-raiser who used, and appeared to believe, every cliché in the edTech sales manual. Dressed in regulation jeans, polo shirt and fleece, he claimed that schools in America were ‘stuck in an industrial-age model, [which] has been in steady decline for the last century’. What he offered, instead, was a learner-centred, project-based curriculum providing real-world lessons, with a focus on social-emotional learning activities and critical thinking.

The key to the approach was technology. From the start, software developers, engineers and researchers worked alongside teachers every day, ‘constantly tweaking the Personalized Learning Plan, which shows students their assignments for each day and helps teachers keep track of and assess student’s learning’. There were tablets for pre-schoolers, laptops for older kids and wall-mounted cameras to record the lessons. There were, of course, Khan Academy videos. Ventilla explained that “we start with a representation of each child”, and even though “the vast majority of the learning should happen non-digitally”, the child’s habits and preferences get converted into data, “a digital representation of the important things that relate to that child’s learning, not just their academic learning but also their non-academic learning. Everything logistic that goes into setting up the experience for them, whether it’s who has permission to pick them up or their allergy information. You name it.” And just like Netflix matches us to TV shows, “If you have that accurate and actionable representation for each child, now you can start to personalize the whole experience for that child. You can create that kind of loop you described where because we can represent a child well, we can match them to the right experiences.”

AltSchool seemed to offer the possibility of doing something noble, of transforming education, ‘bringing it into the digital age’, and, at the same time, a healthy return on investors’ money. AltSchool expanded rapidly, opening nine microschools in New York and the Bay Area, with plans afoot for further expansion in Chicago. But, by then, it was already clear that something was going wrong. Five of the schools were closed before they had really got started and the attrition rate in some classrooms had reached about 30%. Revenue in 2018 was only $7 million and there were few buyers for the AltSchool platform. Quoting once more from the edTech bible, Ventilla explained the situation: ‘Our whole strategy is to spend more than we make.’ Since software is expensive to develop and cheap to distribute, the losses, he believed, would turn into steep profits once AltSchool refined its product and landed enough customers.

The problems were many and apparent. Some of the buildings were simply not appropriate for schools, with no playgrounds or gyms and malfunctioning toilets, among other issues. Parents were becoming unhappy and accused AltSchool of putting ‘its ambitions as a tech company above its responsibility to teach their children’. ‘We kind of came to the conclusion that, really, AltSchool as a school was kind of a front for what Max really wants to do, which is develop software that he’s selling,’ a parent of a former AltSchool student told Business Insider. ‘We had really mediocre educators using technology as a crutch,’ said one father who transferred his child to a different private school after two years at AltSchool. ‘We learned that it’s almost impossible to really customize the learning experience for each kid.’ Some parents began to wonder whether AltSchool had enticed families into its program merely to extract data from their children, then toss them aside.

With the benefit of hindsight, it would seem that the accusations were hardly unfair. In June of this year, AltSchool announced that its four remaining schools would be operated by a new partner, Higher Ground Education (a well-funded startup founded in 2016 which promotes and ‘modernises’ Montessori education). Meanwhile, AltSchool has been rebranded as Altitude Learning, focusing its ‘resources on the development and expansion of its personalized learning platform’ for licensing to other schools across the country.

Quoting once more from the edTech sales manual, Ventilla has said that education should drive the tech, not the other way round. Not so many years earlier, before starting AltSchool, Ventilla also said that he had read two dozen books on education and emerged a fan of Sir Ken Robinson. He had no experience as a teacher or as an educational administrator. Instead, he had ‘extensive knowledge of networks, and he understood the kinds of insights that can be gleaned from big data’.

The use of big data and analytics in education continues to grow.

A vast apparatus of measurement is being developed to underpin national education systems, institutions and the actions of the individuals who occupy them. […] The presence of digital data and software in education is being amplified through massive financial and political investment in educational technologies, as well as huge growth in data collection and analysis in policymaking practices, extension of performance measurement technologies in the management of educational institutions, and rapid expansion of digital methodologies in educational research. To a significant extent, many of the ways in which classrooms function, educational policy departments and leaders make decisions, and researchers make sense of data, simply would not happen as currently intended without the presence of software code and the digital data processing programs it enacts. (Williamson, 2017: 4)

The most common and successful use of this technology so far has been in the identification of students at risk of dropping out of their courses (Jørno & Gynther, 2018: 204). The kind of analytics used in this context may be called ‘academic analytics’ and focuses on educational processes at the institutional level or higher (Gelan et al., 2018: 3). However, ‘learning analytics’, the capture and analysis of learner and learning data in order to personalize learning ‘(1) through real-time feedback on online courses and e-textbooks that can ‘learn’ from how they are used and ‘talk back’ to the teacher, and (2) individualization and personalization of the educational experience through adaptive learning systems that enable materials to be tailored to each student’s individual needs through automated real-time analysis’ (Mayer-Schönberger & Cukier, 2014) has become ‘the main keyword of data-driven education’ (Williamson, 2017: 10). See my earlier posts on this topic here and here and here.

Near the start of Mayer-Schönberger and Cukier’s enthusiastic sales pitch for the use of big data in education (Learning with Big Data: The Future of Education), there is a discussion of Duolingo. They quote Luis von Ahn, the founder of Duolingo, as saying ‘there has been little empirical work on what is the best way to teach a foreign language’. This is so far from the truth as to be laughable. Von Ahn’s comment, along with the Duolingo product itself, is merely indicative of a lack of awareness of the enormous amount of research that has been carried out. But what could the data gleaned from the interactions of millions of users with Duolingo tell us of value? The example that is given is the following. Apparently, ‘in the case of Spanish speakers learning English, it’s common to teach pronouns early on: words like “he,” “she,” and “it”.’ But, Duolingo discovered, ‘the term “it” tends to confuse and create anxiety for Spanish speakers, since the word doesn’t easily translate into their language […] Delaying the introduction of “it” until a few weeks later dramatically improves the number of people who stick with learning English rather than drop out.’ Was von Ahn unaware of the decades of research into language transfer effects? Did von Ahn (who grew up speaking Spanish in Guatemala) need all this data to tell him that English personal pronouns can cause problems for Spanish learners of English? Was von Ahn unaware of the debates concerning the value of teaching isolated words (especially grammar words!)?

The area where little empirical research has been done is not in different ways of learning another language: it is in the use of big data and learning analytics to assist language learning. Claims about the value of these technologies in language learning are almost always speculative – they are based on comparison with other school subjects (especially mathematics). Gelan et al. (2018: 2), who note this lack of research, suggest that ‘understanding language learner behaviour could provide valuable insights into task design for instructors and materials designers, as well as help students with effective learning strategies and personalised learning pathways’ (my italics). Reinders (2018: 81) writes that ‘analysis of prior experiences with certain groups or certain courses may help to identify key moments at which students need to receive more or different support. Analysis of student engagement and performance throughout a course may help with early identification of learning problems and may prompt early intervention’ (italics added). But there is some research out there, and it’s worth having a look at. Most studies that have collected learner-tracking data concern glossary use for reading comprehension and vocabulary retention (Gelan et al., 2018: 5), but a few have attempted to go further in scope.

Volk et al (2015) looked at the behaviour of the 20,000 students per day who use the platform accompanying ‘More!’ (Gerngross et al. 2008) to do their English homework at Austrian lower secondary schools. They discovered that

  • the exercises used least frequently were those that are located further back in the course book
  • usage is highest from Monday to Wednesday, declining from Thursday, with a rise again on Sunday
  • most interaction took place between 3:00 and 5:00 pm
  • repetition of exercises led to a strong improvement in success rate
  • students performed better on multiple choice and matching exercises than they did where they had to produce some language

The authors of this paper conclude by saying that ‘the results of this study suggest a number of new avenues for research. In general, the authors plan to extend their analysis of exercise results and applied exercises to the population of all schools using the online learning platform more-online.at. This step enables a deeper insight into student’s learning behaviour and allows making more generalizing statements.’ When I shared these research findings with the Austrian lower secondary teachers that I work with, their reaction was one of utter disbelief. People get paid to do this research? Why not just ask us?

More useful, more actionable insights may yet come from other sources. For example, Gu Yueguo, Pro-Vice-Chancellor of Beijing Foreign Studies University, has announced the intention to set up a national Big Data research center, specializing in big data-related research topics in foreign language education (Yu, 2015). Meanwhile, I’m aware of only one big research project that has published its results. The EC Erasmus+ VITAL project (Visualisation Tools and Analytics to monitor Online Language Learning & Teaching) was carried out between 2015 and 2017 and looked at the learning trails of students from universities in Belgium, Britain and the Netherlands. It was discovered (Gelan et al, 2018) that:

  • students who did online exercises when they were supposed to do them were slightly more successful than those who were late carrying out the tasks
  • successful students logged on more often, spent more time online, attempted and completed more tasks, revisited both exercises and theory pages more frequently, did the work in the order in which it was supposed to be done and did more work in the holidays
  • most students preferred to go straight into the assessed exercises and only used the theory pages when they felt they needed to; successful students referred back to the theory pages more often than unsuccessful students
  • students made little use of the voice recording functionality
  • most online activity took place the day before a class and the day of the class itself

EU funding for this VITAL project amounted to 274,840 Euros[1]. The technology for capturing the data has been around for a long time. In my opinion, nothing of value, or at least nothing new, has been learnt. Publishers like Pearson and Cambridge University Press, who have large numbers of learners using their platforms, have been capturing learning data for many years. They do not publish their findings and, intriguingly, do not even claim that they have learnt anything useful or actionable from the data they have collected. Sure, an exercise here or there may need to be amended. Both teachers and students may need more support in using the more open-ended functionalities of the platforms (e.g. discussion forums). But are they getting ‘unprecedented insights into what works and what doesn’t’ (Mayer-Schönberger & Cukier, 2014)? Are they any closer to building better pedagogies? On the basis of what we know so far, you wouldn’t want to bet on it.

It may be the case that all the learning and learner data that is captured could be used in some way that has nothing to do with language learning. Show me a language-learning app developer who does not dream of monetizing the ‘behavioural surplus’ (Zuboff, 2019) that they collect! But, for the data and analytics to be of any value in guiding language learning, they must lead to actionable insights. Unfortunately, as Jørno & Gynther (2018: 198) point out, there is very little clarity about what is meant by ‘actionable insights’. There is a danger that data and analytics ‘simply gravitate towards insights that confirm longstanding good practice and insights, such as “students tend to ignore optional learning activities … [and] focus on activities that are assessed”’ (Jørno & Gynther, 2018: 211). While this is happening, the focus on data inevitably shapes the way we look at the object of study (i.e. language learning), ‘thereby systematically excluding other perspectives’ (Mau, 2019: 15; see also Beer, 2019). The belief that tech is always the solution, and that all we need is more data and better analytics, remains very powerful: it’s called techno-chauvinism (Broussard, 2018: 7-8).

References

Beer, D. 2019. The Data Gaze. London: Sage

Broussard, M. 2018. Artificial Unintelligence. Cambridge, Mass.: MIT Press

Gelan, A., Fastre, G., Verjans, M., Martin, N., Jansenswillen, G., Creemers, M., Lieben, J., Depaire, B. & Thomas, M. 2018. ‘Affordances and limitations of learning analytics for computer-assisted language learning: a case study of the VITAL project’. Computer Assisted Language Learning. pp. 1-26. http://clok.uclan.ac.uk/21289/

Gerngross, G., Puchta, H., Holzmann, C., Stranks, J., Lewis-Jones, P. & Finnie, R. 2008. More! 1 Cyber Homework. Innsbruck, Austria: Helbling

Jørno, R. L. & Gynther, K. 2018. ‘What Constitutes an “Actionable Insight” in Learning Analytics?’ Journal of Learning Analytics 5 (3): 198-221

Mau, S. 2019. The Metric Society. Cambridge: Polity Press

Mayer-Schönberger, V. & Cukier, K. 2014. Learning with Big Data: The Future of Education. New York: Houghton Mifflin Harcourt

Reinders, H. 2018. ‘Learning analytics for language learning and teaching’. JALT CALL Journal 14 (1): 77-86 https://files.eric.ed.gov/fulltext/EJ1177327.pdf

Volk, H., Kellner, K. & Wohlhart, D. 2015. ‘Learning Analytics for English Language Teaching.’ Journal of Universal Computer Science 21 (1): 156-174 http://www.jucs.org/jucs_21_1/learning_analytics_for_english/jucs_21_01_0156_0174_volk.pdf

Williamson, B. 2017. Big Data in Education. London: Sage

Yu, Q. 2015. ‘Learning Analytics: The next frontier for computer assisted language learning in big data age’ SHS Web of Conferences, 17 https://www.shs-conferences.org/articles/shsconf/pdf/2015/04/shsconf_icmetm2015_02013.pdf

Zuboff, S. 2019. The Age of Surveillance Capitalism. London: Profile Books

 

[1] See https://ec.europa.eu/programmes/erasmus-plus/sites/erasmusplus2/files/ka2-2015-he_en.pdf

It’s hype time again. Spurred on, no doubt, by the current spate of books and articles about AIED (artificial intelligence in education), the IATEFL Learning Technologies SIG is organising an online event on the topic in November of this year. Currently, the most visible online references to AI in language learning are related to Glossika, basically a language learning system that uses spaced repetition, whose marketing department has realised that references to AI might help sell the product. They’re not alone – see, for example, Knowble, which I reviewed earlier this year.

In the wider world of education, where AI has made greater inroads than in language teaching, every day brings more stuff: How artificial intelligence is changing teaching, 32 Ways AI is Improving Education, How artificial intelligence could help teachers do a better job, etc., etc. There’s a full-length book by Anthony Seldon, The Fourth Education Revolution: will artificial intelligence liberate or infantilise humanity? (2018, University of Buckingham Press) – one of the most poorly researched and badly edited books on education I’ve ever read, although that won’t stop it selling – and, no surprises here, there’s a Pearson commissioned report called Intelligence Unleashed: An argument for AI in Education (2016) which is available free.

Common to all these publications is the claim that AI will radically change education. When it comes to language teaching, a similar claim has been made by Donald Clark (described by Anthony Seldon as an education guru but perhaps best-known to many in ELT for his demolition of Sugata Mitra). In 2017, Clark wrote a blog post for Cambridge English (now unavailable) entitled How AI will reboot language learning, and a more recent version of this post, called AI has and will change language learning forever (sic) is available on Clark’s own blog. Given the history of the failure of education predictions, Clark is making bold claims. Thomas Edison (1922) believed that movies would revolutionize education. Radios were similarly hyped in the 1940s and in the 1960s it was the turn of TV. In the 1980s, Seymour Papert predicted the end of schools – ‘the computer will blow up the school’, he wrote. Twenty years later, we had the interactive possibilities of Web 2.0. As each technology failed to deliver on the hype, a new generation of enthusiasts found something else to make predictions about.

But is Donald Clark onto something? Developments in AI and computational linguistics have recently resulted in enormous progress in machine translation. Impressive advances in automatic speech recognition and generation, coupled with the power that can be packed into a handheld device, mean that we can expect some re-evaluation of the value of learning another language. Stephen Heppell, a specialist in the use of ICT in education at Bournemouth University, has said: ‘Simultaneous translation is coming, making language teachers redundant. Modern languages teaching in future may be more about navigating cultural differences’ (quoted by Seldon, p.263). Well, maybe, but this is not Clark’s main interest.

Less a matter of opinion and much closer to the present day is the issue of assessment. AI is becoming ubiquitous in language testing. Cambridge, Pearson, TELC, Babbel and Duolingo are all using or exploring AI in their testing software, and we can expect to see this increase. Current paper-based systems of testing subject knowledge are, according to Rosemary Luckin and Kristen Weatherby, outdated, ineffective, time-consuming and the cause of great anxiety, and they can easily be automated (Luckin, R. & Weatherby, K. 2018. ‘Learning analytics, artificial intelligence and the process of assessment’ in Luckin, R. (ed.) Enhancing Learning and Teaching with Technology. UCL Institute of Education Press, p.253). By capturing data of various kinds throughout a language learner’s course of study and by using AI to analyse learning development, continuous formative assessment becomes possible in ways that were previously unimaginable. ‘Assessment for Learning (AfL)’ and ‘Learning Oriented Assessment (LOA)’ are two terms used by Cambridge English to refer to this potential, which is also described by Luckin (who is one of the authors of the Pearson paper mentioned earlier). In practical terms, albeit in a still very limited way, this can be seen in the CUP course ‘Empower’, which combines CUP course content with validated LOA from Cambridge Assessment English.

Will this reboot or revolutionise language teaching? Probably not, and here’s why. AIED systems need to operate with what is called a ‘domain knowledge model’. This specifies what is to be learnt and includes an analysis of the steps that must be taken to reach that learning goal. Some subjects (especially STEM subjects) ‘lend themselves much more readily to having their domains represented in ways that can be automatically reasoned about’ (du Boulay, D. et al., 2018. ‘Artificial intelligences and big data technologies to close the achievement gap’ in Luckin, R. (ed.) Enhancing Learning and Teaching with Technology. UCL Institute of Education Press, p.258). This is why most AIED systems have been built to teach these areas. Languages are rather different. We simply do not have a domain knowledge model, except perhaps for the very lowest levels of language learning (and even that is highly questionable). Language learning is probably not, or not primarily, about acquiring subject knowledge. Debate still rages about the relationship between explicit language knowledge and language competence. AI-driven formative assessment will likely focus most on explicit language knowledge, as does most current language teaching. This will not reboot or revolutionise anything. It will more likely reinforce what is already happening: a model of language learning that assumes there is a strong interface between explicit knowledge and language competence. It is not a model shared by most SLA researchers.

So, one thing that AI can do (and is doing) for language learning is to improve the algorithms that determine the way that grammar and vocabulary are presented to individual learners in online programs. AI-optimised delivery of ‘English Grammar in Use’ may lead to some learning gains, but they are unlikely to be significant. It is not, in any case, what language learners need.

AI, Donald Clark suggests, can offer personalised learning. Precisely what kind of personalised learning this might be, and whether or not it is a good thing, remains unclear. A 2015 report funded by the Gates Foundation found that we currently lack evidence about the effectiveness of personalised learning. We do not know which aspects of personalised learning (learner autonomy, individualised learning pathways and instructional approaches, etc.), or which combinations of these, will lead to gains in language learning. The complexity of the issues means that we may never have a satisfactory explanation. You can read my own exploration of the problems of personalised learning starting here.

What’s left? Clark suggests that chatbots are one area with ‘huge potential’. I beg to differ, and I explained my reasons eighteen months ago. Chatbots work fine in very specific domains. As Clark says, they can be used for ‘controlled practice’, but ‘controlled practice’ means the practice of specific language knowledge – the practice of limited conversational routines, for example. It could certainly be useful, but more than that? Taking things a stage further, Clark then suggests more holistic speaking and listening practice with Amazon Echo, Alexa or Google Home. If and when the day comes that we have general, as opposed to domain-specific, AI, chatting with one of these tools would open up vast new possibilities. Unfortunately, general AI does not exist, and until then Alexa and co will remain a poor substitute for human-human interaction (which is readily available online, anyway). Incidentally, AI could be used to form groups of online language learners to carry out communicative tasks – ‘the aim might be to design a grouping of students all at a similar cognitive level and of similar interests, or one where the participants bring different but complementary knowledge and skills’ (Luckin, R., Holmes, W., Griffiths, M. & Forcier, L.B. 2016. Intelligence Unleashed: An argument for AI in Education. London: Pearson, p.26).

Predictions about the impact of technology on education have a tendency to be made by people with a vested interest in the technologies. Edison was a businessman who had invested heavily in motion pictures. Donald Clark is an edtech entrepreneur whose company, Wildfire, uses AI in online learning programs. Stephen Heppell is executive chairman of LP+, who are currently developing a Chinese language learning community for 20 million Chinese school students. The reporting of AIED is almost invariably in websites that are paid for, in one way or another, by edtech companies. Predictions need, therefore, to be treated sceptically. Indeed, the safest prediction we can make about hyped educational technologies is that inflated expectations will be followed by disillusionment, before the technology finds a smaller niche.

 

Learners are different, the argument goes, so learning paths will be different, too. And, the argument continues, if learners benefit from individualized learning pathways, then instruction should be based on an analysis of the optimal learning pathways for individuals and tailored to match them. In previous posts, I have questioned whether such an analysis is meaningful or reliable, and whether the tailoring leads to any measurable learning gains. In this post, I want to focus primarily on the analysis of learner differences.

Family/social background and previous educational experiences are obvious ways in which learners differ when they embark on any course of study. The way these impact on educational success is well researched and well established. Despite this research, there are some who disagree. For example, Dominic Cummings (former adviser to Michael Gove when he was UK Education minister, and former campaign director of the pro-Brexit Vote Leave group) has argued that genetic differences, especially in intelligence, account for more than 50% of the differences in educational achievement.

Cummings got his ideas from Robert Plomin, one of the world’s most cited living psychologists. Plomin, in a recent paper in Nature, ‘The New Genetics of Intelligence’, argues that ‘intelligence is highly heritable and predicts important educational, occupational and health outcomes better than any other trait’. In an earlier paper, ‘Genetics affects choice of academic subjects as well as achievement’, Plomin and his co-authors argued that ‘choosing to do A-levels and the choice of subjects show substantial genetic influence, as does performance after two years studying the chosen subjects’. Environment matters, says Plomin, but it’s possible that genes matter more.

All of which leads us to the field known as ‘educational genomics’. In an article of breathless enthusiasm entitled ‘How genetics could help future learners unlock hidden potential’, University of Sussex psychologist Darya Gaysina describes educational genomics as the use of ‘detailed information about the human genome – DNA variants – to identify their contribution to particular traits that are related to education […] it is thought that one day, educational genomics could enable educational organisations to create tailor-made curriculum programmes based on a pupil’s DNA profile’. It could, she writes, ‘enable schools to accommodate a variety of different learning styles – both well-worn and modern – suited to the individual needs of the learner [and] help society to take a decisive step towards the creation of an education system that plays on the advantages of genetic background. Rather than the current system, that penalises those individuals who do not fit the educational mould’.

The goal is not just personalized learning. It is ‘Personalized Precision Education’ where researchers ‘look for patterns in huge numbers of genetic factors that might explain behaviors and achievements in individuals. It also focuses on the ways that individuals’ genotypes and environments interact, or how other “epigenetic” factors impact on whether and how genes become active’. This will require huge amounts of ‘data gathering from learners and complex analysis to identify patterns across psychological, neural and genetic datasets’. Why not, suggests Darya Gaysina, use the same massive databases that are being used to identify health risks and to develop approaches to preventative medicine?

If I had a spare 100 Euros, I (or you) could buy Darya Gaysina’s book, ‘Behavioural Genetics for Education’ (Palgrave Macmillan, 2016) and, no doubt, I’d understand the science better as a result. There is much about the science that seems problematic, to say the least (e.g. the definition and measurement of intelligence, and the lack of reference to other research suggesting that academic success is linked to non-genetic factors), but it isn’t the science that concerns me most. It’s the ethics. I don’t share Gaysina’s optimism that ‘every child in the future could be given the opportunity to achieve their maximum potential’. Her utopianism is my fear of Gattaca-like dystopias. IQ testing, in its early days, promised something similarly wonderful, but look what became of that. When reporting of educational genomics already uses terms like ‘dictate’, you have to fear for the future of Gaysina’s brave new world.

Educational genomics could equally well lead to expectations of ‘certain levels of achievement from certain groups of children – perhaps from different socioeconomic or ethnic groups’, and you can be pretty sure it will lead to ‘companies with the means to assess students’ genetic identities [seeking] to create new marketplaces of products to sell to schools, educators and parents’. The very fact that people like Dominic Cummings (described by David Cameron as a ‘career psychopath’) have opted to jump on this particular bandwagon is, for me, more than enough cause for concern.

Underlying my doubts about educational genomics is a much broader concern. It’s the apparent belief of educational genomicists that science can provide technical solutions to educational problems. It’s called ‘solutionism’ and it doesn’t have a pretty history.

On Sunday 17 June I’ll be giving a talk at a conference in London, organised by Regent’s University and Trinity College London. Further information about the conference can be found here.

The talk is entitled ‘Personalized learning: the past, present and future of ELT’ and draws heavily on earlier posts on this blog. For anyone attending the talk, here are links to the references I cite along with further reading.

  1. Personalized learning – attempts to define it and its links to technology: see Personalized learning: Hydra and the power of ambiguity and Evaluating personalization
  2. Goal-setting and standardization: see Personalization and goal-setting
  3. Self-pacing and programmed instruction: see Self-paced language learning
  4. The promotion of personalized learning in ELT: see Personalized learning at IATEFL

 

 

Knowble, claims its developers, is a browser extension that will improve English vocabulary and reading comprehension. It also describes itself as an ‘adaptive language learning solution for publishers’. It’s currently in beta and free, and it sounds right up my street, so I decided to give it a run.


Users are asked to specify a first language (I chose French) and a level (A1 to C2): I chose B1, but this did not seem to impact on anything that subsequently happened. They are then offered a menu of about 30 up-to-date news items, grouped into 5 categories (world, science, business, sport, entertainment). Clicking on an article takes you to it on the source website. There’s a good selection, including USA Today, CNN, Reuters, the Independent and the Torygraph from Britain, the Times of India, the Independent from Ireland and the Star from Canada. A large number of words are underlined: a single click brings up a translation in the extension box. Double-clicking on any other word will also bring up a translation. Apart from that, there is one very short exercise (which has presumably been automatically generated) for each article.

For my trial run, I picked three articles: ‘Woman asks firefighters to help ‘stoned’ raccoon’ (from the BBC, 240 words), ‘Plastic straw and cotton bud ban proposed’ (also from the BBC, 823 words) and ‘London’s first housing market slump since 2009 weighs on UK price growth’ (from the Torygraph, 471 words).

Translations

Research suggests that the use of translations, rather than definitions, may lead to greater learning gains, but the problem with Knowble is that it relies entirely on Google Translate. Google Translate is fast improving, but it is not yet reliable. Take the first sentence of the ‘plastic straw and cotton bud’ article, for example: Google’s rendering is not bad, but it gets the word ‘bid’ completely wrong, translating it as ‘offre’ (= offer), where ‘tentative’ (= attempt) is needed. So, we can still expect a few problems with Google Translate …

One of the reasons that Google Translate has improved is that it no longer treats individual words as individual lexical items. It analyses groups of words and translates chunks or phrases (see, for example, the way it translates ‘as part of’). It doesn’t do word-for-word translation. Knowble, however, have set their software to ask Google for translations of each word as individual items, so the phrase ‘as part of’ is translated ‘comme’ + ‘partie’ + ‘de’. Whilst this example is comprehensible, problems arise very quickly. ‘Cotton buds’ (‘cotons-tiges’) become ‘coton’ + ‘bourgeon’ (= botanical shoots of cotton). Phrases like ‘in time’, ‘run into’, ‘sleep it off’, ‘take its course’, ‘fire station’ or ‘going on’ (all from the stoned raccoon text) all cause problems. In addition, Knowble are not using any parsing tools, so the system does not identify parts of speech, and further translation errors inevitably appear. In the short article of 240 words, about 10% are wrongly translated. Knowble claim to be using NLP tools, but there’s no sign of it here. They’re just using Google Translate rather badly.
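The difference is easy to see in code. Below is a toy sketch (my own illustration with an invented mini-dictionary, not the real Google Translate API and not Knowble’s actual code) of why one-lookup-per-word translation mangles multi-word units, while a longest-match-first lookup keeps them intact:

```python
# Invented toy dictionaries for illustration only.
PHRASES = {"as part of": "dans le cadre de", "cotton buds": "cotons-tiges"}
WORDS = {"as": "comme", "part": "partie", "of": "de",
         "cotton": "coton", "buds": "bourgeons"}

def translate_word_by_word(text: str) -> str:
    """What Knowble appears to do: one lookup per token."""
    return " ".join(WORDS.get(w, w) for w in text.split())

def translate_with_phrases(text: str) -> str:
    """Greedy longest-match first: try known multi-word units before single words."""
    tokens = text.split()
    out, i = [], 0
    while i < len(tokens):
        for j in range(len(tokens), i, -1):  # longest candidate chunk first
            chunk = " ".join(tokens[i:j])
            if chunk in PHRASES:
                out.append(PHRASES[chunk])
                i = j
                break
        else:  # no phrase matched: fall back to a single-word lookup
            out.append(WORDS.get(tokens[i], tokens[i]))
            i += 1
    return " ".join(out)

print(translate_word_by_word("as part of"))   # → comme partie de
print(translate_with_phrases("as part of"))   # → dans le cadre de
```

Phrase-aware lookup of this kind is, in essence, what statistical and neural translation systems do internally; querying the service token by token throws that capability away.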

Highlighted items

NLP tools of some kind are presumably being used to select the words that get underlined. Exactly how this works is unclear. On the whole, it seems that very high frequency words are ignored and that lower frequency words are underlined. Here, for example, is the list of words that were underlined in the stoned raccoon text. I’ve compared them with (1) the CEFR levels for these words in the English Profile Text Inspector, and (2) the frequency information from the Macmillan dictionary (more stars = more frequent). In the other articles, some extremely high frequency words were underlined (e.g. price, cost, year) while much lower frequency items were not.

It is, of course, extremely difficult to predict which items of vocabulary a learner will know, even if we have a fairly accurate idea of their level. Personal interests play a significant part, so, for example, some people at even a low level will have no problem with ‘cannabis’, ‘stoned’ and ‘high’, even if these are low frequency words. First language, however, is a reasonably reliable indicator, as cognates can be expected to be easy. A French speaker will have no problem with ‘appreciate’, ‘unique’ and ‘symptom’. A recommendation engine that can meaningfully personalize vocabulary suggestions will, at the very least, need to take cognates into account.
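A selector along these lines is not hard to sketch. What follows is my own minimal illustration (invented frequency scores and a crude string-similarity test for cognates, not Knowble’s algorithm): underline a word only if it is both infrequent and not a transparent cognate in the learner’s first language.

```python
from difflib import SequenceMatcher

# Hypothetical data: Zipf-style frequency scores and French translations.
FREQ = {"price": 6.1, "year": 6.5, "cannabis": 3.8, "appreciate": 4.4,
        "symptom": 4.0, "raccoon": 3.2}
FR = {"appreciate": "apprécier", "symptom": "symptôme", "cannabis": "cannabis",
      "raccoon": "raton laveur", "price": "prix", "year": "année"}

def is_cognate(en: str, l1: str, threshold: float = 0.7) -> bool:
    """Treat a pair as cognates when their character overlap is high."""
    return SequenceMatcher(None, en.lower(), l1.lower()).ratio() >= threshold

def words_to_underline(words, max_freq=5.0):
    """Underline only infrequent words that are not transparent cognates."""
    return [w for w in words
            if FREQ.get(w, 0) < max_freq and not is_cognate(w, FR.get(w, ""))]

print(words_to_underline(["price", "year", "cannabis", "appreciate", "raccoon"]))
# → ['raccoon']: price/year are too frequent; cannabis/appreciate are cognates
```

Real systems would want a proper frequency list and a better cognate measure (orthographic similarity misses pairs like ‘high’/‘défoncé’), but even this crude filter avoids underlining ‘appreciate’ for a French speaker.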

In short, the selection and underlining of vocabulary items, as it currently stands in Knowble, appears to serve no clear or useful function.

Vocabulary learning

Knowble offers a very short exercise for each article. They are of three types: word completion, dictation and drag and drop (see the example). The rationale for the selection of the target items is unclear, but, in any case, these exercises are tokenistic in the extreme and are unlikely to lead to any significant learning gains. More valuable would be the possibility of exporting items into a spaced repetition flash card system.
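Such an export would be trivial to implement. Here is a hypothetical sketch (the file name and item structure are my own invention, not a Knowble feature) that writes looked-up items to a tab-separated file of the kind Anki can import, with the word and its context sentence on the front and the translation on the back:

```python
import csv

def export_flashcards(items, path):
    """Write (word, translation, example_sentence) tuples as Anki-importable
    tab-separated cards: front field, tab, back field, one card per line."""
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f, delimiter="\t")
        for word, translation, sentence in items:
            # Anki allows HTML in fields, so the context sentence goes in italics.
            writer.writerow([f"{word}<br><i>{sentence}</i>", translation])

export_flashcards([("bid", "tentative", "in a bid to cut pollution")],
                  "knowble_export.txt")
```

Even this much would give learners a route from incidental look-ups to deliberate, spaced review, which the one-off exercises do not.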

The claim that Knowble’s ‘learning effect is proven scientifically’ seems to me to be without any foundation. If there has been any proper research, it’s not signposted anywhere. Sure, reading lots of news articles (with a look-up function – if it works reliably) can only be beneficial for language learners, but they can do that with any decent dictionary running in the background.

Similar in many ways to en.news, which I reviewed in my last post, Knowble is another example of a technology-driven product that shows little understanding of language learning.