Archive for the ‘platforms’ Category

When the internet arrived on our desktops in the 1990s, language teachers found themselves able to access huge amounts of authentic texts of all kinds. It was a true game-changer. But when it came to dedicated ELT websites, the pickings were much slimmer. There was a very small number of good ELT resource sites (onestopenglish stood out from the crowd), but more ubiquitous and more enduring were the sites offering downloadable material shared by teachers. One of these, ESLprintables.com, currently has 1,082,522 registered users, compared to the 700,000+ of onestopenglish.

The resources on offer at sites such as these range from texts and scripted dialogues with accompanying comprehension questions, to grammar explanations, gapfills and vocabulary-matching tasks, to lists of discussion prompts. Almost all of it is unremittingly awful, a terrible waste of the internet’s potential.

Ten years later, interactive online possibilities began to appear. Before long, language teachers found themselves able to use things like blogs, wikis and Google Docs. It was another true game-changer. But when it came to dedicated ELT tools, the pickings were much slimmer. There is some useful stuff out there (flashcard apps, for example), but more ubiquitous are interactive versions of the downloadable dross that already existed. Learning platforms, which have such rich possibilities, are mostly loaded with gapfills, drag-and-drop, multiple choice, and so on. Again, it seems such a terrible waste of the technology’s potential. And all of this runs counter to what we know about how people learn another language. It’s as if decades of research into second language acquisition had never taken place.

And now we have AI and large language models like GPT. The possibilities are rich, and quite a few people, like Sam Gravell and Svetlana Kandybovich, have already started suggesting interesting and creative ways of using the technology for language teaching. Sadly, though, technology has a tendency to bring out the worst in approaches to language teaching, since there’s always a bandwagon to be jumped on. Welcome to Twee, ‘A.I. powered tools for English teachers’, where you can generate your own dross in a matter of seconds. You can generate texts and dialogues, pitched at one of three levels, with or without target vocabulary, and produce comprehension questions (open questions, T / F, or M / C), exercises where vocabulary has to be matched to definitions, word-formation exercises and gapfills. The name of the site has been carefully chosen (the Cambridge dictionary defines ‘twee’ as ‘artificially attractive’).

I decided to give it a try. Twee uses the same technology as ChatGPT, and the results were unsurprising. I won’t comment in any detail on the intrinsic interest or the factual accuracy of the texts: they are what you might expect if you have experimented with ChatGPT. For the same reason, I won’t go into details about the credibility or naturalness of the dialogues. Similarly, the ability of Twee to gauge the appropriacy of texts for particular levels is poor: it hasn’t been trained on a tagged learner corpus. In any case, having only three level bands (A1/A2, B1/B2 and C1/C2) means that levelling is far too approximate. Suffice it to say that the comprehension questions, vocabulary-item selection and vocabulary practice activities would all require very heavy editing.

Twee is still in beta, and, no doubt, improvements will come as the large language models on which it draws get bigger and better. Bilingual functionality is a necessary addition, and is doable. More reliable level-matching would be nice, but it’s a huge technological challenge, besides being theoretically problematic. But bigger problems remain and these have nothing to do with technology. Take a look at the examples below of how Twee suggests its reading comprehension tasks (open questions, M / C, T / F) could be used with some Beatles songs.

Is there any point getting learners to look at a ‘dialogue’ (on the topic of yellow submarines) like the one below? Is there any point getting learners to write essays using prompts such as those below?

What possible learning value could tasks such as these have? Is there any credible theory of language learning behind any of this, or is it just stuff that would while away some classroom time? AI meets ESLprintables – what a waste of the technology’s potential!

Edtech vendors like to describe their products as ‘solutions’, but the educational challenges, which these products are supposedly solutions to, often remain unexamined. Poor use of technology can exacerbate these challenges by making inappropriate learning materials more easily available.

Last September, Cambridge published a ‘Sustainability Framework for ELT’, which attempts to bring together environmental, social and economic sustainability. It’s a kind of 21st century skills framework and is designed to help teachers ‘to integrate sustainability skills development’ into their lessons. Among the sub-skills that are listed, a handful grabbed my attention:

  • Identifying and understanding obstacles to sustainability
  • Broadening discussion and including underrepresented voices
  • Understanding observable and hidden consequences
  • Critically evaluating sustainability claims
  • Understanding the bigger picture

Hoping to brush up my skills in these areas, I decided to take a look at the upcoming BETT show in London, which describes itself as ‘the biggest Education Technology exhibition in the world’. BETT and its parent company, Hyve, ‘are committed to redefining sustainability within the event industry and within education’. They are doing this by reducing their ‘onsite printing and collateral’. (‘Event collateral’ is an interesting event-industry term that refers to all the crap that is put into delegate bags, intended to ‘enhance their experience of the event’.) BETT and Hyve are encouraging all sponsors to go paperless, too, ‘switching from seat-drop collateral to QR codes’, and delegate bags will no longer be offered. They are partnering with various charities to donate ‘surplus food and furniture’ to local community projects, they are donating to other food charities that support families in need, and they are recycling all of the aisle banners into tote bags. Keynote speakers will include people like Sally Uren, CEO of ‘Forum for the Future’, who will talk about ‘Transforming carbon neutral education for a just and regenerative future’.

BETT and Hyve want us to take their corporate and social responsibility very seriously. All of these initiatives are very commendable, even though I wouldn’t go so far as to say that they will redefine sustainability within the event industry and education. But there is a problem – and it’s not that the world is already over-saturated with recycled tote bags. As the biggest jamboree of this kind in the world, the show attracts over 600 vendors and over 30,000 visitors, with over 120 countries represented. Quite apart from all the collateral and surplus furniture, the carbon and material footprint of the event cannot be negligible. Think of all those start-up solution-providers flying and driving into town, Airbnb-ing for the duration, and Ubering around town after hours, for a start.

But this is not really the problem, either. Much as the event likes to talk about ‘driving impact and improving outcomes for teachers and learners’, the clear and only purpose of the event is to sell stuff. It is to enable the investors in the 600+ edtech solution-providers in the exhibition area to move towards making a return on their investment. If we wanted to talk seriously about sustainability, the question that needs to be asked is: to what extent does all the hardware and software on sale contribute in any positive and sustainable way to education? Is there any meaningful social benefit to be derived from all this hardware and software, or is it all primarily just a part of a speculative, financial game? Is the corporate social responsibility of BETT / Hyve a form of green-washing to disguise the stimulation of more production and consumption? Is it all just a kind of ‘environmentalism of the rich’ (Dauvergne, 2016)?

Edtech is not the most pressing of environmental problems – indeed, there are examples of edtech that are likely more sustainable than the non-tech alternatives – but the sustainability question remains. There are at least four environmental costs to edtech:

  • The energy-greedy data infrastructures that lie behind digital transactions
  • The raw ingredients of digital devices
  • The environmentally destructive manufacture and production of digital devices
  • The environmental cost of dismantling and disposing of digital hardware (Selwyn, 2018)

Some forms of edtech are more environmentally costly than others. First, we might consider the material costs. Going back to pre-internet days, think of the countless tonnes of audio cassettes, VCR tapes, DVDs and CD-ROMs. Think of the discarded playback devices, language laboratories and IWBs. None of these are easily recyclable and most have ended up in landfill, mostly in countries that never used these products. These days the hardware that is used for edtech is more often a device that serves other non-educational purposes, but the planned obsolescence of our phones, tablets and laptops is a huge problem for sustainability.

More important now are probably the energy costs of edtech. Audio and video streaming might seem more environmentally friendly than CDs and DVDs but, depending on how often the CD or DVD is used, the energy cost of streaming (especially high-quality video) can be much higher than using the physical format. AI ups the ante significantly (Brevini, 2022). Five years ago, training a single large NLP model could ‘emit more than 284 tonnes of carbon dioxide equivalent’ (Strubell et al., 2019). With exponentially greater volumes of data now being used, the environmental cost is much, much higher. Whilst VR vendors will tout the environmental benefits of cutting down on travel, getting learners together in a physical room may well have a much lower carbon footprint than meeting in the Metaverse.
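
Since claims in this area are often waved through without numbers, here is a minimal back-of-the-envelope sketch in Python. Every figure in it is a hypothetical placeholder, not a measured value; the only point it makes is that the disc-versus-streaming comparison hinges on how often the recording is actually played.

    # Hypothetical break-even sketch: a one-off energy cost for a physical disc
    # versus a recurring energy cost for each hour streamed.
    # All numbers are illustrative placeholders, not measured values.
    DISC_ENERGY_KWH = 2.0          # assumed: manufacture + distribution of one DVD
    STREAMING_KWH_PER_HOUR = 0.08  # assumed: device + network + data centre, HD video
    FILM_LENGTH_HOURS = 1.5

    def energy_costs(viewings: int) -> tuple:
        """Return (disc_kwh, streaming_kwh) for a given number of viewings."""
        disc = DISC_ENERGY_KWH  # the disc is produced once, however often it is played
        streaming = viewings * FILM_LENGTH_HOURS * STREAMING_KWH_PER_HOUR
        return disc, streaming

    for viewings in (1, 5, 10, 20, 50):
        disc, stream = energy_costs(viewings)
        winner = "disc" if disc < stream else "streaming"
        print(f"{viewings:>2} viewings: disc {disc:.1f} kWh, streaming {stream:.1f} kWh ({winner} uses less)")

On these made-up numbers, the disc only comes out ahead after about seventeen viewings; change the assumptions and the conclusion flips, which is why sweeping claims in either direction deserve suspicion.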

When doing the calculus of edtech, we need to evaluate the use-value of the technology. Does the tech actually have any clear educational (or other social) benefit, or is its value primarily in terms of its exchange-value?

To illustrate the difference between use-value and exchange-value, I’d like to return again to the beginnings of modern edtech in ELT. As the global market for ELT materials mushroomed in the 1990s, coursebook publishers realised that, for a relatively small investment, they could boost their sales by bringing out ‘new editions’ of best-selling titles. This meant a new cover, replacing a few texts and topics, making minor modifications to other content, and, crucially, adding extra features. As the years went by, these extra features became digital: CD-ROMs, DVDs, online workbooks and downloadables of various kinds. The publishers knew that sales depended on the existence of these shiny new things, even if many buyers made minimal or zero use of them. But the features gave the marketing departments and sales reps a pitch, and they justified an increase in unit price. Did these enhanced coursebooks actually represent any increase in use-value? Did learners make better or faster progress in English as a result? On the whole, the answer has to be an unsurprising and resounding no. We should not be surprised if hundreds of megabytes of drag-and-drop grammar practice fail to have much positive impact on learning outcomes. From the start, it was the impact on the exchange-value (sales and profits) of these products that was the driving force.

Edtech vendors have always wanted to position themselves to potential buyers as ‘solution providers’, trumpeting the use-value of what they are selling. When it comes to attracting investors, it’s a different story, one that is all about minimum viable products, scalability and return on investment.

There are plenty of technologies that have undisputed educational use-value in language learning and teaching. Google Docs, Word, Zoom and YouTube come immediately to mind. Not coincidentally, they are not technologies that were designed for educational purposes. But when you look at specifically educational technology, it becomes much harder (though not impossible) to identify unambiguous gains in use-value. Most commonly, the technology holds out the promise of improved learning, but evidence that it has actually achieved this is extremely rare. Sure, a bells-and-whistles LMS offers exciting possibilities for flipped or blended learning, but research that demonstrates the effectiveness of these approaches in the real world is sadly lacking. Sure, VR might seem to offer a glimpse of motivated learners interacting meaningfully in the Metaverse, but I wouldn’t advise you to bet on it.

And betting is what most edtech is all about. An eye-watering $16.1 billion of venture capital was invested in global edtech in 2020. What matters is not that any of these products or services have any use-value, but that they are perceived to have one. Central to this investment is the further commercialisation and privatisation of education (Williamson & Hogan, 2020). BETT is a part of this.

Returning to the development of my sustainability skills, I still need to consider the bigger picture. I’ve suggested that it is difficult to separate edtech from a consideration of capitalism, a system that needs to manufacture consumption and to expand production and markets in order to survive (Dauvergne, 2016: 48). Economic growth is the sine qua non of this system, and it is this that makes the British government (and others) so keen on BETT. Education, and edtech in particular, are rapidly growing markets. But growth is only sustainable, in environmental terms, if it is premised on things that we actually need, rather than things which are less necessary and ecologically destructive (Hickel, 2020). At the very least, as Selwyn (2021) noted, we need more diverse thinking: ‘What if environmental instability cannot be “solved” simply through the expanded application of digital technologies but is actually exacerbated through increased technology use?’

References

Brevini, B. (2022) Is AI Good for the Planet? Cambridge: Polity Press

Dauvergne, P. (2016) Environmentalism of the Rich. Cambridge, Mass.: MIT Press

Hickel, J. (2020) Less Is More. London: William Heinemann

Selwyn, N. (2018) EdTech is killing us all: facing up to the environmental consequences of digital education. EduResearch Matters 22 October, 2018. https://www.aare.edu.au/blog/?p=3293

Selwyn, N. (2021) Ed-Tech Within Limits: Anticipating educational technology in times of environmental crisis. E-Learning and Digital Media, 18 (5): 496 – 510. https://journals.sagepub.com/doi/pdf/10.1177/20427530211022951

Strubell, E., Ganesh, A. & McCallum, A. (2019) Energy and Policy Considerations for Deep Learning in NLP. arXiv preprint: https://arxiv.org/pdf/1906.02243.pdf

Williamson, B. & Hogan, A. (2020) Commercialisation and privatisation in / of education in the context of Covid-19. Education International

The paragraph above was written by an AI-powered text generator called neuroflash (https://app.neuro-flash.com/home), which I told to produce a text on the topic ‘AI and education’. As texts on this topic go, it is both remarkable (in that it was not written by a human) and entirely unremarkable (in that it is practically indistinguishable from hundreds of human-written texts on the same subject). Neuroflash uses a neural network technology called GPT-3 – ‘a large language model’ – and ‘one of the most interesting and important AI systems ever produced’ (Chalmers, 2020). Basically, it generates text by predicting likely sequences of words, based on the patterns in the enormous quantities of text it was trained on. The nature of the paragraph above tells you all you need to know about the kinds of content that are usually found in texts about AI and education.
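
If ‘predicting sequences of words’ sounds mysterious, the toy sketch below may help. It is a bigram model trained on three sentences: for each word, it records what tends to follow, then samples its way forward. GPT-3 does this with a neural network over hundreds of billions of words rather than a frequency table, so this illustrates the principle only, not the engineering.

    import random
    from collections import defaultdict

    # A toy bigram model. Real large language models use neural networks and
    # much longer contexts, but the core task is the same: predict a plausible
    # next word, append it, repeat.
    corpus = ("ai will transform education . "
              "ai will personalise learning . "
              "education will benefit from ai .").split()

    follows = defaultdict(list)
    for current, nxt in zip(corpus, corpus[1:]):
        follows[current].append(nxt)  # duplicates preserve observed frequencies

    def generate(word: str, length: int = 8) -> str:
        output = [word]
        for _ in range(length):
            if word not in follows:
                break
            word = random.choice(follows[word])  # sample a next word by frequency
            output.append(word)
        return " ".join(output)

    print(generate("ai"))  # e.g. "ai will personalise learning . education will benefit from"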

Not dissimilar to the neuroflash paragraph, educational commentary on uses of AI is characterised by (1) descriptions of AI tools already in use (e.g. speech recognition and machine translation) and (2) vague predictions which invariably refer to ‘the promise of personalised learning, adjusting what we give learners according to what they need to learn and keeping them motivated by giving them content that is of interest to them’ (Hughes, 2022). The question of what precisely will be personalised goes unanswered: providing learners with optimal sets of resources (but which ones?), providing counselling services, recommendations or feedback for learners and teachers (but of what kind?) (Luckin, 2022). Nearly four years ago, I wrote (https://adaptivelearninginelt.wordpress.com/2018/08/13/ai-and-language-teaching/) about the reasons why these questions remain unanswered. The short answer is that AI in language learning requires a ‘domain knowledge model’. This specifies what is to be learnt and includes an analysis of the steps that must be taken to reach that learning goal. Such a model is lacking in SLA, or, at least, there is no general agreement on what it is. Worse, the models that are most commonly adopted in AI-driven programs (e.g. the deliberate learning of discrete items of grammar and vocabulary) are not supported by either current theory or research (see, for example, VanPatten & Smith, 2022).
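
To make the idea of a ‘domain knowledge model’ concrete, here is a minimal sketch. The fragment of ‘grammar’ in it is invented and deliberately naive – exactly the kind of discrete-item decomposition that, as noted above, current SLA theory and research do not support. The point is simply that an AI tutor needs some such machine-readable map of items and prerequisites before it can sequence anything.

    from graphlib import TopologicalSorter

    # A hypothetical domain knowledge model: each item maps to the items
    # assumed to be its prerequisites. Both the items and the dependencies
    # are invented for illustration, not an endorsement of this view of SLA.
    domain_model = {
        "present simple": set(),
        "present continuous": {"present simple"},
        "past simple": {"present simple"},
        "present perfect": {"past simple", "present continuous"},
    }

    # An AI-driven course built on such a model would sequence items like this:
    for item in TopologicalSorter(domain_model).static_order():
        print("teach:", item)

The software side is trivial; the hard part is that nobody can agree on what the nodes and edges should be for language learning, which is precisely the problem.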

In 2021, the IATEFL Learning Technologies SIG organised an event dedicated to AI in education. Unsurprisingly, there was a fair amount of input on AI in assessment, but my interest is in how AI might revolutionize how we learn and teach, not how we assess. What concrete examples did speakers provide?

Rose Luckin, the best-known British expert on AI in education, kicked things off by mentioning three tools. One of these, Carnegie Learning, is a digital language course that looks very much like any of the ELT courses on offer from the big publishers – a fully blendable, multimedia (e.g. flashcards and videos) synthetic syllabus. This ‘blended learning solution’ is personalizable, since ‘no two students learn alike’, and, it claims, will develop a ‘lifelong love of language’. It appears to be premised on the ideas that language learning is a matter of optimizing the delivery of ‘content’, that this content consists primarily of discrete items, and that input can be equated with uptake. Been there, done that.

A second was Alelo Enskill (https://www.alelo.com/about-us/), a chatbot / avatar roleplay program, first developed by the US military to teach Iraqi Arabic and aspects of Iraqi culture to Marines. I looked at the limitations of chatbot technology for language learning here: https://adaptivelearninginelt.wordpress.com/2016/12/01/chatbots/ . The third tool mentioned by Luckin was Duolingo. Enough said.

Another speaker at this event was the founder and CEO of Edugo.AI (https://www.edugo.ai/), an AI-powered LMS which uses GPT-3. It allows schools to ‘create and upload on the platform any kind of language material (audio, video, text…). Our AI algorithms process and convert it in gamified exercises, which engage different parts of the brain, and gets students eager to practice’. I wonder whether this speaker knows anything about gamification (for a quick read, I’d recommend Paul Driver (2012)) or neuroscience. What, for that matter, does he know about language learning? Apparently, ‘language is not just about words, language is about sentences’ (Tomasello, 2022). Hmm, this doesn’t inspire confidence.

When you look at current uses of AI in language learning, there is very little (outside of testing, translation and speech ↔ text applications) that could justify enthusiastic claims that AI has any great educational potential. Skepticism seems to me a more reasonable and scientific response: de omnibus dubitandum.

Education is not the only field where AI has been talked up. When Covid hit us, AI was seen as the game-changing technology. It ‘could be deployed to make predictions, enhance efficiencies, and free up staff through automation; it could help rapidly process vast amounts of information and make lifesaving decisions’ (Chakravorti, 2022). The contribution of AI to the development of vaccines has been huge, but its role in diagnosing and triaging patients has been another matter altogether. Hundreds of predictive tools were developed: ‘none of them made a real difference, and some were potentially harmful’ (Heaven, 2021). Expectations were unrealistic and led to the deployment of tools before they had been properly trialled. Thirty months down the line, a much more sober understanding of the potential of AI has emerged. Here, then, are the main lessons that have been learnt (I draw particularly on Engler, 2020, and Chakravorti, 2022), lessons that are also relevant to education and language learning.

  • Anticipate what could go wrong before anticipating what might go right. Engler (2020) writes that ‘a poorly kept secret of AI practitioners is that 96% accuracy is suspiciously high for any machine learning problem’ (see the sketch after this list for why). In language learning, it is highly unlikely that personalized recommendations will ever reach anything even approaching this level of reliability. What are the implications for individual learners whose learning is inappropriately personalised?
  • We also know that a significant problem with AI systems is bias (O’Neil, 2016). There is a well-documented history of discriminatory outcomes because of people’s race, gender, social class or disability profile. Bias needs to be addressed proactively, not reactively.
  • Acknowledge from the outset that, for AI to work, huge amounts of data related to prior outcomes will be needed. In the cases of both Covid and language learning, much of this data will be personal. This raises immediate questions of privacy and consent, especially for learners who are children. Don’t minimise this issue.
  • Common by-products of AI use include insecure data storage, digital surveillance and consolidation of market power (Engler, 2020). Such issues are so pervasive that they should be assumed to be a problem, and they need to be considered when evaluating the potential of any piece of AI edtech.
  • Find out what the subject matter experts have to say: ‘AI is only helpful when applied judiciously by subject-matter experts—people with long-standing experience with the problem that they are trying to solve […] Effectively predicting a badly defined problem is worse than doing nothing at all’ (Engler, 2020). It may be that the problem the AI is trying to help solve is the wrong problem and not worth solving at all (e.g. more efficient ‘delivery’ of explicit grammatical knowledge, gamifying gapfills).
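
Here is the sketch promised in the first bullet point. The numbers are invented, but they show why Engler calls 96% accuracy suspicious: a ‘model’ that always predicts the majority outcome hits that figure while helping nobody.

    # Invented illustration of why raw accuracy can mislead.
    # Suppose 96% of learners do fine with the default material and 4% need
    # a different pathway. A "recommender" that always predicts the default
    # scores 96% accuracy while missing every learner it was meant to help.
    labels = ["default"] * 96 + ["needs_support"] * 4
    predictions = ["default"] * len(labels)  # the trivial majority-class predictor

    accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)
    identified = sum(p == y == "needs_support" for p, y in zip(predictions, labels))

    print(f"accuracy: {accuracy:.0%}")                        # 96%
    print(f"struggling learners identified: {identified}/4")  # 0/4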

To bring this post to an end, I returned to the AI / GPT-3 driven neuroflash program. This time, I asked it to write a longer piece entitled ‘The dangers of artificial intelligence in education: what could go wrong?’. Here’s what it came up with:

There are a few potential dangers of artificial intelligence in education. One is that AI could be used to replace human teachers. This could lead to a decline in the quality of education, as machines are not yet capable of replicating the full range of human emotions and social interactions that are necessary for teaching. Additionally, AI could be used to create biased or “racist” educational materials, as has already happened with some commercial software products. Finally, AI could be used to track and monitor students in ways that invade their privacy, or that could be used to manipulate their behavior.

All of these dangers are real and should be taken seriously. However, it’s important to remember that AI is still a new technology, and its impact on education is not yet fully understood. As such, we should tread cautiously when implementing AI into our educational systems.

When it comes to education, artificial intelligence could be a double-edged sword. While AI has the potential to revolutionize the field of education and help students learn in more efficient ways, there are also dangers that need to be considered.

One of the dangers of AI in education is that it could lead to a more standardized and homogenized form of learning. If all students are relying on AI-based learning tools, they may all end up learning the same things in the same way. This could lead to a loss of creativity and individuality in education.

Another danger is that AI-based learning tools could become too reliant on data and algorithms. This could lead to a situation where students are simply being taught what the machines think they should know, rather than being allowed to explore and discover new things for themselves.

Finally, there is the danger that AI could eventually replace human teachers altogether. While this may seem like a far-fetched scenario, it is not outside the realm of possibility. If AI-based learning tools become sophisticated enough, they may one day be able to do the job of teaching just as well as humans can.

These are just some of the dangers that need to be considered when it comes to artificial intelligence in education. While AI has the potential to revolutionize education, we need to be careful about how we implement it so that we don’t end up doing more harm than good.

References

Chakravorti, B. (2022) Why AI Failed to Live Up to Its Potential During the Pandemic. Harvard Business Review, March 17, 2022. https://hbr.org/2022/03/why-ai-failed-to-live-up-to-its-potential-during-the-pandemic

Chalmers, D. (2020) GPT-3 and General Intelligence. In Weinberg, J. (Ed.) Philosophers On GPT-3 (updated with replies by GPT-3). Daily Nous, July 30, 2020. https://dailynous.com/2020/07/30/philosophers-gpt-3/#chalmers

Driver, P. (2012) The Irony of Gamification. In English Digital Magazine 3, British Council Portugal, pp. 21 – 24 http://digitaldebris.info/digital-debris/2011/12/31/the-irony-of-gamification-written-for-ied-magazine.html

Engler, A. (2020) A guide to healthy skepticism of artificial intelligence and coronavirus. Washington D.C.: Brookings Institution https://www.brookings.edu/research/a-guide-to-healthy-skepticism-of-artificial-intelligence-and-coronavirus/

Heaven, W. D. (2021) Hundreds of AI tools have been built to catch covid. None of them helped. MIT Technology Review, July 30, 2021. https://www.technologyreview.com/2021/07/30/1030329/machine-learning-ai-failed-covid-hospital-diagnosis-pandemic/

Hughes, G. (2022) What lies at the end of the AI rainbow? IATEFL LTSIG Newsletter Issue April 2022

Luckin, R. (2022) The implications of AI for language learning and teaching. IATEFL LTSIG Newsletter Issue April 2022

O’Neil, C. (2016) Weapons of Math Destruction. London: Allen Lane

Tomasello, G. (2022) Next Generation of AI-Language Education Software: NLP & Language Modules (GPT3). IATEFL LTSIG Newsletter Issue April 2022

VanPatten, B. & Smith, M. (2022) Explicit and Implicit Learning in Second Language Acquisition. Cambridge: Cambridge University Press

On 21 January, I attended the launch webinar of DEFI (the Digital Education Futures Initiative), an initiative of the University of Cambridge, which seeks to work ‘with partners in industry, policy and practice to explore the field of possibilities that digital technology opens up for education’. The opening keynote speaker was Andreas Schleicher, head of education at the OECD. The OECD’s vision of the future of education is outlined in Schleicher’s book, ‘World Class: How to Build a 21st-Century School System’, freely available from the OECD, but his presentation for DEFI offers a relatively short summary. A recording is available here, and this post will take a closer look at some of the things he had to say.

Schleicher is a statistician and the coordinator of the OECD’s PISA programme. Along with other international organisations, such as the World Economic Forum and the World Bank (see my post here), the OECD promotes the global economization and corporatization of education, ‘based on the [human capital] view that developing work skills is the primary purpose of schooling’ (Spring, 2015: 14). In other words, the proper function of education is seen to be meeting the needs of global corporate interests. In the early days of the COVID-19 pandemic, with the impact of school closures becoming very visible, Schleicher expressed concern about the disruption to human capital development, but thought it was ‘a great moment’: ‘the current wave of school closures offers an opportunity for experimentation and for envisioning new models of education’. Every cloud has a silver lining, and the pandemic has been a godsend for private companies selling digital learning (see my post about this here) and for those who want to reimagine education in a more corporate way.

Schleicher’s presentation for DEFI was a good opportunity to look again at the way in which organisations like the OECD are shaping educational discourse (see my post about the EdTech imaginary and ELT).

He begins by suggesting that, as a result of the development of digital technology (Google, YouTube, etc.), literacy is ‘no longer just about extracting knowledge’. PISA reading scores, he points out, have remained more or less static since 2000, despite the fact that we have invested (globally) more than 15% extra per student in this time. Only 9% of all 15-year-old students in the industrialised world can distinguish between fact and opinion.

To begin with, one might argue about the reliability and validity of the PISA reading scores (Berliner, 2020). One might also argue, as did a collection of 80 education experts in a letter to the Guardian, that the scores themselves are responsible for damaging global education, raising further questions about their validity. One might argue that the increased investment was spent in the wrong way (on hardware and software rather than teacher training, for example), because the advice of organisations like the OECD has been uncritically followed. And the statistic about critical reading skills is fairly meaningless unless it is compared to similar metrics over a long time span: there is no reason to believe that susceptibility to fake news is any more of a problem now than it was, say, one hundred years ago. Nor is there any reason to believe that education can solve the fake-news problem (see my post about fake news and critical thinking here). These are more than just quibbles, but the main point that Schleicher is making is that education needs to change.

Schleicher next presents a graph which is designed to show that the amount of time that students spend studying correlates poorly with the amount they learn. His interest is in the (lack of) productivity of educational activities in some contexts. He goes on to argue that there is greater productivity in educational activities when learners have a growth mindset, implying (but not stating) that mindset interventions in schools would lead to a more productive educational environment.

Schleicher appears to confuse what students learn with the things they have learnt that have been measured by PISA. The two are obviously rather different, since PISA is only interested in a relatively small subset of the possible learning outcomes of schooling. His argument for growth mindset interventions hinges on the assumption that such interventions will lead to gains in reading scores. However, his graph demonstrates a correlation between growth mindset and reading scores, not a causal relationship. A causal relationship has not been clearly and empirically demonstrated (see my post about growth mindsets here) and recent work by Carol Dweck and her associates (e.g. Yeager et al., 2016), as well as other researchers (e.g. McPartlan et al, 2020), indicates that the relationship between gains in learning outcomes and mindset interventions is extremely complex.

Schleicher then turns to digitalisation and briefly discusses the positive and negative affordances of technology. He eulogizes platform companies before showing a slide designed to demonstrate that (in the workplace) there is a strong correlation between ICT use and learning. He concludes: ‘the digital world of learning is a hugely empowering world of learning’.

A brief paraphrase of this very disingenuous part of the presentation would be: technology can be good and bad, but I’ll only focus on the former. The discourse appears balanced, but it is anything but.

During this segment, Schleicher argues that technology is empowering, and gives as an example the claim that ‘the most successful companies these days, they’re not created by a big industry, they’re created by a big idea’. This is plainly at odds with the facts. In the case of Alphabet and Facebook, profits did not follow from a ‘big idea’: the ideas changed as the companies evolved.

Schleicher then sketches a picture of an unpredictable future (pandemics, climate change, AI, cyber wars, etc.) as a way of framing the importance of being open (and resilient) to different futures and how we respond to them. He offers two different kinds of response: maintenance of the status quo, or ‘outsourcing’ of education. The pandemic, he suggests, has made more countries aware that the latter is the way forward.

In his discussion of the maintenance of the status quo, Schleicher talks about the maintenance of educational monopolies. By this, he must be referring to state monopolies on education: this is a favoured neoliberal way of referring to state-sponsored education. But the extent to which, in 2021, the state in many OECD countries has any kind of monopoly on education is very open to debate. Privatization is advancing fast. Even in 2015, the World Education Forum’s ‘Final Report’ wrote that ‘the scale of engagement of nonstate actors at all levels of education is growing and becoming more diversified’. Schleicher goes on to talk about ‘large, bureaucratic school systems’, suggesting that such systems cannot be sufficiently agile, adaptive or responsive. ‘We should ask this question,’ he says, but his own answer to it is totally transparent: ‘changing education can be like moving graveyards’ is the title of the next slide. Education needs to be more like the health sector, he claims, which was able to develop a COVID vaccine in such a short period of time. We need an education industry that underpins change in the same way as the health industry underpins vaccine development. In case his message isn’t yet clear enough, I’ll spell it out: education needs to be privatized still further.

Schleicher then turns to the ways in which he feels that digital technology can enhance learning. These include the use of AR, VR and AI. Technology, he says, can make learning so much more personalized: ‘the computer can study how you study, and then adapt learning so that it is much more granular, so much more adaptive, so much more responsive to your learning style’. He moves on to the field of assessment, again singing the praises of technology in the ways that it can offer new modes of assessment and ‘increase the reliability of machine rating for essays’. Through technology, we can ‘reunite learning and assessment’. Moving on to learning analytics, he briefly mentions privacy issues, before enthusing at greater length about the benefits of analytics.

Learning styles? Really? The reliability of machine scoring of essays? How reliable exactly? Data privacy as an area worth only a passing mention? The use of sensors to measure learners’ responses to learning experiences? Any pretence of balance appears now to have been shed. This is in-your-face sales talk.

Next up is a graph which purports to show the number of teachers in OECD countries who use technology for learners’ project work. This is followed by another graph showing the number of teachers who have participated in face-to-face and online CPD. The point of this is to argue that online CPD needs to become more common.

I couldn’t understand what point he was trying to make with the first graph. For the second, it is surely the quality of the CPD, rather than the channel, that matters.

Schleicher then turns to two further possible responses of education to unpredictable futures: ‘schools as learning hubs’ and ‘learn-as-you-go’. In the latter, digital infrastructure replaces physical infrastructure. Neither is explored in any detail. The main point appears to be that we should consider these possibilities, weighing up as we do so the risks and the opportunities (see slide below).

Useful ways to frame questions about the future of education, no doubt, but Schleicher is operating with a set of assumptions about the purpose of education, which he chooses not to explore. His fundamental assumption – that the primary purpose of education is to develop human capital in and for the global economy – is not one that I would share. However, if you do take that view, then privatization, economization, digitalization and the training of social-emotional competences are all reasonable corollaries, and the big question about the future concerns how to go about this in a more efficient way.

Schleicher’s (and the OECD’s) views are very much in accord with the libertarian values of the right-wing philanthro-capitalist foundations of the United States (the Gates Foundation, the Broad Foundation and so on), funded by Silicon Valley and hedge-fund managers. It is to the US that we can trace the spread and promotion of these ideas, but it is also, perhaps, to the US that we can now turn in search of hope for an alternative educational future. The privatization / disruption / reform movement in the US has stalled in recent years, as it has become clear that it failed to deliver on its promise of improved learning. The resistance to privatized and digitalized education is chronicled in Diane Ravitch’s latest book, ‘Slaying Goliath’ (2020). School closures during the pandemic may have been ‘a great moment’ for Schleicher, but for most of us, they have underscored the importance of face-to-face free public schooling. Now, with the electoral victory of Joe Biden and the appointment of a new US Secretary of Education (still to be confirmed), we are likely to see, for the first time in decades, an education policy that is firmly committed to public schools. The US is by far the largest contributor to the budget of the OECD – more than twice any other nation. Perhaps a rethink of the OECD’s educational policies will soon be in order?

References

Berliner D.C. (2020) The Implications of Understanding That PISA Is Simply Another Standardized Achievement Test. In Fan G., Popkewitz T. (Eds.) Handbook of Education Policy Studies. Springer, Singapore. https://doi.org/10.1007/978-981-13-8343-4_13

McPartlan, P., Solanki, S., Xu, D. & Sato, B. (2020) Testing Basic Assumptions Reveals When (Not) to Expect Mindset and Belonging Interventions to Succeed. AERA Open, 6 (4): 1 – 16 https://journals.sagepub.com/doi/pdf/10.1177/2332858420966994

Ravitch, D. (2020) Slaying Goliath: The Passionate Resistance to Privatization and the Fight to Save America’s Public Schools. New York: Vintage Books

Schleicher, A. (2018) World Class: How to Build a 21st-Century School System. Paris: OECD Publishing https://www.oecd.org/education/world-class-9789264300002-en.htm

Spring, J. (2015) Globalization of Education 2nd Edition. New York: Routledge

Yeager, D. S., et al. (2016) Using design thinking to improve psychological interventions: The case of the growth mindset during the transition to high school. Journal of Educational Psychology, 108(3), 374–391. https://doi.org/10.1037/edu0000098

Take the Cambridge Assessment English website, for example. When you connect to the site, you will see, at the bottom of the screen, a familiar (to people in Europe, at least) notification about the site’s use of cookies: the cookie consent notice.

You probably trust the site, so you ignore the notification and quickly move on to find the resource you are looking for. But if you did click on the hyperlinked ‘set cookies’, what would you find? The first link takes you to the ‘Cookie policy’, where you will be told that ‘We use cookies principally because we want to make our websites and mobile applications user-friendly, and we are interested in anonymous user behaviour. Generally our cookies don’t store sensitive or personally identifiable information such as your name and address or credit card details’. Scroll down, and you will find out more about the kinds of cookies that are used. Besides the cookies that are necessary to the functioning of the site, you will see that there are also ‘third party cookies’. These are explained as follows: ‘Cambridge Assessment works with third parties who serve advertisements or present offers on our behalf and personalise the content that you see. Cookies may be used by those third parties to build a profile of your interests and show you relevant adverts on other sites. They do not store personal information directly but use a unique identifier in your browser or internet device. If you do not allow these cookies, you will experience less targeted content’.

This is not factually inaccurate: personal information is not stored directly. However, it is extremely easy for this information to be triangulated with other information to identify you personally. In addition to the data that you generate by having cookies on your device, Cambridge Assessment will also directly collect data about you. Depending on your interactions with Cambridge Assessment, this will include ‘your name, date of birth, gender, contact data including your home/work postal address, email address and phone number, transaction data including your credit card number when you make a payment to us, technical data including internet protocol (IP) address, login data, browser type and technology used to access this website’. They say they may share this data ‘with other people and/or businesses who provide services on our behalf or at our request’ and ‘with social media platforms, including but not limited to Facebook, Google, Google Analytics, LinkedIn, in pseudonymised or anonymised forms’.
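
The mechanism behind that carefully worded policy is easy to simulate. The sketch below is a toy model, not Cambridge Assessment’s actual system: a third-party tracker sets one pseudonymous identifier per browser, and because the same identifier is sent back from every site that embeds the tracker, a cross-site profile accumulates without a name ever being stored.

    import uuid
    from collections import defaultdict

    # Toy simulation of third-party tracking; no real cookies or sites involved.
    profiles = defaultdict(list)  # the tracker's database: identifier -> browsing history
    browser_cookies = {}          # the cookie jar of one user's browser

    def visit(site: str) -> None:
        """The user visits a site that embeds the tracker's code."""
        tracker_id = browser_cookies.setdefault("tracker_id", str(uuid.uuid4()))
        profiles[tracker_id].append(site)  # same ID everywhere -> one linked profile

    for site in ("exam-practice.example", "travel-deals.example", "health-advice.example"):
        visit(site)

    print(dict(profiles))
    # {'3f2c…': ['exam-practice.example', 'travel-deals.example', 'health-advice.example']}

No ‘personal information’ is stored directly, just as the policy says; but one join against any dataset linking that identifier to an email address or login turns the pseudonymous profile into a named one, which is the triangulation point made above.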

In short, Cambridge Assessment may hold a huge amount of data about you and they can, basically, do what they like with it.

The cookie and privacy policies are fairly standard, as is the lack of transparency in their phrasing. Rather more transparency would include, for example, information about which particular ad trackers you are giving your consent to. This information can be found with a browser extension tool like Ghostery, and these trackers can be blocked. As you’ll see below, there are 5 ad trackers on this site. This is rather more than on other sites that English language teachers are likely to visit. ETS-TOEFL has 4, Macmillan English and Pearson have 3, CUP ELT and the British Council Teaching English have 1, while OUP ELT, IATEFL, BBC Learning English and Trinity College have none. The only site I found with more was TESOL, with 6. The blogs for all these organisations invariably have more trackers than their websites.

The use of numerous ad trackers is probably a reflection of the importance that Cambridge Assessment gives to social media marketing. There is a research paper, produced by Cambridge Assessment, which outlines the significance of big data and social media analytics. They have far more Facebook followers (and nearly 6 million likes) than any other ELT page, and they are proud of their #1 ranking in the education category of social media. The amount of data that can be collected here is enormous and it can be analysed in myriad ways using tools like Ubervu, Yomego and Hootsuite.

A little more transparency, however, would not go amiss. According to a report in Vox, Apple has announced that some time next year ‘iPhone users will start seeing a new question when they use many of the apps on their devices: Do they want the app to follow them around the internet, tracking their behavior?’ Obviously, Google and Facebook are none too pleased about this and will be fighting back. The implications for ad trackers and online advertising, more generally, are potentially huge. I wrote to Cambridge Assessment about this and was pleased to hear that ‘Cambridge Assessment are currently reviewing the process by which we obtain users consent for the use of cookies with the intention of moving to a much more transparent model in the future’. Let’s hope that other ELT organisations are doing the same.

You may be less bothered than I am by the thought of dozens of ad trackers following you around the net so that you can be served with more personalized ads. But the digital profile about you, to which these cookies contribute, may include information about your ethnicity, disabilities and sexual orientation. This profile is auctioned to advertisers when you visit some sites, allowing them to show you ‘personalized’ adverts based on the categories in your digital profile. Contrary to EU regulations, these categories may include whether you have cancer or a substance-abuse problem, as well as your politics and religion (as reported in Fortune: https://fortune.com/2019/01/28/google-iab-sensitive-profiles/ ).

But it’s not these cookies that are the most worrying aspect of our lack of digital privacy. It’s the sheer quantity of personal data that is stored about us. Every time we ask our students to use an app or a platform, we are asking them to divulge huge amounts of data. With ClassDojo, for example, this includes names, usernames, passwords, age, addresses, photographs, videos, documents, drawings, audio files, IP addresses and browser details, clicks, referring URLs, time spent on site, and page views (Manolev et al., 2019; see also Williamson, 2019).

It is now widely recognized that the ‘consent’ that is obtained through cookie policies and other end-user agreements is largely spurious. These consent agreements, as Sadowski (2019) observes, are non-negotiated, and non-negotiable; you either agree or you are denied access. What’s more, he adds, citing one study, it would take 76 days, working for 8 hours a day, to read the privacy policies a person typically encounters in a year. As a result, most of us choose not to choose when we accept online services (Cobo, 2019: 25). We have little, if any, control over how the data that is collected is used (Birch et al., 2020). More importantly, perhaps, when we ask our students to sign up to an educational app, we are asking / telling them to give away their personal data, not just ours. They are unlikely to fully understand the consequences of doing so.

The extent of this ignorance is also now widely recognized. In the UK, for example, two reports (cited by Sander, 2020a) indicate that ‘only a third of people know that data they have not actively chosen to share has been collected’ (Doteveryone, 2018: 5), and that ‘less than half of British adult internet users are aware that apps collect their location and information on their personal preferences’ (Ofcom, 2019: 14).

The main problem with this has been expressed by the programmer and activist Richard Stallman, in an interview with New York magazine (Kulwin, 2018): ‘Companies are collecting data about people. The data that is collected will be abused. That’s not an absolute certainty, but it’s a practical, extreme likelihood, which is enough to make collection a problem.’

The abuse that Stallman is referring to can come in a variety of forms. At the relatively trivial end is personalized advertising. Much more serious is the way that data aggregation companies scrape data from a variety of sources, building up individual data profiles which can be used to make significant life-impacting decisions, such as final academic grades or whether one is offered a job, insurance or credit (Manolev et al., 2019). Cathy O’Neil’s (2016) best-selling ‘Weapons of Math Destruction’ spells out in detail how this abuse of data increases racial, gender and class inequalities. And, after the revelations of Edward Snowden, we all know about the routine collection by states of huge amounts of data about, well, everyone. Whether it’s used for predictive policing, straightforward repression or something else, it is simply not possible for younger people, our students, to know what personal data they may regret divulging at a later date.

Digital educational providers may try to reassure us that they will keep data private, and not use it for advertising purposes, but the reassurances are hollow. These companies may change their terms and conditions further down the line, and examples exist of when this has happened (Moore, 2018: 210). But even if this does not happen, the data can never be secure. Illegal data breaches and cyber attacks are relentless, and education ranked worst at cybersecurity out of 17 major industries in one recent analysis (Foresman, 2018). One report suggests that one in five US schools and colleges have fallen victim to cyber-crime. Two weeks ago, I learnt (by chance, as I happened to be looking at my security settings on Chrome) that my passwords for Quizlet, Future Learn, Elsevier and Science Direct had been compromised by a data breach. To get a better understanding of the scale of data breaches, you might like to look at the UK’s IT Governance site, which lists detected and publicly disclosed data breaches and cyber attacks each month (36.6 million records breached in August 2020). If you scroll through the list, you’ll see how many of them are educational sites. You’ll also see a comment about how leaky organisations have been throughout lockdown … because they weren’t prepared for the sudden shift online.
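
Incidentally, you don’t have to wait for Chrome to tell you about compromised passwords. The Have I Been Pwned ‘Pwned Passwords’ service can be queried anonymously: only the first five characters of your password’s SHA-1 hash leave your machine, and the matching is done locally. A minimal sketch (the range endpoint is real; the error handling is deliberately thin):

    import hashlib
    import requests

    def times_breached(password: str) -> int:
        """Check a password against Pwned Passwords using k-anonymity:
        only the first 5 hex characters of the SHA-1 hash are sent."""
        sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
        prefix, suffix = sha1[:5], sha1[5:]
        resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}", timeout=10)
        resp.raise_for_status()
        for line in resp.text.splitlines():  # each line: HASH_SUFFIX:COUNT
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)
        return 0

    print(times_breached("password123"))  # a very large number; one hopes your own return 0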

Recent years have seen a growing consensus that ‘it is crucial for language teaching to […] encompass the digital literacies which are increasingly central to learners’ […] lives’ (Dudeney et al., 2013). Most of the focus has been on the skills that are needed to use digital media. There also appears to be growing interest in developing critical thinking skills in the context of digital media (e.g. Peachey, 2016) – identifying fake news and so on. To a much lesser extent, there has been some focus on ‘issues of digital identity, responsibility, safety and ethics when students use these technologies’ (Mavridi, 2020a: 172). Mavridi (2020b: 91) also briefly discusses the personal risks of digital footprints, but she does not have the space to explore more fully the notion of critical data literacy. This literacy involves an understanding of not just the personal risks of using ‘free’ educational apps and platforms, but of why they are ‘free’ in the first place. Sander (2020b) suggests that this literacy entails ‘an understanding of datafication, recognizing the risks and benefits of the growing prevalence of data collection, analytics, automation, and predictive systems, as well as being able to critically reflect upon these developments. This includes, but goes beyond the skills of, for example, changing one’s social media settings, and rather constitutes an altered view on the pervasive, structural, and systemic levels of changing big data systems in our datafied societies’.

In my next two posts, I will, first of all, explore in more detail the idea of critical data literacy, before suggesting a range of classroom resources.

(I posted about privacy in March 2014, when I looked at the connections between big data and personalized / adaptive learning. In another post, in September 2014, I looked at the claims of the CEO of Knewton, who bragged that his company had ‘five orders of magnitude more data about you than Google has. … We literally have more data about our students than any company has about anybody else about anything, and it’s not even close.’ You might find both of these posts interesting.)

References

Birch, K., Chiappetta, M. & Artyushina, A. (2020). ‘The problem of innovation in technoscientific capitalism: data rentiership and the policy implications of turning personal digital data into a private asset’ Policy Studies, 41:5, 468-487, DOI: 10.1080/01442872.2020.1748264

Cobo, C. (2019). I Accept the Terms and Conditions. https://adaptivelearninginelt.files.wordpress.com/2020/01/41acf-cd84b5_7a6e74f4592c460b8f34d1f69f2d5068.pdf

Doteveryone. (2018). People, Power and Technology: The 2018 Digital Attitudes Report. https://attitudes.doteveryone.org.uk

Dudeney, G., Hockly, N. & Pegrum, M. (2013). Digital Literacies. Harlow: Pearson Education

Foresman, B. (2018). Education ranked worst at cybersecurity out of 17 major industries. Edscoop, December 17, 2018. https://edscoop.com/education-ranked-worst-at-cybersecurity-out-of-17-major-industries/

Kulwin, K. (2018). ‘F*ck Them. We Need a Law’: A Legendary Programmer Takes on Silicon Valley. New York Intelligencer, April 2018. https://nymag.com/intelligencer/2018/04/richard-stallman-rms-on-privacy-data-and-free-software.html

Manolev, J., Sullivan, A. & Slee, R. (2019). ‘Vast amounts of data about our children are being harvested and stored via apps used by schools’. EduResearch Matters, February 18, 2019. https://www.aare.edu.au/blog/?p=3712

Mavridi, S. (2020a). Fostering Students’ Digital Responsibility, Ethics and Safety Skills (Dress). In Mavridi, S. & Saumell, V. (Eds.) Digital Innovations and Research in Language Learning. Faversham, Kent: IATEFL. pp. 170 – 196

Mavridi, S. (2020b). Digital literacies and the new digital divide. In Mavridi, S. & Xerri, D. (Eds.) English for 21st Century Skills. Newbury, Berks.: Express Publishing. pp. 90 – 98

Moore, M. (2018). Democracy Hacked. London: Oneworld

Ofcom. (2019). Adults: Media use and attitudes report [Report]. https://www.ofcom.org.uk/__data/assets/pdf_file/0021/149124/adults-media-use-and-attitudes-report.pdf

O’Neil, C. (2016). Weapons of Math Destruction. London: Allen Lane

Peachey, N. (2016). Thinking Critically through Digital Media. http://peacheypublications.com/

Sadowski, J. (2019). ‘When data is capital: Datafication, accumulation, and extraction’ Big Data and Society 6 (1) https://doi.org/10.1177%2F2053951718820549

Sander, I. (2020a). What is critical big data literacy and how can it be implemented? Internet Policy Review, 9 (2). DOI: 10.14763/2020.2.1479 https://www.econstor.eu/bitstream/10419/218936/1/2020-2-1479.pdf

Sander, I. (2020b). Critical big data literacy tools—Engaging citizens and promoting empowered internet usage. Data & Policy, 2: e5 doi:10.1017/dap.2020.5

Williamson, B. (2019). ‘Killer Apps for the Classroom? Developing Critical Perspectives on ClassDojo and the ‘Ed-tech’ Industry’ Journal of Professional Learning, 2019 (Semester 2) https://cpl.asn.au/journal/semester-2-2019/killer-apps-for-the-classroom-developing-critical-perspectives-on-classdojo

Online teaching is big business. Very big business. Online language teaching is a significant part of it, expected to be worth over $5 billion by 2025. Within this market, the biggest demand is for English and the lion’s share of the demand comes from individual learners. And a sizable number of them are Chinese kids.

There are a number of service providers, and the competition between them is hot. To give you an idea of the scale of this business, here are a few details taken from a report in USA Today. VIPKid is valued at over $3 billion, attracts celebrity investors, and has around 70,000 tutors who live in the US and Canada. 51Talk has 14,800 English teachers from a variety of English-speaking countries. BlingABC gets over 1,000 American applicants a month for its online tutoring jobs. There are many, many others.

Demand for English teachers in China is huge. The Pie News, citing a Chinese state media announcement, reported in September of last year that there were approximately 400,000 foreign citizens working in China as English language teachers, two-thirds of whom were working illegally. Recruitment problems, exacerbated by quotas and more stringent official requirements for qualifications, along with a very narrow desired teacher profile (white native speakers from a few countries like the US and the UK), have led more providers to look towards online solutions. Eric Yang, founder of the Shanghai-based iTutorGroup, which operates under a number of different brands and claims to be the ‘largest English-language learning institution in the world’, said that he had been expecting online tutoring to surpass F2F classes within a few years. With coronavirus, he now thinks it will come ‘much earlier’.

Typically, the work does not require much, if anything, in the way of training (besides familiarity with the platform), although a 40-hour TEFL course is usually preferred. Teachers deliver pre-packaged lessons. According to the USA Today report, Chinese students pay between $49 and $80 an hour for the classes.

It’s a highly profitable business and the biggest cost to the platform providers is the rates they pay the tutors. If you google “Teaching TEFL jobs online”, you’ll quickly find claims that teachers can earn $40 / hour and up. Such claims are invariably found on the sites of recruitment agencies, which are competing for attention. However, although it’s possible that a small number of people might make this kind of money, the reality is that most will get nowhere near it. Scroll down the pages a little and you’ll discover that a more generally quoted and accepted figure is between $14 and $20 / hour. These tutors are, of course, freelancers, so the wages are before tax, and there is no health coverage or pension plan.

VIPKid, for example, considered to be one of the better companies, offers payment in the $14 – $22 / hour range. Others offer considerably less, especially if you are not a white, graduate US citizen. Current rates advertised on OETJobs include work for Ziktalk ($10 – 15 / hour), NiceTalk ($10 – 11 / hour), 247MyTutor ($5 – 8 / hour) and Weblio ($5 – 6 / hour). The number of hours that you get is rarely fixed and tutors need to build up a client base by getting good reviews. They will often need to upload short introductory videos, selling their skills. They are in direct competition with other tutors.

They also need to make themselves available when demand for their services is highest. Peak hours for VIPKid, for example, are between 2 and 8 in the morning, depending on where you live in the US. Weekends, too, are popular. With VIPKid, classes are scheduled in advance, but this is not always the case with other companies, where you log on to show that you are available and hope someone wants you. This is the case with, for example, Cambly (which pays $10.20 / hour … or rather $0.17 / minute) and NiceTalk. According to one review, Cambly has a ‘priority hours system [which] allows teachers who book their teaching slots in advance to feature higher on the teacher list than those who have just logged in, meaning that they will receive more calls’. Teachers have to commit to a set schedule and any changes are heavily penalised. The review states that ‘new tutors on the platform should expect to receive calls for about 50% of the time they’re logged on’. Since pay is per minute of actual calls, that halves the effective rate: an hour logged on brings in closer to $5 than $10.

Taking the gig economy to its logical conclusion, there are other companies where tutors can fix their own rates. SkimaTalk, for example, offers a deal where tutors first teach three unpaid lessons (‘to understand how the system works and build up their initial reputation on the platform’); the system then sets $16 / hour as a default rate, but tutors can change this to anything they wish. At another, Palfish, tutors set their own rate, with $10 – 18 / hour being typical, and the company takes a 20% commission. With Preply, here is the deal on offer:

Your earnings depend on the hourly rate you set in your profile and how often you can provide lessons. Preply takes a 100% commission fee of your first lesson payment with every new student. For all subsequent lessons, the commission varies from 33 to 18% and depends on the number of completed lesson hours with students. The more tutoring you do through Preply, the less commission you pay.
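To see what a sliding commission like this means in practice, here is a minimal sketch in Python. Only the 100% first-lesson fee and the 33% – 18% range come from Preply’s description above; the hour thresholds in the code are invented for illustration.

# Illustrative model of a declining commission schedule like Preply's.
# The 100% first-lesson fee and the 33%-18% range are from the company's
# own description; the hour thresholds below are assumptions.

def commission_rate(completed_hours):
    # Commission falls as the tutor completes more lesson hours (assumed tiers).
    tiers = [(20, 0.33), (50, 0.28), (200, 0.23)]
    for threshold, rate in tiers:
        if completed_hours < threshold:
            return rate
    return 0.18  # the floor quoted by Preply

def net_earnings(hourly_rate, lesson_hours, new_students=0):
    # Each new student's first lesson is unpaid (100% commission).
    paid_hours = lesson_hours - min(new_students, lesson_hours)
    return paid_hours * hourly_rate * (1 - commission_rate(lesson_hours))

# A tutor charging $15 / hour who teaches 10 hours, two of them first
# lessons with new students, keeps $80.40 -- about $8 per hour taught.
print(net_earnings(15, 10, new_students=2))

Whatever the exact tiers, the shape of the deal is clear enough: the platform’s cut falls only as the tutor’s unpaid investment in the platform rises.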

Not one to miss a trick, Ziktalk (‘currently focusing on language learning and building global audience’) encourages teachers ‘to upload educational videos in order to attract more students’. Or, to put it another way, teachers provide free content in order to have more chance of earning $10 – 15 / hour. Ah, the joys of digital labour!

And, then, coronavirus came along. With schools shutting down, first in China and then elsewhere, tens of millions of students are migrating online. In Hong Kong, for example, the South China Morning Post reports that schools will remain closed until April 20, at the earliest, but university entrance exams will be going ahead as planned in late March. CNBC reported yesterday that classes are being cancelled across the US, and the same is happening, or is likely to happen, in many other countries.

Shares in the big online providers soared in February, with Forbes reporting that $3.2 billion had been added to the share value of China’s e-Learning leaders. Stock in New Oriental (owners of BlingABC, mentioned above) ‘rose 7.3% last month, adding $190 million to the wealth of its founder Yu Minhong [whose] current net worth is estimated at $3.4 billion’.

DingTalk, a communication and management app owned by Alibaba (and the most downloaded free app in China’s iOS App Store), has been adapted to offer online services for schools, reports Xinhua, the official state-run Chinese news agency. The scale of operations is enormous: more than 10,000 new cloud servers were deployed within just two hours.

Current impacts are likely to be dwarfed by what happens in the future. According to Terry Weng, a Shenzhen-based analyst, ‘The gradual exit of smaller education firms means there are more opportunities for TAL and New Oriental. […] Investors are more keen for their future performance.’ Zhu Hong, CTO of DingTalk, observes ‘the epidemic is like a catalyst for many enterprises and schools to adopt digital technology platforms and products’.

For edtech investors, things look rosy. Smaller, F2F providers are in danger of going under. In an attempt to mop up this market and gain overall market share, many elearning providers are offering hefty discounts and free services. Profits can come later.

For the hundreds of thousands of illegal or semi-legal English language teachers in China, things look doubly bleak. Their situation is likely to become even more precarious, with the online gig economy their obvious fall-back path. But English language teachers everywhere are likely to be affected one way or another, as will the whole world of TEFL.

Now seems like a pretty good time to find out more about precarity (see the Teachers as Workers website) and native-speakerism (see TEFL Equity Advocates).

At the start of the last decade, ELT publishers were worried, Macmillan among them. The financial crash of 2008 led to serious difficulties, not least in their key Spanish market. In 2011, Macmillan’s parent company was fined £11.3 million for corruption. Under new ownership, restructuring was a constant. At the same time, Macmillan ELT was getting ready to move from its Oxford headquarters to new premises in London, a move which would inevitably lead to the loss of a sizable proportion of its staff. On top of that, Macmillan, like the other ELT publishers, was aware that changes in the digital landscape (the first 3G iPhone had appeared in June 2008 and wifi access was spreading rapidly around the world) meant that they needed to shift away from the old print-based model. With her finger on the pulse, Caroline Moore wrote an article in October 2010 entitled ‘No Future? The English Language Teaching Coursebook in the Digital Age’. The publication (at the start of the decade) and runaway success of the online ‘Touchstone’ course, from arch-rival Cambridge University Press, meant that Macmillan needed to change fast if it was to avoid being left behind.

Macmillan already had a platform, Campus, but it was generally recognised as being clunky and outdated, and something new was needed. In the summer of 2012, Macmillan brought in two new executives – people who could talk the ‘creative-disruption’ talk and who believed in the power of big data to shake up English language teaching and publishing. At the time, the idea of big data was beginning to reach public consciousness, and ‘Big Data: A Revolution that Will Transform how We Live, Work, and Think’ by Viktor Mayer-Schönberger and Kenneth Cukier was a major bestseller in 2013 and 2014. ‘Big data’ was the ‘hottest trend’ in technology, peaking in Google Trends in October 2014. See the graph below.

Google Trends graph for ‘big data’

Not long after taking up their positions, the two executives began negotiations with Knewton, an American adaptive learning company. Knewton’s technology promised to gather colossal amounts of data on students using Knewton-enabled platforms. Its founder, Jose Ferreira, bragged that Knewton had ‘more data about our students than any company has about anybody else about anything […] We literally know everything about what you know and how you learn best, everything’. This data would, it was claimed, multiply, by orders of magnitude, the efficacy of learning materials, allowing a publisher like Macmillan to provide a truly personalized and optimal offering to learners using its platform.

The contract between Macmillan and Knewton was agreed in May 2013 ‘to build next-generation English Language Learning and Teaching materials’. Perhaps fearful of being left behind in what was seen to be a winner-takes-all market (Pearson already had a financial stake in Knewton), Cambridge University Press duly followed suit, signing a contract with Knewton in September of the same year, in order ‘to create personalized learning experiences in [their] industry-leading ELT digital products’. Things moved fast because, by the start of 2014 when Macmillan’s new catalogue appeared, customers were told to ‘watch out for the ‘Big Tree’’, Macmillan’s new platform, which would be powered by Knewton. ‘The power that will come from this world of adaptive learning takes my breath away’, wrote the international marketing director.

Not a lot happened next, at least outwardly. In the following year, 2015, the Macmillan catalogue again told customers to ‘look out for the Big Tree’, which would offer ‘flexible blended learning models’ that could ‘give teachers much more freedom to choose what they want to do in the class and what they want the students to do online outside of the classroom’.

Macmillan catalogue, 2015

But behind the scenes, everything was going wrong. It had become clear that a linear model of language learning, which was a necessary prerequisite of the Knewton system, simply did not lend itself to anything that would be even vaguely marketable in established markets. Skills development, not least the development of so-called 21st century skills, which Macmillan was pushing at the time, would not be facilitated by collecting huge amounts of data and algorithms offering personalized pathways. Even if it could have been, teachers weren’t ready for it, and the projections for platform adoption were beginning to seem wildly over-optimistic. Costs were spiralling. Pushed to meet unrealistic deadlines for a product that was totally ill-conceived in the first place, in-house staff were suffering, and this was made worse by what many staffers thought was a toxic work environment. By the end of 2014 (so, before the copy for the 2015 catalogue had been written), the two executives had gone.

For some time previously, sceptics had been joking that Macmillan had been barking up the wrong tree, and by the time the 2016 catalogue came out, the ‘Big Tree’ had disappeared without trace. The problem was that so much time and money had been thrown at this particular tree that not enough was left to develop new course materials (for adults). The whole thing had been a huge cock-up of an extraordinary kind.

Cambridge, too, lost interest in their Knewton connection, but were fortunate (or wise) not to have invested so much energy in it. Language learning was only ever a small part of Knewton’s portfolio, and the company had raised over $180 million in venture capital. Its founder, Jose Ferreira, had been a master of marketing hype, but the business side was not delivering any better than the educational side of things. Pearson pulled out. In December 2016, Ferreira stepped down and was replaced as CEO. The company shifted to ‘selling digital courseware directly to higher-ed institutions and students’, but this could not stop the decline. In September 2019, Knewton was sold for something under $17 million, with investors taking a hit of over $160 million. My heart bleeds.

It was clear, from very early on (see, for example, my posts from 2014 here and here) that Knewton’s product was little more than what Michael Feldstein called ‘snake oil’. Why and how could so many people fall for it for so long? Why and how will so many people fall for it again in the coming decade, although this time it won’t be ‘big data’ that does the seduction, but AI (which kind of boils down to the same thing)? The former Macmillan executives are still in the game, albeit in new companies and talking a slightly modified talk, and Jose Ferreira (whose new venture has already raised $3.7 million) is promising to revolutionize education with a new start-up which ‘will harness the power of technology to improve both access and quality of education’ (thanks to Audrey Watters for the tip). Investors may be desperate to find places to spread their portfolios, but why do the rest of us lap up the hype? It’s a question to which I will return.

Back in the middle of the last century, the first interactive machines for language teaching appeared. Previously, there had been phonograph discs and wire recorders (Ornstein, 1968: 401), but these had never really taken off. This time, things were different. Buoyed by a belief in the power of technology, along with the need (following the Soviet Union’s successful Sputnik programme) to demonstrate the pre-eminence of the United States’ technological expertise, the interactive teaching machines that were used in programmed instruction promised to revolutionize language learning (Valdman, 1968: 1). From coast to coast, ‘tremors of excitement ran through professional journals and conferences and department meetings’ (Kennedy, 1967: 871). The new technology was driven by hard science, supported and promoted by one of the best-known and most respected psychologists and public intellectuals of the day (Skinner, 1961).

In classrooms, the machines acted as powerfully effective triggers in generating situational interest (Hidi & Renninger, 2006). Even more exciting than the mechanical teaching machines were the computers that were appearing on the scene. ‘Lick’ Licklider, a pioneer in interactive computing at the Advanced Research Projects Agency in Arlington, Virginia, developed an automated drill routine for learning German by hooking up a computer, two typewriters, an oscilloscope and a light pen (Noble, 1991: 124). Students loved it, and some would ‘go on and on, learning German words until they were forced by scheduling to cease their efforts’. Researchers called the seductive nature of the technology ‘stimulus trapping’, and Licklider hoped that ‘before [the student] gets out from under the control of the computer’s incentives, [they] will learn enough German words’ (Noble, 1991: 125).

With many of the developed economies of the world facing a critical shortage of teachers, ‘an urgent pedagogical emergency’ (Hof, 2018), the new approach was considered to be extremely efficient and could equalise opportunity in schools across the country. It was ‘here to stay: [it] appears destined to make progress that could well go beyond the fondest dreams of its originators […] an entire industry is just coming into being and significant sales and profits should not be too long in coming’ (Kozlowski, 1961: 47).

Unfortunately, however, researchers and entrepreneurs had massively underestimated the significance of novelty effects. The triggered situational interest of the machines did not lead to intrinsic individual motivation. Students quickly tired of, and eventually came to dislike, programmed instruction and the machines that delivered it (McDonald et al., 2005: 89). What’s more, the machines were expensive, and ‘research studies conducted on its effectiveness showed that the differences in achievement did not constantly or substantially favour programmed instruction over conventional instruction’ (Saettler, 2004: 303). Newer technologies, with better ‘stimulus trapping’, were appearing. Programmed instruction lost its backing and disappeared, leaving as traces only its interest in clearly defined learning objectives, the measurement of learning outcomes and a concern with the efficiency of learning approaches.

Hot on the heels of programmed instruction came the language laboratory. Futuristic in appearance, not entirely unlike the deck of the starship USS Enterprise which launched at around the same time, language labs captured the public imagination and promised to explore the final frontiers of language learning. As with the earlier teaching machines, students were initially enthusiastic. Even today, when language labs are introduced into contexts where they may be perceived as new technology, they can lead to high levels of initial motivation (e.g. Ramganesh & Janaki, 2017).

Given the huge investment in these labs, it’s unfortunate that initial interest waned fast. By 1969, many of these rooms had turned into ‘“electronic graveyards,” sitting empty and unused, or perhaps somewhat glorified study halls to which students grudgingly repair to don headphones, turn down the volume, and prepare the next period’s history or English lesson, unmolested by any member of the foreign language faculty’ (Turner, 1969: 1, quoted in Roby, 2003: 527). ‘Many second language students shudder[ed] at the thought of entering into the bowels of the “language laboratory” to practice and perfect the acoustical aerobics of proper pronunciation skills. Visions of sterile white-walled, windowless rooms, filled with endless bolted-down rows of claustrophobic metal carrels, and overseen by a humorless, lab director, evoke[d] fear in the hearts of even the most stout-hearted prospective second-language learners’ (Wiley, 1990: 44).

By the turn of this century, language labs had mostly gone, consigned to oblivion by the appearance of yet newer technology: the internet, laptops and smartphones. Education had been on the brink of being transformed through new learning technologies for decades (Laurillard, 2008: 1), but this time it really was different. It wasn’t just one technology that had appeared, but a whole slew of them: ‘artificial intelligence, learning analytics, predictive analytics, adaptive learning software, school management software, learning management systems (LMS), school clouds. No school was without these and other technologies branded as ‘superintelligent’ by the late 2020s’ (Macgilchrist et al., 2019). The hardware, especially phones, was ubiquitous and, as far as schools were concerned, free. Unlike with teaching machines and language laboratories, students were already used to the technology and expected to use their devices in their studies.

A barrage of publicity, mostly paid for by the industry, surrounded the new technologies. These would ‘meet the demands of Generation Z’, the new generation of students, now cast as consumers, who ‘were accustomed to personalizing everything’. AR, VR, interactive whiteboards, digital projectors and so on made it easier to ‘create engaging, interactive experiences’. The ‘New Age’ technologies made learning fun and easy, ‘bringing enthusiasm among the students, improving student engagement, enriching the teaching process, and bringing liveliness in the classroom’. On top of that, they allowed huge amounts of data to be captured and sold, whilst tracking progress and attendance. In any case, resistance to digital technology, said more than one language teaching expert, was pointless (Styring, 2015).

At the same time, technology companies increasingly took on ‘central roles as advisors to national governments and local districts on educational futures’ and public educational institutions came to be ‘regarded by many as dispensable or even harmful’ (Macgilchrist et al., 2019).

But, as it turned out, the students of Generation Z were not as uniformly enthusiastic about the new technology as had been assumed, and resistance to digital, personalized delivery in education was not long in coming. In November 2018, high school students at Brooklyn’s Secondary School for Journalism staged a walkout in protest at their school’s use of Summit Learning, a web-based platform promoting personalized learning developed by Facebook. They complained that the platform resulted in coursework requiring students to spend much of their day in front of a computer screen, that it made it easy to cheat by looking up answers online, and that some of their teachers didn’t have the proper training for the curriculum (Leskin, 2018). Besides, their school was in a deplorable state of disrepair, especially the toilets. There were similar protests in Kansas, where students staged sit-ins, supported by their parents, one of whom complained that ‘we’re allowing the computers to teach and the kids all looked like zombies’ before pulling his son out of the school (Bowles, 2019). In Pennsylvania and Connecticut, some schools stopped using Summit Learning altogether, following protests.

But the resistance did not last. Protesters were accused of being nostalgic conservatives and educationalists kept largely quiet, fearful of losing their funding from the Chan Zuckerberg Initiative (Facebook) and other philanthro-capitalists. The provision of training in grit, growth mindset, positive psychology and mindfulness (also promoted by the technology companies) was ramped up, and eventually the disaffected students became more quiescent. Before long, the data-intensive, personalized approach, relying on the tools, services and data storage of particular platforms had become ‘baked in’ to educational systems around the world (Moore, 2018: 211). There was no going back (except for small numbers of ultra-privileged students in a few private institutions).

By the middle of the century (2155), most students, of all ages, studied with interactive screens in the comfort of their homes. Algorithmically-driven content, with personalized, adaptive tests had become the norm, but the technology occasionally went wrong, leading to some frustration. One day, two young children discovered a book in their attic. Made of paper with yellow, crinkly pages, where ‘the words stood still instead of moving the way they were supposed to’. The book recounted the experience of schools in the distant past, where ‘all the kids from the neighbourhood came’, sitting in the same room with a human teacher, studying the same things ‘so they could help one another on the homework and talk about it’. Margie, the younger of the children at 11 years old, was engrossed in the book when she received a nudge from her personalized learning platform to return to her studies. But Margie was reluctant to go back to her fractions. She ‘was thinking about how the kids must have loved it in the old days. She was thinking about the fun they had’ (Asimov, 1951).

References

Asimov, I. 1951. The Fun They Had. Accessed September 20, 2019. http://web1.nbed.nb.ca/sites/ASD-S/1820/J%20Johnston/Isaac%20Asimov%20-%20The%20fun%20they%20had.pdf

Bowles, N. 2019. ‘Silicon Valley Came to Kansas Schools. That Started a Rebellion’ The New York Times, April 21. Accessed September 20, 2019. https://www.nytimes.com/2019/04/21/technology/silicon-valley-kansas-schools.html

Hidi, S. & Renninger, K.A. 2006. ‘The Four-Phase Model of Interest Development’ Educational Psychologist, 41 (2), 111 – 127

Hof, B. 2018. ‘From Harvard via Moscow to West Berlin: educational technology, programmed instruction and the commercialisation of learning after 1957’ History of Education, 47 (4): 445-465

Kennedy, R.H. 1967. ‘Before using Programmed Instruction’ The English Journal, 56 (6), 871 – 873

Kozlowski, T. 1961. ‘Programmed Teaching’ Financial Analysts Journal, 17 (6): 47 – 54

Laurillard, D. 2008. Digital Technologies and their Role in Achieving our Ambitions for Education. London: Institute of Education.

Leskin, P. 2018. ‘Students in Brooklyn protest their school’s use of a Zuckerberg-backed online curriculum that Facebook engineers helped build’ Business Insider, 12.11.18 Accessed 20 September 2019. https://www.businessinsider.de/summit-learning-school-curriculum-funded-by-zuckerberg-faces-backlash-brooklyn-2018-11?r=US&IR=T

McDonald, J. K., Yanchar, S. C. & Osguthorpe, R.T. 2005. ‘Learning from Programmed Instruction: Examining Implications for Modern Instructional Technology’ Educational Technology Research and Development, 53 (2): 84 – 98

Macgilchrist, F., Allert, H. & Bruch, A. 2019. ‘Students and society in the 2020s. Three future ‘histories’ of education and technology’ Learning, Media and Technology. https://www.tandfonline.com/doi/full/10.1080/17439884.2019.1656235

Moore, M. 2018. Democracy Hacked. London: Oneworld

Noble, D. D. 1991. The Classroom Arsenal. London: The Falmer Press

Ornstein, J. 1968. ‘Programmed Instruction and Educational Technology in the Language Field: Boon or Failure?’ The Modern Language Journal, 52 (7), 401 – 410

Ramganesh, E. & Janaki, S. 2017. ‘Attitude of College Teachers towards the Utilization of Language Laboratories for Learning English’ Asian Journal of Social Science Studies, 2 (1): 103 – 109

Roby, W.B. 2003. ‘Technology in the service of foreign language teaching: The case of the language laboratory’ In D. Jonassen (ed.), Handbook of Research on Educational Communications and Technology, 2nd ed.: 523 – 541. Mahwah, NJ.: Lawrence Erlbaum Associates

Saettler, P. 2004. The Evolution of American Educational Technology. Greenwich, Conn.: Information Age Publishing

Skinner, B. F. 1961. ‘Teaching Machines’ Scientific American, 205(5), 90-107

Styring, J. 2015. Engaging Generation Z. Cambridge English webinar 2015 https://www.youtube.com/watch?time_continue=4&v=XCxl4TqgQZA

Valdman, A. 1968. ‘Programmed Instruction versus Guided Learning in Foreign Language Acquisition’ Die Unterrichtspraxis / Teaching German, 1 (2), 1 – 14.

Wiley, P. D. 1990. ‘Language labs for 1990: User-friendly, expandable and affordable’ Media & Methods, 27 (1): 44 – 47


Jenny Holzer, Protect me from what I want

At a recent ELT conference, a plenary presentation entitled ‘Getting it right with edtech’ (sponsored by a vendor of – increasingly digital – ELT products) began with the speaker suggesting that technology was basically neutral, that what you do with educational technology matters far more than the nature of the technology itself. The idea that technology is a ‘neutral tool’ has a long pedigree and often accompanies exhortations to embrace edtech in one form or another (see for example Fox, 2001). It is an idea that is supported by no less a luminary than Chomsky, who, in a 2012 video entitled ‘The Purpose of Education’ (Chomsky, 2012), said that:

As far as […] technology […] and education is concerned, technology is basically neutral. It’s kind of like a hammer. I mean, […] the hammer doesn’t care whether you use it to build a house or whether a torturer uses it to crush somebody’s skull; a hammer can do either. The same with the modern technology; say, the Internet, and so on.

Although hammers are not usually classic examples of educational technology, they are worthy of a short discussion. Hammers come in all shapes and sizes, and when you choose one, you need to consider its head weight (usually between 16 and 20 ounces), the length of the handle, the shape of the grip, etc. Appropriate specifications for particular hammering tasks have been calculated in great detail. The data on which these specifications are based comes from an analysis of the hand size and upper body strength of the typical user. The typical user is a man, and the typical hammer has been designed for a man. The average male hand length is 177.9 mm; that of the average woman is 10 mm shorter (Wang & Cai, 2017). Women typically have about half the upper body strength of men (Miller et al., 1993). It’s possible, but not easy, to find hammers designed for women (they are referred to as ‘Ladies hammers’ on Amazon). They have a much lighter head weight, a shorter handle length, and many come in pink or floral designs. Hammers, in other words, are far from neutral: they are highly gendered.

Moving closer to educational purposes and ways in which we might ‘get it right with edtech’, it is useful to look at the smartphone. The average size of these devices has risen in recent years, and is now 5.5 inches, with the market for 6 inch screens growing fast. Why is this an issue? Well, as Caroline Criado Perez (2019: 159) notes, ‘while we’re all admittedly impressed by the size of your screen, it’s a slightly different matter when it comes to fitting into half the population’s hands. The average man can fairly comfortably use his device one-handed – but the average woman’s hand is not much bigger than the handset itself’. This is despite the fact that women are more likely than men to own an iPhone.

It is not, of course, just technological artefacts that are gendered. Voice-recognition software is also very biased. One researcher (Tatman, 2017) has found that Google’s speech recognition tool is 13% more accurate for men than it is for women. There are also significant biases for race and social class. The reason lies in the dataset that the tool is trained on: the algorithms may be gender- and socio-culturally-neutral, but the dataset is not. It would not be difficult to redress this bias by training the tool on a different dataset.
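Findings like Tatman’s are typically expressed as differences in word error rate (WER): the word-level edit distance between what the speaker actually said and what the recogniser produced, divided by the number of words spoken. Here is a minimal sketch of the measurement; the transcripts are invented (Tatman’s data came from YouTube’s automatic captions).

def wer(reference, hypothesis):
    # Word error rate: word-level edit distance / number of reference words.
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[-1][-1] / len(ref)

# Invented example: the same sentence as transcribed for two speakers.
print(wer('there is a bus stop on the corner',
          'there is a bus stop on the corner'))  # 0.0 (perfect)
print(wer('there is a bus stop on the corner',
          'there is a bust up on the corner'))   # 0.25 (2 errors in 8 words)

A systematic gap in average WER between one group of speakers and another is the bias in question, and, as noted above, it is a property of the training data rather than of the algorithm.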

The same bias can be found in automatic translation software. Because corpora such as the BNC or COCA have twice as many male pronouns as female ones (as a result of the kinds of text that are selected for the corpora), translation software reflects the bias. With Google Translate, a sentence in a language with a gender-neutral pronoun, such as ‘S/he is a doctor’, is rendered into English as ‘He is a doctor’. Meanwhile, ‘S/he is a nurse’ is translated as ‘She is a nurse’ (Criado Perez, 2019: 166).
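The imbalance itself is trivial to check on any text you have to hand. The snippet below is a toy version of the kind of count behind the BNC/COCA figure; the sample sentence is invented.

import re
from collections import Counter

def pronoun_counts(text):
    # Count masculine vs feminine pronouns in a text.
    male = {'he', 'him', 'his', 'himself'}
    female = {'she', 'her', 'hers', 'herself'}
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(w for w in words if w in male | female)
    return sum(counts[w] for w in male), sum(counts[w] for w in female)

# Run over a whole corpus, the first number comes out roughly double
# the second -- the imbalance that translation models then learn.
print(pronoun_counts('He said his results spoke for themselves; '
                     'she said her data told a different story.'))  # (2, 2)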

Datasets, then, are often very far from neutral. Algorithms are not necessarily any more neutral than the datasets, and Cathy O’Neil’s best-seller ‘Weapons of Math Destruction’ catalogues the many, many ways in which algorithms, posing as neutral mathematical tools, can increase racial, social and gender inequalities.

It would not be hard to provide many more examples, but the selection above is probably enough. Technology, as Langdon Winner (Winner, 1980) observed almost forty years ago, is ‘deeply interwoven in the conditions of modern politics’. Technology cannot be neutral: it has politics.

So far, I have focused primarily on the non-neutrality of technology in terms of gender (and, in passing, race and class). Before returning to broader societal issues, I would like to make a relatively brief mention of another kind of non-neutrality: the pedagogic. Language learning materials necessarily contain content of some kind: texts, topics, the choice of values or role models, language examples, and so on. These cannot be value-free. In the early days of educational computer software, one researcher (Biraimah, 1993) found that it was ‘at least, if not more, biased than the printed page it may one day replace’. My own impression is that this remains true today.

Equally interesting to my mind is the fact that all educational technologies, ranging from the writing slate to the blackboard (see Buzbee, 2014), from the overhead projector to the interactive whiteboard, always privilege a particular kind of teaching (and learning). ‘Technologies are inherently biased because they are built to accomplish certain very specific goals which means that some technologies are good for some tasks while not so good for other tasks’ (Zhao et al., 2004: 25). Digital flashcards, for example, inevitably encourage a focus on rote learning. Contemporary LMSs have impressive multi-functionality (i.e. they often could be used in a very wide variety of ways), but, in practice, most teachers use them in very conservative ways (Laanpere et al., 2004). This may be a result of teacher and institutional preferences, but it is almost certainly due, at least in part, to the way that LMSs are designed. They are usually ‘based on traditional approaches to instruction dating from the nineteenth century: presentation and assessment [and] this can be seen in the selection of features which are most accessible in the interface, and easiest to use’ (Lane, 2009).

The argument that educational technology is neutral because it could be put to many different uses, good or bad, is problematic because the likelihood of one particular use is usually much greater than another. There is, however, another way of looking at technological neutrality, and that is to look at its origins. Elsewhere on this blog, in post after post, I have given examples of the ways in which educational technology has been developed, marketed and sold primarily for commercial purposes. Educational values, if indeed there are any, are often an afterthought. The research literature in this area is rich and growing: Stephen Ball, Larry Cuban, Neil Selwyn, Joel Spring, Audrey Watters, etc.

Rather than revisit old ground here, this is an opportunity to look at a slightly different origin of educational technology: the US military. The close connection of the early history of the internet and the Advanced Research Projects Agency (now DARPA) of the United States Department of Defense is fairly well-known. Much less well-known are the very close connections between the US military and educational technologies, which are catalogued in the recently reissued ‘The Classroom Arsenal’ by Douglas D. Noble.

Following the twin shocks of the Soviet Sputnik 1 (in 1957) and Yuri Gagarin (in 1961), the United States launched a massive programme of investment in the development of high-tech weaponry. This included ‘computer systems design, time-sharing, graphics displays, conversational programming languages, heuristic problem-solving, artificial intelligence, and cognitive science’ (Noble, 1991: 55), all of which are now crucial components in educational technology. But it also quickly became clear that more sophisticated weapons required much better trained operators, hence the US military’s huge (and continuing) interest in training. Early interest focused on teaching machines and programmed instruction (branches of the US military were by far the biggest purchasers of programmed instruction products). It was essential that training was effective and efficient, and this led to a wide interest in the mathematical modelling of learning and instruction.

What was then called computer-based education (CBE) was developed as a response to military needs. The first experiments in computer-based training took place at the Systems Research Laboratory of the Air Force’s RAND Corporation think tank (Noble, 1991: 73). Research and development in this area accelerated in the 1960s and 1970s and CBE (which has morphed into the platforms of today) ‘assumed particular forms because of the historical, contingent, military contexts for which and within which it was developed’ (Noble, 1991: 83). It is possible to imagine computer-based education having developed in very different directions. Between the 1960s and 1980s, for example, the PLATO (Programmed Logic for Automatic Teaching Operations) project at the University of Illinois focused heavily on computer-mediated social interaction (forums, message boards, email, chat rooms and multi-player games). PLATO was also significantly funded by a variety of US military agencies, but proved to be of much less interest to the generals than the work taking place in other laboratories. As Noble observes, ‘some technologies get developed while others do not, and those that do are shaped by particular interests and by the historical and political circumstances surrounding their development’ (Noble, 1991: 4).

According to Noble, however, the influence of the military reached far beyond the development of particular technologies. Alongside the investment in technologies, the military were the prime movers in a campaign to promote computer literacy in schools.

Computer literacy was an ideological campaign rather than an educational initiative – a campaign designed, at bottom, to render people ‘comfortable’ with the ‘inevitable’ new technologies. Its basic intent was to win the reluctant acquiescence of an entire population in a brave new world sculpted in silicon.

The computer campaign also succeeded in getting people in front of that screen and used to having computers around; it made people ‘computer-friendly’, just as computers were being rendered ‘user-friendly’. It also managed to distract the population, suddenly propelled by the urgency of learning about computers, from learning about other things, such as how computers were being used to erode the quality of their working lives, or why they, supposedly the citizens of a democracy, had no say in technological decisions that were determining the shape of their own futures.

Third, it made possible the successful introduction of millions of computers into schools, factories and offices, even homes, with minimal resistance. The nation’s public schools have by now spent over two billion dollars on over a million and a half computers, and this trend still shows no signs of abating. At this time, schools continue to spend one-fifth as much on computers, software, training and staffing as they do on all books and other instructional materials combined. Yet the impact of this enormous expenditure is a stockpile of often idle machines, typically used for quite unimaginative educational applications. Furthermore, the accumulated results of three decades of research on the effectiveness of computer-based instruction remain ‘inconclusive and often contradictory’. (Noble, 1991: x – xi)

Rather than being neutral in any way, it seems more reasonable to argue, along with (I think) most contemporary researchers, that edtech is profoundly value-laden because it has the potential to (i) influence certain values in students; (ii) change educational values in [various] ways; and (iii) change national values (Omotoyinbo & Omotoyinbo, 2016: 173). Most importantly, the growth in the use of educational technology has been accompanied by a change in the way that education itself is viewed: ‘as a tool, a sophisticated supply system of human cognitive resources, in the service of a computerized, technology-driven economy’ (Noble, 1991: 1). These two trends are inextricably linked.

References

Biraimah, K. 1993. The non-neutrality of educational computer software. Computers and Education 20 / 4: 283 – 290

Buzbee, L. 2014. Blackboard: A Personal History of the Classroom. Minneapolis: Graywolf Press

Chomsky, N. 2012. The Purpose of Education (video). Learning Without Frontiers Conference. https://www.youtube.com/watch?v=DdNAUJWJN08

Criado Perez, C. 2019. Invisible Women. London: Chatto & Windus

Fox, R. 2001. Technological neutrality and practice in higher education. In A. Herrmann and M. M. Kulski (Eds), Expanding Horizons in Teaching and Learning. Proceedings of the 10th Annual Teaching Learning Forum, 7-9 February 2001. Perth: Curtin University of Technology. http://clt.curtin.edu.au/events/conferences/tlf/tlf2001/fox.html

Laanpere, M., Poldoja, H. & Kikkas, K. 2004. The second thoughts about pedagogical neutrality of LMS. Proceedings of IEEE International Conference on Advanced Learning Technologies, 2004. https://ieeexplore.ieee.org/abstract/document/1357664

Lane, L. 2009. Insidious pedagogy: How course management systems impact teaching. First Monday, 14 (10). https://firstmonday.org/ojs/index.php/fm/article/view/2530/2303

Miller, A.E., MacDougall, J.D., Tarnopolsky, M. A. & Sale, D.G. 1993. ‘Gender differences in strength and muscle fiber characteristics’ European Journal of Applied Physiology and Occupational Physiology. 66(3): 254-62 https://www.ncbi.nlm.nih.gov/pubmed/8477683

Noble, D. D. 1991. The Classroom Arsenal. Abingdon, Oxon.: Routledge

Omotoyinbo, D. W. & Omotoyinbo, F. R. 2016. Educational Technology and Value Neutrality. Societal Studies, 8 / 2: 163 – 179 https://www3.mruni.eu/ojs/societal-studies/article/view/4652/4276

O’Neil, C. 2016. Weapons of Math Destruction. London: Penguin

Sundström, P. 1998. Interpreting the Notion that Technology is Value Neutral. Medicine, Health Care and Philosophy 1: 42 – 44

Tatman, R. 2017. ‘Gender and Dialect Bias in YouTube’s Automatic Captions’ Proceedings of the First Workshop on Ethics in Natural Language Processing, pp. 53–59 http://www.ethicsinnlp.org/workshop/pdf/EthNLP06.pdf

Wang, C. & Cai, D. 2017. ‘Hand tool handle design based on hand measurements’ MATEC Web of Conferences 119, 01044 (2017) https://www.matec-conferences.org/articles/matecconf/pdf/2017/33/matecconf_imeti2017_01044.pdf

Winner, L. 1980. Do Artifacts have Politics? Daedalus 109 / 1: 121 – 136

Zhao, Y, Alvarez-Torres, M. J., Smith, B. & Tan, H. S. 2004. The Non-neutrality of Technology: a Theoretical Analysis and Empirical Study of Computer Mediated Communication Technologies. Journal of Educational Computing Research 30 (1 &2): 23 – 55

When the startup, AltSchool, was founded in 2013 by Max Ventilla, the former head of personalization at Google, it quickly drew the attention of venture capitalists and within a few years had raised $174 million from the likes of the Zuckerberg Foundation, Peter Thiel, Laurene Powell Jobs and Pierre Omidyar. It garnered gushing articles in a fawning edtech press which enthused about ‘how successful students can be when they learn in small, personalized communities that champion project-based learning, guided by educators who get a say in the technology they use’. It promised ‘a personalized learning approach that would far surpass the standardized education most kids receive’.

Ventilla was an impressive money-raiser who used, and appeared to believe, every cliché in the edtech sales manual. Dressed in regulation jeans, polo shirt and fleece, he claimed that schools in America were ‘stuck in an industrial-age model, [which] has been in steady decline for the last century’. What he offered, instead, was a learner-centred, project-based curriculum providing real-world lessons. There was a focus on social-emotional learning activities, and critical thinking was vital.

The key to the approach was technology. From the start, software developers, engineers and researchers worked alongside teachers every day, ‘constantly tweaking the Personalized Learning Plan, which shows students their assignments for each day and helps teachers keep track of and assess students’ learning’. There were tablets for pre-schoolers, laptops for older kids and wall-mounted cameras to record the lessons. There were, of course, Khan Academy videos. Ventilla explained that “we start with a representation of each child”, and even though “the vast majority of the learning should happen non-digitally”, the child’s habits and preferences get converted into data, “a digital representation of the important things that relate to that child’s learning, not just their academic learning but also their non-academic learning. Everything logistic that goes into setting up the experience for them, whether it’s who has permission to pick them up or their allergy information. You name it.” And just like Netflix matches us to TV shows, “If you have that accurate and actionable representation for each child, now you can start to personalize the whole experience for that child. You can create that kind of loop you described where because we can represent a child well, we can match them to the right experiences.”

AltSchool seemed to offer the possibility of doing something noble, of transforming education, ‘bringing it into the digital age’, and, at the same time, a healthy return on investors’ money. The company expanded rapidly, opening nine AltSchool microschools in New York and the Bay Area, with plans afoot for further expansion in Chicago. But, by then, it was already clear that something was going wrong. Five of the schools were closed before they had really got started and the attrition rate in some classrooms had reached about 30%. Revenue in 2018 was only $7 million and there were few buyers for the AltSchool platform. Quoting once more from the edtech bible, Ventilla explained the situation: ‘Our whole strategy is to spend more than we make.’ Since software is expensive to develop and cheap to distribute, the losses, he believed, would turn into steep profits once AltSchool refined its product and landed enough customers.

The problems were many and apparent. Some of the buildings were simply not appropriate for schools, with no playgrounds or gyms and malfunctioning toilets, among other issues. Parents were becoming unhappy and accused AltSchool of putting ‘its ambitions as a tech company above its responsibility to teach their children’. ‘We kind of came to the conclusion that, really, AltSchool as a school was kind of a front for what Max really wants to do, which is develop software that he’s selling,’ a parent of a former AltSchool student told Business Insider. ‘We had really mediocre educators using technology as a crutch,’ said one father who transferred his child to a different private school after two years at AltSchool. ‘[…] We learned that it’s almost impossible to really customize the learning experience for each kid.’ Some parents began to wonder whether AltSchool had enticed families into its program merely to extract data from their children, then toss them aside.

With the benefit of hindsight, it would seem that the accusations were hardly unfair. In June of this year, AltSchool announced that its four remaining schools would be operated by a new partner, Higher Ground Education (a well-funded startup founded in 2016 which promotes and ‘modernises’ Montessori education). Meanwhile, AltSchool has been rebranded as Altitude Learning, focusing its ‘resources on the development and expansion of its personalized learning platform’ for licensing to other schools across the country.

Quoting once more from the edtech sales manual, Ventilla has said that education should drive the tech, not the other way round. Not so many years earlier, before starting AltSchool, Ventilla said that he had read two dozen books on education and emerged a fan of Sir Ken Robinson. He had no experience as a teacher or as an educational administrator. Instead, he had ‘extensive knowledge of networks, and he understood the kinds of insights that can be gleaned from big data’.