Posts Tagged ‘dataveillance’

In the last post, I mentioned a lesson plan from an article by Pegrum, M., Dudeney, G. & Hockly, N. (2018. Digital literacies revisited. The European Journal of Applied Linguistics and TEFL, 7 (2), pp. 3-24) in which students discuss the data that is collected by fitness apps and the possibility of using this data to calculate health insurance premiums, before carrying out and sharing online research about companies that track personal data. It’s a nice plan, but unfortunately pay-walled; you could try requesting a copy through ResearchGate.

The only other off-the-shelf lesson plan I have been able to find is entitled ‘You and Your Data’, from the British Council. Suitable for level B2, this plan, along with a photocopiable pdf, contains a vocabulary task (matching), a reading text (you and your data; who uses our data and why; can you protect your data) with true / false and sentence completion tasks, and a discussion (what do you do to protect your data). The material was written to coincide with Safer Internet Day (an EU project), which takes place in early February (next date: 9 February 2021). The related website, Better Internet for Kids, contains links to a wide range of educational resources for younger learners.

For other resources, a good first stop is Ina Sander’s ‘A Critically Commented Guide to Data Literacy Tools’, in which she describes and evaluates a wide range of educational online resources for developing critical data literacy. Some of the resources that I discuss below are also evaluated in this guide. Here are some suggestions for learning / teaching resources.

A glossary

This is simply a glossary of terms that are useful in discussing data issues. It could easily be converted into a matching exercise or flashcards.
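To show how easily a glossary lends itself to this kind of conversion, here is a minimal Python sketch that turns term/definition pairs into a matching exercise. The sample entries below are my own invented examples, not taken from the glossary itself:

```python
import random

# A few sample term/definition pairs (hypothetical entries; any
# data-literacy glossary could be substituted here).
GLOSSARY = {
    "cookie": "a small file a website stores on your device to remember you",
    "ad tracker": "a third-party script that records your behaviour across sites",
    "data broker": "a company that aggregates and sells personal data profiles",
    "encryption": "encoding data so that only authorised parties can read it",
}

def make_matching_task(glossary, seed=None):
    """Return numbered terms and lettered, shuffled definitions."""
    rng = random.Random(seed)
    terms = list(glossary)
    definitions = list(glossary.values())
    rng.shuffle(definitions)  # scramble the right-hand column
    left = [f"{i + 1}. {term}" for i, term in enumerate(terms)]
    right = [f"{chr(97 + i)}. {d}" for i, d in enumerate(definitions)]
    return left, right

if __name__ == "__main__":
    left, right = make_matching_task(GLOSSARY, seed=42)
    for line in left + right:
        print(line)
```

The same dictionary of pairs could equally be exported to a flashcard app, since most accept simple term/definition lists.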

A series of interactive videos

‘Do Not Track’ is an award-winning series of interactive videos, produced by a consortium of broadcasters. In seven parts, the videos consider such issues as who profits from the personal data that we generate online, the role of cookies in the internet economy, how online profiling is done, the data generated by mobile phones, and how algorithms interpret the data.

Each episode is between 5 and 10 minutes long, and is therefore ideal for asynchronous viewing. In a survey of critical data literacy tools (Sander, 2020), ‘Do not Track’ proved popular with the students who used it. I highly recommend it, but students will probably need a B2 level or higher.

More informational videos

If you do not have time to watch the ‘Do Not Track’ video series, you may want to use something shorter. There are a huge number of freely available videos about online privacy. I have selected just two which I think would be useful. You may be able to find something better!

1 Students watch a video about how cookies work. This video, from Vox, is well-produced and is just under 7 minutes long. The speaker speaks fairly rapidly, so captions may be helpful.

2 Students watch a video as an introduction to the topic of surveillance and privacy. This video, ‘Reclaim our Privacy’, was produced by ‘La Quadrature du Net’, a French advocacy group that promotes the digital rights and freedoms of citizens. It is short (3 mins) and can be watched with or without captions (English or 6 other languages). Its message is simple: political leaders should ensure that our online privacy is respected.

A simple matching task: ‘ten principles for online privacy’

1 Share the image below with all the students and ask them to take a few minutes matching the illustrations to the principles on the right. There is no need for anyone to write or say anything, but it doesn’t matter if some students write the answers in the chat box.

(Note: This image and the other ideas for this activity are adapted from a project developed by the International Computer Science Institute and the University of California, Berkeley for secondary school students and undergraduates. Each of the images corresponds to a course module, which contains a wide range of materials (videos, readings, discussions, etc.) that you may wish to explore more fully.)

2 Share the image below (which shows the answers in abbreviated form). Ask if anyone needs anything clarified.

You’re Leaving Footprints Principle: Your information footprint is larger than you think.

There’s No Anonymity Principle: There is no anonymity on the Internet.

Information Is Valuable Principle: Information about you on the Internet will be used by somebody in their interest — including against you.

Someone Could Listen Principle: Communication over a network, unless strongly encrypted, is never just between two parties.

Sharing Releases Control Principle: Sharing information over a network means you give up control over that information — forever.

Search Is Improving Principle: Just because something can’t be found today, doesn’t mean it can’t be found tomorrow.

Online Is Real Principle: The online world is inseparable from the “real” world.

Identity Isn’t Guaranteed Principle: Identity is not guaranteed on the Internet.

You Can’t Escape Principle: You can’t avoid having an information footprint by not going online.

Privacy Requires Work Principle: Only you have an interest in maintaining your privacy.

3 Wrap up with a discussion of these principles.

Hands-on exploration of privacy tools

Click on the link below to download the procedure for the activity, as well as supporting material.

A graphic novel

Written by Michael Keller and Josh Neufeld, and produced by Al Jazeera, this graphic novel ‘Terms of Service. Understanding our role in the world of Big Data’ provides a good overview of critical data literacy issues, offering lots of interesting, concrete examples of real cases. The language is, however, challenging (C1+). It may be especially useful for trainee teachers.

A website

The Privacy International website is an extraordinary goldmine of information and resources. Rather than recommending anything specific, my suggestion is that you, or your students, use the ‘Search’ function on the homepage and see where you end up.

In the first post in this 3-part series, I focussed on data collection practices in a number of ELT websites, as a way of introducing ‘critical data literacy’. Here, I explore the term in more detail.

Although the term ‘big data’ has been around for a while (see this article and infographic), it is less than ten years since it began to enter everyday language and found its way into the OED (2013). In the same year, Viktor Mayer-Schönberger and Kenneth Cukier published their best-selling ‘Big Data: A Revolution That Will Transform How We Live, Work, and Think’ (2013), and it was hard to avoid enthusiastic references in the media to the transformative potential of big data in every sector of society.

Since then, the use of big data and analytics has become ubiquitous. Massive data collection (and data surveillance) has now become routine and companies like Palantir, which specialise in big data analytics, have become part of everyday life. Palantir’s customers include the LAPD, the CIA, the US Immigration and Customs Enforcement (ICE) and the British Government. Its recent history includes links with Cambridge Analytica, assistance in an operation to arrest the parents of illegal migrant children, and a racial discrimination lawsuit where the company was accused of having ‘routinely eliminated’ Asian job applicants (settled out of court for $1.7 million).

Unsurprisingly, the datafication of society has not gone entirely uncontested. Whilst the vast majority of people seem happy to trade their personal data for convenience and connectivity, a growing number are concerned about who benefits most from this trade-off. On an institutional level, the EU introduced the General Data Protection Regulation (GDPR), which led to Google being fined €50 million for insufficient transparency in their privacy policy and their practices of processing personal data for the purposes of behavioural advertising. In the intellectual sphere, there has been a recent spate of books that challenge the practices of ubiquitous data collection, coining new terms like ‘surveillance capitalism’, ‘digital capitalism’ and ‘data colonialism’. Here are four recent books that I have found particularly interesting.

Beer, D. (2019). The Data Gaze. London: Sage

Couldry, N. & Mejias, U. A. (2019). The Costs of Connection. Stanford: Stanford University Press

Sadowski, J. (2020). Too Smart. Cambridge, Mass.: MIT Press

Zuboff, S. (2019). The Age of Surveillance Capitalism. New York: Public Affairs

The use of big data and analytics in education is also now a thriving industry, with its supporters claiming that these technologies can lead to greater personalization, greater efficiency of instruction and greater accountability. Opponents (myself included) argue that none of these supposed gains have been empirically demonstrated, and that the costs to privacy, equity and democracy outweigh any potential gains. There is a growing critical literature and useful, recent books include:

Bradbury, A. & Roberts-Holmes, G. (2018). The Datafication of Primary and Early Years Education. Abingdon: Routledge

Jarke, J. & Breiter, A. (Eds.) (2020). The Datafication of Education. Abingdon: Routledge

Williamson, B. (2017). Big Data in Education: The digital future of learning, policy and practice. London: Sage

Concomitant with the rapid growth in the use of digital tools for language learning and teaching, and therefore the rapid growth in the amount of data that learners were (mostly unwittingly) giving away, came a growing interest in the need for learners to develop a set of digital competencies, or literacies, which would enable them to use these tools effectively. In the same year that Mayer-Schönberger and Cukier brought out their ‘Big Data’ book, the first book devoted to digital literacies in English language teaching came out (Dudeney et al., 2013). They defined digital literacies as the individual and social skills needed to effectively interpret, manage, share and create meaning in the growing range of digital communication channels (Dudeney et al., 2013: 2). The book contained a couple of activities designed to raise students’ awareness of online identity issues, along with others intended to promote critical thinking about digitally-mediated information (what the authors call ‘information literacy’), but ‘critical literacy’ was missing from the authors’ framework.

Critical thinking and critical literacy are not the same thing. Although there is no generally agreed definition of the former, it is focussed primarily on logic and comprehension (Lee, 2011). Paul Dummett and John Hughes (2019: 4) describe it as ‘a mindset that involves thinking reflectively, rationally and reasonably’. The prototypical critical thinking activity involves the analysis of a piece of fake news (e.g. the task where students look at a website about tree octopuses in Dudeney et al. 2013: 198 – 203). Critical literacy, on the other hand, involves standing back from texts and technologies and viewing them as ‘circulating within a larger social and textual context’ (Warnick, 2002). Consideration of the larger social context necessarily entails consideration of unequal power relationships (Lee, 2011; Darvin, 2017), such as that between Google and the average user of Google. And it follows from this that critical literacy has a socio-political emancipatory function.

Critical digital literacy is now a growing field of enquiry (e.g. Pötzsch, 2019) and there is an awareness that digital competence frameworks, such as the Digital Competence Framework of the European Commission, are incomplete and out of date without the inclusion of critical digital literacy. Dudeney et al. (2013) clearly recognise the importance of including critical literacy in frameworks of digital literacies. In Pegrum et al. (2018, unfortunately paywalled), they update the framework from their 2013 book, and the biggest change is the inclusion of critical literacy. They divide this into the following:

  • critical digital literacy – closely related to information literacy
  • critical mobile literacy – focussing on issues brought to the fore by mobile devices, ranging from protecting privacy through to safeguarding mental and physical health
  • critical material literacy – concerned with the material conditions underpinning the use of digital technologies, ranging from the socioeconomic influences on technological access to the environmental impacts of technological manufacturing and disposal
  • critical philosophical literacy – concerned with the big questions posed to and about humanity as our lives become conjoined with the existence of our smart devices, robots and AI
  • critical academic literacy, which refers to the pressing need to conduct meaningful studies of digital technologies in place of what is at times ‘cookie-cutter’ research

I’m not entirely convinced by the subdivisions, but labelling in this area is still in its infancy. My particular interest here, critical data literacy, spans a number of their sub-divisions. The term that I am using, ‘critical data literacy’, which I’ve taken from Tygel & Kirsch (2016), is sometimes referred to as ‘critical big data literacy’ (Sander, 2020a) or ‘personal data literacy’ (Pangrazio & Selwyn, 2019). Whatever it is called, it is the development of ‘informed and critical stances toward how and why [our] data are being used’ (Pangrazio & Selwyn, 2018). One of the two practical activities in the Pegrum et al. (2018) article looks at precisely this area (the task requires students to consider the data that is collected by fitness apps). It will be interesting to see, when the new edition of the ‘Digital Literacies’ book comes out (perhaps some time next year), how many other activities take a more overtly critical stance.

In the next post, I’ll be looking at a range of practical activities for developing critical data literacy in the classroom. This involves both bridging the gaps in knowledge (about data, algorithms and online privacy) and learning, practically, how to implement ‘this knowledge for a more empowered internet usage’ (Sander, 2020b).

Without wanting to invalidate the suggestions in the next post, a word of caution is needed. Just as critical thinking activities in the ELT classroom cannot be assumed to lead to any demonstrable increase in critical thinking (although there may be other benefits to the activities), activities to promote critical literacy cannot be assumed to lead to any actual increase in critical literacy. The reaction of many people may well be ‘It’s not like it’s life or death or whatever’ (Pangrazio & Selwyn, 2018). And, perhaps, education is rarely, if ever, a solution to political and social problems, anyway. And perhaps, too, we shouldn’t worry too much about educational interventions not leading to their intended outcomes. Isn’t that almost always the case? But, with those provisos in mind, I’ll come back next time with some practical ideas.


Darvin, R. (2017). Language, Ideology, and Critical Digital Literacy. In: Thorne, S. & May, S. (eds) Language, Education and Technology. Encyclopedia of Language and Education (3rd ed.). Springer, Cham. pp. 17 – 30

Dudeney, G., Hockly, N. & Pegrum, M. (2013). Digital Literacies. Harlow: Pearson Education

Dummett, P. & Hughes, J. (2019). Critical Thinking in ELT. Boston: National Geographic Learning

Lee, C. J. (2011). Myths about critical literacy: What teachers need to unlearn. Journal of Language and Literacy Education [Online], 7 (1), 95-102.

Mayer-Schönberger, V. & Cukier, K. (2013). Big Data: A Revolution That Will Transform How We Live, Work, and Think. London: John Murray

Pangrazio, L. & Selwyn, N. (2018). ‘It’s not like it’s life or death or whatever’: young people’s understandings of social media data. Social Media + Society, 4 (3): pp. 1–9.

Pangrazio, L. & Selwyn, N. (2019). ‘Personal data literacies’: A critical literacies approach to enhancing understandings of personal digital data. New Media and Society, 21 (2): pp. 419 – 437

Pegrum, M., Dudeney, G. & Hockly, N. (2018). Digital literacies revisited. The European Journal of Applied Linguistics and TEFL, 7 (2), pp. 3-24

Pötzsch, H. (2019). Critical Digital Literacy: Technology in Education Beyond Issues of User Competence and Labour-Market Qualifications. tripleC: Communication, Capitalism & Critique, 17: pp. 221 – 240

Sander, I. (2020a). What is critical big data literacy and how can it be implemented? Internet Policy Review, 9 (2). DOI: 10.14763/2020.2.1479

Sander, I. (2020b). Critical big data literacy tools – Engaging citizens and promoting empowered internet usage. Data & Policy, 2: e5. DOI: 10.1017/dap.2020.5

Tygel, A. & Kirsch, R. (2016). Contributions of Paulo Freire for a Critical Data Literacy: a Popular Education Approach. The Journal of Community Informatics, 12 (3).

Warnick, B. (2002). Critical Literacy in a Digital Era. Mahwah, NJ: Lawrence Erlbaum Associates

Take the Cambridge Assessment English website, for example. When you connect to the site, you will see, at the bottom of the screen, a familiar (to people in Europe, at least) notification about the site’s use of cookies: the cookies consent.

You probably trust the site, so you ignore the notification and quickly move on to find the resource you are looking for. But if you did click on the hyperlinked ‘set cookies’, what would you find? The first link takes you to the ‘Cookie policy’ where you will be told that ‘We use cookies principally because we want to make our websites and mobile applications user-friendly, and we are interested in anonymous user behaviour. Generally our cookies don’t store sensitive or personally identifiable information such as your name and address or credit card details’. Scroll down, and you will find out more about the kind of cookies that are used. Besides the cookies that are necessary to the functioning of the site, you will see that there are also ‘third party cookies’. These are explained as follows: ‘Cambridge Assessment works with third parties who serve advertisements or present offers on our behalf and personalise the content that you see. Cookies may be used by those third parties to build a profile of your interests and show you relevant adverts on other sites. They do not store personal information directly but use a unique identifier in your browser or internet device. If you do not allow these cookies, you will experience less targeted content’.

This is not factually inaccurate: personal information is not stored directly. However, it is extremely easy for this information to be triangulated with other information to identify you personally. In addition to the data that you generate by having cookies on your device, Cambridge Assessment will also directly collect data about you. Depending on your interactions with Cambridge Assessment, this will include ‘your name, date of birth, gender, contact data including your home/work postal address, email address and phone number, transaction data including your credit card number when you make a payment to us, technical data including internet protocol (IP) address, login data, browser type and technology used to access this website’. They say they may share this data ‘with other people and/or businesses who provide services on our behalf or at our request’ and ‘with social media platforms, including but not limited to Facebook, Google, Google Analytics, LinkedIn, in pseudonymised or anonymised forms’.
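To make the triangulation point concrete, here is a small, entirely hypothetical Python sketch (the identifiers, site names and person are all invented for illustration): a tracker’s ‘pseudonymised’ browsing log contains no names, but a single directly collected record sharing the same unique identifier is enough to attach a name to every row.

```python
# Hypothetical illustration of 'triangulation'. The browsing log stores no
# 'personal information directly', only a unique identifier per browser.
browsing_log = [  # what a third-party tracker might hold
    {"uid": "a91f", "site": "exam-practice.example", "topic": "IELTS resits"},
    {"uid": "a91f", "site": "health-forum.example", "topic": "anxiety"},
]

# One record collected directly, e.g. when the same person registers
# for an exam or signs up to a mailing list.
registration = {"uid": "a91f", "name": "J. Doe", "email": "jdoe@example.com"}

def reidentify(log, person):
    """Attach an identity to every 'anonymous' row with a matching identifier."""
    return [
        {**row, "name": person["name"]}
        for row in log
        if row["uid"] == person["uid"]
    ]

profile = reidentify(browsing_log, registration)
# Every 'pseudonymous' visit is now linked to a named individual.
```

A one-line join like this is exactly what data aggregation makes trivial at scale: the ‘unique identifier in your browser’ only has to meet one named record, anywhere, once.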

In short, Cambridge Assessment may hold a huge amount of data about you and they can, basically, do what they like with it.

The cookie and privacy policies are fairly standard, as is the lack of transparency in their phrasing. Rather more transparency would include, for example, information about which particular ad trackers you are giving your consent to. This information can be found with a browser extension tool like Ghostery, and these trackers can be blocked. As you’ll see below, there are 5 ad trackers on this site. This is rather more than on other sites that English language teachers are likely to visit: ETS-TOEFL has 4, Macmillan English and Pearson have 3, CUP ELT and the British Council Teaching English have 1, and OUP ELT, IATEFL, BBC Learning English and Trinity College have none. The only site I could find with more was TESOL, which has 6. The blogs for all these organisations invariably have more trackers than their websites.

The use of numerous ad trackers is probably a reflection of the importance that Cambridge Assessment gives to social media marketing. There is a research paper, produced by Cambridge Assessment, which outlines the significance of big data and social media analytics. They have far more Facebook followers (and nearly 6 million likes) than any other ELT page, and they are proud of their #1 ranking in the education category of social media. The amount of data that can be collected here is enormous and it can be analysed in myriad ways using tools like Ubervu, Yomego and Hootsuite.

A little more transparency, however, would not go amiss. According to a report in Vox, Apple has announced that some time next year ‘iPhone users will start seeing a new question when they use many of the apps on their devices: Do they want the app to follow them around the internet, tracking their behavior?’ Obviously, Google and Facebook are none too pleased about this and will be fighting back. The implications for ad trackers and online advertising, more generally, are potentially huge. I wrote to Cambridge Assessment about this and was pleased to hear that ‘Cambridge Assessment are currently reviewing the process by which we obtain users consent for the use of cookies with the intention of moving to a much more transparent model in the future’. Let’s hope that other ELT organisations are doing the same.

You may be less bothered than I am by the thought of dozens of ad trackers following you around the net so that you can be served with more personalized ads. But the digital profile about you, to which these cookies contribute, may include information about your ethnicity, disabilities and sexual orientation. This profile is auctioned to advertisers when you visit some sites, allowing them to show you ‘personalized’ adverts based on the categories in your digital profile. Contrary to EU regulations, these categories may include whether you have cancer, a substance-abuse problem, your politics and religion (as reported in Fortune ).

But it’s not these cookies that are the most worrying aspect of our lack of digital privacy. It’s the sheer quantity of personal data that is stored about us. Every time we ask our students to use an app or a platform, we are asking them to divulge huge amounts of data. With ClassDojo, for example, this includes names, usernames, passwords, age, addresses, photographs, videos, documents, drawings, or audio files, IP addresses and browser details, clicks, referring URLs, time spent on site, and page views (Manolev et al., 2019; see also Williamson, 2019).

It is now widely recognized that the ‘consent’ that is obtained through cookie policies and other end-user agreements is largely spurious. These consent agreements, as Sadowski (2019) observes, are non-negotiated and non-negotiable; you either agree or you are denied access. What’s more, he adds, citing one study, it would take 76 days, working for 8 hours a day, to read the privacy policies a person typically encounters in a year. As a result, most of us choose not to choose when we accept online services (Cobo, 2019: 25). We have little, if any, control over how the data that is collected is used (Birch et al., 2020). More importantly, perhaps, when we ask our students to sign up to an educational app, we are asking / telling them to give away their personal data, not our own. They are unlikely to fully understand the consequences of doing so.
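The figure cited above is worth unpacking with some quick arithmetic (a back-of-envelope sketch in Python, using only the numbers already given):

```python
# Back-of-envelope check on the cited figure: reading the privacy policies
# a person typically encounters in a year would take 76 days at 8 hours of
# reading per day.
days = 76
hours_per_day = 8

total_hours = days * hours_per_day                  # total reading time
minutes_per_calendar_day = total_hours * 60 / 365   # if spread over a year

print(total_hours)                      # 608 hours of reading
print(round(minutes_per_calendar_day))  # ≈ 100 minutes every single day
```

Spread over a calendar year, that works out at roughly an hour and forty minutes of privacy-policy reading every single day, which is why ‘choosing not to choose’ is the only realistic option.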

The extent of this ignorance is also now widely recognized. In the UK, for example, two reports (cited by Sander, 2020) indicate that ‘only a third of people know that data they have not actively chosen to share has been collected’ (Doteveryone, 2018: 5), and that ‘less than half of British adult internet users are aware that apps collect their location and information on their personal preferences’ (Ofcom, 2019: 14).

The main problem with this has been expressed by the programmer and activist Richard Stallman, in an interview with New York magazine (Kulwin, 2018): ‘Companies are collecting data about people. The data that is collected will be abused. That’s not an absolute certainty, but it’s a practical, extreme likelihood, which is enough to make collection a problem.’

The abuse that Stallman is referring to can come in a variety of forms. At the relatively trivial end is the personalized advertising. Much more serious is the way that data aggregation companies will scrape data from a variety of sources, building up individual data profiles which can be used to make significant life-impacting decisions, such as final academic grades or whether one is offered a job, insurance or credit (Manolev et al., 2019). Cathy O’Neil’s (2016) best-selling ‘Weapons of Math Destruction’ spells out in detail how this abuse of data increases racial, gender and class inequalities. And after the revelations of Edward Snowden, we all know about the routine collection by states of huge amounts of data about, well, everyone. Whether it’s used for predictive policing or straightforward repression or something else, it is simply not possible for younger people, our students, to know what personal data they may regret divulging at a later date.

Digital educational providers may try to reassure us that they will keep data private, and not use it for advertising purposes, but the reassurances are hollow. These companies may change their terms and conditions further down the line, and examples exist of when this has happened (Moore, 2018: 210). But even if this does not happen, the data can never be secure. Illegal data breaches and cyber attacks are relentless, and education ranked worst at cybersecurity out of 17 major industries in one recent analysis (Foresman, 2018). One report suggests that one in five US schools and colleges have fallen victim to cyber-crime. Two weeks ago, I learnt (by chance, as I happened to be looking at my security settings on Chrome) that my passwords for Quizlet, Future Learn, Elsevier and Science Direct had been compromised by a data breach. To get a better understanding of the scale of data breaches, you might like to look at the UK’s IT Governance site, which lists detected and publicly disclosed data breaches and cyber attacks each month (36.6 million records breached in August 2020). If you scroll through the list, you’ll see how many of them are educational sites. You’ll also see a comment about how leaky organisations have been throughout lockdown … because they weren’t prepared for the sudden shift online.

Recent years have seen a growing consensus that ‘it is crucial for language teaching to […] encompass the digital literacies which are increasingly central to learners’ […] lives’ (Dudeney et al., 2013). Most of the focus has been on the skills that are needed to use digital media. There also appears to be growing interest in developing critical thinking skills in the context of digital media (e.g. Peachey, 2016) – identifying fake news and so on. To a much lesser extent, there has been some focus on ‘issues of digital identity, responsibility, safety and ethics when students use these technologies’ (Mavridi, 2020a: 172). Mavridi (2020b: 91) also briefly discusses the personal risks of digital footprints, but she does not have the space to explore more fully the notion of critical data literacy. This literacy involves an understanding of not just the personal risks of using ‘free’ educational apps and platforms, but of why they are ‘free’ in the first place. Sander (2020b) suggests that this literacy entails ‘an understanding of datafication, recognizing the risks and benefits of the growing prevalence of data collection, analytics, automation, and predictive systems, as well as being able to critically reflect upon these developments. This includes, but goes beyond the skills of, for example, changing one’s social media settings, and rather constitutes an altered view on the pervasive, structural, and systemic levels of changing big data systems in our datafied societies’.

In my next two posts, I will, first of all, explore in more detail the idea of critical data literacy, before suggesting a range of classroom resources.

(I posted about privacy in March 2014, when I looked at the connections between big data and personalized / adaptive learning. In another post, September 2014, I looked at the claims of the CEO of Knewton, who bragged that his company had five orders of magnitude more data about you than Google has: ‘… We literally have more data about our students than any company has about anybody else about anything, and it’s not even close.’ You might find both of these posts interesting.)


Birch, K., Chiappetta, M. & Artyushina, A. (2020). ‘The problem of innovation in technoscientific capitalism: data rentiership and the policy implications of turning personal digital data into a private asset’ Policy Studies, 41:5, 468-487, DOI: 10.1080/01442872.2020.1748264

Cobo, C. (2019). I Accept the Terms and Conditions.

Doteveryone. (2018). People, Power and Technology: The 2018 Digital Attitudes Report.

Dudeney, G., Hockly, N. & Pegrum, M. (2013). Digital Literacies. Harlow: Pearson Education

Foresman, B. (2018). Education ranked worst at cybersecurity out of 17 major industries. Edscoop, December 17, 2018.

Kulwin, K. (2018). ‘F*ck Them. We Need a Law’: A Legendary Programmer Takes on Silicon Valley. New York Intelligencer, 2018.

Manolev, J., Sullivan, A. & Slee, R. (2019). ‘Vast amounts of data about our children are being harvested and stored via apps used by schools’ EduResearch Matters, February 18, 2019.

Mavridi, S. (2020a). Fostering Students’ Digital Responsibility, Ethics and Safety Skills (Dress). In Mavridi, S. & Saumell, V. (Eds.) Digital Innovations and Research in Language Learning. Faversham, Kent: IATEFL. pp. 170 – 196

Mavridi, S. (2020b). Digital literacies and the new digital divide. In Mavridi, S. & Xerri, D. (Eds.) English for 21st Century Skills. Newbury, Berks.: Express Publishing. pp. 90 – 98

Moore, M. (2018). Democracy Hacked. London: Oneworld

Ofcom. (2019). Adults: Media use and attitudes report [Report].

O’Neil, C. (2016). Weapons of Math Destruction. London: Allen Lane

Peachey, N. (2016). Thinking Critically through Digital Media.

Sadowski, J. (2019). ‘When data is capital: Datafication, accumulation, and extraction’ Big Data and Society 6 (1)

Sander, I. (2020a). What is critical big data literacy and how can it be implemented? Internet Policy Review, 9 (2). DOI: 10.14763/2020.2.1479

Sander, I. (2020b). Critical big data literacy tools—Engaging citizens and promoting empowered internet usage. Data & Policy, 2: e5 doi:10.1017/dap.2020.5

Williamson, B. (2019). ‘Killer Apps for the Classroom? Developing Critical Perspectives on ClassDojo and the ‘Ed-tech’ Industry’ Journal of Professional Learning, 2019 (Semester 2)

The drive towards adaptive learning is being fuelled less by individual learners or teachers than it is by commercial interests, large educational institutions and even larger agencies, including national governments. How one feels about adaptive learning is likely to be shaped by one’s beliefs about how education should be managed.

Huge amounts of money are at stake. Education is ‘a global marketplace that is estimated conservatively to be worth in excess of $5 trillion per annum’ (Selwyn, Distrusting Educational Technology, 2013, p.2). With an eye on this pot, in one year, 2012, ‘venture capital funds, private equity investors and transnational corporations like Pearson poured over $1.1 billion into education technology companies’.[1] Knewton, just one of a number of adaptive learning companies, managed to raise $54 million before it signed multi-million dollar contracts with ELT publishers like Macmillan and Cambridge University Press. In ELT, some publishing companies prefer to sit back and wait to see what happens. Most, however, have their sights firmly set on the earnings potential and are fully aware that late-starters may never be able to catch up with the pace-setters.

The nexus of vested interests that is driving the move towards adaptive learning is both tight and complicated. Fuller accounts of this can be found in Stephen Ball’s ‘Education Inc.’ (2012) and Joel Spring’s ‘Education Networks’ (2012) but for this post I hope that a few examples will suffice.

Leading the way is the Bill and Melinda Gates Foundation, the world’s largest private foundation with endowments of almost $40 billion. One of its activities is the ‘Adaptive Learning Market Acceleration Program’, which seeks to promote adaptive learning and claims that the adaptive learning loop can defeat the iron triangle of costs, quality and access (referred to in The Selling Points of Adaptive Learning, above). It is worth noting that this foundation has also funded Teach Plus, an organisation that has been lobbying US ‘state legislatures to eliminate protection of senior teachers during layoffs’ (Spring, 2012, p.51). It also supports the Foundation for Excellence in Education, ‘a major advocacy group for expanding online instruction by changing state laws’ (ibid., p.51). The chairman of this foundation is Jeb Bush, brother of ex-president George W. Bush, who took the message of his foundation’s ‘Digital Learning Now!’ program on the road in 2011. The message, reports Spring (ibid., p.63), was simple: ‘the economic crises provided an opportunity to reduce school budgets by replacing teachers with online courses.’ The Foundation for Excellence in Education is also supported by the Walton Foundation (the Walmart family) and iQity, a company whose website makes clear its reasons for supporting Jeb Bush’s lobbying: ‘The iQity e-Learning Platform is the most complete solution available for the electronic search and delivery of curriculum, courses, and other learning objects. Delivering over one million courses each year, the iQity Platform is a proven success for students, teachers, school administrators, and district offices; as well as state, regional, and national education officials across the country.’[2]

Another supporter of the Foundation for Excellence in Education is the Pearson Foundation, the philanthropic arm of Pearson. The Pearson Foundation, in its turn, is supported by the Gates Foundation. In 2011, the Pearson Foundation received funding from the Gates Foundation to create 24 online courses, four of which would be distributed free and the others sold by Pearson the publishers (Spring, 2012, p.66).

The campaign to promote online adaptive learning is massively funded and extremely well-articulated. It receives support from transnational agencies such as the World Bank, WTO and OECD, and its arguments are firmly rooted in the discourse ‘of international management consultancies and education businesses’ (Ball, 2012, pp.11-12). It is in this context that observers like Neil Selwyn connect the growing use of digital technologies in education to the corporatisation and globalisation of education and neo-liberal ideology.

Adaptive learning also holds rich promise for those who can profit from the huge amount of data it will generate. Jose Ferreira, CEO of Knewton, acknowledges that adaptive learning has ‘the capacity to produce a tremendous amount of data, more than maybe any other industry’[3]. He continues: ‘Big data is going to impact education in a big way. It is inevitable. It has already begun. If you’re part of an education organization, you need to have a vision for how you will take advantage of big data. Wait too long and you’ll wake up to find that your competitors (and the instructors that use them) have left you behind with new capabilities and insights that seem almost magical.’ Rather paradoxically, he then concludes that ‘we must all commit to the principle that the data ultimately belong to the students and the schools’. It is not easy to understand how such data can be both the property of individuals and, at the same time, be used by educational organizations to gain competitive advantage.

The existence and exploitation of this data may also raise concerns about privacy. In the same way that many people do not fully understand the extent or purpose of ‘dataveillance’ by cookies when they are browsing the internet, students cannot be expected to fully grasp the extent or potential commercial use of the data that they generate when engaged in adaptive learning programs.

Selwyn (Distrusting Educational Technology 2013, pp.59-60) highlights a further problem connected with the arrival of big data. ‘Dataveillance’, he writes, also ‘functions to decrease the influence of ‘human’ experience and judgement, with it no longer seeming to matter what a teacher may personally know about a student in the face of his or her ‘dashboard’ profile and aggregated tally of positive and negative ‘events’. As such, there would seem to be little room for ‘professional’ expertise or interpersonal emotion when faced with such data. In these terms, institutional technologies could be said to be both dehumanizing and deprofessionalizing the relationships between people in an education context – be they students, teachers, administrators or managers.’
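Selwyn’s ‘aggregated tally’ can be pictured quite concretely. The sketch below (in Python; the student, the event names and the positive/negative classification are all invented for illustration and not taken from any real platform) shows just how little of a learner survives the reduction to a dashboard profile:

```python
from collections import Counter

# Hypothetical event stream from an adaptive learning platform.
# Everything a teacher might know about 'maria' is absent by design.
events = [
    ("maria", "correct_answer"),
    ("maria", "late_submission"),
    ("maria", "correct_answer"),
    ("maria", "skipped_exercise"),
]

POSITIVE = {"correct_answer"}
NEGATIVE = {"late_submission", "skipped_exercise"}

def dashboard_profile(events, student):
    """Collapse a student's entire event history into two numbers."""
    tally = Counter(kind for name, kind in events if name == student)
    return {
        "positive": sum(n for kind, n in tally.items() if kind in POSITIVE),
        "negative": sum(n for kind, n in tally.items() if kind in NEGATIVE),
    }

print(dashboard_profile(events, "maria"))  # {'positive': 2, 'negative': 2}
```

The point is not that such code is sinister in itself, but that once the profile exists, it is the tally, not the teacher’s judgement, that travels through the institution.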

Adaptive learning in online and blended programs may well offer a number of advantages, but these will need to be weighed against the replacement or deskilling of teachers, and the growing control of big business over educational processes and content. Does adaptive learning increase the risk of transforming language teaching into a digital diploma mill (Noble, Digital Diploma Mills: The automation of higher education 2002)?


Evgeny Morozov’s 2013 best-seller, ‘To Save Everything, Click Here’, takes issue with our current preoccupation with finding technological solutions to complex and contentious problems. If adaptive learning is being presented as a solution, what is the problem to which it is the solution? In Morozov’s analysis, it is not an educational problem. ‘Digital technologies might be a perfect solution to some problems,’ he writes, ‘but those problems don’t include education – not if by education we mean the development of the skills to think critically about any given issue’ (Morozov, 2013, p.8). Only if we conceive of education as the transmission of bits of information (and in the case of language education as the transmission of bits of linguistic information), could adaptive learning be seen as some sort of solution to an educational problem. The push towards adaptive learning in ELT can be seen, in Morozov’s terms, as reaching ‘for the answer before the questions have been fully asked’ (ibid., p.6).

The world of education has been particularly susceptible to the dreams of a ‘technical fix’. Its history, writes Neil Selwyn, ‘has been characterised by attempts to use the ‘power’ of technology in order to solve problems that are non-technological in nature. […] This faith in the technical fix is pervasive and relentless – especially in the minds of the key interests and opinion formers of this digital age. As the co-founder of the influential Wired magazine reasoned more recently, ‘tools and technology drive us. Even if a problem has been caused by technology, the answer will always be more technology’ (Selwyn, Education in a Digital World 2013, p.36).

Morozov cautions against solutionism in all fields of human activity, pointing out that, by the time a problem is ‘solved’, it becomes something else entirely. Anyone involved in language teaching would be well-advised to identify and prioritise the problems that matter to them before jumping to the conclusion that adaptive learning is the ‘solution’. Like other technologies, it might, just possibly, ‘reproduce, perpetuate, strengthen and deepen existing patterns of social relations and structures – albeit in different forms and guises. In this respect, then, it is perhaps best to approach educational technology as a ‘problem changer’ rather than a ‘problem solver’’ (Selwyn, Education in a Digital World 2013, p.21).

[1] Philip McRae, ‘Rebirth of the Teaching Machine through the Seduction of Data Analytics: This time it’s personal’, 14 April 2013 (last accessed 13 January 2014)

[2] (last accessed 13 January, 2014)