
In the last post, I mentioned a lesson plan from an article by Pegrum, M., Dudeney, G. & Hockly, N. (2018. Digital literacies revisited. The European Journal of Applied Linguistics and TEFL, 7 (2), pp. 3-24), in which students discuss the data that is collected by fitness apps and the possibility of using this data to calculate health insurance premiums, before carrying out and sharing online research about companies that track personal data. It’s a nice plan, but unfortunately paywalled; you could try requesting a copy through ResearchGate.

The only other off-the-shelf lesson plan I have been able to find is entitled ‘You and Your Data’, from the British Council. Suitable for level B2, this plan, along with a photocopiable pdf, contains a vocabulary task (matching), a reading text (you and your data; who uses our data and why; can you protect your data) with true / false and sentence-completion tasks, and a discussion (what do you do to protect your data). The material was written to coincide with Safer Internet Day (an EU project), which takes place in early February (next date: 9 February 2021). The related website, Better Internet for Kids, contains links to a wide range of educational resources for younger learners.

For other resources, a good first stop is Ina Sander’s ‘A Critically Commented Guide to Data Literacy Tools’, in which she describes and evaluates a wide range of educational online resources for developing critical data literacy. Some of the resources that I discuss below are also evaluated in this guide. Here are some suggestions for learning / teaching resources.

A glossary

This is simply a glossary of terms that are useful in discussing data issues. It could easily be converted into a matching exercise or flashcards.
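The conversion into a matching exercise is easy to automate. The sketch below, in Python, shuffles the definitions so learners have to pair them up again; the three terms and definitions here are my own illustrative examples, not taken from the glossary linked above.

```python
import random

# A sketch of turning a glossary into a simple matching exercise:
# print the terms in order, then the definitions in shuffled order,
# and ask learners to pair them up again.
glossary = {
    "cookie": "a small file a website stores on your device",
    "ad tracker": "code that follows your behaviour across websites",
    "data breach": "an incident in which personal data is exposed",
}

terms = list(glossary)
definitions = list(glossary.values())
random.shuffle(definitions)

for i, term in enumerate(terms, 1):
    print(f"{i}. {term}")
for letter, definition in zip("abc", definitions):
    print(f"{letter}) {definition}")
```

The same dictionary could just as easily feed a flashcard routine, showing a term and waiting for a keypress before revealing its definition.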

A series of interactive videos

‘Do Not Track’ is an award-winning series of interactive videos, produced by a consortium of broadcasters. In seven parts, the videos consider such issues as who profits from the personal data that we generate online, the role of cookies in the internet economy, how online profiling is done, the data generated by mobile phones and how algorithms interpret the data.

Each episode is between 5 and 10 minutes long, and is therefore ideal for asynchronous viewing. In a survey of critical data literacy tools (Sander, 2020), ‘Do not Track’ proved popular with the students who used it. I highly recommend it, but students will probably need a B2 level or higher.

More informational videos

If you do not have time to watch the ‘Do Not Track’ video series, you may want to use something shorter. There are a huge number of freely available videos about online privacy. I have selected just two which I think would be useful. You may be able to find something better!

1 Students watch a video about how cookies work. This video, from Vox, is well-produced and is just under 7 minutes long. The speaker speaks fairly rapidly, so captions may be helpful.

2 Students watch a video as an introduction to the topic of surveillance and privacy. This video, ‘Reclaim our Privacy’, was produced by ‘La Quadrature du Net’, a French advocacy group that promotes digital rights and freedoms of citizens. It is short (3 mins) and can be watched with or without captions (English or 6 other languages). Its message is simple: political leaders should ensure that our online privacy is respected.

A simple matching task: ‘Ten principles for online privacy’

1 Share the image below with all the students and ask them to take a few minutes matching the illustrations to the principles on the right. There is no need for anyone to write or say anything, but it doesn’t matter if some students write the answers in the chat box.

(Note: This image and the other ideas for this activity are adapted from https://teachingprivacy.org/ , a project developed by the International Computer Science Institute and the University of California, Berkeley for secondary school students and undergraduates. Each of the images corresponds to a course module, which contains a wide range of materials (videos, readings, discussions, etc.) that you may wish to explore more fully.)

2 Share the image below (which shows the answers in abbreviated form). Ask if anyone needs anything clarified.

You’re Leaving Footprints Principle: Your information footprint is larger than you think.

There’s No Anonymity Principle: There is no anonymity on the Internet.

Information Is Valuable Principle: Information about you on the Internet will be used by somebody in their interest — including against you.

Someone Could Listen Principle: Communication over a network, unless strongly encrypted, is never just between two parties.

Sharing Releases Control Principle: Sharing information over a network means you give up control over that information — forever.

Search Is Improving Principle: Just because something can’t be found today, doesn’t mean it can’t be found tomorrow.

Online Is Real Principle: The online world is inseparable from the “real” world.

Identity Isn’t Guaranteed Principle: Identity is not guaranteed on the Internet.

You Can’t Escape Principle: You can’t avoid having an information footprint by not going online.

Privacy Requires Work Principle: Only you have an interest in maintaining your privacy.

3 Wrap up with a discussion of these principles.

Hands-on exploration of privacy tools

Click on the link below to download the procedure for the activity, as well as supporting material.

A graphic novel

Written by Michael Keller and Josh Neufeld, and produced by Al Jazeera, the graphic novel ‘Terms of Service: Understanding our role in the world of Big Data’ provides a good overview of critical data literacy issues, offering lots of interesting, concrete examples of real cases. The language is, however, challenging (C1+). It may be especially useful for trainee teachers.

A website

The Privacy International website is an extraordinary goldmine of information and resources. Rather than recommending anything specific, my suggestion is that you, or your students, use the ‘Search’ function on the homepage and see where you end up.

In the first post in this 3-part series, I focussed on data collection practices in a number of ELT websites, as a way of introducing ‘critical data literacy’. Here, I explore the term in more detail.

Although the term ‘big data’ has been around for a while (see this article and infographic), it is less than ten years since it began to enter everyday language and found its way into the OED (2013). In the same year, Viktor Mayer-Schönberger and Kenneth Cukier published their best-selling ‘Big Data: A Revolution That Will Transform How We Live, Work, and Think’ (2013), and it was hard to avoid enthusiastic references in the media to the transformative potential of big data in every sector of society.

Since then, the use of big data and analytics has become ubiquitous. Massive data collection (and data surveillance) is now routine, and companies like Palantir, which specialise in big data analytics, have become part of everyday life. Palantir’s customers include the LAPD, the CIA, US Immigration and Customs Enforcement (ICE) and the British Government. Its recent history includes links with Cambridge Analytica, assistance in an operation to arrest the parents of illegal migrant children, and a racial discrimination lawsuit in which the company was accused of having ‘routinely eliminated’ Asian job applicants (settled out of court for $1.7 million).

Unsurprisingly, the datafication of society has not gone entirely uncontested. Whilst the vast majority of people seem happy to trade their personal data for convenience and connectivity, a growing number are concerned about who benefits most from this trade-off. On an institutional level, the EU introduced the General Data Protection Regulation (GDPR), which led to Google being fined €50 million for insufficient transparency in their privacy policy and their practices of processing personal data for the purposes of behavioural advertising. In the intellectual sphere, there has been a recent spate of books that challenge the practices of ubiquitous data collection, coining new terms like ‘surveillance capitalism’, ‘digital capitalism’ and ‘data colonialism’. Here are four recent books that I have found particularly interesting.

Beer, D. (2019). The Data Gaze. London: Sage

Couldry, N. & Mejias, U. A. (2019). The Costs of Connection. Stanford: Stanford University Press

Sadowski, J. (2020). Too Smart. Cambridge, Mass.: MIT Press

Zuboff, S. (2019). The Age of Surveillance Capitalism. New York: Public Affairs

The use of big data and analytics in education is also now a thriving industry, with its supporters claiming that these technologies can lead to greater personalization, greater efficiency of instruction and greater accountability. Opponents (myself included) argue that none of these supposed gains have been empirically demonstrated, and that the costs to privacy, equity and democracy outweigh any potential gains. There is a growing critical literature and useful, recent books include:

Bradbury, A. & Roberts-Holmes, G. (2018). The Datafication of Primary and Early Years Education. Abingdon: Routledge

Jarke, J. & Breiter, A. (Eds.) (2020). The Datafication of Education. Abingdon: Routledge

Williamson, B. (2017). Big Data in Education: The digital future of learning, policy and practice. London: Sage

Concomitant with the rapid growth in the use of digital tools for language learning and teaching, and therefore the rapid growth in the amount of data that learners were (mostly unwittingly) giving away, came a growing interest in the need for learners to develop a set of digital competencies, or literacies, which would enable them to use these tools effectively. In the same year that Mayer-Schönberger and Cukier brought out their ‘Big Data’ book, the first book devoted to digital literacies in English language teaching came out (Dudeney et al., 2013). They defined digital literacies as the individual and social skills needed to effectively interpret, manage, share and create meaning in the growing range of digital communication channels (Dudeney et al., 2013: 2). The book contained a couple of activities designed to raise students’ awareness of online identity issues, along with others intended to promote critical thinking about digitally-mediated information (what the authors call ‘information literacy’), but ‘critical literacy’ was missing from the authors’ framework.

Critical thinking and critical literacy are not the same thing. Although there is no generally agreed definition of the former (with a small ‘c’), it is focussed primarily on logic and comprehension (Lee, 2011). Paul Dummett and John Hughes (2019: 4) describe it as ‘a mindset that involves thinking reflectively, rationally and reasonably’. The prototypical critical thinking activity involves the analysis of a piece of fake news (e.g. the task where students look at a website about tree octopuses in Dudeney et al. 2013: 198 – 203). Critical literacy, on the other hand, involves standing back from texts and technologies and viewing them as ‘circulating within a larger social and textual context’ (Warnick, 2002). Consideration of the larger social context necessarily entails consideration of unequal power relationships (Lee, 2011; Darvin, 2017), such as that between Google and the average user of Google. And it follows from this that critical literacy has a socio-political emancipatory function.

Critical digital literacy is now a growing field of enquiry (e.g. Pötzsch, 2019) and there is an awareness that digital competence frameworks, such as the Digital Competence Framework of the European Commission, are incomplete and out of date without the inclusion of critical digital literacy. Dudeney et al. (2013) clearly recognise the importance of including critical literacy in frameworks of digital literacies. In Pegrum et al. (2018, unfortunately paywalled), they update the framework from their 2013 book, and the biggest change is the inclusion of critical literacy. They divide this into the following:

  • critical digital literacy – closely related to information literacy
  • critical mobile literacy – focussing on issues brought to the fore by mobile devices, ranging from protecting privacy through to safeguarding mental and physical health
  • critical material literacy – concerned with the material conditions underpinning the use of digital technologies, ranging from the socioeconomic influences on technological access to the environmental impacts of technological manufacturing and disposal
  • critical philosophical literacy – concerned with the big questions posed to and about humanity as our lives become conjoined with the existence of our smart devices, robots and AI
  • critical academic literacy, which refers to the pressing need to conduct meaningful studies of digital technologies in place of what is at times ‘cookie-cutter’ research

I’m not entirely convinced by the subdivisions, but labelling in this area is still in its infancy. My particular interest here, in critical data literacy, spans a number of their subdivisions. And the term that I am using, ‘critical data literacy’, which I’ve taken from Tygel & Kirsch (2016), is sometimes referred to as ‘critical big data literacy’ (Sander, 2020a) or ‘personal data literacy’ (Pangrazio & Selwyn, 2019). Whatever it is called, it is the development of ‘informed and critical stances toward how and why [our] data are being used’ (Pangrazio & Selwyn, 2018). One of the two practical activities in the Pegrum et al. (2018) article looks at precisely this area (the task requires students to consider the data that is collected by fitness apps). It will be interesting to see, when the new edition of the ‘Digital Literacies’ book comes out (perhaps some time next year), how many other activities take a more overtly critical stance.

In the next post, I’ll be looking at a range of practical activities for developing critical data literacy in the classroom. This involves both bridging the gaps in knowledge (about data, algorithms and online privacy) and learning, practically, how to implement ‘this knowledge for a more empowered internet usage’ (Sander, 2020b).

Without wanting to invalidate the suggestions in the next post, a word of caution is needed. Just as critical thinking activities in the ELT classroom cannot be assumed to lead to any demonstrable increase in critical thinking (although there may be other benefits to the activities), activities to promote critical literacy cannot be assumed to lead to any actual increase in critical literacy. The reaction of many people may well be ‘It’s not like it’s life or death or whatever’ (Pangrazio & Selwyn, 2018). And, perhaps, education is rarely, if ever, a solution to political and social problems, anyway. And perhaps, too, we shouldn’t worry too much about educational interventions not leading to their intended outcomes. Isn’t that almost always the case? But, with those provisos in mind, I’ll come back next time with some practical ideas.

REFERENCES

Darvin R. (2017). Language, Ideology, and Critical Digital Literacy. In: Thorne S., May S. (eds) Language, Education and Technology. Encyclopedia of Language and Education (3rd ed.). Springer, Cham. pp. 17 – 30 https://doi.org/10.1007/978-3-319-02237-6_35

Dudeney, G., Hockly, N. & Pegrum, M. (2013). Digital Literacies. Harlow: Pearson Education

Dummett, P. & Hughes, J. (2019). Critical Thinking in ELT. Boston: National Geographic Learning

Lee, C. J. (2011). Myths about critical literacy: What teachers need to unlearn. Journal of Language and Literacy Education [Online], 7 (1), 95-102. Available at http://www.coa.uga.edu/jolle/2011_1/lee.pdf

Mayer-Schönberger, V. & Cukier, K. (2013). Big Data: A Revolution That Will Transform How We Live, Work, and Think. London: John Murray

Pangrazio, L. & Selwyn, N. (2018). ‘It’s not like it’s life or death or whatever’: young people’s understandings of social media data. Social Media + Society, 4 (3): pp. 1–9. https://journals.sagepub.com/doi/pdf/10.1177/2056305118787808

Pangrazio, L. & Selwyn, N. (2019). ‘Personal data literacies’: A critical literacies approach to enhancing understandings of personal digital data. New Media and Society, 21 (2): pp. 419 – 437

Pegrum, M., Dudeney, G. & Hockly, N. (2018). Digital literacies revisited. The European Journal of Applied Linguistics and TEFL, 7 (2), pp. 3-24

Pötzsch, H. (2019). Critical Digital Literacy: Technology in Education Beyond Issues of User Competence and Labour-Market Qualifications. tripleC: Communication, Capitalism & Critique, 17: pp. 221 – 240 Available at https://www.triple-c.at/index.php/tripleC/article/view/1093

Sander, I. (2020a). What is critical big data literacy and how can it be implemented? Internet Policy Review, 9 (2). DOI: 10.14763/2020.2.1479 https://www.econstor.eu/bitstream/10419/218936/1/2020-2-1479.pdf

Sander, I. (2020b). Critical big data literacy tools – Engaging citizens and promoting empowered internet usage. Data & Policy, 2: DOI: https://doi.org/10.1017/dap.2020.5

Tygel, A. & Kirsch, R. (2016). Contributions of Paulo Freire for a Critical Data Literacy: a Popular Education Approach. The Journal of Community Informatics, 12 (3). Available at http://www.ci-journal.net/index.php/ciej/article/view/1296

Warnick, B. (2002). Critical Literacy in a Digital Era. Mahwah, NJ, Lawrence Erlbaum Associates

Take the Cambridge Assessment English website, for example. When you connect to the site, you will see, at the bottom of the screen, a familiar (to people in Europe, at least) notification about the site’s use of cookies: the cookie consent notice.

You probably trust the site, so ignore the notification and quickly move on to find the resource you are looking for. But if you did click on hyperlinked ‘set cookies’, what would you find? The first link takes you to the ‘Cookie policy’ where you will be told that ‘We use cookies principally because we want to make our websites and mobile applications user-friendly, and we are interested in anonymous user behaviour. Generally our cookies don’t store sensitive or personally identifiable information such as your name and address or credit card details’. Scroll down, and you will find out more about the kind of cookies that are used. Besides the cookies that are necessary to the functioning of the site, you will see that there are also ‘third party cookies’. These are explained as follows: ‘Cambridge Assessment works with third parties who serve advertisements or present offers on our behalf and personalise the content that you see. Cookies may be used by those third parties to build a profile of your interests and show you relevant adverts on other sites. They do not store personal information directly but use a unique identifier in your browser or internet device. If you do not allow these cookies, you will experience less targeted content’.
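The ‘unique identifier in your browser’ that the policy mentions is easier to picture with a concrete example. The sketch below, in Python, parses the kind of cookie a third-party ad network might set; the cookie name, identifier and domain are entirely hypothetical, invented for illustration.

```python
from http.cookies import SimpleCookie

# A hypothetical third-party tracking cookie. It stores no personal
# information directly -- only a pseudonymous unique identifier that
# the ad network can match against a profile of your interests.
raw = ("uid=a3f9c2e1-7b4d-4e2a-9c1f-0d8e6b5a4f3c; "
       "Domain=.ads.example.com; Max-Age=31536000; Secure")

cookie = SimpleCookie()
cookie.load(raw)

# The identifier is meaningless in isolation ...
print(cookie["uid"].value)
# ... but a year-long lifetime on a third-party domain is exactly what
# lets the same browser be recognised across many different sites.
print(cookie["uid"]["domain"], cookie["uid"]["max-age"])
```

Nothing here is a name or an address, which is why the policy can truthfully say that no personal information is stored directly; the linking to a person happens later, server-side.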

This is not factually inaccurate: personal information is not stored directly. However, it is extremely easy for this information to be triangulated with other information to identify you personally. In addition to the data that you generate by having cookies on your device, Cambridge Assessment will also directly collect data about you. Depending on your interactions with Cambridge Assessment, this will include ‘your name, date of birth, gender, contact data including your home/work postal address, email address and phone number, transaction data including your credit card number when you make a payment to us, technical data including internet protocol (IP) address, login data, browser type and technology used to access this website’. They say they may share this data ‘with other people and/or businesses who provide services on our behalf or at our request’ and ‘with social media platforms, including but not limited to Facebook, Google, Google Analytics, LinkedIn, in pseudonymised or anonymised forms’.

In short, Cambridge Assessment may hold a huge amount of data about you and they can, basically, do what they like with it.

The cookie and privacy policies are fairly standard, as is the lack of transparency in their phrasing. Rather more transparency would include, for example, information about which particular ad trackers you are giving your consent to. This information can be found with a browser extension tool like Ghostery, and these trackers can be blocked. As you’ll see below, there are 5 ad trackers on this site. This is rather more than other sites that English language teachers are likely to go to. ETS-TOEFL has 4, Macmillan English and Pearson have 3, CUP ELT and the British Council Teaching English have 1, and OUP ELT, IATEFL, BBC Learning English and Trinity College have none. Only TESOL, with 6 ad trackers, has more. The blogs for all these organisations invariably have more trackers than their websites.

The use of numerous ad trackers is probably a reflection of the importance that Cambridge Assessment gives to social media marketing. There is a research paper, produced by Cambridge Assessment, which outlines the significance of big data and social media analytics. They have far more Facebook followers (and nearly 6 million likes) than any other ELT page, and they are proud of their #1 ranking in the education category of social media. The amount of data that can be collected here is enormous and it can be analysed in myriad ways using tools like Ubervu, Yomego and Hootsuite.

A little more transparency, however, would not go amiss. According to a report in Vox, Apple has announced that some time next year ‘iPhone users will start seeing a new question when they use many of the apps on their devices: Do they want the app to follow them around the internet, tracking their behavior?’ Obviously, Google and Facebook are none too pleased about this and will be fighting back. The implications for ad trackers and online advertising, more generally, are potentially huge. I wrote to Cambridge Assessment about this and was pleased to hear that ‘Cambridge Assessment are currently reviewing the process by which we obtain users consent for the use of cookies with the intention of moving to a much more transparent model in the future’. Let’s hope that other ELT organisations are doing the same.

You may be less bothered than I am by the thought of dozens of ad trackers following you around the net so that you can be served with more personalized ads. But the digital profile about you, to which these cookies contribute, may include information about your ethnicity, disabilities and sexual orientation. This profile is auctioned to advertisers when you visit some sites, allowing them to show you ‘personalized’ adverts based on the categories in your digital profile. Contrary to EU regulations, these categories may include whether you have cancer, a substance-abuse problem, your politics and religion (as reported in Fortune https://fortune.com/2019/01/28/google-iab-sensitive-profiles/ ).

But it’s not these cookies that are the most worrying aspect about our lack of digital privacy. It’s the sheer quantity of personal data that is stored about us. Every time we ask our students to use an app or a platform, we are asking them to divulge huge amounts of data. With ClassDojo, for example, this includes names, usernames, passwords, age, addresses, photographs, videos, documents, drawings, or audio files, IP addresses and browser details, clicks, referring URL’s, time spent on site, and page views (Manolev et al., 2019; see also Williamson, 2019).

It is now widely recognized that the ‘consent’ that is obtained through cookie policies and other end-user agreements is largely spurious. These consent agreements, as Sadowski (2019) observes, are non-negotiated, and non-negotiable; you either agree or you are denied access. What’s more, he adds, citing one study, it would take 76 days, working for 8 hours a day, to read the privacy policies a person typically encounters in a year. As a result, most of us choose not to choose when we accept online services (Cobo, 2019: 25). We have little, if any, control over how the data that is collected is used (Birch et al., 2020). More importantly, perhaps, when we ask our students to sign up to an educational app, we are asking / telling them to give away their personal data, not just ours. They are unlikely to fully understand the consequences of doing so.
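The figure Sadowski cites is worth pausing on. A quick back-of-the-envelope calculation (simply unpacking the 76-days claim, not new data) shows why ‘choosing not to choose’ is the only realistic option:

```python
# Unpacking the figure cited by Sadowski (2019): reading the privacy
# policies a person typically encounters in a year would take 76 days
# at 8 hours a day.
work_days = 76
hours_per_day = 8

total_hours = work_days * hours_per_day  # hours per year
per_calendar_day = total_hours / 365     # spread over the whole year

print(total_hours)                  # 608
print(round(per_calendar_day, 1))   # 1.7 -- hours per day, every day
```

In other words, genuinely informed consent would cost well over an hour of reading every single day of the year.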

The extent of this ignorance is also now widely recognized. In the UK, for example, two reports (cited by Sander, 2020) indicate that ‘only a third of people know that data they have not actively chosen to share has been collected’ (Doteveryone, 2018: 5), and that ‘less than half of British adult internet users are aware that apps collect their location and information on their personal preferences’ (Ofcom, 2019: 14).

The main problem with this has been expressed by programmer and activist, Richard Stallman, in an interview with New York magazine (Kulwin, 2018): Companies are collecting data about people. The data that is collected will be abused. That’s not an absolute certainty, but it’s a practical, extreme likelihood, which is enough to make collection a problem.

The abuse that Stallman is referring to can come in a variety of forms. At the relatively trivial end is the personalized advertising. Much more serious is the way that data aggregation companies will scrape data from a variety of sources, building up individual data profiles which can be used to make significant life-impacting decisions, such as final academic grades or whether one is offered a job, insurance or credit (Manolev et al., 2019). Cathy O’Neil’s (2016) best-selling ‘Weapons of Math Destruction’ spells out in detail how this abuse of data increases racial, gender and class inequalities. And after the revelations of Edward Snowden, we all know about the routine collection by states of huge amounts of data about, well, everyone. Whether it’s used for predictive policing or straightforward repression or something else, it is simply not possible for younger people, our students, to know what personal data they may regret divulging at a later date.

Digital educational providers may try to reassure us that they will keep data private, and not use it for advertising purposes, but the reassurances are hollow. These companies may change their terms and conditions further down the line, and there are examples of this happening (Moore, 2018: 210). But even if this does not happen, the data can never be secure. Illegal data breaches and cyber attacks are relentless, and education ranked worst at cybersecurity out of 17 major industries in one recent analysis (Foresman, 2018). One report suggests that one in five US schools and colleges have fallen victim to cyber-crime. Two weeks ago, I learnt (by chance, as I happened to be looking at my security settings on Chrome) that my passwords for Quizlet, Future Learn, Elsevier and Science Direct had been compromised by a data breach. To get a better understanding of the scale of data breaches, you might like to look at the UK’s IT Governance site, which lists detected and publicly disclosed data breaches and cyber attacks each month (36.6 million records breached in August 2020). If you scroll through the list, you’ll see how many of them are educational sites. You’ll also see a comment about how leaky organisations have been throughout lockdown … because they weren’t prepared for the sudden shift online.

Recent years have seen a growing consensus that ‘it is crucial for language teaching to […] encompass the digital literacies which are increasingly central to learners’ […] lives’ (Dudeney et al., 2013). Most of the focus has been on the skills that are needed to use digital media. There also appears to be growing interest in developing critical thinking skills in the context of digital media (e.g. Peachey, 2016) – identifying fake news and so on. To a much lesser extent, there has been some focus on ‘issues of digital identity, responsibility, safety and ethics when students use these technologies’ (Mavridi, 2020a: 172). Mavridi (2020b: 91) also briefly discusses the personal risks of digital footprints, but she does not have the space to explore more fully the notion of critical data literacy. This literacy involves an understanding of not just the personal risks of using ‘free’ educational apps and platforms, but of why they are ‘free’ in the first place. Sander (2020b) suggests that this literacy entails ‘an understanding of datafication, recognizing the risks and benefits of the growing prevalence of data collection, analytics, automation, and predictive systems, as well as being able to critically reflect upon these developments. This includes, but goes beyond the skills of, for example, changing one’s social media settings, and rather constitutes an altered view on the pervasive, structural, and systemic levels of changing big data systems in our datafied societies’.

In my next two posts, I will, first of all, explore in more detail the idea of critical data literacy, before suggesting a range of classroom resources.

(I posted about privacy in March 2014, when I looked at the connections between big data and personalized / adaptive learning. In another post, September 2014, I looked at the claims of the CEO of Knewton, who bragged that his company had ‘five orders of magnitude more data about you than Google has. … We literally have more data about our students than any company has about anybody else about anything, and it’s not even close.’ You might find both of these posts interesting.)

References

Birch, K., Chiappetta, M. & Artyushina, A. (2020). ‘The problem of innovation in technoscientific capitalism: data rentiership and the policy implications of turning personal digital data into a private asset’ Policy Studies, 41:5, 468-487, DOI: 10.1080/01442872.2020.1748264

Cobo, C. (2019). I Accept the Terms and Conditions. https://adaptivelearninginelt.files.wordpress.com/2020/01/41acf-cd84b5_7a6e74f4592c460b8f34d1f69f2d5068.pdf

Doteveryone. (2018). People, Power and Technology: The 2018 Digital Attitudes Report. https://attitudes.doteveryone.org.uk

Dudeney, G., Hockly, N. & Pegrum, M. (2013). Digital Literacies. Harlow: Pearson Education

Foresman, B. (2018). Education ranked worst at cybersecurity out of 17 major industries. Edscoop, December 17, 2018. https://edscoop.com/education-ranked-worst-at-cybersecurity-out-of-17-major-industries/

Kulwin, K. (2018). ‘F*ck Them. We Need a Law’: A Legendary Programmer Takes on Silicon Valley. New York Intelligencer. https://nymag.com/intelligencer/2018/04/richard-stallman-rms-on-privacy-data-and-free-software.html

Manolev, J., Sullivan, A. & Slee, R. (2019). ‘Vast amounts of data about our children are being harvested and stored via apps used by schools’ EduResearch Matters, February 18, 2019. https://www.aare.edu.au/blog/?p=3712

Mavridi, S. (2020a). Fostering Students’ Digital Responsibility, Ethics and Safety Skills (Dress). In Mavridi, S. & Saumell, V. (Eds.) Digital Innovations and Research in Language Learning. Faversham, Kent: IATEFL. pp. 170 – 196

Mavridi, S. (2020b). Digital literacies and the new digital divide. In Mavridi, S. & Xerri, D. (Eds.) English for 21st Century Skills. Newbury, Berks.: Express Publishing. pp. 90 – 98

Moore, M. (2018). Democracy Hacked. London: Oneworld

Ofcom. (2019). Adults: Media use and attitudes report [Report]. https://www.ofcom.org.uk/__data/assets/pdf_file/0021/149124/adults-media-use-and-attitudes-report.pdf

O’Neil, C. (2016). Weapons of Math Destruction. London: Allen Lane

Peachey, N. (2016). Thinking Critically through Digital Media. http://peacheypublications.com/

Sadowski, J. (2019). ‘When data is capital: Datafication, accumulation, and extraction’ Big Data and Society 6 (1) https://doi.org/10.1177%2F2053951718820549

Sander, I. (2020a). What is critical big data literacy and how can it be implemented? Internet Policy Review, 9 (2). DOI: 10.14763/2020.2.1479 https://www.econstor.eu/bitstream/10419/218936/1/2020-2-1479.pdf

Sander, I. (2020b). Critical big data literacy tools—Engaging citizens and promoting empowered internet usage. Data & Policy, 2: e5 doi:10.1017/dap.2020.5

Williamson, B. (2019). ‘Killer Apps for the Classroom? Developing Critical Perspectives on ClassDojo and the ‘Ed-tech’ Industry’ Journal of Professional Learning, 2019 (Semester 2) https://cpl.asn.au/journal/semester-2-2019/killer-apps-for-the-classroom-developing-critical-perspectives-on-classdojo

What is the ‘new normal’?

Among the many words and phrases that have been coined or gained new currency since COVID-19 first struck, I find ‘the new normal’ particularly interesting. In the educational world, its meaning is so obvious that it doesn’t need spelling out. But in case you’re unclear about what I’m referring to, the title of this webinar, run by GENTEFL, the Global Educators Network Association of Teachers of English as a Foreign Language (an affiliate of IATEFL), will give you a hint.


Teaching in a VLE may be overstating it a bit, but you get the picture. ‘The new normal’ is the shift away from face-to-face teaching in bricks-and-mortar institutions, towards online teaching of one kind or another. The Malaysian New Straits Times refers to it as ‘E-learning, new way forward in new norm’. The TEFL Academy says that ‘digital learning is the new normal’, and the New Indian Express prefers the term ‘tech education’.


I’ll come back to these sources in a little while.

Whose new normal?

There is, indeed, a strong possibility that online learning and teaching may become 'the new normal' for many people working in education. In corporate training and in higher education, 'tech education' will likely become increasingly common. Many universities, especially but not only in the US, Britain and Australia, have been relying on 'international students' (almost half a million in the UK in 2018/2019), in particular Chinese students, to fill their coffers. With uncertainty about how and when these universities will reopen for the next academic year, a successful transition to online is a matter of survival – a challenge that a number of universities will probably not be able to rise to. The core of ELT, private TEFL schools in Inner Circle countries, likewise dependent on visitors from other countries, has also been hard hit. It is not easy for them to transition to online, since the heart of their appeal lies in their physical location.

But elsewhere, the picture is rather different. A recent Reddit discussion began as follows: ‘In Vietnam, [English language] schools have reopened and things have returned to normal almost overnight. There’s actually a teacher shortage at the moment as so many left and interest in online learning is minimal, although most schools are still offering it as an option’. The consensus in the discussion that follows is that bricks-and-mortar schools will take a hit, especially with adult (but not kids’) groups, but that ‘teaching online will not be the new normal’.

By far the greatest number of students studying English around the world are in primary and secondary schools. It is highly unlikely that online study will be the ‘new normal’ for most of these students (although we may expect to see attempts to move towards more blended approaches). There are many reasons for this, but perhaps the most glaringly obvious is that the function of schools is not exclusively educational: child-care, allowing parents to go to work, is the first among these.

We can expect some exceptions. In New York, for example, current plans include a ‘hybrid model’ (a sexed-up term for blended learning), in which students are in schools for part of the time and continue learning remotely for the rest. The idea emerged after Governor Andrew Cuomo ‘convened a committee with the Bill and Melinda Gates Foundation to reimagine education for students when school goes back in session in the fall’. How exactly this will pan out remains to be seen, but, in much of the rest of the world, where the influence of the Gates Foundation is less strong, ‘hybrid schooling’ is likely to be seen as even more unpalatable and unworkable than it is by many in New York.

In short, the ‘new normal’ will affect some sectors of English language teaching much more than others. For some, perhaps the majority, little change can be expected once state schools reopen. Smaller classes, maybe, more blended, but not a wholesale shift to ‘tech education’.

Not so new anyway!

Scott Galloway, a professor of marketing at New York University and author of the best-selling 'The Four' (an analysis of the Big Four tech firms), began a recent blog post as follows:

After COVID-19, nothing will be the same. The previous sentence is bullsh*t. On the contrary, things will never be more the same, just accelerated.

He elaborates by pointing out that many universities were already in deep trouble before COVID. Big tech had already moved massively into education and healthcare, which are 'the only two sectors, other than government, that offer the margin dollars required to sate investors' growth expectations' (from another recent post by Galloway). Education start-ups have long been attracting cheap capital: COVID has simply sped the process up.

Coming from a very different perspective, Audrey Watters gave a conference presentation over three years ago entitled ‘Education Technology as ‘The New Normal’’. I have been writing about the normalization of digital tools in language teaching for over six years. What is new is the speed, rather than the nature, of the change.

Galloway draws an interesting parallel with the SARS virus, which, he says, ‘was huge for e-commerce in Asia, and it helped Alibaba break out into the consumer space. COVID-19 could be to education in the United States what SARS was to e-commerce in Asia’.

‘The new normal’ as a marketing tool

Earlier in this post, I mentioned three articles that discussed the ‘new normal’ in education. The first of these, from the New Straits Times, looks like a news article, but features extensive quotes from Shereen Chee, chief operating officer of Sunago Education, a Malaysian vendor of online English classes. The article is basically an advert for Sunago: one section includes the following:

Sunago combines digitisation and the human touch to create a personalised learning experience. […] Chee said now is a great time for employers to take advantage of the scheme and equip their team with enhanced English skills, so they can hit the ground running once the Covid-19 slump is over.

The second reference, claiming that 'digital learning is the new normal', comes from The TEFL Academy, which sells online training courses, particularly targeting prospective teachers who want to work online. The third reference, from the New Indian Express, was written by Ananth Koppar, the founder of Kshema Technologies Pvt Ltd, India's first venture-funded software company. Koppar is hardly a neutral reporter.

Other examples abound. For example, a similar piece called ‘The ‘New Normal’ in Education’ can be found in FE News (10 June 2020). This was written by Simon Carter, Marketing and Propositions Director of RM Education, an EdTech vendor in the UK. EdTech has a long history of promoting its wares through sponsored content and adverts masquerading as reportage.

It is, therefore, a good idea, whenever you come across the phrase, ‘the new normal’, to adopt a sceptical stance from the outset. I’ll give two more examples to illustrate my point.

A recent article (1 April 2020) in the ELTABB (English Language Teachers Association Berlin Brandenburg) journal is introduced as follows:

With online language teaching being the new normal in ELT, coaching principles can help teachers and students share responsibility for the learning process.

Putting aside, for the moment, my reservations about whether online teaching is, in fact, the new normal in ‘ELT’, I’m happy to accept that coaching principles may be helpful in online teaching. But I can’t help noticing that the article was written by a self-described edupreneur and co-founder of the International Language Coaching Association (€50 annual subscription) which runs three-day training courses (€400).

My second example is a Macmillan webinar by Thom Kiddle called ‘Professional Development for teachers in the ‘new normal’. It’s a good webinar, a very good one in my opinion, but you’ll notice a NILE poster tacked to the wall behind Thom as he speaks. NILE, a highly reputed provider of teacher education courses in the UK, has invested significantly in online teacher education in recent years and is well-positioned to deal with the ‘new normal’. It’s also worth noting that the webinar host, Macmillan, is in a commercial partnership with NILE, the purpose of which is to ‘develop and promote quality teacher education programmes worldwide’. As good as the webinar is, it is also clearly, in part, an advertisement.


The use of the phrase 'the new normal' as a marketing hook is not new. Although its first recorded use dates back to the first part of the 20th century, it became more widespread at the start of the 21st. One populariser of the phrase was Roger McNamee, a venture capitalist and early investor in technology, including Facebook, who wrote a book called 'The New Normal: Great Opportunities in a Time of Great Risk' (2004). Since then, the phrase has been used extensively to refer to the state of the business world after the financial crisis of 2008. (For more about the history of the phrase, see here.) More often than not, users of the phrase are selling the idea (and sometimes a product) that we need to get used to a new configuration of the world, one in which technology plays a greater role.

Normalizing ‘the new normal’

The World Economic Forum is one of the most unlikely sources for a critique of 'the new normal', but it has the following to offer in a blog post entitled 'There's nothing new about the 'new normal'. Here's why':

The language of a ‘new normal’ is being deployed almost as a way to quell any uncertainty ushered in by the coronavirus. With no cure in sight, everyone from politicians and the media to friends and family has perpetuated this rhetoric as they imagine settling into life under this ‘new normal’. This framing is inviting: it contends that things will never be the same as they were before — so welcome to a new world order. By using this language, we reimagine where we were previously relative to where we are now, appropriating our present as the standard. As we weigh our personal and political responses to this pandemic, the language we employ matters. It helps to shape and reinforce our understanding of the world and the ways in which we choose to approach it. The analytic frame embodied by the persistent discussion of the ‘new normal’ helps bring order to our current turbulence, but it should not be the lens through which we examine today’s crisis.

We can’t expect the World Economic Forum to become too critical of the ‘new normal’ of digital learning, since they have been pushing for it so hard for so long. But the quote from their blog above may usefully be read in conjunction with an article by Jun Yu and Nick Couldry, called ‘Education as a domain of natural data extraction: analysing corporate discourse about educational tracking’ (Information, Communication and Society, 2020, DOI: 10.1080/1369118X.2020.1764604). The article explores the general discursive framing by which the use of big data in education has come to seem normal. The authors looked at the public discourse of eight major vendors of educational platforms that use big data (including Macmillan, Pearson, Knewton and Blackboard). They found that ‘the most fundamental move in today’s dominant commercial discourse is to promote the idea that data and its growth are natural’. In this way, ‘software systems, not teachers, [are] central to education’. Yu and Couldry’s main interest is in the way that discourse shapes the normalization of dataveillance, but, in a more general sense, the phrase, ‘the new normal’, is contributing to the normalization of digital education. If you think that’s fine, I suggest you dip into some of the books I listed in my last blog post.

At the start of the last decade, ELT publishers were worried, Macmillan among them. The financial crash of 2008 led to serious difficulties, not least in their key Spanish market. In 2011, Macmillan's parent company was fined £11.3 million for corruption. Under new ownership, restructuring was a constant. At the same time, Macmillan ELT was getting ready to move from its Oxford headquarters to new premises in London, a move which would inevitably lead to the loss of a sizable proportion of its staff. On top of that, Macmillan, like the other ELT publishers, was aware that changes in the digital landscape (the first 3G iPhone had appeared in June 2008 and wifi access was spreading rapidly around the world) meant that they needed to shift away from the old print-based model. With her finger on the pulse, Caroline Moore wrote an article in October 2010 entitled 'No Future? The English Language Teaching Coursebook in the Digital Age'. The publication (at the start of the decade) and runaway success of the online 'Touchstone' course, from arch-rivals Cambridge University Press, meant that Macmillan needed to change fast if they were to avoid being left behind.

Macmillan already had a platform, Campus, but it was generally recognised as being clunky and outdated, and something new was needed. In the summer of 2012, Macmillan brought in two new executives – people who could talk the ‘creative-disruption’ talk and who believed in the power of big data to shake up English language teaching and publishing. At the time, the idea of big data was beginning to reach public consciousness and ‘Big Data: A Revolution that Will Transform how We Live, Work, and Think’ by Viktor Mayer-Schönberger and Kenneth Cukier, was a major bestseller in 2013 and 2014. ‘Big data’ was the ‘hottest trend’ in technology and peaked in Google Trends in October 2014. See the graph below.

Google Trends graph for 'big data'

Not long after taking up their positions, the two executives began negotiations with Knewton, an American adaptive learning company. Knewton's technology promised to gather colossal amounts of data on students using Knewton-enabled platforms. Its founder, Jose Ferreira, bragged that Knewton had 'more data about our students than any company has about anybody else about anything […] We literally know everything about what you know and how you learn best, everything'. This data, it was claimed, would enable publishers to multiply, by orders of magnitude, the efficacy of learning materials, allowing companies like Macmillan to provide a truly personalized and optimal offering to learners using their platform.

The contract between Macmillan and Knewton was agreed in May 2013 'to build next-generation English Language Learning and Teaching materials'. Perhaps fearful of being left behind in what was seen to be a winner-takes-all market (Pearson already had a financial stake in Knewton), Cambridge University Press duly followed suit, signing a contract with Knewton in September of the same year, in order 'to create personalized learning experiences in [their] industry-leading ELT digital products'. Things moved fast because, by the start of 2014, when Macmillan's new catalogue appeared, customers were told to 'watch out for the 'Big Tree'', Macmillan's new platform, which would be powered by Knewton. 'The power that will come from this world of adaptive learning takes my breath away', wrote the international marketing director.

Not a lot happened next, at least outwardly. In the following year, 2015, the Macmillan catalogue again told customers to ‘look out for the Big Tree’ which would offer ‘flexible blended learning models’ which could ‘give teachers much more freedom to choose what they want to do in the class and what they want the students to do online outside of the classroom’.

Macmillan catalogue, 2015

But behind the scenes, everything was going wrong. It had become clear that a linear model of language learning, which was a necessary prerequisite of the Knewton system, simply did not lend itself to anything which would be vaguely marketable in established markets. Skills development, not least the development of so-called 21st century skills, which Macmillan was pushing at the time, would not be facilitated by collecting huge amounts of data and algorithms offering personalized pathways. Even if it could, teachers weren’t ready for it, and the projections for platform adoptions were beginning to seem very over-optimistic. Costs were spiralling. Pushed to meet unrealistic deadlines for a product that was totally ill-conceived in the first place, in-house staff were suffering, and this was made worse by what many staffers thought was a toxic work environment. By the end of 2014 (so, before the copy for the 2015 catalogue had been written), the two executives had gone.

For some time previously, skeptics had been joking that Macmillan had been barking up the wrong tree, and by the time that the 2016 catalogue came out, the ‘Big Tree’ had disappeared without trace. The problem was that so much time and money had been thrown at this particular tree that not enough had been left to develop new course materials (for adults). The whole thing had been a huge cock-up of an extraordinary kind.

Cambridge, too, lost interest in their Knewton connection, but were fortunate (or wise) not to have invested so much energy in it. Language learning was only ever a small part of Knewton's portfolio, and the company had raised over $180 million in venture capital. Its founder, Jose Ferreira, had been a master of marketing hype, but the business model was not delivering any better than the educational side of things. Pearson pulled out. In September 2019, Knewton was sold for something under $17 million, with investors taking a hit of over $160 million. My heart bleeds.

It was clear, from very early on (see, for example, my posts from 2014 here and here) that Knewton’s product was little more than what Michael Feldstein called ‘snake oil’. Why and how could so many people fall for it for so long? Why and how will so many people fall for it again in the coming decade, although this time it won’t be ‘big data’ that does the seduction, but AI (which kind of boils down to the same thing)? The former Macmillan executives are still at the game, albeit in new companies and talking a slightly modified talk, and Jose Ferreira (whose new venture has already raised $3.7 million) is promising to revolutionize education with a new start-up which ‘will harness the power of technology to improve both access and quality of education’ (thanks to Audrey Watters for the tip). Investors may be desperate to find places to spread their portfolio, but why do the rest of us lap up the hype? It’s a question to which I will return.

Back in the middle of the last century, the first interactive machines for language teaching appeared. Previously, there had been phonograph discs and wire recorders (Ornstein, 1968: 401), but these had never really taken off. This time, things were different. Buoyed by a belief in the power of technology, along with the need (following the Soviet Union's successful Sputnik programme) to demonstrate the pre-eminence of the United States' technological expertise, the interactive teaching machines that were used in programmed instruction promised to revolutionize language learning (Valdman, 1968: 1). From coast to coast, 'tremors of excitement ran through professional journals and conferences and department meetings' (Kennedy, 1967: 871). The new technology was driven by hard science, supported and promoted by one of the most well-known and respected psychologists and public intellectuals of the day (Skinner, 1961).

In classrooms, the machines acted as powerfully effective triggers in generating situational interest (Hidi & Renninger, 2006). Even more exciting than the mechanical teaching machines were the computers that were appearing on the scene. ‘Lick’ Licklider, a pioneer in interactive computing at the Advanced Research Projects Agency in Arlington, Virginia, developed an automated drill routine for learning German by hooking up a computer, two typewriters, an oscilloscope and a light pen (Noble, 1991: 124). Students loved it, and some would ‘go on and on, learning German words until they were forced by scheduling to cease their efforts’. Researchers called the seductive nature of the technology ‘stimulus trapping’, and Licklider hoped that ‘before [the student] gets out from under the control of the computer’s incentives, [they] will learn enough German words’ (Noble, 1991: 125).

With many of the developed economies of the world facing a critical shortage of teachers, ‘an urgent pedagogical emergency’ (Hof, 2018), the new approach was considered to be extremely efficient and could equalise opportunity in schools across the country. It was ‘here to stay: [it] appears destined to make progress that could well go beyond the fondest dreams of its originators […] an entire industry is just coming into being and significant sales and profits should not be too long in coming’ (Kozlowski, 1961: 47).

Unfortunately, however, researchers and entrepreneurs had massively underestimated the significance of novelty effects. The triggered situational interest of the machines did not lead to intrinsic individual motivation. Students quickly tired of, and eventually came to dislike, programmed instruction and the machines that delivered it (McDonald et al., 2005: 89). What's more, the machines were expensive, and 'research studies conducted on its effectiveness showed that the differences in achievement did not constantly or substantially favour programmed instruction over conventional instruction' (Saettler, 2004: 303). Newer technologies, with better 'stimulus trapping', were appearing. Programmed instruction lost its backing and disappeared, leaving as traces only its interest in clearly defined learning objectives, the measurement of learning outcomes and a concern with the efficiency of learning approaches.

Hot on the heels of programmed instruction came the language laboratory. Futuristic in appearance, not entirely unlike the deck of the starship USS Enterprise which launched at around the same time, language labs captured the public imagination and promised to explore the final frontiers of language learning. As with the earlier teaching machines, students were initially enthusiastic. Even today, when language labs are introduced into contexts where they may be perceived as new technology, they can lead to high levels of initial motivation (e.g. Ramganesh & Janaki, 2017).

Given the huge investment in these labs, it's unfortunate that initial interest waned fast. By 1969, many of these rooms had turned into '"electronic graveyards," sitting empty and unused, or perhaps somewhat glorified study halls to which students grudgingly repair to don headphones, turn down the volume, and prepare the next period's history or English lesson, unmolested by any member of the foreign language faculty' (Turner, 1969: 1, quoted in Roby, 2003: 527). 'Many second language students shudder[ed] at the thought of entering into the bowels of the "language laboratory" to practice and perfect the acoustical aerobics of proper pronunciation skills. Visions of sterile white-walled, windowless rooms, filled with endless bolted-down rows of claustrophobic metal carrels, and overseen by a humorless lab director, evoke[d] fear in the hearts of even the most stout-hearted prospective second-language learners' (Wiley, 1990: 44).

By the turn of this century, language labs had mostly gone, consigned to oblivion by the appearance of yet newer technology: the internet, laptops and smartphones. Education had been on the brink of being transformed through new learning technologies for decades (Laurillard, 2008: 1), but this time it really was different. It wasn't just one technology that had appeared, but a whole slew of them: 'artificial intelligence, learning analytics, predictive analytics, adaptive learning software, school management software, learning management systems (LMS), school clouds. No school was without these and other technologies branded as 'superintelligent' by the late 2020s' (Macgilchrist et al., 2019). The hardware, especially phones, was ubiquitous and, therefore, effectively free. Unlike with teaching machines and language laboratories, students were already used to the technology and expected to use their devices in their studies.

A barrage of publicity, mostly paid for by the industry, surrounded the new technologies. These would 'meet the demands of Generation Z', the new generation of students, now cast as consumers, who 'were accustomed to personalizing everything'. AR, VR, interactive whiteboards, digital projectors and so on made it easier to 'create engaging, interactive experiences'. The 'New Age' technologies made learning fun and easy, 'bringing enthusiasm among the students, improving student engagement, enriching the teaching process, and bringing liveliness in the classroom'. On top of that, they allowed huge amounts of data to be captured and sold, whilst tracking progress and attendance. In any case, resistance to digital technology, said more than one language teaching expert, was pointless (Styring, 2015).

At the same time, technology companies increasingly took on ‘central roles as advisors to national governments and local districts on educational futures’ and public educational institutions came to be ‘regarded by many as dispensable or even harmful’ (Macgilchrist et al., 2019).

But, as it turned out, the students of Generation Z were not as uniformly enthusiastic about the new technology as had been assumed, and resistance to digital, personalized delivery in education was not long in coming. In November 2018, high school students at Brooklyn's Secondary School for Journalism staged a walkout in protest at their school's use of Summit Learning, a web-based personalized learning platform built with the help of Facebook engineers. They complained that the platform required them to spend much of the day in front of a computer screen, that it made it easy to cheat by looking up answers online, and that some of their teachers didn't have the proper training for the curriculum (Leskin, 2018). Besides, their school was in a deplorable state of disrepair, especially the toilets. There were similar protests in Kansas, where students staged sit-ins, supported by their parents, one of whom complained that 'we're allowing the computers to teach and the kids all looked like zombies' before pulling his son out of the school (Bowles, 2019). In Pennsylvania and Connecticut, some schools stopped using Summit Learning altogether, following protests.

But the resistance did not last. Protesters were accused of being nostalgic conservatives and educationalists kept largely quiet, fearful of losing their funding from the Chan Zuckerberg Initiative (Facebook) and other philanthro-capitalists. The provision of training in grit, growth mindset, positive psychology and mindfulness (also promoted by the technology companies) was ramped up, and eventually the disaffected students became more quiescent. Before long, the data-intensive, personalized approach, relying on the tools, services and data storage of particular platforms had become ‘baked in’ to educational systems around the world (Moore, 2018: 211). There was no going back (except for small numbers of ultra-privileged students in a few private institutions).

By the middle of the next century (2157), most students, of all ages, studied with interactive screens in the comfort of their homes. Algorithmically-driven content, with personalized, adaptive tests had become the norm, but the technology occasionally went wrong, leading to some frustration. One day, two young children discovered a book in their attic. Made of paper with yellow, crinkly pages, where 'the words stood still instead of moving the way they were supposed to'. The book recounted the experience of schools in the distant past, where 'all the kids from the neighbourhood came', sitting in the same room with a human teacher, studying the same things 'so they could help one another on the homework and talk about it'. Margie, the younger of the children at 11 years old, was engrossed in the book when she received a nudge from her personalized learning platform to return to her studies. But Margie was reluctant to go back to her fractions. She 'was thinking about how the kids must have loved it in the old days. She was thinking about the fun they had' (Asimov, 1951).

References

Asimov, I. 1951. The Fun They Had. Accessed September 20, 2019. http://web1.nbed.nb.ca/sites/ASD-S/1820/J%20Johnston/Isaac%20Asimov%20-%20The%20fun%20they%20had.pdf

Bowles, N. 2019. ‘Silicon Valley Came to Kansas Schools. That Started a Rebellion’ The New York Times, April 21. Accessed September 20, 2019. https://www.nytimes.com/2019/04/21/technology/silicon-valley-kansas-schools.html

Hidi, S. & Renninger, K.A. 2006. ‘The Four-Phase Model of Interest Development’ Educational Psychologist, 41 (2), 111 – 127

Hof, B. 2018. ‘From Harvard via Moscow to West Berlin: educational technology, programmed instruction and the commercialisation of learning after 1957’ History of Education, 47 (4): 445-465

Kennedy, R.H. 1967. ‘Before using Programmed Instruction’ The English Journal, 56 (6), 871 – 873

Kozlowski, T. 1961. ‘Programmed Teaching’ Financial Analysts Journal, 17 (6): 47 – 54

Laurillard, D. 2008. Digital Technologies and their Role in Achieving our Ambitions for Education. London: Institute of Education.

Leskin, P. 2018. ‘Students in Brooklyn protest their school’s use of a Zuckerberg-backed online curriculum that Facebook engineers helped build’ Business Insider, 12.11.18 Accessed 20 September 2019. https://www.businessinsider.de/summit-learning-school-curriculum-funded-by-zuckerberg-faces-backlash-brooklyn-2018-11?r=US&IR=T

McDonald, J. K., Yanchar, S. C. & Osguthorpe, R.T. 2005. ‘Learning from Programmed Instruction: Examining Implications for Modern Instructional Technology’ Educational Technology Research and Development, 53 (2): 84 – 98

Macgilchrist, F., Allert, H. & Bruch, A. 2019. 'Students and society in the 2020s. Three future 'histories' of education and technology'. Learning, Media and Technology, https://www.tandfonline.com/doi/full/10.1080/17439884.2019.1656235

Moore, M. 2018. Democracy Hacked. London: Oneworld

Noble, D. D. 1991. The Classroom Arsenal. London: The Falmer Press

Ornstein, J. 1968. ‘Programmed Instruction and Educational Technology in the Language Field: Boon or Failure?’ The Modern Language Journal, 52 (7), 401 – 410

Ramganesh, E. & Janaki, S. 2017. 'Attitude of College Teachers towards the Utilization of Language Laboratories for Learning English' Asian Journal of Social Science Studies, 2 (1): 103 – 109

Roby, W.B. 2003. ‘Technology in the service of foreign language teaching: The case of the language laboratory’ In D. Jonassen (ed.), Handbook of Research on Educational Communications and Technology, 2nd ed.: 523 – 541. Mahwah, NJ.: Lawrence Erlbaum Associates

Saettler, P. 2004. The Evolution of American Educational Technology. Greenwich, Conn.: Information Age Publishing

Skinner, B. F. 1961. ‘Teaching Machines’ Scientific American, 205(5), 90-107

Styring, J. 2015. Engaging Generation Z. Cambridge English webinar 2015 https://www.youtube.com/watch?time_continue=4&v=XCxl4TqgQZA

Valdman, A. 1968. ‘Programmed Instruction versus Guided Learning in Foreign Language Acquisition’ Die Unterrichtspraxis / Teaching German, 1 (2), 1 – 14.

Wiley, P. D. 1990. ‘Language labs for 1990: User-friendly, expandable and affordable’. Media & Methods, 27(1), 44–47


Jenny Holzer, Protect me from what I want

At a recent ELT conference, a plenary presentation entitled ‘Getting it right with edtech’ (sponsored by a vendor of – increasingly digital – ELT products) began with the speaker suggesting that technology was basically neutral, that what you do with educational technology matters far more than the nature of the technology itself. The idea that technology is a ‘neutral tool’ has a long pedigree and often accompanies exhortations to embrace edtech in one form or another (see for example Fox, 2001). It is an idea that is supported by no less a luminary than Chomsky, who, in a 2012 video entitled ‘The Purpose of Education’ (Chomsky, 2012), said that:

As far as […] technology […] and education is concerned, technology is basically neutral. It’s kind of like a hammer. I mean, […] the hammer doesn’t care whether you use it to build a house or whether a torturer uses it to crush somebody’s skull; a hammer can do either. The same with the modern technology; say, the Internet, and so on.

Although hammers are not usually classic examples of educational technology, they are worthy of a short discussion. Hammers come in all shapes and sizes and when you choose one, you need to consider its head weight (usually between 16 and 20 ounces), the length of the handle, the shape of the grip, etc. Appropriate specifications for particular hammering tasks have been calculated in great detail. The data on which these specifications are based comes from an analysis of the hand size and upper body strength of the typical user. The typical user is a man, and the typical hammer has been designed for a man. The average male hand length is 177.9 mm; that of the average woman is 10 mm shorter (Wang & Cai, 2017). Women typically have about half the upper body strength of men (Miller et al., 1993). It is possible, but not easy, to find hammers designed for women (they are referred to as ‘Ladies hammers’ on Amazon). They have a much lighter head weight, a shorter handle length, and many come in pink or floral designs. Hammers, in other words, are far from neutral: they are highly gendered.

Moving closer to educational purposes and ways in which we might ‘get it right with edtech’, it is useful to look at the smart phone. The average size of these devices has risen in recent years to 5.5 inches, with the market for 6 inch screens growing fast. Why is this an issue? Well, as Caroline Criado Perez (2019: 159) notes, ‘while we’re all admittedly impressed by the size of your screen, it’s a slightly different matter when it comes to fitting into half the population’s hands. The average man can fairly comfortably use his device one-handed – but the average woman’s hand is not much bigger than the handset itself’. This is despite the fact that women are more likely than men to own an iPhone.

It is not, of course, just technological artefacts that are gendered. Voice-recognition software is also very biased. One researcher (Tatman, 2017) has found that Google’s speech recognition tool is 13% more accurate for men than it is for women. There are also significant biases for race and social class. The reason lies in the dataset that the tool is trained on: the algorithms may be gender- and socio-culturally-neutral, but the dataset is not. It would not be difficult to redress this bias by training the tool on a different dataset.
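Tatman’s 13% figure comes from comparing word error rates (WER) across groups of speakers. A minimal sketch of how such a bias audit works – the transcripts and groups below are invented for illustration, not Tatman’s data or YouTube’s recogniser:

```python
# Illustrative sketch of a speech-recognition bias audit: compute word error
# rate (WER) per speaker group and compare the group averages.
# All transcripts and group labels below are invented examples.

def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Standard dynamic-programming edit distance over word sequences.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    return d[len(ref)][len(hyp)] / len(ref)

# (reference transcript, recogniser output, speaker group) - invented data
samples = [
    ("she sells sea shells", "she sells sea shells", "female"),
    ("the tide is high today", "the tie is high to day", "female"),
    ("she sells sea shells", "she sells sea shells", "male"),
    ("the tide is high today", "the tide is high today", "male"),
]

by_group: dict[str, list[float]] = {}
for ref, hyp, group in samples:
    by_group.setdefault(group, []).append(wer(ref, hyp))

for group, rates in by_group.items():
    print(group, round(sum(rates) / len(rates), 2))
```

Run over a large, demographically labelled test set, the gap between the group averages is the bias figure reported in studies of this kind.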

The same bias can be found in automatic translation software. Because corpora such as the BNC or COCA have twice as many male pronouns as female ones (as a result of the kinds of text that are selected for the corpora), translation software reflects the bias. With Google Translate, a sentence in a language with a gender-neutral pronoun, such as ‘S/he is a doctor’ is rendered into English as ‘He is a doctor’. Meanwhile, ‘S/he is a nurse’ is translated as ‘She is a nurse’ (Criado Perez, 2019: 166).
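The mechanism is easy to sketch. The toy model below is emphatically not Google Translate’s actual system, and its miniature ‘corpus’ is invented; it simply resolves a gender-neutral source pronoun by picking whichever English pronoun most often co-occurs with the profession in its training data, so a skewed corpus guarantees a skewed translation:

```python
# Toy illustration of corpus-driven pronoun bias (invented data, not a real
# translation system): the "translator" picks the majority pronoun seen with
# each profession in training, so corpus skew becomes output skew.
from collections import Counter

# Invented miniature "parallel corpus": (profession, observed English pronoun)
training_pairs = [
    ("doctor", "he"), ("doctor", "he"), ("doctor", "she"),
    ("nurse", "she"), ("nurse", "she"), ("nurse", "he"),
]

counts: dict[str, Counter] = {}
for profession, pronoun in training_pairs:
    counts.setdefault(profession, Counter())[pronoun] += 1

def translate_neutral_pronoun(profession: str) -> str:
    """Resolve a gender-neutral source pronoun to the corpus-majority pronoun."""
    return counts[profession].most_common(1)[0][0]

print(translate_neutral_pronoun("doctor"))  # corpus majority wins: "he"
print(translate_neutral_pronoun("nurse"))   # corpus majority wins: "she"
```

Nothing in the algorithm mentions gender; the bias lives entirely in the frequency counts it learns from.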

Datasets, then, are often very far from neutral. Algorithms are not necessarily any more neutral than the datasets, and Cathy O’Neil’s best-seller ‘Weapons of Math Destruction’ catalogues the many, many ways in which algorithms, posing as neutral mathematical tools, can increase racial, social and gender inequalities.

It would not be hard to provide many more examples, but the selection above is probably enough. Technology, as Langdon Winner (Winner, 1980) observed almost forty years ago, is ‘deeply interwoven in the conditions of modern politics’. Technology cannot be neutral: it has politics.

So far, I have focused primarily on the non-neutrality of technology in terms of gender (and, in passing, race and class). Before returning to broader societal issues, I would like to make a relatively brief mention of another kind of non-neutrality: the pedagogic. Language learning materials necessarily contain content of some kind: texts, topics, the choice of values or role models, language examples, and so on. These cannot be value-free. In the early days of educational computer software, one researcher (Biraimah, 1993) found that it was ‘at least, if not more, biased than the printed page it may one day replace’. My own impression is that this remains true today.

Equally interesting to my mind is the fact that all educational technologies, ranging from the writing slate to the blackboard (see Buzbee, 2014), from the overhead projector to the interactive whiteboard, always privilege a particular kind of teaching (and learning). ‘Technologies are inherently biased because they are built to accomplish certain very specific goals which means that some technologies are good for some tasks while not so good for other tasks’ (Zhao et al., 2004: 25). Digital flashcards, for example, inevitably encourage a focus on rote learning. Contemporary LMSs have impressive multi-functionality (i.e. they often could be used in a very wide variety of ways), but, in practice, most teachers use them in very conservative ways (Laanpere et al., 2004). This may be a result of teacher and institutional preferences, but it is almost certainly due, at least in part, to the way that LMSs are designed. They are usually ‘based on traditional approaches to instruction dating from the nineteenth century: presentation and assessment [and] this can be seen in the selection of features which are most accessible in the interface, and easiest to use’ (Lane, 2009).

The argument that educational technology is neutral because it could be put to many different uses, good or bad, is problematic because the likelihood of one particular use is usually much greater than another. There is, however, another way of looking at technological neutrality, and that is to look at its origins. Elsewhere on this blog, in post after post, I have given examples of the ways in which educational technology has been developed, marketed and sold primarily for commercial purposes. Educational values, if indeed there are any, are often an afterthought. The research literature in this area is rich and growing: Stephen Ball, Larry Cuban, Neil Selwyn, Joel Spring, Audrey Watters, etc.

Rather than revisit old ground here, this is an opportunity to look at a slightly different origin of educational technology: the US military. The close connection of the early history of the internet and the Advanced Research Projects Agency (now DARPA) of the United States Department of Defense is fairly well-known. Much less well-known are the very close connections between the US military and educational technologies, which are catalogued in the recently reissued ‘The Classroom Arsenal’ by Douglas D. Noble.

Following the twin shocks of the Soviet Sputnik 1 (in 1957) and Yuri Gagarin (in 1961), the United States launched a massive programme of investment in the development of high-tech weaponry. This included ‘computer systems design, time-sharing, graphics displays, conversational programming languages, heuristic problem-solving, artificial intelligence, and cognitive science’ (Noble, 1991: 55), all of which are now crucial components in educational technology. But it also quickly became clear that more sophisticated weapons required much better trained operators, hence the US military’s huge (and continuing) interest in training. Early interest focused on teaching machines and programmed instruction (branches of the US military were by far the biggest purchasers of programmed instruction products). It was essential that training was effective and efficient, and this led to a wide interest in the mathematical modelling of learning and instruction.

What was then called computer-based education (CBE) was developed as a response to military needs. The first experiments in computer-based training took place at the Systems Research Laboratory of the Air Force’s RAND Corporation think tank (Noble, 1991: 73). Research and development in this area accelerated in the 1960s and 1970s and CBE (which has morphed into the platforms of today) ‘assumed particular forms because of the historical, contingent, military contexts for which and within which it was developed’ (Noble, 1991: 83). It is possible to imagine computer-based education having developed in very different directions. Between the 1960s and 1980s, for example, the PLATO (Programmed Logic for Automatic Teaching Operations) project at the University of Illinois focused heavily on computer-mediated social interaction (forums, message boards, email, chat rooms and multi-player games). PLATO was also significantly funded by a variety of US military agencies, but proved to be of much less interest to the generals than the work taking place in other laboratories. As Noble observes, ‘some technologies get developed while others do not, and those that do are shaped by particular interests and by the historical and political circumstances surrounding their development’ (Noble, 1991: 4).

According to Noble, however, the influence of the military reached far beyond the development of particular technologies. Alongside the investment in technologies, the military were the prime movers in a campaign to promote computer literacy in schools.

Computer literacy was an ideological campaign rather than an educational initiative – a campaign designed, at bottom, to render people ‘comfortable’ with the ‘inevitable’ new technologies. Its basic intent was to win the reluctant acquiescence of an entire population in a brave new world sculpted in silicon.

The computer campaign also succeeded in getting people in front of that screen and used to having computers around; it made people ‘computer-friendly’, just as computers were being rendered ‘user-friendly’. It also managed to distract the population, suddenly propelled by the urgency of learning about computers, from learning about other things, such as how computers were being used to erode the quality of their working lives, or why they, supposedly the citizens of a democracy, had no say in technological decisions that were determining the shape of their own futures.

Third, it made possible the successful introduction of millions of computers into schools, factories and offices, even homes, with minimal resistance. The nation’s public schools have by now spent over two billion dollars on over a million and a half computers, and this trend still shows no signs of abating. At this time, schools continue to spend one-fifth as much on computers, software, training and staffing as they do on all books and other instructional materials combined. Yet the impact of this enormous expenditure is a stockpile of often idle machines, typically used for quite unimaginative educational applications. Furthermore, the accumulated results of three decades of research on the effectiveness of computer-based instruction remain ‘inconclusive and often contradictory’. (Noble, 1991: x – xi)

Rather than being neutral in any way, it seems more reasonable to argue, along with (I think) most contemporary researchers, that edtech is profoundly value-laden because it has the potential to (i) influence certain values in students; (ii) change educational values in [various] ways; and (iii) change national values (Omotoyinbo & Omotoyinbo, 2016: 173). Most importantly, the growth in the use of educational technology has been accompanied by a change in the way that education itself is viewed: ‘as a tool, a sophisticated supply system of human cognitive resources, in the service of a computerized, technology-driven economy’ (Noble, 1991: 1). These two trends are inextricably linked.

References

Biraimah, K. 1993. The non-neutrality of educational computer software. Computers and Education 20 / 4: 283 – 290

Buzbee, L. 2014. Blackboard: A Personal History of the Classroom. Minneapolis: Graywolf Press

Chomsky, N. 2012. The Purpose of Education (video). Learning Without Frontiers Conference. https://www.youtube.com/watch?v=DdNAUJWJN08

Criado Perez, C. 2019. Invisible Women. London: Chatto & Windus

Fox, R. 2001. Technological neutrality and practice in higher education. In A. Herrmann and M. M. Kulski (Eds), Expanding Horizons in Teaching and Learning. Proceedings of the 10th Annual Teaching Learning Forum, 7-9 February 2001. Perth: Curtin University of Technology. http://clt.curtin.edu.au/events/conferences/tlf/tlf2001/fox.html

Laanpere, M., Poldoja, H. & Kikkas, K. 2004. The second thoughts about pedagogical neutrality of LMS. Proceedings of IEEE International Conference on Advanced Learning Technologies, 2004. https://ieeexplore.ieee.org/abstract/document/1357664

Lane, L. 2009. Insidious pedagogy: How course management systems impact teaching. First Monday, 14(10). https://firstmonday.org/ojs/index.php/fm/article/view/2530/2303

Miller, A.E., MacDougall, J.D., Tarnopolsky, M. A. & Sale, D.G. 1993. ‘Gender differences in strength and muscle fiber characteristics’ European Journal of Applied Physiology and Occupational Physiology. 66(3): 254-62 https://www.ncbi.nlm.nih.gov/pubmed/8477683

Noble, D. D. 1991. The Classroom Arsenal. Abingdon, Oxon.: Routledge

Omotoyinbo, D. W. & Omotoyinbo, F. R. 2016. Educational Technology and Value Neutrality. Societal Studies, 8 / 2: 163 – 179 https://www3.mruni.eu/ojs/societal-studies/article/view/4652/4276

O’Neil, C. 2016. Weapons of Math Destruction. London: Penguin

Sundström, P. 1998. Interpreting the Notion that Technology is Value Neutral. Medicine, Health Care and Philosophy 1: 42-44

Tatman, R. 2017. ‘Gender and Dialect Bias in YouTube’s Automatic Captions’ Proceedings of the First Workshop on Ethics in Natural Language Processing, pp. 53–59 http://www.ethicsinnlp.org/workshop/pdf/EthNLP06.pdf

Wang, C. & Cai, D. 2017. ‘Hand tool handle design based on hand measurements’ MATEC Web of Conferences 119, 01044 (2017) https://www.matec-conferences.org/articles/matecconf/pdf/2017/33/matecconf_imeti2017_01044.pdf

Winner, L. 1980. Do Artifacts have Politics? Daedalus 109 / 1: 121 – 136

Zhao, Y, Alvarez-Torres, M. J., Smith, B. & Tan, H. S. 2004. The Non-neutrality of Technology: a Theoretical Analysis and Empirical Study of Computer Mediated Communication Technologies. Journal of Educational Computing Research 30 (1 &2): 23 – 55

The use of big data and analytics in education continues to grow.

A vast apparatus of measurement is being developed to underpin national education systems, institutions and the actions of the individuals who occupy them. […] The presence of digital data and software in education is being amplified through massive financial and political investment in educational technologies, as well as huge growth in data collection and analysis in policymaking practices, extension of performance measurement technologies in the management of educational institutions, and rapid expansion of digital methodologies in educational research. To a significant extent, many of the ways in which classrooms function, educational policy departments and leaders make decisions, and researchers make sense of data, simply would not happen as currently intended without the presence of software code and the digital data processing programs it enacts. (Williamson, 2017: 4)

The most common and successful use of this technology so far has been in the identification of students at risk of dropping out of their courses (Jørno & Gynther, 2018: 204). The kind of analytics used in this context may be called ‘academic analytics’ and focuses on educational processes at the institutional level or higher (Gelan et al, 2018: 3). However, ‘learning analytics’, the capture and analysis of learner and learning data in order to personalize learning ‘(1) through real-time feedback on online courses and e-textbooks that can ‘learn’ from how they are used and ‘talk back’ to the teacher, and (2) individualization and personalization of the educational experience through adaptive learning systems that enable materials to be tailored to each student’s individual needs through automated real-time analysis’ (Mayer-Schönberger & Cukier, 2014) has become ‘the main keyword of data-driven education’ (Williamson, 2017: 10). See my earlier posts on this topic here and here and here.

Near the start of Mayer-Schönberger and Cukier’s enthusiastic sales pitch (Learning with Big Data: The Future of Education) for the use of big data in education, there is a discussion of Duolingo. They quote Luis von Ahn, the founder of Duolingo, as saying ‘there has been little empirical work on what is the best way to teach a foreign language’. This is so far from the truth as to be laughable. Von Ahn’s comment, along with the Duolingo product itself, is merely indicative of a lack of awareness of the enormous amount of research that has been carried out. But what could the data gleaned from the interactions of millions of users with Duolingo tell us of value? The example that is given is the following. Apparently, ‘in the case of Spanish speakers learning English, it’s common to teach pronouns early on: words like “he,” “she,” and “it”.’ But, Duolingo discovered, ‘the term “it” tends to confuse and create anxiety for Spanish speakers, since the word doesn’t easily translate into their language […] Delaying the introduction of “it” until a few weeks later dramatically improves the number of people who stick with learning English rather than drop out.’ Was von Ahn unaware of the decades of research into language transfer effects? Did von Ahn (who grew up speaking Spanish in Guatemala) need all this data to tell him that English personal pronouns can cause problems for Spanish learners of English? Was von Ahn unaware of the debates concerning the value of teaching isolated words (especially grammar words!)?
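For what it’s worth, the evidence behind a claim like Duolingo’s is a simple cohort comparison: retention rate among learners who met ‘it’ early versus those for whom it was delayed. A sketch with invented cohorts (not Duolingo’s data):

```python
# Sketch of the cohort comparison behind a retention claim.
# Both cohorts below are invented; True = learner still active after N weeks.

def retention_rate(cohort: list[bool]) -> float:
    """Fraction of a cohort still active at the end of the observation window."""
    return sum(cohort) / len(cohort)

early_it = [True, False, False, True, False]    # invented: "it" taught early
delayed_it = [True, True, False, True, True]    # invented: "it" delayed

print(round(retention_rate(early_it), 2), round(retention_rate(delayed_it), 2))
```

The arithmetic is trivial; what the post questions is whether such a difference tells us anything that transfer research had not already established.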

The area where little empirical research has been done is not in different ways of learning another language: it is in the use of big data and learning analytics to assist language learning. Claims about the value of these technologies in language learning are almost always speculative – they are based on comparisons with other school subjects (especially mathematics). Gelan et al (2018: 2), who note this lack of research, suggest that ‘understanding language learner behaviour could provide valuable insights into task design for instructors and materials designers, as well as help students with effective learning strategies and personalised learning pathways’ (my italics). Reinders (2018: 81) writes ‘that analysis of prior experiences with certain groups or certain courses may help to identify key moments at which students need to receive more or different support. Analysis of student engagement and performance throughout a course may help with early identification of learning problems and may prompt early intervention’ (italics added). But there is some research out there, and it’s worth having a look at. Most studies that have collected learner-tracking data concern glossary use for reading comprehension and vocabulary retention (Gelan et al, 2018: 5), but a few have attempted to go further in scope.

Volk et al (2015) looked at the behaviour of the 20,000 students per day who use the platform accompanying ‘More!’ (Gerngross et al. 2008), a course book for Austrian lower secondary schools, to do their English homework. They discovered that

  • the exercises used least frequently were those located further back in the course book
  • usage was highest from Monday to Wednesday, declining from Thursday, with a rise again on Sunday
  • most interaction took place between 3:00 and 5:00 pm
  • repetition of exercises led to a strong improvement in success rate
  • students performed better on multiple choice and matching exercises than they did where they had to produce some language

The authors of this paper conclude by saying that ‘the results of this study suggest a number of new avenues for research. In general, the authors plan to extend their analysis of exercise results and applied exercises to the population of all schools using the online learning platform more-online.at. This step enables a deeper insight into student’s learning behaviour and allows making more generalizing statements.’ When I shared these research findings with the Austrian lower secondary teachers that I work with, their reaction was one of utter disbelief. People get paid to do this research? Why not just ask us?

More useful, more actionable insights may yet come from other sources. For example, Gu Yueguo, Pro-Vice-Chancellor of the Beijing Foreign Studies University, has announced the intention to set up a national Big Data research center, specializing in big data-related research topics in foreign language education (Yu, 2015). Meanwhile, I’m aware of only one big research project that has published its results. The EC Erasmus+ VITAL project (Visualisation Tools and Analytics to monitor Online Language Learning & Teaching) was carried out between 2015 and 2017 and looked at the learning trails of students from universities in Belgium, Britain and the Netherlands. It was discovered (Gelan et al, 2018) that:

  • students who did online exercises when they were supposed to do them were slightly more successful than those who were late carrying out the tasks
  • successful students logged on more often, spent more time online, attempted and completed more tasks, revisited both exercises and theory pages more frequently, did the work in the order in which it was supposed to be done and did more work in the holidays
  • most students preferred to go straight into the assessed exercises and only used the theory pages when they felt they needed to; successful students referred back to the theory pages more often than unsuccessful students
  • students made little use of the voice recording functionality
  • most online activity took place the day before a class and the day of the class itself

EU funding for this VITAL project amounted to 274,840 Euros[1]. The technology for capturing the data has been around for a long time. In my opinion, nothing of value, or at least nothing new, has been learnt. Publishers like Pearson and Cambridge University Press who have large numbers of learners using their platforms have been capturing learning data for many years. They do not publish their findings and, intriguingly, do not even claim that they have learnt anything useful / actionable from the data they have collected. Sure, an exercise here or there may need to be amended. Both teachers and students may need more support in using the more open-ended functionalities of the platforms (e.g. discussion forums). But are they getting ‘unprecedented insights into what works and what doesn’t’ (Mayer-Schönberger & Cukier, 2014)? Are they any closer to building better pedagogies? On the basis of what we know so far, you wouldn’t want to bet on it.

It may be the case that all the learning / learner data that is captured could be used in some way that has nothing to do with language learning. Show me a language-learning app developer who does not dream of monetizing the ‘behavioural surplus’ (Zuboff, 2019) that they collect! But, for the data and analytics to be of any value in guiding language learning, they must lead to actionable insights. Unfortunately, as Jørno & Gynther (2018: 198) point out, there is very little clarity about what is meant by ‘actionable insights’. There is a danger that data and analytics ‘simply gravitates towards insights that confirm longstanding good practice and insights, such as “students tend to ignore optional learning activities … [and] focus on activities that are assessed”’ (Jørno & Gynther, 2018: 211). While this is happening, the focus on data inevitably shapes the way we look at the object of study (i.e. language learning), ‘thereby systematically excluding other perspectives’ (Mau, 2019: 15; see also Beer, 2019). The belief that tech is always the solution, that all we need is more data and better analytics, remains very powerful: it’s called techno-chauvinism (Broussard, 2018: 7-8).

References

Beer, D. 2019. The Data Gaze. London: Sage

Broussard, M. 2018. Artificial Unintelligence. Cambridge, Mass.: MIT Press

Gelan, A., Fastre, G., Verjans, M., Martin, N., Jansenswillen, G., Creemers, M., Lieben, J., Depaire, B. & Thomas, M. 2018. ‘Affordances and limitations of learning analytics for computer-assisted language learning: a case study of the VITAL project’. Computer Assisted Language Learning, pp. 1-26. http://clok.uclan.ac.uk/21289/

Gerngross, G., Puchta, H., Holzmann, C., Stranks, J., Lewis-Jones, P. & Finnie, R. 2008. More! 1 Cyber Homework. Innsbruck, Austria: Helbling

Jørno, R. L. & Gynther, K. 2018. ‘What Constitutes an “Actionable Insight” in Learning Analytics?’ Journal of Learning Analytics 5 (3): 198 – 221

Mau, S. 2019. The Metric Society. Cambridge: Polity Press

Mayer-Schönberger, V. & Cukier, K. 2014. Learning with Big Data: The Future of Education. New York: Houghton Mifflin Harcourt

Reinders, H. 2018. ‘Learning analytics for language learning and teaching’. JALT CALL Journal 14 / 1: 77 – 86 https://files.eric.ed.gov/fulltext/EJ1177327.pdf

Volk, H., Kellner, K. & Wohlhart, D. 2015. ‘Learning Analytics for English Language Teaching.’ Journal of Universal Computer Science, Vol. 21 / 1: 156-174 http://www.jucs.org/jucs_21_1/learning_analytics_for_english/jucs_21_01_0156_0174_volk.pdf

Williamson, B. 2017. Big Data in Education. London: Sage

Yu, Q. 2015. ‘Learning Analytics: The next frontier for computer assisted language learning in big data age’ SHS Web of Conferences, 17 https://www.shs-conferences.org/articles/shsconf/pdf/2015/04/shsconf_icmetm2015_02013.pdf

Zuboff, S. 2019. The Age of Surveillance Capitalism. London: Profile Books

 

[1] See https://ec.europa.eu/programmes/erasmus-plus/sites/erasmusplus2/files/ka2-2015-he_en.pdf

Learners are different, the argument goes, so learning paths will be different, too. And, the argument continues, if learners benefit from individualized learning pathways, then instruction should be based on an analysis of the optimal learning pathways for individuals and tailored to match them. In previous posts, I have questioned whether such an analysis is meaningful or reliable and whether the tailoring leads to any measurable learning gains. In this post, I want to focus primarily on the analysis of learner differences.

Family / social background and previous educational experiences are obvious ways in which learners differ when they embark on any course of study. The way they impact on educational success is well researched and well established. Despite this research, there are some who disagree. For example, Dominic Cummings (former adviser to Michael Gove when he was UK Education minister and former campaign director of the pro-Brexit Vote Leave group) has argued that genetic differences, especially in intelligence, account for more than 50% of the differences in educational achievement.

Cummings got his ideas from Robert Plomin, one of the world’s most cited living psychologists. Plomin, in a recent paper in Nature, ‘The New Genetics of Intelligence’, argues that ‘intelligence is highly heritable and predicts important educational, occupational and health outcomes better than any other trait’. In an earlier paper, ‘Genetics affects choice of academic subjects as well as achievement’, Plomin and his co-authors argued that ‘choosing to do A-levels and the choice of subjects show substantial genetic influence, as does performance after two years studying the chosen subjects’. Environment matters, says Plomin, but it’s possible that genes matter more.

All of which leads us to the field known as ‘educational genomics’. In an article of breathless enthusiasm entitled ‘How genetics could help future learners unlock hidden potential’, University of Sussex psychologist Darya Gaysina describes educational genomics as the use of ‘detailed information about the human genome – DNA variants – to identify their contribution to particular traits that are related to education […] it is thought that one day, educational genomics could enable educational organisations to create tailor-made curriculum programmes based on a pupil’s DNA profile’. It could, she writes, ‘enable schools to accommodate a variety of different learning styles – both well-worn and modern – suited to the individual needs of the learner [and] help society to take a decisive step towards the creation of an education system that plays on the advantages of genetic background. Rather than the current system, that penalises those individuals who do not fit the educational mould’.

The goal is not just personalized learning. It is ‘Personalized Precision Education’ where researchers ‘look for patterns in huge numbers of genetic factors that might explain behaviors and achievements in individuals. It also focuses on the ways that individuals’ genotypes and environments interact, or how other “epigenetic” factors impact on whether and how genes become active’. This will require huge amounts of ‘data gathering from learners and complex analysis to identify patterns across psychological, neural and genetic datasets’. Why not, suggests Darya Gaysina, use the same massive databases that are being used to identify health risks and to develop approaches to preventative medicine?

If I had a spare 100 Euros, I (or you) could buy Darya Gaysina’s book, ‘Behavioural Genetics for Education’ (Palgrave Macmillan, 2016) and, no doubt, I’d understand the science better as a result. There is much about the science that seems problematic, to say the least (e.g. the definition and measurement of intelligence, the lack of reference to other research that suggests academic success is linked to non-genetic factors), but it isn’t the science that concerns me most. It’s the ethics. I don’t share Gaysina’s optimism that ‘every child in the future could be given the opportunity to achieve their maximum potential’. Her utopianism is my fear of Gattaca-like dystopias. IQ testing, in its early days, promised something similarly wonderful, but look what became of that. When you already have reporting of educational genomics using terms like ‘dictate’, you have to fear for the future of Gaysina’s brave new world.

Educational genomics could equally well lead to expectations of ‘certain levels of achievement from certain groups of children – perhaps from different socioeconomic or ethnic groups’ and you can be pretty sure it will lead to ‘companies with the means to assess students’ genetic identities [seeking] to create new marketplaces of products to sell to schools, educators and parents’. The very fact that people like Dominic Cummings (described by David Cameron as a ‘career psychopath’) have opted to jump on this particular bandwagon is, for me, more than enough cause for concern.

Underlying my doubts about educational genomics is a much broader concern. It’s the apparent belief of educational genomicists that science can provide technical solutions to educational problems. It’s called ‘solutionism’ and it doesn’t have a pretty history.

Like the mythical monster, the ancient Hydra organisation of Marvel Comics grows two more heads if one is cut off, becoming more powerful in the process. With the most advanced technology on the planet and with a particular focus on data gathering, Hydra operates through international corporations and highly-placed individuals in national governments.
Personalized learning has also been around for centuries. Its present incarnation can be traced to the individualized instructional programmes of the late 19th century which ‘focused on delivering specific subject matter […] based on the principles of scientific management. The intent was to solve the practical problems of the classroom by reducing waste and increasing efficiency, effectiveness, and cost containment in education’ (Januszewski, 2001: 58). Since then, personalized learning has gone by many different names, including differentiated instruction, individualized instruction, individually guided education, programmed instruction, personalized instruction, and individually prescribed instruction.
Disambiguating the terms has never been easy. In the world of language learning / teaching, it was observed back in the early 1970s ‘that there is little agreement on the description and definition of individualized foreign language instruction’ (Garfinkel, 1971: 379). The point was echoed a few years later by Grittner (1975: 323): it ‘means so many things to so many different people’. A UNESCO document (Chaix & O’Neil, 1978: 6) complained that ‘the term ‘individualization’ and the many expressions using the same root, such as ‘individualized learning’, are much too ambiguous’. Zoom forward to the present day and nothing has changed. Critiquing the British government’s focus on personalized learning, the Institute for Public Policy Research (Johnson, 2004: 17) wrote that it ‘remains difficult to be certain what the Government means by personalised learning’. In the U.S. context, a piece by Sean Cavanagh (2014) in Education Week (which is financially supported by the Gates Foundation) noted that although ‘the term “personalized learning” seems to be everywhere, there is not yet a shared understanding of what it means’. In short, as Arthur Levine has put it, the words personalized learning ‘generate more heat than light’.
Despite the lack of clarity about what precisely personalized learning actually is, it has been in the limelight of language teaching and learning since at least the 1930s, when Pendleton (1930: 195) described the idea as being more widespread than ever before. Zoom forward to the 1970s and we find it described as ‘one of the major movements in second-language education at the present time’ (Chastain, 1975: 334). In 1971, it was described as ‘a bandwagon onto which foreign language teachers at all levels are jumping’ (Altman & Politzer, 1971: 6). A little later, in the 1980s, ‘words or phrases such as ‘learner-centered’, ‘student-centered’, ‘personalized’, ‘individualized’, and ‘humanized’ appear as the most frequent modifiers of ‘instruction’’ in journals and conferences of foreign language education (Altman & James, 1980). Continue to the present day, and we find that personalized learning is at the centre of the educational policies of governments across the world. Between 2012 and 2015, the U.S. Department of Education threw over half a billion dollars at personalized learning initiatives (Bulger, 2016: 22). At the same time, there is massive sponsorship of personalized learning from some of the biggest philanthropic foundations (the William and Flora Hewlett Foundation, Rogers Family Foundation, Susan and Michael Dell Foundation, and the Eli and Edythe Broad Foundation) (Bulger, 2016: 22). The Bill & Melinda Gates Foundation has invested nearly $175 million in personalized learning development and Facebook’s Mark Zuckerberg is ploughing billions of dollars into it.
There has, however, been one constant: the belief that technology can facilitate the process of personalization (whatever that might be). Technology appears to offer the potential to realise the goal of personalized learning. We have come a long way from Sidney Pressey’s attempts in the 1920s to use teaching machines to individualize instruction. At that time, the machines were just one part of the programme (and not the most important). But each new technology has offered a new range of possibilities to be exploited and each new technology, its advocates argue, ‘will solve the problems better than previous efforts’ (Ferster, 2014: xii). With the advent of data-capturing learning technologies, it has now become virtually impossible to separate advocacy of personalized instruction from advocacy of digitalization in education. As the British Department for Education has put it, ‘central to personalised learning is schools’ use of data’ (DfES (2005) White Paper: Higher Standards, Better Schools for All. London, Department for Education and Skills, para 4.50). When the U.S. Department of Education threw half a billion dollars at personalized learning initiatives, the condition was that these projects ‘use collaborative, data-based strategies and 21st century tools to deliver instruction’ (Bulger, 2016: 22).
Is it just a coincidence that the primary advocates of personalized learning are either vendors of technology or are very close to them in the higher echelons of Hydra (World Economic Forum, World Bank, IMF, etc.)? ‘Personalized learning’ has ‘almost no descriptive value’: it is ‘a term that sounds good without the inconvenience of having any obviously specific pedagogical meaning’ (Feldstein & Hill, 2016: 30). It evokes positive responses, with its ‘nod towards more student-centered learning […], a move that honors the person learning not just the learning institution’ (Watters, 2014). As such, it is ‘a natural for marketing purposes’ since nobody in their right mind would want unpersonalized or depersonalized learning (Feldstein & Hill, 2016: 25). It’s ‘a slogan that nobody’s going to be against, and everybody’s going to be for. Nobody knows what it means, because it doesn’t mean anything. Its crucial value is that it diverts your attention from a question that does mean something: Do you support our policy?’ (Chomsky, 1997).
None of the above is intended to suggest that there might not be goals that come under the ‘personalized learning’ umbrella that are worth working towards. But that’s another story – one I will return to in another post. For the moment, it’s just worth remembering that, in one of the Marvel Comics stories, Captain America, who appeared to be fighting the depersonalized evils of the world, was actually a deep sleeper agent for Hydra.

References
Altman, H.B. & James, C.V. (eds.) 1980. Foreign Language Teaching: Meeting Individual Needs. Oxford: Pergamon Press
Altman, H.B. & Politzer, R.L. (eds.) 1971. Individualizing Foreign Language Instruction: Proceedings of the Stanford Conference, May 6 – 8, 1971. Washington, D.C.: Office of Education, U.S. Department of Health, Education, and Welfare
Bulger, M. 2016. Personalized Learning: The Conversations We’re Not Having. New York: Data and Society Research Institute.
Cavanagh, S. 2014. ‘What Is ‘Personalized Learning’? Educators Seek Clarity’ Education Week
Chaix, P., & O’Neil, C. 1978. A Critical Analysis of Forms of Autonomous Learning (Autodidaxy and Semi-autonomy in the Field of Foreign Language Learning. Final Report. UNESCO Doc Ed 78/WS/58
Chastain, K. 1975. ‘An Examination of the Basic Assumptions of “Individualized” Instruction’ The Modern Language Journal 59 / 7: 334 – 344
Chomsky, N. 1997. Media Control: The Spectacular Achievements of Propaganda. New York: Seven Stories Press
Feldstein, M. & Hill, P. 2016. ‘Personalized Learning: What it Really is and why it Really Matters’ EduCause Review March / April 2016: 25 – 35
Ferster, B. 2014. Teaching Machines. Baltimore: Johns Hopkins University Press
Garfinkel, A. 1971. ‘Stanford University Conference on Individualizing Foreign Language Instruction, May 6 – 8, 1971’ The Modern Language Journal 55 / 6: 378 – 381
Grittner, F. M. 1975. ‘Individualized Instruction: An Historical Perspective’ The Modern Language Journal 59 / 7: 323 – 333
Januszewski, A. 2001. Educational Technology: The Development of a Concept. Englewood, Colorado: Libraries Unlimited
Johnson, M. 2004. Personalised Learning – an Emperor’s Outfit? London: Institute for Public Policy Research
Pendleton, C. S. 1930. ‘Personalizing English Teaching’ Peabody Journal of Education 7 / 4: 195 – 200
Watters, A. 2014. ‘The Problem with “Personalization”’ Hack Education