
Since no single definition of critical thinking prevails (Dummett & Hughes, 2019: 2), discussions of the topic invariably begin with attempts to provide a definition. Lai (2011) offers an accessible summary of a range of possible meanings, but points out that, in educational contexts, its meaning is often rather vague and encompasses other concepts (such as higher order thinking skills) which also lack clarity. Paul Dummett and John Hughes (2019: 4) plump for ‘a mindset that involves thinking reflectively, rationally and reasonably’ – a definition which involves a vague noun (that could mean a fixed state of mind, a learned attitude, a disposition or a mood) and three highly subjective adverbs. I don’t think I could do any better. However, instead of looking for a definition, we can reach a sort of understanding by looking at examples of it. Dummett and Hughes’ book is extremely rich in practical examples, and the picture that emerges of critical thinking is complex and multifaceted.

As you might expect of a weasel word like ‘critical thinking’, there appears to be general agreement that it’s a ‘good thing’. Paul Dummett suggests that there are two common reasons for promoting the inclusion of critical thinking activities in the language classroom. The first of these is a desire to get students thinking for themselves. The second is the idea ‘that we live in an age of misinformation in which only the critically minded can avoid manipulation or slavish conformity’. Neither seems contentious at first glance, although he points out that ‘they tend to lead to a narrow application of critical thinking in ELT materials: that is to say, the analysis of texts and evaluation of the ideas expressed in them’. It’s the second of these rationales that I’d like to explore further.

Penny Ur (2020: 9) offers a more extended version of it:

The role of critical thinking in education has become more central in the 21st century, simply because there is far more information readily available to today’s students than there was in previous centuries (mainly, but not only, online), and it is vital for them to be able to deal with such input wisely. They need to be able to distinguish between what is important and what is trivial, between truth and lies, between fact and opinion, between logical argument and specious propaganda […] Without such skills and awareness of the need to exercise them, they are liable to find themselves victims of commercial or political interests, their thinking manipulated by persuasion disguised as information.

In the same edited collection Olja Milosevic (2020:18) echoes Ur’s argument:

Critical thinking becomes even more important as communication increasingly moves online. Students find an overwhelming amount of information and need to be taught how to evaluate its relevance, accuracy and quality. If teachers do not teach students how to go beyond surface meaning, students cannot be expected to practise it.

In the passages I’ve quoted, these writers are referring to one particular kind of critical thinking. The ability to critically evaluate the reliability, accuracy, etc. of a text is generally considered to be a part of what is usually called ‘media information literacy’. In these times of fake news, so the argument goes, it is vital for students to develop (with their teachers’ help) the necessary skills to spot fake news when they see it. The most prototypical critical thinking activity in ELT classrooms is probably one in which students analyse some fake news, such as the website about the Pacific Tree Octopus (which is the basis of a lesson in Dudeney et al., 2013: 198 – 203).

Before considering media information literacy in more detail, it’s worth noting in passing that a rationale for critical thinking activities is no rationale at all if it concerns only one aspect of critical thinking: it attributes to the whole (critical thinking) what applies only to a part (media information literacy).

There is no shortage of good (free) material available for dealing with fake news in the ELT classroom. Examples include work by James Taylor, Chia Suan Chong and Tyson Seburn. Material of this kind may result in lively, interesting, cognitively challenging, communicative and, therefore, useful lessons. But how likely is it that material of this kind will develop learners’ media information literacy and, by extension therefore, their critical thinking skills? How likely is it that teaching material of this kind will help people identify (and reject) fake news? Is it possible that material of this kind is valuable despite its rationale, rather than because of it? In the spirit of rational, reflective and reasonable thinking, these are questions that seem to be worth exploring.

ELT classes and fake news

James Taylor has suggested that the English language classroom is ‘the perfect venue for [critical thinking] skills to be developed’. Although academic English courses necessarily involve elements of critical thinking, I’m not so sure that media information literacy (and, specifically, the identification of fake news) can be adequately addressed in general English classes. There are so many areas, besides those that are specifically language-focussed, competing for space in language classes (think of all those other 21st century skills), that it is hard to see how sufficient time can be found for real development of this skill. It requires modelling, practice of the skill, feedback on the practice, and more practice (Mulnix, 2010): it needs time. Fake news activities in the language classroom would, of course, be of greater value if they were part of an integrated approach across the curriculum. Unfortunately, this is rarely the case.

Information literacy skills

Training materials for media information literacy usually involve a number of stages. These include things like fact-checking, triangulation of different sources, consideration of the web address, analysis of images and of other items on the site, source citation and so on. The problem, however, is that news-fakers have become so good at what they do. The tree octopus site is very crude in comparison to what can be produced nowadays by people who have learnt to profit from the online economy of misinformation. Facebook employs an army of algorithmic and human fact-checkers, but still struggles. The bottom line is that background knowledge is needed (this is as true for media information literacy as it is for critical thinking more generally) (Willingham, 2007). With news, the scope of domain knowledge is so vast that it is extremely hard to transfer one’s ability to critically evaluate one particular piece of news to another. We are all fooled from time to time.

Media information literacy interventions: research on effectiveness

With the onset of COVID-19, the ability to identify fake news has become, more than ever, a matter of life and death. There is little question that this ability correlates strongly with analytic thinking (see, for example, Stanley et al., 2020). What is much less clear is how we can go about promoting analytic thinking. Analytic thinking comes in different varieties, and another hot-off-the-press research study into susceptibility to COVID-19 fake news (Roozenbeek et al., 2020) has found that the ability to spot fake news may correlate more strongly with numerical literacy than with reasoning ability. In fact, the research team found that a lack of numerical literacy was the most consistent predictor of susceptibility to misinformation about COVID-19. Perhaps we are attempting to develop the wrong kind of analytic thinking?

In educational contexts, attempts to promote media information literacy typically seek to develop reasoning abilities, and the evidence for their effectiveness is mixed. First of all, it needs to be said that ‘little large-scale evidence exists on the effectiveness of promoting digital media literacy as a response to online misinformation’ (Guess et al., 2020). An early meta-analysis (Jeong et al., 2012) found that such interventions had a positive effect, when the interventions were long (not one-off), but impacted more on students’ knowledge than they did on their behaviour. More recently, Huguet et al. (2019) were unable to draw ‘definitive conclusions from past research, such as what kinds of media literacy practices work and under what conditions’. And this year, a study by Guess et al. (2020) did not generate sufficient evidence ‘to conclude that the [media information literacy] intervention changed real-world consumption of false news’. I am unaware of any robust research in this area in the context of ELT.

It’s all rather disappointing. Why are we not better at it? After all, teachers of media studies have been exploring pathways for many years now. One possible answer is this: Media information literacy, like critical thinking more generally, is a skill that is acquirable, but it can only be acquired if there is a disposition to do so. The ability to think critically and the disposition to do so are separate entities (Facione, 2000). Training learners to be more critical in their approach to media information may be so much pissing in the wind if the disposition to be sceptical is not there. Shaping dispositions is a much harder task than training skills.

Both of the research studies into susceptibility to COVID-19 misinformation that I referred to earlier in this section underscore the significance of dispositions to analytic thinking. Roozenbeek et al. (2020) found, in line with much previous research (for example, Jost et al., 2018), that political conservatism is associated with a slightly higher susceptibility to misinformation. Political views (on either side of the political spectrum) rarely change as a result of exposure to science or reasoned thinking. They also found that ‘self-identifying as a member of a minority predicts susceptibility to misinformation about the virus in all countries surveyed’ (except, interestingly, in the UK). Again, when issues of identity are at stake, emotional responses tend to trump rational ones.

Rational, reflective and reasonable thinking about media information literacy leads to an uncomfortable red-pill rabbit-hole. This is how Bulger and Davidson (2018) put it:

The extent to which media literacy can combat the problematic news environment is an open question. Is denying the existence of climate change a media literacy problem? Is believing that a presidential candidate was running a sex-trafficking ring out of a pizza shop a media literacy problem? Can media literacy combat the intentionally opaque systems of serving news on social media platforms? Or intentional campaigns of disinformation?

Teachers and fake news

The assumption that the critical thinking skills of young people can be developed through the intervention of their teachers is rarely problematized. It should be. A recent study of Spanish pre-service teachers (Fuertes-Prieto et al., 2020) showed that their ‘level of belief in pseudoscientific issues is comparable, or even higher in some cases to those of the general population’. There is no reason to believe that this changes after they have qualified. Teachers are probably no more likely to change their beliefs when presented with empirical evidence (Menz et al., 2020) than people from any other profession. Research has tended to focus on teachers’ lack of critical thinking in areas related to their work, but things may be no different in the wider world. It is estimated that over a quarter of teachers in the US voted for the world’s greatest peddler of fake news in the 2016 presidential election.

It is also interesting to note that the sharing of fake news on social media is much more widespread among older people (including US teachers, whose average age is 42.4) than among those under 30 (Bouygues, 2019).

Institutional contexts and fake news

Cory Doctorow has suggested that the fake news problem is not a problem of identifying what is true and what is fake, but a problem ‘about how we know whether something is true. We’re not disagreeing about facts, we’re disagreeing about epistemology’. In a post-modernist world of ‘Truth Decay’ (Kavanagh & Rich, 2018), where there is ‘a blurring of the line between opinion and fact’, epistemological authority is a rare commodity. Medicine, social sciences and applied linguistics are all currently experiencing a ‘replication crisis’ (Ioannidis, 2005), and we have had a British education minister declaring that ‘people in this country have had enough of experts’.

News reporting has always relied to some extent on trust in the reliability of the news source. The BBC or CNN might attempt to present themselves as more objective than, say, Fox News or InfoWars, but trust in all news outlets has collapsed globally in recent years. As Michael Schudson has written in the Columbia Journalism Review, ‘all news outlets write from a set of values, not simply from a disinterested effort at truth’. If a particular news channel manifestly shares different values from your own, it is easy to reject the veracity of the news it reports. Believers in COVID conspiracy theories often hold their views precisely because of their rejection of the epistemological authority of mainstream news and the WHO or governments who support lockdown measures.

The training of media information literacy in schools is difficult because, for many people in the US (and elsewhere), education is not dissimilar to mainstream media. They ‘are seen as the enemy — two institutions who are trying to have power over how people think. Two institutions that are trying to assert authority over epistemology’ (boyd, 2018). Schools have always been characterized by imbalances in power (between students and teachers / administrators), and this power dynamic is not conducive to open-minded enquiry. Children are often more aware of the power of their teachers than they are accepting of their epistemological authority. They are enjoined to be critical thinkers, but only about certain things and only up to a certain point. One way for children to redress the power imbalance is to reject the epistemological authority of their teachers. I think this may explain why a group of young children I observed recently coming out of a lesson devoted to environmental issues found such pleasure in joking about Greta ‘Thunfisch’.

Power relationships in schools are reflected and enacted in the interaction patterns between teachers and students. The most common of these is ‘initiation-response-feedback (IRF)’ and it is unlikely that this is particularly conducive to rational, reflective and reasonable thinking. At the same time, as Richard Paul, one of the early advocates of critical thinking in schools, noted, much learning activity is characterised by lower order thinking skills, especially memorization (Paul, 1992: 22). With this kind of backdrop, training in media information literacy is more likely to be effective if it goes beyond the inclusion of a few ‘fake news’ exercises: a transformation in the way that the teaching is done will also be needed. Benesch (1999) describes this as a more ‘dialogic’ approach and there is some evidence that a more dialogic approach can have a positive impact on students’ dispositions (e.g. Hajhosseiny, 2012).

I think that David Buckingham (2019a) captures the educational problem very neatly:

There’s a danger here of assuming that we are dealing with a rational process – or at least one that can, by some pedagogical means, be made rational. But from an educational perspective, we surely have to begin with the question of why people might believe apparently ‘fake’ news in the first place. Where we decide to place our trust is as much to do with fantasy, emotion and desire, as with rational calculation. All of us are inclined to believe what we want to believe.

Fake news: a problem or a symptom of a problem?

There has always been fake news. The big problem now is ‘the speed and ease of its dissemination, and it exists primarily because today’s digital capitalism makes it extremely profitable – look at Google and Facebook – to produce and circulate false but click-worthy narratives’ (Morozov, 2017). Fake news taps into and amplifies broader tendencies and divides in society: the problem is not straightforward and is unlikely to be easy to eradicate (Buckingham, 2019a: 3).

There is increasing discussion of media regulation, and the recent banning by Facebook of Holocaust denial and QAnon is a recognition that some regulation cannot now be avoided. But strict regulations would threaten the ‘basic business model, and the enormous profitability’ of social media companies (Buckingham, 2019b), and there are real practical and ethical problems in working out exactly how regulation would happen. Governments do not know what to do.

Lacking any obvious alternative, media information literacy is often seen as the solution: can’t we ‘fact check and moderate our way out of this conundrum’ (boyd, 2018)? danah boyd’s stark response is, no, this will fail. It’s an inadequate solution to an oversimplified problem (Buckingham, 2019a).

Along with boyd and Buckingham, I’m not trying to argue that we drop media information literacy activities from educational (including ELT) programmes. Quite the opposite. But if we want our students to think reflectively, rationally and reasonably, I think we will need to start by doing the same.

References

Benesch, S. (1999). Thinking critically, thinking dialogically. TESOL Quarterly, 33: pp. 573 – 580

Bouygues, H. L. (2019). Fighting Fake News: Lessons From The Information Wars. Reboot Foundation https://reboot-foundation.org/fighting-fake-news/

boyd, d. (2018). You Think You Want Media Literacy… Do You? Data and Society: Points https://points.datasociety.net/you-think-you-want-media-literacy-do-you-7cad6af18ec2

Buckingham, D. (2019a). Teaching Media in a ‘Post-Truth’ Age: Fake News, Media Bias and the Challenge for Media Literacy Education. Cultura y Educación 31(2): pp. 1-19

Buckingham, D. (2019b). Rethinking digital literacy: Media education in the age of digital capitalism. https://ddbuckingham.files.wordpress.com/2019/12/media-education-in-digital-capitalism.pdf

Bulger, M. & Davidson, P. (2018). The Promises, Challenges and Futures of Media Literacy. Data and Society. https://datasociety.net/pubs/oh/DataAndSociety_Media_Literacy_2018.pdf

Doctorow, C. (2017). Three kinds of propaganda, and what to do about them. boingboing 25th February 2017, https://boingboing.net/2017/02/25/counternarratives-not-fact-che.html

Dudeney, G., Hockly, N. & Pegrum, M. (2013). Digital Literacies. Harlow: Pearson Education

Dummett, P. & Hughes, J. (2019). Critical Thinking in ELT. Boston: National Geographic Learning

Facione, P. A. (2000). The disposition toward critical thinking: Its character, measurement, and relation to critical thinking skill. Informal Logic, 20(1), 61–84.

Fuertes-Prieto, M.Á., Andrés-Sánchez, S., Corrochano-Fernández, D. et al. (2020). Pre-service Teachers’ False Beliefs in Superstitions and Pseudosciences in Relation to Science and Technology. Science & Education 29, 1235–1254 (2020). https://doi.org/10.1007/s11191-020-00140-8

Guess, A. M., Lerner, M., Lyons, B., Montgomery, J. M., Nyhan, B., Reifler, J. & Sircar, N. (2020). A digital media literacy intervention increases discernment between mainstream and false news in the United States and India. Proceedings of the National Academy of Sciences Jul 2020, 117 (27) 15536-15545; DOI: 10.1073/pnas.1920498117

Hajhosseiny, M. (2012). The Effect of Dialogic Teaching on Students’ Critical Thinking Disposition. Procedia – Social and Behavioral Sciences, 69: pp. 1358 – 1368

Huguet, A., Kavanagh, J., Baker, G. & Blumenthal, M. S. (2019). Exploring Media Literacy Education as a Tool for Mitigating Truth Decay. RAND Corporation, https://www.rand.org/content/dam/rand/pubs/research_reports/RR3000/RR3050/RAND_RR3050.pdf

Ioannidis, J. P. A. (2005). Why Most Published Research Findings Are False. PLoS Medicine 2 (8): e124. https://doi.org/10.1371/journal.pmed.0020124

Jeong, S. H., Cho, H., & Hwang, Y. (2012). Media literacy interventions: A meta-analytic review. Journal of Communication, 62, pp. 454–472

Jones-Jang, S. M., Mortensen, T. & Liu, J. (2019). Does media literacy help identification of fake news? Information literacy helps, but other literacies don’t. American Behavioral Scientist, pp. 1 – 18, doi:10.1177/0002764219869406

Jost, J. T., van der Linden, S., Panagopoulos, C. & Hardin, C. D. (2018). Ideological asymmetries in conformity, desire for shared reality, and the spread of misinformation. Current Opinion in Psychology, 23: pp. 77-83. doi:10.1016/j.copsyc.2018.01.003

Kavanagh, J. & Rich, M. D. (2018). Truth Decay: An Initial Exploration of the Diminishing Role of Facts and Analysis in American Public Life. RAND Corporation, https://www.rand.org/pubs/research_reports/RR2314.html

Lai, E. R. (2011). Critical Thinking: A Literature Review. Pearson. http://images.pearsonassessments.com/images/tmrs/CriticalThinkingReviewFINAL.pdf

Menz, C., Spinath, B. & Seifried, E. (2020). Misconceptions die hard: prevalence and reduction of wrong beliefs in topics from educational psychology among preservice teachers. European Journal of Psychology of Education https://doi.org/10.1007/s10212-020-00474-5

Milosevic, O. (2020). Promoting critical thinking in the EFL classroom. In Mavridi, S. & Xerri, D. (Eds.) English for 21st Century Skills. Newbury, Berks.: Express Publishing. pp.17 – 22

Morozov, E. (2017). Moral panic over fake news hides the real enemy – the digital giants. The Guardian, 8 January 2017 https://www.theguardian.com/commentisfree/2017/jan/08/blaming-fake-news-not-the-answer-democracy-crisis

Mulnix, J. W. (2010). Thinking critically about critical thinking. Educational Philosophy and Theory

Paul, R. W. (1992). Critical thinking: What, why, and how? New Directions for Community Colleges, 77: pp. 3–24.

Roozenbeek, J., Schneider, C. R., Dryhurst, S., Kerr, J., Freeman, A. L. J., Recchia, G., van der Bles, A. M. & van der Linden, S. (2020). Susceptibility to misinformation about COVID-19 around the world. Royal Society Open Science, 7 (10) https://doi.org/10.1098/rsos.201199

Stanley, M., Barr, N., Peters, K. & Seli, P. (2020). Analytic-thinking predicts hoax beliefs and helping behaviors in response to the COVID-19 pandemic. PsyArXiv Preprints doi:10.31234/osf.io/m3vt

Ur, P. (2020). Critical Thinking. In Mavridi, S. & Xerri, D. (Eds.) English for 21st Century Skills. Newbury, Berks.: Express Publishing. pp.9 – 16

Willingham, D. T. (2007). Critical Thinking: Why Is It So Hard to Teach? American Educator Summer 2007: pp. 8 – 19

In the first post in this 3-part series, I focussed on data collection practices in a number of ELT websites, as a way of introducing ‘critical data literacy’. Here, I explore the term in more detail.

Although the term ‘big data’ has been around for a while (see this article and infographic) it’s less than ten years ago that it began to enter everyday language, and found its way into the OED (2013). In the same year, Viktor Mayer-Schönberger and Kenneth Cukier published their best-selling ‘Big Data: A Revolution That Will Transform How We Live, Work, and Think’ (2013) and it was hard to avoid enthusiastic references in the media to the transformative potential of big data in every sector of society.

Since then, the use of big data and analytics has become ubiquitous. Massive data collection (and data surveillance) has now become routine and companies like Palantir, which specialise in big data analytics, have become part of everyday life. Palantir’s customers include the LAPD, the CIA, the US Immigration and Customs Enforcement (ICE) and the British Government. Its recent history includes links with Cambridge Analytica, assistance in an operation to arrest the parents of illegal migrant children, and a racial discrimination lawsuit where the company was accused of having ‘routinely eliminated’ Asian job applicants (settled out of court for $1.7 million).

Unsurprisingly, the datafication of society has not gone entirely uncontested. Whilst the vast majority of people seem happy to trade their personal data for convenience and connectivity, a growing number are concerned about who benefits most from this trade-off. On an institutional level, the EU introduced the General Data Protection Regulation (GDPR), which led to Google being fined €50 million for insufficient transparency in their privacy policy and their practices of processing personal data for the purposes of behavioural advertising. In the intellectual sphere, there has been a recent spate of books that challenge the practices of ubiquitous data collection, coining new terms like ‘surveillance capitalism’, ‘digital capitalism’ and ‘data colonialism’. Here are four recent books that I have found particularly interesting.

Beer, D. (2019). The Data Gaze. London: Sage

Couldry, N. & Mejias, U. A. (2019). The Costs of Connection. Stanford: Stanford University Press

Sadowski, J. (2020). Too Smart. Cambridge, Mass.: MIT Press

Zuboff, S. (2019). The Age of Surveillance Capitalism. New York: Public Affairs

The use of big data and analytics in education is also now a thriving industry, with its supporters claiming that these technologies can lead to greater personalization, greater efficiency of instruction and greater accountability. Opponents (myself included) argue that none of these supposed gains have been empirically demonstrated, and that the costs to privacy, equity and democracy outweigh any potential gains. There is a growing critical literature and useful, recent books include:

Bradbury, A. & Roberts-Holmes, G. (2018). The Datafication of Primary and Early Years Education. Abingdon: Routledge

Jarke, J. & Breiter, A. (Eds.) (2020). The Datafication of Education. Abingdon: Routledge

Williamson, B. (2017). Big Data in Education: The digital future of learning, policy and practice. London: Sage

Concomitant with the rapid growth in the use of digital tools for language learning and teaching, and therefore the rapid growth in the amount of data that learners were (mostly unwittingly) giving away, came a growing interest in the need for learners to develop a set of digital competencies, or literacies, which would enable them to use these tools effectively. In the same year that Mayer-Schönberger and Cukier brought out their ‘Big Data’ book, the first book devoted to digital literacies in English language teaching came out (Dudeney et al., 2013). They defined digital literacies as the individual and social skills needed to effectively interpret, manage, share and create meaning in the growing range of digital communication channels (Dudeney et al., 2013: 2). The book contained a couple of activities designed to raise students’ awareness of online identity issues, along with others intended to promote critical thinking about digitally-mediated information (what the authors call ‘information literacy’), but ‘critical literacy’ was missing from the authors’ framework.

Critical thinking and critical literacy are not the same thing. Although there is no generally agreed definition of the former (with a small ‘c’), it is focussed primarily on logic and comprehension (Lee, 2011). Paul Dummett and John Hughes (2019: 4) describe it as ‘a mindset that involves thinking reflectively, rationally and reasonably’. The prototypical critical thinking activity involves the analysis of a piece of fake news (e.g. the task where students look at a website about tree octopuses in Dudeney et al., 2013: 198 – 203). Critical literacy, on the other hand, involves standing back from texts and technologies and viewing them as ‘circulating within a larger social and textual context’ (Warnick, 2002). Consideration of the larger social context necessarily entails consideration of unequal power relationships (Lee, 2011; Darvin, 2017), such as that between Google and the average user of Google. And it follows from this that critical literacy has a socio-political emancipatory function.

Critical digital literacy is now a growing field of enquiry (e.g. Pötzsch, 2019) and there is an awareness that digital competence frameworks, such as the Digital Competence Framework of the European Commission, are incomplete and out of date without the inclusion of critical digital literacy. Dudeney et al. (2013) clearly recognise the importance of including critical literacy in frameworks of digital literacies. In Pegrum et al. (2018, unfortunately paywalled), they update the framework from their 2013 book, and the biggest change is the inclusion of critical literacy. They divide this into the following:

  • critical digital literacy – closely related to information literacy
  • critical mobile literacy – focussing on issues brought to the fore by mobile devices, ranging from protecting privacy through to safeguarding mental and physical health
  • critical material literacy – concerned with the material conditions underpinning the use of digital technologies, ranging from the socioeconomic influences on technological access to the environmental impacts of technological manufacturing and disposal
  • critical philosophical literacy – concerned with the big questions posed to and about humanity as our lives become conjoined with the existence of our smart devices, robots and AI
  • critical academic literacy, which refers to the pressing need to conduct meaningful studies of digital technologies in place of what is at times ‘cookie-cutter’ research

I’m not entirely convinced by the subdivisions, but labelling in this area is still in its infancy. My particular interest here, in critical data literacy, seems to span a number of their subdivisions. And the term that I am using, ‘critical data literacy’, which I’ve taken from Tygel & Kirsch (2016), is sometimes referred to as ‘critical big data literacy’ (Sander, 2020a) or ‘personal data literacy’ (Pangrazio & Selwyn, 2019). Whatever it is called, it is the development of ‘informed and critical stances toward how and why [our] data are being used’ (Pangrazio & Selwyn, 2018). One of the two practical activities in the Pegrum et al. (2018) article looks at precisely this area (the task requires students to consider the data that is collected by fitness apps). It will be interesting to see, when the new edition of the ‘Digital Literacies’ book comes out (perhaps some time next year), how many other activities take a more overtly critical stance.

In the next post, I’ll be looking at a range of practical activities for developing critical data literacy in the classroom. This involves both bridging the gaps in knowledge (about data, algorithms and online privacy) and learning, practically, how to implement ‘this knowledge for a more empowered internet usage’ (Sander, 2020b).

Without wanting to invalidate the suggestions in the next post, a word of caution is needed. Just as critical thinking activities in the ELT classroom cannot be assumed to lead to any demonstrable increase in critical thinking (although there may be other benefits to the activities), activities to promote critical literacy cannot be assumed to lead to any actual increase in critical literacy. The reaction of many people may well be ‘It’s not like it’s life or death or whatever’ (Pangrazio & Selwyn, 2018). And, perhaps, education is rarely, if ever, a solution to political and social problems, anyway. And perhaps, too, we shouldn’t worry too much about educational interventions not leading to their intended outcomes. Isn’t that almost always the case? But, with those provisos in mind, I’ll come back next time with some practical ideas.

REFERENCES

Darvin R. (2017). Language, Ideology, and Critical Digital Literacy. In: Thorne S., May S. (eds) Language, Education and Technology. Encyclopedia of Language and Education (3rd ed.). Springer, Cham. pp. 17 – 30 https://doi.org/10.1007/978-3-319-02237-6_35

Dudeney, G., Hockly, N. & Pegrum, M. (2013). Digital Literacies. Harlow: Pearson Education

Dummett, P. & Hughes, J. (2019). Critical Thinking in ELT. Boston: National Geographic Learning

Lee, C. J. (2011). Myths about critical literacy: What teachers need to unlearn. Journal of Language and Literacy Education [Online], 7 (1), 95-102. Available at http://www.coa.uga.edu/jolle/2011_1/lee.pdf

Mayer-Schönberger, V. & Cukier, K. (2013). Big Data: A Revolution That Will Transform How We Live, Work, and Think. London: John Murray

Pangrazio, L. & Selwyn, N. (2018). ‘It’s not like it’s life or death or whatever’: young people’s understandings of social media data. Social Media + Society, 4 (3): pp. 1–9. https://journals.sagepub.com/doi/pdf/10.1177/2056305118787808

Pangrazio, L. & Selwyn, N. (2019). ‘Personal data literacies’: A critical literacies approach to enhancing understandings of personal digital data. New Media and Society, 21 (2): pp. 419 – 437

Pegrum, M., Dudeney, G. & Hockly, N. (2018). Digital literacies revisited. The European Journal of Applied Linguistics and TEFL, 7 (2), pp. 3-24

Pötzsch, H. (2019). Critical Digital Literacy: Technology in Education Beyond Issues of User Competence and Labour-Market Qualifications. tripleC: Communication, Capitalism & Critique, 17: pp. 221 – 240 Available at https://www.triple-c.at/index.php/tripleC/article/view/1093

Sander, I. (2020a). What is critical big data literacy and how can it be implemented? Internet Policy Review, 9 (2). DOI: 10.14763/2020.2.1479 https://www.econstor.eu/bitstream/10419/218936/1/2020-2-1479.pdf

Sander, I. (2020b). Critical big data literacy tools – Engaging citizens and promoting empowered internet usage. Data & Policy, 2: DOI: https://doi.org/10.1017/dap.2020.5

Tygel, A. & Kirsch, R. (2016). Contributions of Paulo Freire for a Critical Data Literacy: a Popular Education Approach. The Journal of Community Informatics, 12 (3). Available at http://www.ci-journal.net/index.php/ciej/article/view/1296

Warnick, B. (2002). Critical Literacy in a Digital Era. Mahwah, NJ, Lawrence Erlbaum Associates

If you cast your eye over the English language teaching landscape, you can’t help noticing a number of prominent features that weren’t there, or at least were much less visible, twenty years ago. I’d like to highlight three. First, there is the interest in life skills (aka 21st century skills). Second, there is the use of digital technology to deliver content. And third, there is a concern with measuring educational outputs through frameworks such as the Pearson GSE. In this post, I will focus primarily on the last of these, with a closer look at measuring teacher performance.

Recent years have seen the development of a number of frameworks for evaluating teacher competence in ELT, including those produced by the British Council and Cambridge.

TESOL has also produced a set of guidelines for developing professional teaching standards for EFL.

Frameworks such as these were not always intended as tools to evaluate teachers. The British Council’s framework, for example, was apparently designed for teachers to understand and plan their own professional development. Similarly, the Cambridge framework says that it is for teachers to see where they are in their development – and think about where they want to go next. But much like the CEFR for language competence, frameworks can be used for purposes rather different from their designers’ intentions. I think it is likely that frameworks such as these are more often used to evaluate teachers than for teachers to evaluate themselves.

But where did the idea for such frameworks come from? Was there a suddenly perceived need for things like this to aid in self-directed professional development? Were teachers’ associations calling out for frameworks to help their members? Even if that were the case, it would still be useful to know why, and why now.

One possibility is that the interest in life skills, in digital technology and in the measurement of educational outputs has come about as a result of what has been called the Global Educational Reform Movement, or GERM (Sahlberg, 2016). GERM dates back to the 1980s and the shifts (especially in the United States under Reagan and the United Kingdom under Thatcher) in education policy towards more market-led approaches which emphasize (1) greater competition between educational providers, (2) greater autonomy from the state for educational providers (and therefore a greater role for private suppliers), (3) greater choice of educational provider for students and their parents, and (4) standardized tests and measurements which allow consumers of education to make more informed choices. One of the most significant GERM vectors is the World Bank.

The interest in incorporating the so-called 21st century skills as part of the curriculum can be traced back to the early 1980s when the US National Commission on Excellence in Education recommended the inclusion of a range of skills, which eventually crystallized into the four Cs of communication, collaboration, critical thinking and creativity. The labelling of this skill set as ‘life skills’ or ‘21st century skills’ was always something of a misnomer: the reality was that these were the soft skills required by the world of work. The key argument for their inclusion in the curriculum was that they were necessary for the ‘competitiveness and wealth of corporations and countries’ (Trilling & Fadel, 2009: 7). Unsurprisingly, the World Bank, whose interest in education extends only so far as its economic value, embraced the notion of ‘life skills’ with enthusiasm. Its document ‘Life skills : what are they, why do they matter, and how are they taught?’ (World Bank, 2013), makes the case very clearly. It took a while for the world of English language teaching to get on board, but by 2012, Pearson was already sponsoring a ‘signature event’ at IATEFL Glasgow entitled ‘21st Century Skills for ELT’. Since then, the currency of ‘life skills’ as an ELT buzz phrase has not abated.

Just as the World Bank’s interest in ‘life skills’ is motivated by the perceived need to prepare students for the world of work (for participation in the ‘knowledge economy’), the Bank emphasizes the classroom use of computers and resources from the internet:

Information and communication technology (ICT) allows the adaptation of globally available information to local learning situations. […] A large percentage of the World Bank’s education funds are used for the purchase of educational technology. […] According to the Bank’s figures, 40 per cent of their education budget in 2000 and 27 per cent in 2001 was used to purchase technology. (Spring, 2015: 50)

Digital technology is also central to capturing data, which allows for the measurement of educational outputs. As befits an organisation of economists that is interested in the cost-effectiveness of investments in education, it accords enormous importance to what are thought to be empirical measures of accountability. So intrinsic to the Bank’s approach is this concern with measurement that ‘the Bank’s implicit message to national governments seems to be: “improve your data collection capacity so that we can run more reliable cross-country analysis and regressions”’ (Verger & Bonal, 2012: 131).

Measuring the performance of teachers is, of course, a part of assessing educational outputs. The World Bank, which sees global education as fundamentally ‘broken’, has, quite recently, turned more of its attention to the role of teachers. A World Bank blog from 2019 explains the reasons:

A growing body of evidence suggests the learning crisis is, at its core, a teaching crisis. For students to learn, they need good teachers—but many education systems pay little attention to what teachers know, what they do in the classroom, and in some cases whether they even show up. Rapid technological change is raising the stakes. Technology is already playing a crucial role in providing support to teachers, students, and the learning process more broadly. It can help teachers better manage the classroom and offer different challenges to different students. And technology can allow principals, parents, and students to interact seamlessly.

A key plank in the World Bank’s attempts to implement its educational vision is its System Assessment and Benchmarking for Education Results (SABER), which I will return to in due course. As part of its SABER efforts, last year the World Bank launched its ‘Teach’ tool. This tool is basically an evaluation framework. Videos of lessons are recorded and coded for indicators of teacher efficiency by coders who can be ‘90% reliable’ after only four days of training. The coding system focuses on the time that students spend on-task, but also on ‘life skills’ like collaboration and critical thinking (see below).

Teach framework

Like the ELT frameworks, it can be used as a professional development tool, but, like them, it may also be used for summative evaluation.

The connections between those landmarks on the ELT landscape and the concerns of the World Bank are not, I would suggest, coincidental. The World Bank is, of course, not the only player in GERM, but it is a very special case. It is the largest single source of external financing in ‘developing countries’ (Beech, 2009: 345), managing a portfolio of $8.9 billion, with operations in 70 countries as of August 2013 (Spring, 2015: 32). Its loans come with conditions attached which tie the borrowing countries to GERM objectives. Arguably of even greater importance than its influence through funding is the Bank’s direct entry into the world of ideas:

The Bank yearns for a deeper and more comprehensive impact through avenues of influence transcending both project and program loans. Not least in education, the World Bank is investing much in its quest to shape global opinion about economic, developmental, and social policy. Rather than imposing views through specific loan negotiations, Bank style is broadening in attempts to lead borrower country officials to its preferred way of thinking. (Jones, 2007: 259).

The World Bank sees itself as a Knowledge Bank and acts accordingly. Rizvi and Lingard (2010: 48) observe that ‘in many nations of the Global South, the only extant education policy analysis is research commissioned by donor agencies such as the World Bank […] with all the implications that result in relation to problem setting, theoretical frameworks and methodologies’. Hundreds of academics are engaged to do research related to the Bank’s areas of educational interest, and ‘the close links with the academic world give a strong credibility to the ideas disseminated by the Bank […] In fact, many ideas that acquired currency and legitimacy were originally proposed by them. This is the case of testing students and using the results to evaluate progress in education’ (Castro, 2009: 472).

Through a combination of substantial financial clout and relentless marketing (Selwyn, 2013: 50), the Bank has succeeded in shaping global academic discourse. In partnership with similar institutions, it has introduced a way of classifying and thinking about education (Beech, 2009: 352). It has become, in short, a major site ‘for the organization of knowledge about education’ (Rizvi & Lingard, 2010: 79), wielding ‘a degree of power that has arguably enabled it to shape the educational agendas of nations throughout the Global South’ and beyond (Menashy, 2012).

So, is there any problem in the world of ELT taking up the inclusion of ‘life skills’? I think there is. The first problem is one of definition. Creativity and critical thinking are very poorly defined, meaning very different things to different people, so it is not always clear what is being taught. Following on from this, there is substantial debate about whether such skills can actually be taught at all, and, if they can, how they should be taught. It seems highly unlikely that the tokenistic way in which they are ‘taught’ in most published ELT courses can have any positive impact. But this is not my main reservation, which is that, by and large, we have come to uncritically accept the idea that English language learning is mostly concerned with preparation for the workplace (see my earlier post ‘The EdTech Imaginary in ELT’).

Is there any problem with the promotion of digital technologies in ELT? Again, I think there is, and a good proportion of the posts on this blog have argued for the need for circumspection in rolling out more technology in language learning and teaching. My main reason is that while it is clear that this trend is beneficial to technology vendors, it is much less clear that advantages will necessarily accrue to learners. Beyond this, there must be serious concerns about data ownership, privacy, and the way in which the datafication of education, led by businesses and governments in the Global North, is changing what counts as good education, a good student or an effective teacher, especially in the Global South. ‘Data and metrics,’ observe Williamson et al. (2020: 353), ‘do not just reflect what they are designed to measure, but actively loop back into action that can change the very thing that was measured in the first place’.

And what about tools for evaluating teacher competences? Here I would like to provide a little more background. There is, first of all, a huge question mark about how accurately such tools measure what they are supposed to measure. This may not matter too much if the tool is only used for self-evaluation or self-development, but ‘once smart systems of data collection and social control are available, they are likely to be widely applied for other purposes’ (Sadowski, 2020: 138). Jaime Saavedra, head of education at the World Bank, insists that the World Bank’s ‘Teach’ tool is not for evaluation and is not useful for firing teachers who perform badly.

Saavedra needs teachers to buy into the tool, so he obviously doesn’t want to scare them off. However, ‘Teach’ clearly is an evaluation tool (if not, what is it?) and, as with other tools (I’m thinking of the CEFR and teacher competency frameworks in ELT), its purposes will evolve. Eric Hanushek, an education economist at Stanford University, has commented that ‘this is a clear evaluation tool at the probationary stage … It provides a basis for counseling new teachers on how they should behave … but then again if they don’t change over the first few years you also have information you should use.’

At this point, it is useful to take a look at the World Bank’s attitudes towards teachers. Teachers are seen to be at the heart of the ‘learning crisis’. However, the greatest focus in World Bank documents is on (1) teacher absenteeism in some countries, (2) unskilled and demotivated teachers, and (3) the reluctance of teachers and their unions to back World Bank-sponsored reforms. As real as these problems are, it is important to understand that the Bank has been complicit in them:

For decades, the Bank has criticised pre-service and in-service teacher training as not cost-effective. For decades, the Bank has been pushing the hiring of untrained contract teachers as a cheap fix and a way to get around teacher unions – and contract teachers are again praised in the World Bank Development Report (WDR). This contradicts the occasional places in the WDR in which the Bank argues that developing countries need to follow the lead of the few countries that attract the best students to teaching, improve training, and improve working conditions. There is no explicit evidence offered at all for the repeated claim that teachers are unmotivated and need to be controlled and monitored to do their job. The Bank has a long history of blaming teachers and teacher unions for educational failures. The Bank implicitly argues that the problem of teacher absenteeism, referred to throughout the report, means teachers are unmotivated, but that simply is not true. Teacher absenteeism is not a sign of low motivation. Teacher salaries are abysmally low, as is the status of teaching. Because of this, teaching in many countries has become an occupation of last resort, yet it still attracts dedicated teachers. Once again, the Bank has been very complicit in this state of affairs as it, and the IMF, for decades have enforced neoliberal, Washington Consensus policies which resulted in government cutbacks and declining real salaries for teachers around the world. It is incredible that economists at the Bank do not recognise that the deterioration of salaries is the major cause of teacher absenteeism and that all the Bank is willing to peddle are ineffective and insulting pay-for-performance schemes. (Klees, 2017).

The SABER framework (referred to above) focuses very clearly on policies for hiring, rewarding and firing teachers.

[The World Bank] places the private sector’s methods of dealing with teachers as better than those of the public sector, because it is more ‘flexible’. In other words, it is possible to say that teachers can be hired and fired more easily; that is, hired without the need of organizing a public competition and fired if they do not achieve the expected outcomes as, for example, students’ improvements in international test scores. Further, the SABER document states that ‘Flexibility in teacher contracting is one of the primary motivations for engaging the private sector’ (World Bank, 2011: 4). This affirmation seeks to reduce expenditures on teachers while fostering other expenses such as the creation of testing schemes and spending more on ICTs, as well as making room to expand the hiring of private sector providers to design curriculum, evaluate students, train teachers, produce education software, and books. (De Siqueira, 2012).

The World Bank has argued consistently for a reduction of education costs by driving down teachers’ salaries. One of the authors of the World Bank Development Report 2018 notes that ‘in most countries, teacher salaries consume the lion’s share of the education budget, so there are already fewer resources to implement other education programs’. Another World Bank report (2007) makes the importance of ‘flexible’ hiring and lower salaries very clear:

In particular, recent progress in primary education in Francophone countries resulted from reduced teacher costs, especially through the recruitment of contractual teachers, generally at about 50% the salary of civil service teachers. (cited in Compton & Weiner, 2008: 7).

Merit pay (or ‘pay for performance’) is another of the Bank’s preferred wheezes. Despite enormous problems in reaching fair evaluations of teachers’ work and a distinct lack of convincing evidence that merit pay leads to anything positive (and may actually be counter-productive) (De Bruyckere et al., 2018: 143 – 147), the Bank is fully committed to the idea. Perhaps this is connected to the usefulness of merit pay in keeping teachers on their toes, compliant and fearful of losing their jobs, rather than any desire to improve teacher effectiveness?

There is evidence that this may be the case. Yet another World Bank report (Bau & Das, 2017) argues, on the basis of research, that improved TVA (teacher value added) does not correlate with wages in the public sector (where it is hard to fire teachers), but it does in the private sector. The study found that ‘a policy change that shifted public hiring from permanent to temporary contracts, reducing wages by 35 percent, had no adverse impact on TVA’. All of which would seem to suggest that improving the quality of teaching is of less importance to the Bank than flexible hiring and firing. This is very much in line with a more general advocacy of making education fit for the world of work. Lois Weiner of New Jersey City University puts it like this:

The architects of [GERM] policies—imposed first in developing countries—openly state that the changes will make education better fit the new global economy by producing workers who are (minimally) educated for jobs that require no more than a 7th or 8th grade education; while a small fraction of the population receive a high quality education to become the elite who oversee finance, industry, and technology. Since most workers do not need to be highly educated, it follows that teachers with considerable formal education and experience are neither needed nor desired because they demand higher wages, which is considered a waste of government money. Most teachers need only be “good enough”—as one U.S. government official phrased it—to follow scripted materials that prepare students for standardized tests. (Weiner, 2012).

It seems impossible to separate the World Bank’s ‘Teach’ tool from the broader goals of GERM. Teacher evaluation tools, like the teaching of 21st century skills and the datafication of education, need to be understood properly, I think, as means to an end. It’s time to spell out what that end is.

The World Bank’s mission is ‘to end extreme poverty (by reducing the share of the global population that lives in extreme poverty to 3 percent by 2030)’ and ‘to promote shared prosperity (by increasing the incomes of the poorest 40 percent of people in every country)’. Its education activities are part of this broad aim and are driven by subscription to human capital theory (a view of the skills, knowledge and experience of individuals in terms of their ability to produce economic value). This may be described as the ‘economization of education’: a shift in educational concerns away from ‘such things as civic participation, protecting human rights, and environmentalism to economic growth and employment’ (Spring, 2015: xiii). Both students and teachers are seen as human capital. For students, human capital education places an emphasis on the cognitive skills needed to succeed in the workplace and the ‘soft skills’ needed to function in the corporate world (Spring, 2015: 2). Accordingly, World Bank investments require ‘justifications on the basis of manpower demands’ (Heyneman, 2003: 317). One of the Bank’s current strategic priorities is the education of girls: although human rights and equity may also play a part, the Bank’s primary concern is that ‘Not Educating Girls Costs Countries Trillions of Dollars’.

According to the Bank’s logic, its educational aims can best be achieved through a combination of support for the following:

  • cost accounting and quantification (since returns on investment must be carefully measured)
  • competition and market incentives (since it is believed that the ‘invisible hand’ of the market leads to the greatest benefits)
  • the private sector in education and a rolling back of the role of the state (since it is believed that private ownership improves efficiency)

The package of measures is a straightforward reflection of ‘what Western mainstream economists believe’ (Castro, 2009: 474).

Mainstream Western economics is, however, going through something of a rocky patch right now. Human capital theory is ‘useful when prevailing conditions are right’ (Jones, 2007: 248), but prevailing conditions are not right in much of the world (even in the United States), and the theory ‘for the most part ignores the intersections of poverty, equity and education’ (Menashy, 2012). In poorer countries evidence for the positive effects of markets in education is in very short supply, and even in richer countries it is still not conclusive (Verger & Bonal, 2012: 135). An OECD Education Paper (Waslander et al., 2010: 64) found that the effects of choice and competition between schools were at best small, if indeed any effects were found at all. Similarly, the claim that privatization improves efficiency is not sufficiently supported by evidence. Analyses of PISA data would seem to indicate that, ‘all else being equal (especially when controlling for the socio-economic status of the students), the type of ownership of the school, whether it is a private or a state school, has only modest effects on student achievement or none at all’ (Verger & Bonal, 2012: 133). Educational privatization as a one-size-fits-all panacea to educational problems has little to recommend it.

There are, then, serious limitations in the Bank’s theoretical approach. Its practical track record is also less than illustrious, even by the Bank’s own reckoning. Many of the Bank’s interventions have proved very ‘costly to developing countries. At the Bank’s insistence countries over-invested in vocational and technical education. Because of the narrow definition of recurrent costs, countries ignored investments in reading materials and in maintaining teacher salaries. Later at the Bank’s insistence, countries invested in thousands of workshops and laboratories that, for the most part, became useless ‘white elephants’ (Heyneman, 2003: 333).

As a bank, the World Bank is naturally interested in the rate of return on its investment in human capital, and is therefore concerned with efficiency and efficacy. This raises the question of ‘Effective for what?’ and, given that what may be effective for one individual or group may not necessarily be effective for another individual or group, one may wish to add a second question: ‘Effective for whom?’ (Biesta, 2020: 31). Critics of the World Bank, of whom there are many, argue that its policies serve ‘the interests of corporations by keeping down wages for skilled workers, cause global brain migration to the detriment of developing countries, undermine local cultures, and ensure corporate domination by not preparing school graduates who think critically and are democratically oriented’ (Spring, 2015: 56). Lest this sound a bit harsh, we can turn to the Bank’s own commissioned history: ‘The way in which [the Bank’s] ideology has been shaped conforms in significant degree to the interests and conventional wisdom of its principal stockholders [i.e. bankers and economists from wealthy nations]. International competitive bidding, reluctance to accord preferences to local suppliers, emphasis on financing foreign exchange costs, insistence on a predominant use of foreign consultants, attitudes toward public sector industries, assertion of the right to approve project managers – all proclaim the Bank to be a Western capitalist institution’ (Mason & Asher, 1973: 478 – 479).

The teaching of ‘life skills’, the promotion of data-capturing digital technologies and the push to evaluate teachers’ performance are, then, all closely linked to the agenda of the World Bank, and owe their existence in the ELT landscape, in no small part, to the way that the World Bank has shaped educational discourse. There is, however, one other connection between ELT and the World Bank which must be mentioned.

The World Bank’s foreign language instructional goals are directly related to English as a global language. The Bank urges, ‘Policymakers in developing countries … to ensure that young people acquire a language with more than just local use, preferably one used internationally.’ What is this international language? First, the World Bank mentions that schools of higher education around the world are offering courses in English. In addition, the Bank states, ‘People seeking access to international stores of knowledge through the internet require, principally, English language skills.’ (Spring, 2015: 48).

Without the World Bank, then, there might be a lot less English language teaching than there is. I have written this piece to encourage people to think more about the World Bank, its policies and particular instantiations of those policies. You might or might not agree that the Bank is an undemocratic, technocratic, neoliberal institution unfit for the necessities of today’s world (Klees, 2017). But whatever you think about the World Bank, you might like to consider the answers to Tony Benn’s ‘five little democratic questions’ (quoted in Sadowski, 2020: 17):

  • What power has it got?
  • Where did it get this power from?
  • In whose interests does it exercise this power?
  • To whom is it accountable?
  • How can we get rid of it?

References

Bau, N. and Das, J. (2017). The Misallocation of Pay and Productivity in the Public Sector : Evidence from the Labor Market for Teachers. Policy Research Working Paper; No. 8050. World Bank, Washington, DC. Retrieved [18 May 2020] from https://openknowledge.worldbank.org/handle/10986/26502

Beech, J. (2009). Who is Strolling Through The Global Garden? International Agencies and Educational Transfer. In Cowen, R. and Kazamias, A. M. (Eds.) Second International Handbook of Comparative Education. Dordrecht: Springer. pp. 341 – 358

Biesta, G. (2020). Educational Research. London: Bloomsbury.

Castro, C. De M., (2009). Can Multilateral Banks Educate The World? In Cowen, R. and Kazamias, A. M. (Eds.) Second International Handbook of Comparative Education. Dordrecht: Springer. pp. 455 – 478

Compton, M. and Weiner, L. (Eds.) (2008). The Global Assault on Teaching, Teachers, and their Unions. New York: Palgrave Macmillan

De Bruyckere, P., Kirschner, P.A. and Hulshof, C. (2020). More Urban Myths about Learning and Education. New York: Routledge.

De Siqueira, A. C. (2012). The 2020 World Bank Education Strategy: Nothing New, or the Same Old Gospel. In Klees, S. J., Samoff, J. and Stromquist, N. P. (Eds.) The World Bank and Education. Rotterdam: Sense Publishers. pp. 69 – 81

Heyneman, S.P. (2003). The history and problems in the making of education policy at the World Bank 1960–2000. International Journal of Educational Development 23 (2003) pp. 315–337. Retrieved [18 May 2020] from https://www.academia.edu/29593153/The_History_and_Problems_in_the_Making_of_Education_Policy_at_the_World_Bank_1960_2000

Jones, P. W. (2007). World Bank Financing of Education. 2nd edition. Abingdon, Oxon.: Routledge.

Klees, S. (2017). A critical analysis of the World Bank’s World Development Report on education. Retrieved [18 May 2020] from: https://www.brettonwoodsproject.org/2017/11/critical-analysis-world-banks-world-development-report-education/

Mason, E. S. & Asher, R. E. (1973). The World Bank since Bretton Woods. Washington, DC: Brookings Institution.

Menashy, F. (2012). Review of Klees, S J., Samoff, J. & Stromquist, N. P. (Eds) (2012). The World Bank and Education: Critiques and Alternatives .Rotterdam: Sense Publishers. Education Review, 15. Retrieved [18 May 2020] from https://www.academia.edu/7672656/Review_of_The_World_Bank_and_Education_Critiques_and_Alternatives

Rizvi, F. & Lingard, B. (2010). Globalizing Education Policy. Abingdon, Oxon.: Routledge.

Sadowski, J. (2020). Too Smart. Cambridge, MA.: MIT Press.

Sahlberg, P. (2016). The global educational reform movement and its impact on schooling. In K. Mundy, A. Green, R. Lingard, & A. Verger (Eds.), The handbook of global policy and policymaking in education. New York, NY: Wiley-Blackwell. pp.128 – 144

Selwyn, N. (2013). Education in a Digital World. New York: Routledge.

Spring, J. (2015). Globalization of Education 2nd Edition. New York: Routledge.

Trilling, B. & C. Fadel (2009). 21st Century Skills. San Francisco: Wiley

Verger, A. & Bonal, X. (2012). ‘All Things Being Equal?’ In Klees, S. J., Samoff, J. and Stromquist, N. P. (Eds.) The World Bank and Education. Rotterdam: Sense Publishers. pp. 69 – 81

Waslander, S., Pater, C. & van der Weide, M. (2010). Markets in Education: An analytical review of empirical research on market mechanisms in education. OECD EDU Working Paper 52.

Weiner, L. (2012). Social Movement Unionism: Teachers Can Lead the Way. Reimagine, 19 (2) Retrieved [18 May 2020] from: https://www.reimaginerpe.org/19-2/weiner-fletcher

Williamson, B., Bayne, S. & Shay, S. (2020). The datafication of teaching in Higher Education: critical issues and perspectives, Teaching in Higher Education, 25:4, 351-365, DOI: 10.1080/13562517.2020.1748811

World Bank. (2013). Life skills: what are they, why do they matter, and how are they taught? (English). Adolescent Girls Initiative (AGI) learning from practice series. Washington, DC: World Bank. Retrieved [18 May 2020] from: http://documents.worldbank.org/curated/en/569931468331784110/Life-skills-what-are-they-why-do-they-matter-and-how-are-they-taught

In my last post, I looked at shortcomings in edtech research, mostly from outside the world of ELT. I made a series of recommendations of ways in which such research could become more useful. In this post, I look at two very recent collections of ELT edtech research. The first of these is Digital Innovations and Research in Language Learning, edited by Mavridi and Saumell, and published this February by the Learning Technologies SIG of IATEFL. I’ll refer to it here as DIRLL. It’s available free to IATEFL LT SIG members, and can be bought for $10.97 as an ebook on Amazon (US). The second is the most recent edition (February 2020) of the Language Learning & Technology journal, which is open access and available here. I’ll refer to it here as LLTJ.

In both of these collections, the focus is not on ‘technology per se, but rather issues related to language learning and language teaching, and how they are affected or enhanced by the use of digital technologies’. However, they are very different kinds of publication. Nobody involved in the production of DIRLL got paid in any way (to the best of my knowledge) and, in keeping with its provenance from a teachers’ association, the book has ‘a focus on the practitioner as teacher-researcher’. Almost all of the contributing authors are university-based, but they are typically involved more in language teaching than in research. With one exception (a grant from the EU), their work was unfunded.

The triannual LLTJ is funded by two American universities and published by the University of Hawaii Press. The editors and associate editors are well-known scholars in their fields. The journal’s impact factor is high, close to the impact factor of the paywalled reCALL (published by Cambridge University Press), which is the highest-ranking journal in the field of CALL. The contributing authors are all university-based, many with a string of published articles (in prestige journals), chapters or books behind them. At least six of the studies were funded by national grant-awarding bodies.

I should begin by making clear that there was much in both collections that I found interesting. However, it was not usually the research itself that I found informative, but the literature review that preceded it. Two of the chapters in DIRLL were not really research, anyway. One was the development of a template for evaluating ICT-mediated tasks in CLIL, another was an advocacy of comics as a resource for language teaching. Both of these were new, useful and interesting to me. LLTJ included a valuable literature review of research into VR in FL learning (but no actual new research). With some exceptions in both collections, though, I felt that I would have been better off curtailing my reading after the reviews. Admittedly, there wouldn’t be much in the way of literature reviews if there were no previous research to report …

It was no surprise to see that the learners who were the subjects of this research were overwhelmingly university students. In fact, only one article (about a high-school project in Israel, reported in DIRLL) was not about university students. The research areas reflected this bias towards tertiary contexts: online academic reading skills, academic writing, online reflective practices in teacher training programmes, etc.

In a couple of cases, the selection of experimental subjects seemed plain bizarre. Why, if you want to find out about the extent to which Moodle use can help EAP students become better academic readers (in DIRLL), would you investigate this with a small volunteer cohort of postgraduate students of linguistics, with previous experience of using Moodle and experience of teaching? Is a less representative sample imaginable? Why, if you want to investigate the learning potential of the English File Pronunciation app (reported in LLTJ), which is clearly most appropriate for A1 – B1 levels, would you do this with a group of C1-level undergraduates following a course in phonetics as part of an English Studies programme?

More problematic, in my view, was the small sample size in many of the research projects. The Israeli virtual high school project (DIRLL), previously referred to, started out with only 11 students, but 7 dropped out, primarily, it seems, because of institutional incompetence: ‘the project was probably doomed […] to failure from the start’, according to the author. Interesting as this was as an account of how not to set up a project of this kind, it is simply impossible to draw any conclusions from 4 students about the potential of a VLE for ‘interaction, focus and self-paced learning’. The questionnaire investigating experience of and attitudes towards VR (in DIRLL) was completed by only 7 (out of 36 possible) students and 7 (out of 70+ possible) teachers. As the author acknowledges, ‘no great claims can be made’, but the author then goes on to note the generally ‘positive attitudes to VR’. Perhaps those who did not volunteer had different attitudes? We will never know. The study of motivational videos in tertiary education (DIRLL) started off with 15 subjects, but 5 did not complete the necessary tasks. The research into L1 use in videoconferencing (LLTJ) started off with 10 experimental subjects, all with the same L1 and similar cultural backgrounds, but there was no data available from 4 of them (because they never switched into L1). The author claims that the paper demonstrates ‘how L1 is used by language learners in videoconferencing as a social semiotic resource to support social presence’ – something which, after reading the literature review, we already knew. But the paper also demonstrates quite clearly how L1 is not used by language learners in videoconferencing as a social semiotic resource to support social presence. In all these cases, it is the participants who did not complete, or the potential participants who did not want to take part, that hold the greatest interest for me.

Unsurprisingly, the LLTJ articles had larger sample sizes than those in DIRLL, but in both collections the length of the research was limited. The production of one motivational video (DIRLL) does not really allow us to draw any conclusions about the development of students’ critical thinking skills. Two four-week interventions do not really seem long enough to me to discover anything about learner autonomy and Moodle (DIRLL). An experiment looking at different feedback modes needs more than two written assignments to reach any conclusions about student preferences (LLTJ).

More research might well be needed to compensate for the short-term projects with small sample sizes, but I’m not convinced that this is always the case. Lacking sufficient information about the content of the technologically-mediated tools being used, I was often unable to reach any conclusions. A gamified Twitter environment was developed in one project (DIRLL), using principles derived from contemporary literature on gamification. The authors concluded that the game design ‘failed to generate interaction among students’, but without knowing a lot more about the specific details of the activity, it is impossible to say whether the problem was the principles or the particular instantiation of those principles. Another project, looking at the development of pronunciation materials for online learning (LLTJ), came to the conclusion that online pronunciation training was helpful – better than none at all. Claims are then made about the value of the method used (called ‘innovative Cued Pronunciation Readings’), but this is not compared to any other method / materials, and only a very small selection of these materials is illustrated. Basically, the reader of this research has no choice but to take things on trust. The study looking at the use of Alexa to help listening comprehension and speaking fluency (LLTJ) cannot really tell us anything about IPAs (intelligent personal assistants) unless we know more about the particular way that Alexa is being used. Here, it seems that the students were using Alexa in an interactive storytelling exercise, but so little information is given about the exercise itself that I didn’t actually learn anything at all. The author’s own conclusion is that the results, such as they are, need to be treated with caution. Nevertheless, he adds ‘the current study illustrates that IPAs may have some value to foreign language learners’.

This brings me onto my final gripe. To be told that IPAs like Alexa may have some value to foreign language learners is to be told something that I already know. This wasn’t the only time this happened during my reading of these collections. I appreciate that research cannot always tell us something new and interesting, but a little more often would be nice. I ‘learnt’ that goal-setting plays an important role in motivation and that gamification can boost short-term motivation. I ‘learnt’ that reflective journals can take a long time for teachers to look at, and that reflective video journals are also very time-consuming. I ‘learnt’ that peer feedback can be very useful. I ‘learnt’ from two papers that intercultural difficulties may be exacerbated by online communication. I ‘learnt’ that text-to-speech software is pretty good these days. I ‘learnt’ that multimodal literacy can, most frequently, be divided up into visual and auditory forms.

With the exception of a piece about online safety issues (DIRLL), I did not once encounter anything which hinted that there may be problems in using technology. No mention of the use to which student data might be put. No mention of the costs involved (except for the observation that many students would not be happy to spend money on the English File Pronunciation app) or the cost-effectiveness of digital ‘solutions’. No consideration of the institutional (or other) pressures (or the reasons behind them) that may be applied to encourage teachers to ‘leverage’ edtech. No suggestion that a zero-tech option might actually be preferable. In both collections, the language used is invariably positive, or, at least, technology is associated with positive things: uncovering the possibilities, promoting autonomy, etc. Even if the focus of these publications is not on technology per se (although I think this claim doesn’t really stand up to close examination), it’s a little disingenuous to claim (as LLTJ does) that the interest is in how language learning and language teaching are ‘affected or enhanced by the use of digital technologies’. The reality is that the overwhelming interest is in potential enhancements, not potential negative effects.

I have deliberately not mentioned any names in referring to the articles I have discussed. I would, though, like to take my hat off to the editors of DIRLL, Sophia Mavridi and Vicky Saumell, for attempting to do something a little different. I think that Alicia Artusi and Graham Stanley’s article (DIRLL) about CPD for ‘remote’ teachers was very good and should interest the huge number of teachers working online. Chryssa Themelis and Julie-Ann Sime have kindled my interest in the potential of comics as a learning resource (DIRLL). Yu-Ju Lan’s article about VR (LLTJ) is surely the most up-to-date, go-to article on this topic. There were other pieces, or parts of pieces, that I liked, too. But, to me, it’s clear that what we need is not so much ‘more research’ as (1) better and more critical research, and (2) more digestible summaries of research.

Colloquium

At the beginning of March, I’ll be going to Cambridge to take part in a Digital Learning Colloquium (for more information about the event, see here). One of the questions that will be explored is how research might contribute to the development of digital language learning. In this, the first of two posts on the subject, I’ll be taking a broad overview of the current state of play in edtech research.

I try my best to keep up to date with research. Of the main journals, there are Language Learning and Technology, which is open access; CALICO, which offers quite a lot of open access material; and reCALL, which is the most restricted in terms of access of the three. But there is something deeply frustrating about most of this research, and this is what I want to explore in these posts. More often than not, research articles end with a call for more research. And more often than not, I find myself saying ‘Please, no, not more research like this!’

First, though, I would like to turn to a more reader-friendly source of research findings. Systematic reviews are, basically, literature reviews which can save people like me from having to plough through endless papers on similar subjects, all of which contain the same (or similar) literature review in their opening sections. If only there were more of them. Others agree with me: the conclusion of one systematic review of learning and teaching with technology in higher education (Lillejord et al., 2018) was that more systematic reviews were needed.

Last year saw the publication of a systematic review of research on artificial intelligence applications in higher education (Zawacki-Richter, et al., 2019) which caught my eye. The first thing that struck me about this review was that ‘out of 2656 initially identified publications for the period between 2007 and 2018, 146 articles were included for final synthesis’. In other words, only just over 5% of the research was considered worthy of inclusion.

The review did not paint a very pretty picture of the current state of AIEd research. As the second part of the title of this review (‘Where are the educators?’) makes clear, the research, taken as a whole, showed a ‘weak connection to theoretical pedagogical perspectives’. This is not entirely surprising. As Bates (2019) has noted: ‘since AI tends to be developed by computer scientists, they tend to use models of learning based on how computers or computer networks work (since of course it will be a computer that has to operate the AI). As a result, such AI applications tend to adopt a very behaviourist model of learning: present / test / feedback.’ More generally, it is clear that technology adoption (and research) is being driven by technology enthusiasts, with insufficient expertise in education. The danger is that edtech developers ‘will simply ‘discover’ new ways to teach poorly and perpetuate erroneous ideas about teaching and learning’ (Lynch, 2017).

This, then, is the first item on my checklist of things that, collectively, researchers need to do to improve the value of their work. The rest of the list is drawn from observations made mostly, but not exclusively, by the authors of systematic reviews, and mostly comes from reviews of general edtech research. In the next blog post, I’ll look more closely at a recent collection of ELT edtech research (Mavridi & Saumell, 2020) to see how it measures up.

1 Make sure your research is adequately informed by educational research outside the field of edtech

Unproblematised behaviourist assumptions about the nature of learning are all too frequent. References to learning styles are still fairly common. The skill most frequently investigated in the context of edtech is critical thinking (Sosa Neira, et al., 2017), but this is rarely defined and almost never problematised, despite a broad literature that questions the construct.

2 Adopt a sceptical attitude from the outset

Know your history. Decades of technological innovation in education have shown precious little in the way of educational gains and, more than anything else, have taught us that we need to be sceptical from the outset. ‘Enthusiasm and praise that are directed towards “virtual education”, “school 2.0”, “e-learning” and the like’ (Selwyn, 2014: vii) are indications that the lessons of the past have not been sufficiently absorbed (Levy, 2016: 102). The phrase ‘exciting potential’, for example, should be banned from all edtech research. See, for example, a ‘state-of-the-art analysis of chatbots in education’ (Winkler & Söllner, 2018), which has nothing to conclude but ‘exciting potential’. Potential is fine (indeed, it is perhaps the only thing that research can unambiguously demonstrate – see section 3 below), but can we try to be a little more grown-up about things?

3 Know what you are measuring

Measuring learning outcomes is tricky, to say the least, but it’s understandable that researchers should try to focus on them. Unfortunately, ‘the vast array of literature involving learning technology evaluation makes it challenging to acquire an accurate sense of the different aspects of learning that are evaluated, and the possible approaches that can be used to evaluate them’ (Lai & Bower, 2019). Metrics such as student grades are hard to interpret, not least because of the large number of variables and the danger of many things being conflated in one score. Equally, or possibly even more, problematic are self-reporting measures, which are rarely robust. It seems that surveys are the most widely used instrument in qualitative research (Sosa Neira, et al., 2017), but these will tell us little or nothing when used for short-term interventions (see point 5 below).

4 Ensure that the sample size is big enough to mean something

In most of the research into digital technology in education that was analysed in a literature review carried out for the Scottish government (ICF Consulting Services Ltd, 2015), there were only ‘small numbers of learners or teachers or schools’.

5 Privilege longitudinal studies over short-term projects

The Scottish government literature review (ICF Consulting Services Ltd, 2015) also noted that ‘most studies that attempt to measure any outcomes focus on short and medium term outcomes’. The fact that the use of a particular technology has some sort of impact over the short or medium term tells us very little of value. Unless there is very good reason to suspect the contrary, we should assume that it is a novelty effect that has been captured (Levy, 2016: 102).

6 Don’t forget the content

The starting point of much edtech research is the technology, but most edtech, whether it’s a flashcard app or a full-blown Moodle course, has content. Research reports rarely give details of this content, assuming perhaps that it’s just fine, and all that’s needed is a little tech to ‘present learners with the ‘right’ content at the ‘right’ time’ (Lynch, 2017). It’s a foolish assumption. Take a random educational app from the Play Store, a random MOOC or whatever, and the chances are you’ll find it’s crap.

7 Avoid anecdotal accounts of technology use in quasi-experiments as the basis of a ‘research article’

Control (i.e. technology-free) groups may not always be possible, but without them we’re unlikely to learn much from a single study. What would, however, be extremely useful would be a large, collated collection of such action-research projects, using the same or similar technology, in a variety of settings. There is a marked absence of this kind of work.

8 Enough already of higher education contexts

Researchers typically work in universities where they have captive students who they can carry out research on. But we have a problem here. The systematic review of Lundin et al (2018), for example, found that ‘studies on flipped classrooms are dominated by studies in the higher education sector’ (besides lacking anchors in learning theory or instructional design). With some urgency, primary and secondary contexts need to be investigated in more detail, not just regarding flipped learning.

9 Be critical

Very little edtech research considers the downsides of edtech adoption. Online safety, privacy and data security are hardly peripheral issues, especially with younger learners. Ignoring them won’t make them go away.

More research?

So do we need more research? For me, two things stand out. We might benefit more from, firstly, a different kind of research, and, secondly, more syntheses of the work that has already been done. Although I will probably continue to dip into the pot-pourri of articles published in the main CALL journals, I’m looking forward to a change at the CALICO journal. From September of this year, one issue a year will be thematic, with a lead article written by established researchers which will ‘first discuss in broad terms what has been accomplished in the relevant subfield of CALL. It should then outline which questions have been answered to our satisfaction and what evidence there is to support these conclusions. Finally, this article should pose a “soft” research agenda that can guide researchers interested in pursuing empirical work in this area’. This will be followed by two or three empirical pieces that ‘specifically reflect the research agenda, methodologies, and other suggestions laid out in the lead article’.

But I think I’ll still have a soft spot for some of the other journals that are coyer about their impact factor and that can be freely accessed. How else would I discover (it would be too mean to give the references here) that ‘the effective use of new technologies improves learners’ language learning skills’? Presumably, the ineffective use of new technologies has the opposite effect? Or that ‘the application of modern technology represents a significant advance in contemporary English language teaching methods’?

References

Bates, A. W. (2019). Teaching in a Digital Age Second Edition. Vancouver, B.C.: Tony Bates Associates Ltd. Retrieved from https://pressbooks.bccampus.ca/teachinginadigitalagev2/

ICF Consulting Services Ltd (2015). Literature Review on the Impact of Digital Technology on Learning and Teaching. Edinburgh: The Scottish Government. https://dera.ioe.ac.uk/24843/1/00489224.pdf

Lai, J.W.M. & Bower, M. (2019). How is the use of technology in education evaluated? A systematic review. Computers & Education, 133(1), 27-42. Elsevier Ltd. Retrieved January 14, 2020 from https://www.learntechlib.org/p/207137/

Levy, M. (2016). Researching in language learning and technology. In Farr, F. & Murray, L. (Eds.) The Routledge Handbook of Language Learning and Technology. Abingdon, Oxon.: Routledge. pp. 101 – 114

Lillejord S., Børte K., Nesje K. & Ruud E. (2018). Learning and teaching with technology in higher education – a systematic review. Oslo: Knowledge Centre for Education https://www.forskningsradet.no/siteassets/publikasjoner/1254035532334.pdf

Lundin, M., Bergviken Rensfeldt, A., Hillman, T. et al. (2018). Higher education dominance and siloed knowledge: a systematic review of flipped classroom research. International Journal of Educational Technology in Higher Education, 15, 20. doi:10.1186/s41239-018-0101-6

Lynch, J. (2017). How AI Will Destroy Education. Medium, November 13, 2017. https://buzzrobot.com/how-ai-will-destroy-education-20053b7b88a6

Mavridi, S. & Saumell, V. (Eds.) (2020). Digital Innovations and Research in Language Learning. Faversham, Kent: IATEFL

Selwyn, N. (2014). Distrusting Educational Technology. New York: Routledge

Sosa Neira, E. A., Salinas, J. & de Benito Crosetti, B. (2017). Emerging Technologies (ETs) in Education: A Systematic Review of the Literature Published between 2006 and 2016. International Journal of Emerging Technologies in Learning, 12 (5). https://online-journals.org/index.php/i-jet/article/view/6939

Winkler, R. & Söllner, M. (2018): Unleashing the Potential of Chatbots in Education: A State-Of-The-Art Analysis. In: Academy of Management Annual Meeting (AOM). Chicago, USA. https://www.alexandria.unisg.ch/254848/1/JML_699.pdf

Zawacki-Richter, O., Bond, M., Marin, V. I. & Gouverneur, F. (2019). Systematic review of research on artificial intelligence applications in higher education – where are the educators? International Journal of Educational Technology in Higher Education

When the startup, AltSchool, was founded in 2013 by Max Ventilla, the former head of personalization at Google, it quickly drew the attention of venture capitalists and within a few years had raised $174 million from the likes of the Zuckerberg Foundation, Peter Thiel, Laurene Powell Jobs and Pierre Omidyar. It garnered gushing articles in a fawning edtech press which enthused about ‘how successful students can be when they learn in small, personalized communities that champion project-based learning, guided by educators who get a say in the technology they use’. It promised ‘a personalized learning approach that would far surpass the standardized education most kids receive’.

Ventilla was an impressive money-raiser who used, and appeared to believe, every cliché in the edtech sales manual. Dressed in regulation jeans, polo shirt and fleece, he claimed that schools in America were ‘stuck in an industrial-age model, [which] has been in steady decline for the last century’. What he offered, instead, was a learner-centred, project-based curriculum providing real-world lessons. There was a focus on social-emotional learning activities, and critical thinking was vital.

The key to the approach was technology. From the start, software developers, engineers and researchers worked alongside teachers every day, ‘constantly tweaking the Personalized Learning Plan, which shows students their assignments for each day and helps teachers keep track of and assess student’s learning’. There were tablets for pre-schoolers, laptops for older kids and wall-mounted cameras to record the lessons. There were, of course, Khan Academy videos. Ventilla explained that “we start with a representation of each child”, and even though “the vast majority of the learning should happen non-digitally”, the child’s habits and preferences get converted into data, “a digital representation of the important things that relate to that child’s learning, not just their academic learning but also their non-academic learning. Everything logistic that goes into setting up the experience for them, whether it’s who has permission to pick them up or their allergy information. You name it.” And just like Netflix matches us to TV shows, “If you have that accurate and actionable representation for each child, now you can start to personalize the whole experience for that child. You can create that kind of loop you described where because we can represent a child well, we can match them to the right experiences.”

AltSchool seemed to offer the possibility of doing something noble, of transforming education, ‘bringing it into the digital age’, and, at the same time, a healthy return on investors’ money. Expanding rapidly, nine AltSchool microschools were opened in New York and the Bay Area, and plans were afoot for further expansion in Chicago. But, by then, it was already clear that something was going wrong. Five of the schools were closed before they had really got started and the attrition rate in some classrooms had reached about 30%. Revenue in 2018 was only $7 million and there were few buyers for the AltSchool platform. Quoting once more from the edtech bible, Ventilla explained the situation: ‘Our whole strategy is to spend more than we make.’ Since software is expensive to develop and cheap to distribute, the losses, he believed, would turn into steep profits once AltSchool refined its product and landed enough customers.

The problems were many and apparent. Some of the buildings were simply not appropriate for schools, with no playgrounds or gyms and malfunctioning toilets, among other issues. Parents were becoming unhappy and accused AltSchool of putting ‘its ambitions as a tech company above its responsibility to teach their children. […] We kind of came to the conclusion that, really, AltSchool as a school was kind of a front for what Max really wants to do, which is develop software that he’s selling,’ a parent of a former AltSchool student told Business Insider. ‘We had really mediocre educators using technology as a crutch,’ said one father who transferred his child to a different private school after two years at AltSchool. ‘We learned that it’s almost impossible to really customize the learning experience for each kid.’ Some parents began to wonder whether AltSchool had enticed families into its program merely to extract data from their children, then toss them aside.

With the benefit of hindsight, it would seem that the accusations were hardly unfair. In June of this year, AltSchool announced that its four remaining schools would be operated by a new partner, Higher Ground Education (a well-funded startup founded in 2016 which promotes and ‘modernises’ Montessori education). Meanwhile, AltSchool has been rebranded as Altitude Learning, focusing its ‘resources on the development and expansion of its personalized learning platform’ for licensing to other schools across the country.

Quoting once more from the edtech sales manual, Ventilla has said that education should drive the tech, not the other way round. Not so many years earlier, before starting AltSchool, Ventilla had said that he had read two dozen books on education and emerged a fan of Sir Ken Robinson. He had no experience as a teacher or as an educational administrator. Instead, he had ‘extensive knowledge of networks, and he understood the kinds of insights that can be gleaned from big data’.