The ‘Routledge Handbook of Language Learning and Technology’ (eds. Farr and Murray, 2016) claims to be ‘the essential reference’ on the topic and its first two sections are devoted to ‘Historical and conceptual contexts’ and ‘Core issues’. One chapter (‘Limitations and boundaries in language learning and technology’ by Kern and Malinowski) mentions that ‘a growing body of research in intercultural communication and online language learning recognises how all technologies are embedded in cultural and linguistic practices, meaning that a given technological artefact can be used in radically different ways, and for different purposes by different groups of people’ (p.205). However, in terms of critical analyses of technology and language learning, that’s about as far as this book goes. In over 500 pages, there is one passing reference to privacy and a couple of brief mentions of the digital divide. There is no meaningful consideration of the costs, ownership or externalities of EdTech, of the ways in which EdTech is sold and marketed, of the vested interests that profit from EdTech, of the connections between EdTech and the privatisation of education, of the non-educational uses to which data is put, or of the implications of attention tracking, facial analysis and dataveillance in educational settings.

The Routledge Handbook is not alone in this respect. Li Li’s ‘New Technologies and Language Learning’ (Palgrave, 2017) is breathlessly enthusiastic about the potential of EdTech. The opening chapter catalogues a series of huge investments in global EdTech, as if the scale of investment were an indication of its wisdom. No mention of the lack of evidence that huge investments in IWBs and PCs in classrooms led to any significant improvement in learning. No mention of how these investments were funded (or which other parts of budgets were cut). Instead, we are told that ‘computers can promote visual, verbal and kinaesthetic learning’ (p.5).

I have never come across a book-length critical analysis of technology and language learning. As the world of language teaching jumps on board Zoom, Google Meet, Microsoft Teams, Skype (aka Microsoft) and the like, the need for a better critical awareness of EdTech and language learning has never been more urgent. Fortunately, there is a growing body of critical literature on technology and general education. Here are my twelve favourites:

1. Big Data in Education

Ben Williamson (Sage, 2017)

An investigation into the growing digitalization and datafication of education. Williamson looks at how education policy is enacted through digital tools, the use of learning analytics and educational data science. His interest is in how technology has reshaped the way we think about education, and the book may be read as a critical response to the techno-enthusiasm of Mayer-Schönberger and Cukier’s ‘Learning with Big Data: The Future of Education’ (Houghton Mifflin Harcourt, 2014). Williamson’s blog, Code Acts in Education, is excellent.


2. Distrusting Educational Technology

Neil Selwyn (Routledge, 2014)

Neil Selwyn is probably the most widely-quoted critical voice in this field, and this book is as good a place to start with his work as any. EdTech, for Selwyn, is a profoundly political affair, and this book explores the gulf between how it could be used, and how it is actually used. Unpacking the ideological agendas of what EdTech is and does, Selwyn covers the reduction of education along data-driven lines, the deskilling of educational labour, the commodification of learning, issues of inequality, and much more. An essential primer.


3. The Great American Education-Industrial Complex

Anthony G. Picciano & Joel Spring (Routledge, 2013)

Covering similar ground to both ‘Education Networks’ and ‘Edu.net’ (see below), this book’s subtitle, ‘Ideology, Technology, and Profit’, says it all. Chapter 4 (‘Technology in American Education’) is of particular interest, tracing the recent history of EdTech and the for-profit sector. Chapter 5 provides a wide range of examples of the growing privatization (through EdTech) of American schooling.


4. Disruptive Fixation

Christo Sims (Princeton University Press, 2017)

The story of a New York school, funded by philanthropists and put together by games designers and educational reformers, that promised to ‘reinvent the classroom for the digital age’. And how it all went wrong … reverting to conventional rote learning with an emphasis on discipline, along with gender and racialized class divisions. A cautionary tale about techno-philanthropism.


5. Education Networks

Joel Spring (Routledge, 2012)

Similar in many ways to ‘Edu.net’ (see below), this is an analysis of the relationships between the interest groups (international agencies, private companies and philanthropic foundations) that are pushing for greater use of EdTech. Spring considers the psychological, social and political implications of the growth of EdTech and concludes with a discussion of the dangers of consumerist approaches to education and dataveillance.


6. Edu.net

Stephen J. Ball, Carolina Junemann & Diego Santori (Routledge, 2017)

An account of the ways in which international agencies, private companies (e.g. Bridge International Academies, Pearson) and philanthropic foundations shape global education policies, with a particular focus on India and Ghana. These policies include the standardisation of education, the focus on core subjects, the use of corporate management models and test-based accountability, and are key planks in what has been referred to as the Global Education Reform Movement (GERM). Chapter 4 (‘Following things’) focusses on the role of EdTech in realising GERM goals.


7. Education and Technology

Neil Selwyn (Continuum, 2011)

Although covering some similar ground to his ‘Distrusting Educational Technology’, this handy volume summarises key issues, including ‘does technology inevitably change education?’, ‘what can history tell us about education and technology?’, ‘does technology improve learning?’, ‘does technology make education fairer?’, ‘will technology displace the teacher?’ and ‘will technology displace the school?’.


8. The Evolution of American Educational Technology

Paul Saettler (Information Age, 2004)

A goldmine of historical information, this is the first of three history books on my list. Early educational films from the start of the 20th century, educational radio, teaching machines and programmed instruction, early computer-assisted instruction like the PLATO project, educational broadcasting and television … moving on to interactive video, teleconferencing, and artificial intelligence. A fascinatingly detailed study of educational dreams and obsolescence.


9. Oversold and Underused

Larry Cuban (Harvard University Press, 2003)

Larry Cuban’s ground-breaking ‘Teachers and Machines: The Classroom Use of Technology since 1920’ (published in 1986, four years before Saettler’s history) was arguably the first critical evaluation of EdTech. In this title, Cuban pursues his interest in the troubled relationship between teachers and technology, arguing that more attention needs to be paid to the civic and social goals of schooling, goals that make the question of how many computers are in classrooms trivial. Larry Cuban’s blog is well worth following.


10. The Flickering Mind

Todd Oppenheimer (Random House, 2003)

A journalistic account of how approximately $70 billion was thrown at EdTech in American schools at the end of the 20th century in an attempt to improve them. It’s a tale of misplaced priorities, technological obsolescence and, ultimately, a colossal waste of money. Technology has changed since the writing of this book, but as the epigram of Alphonse Karr (cited by Oppenheimer in his afterword) puts it – ‘plus ça change, plus c’est la même chose’.


11. Teaching Machines

Bill Ferster (Johns Hopkins University Press, 2014)

This is the third history of EdTech on my list. A critical look at past attempts to automate instruction, and learning from successes and failures as a way of trying to avoid EdTech insanity (‘doing the same thing over and over again and expecting different results’). Not explicitly political, but the final chapter offers a useful framework for ‘making sense of teaching machines’.


12. The Technical Fix

Kevin Robins & Frank Webster (Macmillan, 1989)

Over thirty years old now, this remarkably prescient book situates the push for more EdTech in Britain in the 1980s as a part of broader social and political forces demanding a more market-oriented and entrepreneurial approach to education. The argument that EdTech cannot be extracted from relations of power and the social values that these entail is presented forcefully. Technology, write the authors, ‘is always shaped by, even constitutive of, prevailing values and power distribution’.


And here’s hoping that Audrey Watters’ new book sees the light of day soon, so it can be added to the list of history books!


I’ve long felt that the greatest value of technology in language learning is to facilitate interaction between learners, rather than interaction between learners and software. I can’t claim any originality here. Twenty years ago, Kern and Warschauer (2000) described ‘the changing nature of computer use in language teaching’, away from ‘grammar and vocabulary tutorials, drill and practice programs’, towards computer-mediated communication (CMC). This change has even been described as a paradigm shift (Ciftci & Kocoglu, 2012: 62), although I suspect that the shift has affected approaches to research much more than it has actual practices.

However, there is one application of CMC that is probably at least as widespread in actual practice as it is in the research literature: online peer feedback. Online peer feedback on writing, especially in the development of academic writing skills in higher education, is certainly very common. To a much lesser extent, online peer feedback on speaking (e.g. in audio and video blogs) has also been explored (see, for example, Yeh et al., 2019 and Rodríguez-González & Castañeda, 2018).

Peer feedback

Interest in feedback has spread widely since the publication of Hattie and Timperley’s influential ‘The Power of Feedback’, which argued that ‘feedback is one of the most powerful influences on learning and achievement’ (Hattie & Timperley, 2007: 81). Peer feedback, in particular, has generated much optimism in the general educational literature as a formative practice (Double et al., 2019) because of its potential to:

  • ‘promote a sense of ownership, personal responsibility, and motivation,
  • reduce assessee anxiety and improve acceptance of negative feedback,
  • increase variety and interest, activity and interactivity, identification and bonding, self-confidence, and empathy for others’ (Topping, 1998: 256)
  • improve academic performance (Double et al., 2019).

In the literature on language learning, this enthusiasm is mirrored and peer feedback is generally recommended by both methodologists and researchers (Burkert & Wally, 2013). The reasons given, in addition to those listed above, include the following:

  • it can benefit both the receiver and the giver of feedback (Storch & Aldossary, 2019: 124),
  • it requires the givers of feedback to listen to or read the language of their peers attentively, and, in the process, may provide opportunities for them to make improvements in their own speaking and writing (Alshuraidah & Storch, 2019: 166–167),
  • it can facilitate a move away from a teacher-centred classroom, and promote independent learning (and the skill of self-correction) as well as critical thinking (Hyland & Hyland, 2019: 7),
  • the target reader is an important consideration in any piece of writing (it is often specified in formal assessment tasks). Peer feedback may be especially helpful in developing the idea of what audience the writer is writing for (Nation, 2009: 139),
  • many learners are very receptive to peer feedback (Biber et al., 2011: 54),
  • it can reduce a teacher’s workload.

The theoretical arguments in support of peer feedback are supported to some extent by research. A recent meta-analysis found ‘an overall small to medium effect of peer assessment on academic performance’ (Double et al., 2019) in general educational settings. In language learning, ‘recent research has provided generally positive evidence to support the use of peer feedback in L2 writing classes’ (Yu & Lee, 2016: 467). However, ‘firm causal evidence is as yet unavailable’ (Yu & Lee, 2016: 466).

Online peer feedback

Taking peer feedback online would seem to offer a number of advantages over traditional face-to-face oral or written channels. These include:

  • a significant reduction of the logistical burden (Double et al., 2019) because there are fewer constraints of time and place (Ho, 2015: 1),
  • the possibility (with many platforms) of monitoring students’ interactions more closely (DiGiovanni & Nagaswami, 2001: 268),
  • the encouragement of ‘greater and more equal member participation than face-to-face feedback’ (Yu & Lee, 2016: 469),
  • the possibility of reducing learners’ anxiety (which may be greater in face-to-face settings and / or when an immediate response to feedback is required) (Yeh et al., 2019: 1).

Given these potential advantages, it is disappointing to find that a meta-analysis of peer assessment in general educational contexts did not find any significant difference between online and offline feedback (Double et al., 2019). Similarly, in language learning contexts, Yu & Lee (2016: 469) report that ‘there is inconclusive evidence about the impact of computer-mediated peer feedback on the quality of peer comments and text revisions’. The rest of this article is an exploration of possible reasons why online peer feedback is not more effective than it is.

The challenges of online peer feedback

Peer feedback is usually of greatest value when it focuses on the content and organization of what has been expressed. Learners, however, have a tendency to focus on formal accuracy, rather than on the communicative success (or otherwise) of their peers’ writing or speaking. Training can go a long way towards remedying this situation (Yu & Lee, 2016: 472 – 473): indeed, ‘the importance of properly training students to provide adequately useful peer comments cannot be over-emphasized’ (Bailey & Cassidy, 2018: 82). In addition, clearly organised rubrics to guide the feedback giver, such as those offered by feedback platforms like Peergrade, may also help to steer feedback in appropriate directions. There are, however, caveats which I will come on to.

A bigger problem occurs when the interaction which takes place when learners are supposedly engaged in peer feedback is completely off-task. In one analysis of students’ online discourse in two writing tasks, ‘meaning negotiation, error correction, and technical actions seldom occurred and […] social talk, task management, and content discussion predominated the chat’ (Liang, 2010: 45). One proposed solution to this is to grade peer comments: ‘reviewers will be more motivated to spend time in their peer review process if they know that their instructors will assess or even grade their comments’ (Choi, 2014: 225). Whilst this may sometimes be an effective strategy, the curtailment of social chat may actually create more problems than it solves, as we will see later.

Other challenges of peer feedback may be even less amenable to solutions. The most common problem concerns learners’ attitudes towards peer feedback: some learners are not receptive to feedback from their peers, preferring feedback from their teachers (Maas, 2017), and some learners may be reluctant to offer peer feedback for fear of giving offence. Attitudinal issues may derive from personal or cultural factors, or a combination of both. Whatever the cause, ‘interpersonal variables play a substantial role in determining the type and quality of peer assessment’ (Double et al., 2019). One proposed solution to this is to anonymise the peer feedback process, since it might be thought that this would lead to greater honesty and fewer concerns about loss of face. Research into this possibility, however, offers only very limited support: two studies out of three found little benefit of anonymity (Double et al., 2019). What is more, as with the curtailment of social chat, the practice is likely to limit the development of the interpersonal relationships, and therefore the positive pair / group dynamics (Liang, 2010: 45), that are necessary for effective collaborative work.

Towards solutions?

Online peer feedback is a form of computer-supported collaborative learning (CSCL), and it is to research in this broader field that I will now turn. The claim that CSCL ‘can facilitate group processes and group dynamics in ways that may not be achievable in face-to-face collaboration’ (Dooly, 2007: 64) is not contentious, but, in order for this to happen, a number of ‘motivational or affective perceptions are important preconditions’ (Chen et al., 2018: 801). Collaborative learning presupposes a collaborative pattern of peer interaction, as opposed to expert-novice, dominant-dominant, dominant-passive, or passive-passive patterns (Yu & Lee, 2016: 475).

Simply putting students together into pairs or groups does not guarantee collaboration. Collaboration is less likely to take place when instructional management focusses primarily on cognitive processes, and ‘socio-emotional processes are ignored, neglected or forgotten […] Social interaction is equally important for affiliation, impression formation, building social relationships and, ultimately, the development of a healthy community of learning’ (Kreijns et al., 2003: 336, 348 – 9). This can happen in all contexts, but in online environments, the problem becomes ‘more salient and critical’ (Kreijns et al., 2003: 336). This is why the curtailment of social chat, the grading of peer comments, and the provision of tight rubrics may be problematic.

There is no ‘single learning tool or strategy’ that can be deployed to address the challenges of online peer feedback and CSCL more generally (Chen et al., 2018: 833). In some cases, for personal or cultural reasons, peer feedback may simply not be a sensible option. In others, where effective online peer feedback is a reasonable target, the instructional approach must find ways to train students in the specifics of giving feedback on a peer’s work, to promote mutual support, to show how to work effectively with others, and to develop the language skills needed to do this (assuming that the target language is the language that will be used in the feedback).

So, what can we learn from looking at online peer feedback? I think it’s the same old answer: technology may confer a certain number of potential advantages, but, unfortunately, it cannot provide a ‘solution’ to complex learning issues.


Note: Some parts of this article first appeared in Kerr, P. (2020). Giving feedback to language learners. Part of the Cambridge Papers in ELT Series. Cambridge: Cambridge University Press. Available at: https://www.cambridge.org/gb/files/4415/8594/0876/Giving_Feedback_minipaper_ONLINE.pdf

 

References

Alshuraidah, A. and Storch, N. (2019). Investigating a collaborative approach to feedback. ELT Journal, 73 (2), pp. 166–174

Bailey, D. and Cassidy, R. (2018). Online Peer Feedback Tasks: Training for Improved L2 Writing Proficiency, Anxiety Reduction, and Language Learning Strategies. CALL-EJ, 20(2), pp. 70-88

Biber, D., Nekrasova, T., and Horn, B. (2011). The Effectiveness of Feedback for L1-English and L2-Writing Development: A Meta-Analysis, TOEFL iBT RR-11-05. Princeton: Educational Testing Service. Available at: https://www.ets.org/Media/Research/pdf/RR-11-05.pdf

Burkert, A. and Wally, J. (2013). Peer-reviewing in a collaborative teaching and learning environment. In Reitbauer, M., Campbell, N., Mercer, S., Schumm Fauster, J. and Vaupetitsch, R. (Eds.) Feedback Matters. Frankfurt am Main: Peter Lang, pp. 69–85

Chen, J., Wang, M., Kirschner, P.A. and Tsai, C.C. (2018). The role of collaboration, computer use, learning environments, and supporting strategies in CSCL: A meta-analysis. Review of Educational Research, 88 (6), pp. 799-843

Choi, J. (2014). Online Peer Discourse in a Writing Classroom. International Journal of Teaching and Learning in Higher Education, 26 (2), pp. 217 – 231

Ciftci, H. and Kocoglu, Z. (2012). Effects of Peer E-Feedback on Turkish EFL Students’ Writing Performance. Journal of Educational Computing Research, 46 (1), pp. 61 – 84

DiGiovanni, E. and Nagaswami, G. (2001). Online peer review: an alternative to face-to-face? ELT Journal, 55 (3), pp. 263 – 272

Dooly, M. (2007). Joining forces: Promoting metalinguistic awareness through computer-supported collaborative learning. Language Awareness, 16 (1), pp. 57-74

Double, K.S., McGrane, J.A. and Hopfenbeck, T.N. (2019). The Impact of Peer Assessment on Academic Performance: A Meta-analysis of Control Group Studies. Educational Psychology Review

Hattie, J. and Timperley, H. (2007). The Power of Feedback. Review of Educational Research, 77(1), pp. 81–112

Ho, M. (2015). The effects of face-to-face and computer-mediated peer review on EFL writers’ comments and revisions. Australasian Journal of Educational Technology, 31 (1)

Hyland K. and Hyland, F. (2019). Contexts and issues in feedback on L2 writing. In Hyland K. & Hyland, F. (Eds.) Feedback in Second Language Writing. Cambridge: Cambridge University Press, pp. 1–22

Kern, R. and Warschauer, M. (2000). Theory and practice of network-based language teaching. In M. Warschauer and R. Kern (eds) Network-Based Language Teaching: Concepts and Practice. New York: Cambridge University Press. pp. 1 – 19

Kreijns, K., Kirschner, P. A. and Jochems, W. (2003). Identifying the pitfalls for social interaction in computer-supported collaborative learning environments: a review of the research. Computers in Human Behavior, 19(3), pp. 335-353

Liang, M. (2010). Using Synchronous Online Peer Response Groups in EFL Writing: Revision-Related Discourse. Language Learning and Technology, 14 (1), pp. 45 – 64

Maas, C. (2017). Receptivity to learner-driven feedback. ELT Journal, 71 (2), pp. 127–140

Nation, I. S. P. (2009). Teaching ESL / EFL Reading and Writing. New York: Routledge

Panadero, E. and Alqassab, M. (2019). An empirical review of anonymity effects in peer assessment, peer feedback, peer review, peer evaluation and peer grading. Assessment & Evaluation in Higher Education, 1–26

Rodríguez-González, E. and Castañeda, M. E. (2018). The effects and perceptions of trained peer feedback in L2 speaking: impact on revision and speaking quality, Innovation in Language Learning and Teaching, 12 (2), pp. 120-136, DOI: 10.1080/17501229.2015.1108978

Storch, N. and Aldossary, K. (2019). Peer Feedback: An activity theory perspective on givers’ and receivers’ stances. In Sato, M. and Loewen, S. (Eds.) Evidence-based Second Language Pedagogy. New York: Routledge, pp. 123–144

Topping, K. (1998). Peer assessment between students in colleges and universities. Review of Educational Research, 68 (3), pp. 249-276.

Yeh, H.-C., Tseng, S.-S., and Chen, Y.-S. (2019). Using Online Peer Feedback through Blogs to Promote Speaking Performance. Educational Technology & Society, 22 (1), pp. 1–14

Yu, S. and Lee, I. (2016). Peer feedback in second language writing (2005 – 2014). Language Teaching, 49 (4), pp. 461 – 493

If you cast your eye over the English language teaching landscape, you can’t help noticing a number of prominent features that weren’t there, or at least were much less visible, twenty years ago. I’d like to highlight three. First, there is the interest in life skills (aka 21st century skills). Second, there is the use of digital technology to deliver content. And third, there is a concern with measuring educational outputs through frameworks such as the Pearson GSE. In this post, I will focus primarily on the last of these, with a closer look at measuring teacher performance.

Recent years have seen the development of a number of frameworks for evaluating teacher competence in ELT. These include the British Council’s CPD Framework for Teachers and the Cambridge English Teaching Framework.

TESOL has also produced a set of guidelines for developing professional teaching standards for EFL.

Frameworks such as these were not always intended as tools to evaluate teachers. The British Council’s framework, for example, was apparently designed for teachers to understand and plan their own professional development. Similarly, the Cambridge framework says that it is for teachers to see where they are in their development – and think about where they want to go next. But much like the CEFR for language competence, frameworks can be used for purposes rather different from their designers’ intentions. I think it is likely that frameworks such as these are more often used to evaluate teachers than for teachers to evaluate themselves.

But where did the idea for such frameworks come from? Was there a suddenly perceived need for things like this to aid in self-directed professional development? Were teachers’ associations calling out for frameworks to help their members? Even if that were the case, it would still be useful to know why, and why now.

One possibility is that the interest in life skills, digital technology and the measurement of educational outputs has come about as a result of what has been called the Global Educational Reform Movement, or GERM (Sahlberg, 2016). GERM dates back to the 1980s and the shifts (especially in the United States under Reagan and the United Kingdom under Thatcher) in education policy towards more market-led approaches which emphasize (1) greater competition between educational providers, (2) greater autonomy from the state for educational providers (and therefore a greater role for private suppliers), (3) greater choice of educational provider for students and their parents, and (4) standardized tests and measurements which allow consumers of education to make more informed choices. One of the most significant GERM vectors is the World Bank.

The interest in incorporating the so-called 21st century skills as part of the curriculum can be traced back to the early 1980s when the US National Commission on Excellence in Education recommended the inclusion of a range of skills, which eventually crystallized into the four Cs of communication, collaboration, critical thinking and creativity. The labelling of this skill set as ‘life skills’ or ‘21st century skills’ was always something of a misnomer: the reality was that these were the soft skills required by the world of work. The key argument for their inclusion in the curriculum was that they were necessary for the ‘competitiveness and wealth of corporations and countries’ (Trilling & Fadel, 2009: 7). Unsurprisingly, the World Bank, whose interest in education extends only so far as its economic value, embraced the notion of ‘life skills’ with enthusiasm. Its document ‘Life skills: what are they, why do they matter, and how are they taught?’ (World Bank, 2013) makes the case very clearly. It took a while for the world of English language teaching to get on board, but by 2012, Pearson was already sponsoring a ‘signature event’ at IATEFL Glasgow entitled ‘21st Century Skills for ELT’. Since then, the currency of ‘life skills’ as an ELT buzz phrase has not abated.

Just as the World Bank’s interest in ‘life skills’ is motivated by the perceived need to prepare students for the world of work (for participation in the ‘knowledge economy’), the Bank emphasizes the classroom use of computers and resources from the internet: ‘Information and communication technology (ICT) allows the adaptation of globally available information to local learning situations. […] A large percentage of the World Bank’s education funds are used for the purchase of educational technology. […] According to the Bank’s figures, 40 per cent of their education budget in 2000 and 27 per cent in 2001 was used to purchase technology’ (Spring, 2015: 50).

Digital technology is also central to capturing data, which allows for the measurement of educational outputs. As befits an organisation of economists interested in the cost-effectiveness of investment in education, the Bank accords enormous importance to what are thought to be empirical measures of accountability. So intrinsic to the Bank’s approach is this concern with measurement that ‘the Bank’s implicit message to national governments seems to be: “improve your data collection capacity so that we can run more reliable cross-country analysis and regressions”’ (Verger & Bonal, 2012: 131).

Measuring the performance of teachers is, of course, a part of assessing educational outputs. The World Bank, which sees global education as fundamentally ‘broken’, has, quite recently, turned more of its attention to the role of teachers. A World Bank blog from 2019 explains the reasons:

A growing body of evidence suggests the learning crisis is, at its core, a teaching crisis. For students to learn, they need good teachers—but many education systems pay little attention to what teachers know, what they do in the classroom, and in some cases whether they even show up. Rapid technological change is raising the stakes. Technology is already playing a crucial role in providing support to teachers, students, and the learning process more broadly. It can help teachers better manage the classroom and offer different challenges to different students. And technology can allow principals, parents, and students to interact seamlessly.

A key plank in the World Bank’s attempts to implement its educational vision is its System Assessment and Benchmarking for Education Results (SABER), which I will return to in due course. As part of its SABER efforts, last year the World Bank launched its ‘Teach’ tool. This tool is basically an evaluation framework. Videos of lessons are recorded and coded for indicators of teacher efficiency by coders who can be ‘90% reliable’ after only four days of training. The coding system focuses on the time that students spend on-task, but also on ‘life skills’ like collaboration and critical thinking (see below).

Teach framework

Like the ELT frameworks, it can be used as a professional development tool, but, like them, it may also be used for summative evaluation.

The connections between those landmarks on the ELT landscape and the concerns of the World Bank are not, I would suggest, coincidental. The World Bank is, of course, not the only player in GERM, but it is a very special case. It is the largest single source of external financing in ‘developing countries’ (Beech, 2009: 345), managing a portfolio of $8.9 billion, with operations in 70 countries as of August 2013 (Spring, 2015: 32). Its loans come with conditions attached which tie the borrowing countries to GERM objectives. Arguably of even greater importance than its influence through funding is the Bank’s direct entry into the world of ideas:

The Bank yearns for a deeper and more comprehensive impact through avenues of influence transcending both project and program loans. Not least in education, the World Bank is investing much in its quest to shape global opinion about economic, developmental, and social policy. Rather than imposing views through specific loan negotiations, Bank style is broadening in attempts to lead borrower country officials to its preferred way of thinking. (Jones, 2007: 259).

The World Bank sees itself as a Knowledge Bank and acts accordingly. Rizvi and Lingard (2010: 48) observe that ‘in many nations of the Global South, the only extant education policy analysis is research commissioned by donor agencies such as the World Bank […] with all the implications that result in relation to problem setting, theoretical frameworks and methodologies’. Hundreds of academics are engaged to do research related to the Bank’s areas of educational interest, and ‘the close links with the academic world give a strong credibility to the ideas disseminated by the Bank […] In fact, many ideas that acquired currency and legitimacy were originally proposed by them. This is the case of testing students and using the results to evaluate progress in education’ (Castro, 2009: 472).

Through a combination of substantial financial clout and relentless marketing (Selwyn, 2013: 50), the Bank has succeeded in shaping global academic discourse. In partnership with similar institutions, it has introduced a way of classifying and thinking about education (Beech, 2009: 352). It has become, in short, a major site ‘for the organization of knowledge about education’ (Rizvi & Lingard, 2010: 79), wielding ‘a degree of power that has arguably enabled it to shape the educational agendas of nations throughout the Global South’ and beyond (Menashy, 2012).

So, is there any problem in the world of ELT taking up the inclusion of ‘life skills’? I think there is. The first problem is one of definition. Creativity and critical thinking are very poorly defined, meaning very different things to different people, so it is not always clear what is being taught. Following on from this, there is substantial debate about whether such skills can actually be taught at all, and, if they can, how they should be taught. It seems highly unlikely that the tokenistic way in which they are ‘taught’ in most published ELT courses can have any positive impact. But this is not my main reservation, which is that, by and large, we have come to uncritically accept the idea that English language learning is mostly concerned with preparation for the workplace (see my earlier post ‘The EdTech Imaginary in ELT’).

Is there any problem with the promotion of digital technologies in ELT? Again, I think there is, and a good proportion of the posts on this blog have argued for the need for circumspection in rolling out more technology in language learning and teaching. My main reason is that while it is clear that this trend is beneficial to technology vendors, it is much less clear that advantages will necessarily accrue to learners. Beyond this, there must be serious concerns about data ownership, privacy, and the way in which the datafication of education, led by businesses and governments in the Global North, is changing what counts as good education, a good student or an effective teacher, especially in the Global South. ‘Data and metrics,’ observe Williamson et al. (2020: 353), ‘do not just reflect what they are designed to measure, but actively loop back into action that can change the very thing that was measured in the first place’.

And what about tools for evaluating teacher competences? Here I would like to provide a little more background. There is, first of all, a huge question mark about how accurately such tools measure what they are supposed to measure. This may not matter too much if the tool is only used for self-evaluation or self-development, but ‘once smart systems of data collection and social control are available, they are likely to be widely applied for other purposes’ (Sadowski, 2020: 138). Jaime Saavedra, head of education at the World Bank, insists that the World Bank’s ‘Teach’ tool is not for evaluation and is not useful for firing teachers who perform badly.

Saavedra needs teachers to buy into the tool, so he obviously doesn’t want to scare them off. However, ‘Teach’ clearly is an evaluation tool (if not, what is it?) and, as with other tools (I’m thinking of CEFR and teacher competency frameworks in ELT), its purposes will evolve. Eric Hanushek, an education economist at Stanford University, has commented that ‘this is a clear evaluation tool at the probationary stage … It provides a basis for counseling new teachers on how they should behave … but then again if they don’t change over the first few years you also have information you should use’.

At this point, it is useful to take a look at the World Bank’s attitudes towards teachers. Teachers are seen to be at the heart of the ‘learning crisis’. However, the greatest focus in World Bank documents is on (1) teacher absenteeism in some countries, (2) unskilled and demotivated teachers, and (3) the reluctance of teachers and their unions to back World Bank-sponsored reforms. As real as these problems are, it is important to understand that the Bank has been complicit in them:

For decades, the Bank has criticised pre-service and in-service teacher training as not cost-effective. For decades, the Bank has been pushing the hiring of untrained contract teachers as a cheap fix and a way to get around teacher unions – and contract teachers are again praised in the World Bank Development Report (WDR). This contradicts the occasional places in the WDR in which the Bank argues that developing countries need to follow the lead of the few countries that attract the best students to teaching, improve training, and improve working conditions. There is no explicit evidence offered at all for the repeated claim that teachers are unmotivated and need to be controlled and monitored to do their job. The Bank has a long history of blaming teachers and teacher unions for educational failures. The Bank implicitly argues that the problem of teacher absenteeism, referred to throughout the report, means teachers are unmotivated, but that simply is not true. Teacher absenteeism is not a sign of low motivation. Teacher salaries are abysmally low, as is the status of teaching. Because of this, teaching in many countries has become an occupation of last resort, yet it still attracts dedicated teachers. Once again, the Bank has been very complicit in this state of affairs as it, and the IMF, for decades have enforced neoliberal, Washington Consensus policies which resulted in government cutbacks and declining real salaries for teachers around the world. It is incredible that economists at the Bank do not recognise that the deterioration of salaries is the major cause of teacher absenteeism and that all the Bank is willing to peddle are ineffective and insulting pay-for-performance schemes. (Klees, 2017).

The SABER framework (referred to above) focuses very clearly on policies for hiring, rewarding and firing teachers.

[The World Bank] places the private sector’s methods of dealing with teachers as better than those of the public sector, because it is more ‘flexible’. In other words, it is possible to say that teachers can be hired and fired more easily; that is, hired without the need of organizing a public competition and fired if they do not achieve the expected outcomes as, for example, students’ improvements in international test scores. Further, the SABER document states that ‘Flexibility in teacher contracting is one of the primary motivations for engaging the private sector’ (World Bank, 2011: 4). This affirmation seeks to reduce expenditures on teachers while fostering other expenses such as the creation of testing schemes and spending more on ICTs, as well as making room to expand the hiring of private sector providers to design curriculum, evaluate students, train teachers, produce education software, and books. (De Siqueira, 2012).

The World Bank has argued consistently for a reduction of education costs by driving down teachers’ salaries. One of the authors of the World Bank Development Report 2018 notes that ‘in most countries, teacher salaries consume the lion’s share of the education budget, so there are already fewer resources to implement other education programs’. Another World Bank report (2007) makes the importance of ‘flexible’ hiring and lower salaries very clear:

In particular, recent progress in primary education in Francophone countries resulted from reduced teacher costs, especially through the recruitment of contractual teachers, generally at about 50% the salary of civil service teachers. (cited in Compton & Weiner, 2008: 7).

Merit pay (or ‘pay for performance’) is another of the Bank’s preferred wheezes. Despite enormous problems in reaching fair evaluations of teachers’ work, and a distinct lack of convincing evidence that merit pay leads to anything positive (it may actually be counter-productive) (De Bruyckere et al., 2018: 143 – 147), the Bank is fully committed to the idea. Perhaps this is connected to the usefulness of merit pay in keeping teachers on their toes, compliant and fearful of losing their jobs, rather than any desire to improve teacher effectiveness?

There is evidence that this may be the case. Yet another World Bank report (Bau & Das, 2017) argues, on the basis of research, that improved TVA (teacher value added) does not correlate with wages in the public sector (where it is hard to fire teachers), but it does in the private sector. The study found that ‘a policy change that shifted public hiring from permanent to temporary contracts, reducing wages by 35 percent, had no adverse impact on TVA’. All of which would seem to suggest that improving the quality of teaching is of less importance to the Bank than flexible hiring and firing. This is very much in line with a more general advocacy of making education fit for the world of work. Lois Weiner of New Jersey City University puts it like this:

The architects of [GERM] policies—imposed first in developing countries—openly state that the changes will make education better fit the new global economy by producing workers who are (minimally) educated for jobs that require no more than a 7th or 8th grade education; while a small fraction of the population receive a high quality education to become the elite who oversee finance, industry, and technology. Since most workers do not need to be highly educated, it follows that teachers with considerable formal education and experience are neither needed nor desired because they demand higher wages, which is considered a waste of government money. Most teachers need only be “good enough”—as one U.S. government official phrased it—to follow scripted materials that prepare students for standardized tests. (Weiner, 2012).

It seems impossible to separate the World Bank’s ‘Teach’ tool from the broader goals of GERM. Teacher evaluation tools, like the teaching of 21st century skills and the datafication of education, need to be understood properly, I think, as means to an end. It’s time to spell out what that end is.

The World Bank’s mission is ‘to end extreme poverty (by reducing the share of the global population that lives in extreme poverty to 3 percent by 2030)’ and ‘to promote shared prosperity (by increasing the incomes of the poorest 40 percent of people in every country)’. Its education activities are part of this broad aim and are driven by subscription to human capital theory (a view of the skills, knowledge and experience of individuals in terms of their ability to produce economic value). This may be described as the ‘economization of education’: a shift in educational concerns away from ‘such things as civic participation, protecting human rights, and environmentalism to economic growth and employment’ (Spring, 2015: xiii). Both students and teachers are seen as human capital. For students, human capital education places an emphasis on the cognitive skills needed to succeed in the workplace and the ‘soft skills’ needed to function in the corporate world (Spring, 2015: 2). Accordingly, World Bank investments require ‘justifications on the basis of manpower demands’ (Heyneman, 2003: 317). One of the Bank’s current strategic priorities is the education of girls: although human rights and equity may also play a part, the Bank’s primary concern is that ‘Not Educating Girls Costs Countries Trillions of Dollars’.

According to the Bank’s logic, its educational aims can best be achieved through a combination of support for the following:

  • cost accounting and quantification (since returns on investment must be carefully measured)
  • competition and market incentives (since it is believed that the ‘invisible hand’ of the market leads to the greatest benefits)
  • the private sector in education and a rolling back of the role of the state (since it is believed that private ownership improves efficiency)

The package of measures is a straightforward reflection of ‘what Western mainstream economists believe’ (Castro, 2009: 474).

Mainstream Western economics is, however, going through something of a rocky patch right now. Human capital theory is ‘useful when prevailing conditions are right’ (Jones, 2007: 248), but prevailing conditions are not right in much of the world (even in the United States), and the theory ‘for the most part ignores the intersections of poverty, equity and education’ (Menashy, 2012). In poorer countries evidence for the positive effects of markets in education is in very short supply, and even in richer countries it is still not conclusive (Verger & Bonal, 2012: 135). An OECD Education Paper (Waslander et al., 2010: 64) found that the effects of choice and competition between schools were at best small, if indeed any effects were found at all. Similarly, the claim that privatization improves efficiency is not sufficiently supported by evidence. Analyses of PISA data would seem to indicate that, ‘all else being equal (especially when controlling for the socio-economic status of the students), the type of ownership of the school, whether it is a private or a state school, has only modest effects on student achievement or none at all’ (Verger & Bonal, 2012: 133). Educational privatization as a one-size-fits-all panacea to educational problems has little to recommend it.

There are, then, serious limitations in the Bank’s theoretical approach. Its practical track record is also less than illustrious, even by the Bank’s own reckoning. Many of the Bank’s interventions have proved very ‘costly to developing countries. At the Bank’s insistence countries over-invested in vocational and technical education. Because of the narrow definition of recurrent costs, countries ignored investments in reading materials and in maintaining teacher salaries. Later at the Bank’s insistence, countries invested in thousands of workshops and laboratories that, for the most part, became useless “white elephants”’ (Heyneman, 2003: 333).

As a bank, the World Bank is naturally interested in the rate of return of investment in that capital, and is therefore concerned with efficiency and efficacy. This raises the question of ‘Effective for what?’ and given that what may be effective for one individual or group may not necessarily be effective for another individual or group, one may wish to add a second question: ‘Effective for whom?’ (Biesta, 2020: 31). Critics of the World Bank, of whom there are many, argue that its policies serve ‘the interests of corporations by keeping down wages for skilled workers, cause global brain migration to the detriment of developing countries, undermine local cultures, and ensure corporate domination by not preparing school graduates who think critically and are democratically oriented’ (Spring, 2015: 56). Lest this sound a bit harsh, we can turn to the Bank’s own commissioned history: ‘The way in which [the Bank’s] ideology has been shaped conforms in significant degree to the interests and conventional wisdom of its principal stockholders [i.e. bankers and economists from wealthy nations]. International competitive bidding, reluctance to accord preferences to local suppliers, emphasis on financing foreign exchange costs, insistence on a predominant use of foreign consultants, attitudes toward public sector industries, assertion of the right to approve project managers – all proclaim the Bank to be a Western capitalist institution’ (Mason & Asher, 1973: 478 – 479).

The teaching of ‘life skills’, the promotion of data-capturing digital technologies and the push to evaluate teachers’ performance are, then, all closely linked to the agenda of the World Bank, and owe their existence in the ELT landscape, in no small part, to the way that the World Bank has shaped educational discourse. There is, however, one other connection between ELT and the World Bank which must be mentioned.

The World Bank’s foreign language instructional goals are directly related to English as a global language. The Bank urges, ‘Policymakers in developing countries … to ensure that young people acquire a language with more than just local use, preferably one used internationally.’ What is this international language? First, the World Bank mentions that schools of higher education around the world are offering courses in English. In addition, the Bank states, ‘People seeking access to international stores of knowledge through the internet require, principally, English language skills.’ (Spring, 2015: 48).

Without the World Bank, then, there might be a lot less English language teaching than there is. I have written this piece to encourage people to think more about the World Bank, its policies and particular instantiations of those policies. You might or might not agree that the Bank is an undemocratic, technocratic, neoliberal institution unfit for the necessities of today’s world (Klees, 2017). But whatever you think about the World Bank, you might like to consider the answers to Tony Benn’s ‘five little democratic questions’ (quoted in Sadowski, 2020: 17):

  • What power has it got?
  • Where did it get this power from?
  • In whose interests does it exercise this power?
  • To whom is it accountable?
  • How can we get rid of it?

References

Bau, N. and Das, J. (2017). The Misallocation of Pay and Productivity in the Public Sector : Evidence from the Labor Market for Teachers. Policy Research Working Paper; No. 8050. World Bank, Washington, DC. Retrieved [18 May 2020] from https://openknowledge.worldbank.org/handle/10986/26502

Beech, J. (2009). Who is Strolling Through The Global Garden? International Agencies and Educational Transfer. In Cowen, R. and Kazamias, A. M. (Eds.) Second International Handbook of Comparative Education. Dordrecht: Springer. pp. 341 – 358

Biesta, G. (2020). Educational Research. London: Bloomsbury.

Castro, C. De M., (2009). Can Multilateral Banks Educate The World? In Cowen, R. and Kazamias, A. M. (Eds.) Second International Handbook of Comparative Education. Dordrecht: Springer. pp. 455 – 478

Compton, M. and Weiner, L. (Eds.) (2008). The Global Assault on Teaching, Teachers, and their Unions. New York: Palgrave Macmillan

De Bruyckere, P., Kirschner, P.A. and Hulshof, C. (2020). More Urban Myths about Learning and Education. New York: Routledge.

De Siqueira, A. C. (2012). The 2020 World Bank Education Strategy: Nothing New, or the Same Old Gospel. In Klees, S. J., Samoff, J. and Stromquist, N. P. (Eds.) The World Bank and Education. Rotterdam: Sense Publishers. pp. 69 – 81

Heyneman, S.P. (2003). The history and problems in the making of education policy at the World Bank 1960–2000. International Journal of Educational Development 23 (2003) pp. 315–337. Retrieved [18 May 2020] from https://www.academia.edu/29593153/The_History_and_Problems_in_the_Making_of_Education_Policy_at_the_World_Bank_1960_2000

Jones, P. W. (2007). World Bank Financing of Education. 2nd edition. Abingdon, Oxon.: Routledge.

Klees, S. (2017). A critical analysis of the World Bank’s World Development Report on education. Retrieved [18 May 2020] from: https://www.brettonwoodsproject.org/2017/11/critical-analysis-world-banks-world-development-report-education/

Mason, E. S. & Asher, R. E. (1973). The World Bank since Bretton Woods. Washington, DC: Brookings Institution.

Menashy, F. (2012). Review of Klees, S. J., Samoff, J. & Stromquist, N. P. (Eds) (2012). The World Bank and Education: Critiques and Alternatives. Rotterdam: Sense Publishers. Education Review, 15. Retrieved [18 May 2020] from https://www.academia.edu/7672656/Review_of_The_World_Bank_and_Education_Critiques_and_Alternatives

Rizvi, F. & Lingard, B. (2010). Globalizing Education Policy. Abingdon, Oxon.: Routledge.

Sadowski, J. (2020). Too Smart. Cambridge, MA.: MIT Press.

Sahlberg, P. (2016). The global educational reform movement and its impact on schooling. In K. Mundy, A. Green, R. Lingard, & A. Verger (Eds.), The handbook of global policy and policymaking in education. New York, NY: Wiley-Blackwell. pp.128 – 144

Selwyn, N. (2013). Education in a Digital World. New York: Routledge.

Spring, J. (2015). Globalization of Education 2nd Edition. New York: Routledge.

Trilling, B. & C. Fadel (2009). 21st Century Skills. San Francisco: Wiley

Verger, A. & Bonal, X. (2012). ‘All Things Being Equal?’ In Klees, S. J., Samoff, J. and Stromquist, N. P. (Eds.) The World Bank and Education. Rotterdam: Sense Publishers. pp. 69 – 81

Waslander, S., Pater, C. & van der Weide, M. (2010). Markets in Education: An analytical review of empirical research on market mechanisms in education. OECD EDU Working Paper 52.

Weiner, L. (2012). Social Movement Unionism: Teachers Can Lead the Way. Reimagine, 19 (2) Retrieved [18 May 2020] from: https://www.reimaginerpe.org/19-2/weiner-fletcher

Williamson, B., Bayne, S. & Shay, S. (2020). The datafication of teaching in Higher Education: critical issues and perspectives, Teaching in Higher Education, 25:4, 351-365, DOI: 10.1080/13562517.2020.1748811

World Bank. (2013). Life skills: what are they, why do they matter, and how are they taught? (English). Adolescent Girls Initiative (AGI) learning from practice series. Washington, DC: World Bank. Retrieved [18 May 2020] from: http://documents.worldbank.org/curated/en/569931468331784110/Life-skills-what-are-they-why-do-they-matter-and-how-are-they-taught

Vocab Victor is a very curious vocab app. It’s not a flashcard system designed to extend vocabulary breadth. Rather, it tests the depth of a user’s vocabulary knowledge.

The app’s website refers to the work of Paul Meara (see, for example, Meara, P. 2009. Connected Words. Amsterdam: John Benjamins). Meara explored the ways in which an analysis of the words that we associate with other words can shed light on the organisation of our mental lexicon. Described as ‘gigantic multidimensional cobwebs’ (Aitchison, J. 1987. Words in the Mind. Oxford: Blackwell, p.86), our mental lexicons do not appear to store lexical items in individual slots, but rather they are distributed across networks of associations.

The size of the web (i.e. the number of words, or the level of vocabulary breadth) is important, but equally important is the strength of the connections within the web (or vocabulary depth), as this determines the robustness of vocabulary knowledge. These connections or associations are between different words and concepts and experiences, and they are developed by repeated, meaningful, contextualised exposure to a word. In other words, the connections are firmed up through extensive opportunities to use language.

In word association research, a person is given a prompt word and asked to say the first other word that comes to their mind. For an entertaining example of this process at work, you might enjoy this clip from the comedy show ‘Help’. The research has implications for a wide range of questions, not least second language acquisition. For example, given a particular prompt, native speakers produce a relatively small number of associative responses, and these are reasonably predictable. Learners, on the other hand, typically produce a much greater variety of responses (which might seem surprising, given that they have a smaller vocabulary store to select from).

One way of classifying the different kinds of response is to divide them into two categories: syntagmatic (words that are discoursally connected to the prompt, such as collocations) and paradigmatic (words that are semantically close to the prompt and are the same part of speech). Given the prompt ‘dog’, for example, ‘bark’ would be a syntagmatic response, while ‘cat’ would be paradigmatic. Linguists have noted that learners (both L1 children and L2 learners) show a shift from predominantly syntagmatic responses to more paradigmatic responses as their mental lexicon develops.

The developers of Vocab Victor have set out to build ‘more and stronger associations for the words your students already know’ and to teach ‘new words by associating them with existing, known words, helping students acquire native-like word networks. Furthermore, Victor teaches different types of knowledge, including synonyms, “type-of” relationships, collocations, derivations, multiple meanings and form-focused knowledge’. Since we know how important vocabulary depth is, this seems like a pretty sensible learning target.

The app attempts to develop this depth in two main ways (see below). The ‘core game’ is called ‘Word Strike’, where learners have to pick the word on the arrow which most closely matches the word on the target. The second is called ‘Word Drop’, where a bird holds a word card and the user has to decide whether it relates more closely to one of two other words below. Significantly, they carry out these tasks before any kind of association between form and meaning has been established. The meaning of unknown items can be checked in a monolingual dictionary later. There are a couple of other, less important games that I won’t describe now. The graphics are attractive, if a little juvenile. The whole thing is gamified with levels, leaderboards and so on. It’s free and, presumably, still under development.

‘Word Strike’ and ‘Word Drop’ screenshots

The app claims to be for ‘English language learners of all ages [to] develop a more native-like vocabulary’. It also says that it is appropriate for ‘native speaking primary students [to] build and strengthen vocabulary for better test performance and stronger reading skills’, as well as ‘secondary students [to] prepare for the PSAT and SAT’. It was the scope of these claims that first set my alarm bells ringing. How could one app be appropriate for such diverse users? (Spoiler: it can’t, and attempts to make an edtech product suitable for everyone inevitably end up with a product that is suitable for no one.)

Rich, associative lexical networks are the result of successful vocabulary acquisition, but neither Paul Meara nor anyone else in the word association field has, to the best of my knowledge, ever suggested that deliberate study is the way to develop the networks. It is uncontentious to say that vocabulary depth (as shown by associative networks) is best developed through extensive exposure to input – reading and listening.

It is also reasonably uncontentious to say that deliberate study of vocabulary pays greatest dividends in developing vocabulary breadth (not depth), especially at lower levels, with a focus on the top three to eight thousand words in terms of frequency. It may also be useful at higher levels when a learner needs to acquire a limited number of new words for a particular purpose. An example of this would be someone who is going to study in an EMI context and would benefit from rapid learning of the words of the Academic Word List.

The Vocab Victor website says that the app ‘is uniquely focused on intermediate-level vocabulary. The app helps get students beyond this plateau by selecting intermediate-level vocabulary words for your students’. At B1 and B2 levels, learners typically know words that fall between #2500 and #3750 in the frequency tables. At level C2, they know most of the most frequent 5000 items. The less frequent a word is, the less point there is in studying it deliberately.

For deliberate study of vocabulary to serve any useful function, the target language needs to be carefully selected, with a focus on high-frequency items. It makes little sense to study words that will already be very familiar. And it makes no sense to deliberately study apparently random words that are so infrequent (i.e. outside the top 10,000) that it is unlikely they will be encountered again before the deliberate study has been forgotten. Take a look at the examples below and judge for yourself how well chosen the items are.

‘Year’ and ‘smashed’ example items

Vocab Victor appears to focus primarily on semantic fields, as in the example above with ‘smashed’ as a key word. ‘Smashed’, ‘fractured’, ‘shattered’ and ‘cracked’ are all very close in meaning. In order to disambiguate them, it would help learners to see which nouns typically collocate with these words. But they don’t get this with the app – all they get are English-language definitions from Merriam-Webster. What this means is that learners are (1) unlikely to develop a sufficient understanding of target items to incorporate them into their productive lexicon, and (2) likely to get completely confused by a huge number of similar, low-frequency words (that weren’t really appropriate for deliberate study in the first place). What’s more, lexical sets of this kind may not be a terribly good idea, anyway (see my blog post on the topic).

Vocab Victor takes words, as opposed to lexical items, as the target learning objects. Users may be tested on the associations of any of the meanings of polysemantic items. In the example below (not perhaps the most appropriate choice for primary students!), there are two main meanings, but with other items, things get decidedly more complex (see the example with ‘toss’). Learners are also asked to do the associative tasks ‘Word Strike’ and ‘Word Drop’ before they have had a chance to check the possible meanings of either the prompt item or the associative options.

‘Stripper’ definition, ‘Stripper’ task and ‘toss’ definition screenshots

How anyone could learn from any of this is quite beyond me. I often struggled to choose the correct answer myself; there were also a small number of items whose meaning I wasn’t sure of. I could see no clear way in which items were being recycled (there’s no spaced repetition here). The website claims that ‘adaptating [sic] to your student’s level happens automatically from the very first game’, but I could not see this happening. In fact, it’s very hard to adapt target item selection to an individual learner, since right / wrong or multiple choice answers tell us so little. Does a correct answer tell us that someone knows an item, or just that they made a lucky guess? Does an incorrect answer tell us that an item is unknown, or just that, under game pressure, someone tapped the wrong button? And how do you evaluate a learner’s lexical level (as a starting point), even as a very rough approximation, without testing knowledge of at least thirty items first? All in all, then, a very curious app.

One of the most powerful associative responses to a word (especially with younger learners) is what is called a ‘klang’ response: another word which rhymes with or sounds like the prompt word. So, if someone says the word ‘app’ to you, what’s the first klang response that comes to mind?

Online teaching is big business. Very big business. Online language teaching is a significant part of it, expected to be worth over $5 billion by 2025. Within this market, the biggest demand is for English and the lion’s share of the demand comes from individual learners. And a sizable number of them are Chinese kids.

There are a number of service providers, and the competition between them is hot. To give you an idea of the scale of this business, here are a few details taken from a report in USA Today. VIPKid is valued at over $3 billion, attracts celebrity investors, and has around 70,000 tutors who live in the US and Canada. 51Talk has 14,800 English teachers from a variety of English-speaking countries. BlingABC gets over 1,000 American applicants a month for its online tutoring jobs. There are many, many others.

Demand for English teachers in China is huge. The Pie News, citing a Chinese state media announcement, reported in September of last year that there were approximately 400,000 foreign citizens working in China as English language teachers, two-thirds of whom were working illegally. Recruitment problems, exacerbated by quotas and more stringent official requirements for qualifications, along with a very restricted desired teacher profile (white native speakers from a few countries, such as the US and the UK), have led more providers to look towards online solutions. Eric Yang, founder of the Shanghai-based iTutorGroup, which operates under a number of different brands and claims to be the ‘largest English-language learning institution in the world’, said that he had been expecting online tutoring to surpass F2F classes within a few years. With coronavirus, he now thinks it will come ‘much earlier’.

Typically, the work does not require much, if anything, in the way of training (besides familiarity with the platform), although a 40-hour TEFL course is usually preferred. Teachers deliver pre-packaged lessons. According to the USA Today report, Chinese students pay between $49 and $80 an hour for the classes.

It’s a highly profitable business and the biggest cost to the platform providers is the rates they pay the tutors. If you google “Teaching TEFL jobs online”, you’ll quickly find claims that teachers can earn $40 / hour and up. Such claims are invariably found on the sites of recruitment agencies, who are competing for attention. However, although it’s possible that a small number of people might make this kind of money, the reality is that most will get nowhere near it. Scroll down the pages a little and you’ll discover that a more generally quoted and accepted figure is between $14 and $20 / hour. These tutors are, of course, freelancers, so the wages are before tax, and there is no health coverage or pension plan.

Reed job advert

VIPKid, for example, considered to be one of the better companies, offers payment in the $14 – $22 / hour range. Others offer considerably less, especially if you are not a white, graduate US citizen. Current rates advertised on OETJobs include work for Ziktalk ($10 – 15 / hour), NiceTalk ($10 – 11 / hour), 247MyTutor ($5 – 8 / hour) and Weblio ($5 – 6 / hour). The number of hours that you get is rarely fixed and tutors need to build up a client base by getting good reviews. They will often need to upload short introductory videos, selling their skills. They are in direct competition with other tutors.

They also need to make themselves available when demand for their services is highest. Peak hours for VIPKid, for example, are between 2 and 8 in the morning, depending on where you live in the US. Weekends, too, are popular. With VIPKid, classes are scheduled in advance, but this is not always the case with other companies, where you log on to show that you are available and hope someone wants you. This is the case with, for example, Cambly (which pays $10.20 / hour … or rather $0.17 / minute) and NiceTalk. According to one review, Cambly has a ‘priority hours system [which] allows teachers who book their teaching slots in advance to feature higher on the teacher list than those who have just logged in, meaning that they will receive more calls’. Teachers have to commit to a set schedule and any changes are heavily penalised. The review states that ‘new tutors on the platform should expect to receive calls for about 50% of the time they’re logged on’.

Taking the gig economy to its logical conclusion, there are other companies where tutors can fix their own rates. SkimaTalk, for example, offers a deal where tutors first teach three unpaid lessons (‘to understand how the system works and build up their initial reputation on the platform’); the system then sets $16 / hour as a default rate, which tutors can change to anything they wish. Another, Palfish, also lets tutors set their own rate; the typical rate is $10 – 18 / hour, and the company takes a 20% commission. With Preply, here is the deal on offer:

Your earnings depend on the hourly rate you set in your profile and how often you can provide lessons. Preply takes a 100% commission fee of your first lesson payment with every new student. For all subsequent lessons, the commission varies from 33 to 18% and depends on the number of completed lesson hours with students. The more tutoring you do through Preply, the less commission you pay.
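To make the quoted terms concrete, here is a rough sketch of what a tutor might take home under such a scheme. Note that the quote does not say at which hour thresholds the commission falls from 33% towards 18%, so the bands in the code are invented purely for illustration:

```python
def tutor_earnings(hourly_rate, hours_with_student):
    """Estimate take-home pay under a Preply-style commission scheme.

    The first lesson with each new student is effectively unpaid (100%
    commission); thereafter commission falls from 33% towards 18% as
    completed hours accumulate. The hour bands below are illustrative
    guesses -- the quoted terms do not specify them.
    """
    total = 0.0
    for hour in range(1, hours_with_student + 1):
        if hour == 1:
            commission = 1.00   # first lesson: 100% commission
        elif hour <= 20:
            commission = 0.33
        elif hour <= 50:
            commission = 0.25
        else:
            commission = 0.18   # long-term floor
        total += hourly_rate * (1 - commission)
    return round(total, 2)

# 30 hours with one student at $15 / hour: the first hour earns nothing,
# and roughly a quarter to a third of the rest goes to the platform.
print(tutor_earnings(15, 30))
```

Whatever the exact thresholds, the structure is clear: the platform’s cut is front-loaded, and the tutor bears the cost of every new student acquired.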

Not one to miss a trick, Ziktalk (‘currently focusing on language learning and building global audience’) encourages teachers ‘to upload educational videos in order to attract more students’. Or, to put it another way, teachers provide free content in order to have more chance of earning $10 – 15 / hour. Ah, the joys of digital labour!

And, then, coronavirus came along. With schools shutting down, first in China and then elsewhere, tens of millions of students are migrating online. In Hong Kong, for example, the South China Morning Post reports that schools will remain closed until April 20, at the earliest, but university entrance exams will be going ahead as planned in late March. CNBC reported yesterday that classes are being cancelled across the US, and the same is happening, or is likely to happen, in many other countries.

Shares in the big online providers soared in February, with Forbes reporting that $3.2 billion had been added to the share value of China’s e-Learning leaders. Stock in New Oriental (owners of BlingABC, mentioned above) ‘rose 7.3% last month, adding $190 million to the wealth of its founder Yu Minhong [whose] current net worth is estimated at $3.4 billion’.

DingTalk, a communication and management app owned by Alibaba (and the most downloaded free app in China’s iOS App Store), has been adapted to offer online services for schools, reports Xinhua, the official state-run Chinese news agency. The scale of operations is enormous: more than 10,000 new cloud servers were deployed within just two hours.

Current impacts are likely to be dwarfed by what happens in the future. According to Terry Weng, a Shenzhen-based analyst, ‘The gradual exit of smaller education firms means there are more opportunities for TAL and New Oriental. […] Investors are more keen for their future performance.’ Zhu Hong, CTO of DingTalk, observes ‘the epidemic is like a catalyst for many enterprises and schools to adopt digital technology platforms and products’.

For edtech investors, things look rosy. Smaller, F2F providers are in danger of going under. In an attempt to mop up this market and gain overall market share, many elearning providers are offering hefty discounts and free services. Profits can come later.

For the hundreds of thousands of illegal or semi-legal English language teachers in China, things look doubly bleak. Their situation is likely to become even more precarious, with the online gig economy their obvious fall-back path. But English language teachers everywhere are likely to be affected one way or another, as will the whole world of TEFL.

Now seems like a pretty good time to find out more about precarity (see the Teachers as Workers website) and native-speakerism (see TEFL Equity Advocates).

Google search results

Unconditional calls for language teachers to incorporate digital technology into their teaching are common. The reasons given are many and typically include the claims that (1) our students are ‘digital natives’ who expect technology to be integrated into their learning, and (2) digital technology is ubiquitous and has so many affordances for learning. Writing on the topic is almost invariably enthusiastic and the general conclusion is that the integration of technology is necessary and essential. Here’s a fairly typical example: digital technology is ‘an essential multisensory extension to the textbook’ (Torben Schmidt and Thomas Strasser in Surkamp & Viebrock, 2018: 221).

Teachers who are reluctant or fail to embrace technology are often ‘characterised as technophobic, or too traditional in their teaching style, or reluctant to adopt change’ (Watson, 2001: 253). (It’s those pesky teachers again.)

Claims for the importance of digital technology are often backed up by vague references to research. Michael Carrier, for example, in his introductory chapter to ‘Digital Language Learning and Teaching’ (Carrier et al. 2017: 3) writes that ‘research results […] seem to show conclusively that the use of educational technology adds certain degrees of richness to the learning and teaching process […] at the very least, digital learning seems to provide enhanced motivation for learners’.

Unfortunately, this is simply not true. Neither in language learning / teaching, nor in education more generally, is there any clear evidence of the necessary benefits of introducing educational technology. In the broader context, the ‘PISA analysis of the impact of Information Communication Technology (ICT) on reading, mathematics, and science (OECD, 2015: 3) in countries heavily invested in educational technology showed mixed effects and “no appreciable improvements”’ (Herodotou et al., 2019). Educational technology can or might ‘add certain degrees of richness’ or ‘provide enhanced motivation’, but that is not the same as saying that it does or will. The shift from can to will, a piece of modal legerdemain used to advocate for educational technology, is neatly illustrated in a quote from MIT’s Office of Digital Learning, whose remit is to improve learning and teaching across the university via digital learning: ‘Digital Learning technologies can enable students to grasp concepts more quickly [etc….] Digital technologies will enable this in new and better ways and create possibilities beyond the limits of our current imagination’ (quoted by Carrier, 2017: 1).

Before moving on, here’s another example. The introduction to Li Li’s ‘New Technologies and Language Learning’ (Li, 2017: x) states, with a cautious can, that one of the objectives of the book is ‘to provide examples of how technologies can be used in assisting language education’. In the next paragraph, however, caution is thrown to the wind and we are told, unequivocally, that ‘technology is beneficial for language learning’.

Pedagogy before technology

Examples of gratuitous technology use are not hard to find. Mark Warschauer (who, as the founding director of the Digital Learning Lab at the University of California, Irvine, could be fairly described as an edtech enthusiast) describes one example: ‘I remember observing a beginners’ French class a number of years ago, the teacher bragged about how engaged the learners were in creating multimedia in French. However, the students were spending most of their time and energy talking with each other in English about how to make PowerPoints, when, as beginning learners, they really needed to be spending time hearing as much French as possible’ (quoted in the Guardian, May 2014).

As a result, no doubt, of having similar experiences, it seems that many people are becoming a little more circumspect in their enthusiasm for edtech. In the same Guardian article as Warschauer’s recollections, Russell Stannard says the trick is to put the pedagogy first, not the technology: ‘You’ve got to know why you’re using it. Teachers do need to learn to use new technology, but the driving force should always be the pedagogy behind it’. Nicky Hockly, Gavin Dudeney and Mark Pegrum (Hockly et al., 2013: 45) concur: ‘Content and pedagogy come before technology. We must decide on our content and pedagogical aims before determining whether our students should use pens or keyboards, write essays or blogs, or design posters or videos’. And Graham Stanley (2013: 1), in the introduction to his ‘Language Learning With Technology’, states that his ‘book makes a point of putting pedagogy at the forefront of the lesson, which is why content has been organised around specific learning content goals rather than specific technologies’.

But Axel Krommer, of the Friedrich-Alexander University of Erlangen-Nürnberg, has argued that the principle of ‘pedagogy before technology’ is ‘trivial at best’. In a piece for the Goethe Institute, he writes that ‘a theory with which everyone agrees and whose opposite no-one believes true is meaningless’, although he adds that it may be useful as ‘an admonitory wake-up call when educational institutions risk being blinded by technological possibilities that cause them to neglect pedagogical principles that should really be taken for granted’. It was this piece that set me thinking more about ‘pedagogy before technology’.

Pedagogy before technology (on condition that there is technology)

Another person to lament the placing of technology before pedagogy is Nik Peachey. In an opinion piece for the Guardian, entitled ‘Technology can sometimes be wasted on English language teaching’, he complains about how teachers are left to sort out how to use technology ‘in a pedagogically effective way, often with very little training or support’. He appears to take it as given that technology is a positive force, and argues that it shouldn’t be wasted. The issue, he says, is that better teacher training is needed so that teachers’ ‘digital literacies’ are improved and to ensure that technological potential is fulfilled.

His position, therefore, cannot really be said to be one of ‘pedagogy before technology’. Like the other writers mentioned above, he comes to the pedagogy through and after an interest in the technology. The educational use of digital technology per se is never seriously questioned. The same holds true for almost the entirety of the world of CALL research.

A Canadian conference ‘Pedagogy b4 Technology’ illustrates my point beautifully.

There are occasional exceptions. A recent example which I found interesting was an article by Herodotou et al (2019), in which the authors take as their starting point a set of OECD educational goals (quality of life, including health, civic engagement, social connections, education, security, life satisfaction and the environment), and then investigate the extent to which a variety of learning approaches (formative analytics, teachback, place-based learning, learning with robots, learning with drones, citizen inquiry) – not all of which involve technology – might contribute to the realisation of these goals.

Technology before pedagogy as policy

Some of the high school English teachers I work with have to use tablets in one lesson a week. Some welcome it, some accept it (they can catch up with other duties while the kids are busy with exercises on the tablet), others just roll their eyes at the mention of this policy. In the same school system, English language learning materials can only be bought if they come in digital versions (even if it is the paper versions that are actually used). The digital versions are mostly used for projecting pages onto the IWBs. Meanwhile, budgets and the time available for in-service training have been cut.

Elsewhere, a chain of universities decides that a certain proportion of all courses must be taught online. English language courses, being less prestigious than major subjects, are among the first to be migrated to platforms. The staff, few of whom have tenure or time to spare, cope as best they can, with some support from a department head. Training is provided in the mechanics of operating the platform, and, hopefully before too long, more training will become available to optimize the use of the platform for pedagogical purposes. An adequate budget has yet to be agreed.

The reasons why so many educational authorities introduce such policies are, at best, only superficially related to pedagogy. There is a belief, widely held, that technology cannot fail to make things better. In the words of Tony Blair: ‘Technology has revolutionised the way we work and is now set to transform education. Children cannot be effective in tomorrow’s world if they are trained in yesterday’s skills’. But there is also the potential of education technology to scale education up (i.e. increase student numbers), to reduce long-term costs, to facilitate accountability, to increase productivity, to restrict the power of teachers (and their unions), and so on.

In such circumstances, which are not uncommon, it seems to me that there are more pressing things to worry about than teachers who are not thinking sufficiently about the pedagogical uses to which they put the technology that they have to use. Working conditions, pay and hours are all affected by the digitalisation of education. These things do get talked about (see, for example, Walsh, 2019), but only rarely.

Technology as pedagogy

Blended learning, described by Pete Sharma in 2010 as a ‘buzz word’ in ELT, remains a popular pedagogical approach. In a recent article (2019), he enthuses about the possibilities of blended learning, suggesting that teachers should use it all the time: ‘teaching in this new digital age should use the technologies which students meet in their everyday lives, such as the Internet, laptop, smartphone and tablet’. It’s also, he claims, time-efficient, but other pedagogical justifications are scant: ‘some language areas are really suited to be studied outside the classroom. Extensive reading and practising difficult phonemes, for instance’.

Blended learning and digital technology are inseparable. Hockly (2018) explains the spread of blended learning in ELT as being driven primarily by ‘the twin drivers of economics (i.e. lower costs) and increasingly accessible and affordable hardware and software’. It might be nice to believe that ‘it is pedagogy, rather than technology, that should underpin the design of blended learning programmes’ (McCarthy, 2016, back cover), but the technology is the pedagogy here. Precisely how it is used is almost inevitably an afterthought.

Which pedagogy, anyway?

We can talk about putting pedagogy before technology, but this raises the question of which particular pedagogy we want to put in the driving seat. Presumably not all pedagogies are of equal value.

One of the most common uses of digital technology that has been designed specifically for language learning is the IWB- or platform-delivered coursebook and its accompanying digital workbook. We know that a majority of teachers using online coursebook packages direct their students more readily to tasks with clear right / wrong answers (e.g. drag-and-drop or gap-fill grammar exercises) than they do to the forum facilities where communicative language use is possible. Here, technology is merely replicating and, perhaps (because of its ease of use), encouraging established pedagogical practices. The pedagogy precedes the technology, but it’s probably not the best pedagogy in the world. Nor does it make best use of the technology’s potential. Would the affordances of the technology make a better starting point for course design?

Graham Stanley’s book (2013) offers suggestions for using technology for a variety of purposes, ranging from deliberate practice of grammar and vocabulary to ways of facilitating opportunities for skills practice. It’s an eclectic mix, similar to the range of activities on offer in the average coursebook for adults or teenagers. It is pedagogy-neutral in the sense that it does not set out a body of principles of language learning or teaching and then derive from them a set of practices for using the technology. It is a recipe book for using technological tools and, like all recipe books, prioritises activities over principles. I like the book and I don’t intend these comments as criticism. My point is simply that it’s not easy to take pedagogical principles as a starting point. Does the world of ELT even have generally agreed pedagogical principles?

And what is it that we’re teaching?

One final thought … If we consider how learners are likely to be using the English they are learning in their real-world futures, technology will not be far away: reading online, listening to / watching online material, writing and speaking with messaging apps, writing with text, email or Google Docs … If, in designing pedagogical approaches, we wish to include features of authentic language use, it’s hard to see how we can avoid placing technology fairly near the centre of the stage. Technologically-mediated language use is inseparable from pedagogy: one does not precede the other.

Similarly, if we believe that it is part of the English teacher’s job to develop the digital literacy (e.g. Hockly et al., 2013), visual literacy (e.g. Donaghy, 2015) or multimodal literacy of their students – not, incidentally, a belief that I share – then, again, technology cannot be separated from pedagogy.

Pedagogy before technology, OK??

So, I ask myself what precisely it is that people mean when they say that pedagogy should come before technology. The locutionary force, or referential meaning, usually remains unclear: in the absence of a particular pedagogy and particular contexts, what exactly is being said? The illocutionary force, likewise, is difficult to understand in the absence of a particular addressee: is the message only intended for teachers suffering from Everest Syndrome? And the perlocutionary force is equally intriguing: how are people who make the statement positioning themselves, and in relation to which addressee? Along the lines of green-washing and woke-washing, are we sometimes seeing cases of pedagogy-washing?

REFERENCES

Carrier, M., Damerow, R. M. & Bailey, K. M. (2017) Digital Language Learning and Teaching: Research, theory, and practice. New York: Routledge

Donaghy, K. (2015) Film in Action. Peaslake, Surrey: DELTA Publishing

Herodotou, C., Sharples, M., Gaved, M., Kukulska-Hulme, A., Rienties, B., Scanlon, E. & Whitelock, D. (2019) Innovative Pedagogies of the Future: An Evidence-Based Selection. Frontiers in Education, 4 (113)

Hockly, N. (2018) Blended Learning. ELT Journal 72 (1): pp. 97 – 101

Hockly, N., Dudeney, G. & Pegrum, M. (2013) Digital Literacies. Harlow: Pearson

Li, L. (2017) New Technologies and Language Learning. London: Palgrave

McCarthy, M. (Ed.) (2016) The Cambridge Guide to Blended Learning for Language Teaching. Cambridge: Cambridge University Press

OECD (2015) Students, Computers and Learning: Making the Connection, PISA. Paris: OECD Publishing

Sharma, P. (2010) Blended Learning. ELT Journal, 64 (4): pp. 456 – 458

Sharma, P. (2019) The Complete Guide to Running a Blended Learning Course. Oxford University Press English Language Teaching Global Blog 17 October 2019. Available at: https://oupeltglobalblog.com/2019/10/17/complete-guide-blended-learning/

Stanley, G. (2013) Language Learning with Technology. Cambridge: Cambridge University Press

Surkamp, C. & Viebrock, B. (Eds.) (2018) Teaching English as a Foreign Language: An Introduction. Stuttgart: J. B. Metzler

Walsh, P. (2019) Precarity. ELT Journal, 73 (4): pp. 459–462

Watson, D. M. (2001) Pedagogy before Technology: Re-thinking the Relationship between ICT and Teaching. Education and Information Technologies, 6 (4): pp. 251 – 266

From time to time, I have mentioned Programmed Learning (or Programmed Instruction) in this blog (here and here, for example). It felt like time to go into a little more detail about what Programmed Instruction was (and is) and why I think it’s important to know about it.

A brief description

The basic idea behind Programmed Instruction was that subject matter could be broken down into very small parts, which could be organised into an optimal path for presentation to students. Students worked, at their own speed, through a series of micro-tasks, building their mastery of each nugget of learning that was presented, not progressing from one to the next until they had demonstrated they could respond accurately to the previous task.

There were two main types of Programmed Instruction: linear programming and branching programming. In the former, every student would follow the same path, the same sequence of frames. This could be used in classrooms for whole-class instruction and I tracked down a book (illustrated below) called ‘Programmed English Course Student’s Book 1’ (Hill, 1966), which was an attempt to transfer the ideas behind Programmed Instruction to a zero-tech, class environment. This is very similar in approach to the material I had to use when working at an Inlingua school in the 1980s.

Programmed English Course

Comparatives strip

An example of how self-paced programming worked is illustrated here, with a section on comparatives.

With branching programming, ‘extra frames (or branches) are provided for students who do not get the correct answer’ (Kay et al., 1968: 19). This was only suitable for self-study, but it was clearly preferable, as it allowed for self-pacing and some personalization. The material could be presented in books (which meant that students had to flick back and forth in their books) or with special ‘teaching machines’, but the latter were preferred.

In the words of an early enthusiast, Programmed Instruction was essentially ‘a device to control a student’s behaviour and help him to learn without the supervision of a teacher’ (Kay et al.,1968: 58). The approach was inspired by the work of Skinner and it was first used as part of a university course in behavioural psychology taught by Skinner at Harvard University in 1957. It moved into secondary schools for teaching mathematics in 1959 (Saettler, 2004: 297).

Enthusiasm and uptake

The parallels between current enthusiasm for the power of digital technology to transform education and the excitement about Programmed Instruction and teaching machines in the 1960s are very striking (McDonald et al., 2005: 90). In 1967, it was reported that ‘we are today on the verge of what promises to be a revolution in education’ (Goodman, 1967: 3) and that ‘tremors of excitement ran through professional journals and conferences and department meetings from coast to coast’ (Kennedy, 1967: 871). The following year, another commentator referred to the way that the field of education had been stirred ‘with an almost Messianic promise of a breakthrough’ (Ornstein, 1968: 401). Programmed instruction was also seen as an exciting business opportunity: ‘an entire industry is just coming into being and significant sales and profits should not be too long in coming’, wrote one hopeful financial analyst as early as 1961 (Kozlowski, 1967: 47).

The new technology seemed to offer a solution to the ‘problems of education’. Media reports in 1963 in Germany, for example, discussed a shortage of teachers, large classes and inadequate learning progress … ‘an “urgent pedagogical emergency” that traditional teaching methods could not resolve’ (Hof, 2018). Individualised learning, through Programmed Instruction, would equalise educational opportunity and if you weren’t part of it, you would be left behind. In the US, two billion dollars were spent on educational technology by the government in the decade following the passing of the National Defense Education Act, and this was added to by grants from private foundations. As a result, ‘the production of teaching machines began to flourish, accompanied by the marketing of numerous “teaching units” stamped into punch cards as well as less expensive didactic programme books and index cards. The market grew dramatically in a short time’ (Hof, 2018).

In the field of language learning, however, enthusiasm was more muted. In the year in which he completed his doctoral studies[1], the eminent linguist Bernard Spolsky noted that ‘little use is actually being made of the new technique’ (Spolsky, 1966). A year later, a survey of over 600 foreign language teachers at US colleges and universities reported that only about 10% of them had programmed materials in their departments (Valdman, 1968: 1). In most of these cases, the materials ‘were being tried out on an experimental basis under the direction of their developers’. And two years after that, it was reported that ‘programming has not yet been used to any very great extent in language teaching, so there is no substantial body of experience from which to draw detailed, water-tight conclusions’ (Howatt, 1969: 164).

By the early 1970s, Programmed Instruction was already beginning to seem like yesterday’s technology, even though the principles behind it are still very much alive today (Thornbury (2017) refers to Duolingo as ‘Programmed Instruction’). It would be nice to think that language teachers of the day were more sceptical than, for example, their counterparts teaching mathematics. It would be nice to think that, like Spolsky, they had taken on board Chomsky’s (1959) demolition of Skinner. But the widespread popularity of Audiolingual methods suggests otherwise. Audiolingualism, based essentially on the same Skinnerian principles as Programmed Instruction, needed less outlay on technology. The machines (a slide projector and a record or tape player) were cheaper than the teaching machines, could be used for other purposes and did not become obsolete so quickly. The method also lent itself more readily to established school systems (i.e. whole-class teaching) and the skills sets of teachers of the day. Significantly, too, there was relatively little investment in Programmed Instruction for language teaching (compared to, say, mathematics), since this was a smallish and more localized market. There was no global market for English language learning as there is today.

Lessons to be learned

1 Shaping attitudes

It was not hard to persuade some educational authorities of the value of Programmed Instruction. As discussed above, it offered a solution to the problem of ‘the chronic shortage of adequately trained and competent teachers at all levels in our schools, colleges and universities’, wrote Goodman (1967: 3), who added that ‘there is growing realisation of the need to give special individual attention to handicapped children and to those apparently or actually retarded’. The new teaching machines ‘could simulate the human teacher and carry out at least some of his functions quite efficiently’ (Goodman, 1967: 4). This wasn’t quite the same thing as saying that the machines could replace teachers, although some might have hoped for this. The official line was more often that the machines could ‘be used as devices, actively co-operating with the human teacher as adaptive systems and not just merely as aids’ (Goodman, 1967: 37). But this more nuanced message did not always get through, and ‘the Press soon stated that robots would replace teachers and conjured up pictures of classrooms of students with little iron men in front of them’ (Kay et al., 1968: 161).

For teachers, though, it was one thing to be told that the machines would free their time to perform more meaningful tasks, but harder to believe when this was accompanied by a ‘rhetoric of the instructional inadequacies of the teacher’ (McDonald et al., 2005: 88). Many teachers felt threatened. They ‘reacted against the “unfeeling machine” as a poor substitute for the warm, responsive environment provided by a real, live teacher. Others have seemed to take it more personally, viewing the advent of programmed instruction as the end of their professional career as teachers. To these, even the mention of programmed instruction produces a momentary look of panic followed by the appearance of determination to stave off the ominous onslaught somehow’ (Tucker, 1972: 63).

Some of those who were pushing for Programmed Instruction had a bigger agenda, with their sights set firmly on broader school reform made possible through technology (Hof, 2018). Individualised learning and Programmed Instruction were not just ends in themselves: they were ways of facilitating bigger changes. The trouble was that teachers were necessary for Programmed Instruction to work. On the practical level, it became apparent that a blend of teaching machines and classroom teaching was more effective than the machines alone (Saettler, 2004: 299). But the teachers’ attitudes were crucial: a research study involving over 6000 students of Spanish showed that ‘the more enthusiastic the teacher was about programmed instruction, the better the work the students did, even though they worked independently’ (Saettler, 2004: 299). In other researched cases, too, ‘teacher attitudes proved to be a critical factor in the success of programmed instruction’ (Saettler, 2004: 301).

2 Returns on investment

Pricing a hyped edtech product is a delicate matter. Vendors need to see a relatively quick return on their investment, before a newer technology knocks them out of the market. Developments in computing were fast in the late 1960s, and the first commercially successful personal computer, the Altair 8800, appeared in 1975. But too high a price carried obvious risks. In 1967, the cheapest teaching machine in the UK, the Tutorpack (from Packham Research Ltd), cost £7 12s (equivalent to about £126 today), but machines like these were disparagingly referred to as ‘page-turners’ (Higgins, 1983: 4). A higher-end linear programming machine cost twice this amount. Branching programme machines cost a lot more. The Mark II AutoTutor (from USI Great Britain Limited), for example, cost £31 per month (equivalent to £558), with eight reels of programmes thrown in (Goodman, 1967: 26). A lower-end branching machine, the Grundytutor, could be bought for £230 (worth about £4140 today).

Images: teaching machines, and the AutoTutor Mk II (from Goodman)

This was serious money, and any institution splashing out on teaching machines needed to be confident that they would be well used for a long period of time (Nordberg, 1965). The programmes (the software) were specific to individual machines and the content could not be updated easily. At the same time, other technological developments (cine projectors, tape recorders, record players) were arriving in classrooms, and schools found themselves having to pay for technical assistance and maintenance. The average teacher was ‘unable to avail himself fully of existing aids because, to put it bluntly, he is expected to teach for too many hours a day and simply has not the time, with all the administrative chores he is expected to perform, either to maintain equipment, to experiment with it, let alone keeping up with developments in his own and wider fields. The advent of teaching machines which can free the teacher to fulfil his role as an educator will intensify and not diminish the problem’ (Goodman, 1967: 44). Teaching machines, in short, were ‘oversold and underused’ (Cuban, 2001).

3 Research and theory

Looking back twenty years later, B. F. Skinner conceded that ‘the machines were crude, [and] the programs were untested’ (Skinner, 1986: 105). The documentary record suggests that the second part of this statement is not entirely true. Herrick (1966: 695) reported that ‘an overwhelming amount of research time has been invested in attempts to determine the relative merits of programmed instruction when compared to ‘traditional’ or ‘conventional’ methods of instruction. The results have been almost equally overwhelming in showing no significant differences’. In 1968, Kay et al. (1968: 96) noted that ‘there has been a definite effort to examine programmed instruction’. A later meta-analysis of research in secondary education (Kulik et al., 1982) confirmed that ‘Programmed Instruction did not typically raise student achievement […] nor did it make students feel more positively about the subjects they were studying’.

It was not, therefore, the case that research was not being done. It was that many people preferred not to look at it. The same holds true for theoretical critiques. In relation to language learning, Spolsky (1966) referred to Chomsky’s (1959) rebuttal of Skinner’s arguments, adding that ‘there should be no need to rehearse these inadequacies, but as some psychologists and even applied linguists appear to ignore their existence it might be as well to remind readers of a few’. Programmed Instruction might have had a limited role to play in language learning, but vendors’ claims went further than that and some people believed them: ‘Rather than addressing themselves to limited and carefully specified FL tasks – for example the teaching of spelling, the teaching of grammatical concepts, training in pronunciation, the acquisition of limited proficiency within a restricted number of vocabulary items and grammatical features – most programmers aimed at self-sufficient courses designed to lead to near-native speaking proficiency’ (Valdman, 1968: 2).

4 Content

When learning is conceptualised as purely the acquisition of knowledge, technological optimists tend to believe that machines can convey it more effectively and more efficiently than teachers (Hof, 2018). The corollary of this is the belief that, if you get the materials right (plus the order in which they are presented and appropriate feedback), you can ‘to a great extent control and engineer the quality and quantity of learning’ (Post, 1972: 14). Learning, in other words, becomes an engineering problem, and technology is its solution.

One of the problems was that technology vendors were, first and foremost, technology specialists. Content was almost an afterthought. Materials writers needed to be familiar with the technology and, if not, they were unlikely to be employed. Writers needed to believe in the potential of the technology, so those familiar with current theory and research would clearly not fit in. The result was unsurprising. Kennedy (1967: 872) reported that ‘there are hundreds of programs now available. Many more will be published in the next few years. Watch for them. Examine them critically. They are not all of high quality’. He was being polite.

5 Motivation

As is usually the case with new technologies, there was a positive novelty effect with Programmed Instruction. And, as is always the case, the novelty effect wears off: ‘students quickly tired of, and eventually came to dislike, programmed instruction’ (McDonald et al., 2005: 89). It could not really have been otherwise: ‘human learning and intrinsic motivation are optimized when persons experience a sense of autonomy, competence, and relatedness in their activity. Self-determination theorists have also studied factors that tend to occlude healthy functioning and motivation, including, among others, controlling environments, rewards contingent on task performance, the lack of secure connection and care by teachers, and situations that do not promote curiosity and challenge’ (McDonald et al., 2005: 93). The demotivating experience of using these machines was particularly acute with younger and ‘less able’ students, as was noted at the time (Valdman, 1968: 9).

The unlearned lessons

I hope that you’ll now understand why I think the history of Programmed Instruction is so relevant to us today. In the words of my favourite Yogi-ism, it’s like déjà vu all over again. I have quoted repeatedly from the article by McDonald et al. (2005) and I would highly recommend it – available here. Hopefully, too, Audrey Watters’ forthcoming book, ‘Teaching Machines’, will appear before too long, and she will, no doubt, have much more of interest to say on this topic.

References

Chomsky, N. 1959. ‘Review of Skinner’s Verbal Behavior’. Language, 35: 26-58

Cuban, L. 2001. Oversold & Underused: Computers in the Classroom. (Cambridge, MA: Harvard University Press)

Goodman, R. 1967. Programmed Learning and Teaching Machines 3rd edition. (London: English Universities Press)

Herrick, M. 1966. ‘Programmed Instruction: A critical appraisal’. The American Biology Teacher, 28 (9), 695-698

Higgins, J. 1983. ‘Can computers teach?’ CALICO Journal, 1 (2)

Hill, L. A. 1966. Programmed English Course Student’s Book 1. (Oxford: Oxford University Press)

Hof, B. 2018. ‘From Harvard via Moscow to West Berlin: educational technology, programmed instruction and the commercialisation of learning after 1957’. History of Education, 47 (4), 445-465

Howatt, A. P. R. 1969. Programmed Learning and the Language Teacher. (London: Longmans)

Kay, H., Dodd, B. & Sime, M. 1968. Teaching Machines and Programmed Instruction. (Harmondsworth: Penguin)

Kennedy, R. H. 1967. ‘Before using Programmed Instruction’. The English Journal, 56 (6), 871-873

Kozlowski, T. 1961. ‘Programmed Teaching’. Financial Analysts Journal, 17 (6), 47-54

Kulik, C.-L., Schwalb, B. & Kulik, J. 1982. ‘Programmed Instruction in Secondary Education: A Meta-analysis of Evaluation Findings’. Journal of Educational Research, 75: 133-138

McDonald, J. K., Yanchar, S. C. & Osguthorpe, R. T. 2005. ‘Learning from Programmed Instruction: Examining Implications for Modern Instructional Technology’. Educational Technology Research and Development, 53 (2), 84-98

Nordberg, R. B. 1965. ‘Teaching machines: six dangers and one advantage’. In Roucek, J. S. (Ed.), Programmed Teaching: A Symposium on Automation in Education (pp. 1-8). (New York: Philosophical Library)

Ornstein, J. 1968. ‘Programmed Instruction and Educational Technology in the Language Field: Boon or Failure?’ The Modern Language Journal, 52 (7), 401-410

Post, D. 1972. ‘Up the programmer: How to stop PI from boring learners and strangling results’. Educational Technology, 12 (8), 14–1

Saettler, P. 2004. The Evolution of American Educational Technology. (Greenwich, Conn.: Information Age Publishing)

Skinner, B. F. 1986. ‘Programmed Instruction Revisited’. The Phi Delta Kappan, 68 (2), 103-110

Spolsky, B. 1966. ‘A psycholinguistic critique of programmed foreign language instruction’. International Review of Applied Linguistics in Language Teaching, 4 (1-4), 119-130

Thornbury, S. 2017. Scott Thornbury’s 30 Language Teaching Methods. (Cambridge: Cambridge University Press)

Tucker, C. 1972. ‘Programmed Dictation: An Example of the P.I. Process in the Classroom’. TESOL Quarterly, 6 (1), 61-70

Valdman, A. 1968. ‘Programmed Instruction versus Guided Learning in Foreign Language Acquisition’. Die Unterrichtspraxis / Teaching German, 1 (2), 1-14

[1] Spolsky’s doctoral thesis for the University of Montreal was entitled ‘The psycholinguistic basis of programmed foreign language instruction’.

In my last post, I looked at shortcomings in edtech research, mostly from outside the world of ELT. I made a series of recommendations of ways in which such research could become more useful. In this post, I look at two very recent collections of ELT edtech research. The first of these is Digital Innovations and Research in Language Learning, edited by Mavridi and Saumell, and published this February by the Learning Technologies SIG of IATEFL. I’ll refer to it here as DIRLL. It’s available free to IATEFL LT SIG members, and can be bought for $10.97 as an ebook on Amazon (US). The second is the most recent edition (February 2020) of the Language Learning & Technology journal, which is open access and available here. I’ll refer to it here as LLTJ.

In both of these collections, the focus is not on ‘technology per se, but rather issues related to language learning and language teaching, and how they are affected or enhanced by the use of digital technologies’. However, they are very different kinds of publication. Nobody involved in the production of DIRLL got paid in any way (to the best of my knowledge) and, in keeping with its provenance from a teachers’ association, the book has ‘a focus on the practitioner as teacher-researcher’. Almost all of the contributing authors are university-based, but they are typically involved more in language teaching than in research. With one exception (a grant from the EU), their work was unfunded.

LLTJ, which appears three times a year, is funded by two American universities and published by the University of Hawaii Press. The editors and associate editors are well-known scholars in their fields. The journal’s impact factor is high, close to that of the paywalled ReCALL (published by Cambridge University Press), which is the highest-ranking journal in the field of CALL. The contributing authors are all university-based, many with a string of published articles (in prestige journals), chapters or books behind them. At least six of the studies were funded by national grant-awarding bodies.

I should begin by making clear that there was much in both collections that I found interesting. However, it was not usually the research itself that I found informative, but the literature review that preceded it. Two of the chapters in DIRLL were not really research, anyway. One was the development of a template for evaluating ICT-mediated tasks in CLIL, another was an advocacy of comics as a resource for language teaching. Both of these were new, useful and interesting to me. LLTJ included a valuable literature review of research into VR in FL learning (but no actual new research). With some exceptions in both collections, though, I felt that I would have been better off curtailing my reading after the reviews. Admittedly, there wouldn’t be much in the way of literature reviews if there were no previous research to report …

It was no surprise to see that the learners who were the subjects of this research were overwhelmingly university students. In fact, only one article (about a high-school project in Israel, reported in DIRLL) was not about university students. The research areas focused on reflected this bias towards tertiary contexts: online academic reading skills, academic writing, online reflective practices in teacher training programmes, etc.

In a couple of cases, the selection of experimental subjects seemed plain bizarre. Why, if you want to find out about the extent to which Moodle use can help EAP students become better academic readers (in DIRLL), would you investigate this with a small volunteer cohort of postgraduate students of linguistics, with previous experience of using Moodle and experience of teaching? Is a less representative sample imaginable? Why, if you want to investigate the learning potential of the English File Pronunciation app (reported in LLTJ), which is clearly most appropriate for A1 – B1 levels, would you do this with a group of C1-level undergraduates following a course in phonetics as part of an English Studies programme?

More problematic, in my view, was the small sample size in many of the research projects. The Israeli virtual high school project (DIRLL), previously referred to, started out with only 11 students, but 7 dropped out, primarily, it seems, because of institutional incompetence: ‘the project was probably doomed […] to failure from the start’, according to the author. Interesting as this was as an account of how not to set up a project of this kind, it is simply impossible to draw any conclusions from 4 students about the potential of a VLE for ‘interaction, focus and self-paced learning’. The questionnaire investigating experience of and attitudes towards VR (in DIRLL) was completed by only 7 (out of 36 possible) students and 7 (out of 70+ possible) teachers. As the author acknowledges, ‘no great claims can be made’, but then goes on to note the generally ‘positive attitudes to VR’. Perhaps those who did not volunteer had different attitudes? We will never know. The study of motivational videos in tertiary education (DIRLL) started off with 15 subjects, but 5 did not complete the necessary tasks. The research into L1 use in videoconferencing (LLTJ) started off with 10 experimental subjects, all with the same L1 and similar cultural backgrounds, but there was no data available from 4 of them (because they never switched into L1). The author claims that the paper demonstrates ‘how L1 is used by language learners in videoconferencing as a social semiotic resource to support social presence’ – something which, after reading the literature review, we already knew. But the paper also demonstrates quite clearly how L1 is not used by language learners in videoconferencing as a social semiotic resource to support social presence. In all these cases, it is the participants who did not complete or the potential participants who did not want to take part that have the greatest interest for me.

Unsurprisingly, the LLTJ articles had larger sample sizes than those in DIRLL, but in both collections the length of the research was limited. The production of one motivational video (DIRLL) does not really allow us to draw any conclusions about the development of students’ critical thinking skills. Two four-week interventions do not really seem long enough to me to discover anything about learner autonomy and Moodle (DIRLL). An experiment looking at different feedback modes needs more than two written assignments to reach any conclusions about student preferences (LLTJ).

More research might well be needed to compensate for the short-term projects with small sample sizes, but I’m not convinced that this is always the case. Lacking sufficient information about the content of the technologically-mediated tools being used, I was often unable to reach any conclusions. A gamified Twitter environment was developed in one project (DIRLL), using principles derived from contemporary literature on gamification. The authors concluded that the game design ‘failed to generate interaction among students’, but without knowing a lot more about the specific details of the activity, it is impossible to say whether the problem was the principles or the particular instantiation of those principles. Another project, looking at the development of pronunciation materials for online learning (LLTJ), came to the conclusion that online pronunciation training was helpful – better than none at all. Claims are then made about the value of the method used (called ‘innovative Cued Pronunciation Readings’), but this is not compared to any other method / materials, and only a very small selection of these materials are illustrated. Basically, the reader of this research has no choice but to take things on trust. The study looking at the use of Alexa to help listening comprehension and speaking fluency (LLTJ) cannot really tell us anything about IPAs (intelligent personal assistants) unless we know more about the particular way that Alexa is being used. Here, it seems that the students were using Alexa in an interactive storytelling exercise, but so little information is given about the exercise itself that I didn’t actually learn anything at all. The author’s own conclusion is that the results, such as they are, need to be treated with caution. Nevertheless, he adds ‘the current study illustrates that IPAs may have some value to foreign language learners’.

This brings me onto my final gripe. To be told that IPAs like Alexa may have some value to foreign language learners is to be told something that I already know. This wasn’t the only time this happened during my reading of these collections. I appreciate that research cannot always tell us something new and interesting, but a little more often would be nice. I ‘learnt’ that goal-setting plays an important role in motivation and that gamification can boost short-term motivation. I ‘learnt’ that reflective journals can take a long time for teachers to look at, and that reflective video journals are also very time-consuming. I ‘learnt’ that peer feedback can be very useful. I ‘learnt’ from two papers that intercultural difficulties may be exacerbated by online communication. I ‘learnt’ that text-to-speech software is pretty good these days. I ‘learnt’ that multimodal literacy can, most frequently, be divided up into visual and auditory forms.

With the exception of a piece about online safety issues (DIRLL), I did not once encounter anything which hinted that there may be problems in using technology. No mention of the use to which student data might be put. No mention of the costs involved (except for the observation that many students would not be happy to spend money on the English File Pronunciation app) or the cost-effectiveness of digital ‘solutions’. No consideration of the institutional (or other) pressures (or the reasons behind them) that may be applied to encourage teachers to ‘leverage’ edtech. No suggestion that a zero-tech option might actually be preferable. In both collections, the language used is invariably positive, or, at least, technology is associated with positive things: uncovering the possibilities, promoting autonomy, etc. Even if the focus of these publications is not on technology per se (although I think this claim doesn’t really stand up to close examination), it’s a little disingenuous to claim (as LLTJ does) that the interest is in how language learning and language teaching is ‘affected or enhanced by the use of digital technologies’. The reality is that the overwhelming interest is in potential enhancements, not potential negative effects.

I have deliberately not mentioned any names in referring to the articles I have discussed. I would, though, like to take my hat off to the editors of DIRLL, Sophia Mavridi and Vicky Saumell, for attempting to do something a little different. I think that Alicia Artusi and Graham Stanley’s article (DIRLL) about CPD for ‘remote’ teachers was very good and should interest the huge number of teachers working online. Chryssa Themelis and Julie-Ann Sime have kindled my interest in the potential of comics as a learning resource (DIRLL). Yu-Ju Lan’s article about VR (LLTJ) is surely the most up-to-date, go-to article on this topic. There were other pieces, or parts of pieces, that I liked, too. But, to me, it’s clear that we need ‘more research’ much less than we need (1) better and more critical research, and (2) more digestible summaries of research.

Colloquium

At the beginning of March, I’ll be going to Cambridge to take part in a Digital Learning Colloquium (for more information about the event, see here). One of the questions that will be explored is how research might contribute to the development of digital language learning. In this, the first of two posts on the subject, I’ll be taking a broad overview of the current state of play in edtech research.

I try my best to keep up to date with research. Of the main journals, there are Language Learning & Technology, which is open access; the CALICO Journal, which offers quite a lot of open access material; and ReCALL, which is the most restricted in terms of access of the three. But there is something deeply frustrating about most of this research, and this is what I want to explore in these posts. More often than not, research articles end with a call for more research. And more often than not, I find myself saying ‘Please, no, not more research like this!’

First, though, I would like to turn to a more reader-friendly source of research findings. Systematic reviews are, basically, literature reviews that can save people like me from having to plough through endless papers on similar subjects, all of which contain the same (or similar) literature review in the opening sections. If only there were more of them. Others agree with me: the conclusion of one systematic review of learning and teaching with technology in higher education (Lillejord et al., 2018) was that more systematic reviews were needed.

Last year saw the publication of a systematic review of research on artificial intelligence applications in higher education (Zawacki-Richter, et al., 2019) which caught my eye. The first thing that struck me about this review was that ‘out of 2656 initially identified publications for the period between 2007 and 2018, 146 articles were included for final synthesis’. In other words, only just over 5% of the research was considered worthy of inclusion.

The review did not paint a very pretty picture of the current state of AIEd research. As the second part of the title of this review (‘Where are the educators?’) makes clear, the research, taken as a whole, showed a ‘weak connection to theoretical pedagogical perspectives’. This is not entirely surprising. As Bates (2019) has noted: ‘since AI tends to be developed by computer scientists, they tend to use models of learning based on how computers or computer networks work (since of course it will be a computer that has to operate the AI). As a result, such AI applications tend to adopt a very behaviourist model of learning: present / test / feedback.’ More generally, it is clear that technology adoption (and research) is being driven by technology enthusiasts, with insufficient expertise in education. The danger is that edtech developers ‘will simply ‘discover’ new ways to teach poorly and perpetuate erroneous ideas about teaching and learning’ (Lynch, 2017).

This, then, is the first of my checklist of things that, collectively, researchers need to do to improve the value of their work. The rest of this list is drawn from observations mostly, but not exclusively, from the authors of systematic reviews, and mostly come from reviews of general edtech research. In the next blog post, I’ll look more closely at a recent collection of ELT edtech research (Mavridi & Saumell, 2020) to see how it measures up.

1 Make sure your research is adequately informed by educational research outside the field of edtech

Unproblematised behaviourist assumptions about the nature of learning are all too frequent. References to learning styles are still fairly common. The skill most frequently investigated in the context of edtech is critical thinking (Sosa Neira et al., 2017), but it is rarely defined and almost never problematised, despite a broad literature that questions the construct.

2 Adopt a sceptical attitude from the outset

Know your history. Decades of technological innovation in education have shown precious little in the way of educational gains and, more than anything else, have taught us that we need to be sceptical from the outset. ‘Enthusiasm and praise that are directed towards ‘virtual education’, ‘school 2.0’, ‘e-learning’ and the like’ (Selwyn, 2014: vii) are indications that the lessons of the past have not been sufficiently absorbed (Levy, 2016: 102). The phrase ‘exciting potential’, for example, should be banned from all edtech research. See, for example, a ‘state-of-the-art analysis of chatbots in education’ (Winkler & Söllner, 2018), which has nothing to conclude but ‘exciting potential’. Potential is fine (indeed, it is perhaps the only thing that research can unambiguously demonstrate – see section 3 below), but can we try to be a little more grown-up about things?

3 Know what you are measuring

Measuring learning outcomes is tricky, to say the least, but it’s understandable that researchers should try to focus on them. Unfortunately, ‘the vast array of literature involving learning technology evaluation makes it challenging to acquire an accurate sense of the different aspects of learning that are evaluated, and the possible approaches that can be used to evaluate them’ (Lai & Bower, 2019). Metrics such as student grades are hard to interpret, not least because of the large number of variables and the danger of many things being conflated in one score. Equally, or possibly even more, problematic are self-reporting measures, which are rarely robust. It seems that surveys are the most widely used instrument in qualitative research (Sosa Neira et al., 2017), but these will tell us little or nothing when used for short-term interventions (see point 5 below).

4 Ensure that the sample size is big enough to mean something

In most of the research into digital technology in education that was analysed in a literature review carried out for the Scottish government (ICF Consulting Services Ltd, 2015), there were only ‘small numbers of learners or teachers or schools’.

5 Privilege longitudinal studies over short-term projects

The Scottish government literature review (ICF Consulting Services Ltd, 2015) also noted that ‘most studies that attempt to measure any outcomes focus on short and medium term outcomes’. The fact that the use of a particular technology has some sort of impact over the short or medium term tells us very little of value. Unless there is very good reason to suspect the contrary, we should assume that it is a novelty effect that has been captured (Levy, 2016: 102).

6 Don’t forget the content

The starting point of much edtech research is the technology, but most edtech, whether it’s a flashcard app or a full-blown Moodle course, has content. Research reports rarely give details of this content, assuming perhaps that it’s just fine, and all that’s needed is a little tech to ‘present learners with the ‘right’ content at the ‘right’ time’ (Lynch, 2017). It’s a foolish assumption. Take a random educational app from the Play Store, a random MOOC or whatever, and the chances are you’ll find it’s crap.

7 Avoid anecdotal accounts of technology use in quasi-experiments as the basis of a ‘research article’

Control (i.e. technology-free) groups may not always be possible but, without them, we’re unlikely to learn much from a single study. What would, however, be extremely useful would be a large, collated collection of such action-research projects, using the same or similar technology, in a variety of settings. There is a marked absence of this kind of work.

8 Enough already of higher education contexts

Researchers typically work in universities where they have captive students who they can carry out research on. But we have a problem here. The systematic review of Lundin et al (2018), for example, found that ‘studies on flipped classrooms are dominated by studies in the higher education sector’ (besides lacking anchors in learning theory or instructional design). With some urgency, primary and secondary contexts need to be investigated in more detail, not just regarding flipped learning.

9 Be critical

Very little edtech research considers the downsides of edtech adoption. Online safety, privacy and data security are hardly peripheral issues, especially with younger learners. Ignoring them won’t make them go away.

More research?

So do we need more research? For me, two things stand out. We might benefit more from, firstly, a different kind of research, and, secondly, more syntheses of the work that has already been done. Although I will probably continue to dip into the pot-pourri of articles published in the main CALL journals, I’m looking forward to a change at the CALICO journal. From September of this year, one issue a year will be thematic, with a lead article written by established researchers which will ‘first discuss in broad terms what has been accomplished in the relevant subfield of CALL. It should then outline which questions have been answered to our satisfaction and what evidence there is to support these conclusions. Finally, this article should pose a “soft” research agenda that can guide researchers interested in pursuing empirical work in this area’. This will be followed by two or three empirical pieces that ‘specifically reflect the research agenda, methodologies, and other suggestions laid out in the lead article’.

But I think I’ll still have a soft spot for some of the other journals that are coyer about their impact factor and that can be freely accessed. How else would I discover (it would be too mean to give the references here) that ‘the effective use of new technologies improves learners’ language learning skills’? Presumably, the ineffective use of new technologies has the opposite effect? Or that ‘the application of modern technology represents a significant advance in contemporary English language teaching methods’?

In my last post, I asked why it is so easy to believe that technology (in particular, technological innovations) will offer solutions to whatever problems exist in language learning and teaching. A simple, but inadequate, answer is that huge amounts of money have been invested in persuading us. Without wanting to detract from the significance of this, it is clearly not sufficient as an explanation. In an attempt to develop my own understanding, I have been turning more and more to the idea of ‘social imaginaries’. In many ways, this is also an attempt to draw together the various interests that I have had since starting this blog.

The Canadian philosopher Charles Taylor describes a ‘social imaginary’ as a ‘common understanding that makes possible common practices and a widely shared sense of legitimacy’ (Taylor, 2004: 23). As a social imaginary develops over time, it ‘begins to define the contours of [people’s] worlds and can eventually come to count as the taken-for-granted shape of things, too obvious to mention’ (Taylor, 2004: 29). It is, however, not just a set of ideas or a shared narrative: it is also a set of social practices that enact those understandings, whilst at the same time modifying or solidifying them. The understandings make the practices possible, and it is the practices that largely carry the understanding (Taylor, 2004: 25). In the process, the language we use is filled with new associations and our familiarity with these associations shapes ‘our perceptions and expectations’ (Worster, 1994, quoted in Moore, 2015: 33). A social imaginary, then, is a complex system that is not technological or economic or social or political or educational, but all of these (Urry, 2016). The image of the patterns of an amorphous mass of moving magma (Castoriadis, 1987), flowing through pre-existing channels, but also, at times, striking out along new paths, may offer a helpful metaphor.

[Image: lava flow, Hawaii]

Technology, of course, plays a key role in contemporary social imaginaries and the term ‘sociotechnical imaginary’ is increasingly widely used. The understandings of the sociotechnical imaginary typically express visions of social progress and a desirable future that is made possible by advances in science and technology (Jasanoff & Kim, 2015: 4). In education, technology is presented as capable of overcoming human failings and the dark ways of the past, of facilitating a ‘pedagogical utopia of natural, authentic teaching and learning’ (Friesen, forthcoming). As such understandings become more widespread and as the educational practices (platforms, apps, etc.) which both shape and are shaped by them become equally widespread, technology has come to be seen as a ‘solution’ to the ‘problem’ of education (Friesen, forthcoming). We need to be careful, however, that the technology we have shaped does not come to shape us (see Cobo, 2019, for a further exploration of this idea).

As a way of beginning to try to understand what is going on in EdTech in ELT, which is not so very different from what is taking place in education more generally, I have sketched a number of what I consider key components of the shared understandings and the social practices that are related to them. These are closely interlocking pieces and each of them is itself embedded in much broader understandings. They evolve over time and their history can be traced quite easily. Taken together, they do, I think, help us to understand a little more why technology in ELT seems so seductive.

1 The main purpose of English language teaching is to prepare people for the workplace

There has always been a strong connection between learning an additional living language (such as English) and preparing for the world of work. The first modern language schools, such as the Berlitz schools at the end of the 19th century with their native-speaker teachers and monolingual methods, positioned themselves as primarily vocational, in opposition to the kinds of language teaching taking place in schools and universities, which were more broadly humanistic in their objectives. Throughout the 20th century, and especially as English grew as a global language, the public sector, internationally, grew closer to the methods and objectives of the private schools. The idea that learning English might serve other purposes (e.g. cultural enrichment or personal development) has never entirely gone away, as witnessed by the Council of Europe’s list of objectives (including the promotion of mutual understanding and European co-operation, and the overcoming of prejudice and discrimination) in the Common European Framework, but it is often forgotten.

The clarion calls from industry to better align education with labour markets, present and future, grow louder all the time, often finding expression in claims that ‘education is unfit for purpose.’ It is invariably assumed that this purpose is to train students in the appropriate skills to enhance their ‘human capital’ in an increasingly competitive and global market (Lingard & Gale, 2007). Educational agendas are increasingly set by the world of business: bodies like the OECD or the World Economic Forum, corporations like Google or Microsoft, and national governments which share their priorities (see my earlier post about neo-liberalism and solutionism).

One way in which this shift is reflected in English language teaching is in the growing emphasis that is placed on ‘21st century skills’ in teaching material. Sometimes called ‘life skills’, they are very clearly concerned with the world of work, rather than the rest of our lives. The World Economic Forum’s 2018 Future of Jobs survey lists the soft skills that are considered important in the near future and they include ‘creativity’, ‘critical thinking’, ‘emotional intelligence’ and ‘leadership’. (The fact that the World Economic Forum is made up of a group of huge international corporations (e.g. J.P. Morgan, HSBC, UBS, Johnson & Johnson) with a very dubious track record of embezzlement, fraud, money-laundering and tax evasion has not resulted in much serious, public questioning of the view of education expounded by the WEF.)

Without exception, the ELT publishers have brought these work / life skills into their courses, and the topic is an extremely popular one in ELT blogs and magazines, and at conferences. Two of the four plenaries at this year’s international IATEFL conference are concerned with these skills. Pearson has a wide range of related products, including ‘a four-level competency-based digital course that provides engaging instruction in the essential work and life skills competencies that adult learners need’. Macmillan ELT made ‘life skills’ the central plank of their marketing campaign and approach to product design, and even won a British Council ELTon Award (see below) for ‘Innovation in teacher resources’ in 2015 for their ‘life skills’ marketing campaign. Cambridge University Press has developed a ‘Framework for Life Competencies’ which allows these skills to be assigned numerical values.

The point I am making here is not that these skills do not play an important role in contemporary society, nor that English language learners may not benefit from some training in them. The point, rather, is that the assumption that English language learning is mostly concerned with preparation for the workplace has become so widespread that it becomes difficult to think in another way.

2 Technological innovation is good and necessary

The main reason that soft skills are deemed to be so important is that we live in a rapidly-changing world, where the unsubstantiated claim that 85% (or whatever other figure comes to mind) of current jobs won’t exist 10 years from now is so often repeated that it is taken as fact. Whether or not this is true is perhaps less important to those who make the claim than the present and the future that they like to envisage. The claim is, at least, true-ish enough to resonate widely. Since these jobs will disappear and new ones will emerge because of technological innovations, education, too, will need to innovate to keep up.

English language teaching has not been slow to celebrate innovation. There were coursebooks called ‘Cutting Edge’ (1998) and ‘Innovations’ (2005), but more recently the connections between innovation and technology have become much stronger. The title of the recent ‘Language Hub’ (2019) was presumably chosen, in part, to conjure up images of digital whizzkids in fashionable co-working start-up spaces. Technological innovation is explicitly promoted in the Special Interest Groups of IATEFL and TESOL. Despite a singular lack of research that unequivocally demonstrates a positive connection between technology and language learning, the former’s objective is ‘to raise awareness among ELT professionals of the power of learning technologies to assist with language learning’. There is a popular annual conference, called InnovateELT, which has the tagline ‘Be Part of the Solution’, and the first problem that this may be a solution to is that our students need to be ‘ready to take on challenging new careers’.

Last, but by no means least, there are the annual British Council ELTon awards, with a special prize for digital innovation. Among the British Council’s own recent innovations are a range of digitally-delivered resources to develop work / life skills among teens.

Again, my intention (here) is not to criticise any of the things mentioned in the preceding paragraphs. It is merely to point to a particular structure of feeling and the way that it is enacted and strengthened through material practices like books, social groups, conferences and other events.

3 Technological innovations are best driven by the private sector

The vast majority of people teaching English language around the world work in state-run primary and secondary schools. They are typically not native-speakers of English, they hold national teaching qualifications and they are frequently qualified to teach other subjects in addition to English (often another language). They may or may not self-identify as teachers of ‘ELT’ or ‘EFL’, often seeing themselves more as ‘school teachers’ or ‘language teachers’. People who self-identify as part of the world of ‘ELT’ or ‘TEFL’ are more likely to be native speakers and to work in the private sector (including private or semi-private language schools, universities (which, in English-speaking countries, are often indistinguishable from private sector institutions), publishing companies, and freelancers). They are more likely to hold international (TEFL) qualifications or higher degrees, and they are less likely to be involved in the teaching of other languages.

The relationship between these two groups is well illustrated by the practice of training days, where groups of a few hundred state-school teachers participate in workshops organised by publishing companies and delivered by ELT specialists. In this context, state-school teachers are essentially in a client role when they are in contact with the world of ‘ELT’ – as buyers or potential buyers of educational products, training or technology.

Technological innovation is invariably driven by the private sector. This may be in the development of technologies (platforms, apps and so on), in the promotion of technology (through training days and conference sponsorship, for example), or in training for technology (with consultancy companies like ELTjam or The Consultants-E, which offer a wide range of technologically oriented ‘solutions’).

As in education more generally, it is believed that the private sector can be more agile and more efficient than state-run bodies, which continue to decline in importance in educational policy-setting. When state-run bodies are involved in technological innovation in education, it is normal for them to work in partnership with the private sector.

4 Accountability is crucial

Efficacy is vital. It makes no sense to innovate unless the innovations improve something, but for us to know this, we need a way to measure it. In a previous post, I looked at Pearson’s ‘Asking More: the Path to Efficacy’ by CEO John Fallon (who will be stepping down later this year). Efficacy in education, says Fallon, is ‘making a measurable impact on someone’s life through learning’. ‘Measurable’ is the key word, because, as Fallon claims, ‘it is increasingly possible to determine what works and what doesn’t in education, just as in healthcare.’ We need ‘a relentless focus’ on ‘the learning outcomes we deliver’ because it is these outcomes that can be measured in ‘a systematic, evidence-based fashion’. Measurement, of course, is all the easier when education is delivered online, ‘real-time learner data’ can be captured, and the power of analytics can be deployed.

Data is evidence, and it’s as easy to agree on the importance of evidence as it is hard to decide on (1) what it is evidence of, and (2) what kind of data is most valuable. While those questions remain largely unanswered, the data-capturing imperative invades more and more domains of the educational world.

English language teaching is becoming data-obsessed. From language scales, like Pearson’s Global Scale of English, to scales of teacher competences, and from numerically-oriented formative assessment practices (such as those used on many LMSs) to the reporting of effect sizes in meta-analyses (such as those used by John Hattie and colleagues), datafication in ELT accelerates non-stop.

The scales and frameworks are all problematic in a number of ways (see, for example, this post on ‘The Mismeasure of Language’) but they have undeniably shaped the way that we are able to think. Of course, we need measurable outcomes! If, for the present, there are privacy and security issues, it is to be hoped that technology will find solutions to them, too.

REFERENCES

Castoriadis, C. (1987). The Imaginary Institution of Society. Cambridge: Polity Press.

Cobo, C. (2019). I Accept the Terms and Conditions. Montevideo: International Development Research Centre / Center for Research Ceibal Foundation. https://adaptivelearninginelt.files.wordpress.com/2020/01/41acf-cd84b5_7a6e74f4592c460b8f34d1f69f2d5068.pdf

Friesen, N. (forthcoming). The technological imaginary in education, or: Myth and enlightenment in ‘Personalized Learning’. In M. Stocchetti (Ed.) The Digital Age and its Discontents. University of Helsinki Press. Available at https://www.academia.edu/37960891/The_Technological_Imaginary_in_Education_or_Myth_and_Enlightenment_in_Personalized_Learning_

Jasanoff, S. & Kim, S.-H. (2015). Dreamscapes of Modernity. Chicago: University of Chicago Press.

Lingard, B. & Gale, T. (2007). The emergent structure of feeling: what does it mean for critical educational studies and research? Critical Studies in Education, 48(1), pp. 1-23

Moore, J. W. (2015). Capitalism in the Web of Life. London: Verso.

Robbins, K. & Webster, F. (1989). The Technical Fix. Basingstoke: Macmillan Education.

Taylor, C. (2004). Modern Social Imaginaries. Durham, NC: Duke University Press.

Urry, J. (2016). What is the Future? Cambridge: Polity Press.