
Online teaching is big business. Very big business. Online language teaching is a significant part of it, expected to be worth over $5 billion by 2025. Within this market, the biggest demand is for English and the lion’s share of the demand comes from individual learners. And a sizable number of them are Chinese kids.

There are a number of service providers, and the competition between them is fierce. To give you an idea of the scale of this business, here are a few details taken from a report in USA Today. VIPKid is valued at over $3 billion, attracts celebrity investors, and has around 70,000 tutors who live in the US and Canada. 51Talk has 14,800 English teachers from a variety of English-speaking countries. BlingABC gets over 1,000 American applicants a month for its online tutoring jobs. There are many, many others.

Demand for English teachers in China is huge. The Pie News, citing a Chinese state media announcement, reported in September of last year that there were approximately 400,000 foreign citizens working in China as English language teachers, two-thirds of whom were working illegally. Recruitment problems, exacerbated by quotas and more stringent official requirements for qualifications, along with a very restricted desired teacher profile (white native speakers from a small number of countries, such as the US and the UK), have led more providers to look towards online solutions. Eric Yang, founder of the Shanghai-based iTutorGroup, which operates under a number of different brands and claims to be the ‘largest English-language learning institution in the world’, said that he had been expecting online tutoring to surpass F2F classes within a few years. With coronavirus, he now thinks it will come ‘much earlier’.

Typically, the work does not require much, if anything, in the way of training (besides familiarity with the platform), although a 40-hour TEFL course is usually preferred. Teachers deliver pre-packaged lessons. According to the USA Today report, Chinese students pay between $49 and $80 an hour for the classes.

It’s a highly profitable business and the biggest cost to the platform providers is the rates they pay the tutors. If you google “Teaching TEFL jobs online”, you’ll quickly find claims that teachers can earn $40 / hour and up. Such claims are invariably found on the sites of recruitment agencies, which are competing for attention. However, although it’s possible that a small number of people might make this kind of money, the reality is that most will get nowhere near it. Scroll down the pages a little and you’ll discover that a more generally quoted and accepted figure is between $14 and $20 / hour. These tutors are, of course, freelancers, so the wages are before tax, and there is no health coverage or pension plan.

VIPKid, for example, considered to be one of the better companies, offers payment in the $14 – $22 / hour range. Others offer considerably less, especially if you are not a white US citizen with a degree. Current rates advertised on OETJobs include work for Ziktalk ($10 – 15 / hour), NiceTalk ($10 – 11 / hour), 247MyTutor ($5 – 8 / hour) and Weblio ($5 – 6 / hour). The number of hours that you get is rarely fixed and tutors need to build up a client base by getting good reviews. They will often need to upload short introductory videos, selling their skills. They are in direct competition with other tutors.

They also need to make themselves available when demand for their services is highest. Peak hours for VIPKid, for example, are between 2 and 8 in the morning, depending on where you live in the US. Weekends, too, are popular. With VIPKid, classes are scheduled in advance, but this is not always the case with other companies, where you log on to show that you are available and hope someone wants you. This is the case with, for example, Cambly (which pays $10.20 / hour … or rather $0.17 / minute) and NiceTalk. According to one review, Cambly has a ‘priority hours system [which] allows teachers who book their teaching slots in advance to feature higher on the teacher list than those who have just logged in, meaning that they will receive more calls’. Teachers have to commit to a set schedule and any changes are heavily penalised. The review states that ‘new tutors on the platform should expect to receive calls for about 50% of the time they’re logged on’.
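
To make the arithmetic of the per-minute model concrete, here is a back-of-the-envelope sketch in Python (the post itself contains no code, so this is purely illustrative). The per-minute figure is Cambly’s advertised rate quoted above; the assumptions that only minutes spent in calls are paid, and that a new tutor is in calls for about half of their logged-on time, follow the review just quoted.

    # Back-of-the-envelope arithmetic for a per-minute platform like Cambly.
    # The 50% call share for new tutors is an assumption based on the review quoted above.
    per_minute = 0.17                  # dollars paid per minute actually spent in a call
    headline_hourly = per_minute * 60  # the advertised $10.20 per hour

    call_share = 0.5                   # new tutors reportedly receive calls ~50% of logged-on time
    effective_hourly = headline_hourly * call_share

    print(f"Headline rate:  ${headline_hourly:.2f} / hour")
    print(f"Effective rate: ${effective_hourly:.2f} / hour of logged-on time")

At that call rate, in other words, an hour spent logged on and waiting is worth closer to $5 than to the advertised $10.20.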


Taking the gig economy to its logical conclusion, there are other companies where tutors can fix their own rates. SkimaTalk, for example, offers a deal where tutors first teach three unpaid lessons (‘to understand how the system works and build up their initial reputation on the platform’); the system then sets $16 / hour as a default rate, but tutors can change this to anything they wish. With another, Palfish, tutors set their own rate: the typical figure is $10 – 18 / hour, and the company takes a 20% commission. With Preply, here is the deal on offer:

Your earnings depend on the hourly rate you set in your profile and how often you can provide lessons. Preply takes a 100% commission fee of your first lesson payment with every new student. For all subsequent lessons, the commission varies from 33 to 18% and depends on the number of completed lesson hours with students. The more tutoring you do through Preply, the less commission you pay.
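
As a rough illustration of what that commission structure means for take-home pay, here is a short Python sketch. The $20 hourly rate is an invented figure for illustration, not a Preply number; the commission percentages are the ones quoted above.

    # What the Preply commission structure quoted above does to take-home pay,
    # assuming an illustrative self-set rate of $20 / hour.
    rate = 20.0
    first_lesson  = rate * (1 - 1.00)  # 100% commission on the first lesson with a new student
    early_lessons = rate * (1 - 0.33)  # 33% commission while completed hours are still low
    later_lessons = rate * (1 - 0.18)  # 18% commission once many hours have been completed
    print(f"${first_lesson:.2f}, ${early_lessons:.2f}, ${later_lessons:.2f}")  # $0.00, $13.40, $16.40

Even at the most favourable tier, nearly a fifth of what the learner pays never reaches the tutor.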

Not one to miss a trick, Ziktalk (‘currently focusing on language learning and building global audience’) encourages teachers ‘to upload educational videos in order to attract more students’. Or, to put it another way, teachers provide free content in order to have more chance of earning $10 – 15 / hour. Ah, the joys of digital labour!

And, then, coronavirus came along. With schools shutting down, first in China and then elsewhere, tens of millions of students are migrating online. In Hong Kong, for example, the South China Morning Post reports that schools will remain closed until April 20, at the earliest, but university entrance exams will be going ahead as planned in late March. CNBC reported yesterday that classes are being cancelled across the US, and the same is happening, or is likely to happen, in many other countries.

Shares in the big online providers soared in February, with Forbes reporting that $3.2 billion had been added to the share value of China’s e-Learning leaders. Stock in New Oriental (owners of BlingABC, mentioned above) ‘rose 7.3% last month, adding $190 million to the wealth of its founder Yu Minhong [whose] current net worth is estimated at $3.4 billion’.

DingTalk, a communication and management app owned by Alibaba (and the most downloaded free app in China’s iOS App Store), has been adapted to offer online services for schools, reports Xinhua, the official state-run Chinese news agency. The scale of operations is enormous: more than 10,000 new cloud servers were deployed within just two hours.

Current impacts are likely to be dwarfed by what happens in the future. According to Terry Weng, a Shenzhen-based analyst, ‘The gradual exit of smaller education firms means there are more opportunities for TAL and New Oriental. […] Investors are more keen for their future performance.’ Zhu Hong, CTO of DingTalk, observes ‘the epidemic is like a catalyst for many enterprises and schools to adopt digital technology platforms and products’.

For edtech investors, things look rosy. Smaller, F2F providers are in danger of going under. In an attempt to mop up this market and gain overall market share, many e-learning providers are offering hefty discounts and free services. Profits can come later.

For the hundreds of thousands of illegal or semi-legal English language teachers in China, things look doubly bleak. Their situation is likely to become even more precarious, with the online gig economy their obvious fall-back path. But English language teachers everywhere are likely to be affected one way or another, as will the whole world of TEFL.

Now seems like a pretty good time to find out more about precarity (see the Teachers as Workers website) and native-speakerism (see TEFL Equity Advocates).

Unconditional calls for language teachers to incorporate digital technology into their teaching are common. The reasons that are given are many and typically include the claims that (1) our students are ‘digital natives’ and expect technology to be integrated into their learning, and (2) digital technology is ubiquitous and has so many affordances for learning. Writing on the topic is almost invariably enthusiastic and the general conclusion is that the integration of technology is necessary and essential. Here’s a fairly typical example: digital technology is ‘an essential multisensory extension to the textbook’ (Torben Schmidt and Thomas Strasser in Surkamp & Viebrock, 2018: 221).


Teachers who are reluctant or fail to embrace technology are often ‘characterised as technophobic, or too traditional in their teaching style, or reluctant to adopt change’ (Watson, 2001: 253). (It’s those pesky teachers again.)

Claims for the importance of digital technology are often backed up by vague references to research. Michael Carrier, for example, in his introductory chapter to ‘Digital Language Learning and Teaching’ (Carrier et al. 2017: 3) writes that ‘research results […] seem to show conclusively that the use of educational technology adds certain degrees of richness to the learning and teaching process […] at the very least, digital learning seems to provide enhanced motivation for learners’.

Unfortunately, this is simply not true. Neither in language learning / teaching, nor in education more generally, is there any clear evidence of the necessary benefits of introducing educational technology. In the broader context, the ‘PISA analysis of the impact of Information Communication Technology (ICT) on reading, mathematics, and science (OECD, 2015: 3) in countries heavily invested in educational technology showed mixed effects and “no appreciable improvements”’ (Herodotou et al., 2019). Educational technology can or might ‘add certain degrees of richness’ or ‘provide enhanced motivation’, but that is not the same as saying that it does or will. The shift from can to will, a piece of modal legerdemain used to advocate for educational technology, is neatly illustrated in a quote from MIT’s Office of Digital Learning, whose remit is to improve learning and teaching across the university via digital learning: ‘Digital Learning technologies can enable students to grasp concepts more quickly [etc….] Digital technologies will enable this in new and better ways and create possibilities beyond the limits of our current imagination’ (quoted by Carrier, 2017: 1).

Before moving on, here’s another example. The introduction to Li Li’s ‘New Technologies and Language Learning’ (Li, 2017: x) states, with a cautious can, that one of the objectives of the book is ‘to provide examples of how technologies can be used in assisting language education’. In the next paragraph, however, caution is thrown to the wind and we are told, unequivocally, that ‘technology is beneficial for language learning’.

Pedagogy before technology

Examples of gratuitous technology use are not hard to find. Mark Warschauer (who, as the founding director of the Digital Learning Lab at the University of California, Irvine, could be fairly described as an edtech enthusiast) describes one example: ‘I remember observing a beginners’ French class a number of years ago, the teacher bragged about how engaged the learners were in creating multimedia in French. However, the students were spending most of their time and energy talking with each other in English about how to make PowerPoints, when, as beginning learners, they really needed to be spending time hearing as much French as possible’ (quoted in the Guardian, May 2014).

As a result, no doubt, of having similar experiences, it seems that many people are becoming a little more circumspect in their enthusiasm for edtech. In the same Guardian article as Warschauer’s recollections, Russell Stannard says the trick is to put the pedagogy first, not the technology: ‘You’ve got to know why you’re using it. Teachers do need to learn to use new technology, but the driving force should always be the pedagogy behind it.’ Nicky Hockly, Gavin Dudeney and Mark Pegrum (Hockly et al., 2013: 45) concur: ‘Content and pedagogy come before technology. We must decide on our content and pedagogical aims before determining whether our students should use pens or keyboards, write essays or blogs, or design posters or videos’. And Graham Stanley (2013: 1) in the introduction to his ‘Language Learning With Technology’ states that his ‘book makes a point of putting pedagogy at the forefront of the lesson, which is why content has been organised around specific learning content goals rather than specific technologies’.

But Axel Krommer, of the Friedrich-Alexander University of Erlangen-Nürnberg, has argued that the principle of ‘pedagogy before technology’ is ‘trivial at best’. In a piece for the Goethe Institute he writes ‘a theory with which everyone agrees and whose opposite no-one believes true is meaningless’, although he adds that it may be useful as ‘an admonitory wake-up call when educational institutions risk being blinded by technological possibilities that cause them to neglect pedagogical principles that should really be taken for granted’. It was this piece that set me thinking more about ‘pedagogy before technology’.

Pedagogy before technology (on condition that there is technology)

Another person to lament the placing of technology before pedagogy is Nik Peachey. In an opinion piece for the Guardian, entitled ‘Technology can sometimes be wasted on English language teaching’, he complains about how teachers are left to sort out how to use technology ‘in a pedagogically effective way, often with very little training or support’. He appears to take it as given that technology is a positive force, and argues that it shouldn’t be wasted. The issue, he says, is that better teacher training is needed to improve teachers’ ‘digital literacies’ and to ensure that the technology’s potential is fulfilled.

His position, therefore, cannot really be said to be one of ‘pedagogy before technology’. Like the other writers mentioned above, he comes to the pedagogy through and after an interest in the technology. The educational use of digital technology per se is never seriously questioned. The same holds true for almost the entirety of the world of CALL research.


A Canadian conference ‘Pedagogy b4 Technology’ illustrates my point beautifully.

There are occasional exceptions. A recent example which I found interesting was an article by Herodotou et al (2019), in which the authors take as their starting point a set of OECD educational goals (quality of life, including health, civic engagement, social connections, education, security, life satisfaction and the environment), and then investigate the extent to which a variety of learning approaches (formative analytics, teachback, place-based learning, learning with robots, learning with drones, citizen inquiry) – not all of which involve technology – might contribute to the realisation of these goals.

Technology before pedagogy as policy

Some of the high school English teachers I work with have to use tablets in one lesson a week. Some welcome it, some accept it (they can catch up with other duties while the kids are busy with exercises on the tablet), others just roll their eyes at the mention of this policy. In the same school system, English language learning materials can only be bought if they come in digital versions (even if it is the paper versions that are actually used). The digital versions are mostly used for projecting pages onto the IWBs. Meanwhile, budgets and the time available for in-service training have been cut.

Elsewhere, a chain of universities decides that a certain proportion of all courses must be taught online. English language courses, being less prestigious than major subjects, are among the first to be migrated to platforms. The staff, few of whom have tenure or time to spare, cope as best they can, with some support from a department head. Training is provided in the mechanics of operating the platform, and, hopefully before too long, more training will become available to optimize the use of the platform for pedagogical purposes. An adequate budget has yet to be agreed.

The reasons why so many educational authorities introduce such policies are, at best, only superficially related to pedagogy. There is a belief, widely held, that technology cannot fail to make things better. In the words of Tony Blair: ‘Technology has revolutionised the way we work and is now set to transform education. Children cannot be effective in tomorrow’s world if they are trained in yesterday’s skills’. But there is also the potential of education technology to scale education up (i.e. increase student numbers), to reduce long-term costs, to facilitate accountability, to increase productivity, to restrict the power of teachers (and their unions), and so on.

In such circumstances, which are not uncommon, it seems to me that there are more pressing things to worry about than whether teachers are thinking hard enough about the pedagogical uses to which they put the technology they are obliged to use. Working conditions, pay and hours are all affected by the digitalisation of education. These things do get talked about (see, for example, Walsh, 2019), but only rarely.

Technology as pedagogy

Blended learning, described by Pete Sharma in 2010 as a ‘buzz word’ in ELT, remains a popular pedagogical approach. In a recent article (2019), he enthuses about the possibilities of blended learning, suggesting that teachers should use it all the time: ‘teaching in this new digital age should use the technologies which students meet in their everyday lives, such as the Internet, laptop, smartphone and tablet’. It’s also, he claims, time-efficient, but other pedagogical justifications are scant: ‘some language areas are really suited to be studied outside the classroom. Extensive reading and practising difficult phonemes, for instance’.

Blended learning and digital technology are inseparable. Hockly (2018) explains the spread of blended learning in ELT as being driven primarily by ‘the twin drivers of economics (i.e. lower costs) and increasingly accessible and affordable hardware and software’. It might be nice to believe that ‘it is pedagogy, rather than technology, that should underpin the design of blended learning programmes’ (McCarthy, 2016, back cover), but the technology is the pedagogy here. Precisely how it is used is almost inevitably an afterthought.

Which pedagogy, anyway?

We can talk about putting pedagogy before technology, but this raises the question of which particular pedagogy we want to put in the driving seat. Presumably not all pedagogies are of equal value.

One of the most common uses of digital technology that has been designed specifically for language learning is the IWB- or platform-delivered coursebook and its accompanying digital workbook. We know that a majority of teachers using online coursebook packages direct their students more readily to tasks with clear right / wrong answers (e.g. drag-and-drop or gap-fill grammar exercises) than they do to the forum facilities where communicative language use is possible. Here, technology is merely replicating and, perhaps (because of its ease of use), encouraging established pedagogical practices. The pedagogy precedes the technology, but it’s probably not the best pedagogy in the world. Nor does it make best use of the technology’s potential. Would the affordances of the technology make a better starting point for course design?

Graham Stanley’s book (2013) offers suggestions for using technology for a variety of purposes, ranging from deliberate practice of grammar and vocabulary to ways of facilitating opportunities for skills practice. It’s an eclectic mix, similar to the range of activities on offer in the average coursebook for adults or teenagers. It is pedagogy-neutral in the sense that it does not set out a set of principles of language learning or teaching and then derive from them a set of practices for using the technology. It is a recipe book for using technological tools and, like all recipe books, prioritises activities over principles. I like the book and I don’t intend these comments as criticism. My point is simply that it’s not easy to take pedagogical principles as a starting point. Does the world of ELT even have generally agreed pedagogical principles?

And what is it that we’re teaching?

One final thought … If we consider how learners are likely to be using the English they are learning in their real-world futures, technology will not be far away: reading online, listening to / watching online material, writing and speaking with messaging apps, writing with text, email or Google Docs … If, in designing pedagogical approaches, we wish to include features of authentic language use, it’s hard to see how we can avoid placing technology fairly near the centre of the stage. Technologically-mediated language use is inseparable from pedagogy: one does not precede the other.

Similarly, if we believe that it is part of the English teacher’s job to develop the digital literacy (e.g. Hockly et al., 2013), visual literacy (e.g. Donaghy, 2015) or multimodal literacy of their students – not, incidentally, a belief that I share – then, again, technology cannot be separated from pedagogy.

Pedagogy before technology, OK??

So, I ask myself what precisely it is that people mean when they say that pedagogy should come before technology. The locutionary force, or referential meaning, usually remains unclear: in the absence of a particular pedagogy and particular contexts, what exactly is being said? The illocutionary force, likewise, is difficult to understand in the absence of a particular addressee: is the message only intended for teachers suffering from Everest Syndrome? And the perlocutionary force is equally intriguing: how are people who make the statement positioning themselves, and in relation to which addressee? Along the lines of green-washing and woke-washing, are we sometimes seeing cases of pedagogy-washing?

REFERENCES

Carrier, M., Damerow, R. M. & Bailey, K. M. (2017) Digital Language Learning and Teaching: Research, theory, and practice. New York: Routledge

Donaghy, K. (2015) Film in Action. Peaslake, Surrey: DELTA Publishing

Herodotou, C., Sharples, M., Gaved, M., Kukulska-Hulme, A., Rienties, B., Scanlon, E. & Whitelock, D. (2019) Innovative Pedagogies of the Future: An Evidence-Based Selection. Frontiers in Education, 4 (113)

Hockly, N. (2018) Blended Learning. ELT Journal 72 (1): pp. 97 – 101

Hockly, N., Dudeney, G. & Pegrum, M. (2013) Digital Literacies. Harlow: Pearson

Li, L. (2017) New Technologies and Language Learning. London: Palgrave

McCarthy, M. (Ed.) (2016) The Cambridge Guide to Blended Learning for Language Teaching. Cambridge: Cambridge University Press

OECD (2015) Students, Computers and Learning: Making the Connection, PISA. Paris: OECD Publishing

Sharma, P. (2010) Blended Learning. ELT Journal, 64 (4): pp. 456 – 458

Sharma, P. (2019) The Complete Guide to Running a Blended Learning Course. Oxford University Press English Language Teaching Global Blog 17 October 2019. Available at: https://oupeltglobalblog.com/2019/10/17/complete-guidagogyde-blended-learning/

Stanley, G. (2013) Language Learning with Technology. Cambridge: Cambridge University Press

Surkamp, C. & Viebrock, B. (Eds.) (2018) Teaching English as a Foreign Language: An Introduction. Stuttgart: J. B. Metzler

Walsh, P. (2019) Precarity. ELT Journal, 73 (4): pp. 459–462

Watson, D. M. (2001) Pedagogy before Technology: Re-thinking the Relationship between ICT and Teaching. Education and Information Technologies 6 (4): pp. 251 – 266

At the start of the last decade, ELT publishers were worried, Macmillan among them. The financial crash of 2008 led to serious difficulties, not least in their key Spanish market. In 2011, Macmillan’s parent company was fined £11.3 million for corruption. Under new ownership, restructuring was a constant. At the same time, Macmillan ELT was getting ready to move from its Oxford headquarters to new premises in London, a move which would inevitably lead to the loss of a sizable proportion of its staff. On top of that, Macmillan, like the other ELT publishers, was aware that changes in the digital landscape (the first 3G iPhone had appeared in June 2008 and wifi access was spreading rapidly around the world) meant that they needed to shift away from the old print-based model. With her finger on the pulse, Caroline Moore wrote an article in October 2010 entitled ‘No Future? The English Language Teaching Coursebook in the Digital Age’. The publication (at the start of the decade) and runaway success of the online ‘Touchstone’ course, from arch-rivals Cambridge University Press, meant that Macmillan needed to change fast if they were to avoid being left behind.

Macmillan already had a platform, Campus, but it was generally recognised as being clunky and outdated, and something new was needed. In the summer of 2012, Macmillan brought in two new executives – people who could talk the ‘creative-disruption’ talk and who believed in the power of big data to shake up English language teaching and publishing. At the time, the idea of big data was beginning to reach public consciousness: ‘Big Data: A Revolution that Will Transform how We Live, Work, and Think’ by Viktor Mayer-Schönberger and Kenneth Cukier was a major bestseller in 2013 and 2014. ‘Big data’ was the ‘hottest trend’ in technology and peaked in Google Trends in October 2014. See the graph below.

[Graph: Google Trends for ‘big data’]

Not long after taking up their positions, the two executives began negotiations with Knewton, an American adaptive learning company. Knewton’s technology promised to gather colossal amounts of data on students using Knewton-enabled platforms. Its founder, Jose Ferreira, bragged that Knewton had ‘more data about our students than any company has about anybody else about anything […] We literally know everything about what you know and how you learn best, everything’. This data would, it was claimed, enable publishers to multiply, by orders of magnitude, the efficacy of their learning materials, allowing a publisher like Macmillan to provide a truly personalized and optimal offering to learners using its platform.

The contract between Macmillan and Knewton was agreed in May 2013 ‘to build next-generation English Language Learning and Teaching materials’. Perhaps fearful of being left behind in what was seen to be a winner-takes-all market (Pearson already had a financial stake in Knewton), Cambridge University Press duly followed suit, signing a contract with Knewton in September of the same year, in order ‘to create personalized learning experiences in [their] industry-leading ELT digital products’. Things moved fast because, by the start of 2014 when Macmillan’s new catalogue appeared, customers were told to ‘watch out for the Big Tree’, Macmillan’s new platform, which would be powered by Knewton. ‘The power that will come from this world of adaptive learning takes my breath away’, wrote the international marketing director.

Not a lot happened next, at least outwardly. In the following year, 2015, the Macmillan catalogue again told customers to ‘look out for the Big Tree’ which would offer ‘flexible blended learning models’ which could ‘give teachers much more freedom to choose what they want to do in the class and what they want the students to do online outside of the classroom’.

[Image: Macmillan catalogue, 2015]

But behind the scenes, everything was going wrong. It had become clear that a linear model of language learning, which was a necessary prerequisite of the Knewton system, simply did not lend itself to anything which would be vaguely marketable in established markets. Skills development, not least the development of so-called 21st century skills, which Macmillan was pushing at the time, would not be facilitated by collecting huge amounts of data and algorithms offering personalized pathways. Even if it could, teachers weren’t ready for it, and the projections for platform adoptions were beginning to seem very over-optimistic. Costs were spiralling. Pushed to meet unrealistic deadlines for a product that was totally ill-conceived in the first place, in-house staff were suffering, and this was made worse by what many staffers thought was a toxic work environment. By the end of 2014 (so, before the copy for the 2015 catalogue had been written), the two executives had gone.

For some time previously, skeptics had been joking that Macmillan had been barking up the wrong tree, and by the time that the 2016 catalogue came out, the ‘Big Tree’ had disappeared without trace. The problem was that so much time and money had been thrown at this particular tree that not enough had been left to develop new course materials (for adults). The whole thing had been a huge cock-up of an extraordinary kind.

Cambridge, too, lost interest in their Knewton connection, but were fortunate (or wise) not to have invested so much energy in it. Language learning was only ever a small part of Knewton’s portfolio, and the company had raised over $180 million in venture capital. Its founder, Jose Ferreira, had been a master of marketing hype, but the business model was not delivering any better than the educational side of things. Pearson pulled out. In December 2016, Ferreira stepped down and was replaced as CEO. The company shifted to ‘selling digital courseware directly to higher-ed institutions and students’ but this could not stop the decline. In September of 2019, Knewton was sold for something under $17 million, with investors taking a hit of over $160 million. My heart bleeds.

It was clear, from very early on (see, for example, my posts from 2014 here and here) that Knewton’s product was little more than what Michael Feldstein called ‘snake oil’. Why and how could so many people fall for it for so long? Why and how will so many people fall for it again in the coming decade, although this time it won’t be ‘big data’ that does the seduction, but AI (which kind of boils down to the same thing)? The former Macmillan executives are still in the game, albeit in new companies and talking a slightly modified talk, and Jose Ferreira (whose new venture has already raised $3.7 million) is promising to revolutionize education with a new start-up which ‘will harness the power of technology to improve both access and quality of education’ (thanks to Audrey Watters for the tip). Investors may be desperate to find places to spread their portfolio, but why do the rest of us lap up the hype? It’s a question to which I will return.


When the startup, AltSchool, was founded in 2013 by Max Ventilla, the former head of personalization at Google, it quickly drew the attention of venture capitalists and within a few years had raised $174 million from the likes of the Zuckerberg Foundation, Peter Thiel, Laurene Powell Jobs and Pierre Omidyar. It garnered gushing articles in a fawning edtech press which enthused about ‘how successful students can be when they learn in small, personalized communities that champion project-based learning, guided by educators who get a say in the technology they use’. It promised ‘a personalized learning approach that would far surpass the standardized education most kids receive’.

Ventilla was an impressive money-raiser who used, and appeared to believe, every cliché in the edTech sales manual. Dressed in regulation jeans, polo shirt and fleece, he claimed that schools in America were ‘stuck in an industrial-age model, [which] has been in steady decline for the last century’. What he offered, instead, was a learner-centred, project-based curriculum providing real-world lessons. There was a focus on social-emotional learning activities, and critical thinking was vital.

The key to the approach was technology. From the start, software developers, engineers and researchers worked alongside teachers every day, ‘constantly tweaking the Personalized Learning Plan, which shows students their assignments for each day and helps teachers keep track of and assess student’s learning’. There were tablets for pre-schoolers, laptops for older kids and wall-mounted cameras to record the lessons. There were, of course, Khan Academy videos. Ventilla explained that “we start with a representation of each child”, and even though “the vast majority of the learning should happen non-digitally”, the child’s habits and preferences get converted into data, “a digital representation of the important things that relate to that child’s learning, not just their academic learning but also their non-academic learning. Everything logistic that goes into setting up the experience for them, whether it’s who has permission to pick them up or their allergy information. You name it.” And just like Netflix matches us to TV shows, “If you have that accurate and actionable representation for each child, now you can start to personalize the whole experience for that child. You can create that kind of loop you described where because we can represent a child well, we can match them to the right experiences.”

AltSchool seemed to offer the possibility of doing something noble, of transforming education, ‘bringing it into the digital age’, and, at the same time, a healthy return on investors’ money. Expanding rapidly, AltSchool opened nine microschools in New York and the Bay Area, and plans were afoot for further expansion in Chicago. But, by then, it was already clear that something was going wrong. Five of the schools were closed before they had really got started and the attrition rate in some classrooms had reached about 30%. Revenue in 2018 was only $7 million and there were few buyers for the AltSchool platform. Quoting once more from the edTech bible, Ventilla explained the situation: ‘Our whole strategy is to spend more than we make.’ Since software is expensive to develop and cheap to distribute, the losses, he believed, would turn into steep profits once AltSchool refined its product and landed enough customers.

The problems were many and apparent. Some of the buildings were simply not appropriate for schools, with no playgrounds or gyms and malfunctioning toilets, among other issues. Parents were becoming unhappy and accused AltSchool of putting ‘its ambitions as a tech company above its responsibility to teach their children. […] We kind of came to the conclusion that, really, AltSchool as a school was kind of a front for what Max really wants to do, which is develop software that he’s selling,’ a parent of a former AltSchool student told Business Insider. ‘We had really mediocre educators using technology as a crutch,’ said one father who transferred his child to a different private school after two years at AltSchool. ‘[…] We learned that it’s almost impossible to really customize the learning experience for each kid.’ Some parents began to wonder whether AltSchool had enticed families into its program merely to extract data from their children, then toss them aside.

With the benefit of hindsight, it would seem that the accusations were hardly unfair. In June of this year, AltSchool announced that its four remaining schools would be operated by a new partner, Higher Ground Education (a well-funded startup founded in 2016 which promotes and ‘modernises’ Montessori education). Meanwhile, AltSchool has been rebranded as Altitude Learning, focusing its ‘resources on the development and expansion of its personalized learning platform’ for licensing to other schools across the country.

Quoting once more from the edTech sales manual, Ventilla has said that education should drive the tech, not the other way round. Not so many years earlier, before starting AltSchool, Ventilla also said that he had read two dozen books on education and emerged a fan of Sir Ken Robinson. He had no experience as a teacher or as an educational administrator. Instead, he had ‘extensive knowledge of networks, and he understood the kinds of insights that can be gleaned from big data’.

More and more language learning is taking place, fully or partially, on online platforms and the affordances of these platforms for communicative interaction are exciting. Unfortunately, most platform-based language learning experiences are a relentless diet of drag-and-drop, drag-till-you-drop grammar or vocabulary gap-filling. The chat rooms and discussion forums that the platforms incorporate are underused or ignored. Lindsay Clandfield and Jill Hadfield’s new book is intended to promote online interaction between and among learners and the instructor, rather than between learners and software.

Interaction Online is a recipe book, containing about 80 different activities (many more if you consider the suggested variations). Subtitled ‘Creative activities for blended learning’, the book offers activities that have been selected and designed so that any teacher using any degree of blend (from platform-based instruction to occasional online homework) will be able to use them. The activities do not depend on any particular piece of software, as they are all designed for basic tools like Facebook, Skype and chat rooms. Indeed, almost every single activity could be used, sometimes with some slight modification, by teachers in face-to-face settings.

A recipe book must be judged on the quality of the activities it contains, and the standard here is high. They range from relatively simple, short activities to much longer tasks which will need an hour or more to complete. An example of the former is a sentence-completion activity (‘Don’t you hate / love it when ….?’ – activity 2.5). As an example of the latter, there is a complex problem-solving information-gap where students have to work out the solution to a mystery (activity 6.13), an activity which reminds me of some of the material in Jill Hadfield’s much-loved Communication Games books.

In common with many recipe books, Interaction Online is not an easy book to use, in the sense that it is hard to navigate. The authors have divided up the tasks into five kinds of interaction (personal, factual, creative, critical and fanciful), but it is not always clear precisely why one activity has been assigned to one category rather than another. In any case, the kind of interaction is likely to be less important to many teachers than the kind and amount of language that will be generated (among other considerations), and the table of contents is less than helpful. The index at the back of the book helps to some extent, but a clearer tabulation of activities by interaction type, level, time required, topic and language focus (if any) would be very welcome. Teachers will need to devise their own system of referencing so that they can easily find activities they want to try out.

Again, like many recipe books, Interaction Online is a mix of generic task-types and activities that will only work with the supporting materials that are provided. Teachers will enjoy the latter, but will want to experiment with the former and it is these generic task-types that they are most likely to add to their repertoire. In activity 2.7 (‘Foodies’ – personal interaction), for example, students post pictures of items of food and drink, to which other students must respond with questions. The procedure is clear and effective, but, as the authors note, the pictures could be of practically anything. ‘From pictures to questions’ might be a better title for the activity than ‘Foodies’. Similarly, activity 3.4 (‘Find a festival’ – factual interaction) uses a topic (‘festivals’), rather than a picture, to generate questions and responses. The procedure is slightly different from activity 2.7, but the interactional procedures of the two activities could be swapped around as easily as the topics could be changed.

Perhaps the greatest strength of this book is the variety of interactional procedures that is suggested. The majority of activities contain (1) suggestions for a stimulus, (2) suggestions for managing initial responses to this stimulus, and (3) suggestions for further interaction. As readers work their way through the book, they will be struck by similarities between the activities. The final chapter (chapter 8: ‘Task design’) provides an excellent summary of the possibilities of communicative online interaction, and more experienced teachers may want to read this chapter first.

Chapter 7 provides a useful, but necessarily fairly brief, overview of considerations regarding feedback and assessment.

Overall, Interaction Online is a very rich resource, and one that will be best mined in multiple visits. For most readers, I would suggest an initial flick through and a cherry-picking of a small number of activities to try out. For materials writers and course designers, a better starting point may be the final two chapters, followed by a sampling of activities. For everyone, though, Interaction Online is a powerful reminder that technology-assisted language learning could and should be far more than what it usually is.

(This review first appeared in the International House Journal of Education and Development.)

 

In my last post, I looked at the way that, in the absence of a clear, shared understanding of what ‘personalization’ means, it has come to be used as a slogan for the promoters of edtech. In this post, I want to look a little more closely at the constellation of meanings that are associated with the term, suggest a way of evaluating just how ‘personalized’ an instructional method might be, and look at recent research into ‘personalized learning’.

In English language teaching, ‘personalization’ often carries a rather different meaning than it does in broader educational discourse. Jeremy Harmer (Harmer, 2012: 276) defines it as ‘when students use language to talk about themselves and things which interest them’. Most commonly, this is in the context of ‘freer’ language practice of grammar or vocabulary of the following kind: ‘Complete the sentences so that they are true for you’. It is this meaning that Scott Thornbury refers to first in his entry for ‘Personalization’ in his ‘An A-Z of ELT’ (Thornbury, 2006: 160). He goes on, however, to expand his definition of the term to include humanistic approaches such as Community Language Learning / Counseling-Learning (CLL), where learners decide the content of a lesson, where they have agency. I imagine that no one would disagree that an approach such as this is more ‘personalized’ than a ‘complete-the-sentences-so-they-are-true-for-you’ exercise to practise the present perfect.

Outside of ELT, ‘personalization’ has been used to refer to everything ‘from customized interfaces to adaptive tutors, from student-centered classrooms to learning management systems’ (Bulger, 2016: 3). The graphic below (from Bulger, 2016: 3) illustrates just how wide the definitional reach of ‘personalization’ is.

[Image: pie chart of definitions of ‘personalization’ (Bulger, 2016: 3)]

As with Thornbury’s entry in his ‘A – Z of ELT’, it seems uncontentious to say that some things are more ‘personalized’ than others.

Given the current and historical problems with defining the term, it’s not surprising that a number of people have attempted to develop frameworks that can help us to get to grips with the thorny question of ‘personalization’. In the context of language teaching / learning, Renée Disick (Disick, 1975: 58) offered the following categorisation:

[Image: Disick’s categorisation (Disick, 1975: 58)]

In a similar vein, a few years later, Howard Altman (Altman, 1980) suggested that teaching activities can differ in four main ways: the time allocated for learning, the curricular goal, the mode of learning and instructional expectations (personalized goal setting). He then offered eight permutations of these variables (see below, Altman, 1980: 9), although many more are imaginable.

[Image: Altman’s eight permutations (Altman, 1980: 9)]

Altman and Disick were writing, of course, long before our current technology-oriented view of ‘personalization’ became commonplace. The recent classification of technologically-enabled personalized learning systems by Monica Bulger (see below, Bulger, 2016: 6) reflects how times have changed.

[Image: five types of personalized learning system (Bulger, 2016: 6)]

Bulger’s classification focusses on the technology more than the learning, but her continuum is very much in keeping with the views of Disick and Altman. Some approaches are more personalized than others.

The extent to which choices are offered determines the degree of individualization in a particular program. (Disick, 1975: 5)

It is important to remember that learner-centered language teaching is not a point, but rather a continuum. (Altman, 1980: 6)

Larry Cuban has also recently begun to use a continuum as a way of understanding the practices of ‘personalization’ that he observes as part of his research. The overall goals of schooling at both ends of the continuum are not dissimilar: helping ‘children grow into adults who are creative thinkers, help their communities, enter jobs and succeed in careers, and become thoughtful, mindful adults’.

[Image: Larry Cuban’s continuum]

As Cuban and others before him (e.g. Januszewski, 2001: 57) make clear, the two perspectives are not completely independent of each other. Nevertheless, we can see that one end of this continuum is likely to be materials-centred with the other learner-centred (Dickinson, 1987: 57). At one end, teachers (or their LMS replacements) are more likely to be content-providers and enact traditional roles. At the other, teachers’ roles are ‘more like those of coaches or facilitators’ (Cavanagh, 2014). In short, one end of the continuum is personalization for the learner; the other end is personalization by the learner.

It makes little sense, therefore, to talk about personalized learning as being a ‘good’ or a ‘bad’ thing. We might perceive one form of personalized learning to be more personalized than another, but that does not mean it is any ‘better’ or more effective. The only possible approach is to consider and evaluate the different elements of personalization in an attempt to establish, first, from a theoretical point of view whether they are likely to lead to learning gains, and, second, from an evidence-based perspective whether any learning gains are measurable. In recent posts on this blog, I have been attempting to do that with elements such as learning styles, self-pacing and goal-setting.

Unfortunately, but perhaps not surprisingly, none of the elements that we associate with ‘personalization’ will lead to clear, demonstrable learning gains. A report commissioned by the Gates Foundation (Pane et al, 2015) to find evidence of the efficacy of personalized learning did not, despite its subtitle (‘Promising Evidence on Personalized Learning’), manage to come up with any firm and unequivocal evidence (see Riley, 2017). ‘No single element of personalized learning was able to discriminate between the schools with the largest achievement effects and the others in the sample; however, we did identify groups of elements that, when present together, distinguished the success cases from others’, wrote the authors (Pane et al., 2015: 28). Undeterred, another report (Pane et al., 2017) was commissioned: in this the authors were unable to do better than a very hedged conclusion: ‘There is suggestive evidence that greater implementation of PL practices may be related to more positive effects on achievement; however, this finding requires confirmation through further research’ (my emphases). Don’t hold your breath!

In commissioning the reports, the Gates Foundation were probably asking the wrong question. The conceptual elasticity of the term ‘personalization’ makes its operationalization in any empirical study highly problematic. Meaningful comparison of empirical findings would, as David Hartley notes, be hard because ‘it is unlikely that any conceptual consistency would emerge across studies’ (Hartley, 2008: 378). The question of what works is unlikely to provide a useful (in the sense of actionable) response.

In a new white paper out this week, “A blueprint for breakthroughs,” Michael Horn and I argue that simply asking what works stops short of the real question at the heart of a truly personalized system: what works, for which students, in what circumstances? Without this level of specificity and understanding of contextual factors, we’ll be stuck understanding only what works on average despite aspirations to reach each individual student (not to mention mounting evidence that “average” itself is a flawed construct). Moreover, we’ll fail to unearth theories of why certain interventions work in certain circumstances. And without that theoretical underpinning, scaling personalized learning approaches with predictable quality will remain challenging. Otherwise, as more schools embrace personalized learning, at best each school will have to go at it alone and figure out by trial and error what works for each student. Worse still, if we don’t support better research, “personalized” schools could end up looking radically different but yielding similar results to our traditional system. In other words, we risk rushing ahead with promising structural changes inherent to personalized learning—reorganizing space, integrating technology tools, freeing up seat-time—without arming educators with reliable and specific information about how to personalize to their particular students or what to do, for which students, in what circumstances. (Freeland Fisher, 2016)

References

Altman, H.B. 1980. ‘Foreign language teaching: focus on the learner’ in Altman, H.B. & James, C.V. (eds.) 1980. Foreign Language Teaching: Meeting Individual Needs. Oxford: Pergamon Press, pp.1 – 16

Bulger, M. 2016. Personalized Learning: The Conversations We’re Not Having. New York: Data and Society Research Institute. https://www.datasociety.net/pubs/ecl/PersonalizedLearning_primer_2016.pdf

Cavanagh, S. 2014. ‘What Is ‘Personalized Learning’? Educators Seek Clarity’ Education Week http://www.edweek.org/ew/articles/2014/10/22/09pl-overview.h34.html

Dickinson, L. 1987. Self-instruction in Language Learning. Cambridge: Cambridge University Press

Disick, R.S. 1975 Individualizing Language Instruction: Strategies and Methods. New York: Harcourt Brace Jovanovich

Freeland Fisher, J. 2016. ‘The inconvenient truth about personalized learning’ [Blog post] retrieved from http://www.christenseninstitute.org/blog/the-inconvenient-truth-about-personalized-learning/ (May 4, 2016)

Harmer, J. 2012. Essential Teacher Knowledge. Harlow: Pearson Education

Hartley, D. 2008. ‘Education, Markets and the Pedagogy of Personalisation’ British Journal of Educational Studies 56 / 4: 365 – 381

Januszewski, A. 2001. Educational Technology: The Development of a Concept. Englewood, Colorado: Libraries Unlimited

Pane, J. F., Steiner, E. D., Baird, M. D. & Hamilton, L. S. 2015. Continued Progress: Promising Evidence on Personalized Learning. Seattle: Rand Corporation retrieved from http://www.rand.org/pubs/research_reports/RR1365.html

Pane, J.F., Steiner, E. D., Baird, M. D., Hamilton, L. S. & Pane, J.D. 2017. Informing Progress: Insights on Personalized Learning Implementation and Effects. Seattle: Rand Corporation retrieved from https://www.rand.org/pubs/research_reports/RR2042.html

Riley, B. 2017. ‘Personalization vs. How People Learn’ Educational Leadership 74 / 6: 68-73

Thornbury, S. 2006. An A – Z of ELT. Oxford: Macmillan Education


Introduction

In the last post, I looked at issues concerning self-pacing in personalized language learning programmes. This time, I turn to personalized goal-setting. Most definitions of personalized learning, such as that offered by Next Generation Learning Challenges (http://nextgenlearning.org/) – a non-profit supported by Educause, the Gates Foundation, the Broad Foundation and the Hewlett Foundation, among others – argue that ‘the default perspective [should be] the student’s—not the curriculum, or the teacher, and that schools need to adjust to accommodate not only students’ academic strengths and weaknesses, but also their interests, and what motivates them to succeed.’ It’s a perspective shared by the United States National Education Technology Plan 2017 (https://tech.ed.gov/netp/), which promotes the idea that learning objectives should vary based on learner needs, and should often be self-initiated. It’s shared by the massively funded Facebook initiative that is developing software that ‘puts students in charge of their lesson plans’, as the New York Times (https://www.nytimes.com/2016/08/10/technology/facebook-helps-develop-software-that-puts-students-in-charge-of-their-lesson-plans.html?_r=0) put it. How, precisely, personalized goal-setting can be squared with standardized, high-stakes testing is less than clear. Are they incompatible by any chance?

In language learning, the idea that learners should have some say in what they are learning is not new, going back, at least, to the humanistic turn in the 1970s. Wilga Rivers advocated ‘giving the students opportunity to choose what they want to learn’ (Rivers, 1971: 165). A few years later, Renée Disick argued that the extent to which a learning programme can be called personalized (although she used the term ‘individualized’) depends on the extent to which learners have a say in the choice of learning objectives and the content of learning (Disick, 1975). Coming more up to date, Penny Ur advocated giving learners ‘a measure of freedom to choose how and what to learn’ (Ur, 1996: 233).

The benefits of personalized goal-setting

Personalized goal-setting is closely related to learner autonomy and learner agency. Indeed, it is hard to imagine any meaningful sense of learner autonomy or agency without some control of learning objectives. Without this control, it will be harder for learners to develop an L2 self. This matters because ‘ultimate attainment in second-language learning relies on one’s agency … [it] is crucial at the point where the individuals must not just start memorizing a dozen new words and expressions but have to decide on whether to initiate a long, painful, inexhaustive, and, for some, never-ending process of self-translation’ (Pavlenko & Lantolf, 2000: 169 – 170). Put bluntly, if learners ‘have some responsibility for their own learning, they are more likely to be engaged than if they are just doing what the teacher tells them to’ (Harmer, 2012: 90). A degree of autonomy should lead to increased motivation which, in turn, should lead to increased achievement (Dickinson, 1987: 32; Cordova & Lepper, 1996: 726).

Strong evidence for these claims is not easy to provide, not least since autonomy and agency cannot be measured. However, ‘negative evidence clearly shows that a lack of agency can stifle learning by denying learners control over aspects of the language-learning process’ (Vandergriff, 2016: 91). Most language teachers (especially in compulsory education) have witnessed the negative effects that a lack of agency can generate in some students. Irrespective of the extent to which students are allowed to influence learning objectives, the desirability of agency / autonomy appears to be ‘deeply embedded in the professional consciousness of the ELT community’ (Borg and Al-Busaidi, 2012; Benson, 2016: 341). Personalized goal-setting may not, for a host of reasons, be possible in a particular learning / teaching context, but in principle it would seem to be a ‘good thing’.

Goal-setting and technology

The idea that learners might learn more and better if allowed to set their own learning objectives is hardly new, dating back at least one hundred years to the establishment of Montessori’s first Casa dei Bambini. In language teaching, the interest in personalized learning that developed in the 1970s (see my previous post) led to numerous classroom experiments in personalized goal-setting. These did not result in lasting changes, not least because the workload of teachers became ‘overwhelming’ (Disick, 1975: 128).

Closely related was the establishment of ‘self-access centres’. It was clear to anyone, like myself, who was involved in the setting-up and maintenance of a self-access centre, that they cost a lot, in terms of both money and work (Ur, 2012: 236). But there were also nagging questions about how effective they were (Morrison, 2005). Even more problematic was a bigger question: did they actually promote the learner autonomy that was their main goal?

Post-2000, online technology rendered self-access centres redundant: who needs the ‘walled garden’ of a self-access centre when ‘learners are able to connect with multiple resources and communities via the World Wide Web in entirely individual ways’ (Reinders, 2012)? The cost problem of self-access centres was solved by the web. Readily available now were ‘myriad digital devices, software, and learning platforms offering educators a once-unimaginable array of options for tailoring lessons to students’ needs’ (Cavanagh, 2014). Not only that … online technology promised to grant agency, to ‘empower language learners to take charge of their own learning’ and ‘to provide opportunities for learners to develop their L2 voice’ (Vandergriff, 2016: 32). The dream of personalized learning has become inseparable from the affordances of educational technologies.

It is, however, striking just how few online modes of language learning offer any degree of personalized goal-setting. Take a look at some of the big providers – Voxy, Busuu, Duolingo, Rosetta Stone or Babbel, for example – and you will find only the most token nods to personalized learning objectives. Course providers appear to be more interested in claiming their products are personalized (‘You decide what you want to learn and when!’) than in developing a sufficient amount of content to permit personalized goal-setting. We are left with the ELT equivalent of personalized cans of Coke: a marketing tool.


The problems with personalized goal-setting

Would language learning products, such as those mentioned above, be measurably any better if they did facilitate the personalization of learning objectives in a significant way? Would they be able to promote learner autonomy and agency in a way that self-access centres apparently failed to achieve? It’s time to consider the scare quotes that I put around ‘good thing’.

Researchers have identified a number of potential problems with goal-setting. I have already mentioned the problem of reconciling personalized goals and standardized testing. In most learning contexts, educational authorities (usually the state) regulate the curriculum and determine assessment practices. It is difficult to see, as Campbell et al. (Campbell et al., 2007: 138) point out, how such regulation ‘could allow individual interpretations of the goals and values of education’. Most assessment systems ‘aim at convergent outcomes and homogeneity’ (Benson, 2016: 345) and this is especially true of online platforms, irrespective of their claims to ‘personalization’. In weak (typically internal) assessment systems, the potential for autonomy is strongest, but these are rare.

In all contexts, it is likely that personalized goal-setting will only lead to learning gains when a number of conditions are met. The goals that are chosen need to be specific, measurable, challenging and non-conflicting (Ordóñez et al., 2009: 2 – 3). They need to be realistic: if not, it is unlikely that self-efficacy (a person’s belief about their own capability to achieve or perform to a certain level) will be promoted (Koda-Dallow & Hobbs, 2005), and without self-efficacy, improved performance is also unlikely (Bandura, 1997). The problem is that many learners lack self-efficacy and are poor self-regulators. These things are teachable / learnable, but require time and support. Many learners need help in ‘becoming aware of themselves and their own understandings’ (McMahon & Oliver, 2001: 1304). If they do not get it, the potential advantages of personalized goal-setting will be negated. As learners become better self-regulators, they will want and need to redefine their learning goals: goal-setting should be an iterative process (Hussey & Smith, 2003: 358). Again, support will be needed. In online learning, such support is not common.
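To make the conditions above concrete, here is a minimal sketch (in Python, with entirely hypothetical field names and thresholds) of how an online platform might represent a learner-chosen goal and flag problems with measurability, realistic challenge, conflicts with other goals, and iterative revision. It illustrates the principle only; it does not describe any existing product.

```python
from dataclasses import dataclass, field

@dataclass
class LearningGoal:
    """A single personalized goal, e.g. 'hold a 5-minute phone call about travel plans'."""
    description: str
    target_measure: str          # how achievement will be evidenced, e.g. a CEFR speaking descriptor
    deadline_weeks: int          # time frame chosen by the learner
    perceived_difficulty: int    # learner's own rating, 1 (trivial) to 5 (unrealistic)
    conflicts_with: list = field(default_factory=list)  # other goals competing for the same study time

def goal_warnings(goal: LearningGoal) -> list:
    """Return advisory messages where a goal fails the conditions discussed above."""
    warnings = []
    if not goal.target_measure:
        warnings.append("Not measurable: no evidence of achievement has been specified.")
    if goal.perceived_difficulty <= 1:
        warnings.append("May not be challenging enough to affect motivation.")
    if goal.perceived_difficulty >= 5:
        warnings.append("Looks unrealistic: self-efficacy is unlikely to be promoted.")
    if goal.conflicts_with:
        warnings.append(f"Competes with other goals: {', '.join(goal.conflicts_with)}.")
    return warnings

def review_goal(goal: LearningGoal, achieved: bool) -> LearningGoal:
    """Iterative goal-setting: after a review cycle, nudge the goal so the next version
    stays challenging but realistic, rather than being set once and forgotten."""
    new_difficulty = goal.perceived_difficulty + (1 if achieved else -1)
    return LearningGoal(goal.description, goal.target_measure, goal.deadline_weeks,
                        max(1, min(5, new_difficulty)), goal.conflicts_with)
```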

A further problem that has been identified is that goal-setting can discourage a focus on non-goal areas (Ordóñez et al. 2009: 2) and can lead to ‘a focus on reaching the goal rather than on acquiring the skills required to reach it’ (Locke & Latham, 2006: 266). We know that much language learning is messy and incidental. Students do not only learn the particular thing that they are studying at the time (the belief that they do was described by Dewey as ‘the greatest of all pedagogical fallacies’). Goal-setting, even when personalized, runs the risk of promoting tunnel-vision.

The incorporation of personalized goal-setting in online language learning programmes is, in so many ways, a far from straightforward matter. Simply tacking it onto existing programmes is unlikely to result in anything positive: it is not an ‘over-the-counter treatment for motivation’ (Ordóñez et al., 2009: 2). Course developers will need to look at ‘the complex interplay between goal-setting and organizational contexts’ (Ordóñez et al., 2009: 16). Motivating students is not simply ‘a matter of the teacher deploying the correct strategies […] it is an intensely interactive process’ (Lamb, 2017). More generally, developers need to move away from a positivist and linear view of learning as a technical process where teaching interventions (such as the incorporation of goal-setting, the deployment of gamification elements or the use of a particular algorithm) will lead to predictable student outcomes. As Larry Cuban reminds us, ‘no persuasive body of evidence exists yet to confirm that belief’ (Cuban, 1986: 88). The most recent research into personalized learning has failed to identify any single element of personalization that can be clearly correlated with improved outcomes (Pane et al., 2015: 28).

In previous posts, I considered learning styles and self-pacing, two aspects of personalized learning that are highly problematic. Personalized goal-setting is no less so.

References

Bandura, A. 1997. Self-efficacy: The exercise of control. New York: W.H. Freeman and Company

Benson, P. 2016. ‘Learner Autonomy’ in Hall, G. (ed.) The Routledge Handbook of English Language Teaching. Abingdon: Routledge. pp.339 – 352

Borg, S. & Al-Busaidi, S. 2012. ‘Teachers’ beliefs and practices regarding learner autonomy’ ELT Journal 66 / 3: 283 – 292

Cavanagh, S. 2014. ‘What Is ‘Personalized Learning’? Educators Seek Clarity’ Education Week http://www.edweek.org/ew/articles/2014/10/22/09pl-overview.h34.html

Cordova, D. I. & Lepper, M. R. 1996. ‘Intrinsic Motivation and the Process of Learning: Beneficial Effects of Contextualization, Personalization, and Choice’ Journal of Educational Psychology 88 / 4: 715 – 739

Cuban, L. 1986. Teachers and Machines. New York: Teachers College Press

Dickinson, L. 1987. Self-instruction in Language Learning. Cambridge: Cambridge University Press

Disick, R. S. 1975. Individualizing Language Instruction: Strategies and Methods. New York: Harcourt Brace Jovanovich

Harmer, J. 2012. Essential Teacher Knowledge. Harlow: Pearson Education

Hussey, T. & Smith, P. 2003. ‘The Uses of Learning Outcomes’ Teaching in Higher Education 8 / 3: 357 – 368

Lamb, M. 2017 (in press) ‘The motivational dimension of language teaching’ Language Teaching 50 / 3

Locke, E. A. & Latham, G. P. 2006. ‘New Directions in Goal-Setting Theory’ Current Directions in Psychological Science 15 / 5: 265 – 268

McMahon, M. & Oliver, R. 2001. ‘Promoting self-regulated learning in an on-line environment’ in Montgomerie, C. & Viteli, J. (eds.) Proceedings of World Conference on Educational Multimedia, Hypermedia and Telecommunications 2001. Chesapeake, VA: AACE. pp. 1299 – 1305

Morrison, B. 2005. ‘Evaluating learning gain in a self-access learning centre’ Language Teaching Research 9 / 3: 267 – 293

Ordóñez, L. D., Schweitzer, M. E., Galinsky, A. D. & Bazerman, M. H. 2009. Goals Gone Wild: The Systematic Side Effects of Over-Prescribing Goal Setting. Harvard Business School Working Paper 09-083

Pane, J. F., Steiner, E. D., Baird, M. D. & Hamilton, L. S. 2015. Continued Progress: Promising Evidence on Personalized Learning. Santa Monica, CA: RAND Corporation

Pavlenko, A. & Lantolf, J. P. 2000. ‘Second language learning as participation and the (re)construction of selves’ In J.P. Lantolf (ed.), Sociocultural Theory and Second Language Learning. Oxford: Oxford University Press, pp. 155 – 177

Reinders, H. 2012. ‘The end of self-access? From walled garden to public park’ ELT World Online 4: 1 – 5

Rivers, W. M. 1971. ‘Techniques for Developing Proficiency in the Spoken Language in an Individualized Foreign Language program’ in Altman, H.B. & Politzer, R.L. (eds.) 1971. Individualizing Foreign Language Instruction: Proceedings of the Stanford Conference, May 6 – 8, 1971. Washington, D.C.: Office of Education, U.S. Department of Health, Education, and Welfare. pp. 165 – 169

Ur, P. 1996. A Course in Language Teaching: Practice and Theory. Cambridge: Cambridge University Press

Ur, P. 2012. A Course in English Language Teaching. Cambridge: Cambridge University Press

Vandergriff, I. 2016. Second-language Discourse in the Digital World. Amsterdam: John Benjamins

About two and a half years ago when I started writing this blog, there was a lot of hype around adaptive learning and the big data which might drive it. Two and a half years is a long time in technology. A look at Google Trends suggests that interest in adaptive learning has been pretty static for the last couple of years. It’s interesting to note that 3 of the 7 lettered points on this graph are Knewton-related media events (including the most recent, A, which is Knewton’s latest deal with Hachette) and 2 of them concern McGraw-Hill. It would be interesting to know whether these companies follow both parts of Simon Cowell’s dictum of ‘Create the hype, but don’t ever believe it’.

[Google Trends graph: search interest in ‘adaptive learning’]

A look at the Hype Cycle (see here for Wikipedia’s entry on the topic and for criticism of the hype of Hype Cycles) of the IT research and advisory firm, Gartner, indicates that both big data and adaptive learning have now slid into the ‘trough of disillusionment’, which means that the market has started to mature, becoming more realistic about how useful the technologies can be for organizations.

A few years ago, the Gates Foundation, one of the leading cheerleaders and financial promoters of adaptive learning, launched its Adaptive Learning Market Acceleration Program (ALMAP) to ‘advance evidence-based understanding of how adaptive learning technologies could improve opportunities for low-income adults to learn and to complete postsecondary credentials’. It’s striking that the program’s aims referred to how such technologies could lead to learning gains, not whether they would. Now, though, with the publication of a report commissioned by the Gates Foundation to analyze the data coming out of the ALMAP Program, things are looking less rosy. The report is inconclusive. There is no firm evidence that adaptive learning systems are leading to better course grades or course completion. ‘The ultimate goal – better student outcomes at lower cost – remains elusive’, the report concludes. Rahim Rajan, a senior program officer for Gates, is clear: ‘There is no magical silver bullet here.’

The same conclusion is being reached elsewhere. A report for the National Education Policy Center (in Boulder, Colorado) concludes: ‘Personalized Instruction, in all its many forms, does not seem to be the transformational technology that is needed, however. After more than 30 years, Personalized Instruction is still producing incremental change. The outcomes of large-scale studies and meta-analyses, to the extent they tell us anything useful at all, show mixed results ranging from modest impacts to no impact. Additionally, one must remember that the modest impacts we see in these meta-analyses are coming from blended instruction, which raises the cost of education rather than reducing it’ (Enyedy, 2014: 15 – see reference at the foot of this post). In the same vein, a recent academic study by Meg Coffin Murray and Jorge Pérez (2015, ‘Informing and Performing: A Study Comparing Adaptive Learning to Traditional Learning’) found that ‘adaptive learning systems have negligible impact on learning outcomes’.

In the latest educational technology plan from the U.S. Department of Education (‘Future Ready Learning: Reimagining the Role of Technology in Education’, 2016) the only mentions of the word ‘adaptive’ are in the context of testing. And the latest OECD report on ‘Students, Computers and Learning: Making the Connection’ (2015), finds, more generally, that information and communication technologies, when they are used in the classroom, have, at best, a mixed impact on student performance.

There is, however, too much money at stake for the earlier hype to disappear completely. Sponsored cheerleading for adaptive systems continues to find its way into blogs and national magazines and newspapers. EdSurge, for example, recently published a report called ‘Decoding Adaptive’ (2016), sponsored by Pearson, that continues to wave the flag. Enthusiastic anecdotes take the place of evidence, but, for all that, it’s a useful read.

In the world of ELT, there are plenty of sales people who want new products which they can call ‘adaptive’ (and gamified, too, please). But it’s striking that three years after I started following the hype, such products are rather thin on the ground. Pearson was the first of the big names in ELT to do a deal with Knewton, and invested heavily in the company. Their relationship remains close. But, to the best of my knowledge, the only truly adaptive ELT product that Pearson offers is the PTE test.

Macmillan signed a contract with Knewton in May 2013 ‘to provide personalized grammar and vocabulary lessons, exam reviews, and supplementary materials for each student’. In December of that year, they talked up their new ‘big tree online learning platform’: ‘Look out for the Big Tree logo over the coming year for more information as to how we are using our partnership with Knewton to move forward in the Language Learning division and create content that is tailored to students’ needs and reactive to their progress.’ I’ve been looking out, but it’s all gone rather quiet on the adaptive / platform front.

In September 2013, it was the turn of Cambridge to sign a deal with Knewton ‘to create personalized learning experiences in its industry-leading ELT digital products for students worldwide’. This year saw the launch of a major new CUP series, ‘Empower’. It has an online workbook with personalized extra practice, but there’s nothing (yet) that anyone would call adaptive. More recently, Cambridge has launched the online version of the 2nd edition of Touchstone. Nothing adaptive there, either.

Earlier this year, Cambridge published The Cambridge Guide to Blended Learning for Language Teaching, edited by Mike McCarthy. It contains a chapter by M.O.Z. San Pedro and R. Baker on ‘Adaptive Learning’. It’s an enthusiastic account of the potential of adaptive learning, but it doesn’t contain a single reference to language learning or ELT!

So, what’s going on? Skepticism is becoming the order of the day. The early hype of people like Knewton’s Jose Ferreira is now understood for what it was. Companies like Macmillan got their fingers badly burnt when they barked up the wrong tree with their ‘Big Tree’ platform.

Noel Enyedy captures a more contemporary understanding when he writes: ‘Personalized Instruction is based on the metaphor of personal desktop computers—the technology of the 80s and 90s. Today’s technology is not just personal but mobile, social, and networked. The flexibility and social nature of how technology infuses other aspects of our lives is not captured by the model of Personalized Instruction, which focuses on the isolated individual’s personal path to a fixed end-point. To truly harness the power of modern technology, we need a new vision for educational technology’ (Enyedy, 2014: 16).

Adaptive solutions aren’t going away, but there is now a much better understanding of what sorts of problems might have adaptive solutions. Testing is certainly one. As the educational technology plan from the U.S. Department of Education (‘Future Ready Learning: Reimagining the Role of Technology in Education’, 2016) puts it: ‘Computer adaptive testing, which uses algorithms to adjust the difficulty of questions throughout an assessment on the basis of a student’s responses, has facilitated the ability of assessments to estimate accurately what students know and can do across the curriculum in a shorter testing session than would otherwise be necessary.’ In ELT, Pearson and EF have adaptive tests that have been well researched and designed.
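The Department of Education’s description is only an outline. Real computer adaptive tests (including the ones mentioned above) rely on item response theory to select items, but even a crude staircase procedure, sketched below with a hypothetical item bank graded 1 – 10, shows the core idea of adjusting difficulty on the basis of responses. Nothing here reflects how any actual test is engineered.

```python
import random

def run_adaptive_test(item_bank, answer_fn, num_items=10):
    """Minimal staircase-style adaptive test.
    item_bank: dict mapping a difficulty level (1-10) to a list of items at that level
    answer_fn: callable(item) -> True if the candidate answers correctly
    Returns the level the candidate settles at (a crude ability estimate) and the items used."""
    level = 5                      # start in the middle of the scale
    administered = []
    for _ in range(num_items):
        item = random.choice(item_bank[level])
        administered.append((level, item))
        if answer_fn(item):
            level = min(10, level + 1)   # correct answer: harder item next
        else:
            level = max(1, level - 1)    # wrong answer: easier item next
    return level, administered

# Toy usage: a candidate who can handle items up to difficulty 6
bank = {d: [f"item-{d}-{i}" for i in range(20)] for d in range(1, 11)}
estimate, _ = run_adaptive_test(bank, lambda item: int(item.split("-")[1]) <= 6)
print(estimate)
```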

Vocabulary apps which deploy adaptive technology continue to become more sophisticated, although empirical research is lacking. Automated writing tutors with adaptive corrective feedback are also developing fast, and I’ll be writing a post about these soon. Similarly, as speech recognition software improves, we can expect to see better and better automated adaptive pronunciation tutors. But going beyond such applications, there are bigger questions to ask, and answers to these will impact on whatever direction adaptive technologies take. Large platforms (LMSs), with or without adaptive software, are already beginning to look rather dated. Will they be replaced by integrated apps, or are apps themselves going to be replaced by bots (currently riding high in the Hype Cycle)? In language learning and teaching, the future of bots is likely to be shaped by developments in natural language processing (another topic about which I’ll be blogging soon). Nobody really has a clue where the next two and a half years will take us (if anywhere), but it’s becoming increasingly likely that adaptive learning will be only one very small part of it.

 

Enyedy, N. 2014. Personalized Instruction: New Interest, Old Rhetoric, Limited Results, and the Need for a New Direction for Computer-Mediated Learning. Boulder, CO: National Education Policy Center. Retrieved 17.07.16 from http://nepc.colorado.edu/publication/personalized-instruction

Ok, let’s be honest here. This post is about teacher training, but ‘development’ sounds more respectful, more humane, more modern. Teacher development (self-initiated, self-evaluated, collaborative and holistic) could be adaptive, but it’s unlikely that anyone will want to spend the money on developing an adaptive teacher development platform any time soon. Teacher training (top-down, pre-determined syllabus and externally evaluated) is another matter. If you’re not too clear about this distinction, see Penny Ur’s article in The Language Teacher.

The main point of adaptive learning tools is to facilitate differentiated instruction. They are, as Pearson’s latest infomercial booklet describes them, ‘educational technologies that can respond to a student’s interactions in real-time by automatically providing the student with individual support’. Differentiation or personalization (or whatever you call it) is, as I’ve written before, the declared goal of almost everyone in educational power these days. What exactly it is may be open to question (see Michael Feldstein’s excellent article), as may be the question of whether or not it is actually such a desideratum (see, for example, this article). But, for the sake of argument, let’s agree that it’s mostly better than one-size-fits-all.

Teachers around the world are being encouraged to adopt a differentiated approach with their students, and they are being encouraged to use technology to do so. It is technology that can help create ‘robust personalized learning environments’ (says the White House). Differentiation for language learners could be facilitated by ‘social networking systems, podcasts, wikis, blogs, encyclopedias, online dictionaries, webinars, online English courses,’ etc. (see Alexandra Chistyakova’s post on eltdiary).

But here’s the crux. If we want teachers to adopt a differentiated approach, they really need to have experienced it themselves in their training. An interesting post on edweek sums this up: ‘If professional development is supposed to lead to better pedagogy that will improve student learning AND we are all in agreement that modeling behaviors is the best way to show people how to do something, THEN why not ensure all professional learning opportunities exhibit the qualities we want classroom teachers to have?’

Differentiated teacher development / training is rare. According to the Center for Public Education’s Teaching the Teachers report, almost all teachers participate in ‘professional development’ (PD) throughout the year. However, a majority of those teachers find the PD in which they participate ineffective. Typically, the development is characterised by ‘drive-by’ workshops, one-size-fits-all presentations, ‘been there, done that’ topics, little or no modelling of what is being taught, a focus on rotating fads and a lack of follow-up. This report is not specifically about English language teachers, but it will resonate with many who are working in English language teaching around the world.

The promotion of differentiated teacher development is gaining traction: see here or here, for example, or read Cindy A. Strickland’s ‘Professional Development for Differentiating Instruction’.

Remember, though, that it’s really training, rather than development, that we’re talking about. After all, if one of the objectives is to equip teachers with a skill set that will enable them to become more effective instructors of differentiated learning, this is most definitely ‘training’ (notice the transitivity of the verbs ‘enable’ and ‘equip’!). In this context, a necessary starting point will be some sort of ‘knowledge graph’ (which I’ve written about here). For language teachers, these already exist, including the European Profiling Grid, the Eaquals Framework for Language Teacher Training and Development, the Cambridge English Teaching Framework and the British Council’s Continuing Professional Development (CPD) Framework for Teachers. We can expect these to become more refined and more granularised, and a partial move in this direction is the Cambridge English Digital Framework for Teachers. Once a knowledge graph is in place, the next step will be to tag particular pieces of teacher training content (e.g. webinars, tasks, readings, etc.) to locations in the framework that is being used. It would not be too complicated to engineer dynamic frameworks which could be adapted to individual or institutional needs.
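As a rough illustration of the kind of engineering involved (and nothing more), the sketch below tags invented content items to invented framework nodes and suggests content where a teacher’s self-audit shows low confidence. The node names, content items and threshold are all hypothetical; none of the frameworks mentioned above is structured in exactly this way.

```python
# Hypothetical framework nodes, loosely modelled on the style of competency statements
# found in published teacher development frameworks.
framework = {
    "assessment.giving_feedback": "Can give learners constructive feedback on spoken errors",
    "planning.differentiation":   "Can plan tasks at more than one level of challenge",
    "digital.lms_dashboards":     "Can use an LMS dashboard to monitor learner progress",
}

# Training content tagged to locations in the framework.
content_items = [
    {"title": "Webinar: error correction in speaking lessons", "tags": ["assessment.giving_feedback"]},
    {"title": "Reading: mixed-ability task design",            "tags": ["planning.differentiation"]},
    {"title": "Screencast: reading your LMS reports",          "tags": ["digital.lms_dashboards"]},
]

def suggest_content(self_audit, content=content_items):
    """Given a teacher's self-audit (framework node -> confidence 1-5),
    return the titles of content tagged to the nodes where confidence is lowest."""
    weak_nodes = {node for node, confidence in self_audit.items() if confidence <= 2}
    return [item["title"] for item in content if weak_nodes.intersection(item["tags"])]

print(suggest_content({"assessment.giving_feedback": 2,
                       "planning.differentiation": 4,
                       "digital.lms_dashboards": 1}))
```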

This process will be facilitated by the fact that teacher training content is already being increasingly granularised. Whether it’s an MA in TESOL or a shorter, more practically oriented course, things are getting more and more bite-sized, with credits being awarded to these short bites, as course providers face stiffer competition and respond to market demands.

Classroom practice could also form part of such an adaptive system. One tool that could be deployed would be Visible Classroom, an automated system for providing real-time evaluative feedback for teachers. There is an ‘online dashboard providing teachers with visual information about their teaching for each lesson in real-time. This includes proportion of teacher talk to student talk, number and type of questions, and their talking speed.’ John Hattie, who is behind this project, says that teachers ‘account for about 30% of the variance in student achievement and [are] the largest influence outside of individual student effort.’ Teacher development with a tool like Visible Classroom is ultimately all about measuring teacher performance (against a set of best-practice benchmarks identified by Hattie’s research) in order to improve the learning outcomes of the students.
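Visible Classroom’s own pipeline is not public, but the headline figures a dashboard of this kind reports (teacher talk ratio, question counts, speaking rate) can be sketched from a timestamped transcript. The following is an illustrative toy example under that assumption, not the product’s actual method.

```python
def lesson_metrics(turns):
    """turns: list of (speaker, start_seconds, end_seconds, text) tuples,
    where speaker is 'teacher' or 'student'.
    Returns the kind of headline figures a real-time dashboard might display."""
    teacher_time  = sum(end - start for who, start, end, _ in turns if who == "teacher")
    student_time  = sum(end - start for who, start, end, _ in turns if who == "student")
    teacher_words = sum(len(text.split()) for who, _, _, text in turns if who == "teacher")
    questions     = sum(text.count("?") for who, _, _, text in turns if who == "teacher")

    return {
        "teacher_talk_ratio": teacher_time / (teacher_time + student_time),
        "teacher_questions": questions,
        "teacher_speech_rate_wpm": teacher_words / (teacher_time / 60) if teacher_time else 0.0,
    }

# Toy example: a 30-second exchange
sample = [
    ("teacher", 0, 12, "What did you do at the weekend? Anyone?"),
    ("student", 12, 20, "I visited my grandmother in Izmir."),
    ("teacher", 20, 30, "Nice. Tell us one thing you did together."),
]
print(lesson_metrics(sample))
```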

You may have noticed the direction in which this part of this blog post is going. I began by talking about social networking systems, podcasts, wikis, blogs and so on, and just now I’ve mentioned the summative, credit-bearing possibilities of an adaptive teacher development training programme. It’s a tension that is difficult to resolve. There’s always a paradox in telling anyone that they are going to embark on a self-directed course of professional development. Whoever pays the piper calls the tune and, if an institution decides that it is worth investing significant amounts of money in teacher development, they will want a return for their money. The need for truly personalised teacher development is likely to be overridden by the more pressing need for accountability, which, in turn, typically presupposes pre-determined course outcomes, which can be measured in some way … so that quality (and cost-effectiveness and so on) can be evaluated.

Finally, it’s worth asking if language teaching (any more than language learning) can be broken down into small parts that can be synthesized later into a meaningful and valuable whole. Certainly, there are some aspects of language teaching (such as the ability to use a dashboard on an LMS) which lend themselves to granularisation. But there’s a real danger of losing sight of the forest of teaching if we focus on the individual trees that can be studied and measured.

It’s a good time to be in Turkey if you have digital ELT products to sell. Not so good if you happen to be an English language learner. This post takes a look at both sides of the Turkish lira.

OUP, probably the most significant of the big ELT publishers in Turkey, recorded ‘an outstanding performance’ in the country in the last financial year, making it their 5th largest ELT market. OUP’s annual report for 2013 – 2014 describes the particularly strong demand for digital products and services, a demand which is now influencing OUP’s global strategy for digital resources. When asked about the future of ELT, Peter Marshall, Managing Director of OUP’s ELT Division, suggested that Turkey was a country that could point us in the direction of an answer to the question. Marshall and OUP will be hoping that OUP’s recently launched Digital Learning Platform (DLP) ‘for the global distribution of adult and secondary ELT materials’ will be an important part of that future, in Turkey and elsewhere. I can’t think of any good reason for doubting their belief.

OUP aren’t the only ones eagerly checking the pound-lira exchange rates. For the last year, CUP also reported ‘significant sales successes’ in Turkey in their annual report. For CUP, too, it was a year in which digital development has been ‘a top priority’. CUP’s Turkish success story has been primarily driven by a deal with Anadolu University (more about this below) to provide ‘a print and online solution to train 1.7 million students’ using their Touchstone course. This was the biggest single sale in CUP’s history and has inspired publishers, both within CUP and outside, to attempt to emulate the deal. The new blended products will, of course, be adaptive.

Just how big is the Turkish digital ELT pie? According to a 2014 report from Ambient Insight, revenues from digital ELT products reached $32.0 million in 2013. They are forecast to more than double to $72.6 million in 2018. This is a growth rate of 17.8%, a rate which is practically unbeatable in any large economy, and Turkey is the 17th largest economy in the world, according to World Bank statistics.
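For what it’s worth, the 17.8% figure is the compound annual growth rate implied by those two numbers over the five-year span (assuming 2013 and 2018 as the end points of the forecast):

```python
revenue_2013 = 32.0   # $ million, Ambient Insight figure quoted above
revenue_2018 = 72.6   # $ million, forecast
years = 2018 - 2013

cagr = (revenue_2018 / revenue_2013) ** (1 / years) - 1
print(f"Compound annual growth rate: {cagr:.1%}")   # ~17.8%
```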

So, what makes Turkey special?

  • Turkey has a large and young population that is growing by about 1.4% each year, which is equivalent to approximately 1 million people. According to the Turkish Ministry of Education, there are currently about 5.5 million students enrolled in upper-secondary schools. Significant growth in numbers is certain.
  • Turkey is currently in the middle of a government-sponsored $990 million project to increase the level of English proficiency in schools. The government’s target is to position the country as one of the top ten global economies by 2023, the centenary of the Turkish Republic, and it believes that this position will be more reachable if it has a population with the requisite foreign language (i.e. English) skills. As part of this project, the government has begun to introduce English in the 1st grade (previously it was in the 4th grade).
  • The level of English in Turkey is famously low and has been described as a ‘national weakness’. In October/November 2011, the Turkish research institute SETA and the Turkish Ministry for Youth and Sports conducted a large survey across Turkey of 10,174 young citizens, aged 15 to 29. The result was sobering: 59 per cent of the young people said they “did not know any foreign language.” A recent British Council report (2013) found the competence level in English of most (90+%) students across Turkey was evidenced as rudimentary – even after 1000+ hours (estimated at end of Grade 12) of English classes. This is, of course, good news for vendors of English language learning / teaching materials.
  • Turkey has launched one of the world’s largest educational technology projects: the FATIH Project (The Movement to Enhance Opportunities and Improve Technology). One of its objectives is to provide tablets for every student between grades 5 and 12. At the same time, according to the Ambient report, the intention is to ‘replace all print-based textbooks with digital content (both eTextbooks and online courses).’
  • Purchasing power in Turkey is concentrated in a relatively small number of hands, with the government as the most important player. Institutions are often very large. Anadolu University, for example, is the second largest university in the world, with over 2 million students, most of whom are studying in virtual classrooms. There are two important consequences of this. Firstly, it makes scalable, big-data-driven LMS-delivered courses with adaptive software a more attractive proposition to purchasers. Secondly, it facilitates the B2B sales model that is now preferred by vendors (including the big ELT publishers).
  • Turkey also has a ‘burgeoning private education sector’, according to Peter Marshall, and a thriving English language school industry. According to Ambient, ‘commercial English language learning in Turkey is a $400 million industry with over 600 private schools across the country’. Many of these are grouped into large chains (see the bullet point above).
  • Turkey is also ‘in the vanguard of the adoption of educational technology in ELT’, according to Peter Marshall. With 36 million internet users, the 5th largest internet population in Europe, and the 3rd highest online engagement in Europe, measured by time spent online (reported by Sina Afra), the country’s enthusiasm for educational technology is not surprising. Ambient reports that ‘the growth rate for mobile English educational apps is 27.3%’. This enthusiasm is reflected in Turkey’s thriving ELT conference scene. The most popular conference themes and conference presentations are concerned with edtech. A keynote speech by Esat Uğurlu at the ISTEK schools 3rd international ELT conference at Yeditepe in April 2013 gives a flavour of the current interests. The talk was entitled ‘E-Learning: There is nothing to be afraid of and plenty to discover’.

All of the above makes Turkey a good place to be if you’re selling digital ELT products, even though the competition is pretty fierce. If your product isn’t adaptive, personalized and gamified, you may as well not bother.

What impact will all this have on Turkey’s English language learners? A report co-produced by TEPAV (the Economic Policy Research Foundation of Turkey) and the British Council in November 2013 suggests some of the answers, at least in the school population. The report is entitled ‘Turkey National Needs Assessment of State School English Language Teaching’ and its Executive Summary is brutally frank in its analysis of the low achievements in English language learning in the country. It states:

The teaching of English as a subject and not a language of communication was observed in all schools visited. This grammar-based approach was identified as the first of five main factors that, in the opinion of this report, lead to the failure of Turkish students to speak/ understand English on graduation from High School, despite having received an estimated 1000+ hours of classroom instruction.

In all classes observed, students fail to learn how to communicate and function independently in English. Instead, the present teacher-centric, classroom practice focuses on students learning how to answer teachers’ questions (where there is only one, textbook-type ‘right’ answer), how to complete written exercises in a textbook, and how to pass a grammar-based test. Thus grammar-based exams/grammar tests (with right/wrong answers) drive the teaching and learning process from Grade 4 onwards. This type of classroom practice dominates all English lessons and is presented as the second causal factor with respect to the failure of Turkish students to speak/understand English.

The problem, in other words, is the curriculum and the teaching. In its recommendations, the report makes this crystal clear. Priority needs to be given to developing a revised curriculum and ‘a comprehensive and sustainable system of in-service teacher training for English teachers’. Curriculum renewal and programmes of teacher training / development are the necessary prerequisites for the successful implementation of a programme of educational digitalization. Unfortunately, research has shown again and again that these take a long time and outcomes are difficult to predict in advance.

By going for digitalization first, Turkey is taking a huge risk. What LMSs, adaptive software and most apps do best is the teaching of language knowledge (grammar and vocabulary), not the provision of opportunities for communicative practice (for which there is currently no shortage of opportunity … it is just that these opportunities are not being taken). There is a real danger, therefore, that the technology will push learning priorities in precisely the opposite direction to that which is needed. Without significant investments in curriculum reform and teacher training, how likely is it that the transmission-oriented culture of English language teaching and learning will change?

Even if the money for curriculum reform and teacher training were found, it is also highly unlikely that effective country-wide approaches to blended learning for English would develop before the current generation of tablets and their accompanying content become obsolete.

Sadly, the probability is, once more, that educational technology will be a problem-changer, even a problem-magnifier, rather than a problem-solver. I’d love to be wrong.