Posts Tagged ‘research’

From time to time, I have mentioned Programmed Learning (or Programmed Instruction) in this blog (here and here, for example). It felt like time to go into a little more detail about what Programmed Instruction was (and is) and why I think it’s important to know about it.

A brief description

The basic idea behind Programmed Instruction was that subject matter could be broken down into very small parts, which could be organised into an optimal path for presentation to students. Students worked, at their own speed, through a series of micro-tasks, building their mastery of each nugget of learning that was presented, not progressing from one to the next until they had demonstrated they could respond accurately to the previous task.

There were two main types of Programmed Instruction: linear programming and branching programming. In the former, every student would follow the same path, the same sequence of frames. This could be used in classrooms for whole-class instruction, and I tracked down a book (illustrated below) called ‘Programmed English Course Student’s Book 1’ (Hill, 1966), which was an attempt to transfer the ideas behind Programmed Instruction to a zero-tech classroom environment. This is very similar in approach to the material I had to use when working at an Inlingua school in the 1980s.

Programmed English Course

Comparatives strip

An example of how self-paced programming worked is illustrated here, with a section on comparatives.

With branching programming, ‘extra frames (or branches) are provided for students who do not get the correct answer’ (Kay et al., 1968: 19). This was only suitable for self-study, but it was clearly preferable, as it allowed for self-pacing and some personalization. The material could be presented in books (which meant that students had to flick back and forth in their books) or with special ‘teaching machines’, but the latter were preferred.

In the words of an early enthusiast, Programmed Instruction was essentially ‘a device to control a student’s behaviour and help him to learn without the supervision of a teacher’ (Kay et al., 1968: 58). The approach was inspired by the work of Skinner and it was first used as part of a university course in behavioural psychology taught by Skinner at Harvard University in 1957. It moved into secondary schools for teaching mathematics in 1959 (Saettler, 2004: 297).

Enthusiasm and uptake

The parallels between current enthusiasm for the power of digital technology to transform education and the excitement about Programmed Instruction and teaching machines in the 1960s are very striking (McDonald et al., 2005: 90). In 1967, it was reported that ‘we are today on the verge of what promises to be a revolution in education’ (Goodman, 1967: 3) and that ‘tremors of excitement ran through professional journals and conferences and department meetings from coast to coast’ (Kennedy, 1967: 871). The following year, another commentator referred to the way that the field of education had been stirred ‘with an almost Messianic promise of a breakthrough’ (Ornstein, 1968: 401). Programmed Instruction was also seen as an exciting business opportunity: ‘an entire industry is just coming into being and significant sales and profits should not be too long in coming’, wrote one hopeful financial analyst as early as 1961 (Kozlowski, 1961: 47).

The new technology seemed to offer a solution to the ‘problems of education’. Media reports in Germany in 1963, for example, discussed a shortage of teachers, large classes and inadequate learning progress: ‘an “urgent pedagogical emergency” that traditional teaching methods could not resolve’ (Hof, 2018). Individualised learning, through Programmed Instruction, would equalise educational opportunity, and if you weren’t part of it, you would be left behind. In the US, the government spent two billion dollars on educational technology in the decade following the passing of the National Defense Education Act, supplemented by grants from private foundations. As a result, ‘the production of teaching machines began to flourish, accompanied by the marketing of numerous “teaching units” stamped into punch cards as well as less expensive didactic programme books and index cards. The market grew dramatically in a short time’ (Hof, 2018).

In the field of language learning, however, enthusiasm was more muted. In the year in which he completed his doctoral studies[1], the eminent linguist Bernard Spolsky noted that ‘little use is actually being made of the new technique’ (Spolsky, 1966). A year later, a survey of over 600 foreign language teachers at US colleges and universities reported that only about 10% of them had programmed materials in their departments (Valdman, 1968: 1). In most of these cases, the materials ‘were being tried out on an experimental basis under the direction of their developers’. And two years after that, it was reported that ‘programming has not yet been used to any very great extent in language teaching, so there is no substantial body of experience from which to draw detailed, water-tight conclusions’ (Howatt, 1969: 164).

By the early 1970s, Programmed Instruction was already beginning to seem like yesterday’s technology, even though the principles behind it are still very much alive today (Thornbury (2017) refers to Duolingo as ‘Programmed Instruction’). It would be nice to think that language teachers of the day were more sceptical than, for example, their counterparts teaching mathematics. It would be nice to think that, like Spolsky, they had taken on board Chomsky’s (1959) demolition of Skinner. But the widespread popularity of Audiolingual methods suggests otherwise. Audiolingualism, based essentially on the same Skinnerian principles as Programmed Instruction, needed less outlay on technology. The machines (a slide projector and a record or tape player) were cheaper than the teaching machines, could be used for other purposes and did not become obsolete so quickly. The method also lent itself more readily to established school systems (i.e. whole-class teaching) and the skill sets of teachers of the day. Significantly, too, there was relatively little investment in Programmed Instruction for language teaching (compared to, say, mathematics), since this was a smallish and more localized market. There was no global market for English language learning as there is today.

Lessons to be learned

1 Shaping attitudes

It was not hard to persuade some educational authorities of the value of Programmed Instruction. As discussed above, it offered a solution to ‘the chronic shortage of adequately trained and competent teachers at all levels in our schools, colleges and universities’ (Goodman, 1967: 3). Goodman added that ‘there is growing realisation of the need to give special individual attention to handicapped children and to those apparently or actually retarded’. The new teaching machines ‘could simulate the human teacher and carry out at least some of his functions quite efficiently’ (Goodman, 1967: 4). This wasn’t quite the same thing as saying that the machines could replace teachers, although some might have hoped for this. The official line was more often that the machines could ‘be used as devices, actively co-operating with the human teacher as adaptive systems and not just merely as aids’ (Goodman, 1967: 37). But this more nuanced message did not always get through, and ‘the Press soon stated that robots would replace teachers and conjured up pictures of classrooms of students with little iron men in front of them’ (Kay et al., 1968: 161).

For teachers, though, it was one thing to be told that the machines would free their time for more meaningful tasks, and quite another to believe it when the message was accompanied by a ‘rhetoric of the instructional inadequacies of the teacher’ (McDonald et al., 2005: 88). Many teachers felt threatened. They ‘reacted against the “unfeeling machine” as a poor substitute for the warm, responsive environment provided by a real, live teacher. Others have seemed to take it more personally, viewing the advent of programmed instruction as the end of their professional career as teachers. To these, even the mention of programmed instruction produces a momentary look of panic followed by the appearance of determination to stave off the ominous onslaught somehow’ (Tucker, 1972: 63).

Some of those who were pushing for Programmed Instruction had a bigger agenda, with their sights set firmly on broader school reform made possible through technology (Hof, 2018). Individualised learning and Programmed Instruction were not just ends in themselves: they were ways of facilitating bigger changes. The trouble was that teachers were necessary for Programmed Instruction to work. On the practical level, it became apparent that a blend of teaching machines and classroom teaching was more effective than the machines alone (Saettler, 2004: 299). But the teachers’ attitudes were crucial: a research study involving over 6000 students of Spanish showed that ‘the more enthusiastic the teacher was about programmed instruction, the better the work the students did, even though they worked independently’ (Saettler, 2004: 299). In other researched cases, too, ‘teacher attitudes proved to be a critical factor in the success of programmed instruction’ (Saettler, 2004: 301).

2 Returns on investment

Pricing a hyped edtech product is a delicate matter. Vendors need to see a relatively quick return on their investment before a newer technology knocks them out of the market. Developments in computing were fast in the late 1960s, and the first commercially successful personal computer, the Altair 8800, appeared in 1974. But too high a price carried obvious risks. In 1967, the cheapest teaching machine in the UK, the Tutorpack (from Packham Research Ltd), cost £7 12s (seven pounds and twelve shillings, equivalent to about £126 today), but machines like these were disparagingly referred to as ‘page-turners’ (Higgins, 1983: 4). A higher-end linear programming machine cost twice this amount. Branching programme machines cost a lot more. The Mark II AutoTutor (from USI Great Britain Limited), for example, cost £31 per month (equivalent to about £558 today), with eight reels of programmes thrown in (Goodman, 1967: 26). A lower-end branching machine, the Grundytutor, could be bought for £230 (worth about £4140 today).

Teaching machines (from Goodman)
AutoTutor Mk II (from Goodman)

This was serious money, and any institution splashing out on teaching machines needed to be confident that they would be well used for a long period of time (Nordberg, 1965). The programmes (the software) were specific to individual machines and the content could not be updated easily. At the same time, other technological developments (cine projectors, tape recorders, record players) were arriving in classrooms, and schools found themselves having to pay for technical assistance and maintenance. The average teacher was ‘unable to avail himself fully of existing aids because, to put it bluntly, he is expected to teach for too many hours a day and simply has not the time, with all the administrative chores he is expected to perform, either to maintain equipment, to experiment with it, let alone keeping up with developments in his own and wider fields. The advent of teaching machines which can free the teacher to fulfil his role as an educator will intensify and not diminish the problem’ (Goodman, 1967: 44). Teaching machines, in short, were ‘oversold and underused’ (Cuban, 2001).

3 Research and theory

Looking back twenty years later, B. F. Skinner conceded that ‘the machines were crude, [and] the programs were untested’ (Skinner, 1986: 105). The documentary record suggests that the second part of this statement is not entirely true. Herrick (1966: 695) reported that ‘an overwhelming amount of research time has been invested in attempts to determine the relative merits of programmed instruction when compared to “traditional” or “conventional” methods of instruction. The results have been almost equally overwhelming in showing no significant differences’. In 1968, Kay et al. (1968: 96) noted that ‘there has been a definite effort to examine programmed instruction’. A later meta-analysis of research in secondary education (Kulik et al., 1982) confirmed that ‘Programmed Instruction did not typically raise student achievement […] nor did it make students feel more positively about the subjects they were studying’.

It was not, therefore, the case that research was not being done. It was that many people preferred not to look at it. The same holds true for theoretical critiques. In relation to language learning, Spolsky (1966) referred to Chomsky’s (1959) rebuttal of Skinner’s arguments, adding that ‘there should be no need to rehearse these inadequacies, but as some psychologists and even applied linguists appear to ignore their existence it might be as well to remind readers of a few’. Programmed Instruction might have had a limited role to play in language learning, but vendors’ claims went further than that and some people believed them: ‘Rather than addressing themselves to limited and carefully specified FL tasks – for example the teaching of spelling, the teaching of grammatical concepts, training in pronunciation, the acquisition of limited proficiency within a restricted number of vocabulary items and grammatical features – most programmers aimed at self-sufficient courses designed to lead to near-native speaking proficiency’ (Valdman, 1968: 2).

4 Content

When learning is conceptualised as purely the acquisition of knowledge, technological optimists tend to believe that machines can convey it more effectively and more efficiently than teachers (Hof, 2018). The corollary of this is the belief that, if you get the materials right (plus the order in which they are presented and appropriate feedback), you can ‘to a great extent control and engineer the quality and quantity of learning’ (Post, 1972: 14). Learning, in other words, becomes an engineering problem, and technology is its solution.

One of the problems was that technology vendors were, first and foremost, technology specialists. Content was almost an afterthought. Materials writers needed to be familiar with the technology and, if not, they were unlikely to be employed. Writers needed to believe in the potential of the technology, so those familiar with current theory and research would clearly not fit in. The result was unsurprising. Kennedy (1967: 872) reported that ‘there are hundreds of programs now available. Many more will be published in the next few years. Watch for them. Examine them critically. They are not all of high quality’. He was being polite.

5 Motivation

As is usually the case with new technologies, there was a positive novelty effect with Programmed Instruction. And, as is always the case, the novelty effect wears off: ‘students quickly tired of, and eventually came to dislike, programmed instruction’ (McDonald et al., 2005: 89). It could not really have been otherwise: ‘human learning and intrinsic motivation are optimized when persons experience a sense of autonomy, competence, and relatedness in their activity. Self-determination theorists have also studied factors that tend to occlude healthy functioning and motivation, including, among others, controlling environments, rewards contingent on task performance, the lack of secure connection and care by teachers, and situations that do not promote curiosity and challenge’ (McDonald et al., 2005: 93). The demotivating experience of using these machines was particularly acute with younger and ‘less able’ students, as was noted at the time (Valdman, 1968: 9).

The unlearned lessons

I hope that you’ll now understand why I think the history of Programmed Instruction is so relevant to us today. In the words of my favourite Yogi-ism, it’s like déjà vu all over again. I have quoted repeatedly from the article by McDonald et al. (2005) and I would highly recommend it – available here. Hopefully, too, Audrey Watters’ forthcoming book, ‘Teaching Machines’, will appear before too long, and she will, no doubt, have much more of interest to say on this topic.

References

Chomsky, N. 1959. ‘Review of Skinner’s Verbal Behavior’. Language, 35: 26 – 58

Cuban, L. 2001. Oversold & Underused: Computers in the Classroom. (Cambridge, MA: Harvard University Press)

Goodman, R. 1967. Programmed Learning and Teaching Machines 3rd edition. (London: English Universities Press)

Herrick, M. 1966. ‘Programmed Instruction: A critical appraisal’ The American Biology Teacher, 28 (9), 695 – 698

Higgins, J. 1983. ‘Can computers teach?’ CALICO Journal, 1 (2)

Hill, L. A. 1966. Programmed English Course Student’s Book 1. (Oxford: Oxford University Press)

Hof, B. 2018. ‘From Harvard via Moscow to West Berlin: educational technology, programmed instruction and the commercialisation of learning after 1957’ History of Education, 47:4, 445-465

Howatt, A. P. R. 1969. Programmed Learning and the Language Teacher. (London: Longmans)

Kay, H., Dodd, B. & Sime, M. 1968. Teaching Machines and Programmed Instruction. (Harmondsworth: Penguin)

Kennedy, R.H. 1967. ‘Before using Programmed Instruction’ The English Journal, 56 (6), 871 – 873

Kozlowski, T. 1961. ‘Programmed Teaching’ Financial Analysts Journal, 17 / 6, 47 – 54

Kulik, C.-L., Schwalb, B. & Kulik, J. 1982. ‘Programmed Instruction in Secondary Education: A Meta-analysis of Evaluation Findings’ Journal of Educational Research, 75: 133 – 138

McDonald, J. K., Yanchar, S. C. & Osguthorpe, R.T. 2005. ‘Learning from Programmed Instruction: Examining Implications for Modern Instructional Technology’ Educational Technology Research and Development, 53 / 2, 84 – 98

Nordberg, R. B. 1965. Teaching machines – six dangers and one advantage. In J. S. Roucek (Ed.), Programmed teaching: A symposium on automation in education (pp. 1–8). (New York: Philosophical Library)

Ornstein, J. 1968. ‘Programmed Instruction and Educational Technology in the Language Field: Boon or Failure?’ The Modern Language Journal, 52 / 7, 401 – 410

Post, D. 1972. ‘Up the programmer: How to stop PI from boring learners and strangling results’. Educational Technology, 12(8), 14–1

Saettler, P. 2004. The Evolution of American Educational Technology. (Greenwich, Conn.: Information Age Publishing)

Skinner, B. F. 1986. ‘Programmed Instruction Revisited’ The Phi Delta Kappan, 68 (2), 103 – 110

Spolsky, B. 1966. ‘A psycholinguistic critique of programmed foreign language instruction’ International Review of Applied Linguistics in Language Teaching, Volume 4, Issue 1-4: 119–130

Thornbury, S. 2017. Scott Thornbury’s 30 Language Teaching Methods. (Cambridge: Cambridge University Press)

Tucker, C. 1972. ‘Programmed Dictation: An Example of the P.I. Process in the Classroom’. TESOL Quarterly, 6(1), 61-70

Valdman, A. 1968. ‘Programmed Instruction versus Guided Learning in Foreign Language Acquisition’ Die Unterrichtspraxis / Teaching German, 1 (2), 1 – 14

[1] Spolsky’s doctoral thesis for the University of Montreal was entitled ‘The psycholinguistic basis of programmed foreign language instruction’.

In my last post, I looked at shortcomings in edtech research, mostly from outside the world of ELT. I made a series of recommendations of ways in which such research could become more useful. In this post, I look at two very recent collections of ELT edtech research. The first of these is Digital Innovations and Research in Language Learning, edited by Mavridi and Saumell, and published this February by the Learning Technologies SIG of IATEFL. I’ll refer to it here as DIRLL. It’s available free to IATEFL LT SIG members, and can be bought for $10.97 as an ebook on Amazon (US). The second is the most recent edition (February 2020) of the Language Learning & Technology journal, which is open access and available here. I’ll refer to it here as LLTJ.

In both of these collections, the focus is not on ‘technology per se, but rather issues related to language learning and language teaching, and how they are affected or enhanced by the use of digital technologies’. However, they are very different kinds of publication. Nobody involved in the production of DIRLL got paid in any way (to the best of my knowledge) and, in keeping with its provenance from a teachers’ association, the book has ‘a focus on the practitioner as teacher-researcher’. Almost all of the contributing authors are university-based, but they are typically involved more in language teaching than in research. With one exception (a grant from the EU), their work was unfunded.

The triannual LLTJ is funded by two American universities and published by the University of Hawaii Press. The editors and associate editors are well-known scholars in their fields. The journal’s impact factor is high, close to the impact factor of the paywalled reCALL (published by Cambridge University Press), which is the highest-ranking journal in the field of CALL. The contributing authors are all university-based, many with a string of published articles (in prestige journals), chapters or books behind them. At least six of the studies were funded by national grant-awarding bodies.

I should begin by making clear that there was much in both collections that I found interesting. However, it was not usually the research itself that I found informative, but the literature review that preceded it. Two of the chapters in DIRLL were not really research, anyway. One was the development of a template for evaluating ICT-mediated tasks in CLIL, another was an advocacy of comics as a resource for language teaching. Both of these were new, useful and interesting to me. LLTJ included a valuable literature review of research into VR in FL learning (but no actual new research). With some exceptions in both collections, though, I felt that I would have been better off curtailing my reading after the reviews. Admittedly, there wouldn’t be much in the way of literature reviews if there were no previous research to report …

It was no surprise to see that the learners who were the subjects of this research were overwhelmingly university students. In fact, only one article (about a high-school project in Israel, reported in DIRLL) was not about university students. The areas of research focus reflected this bias towards tertiary contexts: online academic reading skills, academic writing, online reflective practices in teacher training programmes, etc.

In a couple of cases, the selection of experimental subjects seemed plain bizarre. Why, if you want to find out about the extent to which Moodle use can help EAP students become better academic readers (in DIRLL), would you investigate this with a small volunteer cohort of postgraduate students of linguistics, with previous experience of using Moodle and experience of teaching? Is a less representative sample imaginable? Why, if you want to investigate the learning potential of the English File Pronunciation app (reported in LLTJ), which is clearly most appropriate for A1 – B1 levels, would you do this with a group of C1-level undergraduates following a course in phonetics as part of an English Studies programme?

More problematic, in my view, was the small sample size in many of the research projects. The Israeli virtual high school project (DIRLL), previously referred to, started out with only 11 students, but 7 dropped out, primarily, it seems, because of institutional incompetence: ‘the project was probably doomed […] to failure from the start’, according to the author. Interesting as this was as an account of how not to set up a project of this kind, it is simply impossible to draw any conclusions from 4 students about the potential of a VLE for ‘interaction, focus and self-paced learning’. The questionnaire investigating experience of and attitudes towards VR (in DIRLL) was completed by only 7 (out of 36 possible) students and 7 (out of 70+ possible) teachers. As the author acknowledges, ‘no great claims can be made’, but then goes on to note the generally ‘positive attitudes to VR’. Perhaps those who did not volunteer had different attitudes? We will never know. The study of motivational videos in tertiary education (DIRLL) started off with 15 subjects, but 5 did not complete the necessary tasks. The research into L1 use in videoconferencing (LLTJ) started off with 10 experimental subjects, all with the same L1 and similar cultural backgrounds, but there was no data available from 4 of them (because they never switched into L1). The author claims that the paper demonstrates ‘how L1 is used by language learners in videoconferencing as a social semiotic resource to support social presence’ – something which, after reading the literature review, we already knew. But the paper also demonstrates quite clearly how L1 is not used by language learners in videoconferencing as a social semiotic resource to support social presence. In all these cases, it is the participants who did not complete or the potential participants who did not want to take part that have the greatest interest for me.

Unsurprisingly, the LLTJ articles had larger sample sizes than those in DIRLL, but in both collections the length of the research was limited. The production of one motivational video (DIRLL) does not really allow us to draw any conclusions about the development of students’ critical thinking skills. Two four-week interventions do not really seem long enough to me to discover anything about learner autonomy and Moodle (DIRLL). An experiment looking at different feedback modes needs more than two written assignments to reach any conclusions about student preferences (LLTJ).

More research might well be needed to compensate for the short-term projects with small sample sizes, but I’m not convinced that this is always the case. Lacking sufficient information about the content of the technologically-mediated tools being used, I was often unable to reach any conclusions. A gamified Twitter environment was developed in one project (DIRLL), using principles derived from contemporary literature on gamification. The authors concluded that the game design ‘failed to generate interaction among students’, but without knowing a lot more about the specific details of the activity, it is impossible to say whether the problem was the principles or the particular instantiation of those principles. Another project, looking at the development of pronunciation materials for online learning (LLTJ), came to the conclusion that online pronunciation training was helpful – better than none at all. Claims are then made about the value of the method used (called ‘innovative Cued Pronunciation Readings’), but this is not compared to any other method / materials, and only a very small selection of these materials is illustrated. Basically, the reader of this research has no choice but to take things on trust. The study looking at the use of Alexa to help listening comprehension and speaking fluency (LLTJ) cannot really tell us anything about IPAs (intelligent personal assistants) unless we know more about the particular way that Alexa is being used. Here, it seems that the students were using Alexa in an interactive storytelling exercise, but so little information is given about the exercise itself that I didn’t actually learn anything at all. The author’s own conclusion is that the results, such as they are, need to be treated with caution. Nevertheless, he adds ‘the current study illustrates that IPAs may have some value to foreign language learners’.

This brings me onto my final gripe. To be told that IPAs like Alexa may have some value to foreign language learners is to be told something that I already know. This wasn’t the only time this happened during my reading of these collections. I appreciate that research cannot always tell us something new and interesting, but a little more often would be nice. I ‘learnt’ that goal-setting plays an important role in motivation and that gamification can boost short-term motivation. I ‘learnt’ that reflective journals can take a long time for teachers to look at, and that reflective video journals are also very time-consuming. I ‘learnt’ that peer feedback can be very useful. I ‘learnt’ from two papers that intercultural difficulties may be exacerbated by online communication. I ‘learnt’ that text-to-speech software is pretty good these days. I ‘learnt’ that multimodal literacy can, most frequently, be divided up into visual and auditory forms.

With the exception of a piece about online safety issues (DIRLL), I did not once encounter anything which hinted that there may be problems in using technology. No mention of the use to which student data might be put. No mention of the costs involved (except for the observation that many students would not be happy to spend money on the English File Pronunciation app) or the cost-effectiveness of digital ‘solutions’. No consideration of the institutional (or other) pressures (or the reasons behind them) that may be applied to encourage teachers to ‘leverage’ edtech. No suggestion that a zero-tech option might actually be preferable. In both collections, the language used is invariably positive, or, at least, technology is associated with positive things: uncovering the possibilities, promoting autonomy, etc. Even if the focus of these publications is not on technology per se (although I think this claim doesn’t really stand up to close examination), it’s a little disingenuous to claim (as LLTJ does) that the interest is in how language learning and language teaching are ‘affected or enhanced by the use of digital technologies’. The reality is that the overwhelming interest is in potential enhancements, not potential negative effects.

I have deliberately not mentioned any names in referring to the articles I have discussed. I would, though, like to take my hat off to the editors of DIRLL, Sophia Mavridi and Vicky Saumell, for attempting to do something a little different. I think that Alicia Artusi and Graham Stanley’s article (DIRLL) about CPD for ‘remote’ teachers was very good and should interest the huge number of teachers working online. Chryssa Themelis and Julie-Ann Sime have kindled my interest in the potential of comics as a learning resource (DIRLL). Yu-Ju Lan’s article about VR (LLTJ) is surely the most up-to-date, go-to article on this topic. There were other pieces, or parts of pieces, that I liked, too. But, to me, it’s clear that we need ‘more research’ much less than we need (1) better and more critical research, and (2) more digestible summaries of research.

Colloquium

At the beginning of March, I’ll be going to Cambridge to take part in a Digital Learning Colloquium (for more information about the event, see here). One of the questions that will be explored is how research might contribute to the development of digital language learning. In this, the first of two posts on the subject, I’ll be taking a broad overview of the current state of play in edtech research.

I try my best to keep up to date with research. Of the main journals, there are Language Learning and Technology, which is open access; CALICO, which offers quite a lot of open access material; and reCALL, which is the most restricted in terms of access of the three. But there is something deeply frustrating about most of this research, and this is what I want to explore in these posts. More often than not, research articles end with a call for more research. And more often than not, I find myself saying ‘Please, no, not more research like this!’

First, though, I would like to turn to a more reader-friendly source of research findings. Systematic reviews are, basically, literature reviews which can save people like me from having to plough through endless papers on similar subjects, all of which contain the same (or similar) literature review in the opening sections. If only there were more of them. Others agree with me: the conclusion of one systematic review of learning and teaching with technology in higher education (Lillejord et al., 2018) was that more systematic reviews were needed.

Last year saw the publication of a systematic review of research on artificial intelligence applications in higher education (Zawacki-Richter et al., 2019) which caught my eye. The first thing that struck me about this review was that ‘out of 2656 initially identified publications for the period between 2007 and 2018, 146 articles were included for final synthesis’. In other words, only just over 5% of the research was considered worthy of inclusion.

The review did not paint a very pretty picture of the current state of AIEd research. As the second part of the title of this review (‘Where are the educators?’) makes clear, the research, taken as a whole, showed a ‘weak connection to theoretical pedagogical perspectives’. This is not entirely surprising. As Bates (2019) has noted: ‘since AI tends to be developed by computer scientists, they tend to use models of learning based on how computers or computer networks work (since of course it will be a computer that has to operate the AI). As a result, such AI applications tend to adopt a very behaviourist model of learning: present / test / feedback.’ More generally, it is clear that technology adoption (and research) is being driven by technology enthusiasts, with insufficient expertise in education. The danger is that edtech developers ‘will simply ‘discover’ new ways to teach poorly and perpetuate erroneous ideas about teaching and learning’ (Lynch, 2017).

This, then, is the first item on my checklist of things that, collectively, researchers need to do to improve the value of their work. The rest of the list is drawn from observations made mostly, but not exclusively, by the authors of systematic reviews, and mostly from reviews of general edtech research. In the next blog post, I’ll look more closely at a recent collection of ELT edtech research (Mavridi & Saumell, 2020) to see how it measures up.

1 Make sure your research is adequately informed by educational research outside the field of edtech

Unproblematised behaviourist assumptions about the nature of learning are all too frequent. References to learning styles are still fairly common. The skill most frequently investigated in the context of edtech is critical thinking (Sosa Neira et al., 2017), but it is rarely defined and almost never problematised, despite a broad literature that questions the construct.

2 Adopt a sceptical attitude from the outset

Know your history. Decades of technological innovation in education have shown precious little in the way of educational gains and, more than anything else, have taught us that we need to be sceptical from the outset. ‘Enthusiasm and praise that are directed towards “virtual education”, “school 2.0”, “e-learning” and the like’ (Selwyn, 2014: vii) are indications that the lessons of the past have not been sufficiently absorbed (Levy, 2016: 102). The phrase ‘exciting potential’, for example, should be banned from all edtech research. See, for example, a ‘state-of-the-art analysis of chatbots in education’ (Winkler & Söllner, 2018), which has nothing to conclude but ‘exciting potential’. Potential is fine (indeed, it is perhaps the only thing that research can unambiguously demonstrate – see section 3 below), but can we try to be a little more grown-up about things?

3 Know what you are measuring

Measuring learning outcomes is tricky, to say the least, but it’s understandable that researchers should try to focus on them. Unfortunately, ‘the vast array of literature involving learning technology evaluation makes it challenging to acquire an accurate sense of the different aspects of learning that are evaluated, and the possible approaches that can be used to evaluate them’ (Lai & Bower, 2019). Metrics such as student grades are hard to interpret, not least because of the large number of variables and the danger of many things being conflated in one score. Equally, or possibly even more, problematic are self-reporting measures, which are rarely robust. It seems that surveys are the most widely used instrument in qualitative research (Sosa Neira et al., 2017), but these will tell us little or nothing when used for short-term interventions (see point 5 below).

4 Ensure that the sample size is big enough to mean something

In most of the research into digital technology in education that was analysed in a literature review carried out for the Scottish government (ICF Consulting Services Ltd, 2015), there were only ‘small numbers of learners or teachers or schools’.

5 Privilege longitudinal studies over short-term projects

The Scottish government literature review (ICF Consulting Services Ltd, 2015), also noted that ‘most studies that attempt to measure any outcomes focus on short and medium term outcomes’. The fact that the use of a particular technology has some sort of impact over the short or medium term tells us very little of value. Unless there is very good reason to suspect the contrary, we should assume that it is a novelty effect that has been captured (Levy, 2016: 102).

6 Don’t forget the content

The starting point of much edtech research is the technology, but most edtech, whether it’s a flashcard app or a full-blown Moodle course, has content. Research reports rarely give details of this content, assuming perhaps that it’s just fine, and all that’s needed is a little tech to ‘present learners with the “right” content at the “right” time’ (Lynch, 2017). It’s a foolish assumption. Take a random educational app from the Play Store, a random MOOC or whatever, and the chances are you’ll find it’s crap.

7 Avoid anecdotal accounts of technology use in quasi-experiments as the basis of a ‘research article’

Control (i.e. technology-free) groups may not always be possible, but, without them, we’re unlikely to learn much from a single study. What would, however, be extremely useful would be a large, collated collection of such action-research projects, using the same or similar technology, in a variety of settings. There is a marked absence of this kind of work.

8 Enough already of higher education contexts

Researchers typically work in universities where they have captive students who they can carry out research on. But we have a problem here. The systematic review of Lundin et al (2018), for example, found that ‘studies on flipped classrooms are dominated by studies in the higher education sector’ (besides lacking anchors in learning theory or instructional design). With some urgency, primary and secondary contexts need to be investigated in more detail, not just regarding flipped learning.

9 Be critical

Very little edtech research considers the downsides of edtech adoption. Online safety, privacy and data security are hardly peripheral issues, especially with younger learners. Ignoring them won’t make them go away.

More research?

So do we need more research? For me, two things stand out. We might benefit more from, firstly, a different kind of research, and, secondly, more syntheses of the work that has already been done. Although I will probably continue to dip into the pot-pourri of articles published in the main CALL journals, I’m looking forward to a change at the CALICO journal. From September of this year, one issue a year will be thematic, with a lead article written by established researchers which will ‘first discuss in broad terms what has been accomplished in the relevant subfield of CALL. It should then outline which questions have been answered to our satisfaction and what evidence there is to support these conclusions. Finally, this article should pose a “soft” research agenda that can guide researchers interested in pursuing empirical work in this area’. This will be followed by two or three empirical pieces that ‘specifically reflect the research agenda, methodologies, and other suggestions laid out in the lead article’.

But I think I’ll still have a soft spot for some of the other journals that are coyer about their impact factor and that can be freely accessed. How else would I discover (it would be too mean to give the references here) that ‘the effective use of new technologies improves learners’ language learning skills’? Presumably, the ineffective use of new technologies has the opposite effect? Or that ‘the application of modern technology represents a significant advance in contemporary English language teaching methods’?

References

Bates, A. W. (2019). Teaching in a Digital Age Second Edition. Vancouver, B.C.: Tony Bates Associates Ltd. Retrieved from https://pressbooks.bccampus.ca/teachinginadigitalagev2/

ICF Consulting Services Ltd (2015). Literature Review on the Impact of Digital Technology on Learning and Teaching. Edinburgh: The Scottish Government. https://dera.ioe.ac.uk/24843/1/00489224.pdf

Lai, J.W.M. & Bower, M. (2019). How is the use of technology in education evaluated? A systematic review. Computers & Education, 133(1), 27-42. Elsevier Ltd. Retrieved January 14, 2020 from https://www.learntechlib.org/p/207137/

Levy, M. (2016). Researching in language learning and technology. In Farr, F. & Murray, L. (Eds.) The Routledge Handbook of Language Learning and Technology. Abingdon, Oxon.: Routledge. pp.101 – 114

Lillejord S., Børte K., Nesje K. & Ruud E. (2018). Learning and teaching with technology in higher education – a systematic review. Oslo: Knowledge Centre for Education https://www.forskningsradet.no/siteassets/publikasjoner/1254035532334.pdf

Lundin, M., Bergviken Rensfeldt, A., Hillman, T. et al. (2018). Higher education dominance and siloed knowledge: a systematic review of flipped classroom research. International Journal of Educational Technology in Higher Education 15, 20 (2018) doi:10.1186/s41239-018-0101-6

Lynch, J. (2017). How AI Will Destroy Education. Medium, November 13, 2017. https://buzzrobot.com/how-ai-will-destroy-education-20053b7b88a6

Mavridi, S. & Saumell, V. (Eds.) (2020). Digital Innovations and Research in Language Learning. Faversham, Kent: IATEFL

Selwyn, N. (2014). Distrusting Educational Technology. New York: Routledge

Sosa Neira, E. A., Salinas, J. and de Benito Crosetti, B. (2017). Emerging Technologies (ETs) in Education: A Systematic Review of the Literature Published between 2006 and 2016. International Journal of Emerging Technologies in Learning (iJET), 12 (5). https://online-journals.org/index.php/i-jet/article/view/6939

Winkler, R. & Söllner, M. (2018). Unleashing the Potential of Chatbots in Education: A State-Of-The-Art Analysis. In: Academy of Management Annual Meeting (AOM). Chicago, USA. https://www.alexandria.unisg.ch/254848/1/JML_699.pdf

Zawacki-Richter, O., Bond, M., Marin, V. I. and Gouverneur, F. (2019). Systematic review of research on artificial intelligence applications in higher education – where are the educators? International Journal of Educational Technology in Higher Education, 2019

There has been wide agreement for a long time that one of the most important ways of building the mental lexicon is by having extended exposure to language input through reading and listening. Some researchers (e.g. Krashen, 2008) have gone as far as to say that direct vocabulary instruction serves little purpose, as there is no interface between explicit and implicit knowledge. This remains, however, a minority position, with a majority of researchers agreeing with Barcroft (2015) that deliberate learning plays an important role, even if it is only ‘one step towards knowing the word’ (Nation, 2013: 46).

There is even more agreement when it comes to the differences between deliberate study and extended exposure to language input, in terms of the kinds of learning that take place. Whilst basic knowledge of lexical items (the pairings of meaning and form) may be developed through deliberate learning (e.g. flash cards), it is suggested that ‘the more “contextualized” aspects of vocabulary (e.g. collocation) cannot be easily taught explicitly and are best learned implicitly through extensive exposure to the use of words in context’ (Schmitt, 2008: 333). In other words, deliberate study may develop lexical breadth, but, for lexical depth, reading and listening are the way to go.

This raises the question of how many times a learner would need to encounter a word (in reading or listening) in order to learn its meaning. Learners may well be developing other aspects of word knowledge at the same time, of course, but a precondition for this is probably that the form-meaning relationship is sorted out. Laufer and Nation (2012: 167) report that ‘researchers seem to agree that with ten exposures, there is some chance of recognizing the meaning of a new word later on’. I’ve always found this figure interesting, but strangely unsatisfactory: I was never quite sure what, precisely, it was telling me. Now, with the recent publication of a meta-analysis looking at the effects of repetition on incidental vocabulary learning (Uchihara, Webb & Yanagisawa, 2019), things are becoming a little clearer.

First of all, the number ten is a ballpark figure, rather than a scientifically proven statistic. In their literature review, Uchihara et al. report that ‘the number of encounters necessary to learn words rang[es] from 6, 10, 12, to more than 20 times. That is to say, ‘the number of encounters necessary for learning of vocabulary to occur during meaning-focussed input remains unclear’. If you ask a question to which there is a great variety of answers, there is a strong probability that there is something wrong with the question. That, it would appear, is the case here.

Unsurprisingly, there is, at least, a correlation between repeated encounters with a word and learning, described by Uchihara et al. as statistically significant (with a medium effect size). More interesting are the findings about the variables in the studies that were looked at. These included ‘learner variables’ (age and the current size of the learner’s lexicon), ‘treatment variables’ (the amount of spacing between the encounters, listening versus reading, the presence or absence of visual aids, the degree to which learners ‘engage’ with the words they encounter) and ‘methodological variables’ in the design of the research (the kinds of words that are being looked at, word characteristics, the use of non-words, the test format and whether or not learners were told that they were going to be tested).

Here is a selection of the findings:

  • Older learners tend to benefit more from repeated encounters than younger learners.
  • Learners with a smaller vocabulary size tend to benefit more from repeated encounters with L2 words, but this correlation was not statistically significant. ‘Beyond a certain point in vocabulary growth, learners may be able to acquire L2 words in fewer encounters and need not receive as many encounters as learners with smaller vocabulary size’.
  • Learners made greater gains when the repeated exposure took place under massed conditions (e.g. on the same day), rather than under ‘spaced conditions’ (spread out over a longer period of time).
  • Repeated exposure during reading and, to a slightly lesser extent, listening resulted in more gains than reading while listening and viewing.
  • ‘Learners presented with visual information during meaning-focused tasks benefited less from repeated encounters than those who had no access to the information’. This does not mean that visual support is counter-productive: only that the positive effect of repeated encounters is not enhanced by visual support.
  • ‘A significantly larger effect was found for treatments involving no engagement compared to treatment involving engagement’. Again, this does not mean that ‘no engagement’ is better than ‘engagement’: only that the positive effect of repeated encounters is not enhanced by ‘engagement’.
  • ‘The frequency-learning correlation does not seem to increase beyond a range of around 20 encounters with a word’.
  • Experiments using non-words may exaggerate the effect of frequent encounters (i.e. in the real world, with real words, the learning potential of repeated encounters may be less than indicated by some research).
  • Forewarning learners of an upcoming comprehension test had a positive impact on gains in vocabulary learning. Again, this does not mean that teachers should systematically test their students’ comprehension of what they have read.

For me, the most interesting finding was that ‘about 11% of the variance in word learning through meaning-focused input was explained by frequency of encounters’. This means, quite simply, that a wide range of other factors, beyond repeated encounters, will determine the likelihood of learners acquiring vocabulary items from extensive reading and listening. The frequency of word encounters is just one factor among many.
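A rough gloss on that figure (this is my own back-of-the-envelope arithmetic, not a calculation taken from the paper): the proportion of variance explained is the square of the correlation coefficient, so

r = \sqrt{0.11} \approx 0.33

In other words, a correlation of about 0.33 – conventionally read as a medium-sized effect, consistent with the ‘medium effect size’ mentioned above, and a reminder that frequency of encounters is a real but modest predictor of learning.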

I’m still not sure what the takeaways from this meta-analysis should be, besides the fact that it’s all rather complex. The research does not, in any way, undermine the importance of massive exposure to meaning-focussed input in learning a language. But I will be much more circumspect in my teacher training work about making specific claims concerning the number of times that words need to be encountered before they are ‘learnt’. And I will be even more sceptical about claims for the effectiveness of certain online language learning programs which use algorithms to ensure that words reappear a certain number of times in written, audio and video texts that are presented to learners.

References

Barcroft, J. 2015. Lexical Input Processing and Vocabulary Learning. Amsterdam: John Benjamins

Krashen, S. 2008. The comprehension hypothesis extended. In T. Piske & M. Young-Scholten (Eds.), Input Matters in SLA (pp.81 – 94). Bristol, UK: Multilingual Matters

Laufer, B. & Nation, I.S.P. 2012. Vocabulary. In Gass, S.M. & Mackey, A. (Eds.) The Routledge Handbook of Second Language Acquisition (pp.163 – 176). Abingdon, Oxon.: Routledge

Nation, I.S.P. 2013. Learning Vocabulary in Another Language 2nd edition. Cambridge: Cambridge University Press

Schmitt, N. 2008. Review article: instructed second language vocabulary learning. Language Teaching Research 12 (3): 329 – 363

Uchihara, T., Webb, S. & Yanagisawa, A. 2019. The Effects of Repetition on Incidental Vocabulary Learning: A Meta-Analysis of Correlational Studies. Language Learning, 69 (3): 559 – 599. Available online: https://www.researchgate.net/publication/330774796_The_Effects_of_Repetition_on_Incidental_Vocabulary_Learning_A_Meta-Analysis_of_Correlational_Studies

Digital flashcard systems like Memrise and Quizlet remain among the most popular language learning apps. Their focus is on the deliberate learning of vocabulary, an approach described by Paul Nation (Nation, 2005) as ‘one of the least efficient ways of developing learners’ vocabulary knowledge but nonetheless […] an important part of a well-balanced vocabulary programme’. The deliberate teaching of vocabulary also features prominently in most platform-based language courses.

For both vocabulary apps and bigger courses, the lexical items need to be organised into sets for the purposes of both presentation and practice. A common way of doing this, especially at lower levels, is to group the items into semantic clusters (sets with a classifying superordinate, like body part, and a collection of example hyponyms, like arm, leg, head, chest, etc.).

The problem, as Keith Folse puts it, is that such clusters ‘are not only unhelpful, they actually hinder vocabulary retention’ (Folse, 2004: 52). Evidence for this claim may be found in Higa (1963), Tinkham (1993, 1997), Waring (1997), Erten & Tekin (2008) and Barcroft (2015), to cite just some of the more well-known studies. The results, says Folse, ‘are clear and, I think, very conclusive’. The explanation that is usually given draws on interference theory: semantic similarity may lead to confusion (e.g. when learners mix up days of the week, colour words or adjectives to describe personality).

It appears, then, to be long past time to get rid of semantic clusters in language teaching. Well … not so fast. First of all, although most of the research sides with Folse, not all of it does. Nakata and Suzuki (2019), in their survey of more recent research, found that results were more mixed. They found one study which suggested that there was no significant difference in learning outcomes between presenting words in semantic clusters and semantically unrelated groups (Ishii, 2015). And they found four studies (Hashemi & Gowdasiaei, 2005; Hoshino, 2010; Schneider, Healy, & Bourne, 1998, 2002) where semantic clusters had a positive effect on learning.

Nakata and Suzuki (2019) offer three reasons why semantic clustering might facilitate vocabulary learning: it (1) ‘reflects how vocabulary is stored in the mental lexicon, (2) introduces desirable difficulty, and (3) leads to extra attention, effort, or engagement from learners’. Finkbeiner and Nicol (2003) make a similar point: ‘although learning semantically related words appears to take longer, it is possible that words learned under these conditions are learned better for the purpose of actual language use (e.g., the retrieval of vocabulary during production and comprehension). That is, the very difficulty associated with learning the new labels may make them easier to process once they are learned’. Both pairs of researchers cited in this paragraph conclude that semantic clusters are best avoided, but their discussion of the possible benefits of this clustering is a recognition that the research (for reasons which I will come on to) cannot lead to categorical conclusions.

The problem, as so often with pedagogical research, is the gap between research conditions and real-world classrooms. Before looking at this in a little more detail, one relatively uncontentious observation can be made. Even those scholars who advise against semantic clustering (e.g. Papathanasiou, 2009) acknowledge that the situation is complicated by other factors, especially the level of proficiency of the learner and whether or not one or more of the hyponyms are known to the learner. At higher levels (when it is more likely that one or more of the hyponyms are already, even partially, known), semantic clustering is not a problem. I would add that, on the whole, at higher levels, the deliberate learning of vocabulary is even less efficient than at lower levels and should be an increasingly small part of a well-balanced vocabulary programme.

So, why is there a problem drawing practical conclusions from the research? In order to have any scientific validity at all, researchers need to control a large number of variables. They need, for example, to be sure that learners do not already know any of the items that are being presented. The only practical way of doing this is to present sets of invented words, and this is what most of the research does (Sarioğlu, 2018). These artificial words solve one problem, but create others, the most significant of which is item difficulty. Many factors impact on item difficulty, and these include word frequency (obviously a problem with invented words), word length, pronounceability and the familiarity and length of the corresponding item in L1. None of the studies which support the abandonment of semantic clusters have controlled all of these variables (Nakata and Suzuki, 2019). Indeed, it would be practically impossible to do so. Learning pseudo-words is a very different proposition to learning real words, which a learner may subsequently encounter or want to use.

Take, for example, the days of the week. It’s quite common for learners to muddle up Tuesday and Thursday. The reason for this is not just semantic similarity (Tuesday and Monday are less frequently confused). They are also very similar in terms of both spelling and pronunciation. They are ‘synforms’ (see Laufer, 1988), which, like semantic clusters, can hinder learning of new items. But, now imagine a French-speaking learner of Spanish studying the days of the week. It is much less likely that martes and jueves will be muddled, because of their similarity to the French words mardi and jeudi. There would appear to be no good reason not to teach the complete set of days of the week to a learner like this. All other things being equal, it is probably a good idea to avoid semantic clusters, but all other things are very rarely equal.

Again, in an attempt to control for variables, researchers typically present the target items in isolation (in bilingual pairings). But, again, the real world does not normally conform to this condition. Leo Selivan (2014) suggests that semantic clusters (e.g. colours) are taught as part of collocations. He gives the examples of red dress, green grass and black coffee, and points out that the alliterative patterns can serve as mnemonic devices which will facilitate learning. The suggestion is, I think, a very good one, but, more generally, it’s worth noting that the presentation of lexical items in both digital flashcards and platform courses is rarely context-free. Contexts will inevitably impact on learning and may well obviate the risks of semantic clustering.

Finally, this kind of research typically gives participants very restricted time to memorize the target words (Sarioğlu, 2018) and they are tested in very controlled recall tasks. In the case of language platform courses, practice of target items is usually spread out over a much longer period of time, with a variety of exposure opportunities (in controlled practice tasks, exposure in texts, personalisation tasks, revision exercises, etc.) both within and across learning units. In this light, it is not unreasonable to argue that laboratory-type research offers only limited insights into what should happen in the real world of language learning and teaching. The choice of learning items, the way they are presented and practised, and the variety of activities in the well-balanced vocabulary programme are probably all more significant than the question of whether items are organised into semantic clusters.
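
To make the contrast concrete, here is a minimal sketch (in Python) of the kind of expanding-interval review schedule a platform course might use to spread practice of an item over weeks. The function name and the interval values are my own illustrative assumptions, not taken from any particular product.

```python
# Sketch of an expanding-interval review schedule: one item, practised
# repeatedly over a month, rather than in the single massed session
# typical of laboratory studies. Interval values are illustrative only.
from datetime import date, timedelta

def review_dates(first_seen, intervals_days=(1, 3, 7, 14, 30)):
    """Return the dates on which an item would resurface for practice."""
    return [first_seen + timedelta(days=d) for d in intervals_days]

for d in review_dates(date(2019, 1, 1)):
    print(d)  # 2019-01-02, 2019-01-04, 2019-01-08, 2019-01-15, 2019-01-31
```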

Although semantic clusters are quite common in language learning materials, much more common are thematic clusters, i.e. groups of words which are topically related, but which include a variety of parts of speech (see below). Researchers, it seems, have no problem with this way of organising lexical sets. By way of conclusion, here’s an extract from a recent book:

‘Introducing new words together that are similar in meaning (synonyms), such as scared and frightened, or forms (synforms), like contain and maintain, can be confusing, and students are less likely to remember them. This problem is known as ‘interference’. One way to avoid this is to choose words that are around the same theme, but which include a mix of different parts of speech. For example, if you want to focus on vocabulary to talk about feelings, instead of picking lots of adjectives (happy, sad, angry, scared, frightened, nervous, etc.) include some verbs (feel, enjoy, complain) and some nouns (fun, feelings, nerves). This also encourages students to use a variety of structures with the vocabulary.’ (Hughes et al., 2019: 25)


References

Barcroft, J. 2015. Lexical Input Processing and Vocabulary Learning. Amsterdam: John Benjamins

Erten, I.H., & Tekin, M. 2008. Effects on vocabulary acquisition of presenting new words in semantic sets versus semantically-unrelated sets. System, 36 (3), 407-422

Finkbeiner, M. & Nicol, J. 2003. Semantic category effects in second language word learning. Applied Psycholinguistics 24 (2), 369-383

Folse, K. S. 2004. Vocabulary Myths. Ann Arbor: University of Michigan Press

Hashemi, M.R., & Gowdasiaei, F. 2005. An attribute-treatment interaction study: Lexical-set versus semantically-unrelated vocabulary instruction. RELC Journal, 36 (3), 341-361

Higa, M. 1963. Interference effects of intralist word relationships in verbal learning. Journal of Verbal Learning and Verbal Behavior, 2, 170-175

Hoshino, Y. 2010. The categorical facilitation effects on L2 vocabulary learning in a classroom setting. RELC Journal, 41, 301–312

Hughes, S. H., Mauchline, F. & Moore, J. 2019. ETpedia Vocabulary. Shoreham-by-Sea: Pavilion Publishing and Media

Ishii, T. 2015. Semantic connection or visual connection: Investigating the true source of confusion. Language Teaching Research, 19, 712–722

Laufer, B. 1988. The concept of ‘synforms’ (similar lexical forms) in vocabulary acquisition. Language and Education, 2 (2): 113-132

Nakata, T. & Suzuki, Y. 2019. Effects of massing and spacing on the learning of semantically related and unrelated words. Studies in Second Language Acquisition 41 (2), 287-311

Nation, P. 2005. Teaching Vocabulary. Asian EFL Journal. http://www.asian-efl-journal.com/sept_05_pn.pdf

Papathanasiou, E. 2009. An investigation of two ways of presenting vocabulary. ELT Journal 63 (4), 313 – 322

Sarioğlu, M. 2018. A Matter of Controversy: Teaching New L2 Words in Semantic Sets or Unrelated Sets. Journal of Higher Education and Science Vol 8 / 1: 172 – 183

Schneider, V. I., Healy, A. F., & Bourne, L. E. 1998. Contextual interference effects in foreign language vocabulary acquisition and retention. In Healy, A. F. & Bourne, L. E. (Eds.), Foreign language learning: Psycholinguistic studies on training and retention (pp. 77–90). Mahwah, NJ: Erlbaum

Schneider, V. I., Healy, A. F., & Bourne, L. E. 2002. What is learned under difficult conditions is hard to forget: Contextual interference effects in foreign vocabulary acquisition, retention, and transfer. Journal of Memory and Language, 46, 419–440

Selivan, L. 2014. Horizontal alternatives to vertical lists. Blog post: http://leoxicon.blogspot.com/2014/03/horizontal-alternatives-to-vertical.html

Tinkham, T. 1993. The effect of semantic clustering on the learning of second language vocabulary. System 21 (3), 371-380.

Tinkham, T. 1997. The effects of semantic and thematic clustering on the learning of a second language vocabulary. Second Language Research, 13 (2),138-163

Waring, R. 1997. The negative effects of learning words in semantic sets: a replication. System, 25 (2), 261 – 274

It’s international ELT conference season again, with TESOL Chicago having just come to a close and IATEFL Brighton soon to start. I decided to take a look at how the subject of personalized learning will be covered at the second of these. Taking the conference programme, I trawled through it looking for references to my topic.

My first question was: how do conference presenters feel about personalised learning? One way of finding out is by looking at the adjectives that are found in close proximity to the term. This is what you get:

[Word cloud of adjectives appearing near ‘personalised’ in the conference programme]
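
For anyone curious about how such a tally might be produced, here is a minimal sketch, assuming the programme has been saved as plain text (the filename and the five-token window are my assumptions):

```python
# Sketch: count adjectives within a few tokens of 'personalised' in a
# plain-text conference programme. Requires NLTK, with the 'punkt' and
# 'averaged_perceptron_tagger' resources downloaded first.
from collections import Counter
import nltk

TARGETS = {"personalised", "personalized", "personalisation", "personalization"}
WINDOW = 5  # tokens either side of the target word

text = open("iatefl_programme.txt").read().lower()  # hypothetical filename
tagged = nltk.pos_tag(nltk.word_tokenize(text))

adjectives = Counter()
for i, (word, _) in enumerate(tagged):
    if word in TARGETS:
        for w, tag in tagged[max(0, i - WINDOW): i + WINDOW + 1]:
            if tag.startswith("JJ"):  # JJ, JJR, JJS = adjectives
                adjectives[w] += 1

print(adjectives.most_common(20))
```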

The overall enthusiasm is even clearer when the contexts are looked at more closely. Here are a few examples:

  • inspiring assessment, personalising learning
  • personalised training can contribute to professionalism and […] spark ideas for teacher trainers
  • a personalised educational experience that ultimately improves learner outcomes
  • personalised teacher development: is it achievable?

Particularly striking is the complete absence of anything that suggests that personalized learning might not be a ‘good thing’. The assumption throughout is that personalized learning is desirable and the only question that is asked is how it can be achieved. Unfortunately (and however much we might like to believe that it is a ‘good thing’), there is a serious lack of research evidence which demonstrates that this is the case. I have written about this here, here and here. For a useful summary of the current situation, see Benjamin Riley’s article, where he writes that ‘it seems wise to ask what evidence we presently have that personalized learning works. Answer: Virtually none. One remarkable aspect of the personalized-learning craze is how quickly the concept has spread despite the almost total absence of rigorous research in support of it, at least thus far.’

Given that personalized learning can mean so many things and given the fact that people do not have space to define their terms in their conference abstracts, it is interesting to see what other aspects of language learning / teaching it is associated with. The four main areas are as follows (in alphabetical order):

  • assessment (especially formative assessment) / learning outcomes
  • continuous professional development
  • learner autonomy
  • technology / blended learning

The IATEFL TD SIG would appear to be one of the main promoters of personalized learning (or personalized teacher development) with a one-day pre-conference event entitled ‘Personalised teacher development – is it achievable?’ and a ‘showcase’ forum entitled ‘Forum on Effective & personalised: the holy grail of CPD’. Amusingly (but coincidentally, I suppose), the forum takes place in the ‘Cambridge room’ (see below).

I can understand why the SIG organisers may have chosen this focus. It’s something of a hot topic, and getting hotter. For example:

  • Cambridge University Press has identified personalization as one of the ‘six key principles of effective teacher development programmes’ and is offering tailor-made teacher development programmes for institutions.
  • NILE and Macmillan recently launched a partnership whose brief is to ‘curate personalised professional development with an appropriate mix of ‘formal’ and ‘informal’ learning delivered online, blended and face to face’.
  • Pearson has developed its Teacher Development Interactive (TDI) – ‘an interactive online course to train and certify teachers to deliver effective instruction in English as a foreign language […] You can complete each module on your own time, at your own pace from anywhere you have access to the internet.’

These examples do not, of course, provide any explanation for why personalized learning is a hot topic, but the answer to that is simple. Money. Billions and billions, and if you want a breakdown, have a look at the appendix of Monica Bulger’s report, ‘Personalized Learning: The Conversations We’re Not Having’. Starting with Microsoft and the Gates Foundation plus Facebook and the Chan / Zuckerberg Foundation, dozens of venture philanthropists have thrown unimaginable sums of money at the idea of personalized learning. They have backed up their cash with powerful lobbying and their message has got through. Consent has been successfully manufactured.

One of the most significant players in this field is Pearson, who have long been one of the most visible promoters of personalized learning (see the screen capture). At IATEFL, two of the ten conference abstracts which include the word ‘personalized’ are directly sponsored by Pearson. In total, Pearson have directly sponsored, or are very closely associated with, ten presentations. Many of these do not refer to personalized learning in the abstract, but would presumably do so in the presentations themselves. There is, for example, a report on a professional development programme in Brazil using TDI (see above). There are two talks about the GSE, described as a tool ‘used to provide a personalised view of students’ language’. The marketing intent is clear: Pearson is to be associated with personalized learning (which is, in turn, associated with a variety of tech tools) – they even have a VP of data analytics, data science and personalized learning.

But the direct funding of the message is probably less important these days than the reinforcement, by those with no vested interests, of the set of beliefs, the ideology, which underpin the selling of personalized learning products. According to this script, personalized learning can promote creativity, empowerment, inclusiveness and preparedness for the real world of work. It sets itself up in opposition to lockstep and factory models of education, and sets learners free as consumers in a world of educational choice. It is a message with which it is hard for many of us to disagree.

It is also a marvellous example of propaganda, of the way that consent is manufactured. (If you haven’t read it yet, it’s probably time to read Herman and Chomsky’s ‘Manufacturing Consent: The Political Economy of the Mass Media’.) An excellent account of the way that consent for personalized learning has been manufactured can be found at Benjamin Doxtdator’s blog.

So, a hot topic it is, and its multiple inclusion in the conference programme will no doubt be welcomed by those who are selling ‘personalized’ products. It must be very satisfying to see how normalised the term has become, how it’s no longer necessary to spend too much on promoting the idea, how it’s so associated with technology, (formative) assessment, autonomy and teacher development … since others are doing it for you.

In my last post, I looked at the way that, in the absence of a clear, shared understanding of what ‘personalization’ means, it has come to be used as a slogan for the promoters of edtech. In this post, I want to look a little more closely at the constellation of meanings that are associated with the term, suggest a way of evaluating just how ‘personalized’ an instructional method might be, and look at recent research into ‘personalized learning’.

In English language teaching, ‘personalization’ often carries a rather different meaning from the one it has in broader educational discourse. Jeremy Harmer (Harmer, 2012: 276) defines it as ‘when students use language to talk about themselves and things which interest them’. Most commonly, this is in the context of ‘freer’ language practice of grammar or vocabulary of the following kind: ‘Complete the sentences so that they are true for you’. It is this meaning that Scott Thornbury refers to first in his entry for ‘Personalization’ in his ‘An A-Z of ELT’ (Thornbury, 2006: 160). He goes on, however, to expand his definition of the term to include humanistic approaches such as Community Language Learning / Counseling learning (CLL), where learners decide the content of a lesson and have agency. I imagine that no one would disagree that an approach such as this is more ‘personalized’ than a ‘complete-the-sentences-so-they-are-true-for-you’ exercise to practise the present perfect.

Outside of ELT, ‘personalization’ has been used to refer to everything ‘from customized interfaces to adaptive tutors, from student-centered classrooms to learning management systems’ (Bulger, 2016: 3). The graphic below (from Bulger, 2016: 3) illustrates just how wide the definitional reach of ‘personalization’ is.

[Bulger’s pie chart of the definitional reach of ‘personalization’]

As with Thornbury’s entry in his ‘An A-Z of ELT’, it seems uncontentious to say that some things are more ‘personalized’ than others.

Given the current and historical problems with defining the term, it’s not surprising that a number of people have attempted to develop frameworks that can help us to get to grips with the thorny question of ‘personalization’. In the context of language teaching / learning, Renée Disick (Disick, 1975: 58) offered the following categorisation:

[Disick’s (1975) categorisation]

In a similar vein, a few years later, Howard Altman (Altman, 1980) suggested that teaching activities can differ in four main ways: the time allocated for learning, the curricular goal, the mode of learning and instructional expectations (personalized goal setting). He then offered eight permutations of these variables (see below, Altman, 1980: 9), although many more are imaginable.

[Altman’s (1980: 9) chart of eight permutations]

Altman and Disick were writing, of course, long before our current technology-oriented view of ‘personalization’ became commonplace. The recent classification of technologically-enabled personalized learning systems by Monica Bulger (see below, Bulger, 2016: 6) reflects how times have changed.

[Bulger’s (2016: 6) classification of five types of personalized learning system]

Bulger’s classification focusses on the technology more than the learning, but her continuum is very much in keeping with the views of Disick and Altman. Some approaches are more personalized than others.

The extent to which choices are offered determines the degree of individualization in a particular program. (Disick, 1975: 5)

It is important to remember that learner-centered language teaching is not a point, but rather a continuum. (Altman, 1980: 6)

Larry Cuban has also recently begun to use a continuum as a way of understanding the practices of ‘personalization’ that he observes as part of his research. The overall goals of schooling at both ends of the continuum are not dissimilar: helping ‘children grow into adults who are creative thinkers, help their communities, enter jobs and succeed in careers, and become thoughtful, mindful adults’.

[Cuban’s continuum of curricula]

As Cuban and others before him (e.g. Januszewski, 2001: 57) make clear, the two perspectives are not completely independent of each other. Nevertheless, we can see that one end of this continuum is likely to be materials-centred with the other learner-centred (Dickinson, 1987: 57). At one end, teachers (or their LMS replacements) are more likely to be content-providers and enact traditional roles. At the other, teachers’ roles are ‘more like those of coaches or facilitators’ (Cavanagh, 2014). In short, one end of the continuum is personalization for the learner; the other end is personalization by the learner.

It makes little sense, therefore, to talk about personalized learning as being a ‘good’ or a ‘bad’ thing. We might perceive one form of personalized learning to be more personalized than another, but that does not mean it is any ‘better’ or more effective. The only possible approach is to consider and evaluate the different elements of personalization in an attempt to establish, first, from a theoretical point of view whether they are likely to lead to learning gains, and, second, from an evidence-based perspective whether any learning gains are measurable. In recent posts on this blog, I have been attempting to do that with elements such as learning styles, self-pacing and goal-setting.

Unfortunately, but perhaps not surprisingly, none of the elements that we associate with ‘personalization’ has been shown to lead to clear, demonstrable learning gains. A report commissioned by the Gates Foundation (Pane et al, 2015) to find evidence of the efficacy of personalized learning did not, despite its subtitle (‘Promising Evidence on Personalized Learning’), manage to come up with any firm and unequivocal evidence (see Riley, 2017). ‘No single element of personalized learning was able to discriminate between the schools with the largest achievement effects and the others in the sample; however, we did identify groups of elements that, when present together, distinguished the success cases from others’, wrote the authors (Pane et al., 2015: 28). Undeterred, another report (Pane et al., 2017) was commissioned: in this, the authors were unable to do better than a very hedged conclusion: ‘There is suggestive evidence that greater implementation of PL practices may be related to more positive effects on achievement; however, this finding requires confirmation through further research’ (my emphases). Don’t hold your breath!

In commissioning the reports, the Gates Foundation were probably asking the wrong question. The conceptual elasticity of the term ‘personalization’ makes its operationalization in any empirical study highly problematic. Meaningful comparison of empirical findings would, as David Hartley notes, be hard because ‘it is unlikely that any conceptual consistency would emerge across studies’ (Hartley, 2008: 378). The question of what works is unlikely to provide a useful (in the sense of actionable) response.

In a new white paper out this week, “A blueprint for breakthroughs,” Michael Horn and I argue that simply asking what works stops short of the real question at the heart of a truly personalized system: what works, for which students, in what circumstances? Without this level of specificity and understanding of contextual factors, we’ll be stuck understanding only what works on average despite aspirations to reach each individual student (not to mention mounting evidence that “average” itself is a flawed construct). Moreover, we’ll fail to unearth theories of why certain interventions work in certain circumstances. And without that theoretical underpinning, scaling personalized learning approaches with predictable quality will remain challenging. Otherwise, as more schools embrace personalized learning, at best each school will have to go at it alone and figure out by trial and error what works for each student. Worse still, if we don’t support better research, “personalized” schools could end up looking radically different but yielding similar results to our traditional system. In other words, we risk rushing ahead with promising structural changes inherent to personalized learning—reorganizing space, integrating technology tools, freeing up seat-time—without arming educators with reliable and specific information about how to personalize to their particular students or what to do, for which students, in what circumstances. (Freeland Fisher, 2016)

References

Altman, H.B. 1980. ‘Foreign language teaching: focus on the learner’ in Altman, H.B. & James, C.V. (eds.) 1980. Foreign Language Teaching: Meeting Individual Needs. Oxford: Pergamon Press, pp.1 – 16

Bulger, M. 2016. Personalized Learning: The Conversations We’re Not Having. New York: Data and Society Research Institute. https://www.datasociety.net/pubs/ecl/PersonalizedLearning_primer_2016.pdf

Cavanagh, S. 2014. ‘What Is ‘Personalized Learning’? Educators Seek Clarity’ Education Week http://www.edweek.org/ew/articles/2014/10/22/09pl-overview.h34.html

Dickinson, L. 1987. Self-instruction in Language Learning. Cambridge: Cambridge University Press

Disick, R.S. 1975. Individualizing Language Instruction: Strategies and Methods. New York: Harcourt Brace Jovanovich

Freeland Fisher, J. 2016. ‘The inconvenient truth about personalized learning’ [Blog post] retrieved from http://www.christenseninstitute.org/blog/the-inconvenient-truth-about-personalized-learning/ (May 4, 2016)

Harmer, J. 2012. Essential Teacher Knowledge. Harlow: Pearson Education

Hartley, D. 2008. ‘Education, Markets and the Pedagogy of Personalisation’ British Journal of Educational Studies 56 / 4: 365 – 381

Januszewski, A. 2001. Educational Technology: The Development of a Concept. Englewood, Colorado: Libraries Unlimited

Pane, J. F., Steiner, E. D., Baird, M. D. & Hamilton, L. S. 2015. Continued Progress: Promising Evidence on Personalized Learning. Santa Monica: RAND Corporation retrieved from http://www.rand.org/pubs/research_reports/RR1365.html

Pane, J.F., Steiner, E. D., Baird, M. D., Hamilton, L. S. & Pane, J.D. 2017. Informing Progress: Insights on Personalized Learning Implementation and Effects. Santa Monica: RAND Corporation retrieved from https://www.rand.org/pubs/research_reports/RR2042.html

Riley, B. 2017. ‘Personalization vs. How People Learn’ Educational Leadership 74 / 6: 68-73

Thornbury, S. 2006. An A-Z of ELT. Oxford: Macmillan Education


Introduction

In the last post, I looked at issues concerning self-pacing in personalized language learning programmes. This time, I turn to personalized goal-setting. Most definitions of personalized learning, such as that offered by Next Generation Learning Challenges http://nextgenlearning.org/ (a non-profit supported by Educause, the Gates Foundation, the Broad Foundation and the Hewlett Foundation, among others), argue that ‘the default perspective [should be] the student’s—not the curriculum, or the teacher, and that schools need to adjust to accommodate not only students’ academic strengths and weaknesses, but also their interests, and what motivates them to succeed.’ It’s a perspective shared by the United States National Education Technology Plan 2017 https://tech.ed.gov/netp/, which promotes the idea that learning objectives should vary based on learner needs, and should often be self-initiated. It’s also shared by the massively funded Facebook initiative that is developing software that ‘puts students in charge of their lesson plans’, as the New York Times https://www.nytimes.com/2016/08/10/technology/facebook-helps-develop-software-that-puts-students-in-charge-of-their-lesson-plans.html?_r=0 put it. How, precisely, personalized goal-setting can be squared with standardized, high-stakes testing is less than clear. Might the two simply be incompatible?

In language learning, the idea that learners should have some say in what they are learning is not new, going back, at least, to the humanistic turn in the 1970s. Wilga Rivers advocated ‘giving the students opportunity to choose what they want to learn’ (Rivers, 1971: 165). A few years later, Renée Disick argued that the extent to which a learning programme can be called personalized (although she used the term ‘individualized’) depends on the extent to which learners have a say in the choice of learning objectives and the content of learning (Disick, 1975). Coming more up to date, Penny Ur advocated giving learners ‘a measure of freedom to choose how and what to learn’ (Ur, 1996: 233).

The benefits of personalized goal-setting

Personalized goal-setting is closely related to learner autonomy and learner agency. Indeed, it is hard to imagine any meaningful sense of learner autonomy or agency without some control of learning objectives. Without this control, it will be harder for learners to develop an L2 self. This matters because ‘ultimate attainment in second-language learning relies on one’s agency … [it] is crucial at the point where the individuals must not just start memorizing a dozen new words and expressions but have to decide on whether to initiate a long, painful, inexhaustive, and, for some, never-ending process of self-translation’ (Pavlenko & Lantolf, 2000: 169-170). Put bluntly, if learners ‘have some responsibility for their own learning, they are more likely to be engaged than if they are just doing what the teacher tells them to’ (Harmer, 2012: 90). A degree of autonomy should lead to increased motivation which, in turn, should lead to increased achievement (Dickinson, 1987: 32; Cordova & Lepper, 1996: 726).

Strong evidence for these claims is not easy to provide, not least since autonomy and agency cannot be measured. However, ‘negative evidence clearly shows that a lack of agency can stifle learning by denying learners control over aspects of the language-learning process’ (Vandergriff, 2016: 91). Most language teachers (especially in compulsory education) have witnessed the negative effects that a lack of agency can generate in some students. Irrespective of the extent to which students are allowed to influence learning objectives, the desirability of agency / autonomy appears to be ‘deeply embedded in the professional consciousness of the ELT community’ (Borg and Al-Busaidi, 2012; Benson, 2016: 341). Personalized goal-setting may not, for a host of reasons, be possible in a particular learning / teaching context, but in principle it would seem to be a ‘good thing’.

Goal-setting and technology

The idea that learners might learn more and better if allowed to set their own learning objectives is hardly new, dating back at least one hundred years to the establishment of Montessori’s first Casa dei Bambini. In language teaching, the interest in personalized learning that developed in the 1970s (see my previous post) led to numerous classroom experiments in personalized goal-setting. These did not result in lasting changes, not least because the workload of teachers became ‘overwhelming’ (Disick, 1975: 128).

Closely related was the establishment of ‘self-access centres’. It was clear to anyone, like myself, who was involved in the setting-up and maintenance of a self-access centre, that they cost a lot, in terms of both money and work (Ur, 2012: 236). But there were also nagging questions about how effective they were (Morrison, 2005). Even more problematic was a bigger question: did they actually promote the learner autonomy that was their main goal?

Post-2000, online technology rendered self-access centres redundant: who needs the ‘walled garden’ of a self-access centre when ‘learners are able to connect with multiple resources and communities via the World Wide Web in entirely individual ways’ (Reinders, 2012)? The cost problem of self-access centres was solved by the web. Readily available now were ‘myriad digital devices, software, and learning platforms offering educators a once-unimaginable array of options for tailoring lessons to students’ needs’ (Cavanagh, 2014). Not only that … online technology promised to grant agency, to ‘empower language learners to take charge of their own learning’ and ‘to provide opportunities for learners to develop their L2 voice’ (Vandergriff, 2016: 32). The dream of personalized learning has become inseparable from the affordances of educational technologies.

It is, however, striking just how few online modes of language learning offer any degree of personalized goal-setting. Take a look at some of the big providers – Voxy, Busuu, Duolingo, Rosetta Stone or Babbel, for example – and you will find only the most token nods to personalized learning objectives. Course providers appear to be more interested in claiming their products are personalized (‘You decide what you want to learn and when!’) than in developing a sufficient amount of content to permit personalized goal-setting. We are left with the ELT equivalent of personalized cans of Coke: a marketing tool.

[Image: ‘personalized’ cans of Coke]

The problems with personalized goal-setting

Would language learning products, such as those mentioned above, be measurably any better if they did facilitate the personalization of learning objectives in a significant way? Would they be able to promote learner autonomy and agency in a way that self-access centres apparently failed to achieve? It’s time to consider the scare quotes that I put around ‘good thing’.

Researchers have identified a number of potential problems with goal-setting. I have already mentioned the problem of reconciling personalized goals and standardized testing. In most learning contexts, educational authorities (usually the state) regulate the curriculum and determine assessment practices. It is difficult to see, as Campbell et al. (Campbell et al., 2007: 138) point out, how such regulation ‘could allow individual interpretations of the goals and values of education’. Most assessment systems ‘aim at convergent outcomes and homogeneity’ (Benson, 2016: 345) and this is especially true of online platforms, irrespective of their claims to ‘personalization’. In weak (typically internal) assessment systems, the potential for autonomy is strongest, but these are rare.

In all contexts, it is likely that personalized goal-setting will only lead to learning gains when a number of conditions are met. The goals that are chosen need to be specific, measurable, challenging and non-conflicting (Ordóñez et al. 2009: 2-3). They need to be realistic: if they are not, it is unlikely that self-efficacy (a person’s belief in their own capability to achieve or perform to a certain level) will be promoted (Koda-Dallow & Hobbs, 2005), and, without self-efficacy, improved performance is also unlikely (Bandura, 1997). The problem is that many learners lack self-efficacy and are poor self-regulators. These things are teachable / learnable, but require time and support. Many learners need help in ‘becoming aware of themselves and their own understandings’ (McMahon & Oliver, 2001: 1304). If they do not get it, the potential advantages of personalized goal-setting will be negated. As learners become better self-regulators, they will want and need to redefine their learning goals: goal-setting should be an iterative process (Hussey & Smith, 2003: 358). Again, support will be needed. In online learning, such support is not common.

A further problem that has been identified is that goal-setting can discourage a focus on non-goal areas (Ordóñez et al. 2009: 2) and can lead to ‘a focus on reaching the goal rather than on acquiring the skills required to reach it’ (Locke & Latham, 2006: 266). We know that much language learning is messy and incidental. Students do not only learn the particular thing that they are studying at the time (the belief that they do was described by Dewey as ‘the greatest of all pedagogical fallacies’). Goal-setting, even when personalized, runs the risk of promoting tunnel-vision.

The incorporation of personalized goal-setting in online language learning programmes is, in so many ways, a far from straightforward matter. Simply tacking it onto existing programmes is unlikely to result in anything positive: it is not an ‘over-the-counter treatment for motivation’ (Ordóñez et al., 2009: 2). Course developers will need to look at ‘the complex interplay between goal-setting and organizational contexts’ (Ordóñez et al., 2009: 16). Motivating students is not simply ‘a matter of the teacher deploying the correct strategies […] it is an intensely interactive process’ (Lamb, 2017). More generally, developers need to move away from a positivist and linear view of learning as a technical process, in which teaching interventions (such as the incorporation of goal-setting, the deployment of gamification elements or the use of a particular algorithm) lead to predictable student outcomes. As Larry Cuban reminds us, ‘no persuasive body of evidence exists yet to confirm that belief’ (Cuban, 1986: 88). The most recent research into personalized learning has failed to identify any single element of personalization that can be clearly correlated with improved outcomes (Pane et al., 2015: 28).

In previous posts, I considered learning styles and self-pacing, two aspects of personalized learning that are highly problematic. Personalized goal-setting is no less so.

References

Bandura, A. 1997. Self-efficacy: The exercise of control. New York: W.H. Freeman and Company

Benson, P. 2016. ‘Learner Autonomy’ in Hall, G. (ed.) The Routledge Handbook of English Language Teaching. Abingdon: Routledge. pp.339 – 352

Borg, S. & Al-Busaidi, S. 2012. ‘Teachers’ beliefs and practices regarding learner autonomy’ ELT Journal 66 / 3: 283 – 292

Cavanagh, S. 2014. ‘What Is ‘Personalized Learning’? Educators Seek Clarity’ Education Week http://www.edweek.org/ew/articles/2014/10/22/09pl-overview.h34.html

Cordova, D. I. & Lepper, M. R. 1996. ‘Intrinsic Motivation and the Process of Learning: Beneficial Effects of Contextualization, Personalization, and Choice’ Journal of Educational Psychology 88 / 4: 715 -739

Cuban, L. 1986. Teachers and Machines. New York: Teachers College Press

Dickinson, L. 1987. Self-instruction in Language Learning. Cambridge: Cambridge University Press

Disick, R.S. 1975. Individualizing Language Instruction: Strategies and Methods. New York: Harcourt Brace Jovanovich

Harmer, J. 2012. Essential Teacher Knowledge. Harlow: Pearson Education

Hussey, T. & Smith, P. 2003. ‘The Uses of Learning Outcomes’ Teaching in Higher Education 8 / 3: 357 – 368

Lamb, M. 2017 (in press) ‘The motivational dimension of language teaching’ Language Teaching 50 / 3

Locke, E. A. & Latham, G. P. 2006. ‘New Directions in Goal-Setting Theory’ Current Directions in Psychological Science 15 / 5: 265 – 268

McMahon, M. & Oliver, R. (2001). Promoting self-regulated learning in an on-line environment. In C. Montgomerie & J. Viteli (Eds.), Proceedings of World Conference on Educational Multimedia, Hypermedia and Telecommunications 2001 (pp. 1299-1305). Chesapeake, VA: AACE

Morrison, B. 2005. ‘Evaluating learning gain in a self-access learning centre’ Language Teaching Research 9 / 3: 267 – 293

Ordóñez, L. D., Schweitzer, M. E., Galinsky, A. D. & Bazerman, M. H. 2009. Goals Gone Wild: The Systematic Side Effects of Over-Prescribing Goal Setting. Harvard Business School Working Paper 09-083

Pane, J. F., Steiner, E. D., Baird, M. D. & Hamilton, L. S. 2015. Continued Progress: Promising Evidence on Personalized Learning. Santa Monica: RAND Corporation

Pavlenko, A. & Lantolf, J. P. 2000. ‘Second language learning as participation and the (re)construction of selves’ In J.P. Lantolf (ed.), Sociocultural Theory and Second Language Learning. Oxford: Oxford University Press, pp. 155 – 177

Reinders, H. 2012. ‘The end of self-access? From walled garden to public park’ ELT World Online 4: 1 – 5

Rivers, W. M. 1971. ‘Techniques for Developing Proficiency in the Spoken Language in an Individualized Foreign Language program’ in Altman, H.B. & Politzer, R.L. (eds.) 1971. Individualizing Foreign Language Instruction: Proceedings of the Stanford Conference, May 6 – 8, 1971. Washington, D.C.: Office of Education, U.S. Department of Health, Education, and Welfare. pp. 165 – 169

Ur, P. 1996. A Course in Language Teaching: Practice and Theory. Cambridge: Cambridge University Press

Ur, P. 2012. A Course in English Language Teaching. Cambridge: Cambridge University Press

Vandergriff, I. 2016. Second-language Discourse in the Digital World. Amsterdam: John Benjamins

All aboard …

The point of adaptive learning is that it can personalize learning. When we talk about personalization, mention of learning styles is rarely far away. Jose Ferreira of Knewton (now the company’s ex-CEO) made his case for learning styles in a blog post that generated a superb and, for Ferreira, embarrassing discussion in the comments, which were subsequently deleted by Knewton. FluentU (which I reviewed here) clearly approves of learning styles, or at least sees them as a useful way to market their product, even though it is unclear how their product caters to different styles. Busuu claims to be ‘personalised to fit your style of learning’. Voxy, Inc. (according to their company overview) ‘operates a language learning platform that creates custom curricula for English language learners based on their interests, routines, goals, and learning styles’. Bliu Bliu (which I reviewed here) recommended, in a recent blog post, that learners should ‘find out their language learner type and use it to their advantage’, and suggests, as a starter, trying out ‘Bliu Bliu, where pretty much any learner can find what suits them best’. Memrise ‘uses clever science to adapt to your personal learning style’. Duolingo’s learning tree ‘effectively rearranges itself to suit individual learning styles’, according to founder Luis von Ahn. This list could go on and on.

Learning styles are thriving in ELT coursebooks, too. Here are just three recent examples for learners of various ages. Today! by Todd, D. & Thompson, T. (Pearson, 2014) ‘shapes learning around individual students with graded difficulty practice for mixed-ability classes’ and ‘makes testing mixed-ability classes easier with tests that you can personalise to students’ abilities’.

Move it! by Barraclough, C., Beddall, F., Stannett, K., Wildman, J. (Pearson, 2015) offers ‘personalized pathways [which] allow students to optimize their learning outcomes’ and a ‘complete assessment package to monitor students’ learning process’.

Open Mind Elementary (A2) 2nd edition by Rogers, M., Taylor-Knowles, J. & Taylor-Knowles, S. (Macmillan, 2014) has a whole page devoted to learning styles in the ‘Life Skills’ strand of the course. The scope and sequence describes it in the following terms: ‘Thinking about what you like to do to find your learning style and improve how you learn English’. Here’s the relevant section:

[Extract from the Open Mind ‘Life Skills’ page on learning styles]

Methodology books offer more tips for ways that teachers can cater to different learning styles. Recent examples include Patrycja Kamińska’s Learning Styles and Second Language Education (Cambridge Scholars, 2014), Tammy Gregersen & Peter D. MacIntyre’s Capitalizing on Language Learners’ Individuality (Multilingual Matters, 2014) and Marjorie Rosenberg’s Spotlight on Learning Styles (Delta Publishing, 2013). Teacher magazines show a continuing interest in the topic. Humanising Language Teaching and English Teaching Professional are particularly keen. The British Council offers courses about learning styles and its Teaching English website has many articles and lesson plans on the subject (my favourite explains that your students will be more successful if you match your teaching style to their learning styles), as do the websites of all the major publishers. Most ELT conferences will also offer something on the topic.

How about language teaching qualifications and frameworks? The Cambridge English Teaching Framework contains a component entitled ‘Understanding learners’, and the first part of this component specifies a knowledge of concepts such as learning styles (e.g., visual, auditory, kinaesthetic), multiple intelligences, learning strategies, special needs, and affect. Unsurprisingly, the Cambridge CELTA qualification requires successful candidates to demonstrate an awareness of the different learning styles and preferences that adults bring to learning English. The Cambridge DELTA requires successful candidates to accommodate learners according to their different abilities, motivations, and learning styles. The Eaquals Framework for Language Teacher Training and Development requires teachers at Development Phase 2 to have the skill of determining and anticipating learners’ language learning needs and learning styles at a range of levels, selecting appropriate ways of finding out about these.

Outside of ELT, learning styles also continue to thrive. Phil Newton (2015. ‘The learning styles myth is thriving in higher education’ Frontiers in Psychology 6: 1908) carried out a survey of educational publications (higher education) between 2013 and 2015, and found that an overwhelming majority (89%) implicitly or directly endorse the use of learning styles. He also cites research showing that 93% of UK schoolteachers believe that ‘individuals learn better when they receive information in their preferred Learning Style’, with similar figures in other countries. 72% of Higher Education institutions in the US teach ‘learning style theory’ as part of faculty development for online teachers. Advocates of learning styles in English language teaching are not alone.

But, unfortunately, …

In case you weren’t aware of it, there is a rather big problem with learning styles. There is a huge amount of research which suggests that learning styles (and, in particular, attempts to tailor teaching to learning styles) need to be approached with extreme scepticism. Much of this research was published long before the blog posts, advertising copy, books and teaching frameworks (listed above) were written. What does this research have to tell us?

The first problem concerns learning styles taxonomies. There are three issues here: many people do not fit one particular style, the information used to assign people to styles is often inadequate, and there are so many different styles that it becomes cumbersome to link particular learners to particular styles (Kirschner, P. A. & van Merriënboer, J. J. G. 2013. ‘Do Learners Really Know Best? Urban Legends in Education’ Educational Psychologist, 48 / 3, 169-183). To summarise, given the lack of clarity as to which learning styles actually exist, it may be ‘neither viable nor justified’ for learning styles to form the basis of lesson planning (Hall, G. 2011. Exploring English Language Teaching. Abingdon, Oxon.: Routledge p.140). More detailed information about these issues can be found in the following sources:

Coffield, F., Moseley, D., Hall, E. & Ecclestone, K. 2004. Learning styles and pedagogy in post-16 learning: a systematic and critical review. London: Learning and Skills Research Centre

Dembo, M. H. & Howard, K. 2007. Advice about the use of learning styles: a major myth in education. Journal of College Reading & Learning 37 / 2: 101 – 109

Kirschner, P. A. 2017. Stop propagating the learning styles myth. Computers & Education 106: 166 – 171

Pashler, H., McDaniel, M., Rohrer, D. & Bjork, E. 2008. Learning styles: concepts and evidence. Psychological Science in the Public Interest 9 / 3: 105 – 119

Riener, C. & Willingham, D. 2010. The myth of learning styles. Change – The Magazine of Higher Learning

The second problem concerns what Pashler et al refer to as the ‘meshing hypothesis’: the idea that instructional interventions can be effectively tailored to match particular learning styles. Pashler et al concluded that the available taxonomies of student types do not offer any valid help in deciding what kind of instruction to offer each individual. Even in 2008, their finding was not new. Back in 1978, a review of 15 studies that looked at attempts to match learning styles to approaches to first language reading instruction concluded that modality preference ‘has not been found to interact significantly with the method of teaching’ (Tarver, Sara & M. M. Dawson. 1978. ‘Modality preference and the teaching of reading’ Journal of Learning Disabilities 11: 17 – 29). The following year, two other researchers concluded that the assumption that one can improve instruction by matching materials to children’s modality strengths ‘appears to lack even minimal empirical support’ (Arter, J.A. & Joseph A. Jenkins 1979. ‘Differential diagnosis-prescriptive teaching: A critical appraisal’ Review of Educational Research 49: 517 – 555). Fast forward 20 years to 1999, and Stahl (‘Different strokes for different folks?’ American Educator Fall 1999 pp. 1 – 5) was writing that ‘the reason researchers roll their eyes at learning styles is the utter failure to find that assessing children’s learning styles and matching to instructional methods has any effect on learning. The area with the most research has been the global and analytic styles […]. Over the past 30 years, the names of these styles have changed – from ‘visual’ to ‘global’ and from ‘auditory’ to ‘analytic’ – but the research results have not changed.’ For a recent evaluation of the practical applications of learning styles, have a look at Rogowsky, B. A., Calhoun, B. M. & Tallal, P. 2015. ‘Matching Learning Style to Instructional Method: Effects on Comprehension’ Journal of Educational Psychology 107 / 1: 64 – 78. Even David Kolb, the Big Daddy of learning styles, now concedes that there is no strong evidence that teachers should tailor their instruction to their students’ particular learning styles (reported in Glenn, D. 2009. ‘Matching teaching style to learning style may not help students’ The Chronicle of Higher Education). To summarise, the meshing hypothesis is entirely unsupported in the scientific literature. It is a myth (Howard-Jones, P. A. 2014. ‘Neuroscience and education: myths and messages’ Nature Reviews Neuroscience).

This brings me back to the blog posts, advertising blurb, coursebooks, methodology books and so on that continue to tout learning styles. The writers of these texts typically do not acknowledge that there’s a problem of any kind. Are they unaware of the research? Or are they aware of it, but choosing not to acknowledge it? I suspect that the former is often the case with the app developers. If the latter is the case, what might the reasons be? In the case of teacher training specifications, the reason is probably practical: changing a syllabus is an expensive and time-consuming operation. But in the case of some of the ELT writers, I suspect that they hang on in there because they so much want to believe.

As Newton (2015: 2) notes, intuitively, there is much that is attractive about the concept of Learning Styles: people are obviously different, and Learning Styles appear to offer educators a way to accommodate individual learner differences. Pashler et al (2009: 107) add that another related factor that may play a role in the popularity of the learning-styles approach has to do with responsibility. If a person or a person’s child is not succeeding or excelling in school, it may be more comfortable for the person to think that the educational system, not the person or the child himself or herself, is responsible. That is, rather than attribute one’s lack of success to any lack of ability or effort on one’s part, it may be more appealing to think that the fault lies with instruction being inadequately tailored to one’s learning style. In that respect, there may be linkages to the self-esteem movement that became so influential, internationally, starting in the 1970s. There is no reason to doubt that many of those who espouse learning styles have good intentions.

No one, I think, seriously questions whether learners might not benefit from a wide variety of input styles and learning tasks. People are obviously different. MacIntyre et al (MacIntyre, P.D., Gregersen, T. & Clément, R. 2016. ‘Individual Differences’ in Hall, G. (ed.) The Routledge Handbook of English Language Teaching. Abingdon, Oxon: Routledge, pp.310 – 323, p.319) suggest that teachers might consider instructional methods that allow them to capitalise on both variety and choice and also help learners find ways to do this for themselves inside and outside the classroom. Jill Hadfield (2006. ‘Teacher Education and Trainee Learning Style’ RELC Journal 37 / 3: 369 – 388) recommends that we design our learning tasks across the range of learning styles so that our trainees can move across the spectrum, experiencing both the comfort of matching and the challenge produced by mismatching. But this is not the same thing as claiming that identification of a particular learning style can lead to instructional decisions. The value of books like Rosenberg’s Spotlight on Learning Styles lies in the wide range of practical suggestions for varying teaching styles and tasks. They contain ideas of educational value: it is unfortunate that the theoretical background is so thin.

In ELT, things are, perhaps, beginning to change. Russ Mayne’s blog post Learning styles: facts and fictions in 2012 got a few heads nodding, and he followed it up two years later with a presentation at IATEFL looking at various aspects of ELT, including learning styles, which have little or no scientific credibility. Carol Lethaby and Patricia Harries gave a talk at IATEFL 2016, Changing the way we approach learning styles in teacher education, which was also much discussed and shared online. They also had an article in ELT Journal called Learning styles and teacher training: are we perpetuating neuromyths? (2016 ELTJ 70 / 1: 16 – 27). Even Pearson, in a blog post of November 2016 (Mythbusters: A review of research on learning styles), acknowledges that there is ‘a shocking lack of evidence to support the core learning styles claim that customizing instruction based on students’ preferred learning styles produces better learning than effective universal instruction’, concluding that ‘it is impossible to recommend learning styles as an effective strategy for improving learning outcomes’.


Every now and then, someone recommends me to take a look at a flashcard app. It’s often interesting to see what developers have done with design, gamification and UX features, but the content is almost invariably awful. Most recently, I was encouraged to look at Word Pash. The screenshots below are from their promotional video.

[Screenshots from the Word Pash promotional video]

The content problems are immediately apparent: an apparently random selection of target items, an apparently random mix of high and low frequency items, unidiomatic language examples, along with definitions and distractors that are less frequent than the target item. I don’t know if these are representative of the rest of the content. The examples seem to come from ‘Stage 1 Level 3’, whatever that means. (My confidence in the product was also damaged by the fact that the Word Pash website includes one testimonial from a certain ‘Janet Reed – Proud Mom’, whose son ‘was able to increase his score and qualify for academic scholarships at major universities’ after using the app. The picture accompanying ‘Janet Reed’ is a free stock image from Pexels and ‘Janet Reed’ is presumably fictional.)

According to the website, ‘WordPash is a free-to-play mobile app game for everyone in the global audience whether you are a 3rd grader or PhD, wordbuff or a student studying for their SATs, foreign student or international business person, you will become addicted to this fast paced word game’. On the basis of the promotional video, the app couldn’t be less appropriate for English language learners. It seems unlikely that it would help anyone improve their ACT or SAT test scores. The suggestion that the vocabulary development needs of 9-year-olds and doctoral students are comparable is pure chutzpah.

The deliberate study of more or less random words may be entertaining, but it’s unlikely to lead to very much in practical terms. For general purposes, the deliberate learning of the highest frequency words, up to about a frequency ranking of #7500, makes sense, because there’s a reasonably high probability that you’ll come across these items again before you’ve forgotten them. Beyond that frequency level, the value of acquiring an additional 1000 words tails off very quickly. Adding 1000 words from frequency ranking #8000 to #9000 is likely to result in an increase in lexical understanding of general purpose texts of about 0.2%. When we get to frequency ranks #19,000 to #20,000, the gain in understanding decreases to 0.01%[1]. In other words, deliberate vocabulary learning needs to be targeted. The data is relatively recent, but the principle goes back to at least the middle of the last century, when Michael West argued that a principled approach to vocabulary development should be driven by a comparison of the usefulness of a word and its ‘learning cost’[2]. Three hundred years before that, Comenius had articulated something very similar: ‘in compiling vocabularies, my […] concern was to select the words in most frequent use’[3].
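
The arithmetic behind these diminishing returns is easy to reproduce. Here is a minimal sketch, assuming you have a frequency list as (word, corpus count) pairs sorted by descending frequency; the function and variable names are my own:

```python
# Sketch: the extra text coverage contributed by each successive band of
# 1000 words in a frequency list. 'freq_list' is a hypothetical list of
# (word, corpus_count) tuples, sorted from most to least frequent.
def marginal_coverage(freq_list, band_size=1000):
    total_tokens = sum(count for _, count in freq_list)
    gains = []
    for start in range(0, len(freq_list), band_size):
        band = freq_list[start:start + band_size]
        gains.append(100 * sum(count for _, count in band) / total_tokens)
    return gains  # percentage of running text covered by each band

# With a real corpus-derived list, the first band is worth tens of
# percentage points, while bands beyond rank #8000 each add only a
# fraction of a percent -- the tail-off described above.
```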

I’ll return to ‘general purposes’ later in this post, but, for now, we should remember that very few language learners actually study a language for general purposes. Globally, the vast majority of English language learners study English in an academic (school) context and their immediate needs are usually exam-specific. For them, general purpose frequency lists are unlikely to be adequate. If they are studying with a coursebook and are going to be tested on the lexical content of that book, they will need to use the wordlist that matches the book. Increasingly, publishers make such lists available and content producers for vocabulary apps like Quizlet and Memrise often use them. Many examinations, both national and international, also have accompanying wordlists. Examples of such lists produced by examination boards include the Cambridge English young learners’ exams (Starters, Movers and Flyers) and Cambridge English Preliminary. Other exams do not have official word lists, but reasonably reliable lists have been produced by third parties. Examples include Cambridge First, IELTS and SAT. There are, in addition, well-researched wordlists for academic English, including the Academic Word List (AWL) and the Academic Vocabulary List (AVL). All of these make sensible starting points for deliberate vocabulary learning.

When we turn to other, out-of-school learners, the number of reasons for studying English is huge. Different learners have different lexical needs, and working with a general purpose frequency list may be, at least in part, a waste of time. EFL and ESL learners are likely to have very different needs, as will EFL and ESP learners, as will older and younger learners, learners in different parts of the world, learners who will find themselves in English-speaking countries and those who won’t, etc., etc. For some of these demographics, specialised corpora (from which frequency-based wordlists can be drawn) exist. For most learners, though, the ideal list simply does not exist: either it will have to be created (requiring a significant amount of time and expertise[4]) or an available best-fit will have to suffice. Paul Nation, in his recent ‘Making and Using Word Lists for Language Learning and Testing’ (John Benjamins, 2016), includes a useful chapter on critiquing wordlists. For anyone interested in better understanding the issues surrounding the development and use of wordlists, three good articles are freely available online. These are:

Lessard-Clouston, M. 2012 / 2013. ‘Word Lists for Vocabulary Learning and Teaching’ The CATESOL Journal 24.1: 287- 304

Lessard-Clouston, M. 2016. ‘Word lists and vocabulary teaching: options and suggestions’ Cornerstone ESL Conference 2016

Sorell, C. J. 2013. A study of issues and techniques for creating core vocabulary lists for English as an International Language. Doctoral thesis.

But, back to ‘general purposes’… Frequency lists are the obvious starting point for preparing a wordlist for deliberate learning, but they are very problematic. Frequency rankings depend on the corpus on which they are based and, since corpora differ, rankings vary from one list to another. Even drawing on just one corpus, rankings can be a little strange. In the British National Corpus, for example, ‘May’ (the month) is about twice as frequent as ‘August’[5], but we would be foolish to infer from this that the learning of ‘May’ should be prioritised over the learning of ‘August’. An even more striking example from the same corpus is the fact that ‘he’ is about twice as frequent as ‘she’[6]: should, therefore, ‘he’ be learnt before ‘she’?
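
The point is easy to verify with any corpus you have to hand. Here is a minimal sketch using the Brown corpus bundled with NLTK (the BNC figures above would require the BNC itself); note that lowercasing merges ‘May’ the month with the far more frequent modal verb, which is itself an illustration of the problem:

```python
# Sketch: compute frequency counts and ranks from the Brown corpus.
# Requires nltk.download('brown') first.
from collections import Counter
from nltk.corpus import brown

counts = Counter(w.lower() for w in brown.words() if w.isalpha())
ranks = {word: rank for rank, (word, _) in enumerate(counts.most_common(), start=1)}

for word in ("he", "she", "may", "august"):
    print(word, counts[word], ranks.get(word))
# A different corpus would give different counts and different ranks --
# the rankings are a property of the corpus, not of the language.
```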

List compilers have to make a number of judgement calls in their work. There is not space here to consider these in detail, but two particularly tricky questions concerning the way that words are chosen may be mentioned: Is a verb like ‘list’, with two different and unrelated meanings, one word or two? Should inflected forms be considered as separate words? The judgements are not usually informed by considerations of learners’ needs. Learners will probably best approach vocabulary development by building their store of word senses: attempting to learn all the meanings and related forms of any given word is unlikely to be either useful or successful.
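
A toy illustration of the second question: whether ‘lists’, ‘listed’ and ‘listing’ count as separate words or as one word depends entirely on whether forms are collapsed to a lemma. The sketch below uses NLTK’s WordNet lemmatizer purely as an example; note that it cannot separate the two unrelated senses of ‘list’, which is the first question all over again.

```python
# Sketch: counting inflected forms separately vs. collapsing to a lemma.
# Requires nltk.download('wordnet') first.
from collections import Counter
from nltk.stem import WordNetLemmatizer

tokens = ["lists", "listed", "listing", "list", "lists"]
lemmatize = WordNetLemmatizer().lemmatize

as_forms = Counter(tokens)                                   # four distinct 'words'
as_lemmas = Counter(lemmatize(t, pos="v") for t in tokens)   # one 'word'

print(as_forms)   # Counter({'lists': 2, 'listed': 1, 'listing': 1, 'list': 1})
print(as_lemmas)  # Counter({'list': 5})
# Neither count distinguishes 'list' (enumerate) from 'list' (lean to one
# side): sense disambiguation is a separate judgement call again.
```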

Frequency lists, in other words, are not statements of scientific ‘fact’: they are interpretative documents. They were compiled for descriptive purposes, not as ways of structuring vocabulary learning, and it cannot be assumed that they will be appropriate for a purpose for which they were not designed.

A further major problem concerns the corpus on which the frequency list is based. Large databases, such as the British National Corpus or the Corpus of Contemporary American English, are collections of language used by native speakers in certain parts of the world, usually of a restricted social class. As such, they are of relatively little value to learners who will be using English in contexts that are not covered by the corpus. A context where English is a lingua franca is one such example.

A different kind of corpus is the Cambridge Learner Corpus (CLC), a collection of exam scripts produced by candidates in Cambridge exams. This has led to the development of the English Vocabulary Profile (EVP), where word senses are tagged as corresponding to particular levels on the Common European Framework scale. At first glance, this looks like a good alternative to frequency lists based on native-speaker corpora. But closer consideration reveals many problems. The design of examination tasks inevitably results in the production of language of a very different kind from that produced in other contexts. Many high-frequency words simply do not appear in the CLC because it is unlikely that a candidate would use them in an exam. Other items are very frequent in this corpus simply because they are likely to be produced in examination tasks. Unsurprisingly, frequency rankings in the EVP do not correlate very well with frequency rankings from other corpora. The EVP, then, like other frequency lists, can only serve, at best, as a rough guide for drawing up lists of target vocabulary items in general purpose apps or coursebooks[7].
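
The poor correlation between rankings can be quantified with a standard rank-correlation statistic. Here is a minimal sketch using Spearman’s rho; the five word/rank pairs are invented (real comparisons would run to thousands of items), chosen to mimic the way exam scripts over-produce ‘discuss’-type vocabulary.

```python
def spearman_rho(rank_a, rank_b):
    """Spearman rank correlation over words ranked in both lists (assumes no ties)."""
    shared = [w for w in rank_a if w in rank_b]
    n = len(shared)
    d_sq = sum((rank_a[w] - rank_b[w]) ** 2 for w in shared)
    return 1 - (6 * d_sq) / (n * (n ** 2 - 1))

# Invented rankings, for illustration only.
general = {"make": 1, "take": 2, "argue": 3, "discuss": 4, "whereas": 5}
exam = {"discuss": 1, "argue": 2, "whereas": 3, "make": 4, "take": 5}

print(spearman_rho(general, exam))  # -0.6: the two rankings disagree strongly
```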

There is no easy solution to the problems involved in devising suitable lexical content for the ‘global audience’. Tagging words to levels (i.e. grouping them into frequency bands) will always be problematic, unless very specific user groups are identified. Writers of general purpose English language teaching materials, like myself, are justifiably irritated by some publishers’ insistence on allocating numerical level values to words. The policy, taken to extremes (as it increasingly is), has little to recommend it in linguistic terms. But it is still a whole lot better than the aleatory content of apps like Word Pash.
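
The mechanics of banding, for what it is worth, are trivial, which may be why the practice is so tempting. A minimal sketch (the band size and all the ranks are invented):

```python
def band(rank, band_size=1000):
    """Map a frequency rank to its band label, e.g. rank 1480 -> '1001-2000'."""
    low = ((rank - 1) // band_size) * band_size + 1
    return f"{low}-{low + band_size - 1}"

# Invented ranks, for illustration only.
for word, rank in [("make", 42), ("argue", 1480), ("aleatory", 19632)]:
    print(word, band(rank))
```

The trouble, as argued above, is not the slicing but the ranks themselves: they are corpus-dependent interpretations, and the bands inherit all of their problems.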

[1] See Nation, I.S.P. 2013. Learning Vocabulary in Another Language 2nd edition. (Cambridge: Cambridge University Press) p. 21 for statistical tables. See also Nation, P. & R. Waring 1997. ‘Vocabulary size, text coverage and word lists’ in Schmitt & McCarthy (eds.) 1997. Vocabulary: Description, Acquisition and Pedagogy. (Cambridge: Cambridge University Press) pp. 6-19

[2] See Kelly, L.G. 1969. 25 Centuries of Language Teaching. (Rowley, Mass.: Newbury House) p. 206 for a discussion of West’s ideas.

[3] Kelly, L.G. 1969. 25 Centuries of Language Teaching. (Rowley, Mass.: Newbury House) p. 184

[4] See Timmis, I. 2015. Corpus Linguistics for ELT (Abingdon: Routledge) for practical advice on doing this.

[5] Nation, I.S.P. 2016. Making and Using Word Lists for Language Learning and Testing. (Amsterdam: John Benjamins) p. 58

[6] Taylor, J.R. 2012. The Mental Corpus. (Oxford: Oxford University Press) p. 151

[7] For a detailed critique of the limitations of using the CLC as a guide to syllabus design and textbook development, see Swan, M. 2014. ‘A Review of English Profile Studies’ ELTJ 68/1: 89-96