Posts Tagged ‘learning styles’

There’s a video on YouTube from Oxford University Press in which the presenter, the author of a coursebook for primary English language learners (‘Oxford Discover’), describes an activity where students have a short time to write some sentences about a picture they have been shown. Then, working in pairs, they read aloud their partner’s sentences and award themselves points, with more points being given for sentences that others have not come up with. For lower level, young learners, it’s not a bad activity. It provides opportunities for varied skills practice of a limited kind and, if it works, may be quite fun and motivating. However, what I found interesting about the video is that it is entitled ‘How to teach critical thinking skills: speaking’ and the book that is being promoted claims to develop ‘21st Century Skills in critical thinking, communication, collaboration and creativity’. The presenter says that the activity achieves its critical thinking goals by promoting ‘both noticing and giving opinions, […] two very important critical thinking skills.’

Noticing (or observation) and giving opinions are often included in lists of critical thinking skills, but, for this to be the case, they must presumably be exercised in a critical way – some sort of reasoning must be involved. This is not the case here, so only the most uncritical understanding of critical thinking could consider this activity to have any connection to critical thinking. Whatever other benefits might accrue from it, it seems highly unlikely that the students’ ability to notice or express opinions will be developed.

My scepticism is not, it seems, shared by users of the book. Oxford University Press carried out a scientific-sounding ‘impact study’: this consisted of a questionnaire (n = 198) in which ‘97% of teachers reported that using Oxford Discover helps their students to improve in the full range of 21st century skills, with critical thinking and communication scoring the highest’.

Enthusiasm for critical thinking activities is extremely widespread. In 2018, TALIS, the OECD Teaching and Learning International Survey (with more than 4000 respondents) found that ‘over 80% of teachers feel confident in their ability to vary instructional strategies in their classroom and help students think critically’ and almost 60% ‘frequently or always’ ‘give students tasks that require students to think critically.’ As with the Oxford ‘impact study’, it’s worth remembering that these are self-reported figures.

This enthusiasm is shared in the world of English language teaching, reflected in at least 17 presentations at the 2021 IATEFL conference that discussed practical ideas for promoting critical thinking. These ranged from the more familiar (e.g. textual analysis in EAP) to the more original – developing critical thinking through the use of reading reaction journals, multicultural literature, fables, creative arts performances, self-expression, escape rooms, and dice games.

In most cases, it would appear that the precise nature of the critical thinking that was ostensibly being developed was left fairly vague. This vagueness is not surprising. Practically the only thing that writers about critical thinking in education can agree on is that there is no general agreement about what, precisely, critical thinking is. Lai (2011) offers an accessible summary of a range of possible meanings, but points out that, in educational contexts, its meaning is often rather vague and encompasses other concepts (such as higher order thinking skills) which also lack clarity. Paul Dummett and John Hughes (2019: 4) plump for ‘a mindset that involves thinking reflectively, rationally and reasonably’ – a vague definition which leaves unanswered two key questions: to what extent is it a skill set or a disposition? Are these skills generic or domain specific?

When ‘critical thinking’ is left undefined, it is impossible to evaluate the claims that a particular classroom activity will contribute to the development of critical thinking. However, irrespective of the definition, there are good reasons to be sceptical about the ability of educational activities to have a positive impact on the generic critical thinking skills of learners in English language classes. There can only be critical-thinking value in the activity described at the beginning of this post if learners somehow transfer the skills they practise in the activity to other domains of their lives. This is, of course, possible, but, if we approach the question with a critical disposition, we have to conclude that it is unlikely. We may continue to believe the opposite, but this would be an uncritical act of faith.

The research evidence on the efficacy of teaching generic critical thinking is not terribly encouraging (Tricot & Sweller, 2014). There’s no shortage of anecdotal support for classroom critical thinking, but ‘education researchers have spent over a century searching for, and failing to find evidence of, transfer to unrelated domains by the use of generic-cognitive skills’ (Sweller, 2022). One recent meta-analysis (Huber & Kuncel, 2016) found insufficient evidence to justify the explicit teaching of generic critical thinking skills at college level. In an earlier blog post (https://adaptivelearninginelt.wordpress.com/2020/10/16/fake-news-and-critical-thinking-in-elt/), looking at the impact of critical thinking activities on our susceptibility to fake news, I noted that research was unable to find much evidence of the value of media literacy training. When considerable time is devoted to generic critical thinking training and little or no impact is found, how likely is it that the kind of occasional, brief one-off activity in the ELT classroom will have the desired impact? Without going as far as to say that critical thinking activities in the ELT classroom have no critical-thinking value, it is uncontentious to say that we still do not know how to define critical thinking, how to assess evidence of it, or how to effectively practise and execute it (Gay & Clark, 2021).

It is ironic that there is so little critical thinking about critical thinking in the world of English language teaching, but it should not be particularly surprising. Teachers are no more immune to fads than anyone else (Fuertes-Prieto et al., 2020). Despite a complete lack of robust evidence to support them, learning styles and multiple intelligences influenced language teaching for many years. Mindfulness, growth mindsets and grit are more contemporary influences and, like critical thinking, will go the way of learning styles when the commercial and institutional forces that currently promote them find the lack of empirical supporting evidence problematic.

Critical thinking is an educational aim shared by educational authorities around the world, promoted by intergovernmental bodies like the OECD, the World Bank, the EU, and the United Nations. In Japan, for example, the ‘Ministry of Education (MEXT) puts critical thinking (CT) at the forefront of its ‘global jinzai’ (human capital for a global society) directive’ (Gay & Clark, 2021). It is taught as an academic discipline in some universities in Russia (Ivlev et al., 2021), and plans are underway to introduce it into schools in Saudi Arabia (https://www.arabnews.com/node/1764601/saudi-arabia). I suspect that it doesn’t mean quite the same thing in all these places.

Critical thinking is also an educational aim that most teachers can share. Few like to think of themselves as Gradgrinds, bashing facts into their pupils’ heads: turning children into critical thinkers is what education is supposed to be all about. It holds an intuitive appeal, and even if we (like the roughly 20% of teachers in the TALIS survey) lack confidence in our ability to promote critical thinking in the classroom, few of us doubt the importance of trying to do so. Like learning styles, multiple intelligences and growth mindsets, it seems possible that, with critical thinking, we are pushing the wrong thing, but for the right reasons. But just how much evidence, or lack of evidence, do we need before we start getting critical about critical thinking?

References

Dummett, P. & Hughes, J. (2019) Critical Thinking in ELT. Boston: National Geographic Learning

Fuertes-Prieto, M.Á., Andrés-Sánchez, S., Corrochano-Fernández, D. et al. (2020) Pre-service Teachers’ False Beliefs in Superstitions and Pseudosciences in Relation to Science and Technology. Science & Education 29, 1235–1254 (2020). https://doi.org/10.1007/s11191-020-00140-8

Gay, S. & Clark, G. (2021) Revisiting Critical Thinking Constructs and What This Means for ELT. Critical Thinking and Language Learning, 8 (1): pp. 110 – 147

Huber, C.R. & Kuncel, N.R. (2016) Does College Teach Critical Thinking? A Meta-Analysis. Review of Educational Research, 86 (2): 431 – 468. doi:10.3102/0034654315605917

Ivlev, V. Y., Pozdnyakov, M. V., Inozemtsez, V. A. & Chernyak, A. Z. (2021) Critical Thinking in the Structure of Educational Programs in Russian Universities. Advances in Social Science, Education and Humanities Research, volume 555: pp. 121 – 128

Lai, E.R. (2011) Critical Thinking: A Literature Review. Pearson. http://images.pearsonassessments.com/images/tmrs/CriticalThinkingReviewFINAL.pdf

Sweller, J. (2022) Some Critical Thoughts about Critical and Creative Thinking. Sydney: The Centre for Independent Studies Analysis Paper 32

Tricot, A. & Sweller, J. (2014) Domain-specific knowledge and why teaching generic skills does not work. Educational Psychology Review, 26: 265 – 283

Five years ago, in 2016, there was an interesting debate in the pages of the journal ‘Psychological Review’. It began with an article by Jeffrey Bowers (2016a), a psychologist at the University of Bristol, who argued that neuroscience (as opposed to psychology) has little, or nothing, to offer us, and is unlikely ever to be able to do so, in terms of improving classroom instruction. He wasn’t the first to question the relevance of neuroscience to education (see, for example, Willingham, 2009), but this was a full-frontal attack. Bowers argued that ‘neuroscience rarely offers insights into instruction above and beyond psychology’ and that neuroscientific evidence that the brain changes in response to instruction is irrelevant. His article was followed by two counter-arguments (Gabrieli, 2016; Howard-Jones et al., 2016), which took him to task for too narrowly limiting the scope of education to classroom instruction (neglecting, for example, educational policy), for ignoring the predictive power of neuroimaging on neurodevelopmental differences (and, therefore, its potential value in individualising curricula), and for failing to take account of the progress that neuroscience, in collaboration with educators, has already made. Bowers’ main argument, that educational neuroscience had little to tell us about teaching, was not really addressed in the counter-arguments, and Bowers (2016b) came back with a counter-counter-rebuttal.

The brain responding to seductive details

In some ways, the debate, like so many of its kind, suffered from the different priorities of the participants. For Gabrieli and Howard-Jones et al., Bowers had certainly overstated his case, but they weren’t entirely in disagreement with him. Paul Howard-Jones has been quoted by André Hedlund as saying that ‘all neuroscience can do is confirm what we’ve been doing all along and give us new insights into a couple of new things’. One of Howard-Jones’ co-authors, Usha Goswami, director of the Centre for Neuroscience in Education at the University of Cambridge, has said that ‘there is a gulf between current science and classroom applications’ (Goswami, 2006).

For teachers, though, it is the classroom applications that are of interest. Claims for the relevance of neuroscience to ELT have been made by many. We [in ESL / EFL] need it, writes Curtis Kelly (2017). Insights from neuroscience can, apparently, make textbooks more ‘brain friendly’ (Helgesen & Kelly, 2015). Herbert Puchta’s books are advertised by Cambridge University Press as ‘based on the latest insights into how the brain works fresh from the field of neuroscience’. You can watch a British Council talk by Rachael Roberts, entitled ‘Using your brain: what neuroscience can teach us about learning’. And, in the year following the Bowers debate, Carol Lethaby and Patricia Harries gave a presentation at IATEFL Glasgow (Lethaby & Harries, 2018) entitled ‘Research and teaching: What has neuroscience ever done for us?’ – a title that I have lifted for this blog post. Lethaby and Harries provide a useful short summary of the relevance of neuroscience to ELT, and I will begin my discussion with that. They expand on this in their recent book (Lethaby, Mayne & Harries, 2021), a book I highly recommend.

So what, precisely, does neuroscience have to tell English language teachers? Lethaby and Harries put forward three main arguments. Firstly, neuroscience can help us to bust neuromyths (the examples they give are right / left brain dominance and learning styles). Secondly, it can provide information that informs teaching (the examples given are the importance of prior knowledge and the value of translation). Finally, it can validate existing best practice (the example given is the importance of prior knowledge). Let’s take a closer look.

I have always enjoyed a bit of neuromyth busting and I wrote about ‘Left brains and right brains in English language teaching’ a long time ago. It is certainly true that neuroscience has helped to dispel this myth: it is ‘simplistic at best and utter hogwash at worst’ (Dörnyei, 2009: 49). However, we did not need neuroscience to rubbish the practical teaching applications of this myth, which found their most common expression in Neuro-Linguistic Programming (NLP) and Brain Gym. Neuroscience simply banged the final nail into the coffin of these trends. The same is true for learning styles and the meshing hypothesis. It’s also worth noting that, despite the neuroscientific evidence, such myths are taking a long time to die … a point I will return to at the end of this post.

Lethaby and Harries’s second and third arguments are essentially the same, unless, in their second point they are arguing that neuroscience can provide new information. I struggle, however, to see anything that is new. Neuroimaging apparently shows that the medial prefrontal cortex is activated when prior knowledge is accessed, but we have long known (since Vygotsky, at least!) that effective learning builds on previous knowledge. Similarly, the amygdala (known to be associated with the processing of emotions) may play an important role in learning, but we don’t need to know about the amygdala to understand the role of affect in learning. Lastly, the neuroscientific finding that different languages are not ‘stored’ in separate parts of the brain (Spivey & Hirsch, 2003) is useful to substantiate arguments that translation can have a positive role to play in learning another language, but convincing arguments predate findings such as these by many, many years. This would all seem to back up Howard-Jones’s observation about confirming what we’ve been doing and giving us new insights into a couple of new things. It isn’t the most compelling case for the relevance of neuroscience to ELT.

Chapter 2 of Carol Lethaby’s new book, ‘An Introduction to Evidence-based Teaching in the English Language Classroom’ is devoted to ‘Science and neuroscience’. The next chapter is called ‘Psychology and cognitive science’ and practically all the evidence for language teaching approaches in the rest of the book is drawn from cognitive (rather than neuro-) science. I think the same is true for the work of Kelly, Helgesen, Roberts and Puchta that I mentioned earlier.

It is perhaps the case these days that educationalists prefer to refer to ‘Mind, Brain, and Education Science’ (MBE) – the ‘intersection of neuroscience, education, and psychology’ – rather than educational neuroscience, but, looking at the literature of MBE, there’s a lot more education and psychology than there is neuroscience (although the latter always gets a mention). Probably the most comprehensive and well-known volume of practical ideas deriving from MBE is ‘Making Classrooms Better’ (Tokuhama-Espinosa, 2014). Of the 50 practical applications listed, most are either inspired by the work of John Hattie (2009) or the work of cognitive psychologists. Neuroscience hardly gets a look in.

To wrap up, I’d like to return to the question of neuroscience’s role in busting neuromyths. References to neuroscience, especially when accompanied by fMRI images, have a seductive appeal to many: they confer a sense of ‘scientific’ authority. Many teachers, it seems, are keen to hear about neuroscience (Pickering & Howard-Jones, 2007). Even when the discourse contains irrelevant neuroscientific information (diagrams of myelination come to mind), it seems that many of us find this satisfying (Weisberg et al., 2015; Weisberg et al., 2008). It gives an illusion of explanatory depth (Rozenblit & Keil, 2002), the so-called ‘seductive details effect’. You are far more likely to see conference presentations, blog posts and magazine articles extolling the virtues of neuroscientific findings than you are to come across things like I am writing here. But is it possible that the much-touted idea that neuroscience can bust neuromyths is itself a myth?

Sadly, we have learnt in recent times that scientific explanations have only very limited impact on the beliefs of large swathes of the population (including teachers, of course). Think of climate change and COVID. Why should neuroscience be any different? It probably isn’t. Scurich & Shniderman (2014) found that ‘neuroscience is more likely to be accepted and credited when it confirms prior beliefs’. We are more likely to accept neuroscientific findings because we ‘find them intuitively satisfying, not because they are accurate’ (Weisberg et al., 2008). Teaching teachers about educational neuroscience may not make much, if any, difference (Tham et al., 2019). I think there is a danger in using educational neuroscience, seductive details and all, to validate what we already do (as opposed to questioning what we do). And those who don’t already do these things will probably ignore such findings as there are, anyway.

References

Bowers, J. (2016a) The practical and principled problems with educational Neuroscience. Psychological Review 123 (5) 600 – 612

Bowers, J.S. (2016b) Psychology, not educational neuroscience, is the way forward for improving educational outcomes for all children: Reply to Gabrieli (2016) and Howard-Jones et al. (2016). Psychological Review. 123 (5):628-35.

Dörnyei, Z. (2009) The Psychology of Second Language Acquisition. Oxford: Oxford University Press

Gabrieli, J.D. (2016) The promise of educational neuroscience: Comment on Bowers (2016). Psychological Review. 123 (5):613-9

Goswami, U. (2006) Neuroscience and education: From research to practice? Nature Reviews Neuroscience, 7: 406 – 413

Hattie, J. (2009) Visible Learning: A synthesis of over 800 meta-analyses relating to achievement. London: Routledge

Helgesen, M. & Kelly, C. (2015) ‘Do-it-yourself: Ways to make your textbook more brain-friendly’. SPELT Quarterly, 30 (3): 32 – 37

Howard-Jones, P.A., Varma. S., Ansari, D., Butterworth, B., De Smedt, B., Goswami, U., Laurillard, D. & Thomas, M. S. (2016) The principles and practices of educational neuroscience: Comment on Bowers (2016). Psychological Review. 123 (5):620-7

Kelly, C. (2017) The Brain Studies Boom: Using Neuroscience in ESL/EFL Teacher Training. In Gregersen, T. S. & MacIntyre, P. D. (Eds.) Innovative Practices in Language Teacher Education pp.79-99 Springer

Lethaby, C. & Harries, P. (2018) ‘Research and teaching: What has neuroscience ever done for us?’ in Pattison, T. (Ed.) IATEFL Glasgow Conference Selections 2017. Faversham, Kent, UK: IATEFL, pp. 36 – 37

Lethaby, C., Mayne, R. & Harries, P. (2021) An Introduction to Evidence-Based Teaching in the English Language Classroom. Shoreham-by-Sea: Pavilion Publishing

McCabe, D.P. & Castel, A.D. (2008) Seeing is believing: The effect of brain images on judgments of scientific reasoning. Cognition 107: 343–352.

Pickering, S. J. & Howard-Jones, P. (2007) Educators’ views on the role of neuroscience in education: findings from a study of UK and international perspectives. Mind Brain Education 1: 109–113.

Rozenblit, L., & Keil, F. (2002). The misunderstood limits of folk science: an illusion of explanatory depth. Cognitive science, 26(5), 521–562.

Scurich, N., & Shniderman, A. (2014) The selective allure of neuroscientific explanations. PLOS One, 9 (9), e107529. http://dx.doi.org/10.1371/journal.pone. 0107529.

Spivey, M. V. & Hirsch, J. (2003) ‘Shared and separate systems in bilingual language processing: Converging evidence from eyetracking and brain imaging’ Brain and Language, 86: 70 – 82

Tham, R., Walker, Z., Tan, S.H.D., Low, L.T. & Annabel Chan, S.H. (2019) Translating educational neuroscience for teachers. Learning: Research and Practice, 5 (2): 149-173 Singapore: National Institute of Education

Tokuhama-Espinosa, T. (2014) Making Classrooms Better. New York: Norton

Weisberg, D. S., Taylor, J. C. V. & Hopkins, E.J. (2015) Deconstructing the seductive allure of neuroscience explanations. Judgment and Decision Making, Vol. 10, No. 5, September 2015, pp. 429–441

Weisberg, D. S., Keil, F. C., Goodstein, J., Rawson, E., & Gray, J. R. (2008). The seductive allure of neuroscience explanations. Journal of cognitive neuroscience, 20 (3): 470–477.

Willingham, D. T. (2009). Three problems in the marriage of neuroscience and education. Cortex, 45: 54-55.

Learners are different, the argument goes, so learning paths will be different, too. And, the argument continues, since learners will benefit from individualized learning pathways, instruction should be based on an analysis of the optimal learning pathways for individuals and tailored to match them. In previous posts, I have questioned whether such an analysis is meaningful or reliable and whether the tailoring leads to any measurable learning gains. In this post, I want to focus primarily on the analysis of learner differences.

Family / social background and previous educational experiences are obvious ways in which learners differ when they embark on any course of study. The way they impact on educational success is well researched and well established. Despite this research, there are some who disagree. For example, Dominic Cummings (former adviser to Michael Gove when he was UK Education minister and former campaign director of the pro-Brexit Vote Leave group) has argued  that genetic differences, especially in intelligence, account for more than 50% of the differences in educational achievement.

Cummings got his ideas from Robert Plomin, one of the world’s most cited living psychologists. Plomin, in a recent paper in Nature, ‘The New Genetics of Intelligence’, argues that ‘intelligence is highly heritable and predicts important educational, occupational and health outcomes better than any other trait’. In an earlier paper, ‘Genetics affects choice of academic subjects as well as achievement’, Plomin and his co-authors argued that ‘choosing to do A-levels and the choice of subjects show substantial genetic influence, as does performance after two years studying the chosen subjects’. Environment matters, says Plomin, but it’s possible that genes matter more.

All of which leads us to the field known as ‘educational genomics’. In an article of breathless enthusiasm entitled ‘How genetics could help future learners unlock hidden potential’, University of Sussex psychologist, Darya Gaysina, describes educational genomics as the use of ‘detailed information about the human genome – DNA variants – to identify their contribution to particular traits that are related to education […] it is thought that one day, educational genomics could enable educational organisations to create tailor-made curriculum programmes based on a pupil’s DNA profile’. It could, she writes, ‘enable schools to accommodate a variety of different learning styles – both well-worn and modern – suited to the individual needs of the learner [and] help society to take a decisive step towards the creation of an education system that plays on the advantages of genetic background. Rather than the current system, that penalises those individuals who do not fit the educational mould’.

The goal is not just personalized learning. It is ‘Personalized Precision Education’ where researchers ‘look for patterns in huge numbers of genetic factors that might explain behaviors and achievements in individuals. It also focuses on the ways that individuals’ genotypes and environments interact, or how other “epigenetic” factors impact on whether and how genes become active’. This will require huge amounts of ‘data gathering from learners and complex analysis to identify patterns across psychological, neural and genetic datasets’. Why not, suggests Darya Gaysina, use the same massive databases that are being used to identify health risks and to develop approaches to preventative medicine?

If I had a spare 100 Euros, I (or you) could buy Darya Gaysina’s book, ‘Behavioural Genetics for Education’ (Palgrave Macmillan, 2016) and, no doubt, I’d understand the science better as a result. There is much about the science that seems problematic, to say the least (e.g. the definition and measurement of intelligence, the lack of reference to other research that suggests academic success is linked to non-genetic factors), but it isn’t the science that concerns me most. It’s the ethics. I don’t share Gaysina’s optimism that ‘every child in the future could be given the opportunity to achieve their maximum potential’. Her utopianism is my fear of Gattaca-like dystopias. IQ testing, in its early days, promised something similarly wonderful, but look what became of that. When you already have reporting of educational genomics using terms like ‘dictate’, you have to fear for the future of Gaysina’s brave new world.

Educational genomics could equally well lead to expectations of ‘certain levels of achievement from certain groups of children – perhaps from different socioeconomic or ethnic groups’ and you can be pretty sure it will lead to ‘companies with the means to assess students’ genetic identities [seeking] to create new marketplaces of products to sell to schools, educators and parents’. The very fact that people like Dominic Cummings (described by David Cameron as a ‘career psychopath’) have opted to jump on this particular bandwagon is, for me, more than enough cause for concern.

Underlying my doubts about educational genomics is a much broader concern. It’s the apparent belief of educational genomicists that science can provide technical solutions to educational problems. It’s called ‘solutionism’ and it doesn’t have a pretty history.

In my last post, I looked at the way that, in the absence of a clear, shared understanding of what ‘personalization’ means, it has come to be used as a slogan for the promoters of edtech. In this post, I want to look a little more closely at the constellation of meanings that are associated with the term, suggest a way of evaluating just how ‘personalized’ an instructional method might be, and look at recent research into ‘personalized learning’.

In English language teaching, ‘personalization’ often carries a rather different meaning than it does in broader educational discourse. Jeremy Harmer (Harmer, 2012: 276) defines it as ‘when students use language to talk about themselves and things which interest them’. Most commonly, this is in the context of ‘freer’ language practice of grammar or vocabulary of the following kind: ‘Complete the sentences so that they are true for you’. It is this meaning that Scott Thornbury refers to first in his entry for ‘Personalization’ in his ‘An A-Z of ELT’ (Thornbury, 2006: 160). He goes on, however, to expand his definition of the term to include humanistic approaches such as Community Language Learning / Counseling learning (CLL), where learners decide the content of a lesson, where they have agency. I imagine that no one would disagree that an approach such as this is more ‘personalized’ than a ‘complete-the-sentences-so-they-are-true-for-you’ exercise to practise the present perfect.

Outside of ELT, ‘personalization’ has been used to refer to everything ‘from customized interfaces to adaptive tutors, from student-centered classrooms to learning management systems’ (Bulger, 2016: 3). The graphic below (from Bulger, 2016: 3) illustrates just how wide the definitional reach of ‘personalization’ is.

Bulger’s (2016: 3) pie chart of the definitional reach of ‘personalization’

As with Thornbury’s entry in his ‘A – Z of ELT’, it seems uncontentious to say that some things are more ‘personalized’ than others.

Given the current and historical problems with defining the term, it’s not surprising that a number of people have attempted to develop frameworks that can help us to get to grips with the thorny question of ‘personalization’. In the context of language teaching / learning, Renée Disick (Disick, 1975: 58) offered the following categorisation:

Disick’s (1975: 58) categorisation

In a similar vein, a few years later, Howard Altman (Altman, 1980) suggested that teaching activities can differ in four main ways: the time allocated for learning, the curricular goal, the mode of learning and instructional expectations (personalized goal setting). He then offered eight permutations of these variables (see below, Altman, 1980: 9), although many more are imaginable.

Altman’s (1980: 9) chart of eight permutations
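To see why ‘many more are imaginable’, it helps to run the arithmetic. The minimal sketch below (in Python) treats each of Altman’s four dimensions as a simple binary choice between a fixed, teacher-determined value and an individualized one (a simplification of mine, not a claim about his chart). Even then there are sixteen possible permutations, twice the eight he tabulates.

```python
from itertools import product

# Altman's four dimensions of variation, simplified here (my assumption, not
# his chart) to a binary choice between 'fixed' and 'individualized'.
dimensions = ["time allocated", "curricular goal", "mode of learning", "instructional expectations"]

combinations = list(product(["fixed", "individualized"], repeat=len(dimensions)))
print(len(combinations))  # 16 - already twice the eight permutations Altman shows

for combo in combinations:
    print(", ".join(f"{dim}: {choice}" for dim, choice in zip(dimensions, combo)))
```

Allow any of the dimensions more than two settings (partially negotiated goals, say, or several modes of learning) and the number of permutations grows multiplicatively, which is why frameworks like Altman’s can only ever be illustrative.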

Altman and Disick were writing, of course, long before our current technology-oriented view of ‘personalization’ became commonplace. The recent classification of technologically-enabled personalized learning systems by Monica Bulger (see below, Bulger, 2016: 6) reflects how times have changed.

Bulger’s (2016: 6) five types of personalized learning system

Bulger’s classification focusses on the technology more than the learning, but her continuum is very much in keeping with the views of Disick and Altman. Some approaches are more personalized than others.

The extent to which choices are offered determines the degree of individualization in a particular program. (Disick, 1975: 5)

It is important to remember that learner-centered language teaching is not a point, but rather a continuum. (Altman, 1980: 6)

Larry Cuban has also recently begun to use a continuum as a way of understanding the practices of ‘personalization’ that he observes as part of his research. The overall goals of schooling at both ends of the continuum are not dissimilar: helping ‘children grow into adults who are creative thinkers, help their communities, enter jobs and succeed in careers, and become thoughtful, mindful adults’.

Cuban’s curriculum continuum

As Cuban and others before him (e.g. Januszewski, 2001: 57) make clear, the two perspectives are not completely independent of each other. Nevertheless, we can see that one end of this continuum is likely to be materials-centred with the other learner-centred (Dickinson, 1987: 57). At one end, teachers (or their LMS replacements) are more likely to be content-providers and enact traditional roles. At the other, teachers’ roles are ‘more like those of coaches or facilitators’ (Cavanagh, 2014). In short, one end of the continuum is personalization for the learner; the other end is personalization by the learner.

It makes little sense, therefore, to talk about personalized learning as being a ‘good’ or a ‘bad’ thing. We might perceive one form of personalized learning to be more personalized than another, but that does not mean it is any ‘better’ or more effective. The only possible approach is to consider and evaluate the different elements of personalization in an attempt to establish, first, from a theoretical point of view whether they are likely to lead to learning gains, and, second, from an evidence-based perspective whether any learning gains are measurable. In recent posts on this blog, I have been attempting to do that with elements such as learning styles, self-pacing and goal-setting.

Unfortunately, but perhaps not surprisingly, none of the elements that we associate with ‘personalization’ has been shown to lead to clear, demonstrable learning gains. A report commissioned by the Gates Foundation (Pane et al., 2015) to find evidence of the efficacy of personalized learning did not, despite its subtitle (‘Promising Evidence on Personalized Learning’), manage to come up with any firm and unequivocal evidence (see Riley, 2017). ‘No single element of personalized learning was able to discriminate between the schools with the largest achievement effects and the others in the sample; however, we did identify groups of elements that, when present together, distinguished the success cases from others’, wrote the authors (Pane et al., 2015: 28). Undeterred, another report (Pane et al., 2017) was commissioned: in this the authors were unable to do better than a very hedged conclusion: ‘There is suggestive evidence that greater implementation of PL practices may be related to more positive effects on achievement; however, this finding requires confirmation through further research’ (my emphases). Don’t hold your breath!

In commissioning the reports, the Gates Foundation were probably asking the wrong question. The conceptual elasticity of the term ‘personalization’ makes its operationalization in any empirical study highly problematic. Meaningful comparison of empirical findings would, as David Hartley notes, be hard because ‘it is unlikely that any conceptual consistency would emerge across studies’ (Hartley, 2008: 378). The question of what works is unlikely to provide a useful (in the sense of actionable) response.

In a new white paper out this week, “A blueprint for breakthroughs,” Michael Horn and I argue that simply asking what works stops short of the real question at the heart of a truly personalized system: what works, for which students, in what circumstances? Without this level of specificity and understanding of contextual factors, we’ll be stuck understanding only what works on average despite aspirations to reach each individual student (not to mention mounting evidence that “average” itself is a flawed construct). Moreover, we’ll fail to unearth theories of why certain interventions work in certain circumstances. And without that theoretical underpinning, scaling personalized learning approaches with predictable quality will remain challenging. Otherwise, as more schools embrace personalized learning, at best each school will have to go at it alone and figure out by trial and error what works for each student. Worse still, if we don’t support better research, “personalized” schools could end up looking radically different but yielding similar results to our traditional system. In other words, we risk rushing ahead with promising structural changes inherent to personalized learning—reorganizing space, integrating technology tools, freeing up seat-time—without arming educators with reliable and specific information about how to personalize to their particular students or what to do, for which students, in what circumstances. (Freeland Fisher, 2016)

References

Altman, H.B. 1980. ‘Foreign language teaching: focus on the learner’ in Altman, H.B. & James, C.V. (eds.) 1980. Foreign Language Teaching: Meeting Individual Needs. Oxford: Pergamon Press, pp.1 – 16

Bulger, M. 2016. Personalized Learning: The Conversations We’re Not Having. New York: Data and Society Research Institute. https://www.datasociety.net/pubs/ecl/PersonalizedLearning_primer_2016.pdf

Cavanagh, S. 2014. ‘What Is ‘Personalized Learning’? Educators Seek Clarity’ Education Week http://www.edweek.org/ew/articles/2014/10/22/09pl-overview.h34.html

Dickinson, L. 1987. Self-instruction in Language Learning. Cambridge: Cambridge University Press

Disick, R.S. 1975 Individualizing Language Instruction: Strategies and Methods. New York: Harcourt Brace Jovanovich

Freeland Fisher, J. 2016. ‘The inconvenient truth about personalized learning’ [Blog post] retrieved from http://www.christenseninstitute.org/blog/the-inconvenient-truth-about-personalized-learning/ (May 4, 2016)

Harmer, J. 2012. Essential Teacher Knowledge. Harlow: Pearson Education

Hartley, D. 2008. ‘Education, Markets and the Pedagogy of Personalisation’ British Journal of Educational Studies 56 / 4: 365 – 381

Januszewski, A. 2001. Educational Technology: The Development of a Concept. Englewood, Colorado: Libraries Unlimited

Pane, J. F., Steiner, E. D., Baird, M. D. & Hamilton, L. S. 2015. Continued Progress: Promising Evidence on Personalized Learning. Seattle: Rand Corporation retrieved from http://www.rand.org/pubs/research_reports/RR1365.html

Pane, J.F., Steiner, E. D., Baird, M. D., Hamilton, L. S. & Pane, J.D. 2017. Informing Progress: Insights on Personalized Learning Implementation and Effects. Seattle: Rand Corporation retrieved from https://www.rand.org/pubs/research_reports/RR2042.html

Riley, B. 2017. ‘Personalization vs. How People Learn’ Educational Leadership 74 / 6: 68-73

Thornbury, S. 2006. An A – Z of ELT. Oxford: Macmillan Education


All aboard …

The point of adaptive learning is that it can personalize learning. When we talk about personalization, mention of learning styles is rarely far away. Jose Ferreira of Knewton (now ex-CEO of Knewton) made his case for learning styles in a blog post that generated a superb and, for Ferreira, embarrassing discussion in the comments that were subsequently deleted by Knewton. FluentU (which I reviewed here) clearly approves of learning styles, or at least sees them as a useful way to market their product, even though it is unclear how their product caters to different styles. Busuu claims to be ‘personalised to fit your style of learning’. Voxy, Inc. (according to their company overview) ‘operates a language learning platform that creates custom curricula for English language learners based on their interests, routines, goals, and learning styles’. Bliu Bliu (which I reviewed here) recommended, in a recent blog post, that learners should ‘find out their language learner type and use it to their advantage’ and suggests, as a starter, trying out ‘Bliu Bliu, where pretty much any learner can find what suits them best’. Memrise ‘uses clever science to adapt to your personal learning style’. Duolingo’s learning tree ‘effectively rearranges itself to suit individual learning styles’ according to founder Luis von Ahn. This list could go on and on.

Learning styles are thriving in ELT coursebooks, too. Here are just three recent examples for learners of various ages. ‘Today!’ by Todd, D. & Thompson, T. (Pearson, 2014) ‘shapes learning around individual students with graded difficulty practice for mixed-ability classes’ and ‘makes testing mixed-ability classes easier with tests that you can personalise to students’ abilities’.

‘Move it!’ by Barraclough, C., Beddall, F., Stannett, K., Wildman, J. (Pearson, 2015) offers ‘personalized pathways [which] allow students to optimize their learning outcomes’ and a ‘complete assessment package to monitor students’ learning process’.

‘Open Mind Elementary’ (A2) 2nd edition by Rogers, M., Taylor-Knowles, J. & Taylor-Knowles, S. (Macmillan, 2014) has a whole page devoted to learning styles in the ‘Life Skills’ strand of the course. The scope and sequence describes it in the following terms: ‘Thinking about what you like to do to find your learning style and improve how you learn English’. Here’s the relevant section:

Methodology books offer more tips for ways that teachers can cater to different learning styles. Recent examples include Patrycja Kamińska’s Learning Styles and Second Language Education (Cambridge Scholars, 2014), Tammy Gregersen & Peter D. MacIntyre’s Capitalizing on Language Learners’ Individuality (Multilingual Matters, 2014) and Marjorie Rosenberg’s Spotlight on Learning Styles (Delta Publishing, 2013). Teacher magazines show a continuing interest in the topic. Humanising Language Teaching and English Teaching Professional are particularly keen. The British Council offers courses about learning styles and its Teaching English website has many articles and lesson plans on the subject (my favourite explains that your students will be more successful if you match your teaching style to their learning styles), as do the websites of all the major publishers. Most ELT conferences will also offer something on the topic.

How about language teaching qualifications and frameworks? The Cambridge English Teaching Framework contains a component entitled ‘Understanding learners’ and this specifies, as the first part of the component, a knowledge of concepts such as learning styles (e.g., visual, auditory, kinaesthetic), multiple intelligences, learning strategies, special needs, affect. Unsurprisingly, the Cambridge CELTA qualification requires successful candidates to demonstrate an awareness of the different learning styles and preferences that adults bring to learning English. The Cambridge DELTA requires successful candidates to accommodate learners according to their different abilities, motivations, and learning styles. The Eaquals Framework for Language Teacher Training and Development requires teachers at Development Phase 2 to have the skill of determining and anticipating learners’ language learning needs and learning styles at a range of levels, selecting appropriate ways of finding out about these.

Outside of ELT, learning styles also continue to thrive. Phil Newton (2015, ‘The learning styles myth is thriving in higher education’, Frontiers in Psychology 6: 1908) carried out a survey of educational publications (higher education) between 2013 and 2016, and found that an overwhelming majority (89%) implicitly or directly endorse the use of learning styles. He also cites research showing that 93% of UK schoolteachers believe that ‘individuals learn better when they receive information in their preferred Learning Style’, with similar figures in other countries. 72% of Higher Education institutions in the US teach ‘learning style theory’ as part of faculty development for online teachers. Advocates of learning styles in English language teaching are not alone.

But, unfortunately, …

In case you weren’t aware of it, there is a rather big problem with learning styles. There is a huge amount of research  which suggests that learning styles (and, in particular, teaching attempts to cater to learning styles) need to be approached with extreme scepticism. Much of this research was published long before the blog posts, advertising copy, books and teaching frameworks (listed above) were written.  What does this research have to tell us?

The first problem concerns learning styles taxonomies. There are three issues here: many people do not fit one particular style, the information used to assign people to styles is often inadequate, and there are so many different styles that it becomes cumbersome to link particular learners to particular styles (Kirschner, P. A. & van Merriënboer, J. J. G. 2013. ‘Do Learners Really Know Best? Urban Legends in Education’ Educational Psychologist, 48 / 3, 169-183). To summarise, given the lack of clarity as to which learning styles actually exist, it may be ‘neither viable nor justified’ for learning styles to form the basis of lesson planning (Hall, G. 2011. Exploring English Language Teaching. Abingdon, Oxon.: Routledge p.140). More detailed information about these issues can be found in the following sources:

Coffield, F., Moseley, D., Hall, E. & Ecclestone, K. 2004. Learning styles and pedagogy in post-16 learning: a systematic and critical review. London: Learning and Skills Research Centre

Dembo, M. H. & Howard, K. 2007. Advice about the use of learning styles: a major myth in education. Journal of College Reading & Learning 37 / 2: 101 – 109

Kirschner, P. A. 2017. Stop propagating the learning styles myth. Computers & Education 106: 166 – 171

Pashler, H., McDaniel, M., Rohrer, D. & Bjork, E. 2008. Learning styles: Concepts and evidence. Psychological Science in the Public Interest 9 / 3: 105 – 119

Riener, C. & Willingham, D. 2010. The myth of learning styles. Change – The Magazine of Higher Learning

The second problem concerns what Pashler et al refer to as the ‘meshing hypothesis’: the idea that instructional interventions can be effectively tailored to match particular learning styles. Pashler et al concluded that the available taxonomies of student types do not offer any valid help in deciding what kind of instruction to offer each individual. Even in 2008, their finding was not new. Back in 1978, a review of 15 studies that looked at attempts to match learning styles to approaches to first language reading instruction concluded that modality preference ‘has not been found to interact significantly with the method of teaching’ (Tarver, Sara & M. M. Dawson. 1978. Modality preference and the teaching of reading. Journal of Learning Disabilities 11: 17 – 29). The following year, two other researchers concluded that ‘[the assumption that one can improve instruction by matching materials to children’s modality strengths] appears to lack even minimal empirical support’ (Arter, J.A. & Joseph A. Jenkins 1979 ‘Differential diagnosis-prescriptive teaching: A critical appraisal’ Review of Educational Research 49: 517 – 555). Fast forward 20 years to 1999, and Stahl (‘Different strokes for different folks?’ American Educator Fall 1999 pp. 1 – 5) was writing that ‘the reason researchers roll their eyes at learning styles is the utter failure to find that assessing children’s learning styles and matching to instructional methods has any effect on learning. The area with the most research has been the global and analytic styles […]. Over the past 30 years, the names of these styles have changed – from “visual” to “global” and from “auditory” to “analytic” – but the research results have not changed’. For a recent evaluation of the practical applications of learning styles, have a look at Rogowsky, B. A., Calhoun, B. M. & Tallal, P. 2015. ‘Matching Learning Style to Instructional Method: Effects on Comprehension’ Journal of Educational Psychology 107 / 1: 64 – 78. Even David Kolb, the Big Daddy of learning styles, now concedes that there is no strong evidence that teachers should tailor their instruction to their students’ particular learning styles (reported in Glenn, D. 2009. ‘Matching teaching style to learning style may not help students’ The Chronicle of Higher Education). To summarise, the meshing hypothesis is entirely unsupported in the scientific literature. It is a myth (Howard-Jones, P. A. 2014. ‘Neuroscience and education: myths and messages’ Nature Reviews Neuroscience).
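It is worth being clear about what evidence for the meshing hypothesis would actually have to look like. The sketch below uses invented numbers (an assumption for illustration only, not data from any of the studies cited above): learners classified by preferred style are crossed with matched and mismatched instruction, and support for meshing would require a crossover interaction, with each group doing clearly better under its matched condition. The absence of any such crossover is, in essence, what the research literature keeps reporting.

```python
# Invented illustrative scores (not real data) for a 2x2 'meshing' design:
# learners' reported style crossed with the modality of instruction.
scores = {
    ("visual learner", "visual instruction"): 72,
    ("visual learner", "verbal instruction"): 70,
    ("verbal learner", "visual instruction"): 71,
    ("verbal learner", "verbal instruction"): 69,
}

# Meshing predicts a crossover interaction: each group should gain from
# instruction matched to its style. Here the 'matching advantages' cancel out,
# and everyone simply does a little better with visual instruction.
visual_gain = scores[("visual learner", "visual instruction")] - scores[("visual learner", "verbal instruction")]
verbal_gain = scores[("verbal learner", "verbal instruction")] - scores[("verbal learner", "visual instruction")]
print(visual_gain, verbal_gain, visual_gain + verbal_gain)  # 2 -2 0: no crossover, no support for meshing
```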

This brings me back to the blog posts, advertising blurb, coursebooks, methodology books and so on that continue to tout learning styles. The writers of these texts typically do not acknowledge that there’s a problem of any kind. Are they unaware of the research? Or are they aware of it, but choose not to acknowledge it? I suspect that the former is often the case with the app developers. But if the latter is the case, what might the reasons be? In the case of teacher training specifications, the reason is probably practical. Changing a syllabus is an expensive and time-consuming operation. But in the case of some of the ELT writers, I suspect that they hang on in there because they so much want to believe.

As Newton (2015: 2) notes, ‘intuitively, there is much that is attractive about the concept of Learning Styles. People are obviously different and Learning Styles appear to offer educators a way to accommodate individual learner differences.’ Pashler et al (2008: 107) add that ‘another related factor that may play a role in the popularity of the learning-styles approach has to do with responsibility. If a person or a person’s child is not succeeding or excelling in school, it may be more comfortable for the person to think that the educational system, not the person or the child himself or herself, is responsible. That is, rather than attribute one’s lack of success to any lack of ability or effort on one’s part, it may be more appealing to think that the fault lies with instruction being inadequately tailored to one’s learning style. In that respect, there may be linkages to the self-esteem movement that became so influential, internationally, starting in the 1970s.’ There is no reason to doubt that many of those who espouse learning styles have good intentions.

No one, I think, seriously doubts that learners might benefit from a wide variety of input styles and learning tasks. People are obviously different. MacIntyre et al (MacIntyre, P.D., Gregersen, T. & Clément, R. 2016. ‘Individual Differences’ in Hall, G. (ed.) The Routledge Handbook of English Language Teaching. Abingdon, Oxon: Routledge, pp.310 – 323, p.319) suggest that teachers might consider instructional methods that allow them to capitalise on both variety and choice and also help learners find ways to do this for themselves inside and outside the classroom. Jill Hadfield (2006. ‘Teacher Education and Trainee Learning Style’ RELC Journal 37 / 3: 369 – 388) recommends that we design our learning tasks across the range of learning styles so that our trainees can move across the spectrum, experiencing both the comfort of matching and the challenge produced by mismatching. But this is not the same thing as claiming that identification of a particular learning style can lead to instructional decisions. The value of books like Rosenberg’s Spotlight on Learning Styles lies in the wide range of practical suggestions for varying teaching styles and tasks. They contain ideas of educational value: it is unfortunate that the theoretical background is so thin.

In ELT things are, perhaps, beginning to change. Russ Mayne’s blog post ‘Learning styles: facts and fictions’ in 2012 got a few heads nodding, and he followed this up two years later with a presentation at IATEFL looking at various aspects of ELT, including learning styles, which have little or no scientific credibility. Carol Lethaby and Patricia Harries gave a talk at IATEFL 2016, ‘Changing the way we approach learning styles in teacher education’, which was also much discussed and shared online. They also had an article in ELT Journal called ‘Learning styles and teacher training: are we perpetuating neuromyths?’ (2016 ELTJ 70 / 1: 16 – 27). Even Pearson, in a blog post of November 2016 (‘Mythbusters: A review of research on learning styles’), acknowledges that there is ‘a shocking lack of evidence to support the core learning styles claim that customizing instruction based on students’ preferred learning styles produces better learning than effective universal instruction’, concluding that it is ‘impossible to recommend learning styles as an effective strategy for improving learning outcomes’.


One could be forgiven for thinking that there are no problems associated with adaptive learning in ELT. Type the term into a search engine and you’ll mostly come up with enthusiasm or sales talk. There are, however, a number of reasons to be deeply skeptical about the whole business. In the post after this, I will be considering the political background.

1. Learning theory

Jose Ferreira, the CEO of Knewton, spoke, in an interview with Digital Journal[1] in October 2009, about getting down to the ‘granular level’ of learning. He was referencing, in an original turn of phrase, the commonly held belief that learning is centrally concerned with ‘gaining knowledge’, knowledge that can be broken down into very small parts that can be put together again. In this sense, the adaptive learning machine is very similar to the ‘teaching machine’ of B.F. Skinner, the psychologist who believed that learning was a complex process of stimulus and response. But how many applied linguists would agree, firstly, that language can be broken down into atomised parts (rather than viewed as a complex, dynamic system), and, secondly, that these atomised parts can be synthesized in a learning program to reform a complex whole? Human cognitive and linguistic development simply does not work that way, despite the strongly-held contrary views of ‘folk’ theories of learning (Selwyn Education and Technology 2011, p.3).
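To make the ‘granular’ view concrete, here is a deliberately naive sketch of the logic it implies. It is not Knewton’s actual algorithm (or any vendor’s), just an illustration, with invented item names, of what it means to treat language knowledge as a set of atomised parts whose mastery is tracked and which are then reassembled into a ‘personalized’ sequence.

```python
# A toy sketch (no vendor's actual algorithm) of the atomised view of knowledge
# that adaptive systems are premised on: discrete items, each with a mastery
# estimate, served back to the learner weakest-first.
mastery = {"present perfect": 0.3, "articles": 0.6, "phrasal verbs": 0.1}  # invented items

def update(item: str, correct: bool, step: float = 0.1) -> None:
    """Nudge the mastery estimate for one atomised 'knowledge component'."""
    change = step if correct else -step
    mastery[item] = min(1.0, max(0.0, mastery[item] + change))

def next_item() -> str:
    """Deliver whichever atom currently has the lowest estimated mastery."""
    return min(mastery, key=mastery.get)

update("phrasal verbs", correct=True)   # 0.1 -> 0.2
print(next_item())                      # 'phrasal verbs': still the weakest atom
```

However much statistical sophistication real systems layer on top of this, the underlying representation is still a list of discrete parts, which is precisely the assumption that applied linguists who see language as a complex, dynamic system would question.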


Furthermore, even if an adaptive system delivers language content in personalized and interesting ways, it is still premised on a view of learning where content is delivered and learners receive it. The actual learning program is not personalized in any meaningful way: it is only the way that it is delivered that responds to the algorithms. This is, again, a view of learning which few educationalists (as opposed to educational leaders) would share. Is language learning ‘simply a technical business of well managed information processing’ or is it ‘a continuing process of ‘participation’ (Selwyn, Education and Technology 2011, p.4)?

Finally, adaptive learning is also premised on the idea that learners have particular learning styles, that these can be identified by the analytics (even if they are not given labels), and that actionable insights can be gained from this analysis (i.e. the software can decide on the most appropriate style of content delivery for an individual learner). Although the idea that teaching programs can be modified to cater to individual learning styles continues to have some currency among language teachers (e.g. those who espouse Neuro-Linguistic Programming or Multiple Intelligences Theory), it is not an idea that has much currency in the research community.

It might be the case that adaptive learning programs will work with some, or even many, learners, but it would be wise to carry out more research (see the section on Research below) before making grand claims about its efficacy. If adaptive learning can be shown to be more effective than other forms of language learning, it will be either because our current theories of language learning are all wrong, or because the learning takes place despite the theory (and not because of it).

2. Practical problems

However good technological innovations may sound, they can only be as good, in practice, as the way they are implemented. Language laboratories and interactive whiteboards both sounded like very good ideas at the time, but they both fell out of favour long before they were technologically superseded. The reasons are many, but one of the most important is that classroom teachers did not understand sufficiently the potential of these technologies or, more basically, how to use them. Given the much more radical changes that seem to be implied by the adoption of adaptive learning, we would be wise to be cautious. The following is a short, selected list of questions that have not yet been answered.

  • Language teachers often struggle with mixed ability classes. If adaptive programs (as part of a blended program) allow students to progress at their own speed, the range of abilities in face-to-face lessons may be even more marked. How will teachers cope with this? Teacher – student ratios are unlikely to improve!
  • Who will pay for the training that teachers will need to implement effective blended learning and when will this take place?
  • How will teachers respond to a technology that will be perceived by some as a threat to their jobs and their professionalism and as part of a growing trend towards the accommodation of commercial interests (see the next post)?
  • How will students respond to online (adaptive) learning when it becomes the norm, rather than something ‘different’?

3. Research

Technological innovations in education are rarely, if ever, driven by solidly grounded research, but they are invariably accompanied by grand claims about their potential. Motion pictures, radio, television and early computers were all seen, in their time, as wonder technologies that would revolutionize education (Cuban, Teachers and Machines: The Classroom Use of Technology since 1920 1986). Early research seemed to support the claims, but the passage of time has demonstrated all too clearly the precise opposite. The arrival on the scene of e-learning in general, and adaptive learning in particular, has also been accompanied by much cheer-leading and claims of research support.

Examples of such claims of research support for adaptive learning in higher education in the US and Australia include an increase in pass rates of between 7 and 18%, a decrease of between 14 and 47% in student drop-outs, and an acceleration of 25% in the time needed to complete courses[2]. However, research of this kind needs to be taken with a liberal pinch of salt. First of all, the research has usually been commissioned, and sometimes carried out, by those with vested commercial interests in positive results. Secondly, the design of the research study usually guarantees positive results. Finally, the results cannot be interpreted to have any significance beyond their immediate local context. There is no reason to expect that what happened in a particular study into adaptive learning in, say, the University of Arizona would be replicated in, say, the Universities of Amman, Astana or anywhere else. Very often, when this research is reported, the subject of the students’ study is not even mentioned, as if this were of no significance.

The lack of serious research into the effectiveness of adaptive learning does not lead us to the conclusion that it is ineffective. It is simply too soon to say, and if the examples of motion pictures, radio and television are any guide, it will be a long time before we have any good evidence. By that time, it is reasonable to assume, adaptive learning will be a very different beast from what it is today. Given the recency of this kind of learning, the lack of research is not surprising. For online learning in general, a meta-analysis commissioned by the US Department of Education (Means et al, Evaluation of Evidence-Based Practice in Online Learning 2009, p.9) found that there were only a small number of rigorous published studies, and that it was not possible to attribute any gains in learning outcomes to online or blended learning modes. As the authors of this report were aware, there are too many variables (social, cultural and economic) to compare in any direct way the efficacy of one kind of learning with another. This is as true of attempts to compare adaptive online learning with face-to-face instruction as it is with comparisons of different methodological approaches in purely face-to-face teaching. There is, however, an irony in the fact that advocates of adaptive learning (whose interest in analytics leads them to prioritise correlational relationships over causal ones) should choose to make claims about the causal relationship between learning outcomes and adaptive learning.

Perhaps, as Selwyn (Education and Technology 2011, p.87) suggests, attempts to discover the relative learning advantages of adaptive learning are simply asking the wrong question, not least as there cannot be a single straightforward answer. Perhaps a more useful critique would be to look at the contexts in which the claims for adaptive learning are made, and by whom. Selwyn also suggests that useful insights may be gained from taking a historical perspective. It is worth noting that the technicist claims for adaptive learning (that ‘it works’ or that it is ‘effective’) are essentially the same as those that have been made for other education technologies. They take a universalising position and ignore local contexts, forgetting that ‘pedagogical approach is bound up with a web of cultural assumption’ (Wiske, ‘A new culture of teaching for the 21st century’ in Gordon, D.T. (ed.) The Digital Classroom: How Technology is Changing the Way we teach and Learn 2000, p.72). Adaptive learning might just possibly be different from other technologies, but history advises us to be cautious.


[2] These figures are quoted in Learning to Adapt: A Case for Accelerating Adaptive Learning in Higher Education, a booklet produced in March 2013 by Education Growth Advisors, an education consultancy firm. Their research is available at http://edgrowthadvisors.com/research/