Posts Tagged ‘feedback’

I’ve written about the relationship (or, rather, the lack of one) between language teachers and language teaching research before. I’m talking about the kind of research that is primarily of the ‘what-works’ variety, since that is likely to be of most relevance to teachers. It’s the kind of research that asks questions like: can correction be beneficial to language learners? Or: can spaced repetition be helpful in vocabulary acquisition? Whether teachers find this relevant or not, there is ample evidence that the vast majority rarely look at it (Borg, 2009).

See here, for example, for a discussion of calls from academic researchers for more dialogue between researchers and teachers. The desire, on the part of researchers, for teachers to engage more (or even a little) with research continues to grow, as two examples show. The first is the development of TESOLgraphics, which aims to make research ‘easy to read and understand to ESL, EFL, EAP, ESP, ESOL, EAL, TEFL teachers’ by producing infographic summaries. The second is a proposed special issue of the journal ‘System’, devoted to ‘the nexus of research and practice in and for language teacher education’, which hopes to find ways of promoting more teacher engagement with research. Will either of these initiatives have much impact? I doubt it, and to explain why, I need to take you on a little detour.

The map and the territory

Riffing off an ultra-short story by Jorge Luis Borges (‘On Exactitude in Science’, 1946), the corpus linguist Michael Stubbs (2013) wrote a piece entitled ‘Of Exactitude in Linguistics’, which marked his professional retirement. In it, he described a world where

the craft of Descriptive Linguistics attained such Perfection that the Transcription of a single short Conversation covered the floor of an entire University seminar room, and the Transcription of a Representative Sample of a single Text-Type covered the floor area of a small department to a depth of several feet. In the course of time, especially after the development of narrow phonetic transcription with intonational and proxemic annotation, even these extensive Records were found somehow wanting, and with the advent of fully automatic voice-to-orthography transcription, the weight of the resulting Text Collections threatened severe structural damage to University buildings.

As with all humour, there’s more than a grain of truth behind this Borgesian fantasy. Both jokes pick up on what is known as the Richardson Effect, named after the British mathematician Lewis Fry Richardson, who noted that the length of the coastline of Great Britain varies according to the size of the units used to measure it – the smaller the unit, the longer the coastline. But at what point does increasing exactitude cease to tell us anything of value?
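To see the Richardson Effect in action, here is a minimal sketch in Python. An artificial Koch-curve ‘coastline’ stands in for real geography, and a simple divider-walking routine measures it with progressively smaller rulers; the data and function names are purely illustrative.

```python
# A minimal sketch of the Richardson Effect: the measured length of a jagged
# 'coastline' grows as the measuring unit (the ruler) gets smaller.
# The coastline here is a Koch-curve polyline -- illustrative data, not real geography.
import math

def koch(p1, p2, depth):
    """Return the points of a Koch curve between p1 and p2 (excluding p2)."""
    if depth == 0:
        return [p1]
    (x1, y1), (x2, y2) = p1, p2
    dx, dy = (x2 - x1) / 3, (y2 - y1) / 3
    a = (x1 + dx, y1 + dy)
    b = (x1 + 2 * dx, y1 + 2 * dy)
    # Apex of the triangular bump between a and b
    peak = (
        (a[0] + b[0]) / 2 - (b[1] - a[1]) * math.sqrt(3) / 2,
        (a[1] + b[1]) / 2 + (b[0] - a[0]) * math.sqrt(3) / 2,
    )
    return (koch(p1, a, depth - 1) + koch(a, peak, depth - 1) +
            koch(peak, b, depth - 1) + koch(b, p2, depth - 1))

def measure(points, ruler):
    """Classic divider-walking: from the current anchor, jump to the first
    later vertex that is at least one ruler-length away (straight line)."""
    steps, anchor = 0, 0
    for i in range(1, len(points)):
        if math.dist(points[anchor], points[i]) >= ruler:
            steps += 1
            anchor = i
    return steps * ruler

coast = koch((0.0, 0.0), (100.0, 0.0), depth=5) + [(100.0, 0.0)]
for ruler in (50, 10, 2, 0.5):
    print(f"ruler = {ruler:>4} units -> measured length = {measure(coast, ruler):.1f}")
```

Run it, and the measured length keeps growing as the ruler shrinks – which is precisely the point at which we need to ask whether the extra exactitude is telling us anything of value.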

Both Borges and Lewis Fry Richardson almost certainly knew Lewis Carroll’s novel ‘Sylvie and Bruno Concluded’ (1893) which features a map that has the scale of a mile to a mile. This extraordinarily accurate map is, however, never used, since it is too large to spread out. The cost of increasing exactitude is practical usefulness.

The map of language

Language is rather like a coastline when it comes to drilling down in order to capture its features with smaller and smaller units of measurement. Before very long, you are forced into making decisions about the variety of the language and the contexts of use that you are studying. Precisely what kind of English are you measuring? At some point, you get down to the level of idiolect, but idiolects can be broken down further as they vary depending on the contexts of use. The trouble, of course, is that idiolects tell us little that is of value about the much broader ‘language’ that you set out to measure in the first place. The linguistic map obscures the linguistic terrain.

In ultra close-up, we can no longer distinguish one named language from another just by using linguistic criteria (Makoni & Pennycook, 2007: 1). Extending this logic further, it makes little sense to even talk about named languages like English, to talk about first or second languages, about native speakers or about language errors. The close-up view requires us to redefine the thing – language – that we set out to define and describe. English is no longer a fixed and largely territorial system owned by native-speakers, but a dynamic, complex, social, deterritorialized practice owned by its users (May, 2013; Meier, 2017; Li Wei, 2018). In this view, both the purpose and the consequence of describing language in this way are to get away from the social injustice of native-speaker norms, of accentism, and linguistic prejudice.

A load of Ballungs

Language is a fuzzy and context-dependent concept. It is ‘too multifaceted to be measured on a single metric without loss of meaning, and must be represented by a matrix of indices or by several different measures depending on which goals and values are at play’ (Tal, 2020). In the philosophy of measurement, concepts like these are known as ‘Ballung’ concepts (Cartwright & Bradburn, 2011). Much of what is studied by researchers into language learning also involves ‘Ballung’ concepts. Language proficiency and language acquisition are ‘Ballung’ concepts, as are reading and listening skills, mediation, metacognition and motivation. Critical thinking and digital literacies … the list goes on. Research into all these areas is characterised by multiple and ever-more detailed taxonomies, as researchers struggle to define precisely what it is that they are studying. It is in the nature of most academic study that it strives towards exactitude by becoming more and more specialised in its analysis of ‘ever more particular fractions of our world’ (Pardo-Guerra, 2022: 17).

But the perspective on language of Makoni, Pennycook, Li Wei et al is not what we might call the ‘canonical view’, the preferred viewpoint of the majority of people in apprehending the reality of the outside world (Palmer, 1981). Canonical views of language are much less close-up and allow for the unproblematic differentiation of one language from another. Canonical views – whether of social constructs like language or everyday objects like teacups or birds – become canonical because they are more functional for many people for everyday purposes than less familiar perspectives. If you want to know how far it is to walk from A to B along a coastal footpath, the more approximate measure of metres is more useful than one that counts every nook and cranny in microns. Canonical views can, of course, change over time – if the purpose to which they are put changes, too.

Language teaching research

There is a clear preference in academia for quantitative, empirical research where as many variables as possible are controlled. Research into language teaching is no different. It’s not enough to ask, in general terms, about the impact on learning of correction or spaced repetition. ‘What works’ is entirely context-dependent (Al-Hoorie, et al., 2023: 278). Since all languages, language learners and language learning contexts are ‘ultimately different’ (Widdowson, 2023: 397), there’s never any end to the avenues that researchers can explore: it is a ‘self-generating academic area of inquiry’ (ibid.). So we can investigate the impact of correction on the writing (as opposed to the speaking) of a group of Spanish (as opposed to another nationality) university students (as opposed to another age group) in an online setting (as opposed to face-to-face) where the correction is delayed (as opposed to immediate) and delivered by WhatsApp (as opposed to another medium) (see, for example, Murphy et al., 2023). We could carry on playing around with the variables for as long as we like – this kind of research has already been going on for decades.

When it comes to spaced repetition, researchers need to consider the impact of different algorithms (e.g. the length of the spaces) on different kinds of learners (age, level, motivation, self-regulation, etc.) in their acquisition of different kinds of lexical items (frequency, multi-word units, etc.), how these items are selected and grouped, and the nature of the acquisition (e.g. is it for productive use or purely for recognition?). And so on (see the work of Tatsuya Nakata, for example).
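To make ‘the length of the spaces’ concrete, here is a minimal sketch of one possible expanding-interval scheduler. It is not Nakata’s procedure or any published algorithm: the growth factor, the reset rule and the starting interval are all invented, and they are exactly the kind of variables researchers manipulate.

```python
# A minimal sketch of an expanding-interval spaced-repetition scheduler.
# Illustrative only: the growth factor and intervals are invented values.
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class Card:
    item: str                      # the lexical item being learned
    interval_days: int = 1         # current gap before the next review
    due: date = field(default_factory=date.today)

def review(card: Card, recalled: bool, today: date, growth: float = 2.5) -> Card:
    """Expand the gap after a successful recall; shrink it back after a failure."""
    if recalled:
        card.interval_days = max(1, round(card.interval_days * growth))
    else:
        card.interval_days = 1     # start the cycle again
    card.due = today + timedelta(days=card.interval_days)
    return card

# Example: a learner who recalls 'put off' three times in a row
card = Card("put off")
today = date.today()
for _ in range(3):
    card = review(card, recalled=True, today=today)
    print(card.item, "next review in", card.interval_days, "days")
```

Changing the growth factor, or what happens after a failed recall, changes the whole schedule – which is one reason why ‘does spaced repetition work?’ has no single answer.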

Such attempts to control the variables are a necessary part of scientific enquiry and of the ‘disciplinary agenda’, but they are unlikely to be of much relevance to most teachers. Researchers need precision, but the more they attempt to ‘approximate the complexities of real life, the more unwieldy [their] theories inevitably become’ (Al-Hoorie et al., 2023). Teachers, on the other hand, are typically more interested in canonical views that can lead to general take-aways that can be easily applied in their lessons. It is only secondary research, in the form of meta-analyses or literature reviews (of the kind that TESOLgraphics summarises), that can avoid the Richardson Effect and might offer something of help to the average classroom practitioner. But this secondary research, stripped of the contextual variables, can only be fairly vague. It can only really tell us, for example, that some form of written correction or spaced repetition may be helpful to some learners in some contexts some of the time. It has been argued that such broad-stroke generalisations, in need of ‘substantial localization’, are often closer to ‘pseudo-applications’ (Al-Hoorie et al., 2023) than anything that is reliably actionable. That is not to say, however, that broad-stroke generalisations are of no value at all.

Finding the right map

Henry Widdowson (e.g. 2023) has declared himself sceptical about the practical relevance of SLA research. Reading journals like ‘Studies in Second Language Acquisition’ or ‘System’, it’s hard not to agree. Attempts to increase the accessibility of research (e.g. open-access or simple summaries) may not have the desired impact since they do not do anything about ‘the tenuous link between research and practice’ (Hwang, 2023). They cannot bridge the ‘gap between two sharply contrasting kinds of knowledge’ (McIntyre, 2006).

There is an alternative: classroom-based action research carried out by teachers. One of the central ideas behind it is that teachers may benefit more from carrying out their own research than from reading someone else’s. Enthusiasm for action research has been around for a long time: it was very fashionable in the 1980s when I trained as a teacher. In the 1990s, there was a series of conferences for English language teachers called ‘Teachers Develop Teachers Research’ (see, for example, Field et al., 1997). Tirelessly promoted by people like Richard Smith, Paula Rebolledo (Smith et al., 2014) and Anne Burns, action research seems to be gaining traction. A recent British Council publication (Burns, 2023) is a fine example of what insights teachers may gain and act on with an exploratory action research approach.

References

Al-Hoorie A. H., Hiver, P., Larsen-Freeman, D. & Lowie, W. (2023) From replication to substantiation: A complexity theory perspective. Language Teaching, 56 (2): pp. 276 – 291

Borg, S. (2009) English language teachers’ conceptions of research. Applied Linguistics, 30 (3): 358 – 88

Burns, A. (Ed.) (2023) Exploratory Action Research in Thai Schools: English teachers identifying problems, taking action and assessing results. Bangkok, Thailand: British Council

Cartwright, N., Bradburn, N. M., & Fuller, J. (2016) A theory of measurement. Working Paper. Centre for Humanities Engaging Science and Society (CHESS), Durham.

Field, J., Graham, A., Griffiths, E. & Head, K. (Eds.) (1997) Teachers Develop Teachers Research 2. Whitstable, Kent: IATEFL

Hwang, H.-B. (2023) Is evidence-based L2 pedagogy achievable? The research–practice dialogue in grammar instruction. The Modern Language Journal, 2023: 1 – 22 https://onlinelibrary.wiley.com/doi/full/10.1111/modl.12864

Li Wei. (2018) Translanguaging as a Practical Theory of Language. Applied Linguistics, 39 (1): 9 – 30

Makoni, S. & Pennycook, A. (Eds.) (2007) Disinventing and Reconstituting Languages. Clevedon: Multilingual Matters

May, S. (Ed.) (2013) The multilingual turn: Implications for SLA, TESOL and Bilingual education. New York: Routledge

McIntyre, D. (2006) Bridging the gap between research and practice. Cambridge Journal of Education 35 (3): 357 – 382

Meier, G. S. (2017) The multilingual turn as a critical movement in education: assumptions, challenges and a need for reflection. Applied Linguistics Review, 8 (1): 131-161

Murphy, B., Mackay J. & Tragant, E. (2023) ‘(Ok I think I was totally wrong: new try!)’: language learning in WhatsApp through the provision of delayed corrective feedback provided during and after task performance’, The Language Learning Journal, DOI: 10.1080/09571736.2023.2223217

Palmer, S. E. et al. (1981) Canonical perspective and the perception of objects. In Long, J. & Baddeley, A. (Eds.) Attention and Performance IX. Hillsdale, NJ: Erlbaum. pp. 135 – 151

Pardo-Guerra, J. P. (2022) The Quantified Scholar. New York: Columbia University Press

Smith, R., Connelly, T. & Rebolledo, P. (2014). Teacher research as CPD: A project with Chilean secondary school teachers. In D. Hayes (Ed.), Innovations in the continuing professional development of English language teachers (pp. 111–128). The British Council.

Tal, E. (2020) Measurement in Science. In Zalta, E. N. (Ed.) The Stanford Encyclopedia of Philosophy (Fall 2020 Edition). https://plato.stanford.edu/archives/fall2020/entries/measurement-science/

Widdowson, H. (2023) Webinar on the subject of English and applied linguistics. Language Teaching, 56 (3): 393 – 401

I’ve long felt that the greatest value of technology in language learning is to facilitate interaction between learners, rather than interaction between learners and software. I can’t claim any originality here. Twenty years ago, Kern and Warschauer (2000) described ‘the changing nature of computer use in language teaching’, away from ‘grammar and vocabulary tutorials, drill and practice programs’, towards computer-mediated communication (CMC). This change has even been described as a paradigm shift (Ciftci & Kocoglu, 2012: 62), although I suspect that the shift has affected approaches to research much more than it has actual practices.

However, there is one application of CMC that is probably at least as widespread in actual practice as it is in the research literature: online peer feedback. Online peer feedback on writing, especially in the development of academic writing skills in higher education, is certainly very common. To a much lesser extent, online peer feedback on speaking (e.g. in audio and video blogs) has also been explored (see, for example, Yeh et al., 2019 and Rodríguez-González & Castañeda, 2018).

Peer feedback

Interest in feedback has spread widely since the publication of Hattie and Timperley’s influential ‘The Power of Feedback’, which argued that ‘feedback is one of the most powerful influences on learning and achievement’ (Hattie & Timperley, 2007: 81). Peer feedback, in particular, has generated much optimism in the general educational literature as a formative practice (Double et al., 2019) because of its potential to:

  • ‘promote a sense of ownership, personal responsibility, and motivation,
  • reduce assessee anxiety and improve acceptance of negative feedback,
  • increase variety and interest, activity and interactivity, identification and bonding, self-confidence, and empathy for others’ (Topping, 1998: 256)
  • improve academic performance (Double et al., 2019).

In the literature on language learning, this enthusiasm is mirrored and peer feedback is generally recommended by both methodologists and researchers (Burkert & Wally, 2013). The reasons given, in addition to those listed above, include the following:

  • it can benefit both the receiver and the giver of feedback (Storch & Aldossary, 2019: 124),
  • it requires the givers of feedback to listen to or read attentively the language of their peers, and, in the process, may provide opportunities for them to make improvements in their own speaking and writing (Alshuraidah & Storch, 2019: 166–167),
  • it can facilitate a move away from a teacher centred classroom, and promote independent learning (and the skill of self-correction) as well as critical thinking (Hyland & Hyland, 2019: 7),
  • the target reader is an important consideration in any piece of writing (it is often specified in formal assessment tasks). Peer feedback may be especially helpful in developing the idea of what audience the writer is writing for (Nation, 2009: 139),
  • many learners are very receptive to peer feedback (Biber et al., 2011: 54),
  • it can reduce a teacher’s workload.

The theoretical arguments in favour of peer feedback are supported to some extent by research. A recent meta-analysis found ‘an overall small to medium effect of peer assessment on academic performance’ (Double et al., 2019) in general educational settings. In language learning, ‘recent research has provided generally positive evidence to support the use of peer feedback in L2 writing classes’ (Yu & Lee, 2016: 467). However, ‘firm causal evidence is as yet unavailable’ (Yu & Lee, 2016: 466).

Online peer feedback

Taking peer feedback online would seem to offer a number of advantages over traditional face-to-face oral or written channels. These include:

  • a significant reduction of the logistical burden (Double et al., 2019) because there are fewer constraints of time and place (Ho, 2015: 1),
  • the possibility (with many platforms) of monitoring students’ interactions more closely (DiGiovanni & Nagaswami, 2001: 268),
  • the encouragement of ‘greater and more equal member participation than face-to-face feedback’ (Yu & Lee, 2016: 469),
  • the possibility of reducing learners’ anxiety (which may be greater in face-to-face settings and / or when an immediate response to feedback is required) (Yeh et al., 2019: 1).

Given these potential advantages, it is disappointing to find that a meta-analysis of peer assessment in general educational contexts did not find any significant difference between online and offline feedback (Double et al., 2019). Similarly, in language learning contexts, Yu & Lee (2016: 469) report that ‘there is inconclusive evidence about the impact of computer-mediated peer feedback on the quality of peer comments and text revisions’. The rest of this article is an exploration of possible reasons why online peer feedback is not more effective than it is.

The challenges of online peer feedback

Peer feedback is usually of greatest value when it focuses on the content and organization of what has been expressed. Learners, however, have a tendency to focus on formal accuracy, rather than on the communicative success (or otherwise) of their peers’ writing or speaking. Training can go a long way towards remedying this situation (Yu & Lee, 2016: 472 – 473): indeed, ‘the importance of properly training students to provide adequately useful peer comments cannot be over-emphasized’ (Bailey & Cassidy, 2018: 82). In addition, clearly organised rubrics to guide the feedback giver, such as those offered by feedback platforms like Peergrade, may also help to steer feedback in appropriate directions. There are, however, caveats which I will come on to.

A bigger problem occurs when the interaction which takes place when learners are supposedly engaged in peer feedback is completely off-task. In one analysis of students’ online discourse in two writing tasks, ‘meaning negotiation, error correction, and technical actions seldom occurred and […] social talk, task management, and content discussion predominated the chat’ (Liang, 2010: 45). One proposed solution to this is to grade peer comments: ‘reviewers will be more motivated to spend time in their peer review process if they know that their instructors will assess or even grade their comments’ (Choi, 2014: 225). Whilst this may sometimes be an effective strategy, the curtailment of social chat may actually create more problems than it solves, as we will see later.

Other challenges of peer feedback may be even less amenable to solutions. The most common problem concerns learners’ attitudes towards peer feedback: some learners are not receptive to feedback from their peers, preferring feedback from their teachers (Maas, 2017), and some learners may be reluctant to offer peer feedback for fear of giving offence. Attitudinal issues may derive from personal or cultural factors, or a combination of both. Whatever the cause, ‘interpersonal variables play a substantial role in determining the type and quality of peer assessment’ (Double et al., 2019). One proposed solution to this is to anonymise the peer feedback process, since it might be thought that this would lead to greater honesty and fewer concerns about loss of face. Research into this possibility, however, offers only very limited support: two studies out of three found little benefit of anonymity (Double et al., 2019). What is more, as with the curtailment of social chat, the practice must limit the development of the interpersonal relationship, and therefore positive pair / group dynamics (Liang, 2010: 45), that is necessary for effective collaborative work.

Towards solutions?

Online peer feedback is a form of computer-supported collaborative learning (CSCL), and it is to research in this broader field that I will now turn. The claim that CSCL ‘can facilitate group processes and group dynamics in ways that may not be achievable in face-to-face collaboration’ (Dooly, 2007: 64) is not contentious, but, in order for this to happen, a number of ‘motivational or affective perceptions are important preconditions’ (Chen et al., 2018: 801). Collaborative learning presupposes a collaborative pattern of peer interaction, as opposed to expert-novice, dominant-dominant, dominant-passive, or passive-passive patterns (Yu & Lee, 2016: 475).

Simply putting students together into pairs or groups does not guarantee collaboration. Collaboration is less likely to take place when instructional management focusses primarily on cognitive processes, and ‘socio-emotional processes are ignored, neglected or forgotten […] Social interaction is equally important for affiliation, impression formation, building social relationships and, ultimately, the development of a healthy community of learning’ (Kreijns et al., 2003: 336, 348 – 9). This can happen in all contexts, but in online environments, the problem becomes ‘more salient and critical’ (Kreijns et al., 2003: 336). This is why the curtailment of social chat, the grading of peer comments, and the provision of tight rubrics may be problematic.

There is no ‘single learning tool or strategy’ that can be deployed to address the challenges of online peer feedback and CSCL more generally (Chen et al., 2018: 833). In some cases, for personal or cultural reasons, peer feedback may simply not be a sensible option. In others, where effective online peer feedback is a reasonable target, the instructional approach must find ways to train students in the specifics of giving feedback on a peer’s work, to promote mutual support, to show how to work effectively with others, and to develop the language skills needed to do this (assuming that the target language is the language that will be used in the feedback).

So, what can we learn from looking at online peer feedback? I think it’s the same old answer: technology may confer a certain number of potential advantages, but, unfortunately, it cannot provide a ‘solution’ to complex learning issues.


Note: Some parts of this article first appeared in Kerr, P. (2020). Giving feedback to language learners. Part of the Cambridge Papers in ELT Series. Cambridge: Cambridge University Press. Available at: https://www.cambridge.org/gb/files/4415/8594/0876/Giving_Feedback_minipaper_ONLINE.pdf


References

Alshuraidah, A. and Storch, N. (2019). Investigating a collaborative approach to feedback. ELT Journal, 73 (2), pp. 166–174

Bailey, D. and Cassidy, R. (2018). Online Peer Feedback Tasks: Training for Improved L2 Writing Proficiency, Anxiety Reduction, and Language Learning Strategies. CALL-EJ, 20(2), pp. 70-88

Biber, D., Nekrasova, T., and Horn, B. (2011). The Effectiveness of Feedback for L1-English and L2-Writing Development: A Meta-Analysis, TOEFL iBT RR-11-05. Princeton: Educational Testing Service. Available at: https://www.ets.org/Media/Research/pdf/RR-11-05.pdf

Burkert, A. and Wally, J. (2013). Peer-reviewing in a collaborative teaching and learning environment. In Reitbauer, M., Campbell, N., Mercer, S., Schumm Fauster, J. and Vaupetitsch, R. (Eds.) Feedback Matters. Frankfurt am Main: Peter Lang, pp. 69–85

Chen, J., Wang, M., Kirschner, P.A. and Tsai, C.C. (2018). The role of collaboration, computer use, learning environments, and supporting strategies in CSCL: A meta-analysis. Review of Educational Research, 88 (6) (2018), pp. 799-843

Choi, J. (2014). Online Peer Discourse in a Writing Classroom. International Journal of Teaching and Learning in Higher Education, 26 (2): pp. 217 – 231

Ciftci, H. and Kocoglu, Z. (2012). Effects of Peer E-Feedback on Turkish EFL Students’ Writing Performance. Journal of Educational Computing Research, 46 (1), pp. 61 – 84

DiGiovanni, E. and Nagaswami, G. (2001). Online peer review: an alternative to face-to-face? ELT Journal, 55 (3), pp. 263 – 272

Dooly, M. (2007). Joining forces: Promoting metalinguistic awareness through computer-supported collaborative learning. Language Awareness, 16 (1), pp. 57-74

Double, K.S., McGrane, J.A. and Hopfenbeck, T.N. (2019). The Impact of Peer Assessment on Academic Performance: A Meta-analysis of Control Group Studies. Educational Psychology Review (2019)

Hattie, J. and Timperley, H. (2007). The Power of Feedback. Review of Educational Research, 77(1), pp. 81–112

Ho, M. (2015). The effects of face-to-face and computer-mediated peer review on EFL writers’ comments and revisions. Australasian Journal of Educational Technology, 31 (1)

Hyland K. and Hyland, F. (2019). Contexts and issues in feedback on L2 writing. In Hyland K. & Hyland, F. (Eds.) Feedback in Second Language Writing. Cambridge: Cambridge University Press, pp. 1–22

Kern, R. and Warschauer, M. (2000). Theory and practice of network-based language teaching. In M. Warschauer and R. Kern (eds) Network-Based Language Teaching: Concepts and Practice. New York: Cambridge University Press. pp. 1 – 19

Kreijns, K., Kirschner, P. A. and Jochems, W. (2003). Identifying the pitfalls for social interaction in computer-supported collaborative learning environments: a review of the research. Computers in Human Behavior, 19(3), pp. 335-353

Liang, M. (2010). Using Synchronous Online Peer Response Groups in EFL Writing: Revision-Related Discourse. Language Learning and Technology, 14 (1), pp. 45 – 64

Maas, C. (2017). Receptivity to learner-driven feedback. ELT Journal, 71 (2), pp. 127–140

Nation, I. S. P. (2009). Teaching ESL / EFL Reading and Writing. New York: Routledge

Panadero, E. and Alqassab, M. (2019). An empirical review of anonymity effects in peer assessment, peer feedback, peer review, peer evaluation and peer grading. Assessment & Evaluation in Higher Education, 1–26

Rodríguez-González, E. and Castañeda, M. E. (2018). The effects and perceptions of trained peer feedback in L2 speaking: impact on revision and speaking quality, Innovation in Language Learning and Teaching, 12 (2), pp. 120-136, DOI: 10.1080/17501229.2015.1108978

Storch, N. and Aldossary, K. (2019). Peer Feedback: An activity theory perspective on givers’ and receivers’ stances. In Sato, M. and Loewen, S. (Eds.) Evidence-based Second Language Pedagogy. New York: Routledge, pp. 123–144

Topping, K. (1998). Peer assessment between students in colleges and universities. Review of Educational Research, 68 (3), pp. 249-276.

Yeh, H.-C., Tseng, S.-S., and Chen, Y.-S. (2019). Using Online Peer Feedback through Blogs to Promote Speaking Performance. Educational Technology & Society, 22 (1), pp. 1–14

Yu, S. and Lee, I. (2016). Peer feedback in second language writing (2005 – 2014). Language Teaching, 49 (4), pp. 461 – 493

More and more language learning is taking place, fully or partially, on online platforms and the affordances of these platforms for communicative interaction are exciting. Unfortunately, most platform-based language learning experiences are a relentless diet of drag-and-drop, drag-till-you-drop grammar or vocabulary gap-filling. The chat rooms and discussion forums that the platforms incorporate are underused or ignored. Lindsay Clandfield and Jill Hadfield’s new book is intended to promote online interaction between and among learners and the instructor, rather than between learners and software.

Interaction Online is a recipe book, containing about 80 different activities (many more if you consider the suggested variations). Subtitled ‘Creative activities for blended learning’, the book presents activities selected and designed so that any teacher using any degree of blend (from platform-based instruction to occasional online homework) will be able to use them. The activities do not depend on any particular piece of software, as they are all designed for basic tools like Facebook, Skype and chat rooms. Indeed, almost every single activity could be used, sometimes with some slight modification, by teachers in face-to-face settings.

A recipe book must be judged on the quality of the activities it contains, and the standard here is high. They range from relatively simple, short activities to much longer tasks which will need an hour or more to complete. An example of the former is a sentence-completion activity (‘Don’t you hate / love it when ….?’ – activity 2.5). As an example of the latter, there is a complex problem-solving information-gap where students have to work out the solution to a mystery (activity 6.13), an activity which reminds me of some of the material in Jill Hadfield’s much-loved Communication Games books.

In common with many recipe books, Interaction Online is not an easy book to use, in the sense that it is hard to navigate. The authors have divided up the tasks into five kinds of interaction (personal, factual, creative, critical and fanciful), but it is not always clear precisely why one activity has been assigned to one category rather than another. In any case, the kind of interaction is likely to be less important to many teachers than the kind and amount of language that will be generated (among other considerations), and the table of contents is less than helpful. The index at the back of the book helps to some extent, but a clearer tabulation of activities by interaction type, level, time required, topic and language focus (if any) would be very welcome. Teachers will need to devise their own system of referencing so that they can easily find activities they want to try out.

Again, like many recipe books, Interaction Online is a mix of generic task-types and activities that will only work with the supporting materials that are provided. Teachers will enjoy the latter, but will want to experiment with the former, and it is these generic task-types that they are most likely to add to their repertoire. In activity 2.7 (‘Foodies’ – personal interaction), for example, students post pictures of items of food and drink, to which other students must respond with questions. The procedure is clear and effective, but, as the authors note, the pictures could be of practically anything. ‘From pictures to questions’ might be a better title for the activity than ‘Foodies’. Similarly, activity 3.4 (‘Find a festival’ – factual interaction) uses a topic (‘festivals’), rather than a picture, to generate questions and responses. The procedure is slightly different from activity 2.7, but the interactional procedures of the two activities could be swapped around as easily as the topics could be changed.

Perhaps the greatest strength of this book is the variety of interactional procedures that is suggested. The majority of activities contain (1) suggestions for a stimulus, (2) suggestions for managing initial responses to this stimulus, and (3) suggestions for further interaction. As readers work their way through the book, they will be struck by similarities between the activities. The final chapter (chapter 8: ‘Task design’) provides an excellent summary of the possibilities of communicative online interaction, and more experienced teachers may want to read this chapter first.

Chapter 7 provides a useful, but necessarily fairly brief, overview of considerations regarding feedback and assessment.

Overall, Interaction Online is a very rich resource, and one that will be best mined in multiple visits. For most readers, I would suggest an initial flick through and a cherry-picking of a small number of activities to try out. For materials writers and course designers, a better starting point may be the final two chapters, followed by a sampling of activities. For everyone, though, Interaction Online is a powerful reminder that technology-assisted language learning could and should be far more than what it usually is.

(This review first appeared in the International House Journal of Education and Development.)

 

Adaptive learning providers make much of their ability to provide learners with personalised feedback and to provide teachers with dashboard feedback on the performance of both individuals and groups. All well and good, but my interest here is in the automated feedback that software could provide on very specific learning tasks. Scott Thornbury, in a recent talk, ‘Ed Tech: The Mouse that Roared?’, listed six ‘problems’ of language acquisition that educational technology for language learning needs to address. One of these he framed as follows: ‘The feedback problem, i.e. how does the learner get optimal feedback at the point of need?’, and suggested that technological applications ‘have some way to go.’ He was referring, not to the kind of feedback that dashboards can provide, but to the kind of feedback that characterises a good language teacher: corrective feedback (CF) – the way that teachers respond to learner utterances (typically those containing errors, but not necessarily restricted to these) in what Ellis and Shintani call ‘form-focused episodes’[1]. These responses may include a direct indication that there is an error, a reformulation, a request for repetition, a request for clarification, an echo with questioning intonation, etc. Basically, they are correction techniques.

These days, there isn’t really any debate about the value of CF. There is a clear research consensus that it can aid language acquisition. Discussing learning in more general terms, Hattie[2] claims that ‘the most powerful single influence enhancing achievement is feedback’. The debate now centres around the kind of feedback, and when it is given. Interestingly, evidence[3] has been found that CF is more effective in the learning of discrete items (e.g. some grammatical structures) than in communicative activities. Since it is precisely this kind of approach to language learning that we are more likely to find in adaptive learning programs, it is worth exploring further.

What do we know about CF in the learning of discrete items? First of all, it works better when it is explicit than when it is implicit (Li, 2010), although this needs to be nuanced. In immediate post-tests, explicit CF is better than implicit variations. But over a longer period of time, implicit CF provides better results. Secondly, formative feedback (as opposed to right / wrong testing-style feedback) strengthens retention of the learning items: this typically involves the learner repairing their error, rather than simply noticing that an error has been made. This is part of what cognitive scientists[4] sometimes describe as the ‘generation effect’. Whilst learners may benefit from formative feedback without repairing their errors, Ellis and Shintani (2014: 273) argue that the repair may result in ‘deeper processing’ and, therefore, assist learning. Thirdly, there is evidence that some delay in receiving feedback aids subsequent recall, especially over the longer term. Ellis and Shintani (2014: 276) suggest that immediate CF may ‘benefit the development of learners’ procedural knowledge’, while delayed CF is ‘perhaps more likely to foster metalinguistic understanding’. You can read a useful summary of a meta-analysis of feedback effects in online learning here, or you can buy the whole article here.

I have yet to see an online language learning program which can do CF well, but I think it’s a matter of time before things improve significantly. First of all, at the moment, feedback is usually immediate, or almost immediate. This is unlikely to change, for a number of reasons – foremost among them being the pride that ed tech takes in providing immediate feedback, and the fact that online learning is increasingly being conceptualised and consumed in bite-sized chunks, something you do on your phone between doing other things. What will change in better programs, however, is that feedback will become more formative. As things stand, tasks are usually of a very closed variety, with drag-and-drop being one of the most popular. Only one answer is possible and feedback is usually of the right / wrong-and-here’s-the-correct-answer kind. But tasks of this kind are limited in their value, and, at some point, tasks are needed where more than one answer is possible.

Here’s an example of a translation task from Duolingo, where a simple sentence could be translated into English in quite a large number of ways.

Decontextualised as it is, the sentence could be translated in the way that I have done it, although it’s unlikely. The feedback, however, is of relatively little help to the learner, who would benefit from guidance of some sort. The simple reason that Duolingo doesn’t offer useful feedback is that the programme is static. It has been programmed to accept certain answers (e.g. in this case both the present simple and the present continuous are acceptable), but everything else will be rejected. Why? Because it would take too long and cost too much to anticipate and enter in all the possible answers. Why doesn’t it offer formative feedback? Because in order to do so, it would need to identify the kind of error that has been made. If we can identify the kind of error, we can make a reasonable guess about the cause of the error, and select appropriate CF … this is what good teachers do all the time.

Analysing the kind of error that has been made is the first step in providing appropriate CF, and it can be done, with increasing accuracy, by current technology, but it requires a lot of computing. Let’s take spelling as a simple place to start. If you enter ‘I am makeing a basket for my mother’ in the Duolingo translation above, the program tells you ‘Nice try … there’s a typo in your answer’. Given the configuration of keyboards, it is highly unlikely that this is a typo. It’s a simple spelling mistake and teachers recognise it as such because they see it so often. For software to achieve the same insight, it would need, as a start, to trawl a large English dictionary database and a large tagged database of learner English. The process is quite complicated, but it’s perfectly do-able, and learners could be provided with CF in the form of a ‘spelling hint’.
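As a purely illustrative sketch of that process, here is how a program might separate a likely spelling error like ‘makeing’ from a random typo. The tiny word list and the table of common learner misspellings are invented stand-ins for the large dictionary and tagged learner corpus mentioned above, and the function names are hypothetical.

```python
# A minimal sketch of a 'spelling hint': check a known-misspellings table first,
# then fall back on edit distance against a dictionary. Illustrative data only.
DICTIONARY = {"make", "making", "makes", "made", "basket", "mother",
              "i", "am", "a", "for", "my", "do", "doing"}
COMMON_LEARNER_MISSPELLINGS = {"makeing": "making", "studing": "studying", "writting": "writing"}

def edit_distance(a: str, b: str) -> int:
    """Standard Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = curr
    return prev[-1]

def spelling_hint(word: str) -> str | None:
    w = word.lower()
    if w in DICTIONARY:
        return None
    if w in COMMON_LEARNER_MISSPELLINGS:
        return f"Spelling hint: did you mean '{COMMON_LEARNER_MISSPELLINGS[w]}'?"
    # Otherwise suggest the closest dictionary word within a small edit distance
    best = min(DICTIONARY, key=lambda d: edit_distance(w, d))
    if edit_distance(w, best) <= 2:
        return f"Check your spelling: '{word}' looks close to '{best}'."
    return "This word is not recognised."

print(spelling_hint("makeing"))   # -> Spelling hint: did you mean 'making'?
```

A real system would, of course, weight its suggestions by frequency and by what is known about errors typical of the learner’s first language.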

Rather more difficult is the error illustrated in my first screen shot. What’s the cause of this ‘error’? Teachers know immediately that this is probably a classic confusion of ‘do’ and ‘make’. They know that the French verb ‘faire’ can be translated into English as ‘make’ or ‘do’ (among other possibilities), and the error is a common language transfer problem. Software could do the same thing. It would need a large corpus (to establish that ‘make’ collocates with ‘a basket’ more often than ‘do’), a good bilingualised dictionary (plenty of these now exist), and a tagged database of learner English. Again, appropriate automated feedback could be provided in the form of some sort of indication that ‘faire’ is only sometimes translated as ‘make’.
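Again purely as a sketch of the logic just described: invented collocation counts stand in for corpus frequencies, and a one-entry mapping stands in for the bilingualised dictionary. None of this is Duolingo’s actual machinery.

```python
# A minimal sketch of a collocation-based 'make'/'do' check with a simple
# transfer hint for French 'faire'. All counts and entries are invented.
COLLOCATION_COUNTS = {
    ("make", "basket"): 1520, ("do", "basket"): 35,
    ("make", "homework"): 210, ("do", "homework"): 9800,
}
FRENCH_TO_ENGLISH = {"faire": ["make", "do"]}   # 'faire' maps to both verbs

def better_verb(noun: str) -> str:
    """Pick whichever of 'make'/'do' co-occurs more often with the noun."""
    return max(("make", "do"), key=lambda v: COLLOCATION_COUNTS.get((v, noun), 0))

def transfer_feedback(used_verb: str, noun: str, l1_verb: str = "faire") -> str | None:
    preferred = better_verb(noun)
    if used_verb != preferred and used_verb in FRENCH_TO_ENGLISH.get(l1_verb, []):
        return (f"'{l1_verb}' is only sometimes translated as '{used_verb}': "
                f"with '{noun}', English usually uses '{preferred}'.")
    return None

print(transfer_feedback("do", "basket"))    # hints at 'make a basket'
print(transfer_feedback("do", "homework"))  # no feedback needed -> None
```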

These are both relatively simple examples, but it’s easy to think of others that are much more difficult to analyse automatically. Duolingo rejects ‘I am making one basket for my mother’: it’s not very plausible, but it’s not wrong. Teachers know why learners do this (again, it’s probably a transfer problem) and know how to respond (perhaps by saying something like ‘Only one?’). Duolingo also rejects ‘I making a basket for my mother’ (a common enough error), but is unable to provide any help beyond the correct answer. Automated CF could, however, be provided in both cases if more tools are brought into play. Multiple parsing machines (one is rarely accurate enough on its own) and semantic analysis will be needed. Both the range and the complexity of the available tools are increasing so rapidly (see here for the sort of research that Google is doing and here for an insight into current applications of this research in language learning) that Duolingo-style right / wrong feedback will very soon seem positively antediluvian.
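To give a flavour of why these cases are harder, here is a deliberately crude rule – a single regular expression – that flags the probable missing auxiliary in ‘I making a basket’. A toy pattern like this misfires on perfectly good questions such as ‘Is he coming?’, which is exactly why the multiple parsers and semantic analysis mentioned above are needed; the rule and its wording are invented for illustration.

```python
# A minimal sketch of a rule-based 'missing auxiliary' hint.
# Illustrative only: a real system would use parsing, not a regular expression.
import re

PRONOUNS = r"(I|you|he|she|it|we|they)"
MISSING_AUX = re.compile(rf"\b{PRONOUNS}\s+(\w+ing)\b", re.IGNORECASE)

def missing_aux_hint(sentence: str) -> str | None:
    m = MISSING_AUX.search(sentence)
    if m:
        subject, verb = m.group(1), m.group(2)
        return f"Do you need 'am / is / are' between '{subject}' and '{verb}'?"
    return None

print(missing_aux_hint("I making a basket for my mother"))     # hint offered
print(missing_aux_hint("I am making a basket for my mother"))  # -> None
```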

One further development is worth mentioning here, and it concerns feedback and gamification. Teachers know from the way that most learners respond to written CF that they are usually much more interested in knowing what they got right or wrong, rather than the reasons for this. Most students are more likely to spend more time looking at the score at the bottom of a corrected piece of written work than at the laborious annotations of the teacher throughout the text. Getting students to pay close attention to the feedback we provide is not easy. Online language learning systems with gamification elements, like Duolingo, typically reward learners for getting things right, and getting things right in the fewest attempts possible. They encourage learners to look for the shortest or cheapest route to finding the correct answers: learning becomes a sexed-up form of test. If, however, the automated feedback is good, this sort of gamification encourages the wrong sort of learning behaviour. Gamification designers will need to shift their attention away from the current concern with right / wrong, and towards ways of motivating learners to look at and respond to feedback. It’s tricky, because you want to encourage learners to take more risks (and reward them for doing so), but it makes no sense to penalise them for getting things right. The probable solution is to have a dual points system: one set of points for getting things right, another for employing positive learning strategies.
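The dual points system proposed above is easy to prototype, at least in outline. Here is a minimal sketch, with invented behaviours and point values, that keeps an accuracy score and a learning-strategy score separately.

```python
# A minimal sketch of a dual points system: one score for accuracy,
# a separate score for helpful learning behaviour. Values are invented.
from dataclasses import dataclass

@dataclass
class Score:
    accuracy: int = 0
    strategy: int = 0

def record_attempt(score: Score, correct: bool,
                   opened_feedback: bool, retried_after_error: bool) -> Score:
    if correct:
        score.accuracy += 10
    if opened_feedback:
        score.strategy += 5       # reward engaging with the feedback itself
    if retried_after_error:
        score.strategy += 5       # reward risk-taking / having another go
    return score

score = Score()
record_attempt(score, correct=False, opened_feedback=True, retried_after_error=True)
record_attempt(score, correct=True, opened_feedback=False, retried_after_error=False)
print(score)   # Score(accuracy=10, strategy=10)
```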

The provision of automated ‘optimal feedback at the point of need’ may not be quite there yet, but it seems we’re on the way for some tasks in discrete-item learning. There will probably always be some teachers who can outperform computers in providing appropriate feedback, in the same way that a few top chess players can beat ‘Deep Blue’ and its scions. But the rest of us had better watch our backs: in the provision of some kinds of feedback, computers are catching up with us fast.

[1] Ellis, R. & N. Shintani (2014) Exploring Language Pedagogy through Second Language Acquisition Research. Abingdon: Routledge p. 249

[2] Hattie, J. (2009) Visible Learning. Abingdon: Routledge p.12

[3] Li, S. (2010) ‘The effectiveness of corrective feedback in SLA: a meta-analysis’ Language Learning 60 / 2: 309 -365

[4] Brown, P. C., Roediger, H. L. & McDaniel, M. A. (2014) Make It Stick. Cambridge, Mass.: Belknap Press