Archive for the ‘Feedback’ Category

I’ve long felt that the greatest value of technology in language learning is to facilitate interaction between learners, rather than interaction between learners and software. I can’t claim any originality here. Twenty years ago, Kern and Warschauer (2000) described ‘the changing nature of computer use in language teaching’, away from ‘grammar and vocabulary tutorials, drill and practice programs’, towards computer-mediated communication (CMC). This change has even been described as a paradigm shift (Ciftci & Kocoglu, 2012: 62), although I suspect that the shift has affected approaches to research much more than it has actual practices.

However, there is one application of CMC that is probably at least as widespread in actual practice as it is in the research literature: online peer feedback. Online peer feedback on writing, especially in the development of academic writing skills in higher education, is certainly very common. To a much lesser extent, online peer feedback on speaking (e.g. in audio and video blogs) has also been explored (see, for example, Yeh et al., 2019 and Rodríguez-González & Castañeda, 2018).

Peer feedback

Interest in feedback has spread widely since the publication of Hattie and Timperley’s influential ‘The Power of Feedback’, which argued that ‘feedback is one of the most powerful influences on learning and achievement’ (Hattie & Timperley, 2007: 81). Peer feedback, in particular, has generated much optimism in the general educational literature as a formative practice (Double et al., 2019) because of its potential to:

  • ‘promote a sense of ownership, personal responsibility, and motivation,
  • reduce assessee anxiety and improve acceptance of negative feedback,
  • increase variety and interest, activity and interactivity, identification and bonding, self-confidence, and empathy for others’ (Topping, 1998: 256),
  • improve academic performance (Double et al., 2019).

In the literature on language learning, this enthusiasm is mirrored and peer feedback is generally recommended by both methodologists and researchers (Burkert & Wally, 2013). The reasons given, in addition to those listed above, include the following:

  • it can benefit both the receiver and the giver of feedback (Storch & Aldossary, 2019: 124),
  • it requires the givers of feedback to listen to or read attentively the language of their peers, and, in the process, may provide opportunities for them to make improvements in their own speaking and writing (Alshuraidah & Storch, 2019: 166–167),
  • it can facilitate a move away from a teacher-centred classroom, and promote independent learning (and the skill of self-correction) as well as critical thinking (Hyland & Hyland, 2019: 7),
  • the target reader is an important consideration in any piece of writing (it is often specified in formal assessment tasks). Peer feedback may be especially helpful in developing the idea of what audience the writer is writing for (Nation, 2009: 139),
  • many learners are very receptive to peer feedback (Biber et al., 2011: 54),
  • it can reduce a teacher’s workload.

The theoretical arguments in support of peer feedback are supported to some extent by research. A recent meta-analysis found ‘an overall small to medium effect of peer assessment on academic performance’ (Double et al., 2019) in general educational settings. In language learning, ‘recent research has provided generally positive evidence to support the use of peer feedback in L2 writing classes’ (Yu & Lee, 2016: 467). However, ‘firm causal evidence is as yet unavailable’ (Yu & Lee, 2016: 466).

Online peer feedback

Taking peer feedback online would seem to offer a number of advantages over traditional face-to-face oral or written channels. These include:

  • a significant reduction of the logistical burden (Double et al., 2019) because there are fewer constraints of time and place (Ho, 2015: 1),
  • the possibility (with many platforms) of monitoring students’ interactions more closely (DiGiovanni & Nagaswami, 2001: 268),
  • the encouragement of ‘greater and more equal member participation than face-to-face feedback’ (Yu & Lee, 2016: 469),
  • the possibility of reducing learners’ anxiety (which may be greater in face-to-face settings and / or when an immediate response to feedback is required) (Yeh et al., 2019: 1).

Given these potential advantages, it is disappointing to find that a meta-analysis of peer assessment in general educational contexts did not find any significant difference between online and offline feedback (Double et al., 2019). Similarly, in language learning contexts, Yu & Lee (2016: 469) report that ‘there is inconclusive evidence about the impact of computer-mediated peer feedback on the quality of peer comments and text revisions’. The rest of this article is an exploration of possible reasons why online peer feedback is not more effective than it is.

The challenges of online peer feedback

Peer feedback is usually of greatest value when it focuses on the content and organization of what has been expressed. Learners, however, have a tendency to focus on formal accuracy, rather than on the communicative success (or otherwise) of their peers’ writing or speaking. Training can go a long way towards remedying this situation (Yu & Lee, 2016: 472–473): indeed, ‘the importance of properly training students to provide adequately useful peer comments cannot be over-emphasized’ (Bailey & Cassidy, 2018: 82). In addition, clearly organised rubrics to guide the feedback giver, such as those offered by feedback platforms like Peergrade, may also help to steer feedback in appropriate directions. There are, however, caveats which I will come on to.

A bigger problem occurs when the interaction that takes place while learners are supposedly engaged in peer feedback is completely off-task. In one analysis of students’ online discourse in two writing tasks, ‘meaning negotiation, error correction, and technical actions seldom occurred and […] social talk, task management, and content discussion predominated the chat’ (Liang, 2010: 45). One proposed solution to this is to grade peer comments: ‘reviewers will be more motivated to spend time in their peer review process if they know that their instructors will assess or even grade their comments’ (Choi, 2014: 225). Whilst this may sometimes be an effective strategy, the curtailment of social chat may actually create more problems than it solves, as we will see later.

Other challenges of peer feedback may be even less amenable to solutions. The most common problem concerns learners’ attitudes towards peer feedback: some learners are not receptive to feedback from their peers, preferring feedback from their teachers (Maas, 2017), and some learners may be reluctant to offer peer feedback for fear of giving offence. Attitudinal issues may derive from personal or cultural factors, or a combination of both. Whatever the cause, ‘interpersonal variables play a substantial role in determining the type and quality of peer assessment’ (Double et al., 2019). One proposed solution is to anonymise the peer feedback process, on the assumption that this will lead to greater honesty and fewer concerns about loss of face. Research into this possibility, however, offers only very limited support: two studies out of three found little benefit of anonymity (Double et al., 2019). What is more, as with the curtailment of social chat, the practice must limit the development of the interpersonal relationships, and therefore the positive pair / group dynamics (Liang, 2010: 45), that are necessary for effective collaborative work.

Towards solutions?

Online peer feedback is a form of computer-supported collaborative learning (CSCL), and it is to research in this broader field that I will now turn. The claim that CSCL ‘can facilitate group processes and group dynamics in ways that may not be achievable in face-to-face collaboration’ (Dooly, 2007: 64) is not contentious, but, in order for this to happen, a number of ‘motivational or affective perceptions are important preconditions’ (Chen et al., 2018: 801). Collaborative learning presupposes a collaborative pattern of peer interaction, as opposed to expert-novice, dominant-dominant, dominant-passive, or passive-passive patterns (Yu & Lee, 2016: 475).

Simply putting students together into pairs or groups does not guarantee collaboration. Collaboration is less likely to take place when instructional management focusses primarily on cognitive processes, and ‘socio-emotional processes are ignored, neglected or forgotten […] Social interaction is equally important for affiliation, impression formation, building social relationships and, ultimately, the development of a healthy community of learning’ (Kreijns et al., 2003: 336, 348–349). This can happen in all contexts, but in online environments, the problem becomes ‘more salient and critical’ (Kreijns et al., 2003: 336). This is why the curtailment of social chat, the grading of peer comments, and the provision of tight rubrics may be problematic.

There is no ‘single learning tool or strategy’ that can be deployed to address the challenges of online peer feedback and CSCL more generally (Chen et al., 2018: 833). In some cases, for personal or cultural reasons, peer feedback may simply not be a sensible option. In others, where effective online peer feedback is a reasonable target, the instructional approach must find ways to train students in the specifics of giving feedback on a peer’s work, to promote mutual support, to show how to work effectively with others, and to develop the language skills needed to do this (assuming that the target language is the language that will be used in the feedback).

So, what can we learn from looking at online peer feedback? I think it’s the same old answer: technology may confer a certain number of potential advantages, but, unfortunately, it cannot provide a ‘solution’ to complex learning issues.


Note: Some parts of this article first appeared in Kerr, P. (2020). Giving feedback to language learners. Part of the Cambridge Papers in ELT Series. Cambridge: Cambridge University Press. Available at: https://www.cambridge.org/gb/files/4415/8594/0876/Giving_Feedback_minipaper_ONLINE.pdf


References

Alshuraidah, A. and Storch, N. (2019). Investigating a collaborative approach to feedback. ELT Journal, 73 (2), pp. 166–174

Bailey, D. and Cassidy, R. (2018). Online Peer Feedback Tasks: Training for Improved L2 Writing Proficiency, Anxiety Reduction, and Language Learning Strategies. CALL-EJ, 20(2), pp. 70-88

Biber, D., Nekrasova, T., and Horn, B. (2011). The Effectiveness of Feedback for L1-English and L2-Writing Development: A Meta-Analysis, TOEFL iBT RR-11-05. Princeton: Educational Testing Service. Available at: https://www.ets.org/Media/Research/pdf/RR-11-05.pdf

Burkert, A. and Wally, J. (2013). Peer-reviewing in a collaborative teaching and learning environment. In Reitbauer, M., Campbell, N., Mercer, S., Schumm Fauster, J. and Vaupetitsch, R. (Eds.) Feedback Matters. Frankfurt am Main: Peter Lang, pp. 69–85

Chen, J., Wang, M., Kirschner, P.A. and Tsai, C.C. (2018). The role of collaboration, computer use, learning environments, and supporting strategies in CSCL: A meta-analysis. Review of Educational Research, 88 (6), pp. 799-843

Choi, J. (2014). Online Peer Discourse in a Writing Classroom. International Journal of Teaching and Learning in Higher Education, 26 (2), pp. 217–231

Ciftci, H. and Kocoglu, Z. (2012). Effects of Peer E-Feedback on Turkish EFL Students’ Writing Performance. Journal of Educational Computing Research, 46 (1), pp. 61–84

DiGiovanni, E. and Nagaswami, G. (2001). Online peer review: an alternative to face-to-face? ELT Journal, 55 (3), pp. 263–272

Dooly, M. (2007). Joining forces: Promoting metalinguistic awareness through computer-supported collaborative learning. Language Awareness, 16 (1), pp. 57-74

Double, K.S., McGrane, J.A. and Hopfenbeck, T.N. (2019). The Impact of Peer Assessment on Academic Performance: A Meta-analysis of Control Group Studies. Educational Psychology Review

Hattie, J. and Timperley, H. (2007). The Power of Feedback. Review of Educational Research, 77(1), pp. 81–112

Ho, M. (2015). The effects of face-to-face and computer-mediated peer review on EFL writers’ comments and revisions. Australasian Journal of Educational Technology, 31 (1)

Hyland K. and Hyland, F. (2019). Contexts and issues in feedback on L2 writing. In Hyland K. & Hyland, F. (Eds.) Feedback in Second Language Writing. Cambridge: Cambridge University Press, pp. 1–22

Kern, R. and Warschauer, M. (2000). Theory and practice of network-based language teaching. In M. Warschauer and R. Kern (Eds.) Network-Based Language Teaching: Concepts and Practice. New York: Cambridge University Press, pp. 1–19

Kreijns, K., Kirschner, P. A. and Jochems, W. (2003). Identifying the pitfalls for social interaction in computer-supported collaborative learning environments: a review of the research. Computers in Human Behavior, 19(3), pp. 335-353

Liang, M. (2010). Using Synchronous Online Peer Response Groups in EFL Writing: Revision-Related Discourse. Language Learning & Technology, 14 (1), pp. 45–64

Maas, C. (2017). Receptivity to learner-driven feedback. ELT Journal, 71 (2), pp. 127–140

Nation, I. S. P. (2009). Teaching ESL / EFL Reading and Writing. New York: Routledge

Panadero, E. and Alqassab, M. (2019). An empirical review of anonymity effects in peer assessment, peer feedback, peer review, peer evaluation and peer grading. Assessment & Evaluation in Higher Education, pp. 1–26

Rodríguez-González, E. and Castañeda, M. E. (2018). The effects and perceptions of trained peer feedback in L2 speaking: impact on revision and speaking quality, Innovation in Language Learning and Teaching, 12 (2), pp. 120-136, DOI: 10.1080/17501229.2015.1108978

Storch, N. and Aldossary, K. (2019). Peer Feedback: An activity theory perspective on givers’ and receivers’ stances. In Sato, M. and Loewen, S. (Eds.) Evidence-based Second Language Pedagogy. New York: Routledge, pp. 123–144

Topping, K. (1998). Peer assessment between students in colleges and universities. Review of Educational Research, 68 (3), pp. 249-276

Yeh, H.-C., Tseng, S.-S., and Chen, Y.-S. (2019). Using Online Peer Feedback through Blogs to Promote Speaking Performance. Educational Technology & Society, 22 (1), pp. 1–14

Yu, S. and Lee, I. (2016). Peer feedback in second language writing (2005–2014). Language Teaching, 49 (4), pp. 461–493

Adaptive learning providers make much of their ability to provide learners with personalised feedback and to provide teachers with dashboard feedback on the performance of both individuals and groups. All well and good, but my interest here is in the automated feedback that software could provide on very specific learning tasks. Scott Thornbury, in a recent talk, ‘Ed Tech: The Mouse that Roared?’, listed six ‘problems’ of language acquisition that educational technology for language learning needs to address. One of these he framed as follows: ‘The feedback problem, i.e. how does the learner get optimal feedback at the point of need?’, and suggested that technological applications ‘have some way to go.’ He was referring, not to the kind of feedback that dashboards can provide, but to the kind of feedback that characterises a good language teacher: corrective feedback (CF) – the way that teachers respond to learner utterances (typically those containing errors, but not necessarily restricted to these) in what Ellis and Shintani call ‘form-focused episodes’[1]. These responses may include a direct indication that there is an error, a reformulation, a request for repetition, a request for clarification, an echo with questioning intonation, etc. Basically, they are correction techniques.

These days, there isn’t really any debate about the value of CF. There is a clear research consensus that it can aid language acquisition. Discussing learning in more general terms, Hattie[2] claims that ‘the most powerful single influence enhancing achievement is feedback’. The debate now centres around the kind of feedback, and when it is given. Interestingly, evidence[3] has been found that CF is more effective in the learning of discrete items (e.g. some grammatical structures) than in communicative activities. Since it is precisely this kind of approach to language learning that we are more likely to find in adaptive learning programs, it is worth exploring further.

What do we know about CF in the learning of discrete items? First of all, it works better when it is explicit than when it is implicit (Li, 2010), although this needs to be nuanced. In immediate post-tests, explicit CF is better than implicit variations. But over a longer period of time, implicit CF provides better results. Secondly, formative feedback (as opposed to right / wrong testing-style feedback) strengthens retention of the learning items: this typically involves the learner repairing their error, rather than simply noticing that an error has been made. This is part of what cognitive scientists[4] sometimes describe as the ‘generation effect’. Whilst learners may benefit from formative feedback without repairing their errors, Ellis and Shintani (2014: 273) argue that the repair may result in ‘deeper processing’ and, therefore, assist learning. Thirdly, there is evidence that some delay in receiving feedback aids subsequent recall, especially over the longer term. Ellis and Shintani (2014: 276) suggest that immediate CF may ‘benefit the development of learners’ procedural knowledge’, while delayed CF is ‘perhaps more likely to foster metalinguistic understanding’. You can read a useful summary of a meta-analysis of feedback effects in online learning here, or you can buy the whole article here.

I have yet to see an online language learning program which can do CF well, but I think it’s a matter of time before things improve significantly. First of all, at the moment, feedback is usually immediate, or almost immediate. This is unlikely to change, for a number of reasons – foremost among them being the pride that ed tech takes in providing immediate feedback, and the fact that online learning is increasingly being conceptualised and consumed in bite-sized chunks, something you do on your phone between doing other things. What will change in better programs, however, is that feedback will become more formative. As things stand, tasks are usually of a very closed variety, with drag-and-drop being one of the most popular. Only one answer is possible and feedback is usually of the right / wrong-and-here’s-the-correct-answer kind. But tasks of this kind are limited in their value, and, at some point, tasks are needed where more than one answer is possible.

Here’s an example of a translation task from Duolingo, where a simple sentence could be translated into English in quite a large number of ways.

[Screenshot: Duolingo’s feedback on ‘I am doing a basket for my mother’]

Decontextualised as it is, the sentence could be translated in the way that I have done it, although it’s unlikely. The feedback, however, is of relatively little help to the learner, who would benefit from guidance of some sort. The simple reason that Duolingo doesn’t offer useful feedback is that the program is static. It has been programmed to accept certain answers (e.g. in this case both the present simple and the present continuous are acceptable), but everything else will be rejected. Why? Because it would take too long and cost too much to anticipate and enter in all the possible answers. Why doesn’t it offer formative feedback? Because in order to do so, it would need to identify the kind of error that has been made. If we can identify the kind of error, we can make a reasonable guess about the cause of the error, and select appropriate CF … this is what good teachers do all the time.
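To make the point concrete, here is that ‘static’ behaviour in miniature: a minimal Python sketch (the accepted answers are my own invention, not Duolingo’s actual data) in which the program holds a fixed set of acceptable translations and rejects everything else, with no analysis of what went wrong.

```python
# A toy illustration of static answer-matching: a fixed whitelist of
# translations (invented here, not Duolingo's data) and right/wrong feedback.
ACCEPTED = {
    "i make a basket for my mother",
    "i am making a basket for my mother",
}

def grade(answer: str) -> str:
    # Normalise case, surrounding whitespace and a final full stop.
    normalised = answer.strip().lower().rstrip(".")
    if normalised in ACCEPTED:
        return "Correct!"
    # No error analysis is possible: anything off-list is simply wrong.
    return "Wrong. Correct answer: 'I am making a basket for my mother.'"

print(grade("I am doing a basket for my mother"))  # Wrong. Correct answer: ...
```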

Analysing the kind of error that has been made is the first step in providing appropriate CF, and it can be done, with increasing accuracy, by current technology, but it requires a lot of computing. Let’s take spelling as a simple place to start. If you enter ‘I am makeing a basket for my mother’ in the Duolingo translation above, the program tells you ‘Nice try … there’s a typo in your answer’. Given the configuration of keyboards, it is highly unlikely that this is a typo. It’s a simple spelling mistake, and teachers recognise it as such because they see it so often. For software to achieve the same insight, it would need, as a start, to trawl a large English dictionary database and a large tagged database of learner English. The process is quite complicated, but it’s perfectly doable, and learners could be provided with CF in the form of a ‘spelling hint’.

[Screenshot: Duolingo’s feedback on ‘I am makeing a basket for my mother’]
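As a rough illustration of how such a ‘spelling hint’ might be derived, here is a minimal sketch. The tiny word list and keyboard-adjacency table are stand-ins for the large dictionary and tagged learner databases mentioned above, and the typo-versus-spelling-mistake heuristic is deliberately crude.

```python
# A minimal sketch of a typo-vs-spelling-mistake check. KNOWN_WORDS and
# ADJACENT_KEYS are tiny illustrative stand-ins for a dictionary database
# and a full QWERTY adjacency map.
KNOWN_WORDS = {"i", "am", "making", "a", "basket", "for", "my", "mother"}
ADJACENT_KEYS = {"e": "wrsd", "k": "jilm", "i": "ujko"}  # fragment only

def single_insertion(wrong: str, right: str):
    """If `wrong` is `right` plus one extra letter, return (position, letter)."""
    if len(wrong) != len(right) + 1:
        return None
    for i in range(len(wrong)):
        if wrong[:i] + wrong[i + 1:] == right:
            return i, wrong[i]
    return None

def diagnose(word: str) -> str:
    if word in KNOWN_WORDS:
        return "ok"
    for candidate in KNOWN_WORDS:
        hit = single_insertion(word, candidate)
        if hit:
            pos, letter = hit
            neighbours = word[max(0, pos - 1):pos + 2].replace(letter, "", 1)
            # An extra letter next to its keyboard neighbours suggests a slip
            # of the finger; anything else is more likely a spelling mistake.
            if any(letter in ADJACENT_KEYS.get(c, "") for c in neighbours):
                return f"possible typo: did you mean '{candidate}'?"
            return f"spelling hint: check your spelling of '{candidate}'"
    return "unknown word"

print(diagnose("makeing"))  # spelling hint: check your spelling of 'making'
```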

Rather more difficult is the error illustrated in my first screenshot. What’s the cause of this ‘error’? Teachers know immediately that this is probably a classic confusion of ‘do’ and ‘make’. They know that the French verb ‘faire’ can be translated into English as ‘make’ or ‘do’ (among other possibilities), and the error is a common language transfer problem. Software could do the same thing. It would need a large corpus (to establish that ‘make’ collocates with ‘a basket’ more often than ‘do’), a good bilingualised dictionary (plenty of these now exist), and a tagged database of learner English. Again, appropriate automated feedback could be provided in the form of some sort of indication that ‘faire’ is only sometimes translated as ‘make’.
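A sketch of the collocation side of this check might look as follows. The frequency counts and the ‘faire’ mapping are invented placeholders; a real system would query a large corpus and a bilingualised dictionary rather than these toy tables.

```python
# Toy collocation check: which verb does the corpus prefer before this noun?
# The counts are invented; in practice they would come from a large corpus.
COLLOCATION_COUNTS = {("make", "basket"): 412, ("do", "basket"): 3}

# Hypothetical bilingual dictionary entry: French 'faire' covers both verbs.
FAIRE_TRANSLATIONS = ("make", "do")

def check_verb_choice(verb: str, noun: str) -> str:
    counts = {v: COLLOCATION_COUNTS.get((v, noun), 0) for v in FAIRE_TRANSLATIONS}
    preferred = max(counts, key=counts.get)
    # Flag the choice only when the corpus preference is overwhelming.
    if verb != preferred and counts[preferred] > 10 * max(counts[verb], 1):
        return (f"hint: 'faire' is only sometimes translated as '{verb}'; "
                f"with '{noun}', English usually prefers '{preferred}'")
    return "ok"

print(check_verb_choice("do", "basket"))
```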

These are both relatively simple examples, but it’s easy to think of others that are much more difficult to analyse automatically. Duolingo rejects ‘I am making one basket for my mother’: it’s not very plausible, but it’s not wrong. Teachers know why learners do this (again, it’s probably a transfer problem) and know how to respond (perhaps by saying something like ‘Only one?’). Duolingo also rejects ‘I making a basket for my mother’ (a common enough error), but is unable to provide any help beyond the correct answer. Automated CF could, however, be provided in both cases if more tools are brought into play. Multiple parsing machines (one is rarely accurate enough on its own) and semantic analysis will be needed. Both the range and the complexity of the available tools are increasing so rapidly (see here for the sort of research that Google is doing and here for an insight into current applications of this research in language learning) that Duolingo-style right / wrong feedback will very soon seem positively antediluvian.
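By way of illustration, here is one crude version of such a check, using the open-source spaCy parser. The rule (an -ing verb with a subject but no auxiliary suggests a dropped ‘be’) and the wording of the hint are my own inventions, and, as noted above, a single parser is rarely reliable enough on ungrammatical input, so a real system would combine several.

```python
# A sketch of a single parsing-based check: an -ing verb with a subject but
# no auxiliary suggests a dropped 'be' ('I making ...'). Requires spaCy and
# its small English model (pip install spacy; python -m spacy download
# en_core_web_sm). Deliberately crude: parsers often misread learner errors.
import spacy

nlp = spacy.load("en_core_web_sm")

def missing_auxiliary(sentence: str) -> str:
    doc = nlp(sentence)
    for token in doc:
        if token.tag_ == "VBG":  # present participle / -ing form
            deps = {child.dep_ for child in token.children}
            if "nsubj" in deps and "aux" not in deps:
                return (f"hint: '{token.text}' may need an auxiliary "
                        f"(am / is / are {token.text})")
    return "ok"

print(missing_auxiliary("I making a basket for my mother."))
```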

One further development is worth mentioning here, and it concerns feedback and gamification. Teachers know from the way that most learners respond to written CF that they are usually much more interested in knowing what they got right or wrong than in the reasons for this. Most students spend more time looking at the score at the bottom of a corrected piece of written work than at the laborious annotations of the teacher throughout the text. Getting students to pay close attention to the feedback we provide is not easy. Online language learning systems with gamification elements, like Duolingo, typically reward learners for getting things right, and for getting things right in the fewest attempts possible. They encourage learners to look for the shortest or cheapest route to finding the correct answers: learning becomes a sexed-up form of testing. If, however, the automated feedback is good, this sort of gamification encourages the wrong sort of learning behaviour. Gamification designers will need to shift their attention away from the current concern with right / wrong, and towards ways of motivating learners to look at and respond to feedback. It’s tricky, because you want to encourage learners to take more risks (and reward them for doing so), but it makes no sense to penalise them for getting things right. The probable solution is to have a dual points system: one set of points for getting things right, another for employing positive learning strategies.
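In outline, such a dual points system is easy to sketch. The example below is purely illustrative: the class name, the point values, and the behaviour signals (time spent reading feedback, whether a revision was made) are all my own inventions, not features of any existing platform.

```python
# Illustrative dual scoreboard: accuracy points for right answers, strategy
# points for feedback-related behaviour. All names and values are invented.
class Scoreboard:
    def __init__(self) -> None:
        self.accuracy_points = 0
        self.strategy_points = 0

    def record_answer(self, correct: bool) -> None:
        if correct:
            self.accuracy_points += 10

    def record_feedback_review(self, seconds_spent: float, revised: bool) -> None:
        # Reward looking at the feedback, not just being right first time.
        if seconds_spent >= 5:
            self.strategy_points += 5
        if revised:
            self.strategy_points += 10

board = Scoreboard()
board.record_answer(correct=False)
board.record_feedback_review(seconds_spent=12.0, revised=True)
print(board.accuracy_points, board.strategy_points)  # 0 15
```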

The provision of automated ‘optimal feedback at the point of need’ may not be quite there yet, but it seems we’re on the way for some tasks in discrete-item learning. There will probably always be some teachers who can outperform computers in providing appropriate feedback, in the same way that a few top chess players can beat ‘Deep Blue’ and its scions. But the rest of us had better watch our backs: in the provision of some kinds of feedback, computers are catching up with us fast.

[1] Ellis, R. & N. Shintani (2014) Exploring Language Pedagogy through Second Language Acquisition Research. Abingdon: Routledge, p. 249

[2] Hattie, J. (2009) Visible Learning. Abingdon: Routledge, p. 12

[3] Li, S. (2010) ‘The effectiveness of corrective feedback in SLA: a meta-analysis’ Language Learning 60 / 2: 309–365

[4] Brown, P.C., Roediger, H.L. & McDaniel, M.A. (2014) Make It Stick. Cambridge, Mass.: Belknap Press