
The VR experience is nothing if it is not immersive, and in language learning, the value of immersion in VR is seen to be the way in which it can lead to what we might call ‘engagement’ or ‘flow’. Fully immersed in a VR world, learning can be maximized, or so the thinking goes (Lan, 2020; Chen & Hsu, 2020). ‘By blocking out visual and auditory distractions in the classroom, VR has the potential to help students deeply connect with the material’ (Gadelha, 2018). ‘There are no distracting classroom windows to stare out of when students are directly immersed into the topic they are investigating’ (Bonner & Reinders, 2018: 36). Such is the allure of immersion that it is no surprise to find the word in the names of VR language learning products like Immerse and ImmerseMe (although the nod to bilingual immersion programmes, such as those in Canada, is an added bonus).

There is, however, immersion and immersion. A common categorisation of VR is into:

  • non-immersive (e.g. a desktop game with a 2D screen and avatars)
  • semi-immersive (e.g. high-end arcade games and flight simulators with large projections)
  • fully immersive (e.g. with a head-mounted display, headphones, body sensors)

Taking things a little further is the possibility of directly inducing responses in the nervous system with molecular nanotechnology. We’re some way off that, but, fear not, people are working on it. At this point, it’s worth noting that this hierarchy of immersivity is driven by technological considerations: more tech = more immersion.

In ELT, the most common VR applications are currently at the low end of this scale. Probably the most talked about at the moment is the use of 360° photography and a very simple headset like Google Cardboard, along with headphones, to take students on virtual field trips – anywhere from a museum or a Disney castle to a coral reef or outer space. See Raquel Ribeiro’s blog post for CUP for more ideas. Then, there are self-study packages, like Velawoods, which is a sort of combination of The Sims with interaction made possible through speech recognition. The syllabus will be familiar to anyone used to using a contemporary coursebook.

And, now, up a technological notch or two, is Immerse, which requires an Oculus headset. It appears to be a sort of Second Life where language learners can interact with each other and a trainer in a number of role plays, set in, for example, a garden barbecue, a pool bar, a conference or a deserted island. In addition to interacting with each other, students can interact with virtual objects, picking up darts and throwing them at questions they want to focus on, for example. ‘Total physical engagement with the environment’ is how this is described by Immerse’s Chief Product Officer. You can find out more in this promotional video.

Paul Driver has suggested that the evolution of VR can be ‘traced back through time as a constant struggle to create more immersive experiences. From the intricate scrolls of twelfth-century China, the huge panoramic paintings of the nineteenth century and early experiments in stereoscopic photography, to the promising but over-hyped 1990s arcade machines (which raised hopes and then dashed expectations for a whole generation), the history of virtual reality has been a meandering march forward, punctuated with long periods of stagnation’. Immerse may be fairly sophisticated as a VR language learning platform, but it has a long way to go as an immersive environment in comparison to games like Meeting Rembrandt: Master of Reality or Project VR Fishing. Its animations are crude and clunky, its scenarios short of detail.

But however ‘lifelike’ games like these are, their immersive potential is extremely limited if you have no interest in Rembrandt or fishing. VR is only as immersive as the intrinsic interest of (1) the ‘real world’ it is attempting to replicate, and (2) what you can do in it. The novelty factor may hold attention for a while, but not for long.

With simpler 360° Google Cardboard versions of VR, you can’t actually do anything in the VR world besides watch, listen and marvel, so the intrinsic interest of the content is even more important. I quite like exploring the Okavango Delta, but I have no interest in rollercoasters or parachute jumps. But, to be immersed, I don’t actually need the 360° experience at all, if the quality of the video is good enough. In many ways, I prefer an old-fashioned screen where my hands are not tied up with holding the phone in the Cardboard and the Cardboard to my nose.

360° videos are usually short, and I can see how they can be used in a language class as a springboard for other work. But as a language learning tool, old-fashioned screens (with good content) may offer more potential than headsets (whether Cardboard or Oculus) because we can do other things (like communicate with other people, use a dictionary or take notes) at the same time.

VR technology in language learning cannot, therefore, whatever its claims, generate immersion or engagement on its own. For the time being, it can, for some, captivate initial curiosity. For others, already used to high-end Oculus games, programmes like Immerse are more likely to generate a resounding ‘meh’. Engagement in learning is a highly complex phenomenon. Mercer and Dörnyei (2020: 102 ff.) argue that engaging learning materials must be designed for particular groups of learners (in terms of level and interests, for example) and must get learners emotionally invested. Improvements in VR technology won’t, in themselves, change any of that.

VR is already well established and successful in some forms of education: military, healthcare and engineering, especially. Virtual reality is obviously a good place to learn how to defuse a bomb or carry out keyhole surgery. In other areas, such as soft skills training in corporate contexts, its use is growing, but its effectiveness is much less clear. In language learning, the purported advantages of VR (see, for example, Alizadeh, 2019, which has a useful bibliography, or Lloyd et al., 2017) are not convincing. There is no problem in language learning for which VR is the solution. This doesn’t mean that VR does not have a place in language learning / teaching. VR field trips may offer occasional moments of variety. Conversation in VR worlds like Facebook Spaces may be welcomed by some. And there will be markets for dedicated platforms like Velawoods, Mondly or Immerse.

Predictions about edtech are often thinly disguised attempts to accelerate a predicted future. Four years ago, I went to a conference presentation by Saul Nassé, Chief Executive of Cambridge Assessment. All the participants were given a Cambridge-branded Google Cardboard. At the time, Nassé wrote the following:

The technology is only going to get better and cheaper. In two or three years it will be wireless and cost less than a smart phone. That’s the point when you’ll see whole classrooms equipped with VR. And I like to think we’ll find a way of Cambridge English content being used in those classrooms, with people learning English in a whole new way. It may have been a long time coming, but I think the VR revolution is now truly here to stay.

The message was echoed in Lloyd et al. (2017), all three of whom worked for Cambridge Assessment, and amplified in a series of blog posts and conference presentations around that time. Since then, it has all gone rather quiet. There are still people out there (including the investors who have just pumped $1.5 million into Immerse in Series A funding) who believe that VR will be the next big thing in language learning. But edtech investors have a long track record of turning a blind eye to history. VR, as Saul Nassé observed, ‘has been the next big thing for thirty years’. And maybe for the next thirty years, too.

REFERENCES

Alizadeh, M. (2019). Augmented/virtual reality promises for ELT practitioners. In Clements, P., Krause, A. & Bennett, P. (Eds.), Diversity and inclusion. Tokyo: JALT. https://jalt-publications.org/sites/default/files/pdf-article/jalt2018-pcp-048.pdf

Bonner, E., & Reinders, H. (2018). Augmented and virtual reality in the language classroom: Practical ideas. Teaching English with Technology, 18 (3), pp. 33-53. Retrieved from https://files.eric.ed.gov/fulltext/EJ1186392.pdf

Chen, Y. L. & Hsu, C. C. (2020). Self-regulated mobile game-based English learning in a virtual reality environment. Computers and Education, 154. https://www.sciencedirect.com/science/article/abs/pii/S0360131520301093?dgcid=rss_sd_all

Gadelha, R. (2018). Revolutionizing Education: The promise of virtual reality. Childhood Education, 94 (1), pp. 40-43. doi:10.1080/00094056.2018.1420362

Lan, Y. J. (2020). Immersion, interaction and experience-oriented learning: Bringing virtual reality into FL learning. Language Learning & Technology, 24(1), pp. 1–15. http://hdl.handle.net/10125/44704

Lloyd, A., Rogerson, S. & Stead, G. (2017). Imagining the potential for using Virtual Reality technologies in language learning. In Carrier, M., Damerow, R. M. & Bailey, K. M. (Eds.) Digital Language Learning and Teaching. New York: Routledge. pp. 222-234

Mercer, S. & Dörnyei, Z. (2020). Engaging Language Learners in Contemporary Classrooms. Cambridge: Cambridge University Press

I’ve long felt that the greatest value of technology in language learning is to facilitate interaction between learners, rather than interaction between learners and software. I can’t claim any originality here. Twenty years ago, Kern and Warschauer (2000) described ‘the changing nature of computer use in language teaching’, away from ‘grammar and vocabulary tutorials, drill and practice programs’, towards computer-mediated communication (CMC). This change has even been described as a paradigm shift (Ciftci & Kocoglu, 2012: 62), although I suspect that the shift has affected approaches to research much more than it has actual practices.

However, there is one application of CMC that is probably at least as widespread in actual practice as it is in the research literature: online peer feedback. Online peer feedback on writing, especially in the development of academic writing skills in higher education, is certainly very common. To a much lesser extent, online peer feedback on speaking (e.g. in audio and video blogs) has also been explored (see, for example, Yeh et al., 2019 and Rodríguez-González & Castañeda, 2018).

Peer feedback

Interest in feedback has spread widely since the publication of Hattie and Timperley’s influential ‘The Power of Feedback’, which argued that ‘feedback is one of the most powerful influences on learning and achievement’ (Hattie & Timperley, 2007: 81). Peer feedback, in particular, has generated much optimism in the general educational literature as a formative practice (Double et al., 2019) because of its potential to:

  • ‘promote a sense of ownership, personal responsibility, and motivation,
  • reduce assessee anxiety and improve acceptance of negative feedback,
  • increase variety and interest, activity and interactivity, identification and bonding, self-confidence, and empathy for others’ (Topping, 1998: 256)
  • improve academic performance (Double et al., 2019).

In the literature on language learning, this enthusiasm is mirrored and peer feedback is generally recommended by both methodologists and researchers (Burkert & Wally, 2013). The reasons given, in addition to those listed above, include the following:

  • it can benefit both the receiver and the giver of feedback (Storch & Aldossary, 2019: 124),
  • it requires the givers of feedback to listen to or read the language of their peers attentively, and, in the process, may provide opportunities for them to make improvements in their own speaking and writing (Alshuraidah & Storch, 2019: 166–167),
  • it can facilitate a move away from a teacher centred classroom, and promote independent learning (and the skill of self-correction) as well as critical thinking (Hyland & Hyland, 2019: 7),
  • the target reader is an important consideration in any piece of writing (it is often specified in formal assessment tasks). Peer feedback may be especially helpful in developing the idea of what audience the writer is writing for (Nation, 2009: 139),
  • many learners are very receptive to peer feedback (Biber et al., 2011: 54),
  • it can reduce a teacher’s workload.

The theoretical arguments in support of peer feedback are supported to some extent by research. A recent meta-analysis found ‘an overall small to medium effect of peer assessment on academic performance’ (Double et al., 2019) in general educational settings. In language learning, ‘recent research has provided generally positive evidence to support the use of peer feedback in L2 writing classes’ (Yu & Lee, 2016: 467). However, ‘firm causal evidence is as yet unavailable’ (Yu & Lee, 2016: 466).

Online peer feedback

Taking peer feedback online would seem to offer a number of advantages over traditional face-to-face oral or written channels. These include:

  • a significant reduction of the logistical burden (Double et al., 2019) because there are fewer constraints of time and place (Ho, 2015: 1),
  • the possibility (with many platforms) of monitoring students’ interactions more closely (DiGiovanni & Nagaswami, 2001: 268),
  • the encouragement of ‘greater and more equal member participation than face-to-face feedback’ (Yu & Lee, 2016: 469),
  • the possibility of reducing learners’ anxiety (which may be greater in face-to-face settings and / or when an immediate response to feedback is required) (Yeh et al., 2019: 1).

Given these potential advantages, it is disappointing to find that a meta-analysis of peer assessment in general educational contexts did not find any significant difference between online and offline feedback (Double et al., 2019). Similarly, in language learning contexts, Yu & Lee (2016: 469) report that ‘there is inconclusive evidence about the impact of computer-mediated peer feedback on the quality of peer comments and text revisions’. The rest of this article is an exploration of possible reasons why online peer feedback is not more effective than it is.

The challenges of online peer feedback

Peer feedback is usually of greatest value when it focuses on the content and organization of what has been expressed. Learners, however, have a tendency to focus on formal accuracy, rather than on the communicative success (or otherwise) of their peers’ writing or speaking. Training can go a long way towards remedying this situation (Yu & Lee, 2016: 472 – 473): indeed, ‘the importance of properly training students to provide adequately useful peer comments cannot be over-emphasized’ (Bailey & Cassidy, 2018: 82). In addition, clearly organised rubrics to guide the feedback giver, such as those offered by feedback platforms like Peergrade, may also help to steer feedback in appropriate directions. There are, however, caveats which I will come on to.

A bigger problem occurs when the interaction that takes place when learners are supposedly engaged in peer feedback is completely off-task. In one analysis of students’ online discourse in two writing tasks, ‘meaning negotiation, error correction, and technical actions seldom occurred and […] social talk, task management, and content discussion predominated the chat’ (Liang, 2010: 45). One proposed solution is to grade peer comments: ‘reviewers will be more motivated to spend time in their peer review process if they know that their instructors will assess or even grade their comments’ (Choi, 2014: 225). Whilst this may sometimes be an effective strategy, the curtailment of social chat may actually create more problems than it solves, as we will see later.

Other challenges of peer feedback may be even less amenable to solutions. The most common problem concerns learners’ attitudes towards peer feedback: some learners are not receptive to feedback from their peers, preferring feedback from their teachers (Maas, 2017), and some learners may be reluctant to offer peer feedback for fear of giving offence. Attitudinal issues may derive from personal or cultural factors, or a combination of both. Whatever the cause, ‘interpersonal variables play a substantial role in determining the type and quality of peer assessment’ (Double et al., 2019). One proposed solution to this is to anonymise the peer feedback process, since it might be thought that this would lead to greater honesty and fewer concerns about loss of face. Research into this possibility, however, offers only very limited support: two studies out of three found little benefit of anonymity (Double et al., 2019). What is more, as with the curtailment of social chat, the practice must limit the development of the interpersonal relationship, and therefore positive pair / group dynamics (Liang, 2010: 45), that is necessary for effective collaborative work.

Towards solutions?

Online peer feedback is a form of computer-supported collaborative learning (CSCL), and it is to research in this broader field that I will now turn. The claim that CSCL ‘can facilitate group processes and group dynamics in ways that may not be achievable in face-to-face collaboration’ (Dooly, 2007: 64) is not contentious, but, in order for this to happen, ‘motivational or affective perceptions are important preconditions’ (Chen et al., 2018: 801). Collaborative learning presupposes a collaborative pattern of peer interaction, as opposed to expert-novice, dominant-dominant, dominant-passive, or passive-passive patterns (Yu & Lee, 2016: 475).

Simply putting students together into pairs or groups does not guarantee collaboration. Collaboration is less likely to take place when instructional management focusses primarily on cognitive processes, and ‘socio-emotional processes are ignored, neglected or forgotten […] Social interaction is equally important for affiliation, impression formation, building social relationships and, ultimately, the development of a healthy community of learning’ (Kreijns et al., 2003: 336, 348 – 9). This can happen in all contexts, but in online environments, the problem becomes ‘more salient and critical’ (Kreijns et al., 2003: 336). This is why the curtailment of social chat, the grading of peer comments, and the provision of tight rubrics may be problematic.

There is no ‘single learning tool or strategy’ that can be deployed to address the challenges of online peer feedback and CSCL more generally (Chen et al., 2018: 833). In some cases, for personal or cultural reasons, peer feedback may simply not be a sensible option. In others, where effective online peer feedback is a reasonable target, the instructional approach must find ways to train students in the specifics of giving feedback on a peer’s work, to promote mutual support, to show how to work effectively with others, and to develop the language skills needed to do this (assuming that the target language is the language that will be used in the feedback).

So, what can we learn from looking at online peer feedback? I think it’s the same old answer: technology may confer a certain number of potential advantages, but, unfortunately, it cannot provide a ‘solution’ to complex learning issues.

 

Note: Some parts of this article first appeared in Kerr, P. (2020). Giving feedback to language learners. Part of the Cambridge Papers in ELT Series. Cambridge: Cambridge University Press. Available at: https://www.cambridge.org/gb/files/4415/8594/0876/Giving_Feedback_minipaper_ONLINE.pdf

 

References

Alshuraidah, A. and Storch, N. (2019). Investigating a collaborative approach to feedback. ELT Journal, 73 (2), pp. 166–174

Bailey, D. and Cassidy, R. (2018). Online Peer Feedback Tasks: Training for Improved L2 Writing Proficiency, Anxiety Reduction, and Language Learning Strategies. CALL-EJ, 20(2), pp. 70-88

Biber, D., Nekrasova, T., and Horn, B. (2011). The Effectiveness of Feedback for L1-English and L2-Writing Development: A Meta-Analysis, TOEFL iBT RR-11-05. Princeton: Educational Testing Service. Available at: https://www.ets.org/Media/Research/pdf/RR-11-05.pdf

Burkert, A. and Wally, J. (2013). Peer-reviewing in a collaborative teaching and learning environment. In Reitbauer, M., Campbell, N., Mercer, S., Schumm Fauster, J. and Vaupetitsch, R. (Eds.) Feedback Matters. Frankfurt am Main: Peter Lang, pp. 69–85

Chen, J., Wang, M., Kirschner, P.A. and Tsai, C.C. (2018). The role of collaboration, computer use, learning environments, and supporting strategies in CSCL: A meta-analysis. Review of Educational Research, 88 (6) (2018), pp. 799-843

Choi, J. (2014). Online Peer Discourse in a Writing Classroom. International Journal of Teaching and Learning in Higher Education, 26 (2), pp. 217-231

Ciftci, H. and Kocoglu, Z. (2012). Effects of Peer E-Feedback on Turkish EFL Students’ Writing Performance. Journal of Educational Computing Research, 46 (1), pp. 61 – 84

DiGiovanni, E. and Nagaswami, G. (2001). Online peer review: an alternative to face-to-face? ELT Journal, 55 (3), pp. 263-272

Dooly, M. (2007). Joining forces: Promoting metalinguistic awareness through computer-supported collaborative learning. Language Awareness, 16 (1), pp. 57-74

Double, K.S., McGrane, J.A. and Hopfenbeck, T.N. (2019). The Impact of Peer Assessment on Academic Performance: A Meta-analysis of Control Group Studies. Educational Psychology Review

Hattie, J. and Timperley, H. (2007). The Power of Feedback. Review of Educational Research, 77(1), pp. 81–112

Ho, M. (2015). The effects of face-to-face and computer-mediated peer review on EFL writers’ comments and revisions. Australasian Journal of Educational Technology, 31(1)

Hyland K. and Hyland, F. (2019). Contexts and issues in feedback on L2 writing. In Hyland K. & Hyland, F. (Eds.) Feedback in Second Language Writing. Cambridge: Cambridge University Press, pp. 1–22

Kern, R. and Warschauer, M. (2000). Theory and practice of network-based language teaching. In M. Warschauer and R. Kern (eds) Network-Based Language Teaching: Concepts and Practice. New York: Cambridge University Press. pp. 1 – 19

Kreijns, K., Kirschner, P. A. and Jochems, W. (2003). Identifying the pitfalls for social interaction in computer-supported collaborative learning environments: a review of the research. Computers in Human Behavior, 19(3), pp. 335-353

Liang, M. (2010). Using Synchronous Online Peer Response Groups in EFL Writing: Revision-Related Discourse. Language Learning and Technology, 14 (1), pp. 45 – 64

Maas, C. (2017). Receptivity to learner-driven feedback. ELT Journal, 71 (2), pp. 127–140

Nation, I. S. P. (2009). Teaching ESL / EFL Reading and Writing. New York: Routledge

Panadero, E. and Alqassab, M. (2019). An empirical review of anonymity effects in peer assessment, peer feedback, peer review, peer evaluation and peer grading. Assessment & Evaluation in Higher Education, 1–26

Rodríguez-González, E. and Castañeda, M. E. (2018). The effects and perceptions of trained peer feedback in L2 speaking: impact on revision and speaking quality, Innovation in Language Learning and Teaching, 12 (2), pp. 120-136, DOI: 10.1080/17501229.2015.1108978

Storch, N. and Aldossary, K. (2019). Peer Feedback: An activity theory perspective on givers’ and receivers’ stances. In Sato, M. and Loewen, S. (Eds.) Evidence-based Second Language Pedagogy. New York: Routledge, pp. 123–144

Topping, K. (1998). Peer assessment between students in colleges and universities. Review of Educational Research, 68 (3), pp. 249-276.

Yeh, H.-C., Tseng, S.-S., and Chen, Y.-S. (2019). Using Online Peer Feedback through Blogs to Promote Speaking Performance. Educational Technology & Society, 22 (1), pp. 1–14

Yu, S. and Lee, I. (2016). Peer feedback in second language writing (2005 – 2014). Language Teaching, 49 (4), pp. 461 – 493