In the last post, I suggested a range of activities that could be used in class to ‘activate’ a set of vocabulary before doing more communicative revision / recycling practice. In this, I’ll be suggesting a variety of more communicative tasks. As before, the activities require zero or minimal preparation on the part of the teacher.
1 Simple word associations
Write on the board a large selection of words that you want to recycle. Choose one word (at random) and ask the class if they can find another word on the board that they can associate with it. Ask one volunteer to (1) say what the other word is and (2) explain the association they have found between the two words. Then, draw a line through the first word and ask students if they can now choose a third word that they can associate with the second. Again, the nominated volunteer must explain the connection between the two words. Then, draw a line through the second word and ask for a connection between the third and fourth words. After three examples like this, it should be clear to the class what they need to do. Put the students into pairs or small groups and tell them to continue until there are no more words left, or it becomes too difficult to find connections / associations between the words that are left. This activity can be done simply in pairs or it can be turned into a class / group game.
As a follow-up, you might like to rearrange the pairs or groups and get students to see how many of their connections they can remember. As they are listening to the ideas of other students, ask them to decide which of the associations they found the most memorable / entertaining / interesting.
2 Association circles (variation of activity #1)
Ask students to look through their word list or flip through their flashcard set and make a list of the items that they are finding hardest to remember. They should do this with a partner and, together, should come up with a list of twelve or more words. Tell them to write these words in a circle on a sheet of paper.
Tell the students to choose, at random, one word in their circle. Next, they must find another word in the circle which they can associate in some way with the first word that they chose. They must explain this association to their partner. They must then find another word which they can associate with their second word. Again they must explain the association. They should continue in this way until they have connected all the words in their circle. Once students have completed the task with their partner, they should change partners and exchange ideas. All of this can be done orally.
3 Multiple associations
Using the same kind of circle of words, students again work with a partner. Starting with any word, they must find and explain an association with another word. Next, beginning with the word they first chose, they must find and explain an association with another word from the circle. They continue in this way until they have found connections between their first word and all the other words in the circle. Once students have completed the task with their partner, they should change partners and exchange ideas. All of this can be done orally.
4 Association dice
Prepare two lists (six in each list) of words that you want to recycle. Write these two lists on the board (list A and list B) with each word numbered 1 – 6. Each group in the class will need a dice.
First, demonstrate the activity with the whole class. Draw everyone’s attention to the two lists of words on the board. Then roll a dice twice. Tell the students which numbers you have landed on. Explain that the first number corresponds to a word from List A and the second number to a word from List B. Think of and explain a connection / association between the two words. Organise the class into groups and ask them to continue playing the game.
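If you like to prepare your demonstration in advance, the dice mechanic is easy to simulate with a short script. This is just a minimal sketch: the two word lists below are placeholders, to be replaced with your own target items.

```python
import random

# Placeholder word lists -- substitute the twelve items you want to recycle.
list_a = ["ankle", "upset", "queue", "borrow", "fierce", "tidy"]
list_b = ["storm", "wallet", "whisper", "sticky", "fence", "stubborn"]

def roll_pair():
    """Simulate rolling the dice twice: the first roll picks a word
    from List A, the second a word from List B."""
    a = random.randint(1, 6)
    b = random.randint(1, 6)
    return list_a[a - 1], list_b[b - 1]

for _ in range(3):
    word_a, word_b = roll_pair()
    print(f"Find a connection between '{word_a}' and '{word_b}'")
```

Running it a few times will show you which pairings are likely to come up, and whether any combinations are too hard to connect.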
Conduct feedback with the whole class. Ask them if they had any combinations of words for which they found it hard to think of a connection / association. Elicit suggestions from the whole class.
5 Picture associations #1
You will need a set of approximately eight pictures for this activity. These should be visually interesting and can be randomly chosen. If you do not have a set of pictures, you could ask the students to flick through their coursebooks and find a set of images that they find interesting or attractive. Tell them to note the page numbers. Alternatively, you could use pictures from the classroom: these might include posters on the walls, views out of the window, a mental picture of the teacher’s desk, a mental picture generated by imagining the whiteboard as a mirror, etc.
In the procedure described below, the students select the items they wish to practise. However, you may wish to select the items yourself. Make sure that students have access to dictionaries (print or online) during the lesson.
Ask the students to flip through their flashcard set or word list and make a list of the words that they are finding hardest to remember. They should do this with a partner and, together, should come up with a list of twelve or more words. The students should then find an association between each of the words on their list and one of the pictures that they select. They discuss their ideas with their partner, before comparing their ideas with a new partner.
6 Picture associations #2
Using the pictures and word lists (as in the activity above), students should select one picture, without telling their partner which picture they have selected. They should then look at the word list and choose four words from this list which they can associate with that picture. They then tell their four words to their partner, whose task is to guess which picture the other student was thinking of.
7 Rhyme associations
Prepare a list of approximately eight words that you want to recycle and write these on the board.
Ask the students to look at the words on the board. Tell them to work in pairs and find a word (in either English or their own language) which rhymes with each of the words on the list. If they cannot find a rhyming word, allow them to choose a word which sounds similar even if it is not a perfect rhyme.
The pairs should now find some sort of connection between each of the words on the list and their rhyming partners. When everyone has had enough time to find connections / associations, combine the pairs into groups of four, and ask them to exchange their ideas. Ask them to decide, for each word, which rhyming word and connection will be the most helpful in remembering this vocabulary.
Conduct feedback with the whole class.
8 Associations: truth and lies
In the procedure described below, no preparation is required. However, instead of asking the students to select the items they wish to practise, you may wish to select the items yourself. Make sure that students have access to dictionaries (print or online) during the lesson.
Ask students to flip through their flashcard set or word list and make a list of the words that they are finding hardest to remember. Individually, they should then write a series of sentences which contain these words: the sentences can contain one, two, or more of their target words. Half of the sentences should contain true personal information; the other half should contain false personal information.
Students then work with a partner, read their sentences aloud, and the partner must decide which sentences are true and which are false.
9 Associations: questions and answers
Prepare a list of between 12 and 20 items that you want the students to practise. Write these on the board (in any order) or distribute them as a handout.
Demonstrate the activity with the whole class before putting students into pairs. Form a question beginning with Why / How do you …, Why / How did you … or Why / How were you … which includes one of the target items from the list. The questions can be rather strange or divorced from reality. For example, if one of the words on the list were ankle, you could ask How did you break your ankle yesterday? Pretend that you are racking your brain to think of an answer while looking at the other words on the board. Then, provide an answer, using one of the other words from the list. For example, if one of the other words were upset, you might answer I was feeling very upset about something and I wasn’t thinking about what I was doing. I fell down some steps. If necessary, do another example with the whole class to ensure that everyone understands the activity.
Tell the students to work in pairs, taking it in turns to ask and answer questions in the same way.
Conduct feedback with the whole class. Ask if there were any particularly strange questions or answers.
(I first came across a variation of this idea in a blog post by Alex Case, ‘Playing with our Word Bag’.)
10 Associations: question and answer fortune telling
Prepare for yourself a list of items that you want to recycle. Number this list. (You will not need to show the list to anyone.)
Organise the class into pairs. Ask each pair to prepare four or five questions about the future. These questions could be personal or about the wider world around them. Give a few examples to make sure everyone understands: How many children will I have? What kind of job will I have five years from now? Who will win the next World Cup?
Tell the class that you have the answers to their questions. Hold up the list of words that you have prepared (without showing what is written on it). Elicit a question from one pair. Tell them that they must choose a number from 1 to X (depending on how many words are on your list). Find the word with that number on your list and say it aloud or write it on the board.
Tell the class that this is the answer to the question, but the answer must be ‘interpreted’. Ask the students to discuss in pairs the interpretation of the answer. You may need to demonstrate this the first time. If the question was How many children will I have? and the answer selected was precious, you might suggest that Your child will be very precious to you, but you will only have one. This activity requires a free imagination, and some classes will need some time to get used to the idea.
Continue with more questions and more answers selected blindly from the list, with students working in pairs to interpret these answers. Each time, conduct feedback with the whole class to find out who has the best interpretation.
11 Associations: narratives
In the procedure described below, no preparation is required. However, instead of asking the students to select the items they wish to practise, you may wish to select the items yourself. Make sure that students have access to dictionaries (print or online) during the lesson.
This activity often works best if it is used as a follow-up to ‘Picture Associations’. The story that the students prepare and tell should be connected to the picture that they focused on.
Ask students to flip through their flashcard set and make a list of the words that they are finding hardest to remember. They should do this with a partner and, together, should come up with a list of twelve or more words.
Still in pairs, they should prepare a short story which contains at least seven of the items in their list. After preparing their story, they should rehearse it before exchanging stories with another student / pair of students.
To extend this activity, the various stories can be ‘passed around’ the class in the manner of the game ‘Chinese Whispers’ (‘Broken Telephone’).
12 Associations: the sentence game
Prepare a list of approximately 25 items that you want the class to practise. Write these, in any order, on one side of the whiteboard.
Explain to the class that they are going to play a game. The object of the game is to score points by making grammatically correct sentences using the words on the board. If the students use just one of these words in a sentence, they will get one point. If they use two of the words, they’ll get two points. With three words, they’ll get three points. The more ambitious they are, the more points they can score. But if their sentence is incorrect, they will get no points and they will miss their turn. Tell the class that the sentences (1) must be grammatically correct, (2) must make logical sense, (3) must be single sentences. If there is a problem with a sentence, you, the teacher, will say that it is wrong, but you will not make a correction.
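If you want to keep score on a laptop rather than in your head, the point-counting part of the game can be sketched in a few lines. This is only a helper for tallying, under the assumption that the sentence has already passed your grammaticality check (no script can do that part for you); the word list is a placeholder.

```python
import re

def score_sentence(sentence, available_words):
    """Award one point per available target word used in the sentence,
    then remove those words from play. Grammatical correctness must
    still be judged by the teacher before calling this."""
    tokens = re.findall(r"[a-z']+", sentence.lower())
    used = [w for w in available_words if w.lower() in tokens]
    for w in used:
        available_words.remove(w)
    return len(used), used

words = ["ankle", "upset", "borrow"]  # placeholder target items
points, used = score_sentence("I was upset when I twisted my ankle.", words)
# points -> 2, used -> ['ankle', 'upset'], 'borrow' stays in play
```

Because used words are crossed off the list, a second sentence containing ‘ankle’ would score nothing for that word, just as in the board version.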
Put the class into groups of four students each. Give the groups some time to begin preparing sentences which contain one or more of the words from the list.
Ask a member of one group to come to the board and write one of the sentences their group has prepared. If it is an acceptable sentence, award points. Cross out the word(s) that have been used from the list on the board: these words can no longer be used. If the sentence is incorrect, explain that there is a problem and turn to a member of the next group. This person can either (1) write a new sentence that their group has prepared, or (2) try, with the help of the other members of their group, to correct a sentence that is on the board. If their correction is correct, they score all the points for that sentence. If their correction is incorrect, they score no points and their turn ends.
The game continues in this way with each group taking it in turns to make or correct sentences on the board.

(There are a number of comedy sketches about word associations. My favourite is this one. I’ve used it from time to time in presentations on this topic, but it has absolutely no pedagogical value … unlike the next autoplay suggestion that was made for me, which has no comedy value.)



A few years ago, I wrote a couple of posts about the sorts of things that teachers can do in classrooms to encourage the use of vocabulary apps and to deepen the learning of the target items. You can find these here and here. In this and a future post, I want to take this a little further. These activities will be useful and appropriate for any teacher wanting to recycle target vocabulary in the classroom.

The initial deliberate learning of vocabulary usually focuses on the study of word meanings (e.g. target items along with translations), but for these items to be absorbed into the learner’s active vocabulary store, learners will need opportunities to use them in meaningful ways. Classrooms can provide rich opportunities for this. However, before setting up activities that offer learners the chance to do this, teachers will need in some way to draw attention to the items that will be practised. The simplest way of doing this is to ask students to review, for a few minutes, the relevant word set in their vocabulary apps or the relevant section of the word list. Here are some more interesting alternatives.

The post after this will suggest a range of activities that promote communicative, meaningful use of the target items (after they have been ‘activated’ using one or more of the activities below).

1 Memory check

Ask the students to spend a few minutes reviewing the relevant word set in their vocabulary apps or the relevant section of the word list (up to about 20 items). Alternatively, project / write the target items on the board. After a minute or two, tell the students to stop looking at the target items. Clean the board, if necessary.

Tell students to work individually and write down all the items they can remember. Allow a minute or two. Then, put the students into pairs: tell them to (1) combine their lists, (2) check their spelling, (3) check that they can translate (or define) the items they have, and (4) add to the lists. After a few minutes, tell the pairs to compare their lists with the work of another pair. Finally, allow students to see the list of target items so they can see which words they forgot.

2 Simple dictation

Tell the class that they are going to do a simple dictation, and ask them to write the numbers 1 to X (depending on how many words you wish to recycle: about 15 is recommended) on a piece of paper or in their notebooks. Dictate the words. Tell the students to work with a partner and check (1) their spelling, and (2) that they can remember the meanings of these words. Allow the students to check their answers in the vocabulary app / a dictionary / their word list / their coursebook.

3 Missing vowels dictation

As above (‘Simple dictation’), but tell the students that they must only write the consonants of the dictated words. When comparing their answers with a partner, they must reinsert the missing vowels.
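If you want an answer key for yourself (or a handout for a self-access variant), stripping the vowels is a one-line transformation. A minimal sketch, assuming simple English spelling and treating ‘y’ as a consonant; the word list is a placeholder:

```python
def strip_vowels(word):
    """Remove the vowel letters, keeping only the consonant skeleton."""
    return "".join(c for c in word if c.lower() not in "aeiou")

words = ["banana", "strength", "queue"]  # placeholder items
skeletons = [strip_vowels(w) for w in words]
# banana -> bnn, strength -> strngth, queue -> q
```

Words like ‘queue’, which collapse to almost nothing, are worth spotting in advance: they make the reinsertion stage much harder (or funnier).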

4 Collocation dictation

As above (‘Simple dictation’), but instead of single words, dictate simple collocations (e.g. verb – complement combinations, adjective – noun pairings, adverb – adjective pairings). Students write down the collocations. When they compare their answers with a partner, they have an additional task: dictate the collocations again, this time identifying one word in each collocation for the students to underline. In pairs, students must then think of one or two different words that could collocate with the underlined word.

5 Simple translation dictation

As above (‘Simple dictation’), but tell the students that they must only write down the translation into their own language of the word (or phrase) that you have given them. Afterwards, when they are working with a partner, they must write down the English word. (This activity works well with multilingual groups – students do not need to speak the same language as their partner.)

6 Word count dictation

As above (‘Simple translation dictation’): when the students are doing the dictation, tell them that they must first silently count the number of letters in the English word and write down this number. They must also write down the translation into their own language. Afterwards, when they are working with a partner, they must write down the English word. As an addition / alternative, you can ask them to write down the first letter of the English word. (This activity works well with multilingual groups – students do not need to speak the same language as their partner.)

I first came across this activity in Morgan, J. & M. Rinvolucri (2004) Vocabulary 2nd edition. (Oxford: Oxford University Press).

7 Dictations with tables

Before dictating the target items, draw a simple table on the board of three or more columns. At the top of each column, write the different stress patterns of the words you will dictate. Explain to the students that they must write the words you dictate into the appropriate column.

[Image: an example table with stress patterns as column headings]

As an alternative to stress patterns, you could use different categories for the columns. Examples include: numbers of syllables, vowel sounds that feature in the target items, parts of speech, semantic fields, items that students are confident about / less confident about, etc.

8 Bilingual sentence dictation

Prepare a set of short sentences (eight maximum), each of which contains one of the words that you want to recycle. These sentences could be from a vocabulary exercise that the students have previously studied in their coursebooks or example sentences from vocab apps.

Tell the class that they are going to do a dictation. Tell them that you will read some sentences in English, but they must only write down translations into their own language of these sentences. Dictate the sentences, allowing ample time for students to write their translations. Put the students into pairs or small groups. Ask them to translate these sentences back into English. (This activity works well with multilingual groups – students do not need to speak the same language as their partner.) Conduct feedback with the whole class, or allow the students to check their answers with their apps / the coursebook.

From definitions (or translations) to words

An alternative to providing learners with the learning items and asking them to check the meanings is to get them to work towards the items from the meanings. There is a very wide variety of ways of doing this; a selection follows below.

9 Eliciting race

Prepare a list of words that you want to recycle. This list will need to be printed as a handout. You will need at least two copies of the handout, but for some variations of the game you will need more.

Divide the class into two teams. Get one student from each team to come to the front of the class and hand them the list of words. Explain that their task is to elicit from their team each of the words on the list. They must not say the word that they are trying to elicit. The first team to produce the target word wins a point, and everyone moves on to the next word.

The race can also be played with students working in pairs. One student has the list and elicits from their partner.

10 Eliciting race against the clock

As above (‘Eliciting race’), but the race is played ‘against the clock’. The teams have different lists of words (or the same lists but in a different order). Set a time limit. How many words can be elicited in, say, three minutes?

11 Mime eliciting race

As above (‘Eliciting race’), but you can ask the students who are doing the eliciting to do this silently, using mime and gesture only. A further alternative is to get students to do the eliciting by drawing pictures (as in the game of Pictionary).

12 The fly-swatting game

Write the items to be reviewed all over the board. Divide the class into two teams. Taking turns, one member of each team comes to the board. Each of the students at the board is given a fly-swatter (if this is not possible, they can use the palms of their hands). Choose one of the items and define it in some way. The students must find the word and swat it. The first person to do so wins a point for their team. You will probably want to introduce a rule where students are only allowed one swat: this means that if they swat the wrong word, the other player can take as much time as they like (and consult with their team members) before swatting a word.

13 Word grab

Prepare the target items on one or more sets of pieces of paper / card (one item per piece of paper). With a smallish class of about 8 students, one set is enough. With larger classes, prepare one set per group (of between 4 and 8 students). Students sit in a circle with the pieces of paper spread out on a table or on the floor in the middle. The teacher calls out the definitions and the students try to be the first person to grab the appropriate piece of paper.

As an alternative to this teacher-controlled version of the game, students can work in groups of three or four (more sets of pieces of paper will be needed). One student explains a word and the others compete to grab the right word. The student with the most words at the end is the ‘winner’. In order to cover a large number of items for recycling, each table can have a different set of words. Once a group of students has exhausted the words on their table, they can exchange tables with another group.

14 Word hold-up

The procedures above can be very loud and chaotic! For a calmer class, ensure that everyone (or every group) has a supply of blank pieces of paper. Do the eliciting yourself. The first student or team to hold up the correct answer on a piece of paper wins the point.

15 Original contexts

Find the words in the contexts in which they were originally presented (e.g. in the coursebook); write the sentences with gaps on the board (or prepare this for projection). First, students work with a partner to complete the gaps. Before checking that their answers are correct, insert the first letter of each missing word so students can check their own answers. If you wish, you may also add a second letter. Once the missing words have been checked, ask the students to try to find as many different alternatives (i.e. other words that will fit syntactically and semantically) as they can for the missing words they have just inserted.

Quick follow-up activities

16 Word grouping

Once the learning items for revision have been ‘activated’ using one of the activities above, you may wish to do a quick follow-up activity before moving on to more communicative practice. A simple task type is to ask students (in pairs, so that there is some discussion and sharing of ideas) to group the learning items in one or more ways. Here are a few suggestions for ways that students can be asked to group the words: (1) words they remembered easily / words they had forgotten; (2) words they like / dislike; (3) words they think will be useful to them / will not be useful to them; (4) words that remind them of a particular time or experience (or person) in their life; (5) words they would pack in their holiday bags / words they would put in the deep-freeze and forget about for the time being (thanks to Jeremy Harmer for this last idea).

I’m a sucker for meta-analyses, those aggregates of multiple studies that generate an effect size, and I am even fonder of meta-meta-analyses. I skip over the boring stuff about inclusion criteria and statistical procedures and zoom in on the results and discussion. I’ve pored over Hattie (2009) and, more recently, Dunlosky et al (2013), and quoted both more often than is probably healthy. Hardly surprising, then, that I was eager to read Luke Plonsky and Nicole Ziegler’s ‘The CALL–SLA interface: insights from a second-order synthesis’ (Plonsky & Ziegler, 2016), an analysis of nearly 30 meta-analyses (later whittled down to 14) looking at the impact of technology on L2 learning. The big question they were looking to find an answer to? How effective is computer-assisted language learning compared to face-to-face contexts?


Plonsky and Ziegler found that there are unequivocally ‘positive effects of technology on language learning’. In itself, this doesn’t really tell us anything, simply because there are too many variables. It’s a statistical soundbite, ripe for plucking by anyone with an edtech product to sell. Much more useful is to understand which technologies used in which ways are likely to have a positive effect on learning. It appears from Plonsky and Ziegler’s work that the use of CALL glosses (to develop reading comprehension and vocabulary development) provides the strongest evidence of technology’s positive impact on learning. The finding is reinforced by the fact that this particular technology was the most well-represented research area in the meta-analyses under review.

What we know about glosses

A gloss is ‘a brief definition or synonym, either in L1 or L2, which is provided with [a] text’ (Nation, 2013: 238). They can take many forms (e.g. annotations in the margin or at the foot of a printed page), but electronic or CALL glossing is ‘an instant look-up capability – dictionary or linked’ (Taylor, 2006; 2009) which is becoming increasingly standard in on-screen reading. One of the most widely used is probably the translation function in Microsoft Word: here’s the French gloss for the word ‘gloss’.

Language learning tools and programs are making increasing use of glosses. Here are two examples. The first is Lingro, a dictionary tool that learners can have running alongside any webpage: clicking on a word brings up a dictionary entry, and the word can then be exported into a wordlist which can be practised with spaced repetition software. The example here is using the English-English dictionary, but a number of bilingual pairings are available. The second is from Bliu Bliu, a language learning app that I unkindly reviewed here.


So, what did Plonsky and Ziegler discover about glosses? There were two key takeaways:

  • both L1 and L2 CALL glossing can be beneficial to learners’ vocabulary development (Taylor, 2006, 2009, 2013)
  • CALL / electronic glosses lead to more learning gains than paper-based glosses (p.22)

On the surface, this might seem uncontroversial, but if you take a good look at the three examples (above) of online glosses, you’ll be thinking that something is not quite right here. Lingro’s gloss is a fairly full dictionary entry: it contains too much information for the purpose of a gloss. Cognitive Load Theory suggests that ‘new information be provided concisely so as not to overwhelm the learner’ (Khezrlou et al, 2017: 106): working out which definition is relevant here (the appropriate definition is actually the sixth in this list) will overwhelm many learners and interfere with the process of reading … which the gloss is intended to facilitate. In addition, the language of the definitions is more difficult than the defined item. Cognitive load is, therefore, further increased. Lingro needs to use a decent learner’s dictionary (with a limited defining vocabulary), rather than relying on the free Wiktionary.

Nation (2013: 240) cites research which suggests that a gloss is most effective when it provides a ‘core meaning’ which users will have to adapt to what is in the text. This is relatively unproblematic, from a technological perspective, but few glossing tools actually do this. The alternative is to use NLP tools to identify the context-specific meaning: our ability to do this is improving all the time but remains some way short of total accuracy. At the very least, NLP tools are needed to identify part of speech (which will increase the probability of hitting the right meaning). Bliu Bliu gets things completely wrong, confusing the verb and the adjective ‘own’.

Both Lingro and Bliu Bliu fail to meet the first requirement of a gloss: ‘that it should be understood’ (Nation, 2013: 239). Neither is likely to contribute much to the vocabulary development of learners. We will need to modify Plonsky and Ziegler’s conclusions somewhat: they are contingent on the quality of the glosses. This is not, however, something that can be assumed … as will be clear from even the most cursory look at the language learning tools that are available.

Nation (2013: 447) also cites research that ‘learning is generally better if the meaning is written in the learner’s first language. This is probably because the meaning can be easily understood and the first language meaning already has many rich associations for the learner. Laufer and Shmueli (1997) found that L1 glosses are superior to L2 glosses in both short-term and long-term (five weeks) retention and irrespective of whether the words are learned in lists, sentences or texts’. Not everyone agrees, and a firm conclusion either way is probably not possible: learner variables (especially learner preferences) preclude anything conclusive, which is why I’ve highlighted Nation’s use of the word ‘generally’. If we have a look at Lingro’s bilingual gloss, I think you’ll agree that the monolingual and bilingual glosses are equally unhelpful, equally unlikely to lead to better learning, whether it’s vocabulary acquisition or reading comprehension.

 

The issues I’ve just discussed illustrate the complexity of the ‘glossing’ question, but they only scratch the surface. I’ll dig a little deeper.

1 Glosses are only likely to be of value to learning if they are used selectively. Nation (2013: 242) suggests that ‘it is best to assume that the highest density of glossing should be no more than 5% and preferably around 3% of the running words’. Online glosses make the process of look-up extremely easy. This is an obvious advantage over look-ups in a paper dictionary, but there is a real risk, too, that the ease of online look-up encourages unnecessary look-ups. More clicks do not always lead to more learning. The value of glosses cannot, therefore, be considered independently of the level (i.e. appropriacy) of the text they are being used with.
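Nation’s 3–5% figure translates into a quick budget check when preparing a glossed text. A minimal sketch, which simply counts running words as whitespace-separated tokens (a simplifying assumption):

```python
def gloss_budget(text):
    """Return (word_count, preferred, maximum) gloss counts for a text,
    based on Nation's (2013) suggestion of ~3% preferred, 5% maximum
    of the running words."""
    n = len(text.split())
    return n, round(n * 0.03), round(n * 0.05)

# A 600-word reading text allows roughly 18 glosses, 30 at the very most.
n, preferred, maximum = gloss_budget("word " * 600)
```

Anything much above that ceiling suggests the text itself is too difficult for the learners, and no amount of glossing will rescue it.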

2 A further advantage of online glosses is that they can offer a wide range of information, e.g. pronunciation, L1 translation, L2 definition, visuals, example sentences. The review of the literature by Khezrlou et al (2017: 107) suggests that ‘multimedia glosses can promote vocabulary learning but uncertainty remains as to whether they also facilitate reading comprehension’. Barcroft (2015), however, warns that pictures may help learners with meaning, but at the cost of retention of word form, and the research of Boers et al (2017) did not find evidence to support the use of pictures. Even if we were to accept the proposition that pictures might be helpful, we would need to bear two caveats in mind. First, the amount of multimodal support should not lead to cognitive overload. Second, pictures need to be clear and appropriate: a condition that is rarely met in online learning programs. The quality of multimodal glosses matters more than their inclusion or exclusion.

3 It’s a commonplace to state that learners will learn more if they are actively engaged or involved in the learning, rather than simply (receptively) looking up a gloss. It has been suggested, therefore, that cognitive engagement can be stimulated by turning the glosses into a multiple-choice task, and a fair amount of research has investigated this possibility. Barcroft (2015: 143) reports research that suggests that ‘multiple-choice glosses [are] more effective than single glosses’, but Nation (2013: 246) argues that ‘multiple choice glosses are not strongly supported by research’. Basically, we don’t know, and even if replication studies re-assess the benefits of multimodal glosses (as advocated by Boers et al, 2017), it is again likely that learner variables will make it impossible to reach a firm conclusion.

Learning from meta-analyses

Discussion of glosses is not new. Back in the late 19th century, ‘most of the Reform Movement teachers took the view that glossing was a sensible technique’ (Howatt, 2004: 191). Sensible, but probably not all that important in the broader scheme of language learning and teaching. Online glosses offer a number of potential advantages, but there is a huge number of variables that need to be considered if that potential is to be realised. In essence, I have been arguing that asking whether online glosses are more effective than print glosses is the wrong question. It’s not a question that can provide us with a useful answer. When you look at the details of the research brought together in the meta-analysis, you simply cannot conclude that technology has unequivocally positive effects on language learning, if the most positive effects are to be found in the digital variation of an old, sensible technique.

Interesting and useful as Plonsky and Ziegler’s study is, I think it needs to be treated with caution. More generally, we need to be cautious about using meta-analyses and effect sizes. Mura Nava has a useful summary of an article by Adrian Simpson (Simpson, 2017), which looks at inclusion criteria and statistical procedures and warns us that we cannot necessarily assume that the findings of meta-meta-analyses are educationally significant. More directly related to technology and language learning, Boulton’s paper (Boulton, 2016) makes a similar point: ‘Meta-analyses need interpreting with caution: in particular, it is tempting to seize on a single figure as the ultimate answer to the question: Does it work? […] More realistically, we need to look at variation in what works’.

For me, the greatest value of Plonsky and Ziegler’s paper had nothing to do with effect sizes and big answers to big questions. It was the bibliography … and the way it forced me to be rather more critical about meta-analyses.

References

Barcroft, J. 2015. Lexical Input Processing and Vocabulary Learning. Amsterdam: John Benjamins

Boers, F., Warren, P., He, L. & Deconinck, J. 2017. ‘Does adding pictures to glosses enhance vocabulary uptake from reading?’ System 66: 113 – 129

Boulton, A. 2016. ‘Quantifying CALL: significance, effect size and variation’ in S. Papadima-Sophocleus, L. Bradley & S. Thouësny (eds.) CALL Communities and Culture – short papers from Eurocall 2016 pp.55 – 60 http://files.eric.ed.gov/fulltext/ED572012.pdf

Dunlosky, J., Rawson, K.A., Marsh, E.J., Nathan, M.J. & Willingham, D.T. 2013. ‘Improving Students’ Learning With Effective Learning Techniques’ Psychological Science in the Public Interest 14 / 1: 4 – 58

Hattie, J.A.C. 2009. Visible Learning. Abingdon, Oxon.: Routledge

Howatt, A.P.R. 2004. A History of English Language Teaching 2nd edition. Oxford: Oxford University Press

Khezrlou, S., Ellis, R. & K. Sadeghi 2017. ‘Effects of computer-assisted glosses on EFL learners’ vocabulary acquisition and reading comprehension in three learning conditions’ System 65: 104 – 116

Laufer, B. & Shmueli, K. 1997. ‘Memorizing new words: Does teaching have anything to do with it?’ RELC Journal 28 / 1: 89 – 108

Nation, I.S.P. 2013. Learning Vocabulary in Another Language. Cambridge: Cambridge University Press

Plonsky, L. & Ziegler, N. 2016. ‘The CALL–SLA interface:  insights from a second-order synthesis’ Language Learning & Technology 20 / 2: 17 – 37

Simpson, A. 2017. ‘The misdirection of public policy: Comparing and combining standardised effect sizes’ Journal of Education Policy, 32 / 4: 450-466

Taylor, A. M. 2006. ‘The effects of CALL versus traditional L1 glosses on L2 reading comprehension’. CALICO Journal, 23, 309–318.

Taylor, A. M. 2009. ‘CALL-based versus paper-based glosses: Is there a difference in reading comprehension?’ CALICO Journal, 23, 147–160.

Taylor, A. M. 2013. ‘CALL versus paper: In which context are L1 glosses more effective?’ CALICO Journal, 30, 63-8

In my last post, I looked at the way that, in the absence of a clear, shared understanding of what ‘personalization’ means, it has come to be used as a slogan for the promoters of edtech. In this post, I want to look a little more closely at the constellation of meanings that are associated with the term, suggest a way of evaluating just how ‘personalized’ an instructional method might be, and look at recent research into ‘personalized learning’.

In English language teaching, ‘personalization’ often carries a rather different meaning than it does in broader educational discourse. Jeremy Harmer (Harmer, 2012: 276) defines it as ‘when students use language to talk about themselves and things which interest them’. Most commonly, this is in the context of ‘freer’ language practice of grammar or vocabulary of the following kind: ‘Complete the sentences so that they are true for you’. It is this meaning that Scott Thornbury refers to first in his entry for ‘Personalization’ in his ‘An A-Z of ELT’ (Thornbury, 2006: 160). He goes on, however, to expand his definition of the term to include humanistic approaches such as Community Language Learning / Counseling learning (CLL), where learners decide the content of a lesson, where they have agency. I imagine that no one would disagree that an approach such as this is more ‘personalized’ than a ‘complete-the-sentences-so-they-are-true-for-you’ exercise to practise the present perfect.

Outside of ELT, ‘personalization’ has been used to refer to everything ‘from customized interfaces to adaptive tutors, from student-centered classrooms to learning management systems’ (Bulger, 2016: 3). The graphic below (from Bulger, 2016: 3) illustrates just how wide the definitional reach of ‘personalization’ is.

[Pie chart of definitions of ‘personalization’ (Bulger, 2016: 3)]

As with Thornbury’s entry in his ‘An A-Z of ELT’, it seems uncontentious to say that some things are more ‘personalized’ than others.

Given the current and historical problems with defining the term, it’s not surprising that a number of people have attempted to develop frameworks that can help us to get to grips with the thorny question of ‘personalization’. In the context of language teaching / learning, Renée Disick (Disick, 1975: 58) offered the following categorisation:

[Disick’s (1975) categorisation]

In a similar vein, a few years later, Howard Altman (Altman, 1980) suggested that teaching activities can differ in four main ways: the time allocated for learning, the curricular goal, the mode of learning and instructional expectations (personalized goal setting). He then offered eight permutations of these variables (see below, Altman, 1980: 9), although many more are imaginable.

[Altman’s (1980) chart of eight permutations]

Altman and Disick were writing, of course, long before our current technology-oriented view of ‘personalization’ became commonplace. The recent classification of technologically-enabled personalized learning systems by Monica Bulger (see below, Bulger, 2016: 6) reflects how times have changed.

[Bulger’s (2016) five types of personalized learning system]

Bulger’s classification focusses on the technology more than the learning, but her continuum is very much in keeping with the views of Disick and Altman. Some approaches are more personalized than others.

The extent to which choices are offered determines the degree of individualization in a particular program. (Disick, 1975: 5)

It is important to remember that learner-centered language teaching is not a point, but rather a continuum. (Altman, 1980: 6)

Larry Cuban has also recently begun to use a continuum as a way of understanding the practices of ‘personalization’ that he observes as part of his research. The overall goals of schooling at both ends of the continuum are not dissimilar: helping ‘children grow into adults who are creative thinkers, help their communities, enter jobs and succeed in careers, and become thoughtful, mindful adults’.

[Cuban’s continuum of curricula]

As Cuban and others before him (e.g. Januszewski, 2001: 57) make clear, the two perspectives are not completely independent of each other. Nevertheless, we can see that one end of this continuum is likely to be materials-centred with the other learner-centred (Dickinson, 1987: 57). At one end, teachers (or their LMS replacements) are more likely to be content-providers and enact traditional roles. At the other, teachers’ roles are ‘more like those of coaches or facilitators’ (Cavanagh, 2014). In short, one end of the continuum is personalization for the learner; the other end is personalization by the learner.

It makes little sense, therefore, to talk about personalized learning as being a ‘good’ or a ‘bad’ thing. We might perceive one form of personalized learning to be more personalized than another, but that does not mean it is any ‘better’ or more effective. The only possible approach is to consider and evaluate the different elements of personalization in an attempt to establish, first, from a theoretical point of view, whether they are likely to lead to learning gains, and, second, from an evidence-based perspective, whether any learning gains are measurable. In recent posts on this blog, I have been attempting to do that with elements such as learning styles, self-pacing and goal-setting.

Unfortunately, but perhaps not surprisingly, none of the elements that we associate with ‘personalization’ will lead to clear, demonstrable learning gains. A report commissioned by the Gates Foundation (Pane et al, 2015) to find evidence of the efficacy of personalized learning did not, despite its subtitle (‘Promising Evidence on Personalized Learning’), manage to come up with any firm and unequivocal evidence (see Riley, 2017). ‘No single element of personalized learning was able to discriminate between the schools with the largest achievement effects and the others in the sample; however, we did identify groups of elements that, when present together, distinguished the success cases from others’, wrote the authors (Pane et al., 2015: 28). Undeterred, another report (Pane et al., 2017) was commissioned: in this the authors were unable to do better than a very hedged conclusion: ‘There is suggestive evidence that greater implementation of PL practices may be related to more positive effects on achievement; however, this finding requires confirmation through further research’ (my emphases). Don’t hold your breath!

In commissioning the reports, the Gates Foundation were probably asking the wrong question. The conceptual elasticity of the term ‘personalization’ makes its operationalization in any empirical study highly problematic. Meaningful comparison of empirical findings would, as David Hartley notes, be hard because ‘it is unlikely that any conceptual consistency would emerge across studies’ (Hartley, 2008: 378). The question of what works is unlikely to provide a useful (in the sense of actionable) response.

In a new white paper out this week, “A blueprint for breakthroughs,” Michael Horn and I argue that simply asking what works stops short of the real question at the heart of a truly personalized system: what works, for which students, in what circumstances? Without this level of specificity and understanding of contextual factors, we’ll be stuck understanding only what works on average despite aspirations to reach each individual student (not to mention mounting evidence that “average” itself is a flawed construct). Moreover, we’ll fail to unearth theories of why certain interventions work in certain circumstances. And without that theoretical underpinning, scaling personalized learning approaches with predictable quality will remain challenging. Otherwise, as more schools embrace personalized learning, at best each school will have to go at it alone and figure out by trial and error what works for each student. Worse still, if we don’t support better research, “personalized” schools could end up looking radically different but yielding similar results to our traditional system. In other words, we risk rushing ahead with promising structural changes inherent to personalized learning—reorganizing space, integrating technology tools, freeing up seat-time—without arming educators with reliable and specific information about how to personalize to their particular students or what to do, for which students, in what circumstances. (Freeland Fisher, 2016)

References

Altman, H.B. 1980. ‘Foreign language teaching: focus on the learner’ in Altman, H.B. & James, C.V. (eds.) 1980. Foreign Language Teaching: Meeting Individual Needs. Oxford: Pergamon Press, pp.1 – 16

Bulger, M. 2016. Personalized Learning: The Conversations We’re Not Having. New York: Data and Society Research Institute. https://www.datasociety.net/pubs/ecl/PersonalizedLearning_primer_2016.pdf

Cavanagh, S. 2014. ‘What Is ‘Personalized Learning’? Educators Seek Clarity’ Education Week http://www.edweek.org/ew/articles/2014/10/22/09pl-overview.h34.html

Dickinson, L. 1987. Self-instruction in Language Learning. Cambridge: Cambridge University Press

Disick, R.S. 1975 Individualizing Language Instruction: Strategies and Methods. New York: Harcourt Brace Jovanovich

Freeland Fisher, J. 2016. ‘The inconvenient truth about personalized learning’ [Blog post] retrieved from http://www.christenseninstitute.org/blog/the-inconvenient-truth-about-personalized-learning/ (May 4, 2016)

Harmer, J. 2012. Essential Teacher Knowledge. Harlow: Pearson Education

Hartley, D. 2008. ‘Education, Markets and the Pedagogy of Personalisation’ British Journal of Educational Studies 56 / 4: 365 – 381

Januszewski, A. 2001. Educational Technology: The Development of a Concept. Englewood, Colorado: Libraries Unlimited

Pane, J. F., Steiner, E. D., Baird, M. D. & Hamilton, L. S. 2015. Continued Progress: Promising Evidence on Personalized Learning. Seattle: Rand Corporation retrieved from http://www.rand.org/pubs/research_reports/RR1365.html

Pane, J.F., Steiner, E. D., Baird, M. D., Hamilton, L. S. & Pane, J.D. 2017. Informing Progress: Insights on Personalized Learning Implementation and Effects. Seattle: Rand Corporation retrieved from https://www.rand.org/pubs/research_reports/RR2042.html

Riley, B. 2017. ‘Personalization vs. How People Learn’ Educational Leadership 74 / 6: 68-73

Thornbury, S. 2006. An A – Z of ELT. Oxford: Macmillan Education


Like the mythical monster, the ancient Hydra organisation of Marvel Comics grows two more heads if one is cut off, becoming more powerful in the process. With the most advanced technology on the planet and with a particular focus on data gathering, Hydra operates through international corporations and highly-placed individuals in national governments.
Personalized learning has also been around for centuries. Its present incarnation can be traced to the individualized instructional programmes of the late 19th century, which ‘focused on delivering specific subject matter […] based on the principles of scientific management. The intent was to solve the practical problems of the classroom by reducing waste and increasing efficiency, effectiveness, and cost containment in education’ (Januszewski, 2001: 58). Since then, personalized learning has gone by many different names, including differentiated instruction, individualized instruction, individually guided education, programmed instruction, personalized instruction, and individually prescribed instruction.
Disambiguating the terms has never been easy. In the world of language learning / teaching, it was observed back in the early 1970s ‘that there is little agreement on the description and definition of individualized foreign language instruction’ (Garfinkel, 1971: 379). The point was echoed a few years later by Grittner (1975: 323): it ‘means so many things to so many different people’. A UNESCO document (Chaix & O’Neil, 1978: 6) complained that ‘the term ‘individualization’ and the many expressions using the same root, such as ‘individualized learning’, are much too ambiguous’. Zoom forward to the present day and nothing has changed. Critiquing the British government’s focus on personalized learning, the Institute for Public Policy Research (Johnson, 2004: 17) wrote that it ‘remains difficult to be certain what the Government means by personalised learning’. In the U.S. context, a piece by Sean Cavanagh (2014) in Education Week (which is financially supported by the Gates Foundation) noted that although ‘the term “personalized learning” seems to be everywhere, there is not yet a shared understanding of what it means’. In short, as Arthur Levine has put it, the words personalized learning ‘generate more heat than light’.
Despite the lack of clarity about what precisely personalized learning actually is, it has been in the limelight of language teaching and learning since before the 1930s, when Pendleton (1930: 195) described the idea as being more widespread than ever before. Zoom forward to the 1970s and we find it described as ‘one of the major movements in second-language education at the present time’ (Chastain, 1975: 334). In 1971, it was described as ‘a bandwagon onto which foreign language teachers at all levels are jumping’ (Altman & Politzer, 1971: 6). A little later, in the 1980s, ‘words or phrases such as ‘learner-centered’, ‘student-centered’, ‘personalized’, ‘individualized’, and ‘humanized’’ appeared as the most frequent modifiers of ‘instruction’ in journals and conferences of foreign language education (Altman & James, 1980). Continue to the present day, and we find that personalized learning is at the centre of the educational policies of governments across the world. Between 2012 and 2015, the U.S. Department of Education threw over half a billion dollars at personalized learning initiatives (Bulger, 2016: 22). At the same time, there is massive sponsorship of personalized learning from some of the biggest philanthropic foundations (the William and Flora Hewlett Foundation, the Rogers Family Foundation, the Susan and Michael Dell Foundation, and the Eli and Edythe Broad Foundation) (Bulger, 2016: 22). The Bill & Melinda Gates Foundation has invested nearly $175 million in personalized learning development and Facebook’s Mark Zuckerberg is ploughing billions of dollars into it.
There has, however, been one constant: the belief that technology can facilitate the process of personalization (whatever that might be). Technology appears to offer the potential to realise the goal of personalized learning. We have come a long way from Sydney Pressey’s attempts in the 1920s to use teaching machines to individualize instruction. At that time, the machines were just one part of the programme (and not the most important part). But each new technology has offered a new range of possibilities to be exploited, and each new technology, its advocates argue, ‘will solve the problems better than previous efforts’ (Ferster, 2014: xii). With the advent of data-capturing learning technologies, it has now become virtually impossible to separate advocacy of personalized instruction from advocacy of digitalization in education. As the British Department for Education has put it, ‘central to personalised learning is schools’ use of data’ (DfES, 2005, White Paper: Higher Standards, Better Schools for All, para 4.50). When the U.S. Department of Education threw half a billion dollars at personalized learning initiatives, the condition was that these projects ‘use collaborative, data-based strategies and 21st century tools to deliver instruction’ (Bulger, 2016: 22).
Is it just a coincidence that the primary advocates of personalized learning are either vendors of technology or are very close to them in the higher echelons of Hydra (World Economic Forum, World Bank, IMF, etc.)? ‘Personalized learning’ has ‘almost no descriptive value’: it is ‘a term that sounds good without the inconvenience of having any obviously specific pedagogical meaning’ (Feldstein & Hill, 2016: 30). It evokes positive responses, with its ‘nod towards more student-centered learning […], a move that honors the person learning not just the learning institution’ (Watters, 2014). As such, it is ‘a natural for marketing purposes’ since nobody in their right mind would want unpersonalized or depersonalized learning (Feldstein & Hill, 2016: 25). It’s ‘a slogan that nobody’s going to be against, and everybody’s going to be for. Nobody knows what it means, because it doesn’t mean anything. Its crucial value is that it diverts your attention from a question that does mean something: Do you support our policy?’ (Chomsky, 1997).
None of the above is intended to suggest that there might not be goals that come under the ‘personalized learning’ umbrella that are worth working towards. But that’s another story – one I will return to in another post. For the moment, it’s just worth remembering that, in one of the Marvel Comics stories, Captain America, who appeared to be fighting the depersonalized evils of the world, was actually a deep sleeper agent for Hydra.

References
Altman, H.B. & James, C.V. (eds.) 1980. Foreign Language Teaching: Meeting Individual Needs. Oxford: Pergamon Press
Altman, H.B. & Politzer, R.L. (eds.) 1971. Individualizing Foreign Language Instruction: Proceedings of the Stanford Conference, May 6 – 8, 1971. Washington, D.C.: Office of Education, U.S. Department of Health, Education, and Welfare
Bulger, M. 2016. Personalized Learning: The Conversations We’re Not Having. New York: Data and Society Research Institute.
Cavanagh, S. 2014. ‘What Is ‘Personalized Learning’? Educators Seek Clarity’ Education Week
Chaix, P., & O’Neil, C. 1978. A Critical Analysis of Forms of Autonomous Learning (Autodidaxy and Semi-autonomy in the Field of Foreign Language Learning. Final Report. UNESCO Doc Ed 78/WS/58
Chastain, K. 1975. ‘An Examination of the Basic Assumptions of “Individualized” Instruction’ The Modern Language Journal 59 / 7: 334 – 344
Chomsky, N. 1997. Media Control: The Spectacular Achievements of Propaganda. New York: Seven Stories Press
Feldstein, M. & Hill, P. 2016. ‘Personalized Learning: What it Really is and why it Really Matters’ EduCause Review March / April 2016: 25 – 35
Ferster, B. 2014. Teaching Machines. Baltimore: Johns Hopkins University Press
Garfinkel, A. 1971. ‘Stanford University Conference on Individualizing Foreign Language Instruction, May 6-8, 1971.’ The Modern Language Journal Vol. 55, No. 6 (Oct., 1971), pp. 378-381
Grittner, F. M. 1975. ‘Individualized Instruction: An Historical Perspective’ The Modern Language Journal 59 / 7: 323 – 333
Januszewski, A. 2001. Educational Technology: The Development of a Concept. Englewood, Colorado: Libraries Unlimited
Johnson, M. 2004. Personalised Learning – an Emperor’s Outfit? London: Institute for Public Policy Research
Pendleton, C. S. 1930. ‘Personalizing English Teaching’ Peabody Journal of Education 7 / 4: 195 – 200
Watters, A. 2014. ‘The Problem with “Personalization”’ Hack Education

Introduction

In the last post, I looked at issues concerning self-pacing in personalized language learning programmes. This time, I turn to personalized goal-setting. Most definitions of personalized learning, such as that offered by Next Generation Learning Challenges http://nextgenlearning.org/ (a non-profit supported by Educause, the Gates Foundation, the Broad Foundation, the Hewlett Foundation, among others), argue that ‘the default perspective [should be] the student’s—not the curriculum, or the teacher, and that schools need to adjust to accommodate not only students’ academic strengths and weaknesses, but also their interests, and what motivates them to succeed.’ It’s a perspective shared by the United States National Education Technology Plan 2017 https://tech.ed.gov/netp/ , which promotes the idea that learning objectives should vary based on learner needs, and should often be self-initiated. It’s shared by the massively funded Facebook initiative that is developing software that ‘puts students in charge of their lesson plans’, as the New York Times https://www.nytimes.com/2016/08/10/technology/facebook-helps-develop-software-that-puts-students-in-charge-of-their-lesson-plans.html?_r=0 put it. How, precisely, personalized goal-setting can be squared with standardized, high-stakes testing is less than clear. Are they incompatible by any chance?

In language learning, the idea that learners should have some say in what they are learning is not new, going back, at least, to the humanistic turn in the 1970s. Wilga Rivers advocated ‘giving the students opportunity to choose what they want to learn’ (Rivers, 1971: 165). A few years later, Renée Disick argued that the extent to which a learning programme can be called personalized (although she used the term ‘individualized’) depends on the extent to which learners have a say in the choice of learning objectives and the content of learning (Disick, 1975). Coming more up to date, Penny Ur advocated giving learners ‘a measure of freedom to choose how and what to learn’ (Ur, 1996: 233).

The benefits of personalized goal-setting

Personalized goal-setting is closely related to learner autonomy and learner agency. Indeed, it is hard to imagine any meaningful sense of learner autonomy or agency without some control of learning objectives. Without this control, it will be harder for learners to develop an L2 self. This matters because ‘ultimate attainment in second-language learning relies on one’s agency … [it] is crucial at the point where the individuals must not just start memorizing a dozen new words and expressions but have to decide on whether to initiate a long, painful, inexhaustive, and, for some, never-ending process of self-translation’ (Pavlenko & Lantolf, 2000: 169 – 170). Put bluntly, if learners ‘have some responsibility for their own learning, they are more likely to be engaged than if they are just doing what the teacher tells them to’ (Harmer, 2012: 90). A degree of autonomy should lead to increased motivation which, in turn, should lead to increased achievement (Dickinson, 1987: 32; Cordova & Lepper, 1996: 726).

Strong evidence for these claims is not easy to provide, not least since autonomy and agency cannot be measured. However, ‘negative evidence clearly shows that a lack of agency can stifle learning by denying learners control over aspects of the language-learning process’ (Vandergriff, 2016: 91). Most language teachers (especially in compulsory education) have witnessed the negative effects that a lack of agency can generate in some students. Irrespective of the extent to which students are allowed to influence learning objectives, the desirability of agency / autonomy appears to be ‘deeply embedded in the professional consciousness of the ELT community’ (Borg and Al-Busaidi, 2012; Benson, 2016: 341). Personalized goal-setting may not, for a host of reasons, be possible in a particular learning / teaching context, but in principle it would seem to be a ‘good thing’.

Goal-setting and technology

The idea that learners might learn more and better if allowed to set their own learning objectives is hardly new, dating back at least one hundred years to the establishment of Montessori’s first Casa dei Bambini. In language teaching, the interest in personalized learning that developed in the 1970s (see my previous post) led to numerous classroom experiments in personalized goal-setting. These did not result in lasting changes, not least because the workload of teachers became ‘overwhelming’ (Disick, 1975: 128).

Closely related was the establishment of ‘self-access centres’. It was clear to anyone, like myself, who was involved in the setting-up and maintenance of a self-access centre, that they cost a lot, in terms of both money and work (Ur, 2012: 236). But there were also nagging questions about how effective they were (Morrison, 2005). Even more problematic was a bigger question: did they actually promote the learner autonomy that was their main goal?

Post-2000, online technology rendered self-access centres redundant: who needs the ‘walled garden’ of a self-access centre when ‘learners are able to connect with multiple resources and communities via the World Wide Web in entirely individual ways’ (Reinders, 2012)? The cost problem of self-access centres was solved by the web. Readily available now were ‘myriad digital devices, software, and learning platforms offering educators a once-unimaginable array of options for tailoring lessons to students’ needs’ (Cavanagh, 2014). Not only that … online technology promised to grant agency, to ‘empower language learners to take charge of their own learning’ and ‘to provide opportunities for learners to develop their L2 voice’ (Vandergriff, 2016: 32). The dream of personalized learning has become inseparable from the affordances of educational technologies.

It is, however, striking just how few online modes of language learning offer any degree of personalized goal-setting. Take a look at some of the big providers – Voxy, Busuu, Duolingo, Rosetta Stone or Babbel, for example – and you will find only the most token nods to personalized learning objectives. Course providers appear to be more interested in claiming their products are personalized (‘You decide what you want to learn and when!’) than in developing a sufficient amount of content to permit personalized goal-setting. We are left with the ELT equivalent of personalized cans of Coke: a marketing tool.


The problems with personalized goal-setting

Would language learning products, such as those mentioned above, be measurably any better if they did facilitate the personalization of learning objectives in a significant way? Would they be able to promote learner autonomy and agency in a way that self-access centres apparently failed to achieve? It’s time to consider the scare quotes that I put around ‘good thing’.

Researchers have identified a number of potential problems with goal-setting. I have already mentioned the problem of reconciling personalized goals and standardized testing. In most learning contexts, educational authorities (usually the state) regulate the curriculum and determine assessment practices. It is difficult to see, as Campbell et al. (Campbell et al., 2007: 138) point out, how such regulation ‘could allow individual interpretations of the goals and values of education’. Most assessment systems ‘aim at convergent outcomes and homogeneity’ (Benson, 2016: 345) and this is especially true of online platforms, irrespective of their claims to ‘personalization’. In weak (typically internal) assessment systems, the potential for autonomy is strongest, but these are rare.

In all contexts, it is likely that personalized goal-setting will only lead to learning gains when a number of conditions are met. The goals that are chosen need to be specific, measurable, challenging and non-conflicting (Ordóñez et al. 2009: 2-3). They need to be realistic: if not, it is unlikely that self-efficacy (a person’s belief about their own capability to achieve or perform to a certain level) will be promoted (Koda-Dallow & Hobbs, 2005), and without self-efficacy, improved performance is also unlikely (Bandura, 1997). The problem is that many learners lack self-efficacy and are poor self-regulators. These things are teachable / learnable, but require time and support. Many learners need help in ‘becoming aware of themselves and their own understandings’ (McMahon & Oliver, 2001: 1304). If they do not get it, the potential advantages of personalized goal-setting will be negated. As learners become better self-regulators, they will want and need to redefine their learning goals: goal-setting should be an iterative process (Hussey & Smith, 2003: 358). Again, support will be needed. In online learning, such support is not common.
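The conditions discussed above amount to a checklist. As a purely illustrative sketch (the boolean encoding is my own simplification, not anything proposed by Ordóñez et al.), they might be expressed as:

```python
# Illustrative only: the goal-setting conditions discussed above
# (specific, measurable, challenging, realistic, non-conflicting)
# treated as a simple checklist. Real learning goals are not,
# of course, booleans.
from dataclasses import dataclass

@dataclass
class LearningGoal:
    specific: bool
    measurable: bool
    challenging: bool
    realistic: bool
    conflicts_with_other_goals: bool

def likely_to_help(goal: LearningGoal) -> bool:
    """All conditions must hold for goal-setting to plausibly aid learning."""
    return (goal.specific and goal.measurable and goal.challenging
            and goal.realistic and not goal.conflicts_with_other_goals)
```

The point the sketch makes is conjunctive: failing any single condition (an unrealistic goal, say) is enough to undermine the whole enterprise.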

A further problem that has been identified is that goal-setting can discourage a focus on non-goal areas (Ordóñez et al. 2009: 2) and can lead to ‘a focus on reaching the goal rather than on acquiring the skills required to reach it’ (Locke & Latham, 2006: 266). We know that much language learning is messy and incidental. Students do not only learn the particular thing that they are studying at the time (the belief that they do was described by Dewey as ‘the greatest of all pedagogical fallacies’). Goal-setting, even when personalized, runs the risk of promoting tunnel-vision.

The incorporation of personalized goal-setting in online language learning programmes is, in so many ways, a far from straightforward matter. Simply tacking it onto existing programmes is unlikely to result in anything positive: it is not an ‘over-the-counter treatment for motivation’ (Ordóñez et al., 2009: 2). Course developers will need to look at ‘the complex interplay between goal-setting and organizational contexts’ (Ordóñez et al., 2009: 16). Motivating students is not simply ‘a matter of the teacher deploying the correct strategies […] it is an intensely interactive process’ (Lamb, 2017). More generally, developers need to move away from a positivist and linear view of learning as a technical process where teaching interventions (such as the incorporation of goal-setting, the deployment of gamification elements or the use of a particular algorithm) will lead to predictable student outcomes. As Larry Cuban reminds us, ‘no persuasive body of evidence exists yet to confirm that belief’ (Cuban, 1986: 88). The most recent research into personalized learning has failed to identify any single element of personalization that can be clearly correlated with improved outcomes (Pane et al., 2015: 28).

In previous posts, I considered learning styles and self-pacing, two aspects of personalized learning that are highly problematic. Personalized goal-setting is no less so.

References

Bandura, A. 1997. Self-efficacy: The exercise of control. New York: W.H. Freeman and Company

Benson, P. 2016. ‘Learner Autonomy’ in Hall, G. (ed.) The Routledge Handbook of English Language Teaching. Abingdon: Routledge. pp.339 – 352

Borg, S. & Al-Busaidi, S. 2012. ‘Teachers’ beliefs and practices regarding learner autonomy’ ELT Journal 66 / 3: 283 – 292

Cavanagh, S. 2014. ‘What Is ‘Personalized Learning’? Educators Seek Clarity’ Education Week http://www.edweek.org/ew/articles/2014/10/22/09pl-overview.h34.html

Cordova, D. I. & Lepper, M. R. 1996. ‘Intrinsic Motivation and the Process of Learning: Beneficial Effects of Contextualization, Personalization, and Choice’ Journal of Educational Psychology 88 / 4: 715 -739

Cuban, L. 1986. Teachers and Machines. New York: Teachers College Press

Dickinson, L. 1987. Self-instruction in Language Learning. Cambridge: Cambridge University Press

Disick, R.S. 1975 Individualizing Language Instruction: Strategies and Methods. New York: Harcourt Brace Jovanovich

Harmer, J. 2012. Essential Teacher Knowledge. Harlow: Pearson Education

Hussey, T. & Smith, P. 2003. ‘The Uses of Learning Outcomes’ Teaching in Higher Education 8 / 3: 357 – 368

Lamb, M. 2017 (in press) ‘The motivational dimension of language teaching’ Language Teaching 50 / 3

Locke, E. A. & Latham, G. P. 2006. ‘New Directions in Goal-Setting Theory’ Current Directions in Psychological Science 15 / 5: 265 – 268

McMahon, M. & Oliver, R. 2001. ‘Promoting self-regulated learning in an on-line environment’ in C. Montgomerie & J. Viteli (eds.), Proceedings of World Conference on Educational Multimedia, Hypermedia and Telecommunications 2001 (pp. 1299-1305). Chesapeake, VA: AACE

Morrison, B. 2005. ‘Evaluating learning gain in a self-access learning centre’ Language Teaching Research 9 / 3: 267 – 293

Ordóñez, L. D., Schweitzer, M. E., Galinsky, A. D. & Bazerman, M. H. 2009. Goals Gone Wild: The Systematic Side Effects of Over-Prescribing Goal Setting. Harvard Business School Working Paper 09-083

Pane, J. F., Steiner, E. D., Baird, M. D. & Hamilton, L. S. 2015. Continued Progress: Promising Evidence on Personalized Learning. Seattle: Rand Corporation

Pavlenko, A. & Lantolf, J. P. 2000. ‘Second language learning as participation and the (re)construction of selves’ In J.P. Lantolf (ed.), Sociocultural Theory and Second Language Learning. Oxford: Oxford University Press, pp. 155 – 177

Reinders, H. 2012. ‘The end of self-access? From walled garden to public park’ ELT World Online 4: 1 – 5

Rivers, W. M. 1971. ‘Techniques for Developing Proficiency in the Spoken Language in an Individualized Foreign Language program’ in Altman, H.B. & Politzer, R.L. (eds.) 1971. Individualizing Foreign Language Instruction: Proceedings of the Stanford Conference, May 6 – 8, 1971. Washington, D.C.: Office of Education, U.S. Department of Health, Education, and Welfare. pp. 165 – 169

Ur, P. 1996. A Course in Language Teaching: Practice and Theory. Cambridge: Cambridge University Press

Ur, P. 2012. A Course in English Language Teaching. Cambridge: Cambridge University Press

Vandergriff, I. 2016. Second-language Discourse in the Digital World. Amsterdam: John Benjamins

Introduction

Allowing learners to determine the amount of time they spend studying, and, therefore (in theory at least) the speed of their progress is a key feature of most personalized learning programs. In cases where learners follow a linear path of pre-determined learning items, it is often the only element of personalization that the programs offer. In the Duolingo program that I am using, there are basically only two things that can be personalized: the amount of time I spend studying each day, and the possibility of jumping a number of learning items by ‘testing out’.

Self-pacing, or self-regulated learning as it is commonly called, has enormous intuitive appeal. It is clear that different people learn different things at different rates. We’ve known for a long time that ‘the developmental stages of child growth and the individual differences among learners make it impossible to impose a single and ‘correct’ sequence on all curricula’ (Stern, 1983: 439). It therefore makes little sense for a group of students (typically grouped by age) to be obliged to follow the same curriculum at the same pace in a one-size-fits-all approach. We have probably all experienced, as students, the frustration of being behind, or ahead of, the rest of our classmates. One student who suffered from the lockstep approach was Sal Khan, founder of the Khan Academy. He has described how he was fed up with having to follow an educational path dictated by his age and how, as a result, individual pacing became an important element in his educational approach (Ferster, 2014: 132-133). As teachers, we have all experienced the challenge of teaching material that is too hard or too easy for many of the students in the class.

Historical attempts to facilitate self-paced learning

An interest in self-paced learning can be traced back to the growth of mass schooling and age-graded classes in the 19th century. In fact, the ‘factory model’ of education has never existed without critics who saw the inherent problems of imposing uniformity on groups of individuals. These critics were not marginal characters. Charles Eliot (president of Harvard from 1869 – 1909), for example, described uniformity as ‘the curse of American schools’ and argued that ‘the process of instructing students in large groups is a quite sufficient school evil without clinging to its twin evil, an inflexible program of studies’ (Grittner, 1975: 324).

Attempts to develop practical solutions were not uncommon and these are reasonably well-documented. One of the earliest, which ran from 1884 to 1894, was launched in Pueblo, Colorado and was ‘a self-paced plan that required each student to complete a sequence of lessons on an individual basis’ (Januszewski, 2001: 58-59). More ambitious was the Burk Plan (at its peak between 1912 and 1915), named after Frederick Burk of the San Francisco State Normal School, which aimed to allow students to progress through materials (including language instruction materials) at their own pace with only a limited number of teacher presentations (Januszewski, ibid.). Then, there was the Winnetka Plan (1920s), developed by Carlton Washburne, an associate of Frederick Burk and the superintendent of public schools in Winnetka, Illinois, which ‘allowed learners to proceed at different rates, but also recognised that learners proceed at different rates in different subjects’ (Saettler, 1990: 65). The Winnetka Plan is especially interesting in the way it presaged contemporary attempts to facilitate individualized, self-paced learning. It was described by its developers in the following terms:

A general technique [consisting] of (a) breaking up the common essentials curriculum into very definite units of achievement, (b) using complete diagnostic tests to determine whether a child has mastered each of these units, and, if not, just where his difficulties lie and, (c) the full use of self-instructive, self corrective practice materials. (Washburne, C., Vogel, M. & W.S. Gray. 1926. A Survey of the Winnetka Public Schools. Bloomington, IL: Public School Press)

Not dissimilar was the Dalton (Massachusetts) Plan in the 1920s which also used a self-paced program to accommodate the different ability levels of the children and deployed contractual agreements between students and teachers (something that remains common educational practice around the world). There were many others, both in the U.S. and other parts of the world.

The personalization of learning through self-pacing was not, therefore, a minor interest. Between 1910 and 1924, nearly 500 articles can be documented on the subject of individualization (Grittner, 1975: 328). In just three years (1929 – 1932) of one publication, The Education Digest, there were fifty-one articles dealing with individual instruction and sixty-three entries treating individual differences (Chastain, 1975: 334). Foreign language teaching did not feature significantly in these early attempts to facilitate self-pacing, but see the Burk Plan described above. Only a handful of references to language learning and self-pacing appeared in articles between 1916 and 1924 (Grittner, 1975: 328).

Disappointingly, none of these initiatives lasted long. Both costs and management issues had been significantly underestimated. Plans such as those described above were seen as progress, but not the hoped-for solution. Problems included the fact that the materials themselves were not individualized and instructional methods were too rigid (Pendleton, 1930: 199). However, concomitant with the interest in individualization (mostly, self-pacing), came the advent of educational technology.

Sidney L. Pressey, the inventor of what was arguably the first teaching machine, was inspired by his experiences with schoolchildren in rural Indiana in the 1920s, where he ‘was struck by the tremendous variation in their academic abilities and how they were forced to progress together at a slow, lockstep pace that did not serve all students well’ (Ferster, 2014: 52). Although Pressey failed in his attempts to promote his teaching machines, he laid the foundation stones in the synthesizing of individualization and technology.

Pressey may be seen as the direct precursor of programmed instruction, now closely associated with B. F. Skinner (see my post on Behaviourism and Adaptive Learning). It is a quintessentially self-paced approach and is described by John Hattie as follows:

Programmed instruction is a teaching method of presenting new subject matter to students in graded sequence of controlled steps. A book version, for example, presents a problem or issue, then, depending on the student’s answer to a question about the material, the student chooses from optional answers which refers them to particular pages of the book to find out why they were correct or incorrect – and then proceed to the next part of the problem or issue. (Hattie, 2009: 231)
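The branching logic Hattie describes can be sketched as a simple lookup structure. This is a hypothetical illustration of the general technique, not a reconstruction of any actual programmed course; the frames, prompts and answer keys below are invented:

```python
# A minimal sketch of branching programmed instruction, as described in
# the Hattie quotation above: each "frame" presents material, and the
# learner's answer determines which frame (page) comes next. A wrong
# answer branches to a remediation frame that explains the error and
# loops back.
FRAMES = {
    "q1": {
        "prompt": "Choose the past tense of 'go': (a) goed (b) went",
        "answers": {"a": "remediate1", "b": "q2"},
    },
    "remediate1": {
        "prompt": "'Go' is irregular: its past tense is 'went'. Type 'ok' to retry.",
        "answers": {"ok": "q1"},
    },
    "q2": {
        "prompt": "End of sequence.",
        "answers": {},
    },
}

def next_frame(current: str, answer: str) -> str:
    """Return the next frame id for the learner's answer; stay put on invalid input."""
    return FRAMES[current]["answers"].get(answer, current)
```

The rigidity of the approach is visible even in this toy: every possible answer, and every path between frames, has to be specified in advance.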

Programmed instruction was mostly used for the teaching of mathematics, but it is estimated that 4% of programmed instruction programs were for foreign languages (Saettler, 1990: 297). It flourished in the 1960s and 1970s, but even by 1968 foreign language instructors were sceptical (Valdman, 1968). A survey carried out by the Center for Applied Linguistics revealed that, at the time, only about 10% of foreign language teachers at college and university reported the use of programmed materials in their departments (Valdman, 1968: 1).

Research studies had failed to demonstrate the effectiveness of programmed instruction (Saettler, 1990: 303). Teachers were often resistant and students were often bored, finding ‘ingenious ways to circumvent the program, including the destruction of their teaching machines!’ (Saettler, ibid.).

In the case of language learning, there were other problems. For programmed instruction to have any chance of working, it was necessary to specify rigorously the initial and terminal behaviours of the learner so that the intermediate steps leading from the former to the latter could be programmed. As Valdman (1968: 4) pointed out, this is highly problematic when it comes to languages (a point that I have made repeatedly in this blog). In addition, students missed the personal interaction that conventional instruction offered, got bored and lacked motivation (Valdman, 1968: 10).

Programmed instruction worked best when teachers were very enthusiastic, but perhaps the most significant lesson to be learned from the experiments was that it was ‘a difficult, time-consuming task to introduce programmed instruction’ (Saettler, 1990: 299). It entailed changes to well-established practices and attitudes, and for such changes to succeed there must be consideration of the social, political, and economic contexts. As Saettler (1990: 306) notes, ‘without the support of the community and the entire teaching staff, sustained innovation is unlikely’. In this light, Hattie’s research finding that ‘when comparisons are made between many methods, programmed instruction often comes near the bottom’ (Hattie, 2009: 231) comes as no great surprise.

Just as programmed instruction was in its death throes, the world of language teaching discovered individualization. Launched as a deliberate movement in the early 1970s at the Stanford Conference (Altman & Politzer, 1971), it was a ‘systematic attempt to allow for individual differences in language learning’ (Stern, 1983: 387). Inspired, in part, by the work of Carl Rogers, this ‘humanistic turn’ was a recognition that ‘each learner is unique in personality, abilities, and needs. Education must be personalized to fit the individual; the individual must not be dehumanized in order to meet the needs of an impersonal school system’ (Disick, 1975:38). In ELT, this movement found many adherents and remains extremely influential to this day.

In language teaching more generally, the movement lost impetus after a few years, ‘probably because its advocates had underestimated the magnitude of the task they had set themselves in trying to match individual learner characteristics with appropriate teaching techniques’ (Stern, 1983: 387). What precisely was meant by individualization was never adequately defined or agreed (a problem that remains to the present time). What was left was self-pacing. In 1975, it was reported that ‘to date the majority of the programs in second-language education have been characterized by a self-pacing format […]. Practice seems to indicate that ‘individualized’ instruction is being defined in the class room as students studying individually’ (Chastain, 1975: 344).

Lessons to be learned

This brief account shows that historical attempts to facilitate self-pacing have largely been characterised by failure. The starting point of all these attempts remains as valid as ever, but it is clear that practical solutions are less than simple. To avoid the insanity of doing the same thing over and over again and expecting different results, we should perhaps try to learn from the past.

One of the greatest challenges that teachers face is dealing with different levels of ability in their classes. In any blended scenario where the online component has an element of self-pacing, the challenge will be magnified, as ability differentials are likely to grow rather than decrease as a result of the self-pacing. Bart Simpson hit the nail on the head in a memorable line: ‘Let me get this straight. We’re behind the rest of the class and we’re going to catch up to them by going slower than they are? Coo coo!’ Self-pacing runs into immediate difficulties when it comes up against standardised tests and national or state curriculum requirements. As Ferster observes, ‘the notion of individual pacing [remains] antithetical to […] a graded classroom system, which has been the model of schools for the past century. Schools are just not equipped to deal with students who do not learn in age-processed groups, even if this system is clearly one that consistently fails its students’ (Ferster, 2014: 90-91).

Ability differences are less problematic if the teacher focusses primarily on communicative tasks in F2F time (as opposed to more teaching of language items), but this is a big ‘if’. Many teachers are unsure of how to move towards a more communicative style of teaching, not least in large classes in compulsory schooling. Since there are strong arguments that students would benefit from a more communicative, less transmission-oriented approach anyway, it makes sense to focus institutional resources on equipping teachers with the necessary skills, as well as providing support, before a shift to a blended, more self-paced approach is implemented.

Such issues are less important in private institutions, which are not age-graded, and in self-study contexts. However, even here there may be reasons to proceed cautiously before buying into self-paced approaches. Self-pacing is closely tied to autonomous goal-setting (which I will look at in more detail in another post). Both require a degree of self-awareness at a cognitive and emotional level (McMahon & Oliver, 2001), but not all students have such self-awareness (Magill, 2008). If students do not have the appropriate self-regulatory strategies and are simply left to pace themselves, there is a chance that they will ‘misregulate their learning, exerting control in a misguided or counterproductive fashion and not achieving the desired result’ (Kirschner & van Merriënboer, 2013: 177). Before launching students on a path of self-paced language study, ‘thought needs to be given to the process involved in users becoming aware of themselves and their own understandings’ (McMahon & Oliver, 2001: 1304). Without training and support provided both before and during the self-paced study, the chances of dropping out are high (as we see from the very high attrition rate in language apps).

However well-intentioned, many past attempts to facilitate self-pacing have also suffered from the poor quality of the learning materials. The focus was more on the technology of delivery, and this remains the case today, as many posts on this blog illustrate. Contemporary companies offering language learning programmes show relatively little interest in the content of the learning (take Duolingo as an example). Few app developers show signs of investing in experienced curriculum specialists or materials writers. Glossy photos, contemporary videos, good UX and clever gamification, all of which become dull and repetitive after a while, do not compensate for poorly designed materials.

Over forty years ago, a review of self-paced learning concluded that the evidence on its benefits was inconclusive (Allison, 1975: 5). Nothing has changed since. For some people, in some contexts, for some of the time, self-paced learning may work. Claims that go beyond that cannot be substantiated.

References

Allison, E. 1975. ‘Self-Paced Instruction: A Review’ The Journal of Economic Education 7 / 1: 5 – 12

Altman, H.B. & Politzer, R.L. (eds.) 1971. Individualizing Foreign Language Instruction: Proceedings of the Stanford Conference, May 6 – 8, 1971. Washington, D.C.: Office of Education, U.S. Department of Health, Education, and Welfare

Chastain, K. 1975. ‘An Examination of the Basic Assumptions of “Individualized” Instruction’ The Modern Language Journal 59 / 7: 334 – 344

Disick, R.S. 1975 Individualizing Language Instruction: Strategies and Methods. New York: Harcourt Brace Jovanovich

Ferster, B. 2014. Teaching Machines. Baltimore: John Hopkins University Press

Grittner, F. M. 1975. ‘Individualized Instruction: An Historical Perspective’ The Modern Language Journal 59 / 7: 323 – 333

Hattie, J. 2009. Visible Learning. Abingdon, Oxon.: Routledge

Januszewski, A. 2001. Educational Technology: The Development of a Concept. Englewood, Colorado: Libraries Unlimited

Kirschner, P. A. & van Merriënboer, J. J. G. 2013. ‘Do Learners Really Know Best? Urban Legends in Education’ Educational Psychologist, 48:3, 169-183

Magill, D. S. 2008. ‘What Part of Self-Paced Don’t You Understand?’ University of Wisconsin 24th Annual Conference on Distance Teaching & Learning Conference Proceedings.

McMahon, M. & Oliver, R. 2001. ‘Promoting self-regulated learning in an on-line environment’ in C. Montgomerie & J. Viteli (eds.), Proceedings of World Conference on Educational Multimedia, Hypermedia and Telecommunications 2001 (pp. 1299-1305). Chesapeake, VA: AACE

Pendleton, C. S. 1930. ‘Personalizing English Teaching’ Peabody Journal of Education 7 / 4: 195 – 200

Saettler, P. 1990. The Evolution of American Educational Technology. Denver: Libraries Unlimited

Stern, H.H. 1983. Fundamental Concepts of Language Teaching. Oxford: Oxford University Press

Valdman, A. 1968. ‘Programmed Instruction versus Guided Learning in Foreign Language Acquisition’ Die Unterrichtspraxis / Teaching German 1 / 2: 1 – 14

 

by Philip Kerr & Andrew Wickham

from IATEFL 2016 Birmingham Conference Selections (ed. Tania Pattison) Faversham, Kent: IATEFL pp. 75 – 78

ELT publishing, international language testing and private language schools are all industries: products are produced, bought and sold for profit. English language teaching (ELT) is not. It is an umbrella term that is used to describe a range of activities, some of which are industries, and some of which (such as English teaching in high schools around the world) might better be described as public services. ELT, like education more generally, is, nevertheless, often referred to as an ‘industry’.

Education in a neoliberal world

The framing of ELT as an industry is both a reflection of how we understand the term and a force that shapes our understanding. Associated with the idea of ‘industry’ is a constellation of other ideas and words (such as efficacy, productivity, privatization, marketization, consumerization, digitalization and globalization) which become a part of ELT once it is framed as an industry. Repeated often enough, ‘ELT as an industry’ can become a metaphor that we think and live by. Those activities that fall under the ELT umbrella, but which are not industries, become associated with the desirability of industrial practices through such discourse.

The shift from education, seen as a public service, to educational managerialism (where education is seen in industrial terms with a focus on efficiency, free market competition, privatization and a view of students as customers) can be traced to the 1980s and 1990s (Gewirtz, 2001). In 1999, under pressure from developed economies, the General Agreement on Trade in Services (GATS) transformed education into a commodity that could be traded like any other in the marketplace (Robertson, 2006). The global industrialisation and privatization of education continues to be promoted by transnational organisations (such as the World Bank and the OECD), well-funded free-market think-tanks (such as the Cato Institute), philanthro-capitalist foundations (such as the Gates Foundation) and educational businesses (such as Pearson) (Ball, 2012).

Efficacy and learning outcomes

Managerialist approaches to education require educational products and services to be measured and compared. In ELT, the most visible manifestation of this requirement is the current ubiquity of learning outcomes. Contemporary coursebooks are full of ‘can-do’ statements, although these are not necessarily of any value to anyone. Examples from one unit of one best-selling course include ‘Now I can understand advice people give about hotels’ and ‘Now I can read an article about unique hotels’ (McCarthy et al. 2014: 74). However, in a world where accountability is paramount, they are deemed indispensable. The problem from a pedagogical perspective is that teaching input does not necessarily equate with learning uptake. Indeed, there is no reason why it should.

Drawing on the Common European Framework of Reference for Languages (CEFR) for inspiration, new performance scales have emerged in recent years. These include the Cambridge English Scale and the Pearson Global Scale of English. Moving away from the broad six categories of the CEFR, such scales permit finer-grained measurement and we now see individual vocabulary and grammar items tagged to levels. Whilst such initiatives undoubtedly support measurements of efficacy, the problem from a pedagogical perspective is that they assume that language learning is linear and incremental, as opposed to complex and jagged.

Given the importance accorded to the measurement of language learning (or what might pass for language learning), it is unsurprising that attention is shifting towards the measurement of what is probably the most important factor impacting on learning: the teaching. Teacher competency scales have been developed by Cambridge Assessment, the British Council and EAQUALS (Evaluation and Accreditation of Quality Language Services), among others.

The backwash effects of the deployment of such scales are yet to be fully experienced, but the likely increase in the perception of both language learning and teacher learning as the synthesis of granularised ‘bits of knowledge’ is cause for concern.

Digital technology

Digital technology may offer advantages to both English language teachers and learners, but its rapid growth in language learning is the result, primarily but not exclusively, of the way it has been promoted by those who stand to gain financially. In education, generally, and in English language teaching, more specifically, advocacy of the privatization of education is always accompanied by advocacy of digitalization. The global market for digital English language learning products was reported to be $2.8 billion in 2015 and is predicted to reach $3.8 billion by 2020 (Ambient Insight, 2016).
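For context, the two Ambient Insight figures cited above imply a compound annual growth rate of a little over 6% — a quick arithmetic check:

```python
# Implied compound annual growth rate (CAGR) for the projection cited
# above: $2.8 billion in 2015 rising to a predicted $3.8 billion in 2020.
start, end, years = 2.8, 3.8, 2020 - 2015
cagr = (end / start) ** (1 / years) - 1
print(round(cagr * 100, 1))  # prints 6.3 (percent per year)
```

Compounding at roughly 6.3% a year for five years takes $2.8 billion to the projected $3.8 billion.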

In tandem with the increased interest in measuring learning outcomes, there is fierce competition in the market for high-stakes examinations, and these are increasingly digitally delivered and marked. In the face of this competition and in a climate of digital disruption, companies like Pearson and Cambridge English are developing business models of vertical integration in which they can provide and sell everything from placement testing, to courseware (either print or delivered through an LMS), teaching, assessment and teacher training. Huge investments are being made in pursuit of such models. Pearson, for example, recently bought GlobalEnglish and Wall Street English, and set up a partnership with Busuu, thus covering all aspects of language learning from resources provision and publishing to off- and online training delivery.

As regards assessment, the most recent adult coursebook from Cambridge University Press (in collaboration with Cambridge English Language Assessment), ‘Empower’ (Doff et al., 2015), sells itself on a combination of course material with integrated, validated assessment.

Besides its potential for scalability (and therefore greater profit margins), the appeal (to some) of platform-delivered English language instruction is that it facilitates assessment that is much finer-grained and actionable in real time. Digitization and testing go hand in hand.

Few English language teachers have been unaffected by the move towards digital. In the state sectors, large-scale digitization initiatives (such as the distribution of laptops for educational purposes, the installation of interactive whiteboards, the move towards blended models of instruction or the move away from printed coursebooks) are becoming commonplace. In the private sectors, online (or partially online) language schools are taking market share from the traditional bricks-and-mortar institutions.

These changes have entailed modifications to the skill-sets that teachers need to have. Two announcements at this conference reflect this shift. First of all, Cambridge English launched their ‘Digital Framework for Teachers’, a matrix of six broad competency areas organised into four levels of proficiency. Secondly, Aqueduto, the Association for Quality Education and Training Online, was launched, setting itself up as an accreditation body for online or blended teacher training courses.

Teachers’ pay and conditions

In the United States, and likely soon in the UK, the move towards privatization is accompanied by an overt attack on teachers’ unions, rights, pay and conditions (Selwyn, 2014). As English language teaching in both public and private sectors is commodified and marketized, it is no surprise to find that the drive to bring down costs has a negative impact on teachers worldwide. Gwynt (2015), for example, catalogues cuts in funding, large-scale redundancies, a narrowing of the curriculum, intensified workloads (including the need to comply with ‘quality control measures’), the deskilling of teachers, dilapidated buildings, minimal resources and low morale in an ESOL department in one British further education college. In France, a large-scale study by Wickham, Cagnol, Wright and Oldmeadow (Linguaid, 2015; Wright, 2016) found that EFL teachers in the very competitive private sector typically had multiple employers, limited or no job security, limited sick pay and holiday pay, very little training and low hourly rates that were deteriorating. One of the principal drivers of the pressure on salaries is the rise of online training delivery through Skype and other online platforms, using offshore teachers in low-cost countries such as the Philippines. This type of training represents 15% in value and up to 25% in volume of all language training in the French corporate sector and is developing fast in emerging countries. These examples are illustrative of a broad global trend.

Implications

Given the current climate, teachers will benefit from closer networking with fellow professionals in order, not least, to be aware of the rapidly changing landscape. It is likely that they will need to develop and extend their skill sets (especially their online skills and visibility and their specialised knowledge), to differentiate themselves from competitors and to be able to demonstrate that they are in tune with current demands. More generally, it is important to recognise that current trends have yet to run their full course. Conditions for teachers are likely to deteriorate further before they improve. More than ever before, teachers who want to have any kind of influence on the way that marketization and industrialization are shaping their working lives will need to do so collectively.

References

Ambient Insight. 2016. The 2015-2020 Worldwide Digital English Language Learning Market. http://www.ambientinsight.com/Resources/Documents/AmbientInsight_2015-2020_Worldwide_Digital_English_Market_Sample.pdf

Ball, S. J. 2012. Global Education Inc. Abingdon, Oxon.: Routledge

Doff, A., Thaine, C., Puchta, H., Stranks, J. and P. Lewis-Jones 2015. Empower. Cambridge: Cambridge University Press

Gewirtz, S. 2001. The Managerial School: Post-welfarism and Social Justice in Education. Abingdon, Oxon.: Routledge

Gwynt, W. 2015. ‘The effects of policy changes on ESOL’. Language Issues 26 / 2: 58 – 60

McCarthy, M., McCarten, J. and H. Sandiford 2014. Touchstone 2 Student’s Book Second Edition. Cambridge: Cambridge University Press

Linguaid. 2015. Le Marché de la Formation Langues à l’Heure de la Mondialisation. Guildford: Linguaid

Robertson, S. L. 2006. ‘Globalisation, GATS and trading in education services’. Bristol: Centre for Globalisation, Education and Societies, University of Bristol. http://www.bris.ac.uk/education/people/academicStaff/edslr/publications/04slr

Selwyn, N. 2014. Distrusting Educational Technology. New York: Routledge

Wright, R. 2016. ‘My teacher is rich … or not!’ English Teaching Professional 103: 54 – 56

All aboard …

The point of adaptive learning is that it can personalize learning. When we talk about personalization, mention of learning styles is rarely far away. Jose Ferreira, the former CEO of Knewton, made his case for learning styles in a blog post that generated a superb and, for Ferreira, embarrassing discussion in the comments, which were subsequently deleted by Knewton. FluentU (which I reviewed here) clearly approves of learning styles, or at least sees them as a useful way to market their product, even though it is unclear how their product caters to different styles. Busuu claims to be ‘personalised to fit your style of learning’. Voxy, Inc. (according to their company overview) ‘operates a language learning platform that creates custom curricula for English language learners based on their interests, routines, goals, and learning styles’. Bliu Bliu (which I reviewed here) recommended, in a recent blog post, that learners should ‘find out their language learner type and use it to their advantage’ and suggests, as a starter, trying out ‘Bliu Bliu, where pretty much any learner can find what suits them best’. Memrise ‘uses clever science to adapt to your personal learning style’. Duolingo’s learning tree ‘effectively rearranges itself to suit individual learning styles’, according to founder Luis von Ahn. This list could go on and on.

Learning styles are thriving in ELT coursebooks, too. Here are just three recent examples for learners of various ages. Today! by Todd, D. & Thompson, T. (Pearson, 2014) ‘shapes learning around individual students with graded difficulty practice for mixed-ability classes’ and ‘makes testing mixed-ability classes easier with tests that you can personalise to students’ abilities’.

Move it! by Barraclough, C., Beddall, F., Stannett, K. & Wildman, J. (Pearson, 2015) offers ‘personalized pathways [which] allow students to optimize their learning outcomes’ and a ‘complete assessment package to monitor students’ learning process’.

Open Mind Elementary (A2) 2nd edition by Rogers, M., Taylor-Knowles, J. & Taylor-Knowles, S. (Macmillan, 2014) has a whole page devoted to learning styles in the ‘Life Skills’ strand of the course. The scope and sequence describes it in the following terms: ‘Thinking about what you like to do to find your learning style and improve how you learn English’. Here’s the relevant section:

Methodology books offer more tips for ways that teachers can cater to different learning styles. Recent examples include Patrycja Kamińska’s Learning Styles and Second Language Education (Cambridge Scholars, 2014), Tammy Gregersen & Peter D. MacIntyre’s Capitalizing on Language Learners’ Individuality (Multilingual Matters, 2014) and Marjorie Rosenberg’s Spotlight on Learning Styles (Delta Publishing, 2013). Teacher magazines show a continuing interest in the topic. Humanising Language Teaching and English Teaching Professional are particularly keen. The British Council offers courses about learning styles and its Teaching English website has many articles and lesson plans on the subject (my favourite explains that your students will be more successful if you match your teaching style to their learning styles), as do the websites of all the major publishers. Most ELT conferences will also offer something on the topic.

How about language teaching qualifications and frameworks? The Cambridge English Teaching Framework contains a component entitled ‘Understanding learners’ and this specifies as the first part of the component a knowledge of concepts such as learning styles (e.g., visual, auditory, kinaesthetic), multiple intelligences, learning strategies, special needs, affect. Unsurprisingly, the Cambridge CELTA qualification requires successful candidates to demonstrate an awareness of the different learning styles and preferences that adults bring to learning English. The Cambridge DELTA requires successful candidates to accommodate learners according to their different abilities, motivations, and learning styles. The Eaquals Framework for Language Teacher Training and Development requires teachers at Development Phase 2 to have the skill of determining and anticipating learners’ language learning needs and learning styles at a range of levels, selecting appropriate ways of finding out about these.

Outside of ELT, learning styles also continue to thrive. Phil Newton (2015. ‘The learning styles myth is thriving in higher education’ Frontiers in Psychology 6: 1908) carried out a survey of educational publications in higher education between 2013 and 2016, and found that an overwhelming majority (89%) implicitly or directly endorse the use of learning styles. He also cites research showing that 93% of UK schoolteachers believe that ‘individuals learn better when they receive information in their preferred Learning Style’, with similar figures in other countries. 72% of Higher Education institutions in the US teach ‘learning style theory’ as part of faculty development for online teachers. Advocates of learning styles in English language teaching are not alone.

But, unfortunately, …

In case you weren’t aware of it, there is a rather big problem with learning styles. There is a huge amount of research which suggests that learning styles (and, in particular, attempts to tailor teaching to learning styles) need to be approached with extreme scepticism. Much of this research was published long before the blog posts, advertising copy, books and teaching frameworks (listed above) were written. What does this research have to tell us?

The first problem concerns learning styles taxonomies. There are three issues here: many people do not fit one particular style, the information used to assign people to styles is often inadequate, and there are so many different styles that it becomes cumbersome to link particular learners to particular styles (Kirschner, P. A. & van Merriënboer, J. J. G. 2013. ‘Do Learners Really Know Best? Urban Legends in Education’ Educational Psychologist, 48 / 3, 169-183). To summarise, given the lack of clarity as to which learning styles actually exist, it may be ‘neither viable nor justified’ for learning styles to form the basis of lesson planning (Hall, G. 2011. Exploring English Language Teaching. Abingdon, Oxon.: Routledge p.140). More detailed information about these issues can be found in the following sources:

Coffield, F., Moseley, D., Hall, E. & Ecclestone, K. 2004. Learning styles and pedagogy in post-16 learning: a systematic and critical review. London: Learning and Skills Research Centre

Dembo, M. H. & Howard, K. 2007. Advice about the use of learning styles: a major myth in education. Journal of College Reading & Learning 37 / 2: 101 – 109

Kirschner, P. A. 2017. Stop propagating the learning styles myth. Computers & Education 106: 166 – 171

Pashler, H., McDaniel, M., Rohrer, D. & Bjork, R. 2008. Learning styles: concepts and evidence. Psychological Science in the Public Interest 9 / 3: 105 – 119

Riener, C. & Willingham, D. 2010. The myth of learning styles. Change – The Magazine of Higher Learning

The second problem concerns what Pashler et al refer to as the ‘meshing hypothesis’: the idea that instructional interventions can be effectively tailored to match particular learning styles. Pashler et al concluded that the available taxonomies of student types do not offer any valid help in deciding what kind of instruction to offer each individual. Even in 2008, their finding was not new. Back in 1978, a review of 15 studies that looked at attempts to match learning styles to approaches to first language reading instruction concluded that modality preference ‘has not been found to interact significantly with the method of teaching’ (Tarver, S. & Dawson, M. M. 1978. ‘Modality preference and the teaching of reading’ Journal of Learning Disabilities 11: 17 – 29). The following year, two other researchers concluded that the assumption that one can improve instruction by matching materials to children’s modality strengths ‘appears to lack even minimal empirical support’ (Arter, J. A. & Jenkins, J. A. 1979. ‘Differential diagnosis-prescriptive teaching: a critical appraisal’ Review of Educational Research 49: 517 – 555). Fast forward twenty years to 1999, and Stahl (‘Different strokes for different folks?’ American Educator Fall 1999 pp. 1 – 5) was writing that ‘the reason researchers roll their eyes at learning styles is the utter failure to find that assessing children’s learning styles and matching to instructional methods has any effect on learning. The area with the most research has been the global and analytic styles […]. Over the past 30 years, the names of these styles have changed – from “visual” to “global” and from “auditory” to “analytic” – but the research results have not changed.’ For a recent evaluation of the practical applications of learning styles, have a look at Rogowsky, B. A., Calhoun, B. M. & Tallal, P. 2015. ‘Matching Learning Style to Instructional Method: Effects on Comprehension’ Journal of Educational Psychology 107 / 1: 64 – 78.
Even David Kolb, the Big Daddy of learning styles, now concedes that there is no strong evidence that teachers should tailor their instruction to their students’ particular learning styles (reported in Glenn, D. 2009. ‘Matching teaching style to learning style may not help students’ The Chronicle of Higher Education). To summarise, the meshing hypothesis is entirely unsupported in the scientific literature. It is a myth (Howard-Jones, P. A. 2014. ‘Neuroscience and education: myths and messages’ Nature Reviews Neuroscience 15: 817 – 824).

This brings me back to the blog posts, advertising blurb, coursebooks, methodology books and so on that continue to tout learning styles. The writers of these texts typically do not acknowledge that there’s a problem of any kind. Are they unaware of the research? Or are they aware of it, but choose not to acknowledge it? I suspect that the former is often the case with the app developers. If the latter is the case, what might the reasons be? In the case of teacher training specifications, the reason is probably practical. Changing a syllabus is an expensive and time-consuming operation. But in the case of some of the ELT writers, I suspect that they hang on in there because they so much want to believe.

As Newton (2015: 2) notes, ‘intuitively, there is much that is attractive about the concept of Learning Styles. People are obviously different and Learning Styles appear to offer educators a way to accommodate individual learner differences.’ Pashler et al (2008: 107) add that ‘another related factor that may play a role in the popularity of the learning-styles approach has to do with responsibility. If a person or a person’s child is not succeeding or excelling in school, it may be more comfortable for the person to think that the educational system, not the person or the child himself or herself, is responsible. That is, rather than attribute one’s lack of success to any lack of ability or effort on one’s part, it may be more appealing to think that the fault lies with instruction being inadequately tailored to one’s learning style. In that respect, there may be linkages to the self-esteem movement that became so influential, internationally, starting in the 1970s.’ There is no reason to doubt that many of those who espouse learning styles have good intentions.

No one, I think, seriously questions that learners might benefit from a wide variety of input styles and learning tasks. People are obviously different. MacIntyre et al (MacIntyre, P. D., Gregersen, T. & Clément, R. 2016. ‘Individual Differences’ in Hall, G. (ed.) The Routledge Handbook of English Language Teaching. Abingdon, Oxon.: Routledge, pp. 310 – 323, p. 319) suggest that teachers might consider instructional methods that allow them to ‘capitalise on both variety and choice’ and also help learners find ways to do this for themselves inside and outside the classroom. Jill Hadfield (2006. ‘Teacher Education and Trainee Learning Style’ RELC Journal 37 / 3: 369 – 388) recommends that we ‘design our learning tasks across the range of learning styles so that our trainees can move across the spectrum, experiencing both the comfort of matching and the challenge produced by mismatching’. But this is not the same thing as claiming that identification of a particular learning style can lead to instructional decisions. The value of books like Rosenberg’s Spotlight on Learning Styles lies in the wide range of practical suggestions for varying teaching styles and tasks. They contain ideas of educational value: it is unfortunate that the theoretical background is so thin.

In ELT, things are, perhaps, beginning to change. Russ Mayne’s 2012 blog post ‘Learning styles: facts and fictions’ got a few heads nodding, and he followed this up two years later with a presentation at IATEFL looking at various aspects of ELT, including learning styles, which have little or no scientific credibility. Carol Lethaby and Patricia Harries gave a talk at IATEFL 2016, ‘Changing the way we approach learning styles in teacher education’, which was also much discussed and shared online. They also had an article in ELT Journal called ‘Learning styles and teacher training: are we perpetuating neuromyths?’ (2016 ELTJ 70 / 1: 16 – 27). Even Pearson, in a blog post of November 2016 (‘Mythbusters: A review of research on learning styles’), acknowledges that there is ‘a shocking lack of evidence to support the core learning styles claim that customizing instruction based on students’ preferred learning styles produces better learning than effective universal instruction’, concluding that ‘it is impossible to recommend learning styles as an effective strategy for improving learning outcomes’.

Every now and then, someone recommends that I take a look at a flashcard app. It’s often interesting to see what developers have done with design, gamification and UX features, but the content is almost invariably awful. Most recently, I was encouraged to look at Word Pash. The screenshots below are from their promotional video.

[Four screenshots from the Word Pash promotional video]

The content problems are immediately apparent: an apparently random selection of target items, an apparently random mix of high and low frequency items, unidiomatic language examples, along with definitions and distractors that are less frequent than the target item. I don’t know if these are representative of the rest of the content. The examples seem to come from ‘Stage 1 Level 3’, whatever that means. (My confidence in the product was also damaged by the fact that the Word Pash website includes one testimonial from a certain ‘Janet Reed – Proud Mom’, whose son ‘was able to increase his score and qualify for academic scholarships at major universities’ after using the app. The picture accompanying ‘Janet Reed’ is a free stock image from Pexels and ‘Janet Reed’ is presumably fictional.)

According to the website, ‘WordPash is a free-to-play mobile app game for everyone in the global audience whether you are a 3rd grader or PhD, wordbuff or a student studying for their SATs, foreign student or international business person, you will become addicted to this fast paced word game’. On the basis of the promotional video, the app couldn’t be less appropriate for English language learners. It seems unlikely that it would help anyone improve their ACT or SAT test scores. The suggestion that the vocabulary development needs of 9-year-olds and doctoral students are comparable is pure chutzpah.

The deliberate study of more or less random words may be entertaining, but it’s unlikely to lead to very much in practical terms. For general purposes, the deliberate learning of the highest frequency words, up to about a frequency ranking of #7500, makes sense, because there’s a reasonably high probability that you’ll come across these items again before you’ve forgotten them. Beyond that frequency level, the value of the acquisition of an additional 1000 words tails off very quickly. Adding 1000 words from frequency ranking #8000 to #9000 is likely to result in an increase in lexical understanding of general purpose texts of about 0.2%. When we get to frequency ranks #19,000 to #20,000, the gain in understanding decreases to 0.01%[1]. In other words, deliberate vocabulary learning needs to be targeted. The data is relatively recent, but the principle goes back to at least the middle of the last century, when Michael West argued that a principled approach to vocabulary development should be driven by a comparison of the usefulness of a word and its ‘learning cost’[2]. Three hundred years before that, Comenius had articulated something very similar: ‘in compiling vocabularies, my […] concern was to select the words in most frequent use’[3].
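The shape of this curve can be sketched with a back-of-the-envelope calculation. The snippet below is only an illustration: it assumes an idealised Zipfian distribution (word frequency proportional to 1/rank) over a hypothetical 20,000-word vocabulary, so the exact percentages differ from the corpus-derived figures Nation reports, but the steeply diminishing returns per extra thousand words are the same.

```python
# Rough sketch of diminishing lexical coverage gains, assuming an idealised
# Zipfian distribution (frequency of rank r proportional to 1/r) over a
# hypothetical 20,000-word vocabulary. These are NOT corpus-derived figures;
# they just show the shape of the curve Nation describes.

def zipf_coverage(top_n, vocab_size=20000):
    """Share of running text covered by the top_n most frequent words."""
    total = sum(1 / r for r in range(1, vocab_size + 1))
    covered = sum(1 / r for r in range(1, top_n + 1))
    return covered / total

# Each extra 1000 words buys a smaller and smaller slice of coverage.
for lo, hi in [(0, 1000), (7000, 8000), (8000, 9000), (19000, 20000)]:
    gain = zipf_coverage(hi) - zipf_coverage(lo)
    print(f"words {lo:>6}-{hi:<6}: +{gain:.2%} coverage")
```

Real corpus counts tail off even more sharply than this idealisation does, which only strengthens the argument for targeting the highest-frequency band first.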

I’ll return to ‘general purposes’ later in this post, but, for now, we should remember that very few language learners actually study a language for general purposes. Globally, the vast majority of English language learners study English in an academic (school) context and their immediate needs are usually exam-specific. For them, general purpose frequency lists are unlikely to be adequate. If they are studying with a coursebook and are going to be tested on the lexical content of that book, they will need to use the wordlist that matches the book. Increasingly, publishers make such lists available and content producers for vocabulary apps like Quizlet and Memrise often use them. Many examinations, both national and international, also have accompanying wordlists. Examples of such lists produced by examination boards include the Cambridge English young learners’ exams (Starters, Movers and Flyers) and Cambridge English Preliminary. Other exams do not have official word lists, but reasonably reliable lists have been produced by third parties. Examples include Cambridge First, IELTS and SAT. There are, in addition, well-researched wordlists for academic English, including the Academic Word List (AWL) and the Academic Vocabulary List (AVL). All of these make sensible starting points for deliberate vocabulary learning.

When we turn to other, out-of-school learners, the number of reasons for studying English is huge. Different learners have different lexical needs, and working with a general purpose frequency list may be, at least in part, a waste of time. EFL and ESL learners are likely to have very different needs, as will EFL and ESP learners, as will older and younger learners, learners in different parts of the world, learners who will find themselves in English-speaking countries and those who won’t, etc., etc. For some of these demographics, specialised corpora (from which frequency-based wordlists can be drawn) exist. For most learners, though, the ideal list simply does not exist. Either it will have to be created (requiring a significant amount of time and expertise[4]) or an available best-fit will have to suffice. Paul Nation, in his recent Making and Using Word Lists for Language Learning and Testing (John Benjamins, 2016), includes a useful chapter on critiquing wordlists. For anyone interested in better understanding the issues surrounding the development and use of wordlists, three good articles are freely available online. These are:

Lessard-Clouston, M. 2012 / 2013. ‘Word Lists for Vocabulary Learning and Teaching’ The CATESOL Journal 24.1: 287- 304

Lessard-Clouston, M. 2016. ‘Word lists and vocabulary teaching: options and suggestions’ Cornerstone ESL Conference 2016

Sorell, C. J. 2013. A study of issues and techniques for creating core vocabulary lists for English as an International Language. Doctoral thesis.

But, back to ‘general purposes’ …. Frequency lists are the obvious starting point for preparing a wordlist for deliberate learning, but they are very problematic. Frequency rankings depend on the corpus on which they are based and, since these are different, rankings vary from one list to another. Even drawing on just one corpus, rankings can be a little strange. In the British National Corpus, for example, ‘May’ (the month) is about twice as frequent as ‘August’[5], but we would be foolish to infer from this that the learning of ‘May’ should be prioritised over the learning of ‘August’. An even more striking example from the same corpus is the fact that ‘he’ is about twice as frequent as ‘she’[6]: should, therefore, ‘he’ be learnt before ‘she’?

List compilers have to make a number of judgement calls in their work. There is not space here to consider these in detail, but two particularly tricky questions concerning the way that words are chosen may be mentioned: Is a verb like ‘list’, with two different and unrelated meanings, one word or two? Should inflected forms be considered as separate words? The judgements are not usually informed by considerations of learners’ needs. Learners will probably best approach vocabulary development by building their store of word senses: attempting to learn all the meanings and related forms of any given word is unlikely to be either useful or successful.
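The second of these judgement calls can be made concrete with a toy example. Everything below (the eight-token ‘corpus’ and the very naive inflection map) is invented for illustration; it simply shows that the answer to ‘how many words are on the list?’ depends on a decision the compiler makes, not on the data alone.

```python
# Toy illustration: the same tiny 'corpus' yields different word counts
# depending on whether inflected forms are counted separately or lumped
# under a base form. The corpus and the inflection map are invented.

tokens = ["list", "lists", "listed", "ship", "ships", "the", "the", "a"]

# Decision 1: count every distinct word form as a separate word.
word_forms = set(tokens)

# Decision 2: group inflected forms under a base form ('lemma').
inflection_map = {"lists": "list", "listed": "list", "ships": "ship"}
lemmas = {inflection_map.get(t, t) for t in tokens}

print(len(word_forms))  # 7 distinct forms
print(len(lemmas))      # 4 lemmas: list, ship, the, a
```

Note that neither count resolves the other judgement call: ‘list’ (make a list) and ‘list’ (what a ship does in a storm) are still conflated into one item here, though a sense-based list might treat them as two.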

Frequency lists, in other words, are not statements of scientific ‘fact’: they are interpretative documents. They have been compiled for descriptive purposes, not as ways of structuring vocabulary learning, and it cannot be assumed they will necessarily be appropriate for a purpose for which they were not designed.

A further major problem concerns the corpus on which the frequency list is based. Large databases, such as the British National Corpus or the Corpus of Contemporary American English, are collections of language used by native speakers in certain parts of the world, usually of a restricted social class. As such, they are of relatively little value to learners who will be using English in contexts that are not covered by the corpus. A context where English is a lingua franca is one such example.

A different kind of corpus is the Cambridge Learner Corpus (CLC), a collection of exam scripts produced by candidates in Cambridge exams. This has led to the development of the English Vocabulary Profile (EVP), where word senses are tagged as corresponding to particular levels in the Common European Framework scale. At first glance, this looks like a good alternative to frequency lists based on native-speaker corpora. But closer consideration reveals many problems. The design of examination tasks inevitably results in the production of language of a very different kind from that produced in other contexts. Many high frequency words simply do not appear in the CLC because it is unlikely that a candidate would use them in an exam. Other items are very frequent in this corpus just because they are likely to be produced in examination tasks. Unsurprisingly, frequency rankings in EVP do not correlate very well with frequency rankings from other corpora. The EVP, then, like other frequency lists, can only serve, at best, as a rough guide for the drawing up of target item vocabulary lists in general purpose apps or coursebooks[7].

There is no easy solution to the problems involved in devising suitable lexical content for the ‘global audience’. Tagging words to levels (i.e. grouping them into frequency bands) will always be problematic, unless very specific user groups are identified. Writers, like myself, of general purpose English language teaching materials are justifiably irritated by some publishers’ insistence on allocating words to levels with numerical values. The policy, taken to extremes (as is increasingly the case), has little to recommend it in linguistic terms. But it’s still a whole lot better than the aleatory content of apps like Word Pash.

[1] See Nation, I.S.P. 2013. Learning Vocabulary in Another Language 2nd edition. (Cambridge: Cambridge University Press) p. 21 for statistical tables. See also Nation, P. & R. Waring 1997. ‘Vocabulary size, text coverage and word lists’ in Schmitt & McCarthy (eds.) 1997. Vocabulary: Description, Acquisition and Pedagogy. (Cambridge: Cambridge University Press) pp. 6 -19

[2] See Kelly, L.G. 1969. 25 Centuries of Language Teaching. (Rowley, Mass.: Newbury House) p. 206 for a discussion of West’s ideas.

[3] Kelly, L.G. 1969. 25 Centuries of Language Teaching. (Rowley, Mass.: Newbury House) p. 184

[4] See Timmis, I. 2015. Corpus Linguistics for ELT (Abingdon: Routledge) for practical advice on doing this.

[5] Nation, I.S.P. 2016. Making and Using Word Lists for Language Learning and Testing. (Amsterdam: John Benjamins) p.58

[6] Taylor, J.R. 2012. The Mental Corpus. (Oxford: Oxford University Press) p.151

[7] For a detailed critique of the limitations of using the CLC as a guide to syllabus design and textbook development, see Swan, M. 2014. ‘A Review of English Profile Studies’ ELTJ 68/1: 89-96