Posts Tagged ‘grammar’

More and more language learning is taking place, fully or partially, on online platforms, and the affordances of these platforms for communicative interaction are exciting. Unfortunately, most platform-based language learning experiences are a relentless diet of drag-and-drop, drag-till-you-drop grammar or vocabulary gap-filling. The chat rooms and discussion forums that the platforms incorporate are underused or ignored. Lindsay Clandfield and Jill Hadfield’s new book is intended to promote online interaction between and among learners and the instructor, rather than between learners and software.

Interaction Online is a recipe book, containing about 80 different activities (many more if you consider the suggested variations). True to its subtitle, ‘Creative activities for blended learning’, the book offers activities that the authors have selected and designed so that any teacher using any degree of blend (from platform-based instruction to occasional online homework) will be able to use them. The activities do not depend on any particular piece of software, as they are all designed for basic tools like Facebook, Skype and chat rooms. Indeed, almost every activity could be used, sometimes with slight modification, by teachers in face-to-face settings.

A recipe book must be judged on the quality of the activities it contains, and the standard here is high. They range from relatively simple, short activities to much longer tasks which will need an hour or more to complete. An example of the former is a sentence-completion activity (‘Don’t you hate / love it when ….?’ – activity 2.5). As an example of the latter, there is a complex problem-solving information-gap where students have to work out the solution to a mystery (activity 6.13), an activity which reminds me of some of the material in Jill Hadfield’s much-loved Communication Games books.

In common with many recipe books, Interaction Online is not an easy book to use, in the sense that it is hard to navigate. The authors have divided up the tasks into five kinds of interaction (personal, factual, creative, critical and fanciful), but it is not always clear precisely why one activity has been assigned to one category rather than another. In any case, the kind of interaction is likely to be less important to many teachers than the kind and amount of language that will be generated (among other considerations), and the table of contents is less than helpful. The index at the back of the book helps to some extent, but a clearer tabulation of activities by interaction type, level, time required, topic and language focus (if any) would be very welcome. Teachers will need to devise their own system of referencing so that they can easily find activities they want to try out.

Again, like many recipe books, Interaction Online is a mix of generic task-types and activities that will only work with the supporting materials that are provided. Teachers will enjoy the latter, but will want to experiment with the former and it is these generic task-types that they are most likely to add to their repertoire. In activity 2.7 (‘Foodies’ – personal interaction), for example, students post pictures of items of food and drink, to which other students must respond with questions. The procedure is clear and effective, but, as the authors note, the pictures could be of practically anything. ‘From pictures to questions’ might be a better title for the activity than ‘Foodies’. Similarly, activity 3.4 (‘Find a festival’ – factual interaction) uses a topic (‘festivals’), rather than a picture, to generate questions and responses. The procedure is slightly different from activity 2.7, but the interactional procedures of the two activities could be swapped around as easily as the topics could be changed.

Perhaps the greatest strength of this book is the variety of interactional procedures that is suggested. The majority of activities contain (1) suggestions for a stimulus, (2) suggestions for managing initial responses to this stimulus, and (3) suggestions for further interaction. As readers work their way through the book, they will be struck by similarities between the activities. The final chapter (chapter 8: ‘Task design’) provides an excellent summary of the possibilities of communicative online interaction, and more experienced teachers may want to read this chapter first.

Chapter 7 provides a useful, but necessarily fairly brief, overview of considerations regarding feedback and assessment.

Overall, Interaction Online is a very rich resource, and one that will be best mined in multiple visits. For most readers, I would suggest an initial flick through and a cherry-picking of a small number of activities to try out. For materials writers and course designers, a better starting point may be the final two chapters, followed by a sampling of activities. For everyone, though, Interaction Online is a powerful reminder that technology-assisted language learning could and should be far more than what it usually is.

(This review first appeared in the International House Journal of Education and Development.)

 


Adaptive learning providers make much of their ability to provide learners with personalised feedback and to provide teachers with dashboard feedback on the performance of both individuals and groups. All well and good, but my interest here is in the automated feedback that software could provide on very specific learning tasks. Scott Thornbury, in a recent talk, ‘Ed Tech: The Mouse that Roared?’, listed six ‘problems’ of language acquisition that educational technology for language learning needs to address. One of these he framed as follows: ‘The feedback problem, i.e. how does the learner get optimal feedback at the point of need?’, and suggested that technological applications ‘have some way to go.’ He was referring, not to the kind of feedback that dashboards can provide, but to the kind of feedback that characterises a good language teacher: corrective feedback (CF) – the way that teachers respond to learner utterances (typically those containing errors, but not necessarily restricted to these) in what Ellis and Shintani call ‘form-focused episodes’[1]. These responses may include a direct indication that there is an error, a reformulation, a request for repetition, a request for clarification, an echo with questioning intonation, etc. Basically, they are correction techniques.

These days, there isn’t really any debate about the value of CF. There is a clear research consensus that it can aid language acquisition. Discussing learning in more general terms, Hattie[2] claims that ‘the most powerful single influence enhancing achievement is feedback’. The debate now centres around the kind of feedback, and when it is given. Interestingly, evidence[3] has been found that CF is more effective in the learning of discrete items (e.g. some grammatical structures) than in communicative activities. Since it is precisely this kind of approach to language learning that we are more likely to find in adaptive learning programs, it is worth exploring further.

What do we know about CF in the learning of discrete items? First of all, it works better when it is explicit than when it is implicit (Li, 2010), although this needs to be nuanced. In immediate post-tests, explicit CF is better than implicit variations. But over a longer period of time, implicit CF provides better results. Secondly, formative feedback (as opposed to right / wrong testing-style feedback) strengthens retention of the learning items: this typically involves the learner repairing their error, rather than simply noticing that an error has been made. This is part of what cognitive scientists[4] sometimes describe as the ‘generation effect’. Whilst learners may benefit from formative feedback without repairing their errors, Ellis and Shintani (2014: 273) argue that the repair may result in ‘deeper processing’ and, therefore, assist learning. Thirdly, there is evidence that some delay in receiving feedback aids subsequent recall, especially over the longer term. Ellis and Shintani (2014: 276) suggest that immediate CF may ‘benefit the development of learners’ procedural knowledge’, while delayed CF is ‘perhaps more likely to foster metalinguistic understanding’. You can read a useful summary of a meta-analysis of feedback effects in online learning here, or you can buy the whole article here.

I have yet to see an online language learning program which can do CF well, but I think it’s a matter of time before things improve significantly. First of all, at the moment, feedback is usually immediate, or almost immediate. This is unlikely to change, for a number of reasons – foremost among them being the pride that ed tech takes in providing immediate feedback, and the fact that online learning is increasingly being conceptualised and consumed in bite-sized chunks, something you do on your phone between doing other things. What will change in better programs, however, is that feedback will become more formative. As things stand, tasks are usually of a very closed variety, with drag-and-drop being one of the most popular. Only one answer is possible and feedback is usually of the right / wrong-and-here’s-the-correct-answer kind. But tasks of this kind are limited in their value, and, at some point, tasks are needed where more than one answer is possible.

Here’s an example of a translation task from Duolingo, where a simple sentence could be translated into English in quite a large number of ways.

Decontextualised as it is, the sentence could be translated in the way that I have done it, although it’s unlikely. The feedback, however, is of relatively little help to the learner, who would benefit from guidance of some sort. The simple reason that Duolingo doesn’t offer useful feedback is that the program is static. It has been programmed to accept certain answers (e.g. in this case both the present simple and the present continuous are acceptable), but everything else will be rejected. Why? Because it would take too long and cost too much to anticipate and enter in all the possible answers. Why doesn’t it offer formative feedback? Because in order to do so, it would need to identify the kind of error that has been made. If we can identify the kind of error, we can make a reasonable guess about the cause of the error, and select appropriate CF … this is what good teachers do all the time.
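
To make the limitation concrete, here is a minimal sketch, entirely my own invention rather than anything Duolingo has published, of how static answer-matching works:

```python
# A toy static answer-checker: it only 'knows' the answers entered in
# advance, so any unanticipated response gets the same unhelpful message,
# whatever the cause of the error.
ACCEPTED_ANSWERS = {
    "i am making a basket for my mother",
    "i make a basket for my mother",
}

def check(learner_answer: str) -> str:
    normalised = learner_answer.strip().lower().rstrip(".")
    if normalised in ACCEPTED_ANSWERS:
        return "Correct!"
    return "Wrong. Correct answer: I am making a basket for my mother."

print(check("I am doing a basket for my mother."))  # -> Wrong. Correct answer: ...
```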

Analysing the kind of error that has been made is the first step in providing appropriate CF, and it can be done, with increasing accuracy, by current technology, but it requires a lot of computing. Let’s take spelling as a simple place to start. If you enter ‘I am makeing a basket for my mother’ in the Duolingo translation above, the program tells you ‘Nice try … there’s a typo in your answer’. Given the configuration of keyboards, it is highly unlikely that this is a typo. It’s a simple spelling mistake, and teachers recognise it as such because they see it so often. For software to achieve the same insight, it would need, as a start, to trawl a large English dictionary database and a large tagged database of learner English. The process is quite complicated, but it’s perfectly doable, and learners could be provided with CF in the form of a ‘spelling hint’.
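
As a rough illustration of the kind of reasoning involved, here is a sketch, assuming a dictionary wordlist is available. The tiny wordlist and keyboard-adjacency table below are crude stand-ins for the databases a real system would consult:

```python
# Sketch: telling a probable typo from a probable spelling error. 'makeing'
# is one deletion away from 'making', but the extra 'e' is nowhere near its
# neighbouring letters on a QWERTY keyboard, so a slip of the finger is
# unlikely; a 'spelling hint' is the more appropriate feedback.
DICTIONARY = {"making", "make", "basket", "mother"}  # stand-in for a real wordlist

QWERTY_NEIGHBOURS = {"e": "wrsd", "i": "ujko", "n": "bhjm"}  # extract, for brevity

def classify(word: str) -> str:
    if word in DICTIONARY:
        return "correct"
    for i, letter in enumerate(word):
        if word[:i] + word[i + 1:] in DICTIONARY:  # one inserted letter
            previous = word[i - 1] if i > 0 else ""
            # An inserted letter next to a keyboard neighbour suggests a typo;
            # otherwise, a spelling error is the better guess.
            if previous and previous in QWERTY_NEIGHBOURS.get(letter, ""):
                return "probable typo"
            return "probable spelling error -> offer a spelling hint"
    return "unknown word"

print(classify("makeing"))  # -> probable spelling error -> offer a spelling hint
```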

Rather more difficult is the error illustrated in my first screen shot. What’s the cause of this ‘error’? Teachers know immediately that this is probably a classic confusion of ‘do’ and ‘make’. They know that the French verb ‘faire’ can be translated into English as ‘make’ or ‘do’ (among other possibilities), and the error is a common language transfer problem. Software could do the same thing. It would need a large corpus (to establish that ‘make’ collocates with ‘a basket’ more often than ‘do’), a good bilingualised dictionary (plenty of these now exist), and a tagged database of learner English. Again, appropriate automated feedback could be provided in the form of some sort of indication that ‘faire’ is only sometimes translated as ‘make’.
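
Again as a sketch only, with invented frequency counts standing in for a real corpus query, the logic might look something like this:

```python
# Sketch: diagnosing a 'do'/'make' transfer error from collocation counts.
# The numbers are invented; a real system would query a large corpus and a
# bilingualised dictionary.
COLLOCATION_COUNTS = {("make", "basket"): 1200, ("do", "basket"): 15}
L1_TRANSLATIONS = {"faire": ["make", "do"]}  # French 'faire' covers both verbs

def diagnose(verb: str, noun: str, l1_verb: str = "faire") -> str:
    candidates = L1_TRANSLATIONS[l1_verb]
    best = max(candidates, key=lambda v: COLLOCATION_COUNTS.get((v, noun), 0))
    if verb in candidates and verb != best:
        return (f"Hint: '{l1_verb}' can be '{verb}' or '{best}', but English "
                f"speakers almost always '{best} a {noun}'.")
    return "no collocation problem detected"

print(diagnose("do", "basket"))
```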

These are both relatively simple examples, but it’s easy to think of others that are much more difficult to analyse automatically. Duolingo rejects ‘I am making one basket for my mother’: it’s not very plausible, but it’s not wrong. Teachers know why learners do this (again, it’s probably a transfer problem) and know how to respond (perhaps by saying something like ‘Only one?’). Duolingo also rejects ‘I making a basket for my mother’ (a common enough error), but is unable to provide any help beyond the correct answer. Automated CF could, however, be provided in both cases if more tools are brought into play. Multiple parsing machines (one is rarely accurate enough on its own) and semantic analysis will be needed. Both the range and the complexity of the available tools are increasing so rapidly (see here for the sort of research that Google is doing and here for an insight into current applications of this research in language learning) that Duolingo-style right / wrong feedback will very soon seem positively antediluvian.
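
For the missing auxiliary, even a single off-the-shelf parser gets part of the way, though, as noted above, one parser is rarely accurate enough on its own. A sketch using spaCy (assuming the library and its small English model are installed):

```python
# Sketch: flagging a probable missing auxiliary ('I making ...') with an
# off-the-shelf parser. Parsers are not reliable on ungrammatical input,
# which is why several would be needed in practice.
import spacy  # pip install spacy && python -m spacy download en_core_web_sm

nlp = spacy.load("en_core_web_sm")

def missing_auxiliary(sentence: str) -> bool:
    doc = nlp(sentence)
    for i, token in enumerate(doc):
        # An -ing verb directly preceded by a pronoun, with no 'am'/'is'/'are'
        # in between, is a likely candidate for a missing auxiliary.
        if token.tag_ == "VBG" and i > 0 and doc[i - 1].pos_ == "PRON":
            return True
    return False

print(missing_auxiliary("I making a basket for my mother."))     # probably True
print(missing_auxiliary("I am making a basket for my mother."))  # False
```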

One further development is worth mentioning here, and it concerns feedback and gamification. Teachers know from the way that most learners respond to written CF that they are usually much more interested in knowing what they got right or wrong than in the reasons for this. Most students spend more time looking at the score at the bottom of a corrected piece of written work than at the teacher’s laborious annotations throughout the text. Getting students to pay close attention to the feedback we provide is not easy. Online language learning systems with gamification elements, like Duolingo, typically reward learners for getting things right, and for getting things right in the fewest attempts possible. They encourage learners to look for the shortest or cheapest route to the correct answers: learning becomes a sexed-up form of test. If, however, the automated feedback is good, this sort of gamification encourages the wrong sort of learning behaviour. Gamification designers will need to shift their attention away from the current concern with right / wrong, and towards ways of motivating learners to look at and respond to feedback. It’s tricky, because you want to encourage learners to take more risks (and reward them for doing so), but it makes no sense to penalise them for getting things right. The probable solution is to have a dual points system: one set of points for getting things right, another for employing positive learning strategies.
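
A dual system of this kind would be simple enough to implement; the hard part is detecting the positive strategies reliably. Here is a minimal sketch, with invented point values:

```python
# Sketch of a dual points system: one score for accuracy, a separate score
# for productive learning behaviour (reading feedback, attempting a repair).
# The point values are arbitrary placeholders.
from dataclasses import dataclass

@dataclass
class LearnerScore:
    accuracy_points: int = 0
    strategy_points: int = 0

    def record_attempt(self, correct: bool, read_feedback: bool,
                       attempted_repair: bool) -> None:
        if correct:
            self.accuracy_points += 10
        # Reward the behaviour we want without penalising the risk-taking
        # that produced the error in the first place.
        if read_feedback:
            self.strategy_points += 5
        if attempted_repair:
            self.strategy_points += 5

score = LearnerScore()
score.record_attempt(correct=False, read_feedback=True, attempted_repair=True)
score.record_attempt(correct=True, read_feedback=False, attempted_repair=False)
print(score)  # LearnerScore(accuracy_points=10, strategy_points=10)
```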

The provision of automated ‘optimal feedback at the point of need’ may not be quite there yet, but it seems we’re on the way for some tasks in discrete-item learning. There will probably always be some teachers who can outperform computers in providing appropriate feedback, in the same way that a few top chess players can beat ‘Deep Blue’ and its scions. But the rest of us had better watch our backs: in the provision of some kinds of feedback, computers are catching up with us fast.

[1] Ellis, R. & Shintani, N. (2014) Exploring Language Pedagogy through Second Language Acquisition Research. Abingdon: Routledge, p. 249

[2] Hattie, J. (2009) Visible Learning. Abingdon: Routledge, p. 12

[3] Li, S. (2010) ‘The effectiveness of corrective feedback in SLA: a meta-analysis’ Language Learning 60/2: 309-365

[4] Brown, P. C., Roediger, H. L. & McDaniel, M. A. (2014) Make It Stick. Cambridge, Mass.: Belknap Press

Back in December 2013, in an interview with eltjam, David Liu, COO of the adaptive learning company Knewton, described how his company’s data analysis could help ELT publishers ‘create more effective learning materials’. He focused on what he calls ‘content efficacy[i]’ (he uses the word ‘efficacy’ five times in the interview), a term which he explains below:

A good example is when we look at the knowledge graph of our partners, which is a map of how concepts relate to other concepts and prerequisites within their product. There may be two or three prerequisites identified in a knowledge graph that a student needs to learn in order to understand a next concept. And when we have hundreds of thousands of students progressing through a course, we begin to understand the efficacy of those said prerequisites, which quite frankly were made by an author or set of authors. In most cases they’re quite good because these authors are actually good in what they do. But in a lot of cases we may find that one of those prerequisites actually is not necessary, and not proven to be useful in achieving true learning or understanding of the current concept that you’re trying to learn. This is interesting information that can be brought back to the publisher as they do revisions, as they actually begin to look at the content as a whole.

One commenter on the post, Tom Ewens, found the idea interesting. It could, potentially, he wrote, give us new insights into how languages are learned, much in the same way as corpora have given us new insights into how language is used. Did Knewton have any plans to disseminate the information publicly, he asked. His question remains unanswered.

At the time, Knewton had just raised $51 million (bringing their total venture capital funding to over $105 million). Now, 16 months later, Knewton have launched their new product, which they are calling Knewton Content Insights. They describe it as ‘the world’s first and only web-based engine to automatically extract statistics comparing the relative quality of content items — enabling us to infer more information about student proficiency and content performance than ever before possible.’

The software analyses particular exercises within the learning content (and particular items within them). It measures the relative difficulty of individual items by, for example, analysing how often a question is answered incorrectly and how many tries it takes each student to answer correctly. It also looks at what they call ‘exhaustion’ – how much content students are using in a particular area – and whether they run out of content. The software can correlate difficulty with exhaustion. Lastly, it analyses what they call ‘assessment quality’ – how well individual questions assess a student’s understanding of a topic.
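
Knewton have not published the details, but the basic statistics described here are not mysterious. Here is a minimal sketch of the difficulty and attempts measures, using invented field names and data rather than anything Knewton-specific:

```python
# Sketch: item difficulty and average attempts from a log of student
# responses. Field names and numbers are invented for illustration.
from collections import defaultdict
from statistics import mean

attempt_log = [
    # (student_id, item_id, attempts_made, eventually_correct)
    ("s1", "q1", 1, True), ("s2", "q1", 3, True), ("s3", "q1", 4, False),
    ("s1", "q2", 1, True), ("s2", "q2", 1, True), ("s3", "q2", 2, True),
]

by_item = defaultdict(list)
for _, item, attempts, correct in attempt_log:
    by_item[item].append((attempts, correct))

for item, records in sorted(by_item.items()):
    difficulty = 1 - mean(correct for _, correct in records)  # share never correct
    avg_attempts = mean(attempts for attempts, _ in records)
    print(f"{item}: difficulty={difficulty:.2f}, average attempts={avg_attempts:.1f}")
```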

Knewton’s approach is premised on the idea that learning (in this case language learning) can be broken down into knowledge graphs, in which the information that needs to be learned can be arranged and presented hierarchically. The ‘granular’ concepts are then ‘delivered’ to the learner, and Knewton’s software can optimise the delivery. The first problem, as I explored in a previous post, is that language is a messy, complex system: it doesn’t lend itself terribly well to granularisation. The second problem is that language learning does not proceed in a linear, hierarchical way: it is also messy and complex. The third is that ‘language learning content’ cannot simply be delivered: a process of mediation is unavoidable. Are the people at Knewton unaware of the extensive literature devoted to the differences between synthetic and analytic syllabuses, or between product-oriented and process-oriented approaches? It would seem so.

Knewton’s ‘Content Insights’ can only, at best, provide some sort of insight into the ‘language knowledge’ part of any learning content. It can say nothing about the work that learners do to practise language skills, since these are not susceptible to granularisation: you simply can’t take a piece of material that focuses on reading or listening and analyse its ‘content efficacy at the concept level’. Because of this, I predicted (in the post about Knowledge Graphs) that the likely focus of Knewton’s analytics would be discrete item, sentence-level grammar (typically tenses). It turns out that I was right.

Knewton illustrate their new product with screen shots such as those below.

[Screenshot: Content Insight – Assessment]

[Screenshot: Content Insight – Exhaustion]

They give a specific example of the sort of questions their software can answer. It is: do students generally find the present simple tense easier to understand than the present perfect tense? Doh!

Knewton Content Insights may well optimise the presentation of this kind of grammar, but optimising its presentation and practice is highly unlikely to have any impact on the rate of language acquisition. Students are typically required to study the present perfect at every level from ‘elementary’ upwards. They have to do this not because the presentation in, say, Headway is unoptimised, but because optimised presentation is not what they lack. What they need is to spend a significantly greater proportion of their time on ‘language use’ and less on ‘language knowledge’. This is not just my personal view: it has been extensively researched, and I am unaware of any dissenting voices.

The number-crunching in Knewton Content Insights is unlikely, therefore, to lead to any actionable insights. It is, however, very likely to lead (as writer colleagues at Pearson and other publishers are finding out) to an obsession with measuring the ‘efficacy’ of material which, quite simply, cannot meaningfully be measured in this way. It is likely to distract from much more pressing issues, notably the question of how we can move further and faster away from peddling sentence-level, discrete-item grammar.

In the long run, it is reasonable to predict that the attempt to optimise the delivery of language knowledge will come to be seen as an attempt to tackle the wrong question. It will make no significant difference to language learners and language learning. In the short term, how much time and money will be wasted?

[i] ‘Efficacy’ is the buzzword around which Pearson has built its materials creation strategy, a strategy which was launched around the same time as this interview. Pearson is a major investor in Knewton.

There are a number of reasons why we sometimes need to describe a person’s language competence using a single number. Most of these are connected to the need for a shorthand to differentiate people, in summative testing or in job selection, for example. Numerical (or grade) allocation of this kind is so common (and especially in times when accountability is greatly valued) that it is easy to believe that this number is an objective description of a concrete entity, rather than a shorthand description of an abstract concept. In the process, the abstract concept (language competence) becomes reified and there is a tendency to stop thinking about what it actually is.

Language is messy. It’s a complex, adaptive system of communication which has a fundamentally social function. As Diane Larsen-Freeman and others have argued, ‘patterns of use strongly affect how language is acquired, is used, and changes. These processes are not independent of one another but are facets of the same complex adaptive system. […] The system consists of multiple agents (the speakers in the speech community) interacting with one another [and] the structures of language emerge from interrelated patterns of experience, social interaction, and cognitive mechanisms.’

As such, competence in language use is difficult to measure. There are ways of capturing some of it: think of the pages and pages of competency statements in the Common European Framework. But there has always been something deeply unsatisfactory about documents of this kind. How, for example, are we supposed to differentiate, exactly and objectively, between, say, ‘can participate fully in an interview’ (C1) and ‘can carry out an effective, fluent interview’ (B2)? The short answer is that we can’t. There are too many of these descriptors anyway and, even if we did attempt to use such a detailed tool to describe language competence, we would still be left with a very incomplete picture. There is at least one whole book devoted to attempts to test the untestable in language education (edited by Amos Paran and Lies Sercu, Multilingual Matters, 2010).

So, here is another reason why we are tempted to use shorthand numerical descriptors (such as A1, A2, B1, etc.) to describe something which is very complex and abstract (‘overall language competence’) and to reify this abstraction in the process. From there, it is a very short step to making things even more numerical, more scientific-sounding. Number-creep in recent years has brought us the Pearson Global Scale of English which can place you at a precise point on a scale from 10 to 90. Not to be outdone, Cambridge English Language Assessment now has a scale that runs from 80 points to 230, although Cambridge does, at least, allocate individual scores for four language skills.

As the title of this post suggests (in its reference to Stephen Jay Gould’s The Mismeasure of Man), I am suggesting that there are parallels between attempts to measure language competence and the sad history of attempts to measure ‘general intelligence’. Both are guilty of the twin fallacies of reification and ranking – the ordering of complex information as a gradual ascending scale. These conceptual fallacies then lead us, through the way that they push us to think about language, into making further conceptual errors about language learning. We start to confuse language testing with the ways that language learning can be structured.

We begin to granularise language. We move inexorably away from difficult-to-measure hazy notions of language skills towards what, on the surface at least, seem more readily measurable entities: words and structures. We allocate to them numerical values on our testing scales, so that an individual word can be deemed to be higher or lower on the scale than another word. And then we have a syllabus, a synthetic syllabus, that lends itself to digital delivery and adaptive manipulation. We find ourselves in a situation where materials writers for Pearson, writing for a particular ‘level’, are only allowed to use vocabulary items and grammatical structures that correspond to that ‘level’. We find ourselves, in short, in a situation where the acquisition of a complex and messy system is described as a linear, additive process. Here’s an example from the Pearson website: ‘If you score 29 on the scale, you should be able to identify and order common food and drink from a menu; at 62, you should be able to write a structured review of a film, book or play. And because the GSE is so granular in nature, you can conquer smaller steps more often; and you are more likely to stay motivated as you work towards your goal.’ It’s a nonsense, a nonsense that is dictated by the needs of testing and adaptive software, but the sciency-sounding numbers help to hide the conceptual fallacies that lie beneath.
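
To see just how linear and additive this model is, here is a toy rendering of the kind of lookup such a scale implies. The two middle descriptors are quoted from the Pearson example above; the cut-off points and the outer bands are my own invention:

```python
# Toy rendering of a linear, additive scale: one number indexes into an
# ordered list of can-do statements. Middle descriptors from the Pearson
# example above; the other bands are invented for illustration.
GSE_BANDS = [
    (10, "can recognise a few everyday words"),  # invented
    (29, "can identify and order common food and drink from a menu"),
    (62, "can write a structured review of a film, book or play"),
    (90, "full mastery, on this model"),  # invented
]

def describe(score: int) -> str:
    # The learner 'is' whatever the highest band at or below their score says.
    for threshold, descriptor in reversed(GSE_BANDS):
        if score >= threshold:
            return descriptor
    return "below the scale"

print(describe(45))  # -> can identify and order common food and drink from a menu
```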

Perhaps, though, this doesn’t matter too much for most language learners. In the early stages of language learning (where most language learners are to be found), there are countless millions of people who don’t seem to mind the granularised programmes of Duolingo or Rosetta Stone, or the Grammar McNuggets of coursebooks. In these early stages, anything seems to be better than nothing, and the testing is relatively low-stakes. But as a learner’s interlanguage becomes more complex, and as the language she needs to acquire becomes more complex, attempts to granularise it and to present it in a linearly additive way become more problematic. It is for this reason, I suspect, that the appeal of granularised syllabuses declines so rapidly the more progress a learner makes. It comes as no surprise that, the further up the scale you get, the more that both teachers and learners want to get away from pre-determined syllabuses in coursebooks and software.

Adaptive language learning software is continuing to gain traction in the early stages of learning, in the initial acquisition of basic vocabulary and structures and in coming to grips with a new phonological system. It will almost certainly gain even more. But the challenge for the developers and publishers will be to find ways of making adaptive learning work for more advanced learners. Can it be done? Or will the mismeasure of language make it impossible?

It’s a good time to be in Turkey if you have digital ELT products to sell. Not so good if you happen to be an English language learner. This post takes a look at both sides of the Turkish lira.

OUP, probably the most significant of the big ELT publishers in Turkey, recorded ‘an outstanding performance’ in the country in the last financial year, making it their 5th largest ELT market. OUP’s annual report for 2013–2014 describes the particularly strong demand for digital products and services, a demand which is now influencing OUP’s global strategy for digital resources. When asked about the future of ELT, Peter Marshall, Managing Director of OUP’s ELT Division, suggested that Turkey was a country that could point us in the direction of an answer to the question. Marshall and OUP will be hoping that OUP’s recently launched Digital Learning Platform (DLP) ‘for the global distribution of adult and secondary ELT materials’ will be an important part of that future, in Turkey and elsewhere. I can’t think of any good reason for doubting their belief.

OUP aren’t the only ones eagerly checking the pound-lira exchange rates. For the last year, CUP’s annual report also records ‘significant sales successes’ in Turkey. For CUP, too, it was a year in which digital development has been ‘a top priority’. CUP’s Turkish success story has been primarily driven by a deal with Anadolu University (more about this below) to provide ‘a print and online solution to train 1.7 million students’ using their Touchstone course. This was the biggest single sale in CUP’s history and has inspired publishers, both within CUP and outside, to attempt to emulate the deal. The new blended products will, of course, be adaptive.

Just how big is the Turkish digital ELT pie? According to a 2014 report from Ambient Insight, revenues from digital ELT products reached $32.0 million in 2013. They are forecast to more than double to $72.6 million in 2018. That is a compound annual growth rate of 17.8%, a rate which is practically unbeatable in any large economy, and Turkey is the 17th largest economy in the world, according to World Bank statistics.
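
For anyone checking the arithmetic, the 17.8% figure is a compound annual growth rate over the five years from 2013 to 2018:

```python
# Compound annual growth rate from $32.0M (2013) to $72.6M (2018).
cagr = (72.6 / 32.0) ** (1 / 5) - 1
print(f"{cagr:.1%}")  # -> 17.8%
```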

So, what makes Turkey special?

  • Turkey has a large and young population that is growing by about 1.4% each year, which is equivalent to approximately 1 million people. According to the Turkish Ministry of Education, there are currently about 5.5 million students enrolled in upper-secondary schools. Significant growth in numbers is certain.
  • Turkey is currently in the middle of a government-sponsored $990 million project to increase the level of English proficiency in schools. The government’s target is to position the country as one of the top ten global economies by 2023, the centenary of the Turkish Republic, and it believes that this position will be more reachable if it has a population with the requisite foreign language (i.e. English) skills. As part of this project, the government has begun to introduce English in the 1st grade (previously it was in the 4th grade).
  • The level of English in Turkey is famously low and has been described as a ‘national weakness’. In October/November 2011, the Turkish research institute SETA and the Turkish Ministry for Youth and Sports conducted a large survey across Turkey of 10,174 young citizens, aged 15 to 29. The result was sobering: 59 per cent of the young people said they ‘did not know any foreign language’. A recent British Council report (2013) found the competence level in English of most (90+%) students across Turkey was evidenced as rudimentary – even after 1000+ hours (estimated at end of Grade 12) of English classes. This is, of course, good news for vendors of English language learning / teaching materials.
  • Turkey has launched one of the world’s largest educational technology projects: the FATIH Project (The Movement to Enhance Opportunities and Improve Technology). One of its objectives is to provide tablets for every student between grades 5 and 12. At the same time, according to the Ambient report, the intention is to ‘replace all print-based textbooks with digital content (both eTextbooks and online courses).’
  • Purchasing power in Turkey is concentrated in a relatively small number of hands, with the government as the most important player. Institutions are often very large. Anadolu University, for example, is the second largest university in the world, with over 2 million students, most of whom are studying in virtual classrooms. There are two important consequences of this. Firstly, it makes scalable, big-data-driven LMS-delivered courses with adaptive software a more attractive proposition to purchasers. Secondly, it facilitates the B2B sales model that is now preferred by vendors (including the big ELT publishers).
  • Turkey also has a ‘burgeoning private education sector’, according to Peter Marshall, and a thriving English language school industry. According to Ambient, ‘commercial English language learning in Turkey is a $400 million industry with over 600 private schools across the country’. Many of these are grouped into large chains (see the bullet point above).
  • Turkey is also ‘in the vanguard of the adoption of educational technology in ELT’, according to Peter Marshall. With 36 million internet users (the 5th largest internet population in Europe) and the 3rd highest online engagement in Europe as measured by time spent online (reported by Sina Afra), the country’s enthusiasm for educational technology is not surprising. Ambient reports that ‘the growth rate for mobile English educational apps is 27.3%’. This enthusiasm is reflected in Turkey’s thriving ELT conference scene. The most popular conference themes and presentations are concerned with edtech. A keynote speech by Esat Uğurlu at the ISTEK schools 3rd international ELT conference at Yeditepe in April 2013 gives a flavour of the current interests. The talk was entitled ‘E-Learning: There is nothing to be afraid of and plenty to discover’.

All of the above makes Turkey a good place to be if you’re selling digital ELT products, even though the competition is pretty fierce. If your product isn’t adaptive, personalised and gamified, you may as well not bother.

What impact will all this have on Turkey’s English language learners? A report co-produced by TEPAV (the Economic Policy Research Foundation of Turkey) and the British Council in November 2013 suggests some of the answers, at least in the school population. The report is entitled ‘Turkey National Needs Assessment of State School English Language Teaching’ and its Executive Summary is brutally frank in its analysis of the low achievements in English language learning in the country. It states:

The teaching of English as a subject and not a language of communication was observed in all schools visited. This grammar-based approach was identified as the first of five main factors that, in the opinion of this report, lead to the failure of Turkish students to speak/ understand English on graduation from High School, despite having received an estimated 1000+ hours of classroom instruction.

In all classes observed, students fail to learn how to communicate and function independently in English. Instead, the present teacher-centric, classroom practice focuses on students learning how to answer teachers’ questions (where there is only one, textbook-type ‘right’ answer), how to complete written exercises in a textbook, and how to pass a grammar-based test. Thus grammar-based exams/grammar tests (with right/wrong answers) drive the teaching and learning process from Grade 4 onwards. This type of classroom practice dominates all English lessons and is presented as the second causal factor with respect to the failure of Turkish students to speak/understand English.

The problem, in other words, is the curriculum and the teaching. In its recommendations, the report makes this crystal clear. Priority needs to be given to developing a revised curriculum and ‘a comprehensive and sustainable system of in-service teacher training for English teachers’. Curriculum renewal and programmes of teacher training / development are the necessary prerequisites for the successful implementation of a programme of educational digitalization. Unfortunately, research has shown again and again that these take a long time and outcomes are difficult to predict in advance.

By going for digitalization first, Turkey is taking a huge risk. What LMSs, adaptive software and most apps do best is the teaching of language knowledge (grammar and vocabulary), not the provision of opportunities for communicative practice (for which there is currently no shortage of opportunity … it is just that these opportunities are not being taken). There is a real danger, therefore, that the technology will push learning priorities in precisely the opposite direction to that which is needed. Without significant investments in curriculum reform and teacher training, how likely is it that the transmission-oriented culture of English language teaching and learning will change?

Even if the money for curriculum reform and teacher training were found, it is also highly unlikely that effective country-wide approaches to blended learning for English would develop before the current generation of tablets and their accompanying content become obsolete.

Sadly, the probability is, once more, that educational technology will be a problem-changer, even a problem-magnifier, rather than a problem-solver. I’d love to be wrong.

In an interesting recent post on eltjam, Cleve Miller wrote the following:

Knewton asks its publishing partners to organize their courses into a “knowledge graph” where content is mapped to an analyzable form that consists of the smallest meaningful chunks (called “concepts”), organized as prerequisites to specific learning goals. You can see here the influence of general learning theory and not SLA/ELT, but let’s not concern ourselves with nomenclature and just call their “knowledge graph” an “acquisition graph”, and call “concepts” anything else at all, say…“items”. Basically our acquisition graph could be something like the CEFR, and the items are the specifications in a completed English Profile project that detail the grammar, lexis, and functions necessary for each of the can-do’s in the CEFR. Now, even though this is a somewhat plausible scenario, it opens Knewton up to several objections, foremost the degree of granularity and linearity.
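
To make the structure Cleve describes concrete, here is a minimal sketch of a prerequisite graph and the ‘what next?’ query it supports. The items are invented examples; the point is the shape of the model, not its content:

```python
# Sketch: a 'knowledge graph' as described above, mapping each item to the
# items that must be mastered first. Items are invented examples.
PREREQUISITES = {
    "present perfect": ["past participles", "present simple"],
    "past participles": ["regular past forms"],
    "present simple": [],
    "regular past forms": [],
}

def next_items(mastered: set[str]) -> list[str]:
    """Items not yet mastered whose prerequisites are all mastered."""
    return [item for item, prereqs in PREREQUISITES.items()
            if item not in mastered and all(p in mastered for p in prereqs)]

print(next_items({"present simple", "regular past forms"}))
# -> ['past participles']: 'present perfect' is still blocked
```

The granularity and linearity objections are visible even at this scale: everything hangs on whether prerequisite links of this kind actually exist in language acquisition.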

In this post, Cleve acknowledges that, for the time being, adaptive learning may be best suited to ‘certain self-study material, some online homework, and exam prep – anywhere the language is fairly defined and the content more amenable to algorithmic micro-adaptation.’ I would agree, but its usefulness will depend on getting the knowledge graph right.

Which knowledge graph, then? Cleve suggests that it could be something like the CEFR, but it couldn’t be the CEFR itself because it is, quite simply, too vague. This was recognized by Pearson when they developed their Global Scale of English (GSE), an instrument which, they claim, can provide ‘for more granular and detailed measurements of learners’ levels than is possible with the CEFR itself, with its limited number of wide levels’. This Global Scale of English will serve as ‘the metric underlying all Pearson English learning, teaching and assessment products’, including, therefore, the adaptive products under development.

[Image: the Pearson Global Scale of English]

‘As part of the GSE project, Pearson is creating an associated set of Pearson Syllabuses […]. These will help to link instructional content with assessments and to create a reference for authoring, instruction and testing.’ These syllabuses will contain grammar and vocabulary inventories which ‘will be expressed in the form of can-do statements with suggested sample exponents rather than as the prescriptive lists found in more traditional syllabuses.’ I haven’t been able to get my hands on one of these syllabuses yet: perhaps someone could help me out?

Informal feedback from writer colleagues working for Pearson suggests that, in practice, these inventories are much more prescriptive than Pearson claim, but this is hardly surprising, as the value of an inventory is precisely its more-or-less finite nature.

Until I see more, I will have to limit my observations to two documents in the public domain which are the closest we have to what might become knowledge graphs. The first of these is the British Council / EAQUALS Core Inventory for General English. Scott Thornbury, back in 2011, very clearly set out the problems with this document and, to my knowledge, the reservations he expressed have not yet been adequately answered. To be fair, this inventory was never meant to be used as a knowledge graph: ‘It is a description, not a prescription’, wrote the author (North, 2010). But presumably a knowledge graph would look much like this, and it would have the same problems. The second place where we can find what a knowledge graph might look like is English Profile, which Cleve mentions. Would English Profile work any better? Possibly not. Michael Swan’s critique of English Profile (ELTJ 68/1 January 2014, pp. 89-96) asks some big questions that have yet, to my knowledge, to be answered.

Knewton’s Sally Searby has said that, for ELT, knowledge graphing needs to be ‘much more nuanced’. Her comment suggests a belief that knowledge graphing can be much more nuanced, but this is open to debate. Michael Swan quotes Prodeau, Lopez and Véronique (2012): ‘the sum of pragmatic and linguistic skills needed to achieve communicative success at each level makes it difficult, if not impossible, to find lexical and grammatical means that would characterize only one level’. He observes that ‘the problem may, in fact, simply not be soluble’.

So, what kind of knowledge graph are we likely to see? My best bet is that it would look a bit like a Headway syllabus.