Archive for July, 2014

(This post was originally published at eltjam.)

We now have young learners and very young learners, learner differences and learner profiles, learning styles, learner training, learner independence and autonomy, learning technologies, life-long learning, learning management systems, virtual learning environments, learning outcomes, learning analytics and adaptive learning. Much, but not perhaps all, of this is to the good, but it’s easy to forget that it wasn’t always like this.

The rise in the use of the terms ‘learner’ and ‘learning’ can be seen in policy documents, educational research and everyday speech, and it really got going in the mid 1980s[1]. Duncan Hunter and Richard Smith[2] have identified a similar trend in ELT after analysing a corpus of articles from the English Language Teaching Journal. They found that ‘learner’ had risen to near the top of the key-word pile in the mid 1980s, but had been practically invisible 15 years previously. Accompanying this rise has been a relative decline of words like ‘teacher’, ‘teaching’, ‘pupil’ and, even, ‘education’. Gert Biesta has described this shift in discourse as a ‘new language of learning’ and the ‘learnification of education’.

It’s not hard to see the positive side of this change in focus towards the ‘learner’ and away from the syllabus, the teachers and the institution in which the ‘learning’ takes place. We can, perhaps, be proud of our preference for learner-centred approaches over teacher-centred ones. We can see something liberating (for our students) in the change of language that we use. But, as Bingham and Biesta[3] have pointed out, this gain is also a loss.

The language of ‘learners’ and ‘learning’ focusses our attention on process – how something is learnt. This was a much-needed corrective after an uninterrupted history of focussing on end-products, but the corollary is that it has become very easy to forget not only about the content of language learning, but also its purposes and the social relationships through which it takes place.

There has been some recent debate about the content of language learning, most notably in the work of the English as a Lingua Franca scholars. But there has been much more attention paid to the measurement of the learners’ acquisition of that content (through the use of tools like the Pearson Global Scale of English). There is a growing focus on ‘granularized’ content – lists of words and structures, and to a lesser extent language skills, that can be easily measured. It looks as though other things that we might want our students to be learning – critical thinking skills and intercultural competence, for example – are being sidelined.

More significant is the neglect of the purposes of language learning. The discourse of ELT is massively dominated by the paying sector of private language schools and semi-privatised universities. In these contexts, questions of purpose are not, perhaps, terribly important, as the whole point of the enterprise can be assumed to be primarily instrumental. But the vast majority of English language learners around the world are studying in state-funded institutions as part of a broader educational programme, which is as much social and political as it is to do with ‘learning’. The ultimate point of English lessons in these contexts is usually stated in much broader terms. The Council of Europe’s Common European Framework of Reference, for example, states that its ultimate purpose is to facilitate better intercultural understanding. It is very easy to forget this when we are caught up in the business of levels and scales and measuring learning outcomes.

Lastly, a focus on ‘learners’ and ‘learning’ distracts attention away from the social roles that are enacted in classrooms. Twenty-five years ago, Henry Widdowson[4] pointed out that there are two quite different kinds of role. The first of these is concerned with occupation (student / pupil vs teacher / master / mistress) and is identifying. The second (the learning role) is actually incidental and cannot be guaranteed. He reminds us that the success of the language learning / teaching enterprise depends on ‘recognizing and resolving the difficulties inherent in the dual functioning of roles in the classroom encounter’[5]. Again, this may not matter too much in the private sector, but, elsewhere, any attempt to tackle the learning / teaching conundrum through an exclusive focus on learning processes is unlikely to succeed.

The ‘learnification’ of education has been accompanied by two related developments: the casting of language learners as consumers of a ‘learning experience’ and the rise of digital technologies in education. For reasons of space, I will limit myself to commenting on the second of these[6]. Research by Geir Haugsbakk and Yngve Nordkvelle[7] has documented a clear and critical link between the new ‘language of learning’ and the rhetoric of edtech advocacy. These researchers suggest that these discourses are mutually reinforcing, that both contribute to the casting of the ‘learner’ as a consumer, and that the coupling of learning and digital tools is often purely rhetorical.

One of the net results of ‘learnification’ is the transformation of education into a technical or technological problem to be solved. It suggests, wrongly, that approaches to education can be derived purely from theories of learning. By adopting an ahistorical and apolitical standpoint, it hides ‘the complex nexus of political and economic power and resources that lies behind a considerable amount of curriculum organization and selection’[8]. The very real danger, as Biesta[9] has observed, is that ‘if we fail to engage with the question of good education head-on – there is a real risk that data, statistics and league tables will do the decision-making for us’.

[1] Biesta, G.J.J. (2004) ‘Against learning. Reclaiming a language for education in an age of learning’ Nordisk Pedagogik 24 (1): 70-82; and Biesta, G.J.J. (2010) Good Education in an Age of Measurement (Boulder, CO: Paradigm Publishers)

[2] Hunter, D. & Smith, R. (2012) ‘Unpackaging the past: “CLT” through ELTJ keywords’ ELT Journal 66 (4): 430-439

[3] Bingham, C. & Biesta, G.J.J. (2010) Jacques Rancière: Education, Truth, Emancipation (London: Continuum) p. 134

[4] Widdowson, H.G. (1990) Aspects of Language Teaching (Oxford: Oxford University Press) pp. 182 ff.

[5] Widdowson, H.G. (1987) ‘The roles of teacher and learner’ ELT Journal 41 (2)

[6] A compelling account of the way that students have become ‘consumers’ can be found in Williams, J. (2013) Consuming Higher Education (London: Bloomsbury)

[7] Haugsbakk, G. & Nordkvelle, Y. (2007) ‘The Rhetoric of ICT and the New Language of Learning: a critical analysis of the use of ICT in the curricular field’ European Educational Research Journal 6 (1): 1-12

[8] Apple, M.W. (2004) Ideology and Curriculum, 3rd edition (New York: Routledge) p. 28

[9] Biesta, G.J.J. (2010) Good Education in an Age of Measurement (Boulder, CO: Paradigm Publishers) p. 27



(This post won’t make a lot of sense unless you read the previous two – Researching research: part 1 and part 2!)

The work of Jayaprakash et al was significantly informed and inspired by the work done at Purdue University. In the words of these authors, they even ‘relied on [the] work at Purdue with Course Signals’ for parts of the design of their research. They didn’t know when they were doing their research that the Purdue studies were fundamentally flawed. This was, however, common knowledge (since September 2013) before their article (‘Early Alert of Academically At-Risk Students’) was published. This raises the interesting question of why the authors (and the journal in which they published) didn’t pull the article when they could still have done so. I can’t answer that question, but I can suggest some possible reasons. First, though, a little background on the Purdue research.

The Purdue research is important, more than important, because it was the first significant piece of research to demonstrate the efficacy of academic analytics. Except that, in all probability, it doesn’t! Michael Caulfield, director of blended and networked learning at Washington State University at Vancouver, and Alfred Essa, McGraw-Hill Education’s vice-president of research and development and analytics, took a closer look at the data. What they found was that the results were probably due to selection bias rather than a real effect. In other words, as summarized by Carl Straumsheim in Inside Higher Ed in November of last year, there was ‘no causal connection between students who use [Course Signals] and their tendency to stick with their studies’. The Times Higher Education and the e-Literate blog contacted Purdue, but, to date, there has been no serious response to the criticism. The research is still on Purdue’s website.

The Purdue research article, ‘Course Signals at Purdue: Using Learning Analytics to Increase Student Success’ by Kimberley Arnold and Matt Pistilli, was first published as part of the proceedings of the Learning Analytics and Knowledge (LAK) conference in May 2012. The LAK conference is organised by the Society for Learning Analytics Research (SoLAR), in partnership with Purdue. SoLAR, you may remember, is the organisation which published the new journal in which Jayaprakash et al’s article appeared. Pistilli happens to be an associate editor of the journal. Jayaprakash et al also presented at the LAK ’12 conference. Small world.

The Purdue research was further publicized by Pistilli and Arnold in the Educause Review. Their research had been funded by the Gates Foundation (a grant of $1.2 million in November 2011). Educause, in its turn, is also funded by the Gates Foundation (a grant of $9 million in November 2011). The research of Jayaprakash et al was also funded by Educause, which stipulated that ‘effective techniques to improve student retention be investigated and demonstrated’ (my emphasis). Given the terms of their grant, we can perhaps understand why they felt the need to claim they had demonstrated something.

What exactly is Educause, which plays such an important role in all of this? According to their own website, it is a non-profit association whose mission is to advance higher education through the use of information technology. However, it is rather more than that. It is also a lobbying and marketing umbrella for edtech. The following screenshot from their website makes this abundantly clear.

If you’ll bear with me, I’d like to describe one more connection between the various players I’ve been talking about. Purdue’s Course Signals is marketed by a company called Ellucian. Ellucian’s client list includes both Educause and the Gates Foundation. A former Senior Vice President of Ellucian, Anne K Keehn, is currently ‘Senior Fellow - Technology and Innovation, Education, Post-Secondary Success’ at the Gates Foundation – presumably the sort of person to whom you’d have to turn if you wanted funding from the Gates Foundation. Small world.

Personal, academic and commercial networks are intricately intertwined in the high-stakes world of edtech. In such a world (not so very different from the pharmaceutical industry), independent research is practically impossible. The pressure to publish positive research results must be extreme. The temptation to draw conclusions of the kind that your paymasters are looking for must be high. The edtech juggernaut must keep rolling on.

While the big money will continue to go, for the time being, into further attempts to prove that big data is the future of education, there are still some people who are interested in alternatives. Coincidentally (?), a recent survey has been carried out at Purdue which looks into what students think about their college experience and what is meaningful to them. Guess what? It doesn’t have much to do with technology.

(This post won’t make a lot of sense unless you read the previous one – Researching research: part 1!)

I suggested in the previous post that the research of Jayaprakash et al had confirmed something that we already knew concerning the reasons why some students drop out of college. However, predictive analytics are only part of the story. As the authors of this paper point out, they ‘do not influence course completion and retention rates without being combined with effective intervention strategies aimed at helping at-risk students succeed’. The point of predictive analytics is to facilitate the deployment of effective and appropriate intervention strategies, and to do this sooner than would be possible without the use of the analytics. So, it is to these intervention strategies that I now turn.

Interventions to help at-risk students included the following:

  • Sending students messages to inform them that they are at risk of not completing the course (‘awareness messaging’)
  • Making students more aware of the available academic support services (which could, for example, direct them to a variety of campus-based or online resources)
  • Promoting peer-to-peer engagement (e.g. with an online ‘student lounge’ discussion forum)
  • Providing access to self-assessment tools

The design of these interventions was based on the work that had been done at Purdue, which was, in turn, inspired by the work of Vince Tinto, one of the world’s leading experts on student retention issues.

The work done at Purdue had shown that simple notifications to students that they were at risk could have a significant, and positive, effect on student behaviour. Jayaprakash and the research team took the students who had been identified as at-risk by the analytics and divided them into three groups: the first were issued with ‘awareness messages’, the second were offered a combination of the other three interventions in the bullet point list above, and the third, a control group, had no interventions at all. The results showed that the students who were in treatment groups (of either kind of intervention) showed a statistically significant improvement compared to those who received no treatment at all. However, there seemed to be no difference in the effectiveness of the different kinds of intervention.

So far, so good, but, once again, I was left thinking that I hadn’t really learned very much from all this. But then, in the last five pages, the article suddenly got very interesting. Remember that the primary purpose of this whole research project was to find ways of helping not just at-risk students, but specifically socioeconomically disadvantaged at-risk students (such as those receiving Pell Grants). Accordingly, the researchers then focussed on this group. What did they find?

Once again, interventions proved more effective at raising student scores than no intervention at all. However, the averages of final scores are inevitably affected by drop-out rates (since students who drop out do not have final scores which can be included in the averages). At Purdue, the effect of interventions on drop-out rates had not been found to be significant. Remember that Purdue has a relatively well-off student demographic. However, in this research, which focussed on colleges with a much higher proportion of students on Pell Grants, the picture was very different. Of the Pell Grant students who were identified as at-risk and who were given some kind of treatment, 25.6% withdrew from the course. Of the Pell Grant students who were identified as at-risk but who were not ‘treated’ in any way (i.e. those in the control group), only 14.1% withdrew from the course. I recommend that you read those numbers again!
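The paper as summarized here does not give the sizes of the treated and control groups, but at any plausible cohort size a gap of 25.6% versus 14.1% is very unlikely to be chance. A rough sketch, using entirely hypothetical group sizes of 250 students each (counts chosen to match the reported rates):

```python
from math import sqrt

def two_prop_z(x1, n1, x2, n2):
    """Pooled two-proportion z-statistic for comparing withdrawal rates."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical group sizes of 250 each; the withdrawal counts match the
# reported rates: 64/250 = 25.6% (treated), 35/250 = 14.0% (control).
z = two_prop_z(64, 250, 35, 250)
print(round(z, 2))  # comfortably above the 1.96 threshold for p < 0.05
```

On these invented numbers the difference would be significant well beyond the conventional 5% level, which only sharpens the question of what the interventions were actually doing.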

The research programme had resulted in substantially higher drop-out rates for socioeconomically disadvantaged students – the precise opposite of what it had set out to achieve. Jayaprakash et al devote one page of their article to the ethical issues this raises. They suggest that early intervention, resulting in withdrawal, might actually be to the benefit of some students who were going to fail whatever happened. It is better to get a ‘W’ (withdrawal) grade on your transcript than an ‘F’ (fail), and you may avoid wasting your money at the same time. This may be true, but it would be equally true that not allowing at-risk students (who, of course, are disproportionately from socioeconomically disadvantaged backgrounds) into college at all might also be to their ‘benefit’. The question, though, is: who has the right to make these decisions on behalf of other people?

The authors also acknowledge another ethical problem. The predictive analytics which will prompt the interventions are not 100% accurate. 85% accuracy could be considered a pretty good figure. This means that some students who are not at-risk are labelled as at-risk, and others who are at-risk are not identified. Of these two possibilities, I find the first far more worrying. We are talking about the very real possibility of individual students being pushed into making potentially life-changing decisions on the basis of dodgy analytics. How ethical is that? The authors’ conclusion is that the situation forces them ‘to develop the most accurate predictive models possible, as well as to take steps to reduce the likelihood that any intervention would result in the unnecessary withdrawal of a student’.

I find this extraordinary. It is premised on the assumption that predictive models can be made much, much more accurate. The authors seem to be confusing prediction with predeterminism. A predictive model is, by definition, only predictive. There will always be error. How many errors are ethically justifiable? And the desire to reduce the likelihood of unnecessary withdrawals is a long way from the need to eliminate the likelihood of unnecessary withdrawals altogether, which seems to me to be the ethical position. More than anything else in the article, this sentence illustrates that the a priori assumption is that predictive analytics can be a force for good, and that the only real problem is getting the science right. If a number of young lives are screwed up along the way, we can at least say that science is getting better.
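The arithmetic behind the false-labelling worry is easy to sketch. Assuming, purely for illustration, a model with 85% sensitivity and 85% specificity (one reading of ‘85% accuracy’) and a cohort in which 20% of students are genuinely at risk, Bayes’ rule gives the share of flagged students who are flagged wrongly:

```python
def false_flag_rate(sensitivity, specificity, base_rate):
    """Proportion of students flagged as at-risk who are not actually at risk."""
    true_flags = sensitivity * base_rate
    false_flags = (1 - specificity) * (1 - base_rate)
    return false_flags / (true_flags + false_flags)

# Illustrative figures only: an '85% accurate' model, 20% at-risk base rate.
print(round(false_flag_rate(0.85, 0.85, 0.20), 2))  # ~0.41
```

In other words, on these (invented but not implausible) figures, roughly four in ten of the students nudged towards potentially life-changing decisions would not have been at risk at all.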

In the authors’ final conclusion, they describe the results of their research as ‘promising’. They do not elaborate on who it is promising for. They say that relatively simple intervention strategies can positively impact student learning outcomes, but they could equally well have said that relatively simple intervention strategies can negatively impact learning outcomes. They could have said that predictive analytics and intervention programmes are fine for the well-off, but more problematic for the poor. Remembering once more that the point of the study was to look at the situation of socioeconomically disadvantaged at-risk students, it is striking that there is no mention of this group in the researchers’ eight concluding points. The vast bulk of the paper is devoted to technical descriptions of the design and training of the software; the majority of the conclusions are about the validity of that design and training. The ostensibly intended beneficiaries have got lost somewhere along the way.

How and why is it that a piece of research such as this can so positively slant its results? In the third and final part of this mini-series, I will turn my attention to answering that question.

In the 8th post on this blog (‘Theory, Research and Practice’), I referred to the lack of solid research into learning analytics. Whilst adaptive learning enthusiasts might disagree with much, or even most, of what I have written on this subject, here, at least, was an area of agreement. May of this year, however, saw the launch of the inaugural issue of the Journal of Learning Analytics, the first journal ‘dedicated to research into the challenges of collecting, analysing and reporting data with the specific intent to improve learning’. It is a peer-reviewed, open-access journal, published by the Society for Learning Analytics Research (SoLAR), a consortium of academics from 9 universities in the US, Canada, Britain and Australia.

I decided to take a closer look. In this and my next two posts, I will focus on one article from this inaugural issue. It’s called ‘Early Alert of Academically At-Risk Students: An Open Source Analytics Initiative’ and it is co-authored by Sandeep M. Jayaprakash, Erik W. Moody, Eitel J.M. Lauría, James R. Regan, and Joshua D. Baron of Marist College in the US. Bear with me, please – it’s more interesting than it might sound!

The background to this paper is the much-discussed problem of college drop-outs in the US, and the potential of learning analytics to address what is seen as a ‘national challenge’. The most influential work that has been done in this area to date was carried out at Purdue University. Purdue developed an analytical system, called Course Signals, which identified students at risk of course failure and offered a range of interventions (more about these in the next post) which were designed to improve student outcomes. I will have more to say about the work at Purdue in my third post, but, for the time being, it is enough to say that, in the field, it has been considered very successful, and that the authors of the paper I looked at have based their approach on the work done at Purdue.

Jayaprakash et al developed their own analytical system, based on Purdue’s Course Signals, and used it at their own institution, Marist College. Basically, they wanted to know if they could replicate the good results that had been achieved at Purdue. They then took the same analytical system to four different institutions, of very different kinds (public, as opposed to private; community colleges offering 2-year programmes rather than universities) to see if the results could be replicated there, too. They also wanted to find out if the interventions with students who had been signalled as at-risk would be as effective as they had been at Purdue. So far, so good: it is clearly very important to know if one particular piece of research has any significance beyond its immediate local context.

So, what did Jayaprakash et al find out? Basically, they learnt that their software worked as well at Marist as Course Signals had done at Purdue. They collected data on student demographics and aptitude, course grades and course related data, data on students’ interactions with the LMS they were using and performance data captured by the LMS. Oh, yes, and absenteeism. At the other institutions where they trialled their software, the system was 10% less accurate in predicting drop-outs, but the authors of the research still felt that ‘predictive models developed based on data from one institution may be scalable to other institutions’.

But more interesting than the question of whether or not the predictive analytics worked is the question of which specific features of the data were the most powerful predictors. What they discovered was that absenteeism was highly significant. No surprises there. They also learnt that the other most powerful predictors were (1) the students’ cumulative grade point average (GPA), an average of a student’s academic scores over their entire academic career, and (2) the scores recorded by the LMS of the work that students had done during the course which would contribute to their final grade. No surprises there, either. As the authors point out, ‘given that these two attributes are such fundamental aspects of academic success, it is not surprising that the predictive model has fared so well across these different institutions’.

Agreed, it is not surprising at all that students with lower scores and a history of lower scores are more likely to drop out of college than students with higher scores. But, I couldn’t help wondering, do we really need sophisticated learning analytics to tell us this? Wouldn’t any teacher know this already? They would, of course, if they knew their students, but if the teacher:student ratio is in the order of 1:100 (not unheard of in lower-funded courses delivered primarily through an LMS), many teachers (and their students) might benefit from automated alert systems.
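To make the idea of such a model concrete, here is a minimal sketch of the kind of classifier involved: a logistic regression over the two strongest predictors the authors report, cumulative GPA and LMS coursework scores. The data points and fitted weights below are entirely invented for illustration; the paper’s actual model is far more elaborate.

```python
from math import exp

def sigmoid(z):
    return 1 / (1 + exp(-z))

# Invented toy data: (cumulative GPA on a 0-4 scale, LMS coursework score
# out of 100, 1 = dropped out). Real training data would be far richer.
data = [
    (3.8, 92, 0), (3.5, 88, 0), (3.1, 75, 0), (2.9, 70, 0),
    (2.6, 60, 0), (2.4, 55, 1), (2.1, 48, 1), (1.8, 40, 1),
]

# Fit a logistic regression by plain stochastic gradient descent on log-loss.
w0 = w_gpa = w_lms = 0.0
lr = 0.05
for _ in range(2000):
    for gpa, lms, dropped in data:
        p = sigmoid(w0 + w_gpa * gpa + w_lms * lms / 100)
        err = p - dropped
        w0 -= lr * err
        w_gpa -= lr * err * gpa
        w_lms -= lr * err * lms / 100

def risk(gpa, lms):
    """Predicted drop-out probability for a (hypothetical) student."""
    return sigmoid(w0 + w_gpa * gpa + w_lms * lms / 100)

# Lower GPA and coursework scores should yield a higher predicted risk.
print(round(risk(1.9, 45), 2), round(risk(3.7, 90), 2))
```

The point of the sketch is simply that once the strong predictors are this obvious, the model mostly formalizes what a teacher who knew the students would already see; its practical value lies in doing so automatically at scale.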

But back to the differences between the results at Purdue and Marist and at the other institutions. Why were the predictive analytics less successful at the latter? The answer lies in the nature of the institutions. Essentially, it boils down to this: in institutions with low drop-out rates, the analytics are more reliable than in institutions with high drop-out rates, because the more at-risk students there are, the harder it is to predict the particular individuals who will actually drop out. Jayaprakash et al provide the key information in a useful table. Students at Marist College are relatively well-off (only 16% receive Pell Grants, which are awarded to students in financial need), and only a small number (12%) are from ‘ethnic minorities’. The rate of course non-completion in normal time is relatively low (at 20%). In contrast, at one of the other institutions, the College of the Redwoods in California, 44% of the students receive Pell Grants and 22% of them are from ‘ethnic minorities’. The non-completion rate is a staggering 96%. At Savannah State University, 78% of the students receive Pell Grants, and the non-completion rate is 70%. The table also shows the strong correlation between student poverty and high student:faculty ratios.
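The base-rate point can be made concrete with a little Bayes’ rule arithmetic. Assume, for illustration only, a model with 85% sensitivity and 85% specificity, and compare an institution with a 20% non-completion rate to one with a 96% rate. In the first case the model’s flags carry real information; in the second, they barely improve on the trivial strategy of flagging everyone:

```python
def flag_precision(sens, spec, base_rate):
    """P(student actually fails to complete | model flags them), via Bayes' rule."""
    tp = sens * base_rate          # true flags
    fp = (1 - spec) * (1 - base_rate)  # false flags
    return tp / (tp + fp)

# Illustrative figures only; the institutions' real models and rates differ.
for base in (0.20, 0.96):  # Marist-like vs Redwoods-like non-completion rates
    p = flag_precision(0.85, 0.85, base)
    print(f"base rate {base:.0%}: flag precision {p:.2f}, lift {p / base:.2f}x")
```

At the 96% base rate the ‘lift’ over simply assuming every student will drop out is close to 1, which is one way of seeing why prediction adds so little at high-drop-out institutions.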

In other words, the poorer you are, the less likely you are to complete your course of study, and the less likely you are to know your tutors (these two factors also correlate). In other other words, the whiter you are, the more likely you are to complete your course of study (because of the strong correlations between race and poverty). While we are playing the game of statistical correlations, let’s take it a little further. As the authors point out, ‘there is considerable evidence that students with lower socio-economic status have lower GPAs and graduation rates’. If, therefore, GPAs are one of the most significant predictors of academic success, we can say that socio-economic status (and therefore race) is one of the most significant predictors of academic success … even if the learning analytics do not capture this directly.

Actually, we have known this for a long time. The socio-economic divide in education is frequently cited as one of the big reasons for moving towards digitally delivered courses. This particular piece of research was funded (more about this in the next posts) with the stipulation that it ‘investigated and demonstrated effective techniques to improve student retention in socio-economically disadvantaged populations’. We have also known for some time that digitally delivered education increases the academic divide between socio-economic groups. So what we now have is a situation where a digital technology (learning analytics) is being used as a partial solution to a problem that has always been around, but which has been exacerbated by the increasing use of another digital technology (LMSs) in education. We could say, then, that if we weren’t using LMSs, learning analytics would not be possible … but we would need them less, anyway.

My next post will look at the results of the interventions with students that were prompted by the alerts generated by the learning analytics. Advance warning: it will make what I have written so far seem positively rosy.