In a recent interesting post on eltjam, Cleve Miller wrote the following:

Knewton asks its publishing partners to organize their courses into a “knowledge graph” where content is mapped to an analyzable form that consists of the smallest meaningful chunks (called “concepts”), organized as prerequisites to specific learning goals. You can see here the influence of general learning theory and not SLA/ELT, but let’s not concern ourselves with nomenclature and just call their “knowledge graph” an “acquisition graph”, and call “concepts” anything else at all, say…“items”. Basically our acquisition graph could be something like the CEFR, and the items are the specifications in a completed English Profile project that detail the grammar, lexis, and functions necessary for each of the can-do’s in the CEFR. Now, even though this is a somewhat plausible scenario, it opens Knewton up to several objections, foremost the degree of granularity and linearity.

In this post, Cleve acknowledges that, for the time being, adaptive learning may be best suited to ‘certain self-study material, some online homework, and exam prep – anywhere the language is fairly defined and the content more amenable to algorithmic micro-adaptation.’ I would agree, but its usefulness will depend on getting the knowledge graph right.

Which knowledge graph, then? Cleve suggests that it could be something like the CEFR, but it couldn’t be the CEFR itself because it is, quite simply, too vague. This was recognized by Pearson when they developed their Global Scale of English (GSE), an instrument which, they claim, can provide ‘for more granular and detailed measurements of learners’ levels than is possible with the CEFR itself, with its limited number of wide levels’. This Global Scale of English will serve as ‘the metric underlying all Pearson English learning, teaching and assessment products’, including, therefore, the adaptive products under development.

[Image: the Global Scale of English (GSE)]

‘As part of the GSE project, Pearson is creating an associated set of Pearson Syllabuses […]. These will help to link instructional content with assessments and to create a reference for authoring, instruction and testing.’ These syllabuses will contain grammar and vocabulary inventories which ‘will be expressed in the form of can-do statements with suggested sample exponents rather than as the prescriptive lists found in more traditional syllabuses.’ I haven’t been able to get my hands on one of these syllabuses yet: perhaps someone could help me out?

Informal feedback from writer colleagues working for Pearson suggests that, in practice, these inventories are much more prescriptive than Pearson claim, but this is hardly surprising, as the value of an inventory is precisely its more-or-less finite nature.

Until I see more, I will have to limit my observations to two documents in the public domain which are the closest we have to what might become knowledge graphs. The first of these is the British Council / EAQUALS Core Inventory for General English. Scott Thornbury, back in 2011, very clearly set out the problems with this document and, to my knowledge, the reservations he expressed have not yet been adequately answered. To be fair, this inventory was never meant to be used as a knowledge graph: ‘It is a description, not a prescription’, wrote the author (North, 2010). But presumably a knowledge graph would look much like this, and it would have the same problems. The second place where we can find what a knowledge graph might look like is English Profile, and this is mentioned by Cleve. Would English Profile work any better? Possibly not. Michael Swan’s critique of English Profile (ELTJ 68/1 January 2014 pp.89-96) asks some big questions that have yet, to my knowledge, to be answered.

Knewton’s Sally Searby has said that, for ELT, knowledge graphing needs to be ‘much more nuanced’. Her comment suggests a belief that knowledge graphing can be much more nuanced, but this is open to debate. Michael Swan quotes Prodeau, Lopez and Véronique (2012): ‘the sum of pragmatic and linguistic skills needed to achieve communicative success at each level makes it difficult, if not impossible, to find lexical and grammatical means that would characterize only one level’. He observes that ‘the problem may, in fact, simply not be soluble’.

So, what kind of knowledge graph are we likely to see? My best bet is that it would look a bit like a Headway syllabus.

Comments
  1. eflnotes says:

    hi

    the only refs to knowledge graph i can find are to google’s knowledge graph, hmm. i guess they needed to rebrand semantic networks?

    found one study on computer tutoring system for maths http://ies.ed.gov/ncee/wwc/interventionreport.aspx?sid=88

    h/t http://www.technologyreview.com/news/506366/questions-surround-software-that-adapts-to-students

    ta
    mura

    • philipjkerr says:

      Thanks for this, Mura. I couldn’t get anywhere with your first link, but maths is always going to be an interesting case, as it is the school subject in which adaptive systems have been experimented with most. There is a meta-analysis of the effectiveness of intelligent tutoring systems on K–12 students’ mathematical learning available online: http://psycnet.apa.org/journals/edu/105/4/970/ The major findings, as summarised in the abstract, are ‘(a) overall, ITS had no negative and perhaps a small positive effect on K–12 students’ mathematical learning, as indicated by the average effect sizes ranging from g = 0.01 to g = 0.09, and (b) on the basis of the few studies that compared ITS with homework or human tutoring, the effectiveness of ITS appeared to be small to modest’.
      The second article you link reinforces the results of the meta-analysis I’ve mentioned above. This raises an interesting question. If maths, which (by general agreement) is more suited to adaptive learning, doesn’t produce very significant results, what can we expect for language learning, which (by general agreement) does not sit so easily with the adaptive model?
      We will see. The studies covered by the meta-analysis, and those cited in the MIT Technology Review article you link to, took place in 2010 and earlier, so we should expect some improvements. It will be very interesting to find out what three or four additional years of research can do.

  2. cleve360 says:

    Hi Philip

    Thanks for the mention!

    One clarification: when I discuss the CEFR as an example of a possible model for the knowledge graph, that includes the English Profile project, which adds the grammar, lexis, and functions at a fairly detailed level for each of the can-do’s, to avoid the vagueness you referred to. The Profile isn’t complete yet (only the vocab has been released, I think) but it provides the actual language needed. Of course this is all speculative – we would need to see exactly what the knowledge graph framework looks like before going further.

  3. Thomas Ewens says:

    Thank you for this blog, Philip.

    It’s interesting how proponents of the grammar-syllabus, discrete-item approach to language learning pay lip-service to the communicative approach, often by invoking the CEFR.

    The CEFR has endured for so long precisely because it describes what students can do with language, not just what language they can use. Any attempt to map language onto it misses the point, surely?

    What other possible models for an adaptive learning knowledge graph are there, I wonder?

  4. Philip, you quote Michael Swan, to the effect that ‘the problem may, in fact, simply not be soluble’.
    I would agree: it is not. Or, if it is, those who advocate a ‘knowledge graph’ solution are barking up the wrong tree. Proficiency in a language is not the net effect of knowing a quantifiable number of accumulated entities of grammar and vocabulary.

    Take one of the classic case studies of naturalistic second language acquisition, a Japanese immigrant to the US, ‘Wes’, who, with the most rudimentary means, grammatically speaking, achieved a high degree of conversational fluency, to the extent that his interlocutors rated his English very favorably (Schmidt 1983). Reflecting on this study more recently, Schmidt asks ‘Why do people think his English is so good when he doesn’t use prepositions, articles, plurals, and tense? I think it’s because when people talk to him and listen to him, they don’t notice that he doesn’t use them’ (Schmidt 2013). This he attributes to Wes’s willingness to communicate ‘and especially his persistence in communicating what he has in mind and understanding what his interlocutors have in their minds’, which goes ‘a long way towards compensating for his grammatical inaccuracies’. Schmidt adds, ‘Grammar teachers, on the other hand, generally consider him a disaster’ (1983).

    Schmidt offers the case of Wes as an argument in favour of the view that fluency is not a function of grammatical knowledge, or, to put it the other way round, ‘grammatical competence derived through formal training is not a good predictor of communicative skills.’ It is also an argument supporting the view that language learning is situated, selective and idiosyncratic. As Blommaert (2010) comments, ‘We never know ‘all’ of a language, we always know specific bits and pieces of it’. Those specific ‘bits and pieces’ are the ones which equip us with communicative competence in the particular contexts in which we use the language. They cannot be predicted simply by aggregating the language output of hundreds of thousands of other learners, all of them operating in different contexts and for different purposes.

    In similar vein, Lantolf and Thorne (2006) argue that ‘learning an additional language is about enhancing one’s repertoire of fragments and patterns that enables participation in a wider array of communicative activities. It is not about building up a complete and perfect grammar in order to produce well-formed sentences’.

    If ‘knowledge graphs’ are predicated on the idea that ultimate achievement is best defined in terms of ‘a complete and perfect grammar’ consisting of ‘well-formed sentences’, the lip-service that proponents of adaptive learning technologies pay to ‘personalization’ is – frankly – nonsense. The only person who knows what I want to do with my language is me. The person best placed to guide me is my teacher.

  5. Scott, knowledge graphs are not predicated on the idea that “ultimate achievement is best defined in terms of ‘a complete and perfect grammar’ consisting of ‘well-formed sentences’”. What they are predicated on is the idea that learning is cumulative, so that at any point, what you are learning will be helped by having learned other things previously – the knowledge graph is therefore a complex network of prerequisites. It doesn’t have to be based on grammar or vocab – the stipulations are that it’s based on chunks of learning, that it is possible to determine prerequisites for each chunk, and that someone (either a computer or a teacher) is able to judge the extent to which the student has mastered each chunk. So, a chunk could be a CEFR can-do, although there’s nothing to say it has to be prescriptive about what grammar or vocab is used.

    The other thing to point out is that a knowledge graph is not static – it evolves based on how students actually interact with the material. If the prerequisites have been wrongly mapped in the original course design, then that will show up and the graph will adapt. The aim is to use the graph to guide the student through to an overall objective (whatever that may be) as efficiently as possible. That’s where the ‘personalization’ claims come from – there are millions of possible paths through a course, and AL will map out one designed for each student and adjust it in real time as they go.
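
    To make that concrete, here’s a minimal sketch of the kind of prerequisite structure I mean – purely illustrative, with invented chunk names, mastery scores and threshold, and nothing to do with Knewton’s actual data model or algorithms:

    ```python
    # Illustrative only: a knowledge graph as a network of prerequisites.
    # Each chunk maps to the chunks that should be mastered before it.
    prerequisites = {
        "imperatives": [],
        "places vocabulary": [],
        "past simple": ["imperatives"],  # invented dependency, for illustration
        "can ask for directions": ["imperatives", "places vocabulary"],
        "can describe a journey": ["past simple", "places vocabulary"],
    }

    # Mastery judgements come from outside the graph (a teacher, or an
    # automatically marked activity), expressed here as scores from 0 to 1.
    mastery = {"imperatives": 0.9, "places vocabulary": 0.4, "past simple": 0.0}

    THRESHOLD = 0.8

    def ready(chunk):
        """A chunk is available once all its prerequisites are mastered."""
        return all(mastery.get(p, 0.0) >= THRESHOLD for p in prerequisites[chunk])

    def recommend_next():
        """Unmastered chunks whose prerequisites are already in place."""
        return [c for c in prerequisites
                if mastery.get(c, 0.0) < THRESHOLD and ready(c)]

    print(recommend_next())  # ['places vocabulary', 'past simple']
    ```

    Different students arrive with different mastery profiles, so each one gets routed through the graph differently – that, in a nutshell, is where the claim about millions of possible paths comes from.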

    So, is learning a language a cumulative process?
    In ELT, do we have cohorts of students who all want or need to reach the same overall objective (such as passing an exam)?
    In language learning, is it helpful to be working towards objectives?
    Can language learning be broken into chunks whose mastery can be measured in any way?
    Are different students likely to have different strengths and weaknesses that mean they would benefit from focusing more of their time and effort on some areas and less on others?

    If the answers to those questions are all clearly “no”, then AL is pointless. Otherwise, I think it would be interesting to at least think a bit about how to use it most effectively in language learning. For example, could it help move us away from the identikit grammar-focused courses that are still being churned out right now? Maybe, maybe not – it would be interesting to explore that possibility, though. Knowledge graphs may well end up looking a bit like the Headway syllabus – but that doesn’t have to happen.

    A lot of those debating AL in ELT right now are influential in the industry and could help to shape things for the better – given a willingness to engage constructively, rather than shouting “Here be monsters!”.

    • Thanks for your detailed (and measured) response, Laurie.

      As an example of how I think I am engaging constructively, let me refer you to this comment on an earlier post of Philip’s, where I explore (and, admittedly, ultimately dismiss) the idea that the CEFR (as you suggest) could provide measurable targets on a ‘knowledge graph’:

      https://adaptivelearninginelt.wordpress.com/2014/02/03/part-7-the-next-4-to-5-years-of-adaptive-learning-in-elt-10-predictions/comment-page-1/#comment-61

      In brief, I query the notion that (at least in the foreseeable future) software could be designed so as to assess (except in the most clunky fashion) achievement of ‘can do’ statements of the type found in the CEFR. Hence a knowledge graph that takes these statements as its targets would, in the end, need to be interpreted, applied and monitored by expert humans, which means it is nothing more than a conventional syllabus, albeit a competency-based one rather than a grammar-based one.

      But, yes, I agree with you that learners who are preparing for an exam, especially one which is itself graded automatically, and hence consists of fixed response items, could well find that adaptive learning software that is based on big data collected from previous candidates will be sufficient for their needs. Given that more and more learners are setting their sights on passing these kinds of exams, then the future of adaptive learning technologies is assured. But we shouldn’t confuse these narrowly instrumental aims with what competence in a language really comprises, i.e. the ability to sustain meaningful communication with another human being.

      • Thanks Scott. You say in the other post “I don’t see CEF descriptors yielding easily to adaptive software”. I think that’s absolutely true, although it possibly reveals a misconception – the idea that adaptive software attempts to assess students’ work. That’s not the case – information about the extent to which student A has mastered ‘learning chunk X’ is part of the input that the adaptive software uses to build its picture of the student’s level/strengths/weaknesses/gaps against the overall learning objective mapped out in the knowledge graph. Making the judgement about the extent to which a student has mastered something is outside the scope of the AL software.
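
        Just to make that division of labour concrete, here’s a hypothetical sketch (my own illustration, not Knewton’s API): the mastery judgement arrives from outside, and the engine only records it and decides what to offer next.

        ```python
        # Hypothetical sketch of the interface boundary described above.
        class AdaptiveEngine:
            def __init__(self, knowledge_graph):
                self.graph = knowledge_graph   # chunk -> list of prerequisite chunks
                self.model = {}                # chunk -> latest mastery score (0 to 1)

            def record_mastery(self, chunk, score):
                """The judgement is made elsewhere (a teacher, an auto-marked task);
                the engine just records it."""
                self.model[chunk] = score

            def next_chunks(self, threshold=0.8):
                """Unmastered chunks whose prerequisites the model says are in place."""
                return [c for c, prereqs in self.graph.items()
                        if self.model.get(c, 0.0) < threshold
                        and all(self.model.get(p, 0.0) >= threshold for p in prereqs)]
        ```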

        Of course, the most likely thing is that AL ends up being used for things that can be assessed automatically by a computer (probably through students doing dull drag and drop and gap-fill activities in an LMS). A computer’s ability to assess “the ability to sustain meaningful communication with another human being” is absolutely pitiful, and will probably be so for many years (which is why I’m not convinced by the “adaptive learning will replace teachers” argument). So that means the path of least resistance for AL is to use it for more easily quantifiable things like vocab, grammar and exam skills, where there are clear ‘right’ and ‘wrong’ answers. There’s nothing inherent in AL that means it has to be that way, though. Could it be used to support ‘good’ approaches to language learning as well as ‘bad’ ones? If not, why not? And if so, how?

  6. Thomas Ewens says:

    Laurie,

    You state that knowledge graphs (and, by inference, adaptive learning itself) are predicated on the belief that learning is a cumulative process – a positivist view of learning, in other words. I’m not sure that Scott is exactly a big fan of positivism. And it’s certainly fair to say that it’s just one of many competing theories of learning out there (i.e. it’s not axiomatic that learning is a cumulative process).

    However, learning which is cumulative in nature does have its place. And I completely agree that the debate on adaptive learning in ELT could perhaps be more constructive. At least it has remained (for the most part) civil. Although I think one commenter used the word ‘nazi’ over on ELT Jam 😉

    As I said on ELT Jam, I personally very much appreciate David Liu taking the time to answer some of our questions. Is there a plan for another ‘Knewton Replies’?

    • Hi Thomas – yes, I hope we’ll be hearing more from Knewton on eltjam – as long as they haven’t been scared off! I think quite a lot of the debate is grounded in misunderstandings about how Knewton really works (or intends to work), so it would be good if everyone involved could reach a clear understanding – then the real debate can begin!

