Isaacs delivers plenary presentation at international Language Testing Research Colloquium

7 June 2014

Dr Talia Isaacs presents current GSoE research at an international conference in Amsterdam, June 2014

Dr Talia Isaacs delivered a plenary presentation at the Language Testing Research Colloquium, the major international language testing conference, in Amsterdam in June 2014. She explicitly tied her ongoing collaborative work on comprehensibility scale development across geographical contexts to the conference theme, Towards a Universal Framework. The title and abstract of the plenary appear below:

Modelling comprehensibility in an oral production scale for L2 learners of English:

Which linguistic factors generalize across L1s?

Talia Isaacs,1 Pavel Trofimovich,2 Dustin Crowther,2 Kazuya Saito,3 & Jennifer A. Foote2

1University of Bristol, UK; 2Concordia University, Canada; 3Waseda University, Japan

Pronunciation can be considered the skill most amenable to diagnostic assessment. Indeed, pronunciation-focused discrete-point items were prominently featured in Lado’s Language Testing (1961), designed to assess “language problems” based on differences between first and second language (L1-L2) inventories. Modern approaches to pronunciation assessment also demonstrate how systematic testing can pinpoint learner perception/production errors (Hewings, 2004). However, pronunciation is also arguably the most difficult skill to model in diagnostic scales designed for learners from diverse L1 backgrounds. This is, in part, because differences between learner productions often occur due to L1 transfer errors, with L1 effects on L2 speech tending to be more perceptually salient to listeners for pronunciation than for other skills (e.g., grammar, lexis). Thus, pronunciation contributes to listener perceptions of speech even if pronunciation is not explicitly what is being measured, making it difficult to specify, in rating descriptors, the phonological features that are universally applicable to learners from different L1 backgrounds. To complicate matters further, not all pronunciation errors “count” the same in pedagogical terms, with some errors being more detrimental to comprehensibility, or listeners’ ability to understand the L2 speech, than others (Derwing & Munro, 2009). The growing consensus among applied linguists is that the linguistic features most likely to impact comprehensibility should be emphasized in L2 instruction, as in rating descriptors, and that the features that contribute to an L2 accent but are inconsequential for comprehensibility should be left aside (Isaacs, 2014).

The Common European Framework of Reference (CEFR) demonstrates problems inherent in modelling pronunciation in a common scale. The mandate to include only language-generic features in the descriptors – so that the scale can be used for any (European) target language – precludes descriptors specific enough to isolate the source of individual learners’ pronunciation difficulties. Pronunciation is thus not included as a criterion in the global CEFR scales, largely due to erratic statistical modelling of pronunciation indicators during scale development (North, 2000). The CEFR Phonological Control scale assumes that as accent decreases, L2 speech will become more comprehensible, conflating these interrelated but distinct dimensions (Harding, 2013). Such shortcomings in modelling pronunciation are replicated in other commonly used L2 speaking scales.

In light of these gaps, this presentation reports on the development of a pedagogically oriented L2 comprehensibility scale designed for use on university campuses to guide teachers’ identification of the linguistic components most conducive to learners’ production of comprehensible English, so that these can be targeted in instruction. A synthesis of findings from our empirical studies (based on instrumental/auditory analyses of L2 speech; statistical analyses of ratings; and content analyses of focus group debriefings and introspective reports) shapes the evolution of the “crude” empirically derived 3-level global comprehensibility scale into a 6-level analytic scale, with a focus on the generalizability of linguistic criteria across L1 backgrounds and task types (e.g., Isaacs & Trofimovich, 2012; Saito et al., submitted). Overall, comprehensibility cuts across a wider range of linguistic domains than previously expected, with pronunciation and lexicogrammatical dimensions differentially contributing to comprehensibility ratings as a function of L1 background.
