Cambridge Michigan Language Assessments

Shaping English Language Assessments
with Research + Experience

MwALT 2014

The sixteenth annual MwALT conference was held Friday, October 3, and Saturday, October 4, 2014, at the University of Michigan in Ann Arbor, Michigan. The 2014 conference theme was
Under Construction: Building Arguments, Assessments, and Expertise.

Program & Conference Schedule

Plenary Speaker

A corpus linguist’s view on speech and speaking assessment: Searching for patterns
Ute Römer, Georgia State University

Plenary Abstract

Ute Römer, plenary speaker at MwALT 2014

The past few decades have witnessed a massive increase in corpus research activity in a range of linguistic subfields, including strands within applied linguistics. Corpora are increasingly accepted as powerful tools that help us gain insights into language structure and use, and help inform language teaching and testing practice (see, for instance, Flowerdew 2012, Hawkins & Filipovic 2012, Reppen 2010, and Römer 2011). This paper discusses the importance of considering corpus evidence in highlighting central aspects of spoken language and addresses the question “How can corpus tools and techniques help us shed light on the concept of speaking?” It also looks at speaking tests from a corpus perspective to see how well they reflect central patterns of speech. Since spoken language is not a uniform phenomenon but varies considerably depending on the context of use, the paper does not attempt to describe speech “in general.” Instead, it focuses on one particular, more specialized type of language: spoken English produced in a US research university setting. This type of language is captured in MICASE, the Michigan Corpus of Academic Spoken English (Simpson et al. 2002), a collection of 152 transcripts and 1.8 million words, based on 200 hours of recordings of speech events from across the University of Michigan in Ann Arbor.

The paper starts out with a brief analysis of word frequency and keyword lists of academic speaking (compared to academic writing) and then focuses on phraseological items (variably referred to as n-grams, formulaic sequences, lexical bundles, clusters, etc.) that are particularly common in speaking and carry important discourse functions. Software packages for corpus access and analysis are used to extract lists of contiguous word sequences (n-grams, e.g. you know, a lot of) and non-contiguous word sequences (phrase-frames, e.g. a * of, I don’t * so) of different lengths from MICASE. The resulting lists are filtered manually for items that play a central role in academic speech and appear to have a particularly high communicative value. The final section of the paper reviews rubrics of a selection of high-stakes speaking tests and discusses the extent to which these rubrics capture central aspects of spoken language as highlighted by corpus analysis. It then discusses implications of our MICASE-based findings for (academic) speaking assessment. Overall, the paper provides evidence for the interrelatedness of vocabulary and grammar in academic speech and stresses the importance of phraseology as a core, rather than a peripheral, aspect of language (cf. Ellis 2008), adding to a growing body of existing work in corpus research on phraseology (see e.g. Biber 2009; Hoey 2005; O’Donnell, Römer & Ellis 2013; Römer 2009, 2010; Sinclair 2008). It demonstrates how corpus analysis can contribute to a better understanding of core aspects of speech and how it helps us uncover the patterned nature of speaking.
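For readers unfamiliar with these units, the two extraction steps the abstract mentions can be illustrated with a minimal sketch. This is not the software used in the study; it is a toy Python illustration of what "contiguous n-grams" and wildcard "phrase-frames" mean, run over an invented snippet rather than MICASE (which is distributed separately).

```python
from collections import Counter

def ngrams(tokens, n):
    """Return all contiguous n-word sequences from a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def phrase_frames(tokens, n):
    """Return non-contiguous frames: each n-gram with one interior word
    replaced by a wildcard '*', e.g. ('a', '*', 'of')."""
    frames = []
    for gram in ngrams(tokens, n):
        for slot in range(1, n - 1):  # first and last words stay fixed
            frames.append(gram[:slot] + ("*",) + gram[slot + 1:])
    return frames

# Invented snippet standing in for a corpus transcript.
tokens = "you know a lot of people think a lot of things you know".split()

trigram_counts = Counter(ngrams(tokens, 3))
frame_counts = Counter(phrase_frames(tokens, 3))

print(trigram_counts[("a", "lot", "of")])  # 2
print(frame_counts[("a", "*", "of")])      # 2
```

A real study would run such counts over millions of words, rank the resulting lists by frequency, and then filter them manually for discourse-functional items, as the abstract describes.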

About Ute Römer

Ute Römer is currently an assistant professor in the Department of Applied Linguistics and ESL at Georgia State University. From 2007 to 2011 she was the director of the applied corpus linguistics unit at the University of Michigan English Language Institute, where she managed the Michigan Corpus of Academic Spoken English (MICASE) and the Michigan Corpus of Upper-level Student Papers (MICUSP) projects. Her primary research interests and areas in which she has published include corpus linguistics, phraseology, academic discourse analysis, and the application of corpora in language learning and teaching. Ute’s current research focuses on student academic writing across disciplines and on how corpus tools and methods can be used to identify meaningful units in specialized discourses. She is also involved in a project that combines corpus and psycholinguistic evidence to gain insights into speakers’ use and acquisition of English verb-argument constructions. She serves on the editorial boards of a number of academic journals (including the International Journal of Corpus Linguistics, Corpora, English Text Construction, and the Journal of Advanced Linguistic Studies) and is an advisory board member of the book series Studies in Corpus Linguistics (John Benjamins). She has published a book, three edited volumes, and numerous articles in leading journals in corpus linguistics and applied linguistics.

More information about her research interests and a full list of her publications can be found at www.uteroemer.com.


Preconference Workshop

Training Speaking Test Examiners: The Devil Is in the Detail

Organizers: Jessica O’Boyle and Mark Chapman
Time: Friday October 3, 1 p.m. to 5 p.m.
Aims: To teach participants how to design and structure a training program for speaking examiners.

Workshop Abstract

Are you interested in the testing of speaking in an ESL context? Would you like to learn about some of the key decisions that need to be made when training speaking examiners? If you have a theoretical or a practical interest in speaking assessment, there should be something of value for you in this workshop.

The importance of the speaking examiner has been well documented in the second language speaking assessment literature. Speaking examiners in a face-to-face context are responsible for creating a comfortable environment for the test taker, administering the speaking test according to documented protocol, and awarding scores consistently from an established scoring rubric. It is often necessary for speaking examiners to combine all these skills over an extended period of time with multiple test takers during periods of intensive testing. This combination of a sometimes challenging testing environment with the need for stable examiner behavior and scoring means that thorough examiner training is an essential factor for the validity of any speaking test.

Although the second language speaking assessment literature has much to say about the importance of speaking examiner behavior and rating decisions, there is relatively little guidance from the literature regarding the practicalities of speaking examiner training. This is where we hope to help. The MwALT 2014 preconference workshop will equip you with the necessary practical skills to coordinate a training program for speaking examiners.

The workshop will mainly cover how to design and structure a speaking examiner training program. Some of the main issues addressed will be:

  • The importance of using multiple videos of authentic test performance.
  • The quantity of video required in benchmark performance sets, calibration sets, and qualifying sets, and the principles for selecting these performances.
  • The need to create detailed justifications (commentaries) of the scores awarded to all video performances within the training materials. In addition, a frame for structuring the commentary on the performances will be suggested that carefully links the wording from the scoring rubric to the individual test taker performances shown in the training videos.
  • The importance of establishing clear certification guidelines for approving speaking examiners.

Workshop participants will have the opportunity to learn about the speaking examiner training provided in two quite different CaMLA test programs: a speaking test for screening international teaching assistants at the University of Michigan; and a multi-task, multi-level test of general spoken language proficiency that is administered internationally. After completing the workshop you will have a better understanding of how to effectively train speaking examiners in your own local context.



We thank the following organizations sponsoring MwALT 2014 for their generous support and help in making this year’s event a success!

Cambridge English Language Assessment

Center for Applied Linguistics

Linguistics Department at the University of Michigan

LSA College of Literature, Science, and the Arts at the University of Michigan

English Language Institute at the University of Michigan