Socio-Cognitive Framework
The Socio-Cognitive Framework (SCF) informs test development, research and validation. It has been adopted by many leading testing organisations and researchers worldwide. It combines social, cognitive and evaluative dimensions of language, linking these to the contexts and consequences of test use.
The SCF originated in the work of the founding Director of CRELLA, Professor Cyril Weir, and has been further developed by the CRELLA team and others.
- Who takes the test?
- Where and how do they need to use the language?
Test taker characteristics refer to features of the candidates, for example, age, gender, nervousness, background knowledge, and experience.
- Do test takers engage the same cognitive processes when using language for the test as in real life?
Cognitive validity interacts with context validity and is also influenced by test-taker characteristics.
This aspect corresponds to Messick's substantive validity, and it examines whether candidates, when completing a test or task, engage cognitive processes in a way that corresponds to the hypothesised construct.
- How do the tasks on the test represent the ways in which test takers will use the language?
Context validity, which encompasses Messick’s notions of content validity and generalizability (Messick, 1996), concerns the relevance and representativeness of the test task, including the nature of the input and the administration conditions, such as response format, time constraints and the interlocutor.
- Do the scores reflect the importance of target skills?
- Are the scores reliable?
After responses are produced, scoring validity needs to be examined, addressing the consistency, reliability and generalisability of the scores.
This aspect concerns not only Messick’s structural validity, which asks whether the scoring of the candidate’s response matches the construct assumed by the testers, but also the reliability of the raters and the consistency of the rating scales (when testing productive skills).
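One common check on the consistency side of scoring validity is an internal-consistency statistic such as Cronbach's alpha. The sketch below is a minimal illustration only; the function names and the candidate-by-item score matrix are invented for the example and are not part of the SCF itself:

```python
def variance(values):
    """Sample variance of a list of scores."""
    mean = sum(values) / len(values)
    return sum((v - mean) ** 2 for v in values) / (len(values) - 1)

def cronbach_alpha(score_matrix):
    """Cronbach's alpha for a candidates-by-items matrix of scores."""
    k = len(score_matrix[0])                        # number of items
    totals = [sum(row) for row in score_matrix]     # each candidate's total score
    item_columns = zip(*score_matrix)               # per-item score lists
    item_variance_sum = sum(variance(list(col)) for col in item_columns)
    return (k / (k - 1)) * (1 - item_variance_sum / variance(totals))

# Hypothetical scores: 4 candidates x 3 items
scores = [[3, 4, 3], [4, 5, 5], [2, 2, 3], [5, 5, 4]]
print(round(cronbach_alpha(scores), 3))
```

A value close to 1 would indicate that the items (or raters, if the columns are rater scores for the same performances) behave consistently; low values would flag a reliability problem to investigate.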
- How does the use of the test affect teaching and learning?
- Does use of the test benefit society?
The consequential validity component examines the appropriateness of the interpretations of the scores, and is often addressed by studies of decision-making based on test scores and of a test’s washback effects on all stakeholders (e.g. candidates, teachers, admissions officers, employers, publishers and textbook designers).
- Do scores on the test match scores on other tests of the same abilities?
- How well does the test predict performance in real life?
Criterion-related validity relates closely to Messick’s external validity, also traditionally known as concurrent validity, and is often investigated by correlating the scores with another test of the same construct (or of a different construct to show distinctiveness). In addition, this aspect addresses the issue of equivalence, not only between different versions of a test, but also different versions of a task.
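The correlational step described above can be sketched as follows. This is a minimal illustration with invented scores for two hypothetical tests of the same construct; the data and function name are assumptions for the example, not part of any actual validation study:

```python
def pearson(x, y):
    """Pearson correlation coefficient between two score lists."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sum((a - mean_x) ** 2 for a in x) ** 0.5
    sd_y = sum((b - mean_y) ** 2 for b in y) ** 0.5
    return cov / (sd_x * sd_y)

# Hypothetical scores from two tests taken by the same candidates
test_a = [52, 61, 70, 75, 83, 90]
test_b = [50, 58, 72, 71, 85, 88]
print(round(pearson(test_a, test_b), 3))
```

A high coefficient would support concurrent validity, while a deliberately low correlation with a test of a different construct would support distinctiveness.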
Testing experts say that the SCF...
…underpins the sustained development of the College English Test (CET) for undergraduate and postgraduate students in China. The framework has helped CET to meet the exacting professional standards set for high-stakes tests in the 21st century.
…is a fundamental theoretical cornerstone for Cambridge English exams and their validation…its principles are embedded in our Product Lifecycle Management approach.
…helped mould the Language Training and Testing Center’s (LTTC) endeavours in the fields of test development and validation. All our GEPT validation studies have been based on the socio-cognitive framework.
…is an innovative approach to language testing and its validation that is not only comprehensive and accessible, but also practical in empowering [users] to evaluate tests critically.
The SCF has been used for the development, revision and validation of numerous language tests around the world, testifying to its broad applicability. Examples of such tests include:
- College English Test (CET) and the Test for English Majors in China
- KET, PET, FCE, CAE and CPE¹ by Cambridge English Language Assessment
- The Graded Examinations in Spoken English (GESE) and the Integrated Skills in English (ISE) by Trinity College London
- The General English Proficiency Test (GEPT) by the Language Training and Testing Center (LTTC), Taiwan
- Test of English for Academic Purposes (TEAP) in Japan
- The National English Adaptive Test in Uruguay
- The Plan Ceibal Speaking Test in Uruguay
- QALSPELL, a generic specific-purpose test of English in higher education in the Baltic States
- The EXAVER Examinations at Universidad Veracruzana, Mexico
- National tests of Macedonian as a Foreign Language (TEMAK) in the former Yugoslav Republic of Macedonia
- Goethe-Zertifikate exams for German as a Foreign Language at the Goethe Institut
- The Graduate Admission Test of English (GATE) for postgraduate admission in Malaysia
- The Certificate of Proficiency in English (COPE), an English exemption test for entry into Turkish higher education
1. These tests are now called Cambridge English: Key, Preliminary, First, Advanced and Proficiency, respectively.
Weir, C. J. (2005). Language testing and validation: An evidence-based approach. Basingstoke: Palgrave Macmillan.
The original, comprehensive book on the SCF that takes readers through each of its components for reading, listening, writing and speaking, while thoroughly reviewing earlier theoretical works on test validity.
[Note that cognitive validity is called by its former name, theory-based validity, in this book.]
O’Sullivan, B., & Weir, C. J. (2011). Test development and validation. In B. O’Sullivan (Ed.), Language testing: Theories and practices (pp. 13-32). Basingstoke: Palgrave Macmillan.
In this edited book, the first chapter by O’Sullivan & Weir (2011) provides a briefer overview of the SCF and what it can offer testing professionals, updated from 2005. It also explains how the SCF can address issues that other validation models or frameworks do not.
Other chapters in the book report on some of the test projects listed above (e.g. EXAVER, COPE).
Green, A. (2021). Exploring language testing and assessment (2nd edition). New York: Routledge.
Part II of this book (Chapters 5 and 6) presents a socio-cognitive account of techniques used in assessing the use of a language or languages and argues for the value of the SCF in framing an argument to evaluate the use of a test in a given social context.
The four books in the Studies in Language Testing series (Vols 26, 29, 30 and 35) by UCLES/Cambridge University Press (CUP) scrutinise various aspects of the Cambridge General English exams.
- Shaw, S. D., & Weir, C. J. (2007). Examining writing.
- Khalifa, H., & Weir, C. J. (2009). Examining reading.
- Taylor, L. (Ed.) (2011). Examining speaking.
- Geranpayeh, A., & Taylor, L. (Eds.) (2013). Examining listening.
Wu, R. W. (2014). Validating second language reading examinations. Cambridge: UCLES/CUP.
This book examines the contextual, cognitive and scoring validity of the GEPT Reading exams through alignment with the Common European Framework of Reference (CEFR).
Papp, S. & Rixon, S. (2018). Examining young learners. Cambridge: Cambridge Assessment English/CUP.
The volume reflects on how learners’ L2 development between the ages of 6 and 16 can be coherently described and examines the Cambridge English family of assessments for children and teenagers.
Cheung, K. Y. F., McElwee, S., & Emery, J. (2017). Applying the socio-cognitive framework to the BioMedical Admissions Test. Cambridge: Cambridge Assessment English/CUP.
This volume applies the SCF to examine different aspects of the BioMedical Admissions Test (BMAT), an admissions test used for biomedical courses, and demonstrates how effectively language testing frameworks can be used in different educational contexts.
Taylor, L. & Saville, N. (eds.) (2020). Lessons and legacy: A Tribute to Professor Cyril J Weir (1950-2018). Cambridge: Cambridge Assessment English/CUP.
Written by a selection of his friends and collaborators from all over the world, this volume pays tribute to the academic achievements of the late Professor Cyril J Weir. This book clearly demonstrates the breadth and depth of the impact of his work and the SCF on language testing and assessment, and how his lessons continue to be relevant to the present day.
Weir, C. J. & Chan, S. (2019). Research and practice in assessing academic reading: The case of IELTS. Cambridge: Cambridge Assessment English/CUP.
By interrogating various aspects of the IELTS Academic Reading Module based on the SCF, this volume discusses the definitions and operationalisations of academic reading ability in the past, present and future.
Yu, G. & Xu, J. (eds.) (forthcoming, 2020). Language test validation in a digital age. Cambridge: Cambridge Assessment English/CUP.
Nakatsuhara, F. (2013). The co-construction of conversation in group oral tests. Frankfurt am Main: Peter Lang.
By focusing on test-taker characteristics and context validity in the SCF, this book explores how test-takers with different extraversion and proficiency levels co-construct spoken interaction in group oral tests.
Inoue, C. (2013). Task equivalence in speaking tests. Bern: Peter Lang.
This mixed-methods study explores how task equivalence, a prerequisite for any comparative research on speaking performance, can be established in terms of the context, scoring and criterion-related validity of tasks in the SCF.
Chan, S. (2018). Defining integrated reading-into-writing constructs: Evidence at the B2-C1 interface. Cambridge: CUP.
Addressing the research gap in the new area of integrated assessment, this book investigates the contextual and cognitive validity of reading-into-writing test tasks at the CEFR B2 and C1 levels.