Research Article

A Systematic Self-Review of Specific-Skill Assessment Studies: Principles and Practices

Authors

  • Reima Al-Jarf, Full Professor of English and Translation Studies, Riyadh, Saudi Arabia

Abstract

This study conducted a systematic review (SR) of the author’s research program on specific language skill assessment published between 2004 and 2023. The SR includes 40 empirical and theoretical studies covering the assessment of listening, pronunciation, speaking, reading, writing, vocabulary, grammar, morphology, spelling, and research skills. The studies were organized into six thematic clusters: test construction principles; operationalizing and measuring process and product subskills; assessment in experimental studies; multi-skill assessment; assessment instruments and technologies; and factors influencing skill assessment. Results of the SR showed that the studies provide a comprehensive, longitudinal, and practice-based model for language skill assessment in EFL and L1 contexts. Across the studies, the author consistently demonstrated how process-based instruction requires process-based assessment, and how subskills must be operationalized, taught, and measured with precision. The findings also revealed that the author’s test construction principles, such as alignment with instructional objectives, skill decomposition into process and product subskills, construct validity, reliability, and transparency, were applied consistently across all studies. Additionally, the SR showed that the author’s experimental studies provided strong evidence that explicit instruction in subskills leads to significant gains in learners’ performance, and that the assessment instruments designed by the author were sensitive enough to detect such gains. Multi-skill studies demonstrated the interconnectedness of language skills and the need for integrated assessment tasks. Studies on assessment technologies highlighted the author’s early adoption of online tools, scoring iRubrics, and mobile apps to enhance assessment efficiency, fairness, and objectivity.
Several contextual and learner-related factors were found to influence assessment outcomes, including instructional consistency, teacher qualifications and expertise, test format clarity and logical sequencing, and learners’ exposure to process- and product-subskill-based training. The review also identified recurring pedagogical implications, emphasizing the need for systematic subskill instruction, alignment between teaching and testing, and the development of assessment literacy among teachers. Overall, the SR concluded that the author’s research program constitutes a coherent, cumulative, and influential contribution to language skill assessment. It provides a replicable model for designing valid, reliable, comprehensive, discriminating, and instructionally aligned assessment tools, and offers a rich foundation for future research on process-based language assessment in similar educational contexts.

Article information

Journal

Journal of World Englishes and Educational Practices

Volume (Issue)

8 (2)

Pages

11-33

Published

2026-04-07

How to Cite

Al-Jarf, R. (2026). A Systematic Self-Review of Specific-Skill Assessment Studies: Principles and Practices. Journal of World Englishes and Educational Practices, 8(2), 11-33. https://doi.org/10.32996/jweep.2026.8.2.2


Keywords:

systematic review (SR); Al-Jarf research program; language skill assessment; assessment principles; assessment technologies; assessment tools; process subskill assessment; product subskill assessment; factors affecting assessment