Research Article

Specific Linguistic Questions that Artificial Intelligence (AI) Cannot Answer Accurately: Implications for Digital Didactics

Authors

  • Reima Al-Jarf, Full Professor of English and Translation Studies, Riyadh, Saudi Arabia

Abstract

This study examines the kinds of specific linguistic questions that AI systems cannot answer accurately, drawing on daily scholarly interactions with Microsoft Copilot (MC), DeepSeek (DS), Google Translate (GT), Gemini, and Monica rather than on artificial test prompts. It also seeks to understand why AI makes mistakes and whether the nature of user questions influences AI performance. In addition, it analyzes the nature of these failures through linguistic and pedagogical lenses to offer recommendations for students and researchers on how to critically evaluate AI output in language and translation tasks. For these purposes, a sample of 50 specific linguistic and translation questions that the author posed to MC, DS, GT, Gemini, and Monica between 2023 and late 2025 was collected and analyzed. The questions were classified into the following categories: phonology, transcription, morphology, lexical questions, pragmatics and culture, explanation or translation of Arabic grammatical terms, books AI cannot fully identify, telling a story from classical Arabic literature, converting handwritten text to typed text, translation of technical terms, metaphorical expressions, and metonyms, and bibliographic and scholarly workflow issues. The five AI systems' responses to the 50 linguistic and translation questions and tasks revealed recurrent shortcomings across diverse domains: phonology and transcription errors, morphological and lexical inaccuracies, pragmatic and cultural misinterpretations, and faulty explanations of Arabic grammatical terms. The AI systems failed to identify certain books, struggled with retelling stories from classical Arabic literature, and could not convert handwritten text to typed text. Technical terms, metaphorical expressions, and metonyms were often mistranslated, while bibliographic and scholarly workflow tasks yielded fabricated references and gaps in reference organization. Collectively, these errors underscore AI's tendency toward surface-level processing at the expense of linguistic depth and cultural fidelity. The findings suggest directions for refining AI translation models and for integrating critical AI literacy into language education. This study also contributes to ongoing debates on the limitations of AI and its use in linguistics by highlighting the pedagogical risks of uncritical reliance on AI output. Despite these flaws and issues, AI remains capable of performing a vast range of linguistic tasks with remarkable speed, producing impressive content at a pace and scale that humans cannot match.

Article information

Journal

Frontiers in Computer Science and Artificial Intelligence

Volume (Issue)

4 (4)

Pages

43-61

Published

2025-12-13

How to Cite

Al-Jarf, R. (2025). Specific Linguistic Questions that Artificial Intelligence (AI) Cannot Answer Accurately: Implications for Digital Didactics. Frontiers in Computer Science and Artificial Intelligence, 4(4), 43-61. https://doi.org/10.32996/fcsai.2025.4.4.4

Views

38

Downloads

56

Keywords:

Artificial Intelligence (AI), AI linguistic limitations, AI translation limitations, AI responses to author-initiated questions, AI inaccuracies, causes of AI errors, improving AI responses, translation didactics, students' linguistic skills, students' searching skills.