Comparing the Effectiveness of Artificial Intelligence (AI) Models ChatGPT and Meta AI in Detecting and Correcting Morphosyntactic Errors in a BS Accounting Student's Written Text
Abstract
The current study explores the effectiveness of artificial intelligence (AI) models, specifically ChatGPT and Meta AI, in identifying and categorizing morphosyntactic errors in academic texts. Morphosyntactic errors encompass issues in both morphology (word forms) and syntax (sentence structure), areas that are often challenging for non-native English speakers and for students in fields outside linguistic training, such as accounting. This study compares the error-detection abilities of ChatGPT and Meta AI on a sample text written by a BS Accounting student, aiming to determine each model's strengths and limitations in this domain. Using qualitative content analysis, we categorize the types of errors each model identifies and assess the corrective feedback provided. Errors were classified using the surface strategy taxonomy developed by Dulay et al. (1982) and an adaptation of the linguistic category taxonomy developed by Gayo and Widodo (2018). Data collection and analysis followed Keshavarz's (2012) methodology of error analysis. The results reveal that while both ChatGPT and Meta AI can effectively detect morphosyntactic errors, Meta AI offers a broader range of corrections that encompasses both grammar and style, enhancing readability and contextual clarity. This makes Meta AI a more robust tool for advanced error detection, particularly in academic writing, where both accuracy and stylistic sophistication are crucial. This research provides insights for educators, researchers, and AI developers regarding the application of AI-driven grammar tools in academic writing. The findings contribute to a better understanding of the current capabilities of, and areas for improvement in, AI-based language assistance.