AI doesn’t outperform accounting students yet, but educators must adapt to rapid change in academia and practice.
LAKEWOOD RANCH, Fla., April 19, 2023 /PRNewswire/ — A new study to be published in Issues in Accounting Education pits the ChatGPT AI chatbot against human accounting students on accounting exam questions.
Organized by David A. Wood, a professor of accounting at Brigham Young University, 327 authors from 186 institutions in 14 countries contributed to the crowd-sourced research effort, believed to be the first of its kind in the accounting field. Professors submitted their own assessment questions and data on their own students' performance; ChatGPT was then asked to respond to the same questions.
The collection of such a large data set was possible because the topic resonated with so many active educators. Accounting faculty across the globe are aware of the ramifications of ChatGPT and other AI technologies; they’re interested in understanding how these tools will impact their classrooms, and how they should be teaching with these technologies in mind.
On average, across more than 28,000 assessment questions, human students outperformed the chatbot. Topic area and question type strongly influenced ChatGPT's performance: the chatbot did better on questions related to accounting information systems, analytics/technology, and auditing, but worse on financial, managerial, and tax questions. Likewise, multiple-choice and true/false questions proved easier for the AI than workout or short-answer questions. The bot also benefited when partial credit was awarded and when questions came from a textbook test bank. However, in no scenario did the AI beat human student averages more than 28 percent of the time.
Taken together, the results highlight important implications for accounting education. While faculty must prepare students for careers where they will use AI, they must be on guard that the same technology may be used to cheat on assessments or coursework. Students and professionals using AI must also be wary of chatbots’ tendency to confidently deliver incorrect information. In the study, ChatGPT frequently gave descriptive explanations for wrong answers, made up facts and sources, and made nonsensical errors when performing math.
Wood discussed the implications of the study for accounting education: "The fundamental question now is, for accounting and beyond, how should education be different as we enter the AI era? Our students' jobs are going to be completely different from when we were in the profession. Many jobs will even change radically from the time they enter a program to when they graduate; technologies like AI are just evolving that rapidly. It's going to be critical for those students to be able to adapt to change, and it is up to us as educators to prepare them."
The study, “The ChatGPT Artificial Intelligence Chatbot: How Well Does It Answer Accounting Assessment Questions?,” is forthcoming in Issues in Accounting Education, which is published by the American Accounting Association.
The American Accounting Association (www.aaahq.org) is the largest community of accountants in academia. Founded in 1916, we have a rich and reputable history built on leading-edge research and publications. The diversity of our membership creates a fertile environment for collaboration and innovation. Collectively, we shape the future of accounting through teaching, research and a powerful network, ensuring our position as thought leaders in accounting.
SOURCE Issues in Accounting Education