Using Large Language Models to Forecast Local Government Revenue

Il Hwan Chung https://orcid.org/0000-0003-0061-2827
Berat Kara https://orcid.org/0000-0002-6948-2197
Melissa F. McShea https://orcid.org/0000-0002-5263-7033
Rahul Pathak https://orcid.org/0000-0003-2611-6776
Daniel Williams https://orcid.org/0000-0002-3225-5556

Keywords

Artificial Intelligence, ChatGPT, Forecasting, Government Revenue, Large Language Model

Abstract

We examine the use of a publicly accessible large language model (LLM) to forecast local government revenue. ChatGPT is an LLM that, although not specifically designed for quantitative analysis, can complete a wide range of tasks. The goals of this article are to determine the forecast accuracy that can be obtained and to examine potential bias. The study is based on a government revenue dataset from the Government Finance Officers Association (GFOA). Determining the accuracy and bias of LLM forecasts offers two benefits: a low-cost forecasting method for small- and medium-sized governments, and a means for external observers to validate forecasts made by official sources. Identifying the limitations of ChatGPT and similar LLMs, as well as the specific conditions required to use them wisely, may help localities avoid adverse outcomes. We find that a combination of LLM and human input provides a viable alternative forecasting method for small- and medium-sized governments and enables external validation of official forecasts. With a human in the loop, forecast errors can be as low as 9.9 percent at the aggregated annual level; relying on ChatGPT output alone can produce high-error forecasts that may not be reliable.
