AI and the experts: the political economy of data-driven decision-making
Conference
10th International Conference on Agricultural Statistics
Format: CPS Abstract - ICAS 2026
Keywords: agriculture, artificial intelligence, data, decision-making, large language models, policy analysis, political economy
Abstract
Governments and development agencies have long consulted experts and rigorous research, relying on data and economic modeling, to inform the design, monitoring, and evaluation of policy. With recent advances in computing and artificial intelligence, large language models and machine learning now have the potential to support researchers in creating models, making scalable predictions, and categorizing and searching for insights across vast amounts of information.
What then remains the comparative advantage of experts in policy analysis, and in assessing and conveying policy options and tradeoffs to decision-makers? Machine learning and large language models (LLMs) are able to process more data and surface patterns potentially unrecognizable to analysts. Conversely, experts, particularly local ones, bring contextual and political knowledge with which to interpret the numbers, for example, the particular tastes and preferences of a decision-maker that may deviate from the norm, or the more idiosyncratic challenges that arise with policy implementation as opposed to design. Human analysts are also able to curate and verify the body of knowledge that informs decision-making and to recognize implicit assumptions underpinning a model, combating AI models' tendency to "hallucinate." On the other hand, expert knowledge might contain its own biases (such as anchoring) and inhibitions around offering unpopular advice. Finally, certain ethical frameworks may proscribe the use of models in roles that serve to build or maintain communities, such as policy formulation (Segun 2024).
To explore these tradeoffs, we compare policy design and implementation recommendations generated by selected LLMs with those proposed by experts leveraging a multi-objective policy analysis tool, the Policy Explorer Platform (PEP-AgQuery). We focus on two main objectives: (1) identify where and how recommendations differ in their priorities and policy instruments, and (2) evaluate recommendations against the policy decisions that were ultimately chosen, including the distribution of costs and benefits. We focus on agriculture and the goals of improving productivity and nutrition, reducing rural poverty, and expanding off-farm employment opportunities.
PEP-AgQuery is an R Shiny platform that supports experts throughout the policy analysis process by centralizing, in a single, verified, updateable repository, all the documents and data necessary for evaluating policy instruments and their tradeoffs. PEP-AgQuery distinguishes itself from traditional dashboards in its policy focus, open-source code, and customizability. Users can look at pre-populated maps or create their own with the hosted data, download literature, explore bivariate relationships, and calculate basic benefit-cost estimates. Once populated, the platform cues the user to consider goals, values, stakeholders, the legality, speed, and cost of different policy instruments, and relevant evidence from the literature on estimated impacts, enabling experts to remain fully in control of prioritizing among alternative instruments.
As an initial LLM, we begin with Gemini Pro, developed by Google, which is capable of giving lengthy, citation-supported answers to factual questions. It achieves relatively high scores on benchmarks of factual knowledge and improves upon previous models with a larger context window (the amount of information held in a form of "working memory" while users provide context for their requests). Analysts may use the model for research purposes, but it is also capable of responding to requests to weigh tradeoffs or provide a recommendation given a set of instructions. The content of these responses is drawn from statistical patterns in the training data, refined with reinforcement based on input from human testers. Thus, while the information produced by the model may combine vastly more sources than a human analyst could reasonably expect to read, it may also be outdated or fabricated from inappropriate combinations of conflicting inputs.
Nonetheless, the capacity of LLMs can only be expected to improve. As the adoption of these tools becomes widespread, it is reasonable to expect increasing tensions between the use of algorithmic decision-making and expert-led analysis.
Ultimately, we reflect on the potential roles LLMs and other generative models could play in traditional policy analysis frameworks to enhance, rather than replace, expert judgment. We discuss areas such as rapid evidence synthesis and agentic AI to assist in coding text and mapping feedback loops between expert analysts and machine-generated priorities.
We expect this presentation to generate productive discussion on the complementarity and respective strengths of AI and expert judgment in agricultural policy analysis and decision-making.
Citations
Segun, S. (2024). Are certain African ethical values at risk from artificial intelligence? Data & Policy, 6, e68. doi:10.1017/dap.2024.64