Governing Goes Digital ⚖️🖥️

Could AI craft policy better than politicians?

Welcome to Answers on AI: making you smarter about artificial intelligence, one email at a time.

Before we kick it off, a quick request. This free newsletter is intended to help educate the masses so they can understand what’s happening in AI. It’s only able to grow thanks to readers like you, who share it with others. If you’d be willing to forward it to a friend, we’d greatly appreciate it. And we’re sure they’d appreciate it too!

First time reader? Sign up here to get Answers in your inbox. And now, on to the question…

Could AI craft policy better than politicians?

In an age where artificial intelligence (AI) is increasingly influential, it's natural to wonder if these technologies could outperform humans in complex tasks such as policy-making. We’ve previously explored the idea that AI can help make government more efficient. Today we’ll take it one step further: Can AI actually craft policy by itself? The thought of AI shaping our laws and government protocols seems like a scenario straight out of science fiction, yet as these systems become more sophisticated, the discussion around their potential role in governance is worth exploring.

  • 🤔 The Rise of the Policy-Making Bots: AI systems can process vast amounts of data far more quickly than any human, which can be an advantage in policy-making. They can pull insights from large datasets, recognizing patterns and trends that might elude even the most diligent politicians. This can lead to more informed decisions that account for a wider range of variables and potential outcomes. For example, an AI trained on environmental data could suggest policies that precisely target sources of pollution to improve air quality.

  • ⚖️ Objective Oracles or Biased Bots?: The allure of AI in governance is often tied to the hope that these systems can be perfectly objective, crafting policies free from personal bias or political pressure. However, AI systems are created by humans, whose own biases can inadvertently seep into the algorithms. Additionally, the training data fed into the neural networks carries hidden biases of its own. Ensuring true impartiality would require developers to be acutely aware of their own prejudices and of those hidden in the data, and to implement checks and balances that prevent both from influencing the AI's policy recommendations.

  • 🔄 The Feedback Loop: To avoid entrenching biases and to ensure policies remain relevant, any AI system involved in policy-making would need a feedback loop. This loop could combine assessment of policy outcomes with continuous data input that helps the AI learn and adapt over time. If, for instance, a policy intended to boost employment in a certain sector isn't yielding the expected results, the AI could analyze up-to-date data, recalibrate its recommendations, and suggest adjustments quickly and efficiently (a toy sketch of this idea appears after this list).

  • 👥 Human Touch: A major con is that policy-making isn't just about data; it's also about empathy and understanding the nuanced needs of a population. AI lacks the human touch, which is critical when dealing with sensitive issues that require compassion and ethical considerations. Policies impacting healthcare, education, and social services must resonate on a personal level, something that data alone cannot fully address.

  • 💡 Crafting Unbiased Systems: To build AI systems capable of crafting policy without bias, it's important to have diverse design teams: a mix of races, genders, cultural backgrounds, and political ideologies. Data sources must be comprehensive and representative of the entire population, just as democracy is intended to be. Additionally, the algorithms would need to be transparent and open to auditing, allowing outsiders to understand how decisions are made, which builds trust and accountability. We have previously written about this field of Explainable AI, including its many challenges, here.

  • 👮 The Left and the Right: Practically speaking, partisan politics may make this a far-off, if not impossible, dream. Questions about how to overcome the distrust built into the political system quickly arise: How could a governing party gain the support it needs from its political rivals? Would the segment of the country that opposes the party in power be willing to embrace an opponent-led rollout of AI policy? Can the biases mentioned above ever be objectively measured and removed?
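
To make the feedback-loop idea above a little more concrete, here is a minimal, purely illustrative Python sketch. Everything in it is hypothetical: the "incentive level," the 5% employment-growth target, the quarterly figures, and the damping factor are all made up for illustration, and no real policy system would run on a few lines of arithmetic like this. It only shows the shape of the loop: measure outcomes, compare them against a target, then nudge a recommendation accordingly.

```python
# Purely illustrative sketch of a policy feedback loop; all names and numbers
# are hypothetical and chosen only to show the measure -> compare -> adjust cycle.

def recommend_adjustment(target: float, observed: float, current_incentive: float) -> float:
    """Nudge a hypothetical incentive (e.g., a training subsidy) toward the target,
    in proportion to how far observed results lag behind it."""
    gap = target - observed            # shortfall in the outcome being tracked
    adjustment = 0.5 * gap             # damped correction to avoid overshooting
    return max(0.0, current_incentive + adjustment)

# Simulated quarterly review: employment growth in one sector vs. a 5% target.
incentive = 1.0                        # arbitrary starting incentive level
for quarter, observed_growth in enumerate([0.010, 0.020, 0.035, 0.050], start=1):
    incentive = recommend_adjustment(target=0.05, observed=observed_growth,
                                     current_incentive=incentive)
    print(f"Q{quarter}: observed growth {observed_growth:.1%}, "
          f"recommended incentive level {incentive:.2f}")
```

In any real deployment, that "recommendation" step would feed into human review rather than automatic implementation, which is exactly the point raised in the Human Touch bullet above.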

AI certainly has the potential to transform governance, bringing efficiency and potentially more objective analysis to policy-making. Yet the question remains: how do we ensure these AI systems serve the public good without overriding the nuances and values that define our human society? With these thoughts in mind, it's crucial to proceed with both curiosity and careful consideration. How likely is it that AI will one day craft our policies, and are we ready to embrace the changes that will come with it?

In the real world…

  • Although AI is not yet being used to craft and implement policy on its own, there are numerous examples of government agencies using it to help inform or improve their policies. Australia has used machine learning to track reported symptoms and understand the spread of disease. Quebec has used AI to fine-tune economic development policy faster than it could manually.

  • Last year, it was revealed that an attorney in a US federal court case used ChatGPT to write a legal brief; the tool “hallucinated” citations to cases that never existed.

  • The Parliament of the United Kingdom is establishing a new committee tasked with understanding the impact generative AI will have on future elections and government security.

What do the experts say?

"I asked ChatGPT if Joe Biden has ever made a mistake as president. [It gave] a very diplomatic, if non-specific, answer. But look what happens — and how specific and very detailed the AI gets — when I ask the same question about Donald Trump. It’s as if ChatGPT is responding to me: Pull up a chair, guy, we’ve got a lot to talk about!"

"For AI, every election this year is a beta test—not only for how AI tools will be used, but also for how we’ll collectively respond."

— Kat Duffy, from AI in Context: Indonesian Elections Challenge GenAI Policies in Council on Foreign Relations

"Governments will increasingly rely on AI for data analysis and policy recommendations. However, to ensure optimal use, there should be a clear delineation between AI's role as a data analysis tool and human agents' role as the ultimate decision-makers in policymaking. The quality and availability of data used to train AI systems will be important, and governments may require specialized agencies dedicated to data generation and dissemination, which is essential for the training of AI systems."

— Michael J. Ahn, quoted in Government Policymakers Start to Take AI Seriously in Governing

Stay Tuned for More

In each issue, we bring you the most interesting and thought-provoking questions of the day to help make you smarter about AI. Stay tuned for more questions, and more importantly, more answers.

Share Answers on AI

Want to help your friends get smarter? Share Answers with them! We may even give you a shoutout or the ability to ask us your own AI question.