Governments should consult with AI when taking decisions: Five good reasons


Artificial Intelligence is becoming ever more sophisticated in its deductions. This has caused many to consider its role in the governance of countries, states, cities, and towns. I believe there’s a strong case to make when it comes to its integration into politics and power. Here’s why: 

  1. Human decision-making can be badly flawed

We are not as rational as we take ourselves to be. Our judgments are regularly compromised by cognitive biases that unfairly influence how we deliberate: we assume that good-looking people give more compelling testimony, we are liable to fall into “groupthink” in meetings, and our political leanings skew how we calculate the benefits and risks of new initiatives (Daniel Kahneman, Thinking, Fast and Slow). Artificially intelligent systems are untouched by these biases, and by the plethora of subjective, conscious experiences that mislead reliable, reasoned decision-taking.

  2. Algorithms are more accurate in unpredictable environments

Back in 1954, the psychologist Paul Meehl published a book showing that simple statistical algorithms consistently outperform the predictions of trained clinicians. Decades later, after more than 200 studies of this type, no one has convincingly contradicted that conclusion. It remains the case that in environments with high levels of unpredictability – like politics and governance – formulas are more accurate than the intuitions of industry experts. This inconvenient truth has been widely ignored by those whose authority it challenges.
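To see how unglamorous these formulas can be, here is a minimal sketch of a unit-weighted linear rule, the kind of simple statistical model studied in this literature. The cue names and numbers are invented purely for illustration: each option is scored by standardizing its cues and summing them, with no expert tuning at all.

```python
# Illustrative sketch (hypothetical data): a unit-weighted linear formula.
# Each option is scored by summing its standardized cue values.

def standardize(values):
    """Rescale a list of raw cue values to mean 0 and unit variance."""
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    sd = var ** 0.5 or 1.0  # avoid division by zero for constant cues
    return [(v - mean) / sd for v in values]

def unit_weight_scores(cases):
    """Score each case as the unit-weighted sum of its standardized cues."""
    cues = list(zip(*cases))             # regroup: one sequence per cue
    z = [standardize(c) for c in cues]   # standardize each cue separately
    return [sum(col) for col in zip(*z)] # per-case sum across cues

# Hypothetical cues for three policy options:
# (projected benefit, past success rate, expert rating)
options = [(0.8, 0.6, 7.0), (0.4, 0.9, 6.0), (0.2, 0.3, 4.0)]
scores = unit_weight_scores(options)
best = max(range(len(scores)), key=scores.__getitem__)
```

The point is not that these three numbers mean anything, but that a rule this mechanical – equal weights, no judgment calls – is the sort of predictor that has repeatedly matched or beaten expert intuition.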

  3. Machines are fast and comprehensive

Artificial intelligence is extraordinarily fast, and this speed allows it to churn through huge amounts of data. Thanks to “datafication” there is now more human-centric information than ever: social media, email and text data, energy use, travel records, geolocation/GPS, loyalty cards, fitness trackers, and more. Undoubtedly, much of this is highly useful to those who govern; it gives critical, up-to-date (even real-time) information about who we are, how we behave, what we like and what we need. Yet without AI we can’t interrogate it: human decision-takers have to ignore much of it, because our comparatively slow processing cannot hope to keep pace without assistance. Non-adopters could easily be considered negligent governors in the future.

  4. Apocalyptic predictions shouldn’t obstruct reasonable use

Some critics fear integrating artificial intelligence into public life because of longer-term worries about true or strong AI, i.e. machines with human-level intelligence that could become hostile or install themselves as our despotic rulers. Skynet, basically. If it is even possible, that type of AI is a long way off, and dystopian predictions should not be allowed to stand in the way of reasonable, deployable advances in narrow AI: machines optimized for a very limited range of tasks. Though narrow AI can already outstrip human prediction in a number of areas, there is no obvious way for it to evolve into complex, generalized intelligence, and many experts are skeptical that it ever will.

  5. AI can reduce costs to citizens

Lastly, it should strike us as obvious that if judgments can be made more quickly and accurately, costs will fall. In government departments, this might mean lower administration and research costs; in politics, it could mean better prediction and foresight when making important decisions with social costs attached. There are two other likely bonuses: first, more gets done when the mechanisms move more quickly; and second, human decision-makers can assign more time for deliberation where it is needed. As with most innovation, intelligent machines remove much of the labor so that humans can focus more sharply on purpose and value.

It is true that many governments and government departments already work with AI instruments, but their use is neither ubiquitous nor obligatory. Though there are well-founded concerns about the use of algorithms, it makes sense to experiment cautiously with them in environments currently dominated by flawed human intuition.
