The practical implications of public sector AI
Gary Pettengell, chief executive of not-for-profit technology provider Empowering Communities, discusses what artificial intelligence means for public services
With the public sector becoming increasingly reliant on Artificial Intelligence (AI), what does this mean for our public services in practice? Will machines actually yield better results than humans, or is it time to slow down and take stock of the implications?
What actually is AI?
Many of us believe that AI is solely focused on technology which can self-learn to mimic the mind of a human. The reality is that the term has a much broader scope and Stanford defines it as “the science and engineering of making intelligent machines, especially intelligent computer programs.”
With this in mind, AI could impact front-line services in three key areas:
1. Administrative automation
2. Predictive analysis
3. Complex decision-making
Opportunities exist within each of these, but so do myriad complexities.
Technology is readily available which can imitate a human to the extent of providing a basic level of support. For example, Enfield Council recently evaluated a chatbot, named Amelia, to help residents locate information and complete applications. It was also used to provide self-certification for planning and authenticate applications for permits.
Assistant IT Director, Rocco Labellarte, explained that Amelia could tackle more advanced issues if councils collaborated.
“In the case of Enfield, we have over 450 transactional services delivered through multiple channels including e-forms. For a single council to build this number of services using AI would take a long time, whereas sharing the development workload with other councils would shorten the time needed to achieve the same goal.”
Another area that could benefit from a collaborative approach to AI is fraud. For several years, companies such as PayPal have been developing systems which successfully identify fraudsters. If this technology could be implemented within the public sector, it could make a significant dent in the estimated £20bn lost to fraud each year.
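At its simplest, automated fraud detection means flagging transactions that don't fit the normal pattern. The sketch below is a toy illustration of that idea using a robust statistical outlier test; the claim amounts, threshold, and scenario are all invented for illustration, and real fraud systems combine many more signals than a single amount.

```python
# Toy sketch: flagging unusually large claims as potential fraud signals
# using the median-absolute-deviation (MAD) outlier test, which is robust
# to the very outliers it is trying to find. All figures are invented.

from statistics import median

def flag_outliers(amounts, threshold=3.5):
    """Return indices of amounts that are extreme outliers under the MAD test."""
    med = median(amounts)
    mad = median(abs(a - med) for a in amounts)
    if mad == 0:
        return []  # no spread at all, nothing to flag
    # 0.6745 rescales MAD to be comparable with a standard deviation
    return [i for i, a in enumerate(amounts)
            if 0.6745 * abs(a - med) / mad > threshold]

claims = [120, 135, 110, 125, 128, 130, 122, 5000]  # last claim is suspicious
print(flag_outliers(claims))  # → [7]
```

A median-based test is used here rather than a mean-and-standard-deviation one because a single very large claim inflates the standard deviation enough to hide itself; the median is unaffected.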
Clearly, the biggest point of contention with any public sector automation is how it relates to job cuts. My own hope is that the technology supports the workforce rather than replacing it. By automating mundane, repetitive tasks, it frees staff to provide the hands-on help that only humans can.
AI can support decision-making by providing intelligent predictions which are based on millions of data points. One such technology, Babylon Health, is being used by North London clinical commissioning groups to conduct a triage process on urgent healthcare issues.
It’s delivered through a mobile app and advice is provided in the form of a one-to-one discussion. Essentially, it can assess someone’s condition, make predictions on their future health and take them through some key steps on how to prevent illness. In the US, the app is used to connect the patient with doctors for a video consultation.
It’s also widely known that Deepmind (owned by Google) has been granted access to healthcare data relating to 1.6 million patients. Its intention is to create technology to support doctors by making predictions based on complex reasoning. For example, by instantly comparing symptoms with millions of other patients, it could make a recommendation which might otherwise have been out of scope.
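The idea of "comparing symptoms with millions of other patients" can be pictured as a similarity search over past cases. The sketch below shows a deliberately simplified nearest-neighbour lookup over symptom sets; the cases, symptoms, and conditions are entirely made up for illustration and bear no relation to how DeepMind's actual systems work.

```python
# Toy illustration of "compare this patient's symptoms with past cases":
# a nearest-neighbour lookup using set overlap (Jaccard similarity).
# All cases and conditions below are invented for illustration.

def jaccard(a, b):
    """Overlap between two symptom sets (1.0 means identical)."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

def most_similar_case(patient_symptoms, past_cases):
    """Return the recorded condition of the most similar past case."""
    best = max(past_cases, key=lambda case: jaccard(patient_symptoms, case[0]))
    return best[1]

past_cases = [
    ({"fever", "cough", "fatigue"}, "flu-like illness"),
    ({"rash", "joint pain"}, "allergic reaction"),
    ({"cough", "chest pain", "breathlessness"}, "respiratory infection"),
]
patient = {"cough", "fever"}
print(most_similar_case(patient, past_cases))  # → flu-like illness
```

Even this toy version shows why data access matters: the recommendation is only as good as the pool of past cases it can search.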
West Yorkshire Police also employs predictive analysis to identify where burglaries are most likely to occur. Predictive policing is a technology developed by Professor Shane Johnson, which focuses on the premise that certain crime types have a pattern.
Chief Superintendent Chris Rowley explains: “Theoretically speaking, an algorithm can be developed for all acquisitive crime types, such as theft of a motor vehicle or burglaries”. He added, “We’re now looking at whether algorithms can get you even closer to the actual time and location of where the next offences will take place”.
The success of predictive analysis depends on the quality of the data available to the AI. It works well in policing, but struggles where data sharing is limited. Local government departments, for example, often work in silos, which restricts the amount of data available.
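The pattern behind predictive burglary mapping is often described as the "near-repeat" effect: a recent burglary temporarily raises the risk for the same street and its neighbours. The sketch below is a toy version of that scoring idea, not Professor Johnson's actual model; the grid, decay constant, and weights are invented purely for illustration.

```python
# Toy sketch of the "near-repeat" idea behind predictive burglary mapping:
# each past incident raises the risk of its own grid cell and, more weakly,
# the adjacent cells, with the effect decaying over time. All parameters
# here are invented for illustration.

import math
from collections import defaultdict

def risk_scores(incidents, today, decay_days=7.0, neighbour_weight=0.5):
    """incidents: list of ((x, y) grid cell, day number) pairs.
    Returns a dict mapping grid cells to relative risk scores."""
    scores = defaultdict(float)
    for (x, y), day in incidents:
        weight = math.exp(-(today - day) / decay_days)  # recency decay
        scores[(x, y)] += weight                        # same location
        for dx in (-1, 0, 1):                           # adjacent cells
            for dy in (-1, 0, 1):
                if (dx, dy) != (0, 0):
                    scores[(x + dx, y + dy)] += neighbour_weight * weight
    return scores

past = [((3, 4), 28), ((3, 4), 25), ((7, 1), 10)]  # (cell, day) pairs
scores = risk_scores(past, today=30)
print(max(scores, key=scores.get))  # → (3, 4), the recently, repeatedly hit cell
```

The point about data quality follows directly: with sparse or siloed incident data, every cell's score tends towards zero and the map has nothing useful to say.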
By far the most controversial area of AI relates to autonomy: how much responsibility are we willing to hand over? The real question boils down to how we, as a society, decide to make decisions with moral implications.
Would we prefer humans to make every critical decision, even though no-one has the capacity to consider a billion data points? Or should technology provide all the answers, even though it feels morally challenging?
To illustrate this moral dilemma, consider an impossible scenario with self-driving cars. You’re being driven home on your regular commute and a child runs out in front of you. The only way to avoid the child is by swerving right, which involves colliding with a young woman on the pavement. Your autonomous car of the future calculates overall risk instantly and swerves right. It’s unlikely that you could’ve made a safer decision, but even though it’s technically safer, it just feels wrong.
The same principles could be applied to the justice system. AI could be valuable in sifting through volumes of case law, but should it decide a sentence? IBM’s Watson can scan 40 million health documents in just 15 seconds, but should it actually diagnose?
AI is set to make a monumental impact across the entire public sector, but we need to slow down and collectively decide how best to accommodate it. Simply layering it on top of an overburdened workforce may actually do more harm than good.
Gary Pettengell is the chief executive of Empowering Communities, a not-for-profit technology provider that helps public and third sector teams to collaborate.