Governments must do more to establish frameworks to support the ethical use of AI and build trust with citizens, according to a new Boston Consulting Group (BCG) report.
To gain insights into citizens’ attitudes about and perceptions of the use of AI in government, BCG surveyed more than 14,000 internet users around the world, asking them how comfortable they are with certain activities or decisions being made by a computer rather than a human, what concerns they have about the use of AI by governments, and to what extent they agree or disagree with certain statements in relation to the impact of AI on the economy and jobs.
The report found that the ethical implications of AI, as well as its potential impact on jobs, are among citizens’ top concerns. Thirty-two percent of survey respondents expressed unease about the potential ethical issues associated with AI, and a further thirty-one percent were concerned about the lack of transparency surrounding artificial intelligence and its decision-making processes.
Notably, respondents appear to favor some form of regulatory response to the growth of this technology. Sixty-one percent of respondents were worried about AI affecting the availability of work, and fifty-eight percent believe governments need to regulate AI to protect jobs.
Implications for Governments
Governments need to tread carefully when looking to harness AI to enhance the efficiency and impact of the services they deliver. Transparency into where and how AI is used in government will be essential to establishing the technology’s legitimacy and credibility in citizens’ eyes and to mitigating concerns about any negative impact it might have on their lives.
Survey respondents said that they were most comfortable with government use of AI when it made known, repetitive tasks faster or easier. Use cases include tax and welfare administration, assistance with service delivery, and identifying fraud and noncompliance. However, respondents were less supportive of the use of AI in areas such as health care and justice. For example, seventy-one percent of respondents agreed with the use of AI to optimize traffic, whereas fifty-one percent disagreed with using AI to determine innocence or guilt in a criminal trial.
Broadly, whenever a government service involves assessing unique sets of circumstances, respondents were wary of having computers make those decisions. Criminal justice, parole, child welfare, and immigration scored very low on trust in AI.
Whether individuals trust their government is also a key component of the AI discussion. The report’s findings show that the more people trust their public institutions, the more likely they are to support government use of AI. Where trust is already low, however, the use of artificial intelligence is met with skepticism.
Governments will also have to pay special attention to the data they use to train machines. If that data reflects human biases present in their communities, machines trained on it will reproduce and amplify those biases. Every effort should be made to understand the provenance of the data and the assumptions baked into it. Pilot projects can help avoid large-scale, costly missteps, and incorporating local residents and stakeholders in those pilots can go a long way toward improving trust and transparency.
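The report stops short of prescribing tooling, but as a minimal sketch of the kind of audit this implies, the Python snippet below checks whether historical decision labels are skewed across a demographic group before they are used for training. All column names, values, and the disparity threshold here are hypothetical illustrations, not drawn from the BCG report.

```python
import pandas as pd

# Hypothetical training data for an automated screening model.
# The labels are past human decisions, so any skew in them is
# exactly what a model trained on this data would learn.
df = pd.DataFrame({
    "region":  ["north", "north", "north", "south", "south", "south"],
    "flagged": [1, 1, 0, 0, 0, 1],
})

# Compare the historical flag rate across groups.
rates = df.groupby("region")["flagged"].mean()
print(rates)

# An arbitrary threshold: large gaps warrant a manual audit of
# how the data was collected before any model is trained on it.
if rates.max() - rates.min() > 0.2:
    print("Warning: labels are skewed across groups; audit data provenance.")
```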
The full report is available here.