Trust will be a key factor determining the level of success achieved by an AI strategy.
This article outlines steps that can be taken to assure AI deployments. Factors that could erode trust include:
People don’t accept the use of AI for a particular application. For example, the use of face recognition technology for law enforcement faces legal and civil liberties challenges worldwide.
Concerns about data privacy are more prevalent when considering AI applications. People aren’t just worried about their data being stolen but also about how it is used.
A perception, or belief, that the output from the AI isn’t objective due to an unintended bias in the algorithm. Concerns have been raised that facial recognition software doesn’t work as well for women because it is trained on data sets composed mostly of images of men. In the UK I have had discussions with clients about the possibility of a credit assessment algorithm or recruitment system unfairly excluding people from ethnic minorities, again because the data set is drawn mostly from a specific sector of the population.
The algorithm outputs false positive results that are not contextualized. The use of face recognition for law enforcement is a good example: even though the system may present numerous matches, this does not mean that each person matched is a suspect. In reality, face recognition shortlists the images that may contain the suspect; further examination is undertaken by a human before the police intervene. A minimal sketch of this shortlisting step follows this list.
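To make the shortlisting idea concrete, here is a minimal Python sketch. The Match class, the 0.85 threshold and the similarity scores are illustrative assumptions rather than any vendor's API; the point is only that matches above a similarity threshold form a queue for human review, not a list of suspects.

```python
# Sketch: a face-recognition match list treated as a shortlist for human review.
# Names, threshold and scores are illustrative assumptions, not a real system's API.

from dataclasses import dataclass

@dataclass
class Match:
    image_id: str
    similarity: float  # 0.0 (no resemblance) to 1.0 (identical)

SHORTLIST_THRESHOLD = 0.85  # assumed operating point, tuned per deployment

def shortlist_for_review(matches: list[Match]) -> list[Match]:
    """Return candidate matches that warrant human examination.

    A non-empty result means 'a person should look at these images',
    never 'these people are suspects'.
    """
    candidates = [m for m in matches if m.similarity >= SHORTLIST_THRESHOLD]
    return sorted(candidates, key=lambda m: m.similarity, reverse=True)

if __name__ == "__main__":
    raw_matches = [Match("img_001", 0.91), Match("img_002", 0.87), Match("img_003", 0.62)]
    for m in shortlist_for_review(raw_matches):
        print(f"{m.image_id}: similarity {m.similarity:.2f} -> refer to human reviewer")
```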
These risks can and should be mitigated proactively by implementing a Trust Management Forum that is independent of the AI projects. The forum should convene at regular intervals (e.g. monthly) to consider the following points for each AI development:
Can we explain how the algorithm works?
Can we defend the output?
What will our customers, staff, suppliers, the general public and investors think?
Are we operating within the law and in compliance with data regulations?
The forum should be chaired by an executive with responsibility for Trust and comprise representatives from the program and the business, as well as from legal, compliance, finance and the communications team.
Specific members of this group should be tasked with providing the voice of the external stakeholders. The output from the forum should be documented: the following questions provide a useful structure.
What types of decision result from this AI program?
Is there a mechanism to receive comments and concerns about the solution?
What data is used to train the AI algorithm? Does this data contain other information that could be used to infer a pattern?
How was the data labelled? By whom? Could there be some unintended human bias?
What process is in place for sampling results and detecting unintended outcomes? (A sketch of one such sampling check follows this list.)
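As an illustration of what a sampling process could look like, here is a minimal Python sketch that compares approval rates across groups in a batch of logged decisions. The group labels, sample data and the 0.8 ratio (the "four-fifths" rule of thumb sometimes used in disparate-impact screening) are assumptions for illustration; a real check would run on production decision logs and feed its findings to the Trust Forum.

```python
# Sketch: periodic sampling check for unintended outcomes in automated decisions.
# Group labels, sample data and the 0.8 ratio are illustrative assumptions.

from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group, approved) pairs -> approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        if ok:
            approved[group] += 1
    return {g: approved[g] / totals[g] for g in totals}

def flag_disparity(rates, min_ratio=0.8):
    """Flag groups whose approval rate falls below min_ratio of the best group's rate."""
    best = max(rates.values())
    return [g for g, r in rates.items() if r < min_ratio * best]

if __name__ == "__main__":
    sample = ([("group_a", True)] * 80 + [("group_a", False)] * 20 +
              [("group_b", True)] * 55 + [("group_b", False)] * 45)
    rates = approval_rates(sample)
    print(rates)                  # {'group_a': 0.8, 'group_b': 0.55}
    print(flag_disparity(rates))  # ['group_b'] -> escalate to the Trust Forum
```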
In conclusion, the risks are manageable, certainly for initial small-scale deployments. Building experience through these early implementations will make trust management easier as the use of AI scales. The key is having an independent governance structure, the Trust Forum, that proactively identifies potential trust issues and defines strategies to address them.
For practical advice about assuring AI deployments please get in touch: john@collaborative-ai.com