A sign on Queen Street in Cardiff city center warning that facial recognition is being used by South Wales Police on August 25, 2022 in Cardiff, United Kingdom.
Photo: Matthew Horwood/Getty Images
A growing community of AI ethicists is helping companies develop codes of conduct on the use of AI in their business operations. Charles Radclyffe is the CEO of EthicsGrade Limited, an ESG rating agency focused on AI governance. As part of our ongoing series on the different aspects of using AI in the workplace, we talked to him about why AI should be treated as an ESG issue.
RADCLYFFE: It is helpful to frame this question in ESG terms, because ESG is all about the risks and off-balance-sheet liabilities that organizations face and that investors are therefore exposed to. And just as any balance sheet has both assets and liabilities, an AI system is potentially an asset to an organization, but it can also be a liability.
The governance of AI therefore falls very much within the ESG domain. Other people focus on climate risk, decarbonization or biodiversity; I happen to focus on a very different niche within ESG, one that few people cover.
The business community doesn’t like to use the word “ethics”; it would prefer a synonym such as sustainability. But in reality we are talking about ethics, because I am trying to shed light on how aligned organizations are with the values of their stakeholders – and whether they actually do what they say is important to them.
EDGE: In your experience, to what extent do most companies think about the ethical side of AI governance, rather than just the business opportunity that AI represents?
RADCLYFFE: I would say we’re getting closer to a tipping point, where this goes from being a side issue to being something that’s really cutting edge, but we are not there yet. Ten years ago, nobody cared at all about the risks. It was all about how we could use this technology to do the things that people do today, but much faster, and how we could bring together sets of data to extract value that they wouldn’t have had by themselves.
The Cambridge Analytica scandal was the “hole in the ozone layer” moment, when ordinary people suddenly realized the dark side of these technologies but did not really begin to change their behavior. That change is still to come, but what we’re starting to see now, certainly within the EU, is a lot more proposed regulation of the tech industry.
You don’t need to understand AI to behave well
EDGE: In order to implement an ethical code of conduct, do you need to understand the technology itself? Is that getting harder and harder to do?
RADCLYFFE: No, I don’t think it’s necessary at all. One of the interesting things about the draft EU AI regulation is that it tries to define AI, but in very, very broad terms. That was deeply unsatisfying to most of the tech community precisely because it was so broad and vague.
The EU Commission insists that AI systems in the EU be developed with two things in place. First, a risk management process: an organization needs to understand whether its use of AI in its own domain is high risk or not. And second, if it is high risk, a minimum standard of compliance with what the EU sets out in its quality management requirements.
A company must implement these quality management controls not only around its own technologies, but also build them into its relationships with its suppliers and its procurement. That’s something we haven’t really seen in the use of technology so far.
Ultimately, good risk management controls and quality management processes are something responsible organizations should strive for, whether they use AI, quantum computing, other magical technologies, or just Excel or paper tools. I don’t think the question of defining what AI is or isn’t is particularly relevant here.
EDGE: But to build effective risk management, you need to understand the risks that AI could generate. If we don’t know why a computer does something, it’s hard to predict how it will behave.
RADCLYFFE: That is very true. The Commission makes this easier by defining certain categories of activity that will always be high risk – for example, using AI in credit decisions about an individual, or in HR systems, like what will be regulated in New York from January 2023. But what regulators are looking at is: have you thought about what is high risk, medium risk and low risk for your organization? And in the areas you deem to be high risk, does that trigger the requirements of the law?
The trap that many organizations fall into is that they end up trying to create a set of rules or principles that apply to the whole organization, and especially to the engineers who build this stuff. And then, lo and behold, the engineers don’t quite understand what it requires of them. And when that makes headlines, it’s deeply embarrassing for the company.
We can borrow a lot of best practice from the ESG community. One of the key elements of the ESG discipline relates to how an organization engages with stakeholders: first, to what extent it identifies who its stakeholders are; second, to what extent it understands what those stakeholders are most interested in; and third, how it handles the points of tension and the conflicts between those needs.
EDGE: Is there enough commonality between different industries and different ways of using AI to have a common code of conduct?
RADCLYFFE: I think a single code of conduct is not particularly helpful. And that’s usually where organizations start to get into what I would call ethics-washing territory.
What an organization needs to do instead is identify who its stakeholders are, then find a way to identify what is of most concern to each of those stakeholder groups, and then look for the points of tension between those concerns and devise strategies to resolve them.
Let me give you an example.
Say you want to introduce facial recognition, as a hypothetical example, to let people enter office premises without contact. Rather than having a swipe card to get into the office in the morning, there is just a camera that recognizes your face and opens the gate for you.
In a place like China, for example, no one would push back against such a system, but in Germany you can imagine there would be a lot of controversy about it. What you need to do in this situation is be able to talk about the use of AI, identify the stakeholders who would be affected, hear their concerns, and articulate your response. An international organization may say, we’re just not going to do it, for those reasons – and it should articulate the why along with its decision.
Then another situation arises where the organization might want to use facial recognition systems to secure the perimeter of its factories. Because that is a different use case, one could argue that the security concerns outweigh the stakeholders’ privacy concerns, and so facial recognition is acceptable for the security use case – but not for the use case of letting employees enter the building for convenience. Again, the decision and the rationale must be articulated.
You now have two examples, which is very, very useful for engineers to triangulate which side of the line a third, fourth or fifth example falls on. Humans are really good at filling in the blanks – we’re just not very good at explaining whether we’ve achieved fairness or justice, and we’re much more likely to twist definitions if you take a purely principles-based approach.
And you don’t need many of those use cases. Five or 10 well-thought-out examples are enough for an engineering team to use that case law, so to speak, to take any other use case and figure out how best to govern it.