AI requires a contract of trust, says KPMG


The summer of AI in the UK is upon us. But only if we avoid the shadows of data bias and the cold winds of poorly designed algorithms, short-sighted goals and the mistaken belief in quick savings that I described in my previous report.

But what practical steps can policymakers actually take to accentuate the positives and avoid the negatives? And just as important, why should they care?

One answer is that this is not just about ethical behavior at the macro level – although that is vital in global and socially connected markets.

Leanne Allen is a Financial Services Data partner at consulting and services giant KPMG UK. Speaking this week at a Westminster eForum on next steps for AI, she explained that all organizations should use AI responsibly to make their businesses smarter and more responsive. In turn, other benefits will accrue – both economic and social. She says:

Consumers, investors, and society at large demand much more from organizations in all sectors. Whether it’s the benefit of better, frictionless services for consumers, the fruits of industry experiments with greater personalization, or the desire for industries such as financial services to do more to fight inequality and promote sustainable finance.

Business expectations for innovation and creating real value from data and new technologies continue to grow rapidly. And the adoption of advanced techniques such as machine learning [ML] and AI gives organizations the edge to meet those demands.

In that sense, improving what Allen calls the “customer experience journey” is just as important as a general desire for ethical behavior, because in the long run it makes companies more considerate and sustainable, she suggested:

It’s about making better and faster decisions. It’s increased accuracy, which means better understanding of customers and leads to improved products and services. These are things like risk pricing or more accurate product pricing, and the possibility of a step change in operational efficiency. And this, of course, has been very beneficial for organizations by reducing internal costs.

Thus, in Allen’s view, there is “no controversy” about the benefits to organizations and customers of using big data analytics and AI. But users should avoid getting carried away with all these new possibilities. This is where the real danger lies, she warned:

With all this potential comes new and increased risk. The fact is, without proper controls and governance over the design and use of advanced techniques, we have already started to see unintended harm.

Unfair biases in the outputs of decision models cause financial harm to consumers and can damage the reputation of organizations. Unfair pricing practices result in whole groups of society being excluded from insurance, and this removes access to risk pooling.

Selling inappropriate or low-value products and services to customers is another example, as are targeted advertising, dynamic pricing, and “purpose drift” in data usage, which have led to non-compliance with existing data protection laws.

These are just a few examples of the misdeeds and challenges facing the industry.

That’s quite a list of downsides. And the knock-on effect is a loss of trust between consumers/citizens and whoever holds their data. Such repercussions can have far-reaching consequences for people’s credit histories and financial inclusion, for example.

That’s why policymakers should never – deliberately or unintentionally – jeopardize consumer confidence in the pursuit of easy wins, Allen said:

Trust is the determining factor in the success or failure of an organization. So, as companies keep pace, transforming their businesses to become more data- and information-driven, they need to focus on building and maintaining that trust.

We see many organizations launching their own initiatives to put in place governance and controls around the use of big data and AI. But the pace of progress varies.

Typically, we see financial services leading the way, and these organizations set their own ethical principles. They operationalize them and take a risk-based approach, aligning with core principles such as fairness, transparency, “explainability” and accountability. Collectively, these actively promote trust.

The “true north” of business ethics

Yet in a deepening recession, struggling consumers might take the idea that financial services are leading the charge toward a fairer society with a pinch of salt. But let’s hope the companies are sincere.

For Ian West, a partner who heads telecommunications, media and technology in another part of KPMG’s UK business, trust is the “golden thread” of business. He added:

We need to ensure companies are ready to deploy AI responsibly. KPMG distills the actions needed to steer an organization toward the “true north” of corporate and civic ethics by articulating five guiding pillars for ethical AI.

Talk about mixing up your metaphors! But West (or is that North?) continued:

First, it is essential to start preparing employees now. The most immediate challenge for businesses when implementing AI is workplace disruption. But organizations can prepare for it by helping employees effectively adapt to the role of machines in their work early in the process.

There are many ways to do this. But it’s worth considering partnering with academic institutions to create programs that meet skills needs. This will help educate, train, and manage the new AI-enabled workforce, and will also aid in mental well-being.

Second, we recommend developing strong oversight and governance. So there must be company-wide policies around the deployment of AI, especially around data usage and privacy standards. And that comes down to the trust challenge. AI stakeholders need to trust the business, so it’s crucial that organizations fully understand the data and frameworks that underpin their AI in the first place.

Third, autonomous algorithms raise concerns about cybersecurity, which is one reason the governance of machine learning systems is an urgent priority. Strong security should be built into the creation of algorithms and data governance. And of course, we could have a bigger conversation about quantum technologies in the medium term.

Fourth, there is the unfair bias that can arise in AI without proper governance or controls to mitigate it. Leaders should strive to understand the workings of sophisticated algorithms that can help eliminate this bias over time.

The attributes used to train the algorithms should be relevant, fit for purpose, and permissible. It is arguably worth having a team dedicated to this, as well as setting up independent reviews of critical models. Biases can have a negative social impact.

And fifth, companies need to increase transparency. Transparency underpins all the previous steps. Don’t just be transparent with your staff – of course this is very important – but also give customers the clarity and information they want and need.

Think of it as a contract of trust.

My take

Well put. The important lesson, then, is not to sacrifice user trust in your quest for competitive advantage. Take your customers with you on a shared journey. Help them see how you make their lives better, and your business smarter.
