LAS VEGAS, Nov 30 (Reuters) – Amazon.com Inc (AMZN.O) plans to roll out warning cards for software sold by its cloud computing division, amid continued concern that artificially intelligent systems can discriminate against different groups, the company told Reuters.
Similar to lengthy nutrition labels, Amazon’s so-called AI service cards will be public so its business customers can see the limitations of certain cloud services, such as facial recognition and audio transcription. The goal is to prevent misuse of its technology, explain how its systems work and manage privacy, Amazon said.
The company is not the first to issue such warnings. International Business Machines Corp (IBM.N), a smaller cloud player, did so years ago. Google, the No. 3 cloud provider owned by Alphabet Inc, has also released even more details about the datasets it used to train some of its AI models.
Still, Amazon’s decision to release its first three service cards on Wednesday reflects the industry leader’s bid to change its image after a public spat with civil liberties critics years ago left it looking like it cared less about AI ethics than its peers. The move will coincide with the company’s annual cloud conference in Las Vegas.
Michael Kearns, a professor at the University of Pennsylvania and since 2020 a researcher at Amazon, said the decision to issue the cards followed privacy and fairness audits of the company’s software. The cards would publicly address AI ethics issues at a time when technology regulation looms on the horizon, Kearns said.
“The biggest thing about this launch is the commitment to do it on an ongoing and expanded basis,” he said.
Amazon has chosen software addressing sensitive demographic issues as the starting point for its service cards, which Kearns expects to grow more detailed over time.
SKIN TONES
One of these services is called “Rekognition”. In 2019, Amazon challenged a study claiming the technology struggled to identify the gender of people with darker skin. But after the 2020 killing of George Floyd, an unarmed Black man, during an arrest, the company placed a moratorium on police use of its facial recognition software.
Now, Amazon says in a service card seen by Reuters that Rekognition does not support matching “images that are too blurry and grainy for the face to be recognized by a human, or that have large portions of the face occluded by hair, hands and other objects.” It also warns against face-matching on cartoons and other “non-human entities”.
In another warning card seen by Reuters, covering its audio transcription service, Amazon says, “Inconsistent editing of audio inputs could lead to unfair results for different demographics.” Kearns said accurately transcribing the wide range of regional accents and dialects in North America was a challenge that Amazon had worked on.
Jessica Newman, director of the AI Security Initiative at the University of California, Berkeley, said tech companies are increasingly releasing such disclosures as a signal of responsible AI practices, even though they still have a ways to go.
“We shouldn’t depend on the goodwill of companies to provide basic details about systems that can have a huge impact on people’s lives,” she said, calling for more industry standards.
Tech giants have worked hard to make these documents short enough for people to read, but detailed and up-to-date enough to reflect frequent software changes, said a person who worked on such labels at two major companies.
Reporting by Jeffrey Dastin in Las Vegas and Paresh Dave in Oakland; Editing by Bradley Perrett
Our standards: The Thomson Reuters Trust Principles.