What an AI-powered World Cup obscures

Things this World Cup is loaded with so far: Geopolitical intrigue and controversy. Messy soccer-world drama. Unlikely English goals in the first half.

And, of course: a list of trending AI apps.

Wait, what?

FIFA boasts an AI-powered decision-making system that will use sensors in the match ball itself to help determine calls. A vast network of facial-recognition-capable cameras will monitor the crowds, using technology from the same family as that deployed by the controversial firm Clearview AI. AI-powered sensors in the stadiums will even help control the climate.

Which all looks very cool. But it also raises the question: is all of this really “AI”? And if so, how can the same technology power such a disparate list of applications, let alone generate surreal art or pre-made legal documents?

In a sense, the hype around this World Cup is just a marketing push from the host country and the organization. Qatar is proud to have used its (relatively) new natural gas fortune to propel itself into the ranks of other wealthy Gulf states like Saudi Arabia and the United Arab Emirates, and FIFA has aggressively played up its high-tech additions to the game.

This buzzy invocation of AI is the flip side of mounting anxiety about the technology among industry watchdogs. Both ways of thinking about AI tend to lump very different issues together into one big topic. And both point to a larger question: how is the public supposed to think about AI?

One reason that matters a lot right now: politics has finally discovered AI. The Biden administration is trying to nudge the field toward its preferred values and practices with the AI Bill of Rights. Europe is doing the same, but with statutory teeth. Governments are striving to regulate AI more slowly than the technology itself develops, but faster than the layman’s understanding of it. This poses a political problem, as the marketing “wow factor” around AI increasingly obscures how it really works and how it affects our lives, leaving the public relatively clueless about the regulatory decisions being made.

“If football’s first yellow line appeared today rather than in 1998, they would say it was generated by AI,” said Ben Recht, a professor in the Department of Electrical Engineering and Computer Science at the University of California, Berkeley, who has written extensively on AI and machine learning. “AI has become nothing more than a marketing term for ‘the things we do automatically with computers.’”

The story of what artificial intelligence really is may be beyond the scope of this afternoon bulletin. Mathematics and computer science historian Stephanie Dick described the term’s long semantic drift in a 2019 essay for the Harvard Data Science Review, which traced the field’s roots in computerized attempts to model human intelligence. As the field moved away from that effort and toward powerful machine learning systems like those that power DALL-E or GPT-3, the original branding remained, masking the actual functions of these systems behind a fog of hype and sci-fi speculation about sentient machines or human-like “artificial general intelligence.”

We have now come to use AI as an umbrella term for systems that, as computer scientist Louis Rosenberg put it when I spoke to him, “process large datasets, find patterns in those datasets, and then use those patterns to make predictions or glean insights.”
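That definition — fit patterns to past data, then extrapolate — is simple enough to sketch in a few lines. Here is a minimal, purely illustrative example in Python (the numbers are invented; real systems use far larger datasets and far more elaborate models, but the loop is the same):

```python
# A toy version of "find patterns in data, then make predictions":
# fit a straight line y = a*x + b to past observations, then
# extrapolate to the next point. Data is invented for illustration.

def fit_line(xs, ys):
    """Ordinary least squares fit for y = a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var          # slope: the "pattern" the data exhibits
    b = mean_y - a * mean_x  # intercept
    return a, b

# Hypothetical past readings (e.g., a sensor sampled over time).
xs = [0, 1, 2, 3, 4]
ys = [1.0, 3.1, 4.9, 7.2, 9.0]

a, b = fit_line(xs, ys)
prediction = a * 5 + b  # "learned insight": the expected next reading
```

Swap the hand-rolled line fit for a neural network with billions of parameters and the invented readings for ball-tracking sensor data or stadium camera feeds, and you have, in caricature, the systems being marketed in Qatar.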

In other words, the application of AI to a soccer ball or an AC system is (slightly) demystified. But that only scratches the surface of how these machine learning systems insinuate themselves into our lives. The political discourse around AI currently focuses on much larger issues, like systemic biases seeping into decision-making systems, unchecked facial-recognition surveillance like that currently deployed in Qatar, or data collection without consent.

These are the kinds of issues that appear in the Biden administration’s new AI policy, but there is still a huge understanding gap between policymakers and the public. A Stanford report written last year noted that “accurate science communication has not engaged a wide enough range of audiences to gain a realistic understanding of the limitations, strengths, social risks, and benefits of AI,” and that “Given the historical boom/bust pattern in public support for AI, it is important that the AI community not overemphasize specific approaches or products and create unrealistic expectations” — a dynamic likely not helped by the World Cup hype machine.

And while guidelines like those from the Biden administration can be helpful, they are still… just guidelines. There are still few, if any, laws in place to prevent the kind of AI-induced damage that could continue under the radar amid a general fog of curiosity and misunderstanding — which makes public understanding of the technology a much bigger issue than it might at first appear.

“First, AI is not a form of magic, and second, we are not on a predetermined path in terms of where the technology is going and what we do with it,” said Maximilian Gahntz, a senior policy researcher at the Mozilla Foundation. “As consumers, people can vote with their feet if they have the information to make informed choices about products and services that use AI. And as voters, people can push for tech companies and those deploying AI to be held accountable.”

Yet another gee-whiz use for AI: time travel.

Well, sort of. Game writer and designer Merritt K is currently crowdfunding a book called LAN Party, a coffee-table photo book with an angle that mixes the history of technology with its future: using the image upscaler Gigapixel AI to restore and enhance photos of 1990s computer gaming sessions, where gamers gathered to network their computers in person before the advent of online gaming.

The photos reveal a bygone era of computing decidedly different from the one we inhabit today: in addition to the cultural trappings of 1990s nerd-dom, they show, as Merritt put it in an interview with Ars Technica, “a sheer anarchy of cases, desk layouts and diverse approaches to construction.”

“Some people might say, ‘Oh, it’s just a bunch of idiots having fun.’ But that’s a lot of what culture is, what human history is: idiots having fun,” she also told Ars Technica — and the point is well taken, given the extent to which gaming has driven graphical development in the 21st century, including fueling the development of the metaverse. And now that AI is powerful enough to help archivists uncover the past, the implications go well beyond Merritt’s Clinton-era gaming history: Gigapixel has been used to enhance historic film footage and to restore and enhance pre-color-era snapshots.

Who’s Afraid of Gary Gensler?

POLITICO’s Declan Harty asks that question today, reporting on how the ambitious SEC chairman is using this near-apocalyptic moment for the crypto world to bolster his regulatory agenda. FTX’s collapse has seriously endangered a bill backed by FTX founder Sam Bankman-Fried that would have put crypto under the purview of the Commodity Futures Trading Commission, a move widely seen as friendlier to the industry than Gensler’s push to give oversight to the SEC.

The balance of power could soon tip toward the SEC. An unnamed source told Declan that “the SEC has encouraged crypto exchanges to register with the agency on a voluntary basis because officials want to avoid litigation with a large segment of the industry they believe breaks the rules,” and that “the agency is likely to begin taking legal action against digital asset exchanges in 2023, given that it takes about two years to build a case.”

This ambition, however, faces its own obstacles. Declan reports that the SEC itself has been strained by the increased workload and a push to return to the office, and the crypto industry is unlikely to take the threat of more intense regulation lying down — Kristin Smith, president of the industry group the Blockchain Association, told us in a statement that the SEC’s reported plans were “nothing new” and posed “a threat to the United States’ lead in the global race to capitalize on the digital asset economy.”

Stay in touch with the whole team: Ben Schrecker ([email protected]); Derek Robertson ([email protected]); Steve Heuser ([email protected]); and Benton Ives ([email protected]). Follow us @DigitalFuture on Twitter.

If you have been forwarded this newsletter, you can register and read our mission statement at the links provided.
