In recent years, AI systems have moved from processing content (recognizing faces or reading and transcribing text) to creating it, generating digital paintings or writing essays. Digital artist Beeple was shocked in August when several Twitter users generated their own versions of one of his paintings with AI-powered tools. Similar software can create music and even videos. The umbrella term for all of this is “generative AI,” and as this latest breakthrough in our digital future becomes part of our present, some familiar tech industry challenges, like copyright and social harms, are already reemerging.
We’ll likely look back on 2022 as the year generative AI exploded into mainstream attention, as image-generating systems from OpenAI and open-source startup Stability AI were released to the public, sparking a flood of fantastic pictures on social media.(1) Breakthroughs are coming thick and fast. Last week, researchers at Meta Platforms Inc. announced an AI system capable of successfully negotiating with humans and generating dialogue in a strategy game called Diplomacy. Venture capital investment in the field reached $1.3 billion in deals this year, according to data from research firm PitchBook, even as it contracted for other areas of technology. (Deal volume was up almost 500% from 2021.)
Companies that sell AI systems to generate text and images will be among the first to make money, says Sonya Huang, a partner at Sequoia Capital who released a “map” of generative AI companies that went viral this month. One particularly lucrative area will be gaming, already the largest category of consumer digital spending.
“What if the game was generated by whatever your brain could imagine, and the game grew as it went?” asks Huang. Most generative AI startups rely on a few popular AI models that they pay to access or get for free. OpenAI, the artificial intelligence research company co-founded by Elon Musk and majority-funded by Microsoft Corp., sells access to its DALL-E 2 image generator and GPT-3 text generator. (The next iteration of the latter, known as GPT-4, is said by its developers to be incredibly adept at mimicking human jokes, poetry, and other forms of writing.)
But this progress will not continue unhindered, and one of the thorniest issues to resolve is copyright. Typing “a dragon in the style of Greg Rutkowski” will produce artwork that looks like it came from the named digital artist, who creates fantasy landscapes. Rutkowski derives no financial benefit from this, even when the generated image is used for commercial purposes, a practice the artist has publicly complained about.
Popular image generators such as DALL-E 2 and Stable Diffusion are protected by the US fair use doctrine, which relies on freedom of expression as a defense for the use of copyrighted works. Their AI systems are trained on millions of images, including Rutkowski’s, so arguably they profit directly from the original work. But copyright lawyers and technologists are divided on whether artists will ever be compensated.
In theory, AI companies could eventually copy the licensing model used by music streaming services, but AI decisions are generally inscrutable: how would they track usage? One solution could be to compensate artists whenever their name appears in a prompt, but it would be up to AI companies to set up this infrastructure and police its use. Turning up the pressure is a copyright class-action lawsuit against Microsoft Corp., GitHub Inc., and OpenAI over a code-generating tool called Copilot, a case that could set a precedent for the broader field of generative AI.
Then there is the content itself. If AI is generating information faster than humanly possible, including, inevitably, pornography, what happens when some of it is harmful or misleading? Facebook and Twitter have actually improved their ability to clean up misinformation on their sites over the past two years, but text-generating tools like OpenAI’s could set those efforts back. The problem was recently highlighted by a new tool from Facebook parent Meta itself.
Earlier this month, Meta unveiled Galactica, a science-focused language system that could write research papers and Wikipedia articles. Within three days, Meta shut it down. Early testers found it generated nonsense that looked dangerously realistic, including instructions on how to make napalm in a bathtub and Wikipedia entries on the benefits of being white or on how bears live in space. The insidious effect was that facts were blended so finely with hogwash that it was hard to tell the two apart. Political and health-related misinformation is hard enough to track when written by humans. What happens when it’s generated by machines that sound more and more like people?
This could turn out to be the biggest mess of all.
More from Bloomberg Opinion:
• Our future AI overlords need a resistance movement: Parmy Olson
• AI can help make cryptocurrency safer for everyone: Tyler Cowen
• US chip curbs highlight cracks in Chinese AI strategy: Tim Culpan
(1) The advent of the transformer model is one of the technological milestones that sparked the rise of generative AI. First proposed in a 2017 paper by Google researchers, transformer models took less time to train and could underpin better AI systems for generating language.
This column does not necessarily reflect the opinion of the Editorial Board or of Bloomberg LP and its owners.
Parmy Olson is a Bloomberg Opinion columnist covering technology. A former journalist for the Wall Street Journal and Forbes, she is the author of “We Are Anonymous”.
More stories like this are available at bloomberg.com/opinion