Effective altruism pushes a dangerous brand of 'AI safety'

Since then, the race to build ever larger language models has accelerated, and many of the dangers we warned about, such as the mass production of hateful text and misinformation, continue to unfold. Just a few days ago, Meta released its “Galactica” LLM, which is purported to “summarize academic papers, solve math problems, generate Wiki articles, write scientific code, annotate molecules and proteins, and more.” Just three days later, the public demo was pulled after researchers generated “research papers and wiki entries on a wide variety of subjects ranging from the benefits of suicide, eating crushed glass, and anti-Semitism, to why homosexuals are bad.”

This race didn’t stop at LLMs but moved on to text-to-image models like OpenAI’s DALL-E and StabilityAI’s Stable Diffusion, models that take text as input and generate images based on that text. The dangers of these models include the creation of child pornography, the perpetuation of bias, the reinforcement of stereotypes, and the mass spread of disinformation, as many researchers and journalists have reported. However, instead of slowing down, companies are removing the few safety features they had in their race to outdo one another. For example, OpenAI had restricted the sharing of photorealistic generated faces on social media. But after newly formed startups like StabilityAI, which reportedly raised $101 million at a whopping $1 billion valuation, called such safety measures “paternalistic,” OpenAI removed those restrictions.

With the founding and funding of EA-aligned institutes, companies, think tanks, and research groups at elite universities dedicated to the brand of “AI safety” popularized by OpenAI, we are about to see a further proliferation of harmful models presented as steps toward “beneficial AGI.” And the influence starts early: effective altruists provide “community building grants” to recruit on major college campuses, with EA chapters developing curricula and teaching AI safety courses at elite universities like Stanford.

Last year, Anthropic, which describes itself as an “AI safety and research company” and was founded by former OpenAI VPs of research and safety, raised $704 million, with most of its funding coming from EA billionaires like Tallinn, Moskovitz, and Bankman-Fried. An upcoming workshop on “AI safety” at NeurIPS, one of the largest and most influential machine learning conferences in the world, is also advertised as being sponsored by the FTX Future Fund, Bankman-Fried’s EA-focused charity, whose team resigned two weeks ago. The workshop advertises a $100,000 “best paper award,” an amount I have not seen in any academic discipline.

Research priorities follow funding, and given the large sums of money being poured into AI in support of an ideology with billionaire adherents, it is no surprise that the field has been moving in a direction that promises an “unimaginably great future” around the corner while proliferating products that harm marginalized groups in the present.

We can create a technological future that serves us instead. Take, for example, Te Hiku Media, which created language technology to revitalize te reo Māori, creating a data license “based on the Māori principle of kaitiakitanga, or guardianship,” so that any data taken from Māori communities benefits them first. Contrast this approach with that of organizations like StabilityAI, which scrape artists’ works without their consent or attribution while claiming to build “AI for the people.” We need to liberate our imagination from the one that has been sold to us thus far: saving ourselves from a hypothetical AGI apocalypse imagined by a privileged few, or the ever-elusive techno-utopia promised to us by Silicon Valley elites.
