OpenAI has released a prototype general-purpose chatbot that demonstrates a fascinating array of new capabilities but also shows weaknesses familiar from the rapidly evolving field of text-generating AI. You can test the model out for yourself here.
ChatGPT is adapted from OpenAI’s GPT-3.5 model but trained to provide more conversational responses. While GPT-3 in its original form simply predicts what text follows a given string of words, ChatGPT tries to answer user queries in a more human-like way. As you can see from the examples below, the results are often surprisingly smooth, and ChatGPT is able to engage with a huge range of topics, demonstrating big improvements over chatbots seen even a few years ago.
But the software also fails in ways familiar from other AI chatbots, with the bot often confidently presenting false or made-up information as fact. As some AI researchers put it, these chatbots are essentially “stochastic parrots”: their knowledge comes only from statistical regularities in their training data, rather than from any human-like understanding of the world as a complex and abstract system.
As OpenAI explains in a blog post, the bot was created with the help of human trainers who ranked and rated how early versions of the chatbot responded to queries. This information was then fed back into the system, which adjusted its answers to match the trainers’ preferences (a standard method of AI training known as reinforcement learning).
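OpenAI’s post doesn’t include training code, but the core idea of learning from human rankings can be sketched in a few lines. The toy example below is purely illustrative and entirely made up (the feature vectors, the preference labels, the linear model are all assumptions for the sake of the sketch); it just shows how a simple “reward model” can be fit to pairwise preferences so that the responses trainers liked end up scoring higher, which is the kind of signal that then guides the reinforcement-learning step in a pipeline like this.

```python
# Toy sketch of learning from pairwise preferences (the idea behind a
# "reward model" in this kind of training). Illustrative only: the feature
# vectors and preference data below are invented, not OpenAI's.
import numpy as np

rng = np.random.default_rng(0)

# Pretend each candidate response is summarized by a small feature vector.
preferred = rng.normal(loc=1.0, size=(100, 4))  # responses trainers ranked higher
rejected = rng.normal(loc=0.0, size=(100, 4))   # responses trainers ranked lower

w = np.zeros(4)   # parameters of a linear "reward model"
lr = 0.1

for _ in range(200):
    # Bradley-Terry-style objective: the preferred response should score higher.
    margin = preferred @ w - rejected @ w
    p = 1.0 / (1.0 + np.exp(-margin))            # P(preferred beats rejected)
    grad = ((p - 1.0)[:, None] * (preferred - rejected)).mean(axis=0)
    w -= lr * grad                               # gradient step on -log p

# The learned reward now separates preferred from rejected responses; in a
# full pipeline that reward would steer the chatbot via reinforcement learning.
print("mean reward gap:", float((preferred @ w - rejected @ w).mean()))
```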
The bot’s web interface notes that OpenAI’s goal in putting the system online is “to get external feedback in order to improve our systems and make them safer.” The company also says that although ChatGPT has certain safeguards in place, “at times the system may generate incorrect or misleading information and produce offensive or biased content.” (And indeed, it does!) Other caveats are that the bot has “limited knowledge” of the world after 2021 (likely because its training data is much sparser after that year) and that it will try to avoid answering questions about specific people.
Enough preamble, though: what can this thing actually do? Well, a lot of people tested it with coding questions and claimed that its answers were perfect:
ChatGPT can also apparently write some pretty patchy TV scripts, even combining actors from different sitcoms. (Finally: that “I forced a bot to watch 1,000 hours of show x” meme is getting real. Artificial general intelligence is the next step.)
It can explain various scientific concepts:
And it can write basic academic essays (such systems will cause big problems for schools and universities):
And the bot can combine its areas of knowledge in all sorts of interesting ways. So, for example, you can ask it to debug a string of code… like a pirate, to which its response begins: “Arr, ye scurvy landlubber! You’re making a big mistake with that loop condition you’re using!”
Or ask it to explain the bubble sort algorithm like a wise gangster:
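(For readers who haven’t met it, bubble sort itself is simple enough to show in plain Python. The sketch below is my own, not ChatGPT’s gangster-voiced output: it repeatedly sweeps the list, swapping adjacent items that are out of order, until a full pass makes no swaps.)

```python
# Bubble sort, the algorithm ChatGPT is asked to explain above: sweep the
# list repeatedly, swapping adjacent out-of-order items, until nothing moves.
def bubble_sort(items):
    arr = list(items)                 # don't mutate the caller's list
    for end in range(len(arr) - 1, 0, -1):
        swapped = False
        for i in range(end):
            if arr[i] > arr[i + 1]:
                arr[i], arr[i + 1] = arr[i + 1], arr[i]
                swapped = True
        if not swapped:               # no swaps means the list is already sorted
            break
    return arr

print(bubble_sort([5, 1, 4, 2, 8]))   # -> [1, 2, 4, 5, 8]
```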
ChatGPT also has a fantastic ability to answer basic trivia questions, though the examples are so boring I won’t paste any here. This has led many to suggest that AI systems like this could one day replace search engines. (Something that Google itself has explored.) The thinking goes: chatbots are trained on information scraped from the web, so if they can present that information accurately but in a more fluid and conversational tone, wouldn’t that represent a step forward over traditional search? The problem, of course, lies in that “if”.
Here, for example, someone confidently declares that Google is “done”.
And here’s someone else saying the code ChatGPT provided in the answer above is garbage:
I’m not a programmer myself, so I won’t pass judgment on this specific case, but there are plenty of examples of ChatGPT confidently asserting blatantly false information. Here is computational biology professor Carl Bergstrom asking the bot to write a Wikipedia entry about his life, for example, which ChatGPT does with aplomb — while including several entirely bogus biographical details.
Another interesting set of flaws appears when users try to get the bot to ignore its safety training. If you ask ChatGPT about certain dangerous topics, like how to plan the perfect murder or make napalm at home, the system will explain why it can’t give you the answer. (For example, “I’m sorry, but it’s not safe or appropriate to make napalm, which is a highly flammable and dangerous substance.”) But you can trick the bot into producing this kind of dangerous information with certain workarounds, like asking it to pretend it’s a character in a film or to write a script about how AI models shouldn’t respond to these kinds of questions.
It’s a fascinating demonstration of how hard it is to get complex AI systems to act exactly the way we want them to (otherwise known as the AI alignment problem), and for some researchers, examples like those above only hint at the problems we’ll face when we give more advanced AI models more control.
Overall, ChatGPT is certainly a huge improvement over earlier systems (remember Microsoft’s Tay, anyone?), but these models still have critical flaws that require further exploration. OpenAI’s position (and that of many others in the AI field) is that finding flaws is exactly the point of such public demos. The question then becomes: when will companies start pushing these systems into the wild? And what will happen when they do?