The inside story of how ChatGPT was created, from the people who built it


Sandhini Agarwal: We have many steps ahead of us. I think the virality of ChatGPT has made a lot of issues we already knew about really surface and become critical – things we want to fix as soon as possible. For example, we know the model is still very biased. And yes, ChatGPT is very good at refusing bad requests, but it's also quite easy to write prompts that make it not refuse the things we wanted it to refuse.

Liam Fedus: It’s been fun to watch the diverse and creative applications from users, but we’re always focused on areas where we can improve. We think that through an iterative process of deploying, getting feedback, and refining, we can produce the most aligned and capable technology. As our technology evolves, new issues will inevitably arise.

Sandhini Agarwal: In the weeks since launch, we’ve looked at some of the worst examples people have found, the worst things people have seen in the wild. We assessed each of them and discussed how we should fix them.

Jan Leike: Sometimes it’s something that has gone viral on Twitter, but there are also people who reach out to us quietly.

Sandhini Agarwal: A lot of the things we found were jailbreaks, which is definitely a problem we need to address. But because users have to try these convoluted methods to get the model to say something bad, it isn’t something we completely missed, or something that was very surprising to us. Still, it is something we are actively working on now. When we find jailbreaks, we add them to our training and testing data. All of the data we are seeing feeds into a future model.

Jan Leike: Whenever we have a better model, we want to put it out and test it. We are very optimistic that targeted adversarial training can improve the jailbreaking situation a lot. It is not clear whether these problems will go away entirely, but we believe we can make jailbreaking much harder. Again, it’s not like we didn’t know that jailbreaking was possible before the release. I think it is very difficult to anticipate what the actual safety problems will be once a system like this is deployed. That’s why we put so much emphasis on monitoring what people are actually using the system for, seeing what happens, and then reacting to that. That doesn’t mean we shouldn’t proactively mitigate safety problems when we do anticipate them. But yes, it is very hard to foresee everything that will actually happen when a system enters the real world.

In January, Microsoft unveiled Bing Chat, a search chatbot that many believe is powered by an unannounced version of OpenAI’s GPT-4. (OpenAI says: “Bing is powered by one of our next-generation models, which Microsoft customized specifically for search. It incorporates advances from ChatGPT and GPT-3.5.”) The use of chatbots by tech giants with multibillion-dollar reputations to protect creates new challenges for the people tasked with building the underlying models.


