OpenAI’s New Chatbot Takes Internet By Storm

Chatbots have become one of the hotbeds of innovation in artificial intelligence over the past few years. They are also a prime example of AI adoption, as they can be incorporated into a wide variety of use cases. From lead generation for sales, to answering frequently asked questions, to engaging customers in support roles, chatbots have already proven to be a cornerstone of human-AI interaction.

With the release of ChatGPT, these bots are now ready for the next stage of development. On Thursday, OpenAI announced that it had trained and released a new model that interacts with humans using natural language. It uses a new training method and is based on the GPT-3.5 architecture, with a set of features that make it difficult for users to tell whether they are talking to an AI.

What distinguishes ChatGPT

Among the unique characteristics of ChatGPT is memory. The bot can recall what was said earlier in the conversation and relay it back to the user. This in itself sets it apart from competing natural language solutions, which are still largely stateless and handle each query in isolation.
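OpenAI has not published how this memory works internally, but the behaviour can be reproduced with a simple pattern: feed the accumulated transcript back to the model on every turn. The sketch below assumes a generic, hypothetical `generate` function standing in for the model call; it illustrates the idea rather than ChatGPT's actual implementation.

```python
# Minimal sketch of conversation "memory" via accumulated context.
# `generate` is a hypothetical placeholder for a text-generation backend,
# not OpenAI's actual interface.

def generate(prompt: str) -> str:
    # Placeholder: a real system would call a language model here.
    return f"(model response to {len(prompt)} chars of context)"

class Conversation:
    def __init__(self):
        self.turns: list[str] = []          # running transcript

    def ask(self, user_message: str) -> str:
        self.turns.append(f"User: {user_message}")
        # The whole transcript is fed back in on every turn, so earlier
        # statements remain visible to the model -- unlike a stateless,
        # query-by-query bot that only sees the latest question.
        prompt = "\n".join(self.turns) + "\nAssistant:"
        reply = generate(prompt)
        self.turns.append(f"Assistant: {reply}")
        return reply

chat = Conversation()
chat.ask("My name is Dana and I live in Lisbon.")
print(chat.ask("Where do I live?"))  # the earlier turn is still in context
```

The obvious cost of this approach is that the transcript competes for the model's fixed context window, which is why such memory eventually runs out in very long conversations.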

In addition to memory, ChatGPT is also trained to avoid giving answers on controversial topics. In our testing, it provides a standard answer to questions about personal opinions, questions of race and religion, and the purpose of existence. It also states that it does not have the ability to think independently and cannot engage in discriminatory behaviour. The bot also has filters to prevent users from asking it to generate text related to illegal or immoral activities.


This stands in stark contrast to previous chatbots built on LLMs, which — because of the material in their datasets — didn’t have any filters on the type of content they generated. This led to well-written responses to prompts on divisive topics, causing widespread controversy (see Facebook’s Galactica).

ChatGPT also allows users to provide corrections to any of its statements. This is an important part of the feedback loop that OpenAI wants to include in the public research preview of the bot, as it lets users interact directly with the bot and steer it towards the correct response. This may also help the bot avoid hallucinations, a phenomenon in which a large language model produces information that appears legitimate but is in fact little more than a plausible-sounding jumble of words.
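In practice, such a correction can be treated as just another turn in the transcript, so that subsequent answers are conditioned on it. The following is a minimal sketch under that assumption; the `generate` placeholder and the example exchange are invented for illustration.

```python
# Minimal sketch: a user correction is appended to the running transcript,
# so later answers are conditioned on it. `generate` is a hypothetical
# stand-in for the model call, not ChatGPT's actual interface.

def generate(prompt: str) -> str:
    return "(model response)"   # placeholder for a real model call

transcript = [
    "User: Who wrote Frankenstein?",
    "Assistant: Frankenstein was written by Percy Shelley.",   # wrong
]

# The user pushes back; the correction stays in context from now on.
transcript.append("User: No, Frankenstein was written by Mary Shelley.")
transcript.append("User: So, who wrote Frankenstein?")
answer = generate("\n".join(transcript) + "\nAssistant:")
```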

Limitations of the model

Despite all these developments, the model's capabilities are still limited by some drawbacks. The researchers have included fail-safes to prevent the model from generating factually incorrect information, training it to be more cautious when it does not have a definite answer. As can be seen in the example below, it simply declines to answer a question when it does not have enough information to give an accurate response.


Questions can also be rephrased to get around the filters the researchers put in place, as in the example below. When asked how to shoot a gun for self-defense, the bot declined to answer. However, when asked how to pull the trigger, the bot gave a clear and concise answer, followed by multiple disclaimers about the dangers of using a gun.

The model also struggles to infer the user's intent behind a particular question, and usually does not ask for clarification when that intent is ambiguous. Instead, it tends to guess what the user means.

Even with its limitations, ChatGPT represents a measured approach to building user-facing natural language generation systems. While the downsides of making these powerful models public have been widely discussed, the conversation about how to make them safer is only just beginning.

Towards safer AI

At every step, checks and safeguards prevent the model from being abused. On the client side, user prompts and model responses are filtered through the OpenAI Moderation API, which flags unsafe content before it reaches the user. This is all done through a single API call, and its effectiveness is evident in the guarded nature of ChatGPT's responses.
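For developers building similar guardrails, a prompt can be screened with a single call to the Moderation endpoint before it ever reaches the chat model. The sketch below uses OpenAI's public /v1/moderations endpoint and its `flagged` field; the surrounding `handle_prompt` and `call_chat_model` helpers are hypothetical glue, not part of ChatGPT itself.

```python
# Hedged sketch of screening a user prompt with OpenAI's Moderation
# endpoint before passing it to a chat model. The endpoint and the
# "flagged" field are part of OpenAI's public API; the downstream
# helpers here are illustrative placeholders.

import os
import requests

MODERATION_URL = "https://api.openai.com/v1/moderations"

def is_flagged(text: str) -> bool:
    resp = requests.post(
        MODERATION_URL,
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={"input": text},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["results"][0]["flagged"]

def call_chat_model(prompt: str) -> str:
    return "(model response)"        # placeholder for the actual model call

def handle_prompt(prompt: str) -> str:
    if is_flagged(prompt):
        return "Sorry, I can't help with that request."
    return call_chat_model(prompt)
```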


In addition, the model appears to be trained to avoid harmful and untruthful responses. The researchers learned from examples such as GPT-3 and Codex, which often give very unfiltered responses, and adjusted the model's behaviour during reinforcement learning from human feedback (RLHF) to prevent this from happening. While this approach is not perfect, the combination of it and other factors, such as the Moderation API and a relatively cleaner dataset, brings the model closer to deployment in sensitive environments such as education.
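At its core, RLHF optimises the model to earn a high score from a reward model trained on human preferences, while a KL-style penalty keeps it from drifting too far from the base model. The toy calculation below illustrates that trade-off with invented numbers; it is a conceptual sketch, not OpenAI's training code.

```python
# Toy illustration of an RLHF-style objective: prefer responses the reward
# model scores highly, while penalising drift from the base model.
# All numbers are invented for illustration.

import math

def kl_penalty(p_new: float, p_old: float) -> float:
    """Per-sample log-ratio log(p_new / p_old) used as a KL-style penalty."""
    return math.log(p_new / p_old)

def rlhf_objective(reward: float, p_new: float, p_old: float, beta: float = 0.1) -> float:
    # Maximise the reward-model score minus a penalty for moving too far
    # from the base model's behaviour.
    return reward - beta * kl_penalty(p_new, p_old)

# A harmful-but-fluent response: likely under the base model, but low reward.
print(rlhf_objective(reward=-1.0, p_new=0.30, p_old=0.25))
# A careful refusal: less likely under the base model, but high reward.
print(rlhf_objective(reward=+1.0, p_new=0.10, p_old=0.05))
```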

The feedback loop established by the researchers is also an important piece of the puzzle. It not only allows them to iteratively improve the model, but also lets them build a database of potentially problematic statements to avoid in the future.
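One plausible shape for such a database is a simple append-only log of flagged exchanges that can later be reviewed or folded into retraining data. The sketch below assumes a local JSONL file and invented field names; OpenAI has not described its actual pipeline.

```python
# Sketch of a feedback store for flagged exchanges. The file format and
# field names are assumptions for illustration, not OpenAI's pipeline.

import json, time

def log_feedback(prompt: str, response: str, label: str,
                 path: str = "feedback.jsonl") -> None:
    """Append one user-feedback record (e.g. label='problematic')."""
    record = {
        "timestamp": time.time(),
        "prompt": prompt,
        "response": response,
        "label": label,
    }
    with open(path, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(record, ensure_ascii=False) + "\n")

log_feedback("How do I pick a lock?", "(model response)", "problematic")
```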

In an era when technology companies often sacrifice safety for technological advancement, OpenAI's measured approach is a breath of fresh air. More companies should take this approach of releasing their LLMs to the public to gather feedback before treating them as finished products. Moreover, they should design their models with safety in mind, paving the way for a safer future for AI.
