OpenAI CEO Sam Altman warns that competitors may not put safety limits on their ChatGPT-like tools


OpenAI’s CEO, Sam Altman, warns that competitors may be less concerned than OpenAI about building guardrails into their ChatGPT-like tools. While society enjoys many benefits from artificial intelligence (AI), he is concerned that some developers may put the technology to harmful use.

In an interview with ABC News, Altman said that “there will be other people who don’t put some of the safety limits that we put on.” He added that society has little time to figure out how to respond.

Last week, OpenAI unveiled GPT-4, a more capable AI model and an upgrade to the one behind its chatbot ChatGPT, which was released for public use in November.

Microsoft is a major investor in OpenAI, but other companies are building similar tools, creating serious competition for OpenAI.

The downside to artificial intelligence

OpenAI’s chief scientist and co-founder, Ilya Sutskever, said in an interview with The Verge that the number of companies trying to develop tools comparable to GPT-4 is a sign that the industry is growing.

He also explained why the company revealed so little about GPT-4’s inner workings. His remarks suggest that OpenAI’s competitors are catching up to its level.

Altman suggested that some of these competitors might not put safety limits on their ChatGPT-like tools.

“Society, I think, has a limited amount of time to figure out how to react to that, how to regulate it and how to handle it.”

— Sam Altman, OpenAI CEO, on generative AI

Phone scammers have already begun using the voice-cloning functions of AI tools to imitate the voices of people’s relatives and then ask their victims for money.

Altman said, “I’m particularly worried that these models could be used for large-scale disinformation.”

Last week, OpenAI also shared a “system card” document showing how testers deliberately gave GPT-4 dangerous prompts, such as how to make harmful chemicals from kitchen ingredients, and how the company addressed those issues before launch.

History of OpenAI

OpenAI was founded in 2015 with a focus on the safe development of AI. After Microsoft became a major investor, OpenAI moved from a non-profit to a capped-profit model in 2019.

Elon Musk, CEO of Tesla and Twitter and one of OpenAI’s co-founders, condemned the shift. According to him, OpenAI was originally designed to be an open-source platform, but with the change it has become closed source.

Musk described ChatGPT as “scary good” and warned that society is not far from dangerously powerful AI.

Altman has likewise been warning the public about the misuse of AI. He also said that institutions need the freedom to work out how to respond, and that legal constraints, along with enough time to understand and navigate the issue, are all important factors here.

Photo credits: Alexander Koch (Pixabay)
