Elon Musk and other experts have called for a halt in the development of Artificial Intelligence (AI), citing potential threats to society. In an open letter published on March 29th, Musk, along with AI specialists and industry executives, called for a six-month pause in the development of AI systems more powerful than OpenAI's recently released GPT-4.
The letter comes in response to the release of GPT-4 by Microsoft-backed OpenAI earlier this month, which has impressed many users with its capabilities. The signatories, however, argue that development of more powerful AI systems should proceed only once it has been established that they are beneficial and pose minimal risk to society.
According to the European Union's transparency register, major funders of the organization behind the letter include the Musk Foundation, the London-based group Founders Pledge, and the Silicon Valley Community Foundation.
Musk, who co-founded OpenAI and also heads Tesla, has expressed dissatisfaction with the current level of control over the Autopilot system and has called for AI to be developed in a way that benefits society. However, James Grimmelmann, a professor of digital and information law at Cornell University, has criticized Musk's stance as hypocritical, citing the shortcomings of the AI in Tesla's own cars.
Experts want the development of AI safety protocols
The open letter calls for a pause in the development of AI until shared safety protocols are available to ensure its safe and responsible use. It raises questions such as whether machines should be allowed to flood information channels with propaganda and untruths, and whether AI will displace or outsmart humans. The signatories emphasize that such decisions should not be left to unelected tech executives.
More than 1,000 individuals have signed the open letter, but notable absentees include OpenAI CEO Sam Altman, Alphabet CEO Sundar Pichai, and Microsoft CEO Satya Nadella. Altman had earlier warned that competitors might not place safety limits on their ChatGPT-like tools.