
OpenAI Calling For AI Regulation Is A Solid Step In No Direction

The Reason Behind OpenAI's Call for AI Regulation

OpenAI, the company behind the well-known AI image generator DALL-E and the chatbot ChatGPT, has published a post on its website titled “Governance of Superintelligence” in an attempt to demonstrate that it takes both the hazards and the potential of AI technology seriously.

According to OpenAI’s post, a worldwide AI regulatory body comparable to the International Atomic Energy Agency (IAEA) is needed to address the risks posed by AI technology. However, the post’s lack of detail may damage their cause. OpenAI’s stated focus on assisting people, rather than on controlling its own technology, also sits uneasily with a call for regulation.

Since 2019, OpenAI hasn’t been fully transparent about the language models behind its chatbots. The company is reportedly working on an open-source model, although it will most likely be a pale imitation of GPT. OpenAI began as a nonprofit but is now a for-profit firm with a $30 billion valuation, which might explain why the blog post reads more like marketing copy than a white paper.


What is the real threat of AI technology?

The main danger of AI is not the technology itself but its potential to flood the world with fake news and images, making it impossible for people to tell what is true. Around the time the post was published, for example, AI-generated images appearing to show the Pentagon and the White House on fire circulated online, sending a brief shock wave through the stock market. At first glance, the episode seems to underline the need for more regulation and control of the AI business. But the real issue was not AI; it was a lack of trust and safety measures on social media, which allowed an account to impersonate a well-known news organisation and spread alarming images.

Yet for a business that seeks to market AI, this draws attention to an unfavourable fact: the problem with the images was not caused by AI at all. They weren’t even particularly convincing; anyone could produce fake photos of fires at iconic locations using Photoshop. Local officials swiftly confirmed that no explosions had occurred, and the stock market quickly recovered.

The only real issue was that the photographs went viral on a social media platform from which trust and safety features had been stripped. The account that first circulated them called itself “Bloomberg Feed,” and it was paying $8 per month for a blue checkmark, which no longer indicates that an account has been verified.

Conclusion

The short version: AI is risky, just as nuclear power was, and perhaps we do need a global organisation to oversee artificial intelligence, similar to the IAEA. While OpenAI’s call for greater oversight and regulation of the AI industry is welcome, questions remain about its sincerity, given the lack of detail in the blog post and the company’s lack of transparency about how its models are trained. The true threat of AI is its capacity to spread fake news and images, and that is the problem that demands immediate action.