Mark Zuckerberg, Meta’s CEO, has joined the race to develop artificial general intelligence (AGI). In a recent Instagram post, he said, “Our long-term vision is to develop general intelligence, open-source it ethically, and make it accessible to everyone for their benefit.”
To support this, he is bringing together Meta’s two primary AI research efforts, FAIR and the GenAI team. “We’re bringing our two major AI research initiatives (FAIR and GenAI) closer to each other to support this,” he said.
The combination of the FAIR and GenAI teams is an exciting one. FAIR is dedicated to fundamental research, while the GenAI team focuses on building generative AI experiences for users of Meta’s apps. At last year’s Meta Connect, Meta announced new creative capabilities for Facebook and Instagram, including AI-powered photo editing, sticker generation, and personalized recommendations.
Anticipating the release of Llama 3
While Meta has not publicly confirmed the rumors, its development cadence and heavy hardware investments point to an imminent debut. Llama 1 and Llama 2 were released roughly six months apart, and if that pattern holds, Llama 3, which is expected to be comparable to OpenAI’s GPT-4, could arrive in the first half of 2024.
AGI’s role in Llama 3
Meta, one of the biggest players in AI, is aiming to win the AGI race on the back of its enormous computing infrastructure, which will include 350K H100s by the end of the year. This is the first time a major technology company has publicly disclosed its GPU holdings in such detail. Meta’s current GPU count is unsurprising given that it acquired 150K H100s last year, the most among competitors such as Google and Oracle. The number of GPUs OpenAI owns is unknown, though Sam Altman has said the company has enough for training GPT-5. While Zuckerberg has set his sights on AGI, Altman has previously played down its significance, arguing that it will have less impact on the world and on jobs than people expect.
AGI in the Metaverse
Zuckerberg believes that once AGI is realized, AI and the Metaverse will coexist in a hybrid of virtual and physical reality. He argues that because people will be communicating with AIs throughout the day, they will need new devices, and that Meta’s Ray-Ban smart glasses are the ideal form factor for letting an AI see and hear what you do. Meta remains committed to the Metaverse, spending more than $15 billion a year on Reality Labs to build it out. Last year, Zuckerberg appeared on Lex Fridman’s podcast in what was billed as the first interview conducted in the Metaverse.
Is Meta a successor to OpenAI?
Ultimately, the idea of open-source AGI may force OpenAI to reevaluate its strategy, given that the company originally championed open-sourcing its models.
At the recently concluded World Economic Forum, Yann LeCun, Meta’s chief AI scientist, pushed for open-source foundation models, underlining that OpenAI would not be where it is today without the efforts of the open-source community.
“They won’t get to AGI on their own; instead, they’ll be employing PyTorch and Transformers, both of which were published by several of us. They’re benefiting from the open research environment,” he explained.
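To make LeCun’s point concrete, here is a minimal sketch of what that open tooling looks like in practice: loading one of Meta’s openly released models, Llama 2, with the open-source Transformers library running on PyTorch. The model ID below is Meta’s gated Hugging Face repository, so access requires accepting Meta’s license first; the snippet is illustrative rather than drawn from any Meta announcement.

```python
# Minimal sketch: running Meta's Llama 2 with open-source tooling
# (PyTorch + Hugging Face Transformers). Assumes the Meta license
# has been accepted on the Hugging Face Hub and a GPU is available.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-hf"  # gated repo: license acceptance required

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16)

# Generate a short continuation from a prompt.
inputs = tokenizer("Open-source AGI means", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

A handful of lines like these, built entirely on openly published libraries, is exactly the “open research environment” LeCun credits for the field’s progress.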
Meta now finds itself fighting for ‘open AI’ while the incumbents try to shut it down. It is remarkable how much Meta’s image has changed in the past year.
Similarly, Perplexity CEO Aravind Srinivas said, “Open-source AGI is an extraordinary vision. You (Meta) are building a very powerful technology while also aligning with what makes sense in today’s world: more people having a say in what makes sense and what doesn’t.”