In April, Elon Musk told right-wing commentator Tucker Carlson that he was starting a project to compete with ChatGPT and build “a maximum truth-seeking AI that tries to understand the nature of the universe.”
Today, Musk unveiled that new artificial intelligence venture. It’s called xAI. The company’s spare landing page repeats that goal of understanding the universe and lists 11 AI researchers—seemingly all men—who have made significant contributions to the field of AI in recent years and worked at companies including Google, DeepMind, and OpenAI.
The crew is an “all-star founding team,” according to Linxi “Jim” Fan, an AI researcher at Nvidia. “I’m really impressed by the talent density—read too many papers by them to count,” he writes in a LinkedIn post.
One of the company’s cofounders, Greg Yang, said in a tweet that xAI aims to take AI to the next level by developing a mathematical “‘theory of everything’ for large neural networks,” the machine learning technology that has dominated AI for the past decade. “This AI will enable everyone to understand our mathematical universe in ways unimaginable before,” he wrote.
Like many other new AI projects, Musk’s is motivated by concern and perhaps some FOMO over the rapid rise of ChatGPT. He has talked of xAI as a response to the bot, which he has suggested has political biases, and criticized its creator, startup OpenAI, for being secretive and too cozy with its backer Microsoft.
Musk’s ill feeling is perhaps compounded by the fact that he cofounded OpenAI in 2015, but three years later severed ties with what was then a nonprofit, after reportedly failing to take full control. (The company became a for-profit venture in 2019.) And Musk has recently joined those warning that AI could pose an existential threat to humanity and entrench the power of giants like Microsoft and Google.
Musk is no stranger to making bold bets, but what little has been revealed of xAI’s goals sounds a little odd. ChatGPT and its rivals such as Google’s Bard are built on deep learning, and OpenAI’s CEO Sam Altman has said wholly new ideas are needed to push beyond existing systems. Researching the fundamentals of the technology could help find them.
But much of the recent progress in AI has come from making existing systems bigger and throwing more computing power and data at them. And the sweeping changes AI is expected to deliver in tech and other industries over the next few years will come from deploying that mostly mature technology.
At this stage, xAI seems likely to lack the cloud computing power needed to match OpenAI, Microsoft, and Google. And its relatively small team of AI researchers does not look world-beating compared to the hundreds that each of those established firms can deploy on AI projects. The only person involved who has a history of working on AI risks is xAI’s sole named advisor, Dan Hendrycks, who is director of the nonprofit Center for AI Safety and coordinated a recent public statement from tech leaders about the existential threat AI may pose.
Although his supposedly giant-killing AI project is starting small, Musk does, of course, have some significant resources to draw on. The new company will work closely with Twitter and Tesla, according to the xAI website. Twitter’s data from conversations on the platform is well suited to training large language models like the one behind ChatGPT, and Tesla now designs its own specialized AI chips and has significant experience building large computing clusters for AI, which could be used to boost xAI’s cloud computing power. Tesla is also building a humanoid robot, a project that could be helped by, and be helpful to, xAI in the future.
But perhaps at this early stage, xAI’s reality-bending rhetoric is primarily about attracting talent. AI expertise has never been in greater demand. The most pressing problem for a new entrant, even one backed by Musk’s reputation and deep pockets, is to show it can attract the researchers needed to eventually become competitive.
The huge goals Musk has set for himself—challenging existing AI giants and protecting humanity from harmful AI—make his tiny new AI company look even smaller. Many AI researchers who are also concerned about the trajectory of AI seem to view the problem as one that requires greater transparency and collaboration, rather than a lone genius with a small band of all-stars.