Since the dramatic events at OpenAI in November, the rumor mill has been churning over why exactly the company’s chief scientist, Ilya Sutskever, and the rest of the board at the time decided to oust CEO Sam Altman. As is well known, Altman is back after the power struggle, but the question marks remain.
Both Reuters and The Information had reported that OpenAI had found a new way to develop powerful AI systems. A new model called Q* (pronounced “Q-Star”) is said to be able to solve elementary-school math problems. This, in turn, is said to be a milestone in the company’s quest to create artificial general intelligence (AGI) – a much-hyped concept referring to AI systems that match or surpass human capabilities. OpenAI has still not commented on Q*. Social media was full of speculation and excessive hype following the reports. But how do experts assess the situation? MIT Technology Review asked several well-known AI researchers.
Computer scientists have spent years trying to get AI language models to solve mathematical problems they weren’t actually designed for. GPT-4 and the ChatGPT built on it can tackle such tasks, but are neither very good at them nor reliable. So far, we have neither the algorithms nor the right architectures to reliably solve mathematical problems with generative AI, says Wenda Li, a lecturer in AI at the University of Edinburgh. While deep learning and the Transformer architecture used by language models are great at pattern recognition, that alone is probably not enough, she adds.
Mathematics is a benchmark for logical reasoning, says Li. A machine capable of “reasoning” about mathematics could, in theory, also learn other tasks that build on existing information, such as writing code or drawing conclusions from news articles – things today’s language models can already do, but not always in the way people want. For Li, mathematics is particularly difficult because it requires AI models to reason logically and truly understand what they are dealing with.
Mathematics from elementary school
A generative AI system that can reliably do mathematics must first master concrete definitions of concepts that can be very abstract. Many math problems also require some degree of planning across multiple steps, says Katie Collins, a doctoral candidate at the University of Cambridge who specializes in AI and mathematics. Yann LeCun, chief AI scientist at Meta, sees Q* as an attempt by OpenAI to teach its language model to plan ahead.
People concerned about whether AI poses an existential risk to humans worry that such capabilities could lead to malicious AI. Safety concerns could arise if such AI systems were allowed to set their own goals and interact in some way with the real physical or digital world, Collins says. Ironically, OpenAI was founded precisely to help avoid such risks.
But even if OpenAI has taught its language models better mathematical skills, that does not amount to the birth of a superintelligence. “I don’t think this will immediately lead us to AGI or really scary situations,” says Collins. It depends on the type of mathematical problems the AI can solve. There is a difference between a language model mastering elementary-school mathematics and pushing the frontiers of mathematics at the level of a Fields Medalist.
Machine learning research has so far focused on solving simple tasks, but modern AI systems have not yet fully mastered even this challenge. Confusingly, some AI models fail at really simple math problems yet can solve more complex tasks. OpenAI has therefore experimented with specialized models, but these only occasionally outperform humans.
Renewed hype about AGI
Nevertheless, developing an AI system that can reliably solve mathematical problems would be impressive. A deeper grasp of mathematics could open up new applications that benefit scientific research and engineering. And students are also likely to use such AI systems as tutors more often.
However, this is not the first time that a new language model has triggered AGI hype. Just last year, similar claims were made about Google DeepMind’s Gato, a “generalist” AI model that can play Atari video games, label images, chat and stack blocks using a real robotic arm. At the time, some AI researchers claimed that DeepMind was “on the cusp” of AGI because Gato could do so many different things quite well.
Such hype cycles do more harm than good to the AI sector because they distract from real, tangible problems. Rumors of a powerful new AI model could also prove a massive own goal for the regulation-shy tech sector. The EU is close to passing its AI Act, and one of the biggest points of contention among lawmakers right now is whether tech companies should be given more power to self-regulate cutting-edge AI models.
OpenAI’s board was designed as an internal “kill switch” and governance mechanism to prevent the launch of harmful technologies. The boardroom drama of the past few weeks has shown that, here too, the profit motive will ultimately prevail. That makes it harder to justify trusting AI companies to regulate themselves. Politicians should take this to heart.