Ahead of OpenAI CEO Sam Altman’s four days in exile, several staff researchers wrote a letter to the board of directors warning of a powerful artificial intelligence discovery that they said could threaten humanity, two people familiar with the matter told Reuters.
After being contacted by Reuters, OpenAI, which declined to comment, acknowledged in an internal message to staffers a project called Q* and a letter to the board before the weekend’s events, one of the people said. An OpenAI spokesperson said that the message, sent by long-time executive Mira Murati, alerted staff to certain media stories without commenting on their accuracy.
Some at OpenAI believe Q* (pronounced Q-Star) could be a breakthrough in the startup’s search for what’s known as artificial general intelligence (AGI), one of the people told Reuters. OpenAI defines AGI as autonomous systems that surpass humans in most economically valuable tasks.
Given vast computing resources, the new model was able to solve certain mathematical problems, the person said on condition of anonymity because the individual was not authorized to speak on behalf of the company. Though only performing math on the level of grade-school students, acing such tests made researchers very optimistic about Q*’s future success, the source said.
Reuters could not independently verify the capabilities of Q* claimed by the researchers.
DeepMind is actually delivering shit like an estimate of the entire human proteome's structure and creating the transcendently greatest Go player of all time.
Meanwhile these chucklefucks are using the same electricity demand as Belgium to replicate a math solver that could probably be assigned as a half-term project in an undergraduate class, and are pissing themselves about threatening humanity.
The Valley has lost its goddamn mind.
The computer self-corrected based on its understanding of math principles that it learned through text. It's not about the math. It used reason.
The computer had a thought. A rudimentary one, yes. But an actual thought.
I don’t really know what to say if you don’t see why that’s an amazing discovery.
Also, the Belgium figure assumed usage kept growing at the current rate while the technology didn't improve at all. The technology has already improved by two generations since that paper was written. It's a crappy talking point and nothing else.
An inordinate amount of our attention is afforded to absurdly rich children.
You are missing the very crucial part about how this is generalised. That's like saying we don't need to teach people math anymore because we have calculators now. The AI isn't too capable currently, but dismissing it would be like dismissing consumer PCs, because what are people gonna do with computers?
Valley bullshit aside, I do have to defend the expensive exploration of the generalized AI space purely because it's embarrassingly parallel. That is, it just gets so much better the more money and resources you throw at it. It couldn't solve math without a few million dollars worth of supercomputer training time. We didn't know it would create valid VHDL-to-csv-to-VBA scripts, but I got phind(.com) to make me one. And I certainly can't tell Wolfram Alpha to package the math solution it generated as a Javascript function.
Not to mention the huge advances in Chess AI. LeelaChessZero is the open-source implementation of the original AlphaZero idea Google came out with, and is rivaling Stockfish 15. Meanwhile, Torch is a new AI being developed that is now kicking Stockfish’s ass.
Grandmasters and novices alike are learning a lot from chess AI, figuring out better ways to improve themselves, either by playing the engines outright, using them for post-game analysis, or watching two bots play to see the kind of creative strategies they come up with.
Nicely put.
While I agree that a lot of the hype around AI goes overboard, you should probably read this recent paper about AI classification: https://arxiv.org/abs/2311.02462
Systems like DeepMind are narrow AI, whereas LLMs are general AI.
Not really. The implementation of both is mostly the same; they just run continuously on a per-word (token) basis.
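To unpack "per-word (token) basis": an LLM generates text autoregressively, one token at a time, feeding each output back in as input. A toy sketch (the `next_token` function is a made-up deterministic stand-in; a real model returns a probability distribution over a vocabulary):

```python
def next_token(context):
    # Fake "model": picks a token from a tiny vocabulary based on
    # context length. A real LLM's forward pass goes here.
    vocab = ["the", "cat", "sat", "on", "mat", "."]
    return vocab[len(context) % len(vocab)]

def generate(prompt, max_tokens=5):
    # Autoregressive loop: append each predicted token to the context
    # and feed the whole thing back in for the next prediction.
    context = list(prompt)
    for _ in range(max_tokens):
        context.append(next_token(context))
    return context

print(generate(["the"]))
```

The point is that the loop structure, not the task, is what defines these systems; "narrow" vs "general" is about what the model was trained on, not a fundamentally different architecture.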