According to Tech.co, amid the controversy over CEO Sam Altman's firing and subsequent return to OpenAI, one rumor concerned a letter the company's researchers had sent to the board of directors, expressing concern that an AI model under development could potentially pose a threat to humanity.
Project Q*
The model, known internally as Project Q* (pronounced Q-star), is said to represent a major breakthrough in OpenAI's pursuit of artificial general intelligence (AGI): highly autonomous AI capable of cumulative learning and of outperforming humans at most tasks.
According to people familiar with the matter, Q* could mark a major leap forward in artificial intelligence by radically improving AI's reasoning capabilities and bringing OpenAI significantly closer to AGI.
Unlike current AI models, which generate responses based on previously learned information, an AGI would be an autonomous system able to apply “reason” to decisions, giving it human-level problem-solving capabilities.
While AGI has not yet been fully realized, many experts believe the technology will also be able to learn cumulatively, a trait that allows humans to improve their skills over time.
Some sources claim that OpenAI's Q* project has already demonstrated these properties when solving problems. Moreover, thanks to the model's enormous computational power, Q* has reportedly outperformed elementary-school students, displaying reasoning skills and cognitive abilities significantly beyond current AI technology.
It's unclear how long Q* has been in development and what its applications might be, but OpenAI informed employees and board members about the project before the personnel scandal broke.
Ethical concerns
While OpenAI CEO Sam Altman feels confident that AGI technology will drive innovation, some researchers have been quick to point out the project's potential dangers.
In a letter to the board, the researchers warned of the potential dangers this powerful algorithm could pose to humanity. The specific ethical concerns outlined in the letter were not disclosed, but the warnings were reportedly enough to justify the board's decision to fire Altman.
Meanwhile, the initial reason given for firing Altman was that the CEO “communicated poorly.” He soon found a new position at Microsoft, prompting 700 of OpenAI's 770 employees to threaten to follow him there if he was not reinstated.
With the company in danger of collapse, OpenAI's board was forced to reinstate Altman to the top job — which also led to a major overhaul of the company's executive team and highlighted deep divisions within its leadership.
Now that Altman is back at the helm and Project Q* is likely to get the green light again, new questions arise.
How realistic is Project Q*?
While the tumultuous days at OpenAI have brought the concept of AGI into the spotlight, this isn't the first time Altman has mentioned the technology.
The Silicon Valley entrepreneur found himself in hot water in September after comparing AGI to “an average human you might hire as a co-worker.” The remark followed comments he made last year about how AI could “do anything you could with a remote co-worker,” including learning how to be a doctor and a good programmer.
While comparing AGI to the intelligence of an “average human” is nothing new, Altman's use of the phrase was deemed “abhorrent” by AI ethicist and Cambridge University professor Henry Shevlin, amid escalating concerns about AI's impact on job security.
Potential breakthroughs in AGI are also raising alarm among other researchers, who worry that the technology is being developed faster than humans can fully comprehend its impact.
OpenAI believes that the positive outcomes of AGI make the risky “minefield” worth pursuing. But as the company continues to push forward in this direction, many worry that Altman is prioritizing commercial success over the interests of users and society.
Phuong Anh (Source: Tech.co, The Guardian)