(Dan Tri) - The amount of data available to train artificial intelligence is almost exhausted, forcing researchers to consider using AI-generated data to train new AI models. This could be a breakthrough that helps AI surpass human intelligence.
Elon Musk Proposes New Ways to Develop AI That Could Be Dangerous
Tech billionaire Elon Musk, founder of artificial intelligence company xAI, has just released shocking information about the process of training and educating artificial intelligence (AI) models.
"We have now exhausted the amount of human knowledge in training and educating AI. This has basically happened since last year," Elon Musk replied in an interview broadcast live on social network X on January 9.
AI models such as GPT-4, Gemini, Grok, and Llama are trained on large amounts of data collected from the Internet, including scientific journals, published studies, and user data from social networks.
Elon Musk proposes using AI data to train AI, but this has many potential risks (Illustration: Getty).
However, AI models are developing so fast that the available data is no longer enough to train them and further enhance their intelligence.
To overcome this problem, Elon Musk has proposed switching to data generated by AI itself to train AI models. In other words, AI would train itself, and models would train one another, without relying on data provided by humans.
“The only way to fix this problem is to supplement the synthetic data generated by the AI models themselves and use this data to train the AI itself,” Elon Musk shared.
AI systems that train themselves on AI-generated synthetic data would save development costs and reduce dependence on human data. This has led many to worry that AI could train itself to surpass human intelligence, beyond humanity's control.
However, artificial intelligence experts warn that using AI-generated synthetic data to train AI models can cause those models to collapse, because the generated data lacks creativity, is biased, and does not reflect the latest information.
“When you use synthetic data to train AI models, their performance gradually degrades, with the output data being uninspiring and biased,” said Andrew Duncan, director of AI at the Alan Turing Institute in the UK.
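The degradation Duncan describes can be illustrated with a toy simulation (not from the article, and far simpler than a real language model): if each "generation" of a model is trained only by resampling the previous generation's output, rare samples are gradually lost and the data's diversity collapses. The sketch below uses NumPy resampling as a stand-in for training on synthetic data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: 1,000 distinct "human-written" samples.
data = np.arange(1000)

unique_counts = [len(np.unique(data))]
for generation in range(20):
    # Each new "model" is trained by sampling (with replacement) from
    # the previous generation's output. Samples that are not drawn
    # are forgotten forever, so diversity can only shrink.
    data = rng.choice(data, size=len(data), replace=True)
    unique_counts.append(len(np.unique(data)))

print("distinct samples per generation:", unique_counts)
```

Running this shows the number of distinct samples falling sharply from generation to generation, a simplified picture of why experts say output trained on synthetic data becomes uninspiring: the pipeline never adds new information, it only loses it.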
High-quality data is considered an invaluable "resource" that AI development companies are competing for. However, not all scientists are willing to provide their research works to train AI models.
Google has been able to create AI that thinks and acts exactly like humans
Imagine having a means to determine a person's personality, attitudes, and style and then create an AI replica of that person.
This is not science fiction but the underlying goal of a groundbreaking study by researchers at Stanford University and Google.
With just 2 hours of interviewing, Google can create an AI that thinks and acts exactly like you (Photo: ZipRecruiter).
Researchers created AI replicas of more than 1,000 participants using information from interviews lasting just two hours each. These AIs can mimic the participants' behavior.
The potential applications of this invention are huge. Policymakers and businesses could use this AI simulation to predict public reactions to new policies or products, instead of relying solely on focus groups or repetitive polls.
Researchers believe the technology could help explore social structures, pilot interventions, and develop nuanced theories of human behavior.
However, it also has some risks, such as ethical concerns about the misuse of AI clones. Bad actors could exploit this AI to manipulate public opinion, impersonate individuals, or simulate public preferences based on fake synthetic data.
These worries add to the long-standing concern of many that the proliferation of such AI models could negatively affect the future of humanity.
Source: https://dantri.com.vn/suc-manh-so/ai-sap-dat-duoc-buoc-dot-pha-moi-co-the-vuot-qua-tri-tue-con-nguoi-20250111132229246.htm