At the recent GTC technology conference, Nvidia, the chip-manufacturing giant, made a strong impression by announcing a new AI chip platform called Blackwell Ultra. It is an upgrade of the already well-known Blackwell line, but this time Nvidia has a clearer ambition: to help AI not only respond, but also reason like a human. According to Nvidia's own description, Blackwell Ultra provides far greater computing power, allowing AI models to analyze complex requests, break them into multiple steps, and weigh different options, something that previously only humans could do.
“Reasoning is the next step in AI moving beyond the role of a chatbot and into the real world,” Nvidia CEO Jensen Huang said. To demonstrate this, Nvidia compared response times on a complex query: the R1 AI model from DeepSeek (a Chinese AI startup) took 90 seconds to process it on the older Hopper chip, but only 10 seconds on Blackwell Ultra. The performance gain does more than speed things up; it lets machines reason through deeper, multi-step problems.
“AI models are now starting to behave like real people, analyzing and thinking before responding. That’s a huge turning point,” said Arun Chandrasekaran, an expert from research firm Gartner.
The race extends beyond Nvidia and DeepSeek. Google has integrated reasoning capabilities into its new Gemini model family, while Anthropic has released Claude 3.7 Sonnet, a “hybrid” reasoning model that combines different reasoning modes within a single AI system. These AIs are capable of planning, handling multi-step tasks, and even making decisions, abilities that were once possible only in the human brain.
While AI is demonstrating remarkable capabilities, technologists and thinkers are still debating how AI should be developed so that it truly becomes a driving force for social progress. One key point is preserving human values during the transition to the era of “thinking AI”. It is no longer news that large language models (LLMs) like ChatGPT can understand context, offer explanations from multiple perspectives, and construct coherent responses. But the more important questions are: What kind of thinking will AI learn? And how will humans control AI so that it is not manipulated, biased, or exploited?
Hoffman emphasized in particular the role of humans in training and guiding AI, urging developers to be transparent about the principles on which they train their models. “If you build an AI that is against progressive values like equality or diversity, you need to be clear about why you chose that approach. If you want AI to reflect a wide range of views, let users know when millions of people disagree with you. That helps users understand what they are using.” This stance promotes freedom of information and social awareness, rather than allowing AI to become a tool that reinforces the “ideological bubbles” already dividing social media today.
Aside from the ethical and philosophical debates, another challenge lies in educating and preparing the younger generation. In a world where AI is taking over ever more fields, how can people avoid being displaced? The answer, according to many experts, is not to defend against AI but to amplify one's own capabilities through it.
Hoffman shared that he used AI to assist in writing books, from researching and analyzing arguments to suggesting more engaging phrasing. In his view, this does not replace the writer; it is a form of “working with a machine partner”. It is like a journalist having a personal assistant to research, synthesize, and suggest, while selection and decision-making still belong to the human.
AI can also become a tool to equalize opportunities for those with limited access to knowledge. A student in a rural area can use ChatGPT to practice writing essays, learn foreign languages, and look up specialized knowledge that previously required tutors or large libraries. With the right access, AI can reduce the gap in education, skills, and income between social classes.
But to do that, technology developers and governments need to have clear strategies for training digital skills, ensuring equitable access to AI, and preventing the risk of technology misuse. Policies need to be accompanied by ethical guidelines, technical regulations, and effective control tools that do not stifle innovation.
Source: https://daidoanket.vn/ai-dang-tien-gan-hon-toi-tri-tue-con-nguoi-10302558.html