On May 25, OpenAI announced that it will award 10 equal grants from its $1 million fund to initiatives that seek to regulate AI and address the technology's limitations.
Critics say AI systems like ChatGPT are inherently biased because of the data used to train them. Users have found examples of chatbots producing racist or sexist output, and when the models are paired with search engines such as Alphabet’s Google or Microsoft’s Bing, they can generate inaccurate but “highly convincing” information.
The $100,000 grants will be awarded to people who can come up with a framework for answering questions like whether AI should criticize public figures or what constitutes an “average individual” in the world, according to a blog post announcing the fund’s launch.
OpenAI's funding is not for AI research itself, a field where engineers can readily earn salaries ranging from $100,000 to well over $300,000.
AI systems “will benefit all of humanity and be shaped to be as inclusive as possible,” the startup behind ChatGPT wrote in a blog post, adding that the grants are a first step in that direction.
“Clash” with EU lawmakers
OpenAI CEO Sam Altman said on May 26 that the company has no plans to pull out of Europe, reversing a remark made just days earlier that ChatGPT could cease operating in the region if complying with the bloc's AI regulations proved too difficult.
The EU is discussing a bill that could be the world's first set of rules governing AI, and Altman described the draft as “over the top.”
That “threat” from the maker of ChatGPT drew criticism from a number of European politicians, including EU lawmakers and Internal Market Commissioner Thierry Breton.
Altman is currently on a tour across Europe, meeting with leaders in France, Spain, Poland, Germany, and the UK to discuss the future of AI and the progress of ChatGPT.
The head of OpenAI said the trip was “a productive week of discussions about how to govern AI.”
Meanwhile, the Microsoft-backed startup has been criticized for not disclosing the training data used for its latest AI model (GPT-4), with OpenAI citing “competitive and confidentiality” reasons for not making the information public.
“The provisions of the AI Act are designed to promote transparency, ensuring that AI and the companies behind it are trustworthy. I don’t see why any company should shy away from this,” said Dragos Tudorache, a member of the European Parliament who led the drafting of the EU proposals.
(According to Reuters)