Two decades ago, Google co-founder Larry Page had a dream of digitally scanning millions of books. It sparked a long legal battle that the company eventually won.
Today, the rise of massive AI models is reviving that debate. Google will soon release an AI model called Gemini 1.5 Pro with a context window of 1 million tokens, roughly 750,000 words, or the equivalent of 3 to 7 books depending on their length. A single prompt can also hold 1 hour of video, 11 hours of audio, or more than 30,000 lines of code, according to Business Insider.
Google's Gemini 1.5 Pro has a context window large enough to hold multiple books
Gemini 1.5 Pro is in preview for a limited group of early testers. When it's fully rolled out, users will be able to feed in entire books, complete legal case histories, or whatever else they choose, and the model can quickly ingest all of that information and answer questions about it.
After years of trying to scan millions of books itself, Google will now have users willingly feeding entire volumes into the company's AI models, along with mountains of text, code, images, and video. That information will likely serve as training data for future Google models: the company says data shared with Gemini "helps improve and develop Google's products, services, and machine learning technologies."
Gemini 1.5 Pro, the Google AI model with the largest context window, is not yet fully available, so its terms of service have not been released. A Google spokesperson declined to comment on what data practices will apply to the model.