Google releases Gemini 1.5 Pro AI model: Here’s what company CEO Sundar Pichai has to say – Times of India

Google launched its Gemini large language model in December. The first-generation model (denoted 1.0) powers the Pixel 8 Pro, the Gemini AI chatbot and the recently launched Gemini Advanced. Now, the company has announced its next-generation model, Gemini 1.5 – a refined version of Gemini 1.0 with enhanced capabilities.
The first Gemini 1.5 model that Google is releasing for early testing is Gemini 1.5 Pro – a mid-size multimodal model that can carry out a wide range of tasks and offers performance comparable to Gemini 1.0 Ultra, Google’s largest model to date.
Difference between Gemini 1.0 Pro and Gemini 1.5 Pro
Google notes that the latest model offers greater context and comes with more helpful capabilities. As per Demis Hassabis, CEO of Google DeepMind, it “introduces a breakthrough experimental feature in long-context understanding.”
He said that Gemini 1.5 is more efficient to train and serve. While Gemini 1.0 Pro comes with a 32,000-token context window, Gemini 1.5 Pro comes with a standard 128,000-token context window; a limited group of developers and enterprise customers can try it with a context window of up to 1 million tokens.
“This means 1.5 Pro can process vast amounts of information in one go — including 1 hour of video, 11 hours of audio, codebases with over 30,000 lines of code or over 700,000 words. In our research, we’ve also successfully tested up to 10 million tokens,” Hassabis said.
Tokens are the basic units of text or code that a large language model uses to process and generate language. Tokens can be characters, words, subwords, or other segments of text or code.
In comparison, GPT-4 Turbo has a 128,000-token context window and Claude 2.1 offers a 200,000-token context window.
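To make the idea of tokens and context windows concrete, here is a minimal sketch in Python using OpenAI's open-source tiktoken tokenizer purely as a stand-in; Gemini uses its own tokenizer, so the exact boundaries and counts will differ, but the principle of splitting text into subword units that count against the context window is the same.

    # Illustration only: tiktoken is OpenAI's tokenizer, used here as a
    # stand-in because it is openly available. Gemini's tokenizer will
    # produce different token counts, so treat the numbers as indicative.
    import tiktoken

    encoding = tiktoken.get_encoding("cl100k_base")

    text = "Gemini 1.5 Pro supports a context window of up to 1 million tokens."
    tokens = encoding.encode(text)

    print("Characters:", len(text))      # raw character count
    print("Tokens:", len(tokens))        # subword units that count against the context window
    print(encoding.decode(tokens[:5]))   # the first few tokens decoded back to text

A rough rule of thumb is that one token corresponds to a few characters of English text, which is why a 1-million-token window can cover hundreds of thousands of words or tens of thousands of lines of code.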
Google and Alphabet CEO Sundar Pichai shared a video on X (formerly Twitter) that shows an example of Gemini 1.5 Pro’s capabilities with long context.

A note from Google and Alphabet CEO Sundar Pichai
Last week, we rolled out our most capable model, Gemini 1.0 Ultra, and took a significant step forward in making Google products more helpful, starting with Gemini Advanced. Today, developers and Cloud customers can begin building with 1.0 Ultra too — with our Gemini API in AI Studio and in Vertex AI.
Our teams continue pushing the frontiers of our latest models with safety at the core. They are making rapid progress. In fact, we’re ready to introduce the next generation: Gemini 1.5. It shows dramatic improvements across a number of dimensions and 1.5 Pro achieves comparable quality to 1.0 Ultra, while using less compute.
This new generation also delivers a breakthrough in long-context understanding. We’ve been able to significantly increase the amount of information our models can process — running up to 1 million tokens consistently, achieving the longest context window of any large-scale foundation model yet.
Longer context windows show us the promise of what is possible. They will enable entirely new capabilities and help developers build much more useful models and applications. We’re excited to offer a limited preview of this experimental feature to developers and enterprise customers. Demis shares more on capabilities, safety and availability below.
— Sundar


