Google Introduces New Gemini 2.0 AI Model
On Wednesday, Google unveiled the first model in its Gemini 2.0 series of artificial intelligence systems, named Gemini 2.0 Flash.
The initial launch includes a chat version of Gemini 2.0 Flash, available to users worldwide. Additionally, a specialized multimodal version with text-to-speech and image-generation capabilities has been released for developers.
Google CEO Sundar Pichai said that while Gemini 1.0 focused on organizing and interpreting information, Gemini 2.0 aims to make that information more useful.
The new large language model shows improved performance across most types of user requests, such as generating code and delivering factually accurate responses. However, Google notes that Gemini 2.0 Flash still lags behind its larger predecessor, Gemini 1.5 Pro, on tasks that require analyzing longer contexts.
Those interested in trying out the chat-optimized version of Gemini 2.0 Flash can select it from the model drop-down menu on both desktop and mobile web platforms. Google has also mentioned that this version will soon be accessible on the Gemini mobile application.
The multimodal set of features in Gemini 2.0 Flash can be accessed through Google's AI Studio and Vertex AI developer platforms. Full availability of this version is expected to roll out in January, alongside other model sizes in the Gemini 2.0 family.
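For developers, a first call to the model from AI Studio might look like the minimal Python sketch below. The google-generativeai SDK and the experimental model identifier "gemini-2.0-flash-exp" are assumptions for illustration and are not specified in this article.

```python
# Minimal sketch: querying Gemini 2.0 Flash via Google AI Studio from Python.
# Assumptions not stated in the article: the google-generativeai SDK and the
# experimental model identifier "gemini-2.0-flash-exp".
import google.generativeai as genai

# API key obtained from Google AI Studio.
genai.configure(api_key="YOUR_AI_STUDIO_API_KEY")

model = genai.GenerativeModel("gemini-2.0-flash-exp")
response = model.generate_content("Summarize what Gemini 2.0 Flash adds over Gemini 1.5.")
print(response.text)
```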
Looking ahead, Google plans to integrate Gemini 2.0 into more of its products by early 2025, a significant step in its ongoing push to advance AI technology.
The launch of Gemini 2.0 comes at a time when competition in the AI sector is heating up. Google is in a race with major tech companies like Microsoft and Meta, along with startups such as OpenAI, the maker of ChatGPT, Perplexity, and Anthropic, the creator of Claude.
Alongside Gemini 2.0 Flash, Google has introduced several research prototypes aimed at creating more "agentic" AI models. These models are designed to understand their environment, plan multiple steps ahead, and perform actions on behalf of users, all under user supervision.
In a recent discussion at The New York Times' DealBook Summit, Pichai acknowledged the advancements made by Microsoft in AI and expressed his eagerness for a side-by-side comparison of their models, emphasizing his confidence in Google's offerings.