Google launched this Thursday (8) Gemini Advanced, an enhanced version of its most powerful artificial intelligence. The company also announced a mobile application for the AI and a name change for the Bard tool, which will now also be called Gemini.
With the changes, Google retires the Bard name and unifies all its AI products under a single brand. As a result, both the chatbot – which works in a similar way to ChatGPT – and the intelligence behind it will share a single name: Gemini.
Gemini Advanced is a paid, more complete version of the chatbot, available in English. With it, users can access “Ultra 1.0”, a larger and more capable model suited to complex tasks such as coding, logical reasoning, nuanced instructions and collaboration on creative projects.
Gemini Advanced is available as part of the new Google One AI Premium plan, for R$96.99 per month, in more than 150 countries and territories. For now it is available only in English, but the company’s goal is to expand it to more languages over time. Subscribers to this plan will also be able to use the AI in other Google services, such as Gmail, Meet, Docs, Sheets and Slides.
Alongside this launch, Google also announced a Gemini application for Android and iOS (through the Google app). The mobile versions of the AI will first be available in the United States, with more countries expected to gain access soon. On mobile, users will be able to interact with the chatbot via text, voice or photo.
What is Gemini?
Gemini is Google’s most advanced artificial intelligence model, launched in December last year. The model can organize, understand, operate on and combine different types of information, such as text, images, audio, video and programming languages.
The first version of Gemini was released in three different sizes: Gemini Ultra, for complex tasks; Gemini Pro, best suited to scaling across a wide variety of tasks; and Gemini Nano, the most efficient for on-device tasks.
Its main differentiator is that it is a multimodal model, meaning it can understand and reason across different types of information, whether text, audio or image. It is comparable to GPT-4, OpenAI’s advanced language model.