Google Releases New Reasoning-Focused AI Model
In a post on X (formerly known as Twitter), Jeff Dean, the Chief Scientist at Google DeepMind, announced the Gemini 2.0 Flash Thinking AI model and highlighted that the LLM is “trained to use thoughts to strengthen its reasoning.” It is currently available in Google AI Studio, and developers can access it via the Gemini API.
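For developers trying it out, a minimal call through the Gemini API might look like the sketch below. It assumes the google-generativeai Python SDK and an API key generated in Google AI Studio; the model identifier used here is an assumption based on the experimental release name, not confirmed by Google in this article.

```python
# Minimal sketch: calling the experimental reasoning model via the Gemini API.
# Assumes the google-generativeai SDK and an AI Studio API key; the model name
# below is an assumption based on the experimental release naming.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # key created in Google AI Studio

model = genai.GenerativeModel("gemini-2.0-flash-thinking-exp")
response = model.generate_content(
    "A train leaves at 3 pm at 60 km/h; another leaves at 4 pm at 90 km/h. When do they meet?"
)
print(response.text)
```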
Gadgets 360 staff members were able to test the AI model and found that the advanced reasoning-focused Gemini model easily solves complex questions that are too difficult for the 1.5 Flash model. In our testing, we found the typical processing time to be between three and seven seconds, a significant improvement compared to OpenAI’s o1 series, which can take upwards of 10 seconds to process a query.
Gemini 2.0 Flash Thinking also shows its thought process, so users can check how the AI model reached the result and the steps it took to get there. We found that the LLM was able to find the right solution eight out of 10 times. Since it is an experimental model, such errors are expected.
While Google did not reveal details about the AI model’s architecture, it highlighted its limitations in a developer-focused blog post. Currently, Gemini 2.0 Flash Thinking has an input limit of 32,000 tokens and accepts only text and images as inputs. It supports only text as output, with a limit of 8,000 tokens. Further, the API does not come with built-in tool usage such as Search or code execution.
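As a rough illustration of those limits, a request could combine text and an image as input while capping the text-only output at 8,000 tokens; tool features such as Search or code execution would simply not be configured. This is a sketch under the same assumptions as above (google-generativeai SDK, assumed experimental model name).

```python
# Sketch of a request within the stated limits: text + image input,
# text-only output capped at 8,000 tokens. Model name is assumed, as above.
import PIL.Image
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-2.0-flash-thinking-exp")

image = PIL.Image.open("diagram.png")  # image input is supported alongside text
response = model.generate_content(
    ["Explain the reasoning shown in this diagram.", image],
    generation_config=genai.GenerationConfig(max_output_tokens=8000),
)
print(response.text)  # output is text only; no built-in Search or code execution
```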