Gemini Pro, a lightweight version of Google's more capable Gemini Ultra model (itself currently in private preview for a “select set” of customers), is now accessible in public preview in Vertex AI, Google's fully managed AI dev platform, via the new Gemini Pro API. The API is free to use “within limits” for the time being (more on what that means later) and supports 38 languages and regions, including Europe, as well as features like chat functionality and filtering.
“Gemini’s a state-of-the-art natively multimodal model that has sophisticated reasoning and advanced coding skills,” Google Cloud CEO Thomas Kurian said during a press briefing on Tuesday. “[Now,] developers will be able to build their own applications against it.”
Gemini Pro API
By default, the Gemini Pro API in Vertex accepts text as input and generates text as output, similar to generative text model APIs like Anthropic’s, AI21’s and Cohere’s. An additional endpoint, Gemini Pro Vision, also launching today in preview, can process text and imagery — including photos and video — and output text along the lines of OpenAI’s GPT-4 with Vision model.
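To make the two endpoints concrete, here's a minimal sketch of the JSON request body those APIs accept. The field names (`contents`, `parts`, `inlineData`) follow the shape of Google's public `generateContent` REST interface, but treat the exact schema as an assumption and check the official Vertex AI docs before relying on it:

```python
import base64
import json

def build_request(prompt, image_bytes=None, mime_type="image/jpeg"):
    """Build a generateContent-style request body: text-only for
    gemini-pro, or text plus an inline image for gemini-pro-vision.
    (Field names assumed from the public REST docs.)"""
    parts = [{"text": prompt}]
    if image_bytes is not None:
        # Gemini Pro Vision takes images as base64-encoded inline data.
        parts.append({
            "inlineData": {
                "mimeType": mime_type,
                "data": base64.b64encode(image_bytes).decode("ascii"),
            }
        })
    return {"contents": [{"role": "user", "parts": parts}]}

# A text-only request and a text-plus-image request:
print(json.dumps(build_request("Summarize this article."), indent=2))
print(json.dumps(build_request("Describe this photo.",
                               image_bytes=b"\x89PNG..."), indent=2))
```

In a real call you'd POST this body, authenticated, to the model's Vertex AI endpoint (or use Google's SDK, which builds it for you); the sketch only shows how the text-only and multimodal inputs differ.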
Image processing addresses one of the major criticisms of Gemini following its unveiling last Wednesday — namely that the version of Gemini powering Bard, a fine-tuned Gemini Pro model, can’t accept images despite technically being “multimodal” (i.e. trained on a range of data including text, images, videos and audio). Questions linger around Gemini’s image analysis performance and skills, especially in light of a misleading product demo. But now, at least, users will be able to take the model and its image comprehension for a spin themselves.
Within Vertex AI, developers can customize Gemini Pro to specific contexts and use cases, leveraging the same fine-tuning tools available for other Vertex-hosted models, like Google’s PaLM 2. Gemini Pro can also be connected to external APIs to perform particular actions or “grounded” to improve the accuracy and relevance of the model’s responses, either with third-party data from an app or database or with data from the web and Google Search.
Citation checking — another existing Vertex AI capability, now with support for Gemini Pro — serves as an additional fact-checking measure by highlighting the sources of information Gemini Pro used to arrive at a response.
“Grounding allows us to take an answer that Gemini’s generated and compare that with a set of data that sits within a company’s own systems … or web sources,” Kurian said. “[T]his comparison allows you to improve the quality of the model’s answers.”
Kurian spent a fair chunk of time spotlighting Gemini Pro’s control, moderation and governance options — seemingly pushing back against coverage implying that Gemini Pro isn’t the strongest model out there. Will the reassurances be enough to convince developers? Maybe. But if they aren’t, Google’s sweetening the pot with discounts.
Input for Gemini Pro on Vertex AI will cost $0.0025 per 1,000 characters while output will cost $0.00005 per 1,000 characters. (Vertex customers are billed per 1,000 characters and, in the case of models like Gemini Pro Vision, per image.) That’s a 4x and 2x reduction, respectively, from the pricing for Gemini Pro’s predecessor. And for a limited time — until early next year — Gemini Pro is free to try for Vertex AI customers.
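To put those rates in perspective, here's a back-of-the-envelope estimate. It takes the article's quoted figures as per-1,000-character rates (as the billing parenthetical indicates) and ignores per-image charges; the sample character counts are illustrative assumptions:

```python
# Quoted rates, read as USD per 1,000 characters (article's figures).
INPUT_RATE = 0.0025    # input: $0.0025 / 1K chars
OUTPUT_RATE = 0.00005  # output: $0.00005 / 1K chars

def estimate_cost(input_chars, output_chars):
    """Estimate one request's cost in USD from raw character counts."""
    return (input_chars / 1000) * INPUT_RATE + (output_chars / 1000) * OUTPUT_RATE

# E.g. a 2,000-character prompt producing a 1,000-character answer:
print(f"${estimate_cost(2000, 1000):.6f}")  # → $0.005050
```

At these rates, even a million such requests would land around $5,050 — the kind of arithmetic behind Kurian's pricing pitch.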
“Our goal is to attract developers with attractive pricing,” Kurian said with candor.
Beefing up Vertex
Google’s bringing other new features to Vertex AI in the hopes of drawing developers away from rival platforms like AWS’ Bedrock.
Several pertain to Gemini Pro. Soon, Vertex customers will be able to tap Gemini Pro to power custom-built conversational voice and chat agents, providing what Google describes as “dynamic interactions … that support advanced reasoning.” Gemini Pro will also become an option for driving search summarization, recommendation and answer generation features in Vertex AI, drawing on documents across modalities (e.g. PDFs, images) from different sources (e.g. OneDrive, Salesforce) to satisfy queries.
Kurian says that he expects the Gemini Pro-powered conversational and search features to arrive “very early” in 2024.
Elsewhere in Vertex, there’s now Automatic Side by Side (Auto SxS). An answer to AWS’ recently announced Model Evaluation on Bedrock, Auto SxS lets developers evaluate models in an “on-demand,” “automated” fashion; Google claims Auto SxS is both faster and more cost-efficient than manual model evaluation (although the jury’s out on that pending independent testing).
Google’s also adding models to Vertex from third parties including Mistral and Meta, and introducing “step-by-step” distillation, a technique that creates smaller, specialized and low-latency models from larger models. In addition, Google’s extending its indemnification policy to include outputs from PaLM 2 and its Imagen models, meaning the company will legally defend eligible customers implicated in lawsuits over IP disputes involving those models’ outputs.
Generative AI models have a tendency to regurgitate training data — an obvious concern for corporate customers. If it’s one day discovered that a vendor like Google used copyrighted data to train a model without first obtaining the proper licensing, that vendor’s customers could end up on the hook for incorporating IP-infringing work into their projects.
Google’s stopping short of expanding its Vertex AI indemnification policy to cover customers using the Gemini Pro API. The company says, however, that it’ll do so once the Gemini Pro API reaches general availability.
Courtesy of TechCrunch