Tuesday, July 2, 2024

Google’s Bard chatbot gets Gemini Pro update, now available in Nigeria

Gemini Pro helps to double-check responses in more languages and brings ideas to life with image generation.

Google has announced that its Bard chatbot has received a significant upgrade, extending its reach to over 230 countries, including Nigeria, and supporting over 40 languages.

This update leverages the powerful Gemini Pro model, previously available only in English, to enhance Bard’s ability to understand, summarize, reason, brainstorm, write, and plan across diverse languages.

The expansion addresses a key barrier to access by offering Bard’s capabilities to users who prefer languages like Arabic, Chinese, French, Spanish, Hindi, and many more.

Additionally, the “Double Check” feature, which verifies responses by comparing them against search results, is now available in over 40 languages, promoting reliable information for all users.

Further enhancing user experience, Google has introduced image generation support through the Imagen 2 model, currently available in English. Users can now input prompts like “create an image of a futuristic city” and receive AI-generated visuals directly within the chatbot interface.

“Gemini Pro helps to double-check responses in more languages, brings ideas to life with image generation and makes Bard one of the most preferred chatbots, with or without cost,” Taiwo Kola-Ogunlade, head of communications for Google West Africa, said in a statement on Thursday.

Bringing ideas to life

Bard is a free, experimental AI chatbot that uses natural language processing and machine learning to simulate human conversations.

Bard can generate text, translate languages, and write creative content. It can also respond to user questions and prompts with a human-like understanding.

Google said its image generation feature was designed with responsibility in mind.

To ensure there’s a clear distinction between visuals created with Bard and original human artwork, Bard uses SynthID to embed digitally identifiable watermarks into the pixels of generated images.

“Our technical guardrails and investments in the safety of training data seek to limit violent, offensive or sexually explicit content,” Mr Kola-Ogunlade said.

“Additionally, we apply filters designed to avoid the generation of images of named people.”

