Google has announced SignGemma, a new artificial intelligence (AI) model that can translate sign language into spoken text. The model, which will join the Gemma family, is currently being tested by the Mountain View-based tech giant and is expected to launch later this year. Like the other Gemma models, SignGemma will be open-source and available to both individuals and businesses. It was first showcased during the Google I/O 2025 keynote and is designed to help people with speech and hearing disabilities communicate effectively, even with those who do not understand sign language.
In a post on X (formerly known as Twitter), the official Google DeepMind handle shared a demo of the AI model along with details about its release timeline. This is not the first public appearance of SignGemma, however: it was also briefly showcased at the Google I/O event by Gus Martins, Gemma Product Manager at DeepMind.
During the showcase, Martins highlighted that the AI model can provide real-time text translation from sign language, making face-to-face communication seamless. The model was trained on datasets spanning different sign languages, but it performs best when translating American Sign Language (ASL) into English.
According to MultiLingual, SignGemma can run entirely on-device without an Internet connection, making it suitable for areas with limited connectivity. It is said to be built on the Gemini Nano framework and to use a vision transformer to track and analyse hand movements, hand shapes, and facial expressions. Beyond making the model available to developers, Google could integrate it into its existing AI tools, such as Gemini Live.
Calling it “our most capable model for translating sign language into spoken text,” DeepMind said the model will be released later this year. The accessibility-focused model is currently in early testing, and the tech giant has published an interest form inviting individuals to try it out and provide feedback.