At Google I/O, Google launched new capabilities for the machine learning SDK available on Firebase.
Google on Tuesday announced several updates to Firebase, its mobile development platform. Notably, Google is launching new capabilities for ML Kit, the machine learning SDK that comes with ready-to-use, on-device and cloud-based APIs with support for custom models. The new capabilities, launched in beta, include an On-device Translation API, an Object Detection & Tracking API and AutoML Vision Edge.
Google announced the updates at Google I/O, the annual developer event where Google typically makes several AI-related announcements. The Firebase news came during the Developer Keynote on Day 1 of the conference.
With the On-device Translation API, app developers get access to offline models for fast, dynamic translation of text in 58 languages. It uses the same ML models that power Google Translate. The Object Detection and Tracking API lets an app locate and track, in real time, the most prominent object in a live camera feed. IKEA, for instance, used the new API to create a mobile app experience where users can take photos of household items to find the product, or similar items, in the retailer’s online catalogue.
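To give a sense of what the new API looks like in practice, here is a rough sketch of on-device translation on Android, assuming the Kotlin surface of the beta SDK (class names such as `FirebaseTranslatorOptions` come from the `firebase-ml-natural-language` libraries; the English-to-Spanish pair and log tag are illustrative):

```kotlin
import android.util.Log
import com.google.firebase.ml.naturallanguage.FirebaseNaturalLanguage
import com.google.firebase.ml.naturallanguage.translate.FirebaseTranslateLanguage
import com.google.firebase.ml.naturallanguage.translate.FirebaseTranslatorOptions

fun translateGreeting() {
    // Configure an English-to-Spanish translator (language pair is illustrative).
    val options = FirebaseTranslatorOptions.Builder()
        .setSourceLanguage(FirebaseTranslateLanguage.EN)
        .setTargetLanguage(FirebaseTranslateLanguage.ES)
        .build()
    val translator = FirebaseNaturalLanguage.getInstance().getTranslator(options)

    // Download the offline model once, then all translation runs locally on the device.
    translator.downloadModelIfNeeded()
        .addOnSuccessListener {
            translator.translate("Hello, world")
                .addOnSuccessListener { translated -> Log.d("MLKit", translated) }
                .addOnFailureListener { e -> Log.e("MLKit", "Translation failed", e) }
        }
}
```

Because the model is downloaded up front, subsequent calls to `translate` need no network connection, which is the point of the "on-device" designation.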
Meanwhile, with AutoML Vision Edge, app developers can create custom image classification models. For example, you could build an app that identifies different types of food or different species of animals. Developers upload their training data to the Firebase console and use Google’s AutoML technology to build a custom TensorFlow Lite model that runs locally on the end user’s device.
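On the device side, the trained model is consumed through ML Kit's image labeling API. The snippet below is a sketch assuming the beta Android API, where the asset path `"automl/manifest.json"` is a hypothetical location for the AutoML-exported model bundle:

```kotlin
import android.graphics.Bitmap
import android.util.Log
import com.google.firebase.ml.common.modeldownload.FirebaseAutoMLLocalModel
import com.google.firebase.ml.vision.FirebaseVision
import com.google.firebase.ml.vision.common.FirebaseVisionImage
import com.google.firebase.ml.vision.label.FirebaseVisionOnDeviceAutoMLImageLabelerOptions

fun labelImage(bitmap: Bitmap) {
    // Point at the AutoML-exported model bundled in the app's assets
    // ("automl/manifest.json" is a hypothetical path).
    val localModel = FirebaseAutoMLLocalModel.Builder()
        .setAssetFilePath("automl/manifest.json")
        .build()

    val options = FirebaseVisionOnDeviceAutoMLImageLabelerOptions.Builder(localModel)
        .setConfidenceThreshold(0.7f) // discard low-confidence labels
        .build()

    val labeler = FirebaseVision.getInstance().getOnDeviceAutoMLImageLabeler(options)

    labeler.processImage(FirebaseVisionImage.fromBitmap(bitmap))
        .addOnSuccessListener { labels ->
            labels.forEach { label ->
                Log.d("MLKit", "${label.text}: ${label.confidence}")
            }
        }
}
```

Since inference happens against a local TensorFlow Lite model, classification works offline and avoids the latency of a cloud round trip.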
In addition to the ML Kit enhancements, Google announced several other Firebase updates. For instance, Google is expanding, in beta, Firebase Performance Monitoring to web apps. It’s also introducing a new audience builder in Google Analytics for Firebase.