Posted by Andrey Vakunov and Dmitry Lagun, Research Engineers, Google Research

A wide range of real-world applications, including computational photography (e.g., portrait mode and glint reflections) and augmented reality effects (e.g., virtual avatars), rely on estimating eye position by tracking the iris. Once accurate iris tracking is available, we show that it is possible to determine the metric distance from the camera to the user without the use of a dedicated depth sensor. This, in turn, can improve a variety of use cases, ranging from computational photography, over virtual try-on of properly sized glasses and hats, to usability enhancements that adapt the font size depending on the viewer's distance.

Iris tracking is a challenging task to solve on mobile devices, due to limited computing resources, variable light conditions, and the presence of occlusions, such as hair or people squinting. Often, sophisticated specialized hardware is employed, limiting the range of devices on which the solution can be applied.

Today, we announce the release of MediaPipe Iris, a new machine learning model for accurate iris estimation.

FaceMesh can be adopted to drive virtual avatars (middle). By additionally employing iris tracking (right), the avatar's liveliness is significantly enhanced.

An example of eye re-coloring enabled by MediaPipe Iris.
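The depth estimate mentioned above rests on the fact that the horizontal diameter of the human iris varies little from person to person, so its apparent size in pixels reveals distance through the standard pinhole-camera relationship. The post itself ships no code, so the following is only a minimal sketch under stated assumptions: the function name `estimate_distance_mm` is hypothetical, the ~11.7 mm iris diameter is the commonly cited average, and the focal length and measured iris width are placeholder values, not outputs of the actual model.

```python
def estimate_distance_mm(focal_length_px: float,
                         iris_diameter_px: float,
                         iris_diameter_mm: float = 11.7) -> float:
    """Pinhole-camera depth from the apparent iris size.

    distance = focal_length * real_size / apparent_size, with the real
    horizontal iris diameter assumed to be ~11.7 mm (population average).
    """
    if iris_diameter_px <= 0:
        raise ValueError("iris_diameter_px must be positive")
    return focal_length_px * iris_diameter_mm / iris_diameter_px


# Placeholder numbers: a 1000 px focal length and an iris spanning 25 px
# in the image give a distance of 1000 * 11.7 / 25 = 468 mm (~0.5 m).
print(estimate_distance_mm(focal_length_px=1000.0, iris_diameter_px=25.0))
```

In practice the focal length would come from camera calibration (or the device's EXIF/camera API), and the iris diameter in pixels from the landmarks produced by an iris-tracking model.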