The new Pixel phones use a Google-exclusive "semantic image segmentation model" called DeepLab-v3+, which has now been released through the open-source TensorFlow library.
According to the blog post, "semantic image segmentation" means "assigning a semantic label, such as 'road', 'sky', 'person', 'dog', to every pixel in an image."
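That per-pixel labeling can be pictured as a small array of class IDs, one per pixel. The sketch below uses hypothetical IDs and a hand-labelled toy image purely for illustration; real DeepLab-v3+ checkpoints use the label set of whichever dataset they were trained on.

```python
import numpy as np

# Hypothetical class IDs for illustration only.
CLASSES = {0: "road", 1: "sky", 2: "person", 3: "dog"}

# A segmentation model outputs one class ID per pixel.
# Here, a tiny 3x4 "image" labelled by hand.
label_map = np.array([
    [1, 1, 1, 1],   # top row: sky
    [1, 2, 2, 1],   # a person in the middle
    [0, 0, 0, 0],   # bottom row: road
])

# Map each ID back to its human-readable label.
names = np.vectorize(CLASSES.get)(label_map)
print(names[1, 1])  # the pixel at row 1, column 1 is labelled "person"
```

The key point is that the output has the same spatial shape as the input image, so every pixel carries its own semantic label.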
Google Research has detailed the model, DeepLab-v3+, in a post on its machine-learning work.
Unlike most competing flagships, which rely on dual-camera setups, Google's latest model uses semantic image segmentation to categorize each pixel of the photo.
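This is why a single camera suffices for portrait-style effects: once each pixel is labelled, the person pixels can be separated from the background with a simple mask. A minimal sketch, again with a hypothetical "person" class ID and a fake image, not the actual Pixel pipeline:

```python
import numpy as np

PERSON = 2  # hypothetical class ID for "person"

# Pretend segmentation output for a 3x4 image (one class ID per pixel).
label_map = np.array([
    [1, 1, 1, 1],
    [1, 2, 2, 1],
    [0, 2, 2, 0],
])

# Fake grayscale image of the same size.
image = np.arange(12, dtype=float).reshape(3, 4)

mask = label_map == PERSON               # True where a person was detected
foreground = np.where(mask, image, 0.0)  # keep the person pixels sharp
background = np.where(mask, 0.0, image)  # the rest, e.g. to blur for bokeh

print(int(mask.sum()))  # 4 person pixels in this toy example
```

A phone camera pipeline would blur `background` and recombine it with `foreground`, achieving the depth-of-field look without a second lens.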