Pixel 4 – AI On Edge and On Demand!

Tags: Google, Pixel, Pixel 4, Pixel 4 XL · Apr 16, 2020

Google has strengthened its presence in the smartphone market with its latest launches, the Pixel 4 and Pixel 4 XL. The new variants come loaded with the latest technologies and features, which may be game-changers for Google in an already crowded smartphone market. Most of these features are powered by artificial intelligence, an area where Google seems to have taken a lead over the other players.

The main theme of this year's Google launches is on-device machine learning: the heavy lifting behind AI and machine learning features, namely the intensive computations performed by neural networks and other models, is now processed on the device itself rather than on a remote server or in the cloud. The change may not look like much at first, but once you factor in the reduced lag and the real-world usefulness of AI features, on-device processing can put Google well ahead of its competitors.

So What Makes It Possible?

We have seen how processing on the device itself can significantly reduce lag in many machine-learning-based features and can open the door to features that were previously impossible when processing was done remotely. The main question, then, is how Google managed to do something none of its competitors have achieved in a production-grade product. The answer is the new TPU [1] chip. A TPU, or Tensor Processing Unit, is an AI accelerator: an application-specific integrated circuit designed for processing the large matrices [2], or tensors [3], that are the backbone of any neural network, and indeed of most machine learning algorithms. Google first unveiled TPUs in 2016, but until recently they were mostly confined to cloud servers or used for Google's internal processing tasks. The main reasons were the size of the units and the energy they required for long-running processing tasks.
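To give a sense of the kind of work a TPU accelerates, here is a minimal sketch of a single dense neural-network layer in NumPy. The shapes and values are illustrative, not taken from any Google model; the matrix multiply in the middle is the operation that dedicated tensor hardware speeds up.

```python
import numpy as np

def dense_layer(x, W, b):
    """y = relu(x @ W + b): one matrix multiply plus bias and activation.

    Networks stack many of these; the `x @ W` product dominates the cost,
    which is why accelerators are built around fast matrix arithmetic.
    """
    return np.maximum(x @ W + b, 0.0)

rng = np.random.default_rng(0)
x = rng.standard_normal((1, 128))   # one input vector
W = rng.standard_normal((128, 64))  # weight matrix (illustrative sizes)
b = np.zeros(64)

y = dense_layer(x, W, b)
print(y.shape)  # (1, 64)
```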

An Initial Tensor Processing Unit

In 2018 [4], Google introduced the Edge TPU [5], which is capable of similar tasks but at a fraction of the size of the original units and with lower power consumption. These two properties make it ideal for use in mobile devices, propelling the computational capabilities of Google's new smartphones to a whole new level. A size comparison of the unit against a single US cent is shown in the image below.

Size of Edge TPU

New AI Powered Features

The new devices have a ton of features that have been improved upon or are new introductions altogether; some of the ones that stand out are:


Camera

The camera AI not only helps make photos better but can also make recommendations, such as:

  • Low-light settings for optimal power consumption
  • Depth prediction for portrait-mode images
  • Night Sight, significantly improved on the Pixel 4

Facial Recognition

The TPU chips enable localized processing of images for facial recognition, making possible a robust on-device model that is reliable enough for Google to enable payments via facial recognition itself. Google also claims that the facial recognition algorithm on the Pixel 4 actually works faster than Apple's iPhone Face ID.
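As a rough illustration of how the matching step in face recognition can work (the actual Pixel pipeline is not public), the sketch below compares two face embedding vectors with cosine similarity. The embedding network that would run on the Edge TPU is out of scope here, and the threshold and vectors are assumptions for illustration only.

```python
import numpy as np

def cosine_similarity(a, b):
    """Similarity of two embedding vectors, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_same_person(enrolled, candidate, threshold=0.8):
    # threshold is an illustrative value, not Google's actual setting
    return cosine_similarity(enrolled, candidate) >= threshold

# Toy 3-d embeddings; real systems use hundreds of dimensions.
enrolled = np.array([0.1, 0.9, 0.3])
candidate = np.array([0.12, 0.88, 0.31])  # near-identical embedding
print(is_same_person(enrolled, candidate))  # True
```

Keeping both the embedding network and this comparison on-device is what lets unlocking work without sending face data to a server.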

One concern that has been raised about the facial recognition feature: the BBC reported that it works even when the user's eyes are closed. This raises serious security concerns, since facial recognition also enables payments, putting users at potential financial risk. Google, however, has announced that a future update will require the user's eyes to be open for the feature to work.

The Next Generation Google-Assistant

One of the main differences that stands out about the new Google Assistant is its multi-turn dialogue feature: one doesn't have to end the conversation for the search results to appear, so the context of multiple talking points can be fed directly into the architecture to get results. To end a conversation with the Assistant, one will now say "stop".
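The multi-turn behavior can be sketched as a loop that keeps prior turns in context until the user says "stop". This is a toy illustration of the idea, not Google's implementation; the class and method names are made up for the example.

```python
class Conversation:
    """Toy multi-turn dialogue: earlier turns stay in context until 'stop'."""

    def __init__(self):
        self.context = []

    def say(self, utterance):
        if utterance.lower() == "stop":
            self.context = []          # conversation ends, context is cleared
            return "conversation ended"
        self.context.append(utterance)
        # All prior turns feed the query, so follow-ups like
        # "now only the ones from 2019" resolve against earlier requests.
        return "query with context: " + " | ".join(self.context)

c = Conversation()
c.say("show me photos of Paris")
print(c.say("now only the ones from 2019"))
```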

Speech Recognition

To make continuous conversation with the new Google Assistant possible, speech recognition had to be processed locally. This was made possible by the new Edge TPU chipsets, which can run speech recognition on-device. The same capability also let Google introduce new features such as searching for particular words in the voice recording app. So far, however, the Recorder app is known to perform poorly at labeling the different speakers in a recording.
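Once on-device speech recognition produces a time-stamped transcript, word search in a recording reduces to a simple lookup. The sketch below assumes a hypothetical transcript format (a list of `(seconds, word)` pairs); the data structure and function name are illustrative, not the Recorder app's actual internals.

```python
transcript = [
    (0.0, "okay"), (0.4, "google"), (1.2, "remind"),
    (1.6, "me"), (1.9, "tomorrow"),
]

def find_word(transcript, word):
    """Return the timestamps (in seconds) where `word` was spoken."""
    return [t for t, w in transcript if w == word.lower()]

print(find_word(transcript, "google"))  # [0.4]
```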

All in all, Google's new phone is a solid package with a lot of AI in it; the phone certainly feels like a magic box from the future. It has also triggered debates on a number of topics in the AI domain, including ethics and security, that were long overdue and are now being seriously discussed and, hopefully, answered.
