Project EDITH Phase 6 - AI on Android from Unity [DEMO]
Dec 02, 2020
Want to know how to implement AI on Android? Check out how we did this in Unity.
▶Ultimate AI-CV Webinar Registration
▶Project E.D.I.T.H. Course
Hey guys, and welcome back. In this demo, you have seen how mobile AI can assist us with specialized tasks such as repairing a car. This technology can go even further, into preventative maintenance of vehicles and machinery. So in this lecture we will focus on how to port your AI models, such as object detection, onto an Android device in VR mode, so that you can use it with the likes of one of these, a Google Cardboard.
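Whatever runtime ends up executing the model on the phone, the camera frame first has to be squeezed into the model's input format. Here is a minimal sketch of that preprocessing step in Python; the 300x300 input size and the [-1, 1] scaling are assumptions typical of MobileNet-SSD variants, so check your own model's expected shape and range.

```python
import numpy as np

def preprocess(frame, size=300):
    # MobileNet-SSD variants typically expect a fixed square input
    # (300x300 here -- an assumption; check your model's input shape)
    # with pixel values scaled from uint8 [0, 255] to float [-1, 1].
    h, w, _ = frame.shape
    # Nearest-neighbour resize using pure NumPy integer indexing
    ys = np.arange(size) * h // size
    xs = np.arange(size) * w // size
    resized = frame[ys][:, xs]
    return (resized.astype(np.float32) / 127.5) - 1.0

# A fake 720p camera frame standing in for real input
frame = np.random.randint(0, 256, (720, 1280, 3), dtype=np.uint8)
tensor = preprocess(frame)
print(tensor.shape)  # (300, 300, 3)
```

The nearest-neighbour resize is deliberately dependency-free; on device you would let the runtime or the camera pipeline do the resizing instead.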
Since I do not have a Mac, I will focus solely on Android; however, if you would like to build this for iOS, the process is more or less the same in Unity, with the exception of some additional iOS-specific settings. With that out of the way, let's look at the limitations of this phase of the project:
So, on my beefy deep learning PC, I can run a MobileNet object detection model at a comfortable 20 FPS; however, on my LG V30 I am getting around a heart-dampening 4.7 FPS. And running in Google Cardboard VR mode, I scraped in at around 3.8 FPS. Now, if you are running this on a recent flagship phone, especially one that contains a neural processing unit, or NPU, your frame rates may be better than mine.
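If you want to compare your own device against these numbers, the measurement itself is simple: time a batch of inference calls after a short warm-up. This is a sketch with a hypothetical stub standing in for the real model call (the `fake_infer` sleep of 50 ms per frame is just a stand-in approximating the 20 FPS desktop figure).

```python
import time

def measure_fps(infer, frames, warmup=3):
    # Run a few warm-up inferences first; the first calls are often
    # slower (caches, lazy initialization) and would skew the average.
    for f in frames[:warmup]:
        infer(f)
    start = time.perf_counter()
    for f in frames:
        infer(f)
    elapsed = time.perf_counter() - start
    return len(frames) / elapsed

# Hypothetical stand-in for a real model: sleep ~50 ms per frame,
# which corresponds to roughly 20 FPS.
fake_infer = lambda frame: time.sleep(0.05)
fps = measure_fps(fake_infer, list(range(20)))
print(round(fps, 1))
</antml>```

Swap `fake_infer` for your actual detector call and feed it real frames to get a comparable number on your phone or PC.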
Other things I have tried include adding a voice assistant, which worked quite well in conjunction with the object detection module. However, when I add in the 3D graphic ID prefabs, my phone just shows a sad, blank VR screen. Hence, I stuck with the traditional green bounding boxes.
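Those traditional green boxes are cheap to render precisely because they are just pixels painted over the frame, no 3D prefabs required. A minimal sketch of that overlay, done directly in NumPy (the coordinates and thickness here are made-up example values):

```python
import numpy as np

def draw_box(img, x1, y1, x2, y2, thickness=2):
    # Paint a hollow green rectangle into an RGB image --
    # the classic object-detection overlay.
    green = np.array([0, 255, 0], dtype=img.dtype)
    img[y1:y1 + thickness, x1:x2] = green   # top edge
    img[y2 - thickness:y2, x1:x2] = green   # bottom edge
    img[y1:y2, x1:x1 + thickness] = green   # left edge
    img[y1:y2, x2 - thickness:x2] = green   # right edge
    return img

# Black test frame with one hypothetical detection box
frame = np.zeros((480, 640, 3), dtype=np.uint8)
draw_box(frame, 100, 80, 300, 240)
print(frame[80, 150])  # a top-edge pixel: green (0, 255, 0)
```

In practice you would take the box coordinates from the detector's output and draw one rectangle per detection; libraries like OpenCV provide the same thing as a one-liner.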
I will try this again at a later stage, when I upgrade to a more powerful phone or when I get my hands on the nReal glasses once they are released, hopefully in the next few months.