NVIDIA has launched a new member of its Jetson family of compact computing modules: the Jetson Xavier NX. The entire module is smaller than a standard debit or credit card. Although it is not the smallest device in the lineup, it more than makes up for its size with the processing capabilities and other features it has to offer.
The Xavier NX is pin-compatible with the earlier Jetson Nano, NVIDIA's smallest GPU-based module, making it possible to port AIoT applications already deployed on the Nano. The new device has also been designed for compatibility with all major AI frameworks, including PyTorch and TensorFlow.
NVIDIA's website claims that the new device can deliver up to 14 trillion operations per second (TOPS) at 10 W of power consumption, and 21 TOPS at 15 W, making it possible to run multiple neural networks in parallel or to process data from multiple sensors at different resolutions.
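A quick back-of-the-envelope calculation from those quoted figures shows that both power modes land at the same efficiency point; real-world throughput will of course depend on the workload and the selected power mode:

```python
# Efficiency at the two quoted power modes (figures from NVIDIA's specs).
tops_10w = 14   # TOPS at the 10 W power mode
tops_15w = 21   # TOPS at the 15 W power mode

efficiency_10w = tops_10w / 10  # TOPS per watt
efficiency_15w = tops_15w / 15

print(efficiency_10w, efficiency_15w)  # both modes work out to 1.4 TOPS/W
```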
Like the other members of the Jetson family, the Xavier NX runs NVIDIA's CUDA-X AI software stack, which makes it easier to run optimized inference for deep learning architectures. The CPU is a 6-core Carmel ARM 64-bit processor with 6 MB of L2 and 4 MB of L3 cache, and the module supports up to six CSI cameras over 12 MIPI CSI-2 lanes. Memory is 8 GB over a 128-bit interface, capable of data transfers at roughly 51 GB/s. The default operating system is an Ubuntu-based Linux distribution.
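The quoted memory bandwidth follows directly from the bus width and transfer rate; a quick sanity check, assuming LPDDR4x at 3200 MT/s (a plausible rate for this class of module, not stated in the article):

```python
# Back-of-the-envelope check of the quoted ~51 GB/s memory bandwidth.
bus_width_bits = 128        # 128-bit memory interface (from the specs)
transfer_rate_mts = 3200    # mega-transfers per second (assumed)

# bytes per transfer * transfers per second, converted to GB/s
bandwidth_gb_s = bus_width_bits / 8 * transfer_rate_mts / 1000
print(bandwidth_gb_s)  # 51.2, in line with the quoted figure
```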
Performance graphs of comparable devices from Qualcomm and Intel can be seen in the figure below, and they show that the new Xavier NX is easily among the best in the business today.
The new architecture's improved energy efficiency allows it to use a fraction of the energy of existing hardware while still delivering speed-ups in the range of 20x.
The new hardware works with the NVIDIA SDK tools for better performance on the inference side: trained models can be converted into TensorRT engines, which can increase the inference speed of existing architectures many times over.
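One common route is to export a trained model to ONNX and convert it with the `trtexec` tool that ships with NVIDIA's JetPack SDK. A minimal sketch of assembling that invocation (the model and engine paths are placeholders):

```python
def trtexec_command(onnx_path: str, engine_path: str, fp16: bool = True):
    """Build a trtexec invocation that converts an ONNX model into a
    serialized TensorRT engine."""
    cmd = ["trtexec", f"--onnx={onnx_path}", f"--saveEngine={engine_path}"]
    if fp16:
        # Half precision typically raises inference throughput on Jetson GPUs.
        cmd.append("--fp16")
    return cmd

# On a Jetson device this would run the actual conversion, e.g.:
# subprocess.run(trtexec_command("model.onnx", "model.engine"), check=True)
```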
Beyond the points stated above, there are several other improvements that set the new hardware apart from existing architectures.
NVIDIA has moved quickly to capture the fast-growing AI market. It already dominates training hardware with products such as the Tesla K80, P100, and T4 GPUs, and it is now working to capture the inference segment as well with the Jetson product line, which is smaller and more energy-efficient than its bigger peers.