We are focused on solving the fundamentally hard problems in Machine Intelligence and Perception. Training advanced deep learning models is challenging, and it gets exponentially harder when you are also optimizing for criteria such as target devices, latency, and training time. Larger NLP models, for which data parallelism is no longer a viable option, need custom modifications to fit your training framework and infrastructure.
Nod’s AI Optimization Framework addresses these challenges to accelerate model development, training, and deployment. It provides a Neural Architecture Search (NAS) that finds an optimal precision-reduced model with guaranteed accuracy for your target devices. Nod’s MLIR-based Neural Compiler simplifies and automates parallelism beyond data and model parallelism, down to operator-level parallelism. Nod’s Distributed Runtime is built on proven HPC infrastructure and, together with the Nod Distributed Optimizer, enables super-linear scalability to thousands of nodes.
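Nod’s compiler and runtime are proprietary, but the idea behind operator-level parallelism can be illustrated with a small, self-contained sketch: when two operators in a model graph have no data dependency on each other, a scheduler can dispatch them concurrently and only synchronize at the join. The graph, operator names, and thread-pool scheduler below are hypothetical illustrations, not Nod’s API.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical two-branch graph: branch_a and branch_b consume the same
# input but are independent of each other, so an operator-parallel
# scheduler may execute them concurrently before the join node.

def branch_a(x):
    # e.g. a scaling operator
    return [v * 2 for v in x]

def branch_b(x):
    # e.g. a bias-add operator
    return [v + 1 for v in x]

def join(a, b):
    # join node: element-wise sum of the two branches
    return [u + v for u, v in zip(a, b)]

def run_graph(x):
    with ThreadPoolExecutor(max_workers=2) as pool:
        # Independent operators are dispatched together ...
        fa = pool.submit(branch_a, x)
        fb = pool.submit(branch_b, x)
        # ... and we synchronize only at the join.
        return join(fa.result(), fb.result())

print(run_graph([1, 2, 3]))
```

In a real compiler this dependency analysis and scheduling happens over the full operator graph, across devices rather than threads, and is combined with data and model partitioning.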
Nod’s AI Perception Stack provides state-of-the-art algorithms for Neural SLAM, Monocular Depth Estimation, Obstacle Avoidance, and Body Pose Estimation for Advanced Driver-Assistance Systems (ADAS), Robotics, and Augmented Reality use cases.