AI training doesn’t have to be expensive. Nod is opening a limited early access program for select customers to deploy the industry’s most efficient and cost-effective distributed ML training runtime. There is no cost to you if we don’t demonstrate a cost and/or efficiency improvement. Nod Runtime uses fine-grained, operator-level asynchronous task parallelism to make effective use of your existing hardware deployments, overlapping compute and communication to maximize utilization of hardware and network resources.
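To illustrate the general idea of compute/communication overlap (this is a minimal, hypothetical sketch, not Nod Runtime’s actual implementation): during the backward pass, as soon as one layer’s gradient is ready, its all-reduce can be launched asynchronously while the previous layer’s gradient is still being computed. The `backward_layer` and `all_reduce` functions below are stand-ins for the real operations.

```python
# Hypothetical sketch of compute/communication overlap during a backward
# pass. Threads stand in for async communication kernels; the layer math
# is a placeholder, not a real model.
from concurrent.futures import ThreadPoolExecutor

def backward_layer(layer_id, grads):
    # Stand-in for computing this layer's gradient.
    grads[layer_id] = layer_id * 2

def all_reduce(layer_id, grads, reduced):
    # Stand-in for an asynchronous all-reduce of one layer's gradient
    # (e.g. summing across workers).
    reduced[layer_id] = grads[layer_id]

def train_step(num_layers=4):
    grads, reduced = {}, {}
    pending = []
    with ThreadPoolExecutor(max_workers=2) as pool:
        # Walk layers in reverse (backward pass). As soon as a layer's
        # gradient is ready, launch its all-reduce in the background and
        # immediately start computing the previous layer's gradient, so
        # communication overlaps with compute instead of serializing.
        for layer in reversed(range(num_layers)):
            backward_layer(layer, grads)
            pending.append(pool.submit(all_reduce, layer, grads, reduced))
        for fut in pending:
            fut.result()  # wait for all communication to drain
    return reduced
```

In a serialized schedule, every all-reduce would block the backward pass; overlapping them as above is what lets utilization of both the GPUs and the network stay high.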

The results above show a 35% QPS improvement when training the DLRM recommendation model to the MLPerf AUC target (0.8025) using Nod Runtime, compared to NVIDIA’s NGC on an 8×V100 GPU Google Cloud instance.
Nod Runtime is available on AWS and GCP, or installable on-premises as a drop-in addition to your existing workflow.

Sign up for Early Access: https://lnkd.in/gayGry2

Engineers: If you’ve read this far and like working on cutting-edge machine learning systems, optimizations, and compilers, check out our open roles:

A.I Compiler Engineer: https://lnkd.in/gaDB9ac
ML Systems Engineer: https://lnkd.in/gGKeQdW
