Video Library
AI Inferencing in the Core and the Cloud

Learn about NVIDIA Triton Inference Server integration with NetApp ONTAP AI for inferencing on DGX A100 in the Core and the Cloud. See how ONTAP AI can be designed to host inferencing workloads on-premises, with a data pipeline connecting to the cloud.
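
For orientation before watching, the sketch below shows what a single inference request to a Triton Inference Server looks like from a Python client. It is a minimal illustration only; the server URL, model name, and tensor names (localhost:8000, my_model, INPUT0, OUTPUT0) are placeholder assumptions and do not come from the video.

# Minimal sketch of a Triton HTTP inference request.
# Assumes the tritonclient[http] package and a running Triton server;
# model and tensor names below are hypothetical placeholders.
import numpy as np
import tritonclient.http as httpclient

# Assumed endpoint of the Triton server (e.g. running on the DGX system).
client = httpclient.InferenceServerClient(url="localhost:8000")

# Hypothetical model with one FP32 input "INPUT0" and one output "OUTPUT0".
batch = np.random.rand(1, 3, 224, 224).astype(np.float32)
infer_input = httpclient.InferInput("INPUT0", list(batch.shape), "FP32")
infer_input.set_data_from_numpy(batch)

result = client.infer(
    model_name="my_model",
    inputs=[infer_input],
    outputs=[httpclient.InferRequestedOutput("OUTPUT0")],
)
print(result.as_numpy("OUTPUT0").shape)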