Democratize GPU Access with MLOps

Domino, in partnership with NVIDIA®, supports open, collaborative, reproducible model development, training, and management free of DevOps constraints - powered by efficient, end-to-end compute. Democratize GPU access by enabling data science teams with powerful NVIDIA AI solutions - on premises, in the cloud, or in the modern hybrid cloud.

Provide Self-Serve Access to Infrastructure

Launch on-demand workspaces with the latest NVIDIA GPUs, preconfigured with open-source and commercial data science tools, frameworks, and libraries - free of DevOps.

Attach auto-scaling clusters that dynamically grow and shrink - using popular compute frameworks like Spark, Ray, and Dask - to meet the needs of intensive deep learning and training workloads.
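For illustration only - this is not Domino's own API - a GPU workload submitted to an attached Ray cluster might look like the minimal sketch below; the cluster address, GPU count, and training function are assumptions.

    # Minimal sketch (assumes a Ray cluster is already attached and reachable,
    # and that each worker node exposes at least one GPU).
    import ray

    ray.init(address="auto")  # connect to the existing, auto-scaling cluster

    @ray.remote(num_gpus=1)   # reserve one GPU per task
    def train_shard(shard_id):
        # placeholder for real training logic on one data shard
        return f"shard {shard_id} trained"

    # fan out training tasks; the cluster grows and shrinks to meet demand
    results = ray.get([train_shard.remote(i) for i in range(8)])
    print(results)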

Data scientists can focus on research while IT teams eliminate infrastructure configuration and debugging tasks.

Orchestrate Workloads Centrally for Improved Productivity

Domino acts as a single system of record - across tools, packages, infrastructure, and compute frameworks.

Provide data scientists self-service access to their preferred IDEs, languages, and packages so they can focus on data science innovation.

Reduce IT costs, management, and support burden with tools and NVIDIA infrastructure consolidated and orchestrated in a central location across projects and teams.

Reproduce Work and Compound Knowledge

Track all data science artifacts across teams and disparate tools - including code, package versions, parameters, NVIDIA infrastructure, and more.

Establish full visibility, repeatability, and reproducibility at any time across the end-to-end lifecycle.

Teams using different tools can seamlessly collaborate on a project, building on each other's insights and compounding collective knowledge.


Streamline Inference & Hosting

Support the end-to-end model lifecycle from ideation to production - explore, train, validate, deploy, monitor, and repeat - in a single platform with the latest NVIDIA GPU acceleration capabilities.

Domino makes it easy for data scientists to publish models - as an API, integrated in a web app, or deployed as a scheduled job - while monitoring drift and ongoing health.
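As a hedged illustration of what consuming a published model looks like from client code - the endpoint URL, token, and payload schema below are hypothetical placeholders, not Domino's documented format - a REST call from Python might resemble:

    # Minimal sketch of calling a hosted model endpoint (URL, token, and
    # payload schema are hypothetical placeholders).
    import requests

    MODEL_URL = "https://example-domino-host/models/churn/predict"   # hypothetical
    API_TOKEN = "replace-with-your-token"                            # hypothetical

    response = requests.post(
        MODEL_URL,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        json={"data": {"tenure_months": 14, "monthly_spend": 72.5}},  # example features
        timeout=30,
    )
    response.raise_for_status()
    print(response.json())  # e.g. a prediction plus model metadata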

Professionalize data science through common patterns and practices with workflows that reduce friction, so all teams involved in data science can maximize productivity and impact.


Drive Utilization of GPU Resources

Easily provision, share, and manage NVIDIA GPU resources. Set permissions by user group and use case to ensure valuable compute resources are efficiently utilized.

With Domino’s support for NVIDIA Multi-Instance GPU (MIG) technology on the NVIDIA A100 Tensor Core GPU, admins can support up to 56 concurrent notebooks or hosted models, each with an independent GPU instance (up to seven MIG instances per A100 across an eight-GPU system).

Domino gives IT visibility into GPU hardware utilization. Usage information and tracking enables IT to easily allocate resources and chargebacks while also measuring ROI.
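As one hedged example of how such utilization data can be gathered outside of Domino's own dashboards, NVIDIA's NVML Python bindings (the nvidia-ml-py package) can report per-GPU load and memory use:

    # Minimal sketch: poll per-GPU utilization via NVML (pip install nvidia-ml-py).
    import pynvml

    pynvml.nvmlInit()
    try:
        for i in range(pynvml.nvmlDeviceGetCount()):
            handle = pynvml.nvmlDeviceGetHandleByIndex(i)
            name = pynvml.nvmlDeviceGetName(handle)
            if isinstance(name, bytes):           # older bindings return bytes
                name = name.decode()
            util = pynvml.nvmlDeviceGetUtilizationRates(handle)  # percentages
            mem = pynvml.nvmlDeviceGetMemoryInfo(handle)          # bytes
            print(f"GPU {i} ({name}): {util.gpu}% busy, "
                  f"{mem.used / mem.total:.0%} memory in use")
    finally:
        pynvml.nvmlShutdown()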


Featured Integrations

Domino's close collaboration with NVIDIA means our Enterprise MLOps Platform supports a broad range of NVIDIA Accelerated Computing solutions.


NVIDIA DGX Systems: Purpose-Built for AI

Best-in-class AI Training

Automate the DevOps required to optimize utilization of the powerful NVIDIA DGX hardware. Domino’s enterprise MLOps platform is an NVIDIA DGX-Ready Software Solution, tested and certified for use on DGX systems to deliver revolutionary performance.

With this amount of power just a few clicks away, important research such as deep learning can be completed in a fraction of the time.

  • Leading enterprise MLOps platform optimized with purpose-built infrastructure sets the bar for data science innovation.
  • Automatically create, manage, and scale multi-node clusters, releasing them when training is done. Auto-scaling clusters work with the most common distributed compute frameworks - Spark, Ray, and Dask (see the sketch after this list).
  • Easily leverage a single DGX system to support a variety of different users and use cases. Allocate permissions by user group to ensure efficient utilization.
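For illustration only - the scheduler address below is an assumption, not a Domino-specific endpoint - a sketch of handing a computation to an attached Dask cluster:

    # Minimal sketch: run a distributed computation on an attached Dask cluster.
    import dask.array as da
    from dask.distributed import Client

    client = Client("tcp://dask-scheduler:8786")   # hypothetical scheduler address

    # build a large chunked array and compute across the cluster's workers
    x = da.random.random((20_000, 20_000), chunks=(2_000, 2_000))
    result = (x @ x.T).mean().compute()
    print(result)

    client.close()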

Read the Blog

Download Solution Brief

Watch Demo

NVIDIA AI Enterprise: Mainstream Servers

Data Center-Ready MLOps

Develop, deploy, and manage GPU-accelerated data science workloads on existing enterprise infrastructure. Domino’s validation for NVIDIA AI Enterprise pairs the Enterprise MLOps benefits of workload orchestration, self-serve infrastructure, and collaboration with cost-effective scale from virtualization on mainstream accelerated servers.

  • Put models into production faster, with cost-effective scale up and out potential for enterprise-wide deployments.
  • Enterprise-grade security, manageability, and support, with Domino validation to run on VMware vSphere® with Tanzu - all deployed on industry-leading, NVIDIA-Certified™ systems from mainstream server vendors.
  • Focus on research instead of DevOps by launching Domino Workspaces on demand, with Docker images configured with the latest data science tools and frameworks - optimized for NVIDIA GPUs - and automatic storing and versioning of code, data, and results.

Read the Blog

Domino on NVIDIA AI Enterprise Solution Brief

Download the Deployment Guide

NetApp ONTAP AI: Converged Infrastructure

Integrated MLOps and GPU Solution Powered by NVIDIA DGX

The Domino platform, combined with NetApp® ONTAP AI, offers an integrated solution for companies looking to allocate compute resources and centralize data science work.

Simplify, scale, and integrate your data pipeline for machine learning and deep learning with the ONTAP AI proven architecture, powered by NVIDIA DGX servers and NetApp cloud-connected all-flash storage.

  • Reduce risk and eliminate infrastructure silos with an optimized, flexible, validated solution.
  • Get started faster with streamlined configuration and deployment of your data science stack with Domino's Enterprise MLOps Platform on ONTAP AI infrastructure.
  • NetApp ONTAP AI, powered by NVIDIA DGX servers and NetApp cloud-connected flash storage, is one of the first converged infrastructure stacks built to help companies fully realize the promise of AI and deep learning.

Learn More About Domino & NetApp

Download the Solution Brief



Cloud: GPU Cloud Computing

Expanding horizons with Domino and NVIDIA in the Cloud

Domino serves as the front end to the cloud, automating elastic compute designed for data science workloads while letting IT govern and monitor usage.

Domino's platform supports NVIDIA GPUs in a variety of configurations to match your choice of cloud infrastructure and procurement.

  • Major Cloud Providers: NVIDIA GPU-accelerated solutions are available through all top cloud platforms. Domino supports NVIDIA GPUs on AWS, Azure, Google Cloud, OVHcloud, and more.

  • Cloud Marketplaces: Domino is available via AWS Marketplace and Azure Marketplace.

  • Managed Service: Tata Consultancy Services (TCS) offers a single, converged end-to-end solution for training AI, ML, and deep learning models using Domino and NVIDIA DGX systems hosted in TCS Enterprise Cloud.

Learn more about Domino and Cloud Data Science

Learn more about the TCS HPC A3 Managed Service Solution

Try Domino on NVIDIA LaunchPad for free!

Get immediate, short-term access to a curated lab with Domino on NVIDIA AI Enterprise.

Technical Resources

Technical Webinars

Data Science Blogs

Additional References

Domino’s growing partner ecosystem helps our customers accelerate the development and delivery of models with key capabilities of infrastructure automation, seamless collaboration, and automated reproducibility. This greatly increases the productivity of data scientists and removes bottlenecks in the data science lifecycle.