Domino MLOps platform

Accelerate deployment, operationalization, and governance of your AI at scale

Governed

Monitor deployed models for accuracy, drift, and performance, and retrain quickly to reduce risk and ensure business value. Track all models in a single pane of glass.

Scalable

Deploy to scalable infrastructure in on-premises, cloud, or hybrid environments to ensure high-performance inference and responsive models and apps.

Flexible

Domino supports your deployment strategy and infrastructure. Deploy in Domino, in-database, at the edge, to hosted services like SageMaker, or through existing CI/CD pipelines.

Deploy, observe, improve

Model Registry

Centralized model tracking

Track all your models regardless of where they were trained. With Domino Model Registry, you get complete lineage tracking for auditability using integrated model cards. The model registry offers a central repository of all models, streamlines iterative improvement, and facilitates stakeholder reviews and approvals for transitioning models from development to production.
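To make the registry concept concrete, here is a minimal sketch of a central model repository with versioned entries, lineage back to a training run, and stage transitions. The class and field names are illustrative assumptions for this sketch, not Domino's actual Model Registry API.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: names and structure are assumptions,
# not Domino's Model Registry API.

@dataclass
class ModelVersion:
    version: int
    source_run: str            # lineage: the training run that produced this version
    stage: str = "Development"
    metrics: dict = field(default_factory=dict)

class ModelRegistry:
    """Central repository of models, each tracked as a list of versions."""

    def __init__(self):
        self._models = {}      # model name -> list of ModelVersion

    def register(self, name: str, source_run: str, metrics: dict) -> ModelVersion:
        versions = self._models.setdefault(name, [])
        mv = ModelVersion(version=len(versions) + 1,
                          source_run=source_run, metrics=metrics)
        versions.append(mv)
        return mv

    def promote(self, name: str, version: int, stage: str) -> None:
        # In practice a stakeholder review would gate this transition.
        self._models[name][version - 1].stage = stage

    def latest(self, name: str, stage: str = "") -> ModelVersion:
        versions = self._models[name]
        if stage:
            versions = [v for v in versions if v.stage == stage]
        return versions[-1]
```

Keeping every version with its source run is what makes the audit trail work: any production prediction can be traced back to the exact run, data, and metrics that produced the model.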


Model Review and Validation

Deploy with confidence

Validate and review models with custom approval workflows to ensure models and applications are robust and audit-ready with best-in-class reproducibility. Provide reviewers with detailed lineage and model metrics to help them evaluate AI trustworthiness.
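A custom approval workflow is essentially a state machine with an audit trail. The sketch below shows the idea; the state names and reviewer fields are hypothetical, not Domino's built-in review workflow.

```python
# Hypothetical approval workflow sketch; states and roles are assumptions,
# not Domino's built-in review workflow.

ALLOWED = {
    "submitted": {"in_review"},
    "in_review": {"approved", "changes_requested"},
    "changes_requested": {"submitted"},
    "approved": set(),            # terminal state
}

class ReviewWorkflow:
    def __init__(self):
        self.state = "submitted"
        self.history = []          # audit trail: every transition is recorded

    def transition(self, new_state: str, reviewer: str, note: str = "") -> None:
        if new_state not in ALLOWED[self.state]:
            raise ValueError(f"cannot go from {self.state} to {new_state}")
        self.history.append((self.state, new_state, reviewer, note))
        self.state = new_state
```

Recording every transition, rather than just the final state, is what makes the process audit-ready: reviewers, their decisions, and their notes survive as evidence.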

One-click deployment

Rapidly and flexibly deploy AI solutions

Deploy models to any endpoint for both batch and real-time predictions. Deploy in Domino natively, integrate with existing CI/CD pipelines, or export models to platforms like SageMaker, Snowflake, Databricks, or NVIDIA FleetCommand. Deploy in a hybrid world — on-prem, across multiple on-prem environments, or a combination of on-prem and cloud. Share analytical dashboards, AI models, and GenAI apps with any framework, including Dash, Flask, Streamlit, Shiny, and more.
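As a rough sketch of serving both real-time and batch predictions from one endpoint, here is a standard-library HTTP handler. The linear scoring rule and feature names are placeholders for a real model, and in practice you would deploy via Domino's native endpoints or a framework like Flask or FastAPI rather than `http.server`.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Minimal scoring-endpoint sketch using only the standard library.
# The model below is a placeholder weighted sum, not a trained model.

def score(features: dict) -> float:
    return 0.3 * features.get("tenure", 0) + 0.7 * features.get("usage", 0)

class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))
        # Batch if a list of records was posted, real-time otherwise.
        records = payload if isinstance(payload, list) else [payload]
        body = json.dumps({"predictions": [score(r) for r in records]}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):   # silence default request logging
        pass

def serve(port: int = 8000) -> None:
    HTTPServer(("", port), PredictHandler).serve_forever()
```

Accepting either a single record or a list lets the same endpoint back an interactive app and a nightly batch job without duplicating scoring logic.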


Model Monitoring

Continuously improve model performance

Track data drift and model quality degradation automatically with integrated monitoring and alerts. Continuously compare predictions against ground truth to track accuracy and improve performance. Incorporate LLM evaluation frameworks. Monitor endpoint activity and health with prebuilt or custom metrics. Quickly identify and remediate issues, and retrain with ease.
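One common way to quantify data drift is the population stability index (PSI), which compares a live feature distribution against the training distribution. The sketch below is a generic implementation of that conventional metric, not Domino's monitoring internals; the 0.2 alert threshold is a widely used rule of thumb, not a Domino default.

```python
import math

# Illustrative drift check using the population stability index (PSI).
# Binning scheme and smoothing are common conventions, not Domino-specific.

def psi(expected, actual, bins: int = 10) -> float:
    """Compare the live (actual) distribution of a feature to the
    training (expected) one. PSI > 0.2 is a common alert threshold."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0

    def proportions(values):
        counts = [0] * bins
        for v in values:
            idx = min(max(int((v - lo) / width), 0), bins - 1)
            counts[idx] += 1
        # Smooth empty bins to avoid log(0).
        return [max(c, 1) / max(len(values), 1) for c in counts]

    e, a = proportions(expected), proportions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))
```

Running a check like this on each monitored feature, and alerting when the index crosses the threshold, is what turns drift from a silent failure into a retraining trigger.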