Governance meets scalable inference
Domino + Amazon SageMaker + AWS Trainium and Inferentia
Watch on demand
Compliant and scalable end-to-end model ops: Develop in Domino and deploy to SageMaker – powered by AWS AI chips
Discover how Domino Data Lab integrates seamlessly with AWS AI chips, including Trainium2 and Inferentia, to provide a cost-effective, scalable solution for inference in Amazon SageMaker.
We'll show how Domino enables data science teams to deploy models directly to SageMaker, leveraging AWS Inferentia for optimized performance. Domino’s unified platform ensures AI governance, reproducibility, and centralized control, making it easier to move AI models from development to production.
Whether you’re a data scientist, ML engineer, or IT leader, this webinar will demonstrate how Domino, AWS AI chips, and Amazon SageMaker can bring your AI models to production faster, more cost-effectively, and with complete governance.
Key Takeaways
- How Domino integrates with AWS Inferentia and Trainium, including Trainium2 for optimized training and inference.
- Seamless deployment to Amazon SageMaker for scalable, low-latency inference (a minimal deployment sketch follows this list).
- Full lifecycle governance, ensuring compliance and auditability.
- Trainium2 cost/performance benefits for LLM training at scale.
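As a rough illustration of the deployment step called out above, the sketch below hosts a model on an Inferentia2-backed SageMaker endpoint using the SageMaker Python SDK. It is not Domino's integration itself, and the bucket path, entry point, and framework versions are placeholder assumptions for illustration only.

```python
# Minimal sketch: hosting a PyTorch model on an AWS Inferentia2 (ml.inf2)
# real-time endpoint with the SageMaker Python SDK. All names and versions
# below are illustrative placeholders, not Domino- or webinar-specific.
import sagemaker
from sagemaker.pytorch import PyTorchModel

session = sagemaker.Session()
role = sagemaker.get_execution_role()  # IAM role with SageMaker permissions

# Model artifact exported from the development environment (hypothetical path);
# for Inferentia, the model is typically compiled ahead of time with the Neuron SDK.
model = PyTorchModel(
    model_data="s3://example-bucket/models/my-model/model.tar.gz",
    role=role,
    entry_point="inference.py",   # custom inference handler packaged with the artifact
    framework_version="2.1",
    py_version="py310",
    sagemaker_session=session,
)

# Deploy to a real-time endpoint backed by an Inferentia2 instance.
predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.inf2.xlarge",
)

print(predictor.endpoint_name)
```

The instance_type is the main lever here: choosing an ml.inf2 instance routes inference to Inferentia2 accelerators, which is where the cost and latency benefits discussed in the webinar come from.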
Watch now to accelerate your AI journey with Domino and AWS, while maintaining full governance and scalability.
Featured Speakers
Josh Mineroff
Director of Solution Architecture, Technology Partners
Josh Mineroff works with Domino's technology partners such as AWS to bring cutting-edge AI solutions from the data and analytics ecosystem to leading data science teams. Prior to joining Domino in 2019, he founded Meet n’ Treat — a dog socialization platform. During his Ph.D. in applied optimization, he published papers on heart valve design and uncertainty quantification generating over 350 cumulative citations.
Kamran Khan
Head of Business Development and GTM, Annapurna ML
Kamran Khan has more than a decade of experience helping customers deploy and optimize deep learning training and inference workloads, including on AWS Inferentia and AWS Trainium.