The enterprise platform to build, deliver, and govern AI
Watch the 15 minute on-demand demo to get an overview of the Domino Enterprise AI Platform.
Domino's Ahmet Gyger recently presented at Anyscale's Ray Summit. Ahmet focused on the unique challenges enterprises face as they look to scale Generative AI applications built on Large Language Models (LLMs). While enterprises may be adept at managing the lifecycle of traditional AI models, LLMs bring unique constraints. The development process is different, especially around model evaluation. Deployment is tricky even for experienced IT hands. And LLM monitoring is still in its formative stages.
Ahmet touches on what enterprise teams working with LLMs look for:
The LLM lifecycle remains complex. It spans many systems, uses multiple technologies, and requires new skill sets. As a result, GenAI application releases demand strong alignment between teams. Enterprises must define and manage budgets. They have reputations to protect and risks to mitigate. They have to ensure their model releases follow responsible AI practices. That's where Domino Model Sentry fits, providing a responsible AI model release framework. Ahmet walks through Model Sentry's benefits and explains how Domino addresses LLMOps needs in a corporate environment.
If you want to learn more, take 13 minutes of your day and watch the presentation yourself!