Domino 5.3 Increases the Speed and Impact of Enterprise Data Science
Bob Laurent | 2022-10-06 | 9 min read
Today's most successful organizations put AI and machine learning models at the heart of their business, creating competitive advantage by accelerating innovation, improving customer experience, and making faster and less-biased decisions. Data science teams building AI-driven applications and experiences require flexible access to the latest tools and any data across hybrid, multi-cloud and on-premises environments. Success with building models that drive digital transformation also requires the security, governance, and compliance that enterprises and governments expect regardless of the underlying infrastructure.
Today we’re proud to announce the general availability of Domino 5.3 – a powerful release that directly addresses these needs to improve the velocity, quality, and impact of data science at scale. Here’s what these new capabilities mean for companies that want to become more model-driven as they take on new, bigger challenges.
Scale Data Science across Hybrid/Multi-Cloud Environments
Enterprises need hybrid and multi-cloud data science capabilities to scale model-driven outcomes everywhere the organization operates and stores data — across clouds, on-premises infrastructure, divisions, and regions. In fact, a recent Forrester survey of AI infrastructure decision-makers makes that need clear — 71% view hybrid cloud support by their AI platform as important for executing their AI strategy, and 29% say it's already critical.
We announced our hybrid and multi-cloud architecture, “Nexus,” in June, and today we’re excited to deliver a private preview to select Domino customers. Nexus is a single pane of glass that lets you run data science and machine learning workloads across any compute cluster — in any cloud, region, or on-premises. It unifies data science silos across the enterprise, so you have one place to build, deploy, and monitor models.
Domino customers who have participated in design reviews or have been given demos are really excited about its potential to:
- Protect Data Sovereignty. Instead of wasting time (and money) moving data across regions to access compute resources, Nexus will bring the compute to where the data resides — a powerful paradigm shift that helps deal with data gravity in complex infrastructure. By restricting access to data by region, you’ll have greater control to enforce data localization regulations, such as GDPR.
- Reduce Compute Spend. Nexus will give users the choice over where workloads are processed, so you can get the right performance at the lowest possible cost. For example, you can re-direct computationally intensive workloads to on-premises infrastructure (e.g., GPU hardware clusters) to reduce monthly cloud infrastructure costs.
- Future-Proof Infrastructure. No company wants to be locked into a single cloud vendor, especially when recent outages expose significant business risks. With Nexus, you get the peace of mind that comes with having a hybrid/multi-cloud strategy, implemented in a way that allows you to maintain consistent operations.
You can get more information about Nexus, view a demo, and even request access to be part of our private preview here.
Get Easier Access to Critical Data Sources
Our industry has been talking about the challenges of finding and working with data for over 15 years, and today it’s still a dreaded chore that wastes a considerable amount of data scientists’ time. That’s because the majority of solutions focus on solving the problem at the database level, with data warehouses, data catalogs, and other standalone products meant to serve multiple types of users.
In Domino 5.0, we introduced our unique approach that integrates a data catalog inside the Domino platform, where data scientists are already spending the majority of their time. The combination of advanced search capabilities, integrated data versioning, and pre-built connectors accelerates model velocity to maximize the productivity of data science teams.
As the most open and flexible platform in the industry, Domino already includes connectors to the most popular data sources. Domino 5.3 adds the capability to connect to Teradata warehouses and query tabular data stored in Amazon S3. For customers with a Trino cluster, Domino 5.3 also includes a Trino connector to enable connectivity to additional data sources, including Cassandra, MongoDB, Redis, and Delta Lake.
Our unique approach to connecting to data eliminates the all-too-common DevOps barriers related to the installation of drivers, libraries, and more. With Domino, you simply select the connector you wish to use and fill in basic information, including your login credentials. Connection properties are stored securely as an access key and secret key in an instance of HashiCorp Vault that is deployed with Domino. Our solution also streamlines the process of deploying and monitoring models with lineage back to the original data used for model training.
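To make the pattern concrete, here is a minimal illustrative sketch of the idea described above: a connector that references credentials held in a vault-style secret store rather than embedding them in user code. This is not Domino’s actual API; the `FakeVault` and `Connector` classes, names, and paths are hypothetical stand-ins.

```python
# Illustrative sketch only — NOT Domino's API. It models a connector that
# pulls an access key / secret key pair from a vault-style secret store,
# so notebooks and scripts never handle raw credentials directly.
from dataclasses import dataclass, field


@dataclass
class FakeVault:
    """Stand-in for a secret store such as HashiCorp Vault."""
    _secrets: dict = field(default_factory=dict)

    def write(self, path: str, access_key: str, secret_key: str) -> None:
        # Store the credential pair under a logical path.
        self._secrets[path] = {"access_key": access_key, "secret_key": secret_key}

    def read(self, path: str) -> dict:
        return self._secrets[path]


@dataclass
class Connector:
    """Hypothetical data-source connector: a source name plus a vault path.

    User code holds no secrets — only a reference to where they live.
    """
    source: str      # e.g. "teradata", "s3", "trino"
    vault_path: str

    def connection_properties(self, vault: FakeVault) -> dict:
        # Resolve credentials at connection time from the secret store.
        creds = vault.read(self.vault_path)
        return {"source": self.source, **creds}


# An admin registers credentials once (placeholder values shown).
vault = FakeVault()
vault.write("secret/data/teradata-prod", access_key="AKIA-EXAMPLE", secret_key="example-secret")

# A data scientist only names the connector and its vault path.
conn = Connector(source="teradata", vault_path="secret/data/teradata-prod")
props = conn.connection_properties(vault)
print(props["source"])  # teradata
```

The design choice this sketches is the one the paragraph above describes: connection properties live in a managed secret store, so rotating a key or revoking access happens in one place without touching any project code.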
For more information, including a demo of how easy we’ve made getting access to the critical information you need, check out this blog.
Operationalize Deep Learning with GPUs
Cutting-edge, modern, model-driven enterprises already use Domino and advanced deep learning techniques to achieve medical breakthroughs, analyze emerging threats from cyberattacks and fraud, and improve customer relationships. For example:
- The Janssen Pharmaceutical Company of Johnson & Johnson uses Domino to accelerate training of deep learning models by up to 10x to more quickly and accurately diagnose and characterize cancer cells through whole-slide image analysis.
- Topdanmark, a large European insurance company, uses natural language and image processing techniques in Domino to automate the processing of claims and get their customers’ lives back on track faster.
Domino already offers the best environment for training advanced deep learning models. With Domino 5.3, we’re enabling the model-driven enterprise to develop and productionize deep learning at scale, with no DevOps skills required.
Domino 5.3 makes it possible for deployed models to use GPUs for model inference instead of relying on slower CPUs – delivering a significant performance improvement for deep learning and other massively parallel computing applications. Studies have shown that GPUs can be 5 to 15 times faster than CPUs for deep learning inference at similar cost.
By incorporating GPUs for model inference, companies are able to deploy models that rely on advanced frameworks such as TensorFlow and high-performance GPU-accelerated hardware such as the NVIDIA V100 and T4. They can also take advantage of the rapid innovation happening in the open-source community to incorporate and deploy powerful pre-trained models (e.g., BERT on Hugging Face) and toolkits for image recognition (e.g., the OpenMMLab Detection Toolbox).
Control and Audit Access to Sensitive Data
In the highly regulated life sciences industry, companies need to be absolutely sure that patient bias has not been introduced during any phase of clinical trials. Being able to control and audit who has been authorized to see various data over the course of a study is crucial to the integrity of the analysis.
Domino 5.3 delivers new compliance and governance functionality for pharmaceutical companies that rely on Domino as a modern Statistical Computing Environment. An unalterable audit trail outlines who has been granted access to data within a project to comply with GxP guidelines for regulatory submissions for clinical trials.
Boost Speed and Impact with Domino 5.3
Domino 5.3 increases the speed and impact of enterprise data science with market-leading capabilities to unite hybrid and multi-cloud infrastructure, democratize data sources, and fuel deep learning. To learn more about how we’re unleashing data science across hybrid and multi-cloud, check out our new Nexus webpage, which includes a Nexus demo. And, discover all of the powerful capabilities and unique benefits of Domino 5. Follow us on LinkedIn to get updates about new Domino 5.3 blogs and details about a virtual event we have planned for November that you won’t want to miss.
Bob Laurent is the Head of Product Marketing at Domino Data Lab where he is responsible for driving product awareness and adoption, and growing a loyal customer base of expert data science teams. Prior to Domino, he held similar leadership roles at Alteryx and DataRobot. He has more than 30 years of product marketing, media/analyst relations, competitive intelligence, and telecom network experience.