Using Domino Experiment Manager from an external client
Author
Sameer Wadkar
Principal Solution Architect
Article topics
MLOps, MLflow, DevOps
Intended audience
Data scientists, DevOps/MLOps engineers
Overview and goals
Within Domino workspaces and jobs, programmatic access to the Domino Experiment Manager (MLflow) is automatic and seamless:
from mlflow import MlflowClient
client = MlflowClient()
# use the client
Domino manages authentication and authorization transparently through a localhost reverse proxy. The proxy injects the user’s identity token and the current run ID, linking each MLflow operation to both the authenticated user and the Domino project context.
Access control is enforced at the project level:
- Collaborators can create new experiments and experiment runs within their project.
- They have read-only access to experiments and runs created by others.
However, many customers need to interact with the Domino Experiment Manager from outside the Domino platform. Typical scenarios include:
- External experiment tracking: Logging experiments executed outside Domino into Domino’s Experiment Manager to maintain it as the system of record.
- External deployment integration: Pulling models from the Domino Model Registry to deploy on:
- Cloud or on-premises endpoints
- Air-gapped or specialized edge devices
- Batch or offline pipelines which execute outside Domino
- CI/CD automation: Enabling secure MLflow-based integrations in automated build, deploy, or retraining workflows.
This document defines the supported approach for securely accessing Domino-hosted MLflow (Experiment Manager) from external clients.
Authenticating to Domino MLflow
To support secure access from external clients, it’s important to understand how Domino authenticates workloads to the backend MLflow service.
1. Domino-aware MLflow objects
Domino extends MLflow by attaching internal metadata tags to every experiment, run, and model. Each object is tagged with:
- Domino user ID – the authenticated identity that created or modified the object
- Domino project ID – the project context in which the object resides
These tags are managed exclusively by the Domino platform and cannot be modified or overridden by user code. All access to MLflow objects is therefore mediated by the user’s project membership:
- Collaborators can create experiments, runs, and model versions within their own projects.
- Collaborators have read-only visibility into experiments and runs created by others in the same project.
- They cannot start runs or edit experiments owned by another user.
2. Enforcement architecture
Domino enforces this security model through a Domino MLflow proxy layer:
- All REST traffic to MLflow passes through the Domino MLflow proxy container.
- Network policies prevent any direct connection to the backend MLflow service.
- Each user workload (workspace, job, or app) runs a localhost sidecar proxy that forwards MLflow client traffic to the platform proxy.
The localhost proxy injects two critical headers with each request:
- User identity token – identifies the calling principal.
- Workload run ID – identifies the Domino project context.
The Domino MLflow proxy uses these values to verify the caller’s collaborator role for that project and apply the appropriate access control.
3. Tag integrity
When an experiment, run, or model is created, Domino automatically applies Domino-prefixed tags that encode project and ownership data. Any attempt to update or overwrite these tags is blocked by the proxy layer. This guarantees the immutability of Domino’s access control metadata and preserves the security boundary around all MLflow objects.
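To make the enforcement concrete, here is a hedged sketch of the kind of request the proxy blocks. The set-tag path is MLflow's standard REST endpoint; the Domino-managed tag key shown is a hypothetical placeholder, since the exact tag prefix is internal to the platform.

```python
import json

# MLflow's standard REST endpoint for setting a tag on a run.
SET_TAG_PATH = "/api/2.0/mlflow/runs/set-tag"

def build_set_tag_body(run_id, key, value):
    """Serialize the JSON body of an MLflow set-tag request."""
    return json.dumps({"run_id": run_id, "key": key, "value": value})

# Ordinary user tags pass through the Domino MLflow proxy untouched...
allowed = build_set_tag_body("abc123", "team", "fraud-detection")

# ...but a request targeting a Domino-managed tag (the key below is a
# hypothetical placeholder for the real internal prefix) is rejected
# by the proxy before it ever reaches the backend MLflow service.
blocked = build_set_tag_body("abc123", "domino.project_id", "project-42")
```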
Authenticating with MLflow from external clients
With the authentication model understood, we can now enable external clients to interact securely with the Domino Experiment Manager (MLflow).
Every call to the Domino MLflow proxy must include two identifiers:
- User token — Authenticates the caller.
- External clients must use a DOMINO_API_KEY.
- Service accounts in Domino, which are OAuth-based, can exchange an OAuth token for a DOMINO_API_KEY.
- This design allows both human users and automated systems to access MLflow endpoints through a unified authentication flow.
- Run ID — Establishes project context for access control.
- Internal workloads (jobs, workspaces) automatically include this.
- External clients lack a native run context and must therefore create one.
To generate a valid Run ID, the client can trigger a lightweight Domino job — such as one that simply echoes a string. The job execution yields a run-id, which ties subsequent MLflow API calls to the correct project and confirms that the user (or service account) holds collaborator privileges in that project.
This approach ensures that all MLflow operations initiated externally remain subject to Domino’s standard project-level authorization and tracking mechanisms.
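The stub-job step can be sketched against Domino's classic REST API. This is one way the script's start_stub_job helper might be implemented, not the authoritative version: the v1 "start a run" endpoint path, payload fields, and runId response key follow Domino's public API documentation, but verify them against your Domino version.

```python
import json
import urllib.request

def build_stub_job_request(host, owner, project, api_key):
    """Build the HTTP request that starts a lightweight stub job.

    Endpoint path and payload fields follow Domino's public v1
    "start a run" API; double-check them for your Domino version.
    """
    url = f"https://{host}/v1/projects/{owner}/{project}/runs"
    payload = {
        # The job only needs to exist: its runId anchors subsequent
        # MLflow calls to this project, so a trivial command suffices.
        "command": ["echo external-mlflow-stub"],
        "isDirect": True,  # run the command directly in a shell
    }
    headers = {
        "X-Domino-Api-Key": api_key,  # authenticates the caller
        "Content-Type": "application/json",
    }
    return url, headers, json.dumps(payload).encode()

def start_stub_job(host, owner, project, api_key):
    """POST the request and return the runId from Domino's response."""
    url, headers, body = build_stub_job_request(host, owner, project, api_key)
    req = urllib.request.Request(url, data=body, headers=headers, method="POST")
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["runId"]
```

Because the run is trivially cheap, it can be triggered once per external session and its runId reused for all subsequent MLflow calls in that session.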
At the implementation level, every call to the backend Domino MLflow proxy needs two request headers:
- X-Domino-Api-Key — establishes the user identity.
- X-Domino-Execution — a JWT that encapsulates a Domino run-id, identifying the Domino project this MLflow interaction belongs to.
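As an illustration of the shape of these two headers, the sketch below assembles them with only the standard library. The JWT claim name ("execution_id") and the unsigned "alg: none" form are assumptions made for this example; in the companion script the token is produced by domino_execution_header_provider.py, which should be treated as authoritative.

```python
import base64
import json

def _b64url(obj):
    """Base64url-encode a JSON object without padding, as JWTs do."""
    raw = json.dumps(obj, separators=(",", ":")).encode()
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

def build_domino_headers(api_key, run_id):
    """Assemble the two headers the Domino MLflow proxy expects.

    The X-Domino-Execution value is a JWT wrapping the run-id. The
    claim name and the unsigned form below are illustrative only.
    """
    header = {"alg": "none", "typ": "JWT"}
    payload = {"execution_id": run_id}  # hypothetical claim name
    token = f"{_b64url(header)}.{_b64url(payload)}."  # empty signature
    return {
        "X-Domino-Api-Key": api_key,
        "X-Domino-Execution": token,
    }
```

With these headers attached to each request, every MLflow REST call is authorized against the caller's collaborator role in the project that the run-id identifies.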
Example of an MLflow interaction
The following Python client demonstrates how to connect to MLflow from outside the Domino platform.
You need to configure the virtual environment. Run the following commands:
python -m venv venv
source venv/bin/activate   # Linux/macOS
venv\Scripts\activate      # Windows
pip install mlflow==2.18.0 dominodatalab pandas scikit-learn PyJWT
Note the version of MLflow (2.18.0): it should match the MLflow version that your version of Domino is associated with.
If you have a DOMINO_USER_API_KEY, set it in the corresponding environment variable. If you are connecting as a service account, set the service account token in the environment variable DOMINO_OAUTH_TOKEN:
export DOMINO_USER_API_KEY=<DOMINO_USER_API_KEY>
export DOMINO_OAUTH_TOKEN=<SERVICE_ACCOUNT_TOKEN>
If DOMINO_USER_API_KEY is not set, this client will use DOMINO_OAUTH_TOKEN to generate a DOMINO_USER_API_KEY to pass to the MLflow client.
Next you can execute the program as follows:
python3 run_experiment_and_register_model.py \
--domino_host_name "mydomino.cs.domino.tech" \
--domino_project_owner "wadkars" \
--domino_project_name "ddl-end-to-end-demo" \
--domino_experiment_name "sw-external-ddl-end-to-end-demo" \
--domino_run_name_prefix "external-run" \
--domino_model_name "sw-external-ddl-end-to-end-demo"
The program will create or set an experiment named sw-external-ddl-end-to-end-demo in the project ddl-end-to-end-demo owned by the user wadkars. It will train a model in a run whose name carries the prefix external-run and register it under the model name sw-external-ddl-end-to-end-demo. Finally it will execute the "latest" version of this model by downloading it from the Domino Model Registry.
Let us look at each step:
1. As mentioned earlier, you'll need a DOMINO_USER_API_KEY. If you are using a service account you need to generate one; the following lines of code in the script do that:
api_key = domino_utils.get_domino_user_api_key(domino_host_name, oauth_token)
os.environ['DOMINO_USER_API_KEY'] = api_key
2. Next, you'll need a Domino workload run identifier that belongs to the project and is owned by the user represented by the DOMINO_USER_API_KEY:
# domino_tracking_uri is the Domino external URL
result = start_stub_job(domino_tracking_uri, project_owner,
                        project_name, api_key, job_name)
run_id = result['runId']
os.environ['DOMINO_RUN_ID'] = run_id
3. Now you'll need to register the request headers for the MLflow client:
# This line is only necessary if you are encapsulating
# this capability in an endpoint that serves MLflow requests
# on behalf of multiple principals. It clears the current
# thread of values left over from a previous call.
_request_header_provider_registry._registry.clear()
# See the file domino_api_key_header_provider.py
register_domino_api_key_request_header_provider()
# See the file domino_execution_header_provider.py
register_domino_execution_request_header_provider()
And you are all set! Simply access MLflow from the external client as if you were inside a Domino workload:
start_run_and_register_model(experiment_name, run_name_prefix, model_name)
result = download_and_run_logged_model(model_name, "latest", [[0, 1]])
print(result)
If you read through the implementation of these methods, you will notice that it is simply vanilla MLflow code.
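The register_* calls above pull provider classes from small files in the companion repo. As a dependency-free sketch of what such a provider might look like, the class below only mirrors the interface; the real file subclasses MLflow's RequestHeaderProvider and is registered through _request_header_provider_registry.

```python
import os

class DominoApiKeyRequestHeaderProvider:
    """Minimal stand-in for domino_api_key_header_provider.py.

    MLflow calls in_context() to ask whether a provider applies, and
    request_headers() to collect extra headers for every REST request.
    This sketch duck-types that interface rather than importing mlflow.
    """

    def in_context(self):
        # Only contribute headers when the API key is configured.
        return "DOMINO_USER_API_KEY" in os.environ

    def request_headers(self):
        return {"X-Domino-Api-Key": os.environ["DOMINO_USER_API_KEY"]}
```

Once such a provider is registered with MLflow's request-header registry, every MlflowClient call carries the header automatically, which is why the rest of the script can stay vanilla MLflow code.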
Tracking external work in Domino
Tracking experiments executed outside Domino requires explicit project context. Because every MLflow object must be associated with a Domino project, external work must be logged against a surrogate Domino project that serves as its system-of-record anchor.
Recommended setup:
- Create a one-to-one mapping between each external project or team and a dedicated Domino project.
- Provision service accounts for each external team or user group.
- Add these accounts as collaborators on the corresponding Domino project.
- Mirror the collaboration model that exists in the external environment.
- Use the method above (job-triggered run ID, API key) to track experiments and runs in the Domino Experiment Manager.
This approach preserves Domino’s access-control semantics and ensures that experiments, runs, and models created by external clients remain governed by the same project-level authorization rules as internal Domino workloads.
Alternatives and tradeoffs
What are the alternatives to the method above? The obvious one is to keep all MLflow interaction inside a Domino job; the external client simply invokes the job using the Domino Python client.
This approach is preferable if you are running a training or scoring job inside Domino.
However, it does not work for two of the three key use cases we described earlier:
- External experiment tracking: Logging experiments executed outside Domino into Domino’s Experiment Manager to maintain it as the system of record.
- External deployment integration: Pulling models from the Domino Model Registry to deploy on:
- Cloud or on-premises endpoints
- Air-gapped or specialized edge devices
- Batch or offline pipelines which execute outside Domino
One last note — Domino is planning to deprecate API keys at a later date. This article will be updated to reflect that change when it happens.
Check out the GitHub repo.

Sameer Wadkar
Principal Solution Architect

I work closely with enterprise customers to deeply understand their environments and enable successful adoption of the Domino platform. I've designed and delivered solutions that address real-world challenges, with several becoming part of the core product. My focus is on scalable infrastructure for LLM inference, distributed training, and secure cloud-to-edge deployments, bridging advanced machine learning with operational needs.