There’s a moment most data science leaders run into, even if they don’t immediately recognize it. You invest in a platform to bring structure to how work gets done, standardizing development, improving collaboration, putting guardrails around deployment, and for the most part, it works. But over time, you start to notice that some of your strongest practitioners are doing their most effective work just slightly outside of it. Not abandoning the platform entirely, but bending around it in small ways to stay productive.
Lately, that pattern has become much easier to spot, and it almost always traces back to one thing: coding assistants.
What’s changed isn’t just the tools themselves, but the way work actually happens. The inner loop of development has shifted from writing code line by line toward something more iterative and conversational. Instead of starting with syntax, people start with intent. They describe what they’re trying to do, collaborate with an assistant to generate a first pass, and then refine from there. It’s a different rhythm. It’s faster in some ways, but also more exploratory. And while that shift is obvious at the individual level, its impact at the organizational level is easy to underestimate. This isn’t just about speed. It’s about how ideas move from concept to production.
The challenge is that most enterprise data science platforms weren't designed with this new loop in mind. They were built around the right problems: reproducibility, governance, secure data access, and reliable deployment. But they tend to assume that development itself is relatively stable. Coding assistants break that assumption. Friction shows up in ways that feel small at first: environments that require manual setup, authentication that doesn't persist, context that disappears between sessions. But together, that drag is enough to push people toward local environments and external tools where assistants just work. The platform becomes something you return to later, for tracking, for deployment, for governance. And once that center of gravity shifts, it's hard to pull back. You end up with a disconnect between where work is created and where it's operationalized, introducing risk, inconsistency, and overhead that no one explicitly signed up for.
At Domino, we’ve been spending a lot of time thinking about this exact problem. It isn’t a tooling gap, but a shift in how data science work is fundamentally getting done. The goal isn’t to introduce yet another feature layer. It’s to make sure that the environments where teams build are aligned with how they actually want to build today, without forcing them into a single tool or workflow.
What that means in practice is rethinking the coding environment itself. Rather than being something that needs to be manually installed and configured, coding assistants become part of the environment from the moment it starts. A data scientist launches a workspace and their tools are already there, whichever interface and assistant they choose: VS Code or JupyterLab, Copilot or Claude Code, or another one entirely. There’s no separate setup process, no need to stitch together extensions or manage fragile configurations just to get to a productive starting point. The environment is already prepared to support how they want to work.
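To make that concrete, here’s a minimal sketch of what an environment definition along these lines could look like. The schema below is purely illustrative, not Domino’s actual configuration format; the point is that interfaces and assistants are declared once, at the environment level, so every workspace launched from it starts ready to work.

```python
# Illustrative only: a hypothetical workspace environment spec, not Domino's
# actual configuration schema. Assistants and IDEs are declared once by an
# admin; every workspace launched from this environment has them at startup.
workspace_environment = {
    "name": "ds-standard-py311",
    "ides": ["vscode", "jupyterlab"],             # interfaces offered at launch
    "assistants": {
        "github-copilot": {"preinstalled": True},
        "claude-code": {"preinstalled": True},
    },
    "persistence": {                              # carried across sessions
        "auth_tokens": True,
        "assistant_history": True,
        "settings": True,
    },
}
```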
Just as importantly, that experience is persistent. One of the more frustrating aspects of using coding assistants in ephemeral environments is that context gets lost. Authentication needs to be re-established, settings disappear, and conversations reset. Over time, that friction adds up, and people fall back to workflows that feel more stable, even if they sit outside the platform. By carrying that context forward across sessions, from authentication to assistant history to configuration, the interaction becomes continuous. The assistant becomes something you build with over time, not something you have to reinitialize every time you start.
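One way to picture the mechanics is a durable per-user volume that outlives any single workspace: assistant state is restored at startup and checkpointed at shutdown. The paths and helper names in this Python sketch are hypothetical, but the pattern captures the essence of persistence across ephemeral sessions.

```python
# Minimal sketch of session persistence. The mount point and layout are
# hypothetical; the idea is that assistant state survives workspace restarts.
import shutil
from pathlib import Path

DURABLE = Path("/mnt/user-state/assistant")   # outlives any single workspace
SESSION = Path.home() / ".assistant"          # where the assistant keeps state

def restore_assistant_state() -> None:
    """At workspace startup: bring back auth, settings, and chat history."""
    if DURABLE.exists():
        shutil.copytree(DURABLE, SESSION, dirs_exist_ok=True)

def checkpoint_assistant_state() -> None:
    """At workspace shutdown: carry context forward to the next session."""
    if SESSION.exists():
        shutil.copytree(SESSION, DURABLE, dirs_exist_ok=True)
```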
There’s also a broader layer of integration that starts to matter as you scale this across teams. Coding assistants are powerful out of the box, but they’re not inherently aware of the platform they’re operating in. When they’re embedded within the environment, you can give them that context. Through Domino's skills framework, assistants can do more than generate code. They can start reproducible jobs, log experiments, attach outputs to governance trackers, and interact with the platform in ways that reflect how your team actually works. The result is an assistant that accelerates development while keeping the path to production connected, rather than one that generates code in isolation and leaves the operationalization work for later.
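As a rough illustration of what a platform-aware skill buys you, consider the stub below. `PlatformClient` and `train_and_track` are simplified stand-ins we’ve sketched for this post, not the actual skills API; the point is that when the assistant acts, the run is reproducible and tracked from the start instead of happening in an untracked local shell.

```python
# Simplified stand-in for a platform-aware assistant skill; not the actual
# skills API. The assistant calls train_and_track instead of running a bare
# local command, so the work lands in the platform's audit trail.
from dataclasses import dataclass

@dataclass
class PlatformClient:
    project: str

    def log_experiment(self, name: str, params: dict) -> None:
        print(f"[{self.project}] logged experiment '{name}' with {params}")

    def start_job(self, command: str) -> str:
        print(f"[{self.project}] started reproducible job: {command}")
        return "job-0001"

def train_and_track(client: PlatformClient, script: str, params: dict) -> str:
    """A skill the assistant can invoke: log the experiment, then launch the
    training run as a reproducible platform job rather than an ad-hoc process."""
    client.log_experiment(name=script, params=params)
    return client.start_job(command=f"python {script}")

# What the assistant might invoke when asked to "train the churn model":
job_id = train_and_track(PlatformClient("churn-model"), "train.py", {"lr": 0.01})
```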
Most platforms treat coding assistants as an afterthought, something practitioners configure on their own, outside the managed environment. Domino takes a different approach: assistants come pre-configured, persistent, and platform-aware inside the managed workspace itself. This applies whether your team prefers VS Code or JupyterLab, GitHub Copilot or Claude Code. The platform adapts to how teams want to work, not the other way around.
For data science leaders, this is how you keep the center of gravity from moving. Instead of relying on individuals to configure their own tools or on documentation to enforce best practices, you can define what’s available and how it behaves at the environment level. Which assistants are supported, how they’re configured, what context they have access to, even how they should guide development in certain scenarios. Teams still have flexibility in how they work, but they’re doing so within a framework that keeps everything aligned. It reduces setup overhead for practitioners while giving leaders more confidence that work is happening within the right guardrails.
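Sketched in code, that kind of environment-level control might look something like this. Again, the schema is hypothetical rather than Domino’s real admin interface; what matters is that the policy lives with the environment, and enforcement happens before a practitioner ever touches a config file.

```python
# Hypothetical admin-side policy, enforced at workspace startup. The schema is
# illustrative; the point is central definition instead of per-user setup.
ASSISTANT_POLICY = {
    "allowed": {"github-copilot", "claude-code"},
    "context_access": {
        "project_files": True,
        "governance_trackers": True,
        "production_secrets": False,     # never surfaced to the assistant
    },
}

def launch_assistant(name: str) -> None:
    """Only policy-approved assistants come up, with scoped platform context."""
    if name not in ASSISTANT_POLICY["allowed"]:
        raise PermissionError(f"{name} is not an approved assistant here")
    scope = [k for k, v in ASSISTANT_POLICY["context_access"].items() if v]
    print(f"launching {name} with context: {scope}")

launch_assistant("claude-code")
```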
All of this matters because coding assistants are no longer optional. They’ve already become part of how high-performing teams operate, whether formally adopted or not. The real question is whether they’re integrated into your platform or existing alongside it. If they live outside, you lose visibility into how work is happening, and you create gaps between development and production that are difficult to manage. If they’re brought in thoughtfully, they can do the opposite: accelerate development while reinforcing the standards and practices that matter most.
This isn’t really about adding another capability. It’s about making sure your platform evolves with how work actually gets done. The teams you're supporting have already moved in this direction. The opportunity is to meet them there without giving up the structure, governance, and reliability that made the platform valuable in the first place. That's the balance we're focused on at Domino: helping teams move faster with coding assistants while keeping the entire path to production connected, visible, and controlled. And making those assistants aware of the environment they're operating in, not just the code they're generating.
What is the risk of using coding assistants outside an enterprise data science platform?
When coding assistants operate in local or external environments, work happens outside the platform's visibility. This creates gaps between where models are developed and where they're deployed, introducing reproducibility issues, inconsistent governance, and overhead when it comes time to operationalize.
Which AI coding assistants does Domino support?
Domino supports multiple coding assistants, including GitHub Copilot and Claude Code, within managed workspace environments. Teams choose the tools that fit their workflow, and the platform ensures those tools are pre-configured, persistent, and operating within the right guardrails.
How does Domino keep AI coding assistants aligned with governance requirements?
Platform administrators can define which assistants are available, how they're configured, and what platform context they can access. Through Domino's skills framework, assistants can interact with governance trackers, log experiments, and start reproducible jobs. This keeps AI-assisted development within the same audit trail as the rest of the platform.
Does integrating coding assistants slow down data scientists?
The opposite. By eliminating manual setup, persisting authentication, and carrying context across sessions, Domino removes the friction that pushes practitioners toward local environments. The goal is to make the platform the path of least resistance, not a destination you return to only at deployment.

Danny Stout is a seasoned data science and analytics leader with over two decades of experience driving enterprise AI and machine learning initiatives. He has held senior analytics and AI leadership roles across global organizations including Ernst & Young, Takeda, TIBCO, Quest, and Dell, spanning forecasting, pricing, analytics strategy, and data science consulting. His work emphasizes effectiveness over scale, focusing on governance, team alignment, and measurable outcomes as the determinants of successful AI adoption. Based in Charlton, MA, Danny holds a Ph.D. and combines technical leadership with practical insights that help organizations scale data science responsibly and effectively.