Generative AI

OpenAI’s Wake-up Call to the World: Future-Proof Your Generative AI Strategy Now

Kjell Carlsson2023-11-20 | 3 min read


OpenAI’s board appears to have dealt the organization a fatal blow. Whether or not Sam Altman deserved to be let go, the organization will never be the same in the eyes of its customers, employees, investors, and society at large. OpenAI’s employees are reconsidering their career choices, while thousands of companies consider the risk OpenAI poses to their Generative AI (GenAI) strategies. The possibility that OpenAI’s progress will stall, that its models will fall behind competitors’, indeed, that it could stop operating altogether, has become very real. The unfolding debacle reveals the dangers of relying heavily on individual AI partners and the limits of outsourcing AI capabilities generally.

So, what should companies be doing to ensure that they can take advantage of GenAI, in light of the uncertainty around OpenAI and the risks inherent in relying on specific AI offerings? What they should have been doing from the beginning! That is, building the in-house capabilities to flexibly leverage GenAI models and AI components from the wide and growing ecosystem.

Advanced AI teams already evaluate and implement both proprietary (e.g. OpenAI’s) and open source (e.g. Llama and Falcon) foundation models. They are preparing for a world where these models come from a wide and growing array of providers by implementing the automation, governance, and security capabilities that let them leverage models, tools, and technologies (like vector stores and pipelining tools) irrespective of where they come from.

These capabilities are essential not just for managing the risks posed by any given vendor, but for scaling impact with GenAI and traditional AI broadly. They are the capabilities needed to operationalize GenAI applications that meet each business use case's unique accuracy, latency, control, cost, and data security requirements. They are the capabilities needed to take advantage of the incredible innovation in AI that is continuously expanding the range of applications and improving their performance. And they are the capabilities needed to provide the governance and transparency that ensure these technologies are used responsibly.

The Sam Altman affair is a wake-up call. It lays bare the fragility not just of OpenAI, but of all AI strategies built on a narrow set of offerings and, even more flawed, on the belief that AI capabilities can largely be outsourced. In this respect, we should be thankful that the turmoil is happening now. If companies recognize that they need to improve the robustness of their own AI strategies and shift to an extensible approach that sets them up for future success, then OpenAI’s self-inflicted damage may indeed further its goal of helping humanity.

Kjell Carlsson is the head of AI strategy at Domino Data Lab where he advises organizations on scaling impact with AI technologies. Previously, he covered AI, ML, and data science as a Principal Analyst at Forrester Research. He has written dozens of reports on AI topics ranging from computer vision, MLOps, AutoML, and conversation intelligence to augmented intelligence, next-generation AI technologies, and data science best practices. He has spoken in countless keynotes, panels, and webinars, and is frequently quoted in the media. Dr. Carlsson is also the host of the Data Science Leaders podcast and received his Ph.D. from Harvard University.
