Your generative AI Springboard

Sprint Zero

Code. Webinars. Ideas. It all starts here.

Domino's Sprint Zero gives AI and data science practitioners the resources they need to get started and stay ahead in the world of GenAI, from prompt engineering with LLM APIs to fine-tuning and hosting an LLM on Domino. It's all here. Dive in!

First Steps

Webinar

RAG: An Introduction

Learn how to leverage Retrieval-Augmented Generation (RAG) with Domino.

Webinar and Repo

Prompt Engineering Jumpstart

Understand the whys and see the hows of prompt engineering for large language models.

Webinar

Supercharging Your Model with Generative AI

Learn how to develop and deploy an LLM app in Domino.

NanoGPT

Start Small with NanoGPT

Generate text in the style of Homer’s Iliad.

Advanced Techniques

Webinar

Fine-Tuning Large Language Models

Optimize fine-tuning with quantization and LoRA in Domino.

Webinar

Advanced Parameter-Efficient Fine-Tuning

Scale fine-tuning with Ray and DeepSpeed ZeRO.

Repo

Llama 2 Chatbot

Multi-phase supervised fine-tuning with Llama 2.

Code and Repo

LLM Inference on Domino

Domino gives you control across the full LLM fine-tuning lifecycle.

Blog Post and Repo

Overcome the Challenges of Fine-Tuning Large Language Models (LLMs)

See how quantization and LoRA can help you deliver LLM power with fewer resources on Domino.

Repo

Create a Q&A Agent

See how to use the OpenAI API and Pinecone on Domino.