Docker launched "Docker Model Runner" to run LLMs through llama.cpp with a single "docker model" command. In this episode, Bret walks through examples and some useful use cases for running LLMs this way. He breaks down the internals: how it works, when you should (and shouldn't) use it, and how to get started with Open WebUI for a private, ChatGPT-like experience.
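The basic workflow looks something like the sketch below. The model name is illustrative (models live under Docker Hub's `ai/` namespace), and exact subcommands may differ by Model Runner version, so treat this as a rough guide rather than a transcript from the episode:

```shell
# Check that Model Runner is enabled (requires a recent Docker Desktop)
docker model status

# Pull a model from Docker Hub's ai/ namespace (model name is an example)
docker model pull ai/smollm2

# Run it with a one-shot prompt, or omit the prompt for interactive chat
docker model run ai/smollm2 "Summarize what OCI artifacts are."

# List models pulled to local storage
docker model list
```

Models are distributed as OCI artifacts, so `pull` and `list` behave much like their image counterparts.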

★Topics★
Model Runner Docs
Hub Models
OCI Artifacts
Open WebUI
My Open WebUI Compose file
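For the Open WebUI setup, a minimal Compose file might look like the sketch below. This is an assumption-laden example, not Bret's actual Compose file from the link above: the image tag, port mapping, and the Model Runner endpoint URL are taken from Open WebUI's and Docker's public docs and may need adjusting for your setup.

```yaml
services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    ports:
      - "3000:8080"   # UI at http://localhost:3000
    environment:
      # Point Open WebUI at Model Runner's OpenAI-compatible API
      # (endpoint assumed; reachable from containers on Docker Desktop)
      - OPENAI_API_BASE_URL=http://model-runner.docker.internal/engines/v1
      - OPENAI_API_KEY=unused   # Model Runner doesn't require a key
    volumes:
      - open-webui:/app/backend/data

volumes:
  open-webui:
```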

Creators & Guests

Cristi Cotovan - Editor
Beth Fisher - Producer
Bret Fisher - Host
  • (00:00) - Intro
  • (00:46) - Model Runner Elevator Pitch
  • (01:28) - Enabling Docker Model Runner
  • (04:28) - Self Promotion! Is that an ad? For me?
  • (05:03) - Downloading Models
  • (07:11) - Architecture of Model Runner
  • (10:49) - ORAS
  • (11:09) - What's next for Model Runner?
  • (12:13) - Troubleshooting


You can also support my content by subscribing to my YouTube channel and my weekly newsletter at bret.news!

Grab the best coupons for my Docker and Kubernetes courses.
Join my cloud native DevOps community on Discord.
Grab some merch at Bret's Loot Box
Homepage bretfisher.com

The podcast and the accompanying cover image on this page belong to Bret Fisher. The podcast's content is created by Bret Fisher, not by or together with Poddtoppen.

DevOps and Docker Talk: Cloud Native Interviews and Tooling

Docker Model Runner