
// MLOps Podcast #274 with Michael Gschwind, Software Engineer, Software Executive at Meta Platforms.

// Abstract

Explore PyTorch's role in boosting model performance, on-device AI processing, and collaborations with tech giants like ARM and Apple. Michael shares his journey from gaming console accelerators to AI, emphasizing the power of community and innovation in driving advancements.

// Bio

Dr. Michael Gschwind is a Director / Principal Engineer for PyTorch at Meta Platforms. At Meta, he led the rollout of GPU inference for production services. He led the development of MultiRay and TextRay, the first deployment of LLMs at scale, which exceeded a trillion queries per day shortly after its rollout. He created the strategy and led the implementation of PyTorch inference optimization with Better Transformer and Accelerated Transformers, bringing Flash Attention, PT2 compilation, and ExecuTorch into the mainstream for LLMs and GenAI models. Most recently, he led the enablement of large language models for on-device AI on mobile and edge devices.

// MLOps Swag/Merch

https://mlops-community.myshopify.com/

// Related Links

Website: https://en.m.wikipedia.org/wiki/Michael_Gschwind

--------------- ✌️Connect With Us ✌️ -------------

Join our slack community: https://go.mlops.community/slack

Follow us on Twitter: @mlopscommunity

Sign up for the next meetup: https://go.mlops.community/register

Catch all episodes, blogs, newsletters, and more: https://mlops.community/

Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/

Connect with Michael on LinkedIn: https://www.linkedin.com/in/michael-gschwind-3704222/?utm_source=share&utm_campaign=share_via&utm_content=profile&utm_medium=ios_app


Timestamps:

[00:00] Michael's preferred coffee

[00:21] Takeaways

[01:59] Please like, share, leave a review, and subscribe to our MLOps channels!

[02:10] Gaming to AI Accelerators

[11:34] Torch Chat goals

[18:53] PyTorch benchmarking and competitiveness

[21:28] Optimizing MLOps models

[24:52] GPU optimization tips

[29:36] Cloud vs On-device AI

[38:22] Abstraction across devices

[42:29] PyTorch developer experience

[45:33] AI and MLOps-related antipatterns

[48:33] When to optimize

[53:26] Efficient edge AI models

[56:57] Wrap up

The podcast and the associated cover image on this page belong to Demetrios Brinkmann. The content of the podcast is created by Demetrios Brinkmann and not by, or together with, Poddtoppen.