With great power comes great responsibility. How do OpenAI, Anthropic, and Meta implement safety and ethics? As large language models (LLMs) grow larger, so does the potential for their misuse. Anthropic uses Constitutional AI, while OpenAI uses a Model Spec combined with RLHF (Reinforcement Learning from Human Feedback). Not to be confused with ROFL (Rolling On the Floor Laughing). Tune into this episode to learn how leading AI companies use their Spidey po...

The podcast and its accompanying cover image on this page belong to Tony Wan. The content of the podcast is created by Tony Wan and not by, or together with, Poddtoppen.