David is an OG in AI who has been at the forefront of many of the major breakthroughs of the past decade. His resume: VP of Engineering at OpenAI, a key contributor to Google Brain, co-founder of Adept, and now leading Amazon’s SF AGI Lab. In this episode we focused on how far test-time compute gets us, the real implications of DeepSeek, what agent milestones he’s looking for, and more.
[0:00] Intro
[1:14] DeepSeek Reactions and Market Implications
[2:44] AI Models and Efficiency
[4:11] Challenges in Building AGI
[7:58] Research Problems in AI Development
[11:17] The Future of AI Agents
[15:12] Engineering Challenges and Innovations
[19:45] The Path to Reliable AI Agents
[21:48] Defining AGI and Its Impact
[22:47] Challenges and Gating Factors
[24:05] Future Human-Computer Interaction
[25:00] Specialized Models and Policy
[25:58] Technical Challenges and Model Evaluation
[28:36] Amazon's Role in AGI Development
[30:33] Data Labeling and Team Building
[36:37] Reflections on OpenAI
[42:12] Quickfire
With your co-hosts:
@jacobeffron
- Partner at Redpoint, Former PM Flatiron Health
@patrickachase
- Partner at Redpoint, Former ML Engineer LinkedIn
@ericabrescia
- Former COO GitHub, Founder Bitnami (acq’d by VMware)