Monitoring malicious user-generated content, contextual AI, and adapting to novel evasion attempts: Matar Haller speaks to Jon Krohn about the challenges of identifying, analyzing, and flagging malicious information online. In this episode, Matar explains how contextual AI and a “database of evil” can help resolve the many challenges of blocking dangerous content across a range of media, even content that is live-streamed.

This episode is brought to you by Posit, the open-source data science company (posit.co), by Anaconda, the world's most popular Python distribution (superdatascience.com/anaconda), and by https://WithFeeling.ai, the company bringing humanity into AI. Interested in sponsoring a SuperDataScience Podcast episode? Visit JonKrohn.com/podcast for sponsorship information.

In this episode you will learn:

• How ActiveFence helps its customers to moderate platform content [05:36]

• How ActiveFence finds extreme social media users trying to evade detection [16:32]

• How to monitor live-streaming content and analyze it for dangerous material [29:13]

• The technologies ActiveFence uses to run its platform [35:54]

• Matar’s experience with the Insight Fellows Program (Data Science Fellowship) [40:28]

• Leadership opportunities for women in STEM [1:00:41]

• Israel’s R&D edge for AI [1:13:19]

Additional materials: www.superdatascience.com/683
