Jon Krohn
A.I. Policy at OpenAI

Added on August 3, 2022 by Jon Krohn.

OpenAI has released many of the most revolutionary A.I. models of recent years, e.g., DALL-E 2, GPT-3, and Codex. Dr. Miles Brundage has been behind the A.I. Policy considerations associated with each transformative release.

Miles:
• Is Head of Policy Research at OpenAI.
• Has been integral to the rollout of OpenAI’s game-changing models such as the GPT series, the DALL-E series, Codex, and CLIP.
• Previously worked as an A.I. Policy Research Fellow at the University of Oxford’s Future of Humanity Institute.
• Holds a PhD in the Human and Social Dimensions of Science and Technology from Arizona State University.

Today’s episode should be deeply interesting to technical experts and non-technical folks alike.

In this episode, Miles details:
• Considerations you should take into account when rolling out any A.I. model into production.
• Specific considerations OpenAI concerned themselves with when rolling out:
  • The GPT-3 natural-language-generation model,
  • The mind-blowing DALL-E artistic-creativity models,
  • Their software-writing Codex model, and
  • Their bewilderingly label-light image-classification model CLIP.
• Differences between the related fields of A.I. Policy, A.I. Safety, and A.I. Alignment.
• His thoughts on the risks of A.I. displacing versus augmenting humans in the coming decades.

The SuperDataScience show is available on all major podcasting platforms, on YouTube, and at SuperDataScience.com.
