Alignment Newsletter Podcast
By Rohin Shah et al.
The Alignment Newsletter is a weekly publication with recent content relevant to AI alignment.
This podcast is an audio version, recorded by Robert Miles (http://robertskmiles.com)
More information about the newsletter at: https://rohinshah.com/alignment-newsletter/
Latest episode
Alignment Newsletter #173: Recent language model results from DeepMind
Alignment Newsletter #172: Sorry for the long hiatus!
Alignment Newsletter #171: Disagreements between alignment "optimists" and "pessimists"
HIGHLIGHTS (Richard Ngo and Eliezer Yudkowsky) (summarized by Rohin): Eliezer is known for being pessimistic about our chances of...
Alignment Newsletter #170: Analyzing the argument for risk from power-seeking AI
Alignment Newsletter #169: Collaborating with humans without human data
Alignment Newsletter #168: Four technical topics for which Open Phil is soliciting grant proposals
Alignment Newsletter #167: Concrete ML safety problems and their relevance to x-risk
Alignment Newsletter #166: Is it crazy to claim we're in the most important century?
Alignment Newsletter #165: When large models are more likely to lie
Alignment Newsletter #164: How well can language models write code?