LessWrong

A community blog devoted to refining the art of rationality

Some conceptual alignment research projects (09/01/22)
Survey advice (08/27/22)
Toni Kurz and the Insanity of Climbing Mountains (08/23/22)
Deliberate Grieving (08/19/22)
Language models seem to be much better than humans at next-token prediction (08/16/22)
Humans provide an untapped wealth of evidence about alignment (08/14/22)
Changing the world through slack & hobbies (08/10/22)
«Boundaries», Part 1: a key missing concept from utility theory (08/04/22)
ITT-passing and civility are good; "charity" is bad; steelmanning is niche (07/26/22)
What should you change in response to an "emergency"? And AI risk (07/23/22)
On how various plans miss the hard bits of the alignment challenge (07/20/22)
Humans are very reliable agents (07/14/22)
Looking back on my alignment PhD (07/09/22)
It’s Probably Not Lithium (07/05/22)
What Are You Tracking In Your Head? (07/01/22)
Security Mindset: Lessons from 20+ years of Software Security Failures Relevant to AGI Alignment (06/29/22)
Nonprofit Boards are Weird (06/25/22)
Where I agree and disagree with Eliezer (06/21/22)
Six Dimensions of Operational Adequacy in AGI Projects (06/17/22)
Moses and the Class Struggle (06/14/22)