Bioanchors 2: Electric Bacilli

Table of Contents

1 Arguments for fast AGI progress
2 Intuition pumps for being close to AGI
3 Synthetic life as an intuition pump
4 Some things I like about this analogy

[Previously: “Views on when AGI comes and on strategy to reduce existential risk”, “Do confident short timelines make sense?”]

[Whenever discussing when AGI will come, it bears repeating: If anyone builds AGI, everyone dies; no one knows when AGI will be made, whether soon or late; a bunch of people and orgs are trying to make it; and they should stop and be stopped.]

1 Arguments for fast AGI progress

Many arguments about “when will AGI come” focus on reasons to think progress will continue quickly, such as:

- Line go up.
- Researchers can pivot to address new obstacles and ditch dead ends.
- AI can be used to accelerate AI research.
- We’re over a threshold of economic returns, such that AI research will permanently see much more investment than before.

2 Intuition pumps for being close to ...