# my synthesis of situational awareness

Published: 2025-10-07
URL: https://diegoprime.com/blog/notes-situational-awareness

i read leopold aschenbrenner's 165-page essay on agi. it's long. very dense. but extremely valuable. i took the time to think through it deeply from first principles to actually understand the logical structure. this is that work product.

this preserves leopold's core logical architecture while excluding most technical details: a hyper-condensed version of what i understood to be his essential arguments. not my opinion. not arguing for or against his thesis. just distilling what he claims and why. if you want the full case with all the data, read the original at [situational-awareness.ai](https://situational-awareness.ai/).

purely my interpretation. not official. not endorsed by leopold.

## intro

- a series of essays making the case for agi/asi, their likely development path, and the measures required to prevent catastrophic outcomes
- written by leopold aschenbrenner (former openai researcher) in june 2024, published as a 165-page essay series
- **core claims:**
  1. agi by ~2027
  2. automated ai r&d will trigger an intelligence explosion
  3. a u.s. "government ai project" will form by 2027-2028
  4. asi by ~2030

## premises for agi

leopold's argument rests on three quantifiable trends continuing through 2027:

### 1. ~5 ooms of effective compute increase

1. **compute scaling (~2-3 ooms)**
   - training clusters grow from ~$1-10B (2024) to $100B+ (2027)
   - historical trend: ~0.5 ooms/year, driven by investment, not moore's law
   - requires trillions in capital and a massive power buildout (us electricity up "tens of percent")
2. **algorithmic efficiency (~2 ooms)**
   - continued ~0.5 oom/year improvement (the 2012-2024 historical rate)
   - better architectures, better training procedures
3. **unhobbling (the chatbot-to-agent transformation)**
   - current models are "incredibly hobbled": no long-term memory, can't use computers effectively, can't work independently
   - **test-time compute overhang:** models currently use ~hundreds of tokens coherently (minutes of thinking). unlocking millions of tokens (a months-equivalent of work) is worth ~3 ooms of effective compute
   - result: "drop-in remote workers" who can execute weeks-long projects independently

**combined effect:** another gpt-2-to-gpt-4-sized capability jump, reaching expert/phd-level systems. (back-of-envelope tally below.)
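to make the ooms arithmetic concrete, here's a quick python tally. the estimates are leopold's as summarized above (i take the midpoint of his compute range); the script and its variable names are my own illustration, not anything from the essay.

```python
# back-of-envelope ledger of leopold's "effective compute" drivers, ~2024 -> 2027.
# gains are in orders of magnitude (ooms), so independent drivers add.
drivers = {
    "raw compute (bigger training clusters)": 2.5,  # midpoint of his ~2-3 oom range
    "algorithmic efficiency": 2.0,                  # ~0.5 oom/year, sustained
}
base = sum(drivers.values())
print(f"base effective-compute gain: ~{base:.1f} ooms (~{10**base:.0e}x)")

# unhobbling is harder to put on the same axis; leopold pegs the test-time
# compute overhang alone at up to ~3 ooms of additional effective compute.
overhang = 3.0
print(f"plus up to ~{overhang:.0f} ooms from unlocking test-time compute")

# for scale: gpt-2 -> gpt-4 was itself a jump of roughly this magnitude,
# hence "another gpt-2-to-gpt-4-sized capability jump" by 2027.
```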
### 2. scaling laws remain predictive

- assumption: for every oom of effective compute, models predictably improve
- evidence: the relationship has held consistently over 15+ ooms already

### 3. ai research automation

- somewhere around +5 ooms, models can automate the work of an ai researcher
- this enables ~100 million researcher-equivalents running at 100x human speed
- which triggers a recursive self-improvement loop (the intelligence explosion)

### key assumptions

- **the data wall is solvable** (synthetic data, self-play, efficiency gains)
- **algorithmic progress is sustainable** (the low-hanging fruit is not exhausted)
- **unhobbling is achievable** (test-time compute unlockable via training)
- **investment continues** (companies spend trillions in capex)

## consequences of agi

from the above premises, leopold traces three sequential implications:

### 1. the project (2027)

leopold predicts an inevitable government takeover. his reasoning: "no startup can handle superintelligence. i find it an insane proposition that the us government will let a random sf startup develop superintelligence. imagine if we had developed atomic bombs by letting uber just improvise."

**structure:**

- manhattan project-scale mobilization
- likely model: a defense-contractor relationship (dod/boeing/lockheed) rather than nationalization. labs "voluntarily" merge into a national effort through "suave orchestration"
- the core research team (a few hundred people) moves to a scif
- congressional appropriations of trillions

### 2. intelligence explosion: agi to asi (2027-2029)

once agi automates ai research:

- ~100 million automated researchers run 24/7 on the gpu fleets, at 100x+ human speed
- **result:** a decade of algorithmic progress (~5 ooms) compressed into ~1 year (arithmetic sketched below)
- this produces "vastly superhuman" ai systems (asi) by ~2028-2030
- limited experiment compute is the main constraint, but automated researchers use compute ~10x more efficiently (better intuitions, fewer bugs, parallel learning)

leopold argues superintelligence provides overwhelming power across domains:

- **military:** novel weapons, superhuman hacking, drone swarms
- **economic:** a growth-regime shift from ~2%/year to potentially 30%+/year (or even multiple doublings per year)
- **scientific:** a century of r&d compressed into years
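two of the headline numbers above are simple arithmetic. a minimal sketch, assuming a present-day field of ~100k ai researchers (my placeholder; the 100-million and 100x figures are leopold's):

```python
import math

# (1) the intelligence-explosion arithmetic
automated_researchers = 100e6   # researcher-equivalents on the gpu fleets
serial_speedup = 100            # x human thinking speed
human_researchers = 1e5         # assumed size of today's ai research field
                                # (my placeholder, not a number from the essay)

effort_multiplier = automated_researchers * serial_speedup / human_researchers
print(f"~{effort_multiplier:.0e}x today's research effort per calendar year")

# even with steeply diminishing returns to parallel labor, only a ~10x
# effective speedup is needed for "a decade of progress in ~1 year".

# (2) the growth-regime shift, expressed as doubling times
for rate in (0.02, 0.30):
    doubling_years = math.log(2) / math.log(1 + rate)
    print(f"{rate:.0%}/year -> economy doubles every ~{doubling_years:.1f} years")
# 2%/year -> ~35 years; 30%/year -> ~2.6 years
```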
### 3. four possible outcomes

1. **us success + cooperation:** the us achieves asi first with a sufficient security lead, solves alignment, and negotiates a stable transition with china
2. **authoritarian control:** china or another adversary achieves asi first (via espionage or by winning the race) = a totalitarian global order
3. **alignment failure:** asi becomes uncontrollable regardless of which nation builds it = existential catastrophe
4. **war:** race dynamics or a volatile transition period triggers direct us-china war

## risks & measures

leopold identifies three primary risk categories:

| Risk Category | Key Threat | Suggested Measures |
|---|---|---|
| Security (most urgent) | CCP steals AGI secrets in the next 12-24 months via espionage. | Lock down the labs: treat model weights and algorithmic secrets as national defense secrets. |
| Alignment | Controlling systems much smarter than humans remains unsolved. | Invest heavily in alignment research now; put early automated researchers to work on alignment itself. |
| Geopolitical | China wins the race, or the race escalates into conflict. | Maintain the American compute and security lead, preserving a margin of safety for a stable transition. |