Superposition

Navigating the AI-Driven Shift in Power & Economics


(Created: Feb 7, 2025 | Updated: May 6, 2025)


Status: Living Doc (constantly getting updated)



Talks and Slides:

AGI: What, When, and Why It Matters | Sensemaking in a Polarized World | Bhishma Raj

Superposition

 Superposition talk @ Portal 

Intro to Post-AGI Economics


P.S. I used some AI help to organize these thoughts, but everything here reflects my genuine concerns and plans for this project. The irony isn't lost on me! 


The TL;DR:



The discourse on AI often focuses on long-term existential scenarios. I believe we're facing a more immediate, fundamental challenge within the next 3-5 years: a rapid shift in socio-economic and political power structures driven by AI. This isn't just about job markets; it's about an unprecedented concentration of capability and control that could lead to gradual human disempowerment, both economic and political. Wages falling below subsistence would be a symptom; the core issue is the erosion of human agency and influence in systems increasingly optimized by, and for, AI controlled by a few.
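
To make the wage point concrete, here is a toy back-of-the-envelope sketch (my own illustration, not taken from any of the papers referenced below; every number is an invented placeholder): if AI becomes a near-perfect substitute for human labor, the market wage is capped by the cost of running the AI equivalent, so wages track falling compute prices rather than human subsistence needs.

```python
# Toy model (illustrative only): wages when AI is a near-perfect substitute
# for human labor. If a "human-equivalent" of AI labor costs C dollars/year,
# no employer will pay a human more than C, so the wage ceiling is
# min(human marginal product, C). All numbers are invented placeholders.

SUBSISTENCE = 8_000              # assumed annual subsistence cost, USD
HUMAN_MARGINAL_PRODUCT = 50_000  # assumed value a worker produces per year, USD

def wage_ceiling(ai_cost_per_human_equivalent: float) -> float:
    """Upper bound on the human wage under perfect AI-labor substitution."""
    return min(HUMAN_MARGINAL_PRODUCT, ai_cost_per_human_equivalent)

# Suppose the cost of an AI human-equivalent keeps falling (placeholder values):
for ai_cost in (100_000, 20_000, 5_000, 500):
    w = wage_ceiling(ai_cost)
    note = "  <- below subsistence" if w < SUBSISTENCE else ""
    print(f"AI cost ${ai_cost:>7,}/yr -> wage ceiling ${w:>7,.0f}/yr{note}")
```

The point of the sketch is only the mechanism: once the AI cost line crosses below subsistence, the wage ceiling follows it down regardless of how productive the human is.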


Maybe economies and societies will adapt smoothly, as they have before. Or maybe AI represents a qualitative break, concentrating power in ways that undermine traditional checks and balances. The evidence is emerging and complex. Superposition aims to be a space for rigorous, grounded exploration of these intertwined political economy challenges, focusing on practical understanding and actionable strategies for maintaining human agency and influence, especially within the Indian context.


If this sounds exciting, feel free to drop by the Discord server.




The Challenge: AI, Power Concentration, and Human Relevance


We stand at a pivotal moment. The acceleration of AI capabilities raises profound questions not just about the future of work, but about the future of power itself. Research and emerging trends suggest potential trajectories that diverge sharply from previous technological shifts:


Figure: Conceptual hierarchical power distribution (log-scale) illustrating extreme inequality of power/resources from individuals (~10^-12) up to top-tier actors (~1.0). The red line denotes the Strategic Sufficiency Threshold – the level at which an actor (e.g. a corporation or state) can sustain itself and meet core needs independently of the broader populace via AI and automation. Above this threshold, elites can trade and cooperate mostly among themselves for critical resources, decoupled from the masses below. This model highlights the risk of gradual disempowerment: if AI enables some actors to cross this sufficiency threshold, the majority of individuals beneath it could lose economic influence and bargaining power without any overt conflict.
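
As a rough numerical companion to the figure (my own sketch, not the chart's actual data; the Pareto shape parameter and the threshold value are both assumptions chosen only to reproduce the general shape), the hierarchy can be mocked up as a heavy-tailed distribution of resource shares with the Strategic Sufficiency Threshold drawn across it:

```python
# Illustrative mock-up of the figure's log-scale power hierarchy.
# All values are assumptions, not empirical estimates.
import numpy as np

rng = np.random.default_rng(0)

# Assume resource/power shares are heavy-tailed (Pareto-like), then
# normalize so the top actor holds 1.0 and typical actors sit far below.
n_actors = 1_000_000
shares = np.sort(rng.pareto(a=1.1, size=n_actors) + 1.0)[::-1]
shares /= shares[0]  # top actor -> 1.0

# Hypothetical Strategic Sufficiency Threshold (placeholder value): actors
# above it can meet core needs via AI/automation without the broader populace.
SST = 1e-4

above = shares > SST
print(f"actors above threshold: {above.sum():>7} of {n_actors:,}")
print(f"resources they control: {shares[above].sum() / shares.sum():.1%}")
```

With a heavy enough tail, a tiny fraction of actors sits above the line while controlling most resources, which is exactly the decoupling risk the figure is meant to convey.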


This isn't a far-future hypothetical. The technological foundations are being laid now, and the potential for significant socio-political restructuring within the next 3-5 years demands urgent, realistic assessment and preparation.


Introducing Superposition: Analyzing the AI Power Shift



"Superposition" is being created to foster a clear-eyed understanding of these intertwined political and economic dynamics. The name reflects the need to hold multiple potential futures—some adaptive, some disruptive—in view simultaneously, resisting premature certainty and focusing on evidence-based analysis.


Our Focus:



Who is This For? This initiative seeks to bring together a diverse group grappling with these challenges: technologists, economists, political scientists, policymakers, governance experts, entrepreneurs, and citizens concerned about navigating this transition.


Why should we worry? 



For the underlying capability and compute trends, see https://epoch.ai/trends
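
Epoch AI's trends dashboard reports frontier training compute growing at roughly 4-5x per year (the exact rate is their estimate; check the page for the current figure). Compounding that assumed rate over the 3-5 year window discussed above gives a feel for why the timeline is short:

```python
# Back-of-the-envelope: compounding frontier training-compute growth.
# The ~4-5x/year rate is an assumption taken from Epoch AI's published
# trend estimates; verify against https://epoch.ai/trends before relying on it.

for growth_per_year in (4.0, 5.0):
    for years in (3, 5):
        factor = growth_per_year ** years
        print(f"{growth_per_year:.0f}x/yr over {years} years -> "
              f"~{factor:,.0f}x more training compute")
```

Even at the low end of the assumed range, that is a 64x increase in three years, which is the kind of step change that drives the power-shift concerns above.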



Core Questions We Need to Address:





What Makes Superposition Different?




Why Technical AI Safety Alone May Not Be Enough: The Case for Governance


While advancing technical AI safety – ensuring AI systems are aligned with human intentions – is critically important, relying solely on technical solutions like interpretability to navigate the near-term power shifts discussed here seems insufficient and potentially fragile. This motivates Superposition's focus on the broader political economy and governance landscape.


  1. The Limits of Interpretability for Detecting Deception: There's a compelling argument, often made implicitly in safety discussions, that if we could just perfectly understand an AI's internal "thoughts" via interpretability, we could reliably detect deception or misalignment. However, as researchers like Neel Nanda argue, this likely overstates our current and foreseeable capabilities.



  2. The Gap Between Development and Deployment: Even if perfect interpretability were possible, the individuals and teams developing these techniques often have little direct control over how AI systems are ultimately deployed. Powerful AI tools, including interpretability methods themselves, are fundamentally dual-use. An advanced AI system deemed "interpretable" could still be deployed by powerful actors within economic or political systems in ways that concentrate control, automate undesirable functions, or manipulate populations, irrespective of the developers' original intentions. Understanding the engine doesn't guarantee the driver has good intentions or societal well-being in mind.


  3. Power Dynamics Transcend Technical Alignment: The core challenges Superposition focuses on – the "Great Decoupling," concentration of strategic capabilities, erosion of human leverage, and potential AI-enabled political consolidation – are fundamentally issues of power, economics, and political structure. Technical alignment aims to ensure an AI does what its operator intends; it does not, by itself, solve the problem of who the operator is, what their intentions are, or how much power they accumulate by wielding aligned AI. An "aligned" AI perfectly executing the goals of a small, unaccountable elite could still lead to widespread human disempowerment.


  4. The Need for Broader Governance Frameworks: Recognizing these limitations motivates a stronger focus on governance and policy. As the recent MIRI Technical Governance Team paper underscores, ensuring a safe transition requires robust governance infrastructure beyond technical alignment.




Technical AI safety research is vital and must continue. However, for addressing the near-term (3-5 year) risks of power concentration and gradual disempowerment, relying solely on technical breakthroughs appears insufficient. We need parallel efforts focused on understanding and shaping the socio-political and economic context in which AI is being deployed. 


Superposition aims to contribute to this crucial governance layer by fostering realistic analysis, exploring strategies for maintaining human agency, and facilitating action grounded in the complex interplay of technology, power, and economics. Governance and technical safety must be seen as necessary complements, not substitutes.


What We Won't Primarily Focus On:




Current Actions & Next Steps (As of April 2025):



How You Can Get Involved:


I'm trying to figure out what this means for all of us, but I can't do it alone. My perspective has blind spots, and I need people with different backgrounds and experiences to weigh in.


If this resonates:


  1. Connect: Reach out (contacts below) for 1:1 discussion.

  2. Share Resources: Relevant research, data, analysis, or contacts.

  3. Contribute Expertise: Insights from political science, economics, governance, AI safety, geopolitics, or industry experience are invaluable.

  4. Challenge Assumptions: Critical feedback is essential for rigorous analysis.

  5. Broaden Perspectives: Help connect with diverse voices, especially those outside typical tech/policy circles.

  6. Amplify: Share this initiative with others who might contribute.


Superposition seeks to move beyond passive observation to active understanding and preparation for one of the most significant power transitions in history.


Contact: You can reach out to me on Telegram, Signal, or WhatsApp.





Appendix: Motivating Research & Resources


If you read only one article, make it the first one (Gradual Disempowerment). Scores are recommendation strength out of 5.


- Gradual Disempowerment (5/5)

- AGI could drive wages below subsistence level | Epoch AI (3/5)

- By default, capital will matter more than ever after AGI — LessWrong (4/5)

- Catastrophe through Chaos — LessWrong (4/5)

- Capital Ownership Will Not Prevent Human Disempowerment (3/5)

- How AI Takeover Might Happen in 2 Years — LessWrong (2.5/5)

- Inference Scaling Reshapes AI Governance — Toby Ord (4/5)

- Safety isn't safety without a social model (or: dispelling the myth of per se technical safety) — LessWrong (4/5)

- TASRA: A Taxonomy and Analysis of Societal-Scale Risks from AI — LessWrong (4/5)

- My motivation and theory of change for working in AI healthtech — LessWrong (5/5). Notes: RAAP.

- The Anthropic Economic Index

- Algorithmic progress likely spurs more spending on compute, not less | Epoch AI (4/5). Notes: Jevons paradox.

- What AI can currently do is not the story | Epoch AI (4/5)

- What Multipolar Failure Looks Like, and Robust Agent-Agnostic Processes (RAAPs) — LessWrong

- "Reframing Superintelligence" + LLMs + 4 years — LessWrong

- Articles from Tamay Besiroglu and Epoch AI (5/5). Notes: including Playground, Gradient Updates | Epoch AI (biweekly updates), and What a Compute-Centric Framework Says About Takeoff Speeds | Open Philanthropy.

- Forethought

- Chris Barber (@chrisbarber) / X. Notes: including AI Prep Notes.

- Measuring AI Ability to Complete Long Tasks - METR. Notes: other research from METR in general.

- Interviews - Chris Barber (5/5). Notes: lots of good interviews and information in general.

- https://ari.us/

- https://techgov.intelligence.org/research/ai-governance-to-avoid-extinction

- https://80000hours.org/podcast/episodes/allan-dafoe-unstoppable-technology-human-agency-agi/ (4/5). Notes: high-signal podcast with lots of novel takes.

- https://www.forethought.org/research/ai-tools-for-existential-security (5/5)

