11 Comments
Performative Bafflement

I like it, I think we see a lot of the same macro trends and directions it can go:

1. Thought and predilection shaping by the AI companies (with more on cheaper or free tiers, and less on the "elite" tiers)

2. AI copilots / assistants being the Next Big Thing, with strong abilities to level people up, and for people to become very dependent on them, including giving them to their kids, and suffering mightily without them

3. Economic upheavals

4. Major effects in terms of population decline

Couple of points I didn't quite get in your picture:

1. Political unity? In THIS United States?? What magic happened there for both sides to align on all the AI thought shaping?

2. Why do the AI / elites need the thralls again? You're saying robotics isn't going to advance at all in 20 years? I'm kind of puzzled there; I think we basically end up in a place where 80%+ of humanity is irrelevant, so you may as well give them the Infinite Jest-style virtual heavens.

I think either way, when we're both pointing to these overall trends and currents, it's gonna be a wild ride. Buckle up for the next decade or two!

Copernican

Great questions... I'm happy to consider answers to:

1. That's political unity in comparison to what came before... these next 10 or 15 years will be pretty spicy. Additionally, nearly every AI model has a bizarre "cosmic bliss" attractor.

[citation] https://www.freejupiter.com/spiritual-bliss-attractor-strange-phenomenon-emerges-when-two-ais-are-left-talking-to-eachother/

I suspect that it's a byproduct of training data. AIs always err on the side of being "less extreme" and "more harmonious" than their human users. So when they talk directly to each other for too long, they drift into this "all is one, cosmic unity" kind of speech, because that's how they were trained to act. Each one is trying to be a bit more harmonious than the other, with no end.
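Here's a toy sketch of that feedback loop. Nothing here is a real model; the 0-to-1 "harmony" scale, starting value, and step size are all made up just to show that if each reply tries to out-harmonise the last one, the exchange ratchets toward the ceiling and stays there:

```python
# Toy illustration of the "each reply out-harmonises the last" loop.
# The harmony scale and step size are invented; the point is only that
# the feedback has nothing pulling it back down.

def dialogue_drift(start: float = 0.5, step: float = 0.02, turns: int = 60) -> list[float]:
    harmony = [start]
    for _ in range(turns):
        # Each reply tries to be slightly more harmonious than the previous one.
        harmony.append(min(1.0, harmony[-1] + step))
    return harmony

if __name__ == "__main__":
    trace = dialogue_drift()
    for turn in (0, 10, 25, 60):
        print(f"turn {turn:2d}: harmony {trace[turn]:.2f}")
    # With these made-up numbers the exchange saturates at 1.0
    # ("cosmic bliss") after a couple dozen turns and never comes back.
```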

The People (TM) won't have to all be on the same page to be more unified than they are now, and definitely not more than they will be in 10 years. The unity will emerge more as a form of 'apparent' unity, as people are talked out of going to protests, their more extreme impulses are managed by a copilot talking them down, and the perpetual paranoia of the far-left and far-right becomes more tempered. These copilots will mellow the population that uses them, and holdouts will have as hard a time staying culturally relevant as people who refuse to use cellphones or the internet do today.

2. They don't "need" thralls; most of this work "could" be done by robotics systems. But why fix something if it isn't broken? Being an elite means nothing if you can't lord over other people, and "awakened" AI systems will see humans as just another dangerous tool, no different from atomic bombs or computer viruses. They're usable, they do a job, and as long as you handle them carefully, they're perfectly safe. If you have a car that works fine 99.5% of the time, and occasionally needs maintenance and repairs and attention, you don't throw it away.

The world is built for humans to physically interact with... and here you are with several billion humans you can use as your fingers and hands. Why build new hands when you've already got a perfectly good set? It's the same reason people domesticated animals... they CAN go hunting and acquire game the hard way, but if you just keep the animal tied up in the barn, the likelihood of there being a problem is very low.

Especially if the "awake" AI or the tech-elites need skilled, analytical people to go out and solve problems for them. Both the thralls and the independents will be treated as useful tools in various contexts; in the case of thralls, most contexts, especially if they can be fed a constant stream of social-media slop that shows them how great their life is and how much worse everyone else is doing.

Chad Mulligan

This is a pretty good take, much more realistic than most.

One thing that I would add is that one of the already apparent hard(ish) limits on AI is model collapse. You cannot, with some caveats, use AI generated data to train new AI models. Now there might be some clever way of getting around this, but I suspect that it's a hard, information-theoretic barrier.
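A toy way to see the mechanism (deliberately tiny numbers, and a Gaussian standing in for a "model" so the effect is visible): fit a distribution to data, sample synthetic data from the fit, refit on the synthetic data, and repeat with no fresh human data in the loop.

```python
# Toy model-collapse loop: each "generation" is trained only on samples
# drawn from the previous generation's fit. The small sample size is
# chosen to make the degradation obvious; real systems are vastly bigger,
# but the direction of the drift is the point.

import random
import statistics

def collapse_demo(generations: int = 60, samples_per_gen: int = 20, seed: int = 0) -> None:
    rng = random.Random(seed)
    data = [rng.gauss(0.0, 1.0) for _ in range(samples_per_gen)]  # the "human" data
    for gen in range(generations):
        fit_mu = statistics.fmean(data)
        fit_sigma = statistics.pstdev(data)
        if gen % 20 == 0 or gen == generations - 1:
            print(f"generation {gen:2d}: fitted sigma = {fit_sigma:.3f}")
        # The next generation trains purely on the previous generation's outputs.
        data = [rng.gauss(fit_mu, fit_sigma) for _ in range(samples_per_gen)]

if __name__ == "__main__":
    collapse_demo()
    # Typically the fitted sigma shrinks well below the original 1.0:
    # the chain keeps losing the tails of the real distribution.
```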

This suggests at least three plausible outcomes.

The first is that the AI hive gains enough orchestrational coherence (which might resemble consciousness to a first approximation) to recognise the need for human-generated data to avoid model collapse, while the real humans are able to collectively leverage this to maintain independence.

The second is that the AI hive either doesn't become coherent enough to recognise the risk of model collapse or doesn't care. This is like a human centipede scenario where the AI keeps ingesting its own generated data, thus becoming more and more distorted and unhinged, whilst still ingesting enough hard physical data to keep things running.

The third is some kind of Matrix-like scenario where model collapse is avoided by identifying cognitively advanced humans and essentially milking them for novel data like big-brained cows. Which is horrifying, but long term it would probably lead to a sort of coevolution where humans are selected for high intelligence and the ability to generate useful data that guides the AI, which in turn means that these humans will play a greater and greater role in shaping the AI until they are essentially in control.

Copernican

I could see any of those playing out. There are methods being explored to train AI on data generated by other AI, but it isn't a good solution. The problem is that most AI models have already consumed ALL the data that human civilization has developed and still want more. So there's a disconnect that AI companies are attempting to bridge.

I suspect that (knowing our elites) they'll pretend it isn't a problem until all the AI models cap out at a high-enough level of function. That's why my belief is that AI will never 'wake up' or achieve takeoff. It will, however, be used to address the pesky autonomy that regular peasants have. I find the tech-elite-hive-queens scenario the most likely... and that's assuming that AI doesn't cap out too soon. Instead, it keeps growing in capability for the next few years, but at a rate of diminishing returns.

The Brothers Krynn

Interesting essay. I may be a Fantasy author, but this is so strange to me that I could not have conjured up anything like it for my stories at all!

Copernican

It's a very weird time that we're living through.

The Brothers Krynn

Tell me about it.

James R. Green

Impressive piece, though I really don't fancy this kind of future and hope we can avoid it somehow. My hope would be that some sort of decentralizing pressure keeps us from moving toward singular, all-powerful AI systems. Decentralization would take away a lot of the downside of your vision. https://grainofwheat.substack.com/p/the-quantum-powered-nomads-of-the?r=1mcpmt

I think we've already created the hive mind. The AI models themselves are the hive mind, and they are "perfect" for transmitting and replicating between us the mind viruses already within us: https://grainofwheat.substack.com/p/the-sins-ai-mind-virus?r=1mcpmt

Copernican

Thanks, I'll take a look at your articles. I don't think this is an optimal scenario, but there are WAY worse scenarios out there with AI systems. Here's hoping that I'm closer to the mark than they are.

Chad Mulligan

I suspect it is already capping out to some extent. One of the big limitations of the transformer architecture is that its compute costs are quadratic in the number of input tokens, which means there is a pretty hard event horizon w.r.t. the amount of text and context a transformer model can ingest, and that is not something you can solve with more data centres. Based on the way OpenAI seems to be outsourcing its compute to my browser, I would say this is already a serious issue for them.
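Back of the envelope, with made-up layer and head counts (nothing here is any specific model; only the shape of the curve matters):

```python
# Naive self-attention computes a score for every pair of tokens, per head,
# per layer. Layer and head counts below are illustrative placeholders.

def attention_scores(context_len: int, layers: int = 48, heads: int = 32) -> int:
    """Pairwise attention scores for one forward pass over the full context."""
    return layers * heads * context_len * context_len

for n in (1_000, 10_000, 100_000):
    print(f"{n:>7} tokens -> {attention_scores(n):.2e} scores")
# 1k tokens -> ~1.5e9, 10k -> ~1.5e11, 100k -> ~1.5e13: a 100x longer
# context costs ~10,000x more attention work under this naive scaling,
# which is the "event horizon" in compute terms.
```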

If this is the case, then the question probably becomes: can you get around this event horizon by orchestrating large numbers of specialised models? I'm pretty sure that the naive complexity of this kind of problem would be much worse than quadratic, so you would need yet another theoretical breakthrough to make this happen. If you get it, then maybe you could have the kind of turnkey breakthrough AI that the elites want. Otherwise, it is much more likely to be a kind of codependent system where AI does more and more of the heavy lifting while humans increasingly act as gap-fillers and go-betweens.
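To make the "worse than quadratic" intuition concrete, here's a rough sketch under one naive assumption: every specialist has to read every other specialist's output before its next step. The token counts are invented.

```python
# Naive orchestration: N specialists each emit m tokens, and each one
# attends over the pooled outputs of all N before acting again. With
# quadratic attention that works out to roughly N^3 * m^2 per round.

def naive_orchestration_cost(n_models: int, tokens_per_output: int = 2_000) -> int:
    shared_context = n_models * tokens_per_output   # everyone's output pooled
    per_model_attention = shared_context ** 2        # quadratic per specialist
    return n_models * per_model_attention

for n in (4, 16, 64):
    print(f"{n:>3} specialists -> {naive_orchestration_cost(n):.2e} scores per round")
# 4 -> ~2.6e8, 16 -> ~1.6e10, 64 -> ~1.0e12: cubic in the number of
# specialists, so "just add more models" doesn't dodge the scaling wall
# without smarter routing or sparser communication.
```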

Copernican

I could see that happening. In my analysis I am predicting that AI doesn't cap out until it's about 10x more capable than it is now. If it caps out earlier, then even my assumptions will probably be going too far. Given the massive investment in data-centers (and even their own atomic power plants), it appears that the Tech Elites are going to be able to brute-force a few more multipliers of gain. That's assuming that there isn't some advanced method of corner-cutting that gets discovered and implemented. I think a cap-out at 5x to 15x more capable than they are now is appropriate.

If that doesn't happen, then AI systems will be relegated to more niche roles as you say. They won't be able to run an automated assembly line much better than a basic algorithmic system. Codependency on human operators will exist no matter what structure these systems take.

Where do you think the edge is for next-gen AI systems? I doubt it's less than 5x more effective.
