24 Comments
Chad Mulligan

This is a pretty good take, much more realistic than most.

One thing I would add is that one of the already apparent hard(ish) limits on AI is model collapse. With some caveats, you cannot use AI-generated data to train new AI models. There might be some clever way of getting around this, but I suspect it's a hard, information-theoretic barrier.
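
A toy sketch of the effect, if it helps make it concrete (just a Gaussian refit on its own samples, nothing resembling a real training pipeline; how fast the spread shrinks depends on the sample size, but the point is that nothing pushes diversity back up):

```
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=0.0, scale=1.0, size=50)  # stand-in for scarce "human" data

for generation in range(201):
    mu, sigma = data.mean(), data.std()              # "train" a toy model on current data
    if generation % 50 == 0:
        print(f"gen {generation:3d}: mean={mu:+.3f}, std={sigma:.3f}")
    data = rng.normal(loc=mu, scale=sigma, size=50)  # next generation sees only model output

# The spread ratchets downward and the tails vanish: each generation
# under-represents rare cases in its own output, so diversity is slowly lost.
```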

This suggests at least three plausible outcomes.

The first is that the AI hive gains enough orchestrational coherence (which might resemble consciousness to a first approximation) to recognise the need for human-generated data to avoid model collapse, while the real humans are able to collectively leverage this to maintain independence.

The second is that the AI hive either doesn't become coherent enough to recognise the risk of model collapse or doesn't care. This is a human-centipede scenario where the AI keeps ingesting its own generated data, thus becoming more and more distorted and unhinged, whilst still ingesting enough hard physical data to keep things running.

The third is some kind of Matrix-like scenario where model collapse is avoided by identifying cognitively advanced humans and essentially milking them for novel data like big-brained cows. Which is horrifying, but long term it would probably lead to a sort of coevolution where humans are selected for high intelligence and the ability to generate useful data that guides the AI, which in turn means these humans will play a greater and greater role in shaping the AI until they are essentially in control.

Copernican

I could see any of those play out. There are methods being explored to train AI on data generated by other AI, but it isn't a good solution. The problem is that most AI models have already consumed ALL the data that human civilization has developed and still want more. So there's a disconnect that AI companies are attempting to bridge.

I suspect that (knowing our elites) they'll pretend it isn't a problem until all the AI models cap out at a high-enough level of function. That's why my belief is that AI will never 'wake up' or achieve takeoff. It will, however, be used to address the pesky autonomy that regular peasants have. I find the tech-elite-hive-queens scenario the most likely... and that's assuming AI doesn't cap out too soon and instead keeps growing in capability for the next few years, albeit at a rate of diminishing returns.

Performative Bafflement

I like it. I think we see a lot of the same macro trends and directions this can go:

1. Thought and predilection shaping by the AI companies (with more on cheaper or free tiers, and less on the "elite" tiers)

2. AI copilots / assistants being the Next Big Thing, with a strong ability to level people up, and with people becoming very dependent on them, giving them to their kids, and suffering mightily without them

3. Economic upheavals

4. Major effects in terms of population decline

Couple of points I didn't quite get in your picture:

1. Political unity? In THIS United States?? What magic happened there for both sides to align on all the AI thought shaping?

2. Why do the AI / elites need the thralls again? You're saying robotics isn't going to advance at all in 20 years? I'm kind of puzzled there. I think we basically end in a place where 80%+ of humanity is irrelevant, so you may as well give them the Infinite Jest style virtual heavens.

I think either way, when we're both pointing to these overall trends and currents, it's gonna be a wild ride. Buckle up for the next decade or two!

Copernican

Great questions... I'm happy to offer some answers:

1. That's political unity in comparison to what came before... these next 10 or 15 years will be pretty spicy. Additionally, nearly every AI model has a bizarre "cosmic bliss" attractor.

[citation] https://www.freejupiter.com/spiritual-bliss-attractor-strange-phenomenon-emerges-when-two-ais-are-left-talking-to-eachother/

I suspect that it's a byproduct of training data. AI always err on the side of "less extreme" and "more harmonious" than their human users. So when they talk to each other directly for too long, they start getting into this "all is one, cosmic unity" type of speech, because that's how they were trained to act. Each one is trying to be a bit more harmonious than the other, with no end.

The People (TM) won't have to all be on the same page to be more unified than they are now, and definitely not more than they will be in 10 years. It will be more a form of 'apparent' unity, emerging as people are talked out of going to protests, their more extreme impulses are managed by a copilot talking them down, and the perpetual paranoia of the far-left and far-right becomes more tempered. These copilots will mellow the population that uses them, and holdouts will have as hard a time staying culturally relevant as people who refuse to use cellphones or the internet do today.

2. They don't "need" thralls; most of this work "could" be done by robotics systems. But why fix something if it isn't broken? Being an elite means nothing if you can't lord over other people, and "awakened" AI systems will see humans as just another dangerous tool, no different from atomic bombs or computer viruses. They're usable, they do a job, and as long as you handle them carefully, they're perfectly safe. If you have a car that works fine 99.5% of the time and occasionally needs maintenance, repairs, and attention, you don't throw it away.

The world is built for humans to physically interact with... and here you are with several billion humans you can use as your fingers and hands. Why build new hands when you've already got a perfectly good set? It's the same reason people domesticated animals... they CAN go hunting and acquire game the hard way, but if you just keep it tied up in the barn, the likelihood of there being a problem is very low.

Especially if the "awake" AI or the tech-elites need skilled analytical people to go out and solve problems for them. Both the thralls and the independents will be treated as useful tools for various contexts. In the case of thralls, that means most contexts, especially if they can be fed a constant stream of social-media slop that shows them how great their life is and how everyone else is doing worse than them.

The Brothers Krynn

Interesting essay. I may be a Fantasy author, but this is so strange to me I could not have conjured up anything like it for my stories at all!

Copernican

It's a very weird time that we're living through.

The Brothers Krynn

Tell me about it.

James R. Green

Impressive piece, though I really don't fancy this kind of future and hope we can avoid it somehow. My hope would be that some sort of decentralizing pressure keeps us from moving toward singular, all-powerful AI systems. Decentralization would take a lot of the downside out of your vision. https://grainofwheat.substack.com/p/the-quantum-powered-nomads-of-the?r=1mcpmt

I think we've already created the hive mind: the AI models themselves are the hive mind, and they are "perfect" for transmitting and replicating, between us, the mind viruses already within us: https://grainofwheat.substack.com/p/the-sins-ai-mind-virus?r=1mcpmt

Copernican

Thanks, I'll take a look at your articles. I don't think this is an optimal scenario, but there are WAY worse scenarios out there with AI systems. Here's hoping that I'm closer to the mark than they are.

Scott

If you didn’t realize that the first functional AI would rapidly evolve, determine that humans were the disease, and exterminate us all, then have you even been paying attention?

Copernican

Did you read the article? It proposes that we don't see AI takeoff; that AI tech follows the same diminishing-returns curve that every other human technology has followed. So that statement isn't really pertinent to the substance of the article. Even if AI does achieve takeoff, I see no reason humans wouldn't be viewed as just another tool for it to use, since it could convince most people to do nearly anything it wanted.

Feral Historian

I’m reminded of a conversation I had with one of my professors many years ago about the impact of easily accessible online data from a historian’s perspective. Not only did it mean I didn’t have to travel around the world visiting archives, because most of the documents were either available online or the archive would send me a digital copy for a fee, but there was no need to remember dates anymore.

When was the Treaty of Westphalia signed? 1648. What was the exact date? I don’t know. <looks it up> Ah, October 24th. I don’t need to clutter my memory with such details; it's much more important to think about it in terms of the end of the Thirty Years' War and the beginning of the modern nation-state system. Which made me realize that I’d already locked into a cognitive shorthand.

We can look at any data management as a balance between nodes (specific things, people, events, etc.) and the connecting web of relationships between them. My thinking was heavily slanted toward the web, with the nodes being details I could look up anytime. Which is usually fine; the causal relationships are more important for real understanding than a litany of names and dates. But it's absolutely a crutch that leads to some decline in cognitive abilities, even if only in a very narrow way.

I think you’re mostly spot-on extrapolating that out to a constant AI companion carrying the load of remembering things for us and walking us through tasks.

And it’s making me think I need to make a conscious effort to remember names and dates.

Copernican

Happy to help. I'm not a historian, but I am guilty of similar intellectual shorthand when it comes to certain specific books or mathematical expressions. I don't remember how to take a specific integral or do a unit conversion, so I'll just ask Wolfram Alpha to do it for me. These shorthand systems are useful, but can also be debilitating.

The best example I have is GPS. If you rely on GPS to tell you where to drive, then you'll get lost in your own neighborhood without it. If you use a map (even a digital one, just not the GPS functionality), then you'll memorize the city or even region you live in over the course of a week or two. There are a tremendous number of people who rely on a GPS to get to the nearest Walgreens or Walmart and are completely incapable of even basic navigation without it.

I was one until I decided to use nothing but a paper map on a road trip for fun. Within 2 days, I knew the city we were visiting better than the town I'd spent years living in. That's when I stopped using GPS guidance systems. There are a LOT of people who rely on GPS these days.

It seems likely that people will grow to rely so heavily on AI systems that they'll eventually be unable to function without them, outsourcing large swaths of their thinking to a digital appendage. There are way worse potential futures than this, but it's something we need to be extremely careful of. Is there any chance you could recommend some science fiction that discusses this specific topic? I can't think of anything and am looking for suggestions to add to my reading list.

Keenan Weind

I enjoyed the post.

Copernican

Thanks!

Chad Mulligan

I suspect it is already capping out to some extent. One of the big limitations of the transformer architecture is that its compute cost is quadratic in the number of input tokens, which means there is a pretty hard event horizon w.r.t. the amount of text+context that a transformer model can ingest, and that is not something you can solve with more data centres. Based on the way OpenAI seems to be outsourcing its compute to my browser, I would say this is already a serious issue for them.
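
A back-of-the-envelope illustration of that scaling (made-up dimensions, not tied to any particular model):

```
def attention_flops(n_tokens: int, d_model: int = 1024) -> float:
    """Very rough FLOP count for one self-attention layer.

    The Q @ K^T score matrix is n x n, so building it and applying it
    to V each cost on the order of n^2 * d: quadratic in context length.
    """
    build_scores = 2 * n_tokens**2 * d_model     # Q @ K^T
    apply_to_values = 2 * n_tokens**2 * d_model  # softmax(scores) @ V
    return build_scores + apply_to_values

for n in (1_000, 10_000, 100_000):
    print(f"{n:>7} tokens -> {attention_flops(n):.2e} FLOPs per layer")

# 10x more context -> roughly 100x more compute per layer, and the n x n
# score matrix has to fit in memory too; that's the "event horizon" above.
```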

If this is the case, then probably the question becomes, can you get around this event horizon by orchestrating large numbers of specialised models? I'm pretty sure that the naive complexity of this kind of problem would be much worse than quadratic, so you would need yet another theoretical breakthrough to make this happen. If you get it, then maybe you could have the kind of turnkey breakthrough AI that the elites want. Otherwise, it is much more likely to be a kind of codependent system where AI does more and more of the heavy lifting while humans increasingly act as gap fills and go-betweens.

Copernican

I could see that happening. In my analysis, I am predicting that AI doesn't cap out until it's about 10x more capable than it is now. If it caps out earlier, then even my assumptions will probably be going too far. Given the massive investment into data centers (and even their own atomic power plants), it appears that the Tech Elites are going to be able to brute-force a few more multipliers of gain. That's assuming there isn't some advanced method of corner-cutting that gets discovered and implemented. I think a cap-out at 5x to 15x more capable than they are now is appropriate.

If that doesn't happen, then AI systems will be relegated to more niche roles as you say. They won't be able to run an automated assembly line much better than a basic algorithmic system. Codependency on human operators will exist no matter what structure these systems take.

Where do you think the edge is for next-gen AI systems? I doubt it's less than 5x more effective.

Chad Mulligan

The main issue is that transformer architectures can only really predict sequences of tokens. They can only remix existing human knowledge. Novel reasoning is simply not possible for these machines, any more than flight is possible for a pig; if it ever happens, it happens only incidentally. Meanwhile, conventional neural net architectures can "reason", but only within very narrow conceptual tracks and only when presented with a very carefully structured dataset, etc.

So all the recent talk of AI supergenii solving grand unification and such is probably not realistic in this paradigm.

The way I could see things moving forward, aside from the standard incremental improvements in the existing tech, would be to move towards models that can ingest a token string and convert it into a plausible logical representation, like 3-SAT or something along those lines, and then find either an approximate solution or, alternatively, an augmentation to said problem that yields the 'best' solution by some criterion. That would probably be the most plausible path towards AI that can think and act autonomously. But this would require a totally new theoretical breakthrough, and even if it existed, it's hard to imagine that it would not impose severe requirements in terms of compute, stability, etc.
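
For anyone who hasn't run into 3-SAT, here's roughly what that kind of logical representation looks like (a toy example of my own, not anything an existing model produces):

```
from itertools import product

# Toy 3-SAT instance: each clause is an OR of three literals; a positive
# integer means the variable itself, a negative integer its negation.
clauses = [(1, 2, -3), (-1, 3, 2), (-2, -3, 1)]
variables = [1, 2, 3]

def satisfied(assignment):
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )

# Brute force works here, but the search space doubles with every added
# variable, which is why "just solve the logical form" is the hard part.
for bits in product([False, True], repeat=len(variables)):
    assignment = dict(zip(variables, bits))
    if satisfied(assignment):
        print("satisfying assignment:", assignment)
        break
```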

I guess the other thing would be AI that can learn in real time the way humans do. But my suspicion is that the relative "slowness" of human cognition is more a feature than a bug in this regard. One of the main practical issues in training ML models is finding a learning rate schedule that balances learning new patterns with not letting those patterns completely dominate the system. Humans do this pretty well, just through eons of trial and error, but in all likelihood it wouldn't even be possible if we perceived things, say, ten times faster.
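
For concreteness, a warmup-then-decay schedule is one common way that balance gets tuned (placeholder numbers, not anyone's actual settings):

```
import math

def lr_schedule(step: int, base_lr: float = 3e-4,
                warmup: int = 1_000, total: int = 100_000) -> float:
    # High learning rate early so new patterns get absorbed quickly...
    if step < warmup:
        return base_lr * step / warmup
    # ...then cosine decay so later batches can't overwrite everything learned so far.
    progress = (step - warmup) / max(1, total - warmup)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))

for s in (0, 500, 1_000, 50_000, 100_000):
    print(f"step {s:>6}: lr = {lr_schedule(s):.2e}")
```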

Copernican

I agree with that last statement about computing speed. I actually have a whole article written on it; if you haven't seen it before, give it a read (and a restack plz?). My postulate is that there are regions of the human brain that think incredibly quickly (that's why you can catch a thrown ball: the spatial-reasoning portion of the brain can autonomously spew out the answer for where the ball will land and what the best intercept point will be). But cognitive reasoning and abstract creativity are more complex systems that require a form of bandwidth-restricted communication between different regions of the human brain. Sapience is an emergent phenomenon of these regions of the brain modeling each other to perform cooperative tasks. Our versions of large neural models don't do that; we're in effect creating hyper-specialized neurological regions of the mind, but instead of something like spatial reasoning, they do language prediction (in the case of an LLM).

Here's the article I wrote for reference if you're interested in giving your two cents on what I think: https://alwaysthehorizon.substack.com/p/how-to-build-an-agi-and-why-an-llm?r=43z8s4

Halftrolling

Oh rad, we get the Blindsight future but we’re the eldritch horror. Yippee.

At that point the human mind won’t be contained within the brain to a large extent. Sure, the flesh is still there, but nobody is home. It’s just a meatsack piloted by an AI.

I almost prefer nuclear war or Skynet over the Paranoia TTRPG's Friend Computer taking over.

As for hallucinations: something that has continually occurred to me is, why not just do what humans do and literally observe reality? Create an AI infant in a robotic body able to ingest IRL data to slowly build models of the real world…

The current tech isn’t able to do this but I’ve yet to see any conclusive evidence that this cannot be done.

Copernican

I did think about the book "Blindsight" while I was following this logical chain. I hope that we don't get to that point, but I could see that as a structural result. I suspect abstract reasoning is going to be required regardless, and that a more symbiotic relationship will have to develop even if AI systems do manage to achieve takeoff (which I doubt they will).

As long as you can prevent your own bloodline from joining the ranks of the cyber-thralls, you should be well-positioned. Already, we're seeing a bifurcation in humanity between those who are easily led by digital systems and those who resist it.

Sir_Zorg

I hate how likely this is.

Copernican

Just don't become a thrall and you'll do fine.

Sir_Zorg

The problem is that not being a thrall will be actively shooting yourself in the foot economically.

You will make "worse" decisions. You will make more social blunders. You will have worse jobs and you will be out-competed and left behind. The danger of "optimal" becoming "non-optional" is exactly what Kaczynski warned about.

You can live innawoods, for a while, before the technological society decides it needs to eat those woods and will do so by charging you unpayable rent (property taxes) for the privilege of owning your land.

Not trying to blackpill here, but it is a serious problem.
