Discussion about this post

Performative Bafflement

I like it; I think we see a lot of the same macro trends and directions it can go:

1. Thought and predilection shaping by the AI companies (with more on cheaper or free tiers, and less on the "elite" tiers)

2. AI copilots / assistants being the Next Big Thing, with strong abilities to level people up, and people becoming very dependent on them, including giving them to their kids and suffering mightily without them

3. Economic upheavals

4. Major effects in terms of population decline

Couple of points I didn't quite get in your picture:

1. Political unity? In THIS United States?? What magic happened there for both sides to align on all the AI thought shaping?

2. Why do the AI / elites need the thralls again? You're saying robotics isn't going to advance at all in 20 years? I'm kind of puzzled there; I think we basically end up in a place where 80%+ of humanity is irrelevant, so you may as well give them the Infinite Jest style virtual heavens.

I think either way, when we're both pointing to these overall trends and currents, it's gonna be a wild ride. Buckle up for the next decade or two!

Chad Mulligan

This is a pretty good take, much more realistic than most.

One thing I would add is that one of the already apparent hard(ish) limits on AI is model collapse: with some caveats, you cannot use AI-generated data to train new AI models. Now there might be some clever way of getting around this, but I suspect it's a hard, information-theoretic barrier.
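The statistical intuition behind model collapse is easy to see in a toy simulation (a sketch of the mechanism only, nothing like a real training pipeline): fit a distribution to data, sample from the fit, refit on those samples, and repeat. Finite samples under-represent the tails, so the estimated spread drifts downward generation after generation until the "model" collapses toward a point.

```python
import random
import statistics

random.seed(42)

def fit(data):
    # "Training": estimate the mean and standard deviation of the data.
    return statistics.mean(data), statistics.stdev(data)

def sample(mu, sigma, n):
    # "Generation": draw synthetic data from the fitted model.
    return [random.gauss(mu, sigma) for _ in range(n)]

N = 20  # small samples exaggerate the effect; the downward drift exists at any size
data = sample(0.0, 1.0, N)  # generation 0: "human" data, true stdev 1.0

for gen in range(101):
    mu, sigma = fit(data)
    if gen % 20 == 0:
        print(f"generation {gen:3d}: mean={mu:+.3f}  stdev={sigma:.3f}")
    data = sample(mu, sigma, N)  # each model trains only on the previous model's output
```

The stdev does a random walk with a systematic downward drift, so over enough generations the distribution almost always narrows; the analogue for language models is the loss of rare, tail knowledge each time a model is trained on a predecessor's output.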

This suggests at least three plausible outcomes.

The first is that the AI hive gains enough orchestrational coherence (which might resemble consciousness to a first approximation) to recognise the need for human-generated data to avoid model collapse, while real humans are able to collectively leverage this to maintain independence.

The second is that the AI hive either doesn't become coherent enough to recognise the risk of model collapse or doesn't care. This is the human centipede scenario: the AI keeps ingesting its own generated data, becoming more and more distorted and unhinged, whilst still ingesting enough hard physical data to keep things running.

The third is some kind of Matrix-like scenario where model collapse is avoided by identifying cognitively advanced humans and essentially milking them for novel data like big-brained cows. Which is horrifying, but over the long term it would probably lead to a sort of coevolution: humans are selected for high intelligence and the ability to generate useful data that guides the AI, which means those humans play a greater and greater role in shaping it, until they are essentially in control.
