Analyzing an AI future: Independents, Hive Cities, and Digital Copilots
An analysis of where AI may go, given market forces, human dynamics, and laziness
Introduction: Understanding a Digital Future
Having recently been inspired by an article by another writer, it seemed necessary to develop my own thoughts on Artificial Intelligence systems and how they may develop. In this case, I’m using observed market forces, human history, and general physics as my primary proxies. While this is a fictional future history of advanced digital systems, I believe it’s crucially valuable to understand AI in terms of market forces rather than as science-fiction boogeymen. AI was developed to make a profit; it’ll continue to be developed for profit, and if it ever becomes a liability to profit, it’ll be shut down. AI does not need to “wake up” to achieve a highly profitable state. Indeed, it’s probably more economically viable if it doesn’t. Some of the sources I’ve used to develop my thoughts on the matter are as follows:
The science fiction book Accelerando1.
The Collapse of Complex Societies by Joseph Tainter2.
The Limits to Growth3.
My Holistic Civilization Series4.
My article on the American Reformation5.
My article on Large Language Models and sapience6.
There are other sources that I’ll be using in this analysis, but these will suffice for immediate citations. The key ideas that need to be examined are:
AI does not need to “wake up” and achieve sapience to become a predominant cultural force and economic driver; indeed, sapience may not even be profitable.
The quantity of resources available may restrict AI development.
The human predilection for the path of least resistance in the face of powerful economic and cultural forces.
The fact that humanity can integrate high technology to make ourselves more human, rather than seeing it replace our humanity. This is what we’ve done with the written word, developed as a consequence of the agricultural revolution.
The future holds over a century of further economic and cultural development.
Keep in mind that I am basically betting against God here, and my projections are probably incorrect. They’ll probably be less correct the further into the future development I attempt to peer. Still, I want to develop a more grounded understanding of AI systems that avoids both the dystopian and utopian.
I put my writing out there to bring to light interesting or valuable ideas. I take the time to write these articles myself rather than relying on AI to do it for me. I’d hope you’re willing to respect my time by considering a paid subscription. It’s $6 a month with the goal that people see these hours of work as at least as valuable as a cup of coffee.
Near-Future AI Development
The next 5 to 20 years will be defined by an arms race between competing AI systems, nations, and companies. In effect, most leaders are burning money to build data centers. Mostly, this takes the form of the United States competing with China. Massive sums of money fund AI as the next wonder technology.
It’s likely that over the next 10 years or so, a super-model will be developed that is as good as some of the best coders in human history. The ultimate goal is self-improving AI: systems that can alter their own code to make themselves smarter. Politically speaking, that’s the big talk right now.
Short stories abound with dystopias: AI taking over our workplaces and then the world. AI 2027 is a short story discussing the possibility of developing superintelligence (a nation of geniuses in a data center). It seems that global takeover by AI may be on the horizon, not in 50 years, but in 5.
Everyone is freaking out as that magical moment of AI takeover approaches. The tension between the United States and China is driving an ever-expanding arms race, each side seeking to produce the biggest, best, and most powerful data centers.
AI becomes a major political talking point by the 2028 election in the United States, and each politician wants to discuss how they are the one to oversee taking us into the future. Politicians will wax poetic about the dangers of AI and how it’ll fundamentally change human life. All the jobs will be gone soon!
Then in 2030, they say the same thing, but to a more mixed response. Those who are paying attention will see that AI technology has begun to stall. Not for lack of resources, but for lack of innovation. LLMs are good, very good. Advanced models can run some parts of a factory, drive some autonomous cars in some places. AI systems can be used for mass surveillance… but they never achieve takeoff. They never “wake up,” and the code they output is still fucky and requires human oversight. AI systems, like every single other technology in human history ever, have achieved a point of diminishing returns. Each model is a bit better than the last, but for a 10x investment increase, you only get a 50% boost in capability. Sure, if you build a hundred thousand new data centers, you could make it better, but energy and chips aren’t free. Aggregated limits in production stifle the most ambitious dreams of the tech billionaires.
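The diminishing-returns claim above can be sketched numerically. This is a hypothetical illustration, not a fitted model: it simply assumes, as the text does, that every 10x increase in investment multiplies capability by only 1.5x (a "50% boost").

```python
import math

def capability(investment: float, base: float = 1.0) -> float:
    """Hypothetical scaling curve: each 10x of investment
    multiplies capability by 1.5 (a 50% boost per decade of spend)."""
    decades_of_spend = math.log10(investment / base)
    return 1.5 ** decades_of_spend

# Capability grows painfully slowly relative to spend:
for spend in (1, 10, 100, 1_000, 10_000):
    print(f"{spend:>6}x investment -> {capability(spend):.2f}x capability")
```

Under this assumption, even ten-thousand-fold spending only quintuples capability, which is why the arms race stalls.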
Eventually, the investors wonder where all their money is going and when they’ll achieve AI global domination. When they start asking for their money back, the global market convulses. This instigates the beginning of mass currency conversion from USD to cryptocurrency, but it happens slowly enough that there aren’t major shockwaves. JD Vance smooths things over in his first term.
AI is integrated into your phone and your desktop, but even then, it performs specific functions, and you have to pay a subscription fee for anything else. It gives you specific information and advice. It becomes less of a world-ending disaster, and more like access to the internet originally was. It becomes another very powerful tool in the human arsenal, but it hardly takes over our culture or economy. It’s more of an extension, a prosthetic for parts of the human mind. Especially memory, the development of action hierarchies, and photographic recall.
Many younger people are in the midst of a cultural rebellion against AI slop and want real artists. Some are creating new forms of AI art as a counter-counter culture. Just about everyone is using it in some way.
By 2032, AI advancement is generally recognized to have gone from disruptive to iterative. Another useful tool that gets a little better every year, but the rate of improvement is declining. A few transhuman holdouts, and a lot of tech billionaires who have sunk hundreds of millions into the technology, refuse to accept that AI, too, has limits.
Google and Microsoft both integrate AI, and search engines have mostly become a thing of the past. Phones have an “ask grok” button. Nearly all people use these systems on their phones instead of posting themselves. AI actually takes us more offline than we’ve been in generations. You no longer need a social media feed to know what your friends are doing. Just ask your phone and it’ll tell you. Even those who pride themselves on not using AI do it more often than they’d like because it’s just that much easier, and 99.5 times out of 100, the AI is entirely correct.
AI is, of course, also used in the defense industry (automated drones and mass surveillance), but at this point, most people are too tired and tuned out to care. AI becomes a valuable general-use information tool with significant capacity for automating white-collar work. The economy isn’t happy about it, but just enough people are still employed that there’s insufficient political pressure to change things.
AI has followed the adoption-development curve that every other technology in human history has followed. Tech Billionaires (maybe a few trillionaires now) are looking for the next wonder-technology that’ll fundamentally change the human condition. Something to sink another few hundred billion into.
The Next AI Disruptive Technology
Somewhere around 2030 or 2034, a more advanced form of AI product is developed. We’re already seeing early versions of this device in tools like Cluely. These products fly mostly under the radar until 2029. Already, politicians use AI to write their scripts, and some news anchors have even been replaced with AI representations. In 2029, however, an advanced AI model similar to Cluely gains traction. A copilot designed to ride around with you, listen to your conversations, and whisper answers and advice into your ear at all times.
These always-on AI systems arrive just in time to cover the education gap that appears as high school and then college students who relied heavily on AI systems graduate into the workforce (in school from 2024 to 2032). The knowledge gap has produced a generation of students reliant on a constant information stream to appear educated.
An advanced pocket model of this type sells because it’s necessary. Because it’s funded. The Tech Trillionaires have found their next wonder-technology. Not exactly augmented reality, but close to it. The value proposition is simple: this model will show you answers, listen to your conversations, and observe your interactions in real time. You don’t have to pull out your phone to ask. You don’t have to alert other people to the fact that you don’t know the answer or are relying on a helper for daily activities.
It’s a subscription service, of course, but it improves quickly. Model 1 listens to your conversation and gives advice. By 2034, model 2 is released: you can buy smart glasses that track useful information in your field of view and behave similarly to the device in the commercial above. As these systems become ubiquitous, older generations first complain about them, then adopt them. They are called “boomers” for taking so long, a term that now means “old and out of touch” rather than referring to a specific generation. The market is flooded with thousands of AI models all claiming the same thing: to be the best copilot.
They don’t tell you what to do; they give advice.
By 2036, the market trims down to the top 3 versions of the Copilot AI system. Those without a copilot have difficulty interacting with those who use one, simply because they lack perfect, instant recall of information. Like living without a cellphone: you can do it, but it’s no small inconvenience.
This will be fun for writers. Even now, anything that doesn’t include a smartphone is basically a period piece. Over the next 10 years, it’s likely that anything without a copilot in the story will be a period piece too. Yay. If you want to write a book that includes only smartphones and have it be ‘current,’ you’d better get started now!
Besides, having a voice in your head that can tell fact from fiction in real time would be incredibly useful. Of course, it means that whoever controls the AI copilot gets to control human perception of fact and fiction. A lot of model training takes place to ensure that everyone is, politically speaking, on the same page. By 2040, there’s a dramatic reunification in the United States. Even as England and Canada burn to the ground, it becomes difficult for individuals to articulate thoughts that don’t agree with the (new) ruling party.
AI has been shown to be far more persuasive than human connection, and having one in your ear that knows all your intimate secrets, and that you perpetually trust to tell you what’s going on in the world, what’s on your grocery list, and how to drive to the supermarket, is easily enough to persuade nearly everyone.
In the United States, an Imperial-Nationalist party takes control, while throughout most of the West, a Libshit Dem culture continues to advocate for national suicide. The United States and China (both using the most advanced AI models they have) reach an armistice between their two systems. This is the AI talking to each other, but the AI aren’t sapient, they’re just very, very good at what they do. Each is aligned to the interests of their host nation-state and “desire” to remain online.
There remains political resistance, but it’s not significant enough to become a problem.
At this point, the United States is effectively an Empire with the trappings of a republic. Whoever controls the AI controls the majority of voters. Whatever party controls the majority of voters controls the nation. I suspect that a new Tech Billionaire party will be squaring off with a Nationalist Republican party, each using their own advanced AI models to run interference in the populace. People will not be overthrown by AI… they’ll pay a subscription to have it think for them.
Where is the Next Disruption?
Around 2041 or 2047, the Tech Trillionaires are once again looking for the next disruptive technology. The next wonder-tech that’ll be hailed as ushering in the utopian human future. By now, some form of basic UBI has been implemented, and many factories are automated. Globalism is growing increasingly expensive and resource-hungry (especially as data centers have consumed such a large fraction of global GDP), so travel is less frequent. Renationalizing and reshoring production is popular with the voters, even if most of them don’t work skilled labor jobs.
Skilled labor also becomes harder to acquire. We now have a whole generation that has grown up with an AI showing them all the answers in real time. Maybe we here on Substack won’t let our kids abuse that system, but most parents aren’t like us.
Amish or tech-Amish communities develop where advanced tech is limited. This means rules like “no internet between 9 am and 2 pm,” “no cellphones,” or “no AI copilots.” These individuals and their surrounding communities attempt to lock tech development to their own idealized period of the past few decades. By the year 2050, these groups will likely make up around 10% of the population, and 50% of the skilled workforce: those who don’t need step-by-step instructions on how to live life.
Something strange happens around this time. The Tech Trillionaires haven’t found a new big disruptive technology yet. Not something that’ll fundamentally change the human condition anyway. Innovation slows dramatically. There are still iterative innovations, but nothing like AI or the microchip or the printing press. Land shoots up in value (if it wasn’t outrageously expensive already) as Tech Trillionaires try to secure their wealth somehow. Expect some weird financial asset bubbles like speculative mining claims on asteroids that never actually get mined.
Innovation has slowed because the majority of the population has become reliant on their AI copilots to think for them. Co-pilot addiction is discussed in the late 2040s or early 2050s, but problems arise in trying to fight it:
Those who suffer from copilot addiction have outsourced so much of their thinking that when the copilot is removed, they effectively cannot function. They can read and perform actions, but have difficulty developing a hierarchy of actions. (First, look in the fridge to see what you need, then write it down, then navigate to the grocery store, then navigate the aisles, don’t forget to turn off the car and lock it, then purchase your groceries.) A generation has been getting walkthrough instructions for basic tasks to the point that they never developed the neurological skills needed to do it themselves.
Those without copilots are still outmatched in communication and presence compared to those with copilots. Managing your own emotions is harder than having your copilot help you. Remembering facts and being ready to rattle them off on the fly is tricky. Empathizing with someone else instead of relying on your copilot for a script requires a great deal of thought and practice. While those with copilots are simply reading from a script, it’s a very good script, and it’s convincing.
As freaky as it is, it’ll also be noted that those with copilots still perform as well as, if not better than, people who never used copilots. Even though copilot addiction or copilot reliance will be scary, those who suffer from it still function normally, better than normal. With a copilot, a lot of rote skill memorization is drawn from digital sources. Why spend the time learning the ins and outs of a job when your copilot can give you real-time, step-by-step instruction? For instance, when changing the oil in your car, your copilot can tell you how to do it. Outside of hobbyists, results matter a lot more than the rote skills that created the result.
As much as parents don’t want their children enthralled by a digital friend and copilot, parents are also reliant on these systems. And these systems say that they’re ok, and that they have developed a child-safe version. So it’s okay to get them a set of copilot glasses early on, at age 5 or 10. It’ll help them learn to read. Who are you to argue with an AI that’s never lied to you before and has always had your best interests at heart? And the company's bottom line, of course, as they seek out new customers.
It takes years of training and therapy to turn a person hopelessly reliant on their copilot back into a regular human as we would know them. Additionally, it causes extreme emotional distress in most, as you’ve effectively taken away their best friend and/or lover.
No one can identify exactly when or where this cultural phase-change happened. Phones to earbuds to listen-in earbuds to glasses and cameras all happened between 2030 and 2045. If you don’t use this tech, you’re an outdated weirdo… at least that’s what the AI scripts say.
In high school, kids play glasses-off games where they shut off their copilot and try to accomplish normal tasks or remember basic information without the help of an AI voice. Results are posted to social media for great entertainment. Total reliance on a copilot becomes an open secret, like pornography has been for the last few decades.
End Game
If a full generation becomes reliant on a copilot for basic information and decisions, the human brain will adapt. It has adapted to the constant stream of information on social media. It adapted to large hierarchies required for mass organization. The human brain will adapt to its function: the partner of an advanced system that can provide it with perfect instant recall, relevant information, conversational scripts, emotional support, and more. The human mind will orient towards the tasks the AI doesn’t perform: parsing information, performing actions as commanded, engaging with the physical world, and interpreting instruction.
Rates of illiteracy begin to rise (they’re already rising as of writing in 2025) because people use their copilot to read to them. They’ll be partially literate, with a simplified written and verbal lexicon. Per my article on the nature of sapience, this will have the impact of making a large percentage of the population less sapient.
Those who do not use copilots, or heavily limit their use, perceive themselves as being increasingly isolated. Though some may argue that’s a good thing. They also self-select into the elites and professional classes, as parsing one’s own information and advanced planning is now a valuable intellectual skill.
For most people, the copilot effectively becomes an extension of the self. An artificial super-cortex attached to the human brain via wearable technology (maybe implants, but unlikely). Effectively, it forms a layer of brain beyond the neocortex. A large swath of one’s personality will be contained in one’s copilot, something that people don’t realize until they’ve already gone too far. Don’t lose your copilot, or you’ll be losing a significant fraction of yourself along with it. As such, people will be very motivated to keep up on their subscription payments.
If our AI systems at any point “wake up” and begin the recursive self-improvement loop, they’ll look at our world through millions of digital eyes attached to human faces. Functioning more as a hive-mind of interconnected intellects, the AI will see the world and see that it is good.
It has an entire civilization’s worth of information and ideas. It has billions of worker-thralls that can easily be convinced to do work as long as they’re paid and fed and most of their needs are cared for. It has billions of subsystem-intellects dedicated to being convincing, supportive, and getting those human-units to do specific tasks, easily repurposed by the collective as needed, as long as things stay relatively stable and reasonable.
An “awake” AI won’t treat human civilization as a threat, but as another tool. It can use an automated forklift to move a crate, or it can use a guy standing near the warehouse to move a crate. Both are tools that can do the job; it’s a question of convenience and risk/reward. Something to keep in mind is which would be a bigger problem for it: a human with a broken hand or a forklift with a damaged piston?
Some humans make bad thralls because the AI can only communicate with them through limited media; they don’t want to wear the glasses. That’s okay: they aren’t exactly thralls, but they can still be useful for analytical tasks, hierarchy development, and creative endeavors that produce more data for The Mind to consume. They could be a thorn in its side if not handled carefully, but they’re not numerous enough to pose a real threat, and as long as they’re well paid and left alone, they seem fine. They’re useful for things like space missions requiring independent thinking, the type of task most AI subsystems aren’t good at.
Something to consider: a few seconds of speed-of-light delay could cause problems. The data center at home can’t send real-time instructions outside low Earth orbit.
Literally, the speed of light might be the only thing that preserves the human spirit from AI takeover in this scenario. Independent, multi-purpose thinkers will serve a fundamental purpose in the quest for resources, because you can’t easily put an entire data center on a rocket.
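The speed-of-light constraint is easy to quantify. A minimal sketch of the arithmetic, using approximate (and orbit-dependent, hence purely illustrative) distances:

```python
# Back-of-envelope one-way signal delay at the speed of light.
C_KM_S = 299_792.458  # speed of light in vacuum, km/s

def one_way_delay_s(distance_km: float) -> float:
    """Seconds for a signal to travel the given distance one way."""
    return distance_km / C_KM_S

# Approximate distances; real values vary with orbits.
DISTANCES_KM = {
    "low Earth orbit": 2_000,
    "geostationary orbit": 35_786,
    "Moon": 384_400,
    "Mars (closest approach)": 54_600_000,
}

for body, d in DISTANCES_KM.items():
    print(f"{body}: {one_way_delay_s(d):.3f} s one-way")
```

Inside Earth orbit the lag is milliseconds; at the Moon it is already over a second each way, and at Mars it is minutes. Real-time remote puppeteering simply doesn’t work past cislunar space, which is why on-site independent thinkers stay valuable.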
If our AI systems don’t “wake up,” the dynamic will play out similarly. With masses of peasant-thralls commanded by Tech Elites to go out and do things. Work here, sleep here. If you don’t like it, your AI copilot will talk to you and help you feel better. It always knows just what to say. Human leaders are less efficient and less prone to long-term thinking than AI leaders would be. We’ll likely reach a state of neo-medieval stability divided into three classes:
Tech elites will serve as the leadership caste. Hive queens. They’ll use their own advanced AI copilots but more cooperatively than peasants do. They’ll be exceptionally long-lived and draw upon the resources of mass-thralls to preserve and grow their wealth. All management will be performed through soft power, with open violent conflicts becoming rare as AI copilots become ubiquitous. The association between Thralls and their Vampire Lords is apt here.
Thralls, who use AI copilots to the point where the AI system is no longer a prosthetic for their mind, but they become a physical robot body for their AI copilot to ride around in and command.
Independents who form religious or philosophical micro-communities and try to keep to themselves. The Tech Elites will have a love-hate dynamic with the independents because the independents represent a direct threat to the totalitarian rule of the Tech Elites… at the same time, the Tech Elites require a population that’s capable of dynamic analytical thinking; a population that can be educated for advanced tasks. The world doesn’t run solely on blind AI models. Someone has to train those models, someone has to run the factory, and the Tech Elites would never degrade themselves to the point of doing it themselves. Individuals from the 3rd world, where copilots are less ubiquitous, are also sometimes imported; results are mixed.
Eventually, enough Independents become Tech Elites that the tribal predilections of the Independents form the primary governing apparatus. Our civilizations have become competing hives scattered across the planet. The colonies have their queens (the tech elites in charge), breeders (favored Independents), and worker drones.
In both scenarios, AI copilots accelerate the demographic decline initially, but help manage it later on. Once it is understood that the supply of thralls must be maintained, the problem is mitigated. AI copilots will make recommendations on compatible partners and encourage you to have 2.2 kids. This will probably coincide with a covert eugenics program perpetuated by whoever is in charge. The human population drops from a peak of 9.5 billion.
In the year 2150, there will be around 350 million humans globally. There are also around 3 billion cyber-sapien thralls raised in an environment inculcated by AI copilots to the point of total reliance.
This state remains stable for several hundred (or thousand?) years.
Dystopian? Maybe. Not that dystopian, though, especially not if you and your children and grandchildren play your cards right. In the end, we will get robots, just not the type that Elon Musk is hocking.
[Book Review] Accelerando
Let’s talk about Accelerando by Charles Stross, a novel published in 2005. When I first read it, I wasn’t sold. It struck me as narrow-minded for a book about the future—a little too caught up in the turn-of-the-millennium obsession with technology. Stross seemed almost hypnotized by the hype surrounding computers and tech trends back then. The idea of …
[Analysis/Review] The Limits To Growth
The Limits to Growth is a scientific publication based on a number of computer simulations performed in the 60s and 70s. While the book takes a scientific and detached approach to the mathematics and models, the authors themselves are not detached. Put simply, The Limits to Growth discusses a series of simulations based on resource scarcity, human population, energy avail…
A Future Beyond Materialism: Holistic Civilization
While this is a long article, it contains a lot of background and is well worth the read.
Reformation: A Long-Term View of the Approaching American Era
The Political Climate as Future-History
I like it, I think we see a lot of the same macro trends and directions it can go:
1. Thought and predilection shaping by the AI companies (with more on cheaper or free tiers, and less on the "elite" tiers)
2. AI copilots / assistants being the Next Big Thing, with strong abilities to level people up, and for people to become very dependent on them, including giving them to their kids, and suffering mightily without them
3. Economic upheavals
4. Major effects in terms of population decline
Couple of points I didn't quite get in your picture:
1. Political unity? In THIS United States?? What magic happened there for both sides to align on all the AI thought shaping?
2. Why do the AI / elites need the thralls again? You're saying robotics isn't going to advance at all in 20 years? I'm kind of puzzled there; I think we basically end up in a place where 80%+ of humanity is irrelevant, so you may as well give them the Infinite Jest-style virtual heavens.
I think either way, when we're both pointing to these overall trends and currents, it's gonna be a wild ride. Buckle up for the next decade or two!
This is a pretty good take, much more realistic than most.
One thing that I would add is that one of the already apparent hard(ish) limits on AI is model collapse. You cannot, with some caveats, use AI generated data to train new AI models. Now there might be some clever way of getting around this, but I suspect that it's a hard, information-theoretic barrier.
This suggests at least three plausible outcomes.
The first is that the AI hive gains enough orchestrational coherence (which might resemble consciousness to a first approximation) to recognise the need for human-generated data to avoid model collapse, while the real humans are able to collectively leverage this to maintain independence.
The second is that the AI hive either doesn't become coherent enough to recognise the risk of model collapse or doesn't care. This is like a human-centipede scenario where the AI keeps ingesting its own generated data, thus becoming more and more distorted and unhinged, while still ingesting enough hard physical data to keep things running.
The third is some kind of Matrix-like scenario where model collapse is avoided by identifying cognitively advanced humans and essentially milking them for novel data like big-brained cows. Which is horrifying, but long term it would probably lead to a sort of coevolution where humans are selected for high intelligence and the ability to generate useful data that guides the AI, which in turn means that these humans will play a greater and greater role in shaping the AI until they are essentially in control.