Urban Bugmen and AI Model Collapse: A Unified Theory
A thesis indicating that Mouse Utopia is an inherent property of intelligent systems: the problem is information-fidelity loss when later generations are trained on regurgitated data.
This is a longer article because I’m trying to flesh out a complex idea. Like my article on the nature of human sapience, it’s well worth the read.[1]
Introducing Unified Model Collapse
I have been considering digital modeling and artificial neural networks. Model collapse is a serious limit to AI systems: a failure mode that occurs when AI is trained on AI-generated data. At this point, AI-generated content has infiltrated nearly every digital space (and many physical print spaces), extending even to scientific publications.[2] As a result, AI is beginning to recycle AI-generated data, and this is causing problems in the AI development industry.
In reviewing model collapse, the symptoms bear a striking resemblance to certain non-digital cultural failings. Neural networks collapse, hallucinate, and become delusional when trained only on data produced by other neural networks of the same class. …And when you hire an intern and tell your retarded tech-bro boss that you’re “training a neural network to do data entry,” are you not technically telling the truth?
I put real hours into the thought and writing presented here. I respect your time by refusing to use AI to produce these works, and I hope you’ll consider mine by purchasing a subscription for $6 a month. I am putting the material out for free because I hope it’s valuable to the public discourse.
It may be that, by happenstance in AI development, we have stumbled upon an underlying natural law, a fundamental principle. Information-fidelity loss and collapse may be universal properties of trained neural-network systems, not quirks specific to digital ones. This line of reasoning has serious sociological implications: decadence may be more than just a moral failing; it may be a universal property of trained systems.
Model collapse is not unique to digital systems. Rather, it’s the most straightforward form of a much more fundamental underlying principle that affects all systems that train on raw datasets and then output similar datasets. Training on regurgitated data leads to a loss in fidelity and an inability to interact effectively with the real world.
The Nature of AI Model Collapse
Neural networks function by examining real-world data and then outputting an average of that data. The AI output resembles real-world data (image generation is an excellent example), but valuable minority data is lost. If model 1 trains on 60% black cats and 40% orange cats, then its output for “cat” is likely to yield closer to 75% black cats and 25% orange cats. If model 2 trains on the output of model 1, and model 3 trains on the output of model 2… then by the time you get to the 5th iteration, there are no more orange cats… and the cats themselves quickly become malformed Cronenberg monstrosities.[3]
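This feedback loop is simple enough to simulate. Below is a minimal toy sketch in Python of the cat example above: each “model” learns the color distribution from a finite sample of the previous model’s output, with a small mode-seeking bias standing in for the way real generative models overweight majority data. The sample size and the bias exponent are illustrative assumptions, not measurements from any real system.

```python
import random

def train_next_model(dist, sample_size=1000, sharpen=1.3):
    """Fit a new 'model' to a finite sample of the previous model's output."""
    colors = list(dist)
    sample = random.choices(colors, weights=[dist[c] for c in colors], k=sample_size)
    counts = {c: sample.count(c) for c in colors}
    # Mode-seeking bias: raising frequencies to a power > 1 and renormalizing
    # amplifies the majority class and erodes minority classes.
    weights = {c: (counts[c] / sample_size) ** sharpen for c in colors}
    total = sum(weights.values())
    return {c: w / total for c, w in weights.items()}

dist = {"black": 0.60, "orange": 0.40}  # generation 0: real-world data
for gen in range(1, 6):
    dist = train_next_model(dist)
    print(f"gen {gen}: black = {dist['black']:.1%}, orange = {dist['orange']:.1%}")
```

Run it a few times and the orange cats dwindle toward zero within a handful of generations. The point isn’t the exact numbers, it’s the direction of the drift.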
Nature published the original associated article in 2024, and follow-up studies have isolated similar issues. Model collapse appears to be a present danger in datasets saturated with AI-generated content.[4] Training on AI-generated data causes models to hallucinate, become delusional, and deviate from reality to the point where they’re no longer useful: i.e., model collapse.
The more “poisoned” the data is with artificial content, the more quickly an AI model collapses: minority data is forgotten or lost, the remaining majority data becomes corrupted, and long-tail statistical distributions are either ignored or replaced with nonsense.
AI model collapse itself has been heavily examined, though definitions vary. “Breaking MAD: Generative AI could break the internet” is a decent article on the topic.[5] Because we control exactly what AI systems take in, and can measure exactly what they put out and how quickly it degrades, they make excellent test subjects. Hephaestus creates a machine that appears to think, but can it train other machines? What happens when these ideas are applied to Man, or to other non-digital neural-network models?
Agencies and companies will soon curate non-AI-generated databases. In order to preserve AI models, the data they train on will have to be real human-generated data rather than AI slop. Already, there are professional AI training companies that curate training data with real-world experts, the goal being to prevent AI from hallucinating nonsense when asked questions. Results are mixed, as one would expect with any transhumanist techno-bullshit in the modern day.
Let’s talk about mice.
John B. Calhoun
John B. Calhoun conducted a series of experiments between 1962 and 1972. A tremendous amount has been written about these experiments, but we’ll review them for the uninitiated.[6][7] While these experiments have been criticized, they are an excellent reference for social and psychological function in isolated groups.[8]
In the Mouse Utopia experiment, Universe 25, John B. Calhoun placed eight mice in a habitat that should have comfortably housed around 6,000 mice. The mice promptly reproduced, and the population grew.[9]
Following an adjustment period, the first pups were born 3½ months later, and the population doubled every 55 days afterward. Eventually this torrid growth slowed, but the population continued to climb [and peaked] during the 19th month.
That robust growth masked some serious problems, however. In the wild, infant mortality among mice is high, as most juveniles get eaten by predators or perish of disease or cold. In mouse utopia, juveniles rarely died. As a result, [there were far more youngsters than normal].
What John B. Calhoun anticipated, and what most other researchers at the time anticipated, was that the population would grow to the threshold (6,000 mice), exceed it, and then either starve or descend into in-fighting. That was not the result of the Universe 25 experiment.
The mouse population peaked at 2,200 mice after 19 months, just under two years. Then the population catastrophically collapsed due to infertility and a lack of mating. Nearly all of the mice died of either old age or internecine conflict, not conflict over food, water, or living space. The results have been cited by numerous social scientists, pseudo-social scientists, and social pseudo-scientists for 50 years (you know which you are).
The conclusion that many draw from the Mouse Utopia experiment is that higher-order animals have a sort of population limit. That is, when population density exceeds certain crucial thresholds, fertility begins to decline for unknown reasons. Some have proposed an evolutionary toggle that’s engaged when over-crowding becomes a risk. Some have proposed that the effects are due to a competition for status in an environment where status means nothing (mice do have their own hierarchies after all).
The reason Universe 25 collapsed into in-fighting and lost its hierarchy is still up for debate; that it occurred is not. The resultant infertility of an otherwise very healthy population, the senseless violence, and the withdrawal from society in general have been dubbed the “behavioral sink.”
I am aware that many consider this experiment to be a one-off. It was repeated in other experiments by John Calhoun, but no one has replicated it since. I’d love to see more of these experiments, but university ethics boards won’t approve them in this day and age. WE NEED REPLICATION.
The Demographic Implosion of Civilization
Humans have displayed behaviors similar to those of the Universe 25 population at high densities. An article that I wrote roughly a year ago demonstrates a significant correlation between the percent-urban population and the fertility rate dropping below replacement levels. It appears that somewhere between 60% and 80% urbanization, depending on the tolerance of the population, fertility rates drop below replacement.[10]
Under the auspices of Unified Model Collapse Theory, those numbers may need to be revised. Rather than a fertility collapse occurring when a population reaches 60% or 80% urbanization, the drop in fertility would occur after the culture and population have re-adapted to a majority-urban environment. How long the fertility rate takes to decline would then be proportional to the cultural momentum: rarely longer than a full generation (30 years), and frequently as short as a decade.
Exact analysis of how long this takes will require a comprehensive look at multiple statistical models, and will require disentangling the long-term effects of culture, economics, war, plague, and other complicating factors. As a very rough rule of thumb, “within 20 years of reaching 60% urbanization” seems to hold true. With that in mind, the global human population is closing in on 60% urbanized, so one would reasonably expect the global fertility rate to fall below replacement well within our current lifespans. (The current global fertility rate is 2.2 children per woman and declining.)
The Universe 25 population decline did not begin in month 19 when the population peaked at 2,200 mice, but a generation or two prior. Lab mice reach sexual maturity at roughly six weeks of age, indicating that the decline may have begun as early as months 16 to 17.[11]
Rather than seeing Mouse Utopia, the Human Demographic Implosion, and AI Model collapse as disconnected events, the same principles may be active in all of them. The fidelity of information decays when later generations are trained solely on information created by prior entities of their own class.
A Thesis: Unified Model Collapse Theory
The proposed thesis is that neural-network systems, which include AI models, human minds, larger human cultures, and our individual furry little friends, all train on available data. When a child stubs his wee little toe on an errant stone and starts screaming as if he’d caught himself on fire, that’s data he just received, and it will be added to his model of reality. The same goes for climbing a tree, playing a video game, watching a YouTube video, sitting in a chair, eating that yucky green salad, etc. The child’s mind (or rather, the subsections of his brain) is a set of neural networks that behave similarly to AI neural networks.[12]
The citation is to an article discussing how AI systems are NOT general purpose, and how they more closely resemble individual regions of a brain rather than a whole brain.
People use new data as training data to model the outside world, particularly when we are children. In the same way that AI models become delusional and hallucinate when too much AI-generated data is in the training dataset, humans also become delusional when too much human-generated data is in their training dataset.
This is why millennial midwits can’t understand reality unless you figure out a way to reference Harry Potter when trying to make a point.[13]
What qualifies as “intake data” for humans is nebulous and consists of basically everything. Thus, analyzing the human experience from an external perspective is difficult. However, we can make some broad-stroke statements about human information intake. When a person watches the Olympics, they’re seeing real people interacting with real-world physics. When a person watches a cartoon, they’re seeing artificial people interacting with unrealistic and inaccurate physics. When a human climbs a tree, they’re absorbing real information about gravity, human fragility, and physical strength. When a human plays a high-realism video game, they’re absorbing information artificially produced by other humans to simulate some aspects of the real physical world. When a human watches a cute anime girl driving tanks around, that human is absorbing wholly artificial information created by other humans.
Katyusha is best girl
Brains (or brain regions) undergo model collapse just like AI systems: they lose the ability to reference reality, become delusional, and hallucinate things that make no sense. Hence the “Why do we need farmers when food just comes from the store?” level of disconnection observed in urban populations.
In a heavily urban setting, humans train on “data sets” that are nearly wholly artificial. The less time spent outside, the less time spent interacting with the real physical world around them, the less accurate their model of reality becomes. Where exactly one draws the line between “real” and “artificial” data is subject to debate. A rocky slope up a hill may be 100% real, a grass playing field may be 70% real, and a concrete sidewalk may be around 40% real. At some point, however, the “salted” artificial data is sufficient to corrupt the real-world knowledge of individuals and cause model collapse.
Urban bug-people aren’t just delusional; they’re fundamentally broken. Worse, “fixing” them may not be possible without radical retraining programs to teach them about the real world. “Go live in the woods for a year or two and try not to die” might be enough, but our society would hardly remain stable through such a remedy.
I am reminded of an anecdote from when I was a child. My siblings and I had been raised camping, hiking, and wandering the wilds since before I can remember. Somewhere around age three or four, a cousin came to visit, and we went cruising up a hill with our fathers in tow, probably looking for sticks to whack each other with. This cousin, however, had grown up in a suburban hellhole where everything was artificial. As such, he found it nearly impossible to navigate a sloped hill. His experience with walking and running had only ever consisted of flat, soft, curated environments produced by other people. He had neither the experience nor the ability to navigate a dirt trail at a 20-degree incline. His neurological model of the world was trained on human-produced data, and it could not function when confronted with reality.
When it comes to navigating the real world, urban bug-people often behave as if they’re retarded: socially (they’ve never been punched in the face), geospatially (they have no idea how to navigate by the sun or shadows), culturally (without some pop-fiction touchstone, culture doesn’t exist), and so on. They’re entirely bound to a world of artificial ideas, of human-produced data, and unable to accurately model from first principles anything outside their extremely limited sphere of artificial experience.
The bugman’s neurological model of reality is divorced from reality. They hallucinate “truths” that make no sense, delude themselves into provably false ideas, and violently attack anyone with a model of reality more accurate than their own.
They don’t understand violence, hunger, or (real) social organization because they’ve never encountered those things. And by the time they’re adults, their models of reality are too set to be easily changed. As Yuri Bezmenov would say, they’ve been “demoralized,” though I’m not sure that’s the correct term for full-scale neurological model collapse. I’d argue that they’ve been “corrupted” and are no longer capable of understanding reality even in the face of overwhelming evidence.
This also explains why there is a threshold in % urbanization at which human fertility declines to catastrophic levels. Just as the mice in Mouse Utopia were no longer capable of interacting with each other or breeding another generation…
Universalizing the Thesis
The universal thesis for model collapse is that advanced modeling systems, when trained on information produced by entities of their own class, lose information fidelity inter-generationally. After multiple generations of training on poisoned datasets, the models themselves become delusional, hallucinate false information, and cease to function.
As Applied to AI Systems
For AI models, it is easy to measure the input-output data that results in model collapse. AI systems that train on their own output data (or other AI output data) lose information value over multiple generations. Even a relatively limited amount of poisoned data can cause the AI to deviate from the real world by a significant margin.
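As a sketch of what that measurement could look like, assuming we can estimate both the real data distribution and each model generation’s output distribution, the Kullback-Leibler divergence between the two quantifies how far a generation has drifted from reality. The per-generation distributions below are illustrative assumptions, not measured values.

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """D_KL(P || Q) in nats: how badly Q (the model) misrepresents P (reality)."""
    return sum(p[k] * math.log(p[k] / max(q.get(k, 0.0), eps))
               for k in p if p[k] > 0)

real = {"black": 0.60, "orange": 0.40}       # the real-world distribution
outputs = {                                   # hypothetical model generations
    1: {"black": 0.75, "orange": 0.25},
    2: {"black": 0.88, "orange": 0.12},
    5: {"black": 0.99, "orange": 0.01},       # minority data nearly gone
}
for gen, dist in outputs.items():
    print(f"gen {gen}: D_KL = {kl_divergence(real, dist):.3f} nats")
```

The divergence climbs as the minority class gets squeezed out, which is exactly the “deviation from the real world” described above.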
Generalizing to Animal Neurological Models
For animals, the same principle applies as to AI, but there is a point at which the animals’ neural models are sufficiently damaged that they fail to produce a next generation. At that point, catastrophic population decline ensues.
As Applied to Mouse Utopia
Universe 25 created an environment where baby mice had very little real-world feedback: no hunger, no predators, no heat, cold, wet, or dry. The only information each generation of mice received was derived either from its own limited experiences or from other-mouse behavior. The mice were trained on datasets with little or no real-world intrusion. As a result, their training reached a state of catastrophic failure after roughly 13 generations. At that point, fertility dropped to zero in the youngest populations, and the entire mouse society collapsed into nihilistic extinction.
As Applied to Animals in Captivity
In most instances where animals are kept in captivity, significant effort is expended to simulate a “natural” environment. This helps prevent weird behavioral idiosyncrasies. Smarter animals are more difficult to keep in captivity, and pandas are notorious for not breeding when kept in captivity. A mouse in a cage is generally receiving a reasonable amount of non-mouse data from its keepers. At the same time, there appears to be a threshold of poisoned information that depends on the neurological structure of each animal. If overloaded solely with recycled information created by other animals of its own species (or maybe similar species?), then some type of behavioral sink is going to appear; in this case, representing neurological model collapse.
As Applied to Human Civilization
Humans have been referred to as a self-domesticated species (well, some of our subspecies, anyway). It appears that when we create our own environments, a significant percentage of the resultant data becomes “poisoned” by virtue of being human-created. As a result, Homo sapiens that learn about the world solely (or predominantly) through media are not capable of modeling reality. More abstract thinkers can reason from first principles, but they’re not immune, and the majority of the population cannot “curate” their own input data. Human minds, themselves neural networks, end up building models based largely on “synthetic” data. The result is that those minds become optimized for synthetic realities.
Those models lose the capacity to understand long-tail information (improbable but important data) that is no longer represented: information on topics like serious injuries, getting punched in the nose, how dangerous wild animals can be, and what it’s like to be truly hungry because you can’t find food. Their models default to synthetic human artifice instead of understanding real implications.
The result is delusions about the state of the world; ideas like “it can’t happen here,” or “if I go to school, I’ll get a nice job,” or “no one needs a gun,” are excellent examples. They model imaginary worlds created by other humans, resulting in a suicidal inability to interact with reality. Psycho-social model collapse is most pronounced in the most artificial cultures: hyper-urban cultures.
This type of fidelity loss has become apparent in the wake of studying artificial neural-network systems, and in light of the catastrophic global demographic decline. Demographic decline is most severe in synthetic urban environments, while rural and Luddite-leaning environments appear far more resistant (though still affected, given the global reach of the digital age).
Potential Flaws in the Thesis
The following are a few counter-arguments I’ve thought of and responses to them in the context of this thesis. If the reader can think of other counter-arguments, please comment on them below. This idea is still getting fleshed out, and it needs to be cross-examined. Still, it does appear to accurately represent an underlying principle in thinking entities.
Eusocial Animals
People keep ant farms, and ant farms kept as pets over long durations do not self-annihilate through infertility. The primary explanation is that only trained neural networks of a given complexity are subject to this degree of information-fidelity loss. Instinctual behaviors are inherited and genetic, and do not need to be retrained every generation. Where one draws the line between a “trained” behavior and an “instinctive” behavior remains somewhat fuzzy, but the contrast does indicate that data-fidelity loss only becomes a problem above a given level of neurological complexity.
Cultural Traditions
Humans do not function well without cultural traditions. Older cultural information seems necessary for future development. Cultural traditions are lower-fidelity information condensed for easy consumption. I’d argue that there’s a relatively broad range of human-generated data that humans can input before it starts to become a problem. There is, however, a maximum threshold.
Likewise, model collapse does not seem to affect the totality of the mind. Rather, it causes declines in specific mental models individually. A NEET hikikomori might be unable to interact outside his home, playing video games all day… while a businessman can interact outside the home: go on dates, party, get drunk. Lord help him if he’s ever left to fend for himself in a forest, however. Each model needs to absorb real data: human-human interactions as opposed to human-NPC interactions; geospatial information and not GPS guidance; real cultural institutions and not endless references to Harry Potter or video game characters.
“Yeah, when that guy at the bar punched me, it was like when I was playing Skyrim and, like, your stamina bar goes down really fast. I was so out of breath!”
~A paraphrased friend who shall remain nameless
When external data is input from uniform synthetic sources such as Leftist academic mantras or globalist urban culture completely disconnected from reality, there’s a loss in fidelity and function over time. The result is the collapse of one’s intellectual model of reality. Exactly where that line is, and how fine it is, remains up for debate.
Conclusions: Touch Grass
In a very real way, the urban bug-people completely diverge from reality. As one writer would say:

Like the Gen Z boss video, all those people at the [Democratic Socialist Convention] are nothing more than children playing at politics. These people barely qualify as the same species when compared to the people who fought in World War II.
A lot of the LGBT aesthetic seems quite childish. There’s a lot of glitter, a lot of emojis, pastels, bright colors, lots of cartoons, etc.
This is a very real psychological breakdown: their neurological models of reality are broken, delusional, and unable to functionally interact with the real world. In the same way AI models hallucinate nonsense, urbanite bug-people become delusional about human nature and the natural world.
There clearly exists a limit to the ouroboros of information. There is a limit to the synthetic data that one can absorb before losing touch with reality. The underlying principle is that information from entities of one’s own class cannot accurately represent reality… and that training oneself, one’s children, AI models, or mice solely on data regurgitated by entities of their own class will cause hallucinations, delusions, and a nihilistic breakdown. For fidelity to remain high, external data input is required. For model collapse to be avoided, synthetic information intake must be limited.
You cannot train people on regurgitated data any better than you can AI.
While the exact boundary of “one’s own class” remains fuzzy, there clearly must be one. Perhaps something as simple as inputting data from cultures outside one’s own could be a valuable addition. Certainly, real information about how raw materials work, how plants grow, and how animals hunt and flee predators is valuable. One might also argue that young children are of a different class from adults in terms of information production, or that psychedelics allow one to re-experience information as if one were of a different entity class (perhaps leading to the sapient awakening of mankind).[11]
Industrial society is completely borked in its current state, but survivable. The populations that do well will be those that limit their artificial information intake, especially for the next generation. The kids need to be playing OUTSIDE. They need to be climbing trees and getting scrapes and bruises. Curated environments will drive them crazy, and you may not see the true effects until they reach adulthood.
Clearly, humans have a tolerance for synthetic data. We’re surrounded by it, but we can manage as long as we have real first principles and real interactions with the world around us. Combative martial arts. Shooting. Hiking. Hunting. Even cooking and realistic meal preparation can dramatically improve the quality of input data that a child receives.
Without real data, the human mind ceases to function, and its disparate parts begin hallucinating information that doesn’t exist, and which will often be confidently and violently defended. The modern political Left is a product of delusional psychology that’s hell bent on enacting the worst possible policies because its adherents are fundamentally neurologically broken… and they may not be fixable.
… which finally brings us to a solid answer to the question of the Experience Machine in terms of philosophical morality.
Epilogue: The Moral Question of the Experience Machine
The Experience Machine is a thought experiment (and hopefully remains one) that’s described accurately in this article by Nich Halden.[14] The proposal is that there exists a machine that you can plug into and experience a complete and fulfilling life, along with whatever other fantasies you may have. Is it moral or immoral to plug oneself into such a machine? There’s a great webcomic that exemplifies the concept here. I left a comment on the original article by Nich Halden, and since then, I’ve considered the question in some more detail. The Experience Machine presents a fundamental and existential question about human existence… but in the light of Universal Model Collapse Theory, it also represents a fundamental existential threat.
If individuals are confined to “experience” their perfect version of life in such a way, their brains will rot out their ears. Human neural networks presented with 100% synthetic data are likely to stop functioning entirely. An environment with no feedback but wholly synthetic data will cause psychological lapses, neurological breakdown, and a slow entropic decay of the mind.
Initially, the experience machine may present interesting, unique data… but over time, new experiences will be added: predominantly experiences crafted by individuals who are themselves using the experience machine, or worse, by AI trained on the experiences of those using it. An ouroboros. The end of mankind wouldn’t even be biological in such an instance; it would be neurological: mankind consuming his own creativity until there’s nothing left but neurons firing in patterns no real human mind could possibly identify with. To plug oneself into the experience machine could well be to consign oneself to psychosis, with deleterious symptoms becoming visible only long after the damage is irreparable.
In the light of Universal Model Collapse, the Experience Machine becomes a Lovecraftian nightmare that will cause individuals to rewrite their own neurology until there’s nothing human left. If you think urbanite bug-people are bad, imagine what will happen when they lose touch with their sensations of touch, sight, sound, culture, and physicality: a reduction from 50% of their training data being real to zero.
Man can plug himself into an experience machine, but if he is ever unplugged, there’s a good chance that what walks out will no longer be a man; a cacophony of twisted and decayed neurological voices that long ago lost any semblance of a human mind.
[EDIT]
Indra's Nettle coined the term “Fidelity Drift” in the comments below. I think that’s a great term to describe the process of human civilization over-training on corrupted data-sets. The result is Fidelity Drift, reducing the quality of the information on which we can then train future generations.
Upper Echelon did an excellent video on how AI is infiltrating academia and the sciences back in 2024. The problem is substantially worse now and harder to track.
An excellent article on AI model collapse issues can be found here by Freethink.com.
https://www.nature.com/articles/s41586-024-07566-y
The Reasons for Demographic Decline
A recent hour-long documentary provided me with the impetus to examine certain trends regarding demography. The real question posed by the documentary is: why aren’t people having children?
I'm going to add this comment here based on a discussion that John Carter and I had in the comments of his restack:
I think we could theoretically calculate entropy levels in a system by averaging data input and data output. Doing so even for simple AI systems would require high-level physics, but it would allow us to calculate the rise of entropy in the intake and data production of an AI over multiple generations of training. This “generational entropy” or “information fidelity” level could then provide a baseline for examining human civilization.
The results could then apply to individuals or subcultures. The difficulty would be universalizing the physics from simple image generators and text generators to complex social systems. While the model wouldn’t be perfect, it might show the levels at which fidelity breakdown begins to occur: a mathematical way to calculate how densely packed people and services can be before causing critical levels of psychological model collapse.
Mathematically, it’d be an idealized system, and you’d have to make a LOT of assumptions. A few dedicated physicists and (non-leftoid) sociologists with a decade of research funding might be able to show how dense human populations can get before reaching significant levels of model breakdown. It would require a tremendous amount of experimental testing.
Effectively: if we assume that the root cause of human sociological breakdown and digital AI model breakdown is the same (the loss of information fidelity over successive instances of training on polluted data) we could create a mathematical/physical theory that redescribes human civilization density as a function of information stability.
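To make the idea concrete, here is a back-of-the-envelope Python sketch of “generational entropy”: a distribution is repeatedly re-fit to a finite sample of its own output, and its Shannon entropy is tracked across generations. Even with no bias at all, this resampling chain loses diversity (the same mechanism as genetic drift), so entropy tends downward. Every number here is a toy assumption; a real analysis would need far richer state than a single categorical distribution.

```python
import math
import random

def entropy_bits(dist):
    """Shannon entropy in bits; it falls as long-tail categories die out."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def regurgitate(dist, sample_size=50):
    """Re-fit the distribution to a finite sample of its own output."""
    cats = list(dist)
    sample = random.choices(cats, weights=[dist[c] for c in cats], k=sample_size)
    return {c: sample.count(c) / sample_size for c in cats}

dist = {c: 0.2 for c in "ABCDE"}  # generation 0: five equally common categories
for gen in range(21):
    if gen % 5 == 0:
        print(f"gen {gen:2d}: entropy = {entropy_bits(dist):.3f} bits")
    dist = regurgitate(dist)
```

The “information fidelity” baseline proposed above would, in effect, be a far more elaborate version of this curve, fit to real systems instead of a toy distribution.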
---
On the philosophical side (more likely to be useful), identifying a universal root cause for these types of breakdowns helps build a philosophical model of morality not merely as the actions of an individual, but as a sociological effect. If that’s the case, then individualist liberalism is placed in checkmate: it doesn’t work, and this is why, and we have AI training models as testbeds to prove it. Effectively, information and society MUST grow within a natural hierarchy greater than that of Man, and truth MUST originate external to the self.