Discussion about this post

Copernican:

I'm going to add this comment here based on a discussion that John Carter and I had in the comments of his restack:

I think we could theoretically calculate entropy levels in a system by averaging the entropy of its data input and data output. Doing so for even simple AI systems would require high-level physics, but it would let us track the rise of entropy in an AI's intake and data production over multiple generations of training. This "generational entropy" or "information fidelity" level could then provide a baseline for examining human civilization.
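To make that concrete, here is a minimal sketch of what this could look like, assuming Shannon entropy as the measure and the simple average proposed above; the function names are invented for illustration, not an established method:

```python
from collections import Counter
import math

def shannon_entropy(tokens):
    """Shannon entropy (bits) of a sequence's empirical distribution."""
    counts = Counter(tokens)
    total = len(tokens)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def generational_entropy(input_tokens, output_tokens):
    """Hypothetical 'generational entropy': the average of input and output
    entropy. Tracked across training generations, a falling value would
    signal lost information fidelity."""
    return (shannon_entropy(input_tokens) + shannon_entropy(output_tokens)) / 2

# Toy usage: a diverse early generation vs. a degenerate later one.
gen0 = generational_entropy(list("the quick brown fox"), list("a quick brown fox"))
genN = generational_entropy(list("the the the the the"), list("the the the the the"))
print(f"gen 0: {gen0:.3f} bits; gen N: {genN:.3f} bits")
```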

The results could then be applied to individuals or subcultures. The hard part would be universalizing the physics from simple image and text generators to complex social systems. While the model wouldn't be perfect, it might show the levels at which fidelity breakdown begins to occur: a mathematical way to calculate how densely people and services can be packed before critical levels of psychological model collapse set in.

Mathematically, it'd be an idealized system, and you'd have to make a LOT of assumptions. A few dedicated physicists and (non-leftoid) sociologists with a decade of research funding might be able to show how dense human populations can get before reaching significant levels of model breakdown. It would require a tremendous amount of experimental testing.

Effectively: if we assume that the root cause of human sociological breakdown and digital AI model breakdown is the same (the loss of information fidelity over successive instances of training on polluted data), we could create a mathematical/physical theory that redescribes human civilization density as a function of information stability.
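The AI half of that assumption can at least be demonstrated with a toy simulation. Below, a categorical distribution fit by simple counting stands in for a real model (an illustration of the general mechanism, not anyone's actual training pipeline): each generation trains only on the previous generation's outputs, rare tokens that draw zero samples vanish permanently, and support and entropy trend downward.

```python
import math
import random
from collections import Counter

def entropy_bits(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

def collapse_demo(vocab=100, n=500, generations=15, seed=1):
    """Repeatedly fit a categorical distribution to its own samples:
    training on polluted (self-generated) data in miniature."""
    rng = random.Random(seed)
    # Zipf-like starting distribution: many rare "tail" tokens.
    weights = [1.0 / (k + 1) for k in range(vocab)]
    total = sum(weights)
    probs = [w / total for w in weights]
    for g in range(generations):
        support = sum(1 for p in probs if p > 0)
        print(f"gen {g:2d}: support={support:3d}  entropy={entropy_bits(probs):.3f} bits")
        sample = rng.choices(range(vocab), weights=probs, k=n)
        counts = Counter(sample)
        probs = [counts.get(k, 0) / n for k in range(vocab)]  # refit on own output

collapse_demo()
```

Once a tail token hits zero it can never come back, which is why the decay is effectively one-way.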

---

On the philosophical side (more likely to be useful), identifying a universal root cause for these types of breakdowns helps build a philosophical model of morality not merely as the actions of an individual, but as a sociological effect. If that's the case, then individualist liberalism is placed in checkmate: it doesn't work, and we have AI training models as testbeds to prove why. Effectively, information and society MUST grow within a natural hierarchy greater than that of Man, and truth MUST originate external to the self.

Richard Jordan:

Really fascinating article.

Jane Austen deals with this topic in Northanger Abbey. The sheltered protagonist has had her judgment seriously distorted by reading too many gothic novels. (Also, Austen famously preferred the country to the city, perhaps a precursor of "touch grass.")

Presumably, this argument also explains why the best movies were made by people who grew up without televisions; why the best sitcoms were written by people who didn't grow up saturated with laugh tracks; why the best video games were made by people who didn't grow up playing them; etc.

It would also seem to have implications for retirement savings. As firms like Vanguard, which simply buy index funds, become a larger share of financial markets, they will gradually reduce the information in those markets. "Buy index funds" is good advice until everyone is doing it; but when everyone just buys index funds, you would get model collapse.
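That intuition can be sketched as a toy market model, with the caveat that the weighting rule and parameters here are invented for illustration and are far cruder than real price-discovery models: informed traders push the price toward noisy estimates of fundamental value, passive index money simply echoes the prior, and pricing error grows as the passive share grows.

```python
import random
import statistics

def mean_pricing_error(passive_share, n_traders=200, noise=1.0, trials=2000, seed=2):
    """Toy market: the clearing price is a weighted mix of the informed
    consensus and the uninformative prior (0.0), so the information in
    the price shrinks as the passive share grows."""
    rng = random.Random(seed)
    errs = []
    for _ in range(trials):
        value = rng.gauss(0.0, 1.0)  # true fundamental value
        signals = [value + rng.gauss(0.0, noise) for _ in range(n_traders)]
        informed_view = statistics.fmean(signals)
        price = (1 - passive_share) * informed_view + passive_share * 0.0
        errs.append((price - value) ** 2)
    return statistics.fmean(errs)

for share in (0.0, 0.5, 0.9, 0.99):
    print(f"passive share {share:4.0%}: mean sq. pricing error = {mean_pricing_error(share):.3f}")
```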
