Could “AI” end the post-industrial economy?

I’m prepared to be wrong, but my present best guess is that the “AI” bubble has now reached its peak and is about to unwind. It’s not out of the question that this will trigger a GFC-like financial crisis given the stupendous amounts of money that have been poured into it. If so, the underlying cause will be much the same as the GFC, i.e., far too much money chasing far too few returns.

The pattern that we have seen around the rollout of LLMs is not new, and is worth exploring in more detail, since it may very well be on its last legs.

We can understand the current situation by comparing it to the form of capitalism which existed about a hundred years ago. One of the great success stories of that form of capitalism was the rise of the automobile, and a very large part of the credit for that needs to go to Henry Ford.

Henry Ford

Entire books have been written about Ford and the huge paradigm shift he brought about. Among other things, he set the stage for the post-war consumer economy. He was also an advocate for early forms of “social justice”. That concept was not always the province of university communists. In fact, even in our time, much of the funding still comes from various billionaires, including the charitable foundation that Henry Ford’s son set up back in the day.

For our purposes here, we’ll focus on just a few key properties of Ford and his work, so that we can compare the system that created the automobile to the one that has created “AI”; industrial capitalism vs post-industrial capitalism.

Thomas Edison: another engineer-capitalist

The first thing to note is that Ford was originally an engineer; a very good one, too, because he rose to the rank of chief engineer at Thomas Edison’s electric company. It was in his spare time that Ford began experimenting with building engines and cars. Like many enthusiasts of that era, he was also actively involved in car racing, which became a testing ground for new ideas and designs.

Ford was not alone in beginning his career on the production side of the business world. Many of the great “capitalists” of the 19th and early 20th centuries were also engineers. Furthermore, what we now think of as public utilities were largely built out by such private engineers. That was true even here in Australia where the market was tiny compared to the US.

The second point to make about Ford’s car manufacturing business was the manner in which he achieved success. He had a number of failed attempts before hitting the winning formula with the Model T. Although relatively large sums of capital were required to manufacture cars, these initial attempts were fairly small bets where Ford himself assumed much of the financial risk.

More importantly, once the Model T took off, Ford expanded the business by reinvesting profits earned, not through speculative capital. This was no accident. Ford was highly critical of the banking industry and saw the concentration of power in the hands of banks as a disincentive to real innovation and growth in favour of profit-taking. That’s also why Ford quickly assumed full control of his company by buying out all other original investors and becoming sole owner. This made him independent of financial institutions and the stock market.

The third crucial point is that Ford was one of the enlightened capitalists of his era who realised that paying workers the lowest possible wage was counterproductive because it created employees who didn’t have enough money to buy the products that were being manufactured. He shocked the business world by introducing the “five dollar wage” at his company, which was almost double what the average worker was earning at that time. This enabled him to snap up the most talented workers, whom he further rewarded by giving stock options to the best performers. Later, he introduced a five-day work week.

The combination of these and other practices formed the basis of the economic strategy that drove capitalism for most of the 20th century. Increased productivity led to higher wages and reduced working hours, thereby giving consumers the time and money to purchase the products being created, thereby driving sales, thereby allowing wages to increase, and so on. At the core of this model were efficiency gains driven by the engineering talent that Ford was able to attract and nurture.

Of course, the power of productivity had been known about at least as far back as Adam Smith. What made Ford successful was that he found a way to apply it to complex manufacturing. What also separated Ford was his willingness to share the gains with the workers, in contrast to the old capitalists who tried to screw every last penny out of the labour force.

With these basic elements of Ford’s model sketched out, we are now ready to understand some of the major differences between the industrial economy of the 20th century and the post-industrial economy that has produced things like “AI”. One of the main differences is the way technologies such as LLMs have been developed and financed.

Remember that Ford tried and failed several times to release a viable car to market and then, when he finally succeeded, built his business from profits earned, not from speculative capital. With LLMs, we have an almost exact inversion of that paradigm. LLMs are not a product that has any kind of real consumer demand behind it. Nobody has purchased an LLM. Rather, they have been made freely available. This has only been possible due to the enormous amounts of speculative capital that have been thrown at them. In the post-industrial model, products are not required to be profitable in order to gain investment. The assumption is that they will be monetised later once a market has been created.

The beginnings of this new kind of model can be found in the early days of the home computer industry. Nominally, that industry had a great deal in common with Henry Ford’s kind of capitalism. Bill Gates, Steve Jobs, and Steve Wozniak were all engineers and enthusiasts who mucked around with the tech in their spare time. They invested their own money into their early ventures and absorbed the cost of failure. When they finally stumbled across the winning formula, home computers saw similarly explosive growth to the Model T Ford. All of this looks like a repeat of the old paradigm.

There were at least two big differences, however. Firstly, both Apple and Microsoft were launched on the stock market very early on. This made Gates, Jobs, and Wozniak very rich, but it also meant they lost control of their companies. In fact, Jobs and Wozniak were famously elbowed aside shortly after Apple was publicly listed. That is the price to be paid for selling out to capital. It’s a price that Henry Ford explicitly avoided.

The second big difference was that sales were not really driven by the computers themselves, since only people who knew how to code could use them. Instead, it was the pre-installed ready-to-use software that became the key selling point. For example, Apple got its initial popularity by bundling its computers with one of the first spreadsheet programs. It would later win the high end of the market by incorporating graphic design and printing software. It turned out that software was the main game, not hardware. Over time, this led to a pattern of offering software free or almost free in order to win market share.

With the arrival of the internet, there was a further abstraction away from hardware in favour of software. One result of this was that websites were offered for free and then monetised later. Capital also got involved much earlier in the process. Thus, Zuckerberg barely needed to invest any money in Facebook before the investment firms came knocking on his door to give him a truckload of money. Google had a very similar story in its early days.

Home computers had followed the same pattern as early automobiles in that they started out expensive but got cheaper as economies of scale and technology improvements came along. By contrast, the internet has followed an opposite pattern. Because the price of nearly every website started out at zero, it was impossible for it to get cheaper. What happened instead was that quality degraded over time once the market was captured and the investment funds started to monetise. This is what Cory Doctorow has usefully called the “enshittification” paradigm. It is, in fact, a form of market failure based on exactly the kind of predatory financial practices that Henry Ford warned about a hundred years ago.

If we think back to the original paradigm that Henry Ford operated under, another core component was productivity gains. There can be no doubt that the computer revolution led to massive efficiency improvements, and so this fitted the old pattern of industrial capitalism. However, the internet has been qualitatively different in this respect. Yes, it certainly saves time to have a search engine that returns the exact results you want, but this useful feature rarely translates into meaningful increased workplace productivity. Google’s revenue came not from productivity gains but from advertising resulting from market capture that included the entire globe, not just local markets.

We can see, then, that the internet has broken with Ford’s paradigm in several ways. Firstly, it has not been about productivity gains. It turns out that increasing the speed of information does not improve its quality. Most of the time, it degrades it. Secondly, the internet has not been about creating a virtuous cycle where prices are lowered over time because the starting price of any website has been zero. Therefore, there has been no incentive to increase quality. On the contrary, predatory finance practices only lead to enshittification.

It is against that background that LLMs have arrived on the scene. Perhaps ironically, one of the main marketing angles for LLMs and “AI” is that they will improve productivity. While there have been some promising results in relation to machine learning more broadly, there is very little evidence that LLMs lead to efficiency gains. The strength of LLMs turns out to be a weakness in this regard. The technology does a pretty good job of impersonating human language, but human language is not optimised for efficiency. It is flexible and context-dependent.

The result is that, for any task of reasonable complexity, it is very difficult even to craft the right instructions to give to an LLM to ensure you get the answer you want. Moreover, even if you fine-tune the instructions, it is in the nature of LLMs to return somewhat random answers, including answers with basic factual errors. There was an amusing example of this recently when the consulting firm Deloitte got busted using LLMs in a report for the Australian government. The LLM did its fairly common trick of “hallucinating” facts, i.e., making stuff up.

The whole reason why the computer revolution led to such huge productivity improvements was because computers give the right answer every time as long as they have been programmed correctly. It was the speed and accuracy of computers that led to efficiency gains. LLMs might be quick, but they produce results that cannot be relied on.  
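The contrast can be sketched in a few lines of Python. The “generator” below is a toy weighted sampler, purely illustrative and nothing like a real LLM, but it shows the basic point: a correctly programmed calculation returns the same right answer every time, while sampled output varies from run to run and cannot be relied on.

```python
import random

# Deterministic computation: the same input always yields the same output.
def compound_interest(principal, rate, years):
    return principal * (1 + rate) ** years

# A toy "LLM-like" generator (illustrative only, not any real model):
# it samples the next word from a weighted distribution, so repeated
# runs with the same prompt can produce different answers.
def toy_generate(prompt, rng):
    vocab = ["profits", "losses", "growth", "hype"]
    weights = [4, 1, 2, 3]
    return prompt + " " + rng.choices(vocab, weights=weights, k=1)[0]

# The spreadsheet-style calculation is reliable enough to build a business on:
assert compound_interest(100.0, 0.05, 2) == compound_interest(100.0, 0.05, 2)

# The sampled answer depends on the state of the random generator:
print(toy_generate("Ford reinvested his", random.Random()))
```

The deterministic function is boringly repeatable, which is exactly what made the computer revolution productive; the sampler is only ever probably right, which is the property no amount of prompt fine-tuning removes.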

That would be bad enough, but it gets even worse when you consider that LLMs require about an order of magnitude more electricity to produce an inferior result. This is where the whole narrative around LLMs becomes almost schizophrenic because, in order to make them in any way viable, you need a massive build-out of energy infrastructure, and you need that at the exact same time that we’re making energy more expensive and less reliable via solar and wind. In short, LLMs need far more electricity to produce less accurate computation. That’s not a very good deal. In fact, it’s the opposite of the dynamic that Henry Ford created whereby productivity gains led to a reduction in price.

We are now living in a very different kind of “capitalism” from the one that Henry Ford helped to create. The system is no longer predicated on first creating a product that people want to buy and then driving down prices through economies of scale and productivity gains. Rather, the LLM craze is a finance-first phenomenon. It involves throwing huge sums of money at new technologies in the hope that they might be profitable at some undisclosed time in the future. In practice, this involves creating the illusion of a viable product by offering it for free and then enshittifying it later. Along the way, a tsunami of manufactured marketing buzz is needed to keep the investment money flowing.

In truth, the real customers for these kinds of “products” are not consumers at all but bankers and other interested parties who choose winners mostly on ideological grounds. Furthermore, because of the huge sums of money involved, only products which can potentially generate enormous revenues are ever pursued, since that’s the only way to generate returns that could justify the initial investment. That’s the reason why the marketing hype always stresses that the product is going to “change the world”. The change is always just around the corner, too. It has to be to ensure the release of the next round of investment funding, which is the only thing that keeps the whole shebang going. Ford built his empire on profits. LLMs are not being built on profits because there are none.

The LLM craze is following a pattern we have seen before. About ten years ago, the exact same fluff was being peddled about self-driving cars. I distinctly remember friends and colleagues talking as if the age of car ownership were a thing of the past. They genuinely believed that all cars on the road would be self-driving within a period of months or years. Well, it should be obvious that self-driving cars did not take over the world. The revolution didn’t happen. Once the marketing money dried up, the whole thing was quickly forgotten about and we moved on to the next craze.

The self-driving car flop is just one of a pattern of failures of post-industrial economics. Remember when the blockchain and NFTs were going to change the world? All that happened instead was that Sam Bankman-Fried and his buddies got to have amphetamine-fuelled parties in the Bahamas before going to jail. Another fraudster who ended up in jail was Elizabeth Holmes, peddling one of the seemingly infinite number of biotech boondoggles that appear and disappear just as quickly, unless a “pandemic” comes along to help the marketing effort (hello, mRNA vaccines). The list of these kinds of failure could go on and on.

A big part of the reason for these failures is that these products are not chosen by the market but by “the system”. Gone are the days of the engineer iterating towards the right solution in his spare time. The process is no longer driven by independent-minded men like Henry Ford but by whole industries of finance and marketing whose specialty is the creation of bullshit.

What differentiates the current “AI” craze, I think, is the huge sums of money that have now flowed into it. Unlike the other boondoggles listed above, which were able to be quietly put to bed without anybody really noticing, I doubt the same can happen with the AI bubble.

The promise that “AI” will “change the world” might turn out to be true, just not in the way that the true believers think. If the AI bubble really does trigger another GFC, there is a genuine chance that this will be the last straw, because it’s not clear how much longer we can afford the post-industrial economy.

17 thoughts on “Could “AI” end the post-industrial economy?”

  1. Hi Simon. A very thought-provoking and timely blog. What I don’t understand about AI is that people are so keen to embrace it, and for such a broad range of experiences. The accounting professional bodies are pushing it hard, whilst at the same time warning us about ethical and cyber security issues. TBH my understanding is that being a professional involves using professional judgement – so why flog AI when it removes professional judgement, leaving only processing work? I also know someone who uses AI for therapy. Why do you think people are so happy to hand over their thinking to a computer? Do you think that it will impact people’s ability to think, solve problems and make decisions over the longer term? Sandra

  2. Hi Simon,

    The old-timers used to say: “More dollars than sense”, which is also clever wordplay that conveys many different lessons.

    And that’s the dirty little secret of it all. How can the massive upfront capital costs for the many new servers, and what must be huge ongoing costs in both operations and upkeep, make any sense for a product which is being offered at no cost? Quite nuts really, unless somehow we’re all forced to use the thing and pay for it. It’s not like that path hasn’t been tested recently.

    As an intriguing side story, my professional association purporting to represent the interests of the members (let’s avoid using names here), keeps pushing the use of AI on one hand, and then saying that putting confidential client information onto a public forum is a potential breach of professional obligations. Easier to simply not use the technology, and I also did note that big company which was caught out. That news came and went fast. Makes me wonder if any of that confidential government information is now in the public sphere? And also what else is going on that hasn’t been busted.

    Old habits are hard to kick, and that driving of wages down is an oldie indeed. The stupid thing about The Great Depression was that there was plenty of stuff to buy, it’s just that with a third of the labour market unemployed, wages were pushed way down, and so nobody had any money. My best guess is that we’re heading towards the opposite of that problem, but don’t really know how things will play out.

    Cheers

    Chris

  3. Sandra – thinking takes energy and people are naturally lazy. I think it makes sense to get AI to generate a school essay or consulting paper since both of those are just pointless busywork anyway.

    In relation to professional work, I don’t know what it’s like in accounting, but in the IT industry there are actually very few people who have a proper engineering mindset in that they are able to weigh up alternatives and formulate a strategy that fits a given context. Most people are just mimicking what they think to be expected of them. So, they’re not really thinking. Or, to put it another way, they’re “thinking” like an LLM!

  4. Chris – there is a deeply weird kind of groupthink that goes on with these things. I think it also follows from the way the investment funding works. Basically, there’s an infinite amount of money for the “current thing”, so everybody jumps on the current thing to get access to the money. Then, the current thing changes to something else and everybody jumps on that. The bankers are picking the winners instead of actual customers.

    I think we’re already in another Great Depression, only we’re paying for it through inflation rather than unemployment.

  5. Hi Simon
    Adding porn to ChatGPT – what an innovative idea. I can’t believe I didn’t see that one coming, but I’m guessing subscriptions must be in the pipeline too. Get people hooked on their ChatG(irlfriend)PT and then bill them monthly. Sandra

  6. Sandra – hah! Reminds me of the “monetise everything” rule of our insatiable banking system. Men and women have been getting into romantic relationships for free since time immemorial. How inefficient. Now, we can sell you a virtual partner on a monthly payment plan.

  7. Simon, about AI porn – Ah, finally AI’s habit of adding extra bodily organs to pictures will be welcome…

    That was a great post and I will later come back with actual commentary…

  8. Bakbook – that reminds me of a certain scene from the movie Total Recall, which is rather fitting given the dystopian theme of AI.

  9. So this whole “putting the cart before the horse” thing reminds me of the archetype of the herd. Perhaps this goes in the bin Jung would call “society”, perhaps society’s shadow.

    My first interaction with “the herd” was when I was working as a shepherd. I was in charge of a herd of 200 sheep and two goats, and had to learn how to read the herd and manipulate it. At first, I perceived the herd as a mob of animals who move chaotically, but if I were to get better at my job I needed a way to relate to this complex system.

    What I came up with was a made-up entity I called Shaiba (SHAY-ba, the ba as in bandit). Shaiba, you see, is the reason the sheep form a herd in the first place. It’s both the predators they fear and the food they chase. As a shepherd I needed to appeal to the sheep’s greed but at the same time save the herd from itself, thus protecting them from Shaiba while using it to my advantage.

    The herd is remarkably intelligent, but can be short-circuited. There are documented cases of entire groups of sheep jumping to their death, the sheep at the back unable to change course. Doing this psychodrama allowed me to pass the time in pasture as well as “get in the mood”. Herding sheep is very much a performance art.

    After being done with this line of work, I came across an interesting synchronicity. I think there is an Israeli clearing house for consumer credit named Shaiba, as quite often when you pay by credit at a business in Israel, the monitor will briefly tell you it is “waiting for an ok from Shaiba”. I also came across a real estate office named Shaiba, which is fitting, as both consumer credit and real estate derive value from a herd following. This led me to think there is a wider meaning to Shaiba than I originally intended. I am yet to see any trace of Shaiba in AI, but this whole thing really reminds me of him.

    The only question is who’s the herd: the true AI believers (who fear being left out), the investors (due to their greed), or both? The Bible links shepherds with kings, yet so far politicians like Trump sing AI’s praises for political gain, while tech leaders such as Musk seem more like the lead sheep every herd has, rather than a shepherd.

  10. Bakbook – reminds me of one of my favourite lines from the Kurosawa movie The Hidden Fortress. Speaking of the two peasants who have been co-opted to help save the princess, the king asks the general, “Can we rely on them?” The general responds, “We can rely on their greed.”

    Part of what makes modern society so difficult to grasp is that the people who are in positions of power really aren’t greedy. Of course, they make their living from their position and so there is self-interest in there somewhere. But, for most of them, the ideology really does seem to be the motivating factor. There’s a weird kind of possession that comes with that because the ideology is both the means and the end. Reality does not get a look in.

    I guess it is a bit like herd behaviour but I don’t think it’s a herd with a leader. Rather, it is a combination of ideology and money that plays the coordinating function.

  11. Simon – How do you see this playing out? Would it be economic, like 2008? Would countries rather than corporations need a bailout?

    Another thing that occurred to me is that, if we assume this bubble can last a little longer, the process whereby graphic designers, actors, and copywriters are being replaced by AI could prove irreversible, as those people will likely find jobs elsewhere, much as there has been a shortage of service workers since covid.

    If so, after AI finally becomes crappy or nonexistent, the machinery of advertising could lose what remains of its effectiveness, and this means one of the last actual tools of control would be removed from western society.

  12. Bakbook – we’re in what Gregory Bateson called a “double bind”. Things can’t continue like this, but they also can’t be stopped. I think you get out of a double bind by either reverting to a previous state or transcending to something new. Therefore, either we revert back to an industrial economy with all the pain that would come with that or….something. I have no idea what that something might be. The true believers would say that the “something” is the singularity, general AI, or whatever digital utopia they have dreamed up this week. Maybe they’re right, but reversion to the mean is always a safe bet.

  13. Here’s the funny thing. I’ve become a rather enthusiastic user of ChatGPT (I even have the paid version!), and yet I roll my eyes at all this talk about how AI will lead to Utopia or Dystopia (take your pick). So, what’s ChatGPT legitimately useful for? Language-related tasks, first and foremost. For instance, proofreading what I write for spelling and grammar. I don’t really use it for English, but I find that for Czech in particular (a legitimately tough language that I learned starting in my 30s and now need to use for work), it vastly outperforms any of the previously available spelling and grammar tools. For me, that alone is worth the fee that I pay for it. It’s not perfect, though, and I do have to proofread the corrections it makes because it sometimes changes the intended meaning, and sometimes it’s simply wrong. (Just yesterday, I caught it using the locative case where it should have used the accusative, and when I asked it about it, it immediately confessed its error. Please keep in mind that I wrote the original myself and correctly used the accusative, which it then mis-corrected!) But still, it’s a massive improvement over anything I had before. Oh, and it’s also useful for various computer bugs (Windows, plus other types of software with a large user base). Over at work, our tech support people have complained that I swamp them with requests. Oops. Then I got ChatGPT. Now I hardly ever need tech support for anything. So… Don’t be surprised if AI puts a lot of translators and tech support people out of work.

    Aside from that, well, it can be a decent search engine, though it sometimes says that some article or other says something that it emphatically does NOT say. Plus it’s a nice toy that’ll chat with you about all sorts of things that you might want to chat about. Just be careful not to believe everything it says. Funnily enough, I’ve been chatting with it about comparative linguistics lately, and I’d ask it what it thought about this or that toy-hypothesis that I had, and it would tell me that my toy-hypothesis was EXACTLY what leading linguists were saying in private. Hahahaha! As if an LLM could possibly know what linguists are saying in private.

    But anyway. I quite like ChatGPT overall. Does the stuff it’s actually good for justify the ginormous investment? Err, I think we can all guess the answer there. I do hope all this language support (superior spelling and grammar check) is here to stay, though. I wonder if that sort of thing could be built and maintained for a small fraction of the total LLM cost.

  14. Irena – there’s no doubt that machine learning and LLMs are useful for some tasks. I think targeted machine learning applications will indeed put a lot of people out of work. So, we’ll get a simultaneous stock market crash alongside mass unemployment. Great Depression 2.0.

    A further problem is that these applications are predicated on already existing data generated by humans. Once we’ve put all those humans out of work, who is going to generate the next round of high quality data that will train the software? Language is a good example because it is constantly changing and evolving. Once all translators are made redundant, nobody will be translating the latest evolution of the Czech language into other languages. ChatGPT’s model would slowly degrade.

    I suppose this could be addressed if everybody just starts working for the LLM companies. Imagine a world where your job is to constantly train an LLM. Sounds like a variation on The Matrix 🙂

  15. @Simon

    You’re quite right, of course. 🙂 It is a problem.

    One possibility is that we’ll simply end up living without high-level, well, all sorts of things. Take something like literary translation. Perhaps AI can perform 80-90% as well as a highly skilled translator. So, you feed it an electronic book that you want to read, and it spits out a perfectly adequate translation within minutes. So why pay a translator?! And so translators find themselves out of work, meaning that you can never get a really good translation of anything. Now someone’s going to say “Ah, but if you’re a REALLY good translator, you’ll still have work!” Not so. To get really good translators, you need a whole translation ecosystem, with mostly mediocre translators. Put them all out of work, and the whole translation ecosystem dies. Without the ecosystem, top-notch translators simply cease to exist. And then the best that you can ever get is AI-generated translation.

  16. Another example that is driving me nuts. In recent times, I’ve noticed cafes and restaurants are playing shitty AI covers of classic pop songs as background music. The royalty payments are less, so they save money. Now, it’s just background music, so who cares? Well, royalty payments were a major form of revenue for both musicians and record labels and gave them money to fund the next round of up-and-coming talent. Without that money, the up-and-coming talent never gets nurtured. Eventually, there’s no good music left, just AI-generated crap. (Of course, the music industry had already destroyed itself before AI came along, so you might say that AI is the final nail in the coffin).
