The Need for Meaning

This blog is now in its sixth year. One of the interesting things about having a large back catalogue of posts is that you get to see which ones receive the most traffic from search engines. By far the best-performing of my posts in that regard is one I probably spent the least time writing. I threw it together based on an idea I was kicking around at the time about Maslow’s hierarchy of needs. I called it the Inverted Maslow Hierarchy. Interested readers can find it here.

As I pointed out at the time, I hadn’t read any of Maslow’s work when I wrote the post two and a half years ago, but I recently got around to checking out the original essay that introduced the hierarchy of needs concept. It’s a short work from 1943 called “A Theory of Human Motivation”.

One of the things that surprised me about the paper is how weak the argumentation is. Maslow provides almost no empirical evidence for the hierarchy of needs and very little convincing explanation for it. In fact, on a number of occasions he provides arguments that call the idea into question. For example, he points out that “higher needs” sometimes trump lower ones. He also notes that some individuals seem not to have certain needs at all. Despite these problems, the paper concludes with a repetition of the initial premise that “lower” needs must be satisfied before “higher” ones may be pursued.

Apparently, Maslow modified the initial theory later in his career to address some of its obvious problems. Interestingly, from my point of view, it seems that Viktor Frankl played a fairly significant role in that revision, since Frankl was also a critic of the hierarchy of needs concept. For those who don’t know Frankl’s work, he claimed that meaning is our primary need and that all other needs are (or should be) subordinate to it.

Frankl famously spent several years in concentration camps, and this gives his focus on meaning extra poignancy. From the point of view of Maslow’s hierarchy, the lower needs were not met in the camps, and yet some individuals still managed to manifest the higher ones. Thus, Frankl’s disagreement with Maslow was not just philosophical but empirical. He had seen it with his own eyes.

To sketch the outlines of Frankl’s disagreement, we can use what is probably the most basic of our needs: the need for food. Physiological needs are the bottom rung of Maslow’s pyramid, and most people would agree that food and water are the most fundamental of those. In his paper, Maslow notes that people deprived of food become “obsessed” with the subject. This makes intuitive sense and lends credence to the idea that those who have not had their lower needs met cannot pursue higher ones.

However, there are a number of problems with this claim. The first is that an “obsession” with food is not born only of its absence. Food obsession can also occur when there is a surplus. There’s a reason why gluttony is one of the seven deadly sins.

Maslow wrote his paper in 1943 and could not have envisaged that, only decades later, we would live in a world where excess food consumption is a widespread problem leading to obesity, diabetes and other health conditions. In our time, the obsession with food is not limited to its consumption but extends to dealing with the aftereffects.

Thus, we have all kinds of fad diets, exercise programs, and pharmaceutical interventions. All of this shows that it is not just the absence of food but also its excess that drives human behaviour. Even basic physiological needs are not an either/or equation but a matter of establishing a stable equilibrium. It is the break from equilibrium that matters.

That observation leads to a second problem with Maslow’s theory. The need for food is not a linear process but a complex one. If it were linear, we would expect hunger to grow as a function of time since the last meal; that is, the need for food would increase until it became the sole preoccupation of the individual. In fact, as anybody who has fasted knows, this is not the case. Hunger tends to come and go in short bursts. Furthermore, there are several distinct types of hunger. Some are purely psychological; others correspond to physiological changes.

We now know that several phase changes occur at the physiological level in the absence of food. The body will metabolise glucose whenever it is available. When it becomes unavailable, as when the individual has not eaten for some time, the body begins to metabolise fats and proteins instead. This produces ketones, which then replace glucose as the primary source of energy.

Ketogenesis

Psychologically speaking, what we generically call hunger is actually reduced during this ketogenic phase. In addition, there can be a rather pleasant feeling of both high energy and high mental concentration, at least in the early stages of ketogenesis. In simple terms, we would say that the need for food is reduced during this time, even though food hasn’t been eaten for a relatively long period.

Eventually, if no food is eaten, the body will start to metabolise more important things like organ tissue, and that is when the genuine danger begins. During this phase, cognition starts to become impaired, so it is still not accurate to speak of a unified “hunger” response marked by an intense desire for food: the physical and mental concentration required to pursue food begins to disappear. Finally, it seems that in the fatal phase of starvation, hunger disappears altogether.

We can see that hunger is not a simple need but rather a process involving a complex series of psychological and physiological interactions. That is before we even get into the social aspects of the situation, since hunger in the individual almost always implies a background of societal breakdown. Maslow’s invocation of a simple need that corresponds to the absence of food, although true in the broadest sense, misleads more than it enlightens.

But even that is not the main objection to the hierarchy of needs. Per Frankl, meaning is the highest human need, and it should trump all other needs, including the most fundamental physiological ones. An event from almost exactly the time when Maslow was writing his paper demonstrates this point in the clearest terms, since it actually involved a scientific study on the subject of starvation.

The Warsaw ghetto was created by the Nazis in 1940. Food provisions in the ghetto had been limited from the beginning, but the shortages were intensified in the months before the ghetto was finally liquidated in early 1943. It was clear the Nazis were trying to starve the inhabitants to death.

A group of doctors in the ghetto decided to begin studying the effects of starvation on the general population. The results of their study are worth reading, but the more important point is the dedication and willpower required to conduct the study at all. The doctors themselves were being starved, since they lived in the ghetto too. Nevertheless, they managed to stick to a rigorous program of data measurement and collection.

Maslow’s hierarchy predicts that people should not be able to pursue higher goals like scientific research when their lower needs are not being met, but the doctors of the Warsaw ghetto proved him wrong. This was, of course, exactly what Frankl had realised from his own experiences in an almost identical situation, and that’s why he would later criticise Maslow’s hierarchy and posit that meaning was primary, even in situations of extreme physiological deprivation.

In one sense, this is quite an obvious conclusion. History shows that people are willing even to sacrifice their lives for higher causes. However, history tends to focus on dramatic acts of heroism such as feats in battle and religious martyrdom. What Frankl and the doctors of the Warsaw ghetto showed was a different kind of heroism. Hannah Arendt would later introduce the phrase ‘banality of evil’ to account for the atrocities of the era. But it’s also true that there was a banality of heroism at work: the quiet determination to pursue meaning against all odds.

In fairness to Maslow, his early work is compatible with this idea, and his later work does seem to have done much to correct the errors of his initial position. Nevertheless, it is his initial formulation of the hierarchy of needs that has remained a popular meme in post-war culture.

This raises an important question: why has the simplified hierarchy of needs enjoyed such widespread support despite the obvious problems with the idea and despite the fact that Maslow himself moved away from it later in his life? Why is everybody familiar with Maslow’s hierarchy and almost nobody with Frankl’s ideas about the primacy of meaning?

The answer is that the assumptions of the hierarchy of needs match the broader social and political paradigm that took hold in the aftermath of WW2.

Firstly, the hierarchy in its initial formulation was intrinsically individualistic, and this fitted the political philosophy of liberalism with its focus on freedom. The focus on needs also fitted the socialist paradigm, since it implies a checklist of requirements that the state should provide for its citizens.

Related to both of the above was the close correlation between the hierarchy of needs and what came to be called human rights. These were motivated by the desire to prevent a recurrence of the concentration camps and other abuses by nation states against minorities. Within this framing, the individualism of the needs/rights concept was seen as an antidote to the totalitarianism of state ideology.

This is also why an implicit hierarchy of needs framing is used unconsciously by almost all modern democratic politicians. Modern democracies are still nation states and therefore perfectly capable of totalitarianism, as we saw during the covid years. Democratic politicians love to invoke the hierarchy of needs as a way to reassure the public that the state is really on their side. Hence, the endless bloviating about how the government is there to keep the public “safe”.

Finally, it’s not hard to see how Maslow’s hierarchy fits neatly into the capitalist paradigm where the market will provide for the needs of the individual and where those needs can grow along with the economy. The keeping-up-with-the-Joneses mentality (i.e. the need for self-esteem) allows the consumer economy to continually expand.

Putting it all together, we can see how the hierarchy of needs has come to serve as a kind of shorthand for the entire post-war paradigm of liberalism, democracy, technocracy and capitalism. Politicians, technocrats, and capitalists are all falling over themselves to provide for our needs. Everybody is in furious agreement that needs must be met. The only question is who can do it best, the market, the state, or the experts. In practice, it’s all of the above.

Therein lies another major weakness in Maslow’s theory. His initial model made no distinction between needs which are satisfied through our own agency and those which are satisfied for us. This implies that we can reach the self-actualisation phase without having ever lifted a finger to provide either for our own needs or the needs of others. One of the results is widespread narcissism.

By placing the self-actualisation phase at the tip of the pyramid, Maslow, perhaps inadvertently, relegated meaning to some rare or elevated state, a distant Shangri-La set far above the vagaries of everyday life. Meaning then becomes separated from reality, and this allows the machine whose job it is to provide for our lower-level needs to run unimpeded. The majority of life then becomes meaningless.

Frankl’s emphasis on meaning was meant to prevent exactly this outcome, and it’s worth pointing out that he was not alone in sensing the danger that comes from the separation of meaning from the everyday world. The banality of heroism says that we are free at any time to insist that meaning comes first and foremost in our lives. If we don’t, we end up with the very modern problem of having an excess of basic necessities while being existentially starved.

Robert Menzies’ Education System

Like its counterparts in most other Western nations right now, the centre-right political party here in Australia, the Liberal Party, is going through an existential crisis. That crisis is particularly telling because the Liberal Party was founded in 1944, and the demons that beset it are largely the same ones currently threatening the entire post-war paradigm.

The founder of the Liberal Party was Robert Menzies, which makes him a highly illuminating figure for understanding the current situation. A whole series of posts could be written on Menzies’ contribution to post-war Australia. Maybe I’ll do that in the weeks ahead. But, for now, let’s look at a specific issue in which Menzies was instrumental and which is becoming a major problem these days, namely, the massive expansion of university education in the post-war years.

Robert Menzies

It’s hard for those of us who’ve grown up in recent decades to believe, but, prior to the world wars, the higher education sector in Australia was not a very important part of national life, certainly not for the average person. The original institutions, sometimes called the “sandstone universities”, had been established on the British model and were almost exclusively the domain of elites, since only the rich could afford the tuition fees. The only chance the average citizen had to gain entry was via the small number of scholarships on offer.

That is precisely the path that Robert Menzies took. He was born into a relatively poor farming family in the west of Victoria. However, via his excellent academic record and his performance on scholarship exams, he was able to work his way into the elite schooling track and ended up at the University of Melbourne, where he studied law. He would later practise law before moving into politics and becoming Attorney-General and then Prime Minister.

The fact that Menzies studied law at Melbourne University was not an accident. The traditional universities at that time were following a model that stretched back centuries. The goal of that model was to provide a classical humanist education, with a focus on subjects such as law, philosophy, ancient languages, etc. University education functioned as a kind of finishing school for British elites who were expected to go into the legal profession and politics.

The British model was contrasted at the time against the German one established by Frederick the Great, which had a focus on vocational education, including science and technology. Although British universities did eventually incorporate this vocational and scientific aspect, in truth, the great advances in science and technology that happened in Britain took place mostly outside of the formal educational system.

Patrick Bell

In between the British and the German styles of education was what was called the Scottish model. The British model was elitist and abstract. The German was egalitarian and vocational. The Scottish model aimed for somewhere in between. It also implied a certain type of student, namely, a farmer’s son. A classic example would be Patrick Bell, inventor of one of the first mechanical reaping machines, who left the family farm to study theology, became a parish minister, and revolutionised agriculture in his spare time.

One of the things that makes Robert Menzies particularly relevant on questions of education is that he sat right in the middle of these different traditions. He was born into a Scots farming family but received a British elite-style education. Meanwhile, by the time Menzies came of age, German-style vocational training had become much more important with the growth of industry.

In the aftermath of World War Two, Menzies began his long run as Australian prime minister (he would eventually be the longest-serving PM), and he made education one of his foremost priorities. It is not an exaggeration to say that he, more than anybody else, created the modern Australian university system.

Given his background, it’s especially interesting to ask what Menzies hoped to achieve with his new educational system. Like so many people of that era, Menzies was horrified by the destruction of the two world wars, and he saw the ignorance and gullibility of the general public as one of the driving forces behind the conflict. Menzies also believed that democracy had been failing for much the same set of reasons. Thus, he hoped that an increase in education would both put an end to war and conflict and allow democracy to fulfil its highest ideal. The purpose of the new system was to produce informed democratic citizens: not just elites like the British system, not just workers like the German, and not just educated yeomen like the Scottish.

Now, it has to be said that, in terms of addressing ignorance, the new system was a definite improvement and did elevate general knowledge, especially among the working classes. In an era prior to modern mass communication technologies, that was no doubt of great benefit. In response, however, the operation of propaganda became more sophisticated. Previously, it had been possible to lie to the public in a straightforward manner, either directly or through omission. That became more difficult when the public was better informed. The battleground shifted away from facts and towards ideology.

Menzies lived long enough to see that the university system he had created had become one of the primary battlegrounds for this ideological tussle. More painfully for him, it was clear that his political opponents (the left) had taken control of the institutions.

The ascendancy of ideology was one set of problems besetting the new system. The second set revolved around the financial realities that came with mass education. During the post-war boom years, it seemed the sky was the limit. However, by the late 1980s, things were looking different. It was the Hawke Labor government that first introduced tuition fees for university students, and these have been regularly increased by successive governments.

In addition, the last couple of decades have seen a massive expansion of full fee-paying international students whose money, the universities themselves now admit, subsidises the tuition of locals. Menzies’ university sector has become a giant cash cow and part of what I like to call Australia’s Education-Immigration-Real Estate Axis of Evil.

In fairness, Menzies had dealt with a similar set of issues in his time. His highest ideal was for a nation of educated citizens willing and able to fully participate in the project of democracy. However, he had also noted how easily Australians would trade political ideals for materialist interests. One of the things he hoped for from his new education system was that it would lift the quality of democratic debate above such base concerns.

It’s fair to say, then, that the modern university sector would likely horrify Menzies. It has somehow become a combination of ideology and materialism. It has morphed into little more than a credential factory whose purpose is to profit from degrees: for local students, access to the professional employment sector; for international ones, access to the visa mill. The idea of producing enlightened democratic citizens seems to have disappeared entirely.

In truth, the modern university system operates in service to the technocracy, and it is here that Menzies must take some responsibility for the current state of affairs. Although he believed in democracy, Menzies also massively expanded the role of the public service and other technocratic institutions. The primary beneficiaries of that expansion were the middle class university students who found employment in those sectors on completing their degrees.

What Menzies and others of his era do not seem to have foreseen was the extent to which the technocracy would become anti-democratic. In fact, it still appears to be the case that most politicians in our time do not realise this or, if they do, don’t see it as a problem. This makes sense, because they themselves have been educated in a system predicated on technocratic assumptions.

While the technocracy, and the economy more generally, were expanding, their problems could be overlooked. However, it seems that the growth period has now ended, and the technocracy is becoming a major burden both financially and politically. One of the effects is that university education no longer makes financial sense for individual students. The Australian government’s recent cancellation of student debts is one attempt to prop up the system, but in the grander scheme of things it’s just a drop in the bucket.

Short of some unforeseen development, the middle class that represented Menzies’ core constituency seems to be in permanent decline. That would certainly explain the declining position of the Liberal Party in Australia and its counterparts overseas.

If Menzies represented the ascending middle class, what sort of politician would represent the declining version, and what would be their ideal for the future? It will be a demographic that is better educated than any in history. Ignorance is no longer the problem. The problem now is lack of real opportunity. Whoever can solve that problem could become the next Menzies.

Could “AI” end the post-industrial economy?

I’m prepared to be wrong, but my present best guess is that the “AI” bubble has now reached its peak and is about to unwind. It’s not out of the question that this will trigger a GFC-like financial crisis given the stupendous amounts of money that have been poured into it. If so, the underlying cause will be much the same as the GFC, i.e., far too much money chasing far too few returns.

The pattern that we have seen around the rollout of LLMs is not new, and is worth exploring in more detail, since it may very well be on its last legs.

We can understand the current situation by comparing it to the form of capitalism which existed about a hundred years ago. One of the great success stories of that form of capitalism was the rise of the automobile, and a very large part of the credit for that needs to go to Henry Ford.

Henry Ford

Entire books have been written about Ford and the huge paradigm shift he brought about. Among other things, he set the stage for the post-war consumer economy. He was also an advocate for early forms of “social justice”. That concept was not always the province of university communists. In fact, even in our time, much of the funding for such causes still comes from various billionaires, including the charitable foundation that Henry Ford’s son set up back in the day.

For our purposes here, we’ll focus on just a few key properties of Ford and his work, so that we can compare the system that created the automobile to the one that has created “AI”: industrial capitalism versus post-industrial capitalism.

Thomas Edison: another engineer-capitalist

The first thing to note is that Ford was originally an engineer, and a very good one: he rose to the rank of chief engineer at Thomas Edison’s electric company. It was in his spare time that Ford began experimenting with building engines and cars. Like many enthusiasts of that era, he was also actively involved in car racing, which became a testing ground for new ideas and designs.

Ford was not alone in beginning his career on the production side of the business world. Many of the great “capitalists” of the 19th and early 20th centuries were also engineers. Furthermore, what we now think of as public utilities were largely built out by such private engineers. That was true even here in Australia where the market was tiny compared to the US.

The second point to make about Ford’s car manufacturing business was the manner in which he achieved success. He had a number of failed attempts before hitting the winning formula with the Model T. Although relatively large sums of capital were required to manufacture cars, these initial attempts were fairly small bets where Ford himself assumed much of the financial risk.

More importantly, once the Model T took off, Ford expanded the business by reinvesting profits, not by raising speculative capital. This was no accident. Ford was highly critical of the banking industry and saw the concentration of power in the hands of banks as favouring profit-taking over real innovation and growth. That’s also why Ford quickly assumed full control of his company, buying out all the other original investors and becoming sole owner. This made him independent of financial institutions and the stock market.

The third crucial point is that Ford was one of the enlightened capitalists of his era who realised that paying workers the lowest possible wage was counterproductive, because it created employees who couldn’t afford to buy the products being manufactured. He shocked the business world by introducing the “five dollar day” at his company, almost double what the average worker was earning at the time. This enabled him to snap up the most talented workers, whom he further rewarded through profit-sharing for the best performers. Later, he introduced the five-day work week.

The combination of these and other practices formed the basis of the economic strategy that drove capitalism for most of the 20th century. Increased productivity led to higher wages and reduced working hours, thereby giving consumers the time and money to purchase the products being created, thereby driving sales, thereby allowing wages to increase, and so on. At the core of this model were efficiency gains driven by the engineering talent that Ford was able to attract and nurture.
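As a purely illustrative aside, the loop can be sketched in a few lines of code. Every number below is invented; this is a toy model of the direction of the feedback, not a claim about real economic data.

```python
# Toy sketch of Ford's virtuous cycle. All figures are invented for
# illustration only; the point is the direction of the feedback loop.
productivity, wage = 1.00, 1.00

for year in range(1, 6):
    productivity *= 1.05        # engineering-driven efficiency gains
    wage *= 1.04                # most of the gain shared with workers
    price = 1 / productivity    # unit prices fall as efficiency rises
    units_sold = wage / price   # higher wages + lower prices -> more sales
    print(f"year {year}: wage={wage:.2f}, price={price:.2f}, "
          f"units sold={units_sold:.2f}")
```

Each pass through the loop reinforces the next: sales fund the next round of wage growth, which in turn funds the next round of sales.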

Of course, the power of productivity had been understood at least as far back as Adam Smith. What made Ford successful was that he found a way to apply it to complex manufacturing. What also separated Ford was his willingness to share the gains with the workers, in contrast to the old capitalists who tried to screw every last penny out of the labour force.

With these basic elements of Ford’s model sketched out, we are now ready to understand some of the major differences between the industrial economy of the 20th century and the post-industrial economy that has produced things like “AI”. One of the main differences is the way technologies such as LLMs have been developed and financed.

Remember that Ford tried and failed several times to bring a viable car to market and then, when he finally succeeded, built his business from profits earned, not from speculative capital. With LLMs, we have an almost exact inversion of that paradigm. LLMs are not a product with any kind of real consumer demand behind it. Nobody has purchased an LLM. Rather, they have been made freely available, something only possible because of the enormous amounts of speculative capital thrown at them. In the post-industrial model, products are not required to be profitable in order to attract investment. The assumption is that they will be monetised later, once a market has been created.

The beginnings of this new model can be found in the early days of the home computer industry. Nominally, that industry had a great deal in common with Henry Ford’s kind of capitalism. Bill Gates, Steve Jobs, and Steve Wozniak were all engineers and enthusiasts who mucked around with the tech in their spare time. They invested their own money in their early ventures and absorbed the cost of failure. When they finally stumbled on the winning formula, home computers saw explosive growth similar to that of the Model T Ford. All of this looks like a repeat of the old paradigm.

There were at least two big differences, however. Firstly, both Apple and Microsoft were floated on the stock market relatively early in their histories. This made Gates, Jobs, and Wozniak very rich, but it also meant their companies now answered to outside shareholders. In fact, Jobs and Wozniak were famously elbowed aside within a few years of Apple being publicly listed. That is the price to be paid for selling out to capital. It’s a price that Henry Ford explicitly avoided.

The second big difference was that sales were not really driven by the computers themselves, since only people who knew how to code could use them. Instead, it was ready-to-use software that became the key selling point. For example, the Apple II owed much of its early popularity to VisiCalc, one of the first spreadsheet programs. Apple would later win the high end of the market on the strength of graphic design and desktop publishing software. It turned out that software was the main game, not hardware. Over time, this led to a pattern of offering software free or almost free in order to win market share.

With the arrival of the internet, there was a further abstraction away from hardware in favour of software. One result of this was that websites were offered for free and then monetised later. Capital also got involved much earlier in the process. Thus, Zuckerberg barely needed to invest any money in Facebook before the investment firms came knocking on his door to give him a truckload of money. Google had a very similar story in its early days.

Home computers had followed the same pattern as early automobiles in that they started out expensive but got cheaper as economies of scale and technological improvements came along. The internet, by contrast, has followed the opposite pattern. Because the price of nearly every website started out at zero, it was impossible for them to get any cheaper. What happened instead was that quality degraded over time once the market was captured and the investment funds started to monetise. This is what Cory Doctorow has usefully called “enshittification”. It is, in fact, a form of market failure based on exactly the kind of predatory financial practices that Henry Ford warned about a hundred years ago.

If we think back to the original paradigm that Henry Ford operated under, another core component was productivity gains. There can be no doubt that the computer revolution led to massive efficiency improvements, and so it fitted the old pattern of industrial capitalism. The internet, however, has been qualitatively different in this respect. Yes, it certainly saves time to have a search engine that returns exactly the results you want, but that useful feature rarely translates into meaningful workplace productivity gains. Google’s revenue came not from productivity gains but from advertising, made possible by a captured market spanning the entire globe rather than just local markets.

We can see, then, that the internet has broken with Ford’s paradigm in several ways. Firstly, it has not been about productivity gains. It turns out that increasing the speed of information does not improve its quality. Most of the time, it degrades it. Secondly, the internet has not created the virtuous cycle in which prices fall over time, because the starting price of any website was already zero. Therefore, there has been no incentive to increase quality. On the contrary, predatory financial practices lead only to enshittification.

It is against that background that LLMs have arrived on the scene. Perhaps ironically, one of the main marketing angles for LLMs and “AI” is that they will improve productivity. While there have been some promising results in relation to machine learning more broadly, there is very little evidence that LLMs lead to efficiency gains. The strength of LLMs turns out to be a weakness in this regard. The technology does a pretty good job of impersonating human language, but human language is not optimised for efficiency. It is flexible and context-dependent.

The result is that, for any task of reasonable complexity, it is very difficult even to craft the right instructions to give to an LLM to ensure you get the answer you want. Moreover, even if you fine-tune the instructions, it is in the nature of LLMs to return somewhat random answers, including answers with basic factual errors. There was an amusing example of this recently when the consulting firm Deloitte got busted using LLMs in a report for the Australian government. The LLM did its fairly common trick of “hallucinating” facts, i.e. making stuff up.

The whole reason why the computer revolution led to such huge productivity improvements was because computers give the right answer every time as long as they have been programmed correctly. It was the speed and accuracy of computers that led to efficiency gains. LLMs might be quick, but they produce results that cannot be relied on.  
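To make that contrast concrete, here’s a toy Python sketch. The “LLM” below is just a random choice over canned strings, not any real model or API; it’s only meant to illustrate the difference between deterministic computation and sampled output.

```python
import random

# Deterministic computation: the same inputs always give the same output.
def loan_repayment(principal: float, annual_rate: float, months: int) -> float:
    r = annual_rate / 12
    return principal * r / (1 - (1 + r) ** -months)

# Toy stand-in for an LLM: the answer is sampled from a distribution of
# plausible-sounding outputs, so repeated runs can disagree with each
# other (and with the facts).
def sampled_answer() -> str:
    return random.choice(["$2,149.29", "$2,149.56", "$2,241.00"])

print(loan_repayment(300_000, 0.06, 240))  # identical on every run
print(loan_repayment(300_000, 0.06, 240))
print(sampled_answer())  # may vary from run to run
print(sampled_answer())
```

The point is not that randomness is always bad, but that a process whose outputs vary unpredictably cannot simply be slotted in where exact, repeatable computation was the source of the productivity gains.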

That would be bad enough, but it gets even worse when you consider that LLMs require about an order of magnitude more electricity to produce an inferior result. This is where the whole narrative around LLMs becomes almost schizophrenic because, in order to make them in any way viable, you need a massive build-out of energy infrastructure, and you need that at the exact same time that we’re making energy more expensive and less reliable via solar and wind. In short, LLMs need far more electricity to produce less accurate computation. That’s not a very good deal. In fact, it’s the opposite of the dynamic that Henry Ford created whereby productivity gains led to a reduction in price.

We are now living in a very different kind of “capitalism” from the one that Henry Ford helped to create. The system is no longer predicated on first creating a product that people want to buy and then driving down prices through economies of scale and productivity gains. Rather, the LLM craze is a finance-first phenomenon. It involves throwing huge sums of money at new technologies in the hope that they might be profitable at some undisclosed time in the future. In practice, this involves creating the illusion of a viable product by offering it for free and then enshittifying it later. Along the way, a tsunami of manufactured marketing buzz is needed to keep the investment money flowing.

In truth, the real customers for these kinds of “products” are not consumers at all but bankers and other interested parties who choose winners mostly on ideological grounds. Furthermore, because of the huge sums of money involved, only products which can potentially generate enormous revenues are ever pursued, since that’s the only way to generate returns that could justify the initial investment. That’s the reason why the marketing hype always stresses that the product is going to “change the world”. The change is always just around the corner, too. It has to be to ensure the release of the next round of investment funding, which is the only thing that keeps the whole shebang going. Ford built his empire on profits. LLMs are not being built on profits because there are none.

The LLM craze is following a pattern we have seen before. About ten years ago, the exact same fluff was being peddled about self-driving cars. I distinctly remember friends and colleagues talking as if the age of car ownership were a thing of the past. They genuinely believed that every car on the road would be self-driving within a few years. Well, it should be obvious that self-driving cars did not take over the world. The revolution didn’t happen. Once the marketing money dried up, the whole thing was quickly forgotten and we moved on to the next craze.

The self-driving car flop is just one of a pattern of failures of post-industrial economics. Remember when the blockchain and NFTs were going to change the world? All that happened instead was that Sam Bankman-Fried and his buddies got to have amphetamine-fuelled parties in the Bahamas before going to jail. Another fraudster who ended up in jail was Elizabeth Holmes, peddling one of the seemingly infinite number of biotech boondoggles that appear and disappear just as quickly, unless a “pandemic” comes along to help the marketing effort (hello, mRNA vaccines). The list of these kinds of failure could go on and on.

A big part of the reason for these failures is that these products are not chosen by the market but by “the system”. Gone are the days of the engineer iterating towards the right solution in his spare time. The process is no longer driven by independent-minded men like Henry Ford but by whole industries of finance and marketing whose specialty is the creation of bullshit.

What differentiates the current “AI” craze, I think, is the huge sums of money that have now flowed into it. Unlike the other boondoggles listed above, which could be quietly put to bed without anybody really noticing, I doubt the same can happen with the AI bubble.

The promise that “AI” will “change the world” might turn out to be true, just not in the way the true believers think. If the AI bubble really does trigger another GFC, there is a genuine chance that this will be the last straw, because it’s not clear how much longer we can afford the post-industrial economy.

The Good People

Recently, I stumbled across the following line on my internet travels:

“Obama is a good man who was a bad president. Trump is a bad man who is a good president.”

This is one of those statements whose meaning is self-evident to us and yet which hides a deep-seated belief of modern Western culture, one that is worth fleshing out in more detail.

We need not concern ourselves with the veracity of the statement or questions about what it means to be a “good president”. Let’s assume for the sake of the argument that the presidency is a job like any other. Thus, we could translate the above line into a similar statement – “Gary is a good person but a bad plumber.” We understand this to mean that Gary is incompetent at plumbing but is otherwise a decent sort of bloke.

But we can be even more specific than that, because to say that somebody is a “good person” refers to some inner and holistic quality. Central to this is the matter of intention. Gary means well. He really wants to be a good plumber. This intention is “good”. Nevertheless, despite his best intentions, Gary is incapable of doing good plumbing. Perhaps he lacks the required motor skills, and no matter how long he spends practising, he never really gets better.

We can contrast Gary against a different plumber, let’s call him Fred, who has the skills and experience necessary to do good work but who, through some combination of greed, laziness, and indifference, does not. We would say that Fred could be a good plumber except that he’s a bad person. (I once hired a Fred to do some plumbing work on my house. Needless to say, I didn’t call him back).

What we have then is a dichotomy between the inner aspects of the individual and their outer actions in the world, and this brings us to the crux of the matter because in the modern West we have a deep-seated focus on the inner aspects that almost amounts to a complete denial of the outer. For us, the inferred inner character of a person counts for far more than their actual performance. As long as we think Obama is a good person, we forgive him for being a bad president. On the other hand, it doesn’t matter how good a job Trump does in the White House, he’s still a bad person.

We take this so much for granted that we are unaware of how unusual it is, and of how extreme this mentality has become in the post-war years. One way to see it more clearly is to compare it against a different understanding of what it means to be a “good person”, and we find a prime example of that in Aristotle’s Nicomachean Ethics.

In that work, Aristotle outlines a concept of eudaimonia, which is usually translated into English as happiness but which, for our purposes, we can better think of as virtue. Aristotle set out to define what makes somebody a good person. The specific details of the answer he gave are not relevant here. What is important to note is that Aristotle assumed that the way to judge whether a person was good was to observe their actions. Intention, belief, or any other inner states of the individual were irrelevant. A good person was somebody who did good things.

Thus, the statement “Obama is a good person but a bad president” would have been incomprehensible to the Greeks because their judgement of a person was based on public performance. To do a bad job was to be a bad person. You either manifested virtue, or you did not.

Furthermore, eudaimonia could only be judged over the full lifespan of the individual. Aristotle believed it was theoretically possible to live a perfectly virtuous life and then blow it all at the end with a final misstep, for example, by dying a cowardly death. This is the opposite of the Christian belief whereby one can live a perfectly sinful life and then save oneself with a deathbed confession.

It is, of course, from Christianity that modern Western culture gets its focus on the inner states of the individual because the Christian virtues are first and foremost personal, inner qualities. In the post-war years, we have kept the focus on inner states while swapping the Christian notion that we are born sinners for the idea that we are born virtuous. Humanistic psychologists such as Maslow built this into their theories. They assumed that everybody is born good. Everybody is inherently striving for self-actualisation and will achieve it if society doesn’t get in the way. This was a continuation of the philosophy of Rousseau, who stated that we are all born good and it is society which corrupts us.

Although this emphasis on the inner aspects of existence has been a core feature of Western culture for many centuries, it has been taken to a ridiculous extreme in our time. We have now arrived at a point where everybody is assumed to be “good” even if their actions in the world are the exact opposite. The Christian practice of repentance at least involved owning up to sinful actions. As a result, there was still a connection between the inner and outer world. When Christianity was rejected, this connection was severed.

The fact is that we have no direct access to the inner states of others. Therefore, we can only infer their inner qualities from outward behaviour, and that only to the extent that we first understand our own inner lives. An assumed inner virtue decoupled from action amounts to a dissociation from the “real world”. Moreover, forgiveness and repentance are no longer required because, as Dostoevsky put it, everything is permitted. We are born good, and we will die good, irrespective of what we do in life. By this absurd belief, even a murderer can be a “good person”.