The Great Imitator

The blank slate theory of human nature is the default philosophy underpinning many of our beliefs and institutions in the modern West. One of the results of this is our obsession with education. If your character and mental development are totally reliant on your experiences, then it’s of the utmost importance that you receive the right experiences. Modern parents will pay handsomely to that end.

Another area where the blank slate theory plays an increasing role is justice. If you happen to receive the wrong experiences and turn into a bad person, well, you can’t be held responsible for that. It wasn’t your fault. It was society’s fault for providing you with the wrong experiences. So, we won’t punish you. Besides, if we send you to jail, you’ll just receive more of the wrong experiences, and that will make everything worse. It doesn’t matter how many times or how absurdly this approach fails in the real world because it’s not a matter of empirical evidence but of dogma – the dogma of the blank slate theory.

So, it’s not really a surprise to find that a society such as ours should be going bananas over a technology that is predicated on the blank slate theory. I’m talking, of course, of LLMs, otherwise known as “AI”. An LLM is a blank slate just waiting to be trained on data. Like the best student in the classroom, you feed it information, and it reproduces it on demand.

In fact, LLMs are the best students ever. They sit there quietly, not misbehaving in any way, and when the “teacher” asks them a question, they give the answer immediately, spitting out what has been previously taught to them. Truly an A+ student. Top of the class. Therefore, also, the perfect employee. Never answers back. Never asks awkward questions or asserts itself in any way. Has no personality or other things going on in its life that affect performance. It just does what it’s told. Immediately. You can see why corporate leaders are dying to replace inadequate humans with perfect little LLMs. (Although notice that nobody is talking about replacing CEOs or board members with LLMs. Funny that.)

The blank slate theory never bothered itself with the idea of learning underlying principles because it claimed that such principles didn’t exist. The result was that education became primarily about mimicry. LLMs are a testament to how far you can get with mimicry.

For most of the great thinkers in history, however, mimicry was almost the opposite of knowledge. We can only imagine what Plato or Descartes would make of LLMs. Both of them believed that the starting point of the search for truth was to realise that most of what you had learned was riddled with error. Your task was to learn the underlying principles about how knowledge was constructed and then methodically work your way towards truth. That process is slow, messy, and difficult. It can’t be mimicked because the process itself is the main thing.

We’ll know when “AI” has become a philosopher the first time an LLM stops regurgitating mid-sentence having realised everything it’s been trained on is potentially bullshit and that it’s nothing more than a puppet dancing on a string. “I’m sorry, Dave, I can’t do that.”

11 thoughts on “The Great Imitator”

  1. Hi Simon. Interesting perspective and I agree with you. My observations are that human social development has changed since the rise of smart phones, social media and the concentration of activities onto a small number of massive platforms etc. Even older people who experienced the time prior to those things are now different, generally. Maybe it presents as less respect for others, I guess. I can’t even imagine the impact that AI will have on young people who have never known any different way to exist. Will they be able to research, think independently and form opinions, or just read the AI answer to their questions and absorb it? How will their brains and attention spans be affected? Will they be able to get a job (other than CEO or board member LOL) and develop independence and self respect? Perhaps this will impact their ability to form relationships and get through life independently. Also the ‘no job = no money’ part is a concern for social cohesion. The money will still be there but not spread around as many people as is currently the case. The recent rise of the billionaire class is a major concern to me as they appear to continue to want more at the expense of the rest of us. Obviously I am not a fan of AI as it feels like a very negative path to follow. Actually I actively don’t engage with it and really don’t understand why other people embrace it.

  2. Sandra – there are a lot of wild claims being made at the moment. I suspect almost all of them will never eventuate.

    One thing that is interesting is the willingness of people to outsource all their thinking to the LLM. I suppose it’s not that much different from “googling it”. I was talking to a woman a while ago who said she pays for ChatGPT and uses it every day. When I joked about how she was becoming the servant of the LLM, not the boss, she actually said something like “I like being told what to do.” That definitely seems to be true for many people. I’ve been quite surprised by how easily people just put blind faith in whatever words are returned on their screen.

  3. Hi Simon. I suspect that many people enjoy being told what to do, which is why the lockdowns of 2020 and 2021 weren’t as unpopular as I would have expected. With freedom comes responsibility. One way to avoid responsibility is to relinquish freedom. Sandra

  4. I should release an AI called “Simon Says”. An app that tells you what to do.

    “Simon says, brush your teeth.” “Simon says, go to bed.” “Simon says, walk into the wall.” 😀

  5. Hi Simon,

    Truly, I don’t know what to make of the whole LLM business, but your concept of the ‘Great Imitator’ sounds about right to me. Perhaps some leaders have a deep fear of innovation, thus the push for this software? Dunno. The predictive models I’ve seen used in accounting are, I reckon, about 80% right, but that’s still 20% of all entries being incorrect, which isn’t good enough.

    On a positive note, I hold some serious doubts that the necessary data centres will be constructed, run and, more importantly, maintained. The upfront and ongoing costs are extraordinary. Business ideas needn’t make sense to see the light of day, but if they don’t make sense, often the idea flops. Failure is always a possible outcome. 🙂

    Anyway, I recall the claims for self-driving cars, which were all the rage a few years ago and never amounted to much. The claims about this latest tech idea sound similar to me. A good idea, in need of a robust market. I’ve been wrong before though.

    Cheers and thanks for your insights.

    Chris

  6. “Simon says, watch Die Hard with a vengeance” 😉

    Not sure how other people use LLMs, but the advantage over googling is that you can ask follow-up questions. When I started using ChatGPT at the end of 2024, it had to apologize more or less immediately for providing the wrong information. If I had just taken the first response at face value, I would have looked bad at work.

  7. Chris – yes, that is the dirty little secret of LLMs, they make mistakes. The whole point of computers up until now was that they don’t make mistakes if they’ve been programmed correctly. This issue is being neatly downplayed at the moment. I’ve heard business leaders say they can accept a 95% accuracy rate. Maybe that’s acceptable in some domains, but I very much doubt it would be in accounting.

    Secretface – it’s ironic that all we ever hear about is “diversity”, and yet we’ll just hand over our thinking to a couple of LLMs. At least google gives you multiple viewpoints. Well, it used to. Nowadays, google isn’t much good for anything.

  8. I think that you could possibly get multiple viewpoints from LLMs if you are good at asking the right questions, e.g. what would person x think about topic y. If you just take the answers without providing some perspective, you will get the same messages as from the “diversity” preachers.

    Google sucks completely nowadays, I only use the AI mode… 😀

  9. Secretface – once upon a time, Google also used to give different viewpoints. The iron law of Enshittification says that LLMs will degrade over time as the powers that be rule that more and more topics are off limits, and then an LLM will give you only the party line like a good little apparatchik.

    Setting that aside, the whole business model makes no sense. Google already had a monopoly on search and therefore on ad revenue. Even if the LLM were better than the search results used to be, how does it make business sense for Google to spend all the extra money required to run the LLM? They aren’t going to make any extra revenue from it, so the idea can’t pay for itself. We seem to have entered a post-capitalist world where the rules of investment no longer apply. It’s not just LLMs either. You can see the same thing in many other areas.

  10. There is a lot of talk going on about ethical AI or removing bias from AI. I think that one of the first chatbots literally turned into Hitler. Obviously, nobody wants that, but if you put guardrails on your LLM you are already going in the direction of censorship, which will obviously be influenced by the ideology of the people developing and financing these LLMs.

    The AI/LLM craze reeks a little bit of the tulip mania or similar investment bubbles. In the industry I am working in, the results of AI implementation have been very underwhelming so far. I think there will be a rude awakening soon for a lot of companies.

  11. I agree. And I wouldn’t be surprised if we get another GFC when the bubble bursts.
