Tabula rasa squared

I must say I had a hearty chuckle at a couple of news items from the last few days in relation to “AI” chatbots. Actually, the whole subject of AI has been cause for mirth for a number of years now. Who can forget when Microsoft released a chatbot only to have it turned into a Hitler-lover within twenty-four hours by mischievous Twitter users. Google had a similar chatbot debacle a few days ago, albeit with a very different set of political biases, ones that exactly mirror the current ideological groupthink of corporate America. Coincidence? I think not.

There was another interesting piece of “AI” news from the last few days: the story that Google have apparently agreed to pay Reddit $60 million for data to train Google’s chatbot. I don’t know about you, but Reddit is not the first thing that comes to mind when I think of intelligent inputs to train AI with. As Microsoft learned with its Twitter experiment, a chatbot is only as smart, dumb, or racist as the data it is exposed to.

This raises the question: if you wanted to make artificial intelligence, well, intelligent, wouldn’t you want to feed it with smart stuff? Why isn’t Google shoveling the collected wisdom of Socrates, Plato, Aristotle, Newton and Einstein into its bot’s brain? If you’re going to make a machine intelligent, then surely you have to give it intelligent inputs?

Actually, the whole idea that intelligence is based on such “training” presupposes the philosophical position that sits at the heart not just of AI as a technology but also of its political uses. The “AI” used in chatbots is technically called machine learning. Machine learning involves feeding “correct” data into the computer and then having it pattern match new data based on the models it forms. It’s the digital equivalent of the philosophical idea of tabula rasa, or the blank slate concept.
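
For the technically minded, here is a minimal sketch of that train-then-pattern-match loop, written in Python with scikit-learn. The sentences, labels and category names are invented purely for illustration, and real chatbots are trained on a vastly larger scale with very different models, but the principle carries over: the machine starts as a blank slate, and whatever its trainer labels as “correct” becomes its whole world.

```python
# A toy sketch of supervised "training" (not the author's example):
# the sentences, labels and category names below are invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Whatever the trainer labels as "correct" becomes the model's ground truth.
texts = [
    "cats are wonderful pets",
    "dogs are loyal companions",
    "the stock market fell sharply today",
    "interest rates were raised by the central bank",
]
labels = ["animals", "animals", "finance", "finance"]

# Fit a simple bag-of-words classifier on the labelled examples.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(texts, labels)

# New input is classified purely by its resemblance to the training data.
print(model.predict(["cats and dogs make good pets"]))   # likely 'animals'
print(model.predict(["the central bank raised rates"]))  # likely 'finance'
```

A large language model is a far more elaborate version of the same move: it has no view of its own, only the statistical shape of whatever it was fed. That is the tabula rasa point exactly.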

The way computers are trained using machine learning is practically identical to the educational approach that grew out of the revolution inspired by blank slate philosophers like Locke and Rousseau a few hundred years ago. These days we assume that our society’s hysterical overemphasis on educational attainment is simply a way for parents to ensure that their children get onto the conveyor belt that connects the modern education system to the bottom rungs of the corporate ladder. That’s certainly true. But the original motivator for education was the new idea that the way a child would turn out was very largely dependent on how they were raised. There’s a whole story behind that which would need its own post, but it’s closely related to the rise of Protestantism and the loss of authority which guaranteed the truths of old.

With the loss of the old form of authority came a new style of education. Rousseau called it the invisible hand. Rather than have an authority figure dictate the truth, the child must be allowed to believe that it was finding its own way to the truth. The teacher would act “invisibly” by controlling what information the child was exposed to in such a way that they would get to the outcome themselves.

Microsoft’s failed chatbot experiment is exactly what many parents who implicitly believe some version of the blank slate theory fear. They believe that all it will take is for their child to be exposed to the wrong “data” (the wrong group of people) and they will become corrupted. That’s the negative side of the theory. The positive side is that the child can theoretically become anything if only they get the right education. In any case, it’s all up to the parents. Modern helicopter parenting is the blank slate philosophy taken to its logical (and hysterical) conclusion.

As seems to always be the case, the parental philosophy of a society maps to the way in which the elites of society govern the general public. Back in ancient Rome, the father of the household had complete legal control over his wife and children. It’s no surprise, then, that the Caesars became the parens patriae of the whole society. As above, so below.

Thus, in our time, our elites govern us the same way our parents raise us: according to the blank slate philosophy. We are allowed to think we found our own way to the conclusion when in fact there is a Rousseauean invisible hand guiding us to a predefined outcome. The Google chatbot episode revealed the political agenda behind the rise of chatbots.

Why do we even need these chatbots in the first place and why would they need to be trained on Reddit data? The answer to the second question is that they need to be able to “speak the language” of the average Reddit user, who is a pretty good proxy for the average internet user.

Why they need to sound like a real internet user is because it’s clear that the plan for these chatbots is to become “educators” of the general public in the Locke and Rousseau sense of the word. They will be used to generate content that is in accordance with the ideology that the elites want to promulgate. Rather than dictate to the public what to believe, the blank slate approach works by controlling the information that the public is exposed to and then letting them draw “their own” conclusions. This is not a new idea. The method has been going on for at least a century, and certainly longer if you include newspapers in the equation.

I mentioned in last week’s post Woodrow Wilson’s Committee on Public Information. That was a bureaucracy designed to gain the public’s compliance for America’s entry into WW1. One of its main methods was to recruit volunteers to publicly advocate for the war. Cinemas were becoming very popular in those days. The volunteers would get up before the movie and give a short speech advocating for the war. That volunteer was just an “average person” who spoke and looked like an average person but who happened to believe the exact message the government wanted to have broadcast. Robert Cialdini would later call this tactic social proof, and it’s been widely used in advertising and public relations since the beginning of the 20th century.

That’s clearly the reason behind Google’s need to purchase Reddit data. It wants its chatbot to sound like an average internet user so that any content generated seems natural. Once that’s achieved, the bot can be tweaked to provide enough bias to skew the agenda towards the approved message but not so much that we’ll see a repeat of the disaster from a few days ago.

Thus, a technology that’s predicated on the blank slate hypothesis is going to be used for purposes defined by the blank slate hypothesis i.e. to tip the balance of public opinion by biasing what information the public is exposed to. Alongside the new “disinformation” bills, it will give the powers-that-be the narrative control they require to ensure another Brexit or Trump can’t happen. At least, that’s what they think.

19 thoughts on “Tabula rasa squared”

  1. Simon: “But the original motivator for education was the new idea that the way a child would turn out was very largely dependent on how they were raised. There’s a whole story behind that which would need its own post, but it’s closely related to the rise of Protestantism and the loss of authority which guaranteed the truths of old.”

    Really? That surprises me. After all, Protestantism is the religion of predestination, which seems more in line with extreme genetic determinism than with any sort of blank-slatism. But maybe that’s just Calvinism and not all of Protestantism(?).

    About chatbots: a few months ago, I read a blog post that suggested that chatbots would lead to the death of the Internet. Why? Because we’ll find ourselves in a situation where we’re never sure if the person we’re communicating with online is an actual person or a chatbot. At that point, more and more humans will decide to stop using the Internet, or at least strictly limit the ways in which they’re using it. I don’t know if I agree, but it is food for thought.

    Semi-related: not too long ago, I received a highly inappropriate work-related e-mail. Oh, no harassment or anything like that. I was just being asked for a particular kind of favor, when it would have been highly unethical (and maybe even illegal, I’m not sure) to oblige. Of course I declined. That’s not the interesting part, though. The interesting part is that I showed this e-mail to a couple of different people (no, not at the same time or in the same room, i.e. their reactions were completely independent of each other) and they both spontaneously exclaimed that it was obviously written with the help of a chatbot/AI. I’m not fully convinced they’re right, though. After all, those chatbots learned their style from humans, and the human in question may just be the kind of person who writes the way that chatbots have now learned to write. What an embarrassment!! Lemme guess: prestigious schools will soon start specifically teaching their students how to write in a way that does *not* sound chatbot-like, so that those students could distinguish themselves both from actual chatbots and from the masses who use the assistance of chatbots to compose e-mails and other types of writing. It’s even possible that slightly broken English will gain a kind of prestige. “Hey, at least I know this was composed by an actual human!” Of course, you could get an arms race (race to the bottom?) there, too. (“Dear chatbot, please compose an e-mail asking for X, and include exactly two English mistakes typical of native [language of choice] speakers.”)

  2. Irena – most of the effect of Protestantism was a reaction to its teaching rather than an implementation of it. On being told that most of them were going to hell and there was nothing they could do about it, the public responded by desperately trying to do something about it! Ironically, worldly success became a signifier that God liked you and you must be one of the chosen few. The old Popes had at least offered to save you from hell for a fee (indulgences). Now, that money went to education instead.

    The ramifications of chatbots will be interesting to see. I like the idea that humans stop using the internet. Imagine then that the internet is entirely full of chatbots designed to mimic human behaviour. The powers-that-be think they are winning the war for narrative control but actually it’s just chatbots spitting out what they were programmed to write. Chatbots become the perfect type of employee who always tells the boss what they want to hear. They keep getting promoted until eventually they run the corporation 😛

  3. Simon: “Chatbots become the perfect type of employee who always tells the boss what they want to hear. They keep getting promoted until eventually they run the corporation 😛”

    Y’know, I’m not sure we’d even know the difference in some cases. 😛 For example, you listen to various experts (TM). They’re polished, eloquent, and self-confident. And they’re spouting total nonsense. In a polished, eloquent, and self-confident way. Surely ChatGPT can manage that! Maybe get some help from Gemini for the visuals.

  4. Irena – chatbots would also make great politicians. Pretty sure they can stay on script much better than a fallible human.

  5. I think Irena is right, AI will lead to the downfall of the internet. Not just from chatbots, but also from AI generated images and videos. If nothing in the digital medium can be trusted anymore, it will become hard to take anything that isn’t straight from a real person’s mouth at face value. This might even extend to phone calls as voices can be reproduced to sound a lot like the originator.

    That’s the hand they are overplaying with all the digital prison stuff, and surely they can see this but are maybe just too evangelical about it.

  6. Skip – bad money always drives out the good. Looking at the current state of US politics, as soon as fake video is indistinguishable from real, you know that compromising videos of every presidential candidate, or anyone else in a position of importance, will start appearing. The only solution I can think of would be verified accounts, and it’s interesting that most of the big platforms have been moving in that direction.

  7. Yeah but who is doing the verifying? Both sides could just have their own verification. And the rebels will balk at any official verification system.

    I can’t really see any way around it. Governments will use the proliferation of fakes to push through more and more draconian disinfo bills, but the end result will probably be exactly what they don’t want: the entire medium being abandoned and something else taking its place. I could actually see the large-scale collapse of western faith in all media, which would be fascinating from a psycho-social cultural perspective. We are so used to having our thinking and opinion forming done by someone else, and a lot of us live in this bizarre abstract dream world of video and print.

  8. The ramifications of that would be huge. For one, it would almost certainly crash the stock market. It would crash all faith in progress. It could quite likely crash the internet itself, in the sense that it’s hard to see how anybody would pay for it in such a world. It would shatter a number of illusions that are being propped up by government propaganda. I’m not saying it won’t happen but, at this point, I really think our society will choose almost any other option. To a certain extent, we already are.

    Now if the internet got shut down for some other reason, e.g. sabotage or some kind of natural event, that would be “interesting”.

  9. Yeah I don’t think the internet itself will end outside of something hardware related, just the ‘news’ media propaganda arm of it, because the crucial thing they have done is shown that videos can be horribly faked. Now they always could be in some underhanded way, magic of cinema and all that, but it is out in the open that AI can create videos, text, and sound, all at a high level.

    Like what happens if scammers start faking loved ones’ voices? How quickly do people lose trust in their phone?

  10. The question is: do people care? Reminds me of one of Nietzsche’s memorable ideas. He said most people don’t have an “intellectual conscience”. In other words, most people don’t care if they’re being lied to and will happily participate in the lie if it supports their world view. It’s quite similar to Dostoevsky’s Grand Inquisitor idea. Of course, that might work for broader politics. It will be a very different thing if people are losing their own money to scammers.

  11. Hi Simon,

    Makes sense. You should see some of the rubbish comments I receive. They’re barely literate. Here’s one from earlier today (apologies in advance):

    What i do not realize is in fact how you are no longer actually much more well-favored than you might be right now. You’re very intelligent. You recognize thus considerably in relation to this topic, made me in my view believe it from numerous numerous angles. Its like men and women are not fascinated until it is one thing to do with Lady gaga! Your own stuffs excellent. All the time handle it up!

    Crazy stuff and barely comprehensible! I’m pretty sure the first sentence was an insult. Hardly a winning strategy. 🙂 What the AI botheads may not realise is that the written word is one thing; the spoken word is something else entirely. The two are similar, but cannot be interchanged. And what people may say on Reddit, they may not say to other people’s faces in public. There are social conventions and niceties to be observed.

    Interestingly, I see what you’ve written about with the ‘blank slate’ having real world implications in relation to where parents send their children for schooling. I have inadvertently blundered into that minefield by offering candid opinions on the matter. And I once in fact managed to upset people at a party whom I barely knew by offering the opinion: “Yes, I understand. But how is the child doing academically?” Far out, talk about getting an unfavourable reaction.

    Cheers

    Chris

  12. Hi Simon,

    One other question pops into my mind. How would the government, or corporate interests, or whomever, even know if a comment is a chatbot or not? There is a presumption that the elites are all pulling in the same direction, when I doubt that is the case.

    And if a chatbot comment was somehow able to be ‘known’, then of course other people will wise up to that, or simply switch off. It is very possible that should the program succeed at any scale, it may sow the seeds of its own eventual failure. I could be wrong though.

    Cheers

    Chris

  13. Chris – are you running a spam filter? That comment sounds like classic spam.

    In relation to bots, I’m pretty sure there are already armies of people paid to comment online so, assuming the bots get good enough to sound human-like, I’m not sure much will change, just that some people will lose an income source. It’s on the image and video side of things where the big changes might happen. If generated video becomes indistinguishable from real, that will be a massive paradigm shift.

  14. Hi Simon,

    🙂 Yes, it went straight to the junk area. But the software is not so good and sometimes genuine comments go through to junk. It’s a minor nuisance and the system mostly works. The question I have is: Who writes that stuff?

    I agree, the generated video issue could become a serious problem. But then, I tend to believe that any success with that technology will also be a failure for the technology. After all, if anyone can produce deep fake images, then all images become suspect. Then as a society we reduce the weighting of the media type. But I’m just guessing.

    There’s something quite disturbing where events spiral out of control, and so an attempt is made to grab and direct the narrative which instead becomes the primary driver.

    Cheers

    Chris

  15. Who writes it? Based on the English in use, I’d say some people in eastern Europe. Pretty sure I saw a story once about how there are whole towns in countries like Romania and Bulgaria that make their living off it. The real question is: who is paying for it?

    In relation to video, it could be a huge deal. Luther changed the world using the new technology of the printing press. The arrival of TV and film brought about a similar paradigm shift. If this is the death of video, what comes next? People might have to go outside and talk to each other again 😛

  16. Simon

    “If this is the death of video, what comes next? People might have to go outside and talk to each other again 😛”

    They say the pen is mightier than the sword. But the well-trained tongue is mightier than both. That tongue is connected to a highly under-appreciated technology.

  17. Jinsiri – I suspect the powers-that-be are well aware of that. Real talk that is un-intermediated by online censorship might be a revolutionary act these days.

  18. Simon,

    Those shady men behind velvet curtains and oaken desks are they who generation after generation have understood the key to power: manipulation of the fear of death. It is an art that has myriad forms. So long as men have no balm to that wound, no matter how toxic the biosphere or metasphere become, they will not emerge from those badlands and return to themselves.

    You are right about revolution. Courage to live and die truly needs face-to-face time with masters. Although the preliminaries can be done on paper. Failing this, revolutions are just that: circular. The good guys overcome the bad guys just to become the bad guys, so long as they cannot see the parts of their motivation that come from the shadows. Perhaps a better coinage is the “evolutionary act”.

    Fittingly, the ones most afraid of the man with the scythe are the powerful themselves. Like the vampires of Anne Rice, they are terrified of death precisely because they think they are immortal.

    “The man who conquers himself conquers the world.” – the Buddha (although Sanjuro probably said it too, nippon go de).

  19. Nominally religious people have also preyed on the fear of death. The popes in Luther’s time are a prime example, although whether they even qualified as religious believers is highly debatable since they too were addicted to power and wealth.
