The Coronapocalypse Part 29: A Philosophical Interlude

Two common criticisms of Jungian archetypes are that they lack analytical rigor and that they are not causative, or at least can’t be shown to be causative in the way we normally associate with science. It is no small irony, of course, that the field of virology suffers from the same difficulties. As we saw in part 8 of this series, there are still no firm rules by which to show that a virus causes an illness, and this is especially true of respiratory viruses. The criteria for what constitutes a distinct virus have been changed a number of times recently, and there is still much disagreement within virology about what they should be. Nevertheless, our society has had no problem believing that SARS-CoV-2 is a ‘new’ virus and that it causes a ‘new’ disease. In fact, it’s the same people who believe in that ‘science’ who would criticise Jungian psychology for being unscientific. What is really at stake is not science as such but metaphysics. Jung did not conform to the standard materialist dogma that is dominant in western society, and the criticisms of him are mostly nothing more than punishment for this indiscretion. Recently, it occurred to me that there are strong parallels between Jungian psychology and the branch of knowledge I did my university degree in – linguistics. What is interesting about that is that modern linguistics aims for exactly the kind of rigor some people would say is lacking in Jungian psychology. So, I thought it might be worthwhile to spend a post detailing these correspondences as a way to illuminate Jungian psychology and also put the criticisms of it into perspective. Some people deny that the archetypes exist, but nobody would deny that the English language exists. Nevertheless, the two are directly comparable, and we should be able to use the latter to elucidate the former.
In doing so, we’ll also have a look at why linguistics and psychology have so much in common and how that commonality makes them not amenable to the strict analytical rules and causal determination that can be achieved in the ‘hard’ sciences. Before we get to that, though, let’s do a lightning review of the main moves in 20th century linguistics wrought by the man whose work overturned the discipline, Noam Chomsky.

One of the main tasks Chomsky set for linguistics was to account for the age-old problem of how children learn language. We know that every healthy child with faculties intact will learn language to native-speaker level automatically and without conscious effort. We also know that a child will learn the language of wherever they happen to be in the world even though the languages of the world show radical diversity in form. And we know that the language a child is exposed to is imperfect and incomplete. As I write this post, I am correcting myself as I go and I have the ability to go over what I have written and correct errors afterwards. But we can’t do that with spoken language. A transcript of me reading this exact same post out loud, even if I was reading directly from the written material, would show errors, false starts and other inconsistencies. Everyday, spontaneous spoken language contains even more errors, false starts and incomplete sentences, and that is before you factor in metaphorical language or creative and novel use of forms and all the other bits and pieces that add spice to the spoken word. The task for the language-learning child of bringing order to this mess would seem insurmountable if the child were using general deduction to try and ascertain the rules of the language. Therefore, Chomsky posited that there must exist an innate faculty of some kind whose structure the child brings to the problem, so that it knows most of the rules in advance at some level of abstraction and its job in language learning is to figure out how the language it is hearing fits onto those pre-existing rules. Because these rules are limited, the child has a relatively small set of options to choose from and the task of language learning becomes manageable. Chomsky called this innate faculty Universal Grammar and we can map what this looks like as follows:-

This is a logical diagram meant to show three logical types: the individual speaker, a specific language spoken by a community of speakers, and the Universal Grammar whose rules govern the possible forms of individual languages. In reality, Universal Grammar is in each of us and, according to the materialist position, must be present genetically so that it can be handed down through the generations. Meanwhile, what we call a language is itself an abstraction from the everyday spoken word. The linguist deduces a grammar from the linguistic behaviour of a community of speakers and calls that the language. This language is always in flux and its vocabulary and grammar can and do change over time. Nevertheless, within the Chomskyan paradigm, it is hypothesised that the grammar will always map back to the abstract rules of Universal Grammar which every one of us has in our mental makeup. By analysing the rules of each language spoken in the world, we should be able to hypothesise and then prove the rules of Universal Grammar. In doing so, we would account for how children acquire language and we would shine a scientific light into the previously dark recesses of human cognition. That was, in a nutshell, the goal which Chomsky set and which made linguistics, at least for a few decades, one of the most exciting fields of study in the 20th century.

There are all kinds of problems with the Chomskyan program which we don’t need to go into here. However, this way of formulating how we learn language is almost beyond doubt. Linguists will argue about the nature of the faculties used in language acquisition but almost nobody doubts that there must be some innate faculties the child brings to the job. What Chomsky was trying to do was to bring the issue down to concrete, testable hypotheses and thereby to arrive over time at a more precise understanding of the language faculty. What is of relevance to us is that this way of framing things is, as far as I can tell, identical to what Jung had in mind with his archetypes. In fact, the exact same diagram can be drawn to characterise Jung’s theory of the archetypes:

What Maximus (the character in the movie Gladiator), Jesus and Wotan are meant to represent here are culturally specific examples of archetypes which bear the same relation to the collective unconscious as a specific language such as English bears to Universal Grammar. Jung would posit that there exists a more abstract form of each archetype in the Collective Unconscious which governs its expression in a particular culture. Thus, Maximus is one possible example of The Warrior archetype in the same way that English is one possible example of a human language. Just as Chomsky and other linguists study the patterns of language to try and determine the more abstract forms of Universal Grammar, so Jung and other psychologists study the cultural manifestations of the archetypes to try and arrive back at the more abstract forms. At this resolution, the two methods seem the same. However, Chomsky created a detailed and highly technical analysis of syntactic forms in his generative grammar whereas Jung did not aim for anything like that level of precision. In my opinion, what happened with Chomskyan linguistics is that the generative grammar became a complex game in its own right and arguably became a distraction from the underlying goal. What’s more, it doesn’t appear to have worked. The early results, which seemed promising, were limited by a focus on syntax and specifically the syntax of English. As the model was applied to other languages, the limitations of this approach became clear. There are languages in the world where syntax simply isn’t that relevant and where morphology does most of the heavy lifting. In such languages, generative grammar becomes a clumsy and laborious way to account for the language, and this throws doubt on the relation back to Universal Grammar.
It looks to me like Chomsky’s project fell into the kind of category error that we’ll have a look at shortly and the purported rigor and detailed technical analysis actually left out a great deal that was essential to language itself.

It’s instructive that one of the criticisms levelled against Chomsky was that, for all his purported scientific rigor, he ultimately had to rely on the intuitions of individual speakers of a language to decide what was and was not grammatical. His rules were based on such intuitions but, within the edicts of materialist science, that is bad form. We need empirically verifiable data, not ‘subjective’ data. Of course, this same accusation has been levelled against Jung. In order to see why it is invalid, and in order to elucidate the category error that I believe Chomsky fell into, let’s have a look at a way of dividing up the domains of knowledge that I first read about in Gregory Bateson’s book Mind and Nature. We’ll see that linguistics and psychology are related disciplines that fall into a separate category from the hard sciences. Because of that, they can and do share an approach to knowledge that is valid within that domain but would be considered invalid in the domain of hard science.

The two domains that Bateson outlined are pleroma and creatura. To pleroma belong the two disciplines that we generally consider hard science: physics and chemistry. To creatura belong disciplines such as linguistics and psychology as well as biology, medicine, hermeneutics and religious and spiritual pursuits. The following diagram gives an outline of this distinction.

If we accept this distinction, we can see that a category error occurs whenever we apply the methods that are valid in pleroma to creatura and vice versa. The latter of these is what is usually called superstition while the former is sometimes called scientism. What Chomsky was trying to do in linguistics is what many others have been trying to do in the 20th and into the 21st centuries: apply the methods of science from the pleroma domain to the creatura domain. Chomsky’s appeal to intuition, however, was more in keeping with the domain of creatura and was criticised for this reason. This criticism is only valid if you think that real knowledge comes only from the methods applied to pleroma. But this is exactly what Bateson, Jung and others would have denied. There were various movements and schools of thought in the 20th century that set out to put the domain of creatura on a firm footing and to address the imbalance that had arisen whereby the methods valid in pleroma had come to be seen as the be-all and end-all of knowledge. The creatura domain is always in the process of becoming. It simply can’t yield the static, reproducible results that we find in the dead world of pleroma. Moreover, in Bateson’s analysis, the whole creatura domain is governed by mind and, because mind implies a hierarchy of logical types and feedback loops up and down the hierarchy, it’s not possible to do reductionist science: the attempt at reduction at best sets a new context which didn’t exist before and at worst outright changes the very object under study. Bateson would also have said that empathy is needed in the domain of creatura. It takes a mind to know a mind. That’s why linguistics must appeal to the intuitive judgements of speakers about what is and is not grammatical. Only a speaker of a language can know those things. If you try and remove the speaker from the equation, you lose something fundamental.
This is exactly what happens in cognitive science research where the researcher will often hide the true intent of the study from the subject. It’s very common in such research to give the subject a task to do while the thing the researcher is actually testing for is unrelated to that task. In that way, the subject’s rational mind will not ‘get in the way’ of the results. In doing so, the researcher has removed conscious awareness from the equation. Such results may have a certain validity, but I think we can all agree that conscious awareness is a pretty important thing to be leaving out when it comes to human beings. That’s what happens when you apply the pleroma paradigm to the creatura domain. Even within the Chomskyan paradigm, with its pretension to hard science, the fact that I am a native speaker of English gives me the right to make judgements about the whole of the English language. Bateson said the same in relation to biology. As biological creatures, we can compare ourselves to other biological creatures and determine the patterns which connect us. The same goes for Jungian psychology. Each of us is a psyche and, if we can recognise the elements of the psyche within us, we can also recognise them without. That’s why you have to know thyself.

The messiness of human language mirrors the messiness of human psychology. But the desire to clean up this messiness is not valid in the domain of creatura. Even in the hard sciences, the preference for simplicity known as Occam’s Razor is mostly a practical matter. For any phenomenon there are numerous, perhaps infinitely many, explanations that can account for it, but we prefer the simplest one. However, to paraphrase Einstein, explanations should be as simple as possible but no simpler. By leaving out so much of what makes language what it is, Chomsky might have won some insights into syntax but he left out a whole lot of other things that are arguably just as important. As Mary Midgley noted, this zeal for reductionism seems tied very much to hubris, and when applied to the domain of creatura it leads to results that miss the point by excluding from consideration that which cannot be excluded.

Jung did not deny the validity of the hard sciences. Rather, he argued that the part of our mind that can think ‘causally’ and ‘rationally’ is actually quite young and is built upon a much larger and much more well-established part of the mind that interprets the world acausally or, we might say, religiously or symbolically. Many of the greatest scientists believed much the same thing and did not deny the validity of the older way of understanding. It is only in the modern world, with our extremist materialist philosophy, that the denial of that way of thinking has become common. We demand causal explanations based on quantity and number, but when these are applied to the domain of creatura we leave out that which is crucial to our understanding. Consider this: virology actually sits right on the border between pleroma and creatura. We call it a part of biology but it deals with viruses, and viruses are not alive. If we were to say that virology was actually a part of organic chemistry, we would then also be saying that it is part of pleroma. This would be useful because it would make very clear a distinction which has not been clear during corona, namely that virology and viral disease are two very different things. The former has as its object a virus while the latter has as its object the relation between a virus and a person (medicine) and the relation between a virus and a population (epidemiology). These latter two are very clearly in the domain of creatura, and this would make clear that we would not want to apply the methods that are valid in pleroma to those disciplines. But, in fact, the whole corona discourse has been based on exactly that. The obsessive counting of ‘cases’ is just one aspect of that category error. The same people who are happy to gloss over the analytical difficulties of viral disease are the ones who would deny archetypal analysis because it does not establish causality.
That correlation does not imply causality is one of the basic principles of hard science and yet it is correlation that has driven the whole corona business. Ironically, the corona event has been driven largely by those older faculties of the mind that Jung described and not the younger, scientific faculties. And yet it’s the people who deride those older faculties as superstition who have been most susceptible to them. In a way, that’s not surprising. If you aren’t used to using those faculties, you are going to be defenceless in the same way that somebody who has no fighting skills is defenceless against a trained martial artist. Our modern materialist society pretends those faculties don’t exist and so we have a population of people who are completely blind to what is still the main driver of human affairs. We pretend that the rationalist tip of the iceberg is all that exists while being wilfully ignorant of the power and mass of the submerged subconscious.

Thus, the error in corona has been the same error that Bateson and the other systems thinkers and cyberneticists had already identified in the 20th century: a category error of believing that only the precepts of reductionist science can give rise to knowledge, and the invalid application of those precepts to the creatura domain. What is required in the creatura domain is the acceptance of holistic thinking. It’s a generalist approach that uses what the philosopher Charles Sanders Peirce called abductive reasoning. This involves drawing patterns across domains, or looking for what Bateson called the pattern which connects. What abductive reasoning implies, and it is this which most offends against our materialist prejudices, is that certainty is not attainable. By failing to acknowledge this, we do what we have done during corona, which amounts to nothing more than a desperate grasping after a certainty that can never be attained. We attempt to simplify things which cannot and should not be simplified and in so doing we fall into the trap outlined by the old saying – we cut off our nose to spite our face.

All posts in this series:-

The Coronapocalypse Part 0: Why you shouldn’t listen to a word I say (maybe)

The Coronapocalypse Part 1: The Madness of Crowds in the Age of the Internet

The Coronapocalypse Part 2: An Epidemic of Testing

The Coronapocalypse Part 3: The Panic Principle

The Coronapocalypse Part 4: The Denial of Death

The Coronapocalypse Part 5: Cargo Cult Science

The Coronapocalypse Part 6: The Economics of Pandemic

The Coronapocalypse Part 7: There’s Nothing Novel under the Sun

The Coronapocalypse Part 8: Germ Theory and Its Discontents

The Coronapocalypse Part 9: Heroism in the Time of Corona

The Coronapocalypse Part 10: The Story of Pandemic

The Coronapocalypse Part 11: Beyond Heroic Materialism

The Coronapocalypse Part 12: The End of the Story (or is it?)

The Coronapocalypse Part 13: The Book

The Coronapocalypse Part 14: Automation Ideology

The Coronapocalypse Part 15: The True Believers

The Coronapocalypse Part 16: Dude, where’s my economy?

The Coronapocalypse Part 17: Dropping the c-word (conspiracy)

The Coronapocalypse Part 18: Effects and Side Effects

The Coronapocalypse Part 19: Government and Mass Hysteria

The Coronapocalypse Part 20: The Neverending Story

The Coronapocalypse Part 21: Kafkaesque Much?

The Coronapocalypse Part 22: The Trauma of Bullshit Jobs

The Coronapocalypse Part 23: Acts of Nature

The Coronapocalypse Part 24: The Dangers of Prediction

The Coronapocalypse Part 25: It’s just semantics, mate

The Coronapocalypse Part 26: The Devouring Mother

The Coronapocalypse Part 27: Munchausen by Proxy

The Coronapocalypse Part 28: The Archetypal Mask

The Coronapocalypse Part 29: A Philosophical Interlude

The Coronapocalypse Part 30: The Rebellious Children

The Coronapocalypse Part 31: How Dare You!

The Coronapocalypse Part 32: Book Announcement

The Coronapocalypse Part 33: Everything free except freedom

The Coronapocalypse Part 34: Into the Twilight Zone

The Coronapocalypse Part 35: The Land of the Unfree and the Home of the Safe

The Coronapocalypse Part 36: The Devouring Mother Book Now Available

The Coronapocalypse Part 37: Finale

28 thoughts on “The Coronapocalypse Part 29: A Philosophical Interlude”

  1. I can mainly speak for biology, but for a while now the practitioners of Creatura disciplines have suffered from “Physics Envy”: the feeling of inadequacy due to the perceived prestige of being able to describe the knowledge of your field strictly in the language of math and quantitative (rather than qualitative) measures.

    Chomsky’s universal grammar also seems like an attempt to mathematize the study of language – once I have the universal grammar, I have the rules I need to construct any sentence, so I can start talking about a language in terms as precise as those I’d use to describe solving a quadratic equation. Interestingly enough, the Babylonians actually used to guess the solutions to quadratic equations based on intuition, so I guess if I were to show them the quadratic formula, that would be like the hypothetical universal grammar – no more need for the intuition of a native speaker to form sentences, just work out the rules, much like how someone in a Pleroma field only needs to solve a math problem to make a prediction.
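    For what it’s worth, the kind of mechanical rule-following the quadratic formula makes possible can be sketched in a few lines of Python (the function name here is my own invention):

```python
import cmath  # complex square root covers a negative discriminant

def solve_quadratic(a, b, c):
    """Return both roots of a*x**2 + b*x + c = 0 via the quadratic formula."""
    if a == 0:
        raise ValueError("not a quadratic: a must be nonzero")
    d = cmath.sqrt(b * b - 4 * a * c)
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

# x**2 - 5x + 6 = (x - 2)(x - 3), so the roots are 3 and 2
print(solve_quadratic(1, -5, 6))
```

    Once the rule is stated, no intuition is needed: every instance reduces to arithmetic, which is roughly the kind of precision the generative program hoped to achieve for sentences.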

    I think there is a lot of psychological comfort in numbers – not only is there the widely believed conventional wisdom that “numbers do not lie”; numbers also don’t require one to exercise his own judgment. Just look at the number and see if it is above or below some arbitrary threshold – nobody can say you made a mistake if the number told you.

    The apparent inability to exercise critical thinking for yourself, creating the need for an outside authority to take responsibility, is something I see often in children of overbearing parents.

  2. Bakbook – I’ve always liked the line in Nietzsche “I would rather guess than deduce.” Even in the hard sciences, most of the breakthroughs come from educated guessing. Deduction is really just about fine tuning things afterwards.

    That’s an interesting point about the Babylonians. We teach math by getting students to solve formulas, which is dead boring and something you can do without even understanding the problem. One of the best computer programming books I ever read would get the student to solve a problem first using whatever tools and understanding they had. This usually meant that you would solve it in an exploratory way that was very long-winded. Then, in the next chapter, you’d learn a way to solve the problem much more quickly. The beauty of that approach was that you had done the hard work of understanding the problem first, so when you were given the easier way you understood why it worked.

  3. You are spot on about deduction and guessing. One of the things I found the most fascinating was how much of physics, even modern physics, was guessed. A great example is one of physics’ most famous equations, the Schrödinger equation, which governs the time evolution of the quantum wavefunction. This equation was in fact first guessed, and the nice mathematical derivation came later. Many professors, including my own Quantum Mechanics professor, teach the derivation in a manner that misleads students without an interest in the history of physics into thinking someone first worked out the math and arrived at the equation as the result of a derivation.
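    For reference, the equation in question, in its standard time-dependent form (with Ĥ the Hamiltonian operator):

```latex
i\hbar \, \frac{\partial}{\partial t} \Psi(\mathbf{r}, t) = \hat{H} \, \Psi(\mathbf{r}, t)
```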

    The approach you describe in the programming book is actually similar to the philosophy behind the way my Physics degree was arranged – first you learn Mechanics the way Newton originally formulated it – solving problems using Newtonian Mechanics requires quite a bit of intuition and creativity. Then the next year you learn Analytical Mechanics, which is a series of mathematical formulations that allow you to easily transform even complicated problems into a math problem you can then solve. This is done to give you the intuition that comes from the hard work of solving problems using Newtonian mechanics, so that when you use Analytical Mechanics you can tell if you made a mistake in the math as well as appreciate the simplicity it offers.

    I think you are right about the way they teach math – while in high school my cousin complained about how boring trigonometry was. As someone who had a blast in high school geometry, I asked to have a look at her textbook, and what I saw astonished me. Not even one illustration of the unit circle. I did not see even one right-angled triangle. It was all “definitions”, and identities she was being taught to memorize as formulas rather than understand as conclusions that follow from the geometry. Pythagoras must be rolling in his grave.

  4. Bakbook – Have you ever read Richard Feynman’s book “Surely You’re Joking, Mr. Feynman!”? There’s a great chapter called “Judging Books by Their Covers” where he recollects the time he was put on a committee to judge high school maths textbooks. It’ll give you a real insight into why high school maths is usually so awful (and most other subjects too).

  5. Simon: “In my opinion, what happened with Chomskyan linguistics is that the generative grammar became a complex game in its own right and arguably became a distraction from the underlying goal.”

    Yeah, I’ve heard other linguists express similar sentiments. (I have no formal training in linguistics, but I often listen to linguistics lectures aimed at a general audience.) I guess it’s one of those ideas that are helpful if perceived as a rule of thumb, or perhaps as a useful reminder of something or other (in this case: “children have some sort of innate faculty that allows them to learn languages with surprising ease”), but if you try to take it much further than that, you end up wasting time. Well, not that there’s necessarily anything wrong with playing complex games for their own sake (chess comes to mind), as long as you realize that that’s what you’re doing and refrain from pretending otherwise.

    As for “pleroma” and “creatura”: I would imagine that the real distinction is between simple and complex systems. Weather is notoriously difficult to predict, even though it’s not a living organism. Well, I suppose that weather on Earth is influenced by living organisms, but is weather forecast any easier for Venus than for Earth? Probably not.

    Back to COVID, it’s mind-boggling how simplistic the response has been. Most of the anti-COVID measures are basically guaranteed to exacerbate diseases of civilization (such as diabetes) on the population level, when those diseases are precisely what makes people vulnerable to COVID (and to a whole bunch of other pathologies). Sigh.

  6. Irena – yes, but the thing about generative grammar was that it was a very specific waste of time that came from a category error that is very common in our culture i.e. applying the methods of the hard sciences to domains where they are invalid. We are making the same category error right now in relation to corona. The “Simple vs complex” distinction is the materialist perspective within which the only difference between a human and a rock is that we are more complex. What Bateson, Jung, Greer and others would say is that there is a qualitative difference and that is what the pleroma-creatura distinction is meant to capture. Yes, the covid measures are making the population sick all in the name of protecting them. Exactly what The Devouring Mother would do in her Munchausen by Proxy form 😉

  7. Simon: “it was a very specific waste of time that came from a category error that is very common in our culture i.e. applying the methods of the hard sciences to domains where they are invalid.”

    Hmm. I wouldn’t discount the hard science methods in linguistics, etc. wholesale. Think about machine translation, for example. It’s no good for poetry, and it might never be, but if you want to translate a recipe or a government form, it actually works quite well (at least for some languages). And those algorithms are based on statistics.

    Back to COVID: Sunetra Gupta made the argument that, as conceptual tools, mathematical models are very useful for understanding the general shape of a pandemic, but are fairly useless for making very precise predictions (about the actual number of infections, deaths, etc.). This made quite a lot of sense to me.

    So, it seems to me that you can indeed use mathematical models for “creatura,” but that you have to be modest, and not expect anything like the level of precision that you might hope for with gadgets (“pleroma”).

  8. Irena – it’s more fundamental than that. To go back to the WW1 analogy, I’m sure we could have got a physicist to explain the theory behind why sending men running into machine guns was a bad idea. But the problem was not a ‘science’ problem. The problem was that society had gone mad or, in Jungian terms, been overwhelmed by an archetype or, in old religious terms, been possessed by the devil. The arguments by Gupta and the others are totally correct but they assume that the only thing wrong at the moment is that we are following the wrong science. That is incorrect. The thing wrong at the moment is that we have gone mad. If there’s one lesson from the last year and a half, it’s that society does not run on ‘science’. It runs on something else and that is the unconscious, arguably the domain where the scientific method is least valid.

  9. > The arguments by Gupta and the others are totally correct but they assume that the only thing wrong at the moment is that we are following the wrong science. That is incorrect. The thing wrong at the moment is that we have gone mad. If there’s one lesson from the last year and a half, it’s that society does not run on ‘science’. It runs on something else and that is the unconscious, arguably the domain where the scientific method is least valid.

    Just a note to say _thank you_ for putting it so succinctly – it was quite the relief to read that!

  10. Hey mate,

    there is a difference between the skills needed to get a job and the skills needed to actually do the job properly. It seems to me that in a lot of fields these two skillsets are almost mutually exclusive.
    At uni if you can count beans fast enough, the sky is the limit. If you are a creative independent thinker, you’re in trouble. So the output units of the university system are selected for their ability and commitment to the pleroma, materialism and algorithmic processes. Not the kind of thinkers that you would expect a Kuhnian paradigm shift from.

    “became a complex game in its own right and arguably became a distraction from the underlying goal.”
    This seems to be a rather common phenomenon, but that does not make it healthy. You’ll probably agree that the use of certain practices in software development cannot be explained any other way. And it seems to be an important part of the current madness. A game with complex rules. Masks on, masks off, lockdown and vaccines and whatnot. Some people seem to enjoy playing it. Totally oblivious to the fact that it was started over a virus that is about as dangerous as a bad flu and is probably endemic now anyway in most countries.

    “The thing wrong at the moment is that we have gone mad”. Amen to that.

  11. @Simon (and others)

    There is no question at all that the world has gone mad, and that that’s at the root of our current plight. My point was simply that mathematical models have their place in the study of “creatura,” though they won’t work quite the same (or as well) as they do for rocks and gadgets.

  12. Actually, that was a little ambiguous. I guess I should have written “no doubt at all that the world has gone mad.”

  13. Irena – I think we can be more precise. Mathematical models work best in physics and become progressively less useful the ‘further away’ you get from it. They are of some use in biology and related fields, as good as useless in psychology (cognitive science) and of no use whatsoever in explaining consciousness or the self.

  14. @Simon

    You think mathematical models are useless in cognitive science? I’m not so sure. There are some interesting results about the “forgetting curve” and such. And how do you determine the best method of (say) teaching reading without some sort of statistics? But again, this seems to work best for populations, and less well for individuals.
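    As an aside, the “forgetting curve” is one of the few places in cognitive science where a simple mathematical model does real work. A minimal sketch of the Ebbinghaus-style exponential decay (the stability value and time points below are purely illustrative, not fitted to any data):

```python
import math

def retention(t_hours, stability):
    """Ebbinghaus-style forgetting curve: R = exp(-t / S).

    t_hours:   time elapsed since learning
    stability: 'S', the memory-strength parameter (larger = slower forgetting)
    """
    return math.exp(-t_hours / stability)

# With an illustrative stability of 24 hours, retention after one day
# is exactly exp(-1), i.e. roughly 37% of the material.
for t in (1, 10, 24, 72):
    print(f"after {t:>2} h: {retention(t, stability=24.0):.2f}")
```

    Note that even here the model describes an averaged population curve; an individual’s retention depends on interest, sleep, prior knowledge and much else — which is Irena’s point about populations versus individuals.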

  15. Irena – the best method of teaching reading is to make the child want to read. That is a matter of empathy and insight, not statistics.

    Daniel – my pleasure.

    Roland – yes, I saw a lot of parallels between the people who were into Chomskyan linguistics and the software development nerds. In fact, a very strong argument can be made that you only get into such games when there is no value in the work itself. For example, there are people working on low-value software who have incredibly elaborate methods for the delivery of that software. If they didn’t have those elaborate methods, they might actually realise how little value there was in the thing they were working on.

  16. Simon: “the best method of teaching reading is to make the child want to read and that is a matter of empathy and insight, not statistics.”

    If you want to design a first grade reading curriculum for your country (or province or whatever), then it’s very useful to know what works best on average, and for that, you need statistics. No curriculum will work for everyone (obviously), but a curriculum that results in 10% of children winding up either illiterate or needing extensive specialist help is quite a bit better than one that results in 40% of children winding up in the same situation, especially if the former takes no more time and money than the latter.

    Of course, this sort of thing works best for relatively simple skills. By “reading,” I really meant “decoding.” It’s much more difficult to figure out what will make a kid become an avid and critical reader. That’s much more of an art than a science, and plenty of kids will never get there, no matter what you do. But of course, decoding is a prerequisite for any and all more advanced reading skills.

  17. Irena – to summarise your point as I understand it: assuming an undifferentiated mass of children and an undifferentiated mass of teachers, what algorithm can be applied to achieve some outcome that I, the expert/technocrat, care about. That’s exactly the mindset of applying pleroma methods to the creatura domain. It’s also exactly the mindset that has turned western societies into hellholes in the last year and a half.

  18. @Simon

    ::shrug::

    Relying on the whole language method, rather than the much more effective (on average) phonics, has left masses of children (most of them poor) functionally illiterate. That purely practical matter strikes me as far more relevant than any pleroma/creatura philosophical distinctions.

  19. So educate the teachers in the available methods and leave it up to them to decide which one applies best to which student. Except teachers can’t do that. They get told what to teach by a technocrat. Just like a doctor gets told by a technocrat what treatment they must give to corona cases.

  20. Simon, the thing is that some methods really are a lot better than others, i.e. they give noticeably better results for a significant majority of students. (How do we know? Statistics.) Phonics will work better than whole language for almost all children. Are there exceptions? Sure. Deaf children come to mind, and they obviously need to be taught in a different way. But yes, phonics should be the default, and children who still struggle should be given extra help, tailored to their special needs.

    In general, though, there is a limit to how much personal attention you can realistically provide. To do it right, you’d need minuscule classes (think three to five students), and that’s very, very expensive. If you try it with even just 20 students (which is a relatively small class), what happens in practice is that most children are neglected most of the time while the teacher works one-on-one with just one kid.
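    The appeal to statistics here can be made concrete. A standard way to compare two curricula is a two-proportion z-test on failure rates; the cohort sizes and counts below are hypothetical, chosen to match the 10% vs. 40% figures mentioned earlier:

```python
import math

def two_proportion_z(fail_a, n_a, fail_b, n_b):
    """One-sided two-proportion z-test (normal approximation, pooled SE).

    Tests whether group A's failure rate is lower than group B's.
    Returns (z statistic, one-sided p-value).
    """
    p_a, p_b = fail_a / n_a, fail_b / n_b
    p_pool = (fail_a + fail_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Standard normal CDF Phi(z), computed via the error function
    p_value = 0.5 * (1 + math.erf(z / math.sqrt(2)))
    return z, p_value

# Hypothetical cohorts of 100 children each: 10 vs 40 need remedial help.
# The gap is far too large to be sampling noise (z is about -4.9).
z, p = two_proportion_z(10, 100, 40, 100)
print(f"z = {z:.2f}, one-sided p = {p:.2g}")
```

    A result like this tells you which curriculum works better on average; it says nothing about which will work for a particular child, which is where the disagreement in this thread lies.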

  21. Irena – fair enough. I simply point out that such results apply to an undifferentiated mass of humans and, in my opinion, facilitate the treatment of humans as an undifferentiated mass.

  22. Hey mate,

    Sounds a lot like my current job. Minimal actual dev, maximal Rube Goldberg-style process. After a decade working in small businesses and startups, I had forgotten what the corporate world is like. And it has become a lot crazier since I last looked. Working in this environment requires a real effort to hold on to your sanity.
    So it is not all that surprising that society lost its mind. It was already quite insane for a lot of people before 2020, so it only took a few small steps to get to where we are now.

  23. Simon, one thing to keep in mind is that what’s happened over the past year and a bit makes absolutely no sense whatsoever, technocratically speaking. Our technocracy has been infected with some sort of virus, and I do not mean SARS-CoV-2.

  24. Irena – no, this is exactly where we disagree. What has happened in the last year and a half makes perfect sense. There is no virus in the technocracy. This is exactly what the technocracy is. In the west, we have escaped the problems of the technocracy by keeping it in its box. It has resided mostly in the government bureaucracy where it causes only a minor annoyance. With corona we let it out of the box. What has happened is exactly what James C Scott called the ‘high modernist ideology’. That’s what killed so many people in Maoist China and the USSR. In the last year, it killed many people in the west too. That ideology commits exactly the error I have spoken about in this post of applying pleroma methods to the creatura domain. When you employ pleroma methods in the creatura domain of education, it’s merely an inconvenience and makes school boring. When you apply it to the medical industry, people die. When you apply it in politics, democracy dies. That’s what the 20th century should have taught us but we still haven’t learned the lesson.

  25. @irena
    I have to support Simon on this.
    What we are seeing could be simply technocratic thinking taken to its natural conclusion.
    Having been a well-paid slut of the technocracy for many years, I am quite familiar with the insane way of applying rational thought.
    When we finally went obviously mad last year, we didn’t have very far to go.
    This should be a testable hypothesis. If it is correct, we won’t get out of it. Things will get a lot worse before they get better. Corona will be a thing of the past soon, but the pattern will get more extreme. There is already talk along those lines.
    Not sure if you have heard of Iain McGilchrist. He comes from a slightly different angle, but reaches similar conclusions.
    https://m.youtube.com/watch?v=1kAlwrnpHIs

  26. Roland: “What we are seeing could be simply technocratic thinking taken to its natural conclusion.”

    I suppose that’s one way of thinking about it. But take anything at all to its “natural conclusion,” and madness follows. Anything at all: thriftiness, adventurousness, hygiene, exercise – you name it. A certain amount of technocratic thinking will give you things like mass literacy and the eradication of smallpox. Push it far enough, and you get Auschwitz. So, don’t push it that far!

    But yes, I fully expect things to get worse before they get better. And we won’t get out of this technocratically, that much is clear by now. I think the problems created by lockdowns and the rest of the COVID response will lead to an explosion (in some countries, in a fairly literal fashion, i.e. with war), and then we’ll just sort of forget about COVID, although it’ll remain endemic, of course.

  27. @Simon

    What’s happened is that our societies have become monomaniacally focused on this one particular virus. Is that monomania a virus in the technocracy, or is it technocracy taken to its natural conclusion, when (as I said above) you should never take *anything* all the way to its natural conclusion? Does it matter?

    Some technocrats have been pointing out that the COVID response has been a disaster for (among many other things) children’s education and health (physical and mental). But those technocrats have been ignored. Why? Monomania.

    Irena – I’m sure many of the technocrats are as surprised as anybody about what has happened and are flying by the seat of their pants just as much as the rest of us.
