Two of the common criticisms of Jungian archetypes are that they lack analytical rigor and that they are not causative, or at least can’t be shown to be causative in the way we normally associate with science. It is no small irony, of course, that the field of virology suffers from the same difficulties. As we saw in part 8 in this series, there are still no firm rules by which to show that a virus causes an illness, and this is especially true of respiratory viruses. The criteria for what constitutes a distinct virus that is different from other viruses have been changed a number of times recently, and there is still much disagreement within virology about what they should be. Nevertheless, our society has had no problem believing that sars-cov-2 is a ‘new’ virus and that it causes a ‘new’ disease. In fact, it’s the same people who believe in that ‘science’ who would criticise Jungian psychology for being unscientific. What is really at stake is not science as such but metaphysics. Jung did not conform to the standard materialist dogma that is dominant in western society, and the criticisms of him are mostly nothing more than punishment for this indiscretion. Recently, it occurred to me that there are strong parallels between Jungian psychology and the branch of knowledge I did my university degree in – linguistics. What is interesting about this is that modern linguistics aims for exactly the kind of rigor some people would say is lacking in Jungian psychology. So, I thought it might be worthwhile to spend a post detailing these correspondences as a way to illuminate Jungian psychology and also to put the criticisms of it into perspective. Some people deny that the archetypes exist, but nobody would deny that the English language exists. Nevertheless, the two are directly comparable and we should be able to use the latter to elucidate the former.
In doing so, we’ll also have a look at why linguistics and psychology have so much in common and how that commonality makes them not amenable to the strict analytical rules and causal determination that can be achieved in the ‘hard’ sciences. Before we get to that, though, let’s do a lightning review of the main moves in 20th century linguistics wrought by the man whose work overturned the discipline, Noam Chomsky.
One of the main tasks Chomsky set for linguistics was to account for the age-old problem of how children learn language. We know that every healthy child with faculties intact will learn language to native-speaker level automatically and without conscious effort. We also know that a child will learn the language of wherever they happen to be in the world, even though the languages of the world show radical diversity in form. And we know that the language a child is exposed to is imperfect and incomplete. As I write this post, I am correcting myself as I go and I have the ability to go over what I have written and correct errors afterwards. But we can’t do that with spoken language. A transcript of me reading this exact same post out loud, even if I was reading directly from the written material, would show errors, false starts and other inconsistencies. Everyday, spontaneous spoken language contains even more errors, false starts and incomplete sentences, and that is before you factor in metaphorical language or creative and novel use of forms and all the other bits and pieces that add spice to the spoken word. The task for the language-learning child of bringing order to this mess would seem insurmountable if the child were using general deduction to try and ascertain the rules of the language. Therefore, Chomsky posited that there must exist an innate faculty of some kind whose structure the child brings to the problem, so that the child knows most of the rules in advance at some level of abstraction, and its job in language learning is to figure out how the language it is hearing fits onto those pre-existing rules. Because these rules are limited, the child has a relatively small set of options to choose from and the task of language learning becomes manageable. Chomsky called this innate faculty Universal Grammar and we can map what this looks like as follows:-
This is a logical diagram meant to represent three logical types: the individual speaker, a specific language spoken by a community of speakers, and the Universal Grammar whose rules govern the possible forms of individual languages. In reality, Universal Grammar is in each of us and, according to the materialist position, must be present genetically so that it can be handed down through the generations. Meanwhile, what we call a language is itself an abstraction from the everyday spoken word. The linguist deduces a grammar from the linguistic behaviour of a community of speakers and calls that the language. This language is always in flux and its vocabulary and grammar can and do change over time. Nevertheless, within the Chomskyan paradigm, it is hypothesised that the grammar will always map back to the abstract rules of Universal Grammar which every one of us has in our mental makeup. By analysing the rules of each language spoken in the world, we should be able to hypothesise and then prove the rules of Universal Grammar. In doing so, we would account for how children acquire language and we would shine a scientific light into the previously dark recesses of human cognition. That was, in a nutshell, the goal which Chomsky set and which made linguistics, at least for a few decades, one of the most exciting fields of study in the 20th century.
There are all kinds of problems with the Chomskyan program which we don’t need to go into here. However, this way of formulating the structure of language learning is almost beyond doubt. Linguists will argue about the nature of the faculties used in language acquisition, but almost nobody doubts that there must be some innate faculties the child brings to the job. What Chomsky was trying to do was to bring the issue down to concrete, testable hypotheses and thereby arrive over time at a more precise understanding of the language faculty. What is of relevance to us is that this way of framing things is, as far as I can tell, identical to what Jung had in mind with his archetypes. In fact, the exact same diagram can be drawn to characterise Jung’s theory of the archetypes:
What Maximus (the character in the movie Gladiator), Jesus and Wotan are meant to represent here are culturally specific examples of archetypes which bear the same relation to the Collective Unconscious as a specific language such as English bears to Universal Grammar. Jung would posit that there exists a more abstract form of each archetype in the Collective Unconscious which governs its expression in a particular culture. Thus, Maximus is one possible example of The Warrior archetype in the same way that English is one possible example of a human language. Just as Chomsky and other linguists study the patterns of language to try and determine the more abstract forms of Universal Grammar, so Jung and other psychologists study the cultural manifestations of the archetypes to try and arrive back at the more abstract forms. At this resolution, the two methods seem the same. However, Chomsky created a detailed and highly technical analysis of syntactic forms in his generative grammar whereas Jung did not aim for anything like that level of precision. In my opinion, what happened with Chomskyan linguistics is that the generative grammar became a complex game in its own right and arguably a distraction from the underlying goal. What’s more, it doesn’t appear to have worked. The early results, which seemed promising, were limited by a focus on syntax and specifically the syntax of English. As the model was applied to other languages, the limitations of this approach became clear. There are languages in the world where syntax simply isn’t that relevant and where morphology does most of the heavy lifting. In such languages, generative grammar becomes a clumsy and laborious way to account for the language, and this throws doubt on the mapping back to Universal Grammar.
It looks to me like Chomsky’s project fell into the kind of category error that we’ll have a look at shortly and the purported rigor and detailed technical analysis actually left out a great deal that was essential to language itself.
It’s instructive that one of the criticisms levelled against Chomsky was that, for all his purported scientific rigor, ultimately he had to rely on the intuitions of individual speakers of a language to decide what was and was not grammatical. His rules were based on such intuitions but, within the edicts of materialist science, that is bad form. We need empirically verifiable data and not ‘subjective’ data. Of course, this same accusation has been levelled against Jung. In order to see why it is invalid, and in order to elucidate the category error that I believe Chomsky fell into, let’s have a look at a way of dividing up the domains of knowledge that I first read about in Gregory Bateson’s book Mind and Nature. We’ll see that linguistics and psychology are related disciplines that fall into a separate category from the hard sciences. Because of that, they can and do share an approach to knowledge that is valid within that domain but would be considered invalid in the domain of hard science.
The two domains that Bateson outlined are pleroma and creatura. To pleroma belong the two disciplines that we generally consider hard science: physics and chemistry. To creatura belong disciplines such as linguistics and psychology as well as biology, medicine, hermeneutics and religious and spiritual pursuits. The following diagram gives an outline of this distinction.
If we accept this distinction, we can see that a category error occurs whenever we apply the methods which are valid in pleroma to creatura and vice versa. The latter of these is what is usually called superstition while the former is sometimes called scientism. What Chomsky was trying to do in linguistics is what many others have been trying to do in the 20th and into the 21st centuries, which is to apply the methods of science from the pleroma domain to the creatura domain. Chomsky’s appeal to intuition, however, was more in keeping with the domain of creatura and was for this reason criticised. This criticism is only valid if you think that real knowledge comes solely from the methods applied to pleroma. But this is exactly what Bateson, Jung and others would have denied. There were various movements and schools of thought in the 20th century that set out to put the domain of creatura on a firm footing and to address the imbalance that had arisen whereby the methods valid in pleroma had come to be seen as the be-all and end-all of knowledge. The creatura domain is always in the process of becoming. It simply can’t yield the static, reproducible results that we find in the dead world of pleroma. Moreover, in Bateson’s analysis, the whole creatura domain is governed by mind and, because mind implies a hierarchy of logical types and feedback loops up and down the hierarchy, it’s not possible to do reductionist science: the attempt at reduction at best sets a new context which didn’t exist before and at worst outright changes the very object under study. Bateson would also have said that empathy is needed in the domain of creatura. It takes a mind to know a mind. That’s why linguistics must appeal to the intuitive judgements of speakers about what is and is not grammatical. Only a speaker of a language can know those things. If you try and remove the speaker from the equation, you lose something fundamental.
This is exactly what happens in cognitive science research, where the researcher will often hide the true intent of the study from the subject. It’s very common in such research to give the subject a task to do while the thing that the researcher is trying to test for is unrelated to that task. In that way, the subject’s rational mind will not ‘get in the way’ of the results. In doing so, the researcher has removed conscious awareness from the equation. Such results may have a certain validity, but I think we can all agree that conscious awareness is a pretty important thing to be leaving out when it comes to human beings. That’s what happens when you apply the pleroma paradigm to the creatura domain. Even within the Chomskyan paradigm, with its pretension to hard science, the fact that I am a native speaker of English gives me the right to make judgements about the whole of the English language. Bateson said the same in relation to biology. As biological creatures, we can compare ourselves to other biological creatures and determine the patterns which connect us. The same goes for Jungian psychology. Each of us is a psyche and, if we can recognise the elements of the psyche within us, we can also recognise them without. That’s why you have to know thyself.
The messiness of human language mirrors the messiness of human psychology. But the desire to clean up this messiness is not valid in the domain of creatura. Even in the hard sciences, the preference for simplicity known as Occam’s Razor is mostly a practical matter. For any phenomenon there are numerous, perhaps infinitely many, explanations that account for it, but we prefer the simplest one. However, to paraphrase Einstein, explanations should be as simple as possible but no simpler. By leaving out so much of what makes language what it is, Chomsky might have won some insights into syntax, but he left out a whole lot of other things that are arguably just as important. As Mary Midgley noted, this zeal for reductionism seems tied very much to hubris, and when applied to the domain of creatura it leads to results that miss the point by excluding from consideration that which cannot be excluded.
Jung did not deny the validity of the hard sciences. Rather, he argued that the part of our mind that can think ‘causally’ and ‘rationally’ is actually quite young and is built upon a much larger and much more well-established part of the mind that interprets the world acausally or, we might say, religiously or symbolically. Many of the greatest scientists believed much the same thing and did not deny the validity of the older way of understanding. It is only in the modern world, with our extremist materialist philosophy, that the denial of that way of thinking has become common. We demand causal explanations based on quantity and number, but when these are applied to the domain of creatura we leave out that which is crucial to our understanding. Consider this: virology actually sits right on the border between pleroma and creatura. We call it a part of biology, but it deals with viruses and viruses are not alive. If we were to say that virology was actually a part of organic chemistry, we would then also be saying that it is part of pleroma. This would be useful because it would make very clear a distinction which has not been clear during corona, namely that virology and viral disease are two very different things. The former has as its object a virus while the latter has as its object the relation between a virus and a person (medicine) and the relation between a virus and a population (epidemiology). These latter two are very clearly in the domain of creatura, and this would make clear the fact that we would not want to apply the methods that are valid in pleroma to those disciplines. But, in fact, the whole corona discourse has been based on exactly that. The obsessive counting of ‘cases’ is just one aspect of that category error. The same people who are happy to gloss over the analytical difficulties of viral disease are the ones who would deny archetypal analysis because it does not establish causality.
That correlation does not imply causality is one of the basic principles of hard science, and yet it is correlation that has driven the whole corona business. Ironically, the corona event has been driven largely by those older faculties of the mind that Jung described and not the younger, scientific faculties. And yet it’s the people who deride those older faculties as superstition who have been most susceptible to them. In a way, that’s not surprising. If you aren’t used to using those faculties, you are going to be defenceless in the same way that somebody who has no fighting skills is defenceless against a trained martial artist. Our modern materialist society pretends those faculties don’t exist, and so we have a population of people who are completely blind to what is still the main driver of human affairs. We pretend that the rationalist tip of the iceberg is all that exists while being wilfully ignorant of the power and mass of the submerged unconscious.
Thus, the error in corona has been the same error that Bateson and the other systems thinkers and cyberneticists had already identified in the 20th century: the category error of believing that only the precepts of reductionist science can give rise to knowledge, and the invalid application of those precepts to the creatura domain. What is required in the creatura domain is the acceptance of holistic thinking. It’s a generalist approach that uses what the philosopher Charles Sanders Peirce called abductive reasoning. This involves drawing patterns across domains or looking for what Bateson called the pattern which connects. What abductive reasoning implies, and it is this which most offends against our materialist prejudices, is that certainty is not attainable. By failing to acknowledge this, we do what we have done during corona, which amounts to nothing more than a desperate grasping after a certainty which can never be attained. We attempt to simplify things which cannot and should not be simplified and in so doing we fall into the trap outlined by the old saying – we cut off our nose to spite our face.
All posts in this series:-