
In my continuing analysis of the Coronavirus situation, this week on The Melbourne Flâneur I want to discuss with you how we make sense of this and the other n-th order infinite impact crises resulting from the Coronavirus pandemic.
In my previous post, I set forth a theory of how viral incivility operates online, via social media, to poison our collective sensemaking environment—what I call the ‘cognitive commons’—and how the pollution of disinformation, misinformation, and insufficiently targeted information creates ‘externalities’ for us all in the virtual environment, just as pollution creates externalities in the real world.
I stated in that post that online viral incivility is an abuse of the privilege of free speech, which is essentially an abuse of the freedom of thought.
It occurs when people who are high in external individualism—that is, who intrinsically value their freedom of will, of action, the right to ‘do what they like’, more than their freedom of thought—instrumentally use the privilege to speak their minds to spread bile, banality and abuse throughout the cognitive commons.
Free speech, as I said, is the vector along which human beings communicate their thoughts and ideas with one another, and when, in times of existential crisis, our cognitive commons are polluted with disinformation, misinformation, and information that not everybody needs to know, our ability to make sense of the Coronavirus crisis and to take concerted, collective action against it is severely hampered.
If, as a writer, I am concerned about the way that the Coronavirus has exposed the issues associated with free speech that Western democratic societies were debating prior to the pandemic, it is because the instrumental abuse of the privilege of free speech has profound implications and ramifications not only for how we understand this crisis through the best vector we have available for communicating with ourselves as a collective, but also for what we do to solve it and the other n-th order infinite impact crises it has unleashed.
Our ability to collectively apply our will and act in a concerted, global effort is predicated upon our collective ability to think and make communal sense of this crisis by means of speech.
The alternative to high external individualism, as I explained in my last post, is high internal individualism—the intrinsic valuing of freedom of thought, and its instrumental application in the freedom to act, to do, to speak.
Freedom of thought and its ancillary, freedom of speech, need to be intrinsically safeguarded because it is difficult to make the case that acting first and thinking second is a more appropriate response to a species-level crisis than thinking before acting.
The latter value, freedom of will, is therefore instrumentally guaranteed by the intrinsic value of freedom of thought—for free thought cannot be actualized except by freedom of action.
In my third post on this topic, I introduced you to the notion of networkcentricity as the symbol—and symptom—of life in the 21st century. The more deeply we look into our collective reality, the more the metaphor of the exponentially expanding network jumps out at us as the mandala of our age.
The Coronavirus is an exponential network. Our social media are exponential networks, and the Internet—‘the World Wide Web’—is an exponential network. We begin to conceive the ecosystems of nature as ‘networks’: animals school, swarm, flock and herd in networks, and we ourselves, spread across the globe, are ‘networked’, physically and virtually.
And when we consider with what avidity people high in external individualism attach themselves to clustered groups based on shared interests, preferences and biases, willingly subordinating themselves to what I called, in my last post, the ‘memetic possession’ of ideology and groupthink, it appears as though the metaphor of the network has revealed something new about the intrinsic nature of man.
We too ‘swarm’ like ants and bees, ‘school’ like fish—and ‘herd’ like sheep.
The classical model of the sovereign, Cartesian individual, he who thinks himself into being, can no longer hold—and perhaps this critical thinker high in internal individualism is more an historical and social anomaly among his peers in the long race of man, one to be ‘replaced in the modern age by transhistorical impersonal forces,’ as Edward Said put it.
Psychologist Kenneth Gergen hazarded that ‘we may be entering a new era of self-conception. In this era the self is redefined as no longer an essence in itself, but relational’ [my emphasis].
Perhaps it is not man’s ultimate destiny to realize himself as a sovereign individual in a social network, but to realize himself as a node, a neuron in a eusocial cognitive collective.
Perhaps we’ve always been a swarming creature, like ants or bees, in our deepest nature, and the extraordinary, outsize neural network which each one of us carries in his head, and which is the species characteristic and evolutionary advantage of homo sapiens—‘thinking man’—has merely required the technological development of a non-local, externalized, virtual neural network in order for us to realize our evolutionary destiny as a ‘hive mind’.
It is well-known that human beings are the most social animals on the planet. All the fruits of civilization we enjoy are due to our sociality, our ability to co-operate and collaborate.
Equally, all the bad things that human beings do to one another can be attributed to our instinct for sociality and the absolute abhorrence which most human beings (your humble author excepted) feel at the prospect of being utterly cut off from human society. Like siblings who tease and hurt one another, we would rather have another human being to hate, beat, violate and murder—or be hated, beaten, violated and murdered by—than be left utterly alone.
But despite the claims of E. O. Wilson, and despite the fact of being the most pro-social lifeform on our planet, we have never graduated to that condition which certain swarming insects, such as ants and bees, have evolved to, the condition of eusociality, where every member of the collective acts individually in the best interests of the collective.
Our individual consciousness, the rational mind which allows us to think, to discriminate, to divide, which enables each of us not merely to demonstrate the same freedom of will and action that other animals demonstrate, but gives each of us the potential for sovereign freedom of thought to manifest in free action, discriminates and divides our personal, individual interests from those of the collective.
When each of us can do whatever we like to maximize our own private fitness functions, what need have we, like ants and bees, to consider the maximal fitness function of the collective?
The consciousness of our individual sovereignty, the sense of ourselves as being ‘separate’ from one another, and from the larger network of nature, has been both the tool which has gotten us to the place where we are now as a species, and the hindrance which prevents us from evolving to a steady-state, sustainable model of eusocial fitness in a finite environment.
Technology, the exponentially exploding product of our consciousness, has finally produced the Internet and social media, intellectual technologies which manifest our species’ deep-seated avidity not merely to connect relationally with one another, but to have access to each other’s minds, to pool our cognitive resources in a geometric synergy which increases computing power exponentially as each neural node is added to the network.
Which is to say that, with the digital technology of the Internet and social media, we have finally evolved a means to consciously evolve ourselves.
In his journal article “The New Superorganic” (2004), F. Allan Hanson of the University of Kansas states that, in considering our capacity to pool and share our cognitive resources via the commons of the Internet, we should include the medium itself as essentially an agentic partner in this non-local, externalized hive mind.
Citing Gregory Bateson’s ‘provocative insight that the agent conducting any activity should be defined so as to include the lines of communication essential to that activity rather than cutting across them,’ Hanson defines agency as an ‘intelligent performance’ which ‘manifests mind’. And the act or behaviour of ‘minding’, according to Hanson, is the management of information.
In other words, there is a generalized intelligence diffused in ‘the field’—the field vectorized by the medium of the Internet—as an embodied activity. The field itself has intelligence and synergistically ‘manifests mind’—not merely the minds of all the individual users, but a geometric ‘over-mind’ which includes the relational architecture of the network itself.
What I’m calling ‘the field’, the integration of our lines of technological communication into the collective management of information, is a symbiotic, agentic partner which is itself coevolving in our species’ coevolution towards eusociality.
We know that there is a definite biological limit to the number of social relationships human beings can healthily maintain. Robin Dunbar found that stable group size in primates correlates directly with neocortical size, and that for humans the limit averages out at about 150 social relationships.
We know equally that the brain is not hardwired, but malleable. Neuroplasticity allows neurons to forge new cortical connections, reorganizing our individual neural networks so that we can learn new things, new habits.
I would hazard that if we are to evolve, in a quantum leap, to a new steady-state, sustainable model of eusocial cognitive fitness in a world we are exponentially exhausting, a species-wide ‘consciousness-raising exercise’ needs to take place: En masse, we need to forgo old cognitive habits and enter the brain’s ‘exploration mode’, leveraging our individual capacities for neuroplasticity to consciously create an external, non-local neural network of social relations which scales beyond Dunbar’s Number to include the whole human family.
How can we achieve such a biological and spiritual coevolution in consciousness when the complexity of our common crises is exponentially reducing our capacity to make good collective sense of our problems?
Fortunately, we have one cognitive technology which maps very nicely to the externalized, visual affordances of the Internet. It’s called a ‘schema’.
A schema is a cognitive map. It is the assemblage of items stored in long-term, unconscious memory arranged as organized patterns of knowledge, and the possession of complex, detailed schemas is absolutely essential for an individual to make good sense of himself, of the world, and of any given domain of knowledge.
‘Our intellectual prowess is derived largely from the schemas we have acquired over long periods of time,’ says psychologist John Sweller. ‘We are able to understand concepts in our areas of expertise because we have schemas associated with those concepts.’
Individuals have schemas. At a higher level of recursion, groups such as churches, businesses, social organizations, political parties and nation-states all have schemas, which are composed of overlapping individual schemas based around shared perspectives, interests, preferences and biases. And, at the highest level of recursion, humanity has one very schizophrenic ‘meta-schema’ composed of all schemas.
Schemas allow us to respond to familiar situations very quickly, but their weakness is that they tend to remain static—even in the face of overwhelming contradictory evidence.
As a species, this is the situation we find ourselves in with respect to the Coronavirus. The spirit of life in the 21st century demands that we should change our standard ways of perceiving, thinking and doing at an exponential rate, but we are very slow and reluctant to change even with a tsunami of entangled crises bearing down on us.
Human beings—like all organisms—have two main ways of perceiving reality, thinking about it, and acting in accord with their perceptions of and thoughts about what is real. These two main ways have been variously defined: Herbert A. Simon defines them as ‘recipes’ and ‘blueprints’; Jordan Hall has described them as ‘habit mode’ and ‘explore mode’; and Luís M. A. Bettencourt calls them ‘exploitation’ and ‘exploration’.
These methods of thought and action have to do with relative perceptions of risk and reward. Like all organisms, human beings like to minimize risk and maximize reward. ‘Recipes’, ‘habits’ and ‘exploitation’ are all variations on this strategy: where there is a perception of low uncertainty (and therefore risk) in the environment, we respond to it in a habitual fashion, employing ‘recipes’ we’ve developed to exploit the maximum reward from the environment.
These cognitive and behavioural ‘short-cuts’ are useful: they optimize our thoughts and behaviour towards efficiency in highly predictable circumstances which accord with our schemas of reality.
‘Blueprints’, ‘explore mode’ and ‘exploration’ have to do with schema-formation itself. In novel circumstances where risk to the organism is high and the promise of reward is uncertain, it is useful to approach the environment with a fluid conceptual map of reality, one which can be redrawn to identify risks and rewards.
Obviously, the cognitive and behavioural work of schema-formation is energy-intensive, and it would not be an efficient use of an organism’s energy to ‘remake the wheel’ in conditions where rewards can be predictably exploited. But in novel circumstances of high risk, attempting to maximize a diminishing reward in preference to exploring alternatives in the environment is a maladaptive behaviour.
As Bettencourt puts it in his 2009 journal article “The Rules of Information Aggregation and Emergence in Collective Intelligent Behavior”, ‘uncertainty’ is essentially an entropic state: when what we know about a target variable X is less than what we don’t know, the domain of knowledge to which X refers is tending towards a state of disorder.
We use the cognitive and behavioural techniques of recipes and blueprints, habit mode and explore mode, exploitation and exploration to attenuate entropic uncertainty by bringing ‘order’ to complex problems. We arrange what we know with respect to X in a schema which shows not only what information has been collected from habitual exploitation, but also the ‘holes’ in our knowledge which need to be explored and mapped.
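To make Bettencourt’s entropic framing concrete, here is a minimal Python sketch of my own (an illustration, not Bettencourt’s formalism). It treats what we know about a target variable X as a probability distribution over rival hypotheses, and shows how gathering information attenuates the Shannon entropy of the schema:

```python
import math

def entropy(p):
    """Shannon entropy (bits) of a distribution over rival hypotheses about X."""
    return -sum(q * math.log2(q) for q in p if q > 0)

# Four rival hypotheses about X, initially equiprobable: maximal disorder.
prior = [0.25, 0.25, 0.25, 0.25]
print(entropy(prior))       # 2.0 bits of uncertainty

# Exploration rules out two hypotheses and favours one of the remainder:
posterior = [0.7, 0.3, 0.0, 0.0]
print(entropy(posterior))   # ~0.88 bits: the 'holes' in the schema have shrunk
```

The information gained is simply the differential between the two entropy levels, here a little over one bit.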
The novel Coronavirus is a good example of X: being new, the amount of entropic uncertainty with respect to this target variable is high, and with high uncertainty, there is high risk and low reward. There are things our doctors and scientists do know about X, information obtained from previous cognitive explorations which has been collectively absorbed into habitual exploitation, but on the whole, there are many more gaps in our knowledge of the Coronavirus due to its novelty.
The collective schema of ‘Science’ (which is composed of many sub-schemas, such as ‘Medicine’ and ‘Biology’, not to mention the sub-sub-schemas which individual doctors and scientists hold) helps to orient the collective quest for attenuating the entropic complexity associated with X.
To quote Donald Rumsfeld’s infamous phrase, there are ‘known knowns’ associated with the Coronavirus and there are ‘known unknowns’: the shared schema of Science allows us to know what we know with respect to this novel virus, but also to know what we don’t know and need to explore.
The example of the Coronavirus makes it clear that expertise is needed in domains of knowledge to develop schemas of thinking which accurately reflect reality in order to orient efficient action: the primacy of free thought undergirds the effectiveness of free will.
More specifically, the Coronavirus, as a risk which impacts all humans across a multitude of entangled dimensions, makes it clear that we need experts in all domains of knowledge if we are to develop a global schema which enables us to take effective collective action to mitigate the threat of the virus and all the n-th order infinite impact risks it entails.
And here precisely is where our meta-schema has broken down, for the popular distrust of experts produced by externalities in the global sensemaking architecture of our cognitive commons makes it that much harder for us all to be on the same eusocial page with respect to Coronavirus and individually act for the good of the collective.
Citations of the Dunning-Kruger effect with respect to the externalities introduced into our collective sensemaking environment by non-hierarchical, peer-to-peer communication of information through the Internet are now so frequent that the term hardly needs to be explained.
In fine, incompetent individuals tend to overestimate their competence in domains where they are not expert, and lacking well-developed schemas in these fields, they lack even the meta-cognitive capacity to accurately perceive their incompetence.
On the other hand, as Alexander Plencner of the University of SS. Cyril and Methodius in Trnava explains in his article “Critical Thinking and the Challenges of Internet” (2014): ‘Because [the most competent individuals] were more experienced in tested domains, they were well aware of what they did not know. But as soon as the researchers presented them their positive results, they adjusted their self-assessment to more objective levels.’
From this we can hypothesize two implications for the Dunning-Kruger effect. If the target variable X represents a domain of skill or knowledge, then the schema that expert individuals have built up of X is lower in entropic uncertainty than for non-expert individuals, for not only is there more definite information with regards to X available to the expert within his schema, but there is less entropy regarding what is actually uncertain within the schema: these gaps are ‘known unknowns’.
But I would go even further and hypothesize that not only can we regard X as a domain of knowledge external to the expert, but that X may also describe a ‘meta-schema’ within the cognitive economy of the expert himself: the differential uncertainty of knowledge with respect to oneself.
As entropy diminishes in that domain through the assimilation of aggregated information derived from another, more objective source, uncertainty about oneself, about one’s own cognitive competencies and aptitudes, one’s own fitness in accurately adjudging and acting with respect to reality, also decreases, and the accuracy of one’s schema about oneself increases.
This differential knowledge about one’s own competency to accurately adjudge and act with respect to reality is not available to the non-expert who is prey to the Dunning-Kruger effect. This is because, as Plencner observes, when such people are confronted with their negative results in the tested domain, they do not adjust their positive self-assessments downwards as experts adjust their negative self-assessments upwards.
So why does this happen? Why do masses of people with access to ungated knowledge in the cognitive commons so radically overestimate their capacity to make good sense in domains which demand expert guidance to orient effective action?
I think two issues are at play with respect to schema-formation.
On the one hand, confronted with great entropic uncertainty such as we are facing with the Coronavirus, a maladaptive, habit-based approach which assumes there to be much less entropy in the present environment than there actually is leads people who are heavily invested in the status quo, such as politicians and inordinately successful businesspeople, to rehearse scripted, recipe-based responses intended to exploit fitness rewards which are, in fact, exponentially diminishing.
This is perhaps the easier of the two maladaptive mindsets to understand. What I am essentially suggesting is that there is an overwhelming temptation for people who have been well-rewarded by exploitation to ignore the salient signals of danger in the environment and continue to employ the habit mode of exploitation when exploration would be a much more appropriate response to entropic uncertainty.
But on the other hand, the explosion of paranoiac conspiracy thinking which may be attributed to the popular distrust of experts in the non-hierarchical, peer-to-peer cognitive environment of the Internet demonstrates how explorative schema-mapping itself can be a maladaptive behaviour in response to a common crisis where we desperately need the responsible guidance of experts to direct us.
In this model we see that, when confronted with the same highly entropic, highly novel environment which the Coronavirus has thrust us into, non-expert individuals assume there to be too much salience in the environment: they misperceive what is actually noise as being ‘super-salient signal’.
And, paradoxically, instead of remaining open in explore mode, these people actually curtail and arbitrarily attenuate the complexity presented by the environment, seeking to collapse the super-salient signal they perceive as quickly as possible into an alternative schema which is itself more a product of exploitative habit than of genuine intellectual exploration.
In the two cases I’ve described above, both schemas are faulty, the first because it fails to apprehend novel irregularities in the environment as such, and the second because it misapprehends novel irregularities as being regularities in a schema which bears even more doubtful fidelity to reality.
One could say that in the paranoid, psychotic schema of conspiracy thinking which has exponentially exploded with the Coronavirus, the exploratory mode of schema-making itself is ‘hijacked’, exploited by habit mode. It condenses novel irregularities too quickly into the regularities of an inaccurate, totalizing discourse which is an alternative to the ‘official story’ of the experts, producing not ‘alternative knowledge’, but what Damian Thompson calls ‘counterknowledge’—an ignorance which is the inverse of knowledge.
Why has this happened?
In my first post on the Coronavirus, I posited a theory of value extraction. When it comes to extracting reward from any given domain of competence, an expert in that field will always be a fitter agent than a non-expert. Our institutions—governments, corporations, universities and media—are composed of specialized experts who, due to their fitness in their respective domains, have derivatively extracted more and more real value from the commons, which they have centralized to themselves.
And information—the commodity that experts traffic in—is itself an extractable resource which, like more tangible resources, can be siloed and centralized to oneself for exclusive value gain. With information as with wheat, corn, oil, gold or a Coronavirus vaccine, if you have the market cornered on some domain of knowledge, you can name your own price in parcelling the resource out to the non-initiate.
The inequalities associated with hierarchies of experts deriving inflated value from the mass of non-experts under a policy of extraction have activated a sense of distrust in the populace.
It is very difficult to trust the motives of someone who stands to gain materially—often at our expense—by compelling or influencing our compliance with their vested prescriptions. Whatever gain we might get from accepting the advice of experts is still differentially less than the exorbitant gain they extract from us by compelling compliance with their recommendations.
At the world-historical moment when we meet the crisis of the Coronavirus, hierarchical authorities of all kinds have lost their credibility with the populace due to the progressive erosion of trust that is a consequence of their extractive policies.
In response to the perceived greed of experts, whose superior sensemaking capacity is compromised and corrupted by their vested interest in compelling compliance with their prescriptions, people in the cognitive commons of the Internet and social media prefer and crave peer-to-peer recommendations from those they actually know and trust.
Knowledge is power, and since power inevitably corrupts, a distrustful, conspiratorial view of hierarchical authority is legitimated in the population by the abuses of experts who have eroded our collective sensemaking capacity for personal ends, and the alternative—non-hierarchical, peer-to-peer sensemaking by non-experts—is popularly preferred.
In essence, at the moment in history when humanity desperately needs to be guided by its experts in schematic collective sensemaking, the great mass of people perceive that those who have access to the most—and the best—information do not have the collective’s best interests at heart.
The successive and exponential erosion of public trust through a policy of extraction by the fittest members in our society leads non-experts to believe that these people ‘lack empathy’ for them, since they continually extract hyper-inflated value in return for the little, heavily compromised information they parcel out.
And in some sense, this perception of a ‘lack of empathy’ in our ‘egghead experts’ represents the differential between the game-theoretic sociality of our species and the eusociality towards which we need to collectively evolve.
The aggressive rationality, the ability to divide, discriminate and derive in our expert discernment of reality is the one-sided, left-brain development which has enabled us to optimize towards our present efficiency as a species: our reason-based collective schema is the most detailed and accurate map of reality on the planet.
But until we can transcend the neocortical limit of Dunbar’s Number, until we can consciously regard the members of our species who are most distant from us as being as much a part of ourselves as our nearest and dearest—that is, until we evolve our right-brain empathic capacity to think in wholes just as well as we can think in derivatives—we cannot transcend our game-theoretic sociality to our eusocial destiny of creating a collective schema which enables us to make the most efficient sense of reality.
How do we make such a collectively coherent map of reality which facilitates efficient, concerted action in the world towards the solution of highly complex, highly entropic problems?
The first thing to acknowledge is that non-experts have as great a place at the table in schema-formation as experts, for as Bettencourt notes, there are a surprising number of instances ‘when a solution produced by an informal collective of individuals, each with partial information, can surpass in quality and speed those produced by experts….’
We need diversity, but, as Jackie Jeffrey of Middlesex University states in her article “Diversity management through a neuro-scientific lens” (2015), the mania to include ‘diverse perspectives’ in our collective decision-making has to be re-oriented from its present approach, based on superficial and external group differences, to a ‘brain-based approach’.
‘In a review of a variety of literature and organizational policies from around the world,’ Jeffrey writes, ‘all appear to think about diversity from an equal-opportunities perspective dominated by what is perceived as “homogenous group differences, most of which are rooted in some form of oppression”.’
While it stands to reason that people from diverse races, sexes, ages and religious backgrounds bring something distinctive to collective sensemaking, these ‘homogenous group differences’ are not the best selection criteria for geometrically amplifying our cognitive diversity in collective schema-formation.
As Jeffrey, quoting the Co-Intelligence Institute, says, these homogenous group differences ‘“overshadow [the] hundreds of other differences, most of them very individual—and many of which are far more significant to our ability to generate collective intelligence” needed to effectively manage diversity….’
Individual differences in thinking style—which of course may be influenced by these group differences but are rarely determined by them—are the keys to leveraging the cognitive diversity necessary to develop a species-wide schema which will facilitate coherent and efficient collective action in the face of existential risk.
Moreover, as F. Allan Hanson points out, the individual differences in thinking style necessary to leverage cognitive diversity are enhanced by our individual interactions with the cognitive tool we use to collate and communicate information—the Internet.
The indexing of information on the Internet, rather than the traditional classification of it in rigid, hierarchical, taxonomic schemas, enables different thinkers to ‘pull out’ different data sets from the cognitive commons. The creative capacity to fluidly and individually reconceptualize schemas of human knowledge through indexing facilitates the inclusion of diverse perspectives in the meta-schema of our cognitive commons.
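As a toy illustration of the difference (my own sketch, with invented documents and tags), an inverted index of the kind that underlies search and tagging lets every reader cut a different cross-section through the same commons, where a rigid taxonomy would file each item in exactly one branch:

```python
from collections import defaultdict

# Invented documents and tags, for illustration only. A taxonomy files each
# document in one branch of a tree; an index lets each reader 'pull out'
# a different data set from the same commons.
documents = {
    1: {"virus", "bats", "biology"},
    2: {"virus", "policy", "economics"},
    3: {"networks", "policy", "biology"},
}

index = defaultdict(set)
for doc_id, tags in documents.items():
    for tag in tags:
        index[tag].add(doc_id)        # tag -> the set of documents bearing it

# Two thinkers, two schemas, one commons:
print(index["virus"] & index["biology"])      # {1}: one reader's cross-section
print(index["policy"] | index["networks"])    # {2, 3}: another's
```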
So in terms of complex problem-solving with respect to the incomplete schema of any one of our existential risks, there are definitely advantages to the non-hierarchical, peer-to-peer cognition which the Internet facilitates as an agentic partner in the management of information.
For if, as I stated in my third post on the Coronavirus, all the problems of our century are variations on the question of how we deal, as a species, with the non-linear progression of exponentials through networks, it is heartening to note, as Bettencourt does, that information itself—like the Coronavirus, like the Internet, like the network of humanity—also aggregates in a non-linear, exponential fashion.
The solution to our problems is in the form-problem itself.
Unlike matter or energy, Bettencourt states, the peculiar characteristic of information as a quantity is that ‘knowledge from many sources can in fact produce more information (synergy) or less (redundancy) than the sum of its parts.’
This can occur even in relatively non-expert populations. If we view information, as Bettencourt does, as ‘a sort of differential between two levels of uncertainty’ with respect to a target variable X, then the entropic uncertainty regarding the schema which X constitutes can be synergistically diminished even by a group of non-experts pooling their cognitive resources.
In this example, each individual in the collective has a schema which includes partial information with respect to X. Where the knowledge gained from each member about X is not merely additive in relation to X, but adds to the knowledge of the other members of the collective, synergy emerges from the pooling of cognitive resources.
Not only is the uncertainty surrounding the communal schema of X reduced; the uncertainty in the meta-schema of the group as a whole is also synergistically reduced. The more information is transactively introduced into the consciousness of the group via the mechanism of free speech, the more probable it becomes that something somebody says unlocks a piece of information in another member’s mind which helps to fill in another gap in the schema surrounding X.
As Bettencourt says, so long as the pieces of information pooled in this form of collective cognition are ‘conditionally dependent’ (that is, they are relevant to the target variable X) yet still reasonably independent of one another, synergistic problem-solving of superior speed and quality can occur even among a population of non-experts.
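Bettencourt’s synergy can be seen in miniature in the classic parity example. In this sketch (my own illustration, not Bettencourt’s), the target variable X is the XOR of two hidden binary features; either agent’s clue is worthless alone, yet pooled they resolve X completely, which is strictly more information than the sum of the parts:

```python
import math
from itertools import product

# X is the parity (XOR) of two hidden binary features. Agent A holds feature
# a; agent B holds feature b. Either clue alone reveals nothing about X, yet
# pooled they determine it completely: synergy.
pairs = list(product([0, 1], repeat=2))   # equally likely (a, b) combinations

def X(a, b):
    return a ^ b

def residual_bits(possible_x):
    """Entropy (bits) left in X when only these values remain possible."""
    return math.log2(len(set(possible_x)))

H_X = 1.0   # before any clues, X is a fair coin: 1 bit of uncertainty

gain_A  = H_X - residual_bits(X(a, b) for a, b in pairs if a == 0)  # 0.0 bits
gain_B  = H_X - residual_bits(X(a, b) for a, b in pairs if b == 0)  # 0.0 bits
gain_AB = H_X - residual_bits([X(0, 0)])                            # 1.0 bit

print(gain_A + gain_B, "<", gain_AB)   # 0.0 < 1.0: more than the sum of parts
```

The two clues here are conditionally dependent (both bear on X) yet independent of one another, which is exactly Bettencourt’s condition for synergy.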
How much quicker and more discerning, therefore, could our responses be to common existential problems like the Coronavirus if we transactively utilized the detailed individual and communal schemas of experts in guiding non-expert decision-making at scale?
But there’s also a place for redundancy—the aggregation of information which adds nothing new to the schema yet serves to confirm what can definitely be known about X—in schema-formation.
In triangulating and triaging information for collective sensemaking, redundancy serves the important function of validating and verifying the accuracy of information contributed by individual sources. As Bettencourt points out, it is not sufficient that the pool of information should be ‘conditionally dependent’ with respect to the entropic state of X, but it is also necessary that the sources of information themselves should be ‘sufficiently independent’ of each other so as to confirm what can be communally known about X.
There is, therefore, a ‘social compact’ which we enter into whenever we undertake collective sensemaking via the cognitive commons.
The social compact is that each neural node in the network will earnestly contribute to reducing entropic uncertainty with respect to a communal problem through mechanisms of synergy and redundancy without needlessly and deliberately introducing misinformation about X, disinformation about X, or information which is not conditionally relevant to X into the cognitive commons.
I’ve described schemas as being ‘cognitive maps’. When dealing with complex problems in cognition, rather than attempting to organize information within abstract taxonomies and rubrics, it’s often easier to ‘visualize’ information, and this is certainly true when it comes to cognition at scale.
As human beings, we have a right-brain bias towards holistic, visual thinking, which comes more naturally to us than left-brain, language-based reasoning. The popularity of ‘brainstorming’ and other communal sensemaking activities which visualize information demonstrates that, when it comes to getting everyone ‘on the same page’ conceptually, it’s often best to get everyone on the same page literally, mapping out the network of connections which random ideas suggest.
When it comes to complex problems which require collective sensemaking to solve, the target variable X is both the problem and the schema—the domain of knowledge to be mapped.
My favourite example of a cognitive technology which seems to perfectly illustrate how schematic collective sensemaking attenuates this paradox is the crisis-mapping platform Ushahidi.
Developed in the aftermath of the 2007 Kenyan general election, the platform was initially designed to visualize real-time reports of post-election violence on a map of Kenya. It’s had many applications since then, most notably in crowdsourcing information and co-ordinating crisis response to the 2010 Haiti earthquake.
When it comes to as visceral a problem as an earthquake, it’s difficult for a population not to be on the same cognitive page: the crisis is sensed, known, experienced, and felt. Problem and schema coalesce: X is the crisis, and we need, as a collective, to rapidly understand and rapidly respond to X.
As Claude Gilbert defines the problem: ‘disaster is first of all seen as a crisis in communicating within a community—that is, as a difficulty for someone to get informed and to inform other people.’
With the Ushahidi crisis map of Haiti, information about the crisis from the edges of the network—the sources on the ground who were most immediately knowledgeable about real conditions—was visually organized, in real-time, onto an open-source map of Haiti.
And as Craig Clarke, an intelligence analyst in the U.S. Marine Corps, told Jessica Heinzelman and Carol Waters in their report “Crowdsourcing Crisis Information in Disaster-Affected Haiti” (2010): ‘In this postmodern age, open-source intelligence outperforms traditional intel…. The notion of crisis mapping demonstrates the intense power of open-source intelligence…. [W]hen compared side by side, Ushahidi reporting and other open sources vastly outperformed “traditional intel” [after the Haiti earthquake].’
The reason Ushahidi and other open-source platforms which intrinsically value free speech in order to instrumentally facilitate free action provide better intel than traditional sources is that conditionally dependent data from the edges of the network synergistically produce more information about a given unknown situation X, while the redundancy of reports enables verification of that information’s accuracy.
As more information is mapped onto the crisis map, the entropic uncertainty inherent in the schema which the crisis map visualizes reduces. More information is produced which enables the collective to identify known knowns in the schema, and known unknowns which require further investigation and report.
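The redundancy principle is simple enough to sketch. What follows is my own toy illustration, not Ushahidi’s code, and the grid size and source quorum are invented parameters: reports are bucketed into map cells, and an event in a cell is only treated as corroborated once enough distinct sources place it there.

```python
import math
from collections import defaultdict

CELL = 0.01    # grid resolution in degrees (~1 km): an assumed tuning parameter
QUORUM = 3     # distinct sources required for corroboration: also assumed

def grid_cell(lat, lon):
    return (math.floor(lat / CELL), math.floor(lon / CELL))

def corroborated(reports):
    """reports: iterable of (source_id, lat, lon, category) tuples."""
    sources = defaultdict(set)
    for src, lat, lon, cat in reports:
        sources[(grid_cell(lat, lon), cat)].add(src)   # distinct sources only
    return {key for key, srcs in sources.items() if len(srcs) >= QUORUM}

reports = [
    ("sms:a",       18.541, -72.336, "trapped"),
    ("sms:b",       18.542, -72.337, "trapped"),
    ("twitter:xyz", 18.543, -72.338, "trapped"),
    ("sms:a",       18.541, -72.336, "trapped"),  # same source twice: no extra weight
    ("sms:c",       18.610, -72.290, "water"),    # lone report: a 'known unknown'
]
print(corroborated(reports))   # only the 'trapped' cell reaches the quorum
```

Independent sources converging on the same cell collapse uncertainty; a single uncorroborated report remains a gap in the schema requiring further exploration.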
The extraordinary thing is that this wealth of actionable information, which was vastly superior to the expert systems of crisis responders, came largely from a population of non-experts. As Heinzelman and Waters write: ‘Of the more than 3,500 messages published on the Ushahidi-Haiti crisis map, only 202 messages were tagged as “verified”….’
Since the Haiti crisis, Ushahidi has developed better systems for triangulating and triaging crowdsourced information to better conserve the social compact of collective cognition, but even in this instance, when the technology was in its beta phase, in terms of efficiency of actionable response, the net benefits associated with schematic collective sensemaking far outstripped the hierarchical siloing of information in expert crisis response systems.
Moreover, if experts are to regain the trust of populations, there is a lesson for them to learn from the Ushahidi-Haiti crisis response which is appropriate to their expertise.
What Heinzelman and Waters call ‘sentiment analysis’, the algorithmic processing of large volumes of written language from emails, text messages and social media posts to gauge the overall mood of given populations in real time, would enable experts to rebuild their empathy with the collective by demonstrating that they hear the emotions of non-experts and are sincere in offering solutions which address their emotional concerns.
Sentiment analysis at scale would enable authorities not merely to regain the trust of populations by allaying their apprehensions in real time, but would serve to mitigate extremist polarization. As Plencner observes: ‘When dissatisfaction is strong enough and overtakes [a] critical mass of [the] public, the electorate often turns towards radical political parties.’
Failing to get their needs and concerns adequately acknowledged and addressed by mainstream experts leads non-experts to succumb to the pull of polarization and the consequent schizophrenic breakdown in schematic collective sensemaking.
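Even the crudest sketch shows the shape of the idea. Real sentiment analysis relies on trained language models; the word lists and messages below are invented for illustration, and the point is only that a stream of short texts can be distilled into a real-time mood signal which authorities could actually answer:

```python
# A toy lexicon-based mood gauge. Production systems use trained models;
# these word lists are invented for illustration.
NEGATIVE = {"afraid", "angry", "trapped", "hungry", "ignored"}
POSITIVE = {"safe", "grateful", "helped", "recovering", "hopeful"}

def mood(messages):
    """Crude population mood in roughly [-1, 1] from a stream of short texts."""
    score, total = 0, 0
    for text in messages:
        words = set(text.lower().split())
        score += len(words & POSITIVE) - len(words & NEGATIVE)
        total += 1
    return score / total if total else 0.0

stream = ["we feel ignored and afraid", "aid arrived, so grateful", "still trapped"]
print(mood(stream))   # negative: dissatisfaction rising, and addressable in real time
```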
As an existential threat which affects us all, the Coronavirus has revealed the urgent necessity for our species to develop a coherent shared map of reality which will facilitate agile collective action to curtail exponential risks.
It’s also exposed how fractured our communal sensemaking environment is, how deeply out of touch we are with reality, both individually and collectively.
wow – lot of info to digest there. i like very much what you are saying about humans swarming like bees or ants – the idea of the collective good overriding any individually defined goals. gutkind coined the phrase – ‘the absolute collective’, which i think points towards the same idea.
there must be already a natural tendency towards the common good, being as we are wholly integrated within the phenomenon of life. in other words our own unique predilections and talents, if followed and developed faithfully, will naturally lead to a common benefit.
the idea of a collective consciousness points towards a transcendence of ‘tribal’ differences, and if we take the collective consciousness as being the collective unconscious brought into the light of consciousness – then we can see that this ‘common ground’ already exists as latent potential.
another way of putting it is in terms of computing power. the quest for the quantum computer – which is the quest for a ternary instead of a binary system. in the quantum computer there is one, zero and one/zero. in binary it is just one and zero.
the binary system is aristotelian logic; the quantum computer does not conform to aristotelian logic – however it does fit the electrical phenomenon of life itself – a positive charge, a negative charge and a neutral charge.
what i am saying is that life itself is the quantum computer we are searching for – we are embedded within this quantum intelligence, which *contains* the binary systems we use everyday, ‘for nothing stands outside of nature, not even the products of the mind’ (spinoza).
life is a healing process – and healing is effected through disease. our primary disease is schizophrenia, as you touched on, and schizophrenia is the disconnection of the two poles – logos and eros. ie what is amiss is that our schema or picture of reality does not resonate with that within us. our intellectual picture is not accurate, so we have a conflict between head and heart, between mind and body.
ie we must come up with a more accurate intellectual model of reality, one that reflects what we know instinctively, for instinct is the perfect intelligence of life.
schopenhauer said that materialism is science and idealism is philosophy; in other words science *presumes* the objective materiality of the world; philosophy *knows* that the world is *not* given objectively but rather is conditioned by the subject.
with quantum physics we see science find reality again, ie the perceiver cannot be separated from what is being perceived – we come round to berkeley – esse est percipi.
in my opinion the first step is to make comprehensible, as comprehensible as possible, the first principle of true philosophy, which is its idealistic foundation: ie *we* are the source of that which we take to be external to ourselves. this resituation is absolutely necessary to escape the intractable confusion and impotence of thinking oneself incidental to the operations of the great cosmic machine.
with this reversal of perspective we become causal agents in the world – in our world and the world at large. we are not incidental or ancillary, rather we are central. the world we take as objectively real is created continually through us. the task therefore is to switch from being an unconscious projector to a conscious projector. to taking personal responsibility for the world.
or we could say one ceases being used by the binary matrix and instead logs on to the quantum computer of life….one begins to work with the ‘source code’.
I’m basically in agreement with you, Gav. Certainly I am trying to suggest that in order to biologically scale our capacity for collective sensemaking, we need to make a conscious effort to spiritually scale our capacity for collective sensemaking, which includes the conscious incorporation of the shadow. I do not see these two processes as being mutually exclusive: I am hypothesizing that the spiritual (by which I mean, the ‘psychological’) coevolution will create a feedback loop which coevolutionarily amplifies our biological capacity to make sense of complexity at geometric scale.
In other words (as you seem to be saying), as an agent, each individual needs to understand that ‘what is without me is within me’, that the ‘enshadowed Other’ (which ultimately includes all of nature itself) is as much a part of oneself as those entities one consciously acknowledges as being a part of one’s Dunbar in-group.
I thank you for alerting me to the quantum computing concept, Gav. I must admit my ignorance on this score, but if I am following your précis of it well, I see in your suggestion an opportunity to clarify some practical points which might be useful to others seeking to realize such a technology.
With regards to the employment of binary thinking, this is a computing technique for attenuating complexity which is already well-established, both within our own brains and in our external computing systems. As Stafford Beer says in “The Brain of the Firm” (1972), the practical purpose of an efficient binary is to halve the factor of uncertainty so as to facilitate a decision which leads to action.
While I agree with you that ultimately a more nuanced system of computing complexity is desirable, in terms of practical action towards such a goal, in binaries we already have a demonstrably effective heuristic technique for exponentially slashing entropic uncertainty. When applied within the field of a group schema surrounding a target problem X, the simple application of a 1,0, yes/no binary is actually a surprisingly effective way to leverage collective intelligence which leads to concerted action.
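One way to see the slashing power of the binary is a back-of-the-envelope sketch (mine, not Beer’s): each well-posed yes/no question halves the set of live possibilities, so a group schema with N candidate states collapses in about log2(N) decisions.

```python
import math

# Each efficient yes/no decision halves the live possibilities, so N candidate
# states collapse in ceil(log2(N)) steps.
def questions_needed(n_states):
    return math.ceil(math.log2(n_states))

for n in (2, 300, 10**6):
    print(f"{n:>9} possible states -> {questions_needed(n)} yes/no decisions")
# 2 -> 1, 300 -> 9, 1000000 -> 20: entropic uncertainty slashed exponentially
```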
I think you make some tacit admission in your comment that we naturally tend to organize our thinking with respect to two salient variables (you mention the ‘Logos’/‘Eros’, head/heart binaries as examples). This is perfectly reasonable: the human brain tends to struggle with computations involving three or more independent variables, so to collectively elevate our individual computing capacity to handle a third 1/0 variable with the same dexterity that we handle 1 and 0 would, I imagine, represent a rise in our conscious group computing capacity to the third power. That would be a not insignificant leap in our capacity to solve problems at scale.
The electro-chemical computer of a single human brain currently has a storage capacity of between 10 to the 12th and 10 to the 15th bits. Multiplied across the whole human population, this represents the resolution (in bits) of our total human schema.
Beer (citing H. J. Bremermann) undertakes an interesting thought experiment in “The Brain of the Firm”. I’ll leave it to him to explain:
‘According to quantum mechanics, there is a lower limit for the accuracy with which energy can be measured. That is, there is a permanent and fundamental degree of uncertainty in matter. Any attempt to improve the accuracy of one relevant measurement will, according to Heisenberg’s Uncertainty Principle, perforce drive accuracy out of an associated measurement. The quantities involved are very small, but they turn out to matter very much. What Bremermann did was to apply the quantum law to one gram of matter for one second, and to show that the lower limit for accurate measurement places an upper limit on the information-processing capacity of the material. Beyond that limit, noughts will become confused with ones, and computation must become ambiguous. In one second, he concludes, this gram of typical matter cannot cope with more than 2 x 10 to the 47th bits of data. Of course, no one has a gram of anything which can actually be used to compute so great an amount of data; microminiaturization has not advanced that far. But, and this is his point, even at the end of the technological road it would be impossible to cram more than 2 x 10 to the 47th bits into a gram of matter in one second – because the bits would be confused by Heisenbergian uncertainty. Bremermann has conjectured, with this argument, the requisite variety of matter itself.
‘The number sounds large; indeed we have just been studying the explosive power of 2 to the power of n – and here n is 10 followed by forty-seven noughts. Moreover, we can build larger computers than a gram weight, and use them for longer than a second. But even people who are accustomed to thinking in terms of exponentials may be taken aback by the next stage of the argument. Suppose we turn the whole of this earth, the terrestrial globe, into a computer and run it for its whole history. What variety has this fantastic machine? Well, says Bremermann, there are about π x 10 to the 7th seconds in a year, and the age of the earth is about a thousand million years. Its mass is about 6 x 10 to the 27th grams. So the earth-computer, in its whole history, could have handled (2 x 10 to the 47th) (π x 10 to the 7th) (10 to the 9th) (6 x 10 to the 27th) bits. And that works out at something like 10 to the 92nd bits.’
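Before the punchline, the multiplication in the quotation is easy to check:

```python
import math

# bits per gram-second x seconds per year x years x grams of the earth
earth_computer_bits = 2e47 * (math.pi * 1e7) * 1e9 * 6e27
print(f"{earth_computer_bits:.1e}")   # 3.8e+91, 'something like 10 to the 92nd'
```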
The punchline to this joke is that Beer, in his thought experiment, decided to create a very simple business which could undergo a maximum of n states of complex variety, where n = 300. The brain of that very simple firm being its collective intelligence, it was capable of computing 3 x 10 to the 92nd bits of data. In other words, as Beer says, ‘It now turns out that a computer the size of the earth, running as long as the age of the earth and in the ultimate condition of technological perfection is needed to do the sums for this tiny firm.’
To conclude, therefore, if I am understanding your argument aright, Gav, a quantum computing capacity which would allow us to cube our collective intelligence capacity is indeed ultimately desirable, but, as Beer demonstrates, getting there is a long and arduous road. To solve even the simplest problems requires a computer the size and the age of the earth running at the peak of its computing capacity.
Unfortunately, our collective schema, measured in bits, is still differentially less than the complexity of the world, measured in bits. Moreover, as Bremermann’s findings suggest, at the quantum mechanical level, uncertainty (which a binary heuristic acts to slash in order to facilitate practical action) enters once again into computations associated with complex decisions. In other words, above a certain limit defined by Bremermann, there is less resolution to orient collective action. It would appear, on the surface, that the upper limit defined by Bremermann which would affect our collective sensemaking capacity under a quantum model is similar to the very low neocortical limit defined by Dunbar which currently limits our capacity to form co-operative social structures in which effective collective sensemaking can take place to reduce entropic uncertainty.
If I understand your argument aright with respect to this evidence, we would need to be able to evolve beyond both limits in order to take advantage of the technology you are suggesting.
Thank you as ever for your provocative insights, Gav. Your comment has given me a lot to digest as well, and I hope, in this reply, I have given as good as I got! 🙂
absolutely! the major point i am making is that we live inside of a ‘quantum computer’ intelligence already – VALIS (vast active living intelligence system) if you like, after PKD.
we are, by virtue of being alive, integrated within an information system that is orders of magnitude more powerful than any binary computer system we could manufacture. furthermore i believe that we can treat this intelligence as a technology, ie there are means, techniques, we can employ so as to interface with this intelligence (maybe we should call it N.I. – natural intelligence). these means/techniques we can loosely label as ‘shamanic’. miller in his later years became more and more prophetic in his writing. he saw the human future in such a way – ie the technologies that will be employed in the future will be of a more subtle order, i believe miller used the word ‘occult’, but with the caveat that the word ‘occult’ only applies to those technologies we don’t currently understand, or are incapable of conceptualising within an existing schema.
and of course the importance of realising that the existing internet matrix, in its totality, ie as a globe girdling system of communications integration, is itself part of, contained within the VALIS ‘supercomputer’ which is reality itself, or more accurately the source/vehicle of the phenomenal world in its totality.
matter is plastic in the face of mind – JC performed what we call miracles, perhaps one day we will understand the techniques he employed better and no longer refer to them as miracles. in the recent past we have theresa neumann who ate virtually nothing and drank nothing for decades…at the moment, in my current conceptual schema, i would say theresa neumann was able to imbibe spiritual energy directly through her heart, and this energy was then transduced into the energy required to maintain her physical form. theresa had stigmata every easter, and the blood that flowed from her hands, feet and head defied gravity…very weird and documented.
i believe that the meaning of JC is simple – he is our evolutionary template, at least for westerners. JC both in terms of his politics and his philosophy, introduces the idea of the individual as ground of being, as ultimate authority. and this of course means the end of the pyramid scheme we have struggled under for centuries. this is why JC is always anathema to power, indeed he is a virus in the system, and has been for 2000 years. spengler talks of the ‘second religiousness’ which grips a culture in its very late stage. this i believe is already happening, for as i related in ‘sympathy for the devil’, the shadow side of god is still god – if people lose belief in the moral order of the universe it means that they have forgotten that it is they who maintain it, and if we don’t, then the shadow will become more and more active until we understand this absolutely essential point – ie god, the moral order, can only manifest and be maintained through human agency. passivity leads to the shadow – the devil finds work for idle hands.
I see. Thanks for amplifying the notion of quantum computing for me, Gav.
There is definitely a place for ‘occult’ psychotechnologies in the development of the scaled coherence necessary to enable good schematic collective sensemaking. Indeed, I would be inclined to agree that engaging mindfully in shamanic practices with others in one’s face-to-face relational network is the one practical thing that human beings can do to raise their consciousness above Dunbar’s Number, opening the empathic, holistic, right-brain dimension of cognition.
Thanks for your exhaustive comments, Gav. They are a wonderful addendum to what I know is an exhaustive (and exhausting!) post.
one last thing that has just occurred to me:
as i have said previously in my own writings, schizophrenia is the characteristic or identifying disease of modern man – in that we all suffer it to some degree. now, there is a famous correlation made between true, or clinical, schizophrenia and shamanism, namely that about 4% of people are diagnosed as fully fledged schizophrenics in western culture, and about the same percentage are deemed to be suitable candidates for a shamanic vocation in tribal cultures.
schizophrenia is not only a disease but reveals something else, ie it can be a tool or an aid in reaching shamanic states.
if my assertion holds water, ie that we have all been rendered schizophrenic to some degree, perhaps we can see an evolutionary movement in the very disease which seems to plague our modern existence.
schizophrenia may also be thought of as a loosening of the bonds of collective or consensual reality, which enables a dreaming state to superimpose itself on consensual reality, a dreaming state that can be in some way navigated or employed for particular ends.
what i am saying is that in the sense of vis medicatrix naturae, the very disease which plagues late western culture is perhaps a preparation, a ‘loosening’ of this reality, which is necessary for the emergence of a transformative dreaming.
I would tend to diverge from you here, Gav.
I am also familiar with the relationship between clinical schizophrenia and shamanism. I was introduced to it by Andreas Mavromatis in his wonderful book “Hypnagogia: The Unique State of Consciousness Between Wakefulness and Sleep” (1991). I can recommend this fascinating book on a little-known and little-explored subject unreservedly, but to you I can recommend it in particular, as Mavromatis’ findings speak to the loosening of ego-boundaries and the incorporation of semi-conscious, psychedelic, dreaming and trance experiences of holistic knowing which you’re exploring in “Make Way for the Bad Guy”.
But the distinction needs to be clearly drawn between clinical psychosis, which is a maladaptive sensemaking condition vis-à-vis reality precisely because it perceives signal in what is actually noise, and the range of mentally beneficial right-brain experiences which Mavromatis examines under the head of hypnagogia, including hypnagogia itself, dreaming, lucid dreaming, the predominantly feminine (in the Taoist sense) thinking styles of children and animals, psychedelic experiences, trance states, spiritual ecstasies, etc.
Perhaps I did not make this distinction sufficiently clear, but in clinical schizophrenia, the right-brain is hijacking the sensemaking capacity: its holistic, pattern-recognition style of thinking is ‘over-perceiving’ signal in the environment. I am definitely suggesting that we must collectively leverage the right-brain capacity to perceive holistic patterns in order to adapt to a salience landscape of exponential complexity where the signal is so subtle and yet so vast that it is beyond the cognitive capacity of an individual agent to perceive. But we must at all costs avoid the danger of falling prey to our constitutional right-brain bias of perceiving signal in what is actually noise: this is the ‘magical thinking’ which leads to schizophrenic group delusions.
In fine, the bicameral brain is like the two houses of parliament in the Westminster system: in our individual and collective cognitive economies, we need the government of both to serve as check and balance on the other. I agree with you entirely that an historically anomalous left-brain bias, a one-sided development of discriminatory rationality, has led humanity to its current pass, and that a reincorporation of holistic, right-brain ways of perceiving, experiencing and knowing are vitally necessary to evolve creative solutions to problems which rationality and reason alone cannot solve. But the manifest danger is that, in the face of inassimilable complexity, we fall back too easily on the historically-preponderant constitutional bias of the right-brain for low-resolution pattern recognition when our recently evolved, high-resolution discriminatory capacities are equally vital for the task of collective sensemaking.
In fine, I have to diverge from you, Gav (with the greatest respect, of course), on the subject of schizophrenia: I don’t think one can make a good case that a maladaptive psychosis which is, by its very definition, a complete retreat from reality, serves an organism in the day-to-day furtherance of its existence, and it certainly doesn’t help an entire species to adapt to an exponentially evolving climate of complexity. Where I would agree with you (and I take this view from Mavromatis’ magisterial survey and synthesis of the little literature which exists on the topic) is that what we might call the ‘portfolio of psychotechnologies’ filed under the head of hypnagogia, techniques and experiences which open a permeable, semi-conscious channel of communication between the left- and right-brains—everything, in fine, which leads up to the threshold of psychosis but does not fall over into it—is eminently necessary and practical in amplifying our sensemaking capacities. This is because it allows us to perceive maximal signal in both low-resolution, macro-patterns and high-resolution, discriminatory micro-detail.
Objections aside, you ought to get a gold star, Gav: it was such an effort to write this article (which is the longest blog post I’ve written on this website) that I wondered if it would be worth it to put forth such a detailed argument in what I regard as the most disposable form of literature. Your engagement with this piece is more than I could have expected, and I’m so grateful that you have found so much to comment on.
the credit belongs to you my friend – you started the cogs turning…
i think our difference is one of degree only…ie we both agree that a ‘loosening’ of left brain dominance is absolutely essential.
i don’t want to trivialise schizophrenia. my only intimate contact with it was my ex-girlfriend’s brother, now passed. he was very intelligent and sensitive and it was very difficult to see him fade away at an early age – 40. indeed it was his death that kicked me into a more serious writing attitude – to see such an innocent soul die for lack of real care and understanding…and again i do not impugn…his family really thought he was sick (people trust doctors) but i knew he wasn’t – the only sickness he had was alcoholism…i got to know him well enough to see that it was all trauma (my girlfriend – his sister – suffered it for years too but thank god she is now past it and a strong and centred and wonderful mother). and beneath the trauma he was still there, signalling to the blind…anyway…so many have been lost already and we need to remember that…really it’s been happening all the time, all my life anyway – sensitive souls, brilliant souls, lost to us…because we believed they were ‘sick’.
A question of degree, certainly. What I learnt from reading Mavromatis is that there are so many net benefits to allowing the right-brain to communicate with the left-brain through many of the techniques, processes and experiences beneath the threshold of schizophrenia. But the right-brain also has its tyrannical tendencies, and schizophrenia may be regarded as a right-brain dominance.
The example you give of your ex-girlfriend’s brother is really the tragedy of this condition, for we certainly feel that these people are in touch with shamanic states of consciousness. But one critical shamanic technique I failed to mention which falls under the head of hypnagogic consciousness is writing poetry. The right-brain’s prophetic vision is reconciled with the left-brain’s language-processing ability. So perhaps in witnessing the tragedy of your friend’s fading away, the loss of that shamanic gift, you were subconsciously inspired to share your own with us, Gav.