Criticality




Order         -          Chaos
Closed        -          Open
Efficiency    -          Evolvability




Metastable: poised, ready to react.

Myths and stories are good for criticality because they allow a degree of wiggle-room. Theories have a tendency to coagulate into paradigms, or strong attractors, which can only be escaped when their proponents die off.

Criticality is akin to dialectical tension - a point of dynamic equilibrium between opposing forces.




In physics, metastability is a stable state of a dynamical system other than the system's state of least energy.

A ball resting in a hollow on a slope is a simple example of metastability. If the ball is only slightly pushed, it will settle back into its hollow, but a stronger push may start the ball rolling down the slope.

'Metastability'
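
A minimal numerical sketch of the ball-in-a-hollow picture above (my own toy potential, not from the quoted source): a hollow at x = 0, a barrier top at x = 1, and an ever-steeper downhill slope beyond it. A weak push dies out in the hollow; a strong push carries the ball over the barrier.

# Potential: V(x) = 1.5*x**2 - x**3 (assumed for illustration).
# Local minimum (the hollow) at x = 0, barrier at x = 1, slope beyond.

def force(x):
    # F = -dV/dx
    return -(3.0 * x - 3.0 * x**2)

def push_ball(v0, gamma=0.1, dt=0.01, steps=5000):
    """Give the ball an initial velocity v0 and report where it ends up."""
    x, v = 0.0, v0
    for _ in range(steps):
        v += (force(x) - gamma * v) * dt   # damped Newtonian dynamics
        x += v * dt
        if x > 3.0:                        # well past the barrier: it rolled away
            return "rolled down the slope"
    return f"settled back near the hollow (x = {x:.2f})"

print(push_ball(v0=0.5))   # weak push  -> settles back into the metastable hollow
print(push_ball(v0=1.5))   # strong push -> crosses the barrier and rolls away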




You can have something that’s very fragile but stays for a very long time.

In phase transitions - between gaseous, liquid, and solid forms - there is a thing called a metastable state. The material is already at the right temperature for, say, water to start boiling, but because there is no disturbance to it, it stays in the previous phase. If you then drop a little speck into this kettle of metastable water, it will instantly start boiling.

I think societies, when they become stabilised or inactive in this way, are in a metastable state. And they can last there for centuries. It’s very similar to a dry forest.

[Samo Burja]
Live Players w/ Samo Burja (June 18, 2020)




[…] many composite systems naturally evolve to a critical state in which a minor event starts a chain reaction that can affect any number of elements in the system.

Although composite systems produce more minor events than catastrophes, chain reactions of all sizes are an integral part of the dynamics. According to the theory, the mechanism that leads to minor events is the same one that leads to major events.

Furthermore, composite systems never reach equilibrium but instead evolve from one meta-stable state to the next.

When a number of trajectories lead towards a point (or area) in state-space, that point (or area) is an 'attractor', and represents a stable state of the system. When trajectories all lead away from a point, that point is unstable - a 'repellor'. A point that has trajectories leading towards it as well as away from it is known as 'meta-stable'.

[…] In a very stable system there will be one, or only a few strong attractors. The system will quickly come to rest in one of these, and will not move to another one easily. The resulting behaviour of the system is not very interesting. On the other hand, in a very unstable system, there will be no strong attractors, and the system will just jump around chaotically.

The theory of self-organised criticality tells us the following. A self-organising system will try to balance itself at a critical point between rigid order and chaos. It will try to optimise the number of attractors without becoming unstable.

[Paul Cilliers]
Complexity and Postmodernism, p.96-7
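
The standard toy model behind this picture is the Bak-Tang-Wiesenfeld sandpile. A rough sketch (my own illustration, not Cilliers' text; the grid size and grain counts are arbitrary assumptions): single grains are dropped one at a time, and any site holding 4 or more grains topples, passing one grain to each neighbour. The pile organises itself into a critical state where the same trivial mechanism produces mostly tiny avalanches and, occasionally, system-wide chain reactions.

import random
from collections import Counter

N = 20                                    # assumed grid size
grid = [[0] * N for _ in range(N)]

def drop_grain():
    """Add one grain at a random site; return the number of topplings it causes."""
    i, j = random.randrange(N), random.randrange(N)
    grid[i][j] += 1
    topples = 0
    unstable = [(i, j)]
    while unstable:
        x, y = unstable.pop()
        if grid[x][y] < 4:
            continue
        grid[x][y] -= 4
        topples += 1
        if grid[x][y] >= 4:                   # can still be unstable after toppling
            unstable.append((x, y))
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if 0 <= nx < N and 0 <= ny < N:   # grains pushed over the edge are lost
                grid[nx][ny] += 1
                if grid[nx][ny] >= 4:
                    unstable.append((nx, ny))
    return topples

for _ in range(20000):                        # drive the pile to its critical state
    drop_grain()
sizes = Counter(drop_grain() for _ in range(20000))
print(sorted(sizes.items())[:12])             # many tiny avalanches, a few huge ones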




[…] empirical evidence has proliferated that living systems might operate at criticality - i.e. at the border-line between order and disorder - with examples ranging from spontaneous brain behavior to gene expression patterns, cell growth, morphogenesis, bacterial clustering, and flock dynamics.

[…] why is a living system fitter when it is critical? Living systems need to perceive and respond to environmental cues and to interact with other similar entities. Indeed, biological systems constantly try to encapsulate the essential features of the huge variety of detailed information from their surrounding complex and changing environment into manageable internal representations, and they use these as a basis for their actions and responses.

The successful construction of these representations, which extract, summarize, and integrate relevant information, provides a crucial competitive advantage, which can eventually make the difference between survival and extinction.

[…] criticality is an optimal strategy to effectively represent the intrinsically complex and variable external world in a parsimonious manner.

This is in line with the hypothesis that living systems benefit from having attributes akin to criticality - either statistical or dynamical - such as a large repertoire of dynamical responses, optimal transmission and storage of information, and exquisite sensitivity to environmental changes.

As conjectured long ago, the capability to perform complex computations, which turns out to be the fingerprint of living systems, is enhanced in “machines” operating near a critical point, i.e., at the border between two distinct phases: a disordered phase, in which perturbations and noise propagate unboundedly - thereby corrupting information transmission and storage - and an ordered phase where changes are rapidly erased, hindering flexibility and plasticity.

The marginal, critical situation provides a delicate compromise between these two impractical tendencies, an excellent tradeoff between reproducibility and flexibility and, on larger time scales, between robustness and evolvability.

Any given genetic regulatory network, formed by the genes (nodes) and their interactions (edges), can be tightly controlled to robustly converge to a fixed almost-deterministic attractor - i.e. a fixed “phenotype” - or it can be configured to be highly sensitive to tiny fluctuations in input signals, leading to many different attractors, i.e., to large phenotypic variability.

These two situations correspond to the ordered and disordered phases, respectively. The optimal way for genetic regulatory networks to reconcile controllability and sensitivity to environmental cues is to operate somewhere in between the two limiting and impractical limits alluded to above.

Under the mild assumption that living systems need to construct good although approximate internal representations of the outer complex world and that such representations are encoded in terms of probability distributions, we have shown - by using concepts from statistical mechanics and information theory - that the encoding probability distributions do necessarily lie where the generalized susceptibility or Fisher information exhibits a peak, i.e., in the vicinity of a critical point, providing the best possible compromise to accommodate both regular and noisy signals.

[Hidalgo, Grilli, Zuweist, Muñoz, Banavar, & Maritan]
‘Information-based fitness and the emergence of criticality in living systems’
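
A hedged sketch of the closing claim, using a stand-in model rather than the authors' own: for an exponential family p(s|J) ∝ exp(J·T(s)), the Fisher information with respect to the coupling J equals the variance of the sufficient statistic T, and for a small fully connected Ising (Curie-Weiss) system - enumerated exactly, with the system size and couplings chosen only for illustration - it peaks at an intermediate coupling, near the mean-field critical point.

import math
from itertools import product

N = 10  # assumed (small) number of spins, so exact enumeration is feasible

def sufficient_statistic(spins):
    m = sum(spins)
    return (m * m - N) / (2.0 * N)        # (1/N) * sum_{i<j} s_i * s_j

def fisher_information(J):
    # For an exponential family, the Fisher information is Var_J[T(s)].
    states = list(product((-1, 1), repeat=N))
    weights = [math.exp(J * sufficient_statistic(s)) for s in states]
    Z = sum(weights)
    t_vals = [sufficient_statistic(s) for s in states]
    mean = sum(w * t for w, t in zip(weights, t_vals)) / Z
    second = sum(w * t * t for w, t in zip(weights, t_vals)) / Z
    return second - mean * mean

for J in (0.2, 0.6, 1.0, 1.4, 2.0, 3.0):
    print(f"J = {J:.1f}  Fisher information = {fisher_information(J):.3f}")
# The value peaks at an intermediate J (near the mean-field critical point) and
# falls off in both the disordered (small J) and ordered (large J) limits.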




[…] this is basically what is going on in relevance realization. You can see it in your attention: 'default mode' and 'task focus'. ‘Default’ makes your mind wander and introduces variation, and then task focus selects. You kill off most of the variations, but some of them come in because your mind wandered enough.

You give people a problem and they hit an impasse - they can't solve it. Then you introduce a little bit of entropy into the system - you put some static on the computer screen, or you shake it - and they'll have the insight, because it puts in enough criticality.

They stop this unidimensional, task-focused attention, which allows the spread of activation. Then they reselect and evolve a new way of framing the problem.

[John Vervaeke]
‘A Conversation So Intense It Might Transcend Time and Space | John Vervaeke | EP 321’, YouTube
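
As a loose computational analogy (my own, not from the transcript; the payoff function and parameters are invented): a greedy optimiser stuck on a mediocre local peak is at an 'impasse'; injecting occasional random jolts - a little entropy - lets it escape that frame and find the better peak.

import math
import random

def payoff(x):
    # Two "framings": a mediocre local peak near x = 1 and a better one near x = 4.
    return math.exp(-(x - 1.0) ** 2) + 2.0 * math.exp(-(x - 4.0) ** 2)

def search(jolt_prob, steps=3000, step=0.1, seed=1):
    """Greedy hill climbing, with occasional random jolts ('entropy')."""
    rng = random.Random(seed)
    x = best = 0.0
    for _ in range(steps):
        if rng.random() < jolt_prob:
            x = rng.uniform(0.0, 6.0)          # the jolt: a random re-framing
        candidate = x + rng.uniform(-step, step)
        if payoff(candidate) > payoff(x):      # otherwise: strictly uphill moves only
            x = candidate
        if payoff(x) > payoff(best):
            best = x
    return best

print(f"no entropy:       best x = {search(jolt_prob=0.0):.2f}")   # stuck near the x = 1 peak
print(f"a little entropy: best x = {search(jolt_prob=0.02):.2f}")  # finds the better peak near x = 4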




Vervaeke: […] you get a system simultaneously differentiating and integrating […] a system that is complexifying. If it's done right, because of adaptive fittedness its complexification is increasingly conforming to the complexity of the world.

Peterson: That's the scientific enterprise in some sense, right? Calibration against real world patterns.

Vervaeke: If it has good synoptic integration, it has self-organizing criticality. When it fires at self-organizing criticality, it tends to create small-world network wiring. It’s mostly organized, but when it breaks apart it opens up the possibility of one of these long-distance connections. If a system starts to wire as a small-world network, it has mostly regular connections keeping you in the norm, but with a few long-distance connections that can suddenly snap you out.

Peterson: That's like the balance between conservatism and liberalism.

Vervaeke: If it fires at self-organizing criticality, it tends to wire as a small-world network. And if it wires as a small-world network, it tends to fire at self-organizing criticality. So these two things can mutually inform each other.

The self-organizing criticality theory of insight - you have to break out of an inappropriate frame, that's the criticality, and it reorganizes into the better frame. You do that evolution.

Peterson: And ‘better’ would be something like, both efficient and capable of performing a broader range of action. That was like a Piagetian description of what constituted a better theory. A better theory allows you to do everything the previous theory allowed you to do, plus something more, hopefully with a gain of efficiency.

Vervaeke: A good theory is efficient in that sense, but a good theory is also generative. You're always trying to optimize between efficiency and evolvability. You don't just do compression - that's epilepsy. You've just locked the system down and it has no capacity to adaptively refit itself to the world.

I think of this as mapping on to Piaget’s notion of assimilation and accommodation. Assimilation is compression, making everything integrated; accommodation is how you introduce [variety]; and then the calibration is this dynamic, constant trading between them.

You don't come to any kind of stable thing. You're constantly evolving. You don't find the final theory - you're constantly moving to a theory that grabs more differences and yet brings them into an integration.

[Jordan B. Peterson & John Vervaeke]
‘A Conversation So Intense It Might Transcend Time and Space | John Vervaeke | EP 321’, Jordan B. Peterson, YouTube
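
The small-world wiring described above is usually formalised with the Watts-Strogatz model. A brief sketch (the node count, degree, and rewiring probabilities are my own assumptions): rewiring a few edges of a regular ring lattice adds long-distance shortcuts that collapse the average path length while leaving local clustering largely intact.

import networkx as nx

n, k = 200, 6   # assumed: 200 nodes, each wired to its 6 nearest neighbours

for p in (0.0, 0.05, 1.0):   # rewiring probability: regular -> small-world -> random
    g = nx.connected_watts_strogatz_graph(n, k, p, seed=42)
    clustering = nx.average_clustering(g)
    path_len = nx.average_shortest_path_length(g)
    print(f"p = {p:<4}  clustering = {clustering:.2f}  avg path length = {path_len:.1f}")

# Typical outcome: a few rewired "long-distance" edges (small p) slash the average
# path length while local clustering stays high - mostly norm-preserving wiring,
# plus a few links that can suddenly snap you somewhere far away.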


