Fragile / Resilient






Fragile                     -                  Resilient





[...] what makes a complex adaptive system resilient is its learning and transformational capabilities, not its ability to merely resist a shock.

As phrased by Folke: “[R]esilience is not only about being persistent or robust to disturbance. It is also about the opportunities that disturbance opens up in terms of recombination of evolved structures and processes, renewal of the system and emergence of new trajectories”.

Resilience enables the system to cushion the effects of unforeseen disturbances by absorbing the shock and adapting to changing conditions, thus bouncing not back but forward to a more advanced level better suited for future hazards.

[Rasmus Dahlberg]
'Resilience and Complexity: Conjoining the Discourses of Two Contested Concepts', Culture Unbound, Vol. 7, p. 545, 553




Consider all the different manifestations of pressure on a system, which Taleb calls the “disorder brothers”: uncertainty, variability, imperfect knowledge, chance, chaos, volatility, disorder, entropy, time, the unknown, randomness, turmoil, stressors, error.

If something doesn’t “like” any one of these, it’s not going to like the others (and will therefore be short-lived before failure). On the other hand, if something is made stronger by these, it is antifragile—and therefore also displays the “Lindy” effect (the longer it lasts, the longer it is expected to continue lasting).

The insights are useful to check the ambitions of modern power. Should we trust a model that recommends engineering drastic change in the atmosphere, or should we defer to and protect the Earth’s proven, inscrutable systems of climatic balance? Should we tinker with DNA to design a new kind of pest-resistant crop, or should we respect the nucleic wisdom encoded in long-proven varieties?

Risk management’s “precautionary principle” can be understood as respecting essential systems that are Lindy and making sure one doesn’t interfere with whatever makes them antifragile: solve world hunger with better distribution logistics (low downside, huge upside), not by playing God with crop genes (huge possible downside).

In social life, this suggests a bias in favor of traditionalism (including respect for religion) as well as encouragement for experimenters and entrepreneurs (tinkerers, who actually try new technology, not scientists and economic “experts” who merely theorize).

In political organization, it shows the wisdom of localism—or what Taleb calls “fractal localism,” to distinguish it from simplistic decentralization [...] Political community is healthiest when people making decisions also have the most at stake in their outcomes (“skin in the game,” the title of Taleb’s fifth book).

[Joshua P. Hochschild]
'Optionality and the Intellectual Life: In Gratitude for the Real World Risk Institute'
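
One common formalization of the Lindy effect mentioned above - a textbook power-law sketch on my part, not a quotation from Taleb or Hochschild: if the lifetime T of a non-perishable thing is modelled as Pareto-distributed with minimum t_0 and tail index alpha > 1, the expected remaining lifetime grows in proportion to the age already attained.

```latex
% Pareto survival (assumption): S(t) = P(T > t) = (t_0 / t)^{\alpha}, \quad t \ge t_0, \ \alpha > 1
% Conditional on surviving to age t, T is again Pareto with minimum t, so
E[T \mid T > t] = \frac{\alpha t}{\alpha - 1}
\quad\Longrightarrow\quad
E[T - t \mid T > t] = \frac{t}{\alpha - 1}
% i.e. expected remaining life grows linearly with the age already attained.
```

On this reading, surviving the “disorder brothers” and being Lindy are two sides of the same thick-tailed survival curve.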


Global / local




I don’t trust [Steven Pinker’s] optimism […] more and more kinetic energy, like war, has been turned into potential energy, like unused nuclear weapons - [but] if you don’t have a potential energy term, then everything’s just getting better and better.

[Eric Weinstein]




The tide is out, and globalism is exposed. In its place, watch for the rise of localism.

This won’t be some Jeffersonian agrarian world where we’re all threshing our own wheat, but a complex, locally-adapted system that is vibrant and resilient because it’s interconnected but not centrally controlled.

Our current supply chain and financial system evolved by using standardization to create efficiencies. Finding “synergies” led to geometric growth of certain companies, lowering nominal costs for customers and concentrating decision-making among few. For example, four companies are now responsible for 70% of the pork production in the U.S.

The sticker price might appear lower at the grocery store, but hidden costs exist.

Think of it this way: A $4 Mr. C’s hotdog sounds like a deal, but it comes with the low-probability kicker that if a virus/terrorist/cyber-attack occurs, we might be sheltering in place for an extended period of time and our economy could go down the toilet along with millions of jobs. Now consider a locally-sourced hotdog that costs $5, $6, or even $15. Localism would likely result in higher nominal cost, but if localized decision-making, supply chains and finance prevented being shut-in at home, hoping centralized governments and corporations can avoid being overwhelmed, what is that worth?

Some things are better managed at scale, just not ALL things. By pushing most decision-making to the locally-dispersed, what Nassim Nicholas Taleb calls “fractal localism”, we can maximize customization, adaptation, and responsiveness while still keeping those systems and institutions that are most effective across locales.

[Eric Weatherholtz]
'Localism: Retail’s Coronavirus Hangover Cure'




Things appeared to be getting better and better, but our prosperity was built on credit. The things we gained came at the price of an increasingly fragile system, caused by ever-increasing imbalances.




Localism


1. The principle of subsidiarity, which holds that the state should undertake only those initiatives which exceed the capacity of individuals or private groups acting independently.

2. If a complex function is carried out at a local level just as effectively as on the national level, the local level should be the one to carry out the specified function.

3. Subsidiarity assumes that people are by their nature social beings, and emphasizes the importance of small and intermediate-sized communities or institutions, like the family, the church, trade unions and other voluntary associations, as mediating structures which empower individual action and link the individual to society as a whole.

4. But the principle of subsidiarity also allows for some decisions to be taken at regional or national (or indeed international) levels, for example in order to protect human rights or for reasons of social or economic justice.

'Localism'



Related posts:
Scale

Notes: Dave Snowden - '#12 MANAGING IN COMPLEXITY - DAVE SNOWDEN | Being Human'


................................................................................................................................................................................

'#12 MANAGING IN COMPLEXITY - DAVE SNOWDEN | Being Human'
[Dave Snowden]

................................................................................................................................................................................


39:58 - We’ve got very few polymaths left in the UK under the age of 50, because the education system is now highly specialised. That’s a major mistake, because one of the strengths of British education has been our ability to produce generalists, but we’re not producing them any more.

A collection of specialists is not the same thing as a generalist. A generalist knows a little bit about a lot of things and can integrate disciplines; a specialist can’t integrate.

Exaptation is a process by which you suddenly notice novel side effects and associations.


................................................................................................................................................................................


41:19 - Art comes before language in human evolution - we learned to draw on the walls of caves before we really spoke.

Like everything in evolution, it’s an accident. Basically we draw because it has use for the hunt, but what it also does is allow us to shift up a level of abstraction. If you go up a level of abstraction you make novel associations. I have some of my best ideas either walking or at the opera, because I’ve moved up a level of abstraction. My mind associates things in a less concrete way. Abstraction is key to innovation.

It is one of the arguments most of us from a scientific background are making against the focus on STEM education, because if you don’t have art you don’t have innovation. It is this engineering culture coming through again. Engineers who appreciate art are more likely to be exaptive.


................................................................................................................................................................................


50:28 - The problem with a hypothesis is it’s based on what we understand from the past. So if something novel has happened, it will restrict our ability to see it.

If you have a hypothesis it’s highly risky under conditions of uncertainty, because the past is not going to repeat. You have massive asymmetry between the past and the future, so hypothesis-based approaches don’t work. You move from deductive logic to abductive logic.

Abductive logic is sometimes known as the logic of hunches - finding the most plausible connection between apparently unconnected things. Human beings have evolved to think abductively, which means we’re brilliantly inventive, but also prone to conspiracy theories.

We’ve got fifty-five people who come with these wild ideas. [To objectivise these abductive leaps] we present the wild ideas to panels of several thousand, they interpret it - if we get a dominant pattern we know it’s probably okay.

You can’t rely on individual judgement. Human beings evolved to make decisions collectively, not individually. That’s our strength, we can cooperate. We can [also] cooperate outside kinship groups - the advantage of that is that you can have specialists.

So-called educational deficiencies [autism, dyslexia, etc.] are actually part of the collective intelligence. This is now called cognitive diversity. If you can increase the number of people in the collective decision cycle you can make it more objective.
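
A minimal sketch of the “dominant pattern” check described above - the labels and the 0.6 threshold are hypothetical illustrations, not Snowden’s actual process: gather independent interpretations and only treat an abductive leap as “probably okay” when one reading clearly dominates.

```python
from collections import Counter

def dominant_pattern(interpretations, threshold=0.6):
    """Return the dominant interpretation if one exceeds the threshold share, else None.

    interpretations: one label per panel member (e.g. "plausible", "implausible",
    "needs reframing"). The 0.6 threshold is an illustrative assumption.
    """
    if not interpretations:
        return None
    counts = Counter(interpretations)
    label, count = counts.most_common(1)[0]
    share = count / len(interpretations)
    return label if share >= threshold else None

# Example: a wild idea shown to a (toy-sized) panel.
panel = ["plausible"] * 70 + ["implausible"] * 20 + ["needs reframing"] * 10
print(dominant_pattern(panel))  # -> "plausible": a dominant pattern emerged
```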


................................................................................................................................................................................



Notes: Dave Snowden - 'Multi-ontology sense making: a new simplicity in decision making'


................................................................................................................................................................................

'Multi-ontology sense making: a new simplicity in decision making'
[Dave Snowden]

................................................................................................................................................................................


Order / Un-order

The vertical dimension of the matrix contrasts two types of system, namely order and un-order. In the earlier story of the children’s party, the first approach, namely that of objectives, planning and best practice, is in effect an illustration of the type of approach that is typically adopted in an ordered system and it can be legitimate. Where there are clearly identified (or identifiable) relationships between cause and effect, which once discovered will enable us to control the future, then the system is ordered. It can be structured on the basis of a desired outcome with structured stages between where I am “now” and where I want to be “then”.

This is contrasted with un-order, in which the relationships between cause and effect do not repeat except by accident, and in which the number of agents interacting with other agents is too great to permit predictable outcome-based models, although we can control starting conditions and monitor for emergence.

“Un” is used here in the sense that Bram Stoker uses it of Dracula: the un-dead are neither dead nor alive, they are something different that we do not fully understand or comprehend.

--

Undead/un-order - liminal, in-between states. This implies that complexity = in-between. The limit of control is the line between order and un-order.


................................................................................................................................................................................


Efficient / Inefficient

A strong mechanical metaphor characterizes [process engineering] approaches. The focus is on efficiency, stripping away all superfluous functions in order to ensure repeatability and consistency.

The engineering process takes place in a specific context and once achieved, shifts in that context require the engineering design process to be repeated to some degree before efficiency can be achieved again. Radical shifts in context may make the entire approach redundant or lead to catastrophic failure.

Manufacturing plant, payment systems in a bank and the like are all closed systems that can be structured and standardized without any major issue. We can in effect define best practice. However, when we apply the same techniques to systems with higher levels of ambiguity, for example customer interactions, sales processes and the like, we encounter more difficulties.

[Some of these] arise from the impossibility of anticipating all possible situations and shifting context. In these cases we need a different focus, one of effectiveness in which we leave in place a degree of inefficiency to ensure that the system has adaptive capacity and can therefore rapidly evolve to meet the new circumstances. 

Examples would include apprentice schemes of knowledge transfer, maintaining mavericks or misfits, allowing people to take training in subjects with no apparent relevance to their current jobs and providing more delegated authority.

There is nothing wrong with an engineering approach; there are many things that need high degrees of order and control. However, taken to excess, and it has nearly always been so taken, it sacrifices human effectiveness, innovation and curiosity on the altar of mechanical efficiency.

--


Efficient                            -                      Inefficient
Engineer                           -                      Artist
Specialist                          -                      Dilettante
Narrow base                     -                      Wide base
Closed                               -                      Open
Order                                 -                      Chaos


Complex situations/interactions cannot be standardised. Standardisation implies known territory.
In complex circumstances, an abstracted/wide view is more advantageous than a concrete/narrow view.


................................................................................................................................................................................


Engineering thinking - top-down (controlled), bottom of pyramid (specific, narrow)

Systems thinking - top-down (controlled), top of pyramid (general, whole)

Complexity thinking - bottom-up (emergent), top of pyramid (general, whole)


Systems thinking widens the scope of engineering thinking by attempting to map a whole system, as opposed to a part. However, it still assumes that the system can be mapped (and therefore controlled).

Complexity thinking does not assume that the extent of the system can be known, and instead of coming up with a theory of the system, it widens the range of its view as much as possible and looks for emergent patterns.


................................................................................................................................................................................


Humans make decisions based on patterns

This builds on naturalistic decision theory, in particular the experimental and observational work of Gary Klein (1994), now validated by neuroscience: the basis of human decision-making is first-fit pattern matching with past experience or extrapolated possible experience.

Humans see the world both visually and conceptually as a series of spot observations and they fill in the gaps from previous experience, either personal or narrative in nature.

Interviewed, they will rationalize the decision in whatever terms are acceptable to the society to which they belong: “a tree spirit spoke to me” and “I made a rational decision having considered all the available facts” have the same relationship to reality.

Accordingly in other than a constrained set of circumstances there are no rules to model.


................................................................................................................................................................................


We both create and maintain multiple often parallel identities shifting between and amongst them as needed without so much as a second thought.

Accordingly in other than a constrained set of circumstances there are no clear agents to be modeled.

--

A clear agent would have to be unipolar (consistent) in all contexts, across the board. Human beings are tricky to model because they are inconsistent.


................................................................................................................................................................................


Humans ascribe intentionality and cause where none necessarily exist.

There is a natural tendency to ascribe intentionality to behavior in others, whilst assuming that the same others will appreciate that some action on our part was accidental.

Equally, if a particular accidental or serendipitous set of actions on our part leads to beneficial results, we have a natural tendency to ascribe them to intentional behavior and come to believe that because there were good results, those results arose from meritorious action on our part.

In doing so we are seeking to identify causality for current events. This is a natural tendency in a community entrained in its pattern of thinking by the Enlightenment.

One of the key insights of social complexity is that some things just “are” by virtue of multiple interactions over time and the concept of a single explanation, ascription of blame or for that matter credit are not necessary.


................................................................................................................................................................................



Bret Weinstein's Probability Map


................................................................................................................................................................................


It is best to approach a complex issue provisionally and probabilistically (allowing room for uncertainty, i.e. the ability to move to a better position as further information arrives) rather than be tempted into premature certainty.

Weinstein's chart is a neat way of laying out an ambiguous issue (i.e. one that allows a number of interpretations), allowing a certain narrative to be favoured whilst keeping the door open to competing interpretations.
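
A rough illustration of that provisional, probabilistic stance - my own Bayes-style sketch, not Weinstein’s actual chart: keep several competing hypotheses live, attach a weight to each, and renormalize as information arrives, so one narrative can be favoured without the others being closed off.

```python
def update_weights(weights, likelihoods):
    """Bayes-style update of hypothesis weights.

    weights:     dict hypothesis -> prior probability (sums to 1)
    likelihoods: dict hypothesis -> P(new evidence | hypothesis)
    Returns posterior probabilities. Hypotheses and numbers below are
    purely illustrative assumptions.
    """
    unnormalized = {h: weights[h] * likelihoods.get(h, 0.0) for h in weights}
    total = sum(unnormalized.values())
    if total == 0:
        return weights  # evidence fits no hypothesis; keep the map unchanged
    return {h: v / total for h, v in unnormalized.items()}

# Start with an open map of competing interpretations.
weights = {"hypothesis A": 0.5, "hypothesis B": 0.3, "hypothesis C": 0.2}

# New information fits A well, B poorly, C moderately.
weights = update_weights(weights, {"hypothesis A": 0.8,
                                   "hypothesis B": 0.1,
                                   "hypothesis C": 0.4})
print(weights)  # A is now favoured, but B and C stay on the map
```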


................................................................................................................................................................................


You know you're in a complex space if you have competing hypotheses and can't resolve them.

[Dave Snowden]


................................................................................................................................................................................

Notes: Dave Snowden - 'LAS Conference 2013 - Keynote Dave Snowden - Making Sense of Complexity'


................................................................................................................................................................................

'LAS Conference 2013 - Keynote Dave Snowden - Making Sense of Complexity'
[Dave Snowden]

................................................................................................................................................................................


A chaotic system is one in which there are no constraints, which means every agent of whatever nature is independent of every other agent.


................................................................................................................................................................................


Heretics, outliers

The line which goes through the middle is called the line of coherence. If we’re on that line it’s okay.

We really don’t want to have excessive proof but not have [?] - if we do that we get heretics and mavericks. This is where a small group of people know they’re right, but nobody else believes them. Their solution to this problem is to explain to people why they’re right, and when that doesn’t work they explain to the people why they’re wrong. Then they wonder why they get [burned as heretics].

You need heretics in an organisation because they think differently. 

There are two strategies from a management perspective. One is coaching: this is finding people that can interpret them to the wider community. What you’re doing is pulling them back onto the coherence line. One of the big roles of coaches is to reinterpret material, because the people with the bright new ideas are very poor at explaining them, in the main. The other alternative is to hide it until it can prove itself.


................................................................................................................................................................................

Notes: Dave Snowden - 'Dealing with unanticipated needs'


................................................................................................................................................................................

'Dealing with unanticipated needs – Dave Snowden'
[Dave Snowden]

................................................................................................................................................................................


Inattentional blindness, heuristics

'Inattentional blindness' - we do not see what we do not expect to see.

The most anybody [...] will scan of the available data before you make a decision is about five percent. That’s on a good day, if you’re really focused. If you’re Chinese it is ten percent (there are actually different evolutionary processes in the brain as a result of symbolic as opposed to non-symbolic language).

You then make a decision based on a first-fit pattern match privileging your most recent experiences - that’s called conceptual blending. You scan five percent of the data, that triggers memories of your own experience - things you were taught, things you learned from other people in narrative form - you blend that together and you come up with a unique form of action.

That’s how you make decisions - unless you’re fully autistic. The only people who make rational decisions by assessing all available data are autistic, which is why they can’t operate.

If you think about it in evolutionary terms, you can see why this happens. If you imagine the first hominids on the savannahs of Africa, something large and yellow with very sharp teeth runs toward you at very high speed. Do you want to autistically scan all available data, look up a catalogue of the flora and fauna of the African veldt, and having identified ‘lion’ look up best practice case-studies on how to avoid lions?

We evolved to make decisions very quickly based on a partial data scan, privileging our most recent experiences. 

In modern cognitive science we don’t call these biases, we call them heuristics. Evolution doesn’t produce things that have no utility. So-called biases are actually heuristics that allow us to make decisions faster.


................................................................................................................................................................................


Art comes before language in human evolution. 

We drew and produced music before language happened. That’s unique to us as a species. That actually then continues to develop to the heights of modern art. If you look at modern fine art and music it’s usually sophisticated.

The reason that is so valuable to us in evolutionary terms is that if you move up a level of abstraction you see novel connections. Art has been critical to human inventiveness because it disconnects us from the material and moves us into the abstract.

Which is why the focus on STEM education is a potential disaster for the species, because if you don’t have art you don’t have inventiveness. You’re just connecting them with the material.


................................................................................................................................................................................


If you want innovation, forget people between the ages of twenty-five and forty-five.

They don’t innovate (unless you put them under considerable stress and so increase brain plasticity). By the time you hit about twenty/twenty-five, you’ve locked down how you see the world based on what you need to do in the society to which you belong. It doesn’t really change until you reach your late forties/fifties.

You don’t see racism in kids before puberty. Racism comes in after puberty because by then the brain is starting this lock-down process to meet the needs of the society to which it belongs. Therefore it will assume the prejudices of that society.

Chemically triggered in your fifties, the same thing happens - the brain becomes plastic again.

So if you look at innovation in the humanities it’s older people, in the natural sciences it’s younger people. In the older people innovation is synthesis, in younger people it is originality.


................................................................................................................................................................................

Notes: Dave Snowden - Managing for Serendipity


................................................................................................................................................................................

'Managing for Serendipity or why we should lay off "best practice" in KM'
[Dave Snowden]

................................................................................................................................................................................


Best practice

In an ordered system a “best way” is theoretically possible as we are dealing with repeating relationships between cause and effect […] If, and it is a very big if, there is a stable and repeating relationship between cause and effect in a common context then best practice can and should be mandated.

Human social systems are uniquely able to create such stable contexts by agreeing and establishing conventions for matters such as payment systems and traffic regulations.

If we are dealing with a complex system then there is no such repetition. Even in an ordered system the degree to which we understand the relationship between cause and effect determines the degree to which we can define best practice. This is true even of scientific knowledge where serendipity is as frequently the cause of major breakthroughs as is disciplined method and where old knowledge frequently used best practice to exclude new thinking.

For complex systems best practice is dangerous; for ordered systems it is valid, but not universally and only in very stable situations; in all other cases it is entrained past practice.

--

‘Best practice’ - codified knowledge, devoid of context. Script, code. You can only run the code if the situation is known and predictable, because the code has been written for specific circumstances. When the situation is complex, running code - doing what has worked formerly - will not work.


................................................................................................................................................................................

Habituation

It is also true that habituation is necessary for the consistent application of best practice. Fire fighters do not just enter each situation with a manual; they practice daily to ensure that best practice is engrained in their thinking, and that practical experience provides both knowledge of when not to follow best practice, and also creates high levels of trust based on interdependency (Weick & Sutcliffe 2001).

This has implications for many of the so-called attempts to create efficiencies in human actions. A large part of the attempts to introduce process improvements in professional services, for example, fails to recognise this need for habituation.

For a computer there would not be an issue, as each task would look up the processes on the basis of articulated decision rules, but humans do not work that way; they need to build and habituate patterns to be effective.

We actively seek out multiple encounters to increase the probability of an emergent solution that does not just repeat the past, but which opens up new possibilities.

The loss of content, but particularly context involved in codification means that written knowledge is only ever a partial representation of what we know.

[…]  innovation is dependent on disruption of entrained patterns of thinking.



................................................................................................................................................................................


Pattern matching, patterns of expectation

[…] humans do not make rational logical decisions based on information input, instead they pattern match with either their own experience, or collective experience expressed as stories. It isn’t even a best fit pattern match but a first fit pattern match (Klein 1998).

The human brain is also subject to habituation; things that we do frequently create habitual patterns which both enable rapid decision-making and entrain behaviour in such a manner that we literally do not see things that fail to match the patterns of our expectations.

We do not see what we do not expect to see - and you can't train yourself to see the unexpected 

--

Cyclists on roads. Most drivers are habituated to see cars, not cyclists. When they scan, they scan for car-shaped objects. A cyclist does not, generally speaking, match the ‘pattern of expectation.’ The cyclist has a greater chance of breaking into this scan if they can catch attention - i.e. move erratically or wear something jarring.
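
A toy contrast between the first-fit matching described in these notes and an exhaustive best-fit scan - the feature sets, similarity measure and threshold are my own illustrative assumptions, not Klein’s or Snowden’s model: first-fit takes the earliest stored pattern (scanned most-recent-first) that clears a threshold, so the habitual pattern can win over a better match, and an input that clears no threshold simply goes unseen.

```python
def similarity(observation, pattern):
    """Crude overlap score between two feature sets (illustrative assumption only)."""
    if not pattern:
        return 0.0
    return len(observation & pattern) / len(pattern)

def first_fit(observation, patterns, threshold=0.5):
    """Return the first stored pattern, scanned most-recent-first, that clears the threshold."""
    for name, features in patterns:
        if similarity(observation, features) >= threshold:
            return name
    return None  # nothing expected matched - the code analogue of not seeing it at all

def best_fit(observation, patterns, threshold=0.5):
    """Scan every stored pattern and return the highest-scoring one above the threshold."""
    score, name = max((similarity(observation, f), n) for n, f in patterns)
    return name if score >= threshold else None

patterns = [  # most recent / most habituated experience first
    ("car", {"metal", "four wheels", "headlights"}),
    ("cyclist", {"two wheels", "rider", "headlights"}),
]

seen = {"two wheels", "rider", "headlights", "metal"}
print(first_fit(seen, patterns))  # -> "car": the habitual pattern clears the bar first
print(best_fit(seen, patterns))   # -> "cyclist": the better match, found only by a costlier full scan
```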


................................................................................................................................................................................


Efficiency

Unfortunately, while efficiency does achieve effectiveness in mechanical or highly structured human systems, it does not in respect of the majority of human interaction which, as previously stated, is complex in nature.

An interesting feature of complex systems, particularly in social insects, is that for a system to be effective there needs to be a degree of inefficiency in the operation of its agents. Humans are the same; the efficiency focus of best practice harms effectiveness because it assumes repeatable past patterns of cause and effect. Driving out inefficiencies increases vulnerability to new threat as the adaptive mechanism of the complex system has been withdrawn.

--

Efficiency = 'minimum wasted effort'. Something that is efficient has been stripped of any redundancy, boiled down to its essentials. It makes a minimum of moves to reach an intended goal.

Inefficiency is more desirable in complex systems as it implies a wider base, and a potentially wider range of movement. Inefficiencies are doorways to alternate patterns.
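
A toy way to see why driving out “inefficiency” raises vulnerability - my own illustration, assuming independent failures, not anything from Snowden’s text: if a function is backed by k spare paths, each failing under a novel shock with probability p, the chance of losing the function falls geometrically with k; stripping the spares back to k = 1 hands that margin back.

```latex
% k independent fallback paths, each failing under a shock with probability p (assumption):
P(\text{function lost}) = p^{k}
% e.g. p = 0.2: \quad k = 1 \Rightarrow 0.2, \quad k = 2 \Rightarrow 0.04, \quad k = 3 \Rightarrow 0.008
% The "inefficient" spare paths are precisely the adaptive margin the text describes.
```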


................................................................................................................................................................................


Top down / bottom up

A true narrative database uses only original material and searches it based on abstract questions that discourage directed enquiries to create serendipitous encounter.

Attempts to engineer a network through design and allocation of staff to groups generally fail as they create artificial relationships that are not sustainable.

--

Define boundaries in which things can emerge. Define the playing field and let the game take care of itself. Bottom-up (emergent) within top-down (planned).


................................................................................................................................................................................




Outliers


................................................................................................................................................................................


Normal                            -                      Divergent
Centre                              -                      Periphery


................................................................................................................................................................................


Dave Snowden: There’s a level of dissent you want to have permanently present within the organization. The thing [is] to measure the degree of inefficiency a system [needs] in order to be effective.

Jim Rutt: Do you have anything you can explicate on how one would think about what’s the right amount of diversity? I suppose it’s situationally dependent.

Dave Snowden: That links in with apex predators. If you’ve got a stable ecosystem you don’t need so much diversity. If the system is suddenly destabilized you need to increase diversity very quickly.

[Jim Rutt & Dave Snowden]
'EP11 Dave Snowden and Systems Thinking', Jim Rutt Show


................................................................................................................................................................................


Related posts:

Notes: Daniel Schmachtenberger - Jim Rutt Show


................................................................................................................................................................................

EP7 Daniel Schmachtenberger and the Evolution of Technology
Jim Rutt Show



................................................................................................................................................................................


Anything that has bottom-up coordination only, but abstraction-mediated capacities, like markets, is going to fall to multipolar traps. Multipolar traps with exponential tech will be catastrophically bad.



43:35 - Because of the nature of evolved systems we get antifragility. Complicated systems subsuming their complex substrate increase fragility.

If I burn a forest it will regenerate itself, if I cut my body it will heal itself. If I damage my laptop it won’t heal itself.

Humans take the antifragility of the natural world and turn it into fragile stuff. We turn it into simple and complicated stuff. So we turn a tree, that’s antifragile and complex, into a two-by-four that is simple; or a house that is complicated; but both are fragile. 

We have complicated systems subsume the complex systems, so we’re creating an increasingly higher fragility-to-antifragility ratio. We’re trying to run exponentially more energy through an exponentially more fragile system.



45:20 - The way humans solve problems tends to create worse problems. For the solution to solve the problem it has to be larger, faster, somehow bigger than the problem was.

The solution typically is to solve a very narrowly defined problem - [a small number of] metrics - and yet it’s going to interact with complex systems that affect lots of other metrics, where it will end up having harm-externality that will be larger than the original thing.

The plough solved the problem of local famines, but ended up causing desertification, and species extinction - and all these things writ large globally. The internal combustion engine solved the problem of too much horse shit in the cities and the difficulties of horses, but climate change, oil spills, wars over oil, and the destabilisation of the Middle East are the unintended externalities.

We can see the same for the value of Facebook compared to the unintended externalities it created.

I can define a problem in a narrow way, but that’s actually not the problem, it’s a little part of it. It’s the same with biotech - I can say the problem is one biometric that I’m trying to address, for instance LDL, and I can give something that lowers that, but it also might do a bunch of other things that are negative, which are the side effects. This is not a good approach to medicine.

The information processing that it takes to come up with a new piece of tech is orders of magnitude less than the information processing it takes to ensure that tech won’t have any externality in its long term application. The safety analysis is going to end up being NP-Hard relative to the work that it takes to come up with the tech being expressible as polynomial.



51:43 - Why do we get so much concentration of sociopathy in the top of Fortune 500 companies, and politics, and especially finance?

They’re basically systems to attract, reward, incentivise and condition sociopathy. 

People who are attracted to power and people who are good at winning a bunch of Win-Lose games get to the top of a power game. At each step they move up the ladder they’re winning against somebody else, usually via things like disinformation and deception. If you think about the nature of a government, or a corporation, or any top down power system it is basically a strange attractor for people who want to have power.

If there are forty or fifty people - up to a Dunbar number - living in a tribe, there is an extraordinarily high degree of transparency that is forced in that scenario. Everybody pretty much sees what is going on with everybody else, everybody knows everyone, everyone has fealty relationships with everyone.

So sociopathy is not going to be advantageous - you’re not going to have an evolutionary niche in that environment for much in the way of conspiring and lying, because it will get found out and punished.

So the forced transparency creates an accounting system where you don’t get an evolutionary niche for somebody fucking the other people in the system. As soon as the system starts to get large enough that…

- there are anonymous people, so I can harm people who I don’t really know and care about
- I can do stuff that people won’t be able to see; if I can have a corruption of the accounting in the system…

…we get an evolutionary niche for internal defection, rather than participating with the system. I’m not externally defecting and leaving the system, I’m internally defecting and playing the system.

Most people inside a corporation or a government are optimising what is good for them and their direct fealty relationships, rather than what is good for the whole - and nobody can tell.

We do our social science inside of a world where these systems have become ubiquitous, and we assume that those properties - where there is ubiquitous conditioning - are intrinsic to human nature. We have to be careful about that because I think a lot of them are not intrinsic to human nature, they are a result of the ubiquitous conditioning - and we could create conditioning environments in which things like sociopathy are not advantageous and don’t get up-regulated.

................................................................................................................................................................................