Plenary @ISSSMeeting Judith Rosen, Keynote #isss2016USA, 60th Annual Meeting of the International Society for the Systems Sciences and 1st Policy Congress of ISSS, Boulder, Colorado, USA
Day 1 theme: Systems Thinking for Systemic Sustainability
Plenary II – Towards Holistic Systems Thinking
- Description: Although every environmental agency today is calling for ways to manage whole ecosystems, we do not know how to do that. Our theories and methods to address the question of whole-system sustainability are incomplete and as a result our actions regarding individual processes, sectors, and resources can contribute to problems as much or more than to solutions. How can systems thinking help us move to another level of understanding where we can address the pressing complex systemic issues of inter-related socio-ecological systems to resolve the dysfunction of their often contradictory sectors and components?
Session chair: Judith Rosen
This digest was created in real-time during the meeting, based on the speaker’s presentation(s) and comments from the audience. The content should not be viewed as an official transcript of the meeting, but only as an interpretation by a single individual. Lapses, grammatical errors, and typing mistakes may not have been corrected. Questions about content should be directed to the originator. The digest has been made available for purposes of scholarship, posted by David Ing.
Judith Rosen is a writer, researcher and artist, who, through interaction with her father, the mathematical biologist Robert Rosen, has a comprehensive understanding of his scientific work. She is the VP of Conferences 2015-2016 for the ISSS.
Why are systems alive?
Life is information-based
- Model-based system guidance and control strategy
- Modelling relation as a law of nature, something in our systems
- All organisms have information-gathering systems, and internal capacities
Modeling relation: we do this in science
- Natural system
- Formal system
- Causal entailment of a natural system, what makes it what it is
- Inferential entailment, should predict what the natural system will do
- Decoding checks predictions to see if they reflect what the system does
Two types of entailment
- Causal entailment
- Inferential entailment
- If we don’t encode correctly, or don’t incorporate enough causal entailment, the model can go wrong
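The modeling relation described above can be sketched as a small loop (all names and the daylight rule are hypothetical, purely illustrative): encode an observation of the natural system into a formal system, run inferential entailment there, decode the conclusion back, and check it against the natural system's causal entailment.

```python
# Minimal sketch of Rosen's modeling relation (hypothetical names and rule):
# natural system --encode--> formal system --infer--> conclusion --decode--> prediction

def encode(observation):
    """Encoding arrow: map a natural-system observation into the formal system."""
    return observation["daylight_hours"]

def infer(daylight_hours):
    """Inferential entailment inside the formal system: a toy bloom rule."""
    return daylight_hours >= 12  # assumed threshold, illustrative only

def decode(prediction):
    """Decoding arrow: map the formal conclusion back onto the natural system."""
    return "bloom" if prediction else "no bloom"

def modeling_relation_holds(observation, actual_behaviour):
    """The relation 'commutes' when the decoded prediction matches what the system does."""
    return decode(infer(encode(observation))) == actual_behaviour

# If the encoding omits a causally relevant factor (say, temperature),
# the prediction can fail even though the inference is internally valid.
print(modeling_relation_holds({"daylight_hours": 14}, "bloom"))     # True
print(modeling_relation_holds({"daylight_hours": 14}, "no bloom"))  # False
```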
Assertion: both life and mind are model-based
- Errors are everywhere, e.g. false positives
- Can fool a model
In the horticultural industry, growers know what the triggers are that make a flower go into its bloom cycle
- If growers can mimic the environment’s behaviour in advance of the season, the plant can go into bloom
- An allergic reaction is a somatic error: peanuts make the immune system go into attack mode
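The fooled-model point above can be sketched as a toy anticipatory system (class name and threshold values are assumptions, not from the talk): the system acts on what its internal model predicts, so mimicking the encoded cues produces a false positive, like a greenhouse forcing bloom in midwinter.

```python
# Hypothetical sketch: an anticipatory system responds to its model's
# prediction of the future, not to the actual season, so it can be fooled.

class AnticipatoryPlant:
    """Toy plant that blooms when its internal model predicts spring."""

    def __init__(self, daylight_threshold=12.0, warmth_threshold=15.0):
        # Assumed trigger values, purely illustrative.
        self.daylight_threshold = daylight_threshold
        self.warmth_threshold = warmth_threshold

    def predicts_spring(self, daylight_hours, temperature_c):
        # The model only sees its encoded cues, not the season itself.
        return (daylight_hours >= self.daylight_threshold
                and temperature_c >= self.warmth_threshold)

    def respond(self, daylight_hours, temperature_c):
        return "bloom" if self.predicts_spring(daylight_hours, temperature_c) else "wait"

plant = AnticipatoryPlant()
print(plant.respond(9, 5))    # winter cues -> "wait"
# A greenhouse mimics spring cues in midwinter; the model is fooled:
print(plant.respond(14, 20))  # -> "bloom", regardless of the actual season
```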
Mental anticipatory error
- What you think you see in advance
- Stop stereotyping
Intelligent organisms as dual anticipatory systems
- A bird eating a monarch butterfly makes a face when it learns that it shouldn’t do that
- Then there are other butterflies (e.g. the viceroy) mimicking the monarch butterfly
Training versus instinct
- Somatic model on top of mind
- Leads to anticipatory pathology
Error at the somatic level: pareidolia, the human tendency to see faces everywhere
Bee orchids mimic the shape of female bees, so that males regularly try to mate with the flowers, thereby pollinating them
A lot of human dysfunction is auto-pilot modes
- We like to re-encode a model, reinforcing a behaviour pattern
Mind-body interactions are easier to understand with anticipatory systems
- They become each other, mind can change biochemistry
Given that the modeling relation is about entailment, and models are prone to error, what does that mean for science?
- Study errors, how to recognize them
- Flat earth
- Machine metaphor, has been built into all forms of science
- Problem: when we try to get somatic, we tend to get factories, standardizing linear processes, while biological systems have cyclical entailment of beginning, middle and end
- A cannulated cow, giving access into a stomach, treats the animal as a machine
- A machine doesn’t have an internal model for health; a living system models itself
- For a machine, all information comes in from the external; a living system knows what is better for its own health
- If a system can set its own goals, treating it as a machine will run into issues
How can we redefine scientific objectivity, so that it’s more appropriate for complex and living systems?
- Commitment to anchor oneself to nature, and refuse to be influenced by money or power
Beliefs and science?
- Could confuse the map with the territory
- Anchoring to nature/reality is hard work
Science fiction versus science?
- Creationism: can’t check the model, which is based on information that we don’t have access to, and therefore it is not science
- Science must pertain to the actual world
- Science fiction is a story; it only has to be true within its own system: Star Trek devices
- These all involve modeling, some scientific
Decoding arrow, can’t see it inside the system
- A natural system doesn’t tell you how to model it
- The natural system doesn’t tell you there’s an error
- Need enough entailment to know about the system
- Don’t need to know about the mind
Need more attention: a rigorous theory of models
- It exists in nature, outside our scientific pursuit
- Could create diagnostic maps: missing information, wrong information, predicts but not magnitude?
- Prediction could be plausible, which is dangerous
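The diagnostic-map idea above can be sketched as a classifier of how a prediction fails against observation (function name, categories' encoding, and the tolerance are assumptions for illustration): missing information, wrong information, or right direction but wrong magnitude.

```python
# Hypothetical sketch of a 'diagnostic map' for model error: classify a
# numeric prediction's failure mode against the observed value.

def diagnose(predicted, observed, tolerance=0.1):
    """Return which kind of model error the prediction exhibits."""
    if predicted is None:
        return "missing information"          # the model makes no prediction at all
    same_sign = (predicted >= 0) == (observed >= 0)
    if not same_sign:
        return "wrong information"            # wrong direction entirely
    if observed and abs(predicted - observed) / abs(observed) > tolerance:
        return "right direction, wrong magnitude"
    return "prediction holds"

print(diagnose(None, 3.0))   # missing information
print(diagnose(-2.0, 3.0))   # wrong information
print(diagnose(6.0, 3.0))    # right direction, wrong magnitude
print(diagnose(3.1, 3.0))    # prediction holds
```

The "right direction, wrong magnitude" case is the dangerous one flagged above: the prediction is plausible, so the error is easy to miss.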
Cartoon: two swamis playing chess, looking into the crystal ball
- If you don’t know they have a crystal ball, you could be surprised
- If you leave the model out, you won’t be able to predict
- Look for anticipatory signatures
- John Kineman’s work is on the ontology of anticipatory systems: where does it come from? Holon theory
- What can happen
- We study what does happen
- Principles in the universe of what does happen
- Can encode mathematically, diagrammatically, then could predict behaviour
- Won’t always be appropriate, context changes
Living systems and machines. Science paradigms from machines. Consciousness. Challenge for scientists
- Consciousness arises from the brain.
- Mind is different from the brain.
- Mind is more powerful in decision-making process than the body
- We’ll sacrifice our lives for principles
- Mind as a separate anticipatory system
- Could never decode that assertion, have to not build that into our model
- Not wise to ignore the decoding arrow
- The danger of artificial intelligence is that it will create artificial life; if it’s alive, there will be a responsibility; it will have its own sense of health, and it would be wise for it to see us as dangerous