Open and closed loop learning

Some time ago I realized that the histopathology laboratory I was designing and building together with my colleagues was a system. This may sound like a trivial insight, but it turned out to be quite important: it opened up the very useful body of knowledge of systems engineering that we could bring to bear.

Having gained that insight, I wanted to specify our own systems engineering process to guide the further development of the laboratory according to the principles of systems engineering. I studied the INCOSE Systems Engineering Handbook, NASA’s Systems Engineering Handbook, and the ISO/IEC/IEEE 15288 standard, Systems and software engineering – System life cycle processes. All these sources describe processes for designing systems and for analyzing the system designs.

Although I have always (well, since the 1990s) described system development as iterations of “guessing a design and checking if it works”, at first I did not pay as much attention to the system analysis (checking) part as to the system design part. Over time, however, I have come to think of the two activities as an integral whole; one of them doesn’t make sense without the other. More specifically, there is no linear or algorithmic way to go from requirements to design. System design is an inherently creative step where the engineer “guesses” a design and then seeks to establish through analysis whether the design will satisfy the requirements. The design – analysis loop is repeated until a satisfactory solution is found.
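
To make that loop concrete, here is a minimal Python sketch of the iteration. It is only an illustration under my own naming: propose_design and analyze are hypothetical placeholders for the creative and the analytical activity, standing in for whatever methods a given project actually uses.

```python
def engineer_system(requirements, propose_design, analyze, max_iterations=100):
    """Iterate 'guess a design, check it' until the requirements are satisfied."""
    design = propose_design(requirements, previous=None, findings=None)
    for _ in range(max_iterations):
        ok, findings = analyze(design, requirements)   # the "checking" step
        if ok:
            return design                              # a satisfactory solution was found
        # feed the analysis findings back into the next "guess"
        design = propose_design(requirements, previous=design, findings=findings)
    raise RuntimeError("no satisfactory design within the iteration budget")
```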

Many processes in the world can be described as learning processes. Such processes take place on time scales ranging from milliseconds to eons. Human perception is a process where the individual learns about the environment with millisecond temporal resolution. The evolution of species operates on time scales of thousands or millions of years. Systems engineering operates somewhere in between.

Learning can take place in open loop or in closed loop. An example of open loop learning is a lecture, where a lecturer imparts knowledge to an audience but the audience has not (yet) verified the truthfulness of the message.

The systems engineering process is closed loop learning, in which we learn about, propose and analyze hypothetical solutions in an iterative loop. More crucially, we eventually test the solution in its real-world operational environment. That is the ultimate test of the system’s fit to the world and a practical test of the truth of the hypothesis.

Examples of closed loop learning

Closed loop learning is at the heart of systems engineering and of efficient learning in general, but it is not limited to those domains. It can be found (or could be used, if not already in use) in almost every context in which goal-oriented action takes place, both in naturally occurring processes and in processes designed by humans. Some examples of closed loop learning in nature:

  • Biological evolution: Random mutations and genetic recombination suggest new traits, which are then tested by natural selection (a toy sketch of this suggest-and-test pattern follows after this list).
  • Immune system: The adaptive immune system generates a diverse array of antibodies and then selects those that effectively combat pathogens.
  • Active inference: In perceptual inference, the brain continually makes predictions (“suggests”) about sensory inputs and updates its internal models based on the discrepancy between the actual and the predicted sensory input (“test”). In action inference, the brain analyzes, through simulation, the merits of a number of candidate actions before acting.
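
As promised above, here is a toy “mutate and select” sketch of the evolutionary suggest-and-test pattern. It is an illustration of the loop only, not a model of real biology; the bit-string “genome” and the arbitrary target are my own inventions.

```python
import random

# Toy "mutate and select" loop: evolve a bit string toward an arbitrary target.
rng = random.Random(1)
target = [1] * 20                                  # stands in for "fit to the environment"
genome = [rng.randint(0, 1) for _ in target]       # a random starting genome

def fitness(g):
    return sum(1 for a, b in zip(g, target) if a == b)

for generation in range(200):
    mutant = list(genome)
    mutant[rng.randrange(len(mutant))] ^= 1        # suggest: flip one random bit
    if fitness(mutant) >= fitness(genome):         # test: does it do at least as well?
        genome = mutant                            # select: keep the better variant

print(fitness(genome), "of", len(target), "positions match the target")
```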

Examples of closed loop learning in a human context:

  • Scientific method: Scientists formulate hypotheses (suggestions) and then test them through experiments and observations.
  • Business and entrepreneurship: The “lean startup” methodology emphasizes creating minimum viable products and testing them in the market.
  • Training of an AI model: During training, the error between the model’s actual output and the target output is used to adjust the model so that it produces a smaller error the next time that particular training data point is presented to it (a minimal sketch of such a training loop is given below).

In all of the examples above the ultimate objective is to find and apply knowledge that furthers some objective function in the real world, a pragmatic truth. For biological organisms the objective function is to survive and procreate.
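
To make the AI training example concrete, here is a toy closed loop training sketch of my own (fitting a straight line with gradient descent rather than any particular model): the loop proposes predictions, measures the error against the data, and adjusts the parameters to reduce that error.

```python
import numpy as np

# Toy closed loop training: fit y = w*x + b by repeatedly measuring the
# prediction error (the "test") and nudging the parameters (the "suggestion").
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100)
y = 3.0 * x + 0.5 + rng.normal(scale=0.1, size=100)   # noisy data from a known line

w, b, lr = 0.0, 0.0, 0.1
for epoch in range(200):
    y_pred = w * x + b                 # propose: the current model's prediction
    error = y_pred - y                 # test: compare against the data
    w -= lr * np.mean(error * x)       # adjust: gradient step on w
    b -= lr * np.mean(error)           # adjust: gradient step on b

print(f"learned w = {w:.2f}, b = {b:.2f}")   # should end up close to 3.0 and 0.5
```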

Why do we need to close the loop?

The answer to this question should be fairly obvious from the examples above, but let’s make a simple thought experiment. Imagine a stretch of road that is perfectly straight for several kilometers (such as can be found cutting through the endless Finnish forests). At the start of that stretch of road, point your car exactly in the direction of the road and adjust the steering wheel so that the front wheels point exactly straight ahead. Start driving. Do you think you will still be on the road, in the right lane, after two kilometers if you don’t touch the steering wheel? You had, after all, done a meticulous job of setting the car off in the right direction.

I suspect that you, like me, would feel the urge to grab the steering wheel after just a few hundred meters. Why is that? One reason is that where the car ends up is almost infinitely sensitive to the initial settings of the car’s position and the steering wheel; getting it exactly right is as impossible as making a pen stand on its sharp tip. In a real scenario there will also be wind, potholes, moose (if in my native Sweden) and other unpredictable circumstances nullifying the best of plans. Sticking to a linear plan can be fatal.
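
The thought experiment can be turned into a toy simulation. The numbers below are made up for illustration and this is not a proper vehicle model: the car starts with a small heading error and is nudged by random disturbances; the open loop driver never corrects, while the closed loop driver keeps comparing the car’s actual position with the lane center and steers back toward it.

```python
import random

def drive(closed_loop, km=2.0, step_m=1.0):
    heading = 0.005            # initial heading error in radians (roughly 0.3 degrees)
    lateral = 0.0              # lateral offset from the lane center in meters
    rng = random.Random(42)
    for _ in range(int(km * 1000 / step_m)):
        heading += rng.gauss(0, 0.00005)               # wind, potholes, road camber...
        if closed_loop:
            heading -= 0.01 * lateral + 0.5 * heading  # feedback: steer back toward the center
        lateral += step_m * heading                    # advance one step along the road
    return lateral

print(f"open loop:   {drive(False):7.2f} m from the lane center")
print(f"closed loop: {drive(True):7.2f} m from the lane center")
```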

Some reasons why a “linear” open loop approach doesn’t work as well as a closed loop approach are:

  • Sensitivity to initial conditions, e.g., the initial setting of the steering wheel in the example above, which undermines the reliability of fixed plans.
  • Epistemic uncertainty, or gaps in our knowledge, which prevent us from predicting outcomes with sufficient accuracy. In the example above this might be uncertainty about road conditions, wind, etc.
  • Changes in the environment rendering fixed approaches obsolete.
  • Old models of reality being replaced by new and better ones.
  • Complex systems often exhibit unanticipated behavior (sometimes called emergent behavior, a term I don’t use).
  • If the brain employed a linear perception mechanism, it would probably be too slow to act. Hypothesis-driven perception is faster.

Why don’t we close the loop?

Even though we know that a closed loop approach is the best, and sometimes the only, way to navigate unknown epistemic territory, humans don’t apply it effectively and consistently. There are many possible reasons for this:

  • Cognitive dissonance: This is illustrated with the following quote from [1]: “…when you tell somebody in a kind and warm and friendly way, ‘Here’s some information that shows why you’re wrong,’ they rarely thank you. They tell you where you can go with your findings and what you can do with them and they’re just not inclined to be grateful.”
  • Focus on individuals instead of the system: One illustrative comparison is that between aviation and health care. In aviation, each incident is investigated with the aim of improving the aviation system, not with the aim of finding a pilot or other professional to blame. In health care, systems thinking is often lacking, and the focus is on finding (and potentially suing) the individual doctor presumed to be to blame for the incident. The latter type of culture does not promote learning on the system level, or even on the individual level. See an earlier post and [2].
  • Short-term thinking: Many organizations and individuals prioritize immediate results over analysis of incidents and long-term improvements, which can discourage the iterative nature of closed-loop approaches.
  • Fear of failure: A closed-loop approach inherently involves trial and error. In cultures where failure is stigmatized, this can lead to resistance to adopt such methods.
  • Sunk cost fallacy: Once resources have been invested in a particular approach, there’s often a reluctance to change course, even when feedback suggests it’s necessary.
  • Complexity and cognitive load: Closed-loop approaches can be more mentally taxing, requiring constant evaluation and adjustment. This can lead to decision fatigue and a preference for simpler, linear approaches.
  • Hierarchical structures: In organizations with rigid hierarchies, the feedback loop may be disrupted if information doesn’t flow freely between levels.
  • Overconfidence bias: People often overestimate their (or others’) ability to predict outcomes, leading them to undervalue the need for feedback and adjustment. I sometimes attribute this to cognitive laziness or, worse, a will to shift responsibility onto a party that is not able to take that responsibility.
  • Lack of measurement methods and tools: In some fields, it can be challenging to quantify progress or success, making it difficult to implement effective feedback loops.
  • Cultural resistance to change: Some organizational or societal cultures may view constant change and adaptation as destabilizing, preferring the illusion of stability offered by linear approaches.
  • Lack of systems thinking education: Many educational systems don’t emphasize systems thinking, making it less intuitive for people to adopt closed-loop approaches.

Conclusions

Closed-loop learning is almost a law of nature. It is applied by all sustainable natural systems and in most successful human endeavors.

Despite the obvious benefit, or even necessity, of closed-loop learning, we fail to implement it in many contexts where it would clearly help. The reasons range from psychological and cultural inertia [3] to short-termism and lack of systems thinking [4].

Links

[1] Interview with Carol Tavris. Sean Carroll’s Mindscape podcast.
[2] What Can Healthcare Learn From Aviation Safety? Stephen Rice. Forbes. February 07, 2020.
[3] The Fearless Organization: Creating Psychological Safety in the Workplace for Learning, Innovation, and Growth. Amy C. Edmondson.
[4] The Fifth Discipline: The Art & Practice of the Learning Organization. Peter M. Senge.
