Prognostication vs Software Development: Subjectivity and Complexity

Posted 2016-01-25

"Crede, ut intelligas" – "Believe that you may understand"

– Saint Augustine, Sermon 43.7

As far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality.

– Albert Einstein, Geometry and Experience

1 Uncertainty in Life

Underlying every experience in life is uncertainty. It is, in general, impossible to be completely certain about anything.

When a human walks through a room, she can instantly filter out irrelevant details. She can do this because we as humans subconsciously accept certain truths and foundational beliefs. Without these operational assumptions, we would have to consider an untold number of possibilities at any given moment, and we would not be able to function in the real world.

Examples of details which we safely discard include:

  1. The ceiling is not going to fall on me; I do not need to analyze its structural integrity.
  2. The pattern of light on the wall is still, so it is unlikely to be a threat; it is most likely just light shining through the window.
  3. The laws of gravity, force, and momentum will remain in effect.

Subconsciously accepting these beliefs, and many others about how our world functions, allows us to focus on important threats and goals, such as:

  1. My little sister just stuck out her leg to trip me.

While it is easy for us to filter out extraneous details and sensory inputs as we go about our daily lives, this is by NO MEANS easy for automated systems. Computers, unless they are programmed quite cleverly, are prone to getting bogged down in the "combinatorial explosion" of possibilities which real-world inputs and problems tend to create. Computers' difficulty with real-world problems is captured in an observation known as "Moravec's Paradox":

"it is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility" – Hans Moravec, 1988, Mind Children

Anything done by following a rigid set of rules can often be automated comparatively easily, while tasks that involve subjective perception, however simple, often strongly resist automation.
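To make the "combinatorial explosion" mentioned above concrete, here is a rough, hypothetical sketch in Python (the object counts and state counts are made up purely for illustration): if a naive system must consider every combination of states for the objects in a scene, the number of cases it has to evaluate grows exponentially with the number of objects.

    # A rough illustration of combinatorial explosion: a naive system that
    # considers every combination of states for the objects in a scene faces
    # exponentially many cases.
    def naive_possibilities(num_objects, states_per_object):
        # Each object can independently be in any of its states.
        return states_per_object ** num_objects

    for n in (5, 10, 20, 40):
        print(f"{n:2d} objects with 4 states each -> {naive_possibilities(n, 4):,} combinations")

    # 40 objects with 4 states each already gives about 1.2e24 combinations,
    # which is why brute-force enumeration of "everything that might matter"
    # is hopeless, and why humans' implicit filtering is so valuable.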

2 Uncertainty/Subjectivity in Software Engineering

Most human occupations make use of our ability to navigate ambiguity, and software "engineering" is no exception.

Modern software development is essentially climbing through the apparatus of an immensely complex logical system and making appropriate modifications to produce a desired result. At every step, ambiguity and intuition rear their heads. We face ambiguity in our objectives, subjectivity at every implementation step toward meeting those objectives, and immense ambiguity in every interpretation of human communication.

I think that much of "software engineering" is quite subjective. In the design phase of a project, people with different backgrounds and skillsets can approach the problem with vastly differing methodologies and architectures, and no single methodology can be proven "correct". Programmers can often achieve similar results using any of several programming paradigms, languages, and planning approaches. Whether it is best to design in a functional style with immutable, declarative design principles, in an Object-Oriented style with SOLID design principles, or in a procedural style with close affinity to the hardware depends on the problem at hand and the prior experience of the programmers. The goals for a project may include hard boundaries such as reliability and performance guarantees, and perhaps runtime-complexity bounds for the behavior of the program, but the organization of the code which gets us to that end result is in large part a matter of taste and discernment. It all comes down to achieving the desired external behavior in an acceptable amount of time. Regardless of the specific design choices the designer makes, what is critical is that the designer can intuitively grasp the problem and the interacting pieces that affect its implementation, and can swiftly navigate the logic maze toward a workable solution. In other words, a software designer must be able to intuitively navigate complexity as well as subjectivity in order to produce a successful software design.
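As a minimal illustration of how different paradigms can reach the same external result, here is one trivial task (summing the squares of the even numbers in a list; the task and function names are my own example, not drawn from any particular project) written three ways in Python:

    # The same task written three ways; each produces an identical result.

    def sum_even_squares_functional(xs):
        # Functional / declarative style: no mutation, one expression.
        return sum(x * x for x in xs if x % 2 == 0)

    class EvenSquareSummer:
        # Object-oriented style: state lives in an object, behavior in methods.
        def __init__(self):
            self.total = 0

        def add(self, x):
            if x % 2 == 0:
                self.total += x * x

    def sum_even_squares_procedural(xs):
        # Procedural style: explicit loop and mutable accumulator.
        total = 0
        for x in xs:
            if x % 2 == 0:
                total += x * x
        return total

    data = [1, 2, 3, 4, 5, 6]
    summer = EvenSquareSummer()
    for x in data:
        summer.add(x)

    assert sum_even_squares_functional(data) == summer.total == sum_even_squares_procedural(data) == 56

None of the three versions is provably "correct"; which one reads best depends on the surrounding system and the experience of the team.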

3 Complexity

If a planner cannot effectively grasp the interacting pieces going into a planned software system, but instead is overwhelmed by the software's complexity, the planner's designs and estimates are likely to be quite poor.

The issue of managing complexity is a bridge from the skillset of software development to the skillset of many other fields, and appears to be a core human mental skill. Just as it is difficult to deliver quality software in a system for which you do not have an effective mental model, it is difficult for people to deliver any reasonable insight on real-world systems for which they do not have an effective mental model.

The classic case of a system which is too complex for people to gain reasonable insight into is… any system for which a person is trying to predict a future state.

It seems that once a system reaches a certain minimum level of complexity, its specific behavior becomes completely unpredictable over time without actually observing the system to see what will happen. A New Kind of Science by Stephen Wolfram popularizes this idea as the "Principle of Computational Equivalence":

"Almost all processes that are not obviously simple can be viewed as computations of equivalent sophistication". – (A New Kind of Science, page 717).

Wolfram supports his Principle with analysis of hundreds of systems and processes from a diverse set of domains, and argues that the Principle has important implications. The most obvious implication is that, once a system has reached a certain level of complexity, trying to predict its future using computational shortcuts, formulas, or intuition will very often fail.
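Wolfram's canonical examples are elementary cellular automata such as Rule 30, whose later rows appear to admit no shortcut: to know what row n looks like, you seemingly must compute all the rows before it. The following minimal Python sketch (my own rendering, not code from the book) runs Rule 30 from a single black cell:

    # Rule 30 elementary cellular automaton: each new cell is
    # left XOR (center OR right). Despite the trivial rule, the pattern
    # (and in particular its center column) appears random, and no known
    # formula predicts step n without computing the steps before it.

    def rule30_step(cells):
        n = len(cells)
        return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])
                for i in range(n)]

    def run(width=63, steps=30):
        row = [0] * width
        row[width // 2] = 1          # start from a single black cell
        for _ in range(steps):
            print("".join("#" if c else "." for c in row))
            row = rule30_step(row)

    if __name__ == "__main__":
        run()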

I think Wolfram's Principle of Computational Equivalence closely matches our experience with trying to predict the future and extrapolate the past within human society. Prognosticators have long claimed to be able to foresee future events, but they have a poor track record indeed. Ask any economist what the state of the economy will be, any MBA which company's stock will be high, or any professor of world affairs what the major flashpoints for world conflict will be in only a few short years; they may very well give you their opinion, and wax eloquent about technical details in their field and the complex models they consult, but they will almost certainly be wrong. Just as most economists incorrectly predicted financial stability through the period of 2007/2008 (http://knowledge.wharton.upenn.edu/article/why-economists-failed-to-predict-the-financial-crisis/), most mutual-fund managers find that they cannot beat the market for long, but instead regress to the mean in the long run (as evidence, compare the long-term returns of the largest mutual funds to those of plain Exchange Traded Funds).

The difficulties which arise when attempting to predict the future are very similar to the difficulties with complexity, mentioned earlier, which often arise in software development. Prognosticators attempt to predict the outcome of an immensely complex system (the world) with myriad free variables. Because the world is more complex than anything a person can form an effective mental model of (beyond relying on various heuristics which may or may not look correct in hindsight), the track record of people trying to predict what will happen in the future is extremely poor.

Similarly, in software development we work with very complex systems. In some respects, software development could be considered less complex than international politics; in other respects, it could be considered more complex. Modern software development involves coordinating the execution of billions of networked logic units over time. While you may largely understand the physics governing the operation of a single transistor within a computer (complete with various quantum and traditional analog phenomena), you certainly cannot completely understand or predict the operation of a machine with billions of interacting transistors (which is what today's computers are). Rather, you take certain assertions about your machine on faith, and assign them a very high degree of confidence. When something goes wrong with your computer, then, your limited knowledge about how your computer works allows you to assign an informative prior over the sample space of possible issues, and quickly zoom in on the likely cause. When a person is developing software, if he does not have a good enough mental model of the processes he is trying to control, he will almost certainly find that the project does not go as he expects.
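As a toy sketch of what assigning "an informative prior over the sample space of possible issues" might look like, here is a small Bayesian update over a handful of hypothetical faults (the fault names, priors, and likelihoods are invented for illustration, not real failure statistics):

    # Toy Bayesian diagnosis with made-up numbers: an informative prior over
    # possible faults, combined with how strongly each fault predicts the
    # observed symptom, concentrates belief on a few likely causes.

    prior = {                      # P(fault): prior belief in each cause (hypothetical)
        "loose or failed cable": 0.20,
        "dead power supply": 0.10,
        "corrupted OS update": 0.30,
        "random failed transistor": 0.40,
    }

    likelihood = {                 # P(symptom = "won't boot" | fault), also hypothetical
        "loose or failed cable": 0.80,
        "dead power supply": 0.95,
        "corrupted OS update": 0.60,
        "random failed transistor": 0.001,  # usually masked by redundancy and error correction
    }

    # Bayes' rule: P(fault | symptom) is proportional to P(symptom | fault) * P(fault)
    unnormalized = {f: prior[f] * likelihood[f] for f in prior}
    total = sum(unnormalized.values())
    posterior = {f: p / total for f, p in unnormalized.items()}

    for fault, p in sorted(posterior.items(), key=lambda kv: -kv[1]):
        print(f"{fault:28s} {p:.1%}")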

4 Conclusion

In this paper, we discussed subjectivity and complexity in the context of software development. We discussed how humans' intuitive ability to operate in the presence of subjectivity gives us a key advantage over mechanical automata such as computers. We also discussed how real-world phenomena often display a great deal of apparent complexity as well as subjectivity. Wolfram's Principle of Computational Equivalence argues that there is often no shortcut to predicting the future state of a system: you must run a computation of complexity equivalent to the system itself in order to predict its future state. Just as it proves difficult to predict the future state of the world, because we have no effective means of simulating the operation of the world, it is difficult to predict the success or failure of a project such as a software project if the project leader does not have an effective grasp of all the important variables, considerations, and plans, such that he can mentally compute the possible outcomes.

As long as programmers make appropriate simplifying assumptions and form an effective mental model of the interacting pieces of the program, software development can proceed quickly and efficiently. The moment, however, that a programmer loses an effective grasp of how the system he is working on behaves, he is facing the same scenario as the prognosticator – how can you predict how something will pan out when you do not have an appropriate understanding of the forces at play?