How To Understand Computational Irreducibility – A cartoon showing a boy dreaming of candy, and a computer on a television dreaming of the same candy.

Computational irreducibility is a concept that dates back to the 1980s. What got me particularly interested in this topic is Stephen Wolfram’s contemporary take on it.

At its core, Wolfram’s notion of computational irreducibility says that some systems or processes cannot be simplified or accelerated beyond their natural course. In other words, there is no shortcut to predicting the outcome of these systems without actually simulating the entire process.

This might sound like a simple observation, but there is profound complexity hiding underneath. What’s more, this concept has useful implications in critical fields ranging from cybersecurity to artificial intelligence (AI).

In this essay, I will first unpack the intricacies of computational irreducibility in layman’s terms. Then, I will explore its applications in the fields mentioned above, and finally consider its implications for everyday life.

As it turns out, this fascinating concept has the potential to influence our understanding of many natural and artificial phenomena, with consequences permeating our daily lives.

But first, let me begin by telling you a little bit about my background which led me to explore this topic in the first place.

This essay is supported by Generatebg


A Note on My Formal Background

It is in my nature to explore a wide array of fields and subjects. If you have been following my writing, you will have realised this already.

That being said, I have a “formal” background in numerical mathematics and simulation science. The “formal” part here refers to what we typically call a “degree”.

To be very frank with you, I feel somewhat embarrassed to share this. It has to do with how little I knew back when I was actually doing my degree. Fast forward to today, and I strongly feel that degrees are overrated and will become extinct in the near future.

Back to computational irreducibility now. What does my formal background have to do with it? Well, having studied and worked extensively with the mathematical/computational modelling and simulation of several systems, I stumbled upon a few crucial realisations on my own.

My Key Realisation About the Nature of Simulations

Many complex physical processes that we know of can be approximated via simplified abstractions. Consider Newton’s laws, for instance.

When calculating the trajectory of a falling apple or an orbiting planet, we treat the apple or the planet as a unitary particle with properties such as mass, momentum, etc. We don’t bother with what each molecule of the apple or planet contributes to the net result.

Why should we? Even without considering the intricate details, we get acceptable results. That is the beauty of the scientific approach; it considers only the necessary details to get the result.
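To make this concrete, here is a minimal sketch (my own illustration, assuming simple constant-acceleration kinematics, not an example from the essay) of how a formula lets us jump straight to a future state without simulating the intermediate moments:

```python
# Closed-form kinematics: position of a falling object after t seconds.
# No step-by-step simulation is needed -- time is "compressed" into a formula.

G = 9.81  # gravitational acceleration in m/s^2 (near Earth's surface)

def fall_distance(t: float) -> float:
    """Distance fallen from rest after t seconds: s = (1/2) * g * t^2."""
    return 0.5 * G * t ** 2

# We can jump straight to t = 10 s without computing t = 1..9 first.
print(fall_distance(10.0))  # ≈ 490.5 metres
```

This is exactly the kind of "time compression" that computationally irreducible systems deny us.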

Having said this, there are a few key details hidden here. When we are computing the trajectory of the ball or the planet, we are essentially simulating the physical process ahead of “time”.

In other words, we are able to compress time behaviour using the power of mathematics and abstractions (logic). My key realisation from having worked with numerical modelling and simulations is that not all physical processes allow us to compress time.


The Essence of Computational Irreducibility – Incompressible Time

There are physical processes wherein we cannot simply plug variables (initial conditions) into a formula and get the result. Such complex physical processes (systems) require us to compute each time step (state of the system as steps in time) based on the previous state(s).

Examples of such systems include fluid motion (weather prediction, aerodynamics, etc.) and stock market behaviour. In a sub-optimal setting, we would simulate these systems at a ratio of 1-to-1 (simulation time to physical-process time). That is, the simulation would proceed at the same pace as the actual process.
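As a minimal sketch of such state-by-state computation, consider the logistic map, a classic chaotic toy system (my own stand-in for the fluid and market examples above, not one from this essay). For typical parameter values, no general closed-form shortcut is known: to know the state at step n, we must compute every step before it.

```python
# Logistic map: x_{n+1} = r * x_n * (1 - x_n).
# For r around 3.9 the dynamics are chaotic, and no general shortcut
# formula is known -- each state must be computed from the previous one.

def logistic_step(x: float, r: float = 3.9) -> float:
    return r * x * (1 - x)

def iterate(x0: float, n: int, r: float = 3.9) -> float:
    x = x0
    for _ in range(n):  # no way to skip ahead: every step depends on the last
        x = logistic_step(x, r)
    return x

# The state after 50 steps is found only by stepping through all 50.
print(iterate(0.2, 50))
```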

As you can imagine in this case, there would essentially be no difference between simulating the physical process and actually performing the physical process. So, the advantage of simulating the behaviour starts to dwindle.

But the harsh reality we live in often forces us to simulate at ratios far worse than 1-to-1. In other words, we currently take hours to simulate physical processes that would play out in seconds. What’s worse, we often make simplifications and/or assumptions that introduce an inherent “error” into our simulations.

How to Simulate Our Universe?

Ever since I read The Last Question by Isaac Asimov, I have often pondered upon what it would take to simulate our universe. If we factor in our current simulation state-of-the-art, things don’t look bright.

But will it ever be possible? From what I know now, it might very well be the case that at the limit, the difference blurs between such a simulation and actually constructing the process of a running universe.

Note that I used the word “construct” here. One cannot be a constructor (what religion calls God) whilst also being a mere observer who experiences such a universe (a whole other problem for another time). In any case, I realised that I could not be the first person to have stumbled upon this line of thought.

So, I started researching and landed upon Wolfram’s work on computational irreducibility. Basically, Wolfram’s work classifies a set of systems (such as weather prediction, stock market behaviour, or simulating the universe) as computationally irreducible systems.

Wolfram grounds his work on computational irreducibility in cellular automata, which are simple computational systems characterised by a grid of cells that evolve according to a set of rules. Each cell’s state in the next iteration depends on the state of its neighbours, as determined by these rules.

Essentially, these are systems we cannot simulate by compressing time. What does this mean in terms of the implications of computational irreducibility in real life?

The Predictive Implications of Computational Irreducibility

One of the most profound implications of computational irreducibility is the limitation it imposes on our ability to predict the future. Our everyday reality provides the proof.

As I mentioned earlier, weather systems exhibit chaotic behaviour and are computationally irreducible. This is the reason why your typical weather forecasts are limited in accuracy.

Consider the forecasts that predict the weather days in advance; the longer the time horizon, the lower the accuracy. Another example of a computationally irreducible system that I mentioned is stock market behaviour.

Despite the development of sophisticated models (with which I also have experience), it remains impossible to predict market behaviour with complete certainty.

If you ever come across a trader or “finance expert” claiming that they can predict the market, I suggest you read my essay on how to perfectly predict improbable events.


Applications in Cryptography and Cybersecurity

Cryptography is the study of secret and secure communication. It turns out that this field directly benefits from the principles of computational irreducibility.

By exploiting the difficulty of predicting the behaviour of certain mathematical functions, cryptographers develop encryption algorithms that are resistant to attacks.

As a result, the security of our online communications, financial transactions, and sensitive data is bolstered by these computationally irreducible functions.

Similarly, computational irreducibility also contributes to the effectiveness of hashing algorithms, which are widely employed in cybersecurity. Hash functions transform input data into a fixed-size output, often referred to as a hash.

These functions are often designed to be fast and efficient, yet challenging to reverse. The unpredictable nature of computationally irreducible systems lends itself well to the development of robust hash functions, safeguarding digital information from bad actors.
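As a quick illustration with Python’s standard hashlib module (my own example, not one from the essay): a hash is fast to compute forward, yet a one-character change in the input yields a completely different output, and reversing it amounts to brute-force search.

```python
import hashlib

# SHA-256 is fast to compute forward, but there is no known shortcut
# to reverse it: recovering the input means searching the input space.

def sha256_hex(message: str) -> str:
    return hashlib.sha256(message.encode("utf-8")).hexdigest()

h1 = sha256_hex("hello world")
h2 = sha256_hex("hello worlds")  # one character changed

print(h1)
print(h2)
print(h1 == h2)  # False -- the two outputs share no obvious relation
```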

I just happened to learn the art of password handling from scratch for an app that I have been developing. I plan to cover the topic of password hashing and security in a future essay. For now, let us shift our focus to AI.

The Emergence of Artificial Intelligence

If you have an online presence these days, it is hard to miss the impact of ChatGPT. Every second internet marketer is on the case, and you often read one or more versions of the following claims:

“Here’s how you can make your next million using ChatGPT.”

“99% are using ChatGPT wrong. I will show you how to use ChatGPT correctly.”

Online scammers and marketers aside, the system (ChatGPT) itself has unquestionable value. What is even more impressive is that this is the least capable it will ever be; from this point onward, it will only improve.

How does a Large Language Model (LLM) operating on statistical probabilities answer many of our complex questions so effectively? Think about it this way. Your question is not necessarily unique.

It is actually an age-old philosophical problem; any thought you or I have is not necessarily unique. It is highly likely that someone, at some point in human history, has had the same thought before.

If you think about it, this is the same line of thought that led me to Wolfram’s work on computational irreducibility in the first place. It turns out that ChatGPT (GPT-3/4) is recognising the statistical patterns in our questions and using simple processes to deliver the results.

These processes, albeit simple, are inherently computationally irreducible. This is not just inherent to ChatGPT. Most modern machine learning systems rely on computationally irreducible processes to learn from data and adapt their behaviour.

In other words, even the creators of GPT likely do not understand its step-by-step functioning, which contributes to a phenomenon known as the “black box” problem.

Despite the challenges associated with understanding these systems, their inherent complexity allows them to excel at tasks that were previously unattainable by traditional algorithms. What a fascinating time to live in!

Conclusion

Computational irreducibility is a powerful concept that has the potential to reshape our understanding of various phenomena in natural and artificial systems.

From bolstering the security of our online communications to driving advances in AI, computational irreducibility has a profound impact on our daily lives. For better or for worse, the nature of these complex systems remains unpredictable.

The Computationally Irreducible Kid – Illustrative art created by the author

We as human beings (all life forms, really) have always feared the unpredictable. This lands us in a paradoxical situation: it is this very unpredictable complexity that currently drives human innovation and growth.

What are our options here? I would say that such a discussion is beyond the scope of this essay.

Ultimately, the applications and implications of computational irreducibility serve as a testament to the beauty and intricacy of the natural and artificial systems that surround us. As we continue to explore this enigmatic concept, we can only marvel at the potential insights it holds for the future.


If you’d like to get notified when interesting content gets published here, consider subscribing.


If you would like to support me as an author, consider contributing on Patreon.
