Recursion is one of those computer science concepts that drives students crazy and makes programmers drool. There is something captivating about it, yet some of its features seem out of grasp even for experienced professionals.
One of the reasons recursion is tricky is that it is not a native computer science concept; it is a borrowed one. Recursion is actually a mathematician’s tool by design.
If you look around on the internet, you will find a wealth of “tutorial”-like content trying to explain recursion using programming examples. These “tutorials” are great for computer science students and aspirants learning the programming logic of recursion.
However, I’m here to argue that these tutorials are doing more harm than good. Part of the problem is what I call “interview recursion” (more on that later). One does not truly learn recursion by simply learning the program logic alone. There is certainly much more to it than meets the eye. And that’s where this essay comes in.
In this essay, we will start with a simple illustration of recursion — first from a mathematical standpoint and then from a programming standpoint.
Following this, we will dive into the history and philosophy of recursion in the context of computer science. Finally, we will proceed to analyse and discuss the practical challenges and applications of recursion (including “interview recursion”). Let us begin.
A Mathematical Illustration of Recursion
A classic example of a recursive function in mathematics is the factorial function. As a reminder, the factorial function is defined only for non-negative integers. For further insights into this function, check out my essay on why exactly zero factorial is equal to one.
Coming back to recursion, we could easily reveal the recursive nature of the factorial function using the following simple formula:
n! = n * (n - 1)!
To see this formula in action, let us say that n = 8. Then, we have the following recursive execution:
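8! = 8 * 7!
7! = 7 * 6!
6! = 6 * 5!
5! = 5 * 4!
4! = 4 * 3!
3! = 3 * 2!
2! = 2 * 1!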
Well, isn’t that a little bit cheeky? Each time, we just replace (n - 1) with a new number. At what number does the recursion end? Herein lies an important feature of recursion. Any recursive function/algorithm must include a well-defined base case!
Note that for the base case, there is no computation necessary; we define what the function’s value is for the base case. In our case, let us say that the base case is the following:
1! = 1
Consequently, the recursion execution would play out as follows:
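1! = 1
2! = 2 * 1! = 2
3! = 3 * 2! = 6
4! = 4 * 3! = 24
5! = 5 * 4! = 120
6! = 6 * 5! = 720
7! = 7 * 6! = 5040
8! = 8 * 7! = 40320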
Note how each factorial state is revisited in the later steps once the base case has appeared. Now that we have covered a mathematical illustration of recursion, let us proceed to programming.
A Programming Illustration of Recursion
We need not look any further than the factorial function to illustrate recursion in the world of computer science. Let us say that we are to implement the factorial function as a program. To do this, consider the following pseudocode:
int factorial(int n) {
    // Base case: defined directly, no computation needed
    if (n == 1) {
        return 1;
    } else {
        // Recursive case: the function calls itself with (n - 1)
        return n * factorial(n - 1);
    }
}
In the above pseudocode, we see a function called “factorial” that takes an integer argument ’n’. But inside the function, “factorial” calls itself again; only this time, it supplies the argument (n - 1). Here again, we see that the base case is defined for n = 1.
So, for each value of “n”, a stack frame with its own space is created. Only after the base case has been computed are the stack frames closed, in last-in-first-out order.
As it closes, each stack frame returns its output to the frame that called it. And finally, the main program returns the overall result. Here is an illustration of what this process looks like:
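As a sketch, take n = 4 (a smaller value than before, chosen purely for brevity):

factorial(4) calls factorial(3)
    factorial(3) calls factorial(2)
        factorial(2) calls factorial(1)
            factorial(1) hits the base case and returns 1
        factorial(2) returns 2 * 1 = 2
    factorial(3) returns 3 * 2 = 6
factorial(4) returns 4 * 6 = 24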
We have now covered how the recursion logic works in mathematics and computer science using an illustrated example each. Thus far, this essay has been no different from how the usual “tutorials” approach the topic of recursion. However, it is now time for us to depart from this wonderland and face reality!
The Interview Recursion Problem
The illustrative examples we covered are great for anyone who is just learning the concept of recursion. However, it is not prudent to compute factorials using recursion in real life. Other common examples used to teach programming recursion include: reversing strings, countdown algorithms, sort/search algorithms, etc.
In almost all of these cases, recursion is the wrong tool to solve the problem. Here is something even worse: programming interviews tend to promote the idea of candidates using recursion to solve problems that actually don’t require recursion.
Why does this happen? My best guess here is that since recursion is a very counter-intuitive concept, interviewers value candidates who show a good understanding of recursion.
But still, I think that this does more harm than good. This practice gives programmers the illusion that recursion is a viable tool for solving many non-linear logical problems when, in reality, it is not.
“Hang on a minute! If you say that almost ALL of the examples used to teach recursion are not good candidates for recursion, where can we actually use recursion?”
If this question has popped up in your head, I hear you! The truth is recursion has its place in the programming world. But the instances where it makes sense to use recursion are so complex that using them to teach the concept ends up being counter-productive.
To understand this issue deeper, let us visit the history of recursion in computer science.
The History and Philosophy of Recursion in Computer Science
I mentioned that almost all of the common examples of recursion should not be solved using recursion. If this is indeed true, if not recursion, what else could we use? The answer is: iteration. In other words: use loops!
A loop is a set of instructions that is repeated again and again until a certain condition is met. Loops come in different forms in the programming world: for, do, while, etc.
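To make this concrete, here is a minimal de-recursed sketch of the factorial from earlier, using a for-loop (the name factorial_iterative is my own choice, and the sketch ignores integer overflow for large n):

int factorial_iterative(int n) {
    int result = 1;
    // Multiply the accumulator by every integer from 2 up to n
    for (int i = 2; i <= n; i++) {
        result = result * i;
    }
    return result;
}

This version uses a single stack frame and a single accumulator variable, no matter how large n gets.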
Loops came into the programming world earlier than recursion. In their most primitive form, as jump instructions, loops already existed in machine code and early autocodes. From there, they made their way into higher-level languages like FORTRAN.
As the concept of loops continued to evolve in the mid-twentieth century, the fields of science and engineering found a clever way of leveraging their utility even further: they started nesting loops within loops. A classic application of nested loops is manipulating matrices/arrays (see the sketch after the next paragraph).
When one loop runs inside another, the problem effort is said to be quadratic. When three loops are involved, the effort is cubic, and so on. Consequently, each additional level of nesting multiplies the effort by another factor of the input size.
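As a sketch, here is the nested-loop pattern for adding two n-by-n matrices (the helper name add_matrices is hypothetical, and it assumes C99 variable-length array parameters):

void add_matrices(int n, int a[n][n], int b[n][n], int sum[n][n]) {
    // Two levels of nesting: the work grows as n * n,
    // i.e., quadratically in the matrix dimension.
    for (int row = 0; row < n; row++) {
        for (int col = 0; col < n; col++) {
            sum[row][col] = a[row][col] + b[row][col];
        }
    }
}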
As computers come with limited resources, there is a limit to the number of loops that can be nested within each other. This number varies depending upon the programming language and/or hardware. But there exists a limit.
However, we cannot solve a particular class of problems by using nested loops (at least not within the limits). This class of problems is what triggered the porting of recursion from mathematics to computer science. In other words, recursion was meant to be used ONLY when for-loops were a no-go!
When Should You Actually Use Recursion?
Before I tell you when you should actually use recursion, let me tell you when not to. Any recursive problem that can be de-recursed should not be solved using recursion. The reason for this is almost always the cost of resources.
Recall from my illustration that each stack frame in a recursive algorithm grabs computer resources and releases them only when it closes. The equivalent iterative algorithm using loops consumes fewer computer resources. It is as simple as that.
Recursive problems such as the factorial function, the Fibonacci series, etc., can be de-recursed. Such problems are known in the biz as primitive recursive problems. Recursion is not meant to solve primitive recursive problems.
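For instance, the Fibonacci series de-recurses into a simple loop. A minimal sketch (the name fibonacci_iterative is my own; fibonacci_iterative(0) returns 0, fibonacci_iterative(1) returns 1, and so on):

unsigned long fibonacci_iterative(int n) {
    unsigned long previous = 0;
    unsigned long current = 1;
    // Each step needs only the previous two values,
    // so two variables replace the entire call stack.
    for (int i = 0; i < n; i++) {
        unsigned long next = previous + current;
        previous = current;
        current = next;
    }
    return previous;
}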
Recursion should only be used to solve innately recursive problems (known in the biz as general recursive). These are problems that CANNOT be solved using for-loops. Historically, compilers have posed general recursive problems.
Other than these, innately recursive functions like the Ackermann function, or fractal recurrences like the one that generates the Mandelbrot set, are good candidates for recursion. Note that the subset of programmers who work with compilers or such innately recursive functions is very small.
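For illustration, here is the Ackermann function written recursively, following its standard mathematical definition (a sketch only; even a modest input like ackermann(4, 2) yields a number with nearly 20,000 digits and would exhaust any realistic call stack long before finishing):

// The Ackermann function is general recursive: its growth
// cannot be captured by bounded for-loops alone.
unsigned long ackermann(unsigned long m, unsigned long n) {
    if (m == 0) {
        return n + 1;                              // base case
    }
    if (n == 0) {
        return ackermann(m - 1, 1);                // recurse on m
    }
    return ackermann(m - 1, ackermann(m, n - 1));  // doubly nested recursion
}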
For the vast majority of programmers, there remains no valid need to use recursion other than to impress interviewers. The “interview recursion” phenomenon in turn gives programmers the illusion of the recursive hammer. To the one who possesses the recursive hammer, every non-linear problem looks like a nail!
Reference: David Brailsford.
Further reading that might interest you: How To Benefit From Computer Science In Real Life and Why Is LaMDA Not Really Sentient?