Before choosing to represent an algorithm recursively (as opposed to iteratively), three questions must be posed:
- Can the algorithm be written recursively?
- Is recursion the most organic approach? That is, will the recursive representation be easy to write, understand, and test?
- Does the recursive representation require an “acceptable” amount of memory and time to execute?
Some problems lend themselves naturally to a recursive solution; factorial and Fibonacci are good examples. Recursion is an appropriate approach when a problem can easily be divided into similar sub-problems that can be solved in the same manner. Divide-and-conquer algorithms are ideal candidates: solving each sub-problem contributes towards a solution to the original problem.
A good example is a car-parking algorithm. Given a parallel car park 100 feet long, how many cars can one park in a random fashion? A car is 10 feet in length and can be parked with no space between cars (let’s assume some sort of awesome self-parking car). Once the first car is randomly parked, there are two sub-problems: parking cars in the space behind the parked car, and parking cars in the space in front of it. These two sub-problems are illustrated below. Within each sub-problem, another car can be parked, creating two new sub-problems, and so on.

Some problems require values to be stored during their solution; Eight Queens and maze searching are examples. These are good candidates for recursion because values can be saved in a recursive routine’s local variables and retrieved when sub-problems finish recursing. For example, a maze-search algorithm might save the current position and try each of the four directions in clockwise order. In the example below, the algorithm tries to find a path through the maze. In the first maze, the algorithm looks N (then E, then S), and finds a path N. In the second maze, the algorithm looks W (then N, then E), and finds no paths, except the one which was just taken, signifying a dead end. The algorithm then moves back a position (maze 3) and tries the next direction: E (nothing), S (path exists), and so on.

Such a process could easily be simulated by an iterative algorithm, but it would require a stack of some sort to maintain positions. Some algorithms are easier to represent in a recursive manner, but may be more difficult to understand. A case in point is the Towers of Hanoi (TOH): the iterative solution is quite long, while the recursive solution is simple, yet actually harder to comprehend. Here’s a recursive version of TOH in C:
#include <stdio.h>

/* Move N disks from peg 'from' to peg 'to', with peg 'using' as a spare. */
void toh(int N, int from, int to, int using)
{
    if (N > 0) {
        toh(N - 1, from, using, to);         /* move N-1 disks out of the way */
        printf("move %d -> %d\n", from, to); /* move the largest disk */
        toh(N - 1, using, to, from);         /* move N-1 disks back on top */
    }
}
One thing to seriously consider when looking at a recursive algorithm is resources. Every time a recursive function is called, an activation record is created and memory is allocated. An algorithm that makes numerous recursive calls will expend time in the overhead associated with each call, and memory; sometimes it may attempt to use more memory than is available. An iterative algorithm will typically prove more efficient than its recursive counterpart. A case in point is Fibonacci, which seems like an ideal problem for a recursive solution, and it is, to a point. For small values, the recursive Fibonacci consumes time and memory in a fashion similar to its iterative brethren. Ramp up the number of Fibonacci numbers you want to calculate, and the recursive algorithm bogs down (as shown in a previous post).