Interview Kickstart has enabled over 3500 engineers to uplevel.

Recursive thinking is a key factor in solving complex programming problems. It helps in breaking down larger problems into smaller subproblems, making them easier to visualize. If you are a software engineer preparing for a technical interview at FAANG or other top tech companies, you must brush up on recursion. Most interviewers at tech companies will expect you to come up with a recursive approach to test your ability to solve complex problems.

**In this article, we’ll cover all the basics of recursion:**

- What is recursion?
- Properties of recursion
- When to use recursion
- Why does the stack overflow occur in recursion?
- Difference between direct and indirect recursion
- Difference between tail and non-tail recursion
- Memory allocation to different function calls in recursion
- Analysis of recursion
- Recursive optimization
- Common mistakes to avoid in recursion
- Advantages of recursion over iteration
- Disadvantages of recursion over iteration
- FAQs on recursion

Recursion is a strategic technique in which a problem is solved by dividing it into multiple subproblems. Essentially, recursion is a way to represent something "infinite" using a "finite" description. Mathematics and computer science were perhaps the first subjects in which it was studied in great depth. In computer science, it is one of the fundamental problem-solving techniques.

Software engineers use it to solve problems concisely because recursive code is often easier to implement than its iterative counterpart. Recursive code is also comparatively easy to read and understand, and hence does not require a lot of comments to explain the complete logic of the solution. This saves time and effort while reading, writing, and testing code.

**Recursion** is a process in which a function calls itself a fixed or unbounded number of times. It can be easily explained with the help of two mirrors. If you place two mirrors facing each other, you will see one mirror reflecting the other one an infinite number of times.

A bigger problem can easily be solved by first solving related subproblems and then using the solution of these subproblems to find the solution to the bigger problem.

Take this mathematical function for example:

*f(n) = f(n - 1) + f(n - 2)*

Here, the function *f* is defined in terms of its own previous states. If you want to find the value for state *n*, you first need to calculate the values for states *(n - 1)* and *(n - 2)*. Similarly, for state *(n - 1)*, you have to calculate states *(n - 2)* and *(n - 3)*. If we don't know the value for some specific state, this process will continue indefinitely.

Let’s calculate the value for *f(3)*:

*f(3) = f(2) + f(1)*

Since we don’t have the value of *f(2)*, we need to calculate it:

*f(2) = f(1) + f(0)*

Here, we don’t know the value of *f(1)*, so:

*f(1) = f(0) + f(-1)*

If we keep going, we’ll keep calculating indefinitely. To avoid this, we need to define the function for some specific state. These states are called “base states,” and associated conditions are called “base conditions.”

The condition that terminates the further call of the function by defining the state of the function in advance is called the base condition. Base conditions, if defined correctly, prevent the function from going into indefinite or infinite recursion.

Let’s define a base condition for the above function:

*If n = 0, f(n) = 0*

*If n = 1, f(n) = 1*

*The Tower of Hanoi problem and tree traversal are other examples of problems with well-known recursive solutions.*

The basic idea of recursion is to find the solution to a problem without going into an infinite loop. For this, there are two simple rules:

- There should be a base condition at which the function should stop calling itself and return the already defined value.
- Every recursive call should lead toward the solution by working on a smaller subproblem.

Following are the two most common uses of recursion:

- For solving problems that can be broken down into smaller subproblems.
- For problems that can be divided into multiple states. In this case, we solve each state individually and then use all of their solutions to form the solution of the actual problem we have. This is also known as the divide-and-conquer method.

For example, in the above function *f(n)*, there are two states:

- The immediate previous state: *(n - 1)*
- The second-last state: *(n - 2)*

Solving these two states and then adding the two solutions will give the solution for *f(n).*

In recursion, a function calls itself repeatedly until the base condition is reached. If there is no such condition, it can go into an infinite loop. Every function call requires memory. This memory is used for storing the state of the function, which includes information such as:

- Which function called it?
- Where will the code resume executing after this function finishes?
- What values were passed to this function?
- What value will this function return?
- Which local variables were created before this function call?

So if the recursive function does not have a base case, it will call itself indefinitely. Since every function call requires some memory and memory space is limited, at some point no more memory can be allocated, and this leads to the **stack overflow error**.

**For example:**

```cpp
int calculate(int n)
{
    if (n == 1) { // Base condition
        return 1;
    }
    return calculate(n - 1) + calculate(n - 2);
}
```

Let's call this function for *(n == 2)*:

*calculate(2) = calculate(1) + calculate(0)*

*calculate(1) = 1 // base case*

*calculate(0) = calculate(-1) + calculate(-2)*

Since we didn’t consider the case for *n == 0*, the recursion goes on indefinitely and keeps consuming memory. After some time, the program halts with a segmentation fault.

In **direct recursion**, a function *f* will call the same function *f* recursively until the base condition is reached. Here’s an example of direct recursion:

```cpp
#include <bits/stdc++.h>
using namespace std;

int factorial(int n)
{
    if (n <= 1) return 1;
    return n * factorial(n - 1); // Direct recursion
}

int main()
{
    int n = 10;
    cout << factorial(n);
    return 0;
}
```

In **indirect recursion**, a function *f* calls another function *g*, and *g* in turn calls *f*. Example:

```cpp
#include <bits/stdc++.h>
using namespace std;

int calc_next(int);

int calc(int n)
{
    if (n <= 1) return n;
    return n * calc_next(n - 1); // Indirect recursion
}

int calc_next(int n)
{
    if (n <= 1) return n;
    return n * calc(n - 1); // Indirect recursion
}

int main()
{
    int n = 10;
    cout << calc(n);
    return 0;
}
```

In **tail recursion**, there is no other operation to perform after executing the recursive function itself; the function can directly return the result of the recursive call.

In **non-tail recursion**, there are some operations that need to be performed using the returned value of the recursive call.

Tail-recursive functions are considered better than non-tail-recursive functions because the compiler can easily optimize a tail-recursive function: since there is nothing left to do in the current function after the recursive call, its stack frame need not be saved. (A stack frame holds a set of values related to a function call, for example, the return address, parameters, and local variables.)

**Here’s an example of tail recursion:**

```cpp
#include <bits/stdc++.h>
using namespace std;

// Find gcd of the numbers
int gcd(int a, int b)
{
    if (b == 0) {
        return a;
    }
    return gcd(b, a % b); // Tail recursion
}

int main()
{
    int a = 10, b = 4;
    cout << gcd(a, b);
    return 0;
}
```

**Example of non-tail recursion:**

```cpp
#include <bits/stdc++.h>
using namespace std;

// Find factorial of the number
int factorial(int n)
{
    if (n <= 1) {
        return 1;
    }
    return n * factorial(n - 1); // Non-tail recursion
}

int main()
{
    int a = 10;
    cout << factorial(a);
    return 0;
}
```

When any function is called, some memory is allocated to it on the stack. It can also be said that an activation record is generated for that function, which is a data structure that stores values related to the function, such as:

- Local variables of the callee function
- The return address in the caller function, so that after the callee finishes executing, it is known where to resume and which function receives the returned value
- Parameters of the callee function, if any

When a function is called, an activation record gets stored on the memory stack. When this function gets executed completely, the record gets removed from the memory stack, hence freeing up the memory.

In recursion, activation records keep adding up on the memory stack until the base condition is reached. Once the base condition is executed, the function starts to return, freeing up the memory. If the base condition is not defined correctly, the function keeps calling itself, and activation records keep piling up on the memory stack, eventually causing a segmentation fault.

There are two methods for analyzing the time and space complexity of a recurrence relation:

- Recursion tree method
- Master theorem

In the recursion tree method, each node of the recursion tree represents the cost of a certain recursive subproblem. We sum the costs of all the nodes to find the total complexity of the algorithm.

These are the steps for solving a recurrence relation using the recursion tree method:

- Draw a recursion tree based on the given recurrence relation
- Determine the number of levels and the cost of each level
- Add up the costs of all the levels and simplify the resulting expression
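As a worked illustration of these steps (using the merge-sort-style recurrence *T(n) = 2T(n/2) + n* as an assumed example, not one from this article):

```latex
T(n) = 2T(n/2) + n
% Level i of the tree has 2^i subproblems, each of cost n/2^i,
% so every level costs 2^i \cdot (n/2^i) = n.
% The tree has \log_2 n + 1 levels, so summing over the levels:
T(n) = \sum_{i=0}^{\log_2 n} n = n(\log_2 n + 1) = O(n \log n)
```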

According to the master theorem, if a recurrence relation has the form:

*T(n) = aT(n/b) + f(n)*, where *a* and *b* are constants with values *a >= 1* and *b > 1*,

then *f(n)* is the cost of the work done in each call outside the recursive calls themselves.

If *f(n) = O(n^k)*, where *k >= 0*, the following cases are possible:

- If *a > b^k*, then *T(n) = O(n^(log_b a))*
- If *a = b^k*, then *T(n) = O(n^k * log n)*
- If *a < b^k*, then *T(n) = O(n^k)*

For example, merge sort's recurrence *T(n) = 2T(n/2) + O(n)* has *a = 2*, *b = 2*, *k = 1*; since *a = b^k*, its complexity is *O(n log n)*.

**Problem: **Write an algorithm to find the *nth *Fibonacci number, where *n <= 30.*

**Solution:**

```cpp
#include <bits/stdc++.h>
using namespace std;

// Find the nth Fibonacci number
int fibonacci(int n)
{
    if (n <= 1) {
        return n;
    }
    return fibonacci(n - 1) + fibonacci(n - 2);
}

int main()
{
    int n = 30;
    cout << fibonacci(n);
    return 0;
}
```

Let’s find the time complexity of this algorithm using the recursion tree method.

For *n = 4*, the recursion tree will look something like this:

Here, from every node, there is a path to two different states. So:

- For level 1, number of nodes = 1
- For level 2, number of nodes = 2
- For level 3, number of nodes = 4
- For level n, number of nodes = 2 ^ (n - 1)

**Time complexity:** T(n) = O(2^n)

**Space complexity:** The space complexity of a recursive algorithm is proportional to the maximum depth of the recursion tree generated. So, if each function call requires *O(n)* space and the maximum depth of the recursion tree is *m*, then the total space complexity will be *O(n * m)*.

In our problem, the maximum depth of the recursion tree is *n*: the longest chain of calls is *f(n), f(n - 1), ..., f(1)*, which reduces *n* by one at each step. Although the tree contains roughly *2^n* nodes in total, only one root-to-leaf path is on the call stack at any moment.

We are using constant space in each function call, so the space complexity of our algorithm is *O(n * a)*, where *a* is a constant.

Hence, the space complexity is *O(n)*.

Let's define our Fibonacci function again:

*f(n) = f(n - 1) + f(n - 2)*

The plain recursive solution to this problem calculates the same subproblems multiple times.

For example, when calculating the value for *n = 4*, we calculate the value for *(n = 2)* twice and *(n = 1)* three times. This repeated calculation results in slower function execution; the time taken to return the solution is not optimal.

The solution to this problem is **memoization**. Memoization works by remembering the function state that is already calculated and reusing the result when the same value is required again. So, how do we remember the state? By creating a map. Let’s take an example:

In the following code, we calculate the value and store it in the array *rec_map*. The next time, for the same value *n*, we check whether the value already exists in *rec_map*. If yes, we return the stored value without making further recursive calls.

```cpp
#include <bits/stdc++.h>
using namespace std;

const int N = 100;
int rec_map[N];

int fibonacci(int n)
{
    if (n <= 1) return n;
    if (rec_map[n] > -1) { // Memoization: reuse the stored result
        return rec_map[n];
    }
    int calculated_value = fibonacci(n - 1) + fibonacci(n - 2);
    rec_map[n] = calculated_value;
    return calculated_value;
}

int main()
{
    // Initialize the rec_map
    for (int i = 0; i < N; i++) {
        rec_map[i] = -1;
    }
    int n = 10;
    cout << fibonacci(n);
    return 0;
}
```

While recursion is a useful method, it can cause issues if not used correctly. Here are some mistakes you should avoid when using recursion.

- Always define the base case of the recursive function; otherwise, it will go into an infinite recursive call, leading to segmentation fault error.
- A recursive function should be progressive, meaning every recursive call should lead to a smaller subproblem.
- If there are overlapping subproblems, optimize the function with the help of memoization.
- The best order to follow in recursion is **CMR**:
  1. First, **check** the base condition (C)
  2. **Mark** or store the current result (M)
  3. **Recurse** further (R)

The first thing you get with recursion is a simple and easy-to-understand code. Sometimes, iterative solutions are difficult to implement and not straightforward.

A lot of problems can easily be solved with recursion. For example, graph traversal, Tower of Hanoi, and complex mathematical functions.

Recursive functions require a lot of memory space, as each function call requires memory to be executed. Also, the recursive function is a bit slower than an iterative function because of extra call and return overhead.

Recursive functions can keep calculating the same subproblem again and again. This problem can easily be solved with the help of memoization.

For a detailed comparison between recursion and iteration, check out this article: Difference Between Recursion and Iteration.

**Question 1: How can I find the base condition for a recursive function?**

Answer: Most of the time, the base condition is defined in the given problem. If not, you can easily figure out the base condition by following this approach:

- Understand the problem.
- Try to create a recursive formula that can solve the problem.
- Create the solution for some initial state with the help of the problem statement (like for *n = 0* or *n = 1*, depending on the problem).
- With the help of the recursive function, check whether your base condition helps find the solution of other states (for *n = 2* or more).

**Question 2: How can I define a recursive function or recursive formula to solve any problem?**

Answer: Recursive formulas can be hard to come up with. To find a recursive formula:

- First, be confident that you have properly understood the problem.
- Define the “state” of the function, meaning how you plan to proceed with that function or how your recursive call will lead to other related subproblems.
- Define a relationship between the states.
- Define the base condition, if not already provided.

For example, if the problem is to find the factorial of *n*:

*factorial(n) = 1 * 2 * 3 * 4 * 5 …. * n*

From that series, we can see that for *n = 1*, the value of *f(n)* will be 1 (base case).

Now, if we want to calculate the value for *f(n)*, how can we relate the value of *f(n)* to its previous states (*n - 1*, *n - 2*, or any other state)?

With the help of the given series, we already know the value of *f(4). *Therefore, the value of *f(5) *will be *f(4) * 5*. So, the value of *f(n)* will be *f(n - 1) * n. *We’ve got the recursive relation between the states of the function. Now, we can define the function by:

*f(n) = f(n - 1) * n*, with base case *f(1) = 1*.

Nailing tech interviews at FAANG and Tier-1 tech companies can be challenging even for seasoned software engineers. It requires a deep understanding of data structures and algorithms as well as systems design.

If you’re looking for guidance and help to nail these questions and more, sign up for our free webinar. As pioneers in the field of technical interview prep, we have trained thousands of software engineers to crack the toughest coding interviews and land jobs at their dream companies, such as Google, Facebook, Apple, Netflix, Amazon, and more!

**----------**

*Article contributed by Problem Setters Official*