**Dynamic programming**, or DP, is an optimization technique. It is used in several fields, though this article focuses on its applications in algorithms and computer programming. It’s a topic often asked about in algorithmic interviews.

Since DP isn’t very intuitive, most people (myself included!) often find it tricky to model a problem as a dynamic programming problem. In this post, we’ll discuss when to use DP, followed by its types, and then finally work through an example.

## When is DP used?

There are two necessary conditions a problem must satisfy for DP to work.

- Overlapping Sub-problems
- Optimal substructure

Let's go over these in a little more detail.

### Overlapping Sub-Problems

This property is exactly what it sounds like: repeating sub-problems. But for this to make sense, we need to know what a sub-problem is.

A **sub-problem** is simply a smaller version of the problem at hand. In most cases, this would mean smaller parameter values which you would pass to your recursive function.

If you’re looking for a particular page in a book, what would you do? You’d open the book to a particular page and compare the page number you’re on with the page number you’re looking for.

If the current page is smaller than the required page, you’d start looking in between the current page and the last page. On the other hand, if the current page number is greater, you’d start searching between the start of the book and the current page.

You’d continue this until you found the page.

If you had to model this as a recursive function, what would that look like? Maybe something like this.

**Note:** *The following snippets have been written in a form of pseudocode to improve readability.*

```
getpage(book, from_page, to_page, target_page)
    middle_page = (from_page + to_page) / 2
    if middle_page == target_page
        return book[middle_page]
    else if middle_page < target_page
        return getpage(book, middle_page + 1, to_page, target_page)
    else
        return getpage(book, from_page, middle_page - 1, target_page)
```

Pretty straightforward. There’s a **getpage** function which returns the page (*target_page*, here) we’re looking for. The function looks at the middle page between *from_page* and *to_page* and checks if we have a match. If not, the function looks at either the left half or the right half of the section we are looking at.

But what do those two recursive calls to **getpage** represent? You’ll notice that at each recursive call, we are reducing our search space by half. What we’re doing is solving the same problem, that is, looking for a specific page, in a smaller space. We’re solving sub-problems.

**Divide and Conquer**, or **DAC**, algorithms work through the principle of sub-problems. The “divide” part refers to splitting a problem into sub-problems. Sorting algorithms like mergesort and quicksort are great examples. Note that binary search isn’t exactly a DAC algorithm, for the simple reason that it doesn’t have a “combine” step, whereas an actual divide and conquer algorithm combines the results of its sub-problems to get the final solution.

Now that we have answered the question of what a sub-problem is, we move on to the other word: “**overlapping**”.

When these sub-problems have to be solved more than once, they are said to be overlapping. Look at the call graph for computing the value of the nth Fibonacci term.

The recurrence relation is:

```
f(n) = f(n-1) + f(n-2)
```

with the base cases:

```
f(0) = 0
f(1) = 1
```

The calls have been shaded to represent overlapping subproblems. Compare this with something like binary search, where the subproblems aren’t overlapping.

### The optimal substructure property

The optimal substructure property is slightly more intricate: it refers to the scenario where optimal solutions to sub-problems can directly be used when computing the overall optimal solution.

A quick example? Say you want to find the shortest path from **A** to **B**. Let **X** be an intermediate point between **A** and **B** with a **single edge** connecting it to **A**.

To solve this, we can find the shortest path from every intermediate node (**X**) to **B**, and then take the path from **A** to **X** plus the shortest path from **X** to **B** which is shortest over all **X**.

```
shortest(A, B) = min(AX + shortest(X, B)) for all intermediate nodes X.
```

What we’re doing here is taking an optimal intermediate solution (**shortest(X, B)**) and using that (as opposed to considering every solution for a sub-problem) to find the final optimal answer.

## The two kinds of DP

### The top-down (memoization) approach

In a top-down approach, we start from the highest level of our problem. We initially check if we have already solved the current sub-problem. If we have, we just return that value. If not, we solve that sub-problem, using recursive calls.

Since those calls require solving smaller sub-problems which we haven’t seen before, we continue this way, until we encounter a sub-problem we have either solved or know the answer to trivially.

### The bottom-up (tabulation) approach

In this approach, we start at the very bottom and then work our way to the top. Since we start from the “base case”, and use our recurrence relation, we don’t really need recursion, and so, this approach is iterative.

The main difference between the two approaches is that bottom-up calculates all solutions, while top-down computes only those that are required. For example, to find the shortest path between source and destination, using the top-down approach, we only compute the distances with intermediate points near the shortest path, choosing the minimum at each stage.

On the other hand, in a bottom-up approach, we end up calculating the shortest distance between each point on the grid and the destination, finally returning the shortest distance from the start to the end.

As a comparison, let's look at a possible top-down and bottom-up function that returns the nth Fibonacci term.

While both approaches have the same asymptotic time complexities, the recursive calls in a top-down implementation may lead to a stack overflow, which is a non-issue owing to the iterative nature of the bottom-up approach.

Remember that though we implement the latter iteratively, the logic still uses the recurrence relation from the basic recursive approach, as we shall see in this example.

## An example

Let's go over a problem which we’ll solve using both approaches to dynamic programming.

### The Problem

Find the maximum sum of elements in an array ensuring that no adjacent elements are included. Let’s assume that no elements are negative.

```
example 1: [1, 2, 3]    => 1 + 3 = 4
example 2: [1, 1, 1, 1] => 1 + 1 = 2
example 3: [2, 5, 2]    => 5     = 5
```

### The Analysis

First, let's try a **greedy approach.**

Since our goal is to maximize the sum of the elements we choose, we could hope to accomplish this by choosing the biggest elements, ignoring their neighbours, and then continuing this way. Here, we’re ensuring that at each step of the way, we have a maximum sum. But this would be correct only in a local context, while we are, of course, looking for a global solution.

This approach could work in certain cases.

```
[1, 9, 1, 10, 1, 9, 1]
```

Here, we first choose 10, since it’s the biggest element. We then ignore its neighbours, so that we don’t violate the condition that we aren’t allowed to choose adjacent elements.

Next, we choose both the 9’s, since they’re the next biggest elements, and then ignore their neighbours. Our algorithm ends here since there aren’t any elements left. The result we get, 10 + 9 + 9 = 28, is in fact the right answer.

But this won’t always work. Take the following example:

```
[1, 1, 9, 10, 9, 1, 1]
```

At every step, if you chose the maximum element, ignored its neighbours and continued that way, you’d end up choosing 10, then 1 and then 1 again after ignoring both the 9's, which would add up to 12, but the right answer would be 1 + 9 + 9 + 1, which is 20.

It’s clear this approach isn’t the right one. Let’s start from a basic recursive solution and work our way up to one that uses dynamic programming.

This is the difference between the greedy and dynamic programming approaches. While a greedy approach focuses on doing its best to reach the goal at every step, DP looks at the overall picture. With a greedy approach, there’s no guarantee you’ll even end up with an optimal solution, unlike DP. Greedy algorithms often get trapped in local maxima, leading to sub-optimal solutions.

The recursive solution

After thinking for a bit, you can probably see that we have a condition to keep in mind: **no adjacent elements**. You can probably figure out that:

- we can choose to either consider an element in our sum or ignore it
- if we consider it, we will have to ignore its adjacent element

For the sake of brevity, let **f(a..b)** represent a call to **f** on our array from index **a** to index **b** (both inclusive). That function **f** would represent our recursive function which would solve the problem.

So **f(0..4)** would mean running the function from index 0 to index 4.

The two arrows pointing from a cell represent our choices of subsequent function calls. Since this is a maximization problem, we’d have to choose the maximum out of these options.

Let’s come back to our array.

```
[5, 10, 100, 10, 5]
```

Keeping the conditions discussed above in mind let’s actually write down what we would be doing.

Our first call would be on the entire array, which is of length 5 as can be seen above.

```
f(0..4)
```

For the element at index 0 (which happens to be 5 here), we can either choose to:

- **include it in our sum:** our current sum would then be 5 plus the maximum sum of the rest of the array, excluding the next element (index 1). Thus, our sum becomes *5 + f(2..4)*, or, to generalize it, *arr[0] + f(2..4)*.
- **exclude it:** our current sum would then just be equal to the maximum sum of the remaining array. This can be written as *0 + f(1..4)*. Notice that our next call is from index 1 and not 2 as in the previous case. Since we aren’t considering the element at index 0, we are free to consider the element at index 1; we aren’t forced to ignore it.

The graph here visually explains this. As mentioned earlier, all arrows at a given level represent our choices, from which we choose the greatest one.

So our final answer would be:

```
f(0..4) = max(arr[0] + f(2..4), f(1..4))
```

Let’s expand this for the next iteration.

First, we’ll do it for the left tree, which is **f(2..4)**. This is just like what we did for the first call to **f**. Remember that the *arr[0] +* part is still there. It will be added to the value of *f(2..4)* on our way back up the call tree.

Our choices:

- **consider *arr[2]* in our sum:** our sum at this stage becomes *arr[2] + f(4..4)*. Remember that since we’re considering the element at index 2, we would have to ignore the next element, index 3.
- **ignore *arr[2]*:** our sum here is the same as the maximum result of the remaining array, without having to ignore the adjacent element. So, that’s *f(3..4)*.

Just like before, the value of **f(2..4)** would be the maximum of our two choices.

```
f(2..4) = max(arr[2] + f(4..4), f(3..4))
```

### The base case

What do you think **f(4..4)** would evaluate to? Following our notation, it is the result of our function call on the array from index 4 to … well, index 4. That means that we are calling the function on a single element. The maximum sum of a single element is itself.

Another thing to keep in mind: in **f(a..b)**, *a* should never be greater than *b*. Since this call represents starting from index *a* and going up to index *b*, we would have to return 0 if *a* ever gets bigger than *b*. There is no maximum sum if there are no elements.

We have our base case here. Our function **f**, when called on a single element, returns that element directly, and returns 0 if we are not in a valid range. There are no further recursive calls. That’s why it’s called the base case.

In our case, our call to **f(3..4)** leads to an invalid call to *f(5..4)*, which we handle by returning 0. We’ll generalize this later.

```
f(4..4) = arr[4]
f(5..4) = 0
```

### The recurrence relation

Let’s have another look at our results.

```
first call:
f(0..4) = max(arr[0] + f(2..4), f(1..4))
second call:
f(2..4) = max(arr[2] + f(4..4), f(3..4))
the base case:
f(4..4) = arr[4]
f(5..4) = 0
```

Notice a pattern in the first two results? If we generalize these, we get:

```
f(a..b) = max(arr[a] + f(a+2..b), f(a+1..b))
```

This still isn’t the most simplified version of our relation. Notice the occurrences of **b** here. In fact, go back and look at our specific calls in the previous block.

They don’t change. There’s no *b + 1* or *b + 2*. It’s always *b*. And what’s the value of *b* in our first call? The last index. Since *b* is constant throughout our algorithm, we can remove it.

Our recurrence relation becomes:

```
f(a) = max(arr[a] + f(a+2), f(a+1))
```

where **f(a)** is a call on the array from index *a* onwards.

Another thing to realize: similar to how we removed **b** since it was always equal to the last index in the array, the base case, which refers to a single element, would only happen if that element was the last in the array.

A generalized version of our base case is:

```
f(n-1) = arr[n-1] where n is the size of the array
f(a) = 0 if a >= n where n is the size of the array
```

Thus, we have our relation:

```
f(a) = max(arr[a] + f(a+2), f(a+1))
f(n-1) = arr[n-1] where n is the size of the array
f(a) = 0 if a >= n where n is the size of the array
```

Let’s implement the recursive approach based on this relation.

```
f(array, start)
    // base case: return 0 for invalid bounds
    if start >= size(array)
        return 0
    // base case: return the last element if start is
    // the index of the last element
    if start == size(array) - 1
        return array[start]
    // recurrence relation
    return max(array[start] + f(array, start + 2),
               f(array, start + 1))
```

This function would be called like so:

```
array := [1, 5, 2, 4, ...]
return f(array, 0)
```

What would be the complexity of this?

If we were to approximate the complexity based on the size of the array (**n**) we are operating on, we get something like this:

```
T(n) = T(n-2) + T(n-1) + O(1)
T(0) = T(1) = O(1)
```

Intuitively, every call to **f** on an array of size n (represented as **T(n)**) leads to two calls to **f** on arrays of size *n-1* and *n-2*. That is, at each stage, we’re doubling the number of calls to **f**.

The asymptotic time complexity is exponential. With the above reasoning, we get *O(2^n)*.

This is a loose estimate on the upper bound, since the *n-2* tree is bound to end before the *n-1* tree, and so we are doing slightly less than doubling the calls. The actual complexity is *O(phi^n)*, where *phi* is the golden ratio, that is, *O(1.618^n)*, which is slightly less than our original estimate, but let’s stick to *O(2^n)*.

Another thing to notice is that the recurrence relation above is similar to that of the nth Fibonacci term, which would hence give a similar complexity.

### A dynamic programming approach

Here’s where dynamic programming comes into the picture.

If you look closely, you’ll see the overlapping sub-problems we were talking about earlier.

Now comes the important part — converting this recursive implementation to a dynamic programming approach. What if we stored the values of the function calls that are being repeated?

Let’s maintain an array where the ith element is the value of **f(i)**, which, in turn, is the maximum sum of the array from index *i* to the end.

```
dp[i] = f(i..n) = f(i)
```

And since we already have a result for f(i),

```
dp[i] = max(arr[i] + f(i + 2), f(i + 1))
```

Now that we have this relation, we can go two different ways. Either we go the top-down route, where our function is still recursive, like our result above, or we remove all recursive calls and go the bottom-up route.

We’ll focus on the bottom-up route, but let's discuss the top-down approach.

### The top-down approach

Look at our previous result.

```
dp[i] = max(arr[i] + f(i + 2), f(i + 1))
```

That’s all we need to implement the top-down approach. For any call to **f**, we’ll first check in our array *dp* whether we have already made that call earlier, and if we have, we use the pre-calculated value directly.

On the other hand, if the call we are making has never been made before, we have to compute the entire thing. In that case, once we arrive at a value, we make sure to store it in our array **dp** so that we won’t have to repeat the whole process.

The call tree should look something like this:

Let’s implement this algorithm.

```
f_rec(array, start, dp)
    // base case: return 0 for invalid bounds
    if start >= size(array)
        return 0
    // return the pre-computed value if it exists
    if dp[start] != 0
        return dp[start]
    // populating dp[start]
    dp[start] := max(array[start] + f_rec(array, start + 2, dp),
                     f_rec(array, start + 1, dp))
    return dp[start]

f(array)
    n := size(array)
    dp := array of size n with each element initialized to 0
    // populating the last element of dp based on our base case
    dp[n-1] := array[n-1]
    return f_rec(array, 0, dp)
```

The additional space required to store the results of our sub-problems grows linearly with the size of the input array. Hence, apart from the **O(n)** space required by the recursive stack, we have **O(n)** space for the *dp* array, n being the size of the input array.

The time complexity, though harder to compute, is linear in the input size. This is because we store the answers to the sub-problems we have already solved, and so there are only **O(n)** unique sub-problems to solve. This result can also be verified with the complexity we get using the bottom-up approach.

### The bottom-up approach

Recall that in this approach, we seek to eliminate all recursive calls by following an iterative approach, where we start from the base case, or the “bottom” and make our way up.

Let’s replace the other calls to **f** with accesses to elements of *dp*.

```
dp[i] = max(arr[i] + dp[i + 2], dp[i + 1])
```

What about the base case, **f(n-1) = arr[n-1]**? This gives us the entry *dp[n-1]*.

```
dp[n-1] = arr[n-1]
```

And just like that, we have our solution for a bottom-up **dp** approach!

Let’s implement this, just like we did for the recursive approach.

```
f(array)
    n := array size
    // create an array of size n + 1
    dp := an array of size n + 1
    // our base cases:
    // the max sum of a single element is that element
    dp[n-1] := array[n-1]
    // invalid bounds return 0
    dp[n] := 0
    // loop over array dp from the end, populating it
    for(i = n-2 -> 0)
        dp[i] := max(array[i] + dp[i + 2],
                     dp[i + 1])
    // return the first element of dp, which has our answer
    return dp[0]
```

This function would be called like so:

```
array := [1, 5, 2, 4, ...]
output(f(array))
```

The complexity here would be linear in both space and time.

Why?

We are running a single for-loop **n-1** times, and in each iteration, we are performing constant time operations — a linear time complexity.

Since the size of the array **dp** depends on the size of the input array — which, of course, is variable — our space complexity is also linear.

### Improving the algorithm

But can we do better? Let’s see.

In terms of asymptotic time complexity, we can’t do better. To find the answer, we have to check every element of the array. So we can’t do better than linear time.

But what about space complexity? Do we need to maintain an array of size **n** to solve the problem?

Look closely at the line inside the for-loop:

```
dp[i] = max(arr[i] + dp[i + 2], dp[i + 1])
```

At any point of time, all we need to populate **dp[i]** is the next two elements in *dp*, at indices *i + 1* and *i + 2*. There’s no reason to maintain all of our results. We just need to keep track of the last two iterations.

Let’s use three variables here. Let’s name them **i_0**, **i_1** and **i_2** to make it easier to relate between them.

```
dp[i] --> i_0
dp[i+1] --> i_1
dp[i+2] --> i_2
```

Notice that in the next iteration of our loop, our current loop counter *i* becomes *i + 1*, since we’re decrementing *i* in each iteration. The current *dp[i + 1]* would be the next *dp[i + 2]*, the current *dp[i]* would be the next *dp[i + 1]*, and the current *dp[i + 2]* (which we wouldn’t need, since *dp[i + 3]* isn’t required) can be reused as the next *dp[i]*.

Replacing this with our three new variables, the code inside our loop becomes:

```
i_0 := max(arr[i] + i_2, i_1)
i_2 := i_1
i_1 := i_0
```

We initialize these variables just like our array implementation.

```
dp[n-1] = arr[n-1] --> i_1 = arr[n-1]
dp[n] = 0          --> i_2 = 0
```

One last thing to keep in mind: what if the input array has only a single element? Our loop, which runs from *n-2* to *0*, wouldn’t run even once.

Hence, we initialize **i_0** with the value of *i_1*. So if the loop never runs (the input array has only one element), returning *i_0* would return the value of *i_1*, which is the array’s only element.

Finally, we return **i_0** instead of *dp[0]*.

```
return dp[0] --> return i_0
```

Thus, our final algorithm would look something like this.

```
f(array)
    n := array size
    define three ints i_0, i_1, i_2
    // initialize the 3 variables just like
    // we did for our dp array
    i_1 := array[n-1]
    i_2 := 0
    // this is just to handle the case where the loop
    // doesn't run even once -- when the array has a
    // single element
    i_0 := i_1
    // run the loop, updating the variables similar
    // to the array implementation
    for(i = n-2 -> 0)
        i_0 := max(array[i] + i_2, i_1)
        i_2 := i_1
        i_1 := i_0
    return i_0
```

Just like the previous dynamic programming approach, this function would be called by simply passing in an array or a reference to one.

```
array := [1, 5, 2, 4, ...]
return f(array)
```

For an array of any length, all we need is three variables. Thus, the space complexity of our algorithm is now **O(1)** — constant.

Summarizing our results,

Comparing the recursive approach with our top-down approach, it's clear that **we are trading space complexity for better time complexity**. Of course, since both are recursive, they have the additional space required for the recursive call stack.

In a similar vein, the lowest two rows are the results of our bottom-up approaches. They are iterative, so they don’t require storing function records recursively on the stack. And since they’re essentially the same algorithm as the top-down approach, they have the same linear time complexity.

The best case is the **bottom-up approach requiring O(1) space**, meaning that the space our DP algorithm uses doesn’t change with the input size n.

### The code

Let's implement our final algorithm, **constant-space bottom-up dynamic programming**, in C++. The variable and function names are the same as before.

```
int f(vector<int> &array)
{
    int n = array.size();
    int i_0, i_1, i_2;
    // these variables handle our base cases
    i_1 = array[n-1];
    i_2 = 0;
    // this line handles the case where the input array
    // has a single element
    i_0 = i_1;
    // iteratively update all three variables
    // based on our recurrence relation
    for(int i = n-2; i >= 0; i--)
    {
        i_0 = max(array[i] + i_2, i_1);
        i_2 = i_1;
        i_1 = i_0;
    }
    // the final answer is in i_0
    return i_0;
}
```

```
#include <iostream>
#include <vector>
#include <algorithm>
using namespace std;
int main()
{
    vector<int> array = {5, 10, 100, 10, 5};
    cout << f(array) << endl;
    return 0;
}
```

**Note:** the final space-complexity optimization step is slightly harder to spot, but it drastically improves your space usage, as we just saw. See if you can find a similar relation for the bottom-up approach for the nth Fibonacci term.

## Conclusion

Dynamic Programming is not often very intuitive or straightforward. Then again, most complex things aren’t. But things do get easier with practice. There are tonnes of dynamic programming practise problems online, which should help you get better at knowing when to apply dynamic programming, and how to apply it better. Hopefully, this post served as a good starting point.

I'm a CS Undergrad at BITS Pilani. Right now, I'm in my 4th year. I write infrequently on several topics, from technical concepts to the latest customizations on my laptop on my blog. My most interesting posts are also published on Medium.