A Short Story About Dynamic Programming

Having just finished my midterm in Analysis of Algorithms (yes, the class is as dry as it sounds), my brain is still sharp on a few topics, one of them being dynamic programming, which I mentioned in my last post. In that post, wherein I tried to find motivation for forcing myself to relearn calculus, I used the classic example of trying to calculate the nth term of the Fibonacci sequence.

I thought it would be helpful to see this example running with some real code. Below, we have a JavaScript function – fibRecursive – that takes an integer as a parameter. This integer represents the term that we want from the Fibonacci sequence. For example, a call to the function like so: fibRecursive(6) would return 8.
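A minimal sketch of such a function might look like this (assuming the sequence is 1-indexed, so the 1st and 2nd terms are both 1, which matches fibRecursive(6) returning 8):

```javascript
// Naive recursive Fibonacci.
// fibRecursive(6) === 8  (1, 1, 2, 3, 5, 8)
function fibRecursive(n) {
  if (n <= 2) return 1; // base cases: the 1st and 2nd terms are both 1
  return fibRecursive(n - 1) + fibRecursive(n - 2);
}
```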

In order to compute that, this function recursively calls itself twice for every term below the one you specified. This sort of recurrence relation grows exponentially – O(2^n) in the worst case. Roughly speaking, each additional term can double the amount of work: computing the 8th term triggers on the order of 2^8 (that’s 256) recursive calls.

While this is a very elegant way of explaining, in code, how the Fibonacci sequence works, it is not the most performant way of computing these sorts of values. There is, however, a very well-defined algorithmic paradigm that we can apply, and, you guessed it, it’s dynamic programming.

The general notion of dynamic programming is that the problem exhibits some optimal substructure (i.e., within the solution to the problem are solutions to smaller problems – as a byproduct of computing the 5th term in the Fibonacci sequence, we will compute the 1st, 2nd, 3rd, and 4th terms) and that these subproblems overlap (i.e., the same subproblems show up again and again on the way to the final solution).

The function below shows how we can use a technique called memoization to save the solutions to smaller subproblems in a lookup table (typically an array of some shape and size, but any data structure that helps you solve the problem is acceptable) and then refer back to those saved solutions instead of recomputing them. The difference here is that we compute the solution to each subproblem exactly once, as opposed to recomputing it exponentially many times. This means that the function below – fibDynamicProgramming – runs in O(n) time – a considerable improvement in performance. To get to the 8th term in the sequence, we’re performing only 8 constant-time computations.
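Here is a sketch of what that memoized version might look like, using a plain array as the lookup table (passing the memo as a default parameter is just one reasonable way to structure this):

```javascript
// Memoized Fibonacci: each term is computed once, then looked up.
function fibDynamicProgramming(n, memo = []) {
  if (memo[n] !== undefined) return memo[n]; // subproblem already solved
  if (n <= 2) return 1;                      // base cases: 1st and 2nd terms are 1
  memo[n] =
    fibDynamicProgramming(n - 1, memo) + fibDynamicProgramming(n - 2, memo);
  return memo[n];
}

fibDynamicProgramming(6); // 8, but with only O(n) recursive calls
```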

Magic! To see more common dynamic programming problems and solutions, I would suggest taking a look at this YouTube playlist by Tushar Roy. His explanations are more thought-out and concise than anything I plan to write 😉