Dynamic programming is mainly an optimization technique for recursive solutions. Whenever you have a recursive function where you are making repeated calls to the same inputs, you have an opportunity to refactor your code with dynamic programming.
This method of programming involves breaking complex problems into subproblems and storing the results of repeated calls so that you can look them up instead of performing the calculation again. The concept is particularly useful in situations where the optimal solution to a complex problem depends on the optimal solutions to smaller subproblems.
What is recursion?
Recursion is a fundamental concept within computer science in which the solution to a computational problem depends on solutions to smaller instances of the same problem.
A function is recursive if it calls itself during execution. This process may repeat many times before the solution is computed. In fact, a recursive function can repeat forever if it is not provided a base case that lets it stop and return a result.
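To make the base case concrete, here is a minimal recursive function (a factorial example chosen for illustration, not from the original article). The condition at the top is the base case; without it, the calls would never stop.

```python
def factorial(n):
    # Base case: stops the recursion. Without it, factorial(5)
    # would keep calling itself with ever-smaller arguments.
    if n <= 1:
        return 1
    # Recursive case: the solution depends on a smaller instance
    # of the same problem.
    return n * factorial(n - 1)

print(factorial(5))  # 120
```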
When to use dynamic programming
Problems that are solvable via dynamic programming generally exhibit the following two properties:
- Optimal Substructure
- Overlapping Subproblems
Let’s take a closer look at each of these properties.
Optimal Substructure
A problem is said to have an optimal substructure if its optimal solution can be obtained from the optimal solutions to its subproblems.
The textbook programming example of this is to create a function for predicting the nth position in the Fibonacci sequence:
Fib(n) = Fib(n-1) + Fib(n-2)
Notice how this simple formula works for every case except when the position n is equal to 0 or 1. Because of this, we know that the actual function must include a conditional for these special cases. For this article we will be using Python for all code examples.
# Non-DP plain recursive solution
def Fib(n):
    if n < 2:
        return n
    return Fib(n - 1) + Fib(n - 2)
Notice how Fib(n) calls itself within its own code? That’s what makes it a recursive function.
Overlapping subproblems
You can use dynamic programming in any situation where you have overlapping subproblems. Draw out the recursion tree for the plain recursive Fib function above and you’ll see the same calls, such as Fib(2), appearing on multiple branches.
In such situations, it makes sense to use the optimal substructure property, i.e., to identify the subproblems needed to solve fib(n) for all values of n. When solutions to the same subproblems are needed again and again, it makes sense to store them somewhere they can easily be retrieved and reused instead of recalculated. This cuts down computation times. In the next section, we’ll look at the two main methods of dynamic programming for storing the solutions to subproblems.
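You can see the overlap directly by instrumenting the naive function to count how often each input is computed (this counting wrapper is an illustration added here, not part of the original article):

```python
from collections import Counter

# Count how many times the naive recursive Fib is called for each input
calls = Counter()

def fib(n):
    calls[n] += 1
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

fib(10)
# The same small subproblems are recomputed many times over:
print(calls[2])  # fib(2) alone is evaluated dozens of times
```

This redundant work is exactly what memoization and tabulation eliminate: with either technique, each fib(k) is computed at most once.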
Key methods of dynamic programming
In the previous section, we mentioned how the key to optimizing recursive problems lies in storing the values of repetitive function calls. We use the optimal substructure and overlapping subproblems to identify these repetitive function calls. Once found, we can store their values through one of two techniques:
Memoization
In this top-down approach, we store the solution to a subproblem each time we solve one in case we encounter it again later. As we attempt to solve the bigger problem, we can call on a memo of previously solved solutions each time we encounter an overlapping subproblem during recursion.
Below is an example of optimizing the Fibonacci function with memoization:
# Still recursive, but optimized with memoization
def Fib(n):
    memo = [-1] * (n + 1)  # -1 marks a subproblem not yet solved
    return FibRecur(memo, n)

def FibRecur(memo, n):
    if n < 2:
        return n
    if memo[n] >= 0:  # already solved: reuse the stored result
        return memo[n]
    memo[n] = FibRecur(memo, n - 1) + FibRecur(memo, n - 2)
    return memo[n]
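In Python specifically, you don’t always have to hand-manage the memo: the standard library’s functools.lru_cache decorator provides the same top-down caching automatically. This is an alternative sketch, not part of the original example:

```python
from functools import lru_cache

# lru_cache stores results keyed by the argument, so each
# distinct Fib(n) is computed at most once
@lru_cache(maxsize=None)
def Fib(n):
    if n < 2:
        return n
    return Fib(n - 1) + Fib(n - 2)

print(Fib(50))  # 12586269025 -- returns instantly; the naive
                # recursive version would take minutes
```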
Tabulation
In this bottom-up approach, we fill a table with subproblem solutions, starting from the smallest and working upward. This approach avoids recursion altogether: each entry is computed from entries already in the table, so by the time a subproblem’s solution is needed, it has already been stored. For the Fibonacci sequence, a one-dimensional table is enough:
# Bottom-up tabulation
def Fib(n):
    myTable = [0, 1]  # base cases fill the first two entries
    for i in range(2, n + 1):
        myTable.append(myTable[i - 1] + myTable[i - 2])
    return myTable[n]
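A common refinement of the tabulated version above, sketched here as an aside: each iteration only reads the last two table entries, so two variables can replace the whole table, reducing memory use from O(n) to O(1).

```python
# Space-optimized tabulation: only the two most recent values
# are kept instead of the full table
def fib_constant_space(n):
    if n < 2:
        return n
    prev, curr = 0, 1
    for _ in range(2, n + 1):
        # Slide the two-value window forward one position
        prev, curr = curr, prev + curr
    return curr

print(fib_constant_space(10))  # 55
```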
When not to use dynamic programming
Since the heart of dynamic programming involves caching solutions to optimize recursive calls, it doesn’t make sense to use this method in situations where those stored solutions don’t save you time. If your program doesn’t use recursion, uses recursion only for control flow, or lacks overlapping subproblems, it doesn’t need dynamic programming. In such situations, storing solutions for later reuse would mean consuming memory without the benefit of faster computation times.
Common dynamic programming problems to study for your next coding interview
Looking for a coding job on Upwork? Dynamic programming is an incredibly popular subject for coding interviews.
These are the 10 most common dynamic programming problems:
- 0-1 Knapsack problem
- Coin change problem
- Shortest common supersequence
- Longest common subsequence
- Longest increasing subsequence
- Matrix chain multiplication
- Partition problem
- Rod cutting
- Edit distance problem (Levenshtein)
- Word break problem
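To give a taste of how these interview problems use the same ideas, here is a sketch of the coin change problem solved with bottom-up tabulation (the function name and structure are illustrative, not from a specific source): table[a] holds the minimum number of coins needed to make amount a.

```python
# Coin change via bottom-up tabulation
def min_coins(coins, amount):
    INF = float("inf")
    # table[a] = fewest coins needed to make amount a;
    # amount 0 needs 0 coins, everything else starts unreachable
    table = [0] + [INF] * amount
    for a in range(1, amount + 1):
        for c in coins:
            # Using coin c reduces the problem to amount a - c,
            # whose optimal answer is already in the table
            if c <= a and table[a - c] + 1 < table[a]:
                table[a] = table[a - c] + 1
    return table[amount] if table[amount] != INF else -1

print(min_coins([1, 5, 10, 25], 63))  # 6  (25 + 25 + 10 + 1 + 1 + 1)
```

Notice the same two properties at work: the optimal answer for amount a is built from optimal answers for smaller amounts (optimal substructure), and those smaller amounts are shared across many coin choices (overlapping subproblems).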
Many of the dynamic programming problems that appear in coding interviews have uses in the real world. For example, the Fibonacci sequence appears in nature, specifically in the growth patterns of plants, while the knapsack problem is used in finance to optimize the value of a set of items (or on the game show Supermarket Sweep). Looking to apply those dynamic programming algorithms to a real project?