Dynamic Programming Common Patterns and Code Template
This article will cover the following LeetCode problems:
LeetCode | Difficulty |
---|---|
322. Coin Change | 🟠 |
509. Fibonacci Number | 🟢 |
Prerequisites
Before reading this article, you should first learn:
This article is an advanced version of my previous work Detailed Explanation of Dynamic Programming, which received over 200 likes a few years ago. I have added more valuable content, aiming for this article to serve as a "guideline" for solving dynamic programming problems.
Dynamic Programming (DP) problems can be quite challenging for many readers. However, these problems are also among the most technical and interesting. This book dedicates an entire chapter to this algorithm, highlighting the importance of dynamic programming.
This article addresses several questions:
What is dynamic programming? What are the techniques for solving dynamic programming problems? How can one learn dynamic programming?
After practicing numerous problems, you'll realize that there are only a few algorithmic techniques. Our subsequent chapters in the dynamic programming series will use the problem-solving framework outlined in this article. Once you understand this framework, tackling these problems becomes much easier. Therefore, this article is placed in the first chapter, hoping to be a guideline for solving dynamic programming problems. Now, let's delve into the details.
Firstly, the general form of dynamic programming problems is to find the optimal value. Dynamic programming is essentially an optimization method from operations research, extensively applied in computer problems such as finding the longest increasing subsequence or the minimum edit distance.
Since the objective is to find the optimal value, what is the core issue? The core issue in solving dynamic programming is enumeration. To find the optimal value, you must enumerate all possible solutions and then identify the best one.
Is dynamic programming that simple, just enumeration? But the dynamic programming problems I've seen are quite difficult!
Firstly, while the core idea of dynamic programming is to exhaustively enumerate all possibilities in search of the optimal value, problems vary greatly, and enumerating every feasible solution is not easy: it requires a solid grasp of recursive thinking, and only by writing the correct "state transition equation" can you enumerate all possibilities correctly. Moreover, you need to determine whether the problem has an "optimal substructure", meaning whether the optimal values of the subproblems can be combined into the optimal value of the original problem. Additionally, dynamic programming problems have "overlapping subproblems", so brute-force enumeration is inefficient; you need a "memoization" table or a "DP table" to prune the enumeration and avoid redundant calculation.
The overlapping subproblems, optimal substructure, and state transition equation mentioned above are the three key elements of dynamic programming. I will explain what these mean with examples shortly. However, in actual algorithm problems, writing the state transition equation is the most challenging part, which is why many people find dynamic programming difficult. I will provide a thought framework I have summarized to help you think through the state transition equation:
Clarify the "state" -> Clarify the "choices" -> Define the meaning of the dp array/function.
Following this approach, the final solution code will fit into the following framework:
# top-down recursive dynamic programming
def dp(state1, state2, ...):
    for choice in all_possible_choices:
        # the state has changed due to the choice made
        result = find_optimal(result, dp(state1, state2, ...))
    return result

# bottom-up iterative dynamic programming
# initialize the base case
dp[0][0][...] = base case
# perform state transition
for state1 in all_values_of_state1:
    for state2 in all_values_of_state2:
        for ...
            dp[state1][state2][...] = find_optimal(choice1, choice2...)
Below, we will use the Fibonacci sequence problem and the coin change problem to explain the basic principles of dynamic programming. The former is mainly to help you understand what overlapping subproblems are (the Fibonacci sequence does not involve finding the optimal value, so strictly speaking, it is not a dynamic programming problem), while the latter focuses on how to formulate state transition equations.
1. Fibonacci Sequence
LeetCode Problem 509 "Fibonacci Number" is exactly this problem. Please do not dismiss this example because it is simple. Only simple examples let you concentrate fully on the general ideas and techniques behind the algorithm, without being distracted by obscure details. For harder examples, there will be plenty in the dynamic programming series that follows.
Brute-Force Recursion
The mathematical form of the Fibonacci sequence is recursive, and the code is written as follows:
int fib(int N) {
    if (N == 1 || N == 2) return 1;
    return fib(N - 1) + fib(N - 2);
}
int fib(int N) {
    if (N == 1 || N == 2) return 1;
    return fib(N - 1) + fib(N - 2);
}
def fib(N: int) -> int:
    if N == 1 or N == 2:
        return 1
    return fib(N - 1) + fib(N - 2)
// fib calculates the N-th term of the Fibonacci sequence
func fib(N int) int {
    if N == 1 || N == 2 {
        return 1
    }
    return fib(N-1) + fib(N-2)
}
var fib = function(N) {
    if (N === 1 || N === 2) return 1;
    return fib(N - 1) + fib(N - 2);
};
This concept is well-known, as it is often used by school teachers to illustrate recursion. While the code is concise and easy to understand, it is highly inefficient. Let's consider ( n = 20 ) and draw the recursion tree:

Tip
Whenever you encounter a problem that requires recursion, it's best to draw a recursion tree. This helps in analyzing the complexity of the algorithm and identifying inefficiencies.
How do we interpret this recursion tree? To compute the original problem ( f(20) ), we must first solve the subproblems ( f(19) ) and ( f(18) ). To compute ( f(19) ), we need to solve ( f(18) ) and ( f(17) ), and so on. When we reach ( f(1) ) or ( f(2) ), the results are known, and the recursion tree stops growing.
How do we calculate the time complexity of a recursive algorithm? It is the product of the number of subproblems and the time required to solve each subproblem.
First, let's determine the number of subproblems. The total number of nodes in a binary tree is exponential, so the number of subproblems is ( O(2^n) ).
Next, calculate the time to solve a subproblem. In this algorithm, there are no loops, only an addition operation ( f(n-1) + f(n-2) ), which takes ( O(1) ) time.
Thus, the overall time complexity is ( O(2^n) ), which is exponential and inefficient.
Observing the recursion tree reveals the inefficiency: there is a significant amount of redundant computation. For example, ( f(18) ) is computed twice, and this redundancy extends to many other nodes, leading to a massive waste of time.
This illustrates the first property of dynamic programming problems: overlapping subproblems. Next, we will explore how to address this issue.
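To see the overlap concretely, here is a small Python sketch (my own illustration, not part of the original solutions) that counts how often the naive recursion solves each subproblem:

```python
from collections import Counter

# counts how many times each subproblem f(n) is solved
calls = Counter()

def fib(n: int) -> int:
    calls[n] += 1
    if n == 1 or n == 2:
        return 1
    return fib(n - 1) + fib(n - 2)

fib(20)
# f(18) is solved twice: once for f(20) and once for f(19);
# the whole recursion tree contains 13529 nodes
```

Running it confirms what the recursion tree shows: the call counts of the smaller subproblems grow like the Fibonacci numbers themselves, which is exactly the redundancy memoization will remove.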
Recursive Solution with Memoization
Understanding the problem is already half the battle. Since the main issue is redundant calculations, we can create a "memo" to store answers. After solving a subproblem, instead of immediately returning the result, we first record it in the memo. Whenever we encounter a subproblem, we check the memo first. If we find that the problem has already been solved, we simply use the stored answer instead of recalculating.
Typically, an array is used as this memo, but you could also use a hash table (dictionary). The concept remains the same.
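For instance, here is a minimal Python sketch of the hash-table variant (my own illustration; the array-based versions follow below):

```python
def fib(N: int) -> int:
    memo = {}  # dictionary as the memo: subproblem n -> answer

    def dp(n: int) -> int:
        # base case
        if n == 0 or n == 1:
            return n
        # already solved: reuse the recorded answer
        if n in memo:
            return memo[n]
        memo[n] = dp(n - 1) + dp(n - 2)
        return memo[n]

    return dp(N)
```

The only change from the array version is that the `n in memo` membership check replaces the `memo[n] != 0` sentinel.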
int fib(int N) {
    // initialize the memo array to all zeros
    int[] memo = new int[N + 1];
    // perform recursion with memoization
    return dp(memo, N);
}

// perform recursion with memoization
int dp(int[] memo, int n) {
    // base case
    if (n == 0 || n == 1) return n;
    // already calculated, no need to calculate again
    if (memo[n] != 0) return memo[n];
    memo[n] = dp(memo, n - 1) + dp(memo, n - 2);
    return memo[n];
}
int fib(int N) {
    // initialize the memoization array to all 0s
    vector<int> memo(N + 1, 0);
    // perform recursion with memoization
    return dp(memo, N);
}

// perform recursion with memoization
int dp(vector<int>& memo, int n) {
    // base case
    if (n == 0 || n == 1) return n;
    // already calculated, no need to recalculate
    if (memo[n] != 0) return memo[n];
    memo[n] = dp(memo, n - 1) + dp(memo, n - 2);
    return memo[n];
}
def fib(N: int) -> int:
    # initialize the memoization array to 0
    memo = [0] * (N + 1)
    # perform recursion with memoization
    return dp(memo, N)

# perform recursion with memoization
def dp(memo: List[int], n: int) -> int:
    # base case
    if n == 0 or n == 1: return n
    # already calculated, no need to calculate again
    if memo[n] != 0: return memo[n]
    memo[n] = dp(memo, n - 1) + dp(memo, n - 2)
    return memo[n]
func fib(N int) int {
    // initialize the memoization array to all 0s
    memo := make([]int, N+1)
    // perform recursion with memoization
    return dp(memo, N)
}

// perform recursion with memoization
func dp(memo []int, n int) int {
    // base case
    if n == 0 || n == 1 {
        return n
    }
    // already calculated, no need to compute again
    if memo[n] != 0 {
        return memo[n]
    }
    memo[n] = dp(memo, n-1) + dp(memo, n-2)
    return memo[n]
}
var fib = function(N) {
    // Initialize the memo array with all zeros
    let memo = new Array(N + 1).fill(0);
    // Perform recursion with memoization
    return dp(memo, N);
};

// Recurse with memoization
var dp = function(memo, n) {
    // base case
    if (n == 0 || n == 1) return n;
    // Already calculated, no need to compute again
    if (memo[n] != 0) return memo[n];
    memo[n] = dp(memo, n - 1) + dp(memo, n - 2);
    return memo[n];
};
Now, draw the recursion tree, and you will understand what the "memoization" actually does.

In fact, the recursive algorithm with "memoization" transforms a recursion tree, which contains a large amount of redundancy, into a recursion graph without redundancy by "pruning". This significantly reduces the number of subproblems (i.e., nodes in the recursion graph).

How do you calculate the time complexity of a recursive algorithm? It's the number of subproblems multiplied by the time needed to solve one subproblem.
The number of subproblems, i.e., the total number of nodes in the graph, is proportional to the input size n = 20, since the algorithm avoids redundant calculations. The subproblems are f(1), f(2), f(3) ... f(20), so the number of subproblems is O(n).
The time to solve one subproblem is O(1), as there are no loops involved.
Therefore, the time complexity of this algorithm is O(n), which is a significant improvement over the brute-force algorithm.
At this point, the efficiency of the recursive solution with memoization is the same as that of the iterative dynamic programming solution. In fact, this approach is very similar to the common dynamic programming solutions, except that it solves problems "top-down" using recursion, whereas the more common dynamic programming code solves problems "bottom-up" using iteration.
What does "top-down" mean? Notice that the recursion tree (or graph) we drew earlier extends from top to bottom: it starts from a larger problem, such as f(20), gradually breaks it down into smaller problems until it reaches the base cases f(1) and f(2), and then returns the answers layer by layer. This is what we call "top-down."
What does "bottom-up" mean? Conversely, we start from the bottom, with the simplest, smallest-scale problems whose results are known, f(1) and f(2) (the base cases), and work our way up until we reach the desired answer f(20). This is the idea behind "iteration," and it is why dynamic programming code typically avoids recursion and relies on loop iteration for its computation.
Iterative (Bottom-Up) Solution Using the dp Array
Inspired by the previous step of using a "memoization" table, we can separate this "memoization" into an independent table, commonly known as the DP table. Wouldn't it be great to perform "bottom-up" calculations on this table!
int fib(int N) {
    if (N == 0) return 0;
    int[] dp = new int[N + 1];
    // base case
    dp[0] = 0; dp[1] = 1;
    // state transition
    for (int i = 2; i <= N; i++) {
        dp[i] = dp[i - 1] + dp[i - 2];
    }
    return dp[N];
}
int fib(int N) {
    if (N == 0) return 0;
    vector<int> dp(N + 1);
    // base case
    dp[0] = 0; dp[1] = 1;
    // state transition
    for (int i = 2; i <= N; i++) {
        dp[i] = dp[i - 1] + dp[i - 2];
    }
    return dp[N];
}
def fib(N: int) -> int:
    if N == 0:
        return 0
    dp = [0] * (N + 1)
    # base case
    dp[0] = 0
    dp[1] = 1
    # state transition
    for i in range(2, N + 1):
        dp[i] = dp[i - 1] + dp[i - 2]
    return dp[N]
func fib(N int) int {
    if N == 0 {
        return 0
    }
    dp := make([]int, N+1)
    // base case
    dp[0], dp[1] = 0, 1
    // state transition
    for i := 2; i <= N; i++ {
        dp[i] = dp[i-1] + dp[i-2]
    }
    return dp[N]
}
var fib = function(N) {
    if (N === 0) return 0;
    let dp = new Array(N + 1).fill(0);
    // base case
    dp[0] = 0; dp[1] = 1;
    // state transition
    for (let i = 2; i <= N; i++) {
        dp[i] = dp[i - 1] + dp[i - 2];
    }
    return dp[N];
};
A diagram makes it much easier to understand, and you might notice that this DP table closely resembles the result after "pruning" from before, only calculated in reverse:

In fact, the "memo" array in the recursive solution with memoization, once completed, is the same as the dp array in this approach. By comparing the execution processes of the two algorithms, you can see their connection more intuitively.
Therefore, the top-down and bottom-up approaches are essentially similar, and in most cases, their efficiency is nearly the same.
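The claim that the finished memo matches the dp table can be checked directly. One subtlety: in the memoized code above, the base cases n = 0 and n = 1 are returned without ever being written into the memo, so the two arrays agree from index 2 onward. A quick Python check, restating both solutions from this section:

```python
def fib_memo(N: int) -> list:
    # top-down: fill the memo through recursion
    memo = [0] * (N + 1)
    def dp(n: int) -> int:
        if n == 0 or n == 1:
            return n
        if memo[n] != 0:
            return memo[n]
        memo[n] = dp(n - 1) + dp(n - 2)
        return memo[n]
    dp(N)
    return memo

def fib_table(N: int) -> list:
    # bottom-up: fill the dp table through iteration
    dp = [0] * (N + 1)
    dp[1] = 1
    for i in range(2, N + 1):
        dp[i] = dp[i - 1] + dp[i - 2]
    return dp

memo, table = fib_memo(20), fib_table(20)
# identical wherever both have been filled in
assert memo[2:] == table[2:]
```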
Further Insights
Here, we introduce the term "state transition equation," which is essentially a mathematical form describing the structure of a problem:

f(n) = 1,                      if n = 1 or n = 2
f(n) = f(n - 1) + f(n - 2),    if n > 2
Why is it called a "state transition equation"? Simply to sound sophisticated.
The function parameter f(n) continuously changes, so you can think of the parameter n as a state. The state n is derived (by addition) from the states n - 1 and n - 2, and that is what we call a state transition. That's all there is to it.
You will notice that all operations in the solutions above, such as return f(n - 1) + f(n - 2) and dp[i] = dp[i - 1] + dp[i - 2], as well as the initialization of the memoization or DP table, are just different expressions of this equation.
This illustrates the importance of establishing a "state transition equation" as it is the core of solving the problem. It is also easy to see that the state transition equation directly represents the brute-force solution.
Do not underestimate brute-force solutions; the hardest part of dynamic programming problems is writing this brute-force solution, which is the state transition equation.
Once you write the brute-force solution, optimization is merely a matter of using a memoization or a DP table, with no additional mysteries.
At the end of this example, let's discuss a detail for optimization.
Observant readers will notice that, according to the Fibonacci sequence's state transition equation, the current state n depends only on the two previous states, n - 1 and n - 2. Thus, it is unnecessary to store all states in a long DP table; storing the two most recent states is enough.
Therefore, you can further optimize to reduce the space complexity to O(1). This leads to the most common algorithm for calculating Fibonacci numbers:
int fib(int n) {
    if (n == 0 || n == 1) {
        // base case
        return n;
    }
    // represent dp[i - 1] and dp[i - 2] respectively
    int dp_i_1 = 1, dp_i_2 = 0;
    for (int i = 2; i <= n; i++) {
        // dp[i] = dp[i - 1] + dp[i - 2];
        int dp_i = dp_i_1 + dp_i_2;
        // rolling update
        dp_i_2 = dp_i_1;
        dp_i_1 = dp_i;
    }
    return dp_i_1;
}
int fib(int n) {
    if (n == 0 || n == 1) {
        // base case
        return n;
    }
    // represent dp[i - 1] and dp[i - 2] respectively
    int dp_i_1 = 1, dp_i_2 = 0;
    for (int i = 2; i <= n; i++) {
        // dp[i] = dp[i - 1] + dp[i - 2];
        int dp_i = dp_i_1 + dp_i_2;
        // rolling update
        dp_i_2 = dp_i_1;
        dp_i_1 = dp_i;
    }
    return dp_i_1;
}
def fib(n: int) -> int:
    if n == 0 or n == 1:
        # base case
        return n
    # represent dp[i - 1] and dp[i - 2] respectively
    dp_i_1, dp_i_2 = 1, 0
    for i in range(2, n + 1):
        # dp[i] = dp[i - 1] + dp[i - 2]
        dp_i = dp_i_1 + dp_i_2
        # rolling update
        dp_i_2 = dp_i_1
        dp_i_1 = dp_i
    return dp_i_1
func fib(n int) int {
    if n == 0 || n == 1 {
        // base case
        return n
    }
    // represent dp[i - 1] and dp[i - 2] respectively
    dp_i_1, dp_i_2 := 1, 0
    for i := 2; i <= n; i++ {
        // dp[i] = dp[i - 1] + dp[i - 2]
        dp_i := dp_i_1 + dp_i_2
        // rolling update
        dp_i_2 = dp_i_1
        dp_i_1 = dp_i
    }
    return dp_i_1
}
var fib = function(n) {
    if (n === 0 || n === 1) {
        // base case
        return n;
    }
    // represent dp[i - 1] and dp[i - 2] respectively
    let dp_i_1 = 1, dp_i_2 = 0;
    for (let i = 2; i <= n; i++) {
        // dp[i] = dp[i - 1] + dp[i - 2];
        let dp_i = dp_i_1 + dp_i_2;
        // rolling update
        dp_i_2 = dp_i_1;
        dp_i_1 = dp_i;
    }
    return dp_i_1;
};
This is usually the final step of optimization in dynamic programming problems. If we find that each state transition only requires a part of the DP table, we can attempt to reduce the size of the DP table, recording only the necessary data, thereby reducing the space complexity.
The above example effectively reduces the size of the DP table from n to 2, decreasing the space complexity by an order of magnitude. I will further explain this space-compression technique in the article Dimensionality Reduction in Dynamic Programming. Generally, it is used to compress a two-dimensional DP table into a one-dimensional one, reducing the space complexity from O(n^2) to O(n).
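As a small preview of that dimensionality-reduction idea, here is a sketch using an example of my own choosing (Pascal's triangle, not a problem from this article). The recurrence C[i][j] = C[i-1][j-1] + C[i-1][j] naturally fills a 2D table, but row i only reads row i - 1, so a single 1D array suffices as long as j is updated from right to left, ensuring the previous row's entries are read before being overwritten:

```python
def pascal_row(n: int) -> list:
    # row[j] ends up holding C(n, j); one array replaces an (n+1) x (n+1) table
    row = [1] + [0] * n
    for i in range(1, n + 1):
        # iterate j from high to low so row[j - 1] still holds
        # the previous row's value when it is read
        for j in range(i, 0, -1):
            row[j] += row[j - 1]
    return row
```

This is exactly the O(n^2) -> O(n) compression described above, applied to a toy recurrence.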
Some may ask why the other important characteristic of dynamic programming, "optimal substructure," is not discussed. It will be addressed later. Strictly speaking, the Fibonacci sequence example does not count as dynamic programming because it does not involve finding an optimal value. The purpose was to illustrate the method of eliminating overlapping subproblems and demonstrate the process of refining to an optimal solution. Next, let's look at the second example, the Coin Change Problem.
2. Coin Change Problem
This is LeetCode's problem 322, "Coin Change":
You are given k types of coins with denominations c1, c2, ... ck, each available in unlimited quantity. Given a total amount amount, determine the minimum number of coins needed to make up that amount; if the amount cannot be made up, the algorithm returns -1. The function signature is as follows:
// `coins` contains the denominations of available coins, and `amount` is the target amount
int coinChange(int[] coins, int amount);
// coins contains the denominations of available coins, and amount is the target amount
int coinChange(vector<int>& coins, int amount);
# coins contains the denominations of available coins, amount is the target amount
def coinChange(coins: List[int], amount: int) -> int:
// coins contains the denominations of available coins, and amount is the target amount
func coinChange(coins []int, amount int) int {}
var coinChange = function(coins, amount) {}
For example, if k = 3 with coin denominations of 1, 2, and 5, and the total amount is amount = 11, you need at least 3 coins: 11 = 5 + 5 + 1.
How do you think a computer should solve this problem? Obviously, it involves listing all possible ways to combine the coins and then finding the minimum number of coins needed.
Brute Force Recursion
Firstly, this problem is a dynamic programming problem because it has the "optimal substructure" property. For a problem to have an "optimal substructure," its subproblems must be independent of each other. What does it mean to be independent? You probably don't want to see a mathematical proof, so I'll explain with a simple example.
Imagine you are taking an exam where each subject's score is independent of the others. Your main goal is to achieve the highest total score. Your subproblems would be to score the highest in each subject, like Chinese, Math, etc. To score the highest in each subject, you need to maximize your scores in multiple-choice questions, fill-in-the-blank questions, and so on. Ultimately, if you score full marks in each subject, you achieve the highest total score.
This process gives the correct result: the highest total score is the sum of the individual scores. This works because the process follows the optimal substructure, where the subproblems of scoring the highest in each subject are independent and do not interfere with each other.
However, if we add a condition where your Chinese and Math scores are interdependent, meaning you cannot score full marks in both simultaneously (if your Math score is high, your Chinese score decreases, and vice versa), then the highest total score you can achieve will not be the sum of the individual maximum scores. Following the previous approach would lead to an incorrect result. This is because the subproblems of scoring the highest in each subject are not independent; the scores in Chinese and Math affect each other, preventing simultaneous optimization, thus breaking the optimal substructure.
Returning to the coin change problem, why does it have an optimal substructure? Suppose you have coins of denominations 1, 2, 5 and want the minimum number of coins for amount = 11 (the main problem). If you know the minimum number of coins for amount = 10, 9, 6 (the subproblems), you just need to add one more coin (of denomination 1, 2, or 5) to each subproblem's solution and take the minimum of these values to get the main problem's answer. Since there is no limit on the number of coins, the subproblems do not constrain each other; they are independent.
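This decomposition can be verified numerically with a tiny brute-force helper (a sketch of my own, separate from the solution developed below):

```python
def min_coins(coins, amount):
    # plain brute-force recursion, only to check the decomposition
    if amount == 0:
        return 0
    if amount < 0:
        return float('inf')  # impossible to make up
    return min(1 + min_coins(coins, amount - c) for c in coins)

coins = [1, 2, 5]
# answers to the subproblems amount = 10, 9, 6
subs = [min_coins(coins, 11 - c) for c in coins]
# the main problem is exactly one coin more than the best subproblem
assert min_coins(coins, 11) == 1 + min(subs) == 3
```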
Tip
Regarding the issue of optimal substructure, further examples and discussions will be provided in the subsequent article Dynamic Programming Q&A.
Now that we have identified this as a dynamic programming problem, the next step is to consider how to formulate the correct state transition equation.
1. Determine the 'State', which is the variable that changes in both the original problem and the subproblems. Since the number of coins is unlimited and the denominations are fixed by the problem, the only variable that changes as we approach the base case is the target amount amount.
2. Determine the 'Choices', which are the actions that cause the 'State' to change. The target amount changes because you are selecting coins; each coin chosen reduces the target amount. Therefore, all coin denominations represent your 'Choices'.
3. Define the dp function/array clearly. Since we are discussing a top-down approach, there will be a recursive dp function. Generally, the function's parameters are the variables that change during state transitions, i.e., the 'States' described above; its return value is the quantity the problem asks us to compute. Here there is only one state, the 'target amount', and the problem asks for the minimum number of coins needed to make up that amount.
Thus, we can define the dp function as follows: dp(n) takes a target amount n and returns the minimum number of coins required to make up n.
According to this definition, the final answer is the return value of dp(amount).
Understanding these key points allows us to write the pseudocode for the solution:
// Pseudocode framework
int coinChange(int[] coins, int amount) {
    // The final result required by the problem is dp(amount)
    return dp(coins, amount);
}

// Definition: To make up the amount n, at least dp(coins, n) coins are needed
int dp(int[] coins, int n) {
    // Initialize to the maximum value
    int res = Integer.MAX_VALUE;
    // Make a choice, choose the result that requires the fewest coins
    for (int coin : coins) {
        res = min(res, 1 + dp(coins, n - coin));
    }
    return res;
}
// Pseudocode framework
int coinChange(vector<int>& coins, int amount) {
    // The final result required by the problem is dp(amount)
    return dp(coins, amount);
}

// Definition: To make up the amount n, at least dp(coins, n) coins are needed
int dp(vector<int>& coins, int n) {
    // Make a choice, choose the result that requires the fewest coins
    int res = INT_MAX;
    for (const int coin : coins) {
        res = min(res, subProblem + 1);
    }
    return res;
}
# Pseudocode framework
def coinChange(coins: List[int], amount: int) -> int:
    # The final result required by the problem is dp(amount)
    return dp(coins, amount)

# Definition: To make up the amount n, at least dp(coins, n) coins are needed
def dp(coins: List[int], n: int) -> int:
    # Make a choice, choose the result that requires the fewest coins
    # Initialize res to positive infinity
    res = float('inf')
    for coin in coins:
        res = min(res, sub_problem + 1)
    return res
// Pseudocode framework
func coinChange(coins []int, amount int) int {
    // The final result required by the problem is dp(amount)
    return dp(coins, amount)
}

// Definition: To make up the amount n, at least dp(coins, n) coins are needed
func dp(coins []int, n int) int {
    // Initialize to the maximum value
    res := math.MaxInt32
    // Make a choice, choose the result that requires the fewest coins
    for _, coin := range coins {
        res = min(res, 1+subProblem)
    }
    return res
}
// Pseudocode framework
var coinChange = function(coins, amount) {
    // The final result required by the problem is dp(amount)
    return dp(coins, amount);
};

// Definition: To make up the amount n, at least dp(coins, n) coins are needed
var dp = function(coins, n) {
    // Initialize to the maximum value
    let res = Infinity;
    // Make a choice, choose the result that requires the fewest coins
    for (let coin of coins) {
        res = Math.min(res, 1 + subProblem);
    }
    return res;
};
According to the pseudocode, we add the base case to get the final answer. Clearly, when the target amount is 0, the number of coins needed is 0; when the target amount is less than 0, there is no solution, and we return -1:
class Solution {
    public int coinChange(int[] coins, int amount) {
        // the final result required by the problem is dp(amount)
        return dp(coins, amount);
    }

    // definition: to make up the amount n, at least dp(coins, n) coins are needed
    int dp(int[] coins, int amount) {
        // base case
        if (amount == 0) return 0;
        if (amount < 0) return -1;

        int res = Integer.MAX_VALUE;
        for (int coin : coins) {
            // calculate the result of the subproblem
            int subProblem = dp(coins, amount - coin);
            // skip if the subproblem has no solution
            if (subProblem == -1) continue;
            // choose the optimal solution from the subproblem, then add one
            res = Math.min(res, subProblem + 1);
        }
        return res == Integer.MAX_VALUE ? -1 : res;
    }
}
class Solution {
public:
    int coinChange(vector<int>& coins, int amount) {
        // the final result required by the problem is dp(amount)
        return dp(coins, amount);
    }

private:
    // definition: to make up the amount n, at least dp(coins, n) coins are needed
    int dp(vector<int>& coins, int amount) {
        // base case
        if (amount == 0) return 0;
        if (amount < 0) return -1;

        int res = INT_MAX;
        for (int coin : coins) {
            // calculate the result of the subproblem
            int subProblem = dp(coins, amount - coin);
            // skip if the subproblem has no solution
            if (subProblem == -1) continue;
            // choose the optimal solution in the subproblem, then add one
            res = min(res, subProblem + 1);
        }
        return res == INT_MAX ? -1 : res;
    }
};
class Solution:
    def coinChange(self, coins: List[int], amount: int) -> int:
        # the final result required by the problem is dp(amount)
        return self.dp(coins, amount)

    # definition: to make up the amount n, at least dp(coins, n) coins are needed
    def dp(self, coins, amount):
        # base case
        if amount == 0:
            return 0
        if amount < 0:
            return -1

        res = float('inf')
        for coin in coins:
            # calculate the result of the subproblem
            subProblem = self.dp(coins, amount - coin)
            # skip if the subproblem has no solution
            if subProblem == -1:
                continue
            # choose the optimal solution in the subproblem, then add one
            res = min(res, subProblem + 1)
        return res if res != float('inf') else -1
func coinChange(coins []int, amount int) int {
    // The final result required by the problem is dp(amount)
    return dp(coins, amount)
}

// Definition: To make up the amount n, at least dp(coins, n) coins are needed
func dp(coins []int, amount int) int {
    // base case
    if amount == 0 {
        return 0
    }
    if amount < 0 {
        return -1
    }

    res := math.MaxInt32
    for _, coin := range coins {
        // Calculate the result of the subproblem
        subProblem := dp(coins, amount-coin)
        // Skip if the subproblem has no solution
        if subProblem == -1 {
            continue
        }
        // Choose the optimal solution in the subproblem, then add one
        res = min(res, subProblem+1)
    }
    if res == math.MaxInt32 {
        return -1
    }
    return res
}

func min(x, y int) int {
    if x < y {
        return x
    }
    return y
}
var coinChange = function(coins, amount) {
    // The final result required by the problem is dp(amount)
    return dp(coins, amount);
};

// Definition: To make up the amount n, at least dp(coins, n) coins are needed
function dp(coins, amount) {
    // base case
    if (amount === 0) return 0;
    if (amount < 0) return -1;

    let res = Number.MAX_SAFE_INTEGER;
    for (let coin of coins) {
        // Calculate the result of the subproblem
        let subProblem = dp(coins, amount - coin);
        // Skip if the subproblem has no solution
        if (subProblem === -1) continue;
        // Choose the optimal solution in the subproblem, then add one
        res = Math.min(res, subProblem + 1);
    }
    return res === Number.MAX_SAFE_INTEGER ? -1 : res;
}
Info
The signatures of the coinChange and dp functions are identical, so in theory there is no need to write a separate dp function. However, for clarity in the following explanation, a separate dp function is used to implement the main logic.
Additionally, some readers have asked why the subproblem's result is incremented by 1 (subProblem + 1) rather than by the coin's value. The key to dynamic programming problems lies in the definition of the dp function or array: here its return value is a number of coins, not an amount, so choosing one more coin adds exactly 1 to the subproblem's count.
At this point, the state transition equation is essentially complete. The above algorithm already represents the brute-force solution.

At this point, the problem is essentially solved, but we still need to eliminate overlapping subproblems. For example, take amount = 11, coins = {1, 2, 5} and draw the recursion tree:

Time Complexity Analysis of Recursive Algorithm: The total number of subproblems multiplied by the time required to solve each subproblem.
The total number of subproblems is the number of nodes in the recursion tree. However, due to pruning based on the specific coin denominations provided, the tree's growth is irregular, making it difficult to precisely calculate the number of nodes. Generally, we estimate an upper bound for the time complexity in such cases.
Assuming the target amount is n and the number of coin denominations is k, the worst-case height of the recursion tree is n (when only coins of denomination 1 are used). Assuming a full k-ary tree, the total number of nodes is on the order of k^n.
Each subproblem contains a for-loop over the k coins, so solving one subproblem takes O(k). Multiplying the two factors gives a total time complexity of O(k * k^n), which is exponential.
Memoized Recursion
Similar to the Fibonacci sequence example, we can eliminate subproblems by incorporating memoization:
class Solution {
    int[] memo;

    public int coinChange(int[] coins, int amount) {
        memo = new int[amount + 1];
        // Initialize the memo with a special value that won't be
        // picked, representing that it has not been calculated yet
        Arrays.fill(memo, -666);
        return dp(coins, amount);
    }

    int dp(int[] coins, int amount) {
        if (amount == 0) return 0;
        if (amount < 0) return -1;
        // Check the memo to prevent repeated calculations
        if (memo[amount] != -666)
            return memo[amount];

        int res = Integer.MAX_VALUE;
        for (int coin : coins) {
            // Calculate the result of the subproblem
            int subProblem = dp(coins, amount - coin);
            // Skip if the subproblem has no solution
            if (subProblem == -1) continue;
            // Choose the optimal solution in the subproblem, then add one
            res = Math.min(res, subProblem + 1);
        }
        // Store the calculation result in the memo
        memo[amount] = (res == Integer.MAX_VALUE) ? -1 : res;
        return memo[amount];
    }
}
class Solution {
public:
    vector<int> memo;

    int coinChange(vector<int>& coins, int amount) {
        // Initialize the memo with a special value that won't be
        // picked as a result, meaning "not computed yet"
        memo = vector<int>(amount + 1, -666);
        return dp(coins, amount);
    }

    int dp(vector<int>& coins, int amount) {
        if (amount == 0) return 0;
        if (amount < 0) return -1;
        // Check the memo to prevent repeated calculations
        if (memo[amount] != -666)
            return memo[amount];

        int res = INT_MAX;
        for (int coin : coins) {
            // Calculate the result of the subproblem
            int subProblem = dp(coins, amount - coin);
            // Skip if the subproblem has no solution
            if (subProblem == -1) continue;
            // Choose the optimal solution from the subproblems and add one
            res = min(res, subProblem + 1);
        }
        // Store the calculation result in the memo
        memo[amount] = (res == INT_MAX) ? -1 : res;
        return memo[amount];
    }
};
class Solution:
    def __init__(self):
        self.memo = []

    def coinChange(self, coins: List[int], amount: int) -> int:
        # Initialize the memo with a special value that won't be
        # picked as a result, meaning "not computed yet"
        self.memo = [-666] * (amount + 1)
        return self.dp(coins, amount)

    def dp(self, coins, amount):
        if amount == 0: return 0
        if amount < 0: return -1
        # Check the memo to prevent repeated calculations
        if self.memo[amount] != -666:
            return self.memo[amount]

        res = float('inf')
        for coin in coins:
            # Calculate the result of the subproblem
            subProblem = self.dp(coins, amount - coin)
            # Skip if the subproblem has no solution
            if subProblem == -1: continue
            # Choose the optimal solution from the subproblems and add one
            res = min(res, subProblem + 1)
        # Store the calculation result in the memo
        self.memo[amount] = res if res != float('inf') else -1
        return self.memo[amount]
import "math"

func coinChange(coins []int, amount int) int {
    memo := make([]int, amount+1)
    // Initialize the memo with a special value that won't be
    // picked as a result, meaning "not computed yet"
    for i := range memo {
        memo[i] = -666
    }
    return dp(coins, amount, memo)
}

func dp(coins []int, amount int, memo []int) int {
    if amount == 0 {
        return 0
    }
    if amount < 0 {
        return -1
    }
    // Check the memo to prevent repeated calculations
    if memo[amount] != -666 {
        return memo[amount]
    }

    res := math.MaxInt32
    for _, coin := range coins {
        // Calculate the result of the subproblem
        subProblem := dp(coins, amount-coin, memo)
        // Skip if the subproblem has no solution
        if subProblem == -1 {
            continue
        }
        // Choose the optimal solution in the subproblem and then add one
        if subProblem+1 < res {
            res = subProblem + 1
        }
    }
    // Store the calculation result in the memo
    if res == math.MaxInt32 {
        memo[amount] = -1
    } else {
        memo[amount] = res
    }
    return memo[amount]
}
var coinChange = function(coins, amount) {
    // Initialize the memo with a special value that won't be
    // picked as a result, meaning "not computed yet"
    let memo = new Array(amount + 1).fill(-666);
    return dp(coins, amount, memo);
}

function dp(coins, amount, memo) {
    if (amount === 0) return 0;
    if (amount < 0) return -1;
    // Check the memo to prevent repeated calculations
    if (memo[amount] !== -666) {
        return memo[amount];
    }

    let res = Infinity;
    for (let coin of coins) {
        // Calculate the result of the subproblem
        let subProblem = dp(coins, amount - coin, memo);
        // Skip if the subproblem has no solution
        if (subProblem === -1) continue;
        // Choose the optimal solution from the subproblem and add one
        res = Math.min(res, subProblem + 1);
    }
    // Store the calculation result in the memo
    memo[amount] = res === Infinity ? -1 : res;
    return memo[amount];
}
Without drawing a diagram, it's evident that memoization significantly reduces the number of subproblems, completely eliminating redundant computation. The total number of subproblems won't exceed the target amount n, meaning the number of subproblems is O(n). The time to process one subproblem is still O(k), for the for-loop over the coins, making the total time complexity O(kn).
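As a sanity check on this bound, here is a Python sketch (the helper `coin_change_memo` and its counter are mine, not part of the original solutions) that counts how many subproblems the memoized version actually computes; each amount is solved at most once, so the count never exceeds n:

```python
def coin_change_memo(coins, amount):
    """Memoized coin change that also reports how many states it computed."""
    memo = {}
    computed = 0  # number of subproblems actually solved

    def dp(amount):
        nonlocal computed
        if amount == 0:
            return 0
        if amount < 0:
            return -1
        if amount in memo:              # memo hit: no recomputation
            return memo[amount]
        computed += 1
        res = float('inf')
        for coin in coins:
            sub = dp(amount - coin)
            if sub == -1:
                continue
            res = min(res, sub + 1)
        memo[amount] = res if res != float('inf') else -1
        return memo[amount]

    return dp(amount), computed

ans, computed = coin_change_memo([1, 2, 5], 100)
print(ans, computed)  # the computed count stays <= 100
```

The answer for amount 100 with coins [1, 2, 5] is 20 (twenty 5s), and the number of computed states is bounded by the amount, matching the O(n) subproblem count.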
Iterative Solution with the DP Array
Of course, we can also use a dp table in a bottom-up manner to eliminate overlapping subproblems. With regard to "state," "choice," and the base case, nothing changes from before. The definition of the dp array is similar to that of the dp function: the "state," namely the target amount, is used as a variable. In the dp function it appears as a function parameter, while in the dp array it appears as an array index:

Definition of the dp array: when the target amount is i, at least dp[i] coins are needed to make up the amount.
Based on the dynamic programming framework provided at the beginning of our article, the following solution can be written:
class Solution {
    public int coinChange(int[] coins, int amount) {
        int[] dp = new int[amount + 1];
        // The array size is amount + 1, and the initial value is also amount + 1
        Arrays.fill(dp, amount + 1);
        // base case
        dp[0] = 0;
        // The outer for loop traverses all possible values of all states
        for (int i = 0; i < dp.length; i++) {
            // The inner for loop finds the minimum value among all choices
            for (int coin : coins) {
                // The subproblem has no solution, skip
                if (i - coin < 0) {
                    continue;
                }
                dp[i] = Math.min(dp[i], 1 + dp[i - coin]);
            }
        }
        return (dp[amount] == amount + 1) ? -1 : dp[amount];
    }
}
class Solution {
public:
    int coinChange(vector<int>& coins, int amount) {
        // the array size is amount + 1, and the initial value is also amount + 1
        vector<int> dp(amount + 1, amount + 1);
        // base case
        dp[0] = 0;
        // the outer for loop traverses all possible values of all states
        for (int i = 0; i < dp.size(); i++) {
            // the inner for loop finds the minimum value among all choices
            for (int coin : coins) {
                // the subproblem has no solution, skip
                if (i - coin < 0) {
                    continue;
                }
                dp[i] = min(dp[i], 1 + dp[i - coin]);
            }
        }
        return (dp[amount] == amount + 1) ? -1 : dp[amount];
    }
};
class Solution:
    def coinChange(self, coins: List[int], amount: int) -> int:
        # The array size is amount + 1, and the initial value is also amount + 1
        dp = [amount + 1] * (amount + 1)
        # base case
        dp[0] = 0
        # The outer for loop traverses all possible values of all states
        for i in range(len(dp)):
            # The inner for loop finds the minimum value among all choices
            for coin in coins:
                # The subproblem has no solution, skip
                if i - coin < 0:
                    continue
                dp[i] = min(dp[i], 1 + dp[i - coin])
        return -1 if dp[amount] == amount + 1 else dp[amount]
func coinChange(coins []int, amount int) int {
    // the array size is amount + 1, and the initial value is also amount + 1
    dp := make([]int, amount+1)
    for i := range dp {
        dp[i] = amount + 1
    }
    // base case
    dp[0] = 0
    // the outer for loop traverses all possible values of all states
    for i := 0; i < len(dp); i++ {
        // the inner for loop finds the minimum value among all choices
        for _, coin := range coins {
            // the subproblem has no solution, skip
            if i-coin < 0 {
                continue
            }
            dp[i] = min(dp[i], 1+dp[i-coin])
        }
    }
    if dp[amount] == amount+1 {
        return -1
    }
    return dp[amount]
}

func min(a, b int) int {
    if a < b {
        return a
    }
    return b
}
var coinChange = function(coins, amount) {
    // The array size is amount + 1, and the initial value is also amount + 1
    let dp = new Array(amount + 1).fill(amount + 1);
    // base case
    dp[0] = 0;
    // The outer for loop traverses all possible values of all states
    for (let i = 0; i < dp.length; i++) {
        // The inner for loop finds the minimum value among all choices
        for (let coin of coins) {
            // The subproblem has no solution, skip
            if (i - coin < 0) {
                continue;
            }
            dp[i] = Math.min(dp[i], 1 + dp[i - coin]);
        }
    }
    return dp[amount] == amount + 1 ? -1 : dp[amount];
};
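To verify the table-filling logic end to end, here is a standalone Python sketch of the same bottom-up approach (the function name `coin_change` and the test values are mine), run on the article's example of amount = 11 with coins = [1, 2, 5], plus an unreachable case:

```python
def coin_change(coins, amount):
    """Bottom-up DP: dp[i] is the minimum number of coins to make amount i."""
    # amount + 1 acts as "infinity": no valid answer can be that large
    dp = [amount + 1] * (amount + 1)
    dp[0] = 0  # base case: zero coins make amount 0
    for i in range(1, amount + 1):
        for coin in coins:
            if i - coin >= 0:
                dp[i] = min(dp[i], 1 + dp[i - coin])
    # the sentinel surviving means the amount is unreachable
    return -1 if dp[amount] == amount + 1 else dp[amount]

print(coin_change([1, 2, 5], 11))  # 3 (5 + 5 + 1)
print(coin_change([2], 3))         # -1 (odd amounts cannot be made from 2s)
```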
Info

Why are the values in the dp array initialized to amount + 1? Because the number of coins needed to make up amount can be at most amount itself (using only 1-unit coins), initializing to amount + 1 is equivalent to initializing to positive infinity, which makes taking the minimum later convenient. Why not initialize directly to the maximum integer value Integer.MAX_VALUE? Because the subsequent calculation dp[i - coin] + 1 would then cause integer overflow.
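Python integers never overflow, so to illustrate the 32-bit wrap-around the note above warns about, here is a small sketch that emulates two's-complement int arithmetic (the helper `to_int32` is hypothetical, for illustration only):

```python
def to_int32(x):
    # emulate 32-bit two's-complement wrap-around (Java/C++ int)
    x &= 0xFFFFFFFF
    return x - 2**32 if x >= 2**31 else x

INT_MAX = 2**31 - 1  # Java's Integer.MAX_VALUE

# If dp were initialized to INT_MAX, computing dp[i - coin] + 1 would
# wrap around to a huge negative number and corrupt the minimum:
print(to_int32(INT_MAX + 1))  # -2147483648
```

A wrapped value like -2147483648 would always "win" the min comparison, which is exactly why amount + 1 is the safer sentinel.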

III. Final Summary
The first problem, on the Fibonacci sequence, explains how to optimize the recursion tree using either a "memoization" or a "dp table" approach, and clarifies that these two methods are fundamentally the same, differing only in whether they work top-down or bottom-up.
The second problem, on making change, demonstrates how to systematically determine the "state transition equation." Once the brute-force recursive solution is written from that equation, the remaining task is simply to optimize the recursion tree and eliminate overlapping subproblems.
If you are not very familiar with dynamic programming but have managed to follow up to this point, you deserve applause. You have likely grasped the design techniques of this algorithm.
Computers solve problems without any special tricks; their only method is brute-force, enumerating all possibilities. Algorithm design essentially involves first considering "how to brute-force" and then striving for "how to brute-force intelligently."
Listing the state transition equation addresses the question of "how to brute-force." The difficulty lies in the fact that many brute-force approaches require recursion, and some problems have inherently complex solution spaces, making complete enumeration challenging.
Memoization and DP tables aim to achieve "intelligent brute-force." The strategy of trading space for time is the key to reducing time complexity. Beyond this, what other tricks are there?
We will have a dedicated chapter on dynamic programming problems later. If you have any questions, feel free to revisit this article. We hope readers will focus on the concepts of "state" and "choice" when reading each problem and solution, to develop their own understanding and apply the framework proficiently.