Divide and Conquer Principles and Techniques
This article will resolve the following problem:
LeetCode | Difficulty |
---|---|
23. Merge k Sorted Lists | 🔴 |
Prerequisite Knowledge
Before reading this article, you should first learn:
This article mainly introduces the core principles and problem-solving techniques of the divide-and-conquer algorithm.
The terms "divide-and-conquer algorithm" and "divide-and-conquer strategy" do not mean quite the same thing, and we will explain the difference through simple examples.
Divide-and-Conquer Strategy
The broad concept of the divide-and-conquer strategy is a general idea, often referred to as the "approach of breaking down problems" in our tutorials.
The divide-and-conquer strategy involves breaking down a problem into several subproblems, then solving each of these subproblems individually, and finally combining the solutions of the subproblems to solve the original problem. This strategy is widely used in recursive algorithms.
For example, in the recursive solution of the Fibonacci sequence, the original problem `fib(n)` is broken down into two subproblems, `fib(n-1)` and `fib(n-2)`. By combining the solutions of these subproblems, we obtain the solution to the original problem. This is the approach of breaking down problems:
```java
int fib(int n) {
    // base case
    if (n == 0 || n == 1) {
        return n;
    }
    // decompose into two subproblems and combine their solutions
    return fib(n - 1) + fib(n - 2);
}
```
Another example is a basic binary tree algorithm, such as counting the total number of nodes in a binary tree:
```java
// Given a binary tree, count how many nodes the tree has.
int count(TreeNode root) {
    // base case
    if (root == null) {
        return 0;
    }
    // count the nodes of the left and right subtrees (subproblems)
    int leftCount = count(root.left);
    int rightCount = count(root.right);
    // combine: left subtree + right subtree + the current node
    return leftCount + rightCount + 1;
}
```
This solution also follows the divide-and-conquer approach: you break down the total number of nodes in the entire tree (the original problem) into the number of nodes in the left subtree and the right subtree (subproblems). You then recursively calculate the node counts for the left and right subtrees, and finally derive the total node count from these subproblem solutions.
For example, does Dynamic Programming fall under the divide-and-conquer paradigm?
Yes, because all dynamic programming algorithms decompose a large problem into smaller, structurally similar subproblems, combining the optimal solutions of these subproblems to solve the original problem. This process often includes special optimization techniques.
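As a concrete illustration (a sketch of my own, not code from the original article), one such optimization is memoization: adding a cache to the `fib` function above ensures each subproblem is solved only once, turning the exponential decomposition into a linear-time DP.

```java
import java.util.HashMap;
import java.util.Map;

public class FibMemo {
    // memo maps n -> fib(n), so each subproblem is computed only once
    private static final Map<Integer, Integer> memo = new HashMap<>();

    public static int fib(int n) {
        // base case
        if (n == 0 || n == 1) {
            return n;
        }
        // reuse a previously computed subproblem solution
        if (memo.containsKey(n)) {
            return memo.get(n);
        }
        // decompose, solve subproblems, combine, and cache the result
        int result = fib(n - 1) + fib(n - 2);
        memo.put(n, result);
        return result;
    }
}
```

The decomposition is identical to the plain recursive version; only the caching of subproblem solutions changes, which is exactly the "special optimization" that distinguishes DP.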
Many more examples can be cited. In fact, I have summarized in Binary Tree Mastery (Outline) that recursive algorithms follow two main approaches: traversal or problem decomposition (divide-and-conquer).
Traversal-based algorithms are typically represented by Backtracking Algorithms. Therefore, apart from backtracking, other recursive algorithms can be categorized under the problem decomposition approach (divide-and-conquer).
Given this, "divide-and-conquer" plays a significant role in recursive algorithms. When we talk about "divide-and-conquer algorithms," what do we specifically mean? Can we say that all the recursive algorithms mentioned above are "divide-and-conquer algorithms"? Actually, not quite.
Divide-and-Conquer Algorithms
Narrowly defined, divide-and-conquer algorithms are recursive algorithms that apply the divide-and-conquer principle, but they have one distinctive feature that the previously listed algorithms lack:
The time complexity after decomposing the problem is lower than solving it directly without decomposition.
Only algorithms with this characteristic are referred to as "divide-and-conquer algorithms".
The algorithms mentioned earlier inherently require decomposition for their solutions; there's no direct method to solve them otherwise. Hence, we say they use the divide-and-conquer principle but aren't classified as divide-and-conquer algorithms.
For instance, Bucket Sort divides an array into several buckets, sorts each bucket using insertion sort, and then merges the sorted buckets, reducing the average time complexity to O(n).
Directly using Insertion Sort on the whole array has a complexity of O(n^2). By decomposing the problem and applying insertion sort within each small bucket, the overall complexity drops to O(n) on average. This qualifies as a divide-and-conquer algorithm.
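To make this concrete, here is a minimal bucket sort sketch of my own (not code from the original article), assuming the input values lie in the range [0, 1): distribute the values into buckets, insertion-sort each small bucket, then concatenate.

```java
import java.util.ArrayList;
import java.util.List;

public class BucketSortSketch {
    // Sort values assumed to lie in [0, 1).
    public static double[] bucketSort(double[] nums) {
        int n = nums.length;
        // divide: distribute values into n buckets by magnitude
        List<List<Double>> buckets = new ArrayList<>();
        for (int i = 0; i < n; i++) {
            buckets.add(new ArrayList<>());
        }
        for (double x : nums) {
            buckets.get((int) (x * n)).add(x);
        }
        // conquer each bucket with insertion sort, then combine in order
        double[] res = new double[n];
        int idx = 0;
        for (List<Double> bucket : buckets) {
            insertionSort(bucket);
            for (double x : bucket) {
                res[idx++] = x;
            }
        }
        return res;
    }

    // insertion sort is cheap because each bucket is expected to be small
    private static void insertionSort(List<Double> a) {
        for (int i = 1; i < a.size(); i++) {
            double key = a.get(i);
            int j = i - 1;
            while (j >= 0 && a.get(j) > key) {
                a.set(j + 1, a.get(j));
                j--;
            }
            a.set(j + 1, key);
        }
    }
}
```

Each bucket holds only a few elements on average, so the O(k^2) cost of insertion sort per bucket stays small, and the total work averages O(n).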
So why does dividing and conquering reduce complexity? If all problems were solved using divide-and-conquer, would they always result in lower complexity?
Let’s delve deeper into when divide-and-conquer reduces complexity, when it doesn’t, and the underlying principles.
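Before that, one more classic example worth keeping in mind is merge sort, sketched below in my own words (not code from the original article): sorting each half recursively and merging the two sorted halves in linear time yields O(n log n), whereas sorting the whole array directly with a quadratic method like insertion sort costs O(n^2).

```java
import java.util.Arrays;

public class MergeSortSketch {
    // Divide: split the array in half; conquer: sort each half recursively;
    // combine: merge the two sorted halves in linear time.
    public static int[] mergeSort(int[] nums) {
        if (nums.length <= 1) {
            return nums;
        }
        int mid = nums.length / 2;
        int[] left = mergeSort(Arrays.copyOfRange(nums, 0, mid));
        int[] right = mergeSort(Arrays.copyOfRange(nums, mid, nums.length));
        return merge(left, right);
    }

    // merge two already-sorted arrays by repeatedly taking the smaller head
    private static int[] merge(int[] a, int[] b) {
        int[] res = new int[a.length + b.length];
        int i = 0, j = 0, k = 0;
        while (i < a.length && j < b.length) {
            res[k++] = (a[i] <= b[j]) ? a[i++] : b[j++];
        }
        while (i < a.length) res[k++] = a[i++];
        while (j < b.length) res[k++] = b[j++];
        return res;
    }
}
```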