Divide and Conquer Algorithms

Divide and conquer, also known as "divide and rule", is a fundamental algorithmic strategy based on recursion. It involves two main phases: divide and conquer.

  • Divide (partition phase): The original problem is recursively broken down into two or more sub-problems until the smallest sub-problem is reached.
  • Conquer (merge phase): Solutions to the smallest sub-problems are combined from bottom to top to form the solution to the original problem.

Merge sort exemplifies the divide and conquer approach (see the sketch after this list):

  • Divide: The array is recursively split into two sub-arrays until only single-element sub-arrays remain.
  • Conquer: These sub-arrays are merged in order to create a sorted array.
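
To make the two phases concrete, here is a minimal merge sort sketch in Python; the names merge_sort and merge are chosen for this illustration and are not taken from the text.

```python
def merge(left, right):
    """Conquer step: combine two already-sorted lists into one sorted list."""
    result = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    # One list is exhausted; append the remainder of the other.
    result.extend(left[i:])
    result.extend(right[j:])
    return result


def merge_sort(nums):
    """Divide step: split recursively until sub-arrays hold a single element."""
    if len(nums) <= 1:
        return nums
    mid = len(nums) // 2
    left = merge_sort(nums[:mid])
    right = merge_sort(nums[mid:])
    return merge(left, right)


print(merge_sort([5, 2, 9, 1, 7]))  # [1, 2, 5, 7, 9]
```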

Identifying Divide and Conquer Problems

To determine if a problem is amenable to the divide and conquer approach, consider the following criteria:

  1. The problem can be decomposed into smaller, similar sub-problems.
  2. Sub-problems are independent, with no overlap, allowing them to be solved separately.
  3. Solutions to sub-problems can be merged to solve the original problem.

Efficiency Improvements

Divide and conquer can enhance algorithm efficiency by reducing operation count and enabling parallel computation.

Optimization of Operation Count

Dividing a problem (e.g., sorting an array) and solving the smaller parts independently can often reduce the total amount of work. Consider a simplified model based on bubble sort:

  • Example (bubble sort): Sorting an array of length $n$ requires $O(n^2)$ time. If the array is divided into two sub-arrays:

    • Splitting takes $O(n)$.
    • Sorting each sub-array takes $O\left(\left(\frac{n}{2}\right)^2\right)$.
    • Merging the two sorted sub-arrays takes $O(n)$.

    Overall complexity:

    $$
    O\left(n + 2 \times \left(\frac{n}{2}\right)^2 + n\right) = O\left(\frac{n^2}{2} + 2n\right)
    $$
  • Inequality Analysis:

    $$
    \begin{aligned}
    n^2 & > \frac{n^2}{2} + 2n \\
    n^2 - \frac{n^2}{2} - 2n & > 0 \\
    n(n - 4) & > 0
    \end{aligned}
    $$

    Conclusion: When $n > 4$, partitioning reduces the operation count.

  • Merge Sort: Recursively divides arrays, achieving $O(n \log n)$ time complexity.

  • Bucket Sort: Divides the data into $k$ sub-arrays (buckets), reaching a theoretical complexity of $O(n + k)$.

As the inequality $n(n - 4) > 0$ shows, splitting reduces the number of operations once the array is large enough ($n > 4$ in this simplified model). The sketch below makes this concrete by counting comparisons.
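
The following sketch counts element comparisons for plain bubble sort versus a "split into two halves, bubble sort each, then merge" variant. The helper names (bubble_sort_count, split_sort_count) are invented for this illustration and are not part of any standard library.

```python
import random


def bubble_sort_count(nums):
    """Bubble sort a copy of nums and return (sorted_list, comparison_count)."""
    nums = list(nums)
    comparisons = 0
    for i in range(len(nums) - 1, 0, -1):
        for j in range(i):
            comparisons += 1
            if nums[j] > nums[j + 1]:
                nums[j], nums[j + 1] = nums[j + 1], nums[j]
    return nums, comparisons


def split_sort_count(nums):
    """Split into two halves, bubble sort each, then merge; count comparisons."""
    mid = len(nums) // 2
    left, c_left = bubble_sort_count(nums[:mid])
    right, c_right = bubble_sort_count(nums[mid:])
    merged, i, j, c_merge = [], 0, 0, 0
    while i < len(left) and j < len(right):
        c_merge += 1
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged, c_left + c_right + c_merge


data = [random.randint(0, 999) for _ in range(100)]
print(bubble_sort_count(data)[1])  # n(n-1)/2 = 4950 comparisons for n = 100
print(split_sort_count(data)[1])   # roughly 2 * (n/2)^2 / 2 + n, about 2550
```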

Parallel Computation

The independence of sub-problems in divide and conquer allows them to be solved in parallel, leveraging multiple cores or processors, which can significantly speed up processing.
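
As a minimal sketch of this idea (assuming the two halves fit in memory and that process start-up cost is acceptable), the halves of a list can be sorted in separate processes with Python's standard concurrent.futures module and then merged; parallel_sort is a name chosen for this example.

```python
from concurrent.futures import ProcessPoolExecutor
from heapq import merge
import random


def parallel_sort(nums):
    """Sort the two independent halves in separate processes, then merge them."""
    mid = len(nums) // 2
    with ProcessPoolExecutor(max_workers=2) as pool:
        # The halves share no data, so they can be solved concurrently.
        left, right = pool.map(sorted, [nums[:mid], nums[mid:]])
    return list(merge(left, right))


if __name__ == "__main__":
    data = [random.randint(0, 999) for _ in range(10_000)]
    assert parallel_sort(data) == sorted(data)
```

For small inputs the overhead of spawning processes outweighs the gain; the speed-up only appears once the sub-problems are large enough to keep each core busy.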

Common Applications

  1. Closest Pair of Points: Divides the points into two halves, finds the closest pair in each half, and then checks pairs that span the dividing line.
  2. Large Integer Multiplication: The Karatsuba algorithm breaks down large integer multiplication into smaller multiplications.
  3. Matrix Multiplication: The Strassen algorithm decomposes large matrix multiplication into smaller ones.
  4. Tower of Hanoi: Solved recursively using divide and conquer (see the sketch after this list).
  5. Counting Inversions: The number of inversion pairs in an array can be counted efficiently with a modified merge sort.
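
As one example from the list above, here is a recursive Tower of Hanoi sketch in Python; the function name hanoi and the peg labels A, B, C are chosen for this illustration.

```python
def hanoi(n, source, buffer, target):
    """Move n disks from source to target, using buffer as the spare peg."""
    if n == 1:
        print(f"move disk 1: {source} -> {target}")
        return
    # Recursively move the top n-1 disks out of the way onto the buffer peg.
    hanoi(n - 1, source, target, buffer)
    # Move the largest disk directly to the target.
    print(f"move disk {n}: {source} -> {target}")
    # Recursively move the n-1 disks from the buffer onto the target.
    hanoi(n - 1, buffer, source, target)


hanoi(3, "A", "B", "C")  # prints the 2^3 - 1 = 7 moves
```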

Widespread Use in Algorithms and Data Structures

  1. Binary Search: Divides an ordered array into two parts and repeatedly excludes the half that cannot contain the target (see the sketch after this list).
  2. Merge Sort: Recursively divides and merges sorted sub-arrays.
  3. Quicksort: Selects a pivot, partitions the remaining elements into those smaller and those larger than the pivot, and recursively partitions each part.
  4. Bucket Sort: Distributes data across buckets, sorting each separately.
  5. Trees: Binary search trees, AVL trees, etc., apply divide and conquer in search and modification operations.
  6. Heaps: A special kind of binary tree that uses divide and conquer in operations such as insertion and deletion.
  7. Hash Tables: Some collision resolution approaches use divide and conquer principles indirectly.
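
As a closing example, a minimal iterative binary search sketch in Python; binary_search is a name chosen here, not taken from the text.

```python
def binary_search(nums, target):
    """Return the index of target in the sorted list nums, or -1 if absent."""
    left, right = 0, len(nums) - 1
    while left <= right:
        mid = (left + right) // 2
        if nums[mid] == target:
            return mid
        elif nums[mid] < target:
            left = mid + 1   # Discard the left half; target must lie to the right.
        else:
            right = mid - 1  # Discard the right half.
    return -1


print(binary_search([1, 3, 5, 7, 9, 11], 7))  # 3
print(binary_search([1, 3, 5, 7, 9, 11], 4))  # -1
```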