
QuickSort and Its Optimizations

This article provides an overview of quicksort—a popular and efficient sorting algorithm—and discusses several practical optimizations as well as an application for finding the kth largest element in an array.

Definition

QuickSort, also known as partition-exchange sort, is a widely used sorting algorithm. It sorts an array by partitioning it into subarrays, sorting the smaller parts recursively, and then combining the results. Although very fast on average, quicksort is considered an unstable sorting algorithm because it does not necessarily preserve the relative order of equal elements.

Basic Principles and Implementation

The Process

Quicksort uses the divide-and-conquer strategy to sort an array. The overall process can be summarized in three steps:

  1. Partitioning: Divide the array into two subarrays such that all elements in the left subarray are less than a chosen pivot and all elements in the right subarray are greater than or equal to the pivot. To preserve the $O(n \log n)$ average-case complexity, the pivot is typically chosen at random.
  2. Recursion: Recursively apply the quicksort algorithm to the two subarrays.
  3. Concatenation: No merging is necessary because, once partitioned and individually sorted, the subarrays form a fully sorted array when placed together.
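The three steps can be sketched with a minimal, non-in-place version (the pivot is taken from the middle here for brevity, rather than at random):

```python
def quick_sort_simple(arr):
    # 1. Partition around a pivot (middle element here, for brevity).
    # 2. Recurse on the strictly-smaller and strictly-larger parts.
    # 3. Concatenate; no merge step is needed.
    if len(arr) <= 1:
        return arr
    pivot = arr[len(arr) // 2]
    less = [x for x in arr if x < pivot]
    equal = [x for x in arr if x == pivot]
    greater = [x for x in arr if x > pivot]
    return quick_sort_simple(less) + equal + quick_sort_simple(greater)
```

This version allocates new lists at every level; the in-place implementation shown later avoids that overhead.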

Unlike merge sort, which simply splits the array into halves and does its real work while merging, quicksort splits the array around a pivot so that every element in the left part is less than every element in the right part; once the two parts are sorted, no merge step is needed.

During the partitioning process, two pointers are maintained:

  • A pointer for the left side (call it p).
  • A pointer for the right side (call it q).

The algorithm examines each element and swaps those that are on the wrong side with respect to the pivot. When, for example, the right pointer finds an element smaller than the pivot while the left pointer is at an element larger than or equal to the pivot, the two elements are exchanged, and the left pointer is advanced. This continues until the pointers meet.

The choice of implementation for partitioning and selecting the pivot is flexible, and several versions exist.

Below are some code examples illustrating the quicksort algorithm in Python.

Recursive Quicksort Implementation in Python

def quick_sort(arr, first, last):
    # Sort arr[first..last] (inclusive) in place.
    if first >= last:
        return
    mid_value = arr[first]  # Pivot; taking it out leaves a "hole" at index first.
    low = first
    high = last
    while low < high:
        # Scan from the right for an element smaller than the pivot.
        while low < high and arr[high] >= mid_value:
            high -= 1
        arr[low] = arr[high]  # Move it into the hole on the left.
        # Scan from the left for an element not smaller than the pivot.
        while low < high and arr[low] < mid_value:
            low += 1
        arr[high] = arr[low]  # Move it into the hole on the right.
    arr[low] = mid_value  # Place the pivot into its final position.
    quick_sort(arr, first, low - 1)
    quick_sort(arr, low + 1, last)

Note: A non-recursive (iterative) implementation exists by using an explicit stack to simulate the recursion, but the core idea remains the partitioning process described above.
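A minimal sketch of such an iterative version, using an explicit stack of index ranges and a Lomuto-style partition (pivot at the end of the range) instead of the hole partition above:

```python
def quick_sort_iterative(arr):
    # Simulate the recursion with an explicit stack of (first, last) ranges.
    stack = [(0, len(arr) - 1)]
    while stack:
        first, last = stack.pop()
        if first >= last:
            continue
        # Lomuto partition: pivot is the last element of the range.
        pivot = arr[last]
        i = first
        for j in range(first, last):
            if arr[j] < pivot:
                arr[i], arr[j] = arr[j], arr[i]
                i += 1
        arr[i], arr[last] = arr[last], arr[i]
        # Push the two subranges instead of recursing on them.
        stack.append((first, i - 1))
        stack.append((i + 1, last))
    return arr
```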

Properties

Stability

QuickSort is an unstable sorting algorithm because equal elements might not retain their original relative positions after sorting.

Time Complexity

  • Best and Average Case: $O(n \log n)$
    In the ideal case, each pivot divides the array into two nearly equal halves. The recurrence relation is:

    $$T(n) = 2T\left(\frac{n}{2}\right) + \Theta(n)$$

    By the master theorem, this gives $T(n) = \Theta(n \log n)$.

  • Worst Case: $O(n^2)$
    The worst case occurs when the pivot is always the smallest or largest element. In that case, the recurrence is:

    $$T(n) = T(n - 1) + \Theta(n)$$

    leading to quadratic time complexity.

Average Case Analysis (Sketch)

In the average case, the pivot is assumed to be chosen uniformly at random. Let $X$ represent the total number of comparisons during partitioning. One can show that

$$X = \sum_{i=1}^{n-1}\sum_{j=i+1}^{n} X_{i,j},$$

where $X_{i,j}$ is a binary indicator of whether the elements $a_i$ and $a_j$ are compared.

By linearity of expectation:

$$E[X] = \sum_{i=1}^{n-1}\sum_{j=i+1}^{n} P(a_i \text{ and } a_j \text{ are compared})$$

Two elements $a_i$ and $a_j$ (with $i < j$ in sorted order) are compared exactly when one of them is the first pivot chosen among $a_i, a_{i+1}, \ldots, a_j$. Since each of these $j - i + 1$ elements is equally likely to be chosen first, one finds that

$$P(a_i \text{ and } a_j \text{ are compared}) = \frac{2}{j-i+1}.$$

Summing over all pairs shows that:

$$E[X] = O(n \log n)$$
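Spelling out this last step: substituting $d = j - i + 1$ and bounding the inner sum by a harmonic series gives

```latex
\begin{aligned}
E[X] &= \sum_{i=1}^{n-1}\sum_{j=i+1}^{n} \frac{2}{j-i+1}
      = \sum_{i=1}^{n-1}\sum_{d=2}^{n-i+1} \frac{2}{d} \\
     &\leq n \sum_{d=2}^{n} \frac{2}{d}
      = O(n \log n),
\end{aligned}
```

since the harmonic sum $\sum_{d=2}^{n} \frac{1}{d}$ is $O(\log n)$.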

Thus, on average, quicksort runs in $O(n \log n)$ time. In practical scenarios the worst-case behavior is rare, and thanks to good memory locality quicksort often outperforms other $O(n \log n)$ algorithms such as heapsort.

Optimizations

While a straightforward implementation of quicksort works well in many cases, pathological input (for example, an already sorted array) can trigger the worst-case performance. Here are several common optimization techniques:

  • Median-of-Three Pivot Selection: Rather than directly selecting a pivot (such as the first or last element), choose the median value from the first, middle, and last elements. This helps avoid degradation in performance with already sorted or reverse sorted arrays.
  • Insertion Sort for Small Subarrays: For very small arrays, insertion sort may be faster. Thus, when the number of elements falls below a certain threshold, it can be beneficial to switch to insertion sort.
  • Grouping Equal Elements: During partitioning, gather all elements equal to the pivot together around the pivot's final position, and recurse only on the strictly smaller and strictly larger parts. This is especially effective when the array contains many duplicate values.
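A sketch combining the first two optimizations: median-of-three pivot selection plus an insertion-sort cutoff. The threshold of 16 is an assumed value (real libraries tune it empirically), and the partition is the same hole partition used in the basic implementation:

```python
THRESHOLD = 16  # assumed cutoff; real libraries tune this value empirically

def insertion_sort(arr, first, last):
    # Sort arr[first..last] (inclusive) by insertion.
    for i in range(first + 1, last + 1):
        key = arr[i]
        j = i - 1
        while j >= first and arr[j] > key:
            arr[j + 1] = arr[j]
            j -= 1
        arr[j + 1] = key

def median_of_three(arr, first, last):
    # Index of the median among the first, middle, and last elements.
    mid = (first + last) // 2
    trio = sorted([(arr[first], first), (arr[mid], mid), (arr[last], last)])
    return trio[1][1]

def optimized_quick_sort(arr, first, last):
    # Small ranges: switch to insertion sort.
    if last - first + 1 <= THRESHOLD:
        insertion_sort(arr, first, last)
        return
    # Move the median-of-three pivot to the front, then hole-partition.
    m = median_of_three(arr, first, last)
    arr[first], arr[m] = arr[m], arr[first]
    mid_value = arr[first]
    low, high = first, last
    while low < high:
        while low < high and arr[high] >= mid_value:
            high -= 1
        arr[low] = arr[high]
        while low < high and arr[low] < mid_value:
            low += 1
        arr[high] = arr[low]
    arr[low] = mid_value
    optimized_quick_sort(arr, first, low - 1)
    optimized_quick_sort(arr, low + 1, last)
```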

Three-Way Quicksort

Definition

Three-way quicksort integrates ideas from quicksort and radix sort. Its core idea is inspired by the solution to the Dutch National Flag problem.

The Process

Instead of partitioning the array into two parts, three-way quicksort divides the array into three segments after selecting a random pivot:

  • Elements less than the pivot.
  • Elements equal to the pivot.
  • Elements greater than the pivot.

This approach ensures that elements equal to the pivot are clustered together, which greatly improves efficiency when many duplicates exist.

Properties

When many duplicate values are present, three-way quicksort can outperform the classical two-way partitioning method. In the best case (for example, when all elements are equal), its time complexity is linear, i.e., $O(n)$.

Python Implementation

import random
 
def three_way_quick_sort(arr, l, r):
    if l >= r:
        return
    # Randomly select a pivot and swap it with the first element.
    random_index = random.randint(l, r)
    pivot = arr[random_index]
    arr[l], arr[random_index] = arr[random_index], arr[l]
    
    i = l + 1  # Pointer for the current element.
    j = l      # End of the "less than pivot" segment.
    k = r + 1  # Start of the "greater than pivot" segment.
    
    while i < k:
        if arr[i] < pivot:
            arr[i], arr[j + 1] = arr[j + 1], arr[i]
            j += 1
            i += 1
        elif arr[i] > pivot:
            arr[i], arr[k - 1] = arr[k - 1], arr[i]
            k -= 1
        else:
            i += 1
    arr[l], arr[j] = arr[j], arr[l]
    three_way_quick_sort(arr, l, j - 1)
    three_way_quick_sort(arr, k, r)

Introspective Sort

Definition

Introsort (or introspective sort) is a hybrid algorithm that combines quicksort with heapsort. Developed by David Musser in 1997, introsort begins with quicksort but monitors the recursion depth. If the depth exceeds $\lfloor \log_2 n \rfloor$, it switches to heapsort to guarantee an overall worst-case time complexity of $O(n \log n)$.
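A sketch of the idea (not a production implementation): depth-limited quicksort that falls back to a heapsort built on Python's heapq module. The helper names and the exact depth limit are illustrative:

```python
import heapq
import math
import random

def introsort(arr):
    # Depth limit derived from the array size, as described above.
    if len(arr) < 2:
        return
    _introsort(arr, 0, len(arr) - 1, int(math.log2(len(arr))))

def _heapsort(arr, first, last):
    # Heapsort arr[first..last] via a heap built from a copy of the range.
    heap = arr[first:last + 1]
    heapq.heapify(heap)
    for i in range(first, last + 1):
        arr[i] = heapq.heappop(heap)

def _introsort(arr, first, last, depth):
    if first >= last:
        return
    if depth == 0:
        # Recursion too deep: fall back to heapsort for this range.
        _heapsort(arr, first, last)
        return
    # Random-pivot Lomuto partition.
    p = random.randint(first, last)
    arr[p], arr[last] = arr[last], arr[p]
    pivot = arr[last]
    i = first
    for j in range(first, last):
        if arr[j] < pivot:
            arr[i], arr[j] = arr[j], arr[i]
            i += 1
    arr[i], arr[last] = arr[last], arr[i]
    _introsort(arr, first, i - 1, depth - 1)
    _introsort(arr, i + 1, last, depth - 1)
```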

Properties

By limiting the recursion depth and switching to heapsort when necessary, introsort achieves both the practical speed of quicksort and the worst-case performance assurance of heapsort. Many standard library implementations (such as in the SGI C++ STL) use introsort for the sort() function.

Finding the kth Smallest Element

Often, you may need to determine the kth smallest element (with zero-based indexing, the element at index k when the array is sorted in ascending order) without fully sorting the array. A naive approach would be to sort the entire array in $O(n \log n)$ time, but there is a more efficient method that uses ideas from quicksort.

The Quickselect Idea

Quickselect works similarly to quicksort:

  1. After partitioning the array around a pivot, count the elements strictly less than the pivot and those equal to it.
  2. If k is smaller than the count of elements less than the pivot, the answer lies in the left subarray; if k is at least the count of elements less than or equal to the pivot, search the right subarray (reducing k by the number of elements discarded on the left).
  3. Otherwise, the pivot itself is the kth smallest element.

When using random pivot selection, the expected running time of quickselect is $O(n)$.

Python Implementation

import random
 
def find_kth_element(arr, rk):
    n = len(arr)
    if n <= 1:
        return arr[0]
    # Randomly select a pivot.
    pivot = arr[random.randint(0, n - 1)]
    # Initialize pointers for three-way partitioning.
    i = 0
    j = 0
    k = n
    while i < k:
        if arr[i] < pivot:
            arr[i], arr[j] = arr[j], arr[i]
            i += 1
            j += 1
        elif arr[i] > pivot:
            k -= 1
            arr[i], arr[k] = arr[k], arr[i]
        else:
            i += 1
    # If kth element lies in the left section.
    if rk < j:
        return find_kth_element(arr[:j], rk)
    # If kth element lies in the right section.
    elif rk >= k:
        return find_kth_element(arr[k:], rk - k)
    # Otherwise, the pivot is the kth element.
    return pivot

Improvement: Median-of-Medians

The median-of-medians algorithm provides a deterministic strategy for pivot selection. Its steps are:

  1. Divide the array into groups of at most 5 elements.
  2. Find the median of each group (using an algorithm like insertion sort on the small groups).
  3. Recursively compute the median of these medians, and use it as the pivot.
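The pivot-selection steps above can be sketched as follows (a simplified version that returns only the pivot value; a full linear-time select would then partition around it):

```python
def median_of_medians(arr):
    # Deterministic pivot choice: lands near the middle of arr
    # (between roughly the 30th and 70th percentiles for large inputs).
    if len(arr) <= 5:
        return sorted(arr)[len(arr) // 2]
    # Steps 1-2: split into groups of at most 5; take each group's median.
    medians = [sorted(group)[len(group) // 2]
               for group in (arr[i:i + 5] for i in range(0, len(arr), 5))]
    # Step 3: recursively take the median of the medians as the pivot.
    return median_of_medians(medians)
```

Sorting each 5-element group is constant work per group, so the whole pass stays linear.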

Time Complexity Proof (Sketch)

Let $T(n)$ be the time to process an array of size $n$. Finding the median of each small group takes constant time per group, so the total cost over all groups is $O(n)$. The algorithm recurses once on the $n/5$ group medians and once, after partitioning, on at most $\frac{7n}{10}$ elements (the median of medians is guaranteed to be greater than roughly $3n/10$ elements and less than roughly $3n/10$ elements). This gives the recurrence:

$$T(n) \leq T\left(\frac{n}{5}\right) + T\left(\frac{7n}{10}\right) + O(n)$$

Assuming $T(n) \leq cn$ for some constant $c$ and substituting yields:

$$\begin{aligned} T(n) &\leq \frac{cn}{5} + \frac{7cn}{10} + O(n) \\ &\leq \frac{9cn}{10} + O(n) \\ &= O(n) \end{aligned}$$

Thus, even in the worst case, the median-of-medians approach guarantees linear time selection.


By understanding these techniques, beginners can appreciate the elegance and efficiency of quicksort and its various improvements. Whether you use quicksort for sorting or apply its partitioning strategy for selection tasks, these algorithms provide a powerful toolset for handling many practical problems in computer science.