Time and Space Complexity

Algorithms are often measured by how efficiently they run. Two key metrics for this are time complexity and space complexity. Time complexity helps us understand how the running time of an algorithm increases with the size of the input, while space complexity measures how much extra memory an algorithm uses.

Basic Operation Count

Even though different computers may execute the same algorithm at different speeds, and real running times are hard to calculate exactly, we can estimate performance by counting the number of basic operations the algorithm performs. Basic operations include arithmetic operations (addition, subtraction, multiplication, division), variable access (for fundamental data types), and assignment operations. By estimating the count of these operations, we can gauge the algorithm's efficiency without getting bogged down by machine-specific details.
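
For example, here is a small sketch (the function below is purely illustrative) of how such a rough operation count is obtained: summing a list of n numbers performs about n additions, n assignments, and n element accesses, so the total number of basic operations is proportional to n.

def total(values):
    s = 0              # 1 assignment
    for x in values:   # n loop assignments and n element accesses
        s = s + x      # n additions and n assignments to s
    return s
# Overall: roughly 3n to 4n basic operations, i.e. proportional to n.

print(total(list(range(10))))  # 45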

Time Complexity

Definition

When we talk about how “fast” an algorithm is, we must consider how its running time increases with the size of the input. This input size can refer to the number of elements in a list, the number of vertices in a graph, the number of edges, etc. Generally, as the input size grows, the run time increases. In practice, we are less focused on the time required for a specific input size and more interested in the trend of the growth – this trend is what we call the time complexity of the algorithm.

Why Focus on the Trend?

There are several reasons to focus on the time trend rather than exact timings:

  1. Modern computers can execute on the order of 10^8 to 10^9 basic operations per second, so the input sizes we care about in practice are often very large. For example, suppose we have two algorithms:

    • Algorithm A runs in time proportional to 100n.
    • Algorithm B runs in time proportional to n^2.

    For very small inputs (say, when n < 100), Algorithm B is actually faster. However, if we allow a longer execution time, Algorithm A can handle much larger inputs than Algorithm B. This is why the growth trend with respect to n matters more than constant factors when input sizes are huge (a quick numeric comparison follows this list).

  2. Counting basic operations abstracts away the machine-specific differences in execution time. Although different operations (like addition versus division) take different amounts of time, when we analyze time complexity, we ignore these differences along with constant factors. This approach simplifies the analysis and gives a clear picture of the algorithm's scalability.
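
To make the crossover in the first point concrete, here is a throwaway comparison of the two hypothetical operation counts (100n versus n^2) for a few input sizes:

for n in [10, 100, 1000, 10**6]:
    # Compare the two operation counts from the example above.
    print(n, 100 * n, n * n)
# n = 10:    100n = 1000     n^2 = 100         -> B is faster
# n = 100:   100n = 10000    n^2 = 10000       -> break-even point
# n = 1000:  100n = 100000   n^2 = 1000000     -> A is faster
# n = 10^6:  100n = 10^8     n^2 = 10^12       -> A is faster by a factor of 10^4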

Time complexity is often analyzed in two ways:

  • Worst-case time complexity: The upper limit on the running time over all inputs of size n. This is the most common measure in competitive programming, where the algorithm must handle every possible input within a given range.
  • Average (expected) time complexity: The running time averaged over all inputs of size n, assuming the inputs are drawn at random (typically uniformly). Quicksort, sketched just below, shows how far the two measures can diverge.
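
Quicksort with a fixed first-element pivot is a standard illustration of this gap: on a uniformly random permutation its expected running time is Θ(n log n), while on an already-sorted input every partition is maximally unbalanced and the running time degrades to Θ(n^2). The sketch below is a simplified, non-in-place version, purely for illustration:

import random

def quicksort(a):
    # Always uses the first element as the pivot.
    if len(a) <= 1:
        return a
    pivot = a[0]
    smaller = [x for x in a[1:] if x < pivot]
    larger = [x for x in a[1:] if x >= pivot]
    return quicksort(smaller) + [pivot] + quicksort(larger)

# Average case: a random permutation, expected Theta(n log n) work.
print(quicksort(random.sample(range(1000), 1000))[:5])  # [0, 1, 2, 3, 4]

# Worst case: an already-sorted input, Theta(n^2) work because every
# split puts all remaining elements on one side (recursion depth ~ n).
print(quicksort(list(range(500)))[:5])  # [0, 1, 2, 3, 4]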

In essence, time complexity is a measure of how the running time grows with increasing input size, and we formalize this concept using asymptotic notations.

Asymptotic Notations

Asymptotic notations provide a formal way to express the growth rates of functions while ignoring lower-order terms and constant factors. The most common notations are as follows:

  • Uppercase symbols correspond to non-strict bounds (equality in growth rate is allowed), while lowercase symbols correspond to strict bounds (it is not). For instance, Θ is used when an expression is tightly bounded from above and below, O gives an upper bound, and Ω gives a lower bound. Both O and o come from the Greek letter omicron and are conventionally read as “big O” and “little o.”

Big Theta (Θ) Notation

We write

f(n) = \Theta(g(n))

if and only if there exist positive constants c_1, c_2 and n_0 such that for all n \ge n_0,

0 \le c_1 \cdot g(n) \le f(n) \le c_2 \cdot g(n).

This means f(n) is bounded both above and below by g(n) multiplied by suitable constants. For example,

3n^2 + 5n - 3 = \Theta(n^2).
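
To make this concrete, one possible (not unique) choice of constants for this example is c_1 = 2, c_2 = 4, and n_0 = 5: for all n \ge 5,

0 \le 2n^2 \le 3n^2 + 5n - 3 \le 4n^2,

since 3n^2 + 5n - 3 - 2n^2 = n^2 + 5n - 3 \ge 0 and 4n^2 - (3n^2 + 5n - 3) = n^2 - 5n + 3 \ge 0 whenever n \ge 5.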

Big O (O) Notation

If we are only interested in an upper bound, then we use:

f(n) = O(g(n))

if and only if there exist positive constants c and n_0 such that for all n \ge n_0,

0 \le f(n) \le c \cdot g(n).

In many analyses, especially when focusing on the worst-case scenario, we use O notation.

Big Omega (Ω) Notation

Conversely, to express a lower bound we write:

f(n) = \Omega(g(n))

if and only if there exist positive constants c and n_0 such that for all n \ge n_0,

0 \le c \cdot g(n) \le f(n).

Little o (o) Notation

When we need a strict upper bound (one where f grows strictly slower than g, so the bound is never tight), we use:

f(n) = o(g(n))

if and only if for every positive constant c, there exists an n_0 such that for all n \ge n_0,

0 \le f(n) < c \cdot g(n).

Little omega (ω) Notation

Similarly, to express a strict lower bound, we write:

f(n) = \omega(g(n))

if and only if for every positive constant c, there exists an n_0 such that for all n \ge n_0,

0 \le c \cdot g(n) < f(n).

Useful Properties

  • f(n) = \Theta(g(n)) if and only if f(n) = O(g(n)) and f(n) = \Omega(g(n)).
  • f_1(n) + f_2(n) = O(\max(f_1(n), f_2(n))).
  • f_1(n) \times f_2(n) = O(f_1(n) \times f_2(n)).
  • For any constant a > 0 with a \neq 1, \log_a n = O(\log_2 n).
    (Using the change-of-base formula, all logarithms grow at the same rate asymptotically, so we generally omit the base in complexity discussions.)
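
The last property follows directly from the change-of-base formula:

\log_a n = \frac{\log_2 n}{\log_2 a} = \frac{1}{\log_2 a} \cdot \log_2 n = O(\log_2 n),

because 1 / \log_2 a is just a constant once the base a is fixed.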

Simple Examples of Time Complexity Calculation

For Loops

Consider the following Python example that contains three nested loops:

n = int(input())
m = int(input())
for i in range(n):
    for j in range(n):
        for k in range(m):
            print("hello world")

If we treat the input values n and m as our problem size, the running time of this code is proportional to the number of loop iterations, which is \Theta(n^2 m).

Depth-First Search (DFS)

When performing a DFS on a graph with n vertices and m edges, each vertex and each edge is visited a constant number of times. As a result, the overall time complexity of DFS is \Theta(n + m).
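
As a minimal sketch (the adjacency-list representation and the small example graph below are assumptions made for illustration), an iterative DFS processes each vertex once and examines each adjacency entry a constant number of times, which is where the Θ(n + m) bound comes from:

def dfs(adj, start):
    # Iterative DFS over an adjacency list.
    # Each vertex is marked visited once; each adjacency entry is scanned once.
    visited = set()
    stack = [start]
    order = []
    while stack:
        v = stack.pop()
        if v in visited:
            continue
        visited.add(v)
        order.append(v)
        for w in adj[v]:
            if w not in visited:
                stack.append(w)
    return order

# Example graph with 4 vertices and 4 undirected edges (stored in both directions).
adj = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1]}
print(dfs(adj, 0))  # [0, 2, 1, 3]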

What Counts as a Constant?

When analyzing time complexity, it’s important to determine which variables are considered part of the input size. For instance, consider the following code snippet:

N = 100000
for i in range(N):
    print("hello world")

If the value of N is fixed and not part of the problem's variable input, then this loop runs in constant time, or O(1). In complexity analysis, any value that does not scale with the input is treated as a constant factor (effectively counted as 1).

It’s important to note that theoretical discussions assume an algorithm can process inputs of any size. Thus, precomputing answers for every possible input within a limited range doesn’t actually reduce an algorithm’s complexity to O(1) when the input size may be unbounded.

The Master Theorem

The Master Theorem is a handy tool for quickly determining the time complexity of many recursive algorithms. Consider a recurrence of the form

T(n) = a \, T\left(\frac{n}{b}\right) + f(n) \quad \text{for } n > b,

then:

T(n) = \begin{cases}
\Theta\left(n^{\log_b a}\right) & \text{if } f(n) = O\left(n^{\log_b a - \epsilon}\right) \text{ for some } \epsilon > 0, \\
\Theta\left(f(n)\right) & \text{if } f(n) = \Omega\left(n^{\log_b a + \epsilon}\right) \text{ for some } \epsilon > 0 \text{ and } a\, f\left(\frac{n}{b}\right) \le c\, f(n) \text{ for some } c < 1 \text{ and all sufficiently large } n, \\
\Theta\left(n^{\log_b a} \log^{k+1} n\right) & \text{if } f(n) = \Theta\left(n^{\log_b a} \log^k n\right) \text{ for some } k \ge 0.
\end{cases}

The intuition behind this theorem is that at each level of the recursion, a problem of size n is divided into a subproblems of size n/b, with an additional work of f(n) done at that level. Summing the work done across all levels of the recursion tree leads to the overall complexity.

Let’s look at a few examples:

  1. For the recurrence

    T(n) = 2T\left(\frac{n}{2}\right) + 1,

    we have a = 2, b = 2, and \log_2 2 = 1. Since the added work f(n) = 1 grows slower than n^{\log_b a} = n, the Master Theorem tells us that T(n) = \Theta(n).

  2. For the recurrence

    T(n) = T\left(\frac{n}{2}\right) + n,

    we have a = 1, b = 2, and \log_2 1 = 0. Here, the work f(n) = n dominates, so we conclude that T(n) = \Theta(n).

  3. For

    T(n) = T\left(\frac{n}{2}\right) + \log n,

    with a = 1 and b = 2 (so \log_2 1 = 0), the work is f(n) = \Theta(\log n). In this case, the Master Theorem yields T(n) = \Theta(\log^2 n).

  4. Finally, for

    T(n) = T\left(\frac{n}{2}\right) + 1,

    again with a = 1 and b = 2, we find that T(n) = \Theta(\log n). This is exactly the recurrence satisfied by binary search, sketched just after this list.
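
As a concrete illustration of the last recurrence, here is a minimal recursive binary-search sketch (the function and test values are illustrative): each call does constant work and then recurses on half of the remaining range.

def binary_search(a, target, lo=0, hi=None):
    # Each call does O(1) work and recurses on half of the range,
    # so T(n) = T(n/2) + 1, i.e. Theta(log n) time. The list must be sorted.
    if hi is None:
        hi = len(a) - 1
    if lo > hi:
        return -1  # not found
    mid = (lo + hi) // 2
    if a[mid] == target:
        return mid
    if a[mid] < target:
        return binary_search(a, target, mid + 1, hi)
    return binary_search(a, target, lo, mid - 1)

print(binary_search([1, 3, 5, 7, 9, 11], 7))  # 3
print(binary_search([1, 3, 5, 7, 9, 11], 4))  # -1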

Amortized Complexity

In some cases, while a single operation might be costly, most operations are cheap, leading to a low average or amortized cost. For a deeper introduction to this topic, see additional resources on amortized analysis.
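
A standard illustration (the class and the copy counter below are purely illustrative) is a dynamic array that doubles its capacity when it fills up: a single append occasionally copies every element already stored, yet any sequence of n appends performs fewer than 2n copies in total, so the amortized cost per append is O(1).

class DynamicArray:
    """A simplified dynamic array that doubles its capacity when full."""

    def __init__(self):
        self.capacity = 1
        self.size = 0
        self.data = [None] * self.capacity
        self.copies = 0  # counts element copies caused by resizing

    def append(self, value):
        if self.size == self.capacity:
            # Costly step: allocate a bigger buffer and copy everything over.
            self.capacity *= 2
            new_data = [None] * self.capacity
            for i in range(self.size):
                new_data[i] = self.data[i]
                self.copies += 1
            self.data = new_data
        self.data[self.size] = value
        self.size += 1

arr = DynamicArray()
for i in range(10**5):
    arr.append(i)
print(arr.copies)  # fewer than 2 * 10**5 copies for 10**5 appends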

Space Complexity

Just as we analyze running time, we can look at how an algorithm’s memory requirements grow with the input. Space complexity measures the additional memory needed relative to the size of the input, providing insight into the algorithm’s efficiency beyond just execution speed.
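
For example (a small illustrative sketch), reversing a list can be done either by building a second list, which needs Θ(n) extra memory, or in place, which needs only O(1) extra memory:

def reversed_copy(a):
    # Builds a brand-new list of the same length: Theta(n) extra space.
    return [a[i] for i in range(len(a) - 1, -1, -1)]

def reverse_in_place(a):
    # Swaps elements pairwise, using only a few index variables: O(1) extra space.
    i, j = 0, len(a) - 1
    while i < j:
        a[i], a[j] = a[j], a[i]
        i += 1
        j -= 1

values = [1, 2, 3, 4, 5]
print(reversed_copy(values))  # [5, 4, 3, 2, 1]
reverse_in_place(values)
print(values)                 # [5, 4, 3, 2, 1]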

Computational Complexity

Although this article focuses on analyzing algorithms from an efficiency perspective, there is a broader field called computational complexity that dives deeper into the intrinsic difficulty of computational problems. Exploring that subject can provide further insights into how and why certain problems are hard to solve.

This tutorial has introduced the core concepts behind analyzing algorithm efficiency, including time complexity, space complexity, and asymptotic notations. With these tools, you'll be better equipped to compare algorithms and understand the trade-offs underlying different approaches.