👋 Welcome to the World of Algorithms!

Hello future Decision Mathematicians! This chapter is your entry point into the exciting and highly relevant field of Algorithms. Don't worry if this sounds intimidating; an algorithm is just a fancy name for a structured plan or set of instructions.

In Decision Mathematics (D1), we use algorithms to solve complex real-world problems efficiently—like finding the fastest route, scheduling tasks, or organizing massive amounts of data. Mastering these concepts is crucial for all the network and optimization problems we will tackle later.


Section 1: Defining the Algorithm

What Exactly is an Algorithm?

Think of an algorithm as a recipe. If you want to bake a cake (the output), you need specific ingredients (the input) and a set of clear, ordered instructions (the algorithm) to transform those ingredients into the final product.

In mathematical and computational terms:

  • An algorithm is a finite, well-defined sequence of instructions designed to perform a specific task or solve a particular problem.

Characteristics of a Good Algorithm

For any set of instructions to be considered a proper algorithm, it must possess these four key properties:

  1. Finiteness: The algorithm must stop after a finite number of steps. It cannot go on forever.
  2. Clarity/Precision: Every step must be clear, unambiguous, and precisely defined. (No instruction saying, "Bake until it looks nice.")
  3. Input: The algorithm must take zero or more specified quantities as input data.
  4. Output: The algorithm must produce one or more specified results as output data.
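These four properties can be illustrated with a tiny example (shown here in Python for concreteness; the programming language itself is not part of D1, and the function name is our own):

```python
def find_largest(numbers):
    """Return the largest value in a non-empty list of numbers.

    Input:  a list of numbers (property 3).
    Output: a single number (property 4).
    Every step is precisely defined (property 2), and the loop runs
    exactly once per remaining item, so it must stop (property 1).
    """
    largest = numbers[0]
    for value in numbers[1:]:   # a finite number of steps
        if value > largest:
            largest = value
    return largest

print(find_largest([3, 9, 4, 7]))  # 9
```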

Did you know? The word 'algorithm' is derived from the name of the 9th-century Persian mathematician, Muḥammad ibn Mūsā al-Khwārizmī.

Quick Review: Algorithm Fundamentals

An algorithm is simply a step-by-step procedure that must be finite, clear, take input, and produce output.


Section 2: Measuring Efficiency

In Decision Mathematics, it is not enough that an algorithm works; it must also work fast. Efficiency refers to how quickly an algorithm solves a problem, usually measured by the number of operations or comparisons required relative to the size of the input.

Time Complexity and Order of Magnitude

We often measure efficiency using the Order of Magnitude (or Big O Notation, though you only need to understand the basic magnitude classifications). This describes how the time taken grows as the size of the data set, \(n\), increases.

Case 1: Linear Time Complexity — \(O(n)\)

If an algorithm has a linear relationship with the input size \(n\), we say it is \(O(n)\).

  • What this means: If you double the size of the input data (e.g., from 10 items to 20 items), the time taken approximately doubles.
  • Example: Searching a list of \(n\) names for a specific person by checking every single name once. The number of steps is proportional to \(n\).
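That name-by-name check can be sketched as a short Python routine (illustrative only; the function name and the step counter are our own additions):

```python
def linear_search(names, target):
    """Check each name once; the step count is proportional to n, i.e. O(n)."""
    steps = 0
    for position, name in enumerate(names):
        steps += 1                 # one comparison per item
        if name == target:
            return position, steps  # found: report where and how many steps
    return -1, steps                # not present: every item was checked

print(linear_search(["Amy", "Ben", "Cara", "Dev"], "Cara"))  # (2, 3)
```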

Case 2: Quadratic Time Complexity — \(O(n^2)\)

If an algorithm's running time is proportional to the square of the input size, we say it is \(O(n^2)\).

  • What this means: If you double the size of the input data (from 10 items to 20 items), the time taken increases by a factor of \(2^2 = 4\). This quickly becomes very slow for large data sets.
  • Example: If you have a list of \(n\) items and for every item, you have to compare it against every other item in the list.
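A sketch of such an every-item-against-every-other-item comparison, here used to spot duplicates (an illustrative Python example, not a D1 requirement; the comparison counter is our own addition):

```python
def count_pairwise_comparisons(items):
    """Compare every item with every other item: about n*n steps, i.e. O(n^2)."""
    comparisons = 0
    duplicates = set()
    for i in range(len(items)):
        for j in range(len(items)):          # nested loops: n * n iterations
            if i != j:
                comparisons += 1
                if items[i] == items[j]:
                    duplicates.add(items[i])
    return duplicates, comparisons

# Doubling n from 4 to 8 roughly quadruples the number of comparisons.
_, c4 = count_pairwise_comparisons([1, 2, 3, 2])
_, c8 = count_pairwise_comparisons([1, 2, 3, 2, 5, 6, 7, 8])
print(c4, c8)  # 12 56
```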

Analogy: Imagine trying to organize a library.
If you use an \(O(n)\) method, you look at each book once.
If you use an \(O(n^2)\) method, for every book, you go and check its position against *all the other books* repeatedly.

| Input Size (\(n\)) | Time for \(O(n)\) | Time for \(O(n^2)\) |
|---|---|---|
| 10 | 10 steps | 100 steps |
| 100 | 100 steps | 10,000 steps |
| 1,000 | 1,000 steps | 1,000,000 steps |

Key Takeaway: When dealing with large datasets, an algorithm with \(O(n)\) efficiency is dramatically better than one with \(O(n^2)\) efficiency.


Section 3: Sorting Algorithms

Sorting algorithms are used to arrange a list of items (numbers, names, etc.) into a specific order (ascending or descending). You must be able to execute and analyze the efficiency of the following two methods.

1. Bubble Sort

The Bubble Sort is one of the simplest sorting algorithms, but it is slow for large lists. It works by repeatedly stepping through the list, comparing adjacent elements and swapping them if they are in the wrong order. The process stops when a complete pass through the list requires zero swaps.

The Bubble Sort Process (Ascending Order)
  1. Start at the beginning of the list. This is Pass 1.
  2. Compare the first element with the second. If they are in the wrong order, swap them.
  3. Move to the next pair (second and third elements) and repeat the comparison and swap.
  4. Continue this process until the end of the list is reached. The largest unsorted item will "bubble up" to its correct position at the end of the list.
  5. Record the total number of swaps made in the pass.
  6. If the number of swaps is zero, the list is sorted, and the algorithm terminates.
  7. If swaps were made, start Pass 2, repeating steps 2-6, but do not re-check the items already in their final positions at the end of the list.

Example: Sorting the list [4, 1, 3, 2]

  • Start: [4, 1, 3, 2]
  • Pass 1:
    • (4, 1) -> Swap -> [1, 4, 3, 2]
    • (4, 3) -> Swap -> [1, 3, 4, 2]
    • (4, 2) -> Swap -> [1, 3, 2, 4] (4 is now in position)
    Total swaps: 3.
  • Pass 2:
    • (1, 3) -> No swap -> [1, 3, 2, 4]
    • (3, 2) -> Swap -> [1, 2, 3, 4] (3 is now in position)
    Total swaps: 1.
  • Pass 3:
    • (1, 2) -> No swap -> [1, 2, 3, 4]
    Total swaps: 0. STOP.
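The worked example above can be checked with a direct Python translation of the process (illustrative only; the variable names are our own):

```python
def bubble_sort(values):
    """Bubble Sort as described above: repeat passes until one makes zero swaps."""
    data = list(values)          # work on a copy of the input list
    unsorted_length = len(data)  # items beyond this point are already in place
    while True:
        swaps = 0
        for i in range(unsorted_length - 1):
            if data[i] > data[i + 1]:                # adjacent pair in wrong order
                data[i], data[i + 1] = data[i + 1], data[i]
                swaps += 1
        print(f"Pass complete: {data}, swaps = {swaps}")
        unsorted_length -= 1     # the largest unsorted item has bubbled to the end
        if swaps == 0:           # zero swaps means sorted: terminate
            return data

bubble_sort([4, 1, 3, 2])  # the passes match the worked example above
```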

Efficiency Note: Bubble Sort is an \(O(n^2)\) algorithm in the worst-case and average-case scenarios because it uses nested loops: an outer loop over the passes, and an inner loop for the comparisons within each pass.

2. Quick Sort

Quick Sort is a much more efficient method for sorting large data sets. It uses a "divide and conquer" strategy.

The Quick Sort Process
  1. Choose a Pivot: Select one element from the list, called the pivot. (In D1 exams, the pivot is usually chosen to be the middle item, or the median of three items).
  2. Partition: Rearrange the list such that all elements less than the pivot are placed to the left of the pivot, and all elements greater than the pivot are placed to the right.
  3. Sub-sort: Recursively apply the Quick Sort algorithm to the sub-list of elements to the left of the pivot and the sub-list to the right of the pivot, until every sub-list contains only one element.

Step-by-Step Partitioning using Pointers (The D1 Standard Method)

We typically use two pointers, one starting from the left (L) and one from the right (R).

Example: Sort [5, 1, 8, 3, 9, 2] (Pivot = 8, the middle element)

  1. Setup:
    List: [5, 1, 8, 3, 9, 2]
    L starts at 5. R starts at 2. Pivot = 8.
  2. Find L and R:
    • Move L rightwards until an element greater than or equal to the pivot is found. (L stops at 8).
    • Move R leftwards until an element less than or equal to the pivot is found. (R stops at 2).
  3. Compare L and R:
    • If the position of L is to the left of the position of R (\(L < R\)), swap the elements pointed to by L and R.
    • If \(L \ge R\), the pointers have crossed and the partitioning pass is complete.
    In our example, L (at 8, position 3) is to the left of R (at 2, position 6), so we swap the two elements.
    The list becomes: [5, 1, 2, 3, 9, 8].
  4. Repeat: Move L one place right and R one place left, then continue finding L and R and swapping until \(L \ge R\). Here L next stops at 9 (position 5) and R stops at 3 (position 4), so the pointers have crossed and the pass is complete. Every element before the crossing point is smaller than the pivot and every element from position 5 onwards is at least as large, giving the sub-lists [5, 1, 2, 3] and [9, 8] to sort in the same way.
    (Note: some D1 presentations simplify the partitioning phase and focus only on placing the pivot in its correct final position; follow the method your syllabus specifies.)

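The divide-and-conquer idea can also be sketched in Python. Note that this sketch partitions by building new lists rather than using pointer bookkeeping, but it uses the same middle-item pivot choice as the example (illustrative only, not the exam method):

```python
def quick_sort(values):
    """Divide and conquer: pick the middle item as pivot, partition, recurse."""
    if len(values) <= 1:                      # 0 or 1 items: already sorted
        return values
    pivot = values[(len(values) - 1) // 2]    # middle item (left-middle if even length)
    smaller = [v for v in values if v < pivot]
    equal = [v for v in values if v == pivot]
    larger = [v for v in values if v > pivot]
    # Recursively sort each side; items equal to the pivot sit in between.
    return quick_sort(smaller) + equal + quick_sort(larger)

print(quick_sort([5, 1, 8, 3, 9, 2]))  # [1, 2, 3, 5, 8, 9]
```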
Efficiency Note on Quick Sort

Quick Sort is typically an \(O(n \log n)\) algorithm in the average case, making it one of the fastest sorting algorithms. In the worst-case scenario (e.g., if the pivot is always the smallest or largest element), it degrades to \(O(n^2)\). This is why choosing a good pivot is important.

Common Mistake to Avoid

When performing Bubble Sort, students often forget to check the total number of swaps. If a pass results in one or more swaps, you MUST perform another pass. You only stop when a pass results in zero swaps.


Section 4: Searching Algorithms

Searching algorithms help us find a specific item (the target) within a list of items.

Binary Search

Binary Search is a highly efficient way to find an item in a list, but it has one critical prerequisite: The list MUST be sorted.

The Binary Search Process (The Halving Principle)

The algorithm works by repeatedly dividing the search interval in half.

  1. Find the Middle: Identify the start, end, and middle element of the list.
  2. Compare: Compare the target item with the middle element.
  3. Decision:
    • If the target equals the middle element, STOP (success!).
    • If the target is less than the middle element, discard the upper half of the list (everything from the middle element onwards). Set the new end to be just before the current middle element.
    • If the target is greater than the middle element, discard the lower half of the list. Set the new start to be just after the current middle element.
  4. Repeat: Go back to step 1 with the new, smaller list segment until the target is found or the search segment is empty (meaning the item is not present).

Example: Find 25 in the sorted list: [5, 10, 15, 20, 25, 30, 35]

  • Step 1: Start=5, End=35. Middle element is 20.
  • Step 2: 25 is greater than 20. Discard [5, 10, 15, 20].
  • Step 3: New list segment: [25, 30, 35]. New Middle is 30.
  • Step 4: 25 is less than 30. Discard [30, 35].
  • Step 5: New list segment: [25]. New Middle is 25.
  • Step 6: Target (25) equals Middle (25). Found!
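The halving process can be written as a short Python loop (illustrative; the zero-based index convention is our own):

```python
def binary_search(sorted_list, target):
    """Repeatedly halve the search interval: the list MUST already be sorted."""
    start, end = 0, len(sorted_list) - 1
    while start <= end:                      # search segment still non-empty
        middle = (start + end) // 2
        if sorted_list[middle] == target:
            return middle                    # success: position of the target
        elif target < sorted_list[middle]:
            end = middle - 1                 # discard the upper half
        else:
            start = middle + 1               # discard the lower half
    return -1                                # segment empty: target not present

print(binary_search([5, 10, 15, 20, 25, 30, 35], 25))  # 4
```

Tracing this call compares the target 25 against 20, then 30, then 25, matching the worked example above.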

Efficiency Note on Binary Search

Binary search is extremely efficient, categorized as \(O(\log n)\). Because the search space is halved in every step, it takes very few steps even for huge lists.

Analogy: If you are looking up a word in a 500-page dictionary, you don't start at page 1. You open it roughly to the middle (page 250). Based on the letter, you immediately discard half the dictionary, allowing you to find the word in very few attempts.

Key Takeaways for Algorithms

  • Definition: A finite, precise set of steps achieving a goal.
  • Efficiency: Focus on time complexity \(O(n)\) (linear, good) vs \(O(n^2)\) (quadratic, slow).
  • Bubble Sort: \(O(n^2)\), simple, compares adjacent pairs, stops when zero swaps occur in a pass.
  • Quick Sort: Generally \(O(n \log n)\), uses a pivot and partitioning (divide and conquer).
  • Binary Search: Must use a sorted list, \(O(\log n)\), rapidly eliminates half the remaining data in each step.