Sorting

Sorting algorithms aren't usually tested on their own. They are often paired with another problem type, and the goal is to see whether you can reason about these algorithms in a wider context.

Bubble Sort: O(n^2)
  • Traverse from left to right, swapping an element with its neighbor if the neighbor is less than the element (see the sketch after this list)

  • Repeat till no more swaps are needed

  • Invariant: after every iteration, the largest elements are arranged at the end in ascending order
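
A minimal in-place sketch of the idea above, assuming a list of comparable elements:

def bubble_sort(arr):
    n = len(arr)
    swapped = True
    while swapped:
        swapped = False
        for i in range(n - 1):
            # swap neighbors that are out of order
            if arr[i] > arr[i + 1]:
                arr[i], arr[i + 1] = arr[i + 1], arr[i]
                swapped = True
    return arr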

Selection Sort: O(n^2)
  • Traverse from left to right, maintaining a β€œsorted” section in the front of the array

  • Every iteration, find the next smallest element in the β€œunsorted” section and move it to the end of the β€œsorted” section (see the sketch after this list)

  • Invariant: after iteration j, the front j elements are the j smallest elements, sorted in ascending order
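
A minimal in-place sketch of the selection step described above:

def selection_sort(arr):
    n = len(arr)
    for j in range(n):
        # find the smallest element in the unsorted section arr[j:]
        min_idx = j
        for i in range(j + 1, n):
            if arr[i] < arr[min_idx]:
                min_idx = i
        # move it to the end of the sorted section
        arr[j], arr[min_idx] = arr[min_idx], arr[j]
    return arr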

Insertion Sort: O(n^2)
  • Traverse from left to right

  • When the current element is less than the element before it, move backwards through the sorted prefix, past every element that is greater than it (see the sketch after this list)

  • Stop once a smaller (or equal) element is found, and place the current element just after it

  • Invariant: after iteration j, the front j elements are arranged in ascending order (there is no guarantee that they are the smallest j elements)
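
A minimal in-place sketch of this shifting process:

def insertion_sort(arr):
    for j in range(1, len(arr)):
        i = j
        # shift the current element backwards while the element before it is larger
        while i > 0 and arr[i - 1] > arr[i]:
            arr[i - 1], arr[i] = arr[i], arr[i - 1]
            i -= 1
    return arr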

  • Merge Sort: split the array into halves recursively and merge the sorted halves at the end

  • Stop breaking it up once only 1 element is left

  • Invariant: the leftmost set of elements will be sorted before the rightmost

# sorts the half-open range [left, right) and returns a new sorted list
def merge_sort(arr, left, right):
    if right - left <= 1:
        # base case: an empty or single-element range is already sorted
        return arr[left:right]
    mid = left + (right - left) // 2
    left_merged = merge_sort(arr, left, mid)    # [left, mid)
    right_merged = merge_sort(arr, mid, right)  # [mid, right)
    return merge(left_merged, right_merged)
        
def merge(a, b):
    c = []
    i, j = 0, 0
    while i < len(a) and j < len(b):
        if a[i] < b[j]:
            c.append(a[i])
            i += 1
        else:
            c.append(b[j])
            j += 1
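    # append the leftovers from whichever list was not exhausted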
    idx, remaining = (i, a) if i < len(a) else (j, b)
    for k in range(idx, len(remaining)):
        c.append(remaining[k])
        
    return c
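
For example (assuming the two functions above), calling merge_sort on a small list returns a new sorted list rather than sorting in place:

merge_sort([5, 2, 4, 1, 3], 0, 5)  # -> [1, 2, 3, 4, 5]
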
  • Quick Sort: pick a pivot, partition the array around the pivot value, and repeat for both halves until only 1 element is left (see the sketch after this list)

  • Same recurrence as merge sort
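
A minimal in-place sketch using a Lomuto-style partition with a random pivot (the partitioning schemes from the Quick Select page work equally well):

import random

# sorts the half-open range [lo, hi) of arr in place
def quick_sort(arr, lo, hi):
    if hi - lo <= 1:
        return
    pivot_idx = random.randrange(lo, hi)  # random pivot avoids the already-sorted worst case
    arr[pivot_idx], arr[hi - 1] = arr[hi - 1], arr[pivot_idx]
    pivot = arr[hi - 1]
    store = lo
    for i in range(lo, hi - 1):
        if arr[i] < pivot:
            arr[store], arr[i] = arr[i], arr[store]
            store += 1
    arr[store], arr[hi - 1] = arr[hi - 1], arr[store]  # pivot lands in its final position
    quick_sort(arr, lo, store)
    quick_sort(arr, store + 1, hi)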

  • Counting Sort: count the frequency of each element

  • Reconstruct the array from the frequency

  • Can appear as a way to reconstruct an array that has few distinct values but many duplicates

def counting_sort(arr):
    offset = min(arr)  # shift values so the smallest maps to index 0
    freq = [0] * (max(arr) - offset + 1)
    for num in arr:
        freq[num - offset] += 1
    result = []
    for i in range(len(freq)):
        # emit each value as many times as it appeared
        result.extend([i + offset] * freq[i])
    return result
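
For example:

counting_sort([4, 2, 2, 8, 3])  # -> [2, 2, 3, 4, 8]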
    
  • Bucket Sort: create buckets that correspond to some property of the elements (for example, each bucket covers a range of values); see the sketch after this list

  • Sort elements of each individual bucket

  • Join all buckets to form result
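
A minimal sketch, assuming the values are floats in [0, 1) and using one bucket per element (so k β‰ˆ n):

def bucket_sort(arr):
    n = len(arr)
    if n == 0:
        return []
    buckets = [[] for _ in range(n)]
    for x in arr:
        # map each value in [0, 1) to one of the n buckets
        buckets[int(x * n)].append(x)
    result = []
    for bucket in buckets:
        result.extend(sorted(bucket))  # sort each bucket individually, then join
    return result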

Time complexity summary:

  • Merge Sort: O(n log n), from the recurrence T(n) = 2T(n/2) + O(n)

  • Quick Sort: O(n log n) on average; the worst case can be O(n^2) if the array is already sorted and the pivot is chosen naively. If duplicates are allowed, use the Dutch national flag algorithm (sketched below); otherwise, refer to the partitioning algorithms used in Quick Select.

  • Counting Sort: O(n + k), where k is the range of values the frequency array must cover

  • Bucket Sort: O(n + n^2/k + k) with k buckets; usually best when k β‰ˆ n, so the overall runtime can be O(n)
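
A minimal sketch of the Dutch national flag (three-way) partition around a chosen pivot value; it separates elements into less-than, equal-to, and greater-than regions in a single pass:

def three_way_partition(arr, pivot):
    low, mid, high = 0, 0, len(arr) - 1
    while mid <= high:
        if arr[mid] < pivot:
            arr[low], arr[mid] = arr[mid], arr[low]
            low += 1
            mid += 1
        elif arr[mid] > pivot:
            arr[mid], arr[high] = arr[high], arr[mid]
            high -= 1
        else:
            mid += 1
    # arr[low:high + 1] now holds the elements equal to pivot
    return low, high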
