Home / Coding / Linked Lists

Linked Lists

Linked list problems require manipulating node pointers directly — no random access, no indices — which makes the patterns distinct from array problems.

  • Four sub-patterns cover every interview ask: fast/slow pointers for cycles and midpoints, in-place reversal, gap-based two pointers for N-from-end, and two-list merge/partition
  • Nearly every operation runs in O(1) extra space by rewiring pointers instead of copying to arrays (recursive merge sort's O(log n) call stack is the one exception in this guide)
  • Reach for it when the problem involves cycles, reversals, merging sorted lists, reordering, or removing nodes at a relative position

Quick Revision - Must Do Problems

If short on time, solve only these. They cover ~90% of linked list patterns.

#  | Problem                            | Pattern           | Why It's Essential                                                           | Time to Solve
1  | Linked List Cycle [B75-141]        | Fast & Slow       | Foundation of the pattern — detect cycles in O(1) space                      | 5-10 min
2  | Merge Two Sorted Lists [B75-21]    | Two-List Merge    | Core merge operation — used as subroutine in many other problems             | 10 min
3  | Reverse Linked List [B75-206]      | In-Place Reversal | Fundamental operation — used as helper in 3+ problems in this guide          | 5-10 min
5  | Palindrome Linked List             | Multi-step        | Combines finding middle, reversing, and comparing — tests everything at once | 15-20 min
6  | Remove N-th Node From End [B75-19] | Gap Two Pointers  | Classic gap technique — one-pass deletion                                    | 10-15 min
12 | Reorder List [B75-143]             | Multi-step        | Hardest combination: middle + reverse second half + interleave merge         | 20-25 min
14 | LFU Cache with TTL                 | Multi-step        | Synthesizes every linked list pattern in one design problem                  | 40-50 min

Practice Tips

How to Identify a Fast & Slow Pointer Problem

  • The problem involves a linked list with no random access
  • You need to find a cycle, its start, or the middle of the list
  • You need to remove or locate a node k positions from the end
  • The problem asks you to restructure, reorder, or palindrome-check a list in O(1) space

Fast & Slow vs. Other Techniques

If you see...                 | Use...                     | Not...
Cycle detection in O(1) space | Fast & Slow                | HashSet
Middle of list                | Fast & Slow                | Count then walk
N-th from end                 | Two pointers with gap      | Stack
Palindrome check O(1) space   | Find middle + reverse      | Stack or array copy
Merge two sorted lists        | Dummy node + two pointers  | Recursion (O(n) space)

Interview Approach

  1. Draw 3-5 nodes on paper and trace the pointers — do this before coding
  2. Pick the right variant: while (fast != null && fast.next != null) for second-middle; while (fast.next != null && fast.next.next != null) for first-middle
  3. When the list is modified (reverse, merge, partition), always use a dummy node so the head never needs special handling
  4. Before returning, verify the tail is null-terminated — forgetting this creates cycles
  5. State the three steps out loud for multi-step problems (find middle, reverse, merge) before writing any code

4 Main Patterns

Pattern 1: Fast & Slow (Cycle / Middle)

// slow moves 1 step, fast moves 2 — they meet inside a cycle or fast exits at the end
ListNode slow = head, fast = head;
while (fast != null && fast.next != null) {
    slow = slow.next;
    fast = fast.next.next;
    if (slow == fast) {
        /* cycle detected */
    }
}
// if no cycle: slow is at the second middle (or only middle for odd length)

Pattern 2: In-Place List Reversal

// three-pointer walk — O(1) space, rewires links as it goes
ListNode prev = null, curr = head;
while (curr != null) {
    ListNode next = curr.next;  // save
    curr.next = prev;           // reverse
    prev = curr;                // advance
    curr = next;
}
return prev; // new head

Pattern 3: Gap-Based Two Pointers (N from End)

// advance first n+1 steps; then move together — second stops before the target
ListNode dummy = new ListNode(0);
dummy.next = head;

ListNode first = dummy, second = dummy;

for (int i = 0; i <= n; i++) {
    first = first.next;  // gap of n+1
}

while (first != null) { 
    first = first.next; 
    second = second.next; 
}
second.next = second.next.next; // delete target

Pattern 4: Two-List Merge / Partition

// dummy node per list avoids head-change edge cases
ListNode dummyA = new ListNode(0), dummyB = new ListNode(0);
ListNode a = dummyA, b = dummyB;

while (head != null) {
    if (condition(head)) { 
        a.next = head; 
        a = a.next;
    } else { 
        b.next = head; 
        b = b.next; 
    }
    head = head.next;
}
b.next = null;       // must null-terminate to avoid cycle
a.next = dummyB.next;
return dummyA.next;

General Templates

ListNode Definition

public class ListNode {
    int val;
    ListNode next;
    ListNode(int val) { this.val = val; }
}

Dummy Node Template

ListNode dummy = new ListNode(0);
dummy.next = head;
ListNode prev = dummy;
// ... modify list using prev ...
return dummy.next; // head may have changed

Problem 1: Linked List Cycle ⭐ MUST DO [B75-141]

Difficulty: Easy | Time to Solve: 5-10 min | Pattern: Fast & Slow

Given the head of a linked list, return true if there is a cycle.

Example:

Input:  3 → 2 → 0 → -4 → (back to 2)
Output: true

Input:  1 → 2 → null
Output: false

Solution:

public boolean hasCycle(ListNode head) {
    ListNode slow = head, fast = head;
    while (fast != null && fast.next != null) {
        slow = slow.next;
        fast = fast.next.next;
        if (slow == fast) {
            return true;
        }
    }
    return false;
}

Complexity:

Time O(n) — fast catches slow within one cycle loop
Space O(1) — two pointers only

Real World: Detecting circular dependencies in a module import system — if module A imports B which imports C which imports A, following next pointers loops forever. Also used in OS deadlock detection where processes form a wait-for cycle.

Variations:

  1. Find cycle length — after slow and fast meet inside the cycle, keep one pointer fixed and advance the other until they meet again. The number of steps is the cycle length.
  2. Find cycle start — Problem 4 below; requires an extra phase after detection using the F = C - k mathematical insight.
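
The cycle-length variation is a small extension of the detection loop: once slow and fast meet, one full lap back to the meeting node counts the cycle. A minimal sketch (the class name `CycleLength` is ours, and `ListNode` is repeated from the templates above so the snippet compiles standalone):

```java
// ListNode matching the guide's definition (repeated so this sketch compiles standalone).
class ListNode { int val; ListNode next; ListNode(int v) { val = v; } }

class CycleLength {
    // Returns the cycle length, or 0 if the list has no cycle.
    static int cycleLength(ListNode head) {
        ListNode slow = head, fast = head;
        while (fast != null && fast.next != null) {
            slow = slow.next;
            fast = fast.next.next;
            if (slow == fast) {            // meeting point is inside the cycle
                int len = 0;
                ListNode runner = slow;
                do {                       // walk exactly one lap
                    runner = runner.next;
                    len++;
                } while (runner != slow);
                return len;
            }
        }
        return 0; // fast fell off the end: no cycle
    }
}
```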

Interview Disguises:

  • "A build system stores task dependencies as a linked list of next-tasks. Detect if any task eventually depends on itself, which would cause an infinite build loop." — cycle detection; tasks are nodes, dependency links are next pointers.
  • "A network router follows next-hop pointers to route packets. Detect if any route loops back to a previously visited router and would cause a packet to loop forever." — linked list cycle; routers are nodes, next-hop is the next pointer.

Problem 2: Merge Two Sorted Lists ⭐ MUST DO [B75-21]

Difficulty: Easy | Time to Solve: 10 min | Pattern: Two-List Merge

Merge two sorted linked lists and return the merged list.

Example:

Input:  l1 = [1, 2, 4],  l2 = [1, 3, 4]
Output: [1, 1, 2, 3, 4, 4]

Solution:

public ListNode mergeTwoLists(ListNode l1, ListNode l2) {
    ListNode dummy = new ListNode(0), curr = dummy;
    while (l1 != null && l2 != null) {
        if (l1.val <= l2.val) {
            curr.next = l1; l1 = l1.next;
        }
        else {
            curr.next = l2; l2 = l2.next;
        }
        curr = curr.next;
    }
    curr.next = (l1 != null) ? l1 : l2;
    return dummy.next;
}

Complexity:

Time O(n + m) — one pass through both lists
Space O(1) — dummy node only; no new nodes created

Real World: Merging two sorted result sets from different database shards without loading both fully into memory — a common operation in distributed query engines. Also the core subroutine in merge sort (Problem 13 uses it directly).

Variations:

  1. Merge K sorted lists — use a min-heap of size K to always pick the smallest head; call this merge as a subroutine or use divide-and-conquer pairing. Covered in the heap guide.
  2. Merge two sorted lists recursively — elegant one-liner but costs O(n + m) call stack space vs O(1) here; worth knowing as a contrast to show why iterative is preferred.
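
For contrast, Variation 2 can be sketched as follows — concise, but every call adds a stack frame, so space is O(n + m) versus O(1) for the iterative version above (`ListNode` is repeated from the templates so the snippet stands alone):

```java
// ListNode matching the guide's definition (repeated so this sketch compiles standalone).
class ListNode { int val; ListNode next; ListNode(int v) { val = v; } }

class RecursiveMerge {
    // Recursive merge: picks the smaller head, recurses on the rest.
    static ListNode mergeTwoLists(ListNode l1, ListNode l2) {
        if (l1 == null) return l2;
        if (l2 == null) return l1;
        if (l1.val <= l2.val) {
            l1.next = mergeTwoLists(l1.next, l2);
            return l1;
        } else {
            l2.next = mergeTwoLists(l1, l2.next);
            return l2;
        }
    }
}
```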

Interview Disguises:

  • "Two database shards each return query results as sorted linked lists. Merge them into a single sorted stream for the client without buffering both lists in memory." — merge two sorted lists; rows are nodes, sorted by key.
  • "Two servers write sorted event logs independently. Merge both logs into a single chronological stream." — same dummy-node merge; events are nodes, timestamps are the comparison key.

Problem 3: Reverse Linked List ⭐ MUST DO [B75-206]

Difficulty: Easy | Time to Solve: 5-10 min | Pattern: In-Place Reversal

Reverse a singly linked list.

Example:

Input:  [1, 2, 3, 4, 5]
Output: [5, 4, 3, 2, 1]

Solution:

public ListNode reverseList(ListNode head) {
    ListNode prev = null, curr = head;
    while (curr != null) {
        ListNode next = curr.next;
        curr.next = prev;
        prev = curr;
        curr = next;
    }
    return prev;
}

Complexity:

Time O(n) — single pass
Space O(1) — iterative; recursive costs O(n) call stack

Real World: Reversing an undo history so the most recent operation is processed first. Also used in network protocol implementations that receive a packet stream in reverse order and must replay it in the original order.

Variations:

  1. Reverse in groups of k (LC 25) — apply the three-pointer reversal repeatedly every k nodes, connecting the tail of each reversed group to the head of the next. Significantly harder but same core operation.
  2. Reverse a subrange — Problem 7 below; uses the front-insertion trick to reverse only positions left to right without splitting the list.

Interview Disguises:

  • "A browser stores visited pages as a linked list with the oldest page at the head. Reverse it so the most recently visited page is first for display in the history panel." — reverse linked list; pages are nodes.
  • "An undo system stores operations in arrival order. Reverse the list so the most recent operation is at the head and gets processed first when the user hits undo." — same three-pointer reversal; operations are nodes.

Problem 4: Linked List Cycle II

Difficulty: Medium | Time to Solve: 15-20 min | Pattern: Fast & Slow

Return the node where the cycle begins. Return null if no cycle.

Example:

Input:  3 → 2 → 0 → -4 → (back to 2)
Output: node with value 2

Key Insight:

  • Let F = distance from head to cycle start, C = cycle length, k = distance from cycle start to meeting point.
  • When slow and fast meet: slow has traveled F + k and fast has traveled F + k + nC for some whole number of laps n ≥ 1. Since fast moves twice as fast: 2(F + k) = F + k + nC, so F = nC - k.
  • Walking F steps from the meeting point means C - k steps forward to the cycle start plus n - 1 full laps, so it lands exactly on the cycle start. Resetting one pointer to head and advancing both at speed 1 therefore makes them meet at the cycle start.

Solution:

public ListNode detectCycle(ListNode head) {
    ListNode slow = head, fast = head;

    while (fast != null && fast.next != null) {
        slow = slow.next;
        fast = fast.next.next;
        if (slow == fast) {
            ListNode finder = head;
            while (finder != slow) {
                finder = finder.next;
                slow = slow.next;
            }
            return finder; // cycle start
        }
    }
    return null;
}

Complexity:

Time O(n) — cycle detection + at most n more steps to find start
Space O(1)

Real World: Finding the exact module where a circular import begins — not just detecting the cycle but pinpointing the entry point so the developer knows which import to remove. Also used in garbage collectors to identify the root of a reference cycle.

Variations:

  1. Find cycle length — after meeting, keep one pointer at the meeting point and advance the other until they meet again; count the steps. Combine with this problem to get both start and length.
  2. Remove the cycle — once the cycle start is found, walk to the node whose next points back to the cycle start and set next = null. One extra pass after this algorithm.
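
Variation 2 can be sketched by extending detectCycle with one extra walk: find the cycle start as above, then circle around to the node whose next points back to it and cut. The class name `RemoveCycle` is ours; `ListNode` is repeated from the templates so the snippet stands alone:

```java
// ListNode matching the guide's definition (repeated so this sketch compiles standalone).
class ListNode { int val; ListNode next; ListNode(int v) { val = v; } }

class RemoveCycle {
    // Detects a cycle, locates its start (the F = nC - k phase), and breaks it.
    static void removeCycle(ListNode head) {
        ListNode slow = head, fast = head;
        while (fast != null && fast.next != null) {
            slow = slow.next;
            fast = fast.next.next;
            if (slow == fast) {
                ListNode finder = head;
                while (finder != slow) {       // phase 2: find cycle start
                    finder = finder.next;
                    slow = slow.next;
                }
                ListNode tail = finder;        // circle to the node before the start
                while (tail.next != finder) {
                    tail = tail.next;
                }
                tail.next = null;              // break the cycle
                return;
            }
        }
    }
}
```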

Interview Disguises:

  • "A dependency resolver finds a circular import chain. Return the exact module where the cycle begins so the developer can break it." — cycle start detection; the F = C - k insight is the key.
  • "A garbage collector traces object references and finds a reference cycle. Return the first object in the cycle — the one that anchors it — so it can be freed." — linked list cycle II; objects are nodes, references are next pointers.

Problem 5: Palindrome Linked List ⭐ MUST DO

Difficulty: Medium | Time to Solve: 15-20 min | Pattern: Multi-step

Check if a linked list is a palindrome in O(n) time and O(1) space.

Example:

Input:  [1, 2, 2, 1]  →  true
Input:  [1, 2, 3, 2, 1]  →  true
Input:  [1, 2]  →  false

Key Insight:

  • Three steps: find the middle (fast/slow), reverse the second half in place, then compare the two halves node by node.
  • For odd-length lists slow stops on the middle node, so the middle ends up at the tail of the reversed half and is compared against itself. That comparison trivially matches, so odd and even lengths need no special casing.

Solution:

public boolean isPalindrome(ListNode head) {
    // Step 1: find middle (second middle for even length)
    ListNode slow = head, fast = head;
    while (fast != null && fast.next != null) {
        slow = slow.next;
        fast = fast.next.next;
    }

    // Step 2: reverse second half
    ListNode secondHalf = reverse(slow);

    // Step 3: compare
    ListNode left = head, right = secondHalf;
    while (right != null) {
        if (left.val != right.val) {
            return false;
        }
        left = left.next;
        right = right.next;
    }
    return true;
}

private ListNode reverse(ListNode head) {
    ListNode prev = null, curr = head;
    while (curr != null) {
        ListNode next = curr.next;
        curr.next = prev;
        prev = curr;
        curr = next;
    }
    return prev;
}

Complexity:

Time O(n) — three O(n) passes
Space O(1) — all pointer manipulation

Real World: Validating symmetric data sequences in memory-constrained systems — e.g., checking if a DNA base sequence reads the same in both directions for restriction enzyme recognition, without copying the sequence to an array.

Variations:

  1. Return the node where palindrome breaks — instead of returning true/false, return the first mismatching node. Same three steps; change the compare loop to return the node instead of false.
  2. Longest palindromic sublist — find the longest contiguous sublist that is a palindrome. Much harder; requires a different expand-from-center approach adapted for linked lists.

Interview Disguises:

  • "A DNA sequence is stored as a linked list of base characters. Check if it is palindromic (reads the same forwards and backwards) in O(1) space without converting to an array." — palindrome linked list; DNA bases are node values.
  • "A streaming validator checks if a received packet sequence is symmetric as a data integrity check, using no extra buffer memory." — find middle, reverse second half, compare = palindrome check on a stream.

Problem 6: Remove N-th Node From End ⭐ MUST DO [B75-19]

Difficulty: Medium | Time to Solve: 10-15 min | Pattern: Gap-Based Two Pointers

Remove the n-th node from the end of the list in one pass.

Example:

Input:  [1, 2, 3, 4, 5],  n = 2
Output: [1, 2, 3, 5]

Input:  [1],  n = 1
Output: []

Key Insight:

  • Advance first by n+1 steps (not n) so second lands on the node before the target, enabling deletion via second.next = second.next.next.
  • A dummy node absorbs the edge case where the head itself is removed (n = list length).

Solution:

public ListNode removeNthFromEnd(ListNode head, int n) {
    ListNode dummy = new ListNode(0);
    dummy.next = head;
    ListNode first = dummy, second = dummy;

    for (int i = 0; i <= n; i++) {
        first = first.next; // gap of n+1
    }

    while (first != null) {
        first = first.next;
        second = second.next;
    }

    second.next = second.next.next;
    return dummy.next;
}

Complexity:

Time O(n) — single pass
Space O(1)

Real World: Evicting the oldest entry from a fixed-size sliding window cache implemented as a linked list — the N-th node from the end is the boundary of the window. Also used in log rotation to remove the entry that falls outside the retention window.

Variations:

  1. Return the N-th node from end without deleting — same gap technique; just return second.next instead of unlinking it. Useful as a building block.
  2. Remove all nodes at positions divisible by k from the end — generalize: run the gap technique with gap = k, removing each target; repeat or adapt the loop condition.

Interview Disguises:

  • "A fixed-size activity log keeps the last N user actions as a linked list. Remove the entry that just fell outside the retention window — the N-th from the end — in a single pass." — remove Nth from end; log entries are nodes.
  • "A network buffer enforces a maximum depth. Drop the packet that is exactly N positions from the tail of the queue in one traversal." — gap-based two pointers with dummy node; packets are nodes.

Problem 7: Reverse Linked List II

Difficulty: Medium | Time to Solve: 15-20 min | Pattern: In-Place Reversal

Reverse the nodes of the list from position left to right in one pass.

Example:

Input:  [1, 2, 3, 4, 5],  left = 2,  right = 4
Output: [1, 4, 3, 2, 5]

Key Insight:

  • Front-insertion trick: keep prev pointing to the node before the reversal zone, and tail pointing to what started as the left node (it becomes the tail of the reversed section).
  • Each iteration, detach the node just after tail, reattach it right after prev. Repeat right - left times.
  • This avoids splitting and rejoining the list.

Solution:

public ListNode reverseBetween(ListNode head, int left, int right) {
    ListNode dummy = new ListNode(0);
    dummy.next = head;
    ListNode prev = dummy;

    for (int i = 1; i < left; i++) {
        prev = prev.next;  // stop before zone
    }

    ListNode tail = prev.next;  // left node; becomes tail after reversal
    for (int i = 0; i < right - left; i++) {
        ListNode next = tail.next;
        tail.next = next.next;
        next.next = prev.next;
        prev.next = next;
    }

    return dummy.next;
}

Complexity:

Time O(n) — walk to left + reverse the range
Space O(1)

Real World: Undoing a batch of operations between steps L and R in a transaction log without touching earlier or later history. Also used in animation systems to play a segment of a keyframe sequence backwards.

Variations:

  1. Reverse in groups of k (LC 25) — apply the front-insertion trick every k nodes; connect each reversed group's tail to the next group's head. Same front-insertion logic, iterated.
  2. Reverse alternate groups — reverse k nodes, skip k nodes, repeat. Combine the front-insertion trick with a skip counter.
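
Variation 1 (LC 25) can be sketched by running the same front-insertion inner loop once per full group of k nodes; a group shorter than k is left as-is, which is LC 25's convention. The class name `ReverseKGroup` is ours; `ListNode` is repeated from the templates so the snippet stands alone:

```java
// ListNode matching the guide's definition (repeated so this sketch compiles standalone).
class ListNode { int val; ListNode next; ListNode(int v) { val = v; } }

class ReverseKGroup {
    static ListNode reverseKGroup(ListNode head, int k) {
        ListNode dummy = new ListNode(0);
        dummy.next = head;
        ListNode prev = dummy;

        while (true) {
            // Verify a full group of k nodes remains; otherwise we're done.
            ListNode check = prev;
            for (int i = 0; i < k; i++) {
                check = check.next;
                if (check == null) return dummy.next;
            }
            // Front-insertion: same inner loop as reverseBetween, run k-1 times.
            ListNode tail = prev.next;         // becomes the tail of this group
            for (int i = 0; i < k - 1; i++) {
                ListNode next = tail.next;
                tail.next = next.next;
                next.next = prev.next;
                prev.next = next;
            }
            prev = tail;   // group's tail is the prev for the next group
        }
    }
}
```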

Interview Disguises:

  • "A transaction log stores operations as a linked list. Reverse only the operations between positions L and R to undo that batch without affecting the rest of the log." — reverse linked list II; the range [left, right] is the batch.
  • "An animation timeline stores keyframes as nodes. Reverse the segment between frame L and frame R to play that section backwards while keeping the rest of the timeline intact." — subrange reversal with front-insertion; keyframes are nodes.

Problem 8: Intersection of Two Linked Lists

Difficulty: Medium | Time to Solve: 10-15 min | Pattern: Gap-Based Two Pointers

Return the node where two linked lists intersect, or null if they don't.

Example:

listA: 4 → 1 → 8 → 4 → 5
listB:      5 → 6 → 1 → 8 → 4 → 5
                         ↑ intersection at node 8
Output: node with value 8

Key Insight:

  • If the two lists have lengths A and B, a pointer starting at headA and switching to headB when it reaches null traverses A + B total nodes. Same for the other direction. Both pointers arrive at the intersection (or null) after A + B steps.
  • No length calculation needed — the switch automatically equalizes the paths.

Solution:

public ListNode getIntersectionNode(ListNode headA, ListNode headB) {
    ListNode a = headA, b = headB;
    while (a != b) {
        a = (a == null) ? headB : a.next;
        b = (b == null) ? headA : b.next;
    }
    return a; // intersection node or null
}

Complexity:

Time O(n + m) — each pointer traverses at most both lists
Space O(1)

Real World: Finding the first shared dependency where two service dependency chains converge — e.g., both service A's chain and service B's chain eventually depend on the same shared library node. Also used in version control to find the merge base of two branches.

Variations:

  1. Detect intersection without the A+B equalization trick — compare tail nodes: if both lists end at the same node, they intersect; then use length difference to find the exact node. Same O(n+m) time, more explicit.
  2. Find all intersection nodes — if the lists share a suffix of k nodes, return all k nodes. Walk from the intersection node to the end of both lists together.
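
Variation 1, the explicit length-difference approach, can be sketched as follows. It is more code than the A+B trick but easier to justify in an interview under pressure (`ListNode` is repeated from the templates so the snippet stands alone):

```java
// ListNode matching the guide's definition (repeated so this sketch compiles standalone).
class ListNode { int val; ListNode next; ListNode(int v) { val = v; } }

class IntersectionByLength {
    static int length(ListNode head) {
        int n = 0;
        for (ListNode p = head; p != null; p = p.next) n++;
        return n;
    }

    // Advance the longer list by the length difference, then walk together.
    static ListNode getIntersection(ListNode headA, ListNode headB) {
        int lenA = length(headA), lenB = length(headB);
        while (lenA > lenB) { headA = headA.next; lenA--; }
        while (lenB > lenA) { headB = headB.next; lenB--; }
        while (headA != headB) {   // aligned: first shared node is the answer
            headA = headA.next;
            headB = headB.next;
        }
        return headA;              // intersection node, or null if none
    }
}
```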

Interview Disguises:

  • "Two microservices each have a chain of middleware layers stored as a linked list. Find the first shared middleware component where both chains converge." — intersection of two linked lists; middleware layers are nodes.
  • "Two Git branches diverged from a common commit. Find the exact commit where they first shared history — the merge base." — two linked lists merging at a common node; commits are nodes, parent pointers are next.

Problem 9: Remove Duplicates from Sorted List II

Difficulty: Medium | Time to Solve: 15 min | Pattern: Dummy Node + Single Pointer

Remove all nodes that have duplicate numbers, leaving only distinct values.

Example:

Input:  [1, 2, 3, 3, 4, 4, 5]
Output: [1, 2, 5]

Input:  [1, 1, 1, 2, 3]
Output: [2, 3]

Key Insight:

  • Use prev to track the last confirmed unique node. When duplicates are found, advance through the whole group then connect prev directly past it.
  • Only advance prev when no duplicates were found at curr.

Solution:

public ListNode deleteDuplicates(ListNode head) {
    ListNode dummy = new ListNode(0);
    dummy.next = head;
    ListNode prev = dummy;

    while (head != null) {
        if (head.next != null && head.val == head.next.val) {
            while (head.next != null && head.val == head.next.val) {
                head = head.next;
            }
            prev.next = head.next; // skip entire group
        } else {
            prev = prev.next;      // unique node, advance prev
        }
        head = head.next;
    }

    return dummy.next;
}

Complexity:

Time O(n) — each node visited once
Space O(1)

Real World: Deduplicating a sorted audit log where any event appearing more than once is flagged as potentially tampered and must be removed entirely — stricter than keeping one copy. Also used in data pipelines to discard unreliable repeated sensor readings.

Variations:

  1. Remove Duplicates I — keep one copy of each value instead of removing all. Simpler: use a single curr pointer and skip curr.next while values match; no dummy node needed.
  2. Remove all nodes with a specific target value — same prev/skip structure; change the duplicate-detection condition to head.val == target. The dummy node handles the case where the head is the target.
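
Variation 1 can be sketched for contrast: keeping one copy means the head is never removed, so a single pointer and no dummy node suffice (`ListNode` is repeated from the templates so the snippet stands alone):

```java
// ListNode matching the guide's definition (repeated so this sketch compiles standalone).
class ListNode { int val; ListNode next; ListNode(int v) { val = v; } }

class DedupKeepOne {
    // Remove Duplicates I: keep one copy of each value in a sorted list.
    static ListNode deleteDuplicates(ListNode head) {
        ListNode curr = head;
        while (curr != null && curr.next != null) {
            if (curr.val == curr.next.val) {
                curr.next = curr.next.next;   // drop the duplicate
            } else {
                curr = curr.next;             // distinct value, advance
            }
        }
        return head;
    }
}
```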

Interview Disguises:

  • "A sorted audit log treats any event code appearing more than once as tampered. Remove every occurrence of any repeated event code, leaving only codes that appear exactly once." — remove duplicates II; event codes are node values.
  • "A sorted sensor stream discards any reading that appears more than once as a measurement glitch — remove all copies of glitched values, not just the extras." — same prev-skips-entire-group logic; sensor readings are node values.

Problem 10: Odd Even Linked List

Difficulty: Medium | Time to Solve: 10-15 min | Pattern: Two-List Partition

Group all odd-index nodes first, then even-index nodes. Preserve relative order within each group. Indexing starts at 1.

Example:

Input:  [1, 2, 3, 4, 5]
Output: [1, 3, 5, 2, 4]

Input:  [2, 1, 3, 5, 6, 4, 7]
Output: [2, 3, 6, 7, 1, 5, 4]

Key Insight:

  • Two pointers odd and even advance together, each skipping over the other's group.
  • Save evenHead at the start; connect the odd tail to it at the end.

Solution:

public ListNode oddEvenList(ListNode head) {
    if (head == null || head.next == null) {
        return head;
    }

    ListNode odd = head, even = head.next, evenHead = even;

    while (even != null && even.next != null) {
        odd.next = even.next;   // odd skips over even
        odd = odd.next;
        even.next = odd.next;   // even skips over new odd
        even = even.next;
    }

    odd.next = evenHead;        // connect tails
    return head;
}

Complexity:

Time O(n) — single pass
Space O(1)

Real World: Reorganizing a task queue so all high-priority tasks (odd-indexed positions) run before low-priority ones (even-indexed), preserving arrival order within each group. Also used in memory layout optimization to group related objects by access pattern.

Variations:

  1. Group by value parity instead of index parity — nodes with odd values first, even values second. Same two-pointer advance structure; change the condition from index tracking to node.val % 2 != 0.
  2. Group into k buckets by index mod k — generalize from 2 groups to k groups using k dummy-headed lists; connect them in order at the end. The Partition List pattern generalizes this further.

Interview Disguises:

  • "A print queue stores jobs as a linked list. Reorganize it so all odd-numbered jobs (by queue position) execute first, then even-numbered, without changing relative order within each group." — odd even list; jobs are nodes, positions are indices.
  • "A playlist mixes tracks from two sources: odd positions are from source A, even from source B. Separate it so all source A tracks come first, then source B, preserving each source's order." — index-based partition into two groups = odd even list.

Problem 11: Partition List

Difficulty: Medium | Time to Solve: 15 min | Pattern: Two-List Partition

Partition the list so all nodes with value < x come before nodes with value ≥ x, preserving relative order.

Example:

Input:  [1, 4, 3, 2, 5, 2],  x = 3
Output: [1, 2, 2, 4, 3, 5]

Key Insight:

  • Two dummy-headed lists collect the two groups separately. Route each node based on its value, then connect the less tail to the greater head.
  • Null-terminate the greater list — its tail still points into the original list, which would create a cycle.

Solution:

public ListNode partition(ListNode head, int x) {
    ListNode lessHead = new ListNode(0), greaterHead = new ListNode(0);
    ListNode less = lessHead, greater = greaterHead;

    while (head != null) {
        if (head.val < x) {
            less.next = head;    less = less.next;
        }
        else {
            greater.next = head;  greater = greater.next;
        }
        head = head.next;
    }

    greater.next = null;        // must null-terminate to avoid cycle
    less.next = greaterHead.next;
    return lessHead.next;
}

Complexity:

Time O(n) — single pass
Space O(1) — two dummy nodes only

Real World: A job scheduler separates a task queue into fast-lane (processing time < threshold) and slow-lane (processing time ≥ threshold) while preserving the arrival order of tasks within each lane. Also used in network routers to split packets into priority queues.

Variations:

  1. Partition into three groups (< x, == x, > x) — add a third dummy-headed list for equal values; connect all three (less → equal → greater) at the end. Same pattern, one more list.
  2. Stable partition by predicate — replace head.val < x with any boolean function f(head). The two-dummy-list structure is unchanged; only the routing condition changes.
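
Variation 1 can be sketched directly: a third dummy-headed list collects the equal values, and the three lists are chained less → equal → greater at the end. The class and variable names are ours; `ListNode` is repeated from the templates so the snippet stands alone:

```java
// ListNode matching the guide's definition (repeated so this sketch compiles standalone).
class ListNode { int val; ListNode next; ListNode(int v) { val = v; } }

class ThreeWayPartition {
    // Stable three-way partition: < x, then == x, then > x.
    static ListNode partition3(ListNode head, int x) {
        ListNode lessHead = new ListNode(0), equalHead = new ListNode(0), greaterHead = new ListNode(0);
        ListNode less = lessHead, equal = equalHead, greater = greaterHead;

        while (head != null) {
            if (head.val < x)       { less.next = head;    less = less.next; }
            else if (head.val == x) { equal.next = head;   equal = equal.next; }
            else                    { greater.next = head; greater = greater.next; }
            head = head.next;
        }

        greater.next = null;             // null-terminate the final list
        equal.next = greaterHead.next;   // equal → greater
        less.next = equalHead.next;      // less → equal (skips empty middles safely)
        return lessHead.next;
    }
}
```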

Interview Disguises:

  • "A job scheduler stores tasks by estimated runtime. Move all tasks under a time threshold to the front of the queue for the fast worker, preserving their relative order." — partition list; tasks are nodes, threshold is x.
  • "A network router splits incoming packets into a priority lane (size < limit) and a standard lane (size ≥ limit) while preserving packet arrival order within each lane." — two dummy lists routed by value; packets are nodes.

Problem 12: Reorder List ⭐ MUST DO [B75-143]

Difficulty: Medium | Time to Solve: 20-25 min | Pattern: Multi-step

Reorder L0 → L1 → … → Ln into L0 → Ln → L1 → Ln-1 → L2 → Ln-2 → … in place.

Example:

Input:  [1, 2, 3, 4]
Output: [1, 4, 2, 3]

Input:  [1, 2, 3, 4, 5]
Output: [1, 5, 2, 4, 3]

Key Insight:

  • Three steps: (1) find the first middle using fast.next != null && fast.next.next != null — this keeps the first half equal or one node longer, (2) reverse the second half starting at slow.next, (3) interleave merge — alternate links from front and back, saving next pointers before relinking.
  • The merge loop runs until second is null; any leftover node in first is already correctly placed.

Solution:

public void reorderList(ListNode head) {
    if (head == null || head.next == null) {
        return;
    }

    // Step 1: find first middle
    ListNode slow = head, fast = head;
    while (fast.next != null && fast.next.next != null) {
        slow = slow.next;
        fast = fast.next.next;
    }

    // Step 2: reverse second half
    ListNode second = reverse(slow.next);
    slow.next = null;           // split the list

    // Step 3: interleave merge
    ListNode first = head;
    while (second != null) {
        ListNode tmp1 = first.next, tmp2 = second.next;
        first.next = second;
        second.next = tmp1;
        first = tmp1;
        second = tmp2;
    }
}

private ListNode reverse(ListNode head) {
    ListNode prev = null, curr = head;
    while (curr != null) {
        ListNode next = curr.next;
        curr.next = prev;
        prev = curr;
        curr = next;
    }
    return prev;
}

Complexity:

Time O(n) — three linear passes
Space O(1) — all in-place pointer rewiring

Real World: Interleaving the two halves of a data stream — used in audio processing to interleave left-channel and right-channel samples stored sequentially in a single linked buffer. Also used in card shuffling algorithms (perfect riffle shuffle).

Variations:

  1. Interleave two separate lists — skip Step 1 (no middle to find); go straight to the interleave merge of two already-separated lists. Simpler version that shows the merge step in isolation.
  2. Reverse the second half and append — stop after Step 2 (find middle + reverse); connect the reversed second half to the end of the first. Produces a different reordering — useful for contrast.

Interview Disguises:

  • "An audio codec stores left-channel samples followed by right-channel samples sequentially in a linked list. Interleave them so samples alternate L-R-L-R in a single pass." — reorder list; samples are nodes, first half is left channel, second half is right channel.
  • "A card dealer stores a full deck as a linked list split into two halves. Perform a perfect riffle shuffle — alternate one card from each half — in place." — find middle, reverse second half, interleave merge = reorder list.

Problem 13: Sort List

Difficulty: Medium | Time to Solve: 20-25 min | Pattern: Multi-step

Sort a linked list in O(n log n) time.

Example:

Input:  [4, 2, 1, 3]
Output: [1, 2, 3, 4]

Key Insight:

  • Merge sort is the natural fit — finding the middle (fast/slow) splits the list in O(n), and merging two sorted lists is O(n).
  • Unlike arrays, linked lists don't need auxiliary space to merge — just rewire pointers. Space cost is O(log n) for the recursion stack only.

Solution:

public ListNode sortList(ListNode head) {
    if (head == null || head.next == null) {
        return head;
    }

    // Find middle and split before it (slow begins the second half)
    ListNode slow = head, fast = head, prev = null;
    while (fast != null && fast.next != null) {
        prev = slow;
        slow = slow.next;
        fast = fast.next.next;
    }
    prev.next = null; // split into two halves

    ListNode left = sortList(head);
    ListNode right = sortList(slow);
    return merge(left, right);
}

private ListNode merge(ListNode l1, ListNode l2) {
    ListNode dummy = new ListNode(0), curr = dummy;
    while (l1 != null && l2 != null) {
        if (l1.val <= l2.val) {
            curr.next = l1; l1 = l1.next;
        }
        else {
            curr.next = l2; l2 = l2.next;
        }
        curr = curr.next;
    }
    curr.next = (l1 != null) ? l1 : l2;
    return dummy.next;
}

Complexity:

Time O(n log n) — log n levels of splitting, each doing O(n) total merge work
Space O(log n) — recursion depth only; no auxiliary arrays

Real World: Sorting a large dataset that doesn't fit in memory — external merge sort on disk uses the same split-and-merge structure, just with file chunks instead of list halves. Also used in database engines to sort records returned as a cursor (linked list of rows).

Variations:

  1. Bottom-up merge sort — instead of recursing, iteratively merge sublists of size 1, then 2, then 4. Eliminates the O(log n) recursion stack, achieving true O(1) space. Harder to implement but optimal.
  2. Sort then deduplicate — combine Sort List with Remove Duplicates II; sort first makes all duplicates adjacent, making the deduplication pass trivial.
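
Variation 1 (bottom-up) can be sketched roughly as follows. This is an illustrative iterative version, assuming the standard ListNode; split and merge are helper names introduced here, not part of the recursive solution above:

```java
class ListNode {
    int val;
    ListNode next;
    ListNode(int v) { val = v; }
}

class BottomUpSort {
    // Iteratively merge runs of size 1, 2, 4, ... — no recursion stack.
    static ListNode sortList(ListNode head) {
        int n = 0;
        for (ListNode p = head; p != null; p = p.next) n++;
        ListNode dummy = new ListNode(0);
        dummy.next = head;
        for (int size = 1; size < n; size *= 2) {
            ListNode prev = dummy, curr = dummy.next;
            while (curr != null) {
                ListNode left = curr;
                ListNode right = split(left, size);  // detach first run
                curr = split(right, size);           // detach second run, save rest
                prev = merge(left, right, prev);     // splice merged run back in
            }
        }
        return dummy.next;
    }

    // Cut the list after 'size' nodes; return the head of the remainder.
    static ListNode split(ListNode head, int size) {
        for (int i = 1; head != null && i < size; i++) head = head.next;
        if (head == null) return null;
        ListNode rest = head.next;
        head.next = null;
        return rest;
    }

    // Merge two sorted runs, attach the result after 'prev', return its tail.
    static ListNode merge(ListNode l1, ListNode l2, ListNode prev) {
        ListNode curr = prev;
        while (l1 != null && l2 != null) {
            if (l1.val <= l2.val) { curr.next = l1; l1 = l1.next; }
            else { curr.next = l2; l2 = l2.next; }
            curr = curr.next;
        }
        curr.next = (l1 != null) ? l1 : l2;
        while (curr.next != null) curr = curr.next;  // walk to the merged tail
        return curr;
    }

    static ListNode fromArray(int[] vals) {
        ListNode dummy = new ListNode(0), curr = dummy;
        for (int v : vals) { curr.next = new ListNode(v); curr = curr.next; }
        return dummy.next;
    }

    static int[] toArray(ListNode head) {
        java.util.List<Integer> out = new java.util.ArrayList<>();
        for (; head != null; head = head.next) out.add(head.val);
        return out.stream().mapToInt(Integer::intValue).toArray();
    }
}
```

The outer loop doubles the run size each pass, so there are log n passes over the list and no call stack to account for.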

Interview Disguises:

  • "A memory-constrained embedded system needs to sort a linked list of sensor records. It cannot convert to an array. Sort in O(n log n) without allocating extra node storage." — sort list; merge sort is the only comparison sort that achieves O(n log n) on a linked list without O(n) extra space.
  • "A database cursor returns rows as a linked list in arbitrary order. Sort them by key in O(n log n) time for a subsequent merge join with another sorted cursor." — merge sort on linked list; rows are nodes, sort key drives the merge comparison.

Problem 14: LFU Cache with TTL ⭐ MUST DO

| Difficulty | Time to Solve | Pattern |
|---|---|---|
| Hard | 40-50 min | Multi-step |

Design a Least Frequently Used (LFU) cache that also expires entries after a TTL (time-to-live). get(key, currentTime) returns the value or -1 if the key is missing or expired. put(key, val, ttlMs, currentTime) inserts with a TTL in milliseconds. Expired entries are removed lazily on access and proactively via a periodic sweep.

Example:

LFUCacheWithTTL cache = new LFUCacheWithTTL(2);
cache.put(1, 10, 500, 0);   // key=1, val=10, expires at t=500
cache.put(2, 20, 500, 0);   // key=2, val=20, expires at t=500
cache.get(1, 0);             // → 10  (freq[1]=2, freq[2]=1)
cache.put(3, 30, 500, 1);   // cache full — evict key=2 (LFU, lowest freq)
cache.get(2, 1);             // → -1  (evicted)
cache.get(1, 600);           // → -1  (expired — TTL elapsed)

Why a doubly linked list? A doubly linked list is the only structure that gives O(1) arbitrary removal and O(1) insertion at any position simultaneously. Arrays shift on removal — O(n). Heaps support O(1) peek but O(log n) arbitrary removal. BSTs are O(log n) for both. The linked list rewires in O(1) because each node holds direct pointers to its neighbors — no searching needed.

The HashMap completes the picture: it gives you the node pointer directly, so the list never has to search. The map handles lookup, the list handles rewiring — neither works without the other.

Key Insight:

  • LFU eviction picks the least-frequently-used key. Ties break by LRU within the same frequency bucket, so each bucket is a mini LRU list (doubly linked list, newest at head, LRU at tail).
  • On every get or put, promote the accessed node: remove from bucket[freq], increment freq, add to head of bucket[freq+1]. Track minFreq — it resets to 1 only when a brand-new key is inserted.
  • TTL is stored as an absolute expiry timestamp (currentTime + ttlMs). Passing currentTime in (instead of calling the wall clock internally) makes the cache fully testable. Lazy expiry: check in get. Active expiry: a second sorted-by-expiry linked list lets sweepExpired() stop at the first non-expired node — O(expired) not O(capacity).
  • On eviction, if the front of the sorted expiry list is already expired, take it instead of the LFU victim — O(1) expired-first check using the sorted list head.
  • Each node lives in two doubly linked lists simultaneously: its frequency bucket list and the expiry list. This dual-intrusive-list design is why the node needs four link pointers (prev/next for the freq bucket, ePrev/eNext for the expiry list). It is the same design used in Caffeine (Java) and Guava production caches.

Solution:

import java.util.*;

class LFUCacheWithTTL {

    // ── Node lives in TWO intrusive doubly-linked lists simultaneously:
    //    1. A FreqBucket (LRU list for its frequency level)   ← dummy node anchor + arbitrary removal
    //    2. The ExpiryList (sorted ascending by expiry time)  ← sorted merge insert + arbitrary removal + sweep
    private static class Node {
        int key, val, freq;
        long expiry;           // absolute timestamp; 0 = no expiry
        Node prev, next;       // FreqBucket list links
        Node ePrev, eNext;     // ExpiryList links

        Node(int k, int v, long exp) { key = k; val = v; freq = 1; expiry = exp; }

        boolean isExpired(long currentTime) {
            return expiry != 0 && currentTime >= expiry;
        }
    }

    // ── FreqBucket: one doubly-linked list per frequency level.
    //    This is essentially a standalone LRU cache — newest at head, LRU at tail,
    //    O(1) insert at head, O(1) arbitrary removal, O(1) evict from tail.
    //    LFU is built by stacking N of these: one per frequency level.
    //    Dummy head + dummy tail make every operation branchless (no null checks on neighbors).
    //    Pattern: dummy node anchor + arbitrary node removal
    private static class FreqBucket {
        Node head, tail;   // sentinels — never hold real data
        int size;

        FreqBucket() {
            head = new Node(0, 0, 0);   // dummy head absorbs edge cases (empty list, head deletion)
            tail = new Node(0, 0, 0);   // dummy tail
            head.next = tail;
            tail.prev = head;
        }

        // Add node just after dummy head (MRU position)
        void addHead(Node n) {
            n.next = head.next;
            n.prev = head;
            head.next.prev = n;
            head.next = n;
            size++;
        }

        // Arbitrary node removal — O(1) because node holds its own prev/next.
        // Guard against double-remove; null out links so callers can detect removed state.
        void remove(Node n) {
            if (n.prev == null || n.next == null) {
                return;   // already removed — defensive guard
            }
            n.prev.next = n.next;
            n.next.prev = n.prev;
            n.prev = null;   // mark as removed so double-remove is a safe no-op
            n.next = null;
            size--;
        }

        // Peek at LRU tail without removing — caller removes via evict()
        Node peekLRU() {
            return isEmpty() ? null : tail.prev;   // dummy tail makes this safe even for size=1
        }

        boolean isEmpty() { return size == 0; }
    }

    // ── ExpiryList: a single doubly-linked list sorted ascending by expiry time.
    //    insertSorted: same pointer rewiring as merging into a sorted list.
    //    Pattern: sorted merge insert + arbitrary node removal
    private static class ExpiryList {
        Node head, tail;   // sentinels

        ExpiryList() {
            head = new Node(0, 0, 0);
            tail = new Node(0, 0, Long.MAX_VALUE);  // sentinel expiry = ∞
            head.eNext = tail;
            tail.ePrev = head;
        }

        // Sorted insert — walk until the right position,
        // then splice in exactly like linking two sorted lists at a merge point.
        void insertSorted(Node n) {
            Node cur = head.eNext;
            while (cur != tail && cur.expiry <= n.expiry) {
                cur = cur.eNext;
            }
            n.eNext = cur;
            n.ePrev = cur.ePrev;
            cur.ePrev.eNext = n;
            cur.ePrev = n;
        }

        // Arbitrary node removal — O(1) using node's own ePrev/eNext pointers.
        // Guard + null-out prevent double-remove.
        void remove(Node n) {
            if (n.ePrev == null || n.eNext == null) {
                return;   // already removed — defensive guard
            }
            n.ePrev.eNext = n.eNext;
            n.eNext.ePrev = n.ePrev;
            n.ePrev = null;   // mark as removed
            n.eNext = null;
        }

        // Peek at the soonest-to-expire node — O(1) check for expired-first eviction
        Node peekFront() {
            Node front = head.eNext;
            return (front == tail) ? null : front;
        }
    }

    // ── LFU internals
    private final int capacity;
    private int minFreq;
    private final Map<Integer, Node> map;             // key → Node
    private final Map<Integer, FreqBucket> buckets;   // freq → FreqBucket
    private final ExpiryList expiryList;

    public LFUCacheWithTTL(int capacity) {
        this.capacity = capacity;
        this.map = new HashMap<>();
        this.buckets = new HashMap<>();
        this.expiryList = new ExpiryList();
    }

    // currentTime passed in — caller controls the clock, making this fully testable
    public int get(int key, long currentTime) {
        Node n = map.get(key);
        if (n == null) {
            return -1;
        }
        if (n.isExpired(currentTime)) {
            evict(n);   // lazy expiry — single evict() handles all three structures
            return -1;
        }
        promote(n);   // increment frequency, move to next bucket
        return n.val;
    }

    // currentTime passed in — caller controls the clock, making this fully testable
    public void put(int key, int val, long ttlMs, long currentTime) {
        if (capacity <= 0) {
            return;
        }
        long expiry = (ttlMs > 0) ? currentTime + ttlMs : 0;
        if (map.containsKey(key)) {
            Node n = map.get(key);
            if (n.isExpired(currentTime)) {
                evict(n);   // treat expired key as a new insert — evict first, fall through
            } else {
                n.val = val;
                n.expiry = expiry;
                expiryList.remove(n);
                if (expiry != 0) {
                    expiryList.insertSorted(n);   // re-insert at correct sorted position
                }
                promote(n);
                return;
            }
        }
        if (map.size() >= capacity) {
            evictLFU(currentTime);
        }
        Node n = new Node(key, val, expiry);
        map.put(key, n);
        buckets.computeIfAbsent(1, f -> new FreqBucket()).addHead(n);
        if (expiry != 0) {
            expiryList.insertSorted(n);   // sorted insert by expiry time
        }
        minFreq = 1;   // new node always starts at freq=1 → reset minFreq
    }

    // Sweep expired entries — scan from front while expired, evict and advance.
    // Same pattern as removing a duplicate group from a sorted list: stop at the first
    // node that no longer matches the condition (expired). O(expired) not O(capacity).
    // Call from a background thread.
    public void sweepExpired(long currentTime) {
        while (true) {
            Node front = expiryList.peekFront();
            if (front == null || !front.isExpired(currentTime)) {
                break;
            }
            evict(front);   // evict removes from expiry list, freq bucket, and map
        }
    }

    // Partition pattern: promote re-routes a node from bucket[freq] to bucket[freq+1]
    // — exactly like moving a node from the "less" partition to the "greater" partition
    // while preserving relative order within the bucket.
    private void promote(Node n) {
        FreqBucket fb = buckets.get(n.freq);
        fb.remove(n);
        if (fb.isEmpty()) {
            buckets.remove(n.freq);
            if (n.freq == minFreq) {
                minFreq++;   // promote always goes to freq+1, so minFreq++ is exact here
            }
        }
        n.freq++;
        buckets.computeIfAbsent(n.freq, f -> new FreqBucket()).addHead(n);
    }

    // Evict victim by LFU-LRU, but prefer an already-expired entry first (O(1) check).
    // The sorted expiry list front is the soonest-to-expire — if it has already lapsed,
    // taking it avoids wasting a valid LFU slot.
    private void evictLFU(long currentTime) {
        Node soonest = expiryList.peekFront();
        if (soonest != null && soonest.isExpired(currentTime)) {
            evict(soonest);   // expired-first: free an already-dead slot, not a live one
            return;
        }
        FreqBucket fb = buckets.get(minFreq);
        if (fb == null || fb.isEmpty()) {
            return;
        }
        evict(fb.peekLRU());
    }

    // Single entry point for all eviction paths — removes from map, expiry list,
    // and freq bucket in one place so no path can leave partial state behind.
    private void evict(Node n) {
        if (n == null) {
            return;
        }
        map.remove(n.key);
        expiryList.remove(n);   // no-op if node has no TTL (ePrev/eNext already null)
        FreqBucket fb = buckets.get(n.freq);
        if (fb != null) {
            fb.remove(n);       // no-op if already removed (guard fires)
            if (fb.isEmpty()) {
                buckets.remove(n.freq);
                if (n.freq == minFreq) {
                    updateMinFreq();
                }
            }
        }
    }

    // Walk upward from minFreq to find the next non-empty bucket.
    // Needed after arbitrary eviction — promote() uses minFreq++ directly since it
    // always moves to minFreq+1 which it just populated.
    private void updateMinFreq() {
        if (map.isEmpty()) {
            minFreq = 0;
            return;
        }
        while (!buckets.containsKey(minFreq) || buckets.get(minFreq).isEmpty()) {
            minFreq++;
        }
    }
}

Complexity:

Time O(n) per put — insertSorted walks the expiry list; O(1) for get and promote; amortized O(1) for evictLFU (the updateMinFreq walk); O(expired) for sweepExpired
Space O(capacity) — one node per entry, two list links per node

Design Tradeoff: This design is O(1) for LFU operations, but expiry insertion is O(n) since we maintain a sorted list. If we wanted strict bounds, we could use a min-heap for expiry, giving O(log n) insert, but we'd lose O(1) arbitrary removal and need lazy deletion or index tracking. So this is a tradeoff between simplicity and worst-case guarantees.

Real World: Every production cache layer uses LFU + TTL together: Redis ALLKEYS-LFU eviction tracks frequency sketches and every key has an optional TTL. CDN edge caches keep popular assets longer (LFU eviction) while respecting cache-control max-age headers (TTL). The dual-list design — one per frequency bucket, one sorted expiry list — is inspired by the same principles used in production libraries like Caffeine: separating eviction policy from expiry tracking.

Variations:

  1. O(log n) insertSorted — replace the sorted expiry list with a TreeMap<Long, Deque<Node>> (expiry bucket map). put becomes O(log n) for tree insertion, but sweepExpired remains O(expired). Useful when TTLs cluster into tiers (1 min / 5 min / 1 hr) rather than arbitrary per-key values.
  2. LRU with TTL — drop frequency tracking; keep one doubly linked list (MRU at head) plus the expiry list. get moves the node to the head; eviction removes the tail. This is a strict subset of the LFU design — remove the freq field and the buckets map.
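
Variation 1's TreeMap replacement can be sketched like this. ExpiryBuckets is an illustrative standalone class, not drop-in code for the solution above; it trades the per-node ePrev/eNext pointers for O(log n) tree operations:

```java
import java.util.*;

class ExpiryBuckets {
    // expiry timestamp -> keys expiring at that instant
    private final TreeMap<Long, Deque<Integer>> buckets = new TreeMap<>();

    // O(log n) insert — no per-node list pointers required
    void add(int key, long expiry) {
        buckets.computeIfAbsent(expiry, e -> new ArrayDeque<>()).add(key);
    }

    // O(log n) removal by (key, expiry) pair
    void remove(int key, long expiry) {
        Deque<Integer> dq = buckets.get(expiry);
        if (dq != null) {
            dq.remove(key);   // autoboxes to remove(Object)
            if (dq.isEmpty()) buckets.remove(expiry);
        }
    }

    // Drain every key with expiry <= currentTime — O(expired), like sweepExpired:
    // stop at the first bucket that has not yet lapsed.
    List<Integer> sweep(long currentTime) {
        List<Integer> expired = new ArrayList<>();
        Map.Entry<Long, Deque<Integer>> e;
        while ((e = buckets.firstEntry()) != null && e.getKey() <= currentTime) {
            expired.addAll(e.getValue());
            buckets.remove(e.getKey());
        }
        return expired;
    }
}
```

The cache would call add/remove wherever the solution above calls insertSorted/remove, and feed sweep's returned keys to evict.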

Interview Disguises:

  • "Design an in-memory session store for an auth service. Sessions expire after 30 minutes of inactivity, and under memory pressure the least-used sessions are evicted first." — LFU cache where each get acts as the activity ping; TTL resets on every access.
  • "You are building a DNS resolver cache. Each DNS record has a TTL from the record itself. Under memory pressure, evict the record queried least often. Design the data structure." — LFU cache where TTL comes from the DNS record's TTL field; sweepExpired runs in a background thread between queries.

Thread Safety Discussion: This implementation is single-threaded. In an interview, after writing the core logic, expect a follow-up: "How would you make this thread-safe for concurrent reads and writes?"

The key insight before picking an approach: every LFU operation touches shared global state regardless of which key is involved — the frequency buckets map, minFreq, and the expiry list. This rules out techniques that assume per-key independence.

| Approach | Works? | Why |
|---|---|---|
| synchronized | Yes | Simple, correct, covers all shared state |
| ReadWriteLock | No | get still needs the write lock — promote() mutates freq state |
| Stripe locking | No | Shared state (frequency buckets, minFreq, expiry list) breaks the independence assumption |
| Caffeine's ring buffer | Yes | Specifically designed around this problem |

  1. Coarse-grained lock — correct answer for this implementation. Wrap every public method with synchronized. One lock covers all shared state. Simple to reason about; all threads serialize. Acceptable for low-to-medium concurrency.

    public synchronized int get(int key, long currentTime) { ... }
    public synchronized void put(int key, int val, long ttlMs, long currentTime) { ... }
    public synchronized void sweepExpired(long currentTime) { ... }
  2. ReadWriteLock — right tool, wrong problem. ReentrantReadWriteLock lets multiple readers run in parallel while writers hold an exclusive lock. It's the right tool when get is truly read-only. In LFU, get calls promote(), which mutates the frequency buckets and minFreq — so get needs the write lock anyway. Buys nothing over synchronized here. Reach for it in a read-through cache where get doesn't mutate state.

  3. Stripe locking — doesn't fit LFU. Stripe locking partitions the key space so operations on different keys never contend — ConcurrentHashMap uses this internally. It works when per-key operations are truly independent. LFU breaks that assumption: promoting key 5 updates minFreq, which affects which key gets evicted when key 6 is inserted. There is no safe way to stripe without also locking all shared structures, which collapses back to a single lock anyway.

  4. Lock-free reads with buffered writes — how Caffeine does it. Instead of reducing lock contention, eliminate it on reads entirely. This design targets exactly the shared-state problem in LFU:

    • get() reads from ConcurrentHashMap (lock-free), drops a frequency update note into a ring buffer via CAS, and returns immediately — no lock held
    • A background thread periodically drains the buffer and applies all frequency updates in batch under a single eviction lock
    • put() writes to the map and schedules an immediate buffer drain

    Tradeoffs:

    • get() never blocks — highest possible read throughput
    • Frequency counts are eventually consistent — eviction policy may be slightly stale between drains
    • Significantly more complex to implement correctly

Background sweep thread: sweepExpired is designed to run on a separate thread. The typical pattern:

ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
scheduler.scheduleAtFixedRate(() -> {
    try {
        cache.sweepExpired(System.currentTimeMillis());
    } catch (Exception e) {
        log.error("sweep failed", e);  // don't let it die silently
    }
}, 1, 1, TimeUnit.SECONDS);

With a coarse lock this is safe — the sweep thread blocks until no get/put is in progress. Shutdown: call scheduler.shutdown() to stop the sweep thread when the cache is discarded.

Production pitfall: if the body of scheduleAtFixedRate throws an unchecked exception, the executor silently cancels all future runs — the sweep thread dies and expired entries accumulate forever with no error surfaced. Always wrap in try-catch to keep the sweep alive.


Quick Reference Table

| # | Problem | Pattern | Key Technique | Time |
|---|---|---|---|---|
| 1 | Linked List Cycle ⭐ | Fast & Slow | Pointers meet inside cycle | O(n) |
| 2 | Merge Two Sorted Lists ⭐ | Two-List Merge | Dummy node, compare and link | O(n+m) |
| 3 | Reverse Linked List ⭐ | In-Place Reversal | Three-pointer prev/curr/next | O(n) |
| 4 | Linked List Cycle II | Fast & Slow | Reset one pointer to head on meet | O(n) |
| 5 | Palindrome Linked List ⭐ | Multi-step | Middle + reverse + compare | O(n) |
| 6 | Remove N-th From End ⭐ | Gap Two Pointers | Gap of n+1, dummy absorbs head deletion | O(n) |
| 7 | Reverse Linked List II | In-Place Reversal | Front-insertion in range, one pass | O(n) |
| 8 | Intersection | Gap Two Pointers | Switch heads to equalize path length | O(n+m) |
| 9 | Remove Duplicates II | Dummy + Pointer | prev skips entire duplicate group | O(n) |
| 10 | Odd Even List | Two-List Partition | Alternate odd/even links, join tails | O(n) |
| 11 | Partition List | Two-List Partition | Two dummy lists, null-terminate greater | O(n) |
| 12 | Reorder List ⭐ | Multi-step | First middle + reverse + interleave | O(n) |
| 13 | Sort List | Multi-step | Merge sort — split at middle, merge | O(n log n) |
| 14 | LFU Cache with TTL ⭐ | Multi-step | Dual intrusive lists — freq bucket + sorted expiry | O(1) get, O(n) put |

When to Use Each Pattern

Fast & Slow Pointers

  • Detect whether a cycle exists and where it starts
  • Find the middle node to split or compare halves
  • Use the first-middle variant (fast.next != null && fast.next.next != null) when the first half must be equal or one node longer (Reorder List, Sort List)
  • Use the second-middle variant (fast != null && fast.next != null) when you just need any middle (Palindrome, basic Middle)
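
The contrast between the two variants, as a minimal sketch (assumes a non-null head; ListNode is the standard single-link node):

```java
class ListNode {
    int val;
    ListNode next;
    ListNode(int v) { val = v; }
}

class Middle {
    // First middle: for even length, returns the END of the first half.
    // [1, 2, 3, 4] -> node 2; [1, 2, 3] -> node 2
    static ListNode firstMiddle(ListNode head) {
        ListNode slow = head, fast = head;
        while (fast.next != null && fast.next.next != null) {
            slow = slow.next;
            fast = fast.next.next;
        }
        return slow;
    }

    // Second middle: for even length, returns the START of the second half.
    // [1, 2, 3, 4] -> node 3; [1, 2, 3] -> node 2
    static ListNode secondMiddle(ListNode head) {
        ListNode slow = head, fast = head;
        while (fast != null && fast.next != null) {
            slow = slow.next;
            fast = fast.next.next;
        }
        return slow;
    }
}
```

On odd-length lists the two variants agree; only even-length lists expose the off-by-one difference.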

In-Place Reversal

  • Reverse the whole list or a subrange in O(1) space
  • Use the front-insertion trick for partial reversal — avoids splitting and rejoining the list
  • Always used as a helper step in multi-step problems (Palindrome, Reorder List)
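
The front-insertion trick can be sketched as a one-pass partial reversal in the style of Reverse Linked List II (positions are 1-indexed; ListNode and the helpers are illustrative):

```java
class ListNode {
    int val;
    ListNode next;
    ListNode(int v) { val = v; }
}

class ReverseRange {
    // Reverse positions [left, right] in one pass by repeatedly
    // detaching the node AFTER curr and re-inserting it just after prev.
    static ListNode reverseBetween(ListNode head, int left, int right) {
        ListNode dummy = new ListNode(0);
        dummy.next = head;
        ListNode prev = dummy;
        for (int i = 1; i < left; i++) prev = prev.next;  // node before the range
        ListNode curr = prev.next;                        // fixed; ends as range tail
        for (int i = 0; i < right - left; i++) {
            ListNode moved = curr.next;     // detach the node after curr
            curr.next = moved.next;
            moved.next = prev.next;         // front-insert right after prev
            prev.next = moved;
        }
        return dummy.next;
    }

    static ListNode fromArray(int[] vals) {
        ListNode dummy = new ListNode(0), curr = dummy;
        for (int v : vals) { curr.next = new ListNode(v); curr = curr.next; }
        return dummy.next;
    }

    static int[] toArray(ListNode head) {
        java.util.List<Integer> out = new java.util.ArrayList<>();
        for (; head != null; head = head.next) out.add(head.val);
        return out.stream().mapToInt(Integer::intValue).toArray();
    }
}
```

The list is never split and rejoined; each moved node is rewired in place.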

Gap-Based Two Pointers

  • Remove or locate a node exactly k positions from the end
  • Find intersection of two lists by equalizing total path lengths with a head-swap
  • Always start from a dummy node so the gap technique works even if the head is the target
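
The gap technique with a dummy node, sketched for N-th-from-end removal (standard ListNode assumed; helper names illustrative):

```java
class ListNode {
    int val;
    ListNode next;
    ListNode(int v) { val = v; }
}

class RemoveNth {
    // One-pass removal of the n-th node from the end using a gap of n+1.
    static ListNode removeNthFromEnd(ListNode head, int n) {
        ListNode dummy = new ListNode(0);   // dummy absorbs head deletion
        dummy.next = head;
        ListNode first = dummy, second = dummy;
        for (int i = 0; i < n + 1; i++) first = first.next;  // open a gap of n+1
        while (first != null) {             // advance both until first falls off the end
            first = first.next;
            second = second.next;
        }
        second.next = second.next.next;     // second sits one node before the target
        return dummy.next;
    }

    static ListNode fromArray(int[] vals) {
        ListNode dummy = new ListNode(0), curr = dummy;
        for (int v : vals) { curr.next = new ListNode(v); curr = curr.next; }
        return dummy.next;
    }

    static int[] toArray(ListNode head) {
        java.util.List<Integer> out = new java.util.ArrayList<>();
        for (; head != null; head = head.next) out.add(head.val);
        return out.stream().mapToInt(Integer::intValue).toArray();
    }
}
```

Because both pointers start at the dummy, deleting the head (n equal to the list length) needs no special case.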

Two-List Merge / Partition

  • Merge two sorted lists in O(1) space using a dummy node
  • Partition a list into two groups while preserving relative order — build two separate lists, then connect them
  • Always null-terminate the second list before connecting — its tail pointer still points into the original list
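
A minimal sketch of the partition pattern in the style of Partition List (standard ListNode assumed; helper names illustrative):

```java
class ListNode {
    int val;
    ListNode next;
    ListNode(int v) { val = v; }
}

class Partition {
    // Nodes < x keep their order in the 'less' list; nodes >= x in 'greater'.
    static ListNode partition(ListNode head, int x) {
        ListNode lessDummy = new ListNode(0), less = lessDummy;
        ListNode greaterDummy = new ListNode(0), greater = greaterDummy;
        for (ListNode curr = head; curr != null; curr = curr.next) {
            if (curr.val < x) { less.next = curr; less = less.next; }
            else { greater.next = curr; greater = greater.next; }
        }
        greater.next = null;            // break the old link back into the list
        less.next = greaterDummy.next;  // connect the two partitions
        return lessDummy.next;
    }

    static ListNode fromArray(int[] vals) {
        ListNode dummy = new ListNode(0), curr = dummy;
        for (int v : vals) { curr.next = new ListNode(v); curr = curr.next; }
        return dummy.next;
    }

    static int[] toArray(ListNode head) {
        java.util.List<Integer> out = new java.util.ArrayList<>();
        for (; head != null; head = head.next) out.add(head.val);
        return out.stream().mapToInt(Integer::intValue).toArray();
    }
}
```

Dropping the greater.next = null line leaves the greater tail pointing back into the original list, which is exactly the cycle bug named above.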

Common Mistakes to Avoid

Fast & Slow Mistakes

  1. Wrong loop condition — while (fast != null && fast.next != null) is required; checking only fast != null causes a NullPointerException on fast.next.next
  2. Wrong middle variant — first-middle stops one step earlier (fast.next != null && fast.next.next != null); mixing the two variants in multi-step problems produces off-by-one bugs

Reversal Mistakes

  1. Forgetting to save next before reversing — curr.next = prev destroys the reference to the rest of the list; always do ListNode next = curr.next first
  2. Not null-terminating after split — after slow.next = null, the first half is correctly ended; skipping this creates a cycle through the second half

Gap & Dummy Node Mistakes

  1. Gap of n instead of n+1 for deletion — advancing first by n only lands second on the target itself; you need second one node before it to set second.next = second.next.next
  2. Skipping the dummy node — without a dummy, deleting the head requires a special case; dummy absorbs it automatically

Two-List Merge / Partition Mistakes

  1. Forgetting greater.next = null — the greater tail still points into the original list; not null-terminating creates a cycle that causes infinite loops
  2. Using <= x instead of < x in Partition — the problem requires < x for the less side; equal values go into the greater side

General Linked List Mistakes

  1. Not checking head == null at the start — most operations on null head cause NullPointerException; add the guard at the top
  2. Moving curr instead of saving and moving — in merge loops, always do curr = curr.next after setting curr.next; skipping this leaves curr pointing at a stale node