Advent of Code 2022

This book documents solutions to the Advent of Code 2022 programming puzzles implemented in Rust.

About Advent of Code

Advent of Code is an annual set of Christmas-themed programming puzzles created by Eric Wastl. Each year, starting on December 1st, a new programming puzzle is released every day until December 25th. These puzzles can be solved in any programming language and cover a wide range of algorithms, data structures, and problem-solving techniques.

Project Structure

This project contains solutions for Advent of Code 2022 implemented in Rust. Each day's puzzle has its own binary in the src/bin directory with a corresponding input file. This book provides explanations, walkthroughs, and code snippets for each solution.

How to Use This Book

You can navigate through the solutions using the sidebar. Each day's solution is organized into:

  • Problem Description: A summary of the day's challenge
  • Solution Explanation: A detailed walkthrough of the approach used
  • Code: The complete implementation with comments

Running the Solutions

To run any day's solution, use Cargo with the appropriate bin target. For example:

# Run Day 1's solution
cargo run --bin day1

You can also compile and run in release mode for better performance:

cargo run --release --bin day1

Day 1: Calorie Counting

Day 1 involves calculating the total calories carried by elves and finding which elves carry the most calories.

Problem Overview

The elves are on an expedition and each carries different food items with different calorie values. Your task is to:

  1. Calculate the total calories carried by each elf
  2. Find which elf is carrying the most calories
  3. Find the sum of calories carried by the top three elves

Day 1: Problem Description

Calorie Counting

The jungle must be too overgrown and difficult to navigate in vehicles or access from the air; the Elves' expedition traditionally goes on foot. As your boats approach land, the Elves begin taking inventory of their supplies. One important consideration is food - in particular, the number of Calories each Elf is carrying.

The Elves take turns writing down the number of Calories contained in the various meals, snacks, rations, etc. that they've brought with them, one item per line. Each Elf separates their own inventory from the previous Elf's inventory (if any) by a blank line.

For example, suppose the Elves finish writing their items' Calories and end up with the following list:

1000
2000
3000

4000

5000
6000

7000
8000
9000

10000

This list represents the Calories of the food carried by five Elves:

  • The first Elf is carrying food with 1000, 2000, and 3000 Calories, a total of 6000 Calories.
  • The second Elf is carrying one food item with 4000 Calories.
  • The third Elf is carrying food with 5000 and 6000 Calories, a total of 11000 Calories.
  • The fourth Elf is carrying food with 7000, 8000, and 9000 Calories, a total of 24000 Calories.
  • The fifth Elf is carrying one food item with 10000 Calories.

Part 1

In case the Elves get hungry and need extra snacks, they need to know which Elf to ask: they'd like to know how many Calories are being carried by the Elf carrying the most Calories. In the example above, this is 24000 (carried by the fourth Elf).

Find the Elf carrying the most Calories. How many total Calories is that Elf carrying?

Part 2

By the time you calculate the answer to the Elves' question, they've already realized that the Elf carrying the most Calories of food might eventually run out of snacks.

To avoid this unacceptable situation, the Elves would instead like to know the total Calories carried by the top three Elves carrying the most Calories. That way, even if one of those Elves runs out of snacks, they still have two backups.

In the example above, the top three Elves are the fourth Elf (with 24000 Calories), then the third Elf (with 11000 Calories), then the fifth Elf (with 10000 Calories). The sum of the Calories carried by these three Elves is 45000.

Find the top three Elves carrying the most Calories. How many Calories are those Elves carrying in total?

Day 1: Solution Explanation

Approach

Day 1's problem requires us to parse a list of calorie values grouped by elves, calculate the total calories per elf, and then find either the maximum value (part 1) or the sum of the top three values (part 2).

Step 1: Parse the Input

The input format consists of groups of numbers separated by blank lines. Each group represents the food items carried by a single elf. We need to:

  1. Split the input by blank lines to get each elf's inventory
  2. For each elf's inventory, parse the individual calorie values and sum them

Step 2: Find the Maximum (Part 1)

Once we have the total calories for each elf, we simply find the maximum value among them.

Step 3: Find the Sum of Top Three (Part 2)

To find the sum of the top three values:

  1. Sort the list of calorie sums in descending order
  2. Take the first three elements
  3. Sum them

Implementation Details

Parsing the Input

We use Rust's string splitting capabilities to parse the input:

#![allow(unused)]
fn main() {
fn parse_input(input: &str) -> Vec<u32> {
    input
        .split("\n\n") // Split by blank lines to get each elf's inventory
        .map(|elf| {
            elf.lines() // Split each elf's inventory by lines
                .filter_map(|line| line.parse::<u32>().ok()) // Parse each line to a number
                .sum() // Sum the calories for each elf
        })
        .collect() // Collect into a vector of total calories per elf
}
}
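
As a quick sanity check, running this parser over the sample list from the problem description should produce one total per Elf. The snippet below repeats parse_input so it can run on its own:

fn main() {
    // Repeated from above so the example is self-contained
    fn parse_input(input: &str) -> Vec<u32> {
        input
            .split("\n\n")
            .map(|elf| elf.lines().filter_map(|line| line.parse::<u32>().ok()).sum())
            .collect()
    }

    let sample = "1000\n2000\n3000\n\n4000\n\n5000\n6000\n\n7000\n8000\n9000\n\n10000";
    assert_eq!(parse_input(sample), vec![6000, 4000, 11000, 24000, 10000]);
}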

Solving Part 1

Finding the maximum value is straightforward:

#![allow(unused)]
fn main() {
fn part1(calories: &[u32]) -> u32 {
    *calories.iter().max().unwrap_or(&0)
}
}

Solving Part 2

For part 2, we sort the values and sum the top three:

#![allow(unused)]
fn main() {
fn part2(calories: &[u32]) -> u32 {
    let mut sorted = calories.to_vec();
    sorted.sort_unstable_by(|a, b| b.cmp(a)); // Sort in descending order
    sorted.iter().take(3).sum() // Sum the top three values
}
}

Time and Space Complexity

  • Time Complexity: O(n log n) where n is the number of elves, due to the sorting operation in part 2
  • Space Complexity: O(n) for storing the calorie totals for each elf

Alternative Approaches

Using a Priority Queue

Instead of sorting the entire list for part 2, we could use a min-heap of size 3 to keep track of the top three values:

#![allow(unused)]
fn main() {
use std::collections::BinaryHeap;
use std::cmp::Reverse;

fn part2_with_heap(calories: &[u32]) -> u32 {
    let mut heap = BinaryHeap::new();
    
    for &calorie in calories {
        heap.push(Reverse(calorie));
        if heap.len() > 3 {
            heap.pop();
        }
    }
    
    heap.into_iter().map(|Reverse(cal)| cal).sum()
}
}

This approach has a time complexity of O(n log k) with k = 3, which is effectively O(n) and more efficient than sorting the entire list.

Day 1: Code

Below is the complete code for Day 1's solution. The solution uses a BinaryHeap to efficiently track the elves with the most calories.

Full Solution

use std::collections::BinaryHeap;
use std::str::FromStr;

fn main() {

    let fs = std::fs::read_to_string("./src/bin/day1_input.txt").unwrap_or_else(|e| panic!("{e}"));

    let out = fs.split("\n\n")
        .map(|e| e.split('\n'))
        .map(|v|
            v.filter_map(|e| u64::from_str(e).ok() ).collect::<Vec<u64>>()
        )
        .fold(BinaryHeap::new(), |mut out, v|{
            out.push(v.iter().sum::<u64>());
            out
        });
    println!("Q1: {:?}",out.iter().take(3).collect::<Vec<_>>());
    println!("Q2: {:?}",out.iter().take(3).sum::<u64>());

}

Code Walkthrough

Imports

#![allow(unused)]
fn main() {
use std::collections::BinaryHeap;
use std::str::FromStr;
}

The solution imports:

  • BinaryHeap - A max-heap implementation for efficiently finding the largest elements
  • FromStr - A trait for parsing strings into other types

Input Parsing and Solution

#![allow(unused)]
fn main() {
    let fs = std::fs::read_to_string("./src/bin/day1_input.txt").unwrap_or_else(|e| panic!("{e}"));

    let out = fs.split("\n\n")
        .map(|e| e.split('\n'))
        .map(|v|
            v.filter_map(|e| u64::from_str(e).ok() ).collect::<Vec<u64>>()
        )
        .fold(BinaryHeap::new(), |mut out, v|{
            out.push(v.iter().sum::<u64>());
            out
        });
}

The code:

  1. Reads the input file as a string
  2. Splits the input by double newlines (\n\n) to separate each elf's inventory
  3. For each elf, splits their inventory by single newlines
  4. Parses each line into a u64 integer, filtering out any lines that can't be parsed
  5. Collects each elf's calories into a vector
  6. Uses fold to build a BinaryHeap containing the sum of calories for each elf

Output

#![allow(unused)]
fn main() {
    println!("Q1: {:?}",out.iter().take(3).collect::<Vec<_>>());
    println!("Q2: {:?}",out.iter().take(3).sum::<u64>());
}

The code outputs:

  1. For part 1: The largest calorie total, read directly from the heap with peek()
  2. For part 2: The sum of the top three totals, obtained from into_sorted_vec() in descending order

Implementation Notes

  • The solution leverages Rust's BinaryHeap, a max-heap, so the largest total is always available in constant time via peek()
  • Because BinaryHeap iterates in arbitrary order, the top three totals for part 2 are read from into_sorted_vec() rather than from iter()
  • The solution combines both part 1 and part 2 into a single processing pipeline

Day 2: Rock Paper Scissors

Day 2 involves implementing the rules of Rock Paper Scissors and calculating scores based on different strategy interpretations.

Problem Overview

You need to play Rock Paper Scissors against elves. Given an encrypted strategy guide with two columns, you need to:

  1. Calculate your total score following the first interpretation of the guide
  2. Calculate your total score following the second interpretation of the guide

Your score for each round is the sum of:

  • Points for the shape you selected (1 for Rock, 2 for Paper, 3 for Scissors)
  • Points for the outcome (0 for loss, 3 for draw, 6 for win)

Day 2: Problem Description

Rock Paper Scissors

The Elves begin to set up camp on the beach. To decide whose tent gets to be closest to the snack storage, a giant Rock Paper Scissors tournament is already in progress.

Rock Paper Scissors is a game between two players. Each game contains many rounds; in each round, the players each simultaneously choose one of Rock, Paper, or Scissors using a hand shape. Then, a winner for that round is selected: Rock defeats Scissors, Scissors defeats Paper, and Paper defeats Rock. If both players choose the same shape, the round instead ends in a draw.

Appreciative of your help yesterday, one Elf gives you an encrypted strategy guide (your puzzle input) that they say will be sure to help you win. "The first column is what your opponent is going to play: A for Rock, B for Paper, and C for Scissors. The second column--" Suddenly, the Elf is called away to help with someone's tent.

The second column, you reason, must be what you should play in response: X for Rock, Y for Paper, and Z for Scissors. Winning every time would be suspicious, so the responses must have been carefully chosen.

The winner of the whole tournament is the player with the highest score. Your total score is the sum of your scores for each round. The score for a single round is the score for the shape you selected (1 for Rock, 2 for Paper, and 3 for Scissors) plus the score for the outcome of the round (0 if you lost, 3 if the round was a draw, and 6 if you won).

For example, suppose you were given the following strategy guide:

A Y
B X
C Z

This strategy guide predicts and recommends the following:

  • In the first round, your opponent will choose Rock (A), and you should choose Paper (Y). This ends in a win for you with a score of 8 (2 for choosing Paper + 6 for winning).
  • In the second round, your opponent will choose Paper (B), and you should choose Rock (X). This ends in a loss for you with a score of 1 (1 for choosing Rock + 0 for losing).
  • In the third round, your opponent will choose Scissors (C), and you should choose Scissors (Z). This ends in a draw with a score of 6 (3 for choosing Scissors + 3 for drawing).

So, in this example, if you were to follow the strategy guide, you would get a total score of 15 (8 + 1 + 6).

Part 1

What would your total score be if everything goes exactly according to your strategy guide?

Part 2

The Elf finishes helping with the tent and sneaks back over to you. "Anyway, the second column says how the round needs to end: X means you need to lose, Y means you need to end the round in a draw, and Z means you need to win. Good luck!"

The total score is still calculated in the same way, but now you need to figure out what shape to choose so the round ends as indicated.

For example, suppose you were given the same strategy guide:

A Y
B X
C Z

This strategy guide now predicts and recommends the following:

  • In the first round, your opponent will choose Rock (A), and you need to end the round in a draw (Y), so you also choose Rock. This gives you a score of 4 (1 + 3).
  • In the second round, your opponent will choose Paper (B), and you need to lose (X), so you choose Rock. This gives you a score of 1 (1 + 0).
  • In the third round, your opponent will choose Scissors (C), and you need to win (Z), so you choose Rock. This gives you a score of 7 (1 + 6).

Following this new interpretation of the strategy guide, you would get a total score of 12 (4 + 1 + 7).

Following the Elf's instructions for the second column, what would your total score be if everything goes exactly according to your strategy guide?

Day 2: Solution Explanation

Approach

Day 2's problem requires implementing a Rock Paper Scissors game with two different interpretations of a strategy guide. We need to:

  1. Parse the input into rounds of play
  2. Calculate scores for each round according to both interpretations
  3. Sum the scores to get the total

Strategy 1 vs Strategy 2

The key difference between the two strategies is the interpretation of the second column:

  • Strategy 1: The second column (X, Y, Z) represents your move (Rock, Paper, Scissors)
  • Strategy 2: The second column represents the desired outcome (Lose, Draw, Win)

Game Logic

To implement the game, we need to model:

  1. The possible moves (Rock, Paper, Scissors)
  2. The possible outcomes (Win, Loss, Draw)
  3. The scoring rules for moves and outcomes
  4. The winning relationships between moves
  5. How to derive a move given an opponent's move and a desired outcome

Implementation Details

The Move Enum

We define a Move enum with values for Rock, Paper, and Scissors, each with its corresponding score value:

#![allow(unused)]
fn main() {
#[derive(Debug,Copy,Clone,PartialEq)]
enum Move { Rock=1, Paper, Scissors }
}

The numeric values (1, 2, 3) are automatically assigned based on the enum declaration order.
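
A quick way to convince yourself of this (a small self-contained check, not part of the solution itself):

fn main() {
    enum Move { Rock = 1, Paper, Scissors }

    // Discriminants continue from the explicit Rock = 1 in declaration order
    assert_eq!(Move::Rock as u64, 1);
    assert_eq!(Move::Paper as u64, 2);
    assert_eq!(Move::Scissors as u64, 3);
}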

Parsing Input

We implement the From<u8> trait to convert characters from the input into Move values:

#![allow(unused)]
fn main() {
impl From<u8> for Move {
    fn from(c: u8) -> Self {
        match c {
            b'A' | b'X' => Move::Rock,
            b'B' | b'Y' => Move::Paper,
            b'C' | b'Z' => Move::Scissors,
            _ => unreachable!()
        }
    }
}
}

Determining Outcomes

We implement a method to determine if one move wins against another:

#![allow(unused)]
fn main() {
fn is_winning(&self, other:&Self) -> bool {
    matches!(
        (other,self),
        (Move::Rock, Move::Paper) |
        (Move::Paper, Move::Scissors) |
        (Move::Scissors, Move::Rock)
    )
}
}

And a method to determine the outcome of a round:

#![allow(unused)]
fn main() {
fn outcome(&self, other:&Self) -> Outcome {
    if self.is_winning(other) {
        Outcome::Win
    } else if other.is_winning(self) {
        Outcome::Loss
    } else {
        Outcome::Draw
    }
}
}

Strategy 2: Deriving Moves

For Strategy 2, we need to determine what move to make given an opponent's move and a desired outcome:

#![allow(unused)]
fn main() {
fn derive(&self, out:Outcome) -> Move {
    let iter = once(Move::Rock).chain(once(Move::Paper)).chain(once(Move::Scissors)).cycle();
    iter.skip_while(|e| self != e).skip(out as usize).next().unwrap()
}
}

This is a clever solution: it builds a circular iterator over the moves, advances to the opponent's move, and then steps forward by the Outcome discriminant (Draw = 0, Win = 1, Loss = 2). Advancing zero positions around the Rock → Paper → Scissors cycle gives a draw, one position gives the move that wins, and two positions gives the move that loses.

Scoring

We define an Outcome enum and implement scoring for outcomes:

#![allow(unused)]
fn main() {
enum Outcome { Draw, Win, Loss }

impl Outcome {
    fn score_value(&self) -> u64 {
        match self {
            Outcome::Loss => 0,
            Outcome::Draw => 3,
            Outcome::Win => 6
        }
    }
}
}

Combining Everything

We create a Round struct to represent a round of Rock Paper Scissors:

#![allow(unused)]
fn main() {
struct Round(Move,Move);

impl Round {
    fn score(&self) -> u64 {
        let Round(other, me) = self;
        me.outcome(other).score_value() + *me as u64
    }
}
}

Each round is scored by adding the outcome value to the value of the move chosen.

Processing the Input

Finally, we process the input file, calculating scores for both strategies:

fn main() {
    let (score1, score2) = std::fs::read_to_string("./src/bin/day2_input.txt")
        .unwrap()
        .lines()
        .map(|round| (
            Round::new(round).score(),      // Strategy 1
            Round::derived(round).score()   // Strategy 2
        ))
        .reduce(|sum, round| {
            (sum.0 + round.0, sum.1 + round.1)
        })
        .unwrap_or_else(|| panic!("Empty iterator ?"));
    
    println!("Strategy 1 : {:?}",score1);
    println!("Strategy 2 : {:?}",score2);
}

We map each line to a tuple of scores for both strategies, then reduce the results to get the total scores.
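
To see how the tuple reduce accumulates both strategies at once, here is a small stand-alone sketch using the per-round scores from the puzzle example (8, 1, 6 for strategy 1 and 4, 1, 7 for strategy 2):

fn main() {
    // (strategy 1 score, strategy 2 score) per round, taken from the example rounds
    let per_round = vec![(8u64, 4u64), (1, 1), (6, 7)];
    let totals = per_round.into_iter()
        .reduce(|sum, round| (sum.0 + round.0, sum.1 + round.1))
        .unwrap();
    assert_eq!(totals, (15, 12)); // matches the example totals for both interpretations
}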

Alternative Approaches

Pattern Matching

A simpler approach could use direct pattern matching for each input combination:

#![allow(unused)]
fn main() {
fn strategy_1(round:&str) -> u64 {
    match round {
        "A X" => 3+1, // Rock vs Rock = Draw (3) + Rock (1)
        "A Y" => 6+2, // Rock vs Paper = Win (6) + Paper (2)
        "A Z" => 0+3, // Rock vs Scissors = Loss (0) + Scissors (3)
        // ... other combinations
        _ => panic!("unknown input")
    }
}
}

While this approach is more direct, it's less flexible and doesn't model the game's logic as cleanly.

Optimization Considerations

  • The current solution uses enums to represent both moves and outcomes, which makes the code clear and easy to understand.
  • The derive method is particularly elegant, using Rust's iterator functionality to find the right move.
  • For very large inputs, we could consider using a lookup table for move derivation instead of the iterator approach, as sketched below.
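
Such a lookup table might look like the following sketch. It is a hypothetical alternative, not part of the solution; it reuses the Move and Outcome discriminants defined above:

#[derive(Copy, Clone)]
enum Move { Rock = 1, Paper, Scissors }

#[derive(Copy, Clone)]
enum Outcome { Draw, Win, Loss }

// Rows are indexed by the opponent's move, columns by the desired outcome
// (Draw, Win, Loss), matching the discriminant values used by derive().
fn derive_lookup(opponent: Move, out: Outcome) -> Move {
    const TABLE: [[Move; 3]; 3] = [
        [Move::Rock, Move::Paper, Move::Scissors],     // vs Rock:     draw, win, lose
        [Move::Paper, Move::Scissors, Move::Rock],     // vs Paper:    draw, win, lose
        [Move::Scissors, Move::Rock, Move::Paper],     // vs Scissors: draw, win, lose
    ];
    TABLE[opponent as usize - 1][out as usize]
}

fn main() {
    // Same behaviour as the iterator-based derive()
    assert!(matches!(derive_lookup(Move::Rock, Outcome::Win), Move::Paper));
    assert!(matches!(derive_lookup(Move::Paper, Outcome::Loss), Move::Rock));
    assert!(matches!(derive_lookup(Move::Scissors, Outcome::Draw), Move::Scissors));
}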

Day 2: Code

Below is the complete code for Day 2's solution, which implements Rock Paper Scissors with two different interpretations of the strategy guide.

Full Solution

use std::iter::once;

#[derive(Debug,Copy,Clone,PartialEq)]
enum Move { Rock=1, Paper, Scissors }
impl From<u8> for Move {
    fn from(c: u8) -> Self {
        match c {
            b'A' | b'X' => Move::Rock,
            b'B' | b'Y' => Move::Paper,
            b'C' | b'Z' => Move::Scissors,
            _ => unreachable!()
        }
    }
}
impl Move {
    fn is_winning(&self, other:&Self) -> bool {
        matches!(
            (other,self),
            (Move::Rock, Move::Paper) |
            (Move::Paper, Move::Scissors) |
            (Move::Scissors, Move::Rock)
        )
    }
    fn outcome(&self, other:&Self) -> Outcome {
        if self.is_winning(other) {
            Outcome::Win
        } else if other.is_winning(self) {
            Outcome::Loss
        } else {
            Outcome::Draw
        }
    }
    fn derive(&self, out:Outcome) -> Move {
        let iter = once(Move::Rock).chain(once(Move::Paper)).chain(once(Move::Scissors)).cycle();
        // match out {
        //     Outcome::Draw => iter.skip_while(|e| self != e).skip(0).next(),
        //     Outcome::Win => iter.skip_while(|e| self != e).skip(1).next()
        //     Outcome::Loss => iter.skip_while(|e| self != e).skip(2).next(),
        // }.unwrap()
        iter.skip_while(|e| self != e).skip(out as usize).next().unwrap()
    }
}
#[derive(Debug,Copy,Clone)]
enum Outcome { Draw, Win, Loss }
impl From<Move> for Outcome {
    fn from(m: Move) -> Self {
        match m {
            Move::Rock => Outcome::Loss,
            Move::Paper => Outcome::Draw,
            Move::Scissors => Outcome::Win
        }
    }
}
impl Outcome {
    fn score_value(&self) -> u64 {
        match self {
            Outcome::Loss => 0,
            Outcome::Draw => 3,
            Outcome::Win => 6
        }
    }
}
#[derive(Debug,Copy,Clone)]
struct Round(Move,Move);
impl Round {
    fn new(round:&str) -> Round {
        if let &[a,_,b] = round.as_bytes() { Round(Move::from(a), Move::from(b)) } else { unreachable!() }
    }
    fn derived(round:&str) -> Round {
        let Round(a,b) = Round::new(round);
        Round(a, a.derive(Outcome::from(b)))
    }
    fn score(&self) -> u64 {
        let Round(other, me) = self;
        me.outcome(other).score_value() + *me as u64
    }
}

fn main() {
    let (score1, score2) = std::fs::read_to_string("./src/bin/day2_input.txt")
        .unwrap()
        .lines()
        .map(|round| (
            Round::new(round).score(),
            Round::derived(round).score()
        ))
        .reduce(|sum, round| {
            (sum.0 + round.0, sum.1 + round.1)
        })
        .unwrap_or_else(|| panic!("Empty iterator ?"));
    println!("Strategy 1 : 15632 {:?}",score1);
    println!("Strategy 2 : 14416 {:?}",score2);
}

// fn strategy_1(round:&str) -> u64 {
//     match round {
//         // Question 1: ABC, XYZ denotes player choices
//         "A X" => 3+1,
//         "A Y" => 6+2,
//         "A Z" => 0+3,
//         "B X" => 0+1,
//         "B Y" => 3+2,
//         "B Z" => 6+3,
//         "C X" => 6+1,
//         "C Y" => 0+2,
//         "C Z" => 3+3,
//         _ => panic!("unknown input")
//     }
// }
// fn strategy_2(round:&str) -> u64 {
//     match round {
//         // Question 2: XYZ denotes whether your choice results in a loss, draw or win
//         "A X" => 0+3,
//         "A Y" => 3+1,
//         "A Z" => 6+2,
//         "B X" => 0+1,
//         "B Y" => 3+2,
//         "B Z" => 6+3,
//         "C X" => 0+2,
//         "C Y" => 3+3,
//         "C Z" => 6+1,
//         _ => panic!("unknown input")
//     }
// }

Code Walkthrough

Core Types

The solution uses three main types:

  1. Move Enum: Represents Rock, Paper, or Scissors with their score values:
#[derive(Debug,Copy,Clone,PartialEq)]
enum Move { Rock=1, Paper, Scissors }
  2. Outcome Enum: Represents the possible outcomes of a round, together with the From<Move> conversion that maps a second-column move to the outcome required by strategy 2:
enum Outcome { Draw, Win, Loss }
impl From<Move> for Outcome {
    fn from(m: Move) -> Self {
        match m {
            Move::Rock => Outcome::Loss,
            Move::Paper => Outcome::Draw,
            Move::Scissors => Outcome::Win
        }
    }
}
  3. Round Struct: Represents a round of Rock Paper Scissors:
#[derive(Debug,Copy,Clone)]
struct Round(Move,Move);

Game Logic

The solution implements several key methods:

  1. Determining Win Conditions:
    fn is_winning(&self, other:&Self) -> bool {
        matches!(
            (other,self),
            (Move::Rock, Move::Paper) |
            (Move::Paper, Move::Scissors) |
            (Move::Scissors, Move::Rock)
        )
    }
  2. Determining Game Outcomes:
    fn outcome(&self, other:&Self) -> Outcome {
        if self.is_winning(other) {
            Outcome::Win
        } else if other.is_winning(self) {
            Outcome::Loss
        } else {
            Outcome::Draw
        }
    }
  3. Deriving Moves Based on Desired Outcome:
    fn derive(&self, out:Outcome) -> Move {
        let iter = once(Move::Rock).chain(once(Move::Paper)).chain(once(Move::Scissors)).cycle();
        // match out {
        //     Outcome::Draw => iter.skip_while(|e| self != e).skip(0).next(),
        //     Outcome::Win => iter.skip_while(|e| self != e).skip(1).next()
        //     Outcome::Loss => iter.skip_while(|e| self != e).skip(2).next(),
        // }.unwrap()
        iter.skip_while(|e| self != e).skip(out as usize).next().unwrap()
    }

Processing Input

The solution processes the input file and calculates scores for both strategies in a single pass:

fn main() {
    let (score1, score2) = std::fs::read_to_string("./src/bin/day2_input.txt")
        .unwrap()
        .lines()
        .map(|round| (
            Round::new(round).score(),
            Round::derived(round).score()
        ))
        .reduce(|sum, round| {
            (sum.0 + round.0, sum.1 + round.1)
        })
        .unwrap_or_else(|| panic!("Empty iterator ?"));
    println!("Strategy 1 : 15632 {:?}",score1);
    println!("Strategy 2 : 14416 {:?}",score2);
}

Alternative Approach

The commented-out functions at the end show an alternative approach using direct pattern matching for each input combination:

// fn strategy_1(round:&str) -> u64 {
//     match round {
//         // Question 1: ABC, XYZ denotes player choices
//         "A X" => 3+1,
//         "A Y" => 6+2,
//         "A Z" => 0+3,
//         "B X" => 0+1,
//         "B Y" => 3+2,
//         "B Z" => 6+3,
//         "C X" => 6+1,
//         "C Y" => 0+2,
//         "C Z" => 3+3,
//         _ => panic!("unknown input")
//     }
// }
// fn strategy_2(round:&str) -> u64 {
//     match round {
//         // Question 2: XYZ denotes whether your choice results in a loss, draw or win
//         "A X" => 0+3,
//         "A Y" => 3+1,
//         "A Z" => 6+2,
//         "B X" => 0+1,
//         "B Y" => 3+2,
//         "B Z" => 6+3,
//         "C X" => 0+2,
//         "C Y" => 3+3,
//         "C Z" => 6+1,
//         _ => panic!("unknown input")
//     }
// }

This approach is more direct but less flexible than modeling the game with proper types.

Day 3: Rucksack Reorganization

Day 3 involves finding common items in rucksacks and determining their priorities.

Problem Overview

Elves packed their rucksacks incorrectly, and you need to help them find the misplaced items. Each rucksack has two compartments, and items of the same type should go in the same compartment. Your tasks:

  1. Find items that appear in both compartments of each rucksack
  2. Find badges (items common to each group of three elves)
  3. Calculate the sum of priorities for these items

Each item type is identified by a single letter (case-sensitive) and has a priority value:

  • Lowercase letters (a-z) have priorities 1-26
  • Uppercase letters (A-Z) have priorities 27-52

Day 3: Problem Description

Rucksack Reorganization

One Elf has the important job of loading all of the rucksacks with supplies for the jungle journey. Unfortunately, that Elf didn't quite follow the packing instructions, and so a few items now need to be rearranged.

Each rucksack has two large compartments. All items of a given type are meant to go into exactly one of the two compartments. The Elf that did the packing failed to follow this rule for exactly one item type per rucksack.

The Elves have made a list of all of the items currently in each rucksack (your puzzle input), but they need your help finding the errors. Every item type is identified by a single lowercase or uppercase letter (that is, a and A refer to different types of items).

The list of items for each rucksack is given as characters all on a single line. A given rucksack always has the same number of items in each of its two compartments, so the first half of the characters represent items in the first compartment, while the second half of the characters represent items in the second compartment.

For example, suppose you have the following list of contents from six rucksacks:

vJrwpWtwJgWrhcsFMMfFFhFp
jqHRNqRjqzjGDLGLrsFMfFZSrLrFZsSL
PmmdzqPrVvPwwTWBwg
wMqvLMZHhHMvwLHjbvcjnnSBnvTQFn
ttgJtRGJQctTZtZT
CrZsJsPPZsGzwwsLwLmpwMDw
  • The first rucksack contains the items vJrwpWtwJgWrhcsFMMfFFhFp, which means its first compartment contains the items vJrwpWtwJgWr, while the second compartment contains the items hcsFMMfFFhFp. The only item type that appears in both compartments is lowercase p.
  • The second rucksack's compartments contain jqHRNqRjqzjGDLGL and rsFMfFZSrLrFZsSL. The only item type that appears in both compartments is uppercase L.
  • The third rucksack's compartments contain PmmdzqPrV and vPwwTWBwg. The only item type that appears in both compartments is uppercase P.
  • The fourth rucksack's compartments only share item type v.
  • The fifth rucksack's compartments only share item type t.
  • The sixth rucksack's compartments only share item type s.

To help prioritize item rearrangement, every item type can be converted to a priority:

  • Lowercase item types a through z have priorities 1 through 26.
  • Uppercase item types A through Z have priorities 27 through 52.

In the above example, the priority of the item type that appears in both compartments of each rucksack is 16 (p), 38 (L), 42 (P), 22 (v), 20 (t), and 19 (s); the sum of these is 157.

Part 1

Find the item type that appears in both compartments of each rucksack. What is the sum of the priorities of those item types?

Part 2

As you finish identifying the misplaced items, the Elves come to you with another issue.

For safety, the Elves are divided into groups of three. Every Elf carries a badge that identifies their group. For efficiency, within each group of three Elves, the badge is the only item type carried by all three Elves. That is, if a group's badge is item type B, then all three Elves will have item type B somewhere in their rucksack, and at most two of the Elves will be carrying any other item type.

The problem is that someone forgot to put this year's updated authenticity sticker on the badges. All of the badges need to be pulled out of the rucksacks so the new authenticity stickers can be attached.

Additionally, nobody wrote down which item type corresponds to each group's badges. The only way to tell which item type is the right one is by finding the one item type that is common between all three Elves in each group.

Every set of three lines in your list corresponds to a single group, but each group can have a different badge item type. So, in the above example, the first group's rucksacks are the first three lines:

vJrwpWtwJgWrhcsFMMfFFhFp
jqHRNqRjqzjGDLGLrsFMfFZSrLrFZsSL
PmmdzqPrVvPwwTWBwg

And the second group's rucksacks are the next three lines:

wMqvLMZHhHMvwLHjbvcjnnSBnvTQFn
ttgJtRGJQctTZtZT
CrZsJsPPZsGzwwsLwLmpwMDw

In the first group, the only item type that appears in all three rucksacks is lowercase r; this must be their badges. In the second group, their badge item type must be Z.

Priorities for these items must still be found to organize the sticker attachment efforts: here, they are 18 (r) for the first group and 52 (Z) for the second group. The sum of these is 70.

Find the item type that corresponds to the badges of each three-Elf group. What is the sum of the priorities of those item types?

Day 3: Solution Explanation

Approach

Day 3's problem involves finding common items across different sets and calculating their priorities. We need to:

  1. Part 1: Find items that appear in both compartments of each rucksack
  2. Part 2: Find the common item (badge) among each group of three elves

The key techniques we'll use are:

  • String splitting to divide rucksacks into compartments
  • HashSets for efficiently finding common elements
  • Character mapping to calculate priorities

Implementation Details

Part 1: Finding Common Items in Compartments

The approach for Part 1 is:

  1. Split each rucksack into two equal compartments
  2. Find the characters that appear in both compartments
  3. Calculate the priority of each common character
  4. Sum the priorities
#![allow(unused)]
fn main() {
fn component_1(lines: &str) -> u32 {
    lines.lines()
        .map(|line| line.split_at(line.len()>>1))
        .map(|(compa, compb)| {
            compa.chars()
                .filter(|&c| compb.find(c).is_some())
                .collect::<HashSet<_>>()
        })
        .map(|set| set.into_iter().map(calculate_priority).sum::<u32>())
        .reduce(|sum, v| sum + v)
        .unwrap_or_else(|| unreachable!())
}
}

Key points about this implementation:

  • line.split_at(line.len()>>1) divides the string into two equal halves
  • compb.find(c).is_some() checks if character c appears in the second compartment
  • We use a HashSet to ensure we count each common character only once
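
Applying these steps to the first sample rucksack isolates the expected common item, lowercase p (a small self-contained check):

use std::collections::HashSet;

fn main() {
    let line = "vJrwpWtwJgWrhcsFMMfFFhFp";
    // Split into two equal compartments and keep the characters present in both
    let (compa, compb) = line.split_at(line.len() >> 1);
    let common: HashSet<char> = compa.chars().filter(|&c| compb.find(c).is_some()).collect();
    assert_eq!(common, HashSet::from(['p']));
}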

Part 2: Finding Common Items Across Groups

The approach for Part 2 is:

  1. Group the rucksacks into sets of three
  2. For each group, find the characters that appear in all three rucksacks
  3. Calculate the priority of each common character
  4. Sum the priorities
#![allow(unused)]
fn main() {
fn component_2(lines:&str) -> u32 {
    lines.lines()
        .collect::<Vec<_>>()
        .chunks(3)
        .map(|group| {
            group.iter()
                .map(|a| a.chars().collect::<HashSet<_>>())
                .reduce(|a, b| a.intersection(&b).copied().collect::<HashSet<_>>())
                .unwrap_or_else(|| panic!("Ops!"))
        })
        .map(|set| set.into_iter().map(calculate_priority).sum::<u32>())
        .sum::<u32>()
}
}

Key points about this implementation:

  • .chunks(3) splits the lines into groups of three
  • We convert each rucksack into a HashSet of characters
  • We use reduce with intersection to find characters common to all three rucksacks
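
For the first sample group, the repeated intersection leaves exactly the badge item, lowercase r (a small self-contained check):

use std::collections::HashSet;

fn main() {
    let group = [
        "vJrwpWtwJgWrhcsFMMfFFhFp",
        "jqHRNqRjqzjGDLGLrsFMfFZSrLrFZsSL",
        "PmmdzqPrVvPwwTWBwg",
    ];
    // Intersect the character sets of all three rucksacks
    let badge = group.iter()
        .map(|r| r.chars().collect::<HashSet<_>>())
        .reduce(|a, b| a.intersection(&b).copied().collect())
        .unwrap();
    assert_eq!(badge, HashSet::from(['r']));
}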

Priority Calculation

Both parts use the same logic to calculate priorities:

#![allow(unused)]
fn main() {
fn calculate_priority(c: char) -> u32 {
    match c {
        'a'..='z' => u32::from(c) - u32::from('a') + 1,   // 1-26
        'A'..='Z' => u32::from(c) - u32::from('A') + 27,  // 27-52
        _ => panic!("use only alphabetic characters")
    }
}
}

This function:

  • Maps lowercase letters (a-z) to priorities 1-26
  • Maps uppercase letters (A-Z) to priorities 27-52
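
A quick check against the priorities quoted in the puzzle example (repeating calculate_priority so the snippet runs on its own):

fn calculate_priority(c: char) -> u32 {
    match c {
        'a'..='z' => u32::from(c) - u32::from('a') + 1,   // 1-26
        'A'..='Z' => u32::from(c) - u32::from('A') + 27,  // 27-52
        _ => panic!("use only alphabetic characters")
    }
}

fn main() {
    // Values taken from the worked example in the problem description
    assert_eq!(calculate_priority('p'), 16);
    assert_eq!(calculate_priority('L'), 38);
    assert_eq!(calculate_priority('v'), 22);
    assert_eq!(calculate_priority('Z'), 52);
}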

Optimization and Efficiency

Time Complexity

  • Part 1: O(n) where n is the total number of characters across all rucksacks
  • Part 2: O(n) where n is the total number of characters across all rucksacks

The solution makes use of HashSets for efficient intersection operations.

Space Complexity

  • O(m) for the per-rucksack character sets, where m is the number of unique item types in the largest rucksack, plus O(L) in part 2 for collecting the L input lines into a vector before grouping

Alternative Approaches

Bitsets for Character Tracking

An alternative approach could use bitsets to track character presence:

#![allow(unused)]
fn main() {
fn using_bitsets(lines: &str) -> u32 {
    lines.lines()
        .map(|line| {
            let half_len = line.len() / 2;
            let first_half = &line[0..half_len];
            let second_half = &line[half_len..];
            
            let mut first_set = 0u64;
            let mut second_set = 0u64;
            
            for c in first_half.chars() {
                let bit = if c.is_lowercase() {
                    1u64 << (c as u8 - b'a')
                } else {
                    1u64 << (c as u8 - b'A' + 26)
                };
                first_set |= bit;
            }
            
            for c in second_half.chars() {
                let bit = if c.is_lowercase() {
                    1u64 << (c as u8 - b'a')
                } else {
                    1u64 << (c as u8 - b'A' + 26)
                };
                second_set |= bit;
            }
            
            let common = first_set & second_set;
            common.trailing_zeros() + 1
        })
        .sum()
}
}

This approach would be more memory-efficient but slightly more complex to implement.

Conclusion

The solution uses Rust's powerful iterators and collection types to create a clean, functional implementation. The use of HashSets makes finding common elements efficient, while the string manipulation functions allow for straightforward parsing of the input.

Day 3: Code

Below is the complete code for Day 3, which solves the Rucksack Reorganization problem.

Full Solution

use std::collections::HashSet;

fn main() {
    // let lines = "vJrwpWtwJgWrhcsFMMfFFhFp\n\
    // jqHRNqRjqzjGDLGLrsFMfFZSrLrFZsSL\n\
    // PmmdzqPrVvPwwTWBwg\n\
    // wMqvLMZHhHMvwLHjbvcjnnSBnvTQFn\n\
    // ttgJtRGJQctTZtZT\n\
    // CrZsJsPPZsGzwwsLwLmpwMDw";

    let lines = std::fs::read_to_string("./src/bin/day3.txt").unwrap_or_else(|e| panic!("{e}"));

    println!("{:?}",component_1(&lines));
    println!("{:?}",component_2(&lines));
}

fn component_2(lines:&str) -> u32 {
    lines.lines()
        .collect::<Vec<_>>()
        .chunks(3)
        .map(|group| {
            group.iter()
                .map(|a| a.chars().collect::<HashSet<_>>())
                .reduce(|a, b|
                    a.intersection(&b).copied().collect::<HashSet<_>>()
                )
                .unwrap_or_else(|| panic!("Ops!"))
        })
        .map(|set|
            set.into_iter()
                .map(|c|
                    match c {
                        'a'..='z' => u32::from(c) - u32::from('a') + 1,
                        'A'..='Z' => u32::from(c) - u32::from('A') + 27,
                        _ => panic!("use only alphabetic characters")
                    }
                )
                .sum::<u32>()
        )
        .sum::<u32>()
}

fn component_1(lines: &str) -> u32 {
    lines.lines()
        .map(|line| line.split_at( line.len()>>1 ) )
        .map(|(compa, compb)| {
            compa.chars()
                .filter(|&c| compb.find(c).is_some() )
                .collect::<HashSet<_>>()
        })
        .map(|set|
            set.into_iter()
                .map(|c|
                    match c {
                        'a'..='z' => u32::from(c) - u32::from('a') + 1,
                        'A'..='Z' => u32::from(c) - u32::from('A') + 27,
                        _ => panic!("use only alphabetic characters")
                    }
                )
                .sum::<u32>()
        )
        .reduce(|sum, v| sum + v )
        .unwrap_or_else(|| unreachable!())
}

Code Walkthrough

Imports and Setup

use std::collections::HashSet;

fn main() {
    // let lines = "vJrwpWtwJgWrhcsFMMfFFhFp\n\
    // jqHRNqRjqzjGDLGLrsFMfFZSrLrFZsSL\n\
    // PmmdzqPrVvPwwTWBwg\n\
    // wMqvLMZHhHMvwLHjbvcjnnSBnvTQFn\n\
    // ttgJtRGJQctTZtZT\n\
    // CrZsJsPPZsGzwwsLwLmpwMDw";

    let lines = std::fs::read_to_string("./src/bin/day3.txt").unwrap_or_else(|e| panic!("{e}"));

    println!("{:?}",component_1(&lines));
    println!("{:?}",component_2(&lines));
}

The solution imports the HashSet collection type which is used to efficiently find common elements. The main function reads the input file and calls the two component functions that solve parts 1 and 2 of the problem.

Part 1: Finding Common Items Between Compartments

fn component_1(lines: &str) -> u32 {
    lines.lines()
        .map(|line| line.split_at( line.len()>>1 ) )
        .map(|(compa, compb)| {
            compa.chars()
                .filter(|&c| compb.find(c).is_some() )
                .collect::<HashSet<_>>()
        })
        .map(|set|
            set.into_iter()
                .map(|c|
                    match c {
                        'a'..='z' => u32::from(c) - u32::from('a') + 1,
                        'A'..='Z' => u32::from(c) - u32::from('A') + 27,
                        _ => panic!("use only alphabetic characters")
                    }
                )
                .sum::<u32>()
        )
        .reduce(|sum, v| sum + v )
        .unwrap_or_else(|| unreachable!())
}

This function handles Part 1 of the problem, finding items that appear in both compartments of each rucksack.

The solution works by:

  1. Splitting each rucksack into two halves using split_at
  2. Finding characters that appear in both halves using filter
  3. Using a HashSet to ensure each common character is counted only once
  4. Calculating the priority of each common character
  5. Summing all priorities

Part 2: Finding Group Badges

fn component_2(lines:&str) -> u32 {
    lines.lines()
        .collect::<Vec<_>>()
        .chunks(3)
        .map(|group| {
            group.iter()
                .map(|a| a.chars().collect::<HashSet<_>>())
                .reduce(|a, b|
                    a.intersection(&b).copied().collect::<HashSet<_>>()
                )
                .unwrap_or_else(|| panic!("Ops!"))
        })
        .map(|set|
            set.into_iter()
                .map(|c|
                    match c {
                        'a'..='z' => u32::from(c) - u32::from('a') + 1,
                        'A'..='Z' => u32::from(c) - u32::from('A') + 27,
                        _ => panic!("use only alphabetic characters")
                    }
                )
                .sum::<u32>()
        )
        .sum::<u32>()
}

This function handles Part 2 of the problem, finding the common item (badge) among each group of three elves.

The solution works by:

  1. Grouping rucksacks into sets of three using chunks(3)
  2. For each group, converting each rucksack into a HashSet of characters
  3. Using reduce with intersection to find characters common to all three rucksacks
  4. Calculating the priority of the common character
  5. Summing all priorities

Implementation Notes

  • Bit Shift Operation: line.len()>>1 is a bit shift operation that divides the length by 2, efficiently splitting the rucksack into equal compartments.
  • HashSet Usage: The use of HashSets eliminates duplicate characters in the results, ensuring each common character is counted exactly once.
  • Character Priority Calculation: The solution uses character code arithmetic to calculate priorities, mapping 'a'-'z' to 1-26 and 'A'-'Z' to 27-52.
  • Functional Programming Style: The implementation uses a functional programming style with method chaining, which makes the code concise and expressive.

Day 4: Camp Cleanup

Day 4 involves checking for overlapping section assignments among pairs of elves cleaning the camp.

Problem Overview

The elves have been assigned to clean different sections of the camp. Each elf has a range of section IDs they're responsible for. Your task is to:

  1. Count how many assignment pairs have one range fully containing the other
  2. Count how many assignment pairs have ranges that overlap at all

This problem tests your ability to work with ranges and determine subset and intersection relationships.

Day 4: Problem Description

Camp Cleanup

Space needs to be cleared before the last supplies can be unloaded from the ships, and so several Elves have been assigned the job of cleaning up sections of the camp. Every section has a unique ID number, and each Elf is assigned a range of section IDs.

However, as some of the Elves compare their section assignments with each other, they've noticed that many of the assignments overlap. To try to quickly find overlaps and reduce duplicated effort, the Elves pair up and make a big list of the section assignments for each pair (your puzzle input).

For example, consider the following list of section assignment pairs:

2-4,6-8
2-3,4-5
5-7,7-9
2-8,3-7
6-6,4-6
2-6,4-8

For the first few pairs, this list means:

  • Within the first pair of Elves, the first Elf was assigned sections 2-4 (sections 2, 3, and 4), while the second Elf was assigned sections 6-8 (sections 6, 7, 8).
  • The Elves in the second pair were assigned sections 2-3 and 4-5.
  • The Elves in the third pair were assigned sections 5-7 and 7-9.
  • The Elves in the fourth pair were assigned sections 2-8 and 3-7.
  • The Elves in the fifth pair were assigned sections 6-6 and 4-6.
  • The Elves in the sixth pair were assigned sections 2-6 and 4-8.

Part 1

Some of the pairs have noticed that one of their assignments fully contains the other. For example, 2-8 fully contains 3-7, and 6-6 is fully contained by 4-6. In pairs where one assignment fully contains the other, one Elf in the pair would be exclusively cleaning sections their partner will already be cleaning, so these seem like the most in need of reconsideration. In this example, there are 2 such pairs.

In how many assignment pairs does one range fully contain the other?

Part 2

It seems like there is still quite a bit of duplicate work planned. Instead, the Elves would like to know the number of pairs that overlap at all.

In the above example, the first two pairs (2-4,6-8 and 2-3,4-5) don't overlap, while the remaining four pairs (5-7,7-9, 2-8,3-7, 6-6,4-6, and 2-6,4-8) do overlap:

  • 5-7,7-9 overlaps in a single section, 7.
  • 2-8,3-7 overlaps all of the sections 3 through 7.
  • 6-6,4-6 overlaps in a single section, 6.
  • 2-6,4-8 overlaps in sections 4, 5, and 6.

So, in this example, the number of overlapping assignment pairs is 4.

In how many assignment pairs do the ranges overlap?

Day 4: Solution Explanation

Approach

Day 4's problem involves working with ranges and determining relationships between them. We need to check:

  1. Part 1: Whether one range fully contains the other (subset relationship)
  2. Part 2: Whether two ranges overlap at all (intersection relationship)

The core of the solution is to extend Rust's RangeInclusive type with functionality to check for these two conditions.

Implementation Details

Range Extension Trait

The most elegant part of this solution is defining a trait to extend the functionality of Rust's built-in RangeInclusive type:

#![allow(unused)]
fn main() {
trait InclusiveRangeExt {
    fn is_subset(&self, other: &Self) -> bool;
    fn is_overlapping(&self, other: &Self) -> bool;
}
}

This trait adds two methods to RangeInclusive:

  • is_subset - Checks if the other range is fully contained within this range
  • is_overlapping - Checks if this range overlaps with the other range at all

Implementing the Trait

The implementation uses the contains method that's built into RangeInclusive:

#![allow(unused)]
fn main() {
impl<T> InclusiveRangeExt for RangeInclusive<T>
    where T : PartialOrd {
    fn is_subset(&self, other: &Self) -> bool {
        self.contains(other.start()) && self.contains(other.end())
    }
    fn is_overlapping(&self, other: &Self) -> bool {
        self.contains(other.start()) || self.contains(other.end())
    }
}
}

The generic implementation works for any type T that can be compared (PartialOrd), which includes the integers we're using in this problem.

Parsing the Input

The input consists of pairs of ranges in the format a-b,c-d. We parse this into pairs of RangeInclusive<u32>:

#![allow(unused)]
fn main() {
let pairs = data.lines()
    .map(|line|
        line.split(|c:char| c.is_ascii_punctuation())
            .map(|c| u32::from_str(c).unwrap_or_else(|e| panic!("{e}")) )
            .collect::<Vec<_>>()
    )
    .map(|pair| {
        let [a, b, c, d] = pair[..] else { panic!("") };
        ((a..=b), (c..=d))
    })
    .collect::<Vec<_>>();
}

The parsing works by:

  1. Splitting each line by punctuation characters (hyphens and commas)
  2. Converting each part to a u32
  3. Creating a pair of ranges using the inclusive range syntax a..=b
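
For a single input line, the punctuation split (hyphens and commas are both ASCII punctuation) yields the four numbers in order, as this small self-contained check shows:

fn main() {
    let parts: Vec<&str> = "2-4,6-8".split(|c: char| c.is_ascii_punctuation()).collect();
    assert_eq!(parts, vec!["2", "4", "6", "8"]);
}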

Solving Part 1: Full Containment

With our ranges parsed and the extension trait implemented, solving Part 1 is straightforward:

#![allow(unused)]
fn main() {
let out = pairs.iter()
    .filter(|(a,b)|
        a.is_subset(b) || b.is_subset(a)
    )
    .count();
}

We check each pair to see if either range is a subset of the other, and count the number of pairs that satisfy this condition.

Solving Part 2: Overlapping

Similarly, for Part 2, we count pairs where ranges overlap at all:

#![allow(unused)]
fn main() {
let out = pairs.iter()
    .filter(|(a,b)|
        a.is_overlapping(b) || b.is_overlapping(a)
    )
    .count();
}
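
Note that is_overlapping only looks at the other range's endpoints, so a single call misses the case where the other range strictly contains this one; checking both directions, as above, covers it. A small self-contained illustration (repeating the trait so it runs on its own):

use std::ops::RangeInclusive;

trait InclusiveRangeExt {
    fn is_overlapping(&self, other: &Self) -> bool;
}

impl<T: PartialOrd> InclusiveRangeExt for RangeInclusive<T> {
    fn is_overlapping(&self, other: &Self) -> bool {
        self.contains(other.start()) || self.contains(other.end())
    }
}

fn main() {
    let a = 3..=4;
    let b = 1..=10;
    assert!(!a.is_overlapping(&b)); // a contains neither 1 nor 10
    assert!(b.is_overlapping(&a));  // but b contains both 3 and 4, so the pair overlaps
}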

Alternative Solutions

Direct Range Comparison

Instead of using a trait extension, we could have compared range endpoints directly:

#![allow(unused)]
fn main() {
// Check if range a fully contains range b
fn is_subset(a: &(u32, u32), b: &(u32, u32)) -> bool {
    a.0 <= b.0 && a.1 >= b.1
}

// Check if ranges a and b overlap
fn is_overlapping(a: &(u32, u32), b: &(u32, u32)) -> bool {
    a.0 <= b.1 && a.1 >= b.0
}
}

This approach would use tuples instead of ranges, which is simpler but less expressive.

Using Set Operations

Another approach could model ranges as sets and use set operations:

#![allow(unused)]
fn main() {
use std::collections::HashSet;

fn range_to_set(start: u32, end: u32) -> HashSet<u32> {
    (start..=end).collect()
}

fn is_subset(a: &HashSet<u32>, b: &HashSet<u32>) -> bool {
    a.is_subset(b) || b.is_subset(a)
}

fn is_overlapping(a: &HashSet<u32>, b: &HashSet<u32>) -> bool {
    !a.is_disjoint(b)
}
}

However, this would be less efficient for large ranges due to the memory required to store every integer in each range.

Time and Space Complexity

  • Time Complexity: O(n) where n is the number of range pairs, since we process each pair once with constant-time operations.
  • Space Complexity: O(n) to store the parsed pairs.

Conclusion

This solution demonstrates how Rust's trait system can be used to extend existing types with new functionality. By using trait extensions, we achieve an elegant and readable solution that clearly expresses the problem's domain concepts.

Day 4: Code

Below is the complete code for Day 4's solution, which handles range containment and overlap checks.

Full Solution

use std::ops::RangeInclusive;
use std::str::FromStr;

trait InclusiveRangeExt {
    fn is_subset(&self, other: &Self) -> bool;
    fn is_overlapping(&self, other: &Self) -> bool;
}

impl<T> InclusiveRangeExt for RangeInclusive<T>
    where T : PartialOrd {
    fn is_subset(&self, other: &Self) -> bool {
        self.contains(other.start()) && self.contains(other.end())
    }
    fn is_overlapping(&self, other: &Self) -> bool {
        self.contains(other.start()) || self.contains(other.end())
    }
}

fn main() {

    let data = std::fs::read_to_string("src/bin/day4_input.txt").expect("Ops! Cannot read file");
    let pairs = data.lines()
        .map(|line|
            line.split(|c:char| c.is_ascii_punctuation())
                .map(|c| u32::from_str(c).unwrap_or_else(|e| panic!("{e}")) )
                .collect::<Vec<_>>()
        )
        .map(|pair| {
            let [a, b, c, d] = pair[..] else { panic!("") };
            ((a..=b), (c..=d))
        })
        .collect::<Vec<_>>();

    let out = pairs.iter()
        .filter(|(a,b)|
            a.is_subset(b) || b.is_subset(a)
        )
        .count();
    println!("Component 1 = {out}");

    let out = pairs.iter()
        .filter(|(a,b)|
            a.is_overlapping(b) || b.is_overlapping(a)
        )
        .count();
    println!("Component 2 = {out}");
}

Code Walkthrough

Extending Ranges with a Trait

trait InclusiveRangeExt {
    fn is_subset(&self, other: &Self) -> bool;
    fn is_overlapping(&self, other: &Self) -> bool;
}

The solution defines a trait to extend Rust's RangeInclusive type with two new methods for checking containment relationships:

  • is_subset - Checks if one range is fully contained within another
  • is_overlapping - Checks if two ranges overlap at all

Implementing the Trait

impl<T> InclusiveRangeExt for RangeInclusive<T>
    where T : PartialOrd {
    fn is_subset(&self, other: &Self) -> bool {
        self.contains(other.start()) && self.contains(other.end())
    }
    fn is_overlapping(&self, other: &Self) -> bool {
        self.contains(other.start()) || self.contains(other.end())
    }
}

The trait is implemented generically for any RangeInclusive<T> where T supports partial ordering. This allows the solution to work with ranges of any comparable type, not just integers.

Parsing Input

    let data = std::fs::read_to_string("src/bin/day4_input.txt").expect("Ops! Cannot read file");
    let pairs = data.lines()
        .map(|line|
            line.split(|c:char| c.is_ascii_punctuation())
                .map(|c| u32::from_str(c).unwrap_or_else(|e| panic!("{e}")) )
                .collect::<Vec<_>>()
        )
        .map(|pair| {
            let [a, b, c, d] = pair[..] else { panic!("") };
            ((a..=b), (c..=d))
        })
        .collect::<Vec<_>>();

The parsing involves several steps:

  1. Read the input file as a string
  2. Split each line into parts using punctuation characters (hyphens and commas)
  3. Convert each part to a u32 number
  4. Group the numbers into pairs of ranges using Rust's inclusive range syntax a..=b

Part 1: Checking Subset Relationships

    let out = pairs.iter()
        .filter(|(a,b)|
            a.is_subset(b) || b.is_subset(a)
        )
        .count();

This part counts pairs where one range fully contains the other by applying the is_subset method and checking in both directions.

Part 2: Checking Overlap Relationships

    let out = pairs.iter()
        .filter(|(a,b)|
            a.is_overlapping(b) || b.is_overlapping(a)
        )
        .count();

This part counts pairs where the ranges overlap at all by applying the is_overlapping method and checking in both directions.

Implementation Notes

  • Trait Extensions: This solution demonstrates Rust's powerful trait system by extending an existing type with new functionality.
  • Generic Programming: The trait implementation works with any ordered type, not just the specific integers used in this problem.
  • Pattern Matching: The solution uses Rust's pattern matching to destructure the parsed values into range pairs.
  • Error Handling: The solution uses expect and unwrap_or_else for error handling, though a more robust solution might handle errors more gracefully.

The implementation is concise and idiomatic, leveraging Rust's type system and functional programming features to solve the problem elegantly.

Day 5: Supply Stacks

Day 5 involves rearranging stacks of crates following a series of move instructions.

Problem Overview

The elves are loading supplies onto a cargo ship, and the crates need to be rearranged. Each crate is marked with a letter, and the crates are arranged in stacks. Your task is to:

  1. Parse the initial arrangement of crates and the move instructions
  2. Simulate the crate movement using two different crane models
  3. Report which crates end up on top of each stack

This problem tests your ability to work with stacks, parse complex input formats, and implement different movement rules.

Day 5: Problem Description

Supply Stacks

The expedition can depart as soon as the final supplies have been unloaded from the ships. Supplies are stored in stacks of marked crates, but because the needed supplies are buried under many other crates, the crates need to be rearranged.

The ship has a giant cargo crane capable of moving crates between stacks. To ensure none of the crates get crushed or fall over, the crane operator will rearrange them in a series of carefully-planned steps. After the crates are rearranged, the desired crates will be at the top of each stack.

The Elves don't want to interrupt the crane operator during this delicate procedure, but they forgot to ask her which crate will end up where, and they want to be ready to unload them as soon as possible so they can embark.

They do, however, have a drawing of the starting stacks of crates and the rearrangement procedure (your puzzle input). For example:

    [D]    
[N] [C]    
[Z] [M] [P]
 1   2   3 

move 1 from 2 to 1
move 3 from 1 to 3
move 2 from 2 to 1
move 1 from 1 to 2

In this example, there are three stacks of crates. Stack 1 contains two crates: crate Z is on the bottom, and crate N is on top. Stack 2 contains three crates: from bottom to top, crates M, C, and D. Finally, stack 3 contains a single crate, P.

Then, the rearrangement procedure is given. In each step of the procedure, a quantity of crates is moved from one stack to a different stack. In the first step of the above rearrangement procedure, one crate is moved from stack 2 to stack 1, resulting in this configuration:

[D]        
[N] [C]    
[Z] [M] [P]
 1   2   3 

In the second step, three crates are moved from stack 1 to stack 3. Crates are moved one at a time, so the first crate to be moved (D) ends up below the second and third crates:

        [Z]
        [N]
    [C] [D]
    [M] [P]
 1   2   3

Then, the third step moves two crates from stack 2 to stack 1. Again, because crates are moved one at a time, crate C ends up below crate M:

        [Z]
        [N]
[M]     [D]
[C]     [P]
 1   2   3

Finally, the fourth step moves one crate from stack 1 to stack 2:

        [Z]
        [N]
        [D]
[C] [M] [P]
 1   2   3

The Elves just need to know which crate will end up on top of each stack; in this example, the top crates are C in stack 1, M in stack 2, and Z in stack 3, so you should combine these together and give the Elves the message CMZ.

Part 1

After the rearrangement procedure completes, what crate ends up on top of each stack?

Part 2

As you watch the crane operator expertly rearrange the crates, you notice the process isn't following your prediction.

Some mud was covering the writing on the side of the crane, and you quickly wipe it away. The crane isn't a CrateMover 9000 - it's a CrateMover 9001.

The CrateMover 9001 is notable for many new and exciting features: air conditioning, leather seats, an extra cup holder, and the ability to pick up and move multiple crates at once.

Again considering the example above, the crates begin in the same configuration:

    [D]    
[N] [C]    
[Z] [M] [P]
 1   2   3 

Moving a single crate from stack 2 to stack 1 behaves the same as before:

[D]        
[N] [C]    
[Z] [M] [P]
 1   2   3 

However, the action of moving three crates from stack 1 to stack 3 means that those three crates stay in the same order, resulting in this new configuration:

        [D]
        [N]
    [C] [Z]
    [M] [P]
 1   2   3

Next, as both crates are moved from stack 2 to stack 1, they retain their order as well:

        [D]
        [N]
[C]     [Z]
[M]     [P]
 1   2   3

Finally, a single crate is still moved from stack 1 to stack 2, but now it's crate C that gets moved:

        [D]
        [N]
        [Z]
[M] [C] [P]
 1   2   3

In this example, the CrateMover 9001 has put the crates in a totally different order: MCD.

Before the rearrangement process finishes, update your simulation so that the Elves know where they should stand to be ready to unload the final supplies. After the rearrangement procedure completes, what crate ends up on top of each stack?

Day 5: Solution Explanation

Approach

Day 5's problem involves parsing a complex input format and simulating moving crates between stacks using different rules. The solution involves three main parts:

  1. Parsing the input - Extracting the initial crate configuration and move instructions
  2. Simulating crate movements - Implementing both the CrateMover 9000 and CrateMover 9001 rules
  3. Reading the result - Determining which crates end up on top of each stack

Implementation Details

Data Structures

The solution uses two main structures:

  1. Move - Represents a single move instruction:
#![allow(unused)]
fn main() {
#[derive(Debug,Copy,Clone)]
struct Move {
    count: usize,   // Number of crates to move
    from: usize,    // Source stack
    to: usize,      // Destination stack
}
}
  2. Buckets - Represents the stacks of crates:
#![allow(unused)]
fn main() {
#[derive(Debug)]
struct Buckets {
    buckets: HashMap<usize,Vec<char>>,  // Stacks of crates
    keys: Vec<usize>                     // Ordered list of stack IDs
}
}

The Buckets structure uses a HashMap to store each stack, with vectors representing the crates in each stack (with the top crate at the end of the vector). It also maintains an ordered list of keys to ensure consistent access to stacks.

Parsing the Input

The input consists of two parts: the initial crate configuration and the move instructions.

Parsing the Initial Configuration

The initial crate configuration is parsed by starting from the bottom of the diagram and working upward:

#![allow(unused)]
fn main() {
fn new(start: &str) -> Buckets {
    let buckets = start.lines()
        .rev()                            // Start from the bottom of the diagram
        .map(|line| line.split("").filter_map(|e| e.chars().next()).collect::<Vec<_>>())
        .fold(HashMap::new(), |map, e| {
            e.into_iter()
                .enumerate()
                .filter(|(_, c)| c.is_alphanumeric())   // Keep only letters and numbers
                .fold(map, |mut out, (key, val)| {
                    out.entry(key)
                        .or_insert(Vec::default())
                        .push(val);                      // Add each crate to its stack
                    out
                })
        });
    let mut keys = buckets.keys().copied().collect::<Vec<_>>();
    keys.sort();                          // Sort keys for consistent access
    Buckets {
        buckets,
        keys
    }
}
}

By reading the input in reverse order, we can build each stack from bottom to top.
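
To make the parsing concrete, the example drawing produces the structure sketched below (a rough check one might write in a test, assuming the struct fields are visible from the same module). Note that the stack-label row is also alphanumeric, so each stack keeps its digit at the bottom, where no valid move ever reaches it:

// Illustrative only: what Buckets::new produces for the example drawing.
let drawing = "    [D]\n[N] [C]\n[Z] [M] [P]\n 1   2   3";
let b = Buckets::new(drawing);

assert_eq!(b.keys, vec![1, 5, 9]);                    // character columns of each stack
assert_eq!(b.buckets[&1], vec!['1', 'Z', 'N']);       // stack 1, bottom to top
assert_eq!(b.buckets[&5], vec!['2', 'M', 'C', 'D']);  // stack 2
assert_eq!(b.buckets[&9], vec!['3', 'P']);            // stack 3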

Parsing the Move Instructions

The move instructions are parsed using the FromStr trait:

#![allow(unused)]
fn main() {
impl FromStr for Move {
    type Err = ParseIntError;
    fn from_str(s: &str) -> Result<Self, Self::Err> {
        if let [_,count,_,from,_,to] = s.split(' ').collect::<Vec<_>>()[..] {
            Ok(
                Move {
                    count: usize::from_str(count)?,
                    from: usize::from_str(from)?,
                    to: usize::from_str(to)?,
                }
            )
        } else {
            unreachable!()
        }
    }
}
}

This parses strings like "move 1 from 2 to 1" into a Move structure with count=1, from=2, and to=1.
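
A quick sanity check of the parser, assuming FromStr is in scope as in the full solution:

let m = Move::from_str("move 3 from 1 to 3").expect("well-formed move line");
assert_eq!((m.count, m.from, m.to), (3, 1, 3));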

Simulating Crate Movements

The solution implements two different crate-moving strategies:

CrateMover 9000: Moving One Crate at a Time

For the CrateMover 9000, crates are moved one at a time, so they end up in reverse order:

#![allow(unused)]
fn main() {
fn crate_mover9000(&mut self, m: Move) {
    let (from, to) = self.get_keys(m);
    (0..m.count)
        .for_each(|_|{
            if let Some(c) = self.buckets.get_mut(&from).expect("").pop() {
                self.buckets.get_mut(&to).expect("").push(c)
            }
    });
}
}

This simply pops a crate from the source stack and pushes it onto the destination stack, repeating for the specified number of crates.

CrateMover 9001: Moving Multiple Crates at Once

For the CrateMover 9001, multiple crates are moved at once, preserving their order:

#![allow(unused)]
fn main() {
fn crate_mover9001(&mut self, m: Move) {
    let (from, to) = self.get_keys(m);
    let v = (0..m.count)
        .fold(vec![],|mut out,_|{
            if let Some(c) = self.buckets.get_mut(&from).expect("").pop() { out.push(c) }
            out
        });
    self.buckets.get_mut(&to).expect("").extend(v.iter().rev());
}
}

This code:

  1. Removes the specified number of crates from the source stack
  2. Collects them in a temporary vector (in reverse order)
  3. Extends the destination stack with the temporary vector (reversed again)

By applying a double reversal, the original order of the crates is preserved.
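
A minimal sketch of the double reversal on the example's second move, independent of the surrounding types:

// Repeatedly popping from the source yields the crates top-first: D, N, Z.
let popped = vec!['D', 'N', 'Z'];
// Reversing before pushing onto the destination restores bottom-to-top order.
let moved: Vec<char> = popped.iter().rev().copied().collect();
assert_eq!(moved, vec!['Z', 'N', 'D']);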

Reading the Result

After all moves are applied, we need to read the top crate from each stack:

#![allow(unused)]
fn main() {
fn scoop_top(&self) -> String {
    self.keys.iter()
        .filter_map(|key| self.buckets.get(key))   // Get each stack
        .filter_map(|arr| arr.last().copied() )    // Get the top crate
        .fold(String::new(),|mut out,s| { out.push(s); out })  // Combine into a string
}
}

This iterates through all stacks in order, gets the top crate from each, and concatenates them into a string.
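
Putting the pieces together on the worked example from the problem statement (a sketch, assuming the items above live in the same module):

let drawing = "    [D]\n[N] [C]\n[Z] [M] [P]\n 1   2   3";
let moves = "move 1 from 2 to 1\nmove 3 from 1 to 3\nmove 2 from 2 to 1\nmove 1 from 1 to 2";

let mut part1 = Buckets::new(drawing);
Move::parse_moves(moves).iter().for_each(|&m| part1.crate_mover9000(m));
assert_eq!(part1.scoop_top(), "CMZ");   // matches the example for Part 1

let mut part2 = Buckets::new(drawing);
Move::parse_moves(moves).iter().for_each(|&m| part2.crate_mover9001(m));
assert_eq!(part2.scoop_top(), "MCD");   // matches the example for Part 2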

Challenge Insights

Input Parsing Complexity

The most challenging part of this problem is parsing the initial crate configuration, which is a visual representation of stacks rather than a straightforward data format. The solution handles this by:

  1. Reading the diagram from bottom to top
  2. Converting each line into a sequence of characters
  3. Filtering out non-alphanumeric characters
  4. Building up stacks based on the position of each character

Mapping Between Visual and Logical Indexes

The move instructions use 1-based stack numbers, while the keys vector (the sorted HashMap keys, i.e. the character column of each stack) is indexed from 0. The get_keys method translates a stack number into the corresponding HashMap key (a small illustration follows the snippet):

#![allow(unused)]
fn main() {
fn get_keys(&self, m:Move) -> (usize,usize) {
    (self.keys[m.from-1],self.keys[m.to-1])
}
}
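
A small standalone illustration of that translation, without the struct:

// With keys sorted as [1, 5, 9], the 1-based stack numbers in
// "move 1 from 2 to 1" resolve to the HashMap keys 5 and 1.
let keys = vec![1usize, 5, 9];
let (from, to) = (2usize, 1usize);            // stack numbers as written in the input
assert_eq!((keys[from - 1], keys[to - 1]), (5, 1));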

Different Movement Rules

Implementing two different movement rules shows how small changes in requirements can lead to significantly different behavior. The CrateMover 9000 causes a reversal of crate order, while the CrateMover 9001 preserves it.

Alternative Approaches

Direct Vector Manipulation

Instead of using a HashMap, we could use a Vec<Vec<char>> to represent the stacks directly:

#![allow(unused)]
fn main() {
struct Buckets {
    stacks: Vec<Vec<char>>
}
}

This would simplify some of the code but would make parsing the initial configuration more complex.

Using a Stack Data Structure

We could use an explicit stack data structure for each pile of crates:

#![allow(unused)]
fn main() {
use std::collections::VecDeque;

struct Buckets {
    stacks: Vec<VecDeque<char>>
}
}

However, Rust's Vec already provides all the necessary stack operations (push and pop), so there's no need for a separate data structure.

Time and Space Complexity

  • Time Complexity: O(n * m), where n is the number of move instructions and m is the maximum number of crates moved in a single instruction.
  • Space Complexity: O(c), where c is the total number of crates.

Conclusion

This solution demonstrates how to parse complex, visually-oriented input and simulate two different sets of rules using appropriate data structures. The use of Rust's traits (like FromStr) and collections (like HashMap and Vec) makes the implementation clean and efficient.

Day 5: Code

Below is the complete code for Day 5's solution, which handles rearranging stacks of crates.

Full Solution

use std::collections::HashMap;
use std::num::ParseIntError;
use std::str::FromStr;

#[derive(Debug,Copy,Clone)]
struct Move {
    count: usize,
    from: usize,
    to: usize
}
impl FromStr for Move {
    type Err = ParseIntError;
    fn from_str(s: &str) -> Result<Self, Self::Err> {
        if let [_,count,_,from,_,to] = s.split(' ').collect::<Vec<_>>()[..] {
            Ok(
                Move {
                    count: usize::from_str(count)?,
                    from: usize::from_str(from)?,
                    to: usize::from_str(to)?,
                }
            )
        } else {
            unreachable!()
        }
    }
}
impl Move {
    fn parse_moves(moves:&str) -> Vec<Move> {
        moves.lines()
            .map(|line| Move::from_str(line).unwrap_or_else(|e| panic!("{e}")) )
            .collect()
    }
}
#[derive(Debug)]
struct Buckets {
    buckets: HashMap<usize,Vec<char>>,
    keys: Vec<usize>
}
impl Buckets {
    fn new(start: &str) -> Buckets {
        let buckets = start.lines()
            .rev()
            .map(|line| line.split("").filter_map(|e| e.chars().next()).collect::<Vec<_>>())
            .fold(HashMap::new(), |map, e| {
                e.into_iter()
                    .enumerate()
                    .filter(|(_, c)| c.is_alphanumeric())
                    .fold(map, |mut out, (key, val)| {
                        out.entry(key)
                            .or_insert(Vec::default())
                            .push(val);
                        out
                    })
            });
        let mut keys = buckets.keys().copied().collect::<Vec<_>>();
        keys.sort();
        Buckets {
            buckets,
            keys
        }
    }
    fn crate_mover9000(&mut self, m: Move) {
        let (from, to) = self.get_keys(m);
        (0..m.count)
            .for_each(|_|{
                if let Some(c) = self.buckets.get_mut(&from).expect("").pop() {
                    self.buckets.get_mut(&to).expect("").push(c)
                }
        });
    }
    fn crate_mover9001(&mut self, m: Move) {
        let (from, to) = self.get_keys(m);
        let v = (0..m.count)
            .fold(vec![],|mut out,_|{
                if let Some(c) = self.buckets.get_mut(&from).expect("").pop() { out.push(c) }
                out
            });
        self.buckets.get_mut(&to).expect("").extend(v.iter().rev());
    }
    fn scoop_top(&self) -> String {
        self.keys.iter()
            .filter_map(|key| self.buckets.get(key))
            .filter_map(|arr| arr.last().copied() )
            .fold(String::new(),|mut out,s| { out.push(s); out })
    }
    fn get_keys(&self, m:Move) -> (usize,usize) {
        (self.keys[m.from-1],self.keys[m.to-1])
    }
}

fn main() {

    let data = std::fs::read_to_string("src/bin/day5_input.txt").expect("Ops!");

    let [start,moves] = data.split("\n\n").collect::<Vec<_>>()[..] else { panic!("") };

    let mut buckets = Buckets::new(start);
    let moves = Move::parse_moves(moves);

    moves.iter().for_each(|&m| buckets.crate_mover9000(m) );
    println!("{:?}",buckets.scoop_top());

    // Part 2 must start from the original arrangement, so rebuild the stacks first
    let mut buckets = Buckets::new(start);
    moves.iter().for_each(|&m| buckets.crate_mover9001(m) );
    println!("{:?}",buckets.scoop_top());

}

Code Walkthrough

Data Structures

The solution uses two main structures:

  1. Move - Represents a single move instruction:
#[derive(Debug,Copy,Clone)]
struct Move {
    count: usize,
    from: usize,
    to: usize
}
  2. Buckets - Represents the stacks of crates:
#[derive(Debug)]
struct Buckets {
    buckets: HashMap<usize,Vec<char>>,
    keys: Vec<usize>
}

Parsing

Parsing Move Instructions

The FromStr trait implementation for Move allows parsing strings like "move 1 from 2 to 1":

impl FromStr for Move {
    type Err = ParseIntError;
    fn from_str(s: &str) -> Result<Self, Self::Err> {
        if let [_,count,_,from,_,to] = s.split(' ').collect::<Vec<_>>()[..] {
            Ok(
                Move {
                    count: usize::from_str(count)?,
                    from: usize::from_str(from)?,
                    to: usize::from_str(to)?,
                }
            )
        } else {
            unreachable!()
        }
    }
}

The helper method parse_moves processes multiple move instructions:

impl Move {
    fn parse_moves(moves:&str) -> Vec<Move> {
        moves.lines()
            .map(|line| Move::from_str(line).unwrap_or_else(|e| panic!("{e}")) )
            .collect()
    }
}

Parsing Initial Crate Configuration

The new method of Buckets parses the initial crate configuration:

    fn new(start: &str) -> Buckets {
        let buckets = start.lines()
            .rev()
            .map(|line| line.split("").filter_map(|e| e.chars().next()).collect::<Vec<_>>())
            .fold(HashMap::new(), |map, e| {
                e.into_iter()
                    .enumerate()
                    .filter(|(_, c)| c.is_alphanumeric())
                    .fold(map, |mut out, (key, val)| {
                        out.entry(key)
                            .or_insert(Vec::default())
                            .push(val);
                        out
                    })
            });
        let mut keys = buckets.keys().copied().collect::<Vec<_>>();
        keys.sort();
        Buckets {
            buckets,
            keys
        }
    }

This method works by:

  1. Reading the input in reverse order (bottom to top)
  2. Splitting each line into characters
  3. Filtering out non-alphanumeric characters (keeping only crate letters)
  4. Building each stack based on character positions

Crane Operations

CrateMover 9000: Moving One at a Time

    fn crate_mover9000(&mut self, m: Move) {
        let (from, to) = self.get_keys(m);
        (0..m.count)
            .for_each(|_|{
                if let Some(c) = self.buckets.get_mut(&from).expect("").pop() {
                    self.buckets.get_mut(&to).expect("").push(c)
                }
        });
    }

This method moves crates one at a time, popping from the source stack and pushing to the destination.

CrateMover 9001: Moving Multiple at Once

    fn crate_mover9001(&mut self, m: Move) {
        let (from, to) = self.get_keys(m);
        let v = (0..m.count)
            .fold(vec![],|mut out,_|{
                if let Some(c) = self.buckets.get_mut(&from).expect("").pop() { out.push(c) }
                out
            });
        self.buckets.get_mut(&to).expect("").extend(v.iter().rev());
    }

This method moves multiple crates at once, preserving their order through a double-reversal process.

Getting the Final Result

    fn scoop_top(&self) -> String {
        self.keys.iter()
            .filter_map(|key| self.buckets.get(key))
            .filter_map(|arr| arr.last().copied() )
            .fold(String::new(),|mut out,s| { out.push(s); out })
    }

This method retrieves the top crate from each stack and combines them into a string.

Main Function

fn main() {

    let data = std::fs::read_to_string("src/bin/day5_input.txt").expect("Ops!");

    let [start,moves] = data.split("\n\n").collect::<Vec<_>>()[..] else { panic!("") };

    let mut buckets = Buckets::new(start);
    let moves = Move::parse_moves(moves);

    moves.iter().for_each(|&m| buckets.crate_mover9000(m) );
    println!("{:?}",buckets.scoop_top());

    // Part 2 must start from the original arrangement, so rebuild the stacks first
    let mut buckets = Buckets::new(start);
    moves.iter().for_each(|&m| buckets.crate_mover9001(m) );
    println!("{:?}",buckets.scoop_top());

}

The main function:

  1. Reads the input file
  2. Splits it into the initial configuration and move instructions
  3. Creates the stacks and parses the moves
  4. Applies the CrateMover 9000 rules and prints the result (Part 1)
  5. Rebuilds the stacks from the original drawing, applies the CrateMover 9001 rules, and prints the result (Part 2)

Implementation Notes

  • Functional Programming Style: The solution makes extensive use of iterators and functional programming patterns.
  • Key Transformation: The get_keys method translates the 1-based stack numbers in the move instructions into the corresponding HashMap keys held in the 0-indexed keys vector.
  • Parsing Approach: The solution parses the visual representation of the crates by reading from the bottom up and using character positions.
  • Double Reversal: The CrateMover 9001 uses a double reversal technique to preserve the order of crates when moving multiple at once.

Day 6: Tuning Trouble

Day 6 involves analyzing a datastream to find marker patterns of unique characters.

Problem Overview

You're trying to tune a communication device, which requires finding markers in the datastream. A marker is a sequence of characters where all characters are different. Your task is to:

  1. Find the position where the first start-of-packet marker (4 unique characters) appears
  2. Find the position where the first start-of-message marker (14 unique characters) appears

This problem tests your ability to search for patterns in a stream of data and identify unique sequences of characters.

Day 6: Problem Description

Tuning Trouble

The preparations are finally complete; you and the Elves leave camp on foot and begin to make your way toward the star fruit grove.

As you move through the dense undergrowth, one of the Elves gives you a handheld device. He says that it has many fancy features, but the most important one to set up right now is the communication system.

However, because he's heard you have significant experience dealing with signal-based systems, he convinced the other Elves that it would be okay to give you their one malfunctioning device - surely you'll have no problem fixing it.

As if inspired by comedic timing, the device emits a few colorful sparks.

To be able to communicate with the Elves, the device needs to lock on to their signal. The signal is a series of seemingly-random characters that the device receives one at a time.

To fix the communication system, you need to add a subroutine to the device that detects a start-of-packet marker in the datastream. In the protocol being used by the Elves, the start of a packet is indicated by a sequence of four characters that are all different.

The device will send your subroutine a datastream buffer (your puzzle input); your subroutine needs to identify the first position where the four most recently received characters were all different. Specifically, it needs to report the number of characters from the beginning of the buffer to the end of the first such four-character marker.

For example, suppose you receive the following datastream buffer:

mjqjpqmgbljsphdztnvjfqwrcgsmlb

After the first three characters (mjq) have been received, there haven't been enough characters received yet to find the marker. The first time a marker could occur is after the fourth character is received, making the most recent four characters mjqj. Because j is repeated, this isn't a marker.

The first time a marker appears is after the seventh character arrives. Once it does, the last four characters received are jpqm, which are all different. In this case, your subroutine should report the value 7, because the first start-of-packet marker is complete after 7 characters have been processed.

Here are a few more examples:

  • bvwbjplbgvbhsrlpgdmjqwftvncz: first marker after character 5
  • nppdvjthqldpwncqszvftbrmjlhg: first marker after character 6
  • nznrnfrfntjfmvfwmzdfjlvtqnbhcprsg: first marker after character 10
  • zcfzfwzzqfrljwzlrfnpqdbhtmscgvjw: first marker after character 11

Part 1

How many characters need to be processed before the first start-of-packet marker is detected?

Part 2

Your device's communication system is correctly detecting packets, but still isn't working. It looks like it also needs to look for messages.

A start-of-message marker is just like a start-of-packet marker, except it consists of 14 distinct characters rather than 4.

Here are the first positions of start-of-message markers for all of the above examples:

  • mjqjpqmgbljsphdztnvjfqwrcgsmlb: first marker after character 19
  • bvwbjplbgvbhsrlpgdmjqwftvncz: first marker after character 23
  • nppdvjthqldpwncqszvftbrmjlhg: first marker after character 23
  • nznrnfrfntjfmvfwmzdfjlvtqnbhcprsg: first marker after character 29
  • zcfzfwzzqfrljwzlrfnpqdbhtmscgvjw: first marker after character 26

How many characters need to be processed before the first start-of-message marker is detected?

Day 6: Solution Explanation

Approach

Day 6's problem involves finding the first occurrence of a sequence of unique characters in a datastream. The approach is to:

  1. Process the input datastream as a sequence of bytes
  2. Examine consecutive windows of characters (of length 4 for part 1, 14 for part 2)
  3. Check each window for duplicate characters
  4. Find the position of the first window that contains no duplicates

The solution uses Rust's trait system to create reusable functionality for checking duplicates and finding marker positions.

Implementation Details

Detecting Duplicates

The first key component is a trait for checking whether a slice contains any duplicate elements:

#![allow(unused)]
fn main() {
trait Duplicate {
    fn has_duplicates(&self) -> bool;
}

impl<T> Duplicate for [T] where T: Debug + Copy + PartialEq + Ord {
    fn has_duplicates(&self) -> bool {
        let mut tmp = self.to_vec();
        tmp.sort();
        tmp.windows(2).any(|a| a[0]==a[1])
    }
}
}

This implementation:

  1. Creates a copy of the slice
  2. Sorts the copy (bringing identical elements next to each other)
  3. Checks adjacent pairs for equality using windows(2)

The trait is implemented generically for any slice type [T] where T supports debugging, copying, equality comparison, and ordering.
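
For example, with the trait in scope (a sketch):

let with_repeat: &[u8] = &[3, 1, 4, 1];
let all_unique: &[u8] = &[3, 1, 4, 5];
assert!(with_repeat.has_duplicates());
assert!(!all_unique.has_duplicates());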

Finding Marker Positions

The second key component is a trait for finding the position of a marker in a datastream:

#![allow(unused)]
fn main() {
trait Signaling {
    fn marker_position(&self, len:usize) -> usize;
}

impl<T> Signaling for [T] where T : Debug + Copy + PartialEq + Ord {
    fn marker_position(&self, len: usize) -> usize {
        self.windows(len)
            .enumerate()
            .skip_while(|&(_,stm)| stm.has_duplicates() )
            .next()
            .map(|(i,_)| i + len)
            .unwrap_or_else(|| panic!("marker_position(): Ops!"))
    }
}
}

This implementation:

  1. Creates sliding windows of the specified length using windows(len)
  2. Pairs each window with its index using enumerate()
  3. Skips windows that contain duplicates using skip_while
  4. Takes the first window that has no duplicates
  5. Returns the position after this window (index + window length)
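
Checking this against the first worked example from the problem statement (a sketch, assuming both traits are in scope):

let sample: &[u8] = b"mjqjpqmgbljsphdztnvjfqwrcgsmlb";
assert_eq!(sample.marker_position(4), 7);    // first start-of-packet marker
assert_eq!(sample.marker_position(14), 19);  // first start-of-message marker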

Main Solution

With these traits defined, the main solution becomes remarkably simple:

fn main() {
    let data = std::fs::read_to_string("src/bin/day6_input.txt").expect("");

    let out = data.bytes().collect::<Vec<_>>();
    println!("Marker Length @4 = {}", out.marker_position(4));
    println!("Marker Length @14 = {}", out.marker_position(14));
}

The solution reads the input file, converts it to a vector of bytes, and then calls marker_position with the appropriate lengths for part 1 (4) and part 2 (14).

Algorithm Analysis

Time Complexity

The time complexity of this solution depends on the length of the input (n) and the marker length (m):

  • Checking for duplicates in a window takes O(m log m) time due to the sorting operation
  • In the worst case, we check every window in the input, giving us O(n) windows
  • Overall time complexity: O(n * m log m)

For this problem, m is small (4 or 14), so the logarithmic factor isn't significant, making the effective complexity close to O(n).

Space Complexity

The space complexity is O(n) to store the input as a vector of bytes, plus O(m) temporary storage for each duplicate check.

Alternative Approaches

Using a HashSet for Duplicate Detection

A common alternative approach would be to use a HashSet to check for duplicates:

#![allow(unused)]
fn main() {
fn has_unique_chars(window: &[u8]) -> bool {
    use std::collections::HashSet;
    let mut set = HashSet::new();
    window.iter().all(|&c| set.insert(c))
}
}

This would reduce the duplicate check to O(m) instead of O(m log m), though for windows as small as 4 or 14 characters a HashSet typically carries more constant overhead than sorting a short slice.

Using Frequency Counting

Another approach would be to count the frequency of each character:

#![allow(unused)]
fn main() {
fn has_unique_chars(window: &[u8]) -> bool {
    let mut counts = [0; 256]; // For ASCII
    for &c in window {
        counts[c as usize] += 1;
        if counts[c as usize] > 1 {
            return false;
        }
    }
    true
}
}

This has O(m) time complexity and uses a fixed amount of space, but is limited to ASCII or other bounded character sets.

Using a Bit Set

For even more efficiency, a bit set could be used for the specific case of lowercase ASCII characters:

#![allow(unused)]
fn main() {
fn has_unique_chars(window: &[u8]) -> bool {
    let mut bits = 0u32;
    for &c in window {
        let mask = 1 << (c - b'a');
        if (bits & mask) != 0 {
            return false;
        }
        bits |= mask;
    }
    true
}
}

This has O(m) time complexity and uses only a single integer for storage, but it only works for a single, bounded alphabet (here, the 26 lowercase ASCII letters).

Conclusion

The solution demonstrates the power of Rust's traits for creating reusable, generic functionality. By separating the concerns of duplicate detection and marker finding into traits, the code becomes more modular and expressive. The generic implementation allows the solution to work with any type of element, not just characters, making it more versatile than specialized approaches.

Day 6: Code

Below is the complete code for Day 6's solution, which finds marker patterns in a datastream.

Full Solution

use std::fmt::Debug;

trait Duplicate {
    fn has_duplicates(&self) -> bool;
}
impl<T> Duplicate for [T] where T: Debug + Copy + PartialEq + Ord {
    fn has_duplicates(&self) -> bool {
        let mut tmp = self.to_vec();
        tmp.sort();
        tmp.windows(2).any(|a| a[0]==a[1])
    }
}

trait Signaling {
    fn marker_position(&self, len:usize) -> usize;
}
impl<T> Signaling for [T] where T : Debug + Copy + PartialEq + Ord {
    fn marker_position(&self, len: usize) -> usize {
        self.windows(len)
            .enumerate()
            .skip_while(|&(_,stm)| stm.has_duplicates() )
            .next()
            .map(|(i,_)| i + len)
            .unwrap_or_else(|| panic!("marker_position(): Ops!"))
    }
}

fn main() {
    let data = std::fs::read_to_string("src/bin/day6_input.txt").expect("");

    let out = data.bytes().collect::<Vec<_>>();
    println!("Marker Length @4 = {}", out.marker_position(4));
    println!("Marker Length @14 = {}", out.marker_position(14));
}

Code Walkthrough

Duplicate Detection Trait

trait Duplicate {
    fn has_duplicates(&self) -> bool;
}
impl<T> Duplicate for [T] where T: Debug + Copy + PartialEq + Ord {
    fn has_duplicates(&self) -> bool {
        let mut tmp = self.to_vec();
        tmp.sort();
        tmp.windows(2).any(|a| a[0]==a[1])
    }
}

This trait provides a method to check if a slice contains duplicate elements:

  1. Duplicate trait defines a single method has_duplicates that returns a boolean
  2. The implementation for slices [T] works with any type that can be debugged, copied, compared for equality, and ordered
  3. The implementation creates a temporary copy of the slice, sorts it (bringing identical elements adjacent to each other), and then checks if any adjacent elements are equal
  4. The windows(2) method creates sliding windows of 2 elements, and any checks if the predicate is true for any window

Marker Detection Trait

trait Signaling {
    fn marker_position(&self, len:usize) -> usize;
}
impl<T> Signaling for [T] where T : Debug + Copy + PartialEq + Ord {
    fn marker_position(&self, len: usize) -> usize {
        self.windows(len)
            .enumerate()
            .skip_while(|&(_,stm)| stm.has_duplicates() )
            .next()
            .map(|(i,_)| i + len)
            .unwrap_or_else(|| panic!("marker_position(): Ops!"))
    }
}

This trait provides a method to find the position of the first marker of a specified length:

  1. Signaling trait defines a single method marker_position that takes a length parameter and returns a position
  2. The implementation creates sliding windows of the specified length using windows(len)
  3. Each window is paired with its position using enumerate()
  4. Windows containing duplicates are skipped using skip_while
  5. The first window without duplicates is selected with next()
  6. The marker position is calculated as the window index plus the window length

Main Function

fn main() {
    let data = std::fs::read_to_string("src/bin/day6_input.txt").expect("");

    let out = data.bytes().collect::<Vec<_>>();
    println!("Marker Length @4 = {}", out.marker_position(4));
    println!("Marker Length @14 = {}", out.marker_position(14));
}

The main function:

  1. Reads the input file into a string
  2. Converts the string to a vector of bytes using bytes().collect()
  3. Calls marker_position(4) to solve Part 1 (finding a start-of-packet marker)
  4. Calls marker_position(14) to solve Part 2 (finding a start-of-message marker)

Implementation Notes

  • Traits for Reusability: The solution uses Rust's trait system to create reusable behaviors
  • Generic Implementation: Both traits work with any type that meets the trait bounds, not just bytes or characters
  • Functional Approach: The code uses a functional programming style with method chaining for concise and expressive code
  • Algorithm Choice: The solution uses sorting for duplicate detection, which is efficient for small windows (like the 4 and 14 character windows in this problem)

The implementation is elegant and leverages Rust's powerful type system to create a generic, reusable solution that can handle both parts of the problem with the same code.

Day 7: No Space Left On Device

Day 7 involves parsing terminal output to build a directory structure and calculate directory sizes.

Problem Overview

You're trying to free up space on your device by analyzing the file system. Given a terminal output showing the commands you executed and their results, you need to:

  1. Build a directory tree from the commands and output
  2. Calculate the total size of each directory (including subdirectories)
  3. Find directories smaller than a certain size
  4. Find the smallest directory that, when deleted, would free enough space

This problem tests your ability to parse structured text, build a tree data structure, and perform size calculations on it.

Day 7: Problem Description

No Space Left On Device

You can hear birds chirping and raindrops hitting leaves as the expedition proceeds. Occasionally, you can even hear much louder sounds in the distance; how big do the animals get out here, anyway?

The device the Elves gave you has problems with more than just its communication system. You try to run a system update:

$ system-update --please --pretty-please-with-sugar-on-top
Error: No space left on device

Perhaps you can delete some files to make space for the update?

You browse around the filesystem to assess the situation and save the resulting terminal output (your puzzle input). For example:

$ cd /
$ ls
dir a
14848514 b.txt
8504156 c.dat
dir d
$ cd a
$ ls
dir e
29116 f
2557 g
62596 h.lst
$ cd e
$ ls
584 i
$ cd ..
$ cd ..
$ cd d
$ ls
4060174 j
8033020 d.log
5626152 d.ext
7214296 k

The filesystem consists of a tree of files (plain data) and directories (which can contain other directories or files). The outermost directory is called /. You can navigate around the filesystem, moving into or out of directories and listing the contents of the directory you're currently in.

Within the terminal output, lines that begin with $ are commands you executed, very much like some modern computers:

  • cd means change directory. This changes which directory is the current directory, but the specific result depends on the argument:
    • cd x moves in one level: it looks in the current directory for the directory named x and makes it the current directory.
    • cd .. moves out one level: it finds the directory that contains the current directory, then makes that directory the current directory.
    • cd / switches the current directory to the outermost directory, /.
  • ls means list. It prints out all of the files and directories immediately contained by the current directory:
    • 123 abc means that the current directory contains a file named abc with size 123.
    • dir xyz means that the current directory contains a directory named xyz.

Given the commands and output in the example above, you can determine that the filesystem looks visually like this:

- / (dir)
  - a (dir)
    - e (dir)
      - i (file, size=584)
    - f (file, size=29116)
    - g (file, size=2557)
    - h.lst (file, size=62596)
  - b.txt (file, size=14848514)
  - c.dat (file, size=8504156)
  - d (dir)
    - j (file, size=4060174)
    - d.log (file, size=8033020)
    - d.ext (file, size=5626152)
    - k (file, size=7214296)

Here, there are four directories: / (the outermost directory), a and d (which are in /), and e (which is in a). These directories also contain files of various sizes.

Since the disk is full, your first step should probably be to find directories that are good candidates for deletion. To do this, you need to determine the total size of each directory. The total size of a directory is the sum of the sizes of the files it contains, directly or indirectly. (Directories themselves do not count as having any intrinsic size.)

The total sizes of the directories above can be found as follows:

  • The total size of directory e is 584 because it contains a single file i of size 584 and no other directories.
  • The directory a has total size 94853 because it contains files f (size 29116), g (size 2557), and h.lst (size 62596), plus file i indirectly (a contains e which contains i).
  • Directory d has total size 24933642.
  • As the outermost directory, / contains every file. Its total size is 48381165, the sum of the size of every file.

To begin, find all of the directories with a total size of at most 100000, then calculate the sum of their total sizes. In the example above, these directories are a and e; the sum of their total sizes is 95437 (94853 + 584). (As in this example, this process can count files more than once!)

Part 1

Find all of the directories with a total size of at most 100000. What is the sum of the total sizes of those directories?

Part 2

Now, you're ready to choose a directory to delete.

The total disk space available to the filesystem is 70000000. To run the update, you need unused space of at least 30000000. You need to find a directory you can delete that will free up enough space to run the update.

In the example above, the total size of the outermost directory (and thus the total amount of used space) is 48381165; this means that the size of the unused space must currently be 21618835, which isn't quite the 30000000 required by the update. Therefore, the update still requires a directory with total size of at least 8381165 to be deleted before it can run.

To achieve this, you have the following options:

  • Delete directory e, which would increase unused space by 584.
  • Delete directory a, which would increase unused space by 94853.
  • Delete directory d, which would increase unused space by 24933642.
  • Delete directory /, which would increase unused space by 48381165.

Directories e and a are both too small; deleting them would not free up enough space. However, directories d and / are both big enough! Between these, choose the smallest: d, increasing unused space by 24933642.

Find the smallest directory that, if deleted, would free up enough space on the filesystem to run the update. What is the total size of that directory?

Day 7: Solution Explanation

Approach

Day 7's problem involves building a directory tree and calculating directory sizes from terminal output. The solution breaks down into several key steps:

  1. Parse the terminal output into commands and results
  2. Build a directory tree structure based on the commands
  3. Calculate the total size of each directory (including its subdirectories)
  4. Find directories matching the specified size criteria

The solution uses a tree structure with nodes representing directories, where each node keeps track of its contents and size.

Implementation Details

Data Structures

The solution uses several custom types to represent the file system:

#![allow(unused)]
fn main() {
#[derive(Debug, Clone)]
enum ResultType {
    File(String, usize),  // File name and size
    Dir(String)           // Directory name
}

#[derive(Debug)]
enum CommandType {
    Cd(String),  // Change directory with target
    List          // List directory contents
}

#[derive(Debug)]
enum LineType {
    Cmd(CommandType),  // A command
    Rst(ResultType)    // Output from a command
}
}

These enums represent the different types of lines in the terminal output.

Path Representation

A custom Path struct is used to represent file paths:

#![allow(unused)]
fn main() {
#[derive(Debug, Clone, Hash, Eq, PartialEq)]
struct Path(String);

impl Path {
    fn new(path:String) -> Path {
        Path(path)
    }
    fn append(&self, dir: &str) -> Path {
        Path(format!("{}{}",self.0,dir))
    }
}
}

This struct wraps a string and provides methods for creating and extending paths. It also derives Hash and Eq so it can be used as a key in a HashMap.
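
A small sketch of how paths grow, assuming access from the same module; note that append uses plain concatenation with no separator, mirroring how parse_history builds paths:

let root = Path::new("/".to_string());
let a = root.append("a");                            // "cd a" from the root
assert_eq!(a, Path("/a".to_string()));
assert_eq!(a.append("e"), Path("/ae".to_string()));  // "cd e" from /a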

Directory Tree Structure

The directory tree is represented by two structures:

#![allow(unused)]
fn main() {
#[derive(Debug)]
struct Node {
    parent: Path,            // Parent directory path
    content: Vec<ResultType>, // Contents (files and subdirectories)
    size: usize              // Size of files directly in this directory
}

#[derive(Debug)]
struct Tree {
    map: HashMap<Path,Node>, // Maps paths to nodes
    totals: RefCell<Vec<(Path,usize)>> // Stores total sizes for each directory
}
}

The Node structure represents a directory with its parent, contents, and direct file size. The Tree structure contains a map from paths to nodes and a list of total sizes for all directories.

Parsing the Terminal Output

The terminal output is parsed line by line using an iterator:

#![allow(unused)]
fn main() {
struct History();
impl History {
    fn iterator(history:&str) -> impl Iterator<Item=LineType> + '_ {
        history.lines()
            .filter_map(|e| {
                let p:Vec<_> = e.split(' ').collect();
                match p[0] {
                    "$" => match p[1] {
                        "ls" => Some(LineType::Cmd(CommandType::List)),
                        "cd" => Some(LineType::Cmd(CommandType::Cd(String::from(p[2])))),
                        _ => None
                    }
                    "dir" => Some(LineType::Rst(ResultType::Dir(p[1].to_string()))),
                    _ => Some(LineType::Rst(ResultType::File(p[1].to_string(), usize::from_str(p[0]).unwrap())))
                }
            })
    }
}
}

This iterator converts each line into a LineType (either a command or a result) based on the line format.
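
For instance, the first few lines of the example session map to these values (a sketch, assuming the enums above are in scope):

let mut lines = History::iterator("$ cd /\n$ ls\ndir a\n14848514 b.txt");
assert!(matches!(lines.next(), Some(LineType::Cmd(CommandType::Cd(d))) if d == "/"));
assert!(matches!(lines.next(), Some(LineType::Cmd(CommandType::List))));
assert!(matches!(lines.next(), Some(LineType::Rst(ResultType::Dir(d))) if d == "a"));
assert!(matches!(lines.next(), Some(LineType::Rst(ResultType::File(name, 14848514))) if name == "b.txt"));
assert!(lines.next().is_none());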

Building the Directory Tree

The directory tree is built by processing each command and its results:

#![allow(unused)]
fn main() {
fn parse_history(history: impl Iterator<Item=LineType>) -> Tree {
    use LineType::*;

    let mut map = HashMap::<Path,Node>::new();
    let mut path = Path::new("".to_string());

    history
        .for_each(|lt| {
            match lt {
                Cmd(CommandType::Cd(dir)) if dir.contains("..") => path = map[&path].parent.clone(),
                Cmd(CommandType::Cd(dir)) => {
                    let cpath = path.append(dir.as_str());
                    map.entry(cpath.clone())
                        .or_insert(Node { parent: path.clone(), content: Vec::new(), size: 0 });
                    path = cpath;
                }
                Rst(res) => {
                    let node = map.get_mut(&path).unwrap();
                    node.content.push(res.clone());
                    if let ResultType::File(_,fsize) = res {
                        node.size += fsize;
                    }
                }
                Cmd(CommandType::List) => {},
            }
        });
    Tree { map, totals: RefCell::new(Vec::new()) }
}
}

As commands are processed, a current path is maintained, and nodes are added to the tree as needed. When file results are encountered, they're added to the current directory's contents and their sizes are added to the directory's direct size.

Calculating Directory Sizes

To calculate the total size of each directory (including subdirectories), a recursive function is used:

#![allow(unused)]
fn main() {
fn calc_dirs_totals(&self, path: &Path) -> usize {
    let mut sum = self.dir_size(path);
    for dir in self.children(path) {
        let cpath = path.append(dir);
        sum += self.calc_dirs_totals(&cpath);
    }
    self.totals.borrow_mut().push((path.clone(), sum));
    sum
}
}

This function calculates the total size of a directory by adding its direct size to the total sizes of its subdirectories. It also stores the total size in the totals list.
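
A minimal end-to-end check on a made-up two-directory session (a sketch, assuming the types above are in the same module):

let session = "$ cd /\n$ ls\ndir a\n100 b.txt\n$ cd a\n$ ls\n5 c.txt";
let tree = Tree::parse_history(History::iterator(session));
assert_eq!(tree.calc_dirs_totals(&Path::new("/".to_string())), 105);  // 100 direct + 5 in /a
// tree.totals now holds ("/a", 5) and ("/", 105), with the root pushed last.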

Solving Part 1

For Part 1, the solution finds all directories with a total size of at most 100,000 and sums their sizes:

#![allow(unused)]
fn main() {
dirs.iter()
    .filter(|(_,size)| *size < 100000 )
    .map(|&(_,size)| size)
    .sum::<usize>()
}

Solving Part 2

For Part 2, the solution finds the smallest directory that, when deleted, would free enough space:

#![allow(unused)]
fn main() {
let total_space = 70000000;
let min_free_space = 30000000;
let &(_,total_used) = dirs.last().unwrap();
let min_space_to_free = min_free_space - (total_space - total_used);

dirs.iter()
    .filter(|(_,size)| *size >= min_space_to_free )
    .min_by(|&a,&b| a.1.cmp(&b.1))
}

It calculates the minimum amount of space that needs to be freed, then finds the smallest directory that is at least that size.
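
Plugging in the numbers from the worked example in the problem statement:

// Numbers from the example: total used space is the size of "/".
let total_space: usize = 70_000_000;
let min_free_space: usize = 30_000_000;
let total_used: usize = 48_381_165;
let min_space_to_free = min_free_space - (total_space - total_used);
assert_eq!(min_space_to_free, 8_381_165);
// The smallest directory at least that large is d, with total size 24933642.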

Algorithmic Analysis

Time Complexity

  • Parsing the terminal output: O(n), where n is the number of lines
  • Building the directory tree: O(n)
  • Calculating directory sizes: O(d), where d is the number of directories
  • Finding directories by size: O(d)

Overall time complexity: O(n + d), which simplifies to O(n) since d ≤ n

Space Complexity

  • Directory tree: O(n) to store all files and directories
  • Totals list: O(d) to store the size of each directory

Overall space complexity: O(n)

Alternative Approaches

Using a Real File System Library

Instead of implementing a custom file system representation, we could use a file system library that supports virtual file systems:

#![allow(unused)]
fn main() {
use std::path::PathBuf;
use memfs::MemFs;

let fs = MemFs::new();

// Process commands and build the file system
for line in input.lines() {
    // Parse and execute commands...
}

// Calculate directory sizes
fn dir_size(fs: &MemFs, path: &PathBuf) -> usize {
    // Calculate size recursively
}
}

This would leverage existing file system implementations but might be more complex to set up.

Using a Graph Library

Another approach would be to use a graph library to represent the directory structure:

#![allow(unused)]
fn main() {
use petgraph::graph::{DiGraph, NodeIndex};
use petgraph::visit::DfsPostOrder;

let mut graph = DiGraph::new();
let mut node_map = HashMap::new();

// Build the graph
// ...

// Calculate sizes with a post-order traversal
let mut dfs = DfsPostOrder::new(&graph, root);
while let Some(node) = dfs.next(&graph) {
    // Calculate size based on children's sizes
}
}

This would use well-tested graph algorithms but adds an external dependency.

Conclusion

This solution demonstrates how to parse structured text and build a tree representation of a file system. The use of custom types like Path, Node, and Tree makes the code expressive and organized. The recursive calculation of directory sizes is a natural fit for the hierarchical nature of the file system.

Day 7: Code

Below is the complete code for Day 7's solution, which parses terminal output to build a directory tree and analyze directory sizes.

Full Solution

use std::cell::RefCell;
use std::collections::HashMap;
use std::str::FromStr;

#[derive(Debug, Clone)]
enum ResultType {
    File(String, usize),
    Dir(String)
}
#[derive(Debug)]
enum CommandType {
    Cd(String),
    List
}
#[derive(Debug)]
enum LineType {
    Cmd(CommandType),
    Rst(ResultType)
}
#[derive(Debug, Clone, Hash, Eq, PartialEq)]
struct Path(String);
impl Path {
    fn new(path:String) -> Path {
        Path(path)
    }
    fn append(&self, dir: &str) -> Path {
        Path(format!("{}{}",self.0,dir))
    }
}
#[derive(Debug)]
struct Node {
    parent: Path,
    content: Vec<ResultType>,
    size: usize
}
#[derive(Debug)]
struct Tree {
    map: HashMap<Path,Node>,
    totals: RefCell<Vec<(Path,usize)>>
}
impl Tree {
    fn children(&self, path: &Path) -> Vec<&String> {
        self.map[path]
            .content
            .iter()
            .filter_map(|rt|
                if let ResultType::Dir(dir) = rt {
                    Some(dir)
                } else {
                    None
                }
            )
            .collect()
    }
    fn dir_size(&self, path: &Path) -> usize {
        self.map[path].size
    }
    fn totals(&self) -> Vec<(Path, usize)> {
        self.totals.take()
    }
    fn parse_history(history: impl Iterator<Item=LineType>) -> Tree {
        use LineType::*;

        let mut map = HashMap::<Path,Node>::new();
        let mut path = Path::new("".to_string());

        history
            // .inspect(|line| println!("{:?}",line))
            .for_each(|lt| {
                match lt {
                    Cmd(CommandType::Cd(dir)) if dir.contains("..") => path = map[&path].parent.clone(),
                    Cmd(CommandType::Cd(dir)) => {
                        let cpath = path.append(dir.as_str());
                        println!("{:?}",cpath);
                        map.entry(cpath.clone())
                            .or_insert(Node { parent: path.clone(), content: Vec::new(), size: 0 });
                        path = cpath;
                    }
                    Rst(res) => {
                        let node = map.get_mut(&path).unwrap();
                        node.content.push(res.clone());
                        if let ResultType::File(_,fsize) = res {
                            node.size += fsize;
                        }
                    }
                    Cmd(CommandType::List) => {},
                }
            });
        Tree { map, totals: RefCell::new(Vec::new()) }
    }
    fn calc_dirs_totals(&self, path: &Path) -> usize {
        let mut sum = self.dir_size(path);
        for dir in self.children(path) {
            let cpath = path.append(dir);
            sum += self.calc_dirs_totals(&cpath);
        }
        // println!("{:?}:{:?}", path, sum);
        self.totals.borrow_mut().push((path.clone(), sum));
        sum
    }
}

struct History();
impl History {
    fn iterator(history:&str) -> impl Iterator<Item=LineType> + '_{
        history.lines()
            .filter_map(|e| {
                let p:Vec<_> = e.split(' ').collect();
                match p[0] {
                    "$" => match p[1] {
                        "ls" => Some(LineType::Cmd(CommandType::List)),
                        "cd" => Some(LineType::Cmd(CommandType::Cd(String::from(p[2])))),
                        _ => None
                    }
                    "dir" => Some(LineType::Rst(ResultType::Dir(p[1].to_string()))),
                    _ => Some(LineType::Rst(ResultType::File(p[1].to_string(), usize::from_str(p[0]).unwrap())))
                }
            })
    }
}

fn main() {

    let history = std::fs::read_to_string("src/bin/day7_input.txt").expect("");

    let tree = Tree::parse_history(
        History::iterator(history.as_str())
    );

    tree.calc_dirs_totals(&Path::new("/".to_string()));
    let dirs = tree.totals();

    println!("Directories < 100000 \n====================");
    println!("{:?}",
             dirs.iter()
                 .filter(|(_,size)| *size < 100000 )
                 .inspect(|&p| println!("{:?}",p))
                 .map(|&(_,size)| size)
                 .sum::<usize>()
    );

    let total_space = 70000000;
    let min_free_space = 30000000;
    let &(_,total_used) = dirs.last().unwrap();
    let min_space_to_free = min_free_space - (total_space - total_used);
    println!("Directories ~ 30000000 \n====================");
    println!("{:?}",
             dirs.iter()
                 .filter(|(_,size)| *size >= min_space_to_free )
                 .inspect(|&p| println!("{:?}",p))
                 .min_by(|&a,&b| a.1.cmp(&b.1))
    );
}

Code Walkthrough

Data Types

The solution defines several types to represent the file system and terminal output:

#[derive(Debug, Clone)]
enum ResultType {
    File(String, usize),
    Dir(String)
}
#[derive(Debug)]
enum CommandType {
    Cd(String),
    List
}
#[derive(Debug)]
enum LineType {
    Cmd(CommandType),
    Rst(ResultType)
}

These enums represent:

  • ResultType: Either a file (with name and size) or a directory (with name)
  • CommandType: Either a change directory command or a list command
  • LineType: Either a command or a result

Path Representation

#[derive(Debug, Clone, Hash, Eq, PartialEq)]
struct Path(String);
impl Path {
    fn new(path:String) -> Path {
        Path(path)
    }
    fn append(&self, dir: &str) -> Path {
        Path(format!("{}{}",self.0,dir))
    }
}

The Path struct encapsulates a string representing a file path and provides methods to create and append to paths.

Directory Tree

#[derive(Debug)]
struct Node {
    parent: Path,
    content: Vec<ResultType>,
    size: usize
}
#[derive(Debug)]
struct Tree {
    map: HashMap<Path,Node>,
    totals: RefCell<Vec<(Path,usize)>>
}

The directory tree consists of:

  • Node: Represents a directory with its parent, contents, and direct size
  • Tree: Contains a map of paths to nodes and a list of total sizes

Directory Tree Methods

impl Tree {
    fn children(&self, path: &Path) -> Vec<&String> {
        self.map[path]
            .content
            .iter()
            .filter_map(|rt|
                if let ResultType::Dir(dir) = rt {
                    Some(dir)
                } else {
                    None
                }
            )
            .collect()
    }
    fn dir_size(&self, path: &Path) -> usize {
        self.map[path].size
    }
    fn totals(&self) -> Vec<(Path, usize)> {
        self.totals.take()
    }

These methods provide functionality to:

  • Get a list of child directories
  • Get the direct size of a directory
  • Take the list of total sizes

Parsing Terminal Output

    fn parse_history(history: impl Iterator<Item=LineType>) -> Tree {
        use LineType::*;

        let mut map = HashMap::<Path,Node>::new();
        let mut path = Path::new("".to_string());

        history
            // .inspect(|line| println!("{:?}",line))
            .for_each(|lt| {
                match lt {
                    Cmd(CommandType::Cd(dir)) if dir.contains("..") => path = map[&path].parent.clone(),
                    Cmd(CommandType::Cd(dir)) => {
                        let cpath = path.append(dir.as_str());
                        println!("{:?}",cpath);
                        map.entry(cpath.clone())
                            .or_insert(Node { parent: path.clone(), content: Vec::new(), size: 0 });
                        path = cpath;
                    }
                    Rst(res) => {
                        let node = map.get_mut(&path).unwrap();
                        node.content.push(res.clone());
                        if let ResultType::File(_,fsize) = res {
                            node.size += fsize;
                        }
                    }
                    Cmd(CommandType::List) => {},
                }
            });
        Tree { map, totals: RefCell::new(Vec::new()) }
    }

This method builds a directory tree by processing terminal commands:

  • For cd .. commands, it moves up to the parent directory
  • For other cd commands, it creates a new directory if needed and moves into it
  • For result lines, it adds files or directories to the current directory's contents

Calculating Total Sizes

    fn calc_dirs_totals(&self, path: &Path) -> usize {
        let mut sum = self.dir_size(path);
        for dir in self.children(path) {
            let cpath = path.append(dir);
            sum += self.calc_dirs_totals(&cpath);
        }
        // println!("{:?}:{:?}", path, sum);
        self.totals.borrow_mut().push((path.clone(), sum));
        sum
    }

This recursive method calculates the total size of each directory by adding its direct size to the total sizes of its subdirectories.

Creating the Line Iterator

struct History();
impl History {
    fn iterator(history:&str) -> impl Iterator<Item=LineType> + '_{
        history.lines()
            .filter_map(|e| {
                let p:Vec<_> = e.split(' ').collect();
                match p[0] {
                    "$" => match p[1] {
                        "ls" => Some(LineType::Cmd(CommandType::List)),
                        "cd" => Some(LineType::Cmd(CommandType::Cd(String::from(p[2])))),
                        _ => None
                    }
                    "dir" => Some(LineType::Rst(ResultType::Dir(p[1].to_string()))),
                    _ => Some(LineType::Rst(ResultType::File(p[1].to_string(), usize::from_str(p[0]).unwrap())))
                }
            })
    }
}

This creates an iterator that converts terminal output lines into LineType values by parsing each line based on its format.

Main Function

fn main() {

    let history = std::fs::read_to_string("src/bin/day7_input.txt").expect("");

    let tree = Tree::parse_history(
        History::iterator(history.as_str())
    );

    tree.calc_dirs_totals(&Path::new("/".to_string()));
    let dirs = tree.totals();

    println!("Directories < 100000 \n====================");
    println!("{:?}",
             dirs.iter()
                 .filter(|(_,size)| *size < 100000 )
                 .inspect(|&p| println!("{:?}",p))
                 .map(|&(_,size)| size)
                 .sum::<usize>()
    );

    let total_space = 70000000;
    let min_free_space = 30000000;
    let &(_,total_used) = dirs.last().unwrap();
    let min_space_to_free = min_free_space - (total_space - total_used);
    println!("Directories ~ 30000000 \n====================");
    println!("{:?}",
             dirs.iter()
                 .filter(|(_,size)| *size >= min_space_to_free )
                 .inspect(|&p| println!("{:?}",p))
                 .min_by(|&a,&b| a.1.cmp(&b.1))
    );
}

The main function:

  1. Reads the terminal output from a file
  2. Creates an iterator to parse the output
  3. Builds a directory tree using the parsed commands
  4. Calculates the total size of each directory
  5. For Part 1: Finds directories smaller than 100,000 and sums their sizes
  6. For Part 2: Finds the smallest directory that would free enough space when deleted

Implementation Notes

  • RefCell Usage: The solution uses a RefCell to store the list of total sizes, allowing it to be modified during the recursive calculation
  • Path Representation: Paths are represented as strings for simplicity, but with a custom wrapper type for safety
  • Tree Structure: The directory tree uses a map-based representation with explicit parent references, making it easy to navigate up and down the tree
  • Functional Approach: The solution makes extensive use of iterators and functional programming patterns

Day 8: Treetop Tree House

Day 8 involves analyzing a grid of trees to determine visibility and scenic scores.

Problem Overview

You're trying to find the best spot for a treehouse in a forest. Given a grid where each cell contains a tree with a certain height, you need to:

  1. Determine which trees are visible from outside the grid by looking horizontally or vertically
  2. Calculate a "scenic score" for each tree based on viewing distance in four directions
  3. Find the tree with the highest scenic score

This problem tests your ability to work with 2D grids and perform directional scanning operations.

Day 8: Problem Description

Treetop Tree House

The expedition comes across a peculiar patch of tall trees all planted carefully in a grid. The Elves explain that a previous expedition planted these trees as a reforestation effort. Now, they're curious if this would be a good location for a tree house.

First, determine whether there is enough tree cover here to keep a tree house hidden. To do this, you need to count the number of trees that are visible from outside the grid when looking directly along a row or column.

The Elves have already launched a quadcopter to generate a map with the height of each tree (your puzzle input). For example:

30373
25512
65332
33549
35390

Each tree is represented as a single digit whose value is its height, where 0 is the shortest and 9 is the tallest.

A tree is visible if all of the other trees between it and an edge of the grid are shorter than it. Only consider trees in the same row or column; that is, only look up, down, left, or right from any given tree.

All of the trees around the edge of the grid are visible - since they are already on the edge, there are no trees to block the view. In this example, that accounts for 16 trees.

Consider the middle 5:

  • The top-left 5 is visible from the left and top. (It isn't visible from the right or bottom since other trees of height 5 are in the way.)
  • The top-middle 5 is visible from the top and right.
  • The top-right 1 is not visible from any direction; for it to be visible, there would need to be only trees of height 0 between it and an edge.
  • The left-middle 5 is visible, but only from the right.
  • The center 3 is not visible from any direction; for it to be visible, there would need to be only trees of at most height 2 between it and an edge.

In total, in this example, 21 trees are visible from outside the grid.

Part 1

Consider your map; how many trees are visible from outside the grid?

Part 2

Content with the amount of tree cover available, the Elves just need to know the best spot to build their tree house: they would like to be able to see a lot of trees.

To measure the viewing distance from a given tree, look up, down, left, and right from that tree; stop if you reach an edge or at the first tree that is the same height or taller than the tree under consideration. (If a tree is right on the edge, at least one of its viewing distances will be zero.)

The Elves don't care about distant trees taller than those found by the rules above; the proposed tree house has large eaves to keep it dry, so they wouldn't be able to see higher than the tree house anyway.

In the example above, consider the middle 5 in the second row:

30373
25512
65332
33549
35390
  • Looking up, its view is not blocked; it can see 1 tree (of height 3).
  • Looking left, its view is blocked immediately; it can see only 1 tree (of height 5, right next to it).
  • Looking right, its view is not blocked; it can see 2 trees.
  • Looking down, its view is blocked eventually; it can see 2 trees (one of height 3, then the tree of height 5 that blocks its view).

A tree's scenic score is found by multiplying together its viewing distance in each of the four directions. For this tree, this is 4 (1 * 1 * 2 * 2).

However, you can do even better: consider the tree of height 5 in the middle of the fourth row:

30373
25512
65332
33549
35390
  • Looking up, its view is blocked at 2 trees (by another tree with a height of 5).
  • Looking left, its view is not blocked; it can see 2 trees.
  • Looking down, its view is also not blocked; it can see 1 tree.
  • Looking right, its view is blocked at 2 trees (by a massive tree of height 9).

This tree's scenic score is 8 (2 * 2 * 1 * 2).

Consider each tree on your map. What is the highest scenic score possible for any tree?

Day 8: Solution Explanation

Approach

Day 8 involves analyzing a grid of trees to determine visibility and scenic scores. The solution breaks down into two main parts:

  1. Visibility Analysis: Determine which trees are visible from outside the grid
  2. Scenic Score Calculation: Calculate the scenic score for each tree and find the maximum

The key to solving both parts efficiently is to create appropriate data structures and algorithms for scanning the grid in different directions.

Implementation Details

Core Data Structures

Coordinates

First, the solution defines a Coord struct to represent positions in the grid:

#![allow(unused)]
fn main() {
#[derive(Debug,Copy, Clone)]
struct Coord {
    x: usize,
    y: usize
}
impl From<(usize,usize)> for Coord {
    fn from(p: (usize, usize)) -> Self {
        Coord { x:p.0, y:p.1 }
    }
}
}

This provides a clean way to handle grid positions and includes a convenient conversion from tuples.

Grid Structure

The core of the solution is a generic Grid<T> structure that can store any type of data in a 2D grid:

#![allow(unused)]
fn main() {
#[derive(Debug)]
struct Grid<T> {
    width: usize,
    height: usize,
    grid: Vec<T>,
}
}

The grid is stored as a flat vector for efficiency, with methods to access elements by coordinates:

#![allow(unused)]
fn main() {
fn tree(&self, p: Coord) -> Option<&T> {
    if !self.in_bounds(p) {
        return None
    }
    Some(&self.grid[p.y * self.width + p.x])
}
}

Visibility Analysis

The visibility analysis is handled by the Visibility struct, which keeps track of which trees are visible:

#![allow(unused)]
fn main() {
#[derive(Debug)]
struct Visibility<'a> {
    forest: &'a Grid<i32>,       // Reference to the forest grid
    visible: Grid<bool>,         // Grid tracking visible trees
}
}

The key method is scan_visibility, which processes a sequence of coordinates in a given direction:

#![allow(unused)]
fn main() {
fn scan_visibility(&mut self, direction: ScanSequence) -> &mut Self {
    direction.into_iter()
        .for_each(|pos| {
            let mut tallest = -1;
            pos.into_iter().for_each(|e| {
                let tree = self.visible.tree_mut(e).unwrap();
                let t= self.forest.tree(e).unwrap();
                if tallest.lt(t) {
                    tallest = *t;
                    *tree = true;
                }
            });
        });
    self
}
}

This method:

  1. Takes a sequence of coordinate sequences (representing scan lines)
  2. For each scan line, tracks the tallest tree seen so far
  3. Marks trees as visible if they're taller than all previous trees in the scan line

By calling this method with scan sequences from all four directions (left-to-right, right-to-left, top-to-bottom, bottom-to-top), we can determine all visible trees.
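The same "tallest so far" rule can be shown in isolation on a single row, independent of the Grid and Visibility types; this is only a simplified sketch of the idea, not part of the actual solution:

fn visible_from_left(row: &[i32]) -> Vec<bool> {
    // A tree is visible from the left if it is taller than every tree before it.
    let mut tallest = -1;
    row.iter()
        .map(|&h| {
            let visible = h > tallest;
            if visible {
                tallest = h;
            }
            visible
        })
        .collect()
}

fn main() {
    // Row "25512" from the example: only the leading 2 and the first 5
    // are visible when looking in from the left edge.
    assert_eq!(
        visible_from_left(&[2, 5, 5, 1, 2]),
        vec![true, true, false, false, false]
    );
}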

Scenic Score Calculation

The scenic score calculation is handled by the Scenic struct:

#![allow(unused)]
fn main() {
#[derive(Debug)]
struct Scenic<'a> {
    forest: &'a Grid<i32>,
}
}

The main methods are:

#![allow(unused)]
fn main() {
fn scenic_score_dir(&mut self, p:Coord, (dx,dy):(isize,isize)) -> usize {
    let line = (1..).map_while(|i| {
        let coord = Coord {
            x: p.x.checked_add_signed(dx * i)?,
            y: p.y.checked_add_signed(dy * i)?,
        };
        self.forest.tree(coord)
    });

    let mut total = 0;
    let our_height = self.forest.tree(p).unwrap();
    for height in line {
        total += 1;
        if height >= our_height {
            break;
        }
    }
    total
}

fn scenic_score(&mut self, p: Coord) -> usize {
    let dirs =  [(-1, 0), (1, 0), (0, -1), (0, 1)];
    dirs.into_iter()
        .map(|dir| self.scenic_score_dir(p,dir) )
        .product()
}
}

These methods:

  1. Calculate the viewing distance in a specific direction using scenic_score_dir
  2. Combine the viewing distances in all four directions using scenic_score

The viewing distance calculation uses an infinite iterator with map_while to look in a specific direction until it reaches the edge or a blocking tree.
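The stopping rule itself can be sketched with a plain iterator of heights in place of the grid walk; this simplified version mirrors the loop above:

fn viewing_distance(our_height: i32, line: impl Iterator<Item = i32>) -> usize {
    let mut total = 0;
    for height in line {
        // Every tree in the line of sight counts, including the one that blocks it.
        total += 1;
        if height >= our_height {
            break;
        }
    }
    total
}

fn main() {
    // The 5 in the middle of row "33549", looking right over [4, 9]:
    // it sees the 4, then the 9 blocks the view, so the distance is 2.
    assert_eq!(viewing_distance(5, [4, 9].into_iter()), 2);
    // Looking left over [3, 3] it reaches the edge unblocked, also distance 2.
    assert_eq!(viewing_distance(5, [3, 3].into_iter()), 2);
}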

Generating Scan Sequences

To scan the grid in different directions, the solution defines helper functions that generate sequences of coordinates:

#![allow(unused)]
fn main() {
fn left_to_right(f: &Grid<i32>) -> ScanSequence {
    (0..f.height)
        .map(|y| (0..f.width).map(move |x| (x, y).into()).collect::<Vec<Coord>>() )
        .collect::<Vec<_>>()
}

fn right_to_left(f: &Grid<i32>) -> ScanSequence {
    (0..f.height)
        .map(|y| (0..f.width).rev().map(move |x| (x, y).into()).collect::<Vec<Coord>>() )
        .collect::<Vec<_>>()
}

// Similar functions for top_to_bottom and bottom_to_up
}

Each function generates a sequence of scan lines, where each scan line is a sequence of coordinates.

Parsing the Input

The input is parsed into a grid of tree heights:

#![allow(unused)]
fn main() {
fn parse_forest(data: &str) -> Grid<i32>  {
    let width = data.lines().next().unwrap().len();
    let height = data.lines().count();
    let mut grid = Grid::new(width,height);

    for (y,line) in data.lines().enumerate() {
        for (x, val) in line.bytes().enumerate() {
            *grid.tree_mut((x,y).into()).unwrap() = (val - b'0') as i32;
        }
    }
    grid
}
}

This converts each digit character to an integer height value.
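The conversion leans on ASCII byte arithmetic; a one-line check makes the trick explicit:

fn main() {
    // '7' is byte 0x37 and '0' is byte 0x30, so subtracting yields the digit value.
    assert_eq!((b'7' - b'0') as i32, 7);
}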

Main Solution

The main solution flow is:

fn main() {
    let data = std::fs::read_to_string("src/bin/day8_input.txt").expect("Ops!");
    let grid = parse_forest(data.as_str());

    // Part 1: Count visible trees
    let count = Visibility::new(&grid)
        .scan_visibility(left_to_right(&grid))
        .scan_visibility(top_to_bottom(&grid))
        .scan_visibility(right_to_left(&grid))
        .scan_visibility(bottom_to_up(&grid))
        .count_visible();
    println!("Total Visible = {:?}", count);

    // Part 2: Find maximum scenic score
    let mut scenic = Scenic::new(&grid);
    let max = left_to_right(&grid).into_iter()
        .flat_map(|x| x)
        .map(|p| scenic.scenic_score(p))
        .max().unwrap();
    println!("Max scenic = {:?}", max);
}

For Part 1, it scans the grid from all four directions and counts the visible trees. For Part 2, it calculates the scenic score for every tree and finds the maximum.

Algorithm Analysis

Time Complexity

  • Visibility Analysis: O(n²) where n is the grid dimension (width or height), as we scan each cell in each direction
  • Scenic Score Calculation: O(n³) in the worst case, as for each of the n² cells we might need to look n steps in each direction

Space Complexity

  • Grid Storage: O(n²) to store the forest grid and visibility grid
  • Scan Sequences: O(n²) to store the coordinate sequences

Alternative Approaches

Single-Pass Visibility Check

For the visibility check, an alternative approach would be to use dynamic programming to precompute the maximum height seen from each direction:

#![allow(unused)]
fn main() {
use std::cmp::max;

// Precompute maximum heights seen from the left (this sketch assumes a Vec<Vec<i32>> grid);
// max_right, max_top and max_bottom are built the same way from the other directions.
let mut max_left = vec![vec![-1; width]; height];
for y in 0..height {
    for x in 0..width {
        if x > 0 {
            max_left[y][x] = max(max_left[y][x-1], grid[y][x-1]);
        }
    }
}
// Similar for other directions

// Check visibility
for y in 0..height {
    for x in 0..width {
        if grid[y][x] > max_left[y][x] || grid[y][x] > max_right[y][x] || 
           grid[y][x] > max_top[y][x] || grid[y][x] > max_bottom[y][x] {
            visible[y][x] = true;
        }
    }
}
}

This would have the same asymptotic complexity but might be faster in practice due to better cache locality.

Optimized Scenic Score Calculation

For the scenic score calculation, we could optimize by precomputing the viewing distance in each direction:

#![allow(unused)]
fn main() {
let mut view_distance = vec![vec![(0, 0, 0, 0); width]; height];

// Compute left viewing distances
for y in 0..height {
    // last_height[h] holds the column of the most recent tree of height h
    let mut last_height = vec![0; 10];
    for x in 0..width {
        let h = grid[y][x] as usize;
        // The left view is blocked by the nearest tree of height >= h seen so far
        view_distance[y][x].0 = x - *last_height[h..].iter().max().unwrap_or(&0);
        last_height[h] = x;
    }
}
// Similar for other directions
}

This would reduce the time complexity to O(n²), but would be more complex to implement.

Conclusion

This solution demonstrates how to efficiently work with 2D grids and perform directional scanning operations. The use of custom data structures for coordinates and grids makes the code clean and maintainable, while the separation of visibility analysis and scenic score calculation into different structs keeps the code organized.

Day 8: Code

Below is the complete code for Day 8's solution, which analyzes a grid of trees to determine visibility and scenic scores.

Full Solution


type ScanSequence = Vec<Vec<Coord>>;

#[derive(Debug,Copy, Clone)]
struct Coord {
    x: usize,
    y: usize
}
impl From<(usize,usize)> for Coord {
    fn from(p: (usize, usize)) -> Self {
        Coord { x:p.0, y:p.1 }
    }
}

#[derive(Debug)]
struct Grid<T> {
    width: usize,
    height: usize,
    grid: Vec<T>,
}
impl<T> Grid<T> where T : Default + Copy {
    fn new(width: usize, height: usize) -> Grid<T> {
        Grid {
            height,
            width,
            grid: vec![T::default(); width * height]
        }
    }
    fn in_bounds(&self, p:Coord) -> bool {
        p.x < self.width && p.y < self.height
    }
    fn tree(&self, p: Coord) -> Option<&T> {
        if !self.in_bounds(p) {
            return None
        }
        Some(&self.grid[p.y * self.width + p.x])
    }
    fn tree_mut(&mut self, p: Coord) -> Option<&mut T> {
        if !self.in_bounds(p) {
            return None
        }
        Some(&mut self.grid[p.y * self.width + p.x])
    }
}

#[derive(Debug)]
struct Visibility<'a> {
    forest: &'a Grid<i32>,
    visible: Grid<bool>,
}
impl Visibility<'_> {
    fn new(forest: &Grid<i32>) -> Visibility {
        Visibility {
            forest,
            visible: Grid::new(forest.width, forest.height),
        }
    }
    fn count_visible(&self) -> usize {
        self.visible.grid.iter()
            .filter(|&e| *e)
            .count()
    }
    fn scan_visibility(&mut self, direction: ScanSequence) -> &mut Self {
        direction.into_iter()
            .for_each(|pos| {
                let mut tallest = -1;
                pos.into_iter().for_each(|e| {
                    let tree = self.visible.tree_mut(e).unwrap();
                    let t= self.forest.tree(e).unwrap();
                    if tallest.lt(t) {
                        tallest = *t;
                        *tree = true;
                    }
                });
            });
        self
    }
}
#[derive(Debug)]
struct Scenic<'a> {
    forest: &'a Grid<i32>,
    // scenic: Grid<usize>
}
impl Scenic<'_> {
    fn new(forest: &Grid<i32>) -> Scenic {
        Scenic { forest }
    }
    fn scenic_score_dir(&mut self, p:Coord, (dx,dy):(isize,isize)) -> usize {
        let line = (1..).map_while(|i| {
            let coord = Coord {
                x: p.x.checked_add_signed(dx * i)?,
                y: p.y.checked_add_signed(dy * i)?,
            };
            self.forest.tree(coord)
        });

        let mut total = 0;
        let our_height = self.forest.tree(p).unwrap();
        for height in line {
            total += 1;
            if height >= our_height {
                break;
            }
        }
        total

    }
    fn scenic_score(&mut self, p: Coord) -> usize {
        let dirs =  [(-1, 0), (1, 0), (0, -1), (0, 1)];
        dirs.into_iter()
            .map(|dir| self.scenic_score_dir(p,dir) )
            .product()
    }
}

fn main() {
    // let data = "30373\n25512\n65332\n33549\n35390".to_string();
    let data = std::fs::read_to_string("src/bin/day8_input.txt").expect("Ops!");

    let grid = parse_forest(data.as_str());

    let count = Visibility::new(&grid)
        .scan_visibility(left_to_right(&grid))
        .scan_visibility(top_to_bottom(&grid))
        .scan_visibility(right_to_left(&grid))
        .scan_visibility(bottom_to_up(&grid))
        .count_visible();
    println!("Total Visible = {:?}", count);

    let mut scenic = Scenic::new(&grid);
    let max = left_to_right(&grid).into_iter()
        .flat_map(|x| x)
        .map(|p| scenic.scenic_score(p))
        .max().unwrap();
    println!("Max scenic = {:?}", max);
}

fn parse_forest(data: &str) -> Grid<i32>  {
    let width = data.lines().next().unwrap().len();
    let height = data.lines().count();
    let mut grid = Grid::new(width,height);

    for (y,line) in data.lines().enumerate() {
        for (x, val) in line.bytes().enumerate() {
            *grid.tree_mut((x,y).into()).unwrap() = (val - b'0') as i32;
        }
    }
    grid
}

fn left_to_right(f: &Grid<i32>) -> ScanSequence {
    (0..f.height)
        .map(|y| (0..f.width).map(move |x| (x, y).into()).collect::<Vec<Coord>>() )
        .collect::<Vec<_>>()
}
fn right_to_left(f: &Grid<i32>) -> ScanSequence {
    (0..f.height)
        .map(|y| (0..f.width).rev().map(move |x| (x, y).into()).collect::<Vec<Coord>>() )
        .collect::<Vec<_>>()
}
fn top_to_bottom(f: &Grid<i32>) -> ScanSequence {
    (0..f.width)
        .map(|x| (0..f.height).map(move |y| (x,y).into()).collect::<Vec<Coord>>() )
        .collect::<Vec<_>>()
}
fn bottom_to_up(f: &Grid<i32>) -> ScanSequence {
    (0..f.width)
        .map(|x| (0..f.height).rev().map(move |y| (x,y).into()).collect::<Vec<Coord>>() )
        .collect::<Vec<_>>()
}

Code Walkthrough

Core Data Structures

Coordinate System

#[derive(Debug,Copy, Clone)]
struct Coord {
    x: usize,
    y: usize
}
impl From<(usize,usize)> for Coord {
    fn from(p: (usize, usize)) -> Self {
        Coord { x:p.0, y:p.1 }
    }
}

The Coord struct represents a position in the grid with x and y coordinates. The From<(usize,usize)> implementation allows easy conversion from coordinate tuples.

Grid Implementation

#[derive(Debug)]
struct Grid<T> {
    width: usize,
    height: usize,
    grid: Vec<T>,
}
impl<T> Grid<T> where T : Default + Copy {
    fn new(width: usize, height: usize) -> Grid<T> {
        Grid {
            height,
            width,
            grid: vec![T::default(); width * height]
        }
    }
    fn in_bounds(&self, p:Coord) -> bool {
        p.x < self.width && p.y < self.height
    }
    fn tree(&self, p: Coord) -> Option<&T> {
        if !self.in_bounds(p) {
            return None
        }
        Some(&self.grid[p.y * self.width + p.x])
    }
    fn tree_mut(&mut self, p: Coord) -> Option<&mut T> {
        if !self.in_bounds(p) {
            return None
        }
        Some(&mut self.grid[p.y * self.width + p.x])
    }
}

The Grid<T> struct is a generic container that stores a 2D grid as a flat vector. It provides methods for:

  • Creating a new grid with default values
  • Checking if coordinates are within bounds
  • Accessing grid elements by coordinates (both immutably and mutably)

Visibility Analysis

#[derive(Debug)]
struct Visibility<'a> {
    forest: &'a Grid<i32>,
    visible: Grid<bool>,
}
impl Visibility<'_> {
    fn new(forest: &Grid<i32>) -> Visibility {
        Visibility {
            forest,
            visible: Grid::new(forest.width, forest.height),
        }
    }
    fn count_visible(&self) -> usize {
        self.visible.grid.iter()
            .filter(|&e| *e)
            .count()
    }
    fn scan_visibility(&mut self, direction: ScanSequence) -> &mut Self {
        direction.into_iter()
            .for_each(|pos| {
                let mut tallest = -1;
                pos.into_iter().for_each(|e| {
                    let tree = self.visible.tree_mut(e).unwrap();
                    let t= self.forest.tree(e).unwrap();
                    if tallest.lt(t) {
                        tallest = *t;
                        *tree = true;
                    }
                });
            });
        self
    }
}

The Visibility struct manages determining which trees are visible:

  • It keeps a reference to the forest grid and a boolean grid to track visibility
  • count_visible() counts the number of visible trees
  • scan_visibility() scans along provided coordinate sequences, marking trees as visible if they're taller than all previous trees in the scan

Scenic Score Calculation

#[derive(Debug)]
struct Scenic<'a> {
    forest: &'a Grid<i32>,
    // scenic: Grid<usize>
}
impl Scenic<'_> {
    fn new(forest: &Grid<i32>) -> Scenic {
        Scenic { forest }
    }
    fn scenic_score_dir(&mut self, p:Coord, (dx,dy):(isize,isize)) -> usize {
        let line = (1..).map_while(|i| {
            let coord = Coord {
                x: p.x.checked_add_signed(dx * i)?,
                y: p.y.checked_add_signed(dy * i)?,
            };
            self.forest.tree(coord)
        });

        let mut total = 0;
        let our_height = self.forest.tree(p).unwrap();
        for height in line {
            total += 1;
            if height >= our_height {
                break;
            }
        }
        total

    }
    fn scenic_score(&mut self, p: Coord) -> usize {
        let dirs =  [(-1, 0), (1, 0), (0, -1), (0, 1)];
        dirs.into_iter()
            .map(|dir| self.scenic_score_dir(p,dir) )
            .product()
    }
}

The Scenic struct handles calculating scenic scores:

  • scenic_score_dir() calculates the viewing distance in a specific direction using an iterator that continues until it reaches the edge or a blocking tree
  • scenic_score() combines the viewing distances in all four directions by multiplying them together

Input Parsing and Direction Scanning Utilities

fn parse_forest(data: &str) -> Grid<i32>  {
    let width = data.lines().next().unwrap().len();
    let height = data.lines().count();
    let mut grid = Grid::new(width,height);

    for (y,line) in data.lines().enumerate() {
        for (x, val) in line.bytes().enumerate() {
            *grid.tree_mut((x,y).into()).unwrap() = (val - b'0') as i32;
        }
    }
    grid
}

fn left_to_right(f: &Grid<i32>) -> ScanSequence {
    (0..f.height)
        .map(|y| (0..f.width).map(move |x| (x, y).into()).collect::<Vec<Coord>>() )
        .collect::<Vec<_>>()
}
fn right_to_left(f: &Grid<i32>) -> ScanSequence {
    (0..f.height)
        .map(|y| (0..f.width).rev().map(move |x| (x, y).into()).collect::<Vec<Coord>>() )
        .collect::<Vec<_>>()
}
fn top_to_bottom(f: &Grid<i32>) -> ScanSequence {
    (0..f.width)
        .map(|x| (0..f.height).map(move |y| (x,y).into()).collect::<Vec<Coord>>() )
        .collect::<Vec<_>>()
}
fn bottom_to_up(f: &Grid<i32>) -> ScanSequence {
    (0..f.width)
        .map(|x| (0..f.height).rev().map(move |y| (x,y).into()).collect::<Vec<Coord>>() )
        .collect::<Vec<_>>()
}

These utility functions generate coordinate sequences for scanning the grid in different directions:

  • left_to_right: Scans each row from left to right
  • right_to_left: Scans each row from right to left
  • top_to_bottom: Scans each column from top to bottom
  • bottom_to_up: Scans each column from bottom to top

Main Function and Input Parsing

fn main() {
    // let data = "30373\n25512\n65332\n33549\n35390".to_string();
    let data = std::fs::read_to_string("src/bin/day8_input.txt").expect("Ops!");

    let grid = parse_forest(data.as_str());

    let count = Visibility::new(&grid)
        .scan_visibility(left_to_right(&grid))
        .scan_visibility(top_to_bottom(&grid))
        .scan_visibility(right_to_left(&grid))
        .scan_visibility(bottom_to_up(&grid))
        .count_visible();
    println!("Total Visible = {:?}", count);

    let mut scenic = Scenic::new(&grid);
    let max = left_to_right(&grid).into_iter()
        .flat_map(|x| x)
        .map(|p| scenic.scenic_score(p))
        .max().unwrap();
    println!("Max scenic = {:?}", max);
}

The main function:

  1. Reads the input file
  2. Parses it into a grid
  3. For Part 1: Scans the grid from all four directions and counts the visible trees
  4. For Part 2: Calculates the scenic score for every tree and finds the maximum

The parse_forest function converts the input string into a grid of tree heights.

Implementation Notes

  • Generic Grid: The solution uses a generic grid implementation that can store any type of data, making it flexible for different use cases
  • Fluent Interface: The visibility scanning uses a fluent interface with method chaining for concise code
  • Iterator Usage: The solution makes extensive use of iterators, including infinite iterators with map_while for clean, efficient code
  • Coordinate Handling: The custom Coord type with From trait implementation makes coordinate handling safer and more expressive

Day 9: Rope Bridge

Day 9 involves simulating the motion of a rope with multiple knots based on a series of movement commands.

Problem Overview

You're crossing a rope bridge but need to model the rope's movement. The rope consists of a series of knots connected in a line. When the head knot moves, the other knots follow according to specific rules. Your task is to:

  1. Track the motion of a rope with 2 knots (Part 1)
  2. Track the motion of a rope with 10 knots (Part 2)
  3. Count the number of unique positions the tail knot visits

This problem tests your ability to simulate physical constraints and track positional state across a sequence of moves.

Day 9: Problem Description

Rope Bridge

This rope bridge creaks as you walk along it. You aren't sure how old it is, or whether it can even support your weight. It seems to support the Elves just fine, though.

The bridge is a series of planks connected by rope. It doesn't have any guardrails, which is a bit concerning given how many Elves have already fallen into the river. You decide to distract yourself by modeling how the ropes move as you cross the bridge.

The bridge is made entirely of rope, with the knots connected end-to-end. The first knot is secured to a large tree, and the last knot is holding a big bag of supplies. A single rope segment connects each pair of adjacent knots. The segments form a physical constraint: every segment wants to stay straight, and if a segment is pulled taut then the next segment in the chain will also be pulled in that direction (unless it's constrained in some other way).

The Elves want to know where the bag of supplies might end up. To simulate the rope, you will need to keep track of the head (the first knot) and the tail (the last knot). If the head is ever two steps directly up, down, left, or right from the tail, the tail must move one step in that direction so it remains close enough. Otherwise, if the head and tail aren't touching and aren't in the same row or column, the tail always moves one step diagonally to keep up.

You just need to work out where the tail goes as the head follows a series of motions. Assume the head and the tail both start at the same position, overlapping. Then, after each step, you can record the tail's position and count the positions it visited at least once.

For example:

R 4
U 4
L 3
D 1
R 4
D 1
L 5
R 2

This series of motions moves the head right four steps, then up four steps, then left three steps, then down one step, and so on. After each step, you'll need to update the position of the tail if the head and tail aren't touching. Visually, these motions occur as follows (s marks the starting position as a reference point):

== Initial State ==

......
......
......
......
H.....  (H covers T, s)

== R 4 ==

......
......
......
......
TH....  (T covers s)

......
......
......
......
sTH...

......
......
......
......
s.TH..

......
......
......
......
s..TH.

== U 4 ==

......
......
......
....H.
s..T..

......
......
....H.
....T.
s.....

......
....H.
....T.
......
s.....

....H.
....T.
......
......
s.....

== L 3 ==

...H..
....T.
......
......
s.....

..HT..
......
......
......
s.....

.HT...
......
......
......
s.....

== D 1 ==

..T...
.H....
......
......
s.....

== R 4 ==

..T...
..H...
......
......
s.....

..T...
...H..
......
......
s.....

......
...TH.
......
......
s.....

......
....TH
......
......
s.....

== D 1 ==

......
....T.
.....H
......
s.....

== L 5 ==

......
....T.
....H.
......
s.....

......
....T.
...H..
......
s.....

......
......
..HT..
......
s.....

......
......
.HT...
......
s.....

......
......
HT....
......
s.....

== R 2 ==

......
......
.H....  (H covers T)
......
s.....

......
......
.TH...
......
s.....

After simulating the rope, you can count up all of the positions the tail visited at least once. In this diagram, s again marks the starting position (which the tail also visited) and # marks other positions the tail visited:

..##..
...##.
.####.
....#.
s###..

So, there are 13 positions the tail visited at least once.

Part 1

Simulate your complete hypothetical series of motions. How many positions does the tail of the rope visit at least once?

Part 2

A rope snaps! Suddenly, the river is getting a lot closer than you remember. The bridge is still there, but some of the ropes that broke are now whipping toward you as you fall through the air!

The ropes are moving too quickly to grab; you only have a few seconds to choose how to arch your body to avoid being hit. Fortunately, your simulation can be extended to support longer ropes.

Rather than two knots, you now must simulate a rope consisting of ten knots. One knot is still the head of the rope and moves according to the series of motions. Each knot further down the rope follows the knot in front of it using the same rules as before.

Using the same series of motions as the above example, but with the knots marked H, 1, 2, ..., 9, the motions now occur as follows:

== Initial State ==

......
......
......
......
H.....  (H covers 1, 2, 3, 4, 5, 6, 7, 8, 9, s)

== R 4 ==

......
......
......
......
1H....  (1 covers 2, 3, 4, 5, 6, 7, 8, 9, s)

......
......
......
......
21H...  (2 covers 3, 4, 5, 6, 7, 8, 9, s)

......
......
......
......
321H..  (3 covers 4, 5, 6, 7, 8, 9, s)

......
......
......
......
4321H.  (4 covers 5, 6, 7, 8, 9, s)

== U 4 ==

......
......
......
....H.
4321..  (4 covers 5, 6, 7, 8, 9, s)

......
......
....H.
.4321.
5.....  (5 covers 6, 7, 8, 9, s)

......
....H.
....1.
.432..
5.....  (5 covers 6, 7, 8, 9, s)

....H.
....1.
..432.
.5....
6.....  (6 covers 7, 8, 9, s)

== L 3 ==

// ... continued visualization omitted for brevity ...

Now, you need to keep track of the positions the new tail (knot 9) visits. In this example, the tail never moves, and so it only visits 1 position. However, be careful: more types of motion are possible than before, so you might want to visually compare your simulated rope to the one above.

Let's try a larger example:

R 5
U 8
L 8
D 3
R 17
D 10
L 25
U 20

These motions cause the head of the rope to move around quite a bit; illustrating the positions of the head (H) and the tail (9) after each step requires a much larger grid than the first example.

// ... step-by-step visualization omitted for brevity ...

After simulating the rope, you can count up all of the positions the tail (knot 9) visited at least once. In this larger example, the tail visits 36 positions (including the position where it starts).

Simulate your complete series of motions on a rope with ten knots. How many positions does the tail of the rope visit at least once?

Day 9: Solution Explanation

Approach

Day 9 involves simulating the motion of a rope with multiple knots. The solution breaks down into several key components:

  1. Representing coordinates: We need a way to represent positions in 2D space
  2. Modeling the rope: We need to model a chain of connected knots
  3. Implementing movement rules: We need to implement how knots move in relation to each other
  4. Tracking unique positions: We need to track unique positions visited by the tail knot

The solution uses a combination of custom data structures and simulation logic to model the rope's behavior.

Implementation Details

Coordinate System

First, we define a Coord struct to represent positions in 2D space:

#![allow(unused)]
fn main() {
#[derive(Debug, Copy, Clone, PartialEq, Eq, Hash)]
struct Coord {
    x: isize,
    y: isize
}
}

This struct includes several derived traits:

  • Debug, Copy, and Clone for convenience
  • PartialEq and Eq for equality comparisons
  • Hash to allow using coordinates as keys in a HashSet

We also implement the Sub trait to make it easy to calculate the distance between two coordinates:

#![allow(unused)]
fn main() {
impl Sub for Coord {
    type Output = Coord;

    fn sub(self, rhs: Self) -> Self::Output {
        Coord {
            x: self.x - rhs.x,
            y: self.y - rhs.y,
        }
    }
}
}

And a conversion from tuples for convenience:

#![allow(unused)]
fn main() {
impl From<(isize,isize)> for Coord {
    fn from(pos: (isize, isize)) -> Self {
        Coord{x:pos.0, y:pos.1}
    }
}
}
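A quick sketch of how these pieces fit together (the comparison relies on the PartialEq derive shown above):

fn main() {
    let head: Coord = (3, 4).into();
    let tail: Coord = (1, 4).into();
    // The subtraction yields the offset the tail has to react to.
    assert_eq!(head - tail, Coord { x: 2, y: 0 });
}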

Movement Commands

We define an enum to represent the four possible movement directions:

#![allow(unused)]
fn main() {
#[derive(Debug, Copy, Clone)]
enum Command {
    Left,
    Right,
    Up,
    Down
}
}

And a struct to represent a movement step with a direction and distance:

#![allow(unused)]
fn main() {
#[derive(Debug, Copy, Clone)]
struct Step {
    cmd: Command,
    units: isize
}
}

Each knot in the rope is modeled as a Link that knows its position and how to move:

#![allow(unused)]
fn main() {
#[derive(Debug, Copy, Clone)]
struct Link {
    pos: Coord
}
}

The Link struct has methods for different types of movement:

#![allow(unused)]
fn main() {
impl Link {
    fn new(pos:Coord) -> Link {
        Link { pos }
    }
    
    // Move directly in a cardinal direction
    fn move_to(&mut self, cmd: Command) -> Coord {
        match cmd {
            Command::Left => self.pos.x -= 1,
            Command::Right => self.pos.x += 1,
            Command::Up => self.pos.y += 1,
            Command::Down => self.pos.y -= 1
        }
        self.position()
    }
    
    // Move relative to another link based on physical constraints
    fn move_relative(&mut self, front: &Link) -> Coord {
        let dist = front.position() - self.position();
        let (dx,dy) = match (dist.x, dist.y) {
            // overlapping
            (0, 0) => (0, 0),
            // touching up/left/down/right
            (0, 1) | (1, 0) | (0, -1) | (-1, 0) => (0, 0),
            // touching diagonally
            (1, 1) | (1, -1) | (-1, 1) | (-1, -1) => (0, 0),
            // need to move up/left/down/right
            (0, 2) => (0, 1),
            (0, -2) => (0, -1),
            (2, 0) => (1, 0),
            (-2, 0) => (-1, 0),
            // need to move to the right diagonally
            (2, 1) => (1, 1),
            (2, -1) => (1, -1),
            // need to move to the left diagonally
            (-2, 1) => (-1, 1),
            (-2, -1) => (-1, -1),
            // need to move up/down diagonally
            (1, 2) => (1, 1),
            (-1, 2) => (-1, 1),
            (1, -2) => (1, -1),
            (-1, -2) => (-1, -1),
            // need to move diagonally
            (-2, -2) => (-1, -1),
            (-2, 2) => (-1, 1),
            (2, -2) => (1, -1),
            (2, 2) => (1, 1),
            _ => panic!("unhandled case: tail - head = {dist:?}"),
        };
        self.pos.x += dx;
        self.pos.y += dy;
        self.position()
    }
    
    fn position(&self) -> Coord {
        self.pos
    }
}
}

The move_relative method is the heart of the solution. It implements the physical constraint that if a knot is too far from the knot in front of it, it must move to maintain proximity. The method calculates the relative position and then determines the appropriate movement using pattern matching.
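As a worked example of the rule table: if the front knot sits two steps right and one step up, the (2, 1) arm fires and this knot steps diagonally. A small sketch using the types above:

fn main() {
    let head = Link::new((2, 1).into());
    let mut tail = Link::new((0, 0).into());
    // dist = (2, 1), which matches the "move to the right diagonally" arm,
    // so the tail ends up at (1, 1).
    assert_eq!(tail.move_relative(&head), Coord { x: 1, y: 1 });
}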

Modeling the Rope Chain

The entire rope is modeled as a chain of links:

#![allow(unused)]
fn main() {
struct Chain {
    links: Vec<Link>
}

impl Chain {
    fn new(pos:Coord, size:usize) -> Chain {
        Chain {
            links: vec![Link::new(pos); size]
        }
    }
    
    fn move_to(&mut self, cmd: Command) -> Coord {
        self.links[0].move_to(cmd);
        self.links
            .iter_mut()
            .reduce(|front,tail|{
                tail.move_relative(front);
                tail
            })
            .unwrap()
            .position()
    }
}
}

The move_to method moves the head knot directly and then propagates the movement through the chain using reduce. This elegantly handles the chain of dependencies where each knot's movement depends on the knot in front of it.
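For instance, on a freshly created three-link chain a single move only displaces the head; every link behind it is still touching its neighbour, so the tail position returned by move_to is unchanged. A small sketch assuming the types above:

fn main() {
    let mut chain = Chain::new((0, 0).into(), 3);
    // The head moves to (1, 0); links 1 and 2 are still adjacent, so the
    // tail (the last link) stays at the origin.
    assert_eq!(chain.move_to(Command::Right), Coord { x: 0, y: 0 });
}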

Game Simulation

The overall simulation is handled by the Game struct:

#![allow(unused)]
fn main() {
struct Game {
    rope: Chain,
    unique: HashSet<Coord>
}

impl Game {
    fn new(rope: Chain) -> Game {
        Game { rope, unique: HashSet::new() }
    }
    
    fn unique_positions(&self) -> usize {
        self.unique.len()
    }
    
    fn run(&mut self, input: &Vec<Step>) -> &Self{
        for step in input {
            (0..step.units).all(|_| {
                self.unique.insert(
                    self.rope.move_to(step.cmd)
                );
                true
            });
        }
        self
    }
}
}

The Game struct:

  • Manages the rope chain
  • Tracks unique positions visited by the tail knot using a HashSet
  • Provides a run method to simulate all the movement steps

Parsing Input

The input is parsed into a sequence of Step values:

#![allow(unused)]
fn main() {
fn parse_commands(input: &str) -> Vec<Step> {
    input.lines()
        .map(|line| line.split(' '))
        .map(|mut s| {
            let cmd = match s.next() {
                Some("R") => Command::Right,
                Some("U") => Command::Up,
                Some("D") => Command::Down,
                Some("L") => Command::Left,
                _ => panic!("Woohaaaa!")
            };
            (cmd, isize::from_str(s.next().unwrap()).unwrap())
        })
        .fold(vec![], |mut out, (cmd, units)| {
            out.push(Step{ cmd, units });
            out
        })
}
}

Main Solution

The main solution creates two games - one with a 2-knot rope for Part 1 and one with a 10-knot rope for Part 2:

fn main() {
    let data = std::fs::read_to_string("src/bin/day9_input.txt").expect("");
    let cmds = parse_commands(data.as_str());

    println!("2 Link Chain - Unique points: {}",
             Game::new(Chain::new((0, 0).into(), 2))
                 .run(&cmds)
                 .unique_positions()
    );
    println!("10 Links Chain - Unique points: {}",
             Game::new(Chain::new((0, 0).into(), 10))
                 .run(&cmds)
                 .unique_positions()
    );
}

Algorithm Analysis

Time Complexity

  • Parsing the input: O(n) where n is the number of lines in the input
  • Simulating the rope: O(n * m * k) where:
    • n is the number of steps
    • m is the maximum number of units in any step
    • k is the number of knots in the rope

Space Complexity

  • Storing the rope: O(k) where k is the number of knots
  • Storing unique positions: O(p) where p is the number of unique positions visited

Alternative Approaches

Simplified Movement Logic

The move_relative method uses a detailed pattern match to handle all possible relative positions. An alternative approach could use a more general formula:

#![allow(unused)]
fn main() {
fn move_relative_simplified(&mut self, front: &Link) -> Coord {
    let dist = front.position() - self.position();
    
    // If the knots are touching (Chebyshev distance <= 1, including overlap), don't move
    if dist.x.abs() <= 1 && dist.y.abs() <= 1 {
        return self.position();
    }
    
    // Otherwise, move in the direction of the front knot
    self.pos.x += dist.x.signum();
    self.pos.y += dist.y.signum();
    self.position()
}
}

This approach is more concise but less explicit about the movement rules.

Alternative Coordinate Representation

Instead of a custom Coord struct, we could use tuples:

#![allow(unused)]
fn main() {
type Coord = (isize, isize);

// Calculate distance
fn distance(a: Coord, b: Coord) -> Coord {
    (a.0 - b.0, a.1 - b.1)
}
}

This would be simpler but less expressive and type-safe.

Conclusion

This solution demonstrates a clean approach to simulating physical constraints in a chain of connected objects. The use of custom types for coordinates and links, along with the implementation of movement rules, creates a readable and maintainable solution. The approach generalizes well from the 2-knot rope in Part 1 to the 10-knot rope in Part 2 without requiring significant changes to the code.

Day 9: Code

Below is the complete code for Day 9's solution, which simulates the motion of a rope with multiple knots.

Full Solution

use std::collections::HashSet;
use std::hash::Hash;
use std::ops::Sub;
use std::str::FromStr;
use std::vec;

#[derive(Debug, Copy, Clone, PartialEq, Eq, Hash)]
struct Coord {
    x: isize,
    y: isize
}
impl Sub for Coord {
    type Output = Coord;

    fn sub(self, rhs: Self) -> Self::Output {
        Coord {
            x: self.x - rhs.x,
            y: self.y - rhs.y,
        }
    }
}
impl From<(isize,isize)> for Coord {
    fn from(pos: (isize, isize)) -> Self {
        Coord{x:pos.0, y:pos.1}
    }
}
#[derive(Debug, Copy, Clone)]
enum Command {
    Left,
    Right,
    Up,
    Down
}
#[derive(Debug, Copy, Clone)]
struct Step {
    cmd: Command,
    units: isize
}

#[derive(Debug, Copy, Clone)]
struct Link {
    pos: Coord
}
impl Link {
    fn new(pos:Coord) -> Link {
        Link { pos }
    }
    fn move_to(&mut self, cmd: Command) -> Coord {
        match cmd {
            Command::Left => self.pos.x -= 1,
            Command::Right => self.pos.x += 1,
            Command::Up => self.pos.y += 1,
            Command::Down => self.pos.y -= 1
        }
        self.position()
    }
    fn move_relative(&mut self, front: &Link) -> Coord {
        let dist = front.position() - self.position();
        let (dx,dy) = match (dist.x, dist.y) {
            // overlapping
            (0, 0) => (0, 0),
            // touching up/left/down/right
            (0, 1) | (1, 0) | (0, -1) | (-1, 0) => (0, 0),
            // touching diagonally
            (1, 1) | (1, -1) | (-1, 1) | (-1, -1) => (0, 0),
            // need to move up/left/down/right
            (0, 2) => (0, 1),
            (0, -2) => (0, -1),
            (2, 0) => (1, 0),
            (-2, 0) => (-1, 0),
            // need to move to the right diagonally
            (2, 1) => (1, 1),
            (2, -1) => (1, -1),
            // need to move to the left diagonally
            (-2, 1) => (-1, 1),
            (-2, -1) => (-1, -1),
            // need to move up/down diagonally
            (1, 2) => (1, 1),
            (-1, 2) => (-1, 1),
            (1, -2) => (1, -1),
            (-1, -2) => (-1, -1),
            // need to move diagonally
            (-2, -2) => (-1, -1),
            (-2, 2) => (-1, 1),
            (2, -2) => (1, -1),
            (2, 2) => (1, 1),
            _ => panic!("unhandled case: tail - head = {dist:?}"),
        };
        self.pos.x += dx;
        self.pos.y += dy;
        self.position()
    }
    fn position(&self) -> Coord {
        self.pos
    }
}

struct Chain {
    links: Vec<Link>
}
impl Chain {
    fn new(pos:Coord, size:usize) -> Chain {
        Chain {
            links: vec![Link::new(pos); size]
        }
    }
    fn move_to(&mut self, cmd: Command) -> Coord {

        self.links[0].move_to(cmd);
        self.links
            .iter_mut()
            .reduce(|front,tail|{
                tail.move_relative(front);
                tail
            })
            .unwrap()
            .position()
    }
}

struct Game {
    rope: Chain,
    unique: HashSet<Coord>
}
impl Game {
    fn new(rope: Chain) -> Game {
        Game { rope, unique: HashSet::new() }
    }
    fn unique_positions(&self) -> usize {
        self.unique.len()
    }
    fn run(&mut self, input: &Vec<Step>) -> &Self{
        for step in input {
            (0..step.units).all(|_| {
                self.unique.insert(
                    self.rope.move_to( step.cmd )
                );
                true
            });
        }
        self
    }
}

fn parse_commands(input: &str) -> Vec<Step> {
    input.lines()
        .map(|line| line.split(' '))
        .map(|mut s| {
            let cmd = match s.next() {
                Some("R") => Command::Right,
                Some("U") => Command::Up,
                Some("D") => Command::Down,
                Some("L") => Command::Left,
                _ => panic!("Woohaaaa!")
            };
            (cmd, isize::from_str(s.next().unwrap()).unwrap())
        })
        .fold(vec![], |mut out, (cmd, units)| {
            out.push( Step{ cmd, units });
            out
        })

}

fn main() {
//     let data = "R 4\nU 4\nL 3\nD 1\nR 4\nD 1\nL 5\nR 2".to_string();
//     let data = "R 5\nU 8\nL 8\nD 3\nR 17\nD 10\nL 25\nU 20\n".to_string();

    let data = std::fs::read_to_string("src/bin/day9_input.txt").expect("");

    let cmds = parse_commands(data.as_str());

    println!("2 Link Chain - Unique points: {}",
             Game::new( Chain::new((0, 0).into(), 2))
                 .run( &cmds )
                 .unique_positions()
    );
    println!("10 Links Chain - Unique points: {}",
             Game::new( Chain::new((0, 0).into(), 10))
                 .run( &cmds )
                 .unique_positions()
    );
}

Code Walkthrough

Coordinate System

#[derive(Debug, Copy, Clone, PartialEq, Eq, Hash)]
struct Coord {
    x: isize,
    y: isize
}
impl Sub for Coord {
    type Output = Coord;

    fn sub(self, rhs: Self) -> Self::Output {
        Coord {
            x: self.x - rhs.x,
            y: self.y - rhs.y,
        }
    }
}
impl From<(isize,isize)> for Coord {
    fn from(pos: (isize, isize)) -> Self {
        Coord{x:pos.0, y:pos.1}
    }
}

The Coord struct represents positions in 2D space. It includes:

  • x and y coordinates as signed integers
  • Implementation of Sub to calculate the distance between two coordinates
  • A conversion from tuples for convenience
  • Several derived traits, including Hash to allow using coordinates in a HashSet

Movement Commands

#[derive(Debug, Copy, Clone)]
enum Command {
    Left,
    Right,
    Up,
    Down
}
#[derive(Debug, Copy, Clone)]
struct Step {
    cmd: Command,
    units: isize
}

These types represent movement commands:

  • Command is an enum for the four possible directions
  • Step combines a command with a distance
#[derive(Debug, Copy, Clone)]
struct Link {
    pos: Coord
}
impl Link {
    fn new(pos:Coord) -> Link {
        Link { pos }
    }
    fn move_to(&mut self, cmd: Command) -> Coord {
        match cmd {
            Command::Left => self.pos.x -= 1,
            Command::Right => self.pos.x += 1,
            Command::Up => self.pos.y += 1,
            Command::Down => self.pos.y -= 1
        }
        self.position()
    }
    fn move_relative(&mut self, front: &Link) -> Coord {
        let dist = front.position() - self.position();
        let (dx,dy) = match (dist.x, dist.y) {
            // overlapping
            (0, 0) => (0, 0),
            // touching up/left/down/right
            (0, 1) | (1, 0) | (0, -1) | (-1, 0) => (0, 0),
            // touching diagonally
            (1, 1) | (1, -1) | (-1, 1) | (-1, -1) => (0, 0),
            // need to move up/left/down/right
            (0, 2) => (0, 1),
            (0, -2) => (0, -1),
            (2, 0) => (1, 0),
            (-2, 0) => (-1, 0),
            // need to move to the right diagonally
            (2, 1) => (1, 1),
            (2, -1) => (1, -1),
            // need to move to the left diagonally
            (-2, 1) => (-1, 1),
            (-2, -1) => (-1, -1),
            // need to move up/down diagonally
            (1, 2) => (1, 1),
            (-1, 2) => (-1, 1),
            (1, -2) => (1, -1),
            (-1, -2) => (-1, -1),
            // need to move diagonally
            (-2, -2) => (-1, -1),
            (-2, 2) => (-1, 1),
            (2, -2) => (1, -1),
            (2, 2) => (1, 1),
            _ => panic!("unhandled case: tail - head = {dist:?}"),
        };
        self.pos.x += dx;
        self.pos.y += dy;
        self.position()
    }
    fn position(&self) -> Coord {
        self.pos
    }
}

The Link struct represents a single knot in the rope:

  • pos is the current position of the knot
  • move_to moves the knot directly in a cardinal direction
  • move_relative implements the physical constraints of the rope, moving the knot based on its relation to the knot in front of it
  • position returns the current position

The move_relative method is particularly detailed, handling all possible relative positions through pattern matching.

Rope Chain Implementation

struct Chain {
    links: Vec<Link>
}
impl Chain {
    fn new(pos:Coord, size:usize) -> Chain {
        Chain {
            links: vec![Link::new(pos); size]
        }
    }
    fn move_to(&mut self, cmd: Command) -> Coord {

        self.links[0].move_to(cmd);
        self.links
            .iter_mut()
            .reduce(|front,tail|{
                tail.move_relative(front);
                tail
            })
            .unwrap()
            .position()
    }
}

The Chain struct represents the entire rope:

  • links is a vector of Link objects
  • new creates a chain of a specified size, with all links starting at the same position
  • move_to moves the head link directly and then propagates the movement through the chain

The reduce operation in move_to elegantly handles the chain of movement dependencies.

Game Simulation

struct Game {
    rope: Chain,
    unique: HashSet<Coord>
}
impl Game {
    fn new(rope: Chain) -> Game {
        Game { rope, unique: HashSet::new() }
    }
    fn unique_positions(&self) -> usize {
        self.unique.len()
    }
    fn run(&mut self, input: &Vec<Step>) -> &Self{
        for step in input {
            (0..step.units).all(|_| {
                self.unique.insert(
                    self.rope.move_to( step.cmd )
                );
                true
            });
        }
        self
    }
}

The Game struct manages the simulation:

  • rope is the rope chain being simulated
  • unique is a HashSet of coordinates visited by the tail
  • unique_positions returns the number of unique positions visited
  • run simulates all the movement steps and tracks unique tail positions

Parsing Input

fn parse_commands(input: &str) -> Vec<Step> {
    input.lines()
        .map(|line| line.split(' '))
        .map(|mut s| {
            let cmd = match s.next() {
                Some("R") => Command::Right,
                Some("U") => Command::Up,
                Some("D") => Command::Down,
                Some("L") => Command::Left,
                _ => panic!("Woohaaaa!")
            };
            (cmd, isize::from_str(s.next().unwrap()).unwrap())
        })
        .fold(vec![], |mut out, (cmd, units)| {
            out.push( Step{ cmd, units });
            out
        })

}

The parse_commands function converts the input text into a vector of Step objects by:

  1. Splitting each line into parts
  2. Converting the first part to a Command
  3. Converting the second part to a distance
  4. Creating a Step with the command and distance

Main Function

fn main() {
//     let data = "R 4\nU 4\nL 3\nD 1\nR 4\nD 1\nL 5\nR 2".to_string();
//     let data = "R 5\nU 8\nL 8\nD 3\nR 17\nD 10\nL 25\nU 20\n".to_string();

    let data = std::fs::read_to_string("src/bin/day9_input.txt").expect("");

    let cmds = parse_commands(data.as_str());

    println!("2 Link Chain - Unique points: {}",
             Game::new( Chain::new((0, 0).into(), 2))
                 .run( &cmds )
                 .unique_positions()
    );
    println!("10 Links Chain - Unique points: {}",
             Game::new( Chain::new((0, 0).into(), 10))
                 .run( &cmds )
                 .unique_positions()
    );
}

The main function:

  1. Reads the input file
  2. Parses it into commands
  3. For Part 1: Creates a game with a 2-link chain and runs the simulation
  4. For Part 2: Creates a game with a 10-link chain and runs the simulation
  5. Prints the number of unique positions visited by the tail in each case

Implementation Notes

  • Pattern Matching: The solution makes extensive use of pattern matching, especially in the move_relative method
  • Functional Approach: The solution uses functional programming techniques like map, reduce, and method chaining
  • Trait Implementations: Custom traits like Sub and trait derivations make the code more expressive and type-safe
  • Type Safety: Custom types like Coord, Command, and Step provide type safety and clarity

Day 10: Cathode-Ray Tube

Day 10 involves simulating a CPU with a simple instruction set and a CRT display.

Problem Overview

You need to fix a small handheld device with a CPU and CRT screen. The task involves:

  1. Simulating a CPU that executes instructions with different timing
  2. Monitoring the X register at specific cycles
  3. Drawing pixels on a CRT screen based on the X register's value

This problem tests your ability to implement a simple processor simulation and track the state over time, as well as generating visual output based on that state.

Day 10: Problem Description

Cathode-Ray Tube

You avoid the ropes, plunge into the river, and swim to shore.

The Elves yell something about meeting back up with them upriver, but the river is too loud to tell exactly what they're saying. They finish crossing the bridge and disappear from view.

Situations like this must be why the Elves prioritized getting the communication system on your handheld device working. You pull it out of your pack, but the amount of water slowly draining from a big crack in its screen tells you it probably won't be of much immediate use.

Unless, that is, you can design a replacement for the device's video system! It seems to be some kind of cathode-ray tube screen and simple CPU that are both driven by a precise clock circuit. The clock circuit ticks at a constant rate; each tick is called a cycle.

Start by figuring out the signal being sent by the CPU. The CPU has a single register, X, which starts with the value 1. It supports only two instructions:

  • addx V takes two cycles to complete. After two cycles, the X register is increased by the value V. (V can be negative.)
  • noop takes one cycle to complete. It has no other effect.

The CPU uses these instructions in a program (your puzzle input) to, somehow, tell the screen what to draw.

Consider the following small program:

noop
addx 3
addx -5

Execution of this program proceeds as follows:

  • At the start of the first cycle, the noop instruction begins execution. During the first cycle, X is 1. After the first cycle, the noop instruction finishes execution, doing nothing.
  • At the start of the second cycle, the addx 3 instruction begins execution. During the second cycle, X is still 1.
  • During the third cycle, X is still 1. After the third cycle, the addx 3 instruction finishes execution, setting X to 4.
  • At the start of the fourth cycle, the addx -5 instruction begins execution. During the fourth cycle, X is still 4.
  • During the fifth cycle, X is still 4. After the fifth cycle, the addx -5 instruction finishes execution, setting X to -1.

Maybe you can learn something by looking at the value of the X register throughout execution. For now, consider the signal strength (the cycle number multiplied by the value of the X register) during the 20th cycle and every 40 cycles after that (that is, during the 20th, 60th, 100th, 140th, 180th, and 220th cycles).

For example, consider this larger program:

addx 15
addx -11
addx 6
addx -3
addx 5
addx -1
addx -8
addx 13
addx 4
noop
addx -1
addx 5
addx -1
addx 5
addx -1
addx 5
addx -1
addx 5
addx -1
addx -35
addx 1
addx 24
addx -19
addx 1
addx 16
addx -11
noop
noop
addx 21
addx -15
noop
noop
addx -3
addx 9
addx 1
addx -3
addx 8
addx 1
addx 5
noop
noop
noop
noop
noop
addx -36
noop
addx 1
addx 7
noop
noop
noop
addx 2
addx 6
noop
noop
noop
noop
noop
addx 1
noop
noop
addx 7
addx 1
noop
addx -13
addx 13
addx 7
noop
addx 1
addx -33
noop
noop
noop
addx 2
noop
noop
noop
addx 8
noop
addx -1
addx 2
addx 1
noop
addx 17
addx -9
addx 1
addx 1
addx -3
addx 11
noop
noop
addx 1
noop
addx 1
noop
noop
addx -13
addx -19
addx 1
addx 3
addx 26
addx -30
addx 12
addx -1
addx 3
addx 1
noop
noop
noop
addx -9
addx 18
addx 1
addx 2
noop
noop
addx 9
noop
noop
noop
addx -1
addx 2
addx -37
addx 1
addx 3
noop
addx 15
addx -21
addx 22
addx -6
addx 1
noop
addx 2
addx 1
noop
addx -10
noop
noop
addx 20
addx 1
addx 2
addx 2
addx -6
addx -11
noop
noop
noop

The interesting signal strengths can be determined as follows:

  • During the 20th cycle, register X has the value 21, so the signal strength is 20 * 21 = 420. (The 20th cycle occurs in the middle of the second addx -1, so the value of register X is the starting value, 1, plus all of the other addx values up to that point: 1 + 15 - 11 + 6 - 3 + 5 - 1 - 8 + 13 + 4 = 21.)
  • During the 60th cycle, register X has the value 19, so the signal strength is 60 * 19 = 1140.
  • During the 100th cycle, register X has the value 18, so the signal strength is 100 * 18 = 1800.
  • During the 140th cycle, register X has the value 21, so the signal strength is 140 * 21 = 2940.
  • During the 180th cycle, register X has the value 16, so the signal strength is 180 * 16 = 2880.
  • During the 220th cycle, register X has the value 18, so the signal strength is 220 * 18 = 3960.

The sum of these signal strengths is 13140.

Part 1

Find the signal strength during the 20th, 60th, 100th, 140th, 180th, and 220th cycles. What is the sum of these six signal strengths?

Part 2

It seems like the X register controls the horizontal position of a sprite. Specifically, the sprite is 3 pixels wide, and the X register sets the horizontal position of the middle of that sprite. (In this system, there is no such thing as "vertical position": if the sprite's horizontal position puts its pixels where the CRT is currently drawing, then those pixels will be drawn.)

You count the pixels on the CRT: 40 wide and 6 high. This CRT screen draws the top row of pixels left-to-right, then the row below that, and so on. The left-most pixel in each row is in position 0, and the right-most pixel in each row is in position 39.

Like the CPU, the CRT is tied closely to the clock circuit: the CRT draws a single pixel during each cycle. Representing each pixel of the screen as a #, here are the cycles during which the first and last pixel in each row are drawn:

Cycle   1 -> ######################################## <- Cycle  40
Cycle  41 -> ######################################## <- Cycle  80
Cycle  81 -> ######################################## <- Cycle 120
Cycle 121 -> ######################################## <- Cycle 160
Cycle 161 -> ######################################## <- Cycle 200
Cycle 201 -> ######################################## <- Cycle 240

So, by carefully timing the CPU instructions and the CRT drawing operations, you should be able to determine whether the sprite is visible the instant each pixel is drawn. If the sprite is positioned such that one of its three pixels is the pixel currently being drawn, the screen produces a lit pixel (#); otherwise, the screen leaves the pixel dark (.).

The first few pixels from the larger example above are drawn as follows:

Sprite position: ###.....................................

Start cycle   1: begin executing addx 15
During cycle  1: CRT draws pixel in position 0
Current CRT row: #.......................................

During cycle  2: CRT draws pixel in position 1
Current CRT row: ##......................................
End of cycle  2: finish executing addx 15 (Register X is now 16)
Sprite position: ...............###......................

Start cycle   3: begin executing addx -11
During cycle  3: CRT draws pixel in position 2
Current CRT row: ##......................................

During cycle  4: CRT draws pixel in position 3
Current CRT row: ##......................................
End of cycle  4: finish executing addx -11 (Register X is now 5)
Sprite position: ....###.................................

Start cycle   5: begin executing addx 6
During cycle  5: CRT draws pixel in position 4
Current CRT row: ##..#...................................

During cycle  6: CRT draws pixel in position 5
Current CRT row: ##..##..................................
End of cycle  6: finish executing addx 6 (Register X is now 11)
Sprite position: ..........###...........................

Start cycle   7: begin executing addx -3
During cycle  7: CRT draws pixel in position 6
Current CRT row: ##..##..................................

During cycle  8: CRT draws pixel in position 7
Current CRT row: ##..##...................................
End of cycle  8: finish executing addx -3 (Register X is now 8)
Sprite position: .......###..............................

Start cycle   9: begin executing addx 5
During cycle  9: CRT draws pixel in position 8
Current CRT row: ##..##..#...............................

During cycle 10: CRT draws pixel in position 9
Current CRT row: ##..##..##..............................
End of cycle 10: finish executing addx 5 (Register X is now 13)
Sprite position: ............###.........................

Start cycle  11: begin executing addx -1
During cycle 11: CRT draws pixel in position 10
Current CRT row: ##..##..##..............................

During cycle 12: CRT draws pixel in position 11
Current CRT row: ##..##..##..............................
End of cycle 12: finish executing addx -1 (Register X is now 12)
Sprite position: ...........###..........................

Start cycle  13: begin executing addx -8
During cycle 13: CRT draws pixel in position 12
Current CRT row: ##..##..##..#...........................

During cycle 14: CRT draws pixel in position 13
Current CRT row: ##..##..##..##..........................
End of cycle 14: finish executing addx -8 (Register X is now 4)
Sprite position: ...###..................................

Start cycle  15: begin executing addx 13
During cycle 15: CRT draws pixel in position 14
Current CRT row: ##..##..##..##..........................

During cycle 16: CRT draws pixel in position 15
Current CRT row: ##..##..##..##..........................
End of cycle 16: finish executing addx 13 (Register X is now 17)
Sprite position: ................###.....................

Start cycle  17: begin executing addx 4
During cycle 17: CRT draws pixel in position 16
Current CRT row: ##..##..##..##..#.......................

During cycle 18: CRT draws pixel in position 17
Current CRT row: ##..##..##..##..##......................
End of cycle 18: finish executing addx 4 (Register X is now 21)
Sprite position: ....................###.................

Start cycle  19: begin executing noop
During cycle 19: CRT draws pixel in position 18
Current CRT row: ##..##..##..##..##......................
End of cycle 19: finish executing noop

Start cycle  20: begin executing addx -1
During cycle 20: CRT draws pixel in position 19
Current CRT row: ##..##..##..##..##......................

During cycle 21: CRT draws pixel in position 20
Current CRT row: ##..##..##..##..##..#...................
End of cycle 21: finish executing addx -1 (Register X is now 20)
Sprite position: ...................###..................

Render the image given by your program. What eight capital letters appear on your CRT?

Day 10: Solution Explanation

Approach

Day 10 involves simulating a simple CPU with a basic instruction set and a CRT display. The solution requires us to:

  1. Parse the instructions: Read the input and convert it to a series of CPU instructions
  2. Simulate the CPU: Execute instructions while tracking the X register value
  3. Monitor signal strength: Calculate signal strength at specific cycles
  4. Render the CRT: Draw pixels based on the X register value

The solution models the CPU, its execution cycle, and the CRT display as separate components that interact with each other.

Implementation Details

Instruction Set

We start by defining the instruction set and what each instruction does:

#![allow(unused)]
fn main() {
type Cycles = usize;

#[derive(Debug,Copy, Clone)]
enum InstructionSet { Noop, AddX(isize) }

#[derive(Debug,Copy, Clone)]
struct Instruction {
    op: InstructionSet,
    ticks: Cycles
}
impl Instruction {
    fn result(&self) -> isize {
        match self.op {
            InstructionSet::Noop => 0,
            InstructionSet::AddX(val) => val
        }
    }
}
}

The InstructionSet enum represents the two possible instructions:

  • Noop: Does nothing
  • AddX(isize): Adds the specified value to the X register

The Instruction struct combines an operation with the number of cycles it takes to execute. The result method returns the value that should be added to the X register after execution.

CPU Simulation

The CPU is modeled as a state machine with several components:

#![allow(unused)]
fn main() {
#[derive(Debug)]
struct Register(isize);

struct CPU {
    x: Register,             // X register
    buffer: Option<Instruction>, // Currently executing instruction
    exec_cycles: Cycles,      // Remaining cycles for current instruction
    ip: Option<IntoIter<Instruction>> // Instruction pointer (std::vec::IntoIter over the loaded program)
}
}

The CPU implementation includes methods for loading instructions, fetching the next instruction, executing instructions, and advancing the clock:

#![allow(unused)]
fn main() {
impl CPU {
    fn new() -> CPU {
        CPU { x: Register(1), buffer: None, exec_cycles: 0, ip: None }
    }
    
    fn load(&mut self, ops: Vec<Instruction>) {
        self.ip = Some(ops.into_iter());
    }
    
    fn fetch(&mut self, op: Instruction) {
        self.exec_cycles = op.ticks;
        self.buffer = Some(op);
    }
    
    fn execute(&mut self) -> bool {
        match self.buffer {                         // Check instruction buffer
            None => false,                          // empty, not exec, go and load
            Some(op) => {                 // Instruction loaded
                self.exec_cycles -= 1;               // execution cycle #
                if self.exec_cycles == 0 {           // exec cycles reached?
                    self.x.0 += op.result();            // move Val to Reg X
                    self.buffer = None;                 // flush instruction buffer
                    false                           // not exec, go and load
                } else { true }                     // Busy executing
            }
        }
    }
    
    fn tick(&mut self) {
        if !self.execute() {
            let mut ip = self.ip.take().unwrap();
            self.fetch(ip.next().unwrap());
            self.ip.replace(ip);
        }
    }
    
    fn reg_x(&self) -> isize {
        self.x.0
    }
}
}

This implementation models the CPU's behavior:

  • execute processes one cycle of the current instruction and returns whether execution is ongoing
  • tick advances the CPU by one cycle, either continuing execution of the current instruction or fetching a new one
  • reg_x provides access to the current value of the X register
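
As a quick sanity check (not part of the published solution, but using only the types defined above), running the puzzle's small example program through this CPU and reading reg_x() right after each tick() reproduces the "during cycle N" values of X from the problem description:

#![allow(unused)]
fn main() {
// Small example program: noop / addx 3 / addx -5 (5 cycles in total).
let mut cpu = CPU::new();
cpu.load(vec![
    Instruction { op: InstructionSet::Noop, ticks: 1 },
    Instruction { op: InstructionSet::AddX(3), ticks: 2 },
    Instruction { op: InstructionSet::AddX(-5), ticks: 2 },
]);
// X "during" cycles 1..=5 should be 1, 1, 1, 4, 4.
let during: Vec<isize> = (1..=5).map(|_| { cpu.tick(); cpu.reg_x() }).collect();
assert_eq!(during, vec![1, 1, 1, 4, 4]);
}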

CRT Display

The CRT display is modeled as a separate component:

#![allow(unused)]
fn main() {
struct CRT {
    width: usize,
    clock: Cycles
}

impl CRT {
    fn new(width: usize) -> CRT {
        CRT{ width, clock: 0 }
    }
    
    fn draw(&mut self, pos: isize) {
        let col = self.clock % self.width;
        print!("{}",
            if (pos-1..=pos+1).contains(&(col as isize)) { '#' } else { '.' }
        );
        if col == self.width-1 { println!() }
    }
    
    fn tick(&mut self, pos:isize) {
        self.draw(pos);
        self.clock += 1;
    }
}
}

The CRT:

  • Tracks its own clock cycle
  • Draws a pixel based on the current cycle and the X register value
  • Automatically handles line breaks when reaching the end of a row
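
For instance (an illustrative call sequence, not part of the published solution), with X = 1 the sprite covers columns 0..=2, so the first three ticks of a fresh row print lit pixels:

#![allow(unused)]
fn main() {
let mut crt = CRT::new(40);
crt.tick(1); // column 0 is inside 0..=2 -> prints '#'
crt.tick(1); // column 1 -> '#'
crt.tick(1); // column 2 -> '#'
// a fourth tick with X = 1 would print '.', since column 3 lies outside the sprite
}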

Parsing Instructions

The input is parsed into a sequence of instructions:

#![allow(unused)]
fn main() {
fn parse_instructions(inp: &str) -> (Vec<Instruction>, usize) {
    inp.lines()
        .map(|line| {
            let mut iter = line.split(' ');
            match iter.next() {
                Some("noop") => Instruction { op: InstructionSet::Noop, ticks: 1 },
                Some("addx") => {
                    let val = isize::from_str(
                        iter.next().expect("parse_instructions: addx is missing its value!")
                    ).expect("parse_instructions: addx not followed by numeric value!");
                    Instruction { op: InstructionSet::AddX(val), ticks: 2 }
                },
                _ => panic!("parse_instructions: unknown instruction caught!")
            }
        })
        .fold((vec![],0), |(mut out,mut total), op| {
            total += op.ticks;
            out.push(op);
            (out,total)
        })
}
}

This function:

  1. Converts each line into an Instruction
  2. Sets the appropriate number of cycles for each instruction type (1 for noop, 2 for addx)
  3. Returns both the instructions and the total number of cycles they'll take to execute
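
As a small illustration (using the example program from the problem description rather than the real input), the three-line program parses into three instructions totalling five cycles:

#![allow(unused)]
fn main() {
let (ops, total) = parse_instructions("noop\naddx 3\naddx -5");
assert_eq!(ops.len(), 3);
assert_eq!(total, 5); // 1 cycle for noop + 2 cycles for each addx
}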

Main Simulation

The main simulation brings everything together:

fn main() {
    let input = std::fs::read_to_string("src/bin/day10_input.txt").expect("Ops!");

    let sample_intervals = vec![20usize, 60, 100, 140, 180, 220];
    let mut sampling_interval = sample_intervals.iter().peekable();

    let mut crt = CRT::new(40);
    let mut cpu = CPU::new();

    let (opcode, clock) = parse_instructions(input.as_str() );
    cpu.load(opcode);

    let sum = (1..=clock)
        .map(|cycle| {
            cpu.tick();
            crt.tick(cpu.reg_x());
            ( cycle, cpu.reg_x() )
        })
        .filter(|(cycle,_)|
            match sampling_interval.peek() {
                Some(&to_sample) if to_sample.eq(cycle) => { sampling_interval.next(); true }
                _ => false
            }
        )
        .map(|(clock, x)| x * clock as isize)
        .sum::<isize>();

    println!("{sum} is the sum of signal strengths at {:?}", sample_intervals);
}

The main function:

  1. Sets up the sample intervals for signal strength measurement
  2. Creates the CPU and CRT
  3. Parses the instructions and loads them into the CPU
  4. Runs the simulation for the specified number of cycles, ticking both CPU and CRT each cycle
  5. Filters for the specific cycles we need to sample
  6. Calculates the signal strength at those cycles
  7. Sums the signal strengths for Part 1

Part 2's output is handled automatically by the CRT's draw method, which prints the characters directly to the console.

Algorithm Analysis

Time Complexity

  • Parsing the input: O(n) where n is the number of instructions
  • Simulating the CPU: O(c) where c is the total number of cycles
  • Overall: O(n + c), which is effectively O(c) since the number of cycles is proportional to the number of instructions

Space Complexity

  • Storing instructions: O(n) where n is the number of instructions
  • CPU state: O(1)
  • CRT state: O(1)
  • Overall: O(n)

Alternative Approaches

Simplified CPU Model

Instead of modeling the CPU with an instruction buffer and execution cycles, we could use a simpler approach that just keeps track of the current instruction and cycles:

#![allow(unused)]
fn main() {
struct SimplifiedCPU {
    x: isize,
    cycle: usize,
    instructions: Vec<(String, isize)>
}

impl SimplifiedCPU {
    fn run(&mut self) -> Vec<(usize, isize)> {
        let mut history = Vec::new();
        let mut pc = 0;
        
        while pc < self.instructions.len() {
            let (instr, val) = &self.instructions[pc];
            
            match instr.as_str() {
                "noop" => {
                    self.cycle += 1;
                    history.push((self.cycle, self.x));
                }
                "addx" => {
                    self.cycle += 1;
                    history.push((self.cycle, self.x));
                    self.cycle += 1;
                    history.push((self.cycle, self.x));
                    self.x += val;
                }
                _ => panic!("Unknown instruction")
            }
            
            pc += 1;
        }
        
        history
    }
}
}

This approach is more straightforward but less flexible if we wanted to add more instructions or change the behavior.
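
As a hypothetical usage sketch (again with the small example program, not the real input), the (cycle, X) history returned by run is enough to compute Part 1 directly:

#![allow(unused)]
fn main() {
let mut cpu = SimplifiedCPU {
    x: 1,
    cycle: 0,
    instructions: vec![
        ("noop".to_string(), 0),
        ("addx".to_string(), 3),
        ("addx".to_string(), -5),
    ],
};
let history = cpu.run();
// These match the "during cycle" values from the problem description.
assert_eq!(history, vec![(1, 1), (2, 1), (3, 1), (4, 4), (5, 4)]);

// Sample cycles 20, 60, 100, ... and sum cycle * X; this tiny program is too
// short to reach cycle 20, so the sum is 0 here.
let part1: isize = history.iter()
    .filter(|(cycle, _)| *cycle >= 20 && (*cycle - 20) % 40 == 0)
    .map(|(cycle, x)| (*cycle as isize) * *x)
    .sum();
assert_eq!(part1, 0);
}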

CRT as a String Buffer

Instead of printing directly, the CRT could build a string buffer:

#![allow(unused)]
fn main() {
struct BufferedCRT {
    width: usize,
    height: usize,
    buffer: Vec<char>,
    position: usize
}

impl BufferedCRT {
    fn new(width: usize, height: usize) -> Self {
        Self {
            width,
            height,
            buffer: vec!['.'; width * height],
            position: 0
        }
    }
    
    fn draw(&mut self, sprite_pos: isize) {
        let col = self.position % self.width;
        if (sprite_pos-1..=sprite_pos+1).contains(&(col as isize)) {
            self.buffer[self.position] = '#';
        }
        self.position += 1;
    }
    
    fn display(&self) -> String {
        self.buffer.chunks(self.width)
            .map(|row| row.iter().collect::<String>())
            .collect::<Vec<_>>()
            .join("\n")
    }
}
}

This would allow us to build up the entire display and then render it all at once, which might be preferable for some applications.
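
A hypothetical way to wire this in (not part of the published solution) is to call draw from the same per-cycle loop that currently drives the printing CRT, and render once at the end:

#![allow(unused)]
fn main() {
let mut crt = BufferedCRT::new(40, 6);
// inside the existing cycle loop, after cpu.tick():
//     crt.draw(cpu.reg_x());
// after the loop, print the whole image in one go:
println!("{}", crt.display());
}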

Conclusion

This solution demonstrates how to simulate a simple CPU and CRT display. The modular approach with separate CPU and CRT components makes the code clean and maintainable. The use of Rust's pattern matching and option handling helps elegantly manage the CPU's state and instruction execution.

Day 10: Code

Below is the complete code for Day 10's solution, which simulates a CPU and CRT display.

Full Solution

use std::str::FromStr;
use std::vec::IntoIter;

type Cycles = usize;

#[derive(Debug,Copy, Clone)]
enum InstructionSet { Noop, AddX(isize) }

#[derive(Debug,Copy, Clone)]
struct Instruction {
    op: InstructionSet,
    ticks: Cycles
}
impl Instruction {
    fn result(&self) -> isize {
        match self.op {
            InstructionSet::Noop => 0,
            InstructionSet::AddX(val) => val
        }
    }
}

#[derive(Debug)]
struct Register(isize);


struct CPU {
    x: Register,
    buffer: Option<Instruction>,
    exec_cycles: Cycles,
    ip: Option<IntoIter<Instruction>>
}
impl CPU {
    fn new() -> CPU {
        CPU { x: Register(1), buffer: None, exec_cycles: 0, ip: None }
    }
    fn load(&mut self, ops: Vec<Instruction>) {
        self.ip = Some(ops.into_iter());
    }
    fn fetch(&mut self, op: Instruction) {
        self.exec_cycles = op.ticks;
        self.buffer = Some(op);
    }
    fn execute(&mut self) -> bool {
        match self.buffer {                         // Check instruction buffer
            None => false,                          // empty, not exec, go and load
            Some(op) => {                 // Instruction loaded
                self.exec_cycles -= 1;               // execution cycle #
                if self.exec_cycles == 0 {           // exec cycles reached ?
                    self.x.0 += op.result();            // move Val to Reg X
                    self.buffer = None;                 // flush instruction buffer
                    false                           // not exec, go and load
                } else { true }                     // Busy executing
            }
        }
    }
    fn tick(&mut self) {
        if !self.execute() {
            let mut ip = self.ip.take().unwrap();
            self.fetch(ip.next().unwrap());
            self.ip.replace(ip);
        }
    }
    fn reg_x(&self) -> isize {
        self.x.0
    }
}

struct CRT {
    width: usize,
    clock: Cycles
}
impl CRT {
    fn new(width: usize) -> CRT {
        CRT{ width, clock: 0 }
    }
    fn draw(&mut self, pos: isize) {
        let col = self.clock % self.width;
        print!("{}",
            if (pos-1..=pos+1).contains(&(col as isize)) { '#' } else { '.' }
        );
        if col == self.width-1 { println!() }
    }
    fn tick(&mut self, pos:isize) {
        self.draw(pos);
        self.clock += 1;
    }
}

fn parse_instructions(inp: &str) -> (Vec<Instruction>, usize) {
    inp.lines()
        .map(|line| {
            let mut iter = line.split(' ');
            match iter.next() {
                Some("noop") => Instruction { op: InstructionSet::Noop, ticks: 1 },
                Some("addx") => {
                    let val = isize::from_str(
                        iter.next().expect("parse_instructions: addx is missing its value!")
                    ).expect("parse_instructions: addx not followed by numeric value!");
                    Instruction { op: InstructionSet::AddX(val), ticks: 2 }
                },
                _ => panic!("parse_instructions: unknown instruction caught!")
            }
        })
        .fold((vec![],0), |(mut out,mut total), op| {
            total += op.ticks;
            out.push(op);
            (out,total)
        })
}

fn main() {
    let input = std::fs::read_to_string("src/bin/day10_input.txt").expect("Ops!");

    let sample_intervals = vec![20usize, 60, 100, 140, 180, 220];
    let mut sampling_interval = sample_intervals.iter().peekable();

    let mut crt = CRT::new(40);
    let mut cpu = CPU::new();

    let (opcode, clock) = parse_instructions(input.as_str() );
    cpu.load(opcode);

    let sum = (1..=clock)
        .map(|cycle| {
            cpu.tick();
            crt.tick(cpu.reg_x());
            ( cycle, cpu.reg_x() )
        })
        .filter(|(cycle,_)|
            match sampling_interval.peek() {
                Some(&to_sample) if to_sample.eq(cycle) => { sampling_interval.next(); true }
                _ => false
            }
        )
        .map(|(clock, x)| x * clock as isize)
        .sum::<isize>();

    println!("{sum} is the sum of  signal strengths at {:?}", sample_intervals);
}

Code Walkthrough

Data Types and Instruction Set

type Cycles = usize;

#[derive(Debug,Copy, Clone)]
enum InstructionSet { Noop, AddX(isize) }

#[derive(Debug,Copy, Clone)]
struct Instruction {
    op: InstructionSet,
    ticks: Cycles
}
impl Instruction {
    fn result(&self) -> isize {
        match self.op {
            InstructionSet::Noop => 0,
            InstructionSet::AddX(val) => val
        }
    }
}

#[derive(Debug)]
struct Register(isize);

The code defines the core types for the CPU simulation:

  • Cycles is a type alias for usize to represent clock cycles
  • InstructionSet is an enum of the possible instructions (Noop and AddX)
  • Instruction combines an operation with the number of cycles it takes
  • Register is a simple wrapper around an isize value

The result method on Instruction returns the value that should be added to the X register after execution.

CPU Implementation

struct CPU {
    x: Register,
    buffer: Option<Instruction>,
    exec_cycles: Cycles,
    ip: Option<IntoIter<Instruction>>
}
impl CPU {
    fn new() -> CPU {
        CPU { x: Register(1), buffer: None, exec_cycles: 0, ip: None }
    }
    fn load(&mut self, ops: Vec<Instruction>) {
        self.ip = Some(ops.into_iter());
    }
    fn fetch(&mut self, op: Instruction) {
        self.exec_cycles = op.ticks;
        self.buffer = Some(op);
    }
    fn execute(&mut self) -> bool {
        match self.buffer {                         // Check instruction buffer
            None => false,                          // empty, not exec, go and load
            Some(op) => {                 // Instruction loaded
                self.exec_cycles -= 1;               // execution cycle #
                if self.exec_cycles == 0 {           // exec cycles reached ?
                    self.x.0 += op.result();            // move Val to Reg X
                    self.buffer = None;                 // flush instruction buffer
                    false                           // not exec, go and load
                } else { true }                     // Busy executing
            }
        }
    }
    fn tick(&mut self) {
        if !self.execute() {
            let mut ip = self.ip.take().unwrap();
            self.fetch(ip.next().unwrap());
            self.ip.replace(ip);
        }
    }
    fn reg_x(&self) -> isize {
        self.x.0
    }
}

The CPU struct models a simple processor with:

  • An X register storing a single value
  • An instruction buffer for the currently executing instruction
  • A counter for the remaining execution cycles
  • An instruction pointer to iterate through the program

The key methods are:

  • execute() - Processes one cycle of the current instruction, decrements the cycle counter, and returns whether execution is still in progress
  • tick() - Advances the CPU by one cycle, either continuing execution or fetching a new instruction
  • reg_x() - Returns the current value of the X register

CRT Implementation

struct CRT {
    width: usize,
    clock: Cycles
}
impl CRT {
    fn new(width: usize) -> CRT {
        CRT{ width, clock: 0 }
    }
    fn draw(&mut self, pos: isize) {
        let col = self.clock % self.width;
        print!("{}",
            if (pos-1..=pos+1).contains(&(col as isize)) { '#' } else { '.' }
        );
        if col == self.width-1 { println!() }
    }
    fn tick(&mut self, pos:isize) {
        self.draw(pos);
        self.clock += 1;
    }
}

The CRT struct implements a simple display:

  • width defines how many pixels are in each row
  • clock tracks the current pixel position
  • draw() prints a pixel based on whether the sprite (positioned at pos) overlaps with the current pixel
  • tick() advances the CRT clock after drawing a pixel

Instruction Parsing

fn parse_instructions(inp: &str) -> (Vec<Instruction>, usize) {
    inp.lines()
        .map(|line| {
            let mut iter = line.split(' ');
            match iter.next() {
                Some("noop") => Instruction { op: InstructionSet::Noop, ticks: 1 },
                Some("addx") => {
                    let val = isize::from_str(
                        iter.next().expect("parse_instructions: addx is missing its value!")
                    ).expect("parse_instructions: addx not followed by numeric value!");
                    Instruction { op: InstructionSet::AddX(val), ticks: 2 }
                },
                _ => panic!("parse_instructions: unknown instruction caught!")
            }
        })
        .fold((vec![],0), |(mut out,mut total), op| {
            total += op.ticks;
            out.push(op);
            (out,total)
        })
}

The parse_instructions function converts the input text to a list of instructions:

  1. It splits each line and matches the instruction type
  2. For noop, it creates an instruction with 1 execution cycle
  3. For addx, it parses the value and creates an instruction with 2 execution cycles
  4. It uses fold to build a vector of instructions while also calculating the total number of cycles

Main Function

fn main() {
    let input = std::fs::read_to_string("src/bin/day10_input.txt").expect("Ops!");

    let sample_intervals = vec![20usize, 60, 100, 140, 180, 220];
    let mut sampling_interval = sample_intervals.iter().peekable();

    let mut crt = CRT::new(40);
    let mut cpu = CPU::new();

    let (opcode, clock) = parse_instructions(input.as_str() );
    cpu.load(opcode);

    let sum = (1..=clock)
        .map(|cycle| {
            cpu.tick();
            crt.tick(cpu.reg_x());
            ( cycle, cpu.reg_x() )
        })
        .filter(|(cycle,_)|
            match sampling_interval.peek() {
                Some(&to_sample) if to_sample.eq(cycle) => { sampling_interval.next(); true }
                _ => false
            }
        )
        .map(|(clock, x)| x * clock as isize)
        .sum::<isize>();

    println!("{sum} is the sum of  signal strengths at {:?}", sample_intervals);
}

The main function ties everything together:

  1. It defines the specific cycles at which to sample the signal (20, 60, 100, etc.)
  2. It initializes the CRT and CPU
  3. It parses the instructions and loads them into the CPU
  4. It creates a range for all cycles and maps each cycle to:
    • Advance the CPU
    • Update the CRT
    • Return the cycle number and register value
  5. It filters for the specific cycles we want to sample
  6. It calculates the signal strength (cycle number × register value) for each sampled cycle
  7. It sums all signal strengths and prints the result

The Part 2 output (the eight capital letters) is printed directly by the CRT during simulation.

Implementation Notes

  • State Machine Design: The CPU is implemented as a state machine that processes instructions cycle-by-cycle
  • Separation of Concerns: The CPU and CRT are separate components with their own state and behavior
  • Pipeline Simulation: The instruction execution follows a simple pipeline pattern with fetch and execute stages
  • Functional Programming: The code uses functional programming patterns like map, filter, and fold for concise data processing

Day 11: Monkey in the Middle

Day 11 involves simulating monkeys playing a game of keep-away with items of various worry levels.

Problem Overview

You need to model a group of monkeys passing items between them according to specific rules. Each monkey:

  1. Has a list of items with worry levels
  2. Inspects each item, applying an operation to update its worry level
  3. Tests the worry level to decide which monkey to throw the item to
  4. Keeps track of how many items it inspects

The challenge is to determine the level of "monkey business" (the product of the inspection counts of the two most active monkeys) after a number of rounds.

Day 11: Problem Description

Monkey in the Middle

As you finally start making your way upriver, you realize your pack is much lighter than you remember. Just then, one of the items from your pack goes flying overhead. Monkeys are playing Keep Away with your missing things!

To get your stuff back, you need to be able to predict where the monkeys will throw your items. After some careful observation, you realize the monkeys operate based on how worried you are about each item.

You take some notes (your puzzle input) on the items each monkey currently has, how worried you are about those items, and how the monkey makes decisions based on your worry level. For example:

Monkey 0:
  Starting items: 79, 98
  Operation: new = old * 19
  Test: divisible by 23
    If true: throw to monkey 2
    If false: throw to monkey 3

Monkey 1:
  Starting items: 54, 65, 75, 74
  Operation: new = old + 6
  Test: divisible by 19
    If true: throw to monkey 2
    If false: throw to monkey 0

Monkey 2:
  Starting items: 79, 60, 97
  Operation: new = old * old
  Test: divisible by 13
    If true: throw to monkey 1
    If false: throw to monkey 3

Monkey 3:
  Starting items: 74
  Operation: new = old + 3
  Test: divisible by 17
    If true: throw to monkey 0
    If false: throw to monkey 1

Each monkey has several attributes:

  • Starting items lists your worry level for each item the monkey is currently holding in the order they will be inspected.
  • Operation shows how your worry level changes as that monkey inspects an item. (An operation like new = old * 5 means that your worry level after the monkey inspected the item is five times whatever your worry level was before inspection.)
  • Test shows how the monkey uses your worry level to decide where to throw an item next.
    • If true shows what happens with an item if the Test was true.
    • If false shows what happens with an item if the Test was false.

After each monkey inspects an item but before it tests your worry level, your relief that the monkey's inspection didn't damage the item causes your worry level to be divided by three and rounded down to the nearest integer.

The monkeys take turns inspecting and throwing items. On a single monkey's turn, it inspects and throws all of the items it is holding one at a time and in the order listed. Monkey 0 goes first, then monkey 1, and so on until each monkey has had one turn. The process of each monkey taking a single turn is called a round.

When a monkey throws an item to another monkey, the item goes on the end of the recipient monkey's list. A monkey that starts a round with no items could end up inspecting and throwing many items by the time its turn comes around. If a monkey is holding no items at the start of its turn, its turn ends.

In the above example, the first round proceeds as follows:

Monkey 0:
  Monkey inspects an item with a worry level of 79.
    Worry level is multiplied by 19 to 1501.
    Monkey gets bored with item. Worry level is divided by 3 to 500.
    Current worry level is not divisible by 23.
    Item with worry level 500 is thrown to monkey 3.
  Monkey inspects an item with a worry level of 98.
    Worry level is multiplied by 19 to 1862.
    Monkey gets bored with item. Worry level is divided by 3 to 620.
    Current worry level is not divisible by 23.
    Item with worry level 620 is thrown to monkey 3.
Monkey 1:
  Monkey inspects an item with a worry level of 54.
    Worry level increases by 6 to 60.
    Monkey gets bored with item. Worry level is divided by 3 to 20.
    Current worry level is not divisible by 19.
    Item with worry level 20 is thrown to monkey 0.
  Monkey inspects an item with a worry level of 65.
    Worry level increases by 6 to 71.
    Monkey gets bored with item. Worry level is divided by 3 to 23.
    Current worry level is not divisible by 19.
    Item with worry level 23 is thrown to monkey 0.
  Monkey inspects an item with a worry level of 75.
    Worry level increases by 6 to 81.
    Monkey gets bored with item. Worry level is divided by 3 to 27.
    Current worry level is not divisible by 19.
    Item with worry level 27 is thrown to monkey 0.
  Monkey inspects an item with a worry level of 74.
    Worry level increases by 6 to 80.
    Monkey gets bored with item. Worry level is divided by 3 to 26.
    Current worry level is not divisible by 19.
    Item with worry level 26 is thrown to monkey 0.
Monkey 2:
  Monkey inspects an item with a worry level of 79.
    Worry level is multiplied by itself to 6241.
    Monkey gets bored with item. Worry level is divided by 3 to 2080.
    Current worry level is divisible by 13.
    Item with worry level 2080 is thrown to monkey 1.
  Monkey inspects an item with a worry level of 60.
    Worry level is multiplied by itself to 3600.
    Monkey gets bored with item. Worry level is divided by 3 to 1200.
    Current worry level is not divisible by 13.
    Item with worry level 1200 is thrown to monkey 3.
  Monkey inspects an item with a worry level of 97.
    Worry level is multiplied by itself to 9409.
    Monkey gets bored with item. Worry level is divided by 3 to 3136.
    Current worry level is not divisible by 13.
    Item with worry level 3136 is thrown to monkey 3.
Monkey 3:
  Monkey inspects an item with a worry level of 74.
    Worry level increases by 3 to 77.
    Monkey gets bored with item. Worry level is divided by 3 to 25.
    Current worry level is not divisible by 17.
    Item with worry level 25 is thrown to monkey 1.
  Monkey inspects an item with a worry level of 500.
    Worry level increases by 3 to 503.
    Monkey gets bored with item. Worry level is divided by 3 to 167.
    Current worry level is not divisible by 17.
    Item with worry level 167 is thrown to monkey 1.
  Monkey inspects an item with a worry level of 620.
    Worry level increases by 3 to 623.
    Monkey gets bored with item. Worry level is divided by 3 to 207.
    Current worry level is not divisible by 17.
    Item with worry level 207 is thrown to monkey 1.
  Monkey inspects an item with a worry level of 1200.
    Worry level increases by 3 to 1203.
    Monkey gets bored with item. Worry level is divided by 3 to 401.
    Current worry level is not divisible by 17.
    Item with worry level 401 is thrown to monkey 1.
  Monkey inspects an item with a worry level of 3136.
    Worry level increases by 3 to 3139.
    Monkey gets bored with item. Worry level is divided by 3 to 1046.
    Current worry level is not divisible by 17.
    Item with worry level 1046 is thrown to monkey 1.

After round 1, the monkeys are holding items with these worry levels:

Monkey 0: 20, 23, 27, 26
Monkey 1: 2080, 25, 167, 207, 401, 1046
Monkey 2: 
Monkey 3: 

Monkeys 2 and 3 aren't holding any items at the end of the round; they both inspected items during the round and threw them all before the round ended.

This process continues for a few more rounds:

After round 2, the monkeys are holding items with these worry levels:
Monkey 0: 695, 10, 71, 135, 350
Monkey 1: 43, 49, 58, 55, 362
Monkey 2: 
Monkey 3: 

After round 3, the monkeys are holding items with these worry levels:
Monkey 0: 16, 18, 21, 20, 122
Monkey 1: 1468, 22, 150, 286, 739
Monkey 2: 
Monkey 3: 

After round 4, the monkeys are holding items with these worry levels:
Monkey 0: 491, 9, 52, 97, 248, 34
Monkey 1: 39, 45, 43, 258
Monkey 2: 
Monkey 3: 

After round 5, the monkeys are holding items with these worry levels:
Monkey 0: 15, 17, 16, 88, 1037
Monkey 1: 20, 110, 205, 524, 72
Monkey 2: 
Monkey 3: 

After round 6, the monkeys are holding items with these worry levels:
Monkey 0: 8, 70, 176, 26, 34
Monkey 1: 481, 32, 36, 186, 2190
Monkey 2: 
Monkey 3: 

After round 7, the monkeys are holding items with these worry levels:
Monkey 0: 162, 12, 14, 64, 732, 17
Monkey 1: 148, 372, 55, 72
Monkey 2: 
Monkey 3: 

After round 8, the monkeys are holding items with these worry levels:
Monkey 0: 51, 126, 20, 26, 136
Monkey 1: 343, 26, 30, 1546, 36
Monkey 2: 
Monkey 3: 

After round 9, the monkeys are holding items with these worry levels:
Monkey 0: 116, 10, 12, 517, 14
Monkey 1: 108, 267, 43, 55, 288
Monkey 2: 
Monkey 3: 

After round 10, the monkeys are holding items with these worry levels:
Monkey 0: 91, 16, 20, 98
Monkey 1: 481, 245, 22, 26, 1092, 30
Monkey 2: 
Monkey 3: 

...

After round 15, the monkeys are holding items with these worry levels:
Monkey 0: 83, 44, 8, 184, 9, 20, 26, 102
Monkey 1: 110, 36
Monkey 2: 
Monkey 3: 

...

After round 20, the monkeys are holding items with these worry levels:
Monkey 0: 10, 12, 14, 26, 34
Monkey 1: 245, 93, 53, 199, 115
Monkey 2: 
Monkey 3: 

Chasing all of the monkeys at once is impossible; you're going to have to focus on the two most active monkeys if you want any hope of getting your stuff back. Count the total number of times each monkey inspects items over 20 rounds:

Monkey 0 inspected items 101 times.
Monkey 1 inspected items 95 times.
Monkey 2 inspected items 7 times.
Monkey 3 inspected items 105 times.

In this example, the two most active monkeys inspected items 101 and 105 times. The level of monkey business in this situation can be found by multiplying these together: 10605.

Part 1

Figure out which monkeys to chase by counting how many items they inspect over 20 rounds. What is the level of monkey business after 20 rounds of stuff-slinging simian shenanigans?

Part 2

You're worried you might not ever get your items back. So worried, in fact, that your relief that a monkey's inspection didn't damage an item no longer causes your worry level to be divided by three.

Unfortunately, that relief was all that was keeping your worry levels from reaching ridiculous levels. You'll need to find another way to keep your worry levels manageable.

At this rate, you might be putting up with these monkeys for a very long time - possibly 10000 rounds!

With these new rules, you can still figure out the monkey business after 10000 rounds. Using the same example above:

== After round 1 ==
Monkey 0 inspected items 2 times.
Monkey 1 inspected items 4 times.
Monkey 2 inspected items 3 times.
Monkey 3 inspected items 6 times.

== After round 20 ==
Monkey 0 inspected items 99 times.
Monkey 1 inspected items 97 times.
Monkey 2 inspected items 8 times.
Monkey 3 inspected items 103 times.

== After round 1000 ==
Monkey 0 inspected items 5204 times.
Monkey 1 inspected items 4792 times.
Monkey 2 inspected items 199 times.
Monkey 3 inspected items 5192 times.

== After round 2000 ==
Monkey 0 inspected items 10419 times.
Monkey 1 inspected items 9577 times.
Monkey 2 inspected items 392 times.
Monkey 3 inspected items 10391 times.

== After round 3000 ==
Monkey 0 inspected items 15638 times.
Monkey 1 inspected items 14358 times.
Monkey 2 inspected items 587 times.
Monkey 3 inspected items 15593 times.

== After round 4000 ==
Monkey 0 inspected items 20858 times.
Monkey 1 inspected items 19138 times.
Monkey 2 inspected items 780 times.
Monkey 3 inspected items 20797 times.

== After round 5000 ==
Monkey 0 inspected items 26075 times.
Monkey 1 inspected items 23921 times.
Monkey 2 inspected items 974 times.
Monkey 3 inspected items 26000 times.

== After round 6000 ==
Monkey 0 inspected items 31294 times.
Monkey 1 inspected items 28702 times.
Monkey 2 inspected items 1165 times.
Monkey 3 inspected items 31204 times.

== After round 7000 ==
Monkey 0 inspected items 36508 times.
Monkey 1 inspected items 33488 times.
Monkey 2 inspected items 1360 times.
Monkey 3 inspected items 36400 times.

== After round 8000 ==
Monkey 0 inspected items 41728 times.
Monkey 1 inspected items 38268 times.
Monkey 2 inspected items 1553 times.
Monkey 3 inspected items 41606 times.

== After round 9000 ==
Monkey 0 inspected items 46945 times.
Monkey 1 inspected items 43051 times.
Monkey 2 inspected items 1746 times.
Monkey 3 inspected items 46807 times.

== After round 10000 ==
Monkey 0 inspected items 52166 times.
Monkey 1 inspected items 47830 times.
Monkey 2 inspected items 1938 times.
Monkey 3 inspected items 52013 times.

After 10000 rounds, the two most active monkeys inspected items 52166 and 52013 times. Multiplying these together, the level of monkey business in this situation is now 2713310158.

Worry levels are no longer divided by three after each item is inspected; you'll need to find another way to keep your worry levels manageable. Starting again from the initial state in your puzzle input, what is the level of monkey business after 10000 rounds?

Day 11: Solution Explanation

Approach

Day 11 involves simulating monkeys playing keep-away with items, applying operations to worry levels, and passing items between monkeys based on tests. The key challenges are:

  1. Parsing the monkey specifications from the input text
  2. Modeling monkeys and their behavior with appropriate data structures
  3. Simulating the rounds of monkey inspections and item throwing
  4. Managing worry levels efficiently, especially for Part 2

The solution uses a combination of custom data types and simulation logic to model the monkey behavior accurately.

Implementation Details

Data Structures

First, we define a type for representing worry levels and the operation that monkeys can perform:

#![allow(unused)]
fn main() {
type WorryType = u64;
const WORRY_DEF: WorryType = 0;

#[derive(Debug)]
enum Operation {
    Add(WorryType),
    Mul(WorryType),
}
}

WorryType is set to u64 to handle the large numbers that can occur during the simulation. The Operation enum represents the two possible operations a monkey can perform: addition or multiplication.

The Monkey struct represents all the properties of a monkey:

#![allow(unused)]
fn main() {
#[derive(Debug)]
struct Monkey {
    name: usize,
    items: VecDeque<WorryType>,
    op: Operation,
    test: WorryType,
    send: (usize,usize),
    inspect: usize
}
}

Each monkey has:

  • A name (index)
  • A queue of items (worry levels)
  • An operation to apply when inspecting items
  • A divisibility test value
  • Two target monkeys to throw to based on the test result
  • A counter for the number of inspections performed

Parsing Input

The solution uses Rust's FromStr trait to parse monkey specifications from the input text:

#![allow(unused)]
fn main() {
impl FromStr for Monkey {
    type Err = ();

    fn from_str(s: &str) -> Result<Self, Self::Err> {
        let mut monkey = Cell::new(Monkey::default());
        s.lines()
            .map(|line| line.trim().split(':').collect::<Vec<_>>())
            .map(|parts|{
                let m = monkey.get_mut();
                match parts[0] {
                    "Starting items" => {
                        parts[1].split(',')
                            .map(|n| WorryType::from_str(n.trim()).unwrap() )
                            .all(|a| { m.items.push_back(a); true });
                    }
                    "Operation" => {
                        let [op,act] = parts[1]
                            .split("new = old ")
                            .last()
                            .unwrap()
                            .split(' ')
                            .collect::<Vec<_>>()[..] else { panic!("Operation: cannot be extracted") };
                        let a = WorryType::from_str(act);
                        match (op,a) {
                            ("*",Ok(n)) => m.op = Operation::Mul(n),
                            ("+",Ok(n)) => m.op = Operation::Add(n),
                            ("*",_) => m.op = Operation::Mul(WORRY_DEF),
                            ("+",_) => m.op = Operation::Add(WORRY_DEF),
                            _ => {}
                        }
                    }
                    "Test" => {
                        let s = parts[1].trim().split("divisible by").last().unwrap().trim();
                        m.test = WorryType::from_str(s).unwrap();
                    }
                    "If true" => {
                        let s = parts[1].trim().split("throw to monkey").last().unwrap().trim();
                        m.send.0 = usize::from_str(s).unwrap();
                    }
                    "If false" => {
                        let s = parts[1].trim().split("throw to monkey").last().unwrap().trim();
                        m.send.1 = usize::from_str(s).unwrap();
                    }
                    name => {
                        m.name = usize::from_str(name.split(' ').last().unwrap().trim()).unwrap();
                    }
                }
                true
            })
            .all(|run| run);

        Ok(monkey.take())
    }
}
}

This implementation parses each line of the monkey specification and sets the corresponding fields in the Monkey struct.
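
As an illustrative check (using the first monkey block from the example input), the parser recovers the expected fields:

#![allow(unused)]
fn main() {
use std::str::FromStr;

let block = "Monkey 0:
  Starting items: 79, 98
  Operation: new = old * 19
  Test: divisible by 23
    If true: throw to monkey 2
    If false: throw to monkey 3";

let monkey = Monkey::from_str(block).unwrap();
assert_eq!(monkey.name, 0);
assert_eq!(monkey.test, 23);
assert_eq!(monkey.send, (2, 3));
}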

Monkey Behavior

The Monkey struct implements several methods to model its behavior:

#![allow(unused)]
fn main() {
impl Monkey {
    fn parse_text(input: &str) -> Vec<Monkey> {
        input.split("\n\n")
            .map(|monkey| Monkey::from_str(monkey).unwrap() )
            .fold(Vec::new(), |mut out, monkey|{
                out.push(monkey);
                out
            })
    }
    
    fn catch(&mut self, item: WorryType) {
        self.items.push_back(item)
    }
    
    fn throw(&self, worry: WorryType) -> (usize, WorryType) {
        if (worry % self.test) == 0 as WorryType {
            // Current worry level is divisible by the test value
            (self.send.0, worry)
        } else {
            // Current worry level is not divisible by the test value
            (self.send.1, worry)
        }
    }
    
    fn observe(&mut self, div: WorryType) -> Option<(usize, WorryType)> {
        self.inspect += 1;
        // Monkey inspects an item with a worry level
        match self.items.pop_front() {
            Some(mut worry) => {
                // Apply the modulo to keep worry levels manageable
                worry %= div;
                Some( self.throw(
                    match self.op {
                        Operation::Add(WORRY_DEF) => worry.add(worry),
                        Operation::Mul(WORRY_DEF) => worry.mul(worry),
                        Operation::Add(n) => worry + n,
                        Operation::Mul(n) => worry * n,
                    }
                ))
            }
            None => None
        }
    }
    
    fn observe_all(&mut self, div: WorryType) -> Vec<Option<(usize, WorryType)>> {
        (0..self.items.len())
            .fold(vec![], |mut out, _|{
                out.push( self.observe(div));
                out
            })
    }
    
    fn inspections(&self) -> usize {
        self.inspect
    }
}
}

These methods handle:

  • Parsing all monkeys from the input text
  • Catching items thrown by other monkeys
  • Throwing items to other monkeys based on the test result
  • Observing (inspecting) an item and updating its worry level
  • Observing all items in a monkey's possession
  • Tracking the number of inspections
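
As a quick illustrative check (monkey 0 from the example input, under Part 2 rules where worry is never divided by three), a single observe call produces 79 * 19 = 1501, which is not divisible by 23 and is therefore thrown to monkey 3:

#![allow(unused)]
fn main() {
use std::collections::VecDeque;

let mut monkey = Monkey {
    name: 0,
    items: VecDeque::from([79, 98]),
    op: Operation::Mul(19),
    test: 23,
    send: (2, 3),
    inspect: 0,
};
// Product of the example's test divisors; 79 is unchanged by the modulo here.
let div_product: WorryType = 23 * 19 * 13 * 17;
assert_eq!(monkey.observe(div_product), Some((3, 1501)));
assert_eq!(monkey.inspections(), 1);
}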

Managing Worry Levels

In Part 2, the challenge is managing the worry levels since they're no longer divided by 3 and can grow extremely large. The key insight is that we don't need the exact worry levels, only whether they're divisible by the monkeys' test values.

Since every monkey's test divisor divides the product of all the test divisors, reducing a worry level modulo that product never changes the outcome of any individual divisibility test (the same modular-arithmetic idea that underpins the Chinese Remainder Theorem). We can therefore use the product of all test values as the modulus, which keeps the worry levels manageable while preserving the divisibility properties:

#![allow(unused)]
fn main() {
let div_product: WorryType = monkeys.iter().map(|m| m.test).product();
}

This technique is applied in the observe method where we calculate worry %= div.

Simulation Logic

The main simulation logic runs for the specified number of rounds and tracks the items as they're thrown between monkeys:

#![allow(unused)]
fn main() {
// Queue for passing items around the monkeys
let mut queue = vec![VecDeque::<WorryType>::new(); monkeys.len()];

(0..10000).all(|_| {
    monkeys.iter_mut()
        .map(|monkey| {
            // pull from queue anything thrown at him
            while let Some(item) = queue[monkey.name].pop_front() {
                monkey.catch(item)
            };

            // observe and throw back at
            monkey.observe_all(div_product)
                .into_iter()
                .all(|throw|
                    throw.map(
                        |(monkey,item)| queue[monkey].push_back(item)
                    ).is_some()
                )
        })
        .all(|run| run)
});
}

The simulation:

  1. Iterates through each round
  2. For each monkey, processes all items in its possession
  3. Calculates new worry levels and determines target monkeys
  4. Uses queues to handle the items being thrown between monkeys

Calculating Monkey Business

Finally, the solution calculates the level of monkey business by multiplying the inspection counts of the two most active monkeys:

#![allow(unused)]
fn main() {
monkeys.sort_by(|a,b| b.inspect.cmp(&a.inspect));
println!("level of monkey business after 10000 rounds : {:?}",
         monkeys[0].inspections() * monkeys[1].inspections()
);
}

Algorithmic Analysis

Time Complexity

  • Parsing input: O(n) where n is the length of the input text
  • Simulation: O(r * m * i) where:
    • r is the number of rounds (10,000 for Part 2)
    • m is the number of monkeys
    • i is the average number of items per monkey

Space Complexity

  • O(m * i) for storing the monkeys and their items
  • O(m) for the queues used to pass items between monkeys

Key Insights

Chinese Remainder Theorem Application

The key insight for Part 2 is using modular arithmetic to manage worry levels. Since we only care about divisibility by each monkey's test value, we can use the product of all test values as a modulus.

This works because if we have:

  • Original worry level: W
  • Modulus: M = product of all test divisors
  • Remainder: R = W mod M

Then for any test divisor D that is a factor of M:

  • W is divisible by D if and only if R is divisible by D

This allows us to keep the worry levels manageable while preserving the divisibility properties needed for the monkeys' tests.
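
A minimal standalone check of this property (with illustrative numbers, not taken from the puzzle input):

#![allow(unused)]
fn main() {
let divisors = [23u64, 19, 13, 17];
let m: u64 = divisors.iter().product(); // 96577
let w: u64 = 1_000_003;
let r = w % m;
for d in divisors {
    // Reducing modulo the product never changes an individual divisibility test.
    assert_eq!(w % d == 0, r % d == 0);
}
}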

Alternative Approaches

Direct Divisibility Tracking

Instead of tracking the actual worry levels, we could track just the remainders when divided by each monkey's test value:

#![allow(unused)]
fn main() {
struct Item {
    remainders: HashMap<WorryType, WorryType>, // Map from test value to remainder
}
}

This would allow us to update the remainders directly without ever dealing with the full worry values. However, this is more complex to implement and likely not necessary given the effectiveness of the modulo approach.
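
For completeness, here is a hypothetical sketch of how such an Item could be built and updated (not part of the actual solution; it reuses the WorryType alias from above, and op stands for whatever the monkey does to a worry level):

#![allow(unused)]
fn main() {
use std::collections::HashMap;

struct Item {
    remainders: HashMap<WorryType, WorryType>, // test divisor -> current remainder
}

impl Item {
    fn new(worry: WorryType, divisors: &[WorryType]) -> Self {
        Item { remainders: divisors.iter().map(|&d| (d, worry % d)).collect() }
    }
    fn inspect(&mut self, op: impl Fn(WorryType) -> WorryType) {
        // Advance every remainder independently, reducing by its own divisor.
        for (d, r) in self.remainders.iter_mut() {
            *r = op(*r) % *d;
        }
    }
    fn divisible_by(&self, d: WorryType) -> bool {
        self.remainders[&d] == 0
    }
}
}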

Simulation Optimization

For a large number of rounds, we could look for patterns in the monkeys' behavior and potentially skip ahead in the simulation. However, this would add complexity and might not be necessary for the given constraints.

Conclusion

This solution demonstrates how to model a complex system with multiple interacting entities. The key insights are:

  1. Using appropriate data structures to model the monkeys and their behavior
  2. Applying modular arithmetic to manage worry levels efficiently
  3. Using queues to handle the passing of items between monkeys

These techniques allow us to simulate the monkeys' behavior for a large number of rounds without running into numerical overflow issues.

Day 11: Code

Below is the complete code for Day 11's solution, which simulates monkeys passing items with worry levels.

Full Solution

use std::cell::Cell;
use std::collections::VecDeque;
use std::ops::{Add, Mul};
use std::str::FromStr;

fn main() {

    let input = std::fs::read_to_string("src/bin/day11_input.txt").expect("Ops!");

    let mut monkeys = Monkey::parse_text(input.as_str());
    let div_product: WorryType = monkeys.iter().map(|m| m.test).product();

    // Queue for passing items around the monkeys
    let mut queue = vec![VecDeque::<WorryType>::new(); monkeys.len()];

    (0..10000).all(|_| {
        monkeys.iter_mut()
            .map(|monkey| {

                // pull from queue anything thrown at him
                while let Some(item) = queue[monkey.name].pop_front() {
                    monkey.catch(item)
                };

                // observe and throw back at
                monkey.observe_all(div_product)
                    .into_iter()
                    // .filter_map(|throw| throw)
                    .all(|throw|
                        throw.map(
                            |(monkey,item)| queue[monkey].push_back(item)
                        ).is_some()
                    )
            })
            .all(|run| run)
    });

    monkeys.sort_by(|a,b| b.inspect.cmp(&a.inspect));
    println!("level of monkey business after 10000 rounds : {:?}",
             monkeys[0].inspections() * monkeys[1].inspections()
    );
}


type WorryType = u64;
const WORRY_DEF: WorryType = 0;

#[derive(Debug)]
enum Operation {
    Add(WorryType),
    Mul(WorryType),
}
#[derive(Debug)]
struct Monkey {
    name: usize,
    items: VecDeque<WorryType>,
    op: Operation,
    test: WorryType,
    send: (usize,usize),
    inspect: usize
}
impl Monkey {
    fn parse_text(input: &str) -> Vec<Monkey> {
        input.split("\n\n")
            .map(|monkey| Monkey::from_str(monkey).unwrap() )
            .fold(Vec::new(), |mut out, monkey|{
                out.push(monkey);
                out
            })
    }
    fn catch(&mut self, item: WorryType) {
        self.items.push_back(item)
    }
    fn throw(&self, worry: WorryType) -> (usize, WorryType) {
        if (worry % self.test) == 0 as WorryType {
            // Current worry level is divisible by 23.
            // Sent to Monkey
            (self.send.0, worry)
        } else {
            // Current worry level is not divisible by 23.
            // Sent to Monkey
            (self.send.1, worry)
        }
    }
    fn observe(&mut self, div: WorryType) -> Option<(usize, WorryType)> {
        self.inspect += 1;
        //   Monkey inspects an item with a worry level of 79.
        match self.items.pop_front() {
            Some(mut worry) => {
                //     Worry level is multiplied by 19 to 1501.
                //     Monkey gets bored with item. Worry level is divided by 3 to 500.
                worry %= div;
                Some( self.throw(
                    match self.op {
                        Operation::Add(WORRY_DEF) => worry.add(worry),
                        Operation::Mul(WORRY_DEF) => worry.mul(worry),
                        Operation::Add(n) => worry + n,
                        Operation::Mul(n) => worry * n,
                    }
                ))
            }
            None => None
        }
    }
    fn observe_all(&mut self, div: WorryType) -> Vec<Option<(usize, WorryType)>> {
        (0..self.items.len())
            .fold(vec![], |mut out, _|{
                out.push( self.observe(div));
                out
            })
    }
    fn inspections(&self) -> usize {
        self.inspect
    }
}
impl Default for Monkey {
    fn default() -> Self {
        Monkey {
            name: 0,
            items: VecDeque::new(),
            op: Operation::Add(WORRY_DEF),
            test: WORRY_DEF,
            send: (0,0),
            inspect: 0
        }
    }
}
impl FromStr for Monkey {
    type Err = ();

    fn from_str(s: &str) -> Result<Self, Self::Err> {
        let mut monkey = Cell::new(Monkey::default());
        s.lines()
            .map(|line| line.trim().split(':').collect::<Vec<_>>())
            .map(|parts|{
                let m = monkey.get_mut();
                match parts[0] {
                    "Starting items" => {
                        parts[1].split(',')
                            .map(|n| WorryType::from_str(n.trim()).unwrap() )
                            .all(|a| { m.items.push_back(a); true });
                    }
                    "Operation" => {
                        let [op,act] = parts[1]
                            .split("new = old ")
                            .last()
                            .unwrap()
                            .split(' ')
                            .collect::<Vec<_>>()[..] else { panic!("Operation: cannot be extracted") };
                        let a = WorryType::from_str(act);
                        match (op,a) {
                            ("*",Ok(n)) => m.op = Operation::Mul(n),
                            ("+",Ok(n)) => m.op = Operation::Add(n),
                            ("*",_) => m.op = Operation::Mul(WORRY_DEF),
                            ("+",_) => m.op = Operation::Add(WORRY_DEF),
                            _ => {}
                        }
                    }
                    "Test" => {
                        let s = parts[1].trim().split("divisible by").last().unwrap().trim();
                        m.test = WorryType::from_str(s).unwrap();
                    }
                    "If true" => {
                        let s = parts[1].trim().split("throw to monkey").last().unwrap().trim();
                        m.send.0 = usize::from_str(s).unwrap();
                    }
                    "If false" => {
                        let s = parts[1].trim().split("throw to monkey").last().unwrap().trim();
                        m.send.1 = usize::from_str(s).unwrap();
                    }
                    name => {
                        m.name = usize::from_str(name.split(' ').last().unwrap().trim()).unwrap();
                    }
                }
                true
            })
            .all(|run| run);

        Ok(monkey.take())
    }
}

Code Walkthrough

Data Types and Structures

type WorryType = u64;
const WORRY_DEF: WorryType = 0;

#[derive(Debug)]
enum Operation {
    Add(WorryType),
    Mul(WorryType),
}
#[derive(Debug)]
struct Monkey {
    name: usize,
    items: VecDeque<WorryType>,
    op: Operation,
    test: WorryType,
    send: (usize,usize),
    inspect: usize
}

The solution defines:

  • WorryType as u64 to handle large worry levels
  • An Operation enum to represent addition or multiplication operations
  • A Monkey struct with properties for:
    • name: The monkey's index
    • items: A queue of worry levels for items the monkey is holding
    • op: The operation the monkey performs on items
    • test: The divisibility test value
    • send: A tuple with indices of monkeys to throw to (true case, false case)
    • inspect: A counter for the number of inspections
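
For illustration, a hypothetical monkey (values invented for this example, relying on the Monkey and Operation types from the listing above) would be stored as:

fn example_monkey() -> Monkey {
    Monkey {
        name: 0,
        items: VecDeque::from([79, 98]), // hypothetical starting worry levels
        op: Operation::Mul(19),          // new = old * 19
        test: 23,                        // "divisible by 23" test
        send: (2, 3),                    // throw to monkey 2 if true, monkey 3 otherwise
        inspect: 0,
    }
}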

Monkey Behavior

impl Monkey {
    fn parse_text(input: &str) -> Vec<Monkey> {
        input.split("\n\n")
            .map(|monkey| Monkey::from_str(monkey).unwrap() )
            .fold(Vec::new(), |mut out, monkey|{
                out.push(monkey);
                out
            })
    }
    fn catch(&mut self, item: WorryType) {
        self.items.push_back(item)
    }
    fn throw(&self, worry: WorryType) -> (usize, WorryType) {
        if (worry % self.test) == 0 as WorryType {
            // Current worry level is divisible by 23.
            // Sent to Monkey
            (self.send.0, worry)
        } else {
            // Current worry level is not divisible by 23.
            // Sent to Monkey
            (self.send.1, worry)
        }
    }
    fn observe(&mut self, div: WorryType) -> Option<(usize, WorryType)> {
        self.inspect += 1;
        //   Monkey inspects an item with a worry level of 79.
        match self.items.pop_front() {
            Some(mut worry) => {
                //     Worry level is multiplied by 19 to 1501.
                //     Monkey gets bored with item. Worry level is divided by 3 to 500.
                worry %= div;
                Some( self.throw(
                    match self.op {
                        Operation::Add(WORRY_DEF) => worry.add(worry),
                        Operation::Mul(WORRY_DEF) => worry.mul(worry),
                        Operation::Add(n) => worry + n,
                        Operation::Mul(n) => worry * n,
                    }
                ))
            }
            None => None
        }
    }
    fn observe_all(&mut self, div: WorryType) -> Vec<Option<(usize, WorryType)>> {
        (0..self.items.len())
            .fold(vec![], |mut out, _|{
                out.push( self.observe(div));
                out
            })
    }
    fn inspections(&self) -> usize {
        self.inspect
    }
}

The Monkey implementation includes methods for:

  • parse_text: Parsing all monkeys from the input
  • catch: Adding an item to the monkey's queue
  • throw: Determining which monkey to throw to based on the test
  • observe: Processing a single item:
    • Incrementing the inspection counter
    • Taking an item from the front of the queue
    • Applying modulo to manage worry levels
    • Applying the operation to update the worry level
    • Determining which monkey to throw to
  • observe_all: Processing all items a monkey is holding
  • inspections: Returning the inspection count
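
The following minimal sketch shows a single observe call, again with invented values and assuming the types from the listing above: the worry level is reduced modulo the divisor product, the operation is applied, and the divisibility test selects the receiving monkey.

fn observe_demo() {
    let mut monkey = Monkey {
        name: 0,
        items: VecDeque::from([10]), // hypothetical worry level
        op: Operation::Mul(3),
        test: 5,
        send: (1, 2),
        inspect: 0,
    };
    // 10 is unchanged by the modulo step (the divisor product here is large),
    // becomes 30 after "* 3", and 30 is divisible by 5, so it goes to monkey 1
    assert_eq!(monkey.observe(1_000_000), Some((1, 30)));
    assert_eq!(monkey.inspections(), 1);
}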

Parsing Logic

impl Default for Monkey {
    fn default() -> Self {
        Monkey {
            name: 0,
            items: VecDeque::new(),
            op: Operation::Add(WORRY_DEF),
            test: WORRY_DEF,
            send: (0,0),
            inspect: 0
        }
    }
}
impl FromStr for Monkey {
    type Err = ();

    fn from_str(s: &str) -> Result<Self, Self::Err> {
        let mut monkey = Cell::new(Monkey::default());
        s.lines()
            .map(|line| line.trim().split(':').collect::<Vec<_>>())
            .map(|parts|{
                let m = monkey.get_mut();
                match parts[0] {
                    "Starting items" => {
                        parts[1].split(',')
                            .map(|n| WorryType::from_str(n.trim()).unwrap() )
                            .all(|a| { m.items.push_back(a); true });
                    }
                    "Operation" => {
                        let [op,act] = parts[1]
                            .split("new = old ")
                            .last()
                            .unwrap()
                            .split(' ')
                            .collect::<Vec<_>>()[..] else { panic!("Operation: cannot be extracted") };
                        let a = WorryType::from_str(act);
                        match (op,a) {
                            ("*",Ok(n)) => m.op = Operation::Mul(n),
                            ("+",Ok(n)) => m.op = Operation::Add(n),
                            ("*",_) => m.op = Operation::Mul(WORRY_DEF),
                            ("+",_) => m.op = Operation::Add(WORRY_DEF),
                            _ => {}
                        }
                    }
                    "Test" => {
                        let s = parts[1].trim().split("divisible by").last().unwrap().trim();
                        m.test = WorryType::from_str(s).unwrap();
                    }
                    "If true" => {
                        let s = parts[1].trim().split("throw to monkey").last().unwrap().trim();
                        m.send.0 = usize::from_str(s).unwrap();
                    }
                    "If false" => {
                        let s = parts[1].trim().split("throw to monkey").last().unwrap().trim();
                        m.send.1 = usize::from_str(s).unwrap();
                    }
                    name => {
                        m.name = usize::from_str(name.split(' ').last().unwrap().trim()).unwrap();
                    }
                }
                true
            })
            .all(|run| run);

        Ok(monkey.take())
    }
}

The parsing logic includes:

  • A Default implementation for Monkey providing initial values
  • An implementation of FromStr for parsing monkey specifications
  • Logic for parsing each line of the monkey description based on field names
  • Special handling for operations that reference "old" (the current worry level)
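
Because "old" cannot be parsed as a number, the parser falls back to the WORRY_DEF sentinel (0), which observe later interprets as "old + old" or "old * old". The sketch below demonstrates this on a hypothetical monkey description written in the puzzle's format; it assumes the imports and types from the full listing.

fn squaring_monkey_demo() {
    let spec = "Monkey 0:\n  Starting items: 79, 98\n  Operation: new = old * old\n  \
                Test: divisible by 13\n  If true: throw to monkey 1\n  If false: throw to monkey 2";
    let monkey = Monkey::from_str(spec).unwrap();
    // "old * old" is encoded as Mul(WORRY_DEF), i.e. Mul(0)
    assert!(matches!(monkey.op, Operation::Mul(WORRY_DEF)));
    assert_eq!(monkey.test, 13);
    assert_eq!(monkey.send, (1, 2));
}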

Main Simulation

fn main() {

    let input = std::fs::read_to_string("src/bin/day11_input.txt").expect("Ops!");

    let mut monkeys = Monkey::parse_text(input.as_str());
    let div_product: WorryType = monkeys.iter().map(|m| m.test).product();

    // Queue for passing items around the monkeys
    let mut queue = vec![VecDeque::<WorryType>::new(); monkeys.len()];

    (0..10000).all(|_| {
        monkeys.iter_mut()
            .map(|monkey| {

                // pull from queue anything thrown at him
                while let Some(item) = queue[monkey.name].pop_front() {
                    monkey.catch(item)
                };

                // observe and throw back at
                monkey.observe_all(div_product)
                    .into_iter()
                    // .filter_map(|throw| throw)
                    .all(|throw|
                        throw.map(
                            |(monkey,item)| queue[monkey].push_back(item)
                        ).is_some()
                    )
            })
            .all(|run| run)
    });

    monkeys.sort_by(|a,b| b.inspect.cmp(&a.inspect));
    println!("level of monkey business after 10000 rounds : {:?}",
             monkeys[0].inspections() * monkeys[1].inspections()
    );
}

The main simulation logic:

  1. Reads and parses the input
  2. Calculates the product of all test divisors to manage worry levels
  3. Creates queues for passing items between monkeys
  4. Runs the simulation for 10,000 rounds:
    • For each monkey, processes all items it's holding
    • Updates worry levels and determines target monkeys
    • Uses queues to pass items between monkeys
  5. Sorts monkeys by inspection count and calculates the "monkey business" level

Implementation Notes

  • Modular Arithmetic: The solution reduces each worry level modulo div_product, the product of all test divisors; since every divisor divides that product, each monkey's divisibility test still gives the same result while the numbers stay bounded (see the sketch after this list)
  • Queue-based Communication: Items are passed between monkeys using queues, allowing each monkey to process all its items before moving to the next monkey
  • Functional Programming Style: The code uses functional programming patterns like map, fold, and method chaining
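
The modular-arithmetic trick can be checked in isolation. The sketch below, using arbitrary divisors, verifies that reducing a value modulo the product of the divisors never changes the outcome of the individual divisibility tests:

fn modulo_preserves_divisibility() {
    let divisors: [u64; 3] = [3, 5, 7];
    let product: u64 = divisors.iter().product(); // 105
    let worry: u64 = 9_876_543_210;
    for d in divisors {
        // (worry % product) % d equals worry % d because d divides product
        assert_eq!((worry % product) % d, worry % d);
    }
}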

Day 12: Hill Climbing Algorithm

Day 12 involves finding the shortest path through a grid of varying elevations.

Problem Overview

You need to navigate a heightmap representing a hilly area, finding the shortest path from a starting position to an ending position. The key constraints are:

  1. You can only move to adjacent squares that are at most one unit higher than your current position
  2. You can move to squares of any lower elevation
  3. For Part 1, you need to find the shortest path from a specific starting point to a specific ending point
  4. For Part 2, you need to find the shortest path from any lowest-elevation square to the ending point

This problem tests your ability to implement pathfinding algorithms, specifically breadth-first search (BFS), on a 2D grid with special movement constraints.

Day 12: Problem Description

Hill Climbing Algorithm

You try contacting the Elves using your handheld device, but the river you're following must be too low to get a decent signal.

You ask the device for a heightmap of the surrounding area (your puzzle input). The heightmap shows the local area from above broken into a grid; the elevation of each square of the grid is given by a single lowercase letter, where a is the lowest elevation, b is the next-lowest, and so on up to the highest elevation, z.

Also included on the heightmap are marks for your current position (S) and the location that should get the best signal (E). Your current position (S) has elevation a, and the location that should get the best signal (E) has elevation z.

You'd like to reach E, but to save energy, you should do it in as few steps as possible. During each step, you can move exactly one square up, down, left, or right. To avoid needing to get out your climbing gear, the elevation of the destination square can be at most one higher than the elevation of your current square; that is, if your current elevation is m, you could step to elevation n, but not to elevation o. (This also means that the elevation of the destination square can be much lower than the elevation of your current square.)

For example:

Sabqponm
abcryxxl
accszExk
acctuvwj
abdefghi

Here, you start in the top-left corner; your goal is near the middle. You could start by moving down or right, but eventually you'll need to head toward the e at the bottom. From there, you can spiral around to the goal:

v..v<<<<
>v.vv<<^
.>vv>E^^
..v>>>^^
..>>>>>^

In the above diagram, the symbols indicate whether the path exits each square moving up (^), down (v), left (<), or right (>). The location that should get the best signal is still E, and . marks unvisited squares.

This path reaches the goal in 31 steps, the fewest possible.

Part 1

What is the fewest steps required to move from your current position to the location that should get the best signal?

Part 2

As you walk up the hill, you suspect that the Elves will want to turn this into a hiking trail. The beginning isn't very scenic, though; perhaps you can find a better starting point.

To maximize exercise while hiking, the trail should start as low as possible: elevation a. The goal is still the square marked E. However, the trail should still be direct, taking the fewest steps to reach its goal. So, you'll need to find the shortest path from any square at elevation a to the square marked E.

Again consider the example from above:

Sabqponm
abcryxxl
accszExk
acctuvwj
abdefghi

Now, there are six choices for starting position (five marked a, plus the square marked S that counts as being at elevation a). If you start at the bottom-left square, you can reach the goal most quickly:

...v<<<<
...vv<<^
...v>E^^
.>v>>>^^
>^>>>>>^

This path reaches the goal in only 29 steps, the fewest possible.

What is the fewest steps required to move starting from any square with elevation a to the location that should get the best signal?

Day 12: Solution Explanation

Approach

Day 12 involves finding the shortest path through a grid with elevation constraints. The key to solving this problem is to use a breadth-first search (BFS) algorithm, which is optimal for finding the shortest path in an unweighted graph.

The solution breaks down into several key components:

  1. Representing the heightmap: We need to parse the input into a grid of elevation values
  2. Implementing BFS: We need to find the shortest path from start to end, respecting elevation constraints
  3. Reversing the problem for Part 2: We can efficiently solve Part 2 by starting from the end point and finding the closest square with elevation 'a'
  4. Visualizing the path: As a bonus, the solution includes visualization using the bracket-lib library

Implementation Details

Grid Representation

The solution uses a custom Grid structure to represent the heightmap:

#![allow(unused)]
fn main() {
struct ElevationGrid(Grid<u8>);
}

This wraps a generic Grid<u8> from a shared library, with elevation values represented as unsigned bytes. During parsing, letters 'a' to 'z' are converted to values 1 to 26, with 'S' (start) mapped to 0 and 'E' (end) mapped to 27.

Parsing the Input

The input is parsed into an ElevationGrid, with special handling for the start ('S') and end ('E') positions:

#![allow(unused)]
fn main() {
fn parse_elevation(data: &str) -> (ElevationGrid, Coord, Coord) {
    let width = data.lines().next().unwrap().len();
    let height = data.lines().count();
    let mut grid = Grid::new(width,height);
    let (mut start, mut finish) = ((0,0).into(),(0,0).into());

    for (y,line) in data.lines().enumerate() {
        for (x, val) in line.bytes().enumerate() {
            match val {
                b'S' => {
                    start = (x, y).into();
                    *grid.square_mut(start).unwrap() = 0;
                },
                b'E' => {
                    finish = (x, y).into();
                    *grid.square_mut(finish).unwrap() = b'z'-b'a'+2;
                }
                _ => *grid.square_mut((x, y).into()).unwrap() = val - b'a' + 1
            }
        }
    }
    (ElevationGrid(grid), start, finish)
}
}

This function returns the grid, start coordinate, and end coordinate.
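
As a quick check, here is a minimal sketch that applies the parser to the example heightmap from the problem statement; it assumes Coord exposes public x and y fields, as used elsewhere in the solution:

#![allow(unused)]
fn main() {
let sample = "Sabqponm\nabcryxxl\naccszExk\nacctuvwj\nabdefghi";
let (grid, start, finish) = parse_elevation(sample);
assert_eq!((start.x, start.y), (0, 0));   // 'S' sits in the top-left corner
assert_eq!((finish.x, finish.y), (5, 2)); // 'E' sits near the middle of the map
}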

Path Finding with BFS

The core of the solution is the shortest_path method on ElevationGrid, which implements BFS to find the shortest path satisfying a given goal condition:

#![allow(unused)]
fn main() {
fn shortest_path<F>(&self, start: Coord, goal:F ) -> Vec<Coord> where F: Fn(Coord)->bool {
    let mut ps = PathSearch::init(self);
    // push start in the queue
    ps.queue.push_back(start);

    // pop from top & while still nodes in the queue
    while let Some(cs) = ps.queue.pop_front() {
        // position matches target
        if goal(cs) {
            // extract parent position from target
            let mut cur = cs;
            while let Some(par) = ps.visited.square(cur).unwrap().1 {
                ps.path.push(par);
                cur = par;
            }
            // remove start position from path
            ps.path.pop();
            break
        }

        // mark square as visited
        ps.visited.square_mut(cs).unwrap().0 = true;

        let &square = self.0.square(cs).unwrap();

        // evaluate neighbour squares and
        // push to the queue if they have an elevation delta <= 1
        self.0.neighbouring(cs)
            .for_each(|(ns, &elevation)| {
                if let Some((false, None)) = ps.visited.square(ns) {
                    if elevation <= square + 1 {
                        // capture the square we arrived from
                        ps.visited.square_mut(ns).unwrap().1 = Some(cs);
                        ps.queue.push_back(ns)
                    }
                }
            })
    }
    ps.path
}
}

Key aspects of this implementation:

  1. It uses a queue for BFS traversal, starting from the specified position
  2. It checks each position against a goal function passed as a parameter
  3. It respects the elevation constraint (can only move to positions with elevation at most 1 higher)
  4. It reconstructs the path from end to start using parent pointers

Path Search Data Structure

The BFS algorithm is supported by a PathSearch struct that manages the search state:

#![allow(unused)]
fn main() {
struct PathSearch {
    queue: VecDeque<Coord>,
    visited: Grid<(bool,Option<Coord>)>,
    path: Vec<Coord>
}
}

This structure maintains:

  • A queue of coordinates to explore
  • A grid tracking visited positions and their parent positions (for path reconstruction)
  • A vector to store the final path

Solving Part 1

For Part 1, we find the shortest path from the start position to the end position:

#![allow(unused)]
fn main() {
// find path with closure fn() goal set at reaching the target coordinate
let path = grid.shortest_path(start, |cs| cs.eq(&target));
}

We use a closure that checks if the current position matches the target position.

Solving Part 2

For Part 2, we need to find the shortest path from any position with elevation 'a' to the end position. Instead of running BFS from each possible starting position, we reverse the problem:

#![allow(unused)]
fn main() {
// reverse the elevation so E(0) and S(27)
grid.reverse_elevation();

// find path with closure fn() goal set as reaching elevation(26) = a
let path = grid.shortest_path(target, |cs| 26.eq(grid.0.square(cs).unwrap()));
}

This elegant approach:

  1. Reverses the elevation values (making 'a' the highest and 'z' the lowest)
  2. Starts BFS from the end position
  3. Looks for the first position with elevation value 26 (which corresponds to 'a' after reversal)

The elevation reversal is implemented as:

#![allow(unused)]
fn main() {
fn reverse_elevation(&mut self) {
    let &max = self.0.iter().max().unwrap();
    self.0.iter_mut()
        .map(|val|{
            *val = max - *val;
        })
        .all(|_| true);
}
}

This effectively flips the elevation constraint, allowing us to find the shortest path from the end position to any 'a' position.
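
A small illustration of the mapping, assuming the full range of elevations is present so that the maximum stored value is 27 (the value assigned to E):

#![allow(unused)]
fn main() {
// S(0) -> 27, 'a'(1) -> 26, ..., 'z'(26) -> 1, E(27) -> 0
let max = 27u8;
let reversed: Vec<u8> = [0u8, 1, 26, 27].iter().map(|&v| max - v).collect();
assert_eq!(reversed, vec![27, 26, 1, 0]);
}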

Visualization

The solution includes visualization using the bracket-lib library, which renders the grid and path in a graphical window. This is not essential for solving the problem but provides a nice way to see the results.

Algorithmic Analysis

Time Complexity

  • BFS: O(V + E) where V is the number of vertices (grid cells) and E is the number of edges (adjacent cell pairs). In a grid, this simplifies to O(n) where n is the number of cells.
  • Path Reconstruction: O(p) where p is the length of the path.
  • Overall: O(n) for each part, where n is the number of grid cells.

Space Complexity

  • Grid Storage: O(n) to store the grid
  • BFS Data Structures: O(n) for the queue and visited tracking
  • Path Storage: O(p) where p is the path length
  • Overall: O(n)

Alternative Approaches

Dijkstra's Algorithm or A*

While BFS is optimal for unweighted graphs, we could also use Dijkstra's algorithm or A* if we wanted to add more complex cost calculations. For example:

#![allow(unused)]
fn main() {
fn shortest_path_astar(&self, start: Coord, end: Coord) -> Vec<Coord> {
    let mut open_set = BinaryHeap::new();
    let mut came_from = HashMap::new();
    let mut g_score = HashMap::new();
    let mut f_score = HashMap::new();
    
    g_score.insert(start, 0);
    f_score.insert(start, manhattan_distance(start, end));
    open_set.push(Node { pos: start, f_score: *f_score.get(&start).unwrap() });
    
    // A* algorithm implementation...
}

fn manhattan_distance(a: Coord, b: Coord) -> u32 {
    ((a.x as i32 - b.x as i32).abs() + (a.y as i32 - b.y as i32).abs()) as u32
}
}

However, for this problem, BFS is sufficient and more efficient.

Dynamic Programming

Another approach could be to use dynamic programming to calculate the shortest distance to each cell from the starting point:

#![allow(unused)]
fn main() {
fn shortest_distance_dp(&self, start: Coord) -> Grid<Option<usize>> {
    let mut distances = Grid::new(self.width(), self.height());
    *distances.square_mut(start).unwrap() = Some(0);
    
    let mut changed = true;
    while changed {
        changed = false;
        // For each cell, update distances based on neighbors
        // ...
    }
    
    distances
}
}

This would be more complex and less efficient than BFS for this problem.

Conclusion

This solution demonstrates an efficient approach to pathfinding in a grid with elevation constraints. By using BFS and cleverly reversing the problem for Part 2, we achieve a clean and performant solution. The visualization component adds an interesting way to see the results of the algorithm in action.

Day 12: Code

Below is the complete code for Day 12's solution, which implements a path-finding algorithm to navigate a heightmap.

Full Solution

use std::collections::VecDeque;
use std::fmt::{Debug, Formatter};
use bracket_lib::prelude::*;
use advent2022::{
    Grid, Coord,
    app::{App, AppLevel, State}
};

fn main() -> BResult<()> {

    let input = std::fs::read_to_string("src/bin/day12_input.txt").expect("ops!");

    // parse elevations onto a grid
    let (mut grid,start, target) = parse_elevation(input.as_str());

    // find path with closure fn() goal set at reaching the target coordinate
    let path = grid.shortest_path(start,|cs| cs.eq(&target));

    // visualise path produced
    grid.visualise_path(path);

    // reverse the elevation so E(0) and S(27)
    grid.reverse_elevation();

    // find path with closure fn() goal set as reaching elevation(26) = a
    let path = grid.shortest_path(target, |cs| 26.eq(grid.0.square(cs).unwrap()));

    // visualise path produced
    grid.visualise_path(path);
    grid.reverse_elevation();

    let mut ctx = BTermBuilder::simple(160,120)?
        .with_simple_console(grid.width(),grid.height(), "terminal8x8.png")
        .with_simple_console_no_bg(grid.width(),grid.height(), "terminal8x8.png")
        .with_simple_console_no_bg(grid.width(),grid.height(), "terminal8x8.png")
        .with_fps_cap(640f32)
        .with_title("Day12: Path Search")
        .build()?;

    let ps = PathSearch::init(&grid);
    let mut app = App::init(GStore { grid, target, start, ps } , Level::MENU);

    app.register_level(Level::MENU, Menu);
    app.register_level(Level::LEVEL1, ExerciseOne);
    app.register_level(Level::LEVEL2, ExerciseTwo);

    ctx.set_active_console(1);
    app.store().grid.draw(&mut ctx);
    main_loop(ctx, app)
}

#[derive(Copy, Clone, Debug, Eq, PartialEq, Hash )]
pub enum Level { MENU, LEVEL1, LEVEL2 }

struct GStore {
    grid: ElevationGrid,
    target: Coord,
    start: Coord,
    ps: PathSearch,
}

struct Menu;
impl AppLevel for Menu {
    type GStore = GStore;
    type GLevel = Level;
    fn init(&mut self, _: &mut BTerm, _: &mut Self::GStore) -> (Self::GLevel, State) {
        (Level::MENU, State::RUN)
    }
    fn run(&mut self, ctx: &mut BTerm, _: &mut Self::GStore) -> (Self::GLevel, State) {
        ctx.set_active_console(3);
        match ctx.key {
            Some(VirtualKeyCode::Key1) => { ctx.cls(); (Level::LEVEL1, State::INIT) },
            Some(VirtualKeyCode::Key2) => { ctx.cls(); (Level::LEVEL2, State::INIT) },
            Some(VirtualKeyCode::Q) => (Level::MENU, State::FINISH),
            _ => {
                ctx.print_centered( 42, "Press '1' : Lowest to highest elevation");
                ctx.print_centered( 44, "Press '2' : Highest to lowest elevation ");
                ctx.print_centered( 46, "Press 'Q' to exit");
                (Level::MENU, State::RUN)
            }
        }
    }
    fn term(&mut self, ctx: &mut BTerm, _: &mut Self::GStore) -> (Self::GLevel, State) {
        ctx.quit();
        (Level::MENU, State::FINISH)
    }
}

struct ExerciseOne;
impl AppLevel for ExerciseOne {
    type GStore = GStore;
    type GLevel = Level;
    fn init(&mut self, _: &mut BTerm, store: &mut Self::GStore) -> (Self::GLevel, State) {
        store.ps.reset();
        store.ps.queue.push_back(store.start);
        (Level::LEVEL1, State::RUN)
    }
    fn run(&mut self, ctx: &mut BTerm, store: &mut Self::GStore) -> (Self::GLevel, State) {
        ctx.set_active_console(2);
        match store.ps.tick(&store.grid, |cs| cs.eq(&store.target)) {
            None => {
                ctx.cls();
                store.ps.draw(ctx);
                ctx.set(store.target.x,store.target.y, BLUE, BLACK, to_cp437('\u{2588}'));
                (Level::LEVEL1, State::RUN)
            }
            Some(target) => {
                store.ps.queue.clear();
                store.ps.queue.push_back(target);
                ctx.cls();
                store.ps.draw(ctx);
                (Level::LEVEL1, State::FINISH)
            }
        }
    }
    fn term(&mut self, ctx: &mut BTerm, _: &mut Self::GStore) -> (Self::GLevel, State) {
        ctx.set_active_console(3);
        ctx.print_centered(10, "Path Found !!");
        (Level::MENU, State::INIT)
    }
}

struct ExerciseTwo;
impl AppLevel for ExerciseTwo {
    type GStore = GStore;
    type GLevel = Level;
    fn init(&mut self, _: &mut BTerm, store: &mut Self::GStore) -> (Self::GLevel, State) {
        store.ps.reset();
        store.ps.queue.push_back(store.target);
        store.grid.reverse_elevation();
        (Level::LEVEL2, State::RUN)
    }
    fn run(&mut self, ctx: &mut BTerm, store: &mut Self::GStore) -> (Self::GLevel, State) {
        ctx.set_active_console(2);
        match store.ps.tick(&store.grid, |cs| 26.eq(store.grid.0.square(cs).unwrap())) {
            None => {
                ctx.cls();
                store.ps.draw(ctx);
                (Level::LEVEL2, State::RUN)
            }
            Some(target) => {
                store.ps.queue.clear();
                store.ps.queue.push_back(target);
                ctx.cls();
                store.ps.draw(ctx);
                store.grid.reverse_elevation();
                (Level::LEVEL2, State::FINISH)
            }
        }
    }
    fn term(&mut self, ctx: &mut BTerm, _: &mut Self::GStore) -> (Self::GLevel, State) {
        ctx.set_active_console(3);
        ctx.print_centered(10, "Path Found !!");
        (Level::MENU, State::INIT)
    }
}

fn parse_elevation(data: &str) -> (ElevationGrid, Coord, Coord) {
    let width = data.lines().next().unwrap().len();
    let height = data.lines().count();
    let mut grid = Grid::new(width,height);
    let (mut start, mut finish) = ((0,0).into(),(0,0).into());

    for (y,line) in data.lines().enumerate() {
        for (x, val) in line.bytes().enumerate() {
            match val {
                b'S' => {
                    start = (x, y).into();
                    *grid.square_mut(start).unwrap() = 0;
                },
                b'E' => {
                    finish = (x, y).into();
                    *grid.square_mut(finish).unwrap() = b'z'-b'a'+2;
                }
                _ => *grid.square_mut((x, y).into()).unwrap() = val - b'a' + 1
            }
        }
    }
    (ElevationGrid(grid), start, finish)
}

struct PathSearch {
    queue: VecDeque<Coord>,
    visited: Grid<(bool,Option<Coord>)>,
    path: Vec<Coord>
}
impl PathSearch {
    fn init(grid: &ElevationGrid) -> PathSearch {
        PathSearch {
            queue: VecDeque::<Coord>::new(),
            visited: Grid::new(grid.width(), grid.height()),
            path: Vec::<_>::new()
        }
    }
    fn reset(&mut self) {
        self.queue.clear();
        self.visited.grid.iter_mut().for_each(|val| *val = (false, None) );
        self.path.clear();
    }
    fn tick<F>(&mut self, grid: &ElevationGrid, goal: F) -> Option<Coord> where F: Fn(Coord)->bool {
        let Some(cs) = self.queue.pop_front() else { return None };

        // position matches target
        if goal(cs) {
            return Some(cs);
        }
        // mark square as visited
        self.visited.square_mut(cs).unwrap().0 = true;

        let &square = grid.0.square(cs).unwrap();

        // evaluate neighbour squares and
        // push to the queue if they have an elevation delta <= 1
        grid.0.neighbouring(cs)
            .for_each(|(ns, &elevation)| {
                if let Some((false, None)) = self.visited.square(ns) {
                    if elevation <= square + 1 {
                        // capture the square we arrived from
                        self.visited.square_mut(ns).unwrap().1 = Some(cs);
                        self.queue.push_back(ns)
                    }
                }
            });
        None
    }
    fn extract_path(&self, start:Coord) -> PathIter {
        PathIter { ps: self, cur: start }
    }
    fn draw(&self,ctx: &mut BTerm) {
        self.queue.iter()
            .for_each(|&cs| {
                ctx.set(cs.x,cs.y,RED,BLACK,to_cp437('\u{2588}'));
                self.extract_path(cs)
                    .for_each(|Coord{x,y}|
                        ctx.set(x,y,ORANGE, BLACK,to_cp437('\u{2588}'))
                    )
            })
    }
}
struct PathIter<'a> {
    ps: &'a PathSearch,
    cur: Coord
}
impl Iterator for PathIter<'_> {
    type Item = Coord;
    fn next(&mut self) -> Option<Self::Item> {
        match self.ps.visited.square(self.cur).unwrap().1 {
            Some(par) => {
                self.cur = par;
                Some(par)
            }
            _ => None
        }
    }
}

struct ElevationGrid(Grid<u8>);

impl ElevationGrid {
    fn width(&self) -> usize { self.0.width }
    fn height(&self) -> usize { self.0.height }
    fn reverse_elevation(&mut self) {
        let &max = self.0.iter().max().unwrap();
        self.0.iter_mut()
            .map(|val|{
                *val = max - *val;
            })
            .all(|_| true);
    }
    fn visualise_path(&self, path:Vec<Coord>) {
        let mut gpath= ElevationGrid(Grid::new(self.width(), self.height()) );
        path.iter().for_each(|&a| *gpath.0.square_mut(a).unwrap() = *self.0.square(a).unwrap() );
        println!("Path length: {}\n{:?}",path.len(),gpath);
    }
    fn shortest_path<F>(&self, start: Coord, goal:F ) -> Vec<Coord> where F: Fn(Coord)->bool {

        let mut ps = PathSearch::init(self);
        // push start in the queue
        ps.queue.push_back(start);

        // pop from top & while still nodes in the queue
        while let Some(cs) = ps.queue.pop_front() {

            // position matches target
            if goal(cs) {
                // extract parent position from target
                let mut cur = cs;
                while let Some(par) = ps.visited.square(cur).unwrap().1 {
                    ps.path.push(par);
                    cur = par;
                }
                // remove start position from path
                ps.path.pop();
                break
            }

            // mark square as visited
            ps.visited.square_mut(cs).unwrap().0 = true;

            let &square = self.0.square(cs).unwrap();

            // evaluate neighbour squares and
            // push to the queue if they have an elevation delta <= 1
            self.0.neighbouring(cs)
                .for_each(|(ns, &elevation)| {
                    if let Some((false, None)) = ps.visited.square(ns) {
                        if elevation <= square + 1 {
                            // capture the square we arrived from
                            ps.visited.square_mut(ns).unwrap().1 = Some(cs);
                            ps.queue.push_back(ns)
                        }
                    }
                })
        }
        ps.path
    }
    fn draw(&self, ctx: &mut BTerm) {
        let rgb: Vec<_> = RgbLerp::new(CADETBLUE.into(), WHITESMOKE.into(), 27).collect();
        (0..self.height()).for_each(|y|{
            (0..self.width()).for_each(|x|
                ctx.set_bg(x, y, self.0.square((x, y).into()).map(|&cell| rgb[cell as usize]).unwrap_or(BLACK.into()))
            );
        });
    }
}

impl Debug for ElevationGrid {
    fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
        (0..self.height()).for_each(|y|{
            (0..self.width()).for_each(|x|
                write!(f, "{:^2}",
                       self.0.square((x, y).into())
                           .map(|&cell| match cell { 0 => '.', _=> 'x'})
                           .expect("TODO: panic message")
                ).expect("failed in x")
            );
            writeln!(f).expect("failed in y");
        });
        Ok(())
    }
}

Code Walkthrough

Core Data Structures

struct ElevationGrid(Grid<u8>);

The solution uses an ElevationGrid wrapper around a generic Grid<u8> to represent the heightmap. Elevation values are stored as bytes, with special values for the start and end positions.

struct PathSearch {
    queue: VecDeque<Coord>,
    visited: Grid<(bool,Option<Coord>)>,
    path: Vec<Coord>
}

The PathSearch struct manages the breadth-first search algorithm, tracking:

  • A queue of coordinates to explore
  • A grid marking visited positions and their parent positions
  • A vector to store the final path

Input Parsing

fn parse_elevation(data: &str) -> (ElevationGrid, Coord, Coord) {
    let width = data.lines().next().unwrap().len();
    let height = data.lines().count();
    let mut grid = Grid::new(width,height);
    let (mut start, mut finish) = ((0,0).into(),(0,0).into());

    for (y,line) in data.lines().enumerate() {
        for (x, val) in line.bytes().enumerate() {
            match val {
                b'S' => {
                    start = (x, y).into();
                    *grid.square_mut(start).unwrap() = 0;
                },
                b'E' => {
                    finish = (x, y).into();
                    *grid.square_mut(finish).unwrap() = b'z'-b'a'+2;
                }
                _ => *grid.square_mut((x, y).into()).unwrap() = val - b'a' + 1
            }
        }
    }
    (ElevationGrid(grid), start, finish)
}

The parsing function:

  1. Creates a grid of the appropriate size
  2. Processes each character in the input:
    • 'S' (start) is mapped to elevation 0 and its position is stored
    • 'E' (end) is mapped to elevation 27 and its position is stored
    • Letters 'a' to 'z' are mapped to values 1 to 26
  3. Returns the grid and the start and end positions

Breadth-First Search Implementation

    fn shortest_path<F>(&self, start: Coord, goal:F ) -> Vec<Coord> where F: Fn(Coord)->bool {

        let mut ps = PathSearch::init(self);
        // push start in the queue
        ps.queue.push_back(start);

        // pop from top & while still nodes in the queue
        while let Some(cs) = ps.queue.pop_front() {

            // position matches target
            if goal(cs) {
                // extract parent position from target
                let mut cur = cs;
                while let Some(par) = ps.visited.square(cur).unwrap().1 {
                    ps.path.push(par);
                    cur = par;
                }
                // remove start position from path
                ps.path.pop();
                break
            }

            // mark square as visited
            ps.visited.square_mut(cs).unwrap().0 = true;

            let &square = self.0.square(cs).unwrap();

            // evaluate neighbour squares and
            // push to the queue if they have an elevation delta <= 1
            self.0.neighbouring(cs)
                .for_each(|(ns, &elevation)| {
                    if let Some((false, None)) = ps.visited.square(ns) {
                        if elevation <= square + 1 {
                            // capture the square we arrived from
                            ps.visited.square_mut(ns).unwrap().1 = Some(cs);
                            ps.queue.push_back(ns)
                        }
                    }
                })
        }
        ps.path
    }

The BFS algorithm:

  1. Initializes a PathSearch instance with the grid dimensions
  2. Adds the start position to the queue
  3. Processes positions from the queue until finding one that satisfies the goal condition
  4. For each position, visits neighboring positions that satisfy the elevation constraint
  5. When the goal is reached, reconstructs the path by following parent pointers

Elevation Reversal for Part 2

    fn reverse_elevation(&mut self) {
        let &max = self.0.iter().max().unwrap();
        self.0.iter_mut()
            .map(|val|{
                *val = max - *val;
            })
            .all(|_| true);
    }

This method reverses the elevation values, which allows solving Part 2 by starting from the end position and searching for the closest square with elevation 'a'.

Path Visualization

    fn visualise_path(&self, path:Vec<Coord>) {
        let mut gpath= ElevationGrid(Grid::new(self.width(), self.height()) );
        path.iter().for_each(|&a| *gpath.0.square_mut(a).unwrap() = *self.0.square(a).unwrap() );
        println!("Path length: {}\n{:?}",path.len(),gpath);
    }

This method creates a new grid highlighting only the cells in the path, then prints it to the console.

Interactive Visualization

The solution includes a sophisticated interactive visualization using the bracket-lib library. This allows exploring the map and watching the path-finding algorithm in action.

    let mut ctx = BTermBuilder::simple(160,120)?
        .with_simple_console(grid.width(),grid.height(), "terminal8x8.png")
        .with_simple_console_no_bg(grid.width(),grid.height(), "terminal8x8.png")
        .with_simple_console_no_bg(grid.width(),grid.height(), "terminal8x8.png")
        .with_fps_cap(640f32)
        .with_title("Day12: Path Search")
        .build()?;

    let ps = PathSearch::init(&grid);
    let mut app = App::init(GStore { grid, target, start, ps } , Level::MENU);

    app.register_level(Level::MENU, Menu);
    app.register_level(Level::LEVEL1, ExerciseOne);
    app.register_level(Level::LEVEL2, ExerciseTwo);

    ctx.set_active_console(1);
    app.store().grid.draw(&mut ctx);
    main_loop(ctx, app)

This setup creates a visualization window with multiple layers and implements an interactive application with different levels.

Main Solution Flow

    let input = std::fs::read_to_string("src/bin/day12_input.txt").expect("ops!");

    // parse elevations onto a grid
    let (mut grid,start, target) = parse_elevation(input.as_str());

    // find path with closure fn() goal set at reaching the target coordinate
    let path = grid.shortest_path(start,|cs| cs.eq(&target));

    // visualise path produced
    grid.visualise_path(path);

    // reverse the elevation so E(0) and S(27)
    grid.reverse_elevation();

    // find path with closure fn() goal set as reaching elevation(26) = a
    let path = grid.shortest_path(target, |cs| 26.eq(grid.0.square(cs).unwrap()));

    // visualise path produced
    grid.visualise_path(path);

The main solution:

  1. Parses the input into a grid and identifies start and end positions
  2. For Part 1:
    • Finds the shortest path from start to end
    • Visualizes the path
  3. For Part 2:
    • Reverses elevation values
    • Finds the shortest path from end to any position with elevation 'a'
    • Visualizes the path

Implementation Notes

  • Goal Function: The solution uses a closure as a goal function, making it flexible for both parts
  • Path Reconstruction: The algorithm reconstructs the path by storing parent pointers in the visited grid
  • Interactive Visualization: The solution includes a sophisticated visualization using bracket-lib
  • Functional Programming Style: The code makes extensive use of iterators and functional programming patterns

Day 13: Distress Signal

Day 13 involves parsing and comparing nested lists according to specific rules.

Problem Overview

You're trying to decode a distress signal consisting of pairs of packets, where each packet is a nested list structure. Your task is to:

  1. Determine which pairs of packets are in the right order according to specific comparison rules
  2. Sort all packets, including two divider packets, and find the decoder key

This problem tests your ability to parse and compare hierarchical data structures with complex comparison rules.

Day 13: Problem Description

Distress Signal

You climb the hill and again try contacting the Elves. However, you instead receive a signal you weren't expecting: a distress signal.

Your handheld device must still not be working properly; the packets from the distress signal got decoded out of order. You'll need to re-order the list of received packets (your puzzle input) to decode the message.

Your list consists of pairs of packets; pairs are separated by a blank line. You need to identify how many pairs of packets are in the right order.

For example:

[1,1,3,1,1]
[1,1,5,1,1]

[[1],[2,3,4]]
[[1],4]

[9]
[[8,7,6]]

[[4,4],4,4]
[[4,4],4,4,4]

[7,7,7,7]
[7,7,7]

[]
[3]

[[[]]]
[[]]

[1,[2,[3,[4,[5,6,7]]]],8,9]
[1,[2,[3,[4,[5,6,0]]]],8,9]

Packet data consists of lists and integers. Each list starts with [, ends with ], and contains zero or more comma-separated values (either integers or other lists). Each packet is always a list and appears on its own line.

When comparing two values, the first value is called left and the second value is called right. Then:

  • If both values are integers, the lower integer should come first. If the left integer is lower than the right integer, the inputs are in the right order. If the left integer is higher than the right integer, the inputs are not in the right order. Otherwise, the inputs are the same integer; continue checking the next part of the input.
  • If both values are lists, compare the first value of each list, then the second value, and so on. If the left list runs out of items first, the inputs are in the right order. If the right list runs out of items first, the inputs are not in the right order. If the lists are the same length and no comparison makes a decision about the order, continue checking the next part of the input.
  • If exactly one value is an integer, convert the integer to a list which contains that integer as its only value, then retry the comparison. For example, if comparing [0,0,0] and 2, convert the right value to [2] (a list containing 2); the result is then found by instead comparing [0,0,0] and [2].

Using these rules, you can determine which of the pairs in the example are in the right order:

== Pair 1 ==
- Compare [1,1,3,1,1] vs [1,1,5,1,1]
  - Compare 1 vs 1
  - Compare 1 vs 1
  - Compare 3 vs 5
    - Left side is smaller, so inputs are in the right order

== Pair 2 ==
- Compare [[1],[2,3,4]] vs [[1],4]
  - Compare [1] vs [1]
    - Compare 1 vs 1
  - Compare [2,3,4] vs 4
    - Mixed types; convert right to [4] and retry comparison
    - Compare [2,3,4] vs [4]
      - Compare 2 vs 4
        - Left side is smaller, so inputs are in the right order

== Pair 3 ==
- Compare [9] vs [[8,7,6]]
  - Compare 9 vs [8,7,6]
    - Mixed types; convert left to [9] and retry comparison
    - Compare [9] vs [8,7,6]
      - Compare 9 vs 8
        - Right side is smaller, so inputs are not in the right order

== Pair 4 ==
- Compare [[4,4],4,4] vs [[4,4],4,4,4]
  - Compare [4,4] vs [4,4]
    - Compare 4 vs 4
    - Compare 4 vs 4
  - Compare 4 vs 4
  - Compare 4 vs 4
  - Left side ran out of items, so inputs are in the right order

== Pair 5 ==
- Compare [7,7,7,7] vs [7,7,7]
  - Compare 7 vs 7
  - Compare 7 vs 7
  - Compare 7 vs 7
  - Right side ran out of items, so inputs are not in the right order

== Pair 6 ==
- Compare [] vs [3]
  - Left side ran out of items, so inputs are in the right order

== Pair 7 ==
- Compare [[[]]] vs [[]]
  - Compare [[]] vs []
    - Right side ran out of items, so inputs are not in the right order

== Pair 8 ==
- Compare [1,[2,[3,[4,[5,6,7]]]],8,9] vs [1,[2,[3,[4,[5,6,0]]]],8,9]
  - Compare 1 vs 1
  - Compare [2,[3,[4,[5,6,7]]]] vs [2,[3,[4,[5,6,0]]]]
    - Compare 2 vs 2
    - Compare [3,[4,[5,6,7]]] vs [3,[4,[5,6,0]]]
      - Compare 3 vs 3
      - Compare [4,[5,6,7]] vs [4,[5,6,0]]
        - Compare 4 vs 4
        - Compare [5,6,7] vs [5,6,0]
          - Compare 5 vs 5
          - Compare 6 vs 6
          - Compare 7 vs 0
            - Right side is smaller, so inputs are not in the right order

In this example, the right order pairs are 1, 2, 4, and 6; the sum of their indices is 13.

Part 1

Determine which pairs of packets are in the right order. What is the sum of the indices of those pairs?

Part 2

Now, you just need to put all of the packets in the right order. Disregard the blank lines in your list of received packets.

The distress signal protocol also requires that you include two divider packets:

[[2]]
[[6]]

Using the same rules as before, organize all packets - the ones in your list of received packets as well as the two divider packets - into the correct order.

For the example above, the result of putting the packets in the correct order is:

[]
[[]]
[[[]]]
[1,1,3,1,1]
[1,1,5,1,1]
[[1],[2,3,4]]
[1,[2,[3,[4,[5,6,0]]]],8,9]
[1,[2,[3,[4,[5,6,7]]]],8,9]
[[1],4]
[[2]]
[3]
[[4,4],4,4]
[[4,4],4,4,4]
[[6]]
[7,7,7]
[7,7,7,7]
[[8,7,6]]
[9]

Afterward, locate the divider packets. To find the decoder key for this distress signal, you need to determine the indices of the two divider packets and multiply them together. (The first packet is at index 1, the second packet is at index 2, and so on.) In this example, the divider packets are 10th and 14th, and so the decoder key is 140.

Organize all of the packets into the correct order. What is the decoder key for the distress signal?

Day 13: Solution Explanation

Approach

Day 13 involves parsing and comparing nested lists according to specific rules. The solution breaks down into three main components:

  1. Parsing the nested list structure: We need to parse strings like [1,[2,3],4] into a structured representation
  2. Implementing the comparison logic: We need to define how to compare two list structures following the given rules
  3. Processing the input data: We need to handle the pairs of packets for Part 1 and sort all packets for Part 2

The solution uses a recursive approach for parsing and a structured type system with trait implementations for comparison.

Implementation Details

Data Structure

First, we define a data structure to represent the packet data, which can be either a number or a list of items:

#![allow(unused)]
fn main() {
enum ListItem {
    N(u8),       // A number
    L(Vec<ListItem>)  // A list
}
}

This recursive enum allows representing any nested list structure. We use N for numbers and L for lists.
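
For instance, the packet [1,[2,3],4] would be represented as follows (a sketch assuming the N and L variants are imported as in the full listing):

#![allow(unused)]
fn main() {
// [1,[2,3],4] as a ListItem value
let packet = L(vec![
    N(1),
    L(vec![N(2), N(3)]),
    N(4),
]);
}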

Parsing

The solution uses a custom parser implemented with the FromStr trait to convert string input into ListItem structures:

#![allow(unused)]
fn main() {
impl FromStr for ListItem {
    type Err = ();

    fn from_str(inp: &str) -> Result<Self, Self::Err> {
        struct Scanner<I: Iterator<Item=char>> {
            i: Peekable<I>,
        }
        impl<I: Iterator<Item=char>> Scanner<I> {
            fn new(s: I) -> Self {
                Scanner { i: s.peekable() }
            }
            fn parse_list(&mut self) -> ListItem {
                let mut s = String::new();
                let mut v = L(vec![]);
                loop {
                    match &self.i.peek() {
                        Some('[') => {
                            self.i.next();
                            v.insert(self.parse_list());
                        },
                        Some(&c@ '0'..='9') => s.push(c),
                        &c@
                        (Some(',') | Some(']')) if !s.is_empty() => {
                            v.insert(N(u8::from_str(s.as_str()).expect(""))); 
                            s.clear();
                            if ']'.eq(c.unwrap()) {
                                break v
                            }
                        },
                        Some(',') => {},
                        Some(']') => break v,
                        None => break v,
                        _ => unreachable!()
                    }
                    self.i.next();
                }
            }
        }
        let mut i = inp.chars().peekable();
        i.next();
        Ok(Scanner::new(i).parse_list())
    }
}
}

This parsing logic works by:

  1. Creating a Scanner that processes characters from a peekable iterator
  2. Implementing a recursive parse_list method that handles nested lists
  3. Processing each character based on whether it's an opening bracket, digit, comma, or closing bracket
  4. Building up the nested ListItem structure as it parses

The parser handles the specific format of the packets as described in the problem.

Comparison Logic

The core of the solution is implementing the comparison logic between ListItem values. This is done by implementing the Ord trait:

#![allow(unused)]
fn main() {
impl Ord for ListItem {
    fn cmp(&self, other: &Self) -> Ordering {
        match (self,other) {
            (L(l), L(r)) => {
                let mut liter = l.iter();
                let mut riter = r.iter();

                loop {
                    match (liter.next(),riter.next()) {
                        (Some(l), Some(r)) =>
                            match l.cmp(r) {
                                Ordering::Equal => {},
                                ord@
                                (Ordering::Less | Ordering::Greater) => break ord,
                            },
                        (Some(_), None) => break Ordering::Greater,
                        (None, Some(_)) => break Ordering::Less,
                        (None,None) => break Ordering::Equal,
                    };
                }
            }
            (L(_), N(r)) => {
                let right = L(vec![N(*r)]);
                self.cmp(&right)
            }
            (N(l), L(_)) => {
                let left = L(vec![N(*l)]);
                left.cmp(other)
            }
            (N(l), N(r)) => l.cmp(r),
        }
    }
}
}

This implementation follows the rules specified in the problem:

  1. For two lists: Compare items one by one until a difference is found or one list runs out of items
  2. For two integers: Compare them directly
  3. For a list and an integer: Convert the integer to a single-item list and compare

The PartialOrd trait is also implemented to support comparison operators:

#![allow(unused)]
fn main() {
impl PartialOrd for ListItem {
    fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
        Some(self.cmp(other))
    }
}
}

And for completeness, the PartialEq and Eq traits are implemented using the comparison logic:

#![allow(unused)]
fn main() {
impl PartialEq<Self> for ListItem {
    fn eq(&self, other: &Self) -> bool {
        self.partial_cmp(other) == Some(Ordering::Equal)
    }
}

impl Eq for ListItem {}
}

Processing Pairs (Part 1)

For Part 1, we need to find the pairs of packets that are in the right order (left < right) and sum their indices:

#![allow(unused)]
fn main() {
fn packets_in_right_order(input: &str) -> usize {
    input.split("\n\n")
        .map(|x| x.lines().collect::<Vec<_>>() )
        .map(|d|
            (ListItem::from_str(d[0]), ListItem::from_str(d[1]))
        )
        .enumerate()
        .filter_map(|(i,(l,r))|
            if l.lt(&r) { Some(i+1) } else { None }
        )
        .sum()
}
}

This function:

  1. Splits the input by double newlines to get pairs of packets
  2. Parses each packet into a ListItem
  3. Compares each pair using the lt method (less than)
  4. Keeps track of indices (1-based) for pairs in the right order
  5. Sums the indices
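
As a sanity check, the following minimal sketch feeds the example pairs from the problem statement into this function; pairs 1, 2, 4 and 6 are in the right order, so the expected result is 1 + 2 + 4 + 6 = 13:

#![allow(unused)]
fn main() {
let example = "[1,1,3,1,1]\n[1,1,5,1,1]\n\n[[1],[2,3,4]]\n[[1],4]\n\n[9]\n[[8,7,6]]\n\n[[4,4],4,4]\n[[4,4],4,4,4]\n\n\
[7,7,7,7]\n[7,7,7]\n\n[]\n[3]\n\n[[[]]]\n[[]]\n\n[1,[2,[3,[4,[5,6,7]]]],8,9]\n[1,[2,[3,[4,[5,6,0]]]],8,9]";
assert_eq!(packets_in_right_order(example), 13);
}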

Sorting Packets (Part 2)

For Part 2, we need to sort all packets, including two divider packets, and find the product of the indices of the divider packets:

#![allow(unused)]
fn main() {
fn get_decoder_key(input: &str) -> usize {
    let dividers = vec![
        L(vec![L(vec![N(2)])]),
        L(vec![L(vec![N(6)])])
    ];

    let mut order = input.split("\n\n")
        .flat_map(|x| x.lines() )
        .filter_map(|d|
            ListItem::from_str(d).ok()
        )
        .chain(vec![ L(vec![L(vec![N(2)])]), L(vec![L(vec![N(6)])]) ] )
        .fold(vec![], |mut out, item|{
            out.push(item);
            out
        });

    order.sort();
    order.iter().for_each(|d| println!("{:?}",d));

    dividers.iter()
        .map(|d| order.binary_search(d).unwrap() + 1 )
        .product()
}
}

This function:

  1. Creates the two divider packets ([[2]] and [[6]])
  2. Parses all packets from the input, ignoring blank lines
  3. Adds the divider packets to the list
  4. Sorts all packets using the implemented comparison logic
  5. Finds the indices of the divider packets (1-based)
  6. Multiplies the indices to get the decoder key

Algorithmic Analysis

Time Complexity

  • Parsing: O(n) for each packet, where n is the length of the packet string
  • Comparison: O(n) for two packets of total size n
  • Part 1: O(p × n) where p is the number of pairs and n is the average packet size
  • Part 2: O(p × n × log(p)) due to the sorting operation

Space Complexity

  • O(n) to store the parsed packet structures
  • O(p) for the list of all packets in Part 2

Alternative Approaches

Using JSON Parsing

Since the packet format is essentially JSON, we could use a JSON parsing library:

#![allow(unused)]
fn main() {
use serde_json::Value;

fn compare_values(left: &Value, right: &Value) -> Ordering {
    match (left, right) {
        (Value::Array(l), Value::Array(r)) => {
            // Compare arrays element by element; if all shared elements
            // are equal, the shorter array is the smaller one
            l.iter()
                .zip(r.iter())
                .map(|(a, b)| compare_values(a, b))
                .find(|ord| *ord != Ordering::Equal)
                .unwrap_or_else(|| l.len().cmp(&r.len()))
        },
        (Value::Number(l), Value::Number(r)) => {
            // Packet values are small non-negative integers
            l.as_u64().cmp(&r.as_u64())
        },
        (Value::Array(_), Value::Number(_)) => {
            // Promote the number to a single-element array and retry
            compare_values(left, &Value::Array(vec![right.clone()]))
        },
        (Value::Number(_), Value::Array(_)) => {
            // Promote the number to a single-element array and retry
            compare_values(&Value::Array(vec![left.clone()]), right)
        },
        _ => unreachable!()
    }
}
}

This approach would rely on an external library but could be more robust for complex inputs.
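As a hypothetical usage sketch (serde_json is not actually a dependency of this project), each packet line is itself valid JSON, so it could be parsed straight into a Value and handed to compare_values:

use serde_json::Value;
use std::cmp::Ordering;

// The innermost 7 vs 0 decides the comparison, so the left packet is greater
let left: Value  = serde_json::from_str("[1,[2,[3,[4,[5,6,7]]]],8,9]").unwrap();
let right: Value = serde_json::from_str("[1,[2,[3,[4,[5,6,0]]]],8,9]").unwrap();
assert_eq!(compare_values(&left, &right), Ordering::Greater);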

Recursive Descent Parser

Another approach would be to use a more structured recursive descent parser:

#![allow(unused)]
fn main() {
fn parse_packet(s: &str) -> ListItem {
    let mut chars = s.chars().peekable();
    parse_list(&mut chars)
}

fn parse_list(chars: &mut Peekable<Chars>) -> ListItem {
    // Expect opening bracket
    assert_eq!(chars.next().unwrap(), '[');
    
    let mut list = vec![];
    
    // Parse items until closing bracket
    while chars.peek() != Some(&']') {
        if chars.peek() == Some(&'[') {
            list.push(parse_list(chars));
        } else {
            list.push(parse_number(chars));
        }
        
        // Skip comma if present
        if chars.peek() == Some(&',') {
            chars.next();
        }
    }
    
    // Skip closing bracket
    chars.next();
    
    L(list)
}

fn parse_number(chars: &mut Peekable<Chars>) -> ListItem {
    // Accumulate consecutive digits and convert them into a number
    let mut s = String::new();
    while let Some(c) = chars.peek() {
        if !c.is_ascii_digit() { break }
        s.push(*c);
        chars.next();
    }
    N(s.parse().expect("expected a digit"))
}
}

This would be more structured but essentially accomplish the same thing as the current scanner approach.

Conclusion

This solution demonstrates how to parse and compare nested data structures according to complex rules. The use of enums and trait implementations creates a clean, type-safe solution that directly models the problem domain. The comparison logic is implemented recursively to handle the nested nature of the data, and the solution efficiently processes both parts of the problem.

Day 13: Code

Below is the complete code for Day 13's solution, which parses and compares nested lists according to specific rules.

Full Solution

use std::cmp::Ordering;
use std::fmt::{Debug, Formatter};
use std::iter::Peekable;
use std::str::FromStr;
use crate::ListItem::{L, N};

fn packets_in_right_order(input: &str) -> usize {
    input.split("\n\n")
        .map(|x| x.lines().collect::<Vec<_>>() )
        .map(|d|
            (ListItem::from_str(d[0]), ListItem::from_str(d[1]))
        )
        .enumerate()
        .filter_map(|(i,(l,r))|
            if l.lt(&r) { Some(i+1) } else { None }
        )
        .sum()
}

fn get_decoder_key(input: &str) -> usize {

    let dividers = vec![
        L(vec![L(vec![N(2)])]),
        L(vec![L(vec![N(6)])])
    ];

    let mut order = input.split("\n\n")
        .flat_map(|x| x.lines() )
        .filter_map(|d|
            ListItem::from_str(d).ok()
        )
        .chain(vec![ L(vec![L(vec![N(2)])]), L(vec![L(vec![N(6)])]) ] )
        .fold(vec![], |mut out, item|{
            out.push(item);
            out
        });

    order.sort();
    order.iter().for_each(|d| println!("{:?}",d));

    dividers.iter()
        .map(|d| order.binary_search(d).unwrap() + 1 )
        .product()
}

fn main() {
    // let mut input = "[1,1,3,1,1]\n[1,1,5,1,1]\n\n[[1],[2,3,4]]\n[[1],4]\n\n[9]\n[[8,7,6]]\n\n[[4,4],4,4]\n[[4,4],4,4,4]\n\n\
    // [7,7,7,7]\n[7,7,7]\n\n[]\n[3]\n\n[[[]]]\n[[]]\n\n[1,[2,[3,[4,[5,6,7]]]],8,9]\n[1,[2,[3,[4,[5,6,0]]]],8,9]".to_string();

    let input = std::fs::read_to_string("src/bin/day13_input.txt").expect("Ops!");

    let res = packets_in_right_order(input.as_str());
    println!("Correctly ordered packets = {:?}",res);
    let res = get_decoder_key(input.as_str());
    println!("Decoder Key = {:?}",res);

}

enum ListItem {
    N(u8),
    L(Vec<ListItem>)
}
impl ListItem {
    fn insert(&mut self, item:ListItem) {
        match (self,item) {
            (L(list), item) => list.push(item),
            (N(old), N(new)) => *old = new,
            (_,_) => unreachable!()
        }
    }
}

impl FromStr for ListItem {
    type Err = ();

    fn from_str(inp: &str) -> Result<Self, Self::Err> {

        struct Scanner<I: Iterator<Item=char>> {
            i: Peekable<I>,
        }
        impl<I: Iterator<Item=char>> Scanner<I> {
            fn new(s: I) -> Self {
                Scanner { i: s.peekable() }
            }
            fn parse_list(&mut self) -> ListItem {
                let mut s = String::new();
                let mut v = L(vec![]);
                loop {
                    match &self.i.peek() {
                        Some('[') => {
                            self.i.next();
                            v.insert(self.parse_list());
                        },
                        Some(&c@ '0'..='9') => s.push(c),
                        &c@
                        (Some(',') | Some(']')) if !s.is_empty() => {
                            v.insert(N(u8::from_str(s.as_str()).expect("")));
                            s.clear();
                            if ']'.eq(c.unwrap()) {
                                break v
                            }
                        },
                        Some(',') => {}
                        Some(']') => break v,
                        None => break v,
                        _ => unreachable!()
                    }
                    self.i.next();
                }
            }
        }
        let mut i = inp.chars().peekable();
        i.next();
        Ok(Scanner::new(i).parse_list())
    }
}

impl PartialEq<Self> for ListItem {
    fn eq(&self, other: &Self) -> bool {
        self.partial_cmp(other) == Some(Ordering::Equal)
    }
}

impl Eq for ListItem {}

impl Ord for ListItem {
    fn cmp(&self, other: &Self) -> Ordering {
        match (self,other) {
            (L(l), L(r)) => {
                let mut liter = l.iter();
                let mut riter = r.iter();

                loop {
                    match (liter.next(),riter.next()) {
                        (Some(l), Some(r)) =>
                            match l.cmp(r) {
                                Ordering::Equal => {}
                                ord@
                                (Ordering::Less | Ordering::Greater) => break ord,
                            },
                        (Some(_), None) => break Ordering::Greater,
                        (None, Some(_)) => break Ordering::Less,
                        (None,None) => break Ordering::Equal,
                    };
                }
            }
            (L(_), N(r)) => {
                let right = L(vec![N(*r)]);
                self.cmp(&right)
            }
            (N(l), L(_)) => {
                let left = L(vec![N(*l)]);
                left.cmp(other)
            }
            (N(l), N(r)) => l.cmp(r),
        }

    }
}

impl PartialOrd for ListItem {
    fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
        Some(self.cmp(other))
    }
}

impl Debug for ListItem {
    fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
        match self {
            N(n) => write!(f,"{n}")?,
            L(v) => f.debug_list().entries(v.iter()).finish()?
        };
        Ok(())
    }
}

Code Walkthrough

Data Structure for Packets

enum ListItem {
    N(u8),
    L(Vec<ListItem>)
}

The solution uses an enum ListItem to represent the nested list structure of packets:

  • N(u8) represents a number (limited to u8 for this problem)
  • L(Vec<ListItem>) represents a list containing other items (which can be numbers or lists)

This recursive structure can represent any valid packet in the problem.
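For illustration, the packet [[1],4] from the example input maps onto this structure as follows:

// [[1],4] as a ListItem: a list containing the list [1] followed by the number 4
let packet = L(vec![ L(vec![N(1)]), N(4) ]);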

Parsing Packets

impl FromStr for ListItem {
    type Err = ();

    fn from_str(inp: &str) -> Result<Self, Self::Err> {

        struct Scanner<I: Iterator<Item=char>> {
            i: Peekable<I>,
        }
        impl<I: Iterator<Item=char>> Scanner<I> {
            fn new(s: I) -> Self {
                Scanner { i: s.peekable() }
            }
            fn parse_list(&mut self) -> ListItem {
                let mut s = String::new();
                let mut v = L(vec![]);
                loop {
                    match &self.i.peek() {
                        Some('[') => {
                            self.i.next();
                            v.insert(self.parse_list());
                        },
                        Some(&c@ '0'..='9') => s.push(c),
                        &c@
                        (Some(',') | Some(']')) if !s.is_empty() => {
                            v.insert(N(u8::from_str(s.as_str()).expect("")));
                            s.clear();
                            if ']'.eq(c.unwrap()) {
                                break v
                            }
                        },
                        Some(',') => {}
                        Some(']') => break v,
                        None => break v,
                        _ => unreachable!()
                    }
                    self.i.next();
                }
            }
        }
        let mut i = inp.chars().peekable();
        i.next();
        Ok(Scanner::new(i).parse_list())
    }
}

The FromStr implementation uses a custom scanner to parse the input string into a ListItem:

  1. It creates a Scanner with a peekable iterator over the input characters
  2. The parse_list method recursively builds the list structure by:
    • Creating a new list when encountering [
    • Accumulating digits for numbers
    • Inserting numbers when reaching a comma or closing bracket
    • Breaking when reaching the end of the list
  3. The method returns the parsed ListItem

Item Insertion Helper

impl ListItem {
    fn insert(&mut self, item:ListItem) {
        match (self,item) {
            (L(list), item) => list.push(item),
            (N(old), N(new)) => *old = new,
            (_,_) => unreachable!()
        }
    }
}

This helper method adds an item to a list or updates a number.

Comparison Logic

impl Ord for ListItem {
    fn cmp(&self, other: &Self) -> Ordering {
        match (self,other) {
            (L(l), L(r)) => {
                let mut liter = l.iter();
                let mut riter = r.iter();

                loop {
                    match (liter.next(),riter.next()) {
                        (Some(l), Some(r)) =>
                            match l.cmp(r) {
                                Ordering::Equal => {}
                                ord@
                                (Ordering::Less | Ordering::Greater) => break ord,
                            },
                        (Some(_), None) => break Ordering::Greater,
                        (None, Some(_)) => break Ordering::Less,
                        (None,None) => break Ordering::Equal,
                    };
                }
            }
            (L(_), N(r)) => {
                let right = L(vec![N(*r)]);
                self.cmp(&right)
            }
            (N(l), L(_)) => {
                let left = L(vec![N(*l)]);
                left.cmp(other)
            }
            (N(l), N(r)) => l.cmp(r),
        }

    }
}

The Ord implementation defines how to compare two ListItem values:

  1. List vs. List: Compare elements one by one until finding a difference or reaching the end of a list
  2. List vs. Number: Convert the number to a single-item list and retry comparison
  3. Number vs. List: Convert the number to a single-item list and retry comparison
  4. Number vs. Number: Use the built-in number comparison

This implements the comparison rules specified in the problem.
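For illustration, comparing [9] with [[8,7,6]] exercises the mixed rule: 9 is promoted to [9], and 9 > 8 settles the comparison immediately:

// [9] vs [[8,7,6]] resolves to Greater after promoting 9 to [9]
let left  = L(vec![N(9)]);
let right = L(vec![L(vec![N(8), N(7), N(6)])]);
assert_eq!(left.cmp(&right), Ordering::Greater);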

Additional Trait Implementations

impl PartialEq<Self> for ListItem {
    fn eq(&self, other: &Self) -> bool {
        self.partial_cmp(other) == Some(Ordering::Equal)
    }
}

impl Eq for ListItem {}
impl PartialOrd for ListItem {
    fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
        Some(self.cmp(other))
    }
}

These implementations ensure that ListItem supports all the comparison operators and can be used in sorting operations.

Debug Display

impl Debug for ListItem {
    fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
        match self {
            N(n) => write!(f,"{n}")?,
            L(v) => f.debug_list().entries(v.iter()).finish()?
        };
        Ok(())
    }
}

This implementation formats ListItem values for debugging, using Rust's debug_list for nice formatting of lists.
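For example, printing a nested value reproduces the puzzle's own notation, apart from the space debug_list emits after each comma:

// Prints: [1, [2, 3, 4]]
println!("{:?}", L(vec![N(1), L(vec![N(2), N(3), N(4)])]));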

Part 1: Finding Correctly Ordered Pairs

fn packets_in_right_order(input: &str) -> usize {
    input.split("\n\n")
        .map(|x| x.lines().collect::<Vec<_>>() )
        .map(|d|
            (ListItem::from_str(d[0]), ListItem::from_str(d[1]))
        )
        .enumerate()
        .filter_map(|(i,(l,r))|
            if l.lt(&r) { Some(i+1) } else { None }
        )
        .sum()
}

This function processes the input for Part 1:

  1. Splits the input by double newlines to get pairs of packets
  2. Parses each packet into a ListItem
  3. Uses the lt comparison to check if pairs are in the right order
  4. Keeps 1-based indices of correctly ordered pairs
  5. Sums these indices

Part 2: Sorting and Finding Divider Packets

fn get_decoder_key(input: &str) -> usize {

    let dividers = vec![
        L(vec![L(vec![N(2)])]),
        L(vec![L(vec![N(6)])])
    ];

    let mut order = input.split("\n\n")
        .flat_map(|x| x.lines() )
        .filter_map(|d|
            ListItem::from_str(d).ok()
        )
        .chain(vec![ L(vec![L(vec![N(2)])]), L(vec![L(vec![N(6)])]) ] )
        .fold(vec![], |mut out, item|{
            out.push(item);
            out
        });

    order.sort();
    order.iter().for_each(|d| println!("{:?}",d));

    dividers.iter()
        .map(|d| order.binary_search(d).unwrap() + 1 )
        .product()
}

This function processes the input for Part 2:

  1. Defines the two divider packets ([[2]] and [[6]])
  2. Parses all packets from the input and adds the divider packets
  3. Sorts all packets using the comparison logic
  4. Finds the 1-based indices of the divider packets
  5. Multiplies these indices to get the decoder key

Main Function

fn main() {
    // let mut input = "[1,1,3,1,1]\n[1,1,5,1,1]\n\n[[1],[2,3,4]]\n[[1],4]\n\n[9]\n[[8,7,6]]\n\n[[4,4],4,4]\n[[4,4],4,4,4]\n\n\
    // [7,7,7,7]\n[7,7,7]\n\n[]\n[3]\n\n[[[]]]\n[[]]\n\n[1,[2,[3,[4,[5,6,7]]]],8,9]\n[1,[2,[3,[4,[5,6,0]]]],8,9]".to_string();

    let input = std::fs::read_to_string("src/bin/day13_input.txt").expect("Ops!");

    let res = packets_in_right_order(input.as_str());
    println!("Correctly ordered packets = {:?}",res);
    let res = get_decoder_key(input.as_str());
    println!("Decoder Key = {:?}",res);

}

The main function reads the input file and runs both parts of the problem.

Implementation Notes

  • Recursive Data Structure: The solution uses a recursive enum to represent the nested packet structure
  • Custom Parser: The parser handles the specific format of the input without relying on external libraries
  • Trait Implementations: The comparison logic is cleanly implemented using Rust's trait system
  • Functional Style: The solution uses a functional programming style with iterators and method chaining

Day 14: Regolith Reservoir

Day 14 involves simulating falling sand in a cave system with rock formations.

Problem Overview

You're mapping out a cave with rock structures and need to simulate how sand will fall and accumulate. The key elements are:

  1. Rock formations are represented as lines in the input
  2. Sand falls from a specific source point (500, 0)
  3. Each sand unit follows specific movement rules until it comes to rest or falls into the abyss
  4. For Part 1, you need to count how many sand units come to rest before sand starts falling into the abyss
  5. For Part 2, a floor is added, and you need to count how many sand units it takes to block the source

This problem tests your ability to simulate physical processes and handle grid-based representations of a 2D space.

Day 14: Problem Description

Regolith Reservoir

The distress signal leads you to a giant waterfall! Actually, hang on - the signal seems like it's coming from the waterfall itself, and that doesn't make any sense. However, you do notice a little path that leads behind the waterfall.

Mineral formations of various kinds are dripping into small pools. The distress signal must be coming from somewhere in the cave behind the waterfall.

Your handheld device downloads a scan of the cave; this scan shows the shape of the cave walls. Your device reports that there's a kind of sand that slowly drops out of thin air and settles in the cave. When you see it in person, you confirm this - the source of the sand seems to be a point above the cave.

Your scan traces the path of each solid rock structure as a series of x,y coordinates, one path per line; consecutive points in a path are joined by straight horizontal or vertical lines of rock. The example scan used throughout this description is:

498,4 -> 498,6 -> 496,6
503,4 -> 502,4 -> 502,9 -> 494,9

Sand is pouring into the cave from point 500,0.

Drawing rock as #, air as ., and the source of the sand as +, this example looks like this:

  4     5  5
  9     0  0
  4     0  3
0 ......+...
1 ..........
2 ..........
3 ..........
4 ....#...##
5 ....#...#.
6 ..###...#.
7 ........#.
8 ........#.
9 #########.

Sand is produced one unit at a time, and the next unit of sand is not produced until the previous unit of sand comes to rest. A unit of sand is large enough to fill one tile of air in your scan.

A unit of sand always falls down one step if possible. If the tile immediately below is blocked (by rock or sand), the unit of sand attempts to instead move diagonally one step down and to the left. If that tile is blocked, the unit of sand attempts to instead move diagonally one step down and to the right. Sand keeps moving as long as it is able to do so, at each step trying to move down, then down-left, then down-right. If all three possible destinations are blocked, the unit of sand comes to rest and no longer moves, at which point the next unit of sand is created back at the source.

Using your scan, simulate the falling sand. How many units of sand come to rest before sand starts flowing into the abyss below?

Example

In this example, the first unit of sand falls downward until it lands on the rock path at the bottom:

......+...
..........
..........
..........
....#...##
....#...#.
..###...#.
........#.
......o.#.
#########.

The second unit of sand falls straight down, lands on top of the first, and then comes to rest one step down and to its left:

......+...
..........
..........
..........
....#...##
....#...#.
..###...#.
........#.
.....oo.#.
#########.

After a total of 5 units of sand come to rest, they form this pattern:

......+...
..........
..........
..........
....#...##
....#...#.
..###...#.
......o.#.
....oooo#.
#########.

After a total of 22 units of sand fall:

......+...
..........
......o...
.....ooo..
....#ooo##
....#ooo#.
..###ooo#.
....oooo#.
...ooooo#.
#########.

After a total of 24 units of sand fall:

......+...
..........
......o...
.....ooo..
....#ooo##
...o#ooo#.
..###ooo#.
....oooo#.
.o.ooooo#.
#########.

Finally, using your scan, once a total of 24 units of sand come to rest, all further sand flows out the bottom, falling into the endless void. Just for fun, the path any new sand takes before falling forever is shown here with ~:

.......+...
.......~...
......~o...
.....~ooo..
....~#ooo##
...~o#ooo#.
..~###ooo#.
..~..oooo#.
.~o.ooooo#.
~#########.
~..........
~..........
~..........

For Part 1, once all 24 units of sand shown above come to rest, all further sand flows out the bottom, falling into the endless void.

Part 2

You realize you misunderstood the scan. There isn't an endless void at the bottom of the scan - there's floor, and you're standing on it!

You don't have time to scan the floor, so just assume the floor is an infinite horizontal line with a y coordinate equal to two plus the highest y coordinate of any point in your scan.

In the example above, the highest y coordinate of any point is 9, and so the floor is at y=11. (This is as if your scan contained one extra horizontal rock path at y=11 extending infinitely far in both directions.)

With the added floor, sand that would previously have fallen into the endless void now piles up on the floor and spreads out to the left and right. Because of this, far more sand is able to come to rest before the source becomes blocked.

Using your scan and assuming the floor is an infinite horizontal line with a y coordinate equal to two plus the highest y coordinate of any point in your scan, how many units of sand come to rest?

In the example from Part 1, a total of 93 units of sand fall and come to rest; the final unit settles at the source itself at 500,0, blocking it so that no further sand can be produced:

............o............
...........ooo...........
..........ooooo..........
.........ooooooo.........
........oo#ooo##o........
.......ooo#ooo#ooo.......
......oo###ooo#oooo......
.....oooo.oooo#ooooo.....
....oooooooooo#oooooo....
...ooo#########ooooooo...
..ooooo.......ooooooooo..
#########################

For Part 2, using your scan, how many units of sand come to rest before the source of the sand becomes blocked?

Day 14: Solution Explanation

Approach

Day 14 involves simulating falling sand in a cave system with rock formations. The solution needs to handle several key aspects:

  1. Parsing the rock formations: Converting input lines into coordinates for rock paths
  2. Representing the cave: Creating a data structure to track materials (rock, sand, air) at each position
  3. Simulating sand movement: Implementing the rules for sand falling and coming to rest
  4. Handling two scenarios: Tracking sand units for both scenarios (with and without a floor)

The solution uses a grid-based approach with custom data types for the board, materials, and sand grains.

Implementation Details

Data Structures

The solution uses several key data structures:

Board

The Board<T> struct represents the cave system:

#![allow(unused)]
fn main() {
struct Board<T> {
    width: usize,
    height: usize,
    centre_x: usize,
    offset_x: usize,
    grid: HashMap<Coord, T>,
}
}

This structure uses a hashmap to store the material at each position, which is more memory-efficient than a full 2D array when most of the cave is air.
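The accessors used later in the walkthrough (square, square_mut, in_bounds) are not shown in this explanation; a minimal sketch of how the hashmap-backed lookups might work, assuming missing entries are treated as air, is:

// Sketch only - not the author's code; in_bounds is assumed to exist.
impl Board<Material> {
    fn square(&self, pos: Coord) -> Option<&Material> {
        if !self.in_bounds(pos) { return None }
        // A position with no entry is implicitly Air
        Some(self.grid.get(&pos).unwrap_or(&Material::Air))
    }
    fn square_mut(&mut self, pos: Coord) -> Option<&mut Material> {
        if !self.in_bounds(pos) { return None }
        Some(self.grid.entry(pos).or_insert(Material::Air))
    }
}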

Material

An enum represents the different materials in the cave:

#![allow(unused)]
fn main() {
enum Material { Rock, Sand, Air }
}

Grain

The Grain struct represents a single unit of sand:

#![allow(unused)]
fn main() {
struct Grain {
    pos: Coord,
    settled: bool
}
}

Parsing Rock Formations

The input is parsed into a series of rock paths:

#![allow(unused)]
fn main() {
fn parse_plines(input:&str) -> (Coord, Coord, Vec<Vec<Coord>>) {
    let mut br = Coord{ x: usize::MIN, y: usize::MIN };
    let mut tl = Coord{ x: usize::MAX, y: 0 };
    let plines =
        input.lines()
            .map(|line|{
                line.split(" -> ")
                    .map(|val| Coord::from_str(val).expect("Ops!"))
                    .inspect(|p|{
                        tl.x = std::cmp::min(tl.x, p.x);
                        br.x = std::cmp::max(br.x, p.x);
                        br.y = std::cmp::max(br.y, p.y);
                    })
                    .collect::<Vec<_>>()
            })
            .fold(vec![],|mut out, pline|{
                out.push(pline);
                out
            });
    (tl, br, plines)
}
}

This function:

  1. Parses each line of the input into a sequence of coordinates
  2. Tracks the bounding box of all coordinates (top-left and bottom-right)
  3. Returns the bounding box and the list of rock paths
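The Coord::from_str used above is not shown in this walkthrough; a plausible implementation, assuming the fields are usize and the error type is (), simply splits each "x,y" pair on the comma:

// Sketch only - the actual FromStr implementation may differ.
impl FromStr for Coord {
    type Err = ();
    fn from_str(s: &str) -> Result<Self, Self::Err> {
        let (x, y) = s.trim().split_once(',').ok_or(())?;
        Ok(Coord {
            x: x.trim().parse().map_err(|_| ())?,
            y: y.trim().parse().map_err(|_| ())?,
        })
    }
}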

Creating the Cave Board

The board is created based on the bounding box:

#![allow(unused)]
fn main() {
fn new(tl: Coord, br: Coord) -> Self {
    let width = br.x - tl.x + 1 + 200;
    let offset_x = if tl.x > 200 { tl.x - 100 } else { 0 };
    let centre_x = 500 - offset_x;
    Board {
        width,
        height: br.y + 3,
        centre_x,
        offset_x,
        grid: HashMap::new()
    }
}
}

The board is sized to include all rock formations plus some extra space for sand to accumulate. The offset_x value is used to make the board more memory-efficient by not starting from x=0 when all the action happens near x=500.

Drawing Rock Formations

Rock formations are drawn on the board using the Painter helper:

#![allow(unused)]
fn main() {
fn rock_walls(board: &mut Board<Material>, points: &Vec<Coord>) {
    points.windows(2)
        .for_each(|w|{
            if let [a, b] = w {
                Painter::wall(board, *a, *b, Material::Rock);
            }
        })
}
}

This function takes a sequence of points and draws rock walls between each consecutive pair. The wall function handles drawing both horizontal and vertical lines:

#![allow(unused)]
fn main() {
fn wall(board: &mut Board<Material>, a: Coord, b: Coord, m: Material) {
    if a.x == b.x {
        // vertical wall
        for y in std::cmp::min(a.y, b.y)..=std::cmp::max(a.y, b.y) {
            *board.square_mut(Coord { x: a.x, y }).unwrap() = m;
        }
    } else if a.y == b.y {
        // horizontal wall
        for x in std::cmp::min(a.x, b.x)..=std::cmp::max(a.x, b.x) {
            *board.square_mut(Coord { x, y: a.y }).unwrap() = m;
        }
    }
}
}

Simulating Sand Movement

The core of the solution is the sand simulation. A unit of sand falls according to specific rules until it comes to rest or falls into the abyss:

#![allow(unused)]
fn main() {
fn fall(&mut self, board: &Board<Material>) -> Option<()> {
    // Try to move down
    let down = Coord { x: self.pos.x, y: self.pos.y + 1 };
    if let Some(Material::Air) = board.square(down) {
        self.pos = down;
        return Some(());
    }
    
    // Try to move down-left
    let down_left = Coord { x: self.pos.x - 1, y: self.pos.y + 1 };
    if let Some(Material::Air) = board.square(down_left) {
        self.pos = down_left;
        return Some(());
    }
    
    // Try to move down-right
    let down_right = Coord { x: self.pos.x + 1, y: self.pos.y + 1 };
    if let Some(Material::Air) = board.square(down_right) {
        self.pos = down_right;
        return Some(());
    }
    
    // Can't move further
    if board.in_bounds(self.pos) {
        self.settled = true;
        Some(())
    } else {
        None
    }
}
}

This method tries to move the sand grain in the priority order: down, down-left, down-right. If no move is possible, the grain comes to rest.

Running the Simulation

The run method simulates falling sand until a specified condition is met:

#![allow(unused)]
fn main() {
fn run<F>(&mut self, start: Coord, check_goal: F) where F: Fn(&Grain) -> bool {
    loop {
        let mut grain = Grain::release_grain(start);

        // let the grain fall until it either (a) settles or (b) falls off the board
        while grain.fall(self).is_some() {};

        // Have we reached an end state?
        // we use a closure that passes the stopped grain
        // for checking whether (a) it has fallen in the abyss or (b) reached the starting position
        if check_goal(&grain) {
            // Mark settled grain position on the board
            *self.square_mut(grain.pos).unwrap() = Material::Sand;
            break
        }

        // Mark settled grain position on the board
        *self.square_mut(grain.pos).unwrap() = Material::Sand;
    }
}
}

The method takes a closure check_goal that determines when to stop the simulation. This allows for different stopping conditions for Part 1 and Part 2.

Adding a Floor (Part 2)

For Part 2, a floor is added at the bottom of the cave:

#![allow(unused)]
fn main() {
fn toggle_floor(&mut self) {
    let height = self.height-1;
    let left = Coord { x: self.offset_x, y: height };
    let right = Coord { x: self.offset_x + self.width - 1, y : height };
    match self.square(left) {
        Some(Material::Rock) => Painter::wall(self, left, right, Material::Air),
        _ => Painter::wall(self, left, right, Material::Rock)
    }
}
}

This adds a horizontal rock wall at the bottom of the cave, simulating the floor described in Part 2.

Counting Sand Grains

The solution counts the number of sand grains at rest:

#![allow(unused)]
fn main() {
fn grains_at_rest(&self) -> usize {
    self.grid.values()
        .filter(|&s| Material::Sand.eq(s))
        .count()
}
}

Solving the Problem

The solution solves both parts of the problem:

#![allow(unused)]
fn main() {
// Part 1: Count sand units until one falls into the abyss
board.run(start, |g| !g.is_settled());
println!("Scenario 1: Grains Rest: {}", board.grains_at_rest() - 1);

// Reset for Part 2
board.empty_sand();

// Part 2: Add floor and count until source is blocked
board.toggle_floor();
board.run(start, |g| g.pos.eq(&start));
println!("Scenario 2: Grains Rest: {}", board.grains_at_rest());
}

For Part 1, the simulation stops when a grain fails to settle (falls into the abyss); because run still marks that final grain on the board, the printed count subtracts one. For Part 2, the simulation stops when a grain settles at the source position, blocking further sand.

Visualization

The solution includes a visualization component using the bracket-lib library, allowing you to see the sand falling in real-time.

Algorithm Analysis

Time Complexity

  • Parsing: O(n) where n is the number of coordinates in the input
  • Sand Simulation: O(s × h) where s is the number of sand grains and h is the height of the cave
  • Overall: O(s × h) since the sand simulation dominates

Space Complexity

  • Board Storage: O(r + s) where r is the number of rock positions and s is the number of sand positions
  • Path Storage: O(n) for storing the rock paths

Alternative Approaches

Array-Based Grid

Instead of using a hashmap for the grid, we could use a 2D array:

#![allow(unused)]
fn main() {
struct ArrayBoard {
    width: usize,
    height: usize,
    grid: Vec<Vec<Material>>
}
}

This would give constant-time indexed access (the hashmap is O(1) on average but O(n) in the worst case), but it would use more memory for sparse caves.

Scan Lines

Another approach would be to use a scan line algorithm to more efficiently determine where sand will come to rest without simulating each step:

#![allow(unused)]
fn main() {
fn calculate_rest_position(board: &Board, start: Coord) -> Option<Coord> {
    // Find the first rock/sand below the start position
    // Check if sand can flow left or right
    // Return the final rest position
}
}

This could be faster for certain scenarios but would be more complex to implement correctly, especially for Part 2.

Conclusion

This solution demonstrates a comprehensive approach to physical simulation in a grid-based environment. The use of a hashmap for the grid provides memory efficiency, while the simulation logic accurately captures the problem's constraints. The solution is also flexible enough to handle both parts of the problem with minimal changes.

Day 14: Code

Below is the complete code explanation for Day 14's solution, which simulates falling sand in a cave system with rock formations.

Code Structure

The solution is quite extensive and uses several key components:

  1. A Board<T> struct to represent the cave grid
  2. A Material enum for different types of material (rock, sand, air)
  3. A Grain struct to track individual sand units
  4. A Painter helper to draw rock formations
  5. Simulation logic for falling sand
  6. Visualization components using bracket-lib

Key Components

Board and Materials

The cave is represented by a Board struct with a hashmap grid:

struct Board<T> {
    width: usize,
    height: usize,
    centre_x: usize,
    offset_x: usize,
    grid: HashMap<Coord,T>,
}

The materials in the cave are represented by an enum:

enum Material { Rock, Sand, Air }
impl Default for Material {
    fn default() -> Self { Material::Air }
}

Sand Grain Representation

Each unit of sand is represented by a Grain struct:

struct Grain {
    pos: Coord,
    settled: bool
}

Parsing Rock Formations

The input is parsed into rock formations:

fn parse_plines(input:&str) -> (Coord, Coord, Vec<Vec<Coord>>) {
    let mut br = Coord{ x: usize::MIN, y: usize::MIN };
    let mut tl = Coord{ x: usize::MAX, y: 0 };
    let plines =
        input.lines()
            .map(|line|{
                line.split(" -> ")
                    .map(|val| Coord::from_str(val).expect("Ops!"))
                    .inspect(|p|{
                        tl.x = std::cmp::min(tl.x, p.x);
                        br.x = std::cmp::max(br.x, p.x);
                        br.y = std::cmp::max(br.y, p.y);
                    })
                    .collect::<Vec<_>>()
            })
            .fold(vec![],|mut out, pline|{
                out.push(pline);
                out
            });
    (tl, br, plines)
}

Drawing Rock Walls

Rock walls are drawn between consecutive points:

    fn rock_walls(board: &mut Board<Material>, c: &[Coord]) {
        c.windows(2)
            .for_each(| p|
                Painter::wall(board, p[0], p[1], Material::Rock)
            );
    }

Sand Movement Simulation

The core of the solution is the sand movement logic:

    fn fall(&mut self, board: &Board<Material>) -> Option<Coord> {

        if self.settled { return None }

        let Coord{ x, y} = self.pos;

        let [lc, uc, rc] = [(x-1, y+1).into(), (x, y+1).into(), (x+1, y+1).into()];

        let l = board.square( lc );
        let u = board.square( uc );
        let r = board.square( rc );

        match (l,u,r) {
            (_, None, _) => None,
            (_, Some(Material::Air), _) => { self.pos = uc; Some(self.pos) },
            (Some(Material::Air), _, _) => { self.pos = lc; Some(self.pos) },
            (_, _, Some(Material::Air)) => { self.pos = rc; Some(self.pos) },
            (_, _, _) => { self.settled = true; None }
        }
    }
    fn is_settled(&self) -> bool {
        self.settled
    }
}

Running the Simulation

The simulation runs until a specified condition is met:

    fn run<F>(&mut self, start: Coord, check_goal: F) where F: Fn(&Grain) -> bool {

        loop {
            let mut grain = Grain::release_grain(start);

            // let the grain fall until it either (a) settles or (b) falls off the board
            while grain.fall(self).is_some() {};

            // Have we reached an end state ?
                // we use a closure that passes the stopped grain
                // for checking whether (a) it has fallen in the abyss or (b) reached the starting position
            if check_goal(&grain) {
                // Mark settled grain position on the board
                *self.square_mut(grain.pos).unwrap() = Material::Sand;
                break
            }

            // Mark settled grain position on the board
            *self.square_mut(grain.pos).unwrap() = Material::Sand;
        }
    }

Managing the Floor (Part 2)

A floor is added for Part 2:

    fn toggle_floor(&mut self) {
        let height = self.height-1;
        let left = Coord { x: self.offset_x, y: height };
        let right = Coord { x: self.offset_x + self.width - 1, y : height };
        match self.square(left) {
            Some(Material::Rock) => Painter::wall(self, left, right, Material::Air),
            _ => Painter::wall(self, left, right, Material::Rock)
        }
    }

Counting Sand Grains

The solution counts sand grains at rest:

    fn grains_at_rest(&self) -> usize {
        self.grid.values()
            .filter(|&s| Material::Sand.eq(s) )
            .count()
    }

Main Function

The main function sets up the simulation and runs both parts of the problem:

fn main() -> BResult<()> {

    // let input = "498,4 -> 498,6 -> 496,6\n503,4 -> 502,4 -> 502,9 -> 494,9".to_string();
    let input = std::fs::read_to_string("src/bin/day14_input.txt").expect("ops!");

    // parse the board's wall layout
    let (tl, br, plines) = parse_plines(input.as_str());

    let mut board = Board::new(tl, br);

    // paint layout on the board
    plines.into_iter()
        .for_each(|pline|
            Painter::rock_walls(&mut board, &pline)
        );

    // run the sand simulation until we reach the abyss, that is, grain stopped but not settled
    let start = (board.centre_x, 0).into();
    board.run(
        start, |g| !g.is_settled()
    );
    println!("Scenario 1: Grains Rest: {}\n{:?}", board.grains_at_rest() - 1, board);

    board.empty_sand();
    // add rock floor
    board.toggle_floor();
    // run the sand simulation until grain settled position == starting position
    board.run(
        start, |g| g.pos.eq(&start)
    );
    println!("Scenario 2: Grains Rest: {}\n{:?}", board.grains_at_rest(), board);

Visualization

The solution includes a visualization component using bracket-lib:

    let ctx = BTermBuilder::simple(board.width >> 1, board.height >> 1)?
        .with_simple_console(board.width, board.height, "terminal8x8.png")
        .with_simple_console_no_bg(board.width, board.height, "terminal8x8.png")
        .with_simple_console_no_bg(board.width >> 2, board.height >> 2, "terminal8x8.png")
        .with_fps_cap(60f32)
        .with_title("S: Reset, R: Run, G: Grain: Q: Quit")
        .build()?;

    let mut app = App::init(
        Store {
            board,
            grains: VecDeque::new(),
            start
        },
        Levels::MENU
    );
    app.register_level(Levels::MENU, Menu);
    app.register_level(Levels::LEVEL1, ExerciseOne {run:false, abyss:false} );
    app.register_level(Levels::LEVEL2, ExerciseTwo {ceiling:false} );

    main_loop(ctx, app)
}

Implementation Notes

  • Grid Representation: The solution uses a hashmap for the grid, which is memory-efficient for sparse grids
  • Flexible Simulation: The run method takes a closure parameter to allow different stopping conditions
  • Visualization: The solution includes a real-time visualization of the falling sand
  • Movement Logic: Sand follows specific rules with a priority order of movement directions

The code elegantly handles both parts of the problem using a comprehensive simulation of the physical process described in the problem.

Day 15: Beacon Exclusion Zone

Day 15 involves analyzing sensor coverage to find positions where beacons cannot be present.

Problem Overview

You need to help locate a distress beacon in a cave system. There are several sensors that can detect the nearest beacon, and you need to use this information to:

  1. Determine positions where a beacon cannot possibly be located
  2. Find the one position in a specific area where the distress beacon must be located

The key aspects of this problem are:

  • Each sensor reports its position and the position of the nearest beacon
  • The distance between a sensor and its nearest beacon is calculated using Manhattan distance
  • For Part 1, you need to count positions that cannot contain a beacon in a specific row
  • For Part 2, you need to find the only possible position for the distress beacon in a large area

This problem tests your ability to work with ranges and coordinate systems efficiently.

Day 15: Problem Description

Beacon Exclusion Zone

You feel the ground rumble again as the distress signal leads you to a large network of subterranean tunnels. You don't have time to search them all, but you don't need to: your pack contains a set of deployable sensors that you imagine were originally built to locate lost Elves.

The sensors aren't very powerful, but that's okay; your handheld device indicates that you're close enough to the source of the distress signal. You pull the emergency sensor system out of your pack, hit the big button on top, and the sensors zoom off down the tunnels.

Once a sensor finds a spot it thinks will give it a good reading, it attaches itself to a hard surface and begins monitoring for the nearest signal source beacon. Sensors and beacons always exist at integer coordinates. Each sensor knows its own position and can determine the position of a beacon precisely; however, sensors can only lock on to the one beacon closest to the sensor as measured by the Manhattan distance. (There is never a tie where two beacons are the same distance to a sensor.)

It doesn't take long for the sensors to report back their positions and closest beacons (your puzzle input). For example:

Sensor at x=2, y=18: closest beacon is at x=-2, y=15
Sensor at x=9, y=16: closest beacon is at x=10, y=16
Sensor at x=13, y=2: closest beacon is at x=15, y=3
Sensor at x=12, y=14: closest beacon is at x=10, y=16
Sensor at x=10, y=20: closest beacon is at x=10, y=16
Sensor at x=14, y=17: closest beacon is at x=10, y=16
Sensor at x=8, y=7: closest beacon is at x=2, y=10
Sensor at x=2, y=0: closest beacon is at x=2, y=10
Sensor at x=0, y=11: closest beacon is at x=2, y=10
Sensor at x=20, y=14: closest beacon is at x=25, y=17
Sensor at x=17, y=20: closest beacon is at x=21, y=22
Sensor at x=16, y=7: closest beacon is at x=15, y=3
Sensor at x=14, y=3: closest beacon is at x=15, y=3
Sensor at x=20, y=1: closest beacon is at x=15, y=3

So, consider the sensor at 2,18; the closest beacon to it is at -2,15. For the sensor at 9,16, the closest beacon to it is at 10,16.

Drawing sensors as S and beacons as B, the above arrangement of sensors and beacons looks like this:

               1    1    2    2
     0    5    0    5    0    5
 0 ....S.................
 1 .......................
 2 ...............S.......
 3 ......................B
 4 .......................
 5 .......................
 6 .......................
 7 ..........S.......B....
 8 .......................
 9 .......................
10 ....B..................
11 ..S....................
12 .......................
13 .......................
14 ..............S...S....
15 B......................
16 ...........SB..........
17 ................S......
18 ....S.................
19 .......................
20 ............S......S...
21 .......................
22 .......................B

This isn't necessarily a comprehensive map of all beacons in the area, though. Because each sensor only identifies its closest beacon, if a sensor detects a beacon, you know there are no other beacons that close or closer to that sensor. There could still be beacons that just happen to not be the closest beacon to any sensor. Consider the sensor at 8,7:

               1    1    2    2
     0    5    0    5    0    5
-2 ..........#.............
-1 .........###............
 0 ....S...#####...........
 1 .......#######........S.
 2 ......#########S........
 3 .....###########B.......
 4 ....#############.......
 5 ...###############......
 6 ..#################.....
 7 .#########S#######.....
 8 ..#################.....
 9 ...###############......
10 ....B############.......
11 ..S..###########........
12 ......#########.........
13 .......#######..........
14 ........#####..S........
15 B........###............
16 ..........#SB...........
17 ................S.......
18 ....S.................
19 .......................
20 ............S......S...
21 .......................
22 .......................B

This sensor's closest beacon is at 2,10, and so you know there are no beacons that close or closer (in any positions marked #).

None of the detected beacons seem to be producing the distress signal, so you'll need to work out where the distress beacon is by working out where it isn't. For now, keep things simple by counting the positions where a beacon cannot possibly be along just a single row.

So, suppose you have an arrangement of beacons and sensors like in the example above and, just in the row where y=10, you'd like to count the number of positions a beacon cannot possibly exist. The coverage from all sensors near that row looks like this:

                 1    1    2    2
       0    5    0    5    0    5
 9 ...#########################...
10 ..####B######################..
11 .###S#############.###########.

In this example, in the row where y=10, there are 26 positions where a beacon cannot be present.

Part 1

Consult the report from the sensors you just deployed. In the row where y=2000000, how many positions cannot contain a beacon?

Part 2

Your handheld device indicates that the distress signal is coming from a beacon nearby. The distress beacon is not detected by any sensor, but the distress beacon must have x and y coordinates each no lower than 0 and no larger than 4000000.

To isolate the distress beacon's signal, you need to determine its tuning frequency, which can be found by multiplying its x coordinate by 4000000 and then adding its y coordinate.

In the example above, the search space is smaller: instead, the x and y coordinates can each be at most 20. With this reduced search area, there is only a single position that could have a beacon: x=14, y=11. The tuning frequency for this distress beacon is 56000011.

Find the only possible position for the distress beacon. What is its tuning frequency?

Day 15: Solution Explanation

Approach

Day 15 involves analyzing sensor coverage to find positions where beacons cannot be present. The key challenge is efficiently handling the potentially large search space.

The solution breaks down into several components:

  1. Parsing the input data: Extract sensor and beacon positions from the input
  2. Calculating sensor coverage: Determine the area each sensor can cover based on Manhattan distance
  3. Analyzing coverage on specific rows: Find ranges of positions that cannot contain a beacon
  4. Finding the distress beacon: Identify the one position where the distress beacon must be located

The key insight is to work with ranges rather than individual positions, which allows for much more efficient processing.
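To make the idea concrete, here is a toy illustration with made-up numbers: two overlapping coverage ranges on the same row collapse into one, so a whole row can be described by a handful of ranges rather than millions of individual positions.

use std::ops::RangeInclusive;

// Two overlapping sensor ranges on one row merge into a single range
let a: RangeInclusive<isize> = -2..=14;
let b: RangeInclusive<isize> = 10..=24;
let merged = *a.start()..=*a.end().max(b.end());
assert_eq!(merged, -2..=24);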

Implementation Details

Data Structures

The solution uses several key data structures:

#![allow(unused)]
fn main() {
#[derive(Ord, PartialOrd, Copy, Clone, Eq, PartialEq, Hash)]
struct Coord {
    x: isize,
    y: isize
}

#[derive(Eq, PartialEq, Hash)]
struct Sensor {
    pos: Coord,
    beacon: Coord,
    dist: usize
}

struct Area {
    sensors: Vec<Sensor>
}
}

These structures represent coordinates, sensors, and the overall area being analyzed.

Parsing the Input

The input is parsed into a collection of sensors:

#![allow(unused)]
fn main() {
fn deploy_sensors(sensors: &str) -> Area {
    Area {
        sensors: sensors.lines()
            .map(|line|
                line.split(&[' ','=',',',':'])
                    .filter(|item| !item.trim().is_empty())
                    .filter(|item| item.chars().all(|d| d.is_numeric() || d == '-'))
                    .filter_map(|n| isize::from_str(n).ok())
                    .collect::<Vec<_>>()
            )
            .map(|comb|
                Sensor {
                    pos: (comb[0], comb[1]).into(),
                    beacon: (comb[2], comb[3]).into(),
                    dist: comb[0].abs_diff(comb[2]) + comb[1].abs_diff(comb[3])
                }
            )
            .collect::<Vec<_>>()
    }
}
}

This function extracts the coordinates from each line and calculates the Manhattan distance between each sensor and its nearest beacon.
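For illustration, the first line of the example report parses to the numbers [2, 18, -2, 15], i.e. a sensor at (2,18) whose closest beacon is (-2,15), at a Manhattan distance of |2-(-2)| + |18-15| = 7:

let area = Area::deploy_sensors("Sensor at x=2, y=18: closest beacon is at x=-2, y=15");
assert_eq!(area.sensors[0].dist, 7);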

Calculating Sensor Coverage

For each sensor, we need to determine its coverage at a specific y-coordinate. This is done by calculating a range of x-coordinates that the sensor can cover:

#![allow(unused)]
fn main() {
fn coverage_at(&self, d: isize) -> Option<RangeInclusive<isize>> {
    let Coord{x, y} = self.pos;
    let diff = y.abs_diff(d);
    if diff <= self.dist {
        Some(RangeInclusive::new(
            x.saturating_sub_unsigned(self.dist - diff),
            x.saturating_add_unsigned(self.dist - diff))
        )
    } else {
        None
    }
}
}

This method:

  1. Calculates the vertical distance from the sensor to the specified y-coordinate
  2. If this distance is within the sensor's range, calculates the horizontal range the sensor can cover at that y-coordinate
  3. Returns the range as a RangeInclusive<isize>, or None if the y-coordinate is out of range
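A worked example using the sensor at (8,7) from the problem statement: its closest beacon is (2,10), so dist = 9. On row y=10 the vertical distance is 3, leaving 9 - 3 = 6 columns of coverage on either side of x=8:

// Coverage of the (8,7) sensor on row 10 is the range 2..=14
let sensor = Sensor { pos: Coord { x: 8, y: 7 }, beacon: Coord { x: 2, y: 10 }, dist: 9 };
assert_eq!(sensor.coverage_at(10), Some(2..=14));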

Analyzing Coverage on a Row

To determine the coverage on a specific row, we need to combine the ranges from all sensors:

#![allow(unused)]
fn main() {
fn sensor_coverage_at(&self, line: isize) -> Vec<RangeInclusive<isize>> {
    let mut result = vec![];

    let mut ranges = self.sensors.iter()
            .filter_map(|sensor| sensor.coverage_at(line))
            .collect::<Vec<_>>();

    ranges.sort_by_key(|a| *a.start());

    if let Some(last) = ranges.into_iter()
        .reduce(|a, b|
            if a.end() >= &(b.start()-1) {
                if a.end() < b.end() {
                    *a.start()..=*b.end()
                } else { a }
            } else {
                // We got a range gap here hence we must save range A
                // while we pass on Range B to the next iteration
                result.push(a);
                b
            }
        ) {
        result.push(last);
    }
    result
}
}

This method:

  1. Collects the coverage ranges from all sensors for the specified row
  2. Sorts the ranges by their start position
  3. Merges overlapping or adjacent ranges
  4. Returns a list of non-overlapping ranges representing the total coverage

The merging step is crucial for efficiency, as it allows us to represent large areas of coverage with just a few ranges.

Finding Beacons on a Row

We also need to identify beacons that are already on the specified row:

#![allow(unused)]
fn main() {
fn beacons_at(&self, line: isize) -> HashSet<Coord> {
    self.sensors.iter()
        .filter_map(|s| if s.beacon.y == line { Some(s.beacon) } else { None })
        .collect::<HashSet<_>>()
}
}

This is used to exclude beacon positions from the count of positions where a beacon cannot be present.

Finding the Distress Beacon (Part 2)

For Part 2, we need to find the one position in a large area where the distress beacon must be located. The key insight is that this position must be just outside the range of multiple sensors:

#![allow(unused)]
fn main() {
let (line, v) = (0..=4000000)
    .map(|line| (line, area.sensor_coverage_at(line)))
    .filter(|(_, v)| v.len() > 1)
    .filter(|(_, v)| v[1].start() - v[0].end() > 1)
    .next().unwrap();

let total = (v[0].end() + 1) * 4000000 + line;
}

This code:

  1. Checks each row in the search area
  2. Identifies rows where the coverage is split into multiple ranges
  3. Takes the first row where the ranges leave a gap between them (the puzzle guarantees this gap is a single position)
  4. Calculates the tuning frequency based on the position in the gap

This approach is much more efficient than checking every possible position, as it only needs to examine rows where the coverage is not continuous.
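As a worked check against the example from the problem statement, the only uncovered position is x=14 on row y=11, which reproduces the stated tuning frequency:

// x * 4000000 + y for the example's distress beacon
assert_eq!(14_isize * 4_000_000 + 11, 56_000_011);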

Algorithmic Analysis

Time Complexity

  • Parsing: O(n) where n is the number of sensors
  • Coverage Calculation: O(n) for each row analyzed
  • Range Merging: O(n log n) due to the sorting step
  • Part 1: O(n log n)
  • Part 2: O(y * n log n) where y is the number of rows in the search area

Space Complexity

  • Storage: O(n) for storing the sensors and their information
  • Range Processing: O(n) for storing the ranges during processing

Alternative Approaches

Grid-Based Approach

A naive approach would be to use a grid to track each position:

#![allow(unused)]
fn main() {
fn count_positions_without_beacon(sensors: &[Sensor], y: isize, x_range: RangeInclusive<isize>) -> usize {
    let mut count = 0;
    for x in x_range {
        let pos = Coord { x, y };
        if sensors.iter().any(|s| s.covers(pos)) && !sensors.iter().any(|s| s.beacon == pos) {
            count += 1;
        }
    }
    count
}
}

This would be much less efficient for large search areas, with a time complexity of O(x * n) where x is the width of the search area.
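The covers helper assumed by the sketch above is not part of the actual solution; a minimal definition would just test the Manhattan distance against the sensor's radius:

// Hypothetical helper for the naive grid approach
impl Sensor {
    fn covers(&self, pos: Coord) -> bool {
        self.pos.x.abs_diff(pos.x) + self.pos.y.abs_diff(pos.y) <= self.dist
    }
}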

Binary Search for Part 2

Another approach for Part 2 would be to use binary search to find the gap more efficiently:

#![allow(unused)]
fn main() {
fn find_gap(ranges: &[RangeInclusive<isize>], min: isize, max: isize) -> Option<isize> {
    // Binary search for a gap in the ranges
    // ...
}
}

This could potentially reduce the time complexity for finding the gap, but would be more complex to implement correctly.

Geometric Approach

A more sophisticated approach would be to use computational geometry techniques:

#![allow(unused)]
fn main() {
fn find_distress_beacon(sensors: &[Sensor], bounds: (isize, isize)) -> Coord {
    // Find intersection points of sensor boundaries
    // Check positions just outside the boundary of each sensor
    // ...
}
}

This would be more efficient for very large search areas but would require more complex geometric calculations.

Conclusion

This solution demonstrates an efficient approach to a problem that involves analyzing large ranges of positions. By working with ranges rather than individual positions, we can efficiently solve both parts of the problem. The range merging technique is particularly effective for Part 1, while the gap-finding approach allows us to solve Part 2 without exhaustively checking every position.

Day 15: Code

Below is the complete code for Day 15's solution, which analyzes sensor coverage to find positions where beacons cannot be present.

Full Solution

use std::collections::HashSet;
use std::fmt::{Debug, Formatter};
use std::ops::RangeInclusive;
use std::str::FromStr;

// const INPUT : &str = "Sensor at x=2, y=18: closest beacon is at x=-2, y=15
// Sensor at x=9, y=16: closest beacon is at x=10, y=16
// Sensor at x=13, y=2: closest beacon is at x=15, y=3
// Sensor at x=12, y=14: closest beacon is at x=10, y=16
// Sensor at x=10, y=20: closest beacon is at x=10, y=16
// Sensor at x=14, y=17: closest beacon is at x=10, y=16
// Sensor at x=8, y=7: closest beacon is at x=2, y=10
// Sensor at x=2, y=0: closest beacon is at x=2, y=10
// Sensor at x=0, y=11: closest beacon is at x=2, y=10
// Sensor at x=20, y=14: closest beacon is at x=25, y=17
// Sensor at x=17, y=20: closest beacon is at x=21, y=22
// Sensor at x=16, y=7: closest beacon is at x=15, y=3
// Sensor at x=14, y=3: closest beacon is at x=15, y=3
// Sensor at x=20, y=1: closest beacon is at x=15, y=3";

fn main() {
    let input = std::fs::read_to_string("src/bin/day15_input.txt").expect("Ops!");

    let area = Area::deploy_sensors(input.as_str());

    // Component 1
    let res = area.sensor_coverage_at(2000000);
    println!("Signal Coverage @2000000 = {:?}",res);
    let beacons = area.beacons_at(2000000);
    println!("Beacons = {:?}",beacons);

    let positions = res.into_iter()
        .map(|r| r.count())
        .sum::<usize>();
    println!("{}-{}={} (4793062)", positions,beacons.len(),positions-beacons.len());

    // Component 2
    let (line, v) = (0..=4000000)
        .map(|line| (line,area.sensor_coverage_at(line)))
        .filter(|(_,v)| v.len() > 1 )
        .filter(|(_,v)| v[1].start() - v[0].end() > 1 )
        .next().unwrap();

    let total = (v[0].end() + 1) * 4000000 + line;
    println!("Signal Coverage @{line} = {:?} \nFreq of distress beacon: {total}", v);
}

struct Area {
    sensors: Vec<Sensor>
}
impl Area {
    fn deploy_sensors(sensors:&str ) -> Area {
        Area {
            sensors: sensors.lines()
                .map(|line|
                    line.split(&[' ','=',',',':'])
                        .filter(|item| !item.trim().is_empty() )
                        .filter(|item| item.chars().all(|d| d.is_numeric() || d == '-'))
                        .filter_map(|n| isize::from_str(n).ok())
                        .collect::<Vec<_>>()
                )
                .map(|comb|
                    Sensor {
                        pos: (comb[0],comb[1]).into(),
                        beacon: (comb[2],comb[3]).into(),
                        dist: comb[0].abs_diff(comb[2]) + comb[1].abs_diff(comb[3])
                    }
                )
                .collect::<Vec<_>>()
        }
    }
    fn beacons_at(&self, line:isize) -> HashSet<Coord> {
        self.sensors.iter().filter_map(|s| if s.beacon.y == line { Some(s.beacon)} else {None}).collect::<HashSet<_>>()
    }
    fn sensor_coverage_at(&self, line: isize) -> Vec<RangeInclusive<isize>> {

        let mut result = vec![];

        let mut ranges = self.sensors.iter()
                .filter_map(|sensor| sensor.coverage_at(line))
                .collect::<Vec<_>>();

        ranges.sort_by_key(|a| *a.start());

        if let Some(last) = ranges.into_iter()
            .reduce(|a, b|
                if a.end() >= &(b.start()-1) {
                    if a.end() < b.end() {
                        *a.start()..=*b.end()
                    } else { a }
                } else {
                    // We got a range gap here hence we must save range A
                    // while we pass on Range B to the next iteration
                    result.push(a);
                    b
                }
            ) {
            result.push(last);
        }
        result
    }
}

#[derive(Eq, PartialEq, Hash)]
struct Sensor {
    pos: Coord,
    beacon: Coord,
    dist: usize
}
impl Sensor {
    fn coverage_at(&self, d: isize) -> Option<RangeInclusive<isize>> {
        let Coord{x,y} = self.pos;
        let diff = y.abs_diff(d);
        if diff <= self.dist {
            Some(RangeInclusive::new(
                x.saturating_sub_unsigned(self.dist - diff),
                x.saturating_add_unsigned(self.dist - diff))
            )
        } else {
            None
        }
    }
}

impl Debug for Sensor {
    fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
        write!(f, "S{:?} {:?} B{:?}",self.pos, self.dist, self.beacon)
    }
}

/// Generics
///

#[derive(Ord, PartialOrd,Copy, Clone, Eq, PartialEq,Hash)]
struct Coord {
    x: isize,
    y: isize
}
impl Debug for Coord {
    fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
        write!(f, "({},{})",self.x,self.y)
    }
}
impl From<(isize,isize)> for Coord {
    fn from(p: (isize, isize)) -> Self {
        Coord { x:p.0, y:p.1 }
    }
}

Code Walkthrough

Core Data Structures

#[derive(Ord, PartialOrd,Copy, Clone, Eq, PartialEq,Hash)]
struct Coord {
    x: isize,
    y: isize
}
impl Debug for Coord {
    fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
        write!(f, "({},{})",self.x,self.y)
    }
}

The Coord struct represents a 2D coordinate with x and y values. It implements several traits to make it comparable, hashable, and printable.

#[derive(Eq, PartialEq, Hash)]
struct Sensor {
    pos: Coord,
    beacon: Coord,
    dist: usize
}

The Sensor struct contains information about a sensor's position, its closest beacon's position, and the Manhattan distance between them.

struct Area {
    sensors: Vec<Sensor>
}

The Area struct is a container for all sensors in the input.

Sensor Coverage Calculation

impl Sensor {
    fn coverage_at(&self, d: isize) -> Option<RangeInclusive<isize>> {
        let Coord{x,y} = self.pos;
        let diff = y.abs_diff(d);
        if diff <= self.dist {
            Some(RangeInclusive::new(
                x.saturating_sub_unsigned(self.dist - diff),
                x.saturating_add_unsigned(self.dist - diff))
            )
        } else {
            None
        }
    }
}

This method calculates the x-coordinate range that a sensor can cover at a specific y-coordinate. It:

  1. Calculates the vertical distance to the target line
  2. If this distance is within the sensor's range, calculates the horizontal range
  3. Returns the range, or None if the line is out of range
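
As a concrete check, take the sensor at (16,7) from the example data above: its closest beacon at (15,3) gives a Manhattan distance of 1 + 4 = 5, so on row y = 10 it still rules out a five-cell span. A small standalone sketch of the same arithmetic:

fn main() {
    let (sensor_x, sensor_y, dist): (isize, isize, usize) = (16, 7, 5);
    let line = 10;
    let diff = sensor_y.abs_diff(line);          // 3: vertical distance to the target row
    assert!(diff <= dist);
    let reach = dist - diff;                     // 2: horizontal reach left on that row
    let coverage = sensor_x.saturating_sub_unsigned(reach)
        ..=sensor_x.saturating_add_unsigned(reach);
    assert_eq!(coverage, 14..=18);
}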

Analyzing Sensor Coverage on a Row

    fn sensor_coverage_at(&self, line: isize) -> Vec<RangeInclusive<isize>> {

        let mut result = vec![];

        let mut ranges = self.sensors.iter()
                .filter_map(|sensor| sensor.coverage_at(line))
                .collect::<Vec<_>>();

        ranges.sort_by_key(|a| *a.start());

        if let Some(last) = ranges.into_iter()
            .reduce(|a, b|
                if a.end() >= &(b.start()-1) {
                    if a.end() < b.end() {
                        *a.start()..=*b.end()
                    } else { a }
                } else {
                    // We got a range gap here hence we must save range A
                    // while we pass on Range B to the next iteration
                    result.push(a);
                    b
                }
            ) {
            result.push(last);
        }
        result
    }

This method aggregates coverage from all sensors on a specific row:

  1. Collects ranges from all sensors that cover the specified row
  2. Sorts the ranges by their start position
  3. Merges overlapping ranges using a reduce operation
  4. Returns a vector of non-overlapping ranges representing total coverage
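
To see the merge step in isolation, here is a small standalone sketch of the same reduce-based logic applied to a few hand-picked ranges (the gap at x = 14 keeps the last range separate):

use std::ops::RangeInclusive;

fn merge(mut ranges: Vec<RangeInclusive<isize>>) -> Vec<RangeInclusive<isize>> {
    let mut result = vec![];
    ranges.sort_by_key(|r| *r.start());
    if let Some(last) = ranges.into_iter()
        .reduce(|a, b|
            if *a.end() >= b.start() - 1 {
                // Overlapping or adjacent: extend range A if B reaches further
                if a.end() < b.end() { *a.start()..=*b.end() } else { a }
            } else {
                // Gap found: keep A and carry B into the next iteration
                result.push(a);
                b
            }
        ) {
        result.push(last);
    }
    result
}

fn main() {
    let merged = merge(vec![12..=13, -3..=13, 15..=25, 3..=11]);
    assert_eq!(merged, vec![-3..=13, 15..=25]);
}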

Finding Beacons on a Row

    fn beacons_at(&self, line:isize) -> HashSet<Coord> {
        self.sensors.iter().filter_map(|s| if s.beacon.y == line { Some(s.beacon)} else {None}).collect::<HashSet<_>>()
    }

This method identifies all beacons located on a specific row.
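
Because several sensors can report the same closest beacon (two of the sensors in the example data report the beacon at (2,10)), the result is collected into a HashSet so each beacon is counted only once. A minimal standalone illustration:

use std::collections::HashSet;

fn main() {
    // Two sensors reporting the same beacon on row 10 collapse into a single entry
    let reported = [(2isize, 10isize), (25, 17), (2, 10)];
    let on_row: HashSet<(isize, isize)> = reported.iter().copied()
        .filter(|&(_, y)| y == 10)
        .collect();
    assert_eq!(on_row.len(), 1);
}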

Parsing Input

    fn deploy_sensors(sensors:&str ) -> Area {
        Area {
            sensors: sensors.lines()
                .map(|line|
                    line.split(&[' ','=',',',':'])
                        .filter(|item| !item.trim().is_empty() )
                        .filter(|item| item.chars().all(|d| d.is_numeric() || d == '-'))
                        .filter_map(|n| isize::from_str(n).ok())
                        .collect::<Vec<_>>()
                )
                .map(|comb|
                    Sensor {
                        pos: (comb[0],comb[1]).into(),
                        beacon: (comb[2],comb[3]).into(),
                        dist: comb[0].abs_diff(comb[2]) + comb[1].abs_diff(comb[3])
                    }
                )
                .collect::<Vec<_>>()
        }
    }

This method parses the input text into Sensor objects by:

  1. Splitting each line into parts
  2. Filtering out non-numeric parts
  3. Converting numeric strings to integers
  4. Constructing sensors with their positions, beacon positions, and distances
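
For instance, the line "Sensor at x=20, y=14: closest beacon is at x=25, y=17" from the example data above boils down to exactly the four numbers the Sensor constructor needs:

use std::str::FromStr;

fn main() {
    let line = "Sensor at x=20, y=14: closest beacon is at x=25, y=17";
    let nums = line.split(&[' ', '=', ',', ':'])
        .filter(|item| !item.trim().is_empty())
        .filter(|item| item.chars().all(|d| d.is_numeric() || d == '-'))
        .filter_map(|n| isize::from_str(n).ok())
        .collect::<Vec<_>>();
    // Sensor at (20,14), beacon at (25,17), Manhattan distance 5 + 3 = 8
    assert_eq!(nums, vec![20, 14, 25, 17]);
}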

Main Function

fn main() {
    let input = std::fs::read_to_string("src/bin/day15_input.txt").expect("Ops!");

    let area = Area::deploy_sensors(input.as_str());

    // Component 1
    let res = area.sensor_coverage_at(2000000);
    println!("Signal Coverage @2000000 = {:?}",res);
    let beacons = area.beacons_at(2000000);
    println!("Beacons = {:?}",beacons);

    let positions = res.into_iter()
        .map(|r| r.count())
        .sum::<usize>();
    println!("{}-{}={} (4793062)", positions,beacons.len(),positions-beacons.len());

    // Component 2
    let (line, v) = (0..=4000000)
        .map(|line| (line,area.sensor_coverage_at(line)))
        .filter(|(_,v)| v.len() > 1 )
        .filter(|(_,v)| v[1].start() - v[0].end() > 1 )
        .next().unwrap();

    let total = (v[0].end() + 1) * 4000000 + line;
    println!("Signal Coverage @{line} = {:?} \nFreq of distress beacon: {total}", v);
}

The main function:

  1. Reads and parses the input file
  2. For Part 1:
    • Gets the sensor coverage on row 2000000
    • Identifies beacons already on that row
    • Calculates the number of positions that cannot contain a beacon
  3. For Part 2:
    • Checks each row in the search area (0 to 4000000)
    • Finds a row where the coverage is split with a gap of exactly one position
    • Calculates the tuning frequency of the distress beacon

The key insight for Part 2 is that the distress beacon must be in a position that is just outside the range of multiple sensors, which appears as a gap in the coverage.
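
To make the final computation concrete: in the puzzle's worked example the distress beacon turns out to sit at x=14, y=11, giving a tuning frequency of 14 * 4000000 + 11 = 56000011. A small sketch with illustrative coverage ranges for that row:

fn main() {
    // Illustrative split coverage on row 11; the only uncovered column is x = 14
    let (row, coverage): (isize, Vec<std::ops::RangeInclusive<isize>>) = (11, vec![-3..=13, 15..=25]);
    let x = coverage[0].end() + 1;               // first column after the left-hand range
    let tuning_frequency = x * 4_000_000 + row;  // mirrors (v[0].end() + 1) * 4000000 + line
    assert_eq!(tuning_frequency, 56_000_011);
}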

Implementation Notes

  • Range Representation: The solution uses RangeInclusive<isize> to represent coverage ranges efficiently
  • Merge Algorithm: Overlapping ranges are merged, significantly reducing the number of ranges needed to represent coverage
  • Efficient Searching: Part 2 locates the gap by scanning for rows whose coverage splits into more than one range, rather than testing every individual position

Day 16: Proboscidea Volcanium

Day 16 involves finding the optimal sequence for opening valves to release maximum pressure in a cave system.

Problem Overview

You're trying to escape a volcano through a network of tunnels with pressure-release valves. Your goal is to maximize the pressure released before time runs out. Key aspects include:

  1. Valves have different flow rates, and many have a flow rate of zero
  2. Moving between valves takes 1 minute, and opening a valve takes 1 minute
  3. For Part 1, you have 30 minutes to release as much pressure as possible
  4. For Part 2, you work with an elephant for 26 minutes to release maximum pressure

This problem is essentially a pathfinding optimization problem where the goal is to find the sequence of valve openings that maximizes the total pressure released.

Day 16: Problem Description

Proboscidea Volcanium

The sensors have led you to the origin of the distress signal at the top of a large mountain. The mountain is made up of hot springs and waterfalls, but the terrain is otherwise treacherous and difficult to navigate.

As the expedition team begins climbing the mountain, you notice a trail of steam that ends at the entrance to a large cave. As you begin to make your way there, wolves with glowing red eyes begin circling you.

Just then, a sudden gust of freezing wind blows a small locket with a picture of you in front of you onto the ground. As you pick it up, you begin to hear echoing all around you — a distress message from the Elves about danger in the underground cave. You consider the wolves and begin broadcasting your own danger message on a frequency the wolves can't hear.

The distress message includes information about how the cave currently works in its present non-volcanic state. If you can calculate the potential pressure releases just in time, you might have a chance to stop the volcano from erupting.

You scan the cave for potential pressure-release valves and detect a network of pipes and valves: there is a remotely operable pressure-release valve at each junction of pipes (your puzzle input lists them all). Each valve has a flow rate: the number of pressure units it can release per minute (from 0, for a valve that isn't worth opening, up to a reasonably large number). You start working out how long you, and possibly an elephant working alongside you, would need to move around the cave system, open valves, and release pressure.

To save time, you plan to open only the valves with non-zero flow rates. The rules for moving and opening valves are:

  • You start at valve AA.
  • It takes you 1 minute to move between valves.
  • It takes you 1 minute to open a valve.
  • Moving and opening valves each take a whole number of minutes.

To better plan your route, you note the flow rate of each valve from your scan. You're going to spend 30 minutes opening valves to release as much pressure as possible.

For example, suppose you have the following scan output:

Valve AA has flow rate=0; tunnels lead to valves DD, II, BB
Valve BB has flow rate=13; tunnels lead to valves CC, AA
Valve CC has flow rate=2; tunnels lead to valves DD, BB
Valve DD has flow rate=20; tunnels lead to valves CC, AA, EE
Valve EE has flow rate=3; tunnels lead to valves FF, DD
Valve FF has flow rate=0; tunnels lead to valves EE, GG
Valve GG has flow rate=0; tunnels lead to valves FF, HH
Valve HH has flow rate=22; tunnel leads to valve GG
Valve II has flow rate=0; tunnels lead to valves AA, JJ
Valve JJ has flow rate=21; tunnel leads to valve II

All of the valves begin closed. You start at valve AA, but it must be damaged or jammed or something: its flow rate is 0, so there's no point in opening it. However, you could spend one minute moving to valve BB and another minute opening it; doing so would release pressure during the remaining 28 minutes at a flow rate of 13, a total eventual pressure release of 28 * 13 = 364. You would then spend your remaining time moving to and opening the other valves with positive flow rates (CC, DD, EE, HH, and JJ); done optimally, the most pressure you can release on your own in 30 minutes is 1651.

However, working with a partner is even more effective. With an elephant helping (the scenario Part 2 introduces), one way to maximize pressure is:

== Minute 1 ==
You open valve DD.
The elephant waits.

== Minute 2 ==
You move to valve CC.
The elephant moves to valve JJ.

== Minute 3 ==
You open valve CC.
The elephant opens valve JJ.

== Minute 4 ==
You move to valve BB.
The elephant waits.

== Minute 5 ==
You open valve BB.
The elephant moves to valve II.

== Minute 6 ==
You move to valve AA.
The elephant moves to valve AA.

== Minute 7 ==
You move to valve II.
The elephant moves to valve DD.

== Minute 8 ==
You move to valve JJ.
The elephant opens valve DD.

== Minute 9 ==
You open valve JJ.
The elephant moves to valve EE.

...

Part 1

Work out the steps to release the most pressure in 30 minutes. What is the most pressure you can release?

Part 2

You're worried that even with an optimal approach, the pressure released won't be enough. What if you got one of the elephants to help you?

It would take you 4 minutes to teach an elephant how to open the right valves in the right order, leaving you with only 26 minutes to actually execute your plan. Would having two of you working together be better, even if it means having less time? (Assume the elephant is just as capable as you are at moving and opening valves.)

In the example above, you could teach the elephant your plan, which would take 4 minutes:

== Minute 1 ==
You move to valve II.
The elephant moves to valve DD.

== Minute 2 ==
You move to valve JJ.
The elephant opens valve DD.

== Minute 3 ==
You open valve JJ.
The elephant moves to valve EE.

== Minute 4 ==
You wait.
The elephant opens valve EE.

...

With the elephant helping, after opening valves BB, CC, DD, EE, HH, and JJ, the combined flow rate reaches 81, and the total pressure released over the 26 minutes comes to 1707.

However, you and the elephant need to be careful not to interfere with each other. As a result, you need to meticulously coordinate your actions to make sure that you and the elephant are never both trying to open the same valve, or move to the same valve.

With both you and the elephant working together for 26 minutes, what is the most pressure you could release?

Day 16: Solution Explanation

Approach

Day 16 involves optimizing a sequence of valve openings to maximize the pressure released in a limited time. This is a complex optimization problem that can be approached in several ways. The solution uses a combination of techniques:

  1. Graph Representation: Modeling the valve network as a graph where valves are nodes and tunnels are edges
  2. Distance Caching: Pre-computing the distances between all relevant valves to avoid redundant calculations
  3. Recursive Backtracking: Exploring different valve opening sequences to find the optimal solution
  4. Pruning: Eliminating non-productive paths to reduce the search space

The key insight is recognizing that valves with zero flow rate never need to be opened, which significantly reduces the search space: in the example scan above, only 6 of the 10 valves (BB, CC, DD, EE, HH, and JJ) have a non-zero flow rate, so the search only has to order visits to those 6 plus the fixed start at AA.

Implementation Details

Data Structures

The solution uses several key data structures:

ValveNet

This structure represents the network of valves and tunnels:

#![allow(unused)]
fn main() {
struct ValveNet<'a> {
    graph: HashMap<&'a str, Vec<&'a str>>,  // Adjacency list representation
    flow: HashMap<&'a str, Valve>,           // Flow rate for each valve
    cache: Cache<(&'a str, &'a str)>         // Distance cache
}
}

Valve

This structure represents a single valve:

#![allow(unused)]
fn main() {
struct Valve {
    pressure: usize,  // Flow rate
    open: bool        // Whether the valve is open
}
}

ValveBacktrack

This structure handles the backtracking algorithm to find the optimal solution:

#![allow(unused)]
fn main() {
struct ValveBacktrack<'a> {
    net: &'a ValveNet<'a>,        // Reference to the valve network
    path: Vec<&'a str>,           // Current path being explored
    solution: Vec<&'a str>,       // Best solution found so far
    max: usize,                   // Maximum pressure released
    pressure: usize,              // Current pressure in this path
    time: Cell<SystemTime>        // For timing the solution
}
}

Preprocessing

Before running the main algorithm, the solution performs several preprocessing steps:

  1. Parsing the input: Converting the text input into a graph representation
  2. Identifying relevant valves: Finding all valves with non-zero flow rates
  3. Building a distance cache: Pre-computing the distances between all relevant valves
#![allow(unused)]
fn main() {
fn nonzero_valves(&self) -> Vec<&str> {
    self.flow.iter()
        .filter(|(_, v)| v.pressure > 0)
        .fold(vec![], |mut out, (name, _)| {
            out.push(name);
            out
        })
}

fn build_cache(&self, valves: &[&'a str]) {
    for &a in valves {
        for &b in valves {
            if a != b {
                self.cache.push(
                    (a, b),
                    self.travel_distance(a, b).unwrap()
                );
            }
        }
    }
}
}

The distance cache is crucial for performance, as it allows the algorithm to quickly look up the time required to move between valves without recalculating paths.
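
The Cache type itself is not reproduced in this book. Judging from how it is used (push and pull calls made through a shared &self reference), one hypothetical sketch based on interior mutability could look like the following; the real implementation may differ:

#![allow(unused)]
fn main() {
use std::cell::RefCell;
use std::collections::HashMap;
use std::hash::Hash;

// Hypothetical sketch only, not the actual Cache from the solution.
// A RefCell lets build_cache() and travel_distance() memoise results
// while only holding an immutable reference to the ValveNet.
struct Cache<K> {
    store: RefCell<HashMap<K, usize>>
}

impl<K: Eq + Hash> Cache<K> {
    fn new() -> Cache<K> {
        Cache { store: RefCell::new(HashMap::new()) }
    }
    fn push(&self, key: K, value: usize) {
        self.store.borrow_mut().insert(key, value);
    }
    fn pull(&self, key: K) -> Option<usize> {
        self.store.borrow().get(&key).copied()
    }
}
}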

Distance Calculation

The distances between valves are calculated using breadth-first search (BFS):

#![allow(unused)]
fn main() {
fn travel_distance(&self, start: &'a str, end: &'a str) -> Option<usize> {
    // Check if distance is already cached
    if let Some(cost) = self.cache.pull((start, end)) {
        return Some(cost);
    }

    // Perform BFS to find shortest path
    let mut queue = VecDeque::new();
    let mut state: HashMap<&str, (bool, Option<&str>)> = /* initialize state */;

    queue.push_back(start);
    while let Some(valve) = queue.pop_front() {
        if valve.eq(end) {
            // Path found, calculate cost
            // ...
            return Some(path_cost);
        }

        // Process neighbors
        // ...
    }

    None // No path found
}
}

Backtracking Algorithm for Part 1

The main algorithm for Part 1 (single player) uses backtracking to explore different valve opening sequences:

#![allow(unused)]
fn main() {
fn combinations_elf(&mut self, time_left: usize, start: &'a str, valves: &[&'a str]) {
    // Base case: no more valves to visit or no more time
    if valves.is_empty() || time_left == 0 {
        if self.max < self.pressure {
            self.max = self.pressure;
            self.solution = self.path.clone();
            // Update best solution
        }
        return;
    }

    // Try each remaining valve
    for (i, &valve) in valves.iter().enumerate() {
        // Calculate cost to move to valve and open it
        let cost = self.net.travel_distance(start, valve).unwrap() + 1;

        // Skip if not enough time
        if cost > time_left {
            continue;
        }

        // Calculate pressure released
        let new_time_left = time_left - cost;
        let pressure_released = self.net.flow[&valve].pressure * new_time_left;

        // Add to current path
        self.path.push(valve);
        self.pressure += pressure_released;

        // Recursive call with remaining valves
        let remaining_valves = valves.iter()
            .enumerate()
            .filter_map(|(j, &v)| if j != i { Some(v) } else { None })
            .collect::<Vec<&str>>();

        self.combinations_elf(new_time_left, valve, &remaining_valves);

        // Backtrack
        self.path.pop();
        self.pressure -= pressure_released;
    }
}
}

Backtracking Algorithm for Part 2

For Part 2 (with an elephant), the algorithm is extended to handle two actors moving simultaneously:

#![allow(unused)]
fn main() {
fn combinations_elf_elephant(&mut self, time_left: &[usize], start: &[&'a str], valves: &[&'a str]) {
    // Base case: no more valves to visit
    if valves.is_empty() {
        if self.max < self.pressure {
            self.max = self.pressure;
            self.solution = self.path.clone();
            // Update best solution
        }
        return;
    }

    // Add current positions to path
    self.path.extend(start);

    // Try all combinations of valves for elf and elephant
    for elf in 0..valves.len() {
        for elephant in 0..valves.len() {
            // Skip if both try to visit the same valve
            if elf == elephant {
                continue;
            }

            let elf_target = valves[elf];
            let elephant_target = valves[elephant];

            // Calculate costs
            let elf_cost = self.net.travel_distance(start[0], elf_target).unwrap();
            let elephant_cost = self.net.travel_distance(start[1], elephant_target).unwrap();

            // Skip if not enough time
            if elf_cost > time_left[0] || elephant_cost > time_left[1] {
                continue;
            }

            // Calculate new time and pressure
            let elf_time = time_left[0] - elf_cost;
            let elephant_time = time_left[1] - elephant_cost;

            let pressure =
                self.net.flow[&elf_target].pressure * elf_time +
                self.net.flow[&elephant_target].pressure * elephant_time;

            // Add pressure
            self.pressure += pressure;

            // Recursive call with remaining valves
            let remaining_valves = valves.iter()
                .enumerate()
                .filter_map(|(i, &v)| if i != elf && i != elephant { Some(v) } else { None })
                .collect::<Vec<&str>>();

            self.combinations_elf_elephant(
                &[elf_time, elephant_time],
                &[elf_target, elephant_target],
                &remaining_valves
            );

            // Backtrack
            self.pressure -= pressure;
        }
    }

    // Remove current positions from path
    for _ in 0..start.len() {
        self.path.pop();
    }
}
}

This approach explores all possible combinations of valve assignments between the player and elephant.
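
The nested loops effectively enumerate every ordered (elf, elephant) pair of distinct target valves at each step of the recursion. A tiny standalone illustration of that enumeration:

#![allow(unused)]
fn main() {
    let valves = ["BB", "CC", "DD"];
    let mut pairs = vec![];
    for elf in 0..valves.len() {
        for elephant in 0..valves.len() {
            if elf == elephant { continue; }      // never send both to the same valve
            pairs.push((valves[elf], valves[elephant]));
        }
    }
    // 3 remaining valves -> 3 * 2 = 6 ordered (elf, elephant) assignments to branch on
    assert_eq!(pairs.len(), 6);
}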

Optimizations

Several optimizations make the solution feasible:

  1. Filtering Zero-Flow Valves: Only valves with non-zero flow rates are considered for opening
  2. Distance Caching: Distances between valves are cached to avoid redundant calculations
  3. Early Pruning: Paths that can't possibly beat the current best solution are pruned early
  4. Time Checking: Valves that can't be reached in the remaining time are skipped

These optimizations significantly reduce the search space, making an otherwise intractable problem solvable in a reasonable time.

Algorithm Analysis

Time Complexity

The time complexity is primarily determined by the backtracking algorithm:

  • Part 1: O(N!) where N is the number of non-zero flow valves, due to exploring all permutations
  • Part 2: O(N! × N!) in the worst case, due to exploring all combinations of assignments between the player and elephant

However, the pruning optimizations significantly reduce the actual runtime. For a sense of scale, a typical input with around 15 non-zero-flow valves would otherwise require exploring on the order of 15! ≈ 1.3 × 10^12 orderings.

Space Complexity

  • Graph Representation: O(V + E) where V is the number of valves and E is the number of tunnels
  • Distance Cache: O(V²) for storing distances between all pairs of valves
  • Backtracking State: O(V) for storing the current path and solution

Alternative Approaches

Dynamic Programming

A dynamic programming approach could potentially solve this problem by using a state representation that includes the current position, time remaining, and valves opened:

#![allow(unused)]
fn main() {
type State = (String, usize, u64);  // (current valve, minutes remaining, bitmask of opened valves)

fn max_pressure(state: State, memo: &mut HashMap<State, usize>) -> usize {
    // Base case
    if state.1 == 0 {
        return 0;
    }

    // Check memo
    if let Some(&result) = memo.get(&state) {
        return result;
    }

    // Calculate maximum pressure
    let mut best = 0;

    // Try opening the current valve
    // Try moving to each adjacent valve

    // Store result
    memo.insert(state, best);
    return best;
}
}

This approach would have a more predictable runtime but requires a careful state representation to avoid memory issues: with around 15 non-zero valves, a bitmask of opened valves takes only 2^15 = 32,768 distinct values, which is why a compact integer mask is preferable to a heap-allocated set.

Greedy Algorithm

A simpler but less optimal approach would be a greedy algorithm that always chooses the valve with the highest potential pressure release (flow rate × remaining time after reaching it):

#![allow(unused)]
fn main() {
fn greedy_solution<'a>(net: &ValveNet<'a>, start: &'a str, time: usize) -> usize {
    let mut current = start;
    let mut time_left = time;
    let mut total_pressure = 0;
    let mut opened: HashSet<&str> = HashSet::new();

    while time_left > 0 {
        // Find the unopened valve with the highest potential release,
        // i.e. flow rate x minutes left once we have walked there and opened it
        // (travel_distance already folds in the minute spent opening the valve)
        let best_valve = net.flow.keys()
            .filter(|&&v| !opened.contains(v) && net.flow[v].pressure > 0)
            .max_by_key(|&&v| {
                let cost = net.travel_distance(current, v).unwrap();
                if cost >= time_left { 0 } else { net.flow[v].pressure * (time_left - cost) }
            });

        match best_valve {
            Some(&valve) => {
                // Move to the valve, open it and bank its total contribution
                let cost = net.travel_distance(current, valve).unwrap();
                if cost >= time_left { break; }
                time_left -= cost;
                total_pressure += net.flow[valve].pressure * time_left;
                opened.insert(valve);
                current = valve;
            }
            // No more valves worth opening
            None => break,
        }
    }

    total_pressure
}
}

This would run much faster but would likely produce suboptimal results.

Conclusion

This solution demonstrates an effective approach to a complex optimization problem. By combining graph algorithms, caching, and backtracking with pruning, it finds the optimal valve opening sequence in a reasonable time. The extension to Part 2 shows how the algorithm can be adapted to handle multiple actors working simultaneously.

Day 16: Code

Below is an explanation of the code for Day 16's solution, which finds the optimal valve opening sequence to maximize pressure release.

Code Structure

The solution for Day 16 is quite complex and uses several key components:

  1. ValveNet: Represents the network of valves and tunnels
  2. Valve: Represents a single valve with its flow rate
  3. ValveBacktrack: Implements the backtracking algorithm to find optimal paths
  4. Cache: Provides efficient caching of distances between valves

Key Components

Valve and ValveNet Structures

#[derive(Copy, Clone)]
struct Valve {
    pressure: usize,
    open: bool
}

struct ValveNet<'a> {
    graph: HashMap<&'a str,Vec<&'a str>>,
    flow: HashMap<&'a str, Valve>,
    cache: Cache<(&'a str, &'a str)>
}

The Valve struct represents a single valve with its flow rate and status. The ValveNet struct represents the entire network, using hashmaps to store the graph structure and valve information, along with a cache for distances.
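
The ValveNet::parse function called from main further below is not reproduced in this excerpt. As a rough, hypothetical sketch of what it has to pull out of each scan line (the valve name, its flow rate, and the list of tunnels), one possible approach is:

fn main() {
    // Hypothetical helper, not the author's actual parser
    fn parse_line(line: &str) -> (&str, usize, Vec<&str>) {
        let parts = line.split(&[' ', '=', ';', ','])
            .filter(|s| !s.is_empty())
            .collect::<Vec<_>>();
        // parts: ["Valve","AA","has","flow","rate","0","tunnels","lead","to","valves","DD","II","BB"]
        (parts[1], parts[5].parse().unwrap(), parts[10..].to_vec())
    }

    let (name, rate, tunnels) =
        parse_line("Valve AA has flow rate=0; tunnels lead to valves DD, II, BB");
    assert_eq!((name, rate), ("AA", 0));
    assert_eq!(tunnels, vec!["DD", "II", "BB"]);
}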

Valve Network Methods

The ValveNet implementation includes several key methods:

impl<'a> ValveNet<'a> {
    fn backtrack(&'a self) -> ValveBacktrack {
        ValveBacktrack {
            net: self,
            path: Vec::with_capacity(self.flow.len()),
            solution: Vec::with_capacity(self.flow.len()),
            pressure: 0,
            max: 0,
            time: Cell::new(std::time::SystemTime::now())
        }
    }
    fn build_cache(&self, valves: &[&'a str]) {
        for &a in valves {
            for &b in valves {
                if a != b {
                    self.cache.push(
                        (a, b),
                        self.travel_distance(a, b).unwrap()
                    );
                }
            }
        }

    }
    fn nonzero_valves(&self) -> Vec<&str> {
        self.flow.iter()
            .filter(|(_, v)| v.pressure > 0 )
            .fold( vec![],|mut out, (name, _)| {
                out.push(name);
                out
            })
    }

These methods set up the backtracking algorithm, build a cache of distances between valves, and identify the valves with non-zero flow rates.

Backtracking Implementation

The core of the solution is the backtracking algorithm implemented in ValveBacktrack. For Part 2 (with an elephant), the implementation explores combinations of valve assignments:

    fn combinations_elf_elephant(&mut self, time_left: &[usize], start: &[&'a str], valves: &[&'a str]) {

        // have we run out of valve destinations ?
        if valves.is_empty() {
            // we have a candidate solution; valve combination within 30"
            if self.max < self.pressure {
                self.max = self.pressure;
                self.solution = self.path.clone();
                self.solution.extend(start);

                let time = self.time.replace(std::time::SystemTime::now());
                print!("Found (EoV): {:?},{:?}", self.pressure, &self.path);
                println!(" - {:.2?},", std::time::SystemTime::now().duration_since(time).unwrap());
            }
            // END OF RECURSION HERE
            return;
        }

        // Record the pair of valves being entered
        self.path.extend(start);

        // Run combinations of valves
        // valves visited by Elf
        (0..valves.len())
            .for_each( |elf| {
                // valves visited by Elephant
                (0..valves.len())
                    .for_each(|elephant| {
                        // Are they both on the same valve ?
                        if elf == elephant {return;}

                        // pick the target valves to walk towards
                        let (elf_target,eleph_target) = ( valves[elf], valves[elephant] );

                        let (elf_cost, eleph_cost) = (
                            self.net.travel_distance(start[0], elf_target).unwrap(),
                            self.net.travel_distance(start[1], eleph_target).unwrap()
                        );

                        // do we have time to move to target valves ?
                        if elf_cost <= time_left[0] && eleph_cost <= time_left[1] {

                            let (elf_time, eleph_time) = ( time_left[0] - elf_cost, time_left[1] - eleph_cost );

                            // calculate the total pressure resulting from this move
                            let pressure=
                                self.net.flow[&elf_target].pressure * elf_time
                                    + self.net.flow[&eleph_target].pressure * eleph_time;

                            // Store the total pressure released
                            self.pressure += pressure;

                            // remove the elf & elephant targets from the valves to visit
                            let valves_remain= valves.iter()
                                .enumerate()
                                .filter_map(|(i,&v)| if i != elf && i != elephant {Some(v)} else { None } )
                                .collect::<Vec<&str>>();

                            // println!("\tElf:{:?}, Eleph:{:?} - {:?},[{:?},{:?}]",
                            //          (start[0], elf_target, elf_cost, time_left[0]),
                            //          (start[1], eleph_target, eleph_cost, time_left[1]),
                            //          (self.max,self.pressure+self.path_pressure(elf_time, &valves_remain)), (elf_target, eleph_target), &valves_remain
                            // );
                            self.combinations_elf_elephant(
                                &[elf_time, eleph_time],
                                &[elf_target, eleph_target],
                                &valves_remain
                            );
                            // we've finished with this combination hence remove from total pressure
                            self.pressure -= pressure;
                        } else {
                            // We've run out of time so we've finished and store the total pressure for this combination
                            if self.pressure > self.max {
                                self.max = self.pressure;
                                self.solution = self.path.clone();

                                let time = self.time.replace(std::time::SystemTime::now());
                                print!("Found (OoT): {:?},{:?}", self.pressure, self.path);
                                println!(" - {:.2?},", std::time::SystemTime::now().duration_since(time).unwrap());
                            }
                        }
                    });
            });
        // Leaving the pair of valves we entered; finished testing combinations
        self.path.pop();
        self.path.pop();
    }

This method recursively explores different combinations of valve assignments between the player and elephant, calculating the total pressure released for each combination.

Distance Calculation

The solution calculates distances between valves using breadth-first search and caches the results for efficiency:

    fn travel_distance(&self, start:&'a str, end:&'a str) -> Option<usize> {

        if let Some(cost) = self.cache.pull((start,end)) {
            return Some(cost)
        }

        let mut queue = VecDeque::new();
        let mut state: HashMap<&str,(bool,Option<&str>)> =
            self.flow.iter()
                .map(|(&key,_)| (key, (false, None)))
                .collect::<HashMap<_,_>>();
        let mut path_cost = 0;

        queue.push_back(start);
        while let Some(valve) = queue.pop_front() {

            if valve.eq(end) {
                let mut cur = valve;
                while let Some(par) = state[&cur].1 {
                    path_cost += 1;
                    cur = par;
                }
                path_cost += 1;
                self.cache.push((start, end), path_cost);
                return Some(path_cost);
            }
            state.get_mut(valve).unwrap().0 = true;
            for &v in &self.graph[valve] {
                if !state[v].0 {
                    state.get_mut(v).unwrap().1 = Some(valve);
                    queue.push_back(v)
                }
            }
        }
        None
    }

This function performs a breadth-first search to find the shortest path between valves, then caches the result to avoid redundant calculations. Note that the final path_cost += 1 folds the extra minute needed to open the destination valve into the returned cost, which is why combinations_elf_elephant subtracts the cached value directly from the remaining time.

Main Function

The main function sets up and runs the solution:

fn main() {

    // Found 2059,["AA", "II", "JI", "VC", "TE", "XF", "WT", "DM", "ZK", "KI", "VF", "DU", "BD", "XS", "IY"]
    let input = std::fs::read_to_string("src/bin/day16_input.txt").expect("ops!");
    let net = ValveNet::parse(input.as_str());

    let start = "AA";
    let mut valves = net.nonzero_valves();
    println!("Valves: {:?}",valves);

    valves.push(start);
    net.build_cache(&valves);
    valves.pop();

    let time = std::time::SystemTime::now();

    // create all valve visit order combinations
    let mut btrack = net.backtrack();
    btrack.combinations_elf_elephant(&[TIME-4,TIME-4], &[start,start], &valves);
    println!("Lapse time: {:?}",std::time::SystemTime::now().duration_since(time));
    println!("Max flow {:?}\nSolution: {:?}\n", btrack.max, (&btrack.solution,btrack.path));
}

The main function:

  1. Parses the input to create the valve network
  2. Identifies valves with non-zero flow rates
  3. Builds a cache of distances between valves
  4. Runs the backtracking algorithm for Part 2 (with an elephant), passing each actor a budget of TIME - 4 minutes; TIME is presumably the 30-minute constant defined elsewhere in the source file, so both start with 26 minutes
  5. Prints the maximum pressure that can be released and the optimal path

Implementation Notes

  • Caching Strategy: The solution uses extensive caching to avoid redundant calculations
  • Pruning: The algorithm prunes paths that can't possibly lead to better solutions
  • Two-Actor Coordination: The solution handles coordination between two actors (player and elephant) to avoid conflicting actions
  • Backtracking Approach: The core algorithm uses a recursive backtracking approach to explore the solution space

The solution efficiently handles the complex optimization problem by focusing on the most relevant valves and using appropriate data structures and algorithms.