
Reflections 2023




Day 1

Top / Prompt / Code / Standalone

This year's day 1 was definitely a little spicier than most! But Haskell's stream processing works well for us, again.

First, let's write a nice utility function to make a number from the first and last seen digits:

firstAndLast :: [Int] -> Int
firstAndLast xs = head xs * 10 + last xs

And now we have to figure out how to extract the digits from each part. For part 1, we need only to extract all of the isDigit characters:

extract1 :: String -> [Int]
extract1 = map digitToInt . filter isDigit

For part 2, it's a little bit trickier: we can look at each position and see whether a number word (or digit) starts there:

extract2 :: String -> [Int]
extract2 = mapMaybe isNumberString . tails
  where
    isNumberString :: String -> Maybe Int
    isNumberString str = listToMaybe
      [ d
      | (numStr, d) <-
          [ ("one",1), ("two",2), ("three",3)
          , ("four",4), ("five",5), ("six",6)
          , ("seven",7), ("eight",8), ("nine",9)
          ]
          ++ [ (show n, n) | n <- [1..9] ]  -- plain digit characters still count in part 2
      , numStr `isPrefixOf` str
      ]

Note that tails is a function that takes "abc" and returns ["abc","bc","c",""], allowing us to look at each position in order. (The trailing empty string never matches anything, so it's harmless.)
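
To see the difference between the two extractors, here is a quick hand-worked example in GHCi (the input string is made up for illustration):

ghci> extract1 "eightwo3"
[3]
ghci> extract2 "eightwo3"
[8,2,3]
ghci> (firstAndLast (extract1 "eightwo3"), firstAndLast (extract2 "eightwo3"))
(33,83)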

And that gives us the solutions:

day01 :: (String -> [Int]) -> String -> Int
day01 extractor = sum . map (firstAndLast . extractor) . lines

day01a = day01 extract1
day01b = day01 extract2
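
As a quick sanity check, a minimal driver might look something like this (the input file path here is hypothetical):

main :: IO ()
main = do
    input <- readFile "data/day01.txt"  -- made-up input path
    print $ day01a input
    print $ day01b input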

Day 1 Benchmarks

>> Day 01a
benchmarking...
time                 440.4 μs   (436.0 μs .. 447.6 μs)
                     0.998 R²   (0.995 R² .. 1.000 R²)
mean                 440.5 μs   (439.0 μs .. 445.8 μs)
std dev              10.99 μs   (4.110 μs .. 20.09 μs)
variance introduced by outliers: 16% (moderately inflated)

* parsing and formatting times excluded

>> Day 01b
benchmarking...
time                 3.554 ms   (3.546 ms .. 3.563 ms)
                     1.000 R²   (0.999 R² .. 1.000 R²)
mean                 3.524 ms   (3.502 ms .. 3.543 ms)
std dev              61.94 μs   (36.39 μs .. 79.25 μs)

* parsing and formatting times excluded

Day 2

Top / Prompt / Code / Standalone

This one was definitely a bit too heavy on the parsing side heh. But maybe the more interesting Haskell insight would be that we can use a nice data type -- V3 from the linear library (data V3 a = V3 a a a) to make the actual solving logic very simple. I've always held that V2 and V3 are pretty much some of the most useful non-standard data types for advent of code, and they were pretty helpful today. They have the most convenient Num, Functor, Applicative, Foldable, and Traversable instances!
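
For a quick taste of those instances in GHCi (pointwise Num, Applicative pure, and Foldable sum):

ghci> V3 1 2 3 + V3 10 20 30
V3 11 22 33
ghci> pure 5 :: V3 Int
V3 5 5 5
ghci> sum (V3 1 2 3)
6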

If we encode each "set" as V3 <red> <green> <blue>, then the filters and checksums become very simple.

For part 1, we need to see if all V3 Ints in a line are legal. We can implement that as all isLegal:

isLegal :: V3 Int -> Bool
isLegal colorVec = and do
  allowed <- V3 12 13 14
  amount <- colorVec
  pure (amount <= allowed)

It reads a little silly, but remember that and checks that every item is True, and here the do notation (V3's Monad instance) compares allowed and amount point-wise: first red against red, then green against green, then blue against blue.
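
For example, a draw of 4 red, 2 green, 6 blue fits under the V3 12 13 14 limits, but one with 20 red does not:

ghci> isLegal (V3 4 2 6)
True
ghci> isLegal (V3 20 2 6)
False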

Part 2 is a little simpler: we just need to aggregate the per-color maximum and then take the product:

calcPower :: [V3 Int] -> Int
calcPower = product . foldr (liftA2 max) 0

Here liftA2 max (V3 x y z) (V3 a b c) gives V3 (max x a) (max y b) (max z c), and the literal 0 for a V3 is V3 0 0 0 (via its Num instance). And product works just how you'd expect.
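
A quick hand-worked example: the pointwise maximum of V3 4 2 6 and V3 1 3 3 is V3 4 3 6, whose product is 72.

ghci> calcPower [V3 4 2 6, V3 1 3 3]
72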

Going back to parsing, one neat way we can leverage V3 is with its Functor instance (the do block below is just fmap in disguise):

-- [("red", 1), ("blue", 2), ("green", 6)] => V3 1 6 2
pairUp :: [(String, Int)] -> V3 Int
pairUp pairs = do
    color <- V3 "red" "green" "blue"
    pure $ fromMaybe 0 (lookup color pairs)

This performs an action per color and fills in each slot with the result of the lookup, defaulting to 0 if the lookup fails.
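
For completeness, here is one way the rest of a line's parsing could be wired up on top of pairUp. This is only a sketch: splitOn comes from the split package, and parseSet/parseLine are names I'm making up here (the "Game n" prefix that part 1's sum of IDs needs is simply thrown away).

import Data.List.Split (splitOn)  -- from the "split" package
import Linear.V3 (V3(..))

-- parse one draw like "1 red, 2 blue, 6 green" into V3 <red> <green> <blue>
parseSet :: String -> V3 Int
parseSet = pairUp . map toPair . splitOn ", "
  where
    toPair entry = case words entry of
      [n, color] -> (color, read n)
      _          -> error "malformed color entry"

-- parse a full line like "Game 3: 1 red, 2 blue; 3 green" into its draws
parseLine :: String -> [V3 Int]
parseLine line = case splitOn ": " line of
  [_, draws] -> map parseSet (splitOn "; " draws)
  _          -> error "malformed line"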

Day 2 Benchmarks

>> Day 02a
benchmarking...
time                 9.858 μs   (9.838 μs .. 9.874 μs)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 9.855 μs   (9.845 μs .. 9.872 μs)
std dev              43.39 ns   (31.10 ns .. 57.98 ns)

* parsing and formatting times excluded

>> Day 02b
benchmarking...
time                 862.9 ns   (856.2 ns .. 869.2 ns)
                     0.999 R²   (0.999 R² .. 1.000 R²)
mean                 868.6 ns   (863.3 ns .. 875.0 ns)
std dev              21.32 ns   (18.16 ns .. 25.09 ns)
variance introduced by outliers: 32% (moderately inflated)

* parsing and formatting times excluded

Day 3

Top / Prompt / Code

Day 3 Benchmarks

>> Day 03a
benchmarking...
time                 9.193 ms   (9.102 ms .. 9.302 ms)
                     0.999 R²   (0.999 R² .. 1.000 R²)
mean                 9.173 ms   (9.128 ms .. 9.255 ms)
std dev              171.1 μs   (115.3 μs .. 246.9 μs)

* parsing and formatting times excluded

>> Day 03b
benchmarking...
time                 6.127 ms   (6.109 ms .. 6.145 ms)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 6.112 ms   (6.102 ms .. 6.121 ms)
std dev              27.81 μs   (23.75 μs .. 35.36 μs)

* parsing and formatting times excluded

Day 4

Top / Prompt / Code

Day 4 Benchmarks

>> Day 04a
benchmarking...
time                 141.5 μs   (141.1 μs .. 142.2 μs)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 141.9 μs   (141.4 μs .. 142.5 μs)
std dev              1.790 μs   (1.228 μs .. 2.379 μs)

* parsing and formatting times excluded

>> Day 04b
benchmarking...
time                 269.6 μs   (269.3 μs .. 270.0 μs)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 269.6 μs   (269.3 μs .. 270.0 μs)
std dev              1.147 μs   (920.8 ns .. 1.494 μs)

* parsing and formatting times excluded

Day 5

Top / Prompt / Code / Standalone

Another year and another puzzle where I feel like using Haskell's data-interval is cheating :) It lets us work with ranged intervals and gives us types like IntervalSet (a set of intervals) and IntervalMap (mapping intervals to values), but with the correct implementations of map intersection, set intersection, etc. that are aware of how intervals work. We can generate a single Interval:

import Data.Interval (Interval)
import qualified Data.Interval as IV

fromRange :: Int -> Int -> Interval Int
fromRange x len = IV.Finite x IV.<=..< IV.Finite (x + len)
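
So, for example, fromRange 98 2 is the half-open interval [98, 100), which contains exactly 98 and 99:

ghci> IV.member 98 (fromRange 98 2)
True
ghci> IV.member 99 (fromRange 98 2)
True
ghci> IV.member 100 (fromRange 98 2)
False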

The main type we will be using here is IntervalMap. We will use it to map intervals to deltas: if a value falls inside an interval, it gets shifted by that interval's delta.

import Data.IntervalMap.Strict (IntervalMap)
import qualified Data.IntervalMap.Strict as IVM

-- | Takes in the (dest, src, len) triples that the problem gives for each map chunk
buildMap :: [(Int, Int, Int)] -> IntervalMap Int Int
buildMap = IVM.fromList
         . map (\(dest, src, len) -> (fromRange src len, dest - src))

Now we can run it on a single point:

convertSingle :: Int -> IntervalMap Int Int -> Int
convertSingle x mp = case IVM.lookup x mp of
  Nothing    -> x           -- the value is not in any known interval
  Just delta -> x + delta   -- that interval has the given delta
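
As a quick check with a small made-up map chunk (destination 60, source 50, length 10, so everything in [50, 60) shifts up by 10):

ghci> let mp = buildMap [(60, 50, 10)]
ghci> convertSingle 55 mp
65
ghci> convertSingle 42 mp
42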

So part 1 is simply finding the minimum after running each starting seed through every map in order:

day05a :: [IntervalMap Int Int] -> [Int] -> Int
day05a imaps = minimum . map (\s0 -> foldl' convertSingle s0 imaps)

Part 2 is where it gets interesting. We actually want to keep track of an IntervalSet -- the set of all intervals where we currently have seeds. We need to write a function convertMany that takes an IntervalSet and runs that interval set through an IntervalMap appropriately.

To do this, we find all of the "misses" (the intervals in the IntervalSet that aren't mapped) and the "hits" (the intervals in the IntervalSet that do exist in the map, shifted by the delta) and then recombine them:

import Data.IntervalSet (IntervalSet)
import qualified Data.IntervalSet as IVS

convertMany :: IntervalMap Int Int -> IntervalSet Int -> IntervalSet Int
convertMany mp xs = misses <> hits
  where
    -- dummy map corresponding to putting `()` at every interval in our
    -- `IntervalSet`, needed because interval map functions require maps
    tempMap :: IntervalMap Int ()
    tempMap = IVM.fromList . map (,()) . IVS.toList $ xs

    misses = IVM.keysSet $ tempMap `IVM.difference` mp
    hits =
      IVS.fromList
        . map (\(iv, delta) -> IV.mapMonotonic (+ delta) iv)
        . IVM.toList
        $ IVM.intersectionWith const mp tempMap
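
Continuing the tiny example from above (shift everything in [50, 60) up by 10), a seed block covering 45..54 splits into a missed part and a shifted part:

convertMany (buildMap [(60, 50, 10)]) (IVS.fromList [fromRange 45 10])
-- covers [45, 50) (missed the map, untouched) and [60, 65) (hit, shifted by +10)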

And run it all together with:

day05b :: IntervalSet Int -> [IntervalMap Int Int] -> IntervalSet Int
day05b = foldl' (flip convertMany)
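
To actually pull out part 2's answer, we still have to build the initial IntervalSet from the (start, length) seed pairs and take the smallest lower bound among whatever intervals survive at the end. A rough sketch, with helper names of my own invention:

-- build the starting seed set from part 2's (start, length) pairs
seedSet :: [(Int, Int)] -> IntervalSet Int
seedSet = IVS.fromList . map (uncurry fromRange)

-- the answer is the smallest lower bound among the final intervals
lowestLocation :: IntervalSet Int -> IV.Extended Int
lowestLocation = minimum . map IV.lowerBound . IVS.toList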

Day 5 Benchmarks

>> Day 05a
benchmarking...
time                 16.26 μs   (16.19 μs .. 16.31 μs)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 16.14 μs   (16.09 μs .. 16.19 μs)
std dev              169.7 ns   (136.1 ns .. 224.7 ns)

* parsing and formatting times excluded

>> Day 05b
benchmarking...
time                 426.0 μs   (425.0 μs .. 427.0 μs)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 425.7 μs   (425.2 μs .. 426.4 μs)
std dev              2.177 μs   (1.528 μs .. 3.040 μs)

* parsing and formatting times excluded

Day 6

Top / Prompt / Code

Day 6 Benchmarks

>> Day 06a
benchmarking...
time                 328.2 ns   (325.9 ns .. 330.9 ns)
                     1.000 R²   (0.999 R² .. 1.000 R²)
mean                 328.0 ns   (326.9 ns .. 329.8 ns)
std dev              4.944 ns   (2.971 ns .. 7.751 ns)
variance introduced by outliers: 16% (moderately inflated)

* parsing and formatting times excluded

>> Day 06b
benchmarking...
time                 350.2 ns   (348.1 ns .. 353.0 ns)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 353.2 ns   (351.8 ns .. 354.8 ns)
std dev              6.038 ns   (4.651 ns .. 7.734 ns)
variance introduced by outliers: 20% (moderately inflated)

* parsing and formatting times excluded

Day 7

Top / Prompt / Code

Day 7 Benchmarks

>> Day 07a
benchmarking...
time                 3.152 ms   (3.134 ms .. 3.181 ms)
                     1.000 R²   (0.999 R² .. 1.000 R²)
mean                 3.118 ms   (3.104 ms .. 3.133 ms)
std dev              50.74 μs   (40.23 μs .. 64.20 μs)

>> Day 07b
benchmarking...
time                 2.730 ms   (2.717 ms .. 2.745 ms)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 2.702 ms   (2.695 ms .. 2.711 ms)
std dev              26.83 μs   (22.87 μs .. 32.31 μs)

Day 8

Top / Prompt / Code

Day 8 Benchmarks

>> Day 08a
benchmarking...
time                 9.144 ms   (9.029 ms .. 9.321 ms)
                     0.998 R²   (0.995 R² .. 0.999 R²)
mean                 9.016 ms   (8.954 ms .. 9.094 ms)
std dev              197.9 μs   (151.1 μs .. 283.8 μs)

* parsing and formatting times excluded

>> Day 08b
benchmarking...
time                 60.69 ms   (59.21 ms .. 62.27 ms)
                     0.999 R²   (0.996 R² .. 1.000 R²)
mean                 59.53 ms   (58.55 ms .. 60.26 ms)
std dev              1.546 ms   (860.9 μs .. 2.546 ms)

* parsing and formatting times excluded

Day 9

Top / Prompt / Code

Day 9 Benchmarks

>> Day 09a
benchmarking...
time                 550.3 μs   (546.7 μs .. 557.5 μs)
                     0.999 R²   (0.997 R² .. 1.000 R²)
mean                 548.2 μs   (546.3 μs .. 554.3 μs)
std dev              10.82 μs   (2.959 μs .. 22.26 μs)
variance introduced by outliers: 10% (moderately inflated)

* parsing and formatting times excluded

>> Day 09b
benchmarking...
time                 537.3 μs   (532.7 μs .. 548.1 μs)
                     0.995 R²   (0.985 R² .. 1.000 R²)
mean                 537.5 μs   (531.0 μs .. 550.1 μs)
std dev              31.13 μs   (1.678 μs .. 56.66 μs)
variance introduced by outliers: 51% (severely inflated)

* parsing and formatting times excluded

Day 10

Top / Prompt / Code

Day 10 Benchmarks

>> Day 10a
benchmarking...
time                 3.928 ms   (3.892 ms .. 3.958 ms)
                     0.999 R²   (0.998 R² .. 1.000 R²)
mean                 3.885 ms   (3.852 ms .. 3.913 ms)
std dev              103.0 μs   (78.97 μs .. 137.4 μs)
variance introduced by outliers: 11% (moderately inflated)

* parsing and formatting times excluded

>> Day 10b
benchmarking...
time                 17.60 ms   (17.50 ms .. 17.69 ms)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 17.68 ms   (17.60 ms .. 17.83 ms)
std dev              254.4 μs   (84.96 μs .. 386.4 μs)

* parsing and formatting times excluded

Day 11

Top / Prompt / Code

Day 11 Benchmarks

>> Day 11a
benchmarking...
time                 763.0 μs   (762.2 μs .. 764.0 μs)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 761.5 μs   (760.8 μs .. 762.1 μs)
std dev              2.216 μs   (1.607 μs .. 2.883 μs)

* parsing and formatting times excluded

>> Day 11b
benchmarking...
time                 763.5 μs   (763.0 μs .. 764.4 μs)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 763.3 μs   (762.1 μs .. 766.0 μs)
std dev              5.347 μs   (1.797 μs .. 9.246 μs)

* parsing and formatting times excluded

Day 13

Top / Prompt / Code

Day 13 Benchmarks

>> Day 13a
benchmarking...
time                 561.6 μs   (559.9 μs .. 562.7 μs)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 559.7 μs   (559.0 μs .. 560.8 μs)
std dev              2.904 μs   (2.242 μs .. 3.535 μs)

* parsing and formatting times excluded

>> Day 13b
benchmarking...
time                 53.87 ms   (53.69 ms .. 54.14 ms)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 53.47 ms   (53.30 ms .. 53.62 ms)
std dev              311.0 μs   (245.8 μs .. 383.7 μs)

* parsing and formatting times excluded

Day 14

Top / Prompt / Code

Day 14 Benchmarks

>> Day 14a
benchmarking...
time                 624.6 μs   (622.6 μs .. 627.3 μs)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 621.4 μs   (620.0 μs .. 623.3 μs)
std dev              4.704 μs   (4.002 μs .. 5.421 μs)

* parsing and formatting times excluded

>> Day 14b
benchmarking...
time                 205.7 ms   (203.6 ms .. 206.8 ms)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 206.4 ms   (205.9 ms .. 207.1 ms)
std dev              855.3 μs   (436.7 μs .. 1.328 ms)
variance introduced by outliers: 14% (moderately inflated)

* parsing and formatting times excluded

Day 15

Top / Prompt / Code

Day 15 Benchmarks

>> Day 15a
benchmarking...
time                 61.08 μs   (61.02 μs .. 61.17 μs)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 61.55 μs   (61.39 μs .. 61.84 μs)
std dev              811.2 ns   (501.1 ns .. 1.187 μs)

* parsing and formatting times excluded

>> Day 15b
benchmarking...
time                 793.6 μs   (790.2 μs .. 797.2 μs)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 792.6 μs   (791.0 μs .. 794.6 μs)
std dev              6.258 μs   (5.704 μs .. 7.352 μs)

* parsing and formatting times excluded

Day 17

Top / Prompt / Code

Day 17 Benchmarks

>> Day 17a
benchmarking...
time                 606.0 ms   (570.4 ms .. 624.9 ms)
                     1.000 R²   (0.999 R² .. 1.000 R²)
mean                 608.5 ms   (604.2 ms .. 612.7 ms)
std dev              5.335 ms   (2.815 ms .. 6.466 ms)
variance introduced by outliers: 19% (moderately inflated)

* parsing and formatting times excluded

>> Day 17b
benchmarking...
time                 1.328 s    (1.273 s .. 1.416 s)
                     0.999 R²   (0.999 R² .. 1.000 R²)
mean                 1.290 s    (1.279 s .. 1.310 s)
std dev              19.03 ms   (1.992 ms .. 23.94 ms)
variance introduced by outliers: 19% (moderately inflated)

* parsing and formatting times excluded

Day 19

Top / Prompt / Code

Day 19 Benchmarks

>> Day 19a
benchmarking...
time                 130.1 μs   (129.5 μs .. 130.7 μs)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 129.9 μs   (129.6 μs .. 130.2 μs)
std dev              1.212 μs   (861.4 ns .. 1.741 μs)

* parsing and formatting times excluded

>> Day 19b
benchmarking...
time                 3.971 s    (3.945 s .. 4.005 s)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 3.978 s    (3.970 s .. 3.983 s)
std dev              8.439 ms   (5.056 ms .. 11.90 ms)
variance introduced by outliers: 19% (moderately inflated)

* parsing and formatting times excluded