Puzzle families

Different puzzle genres need fundamentally different algorithms, not just different rules. Penmark today is a constraint-satisfaction engine — built around Puzzle = (Substrate, Vec<Constraint>, &'static Genre) and a Game<'p> whose state is per-coord candidate sets. That covers one large family natively. Several other families share substantial plumbing (substrate, geometry, parsing, dataset infra, frontends) but need different solvers, graders, and state models entirely. A few are different enough that hosting them in penmark would be a different project.
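The core shape can be sketched in a few lines of Rust. The type bodies below are illustrative stand-ins (penmark's real Substrate, Constraint, and Genre carry far more); only the overall Puzzle/Game split mirrors the text:

```rust
// Illustrative sketch of the constraint-satisfaction core described above.
// Struct bodies are simplified stand-ins, not penmark's real definitions.

use std::collections::HashMap;

type Coord = (i32, i32);

struct Substrate;                    // static board geometry
struct Constraint;                   // one rule instance over a set of coords
struct Genre { name: &'static str }  // per-genre behaviour, 'static registry entry

struct Puzzle {
    substrate: Substrate,
    constraints: Vec<Constraint>,
    genre: &'static Genre,
}

// Solver state: per-coord candidate sets, borrowing the static puzzle.
struct Game<'p> {
    puzzle: &'p Puzzle,
    candidates: HashMap<Coord, u32>, // bitset of still-possible marks
}

impl<'p> Game<'p> {
    fn is_solved(&self) -> bool {
        self.candidates.values().all(|d| d.count_ones() == 1)
    }
}

fn main() {
    static AKARI: Genre = Genre { name: "akari" };
    let puzzle = Puzzle { substrate: Substrate, constraints: vec![], genre: &AKARI };
    let mut game = Game { puzzle: &puzzle, candidates: HashMap::new() };
    game.candidates.insert((0, 0), 0b0010); // singleton domain: decided
    println!("{} solved: {}", game.puzzle.genre.name, game.is_solved());
}
```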

This chapter maps the landscape so we can be specific about what penmark covers, what it could cover with focused work, and what’s out of scope.

How this chapter classifies

Genres are grouped by what algorithm solves them, since that’s the axis that decides reuse. Rules and aesthetics matter for play; they don’t determine whether penmark’s Propagator and FastSolver apply.

For each family, this chapter lists:

  • What — examples and the canonical solver shape.
  • Reusable from penmark today — concrete subsystems that transfer without modification.
  • Missing — subsystems that would need to be added.

The same vocabulary recurs across families. Reusable pieces are almost always the static layer: Substrate + Coord + Region + Mark + PuzzleMeta + tagged envelope + parsing dispatch + dataset infra + bench infra + the eframe paint pipeline + the Tier / GradeStatus taxonomy. What varies is the rule layer and the solver track sitting on top.

1. Constraint satisfaction

Penmark’s home family.

  • Examples: Sudoku, Akari, Slitherlink, Nurikabe, Heyawake, Star Battle, Kakuro, Masyu, Hitori, KenKen, Yajilin, Tents, Skyscrapers, Nonograms, Numberlink, Battleships, Einstein/zebra logic puzzles, cryptarithmetic, Futoshiki.
  • Solver shape: backtracking DFS + arc-consistency propagation, or SAT / CP-SAT for harder instances. Domains are small (2–9 marks per coord); the propagator narrows them via per-rule reasoning until every coord is singleton.
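That loop can be sketched generically. Domains are u32 bitsets as in the text; the stand-in rule here is a single global all-different over every coord, which only illustrates the shape, not any real penmark Rule:

```rust
// Sketch of the solver shape above: propagate to a fixpoint, then DFS-branch
// on the smallest open domain. Stand-in rule: all cells pairwise different.

fn propagate(domains: &mut Vec<u32>) -> bool {
    // Repeat until no domain shrinks: remove each singleton's mark from peers.
    loop {
        let mut changed = false;
        for i in 0..domains.len() {
            if domains[i].count_ones() == 1 {
                let bit = domains[i];
                for j in 0..domains.len() {
                    if j != i && domains[j] & bit != 0 {
                        domains[j] &= !bit;
                        changed = true;
                    }
                }
            }
            if domains[i] == 0 { return false; } // contradiction
        }
        if !changed { return true; }
    }
}

fn solve(mut domains: Vec<u32>) -> Option<Vec<u32>> {
    if !propagate(&mut domains) { return None; }
    // Branch on the smallest non-singleton domain (fewest candidates first).
    let branch = (0..domains.len())
        .filter(|&i| domains[i].count_ones() > 1)
        .min_by_key(|&i| domains[i].count_ones());
    let Some(i) = branch else { return Some(domains) }; // all singleton: solved
    let mut d = domains[i];
    while d != 0 {
        let bit = d & d.wrapping_neg(); // lowest set bit = one candidate mark
        let mut trial = domains.clone();
        trial[i] = bit;
        if let Some(sol) = solve(trial) { return Some(sol); }
        d &= !bit;
    }
    None
}

fn main() {
    // Three cells over marks {0,1,2}, pairwise different: expect a permutation.
    let sol = solve(vec![0b111, 0b111, 0b111]).unwrap();
    println!("{:?}", sol);
}
```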

Reusable from penmark today: everything. Akari is wired end-to-end; Sudoku and Slitherlink are partway. Adding a new genre is a Genre impl plus the rule list its build_constraints emits. See the Adding a genre chapter.

Missing: more Rule variants for genres that aren’t yet expressible — for instance, Hashi’s “edge multiplicity 0/1/2” needs a small extension to DegreeIn, and Yajilin’s arrow-with-count clue needs a directional region shape. The genre expansion plan tracks which rule kinds each upcoming genre promotes from stub to real.

2. State-space planning

Sequential games. State changes with every move; a “solution” is a plan, not an assignment.

  • Examples: Rush Hour, Sokoban, Klotski, the 15-puzzle and other n-puzzles, Tower of Hanoi, Lights Out, peg solitaire, Atomix, Lunar Lockout, Bloxorz, Rubik’s cube.
  • Solver shape: BFS for shortest-plan (when state space is small), IDA* with a heuristic and pattern databases for larger ones, parent-pointer reconstruction for the move sequence. State is typically a packed bitboard or a small tuple, hashed into a visited-set.
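A minimal version of that shape, using a 2x2 sliding puzzle as the stand-in domain (not a penmark type): BFS over hashed states with parent pointers, reconstructing the shortest move sequence at the goal.

```rust
// BFS planner sketch: visited set + parent pointers + plan reconstruction.
// State is a packed [u8; 4] for a 2x2 sliding puzzle, 0 = the blank.

use std::collections::{HashMap, VecDeque};

type State = [u8; 4];

fn moves(s: State) -> Vec<State> {
    // The blank swaps with each orthogonal neighbour on the 2x2 grid.
    let blank = s.iter().position(|&c| c == 0).unwrap();
    let neighbours: &[usize] = match blank {
        0 => &[1, 2], 1 => &[0, 3], 2 => &[0, 3], _ => &[1, 2],
    };
    neighbours.iter().map(|&n| { let mut t = s; t.swap(blank, n); t }).collect()
}

fn shortest_plan(start: State, goal: State) -> Option<Vec<State>> {
    let mut parent: HashMap<State, State> = HashMap::new();
    let mut queue = VecDeque::from([start]);
    parent.insert(start, start);
    while let Some(s) = queue.pop_front() {
        if s == goal {
            // Walk parent pointers back to the start, then reverse.
            let (mut plan, mut cur) = (vec![s], s);
            while cur != start {
                cur = parent[&cur];
                plan.push(cur);
            }
            plan.reverse();
            return Some(plan);
        }
        for next in moves(s) {
            parent.entry(next).or_insert_with(|| { queue.push_back(next); s });
        }
    }
    None // goal not in the reachable component
}

fn main() {
    let plan = shortest_plan([0, 1, 2, 3], [1, 3, 2, 0]).unwrap();
    println!("moves: {}", plan.len() - 1);
}
```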

Reusable from penmark today:

  • Substrate + Material for the static board (walls, exits, special cells).
  • Coord + Region + grid geometry helpers for legality checks (“which cells does this car occupy after sliding”).
  • Tagged-JSON envelope, content-hash, PuzzleMeta.
  • The half of Genre that describes “what does the level look like”: NAME, LAYERS, MARK_RENDER, default_substrate, parse_text, to_janko_text, dataset / parsing dispatch.
  • Tier / GradeStatus for difficulty bucketing.
  • CLI scaffolding, dataset loaders, eframe grid paint pipeline.

Missing:

  • A planning sister-trait (call it PlanningRules) parallel to the CSP-rules half of Genre, with associated State, Move, legal_moves, apply, unapply, is_goal, optional heuristic, and a canonicalize for visited-set keys.
  • Generic BfsPlanner<P> and IdaStarPlanner<P> returning Plan = Vec<P::Move>.
  • A planning-shaped grader (min-plan-length → Tier).
  • A planning-shaped generator (sample layout, run BFS, accept if min-length is in target band).
  • An eframe view for stepping through / animating a plan.
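A sketch of what that trait surface might look like, with a toy impl to exercise it. The trait name and its associated items come from the list above; every signature is speculative, not an existing penmark API:

```rust
// Speculative PlanningRules sister-trait, parallel to the CSP half of Genre.

trait PlanningRules {
    type State: Clone;
    type Move: Clone;

    fn legal_moves(&self, s: &Self::State) -> Vec<Self::Move>;
    fn apply(&self, s: &mut Self::State, m: &Self::Move);
    fn unapply(&self, s: &mut Self::State, m: &Self::Move);
    fn is_goal(&self, s: &Self::State) -> bool;

    /// Admissible lower bound on remaining moves; 0 degrades IDA* to IDDFS.
    fn heuristic(&self, _s: &Self::State) -> u32 { 0 }

    /// Stable key for the visited set (e.g. a packed bitboard).
    fn canonicalize(&self, s: &Self::State) -> u64;
}

type Plan<P> = Vec<<P as PlanningRules>::Move>;

// Toy instance: a token at an integer position must reach `target` by ±1 steps.
struct TokenGame { target: i32 }

impl PlanningRules for TokenGame {
    type State = i32;
    type Move = i32; // +1 or -1

    fn legal_moves(&self, _s: &i32) -> Vec<i32> { vec![1, -1] }
    fn apply(&self, s: &mut i32, m: &i32) { *s += m; }
    fn unapply(&self, s: &mut i32, m: &i32) { *s -= m; }
    fn is_goal(&self, s: &i32) -> bool { *s == self.target }
    fn heuristic(&self, s: &i32) -> u32 { (self.target - s).unsigned_abs() }
    fn canonicalize(&self, s: &i32) -> u64 { *s as u64 }
}

fn main() {
    let g = TokenGame { target: 3 };
    let mut s = 0;
    g.apply(&mut s, &1);
    g.unapply(&mut s, &1); // the apply/unapply roundtrip discipline
    println!("roundtrip back to start: {}", s == 0);
    println!("heuristic at 0: {}", g.heuristic(&0));
}
```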

The apply/unapply discipline is conceptually the same as FastGame’s trial-1 rollback, but the trait surface differs enough that sharing one impl isn’t worth it — better to keep CSP and planning as parallel solver families.

See the Roadmap for whether this is planned.

3. Exact cover and packing

Find a subset of placements that covers every cell exactly once.

  • Examples: pentominoes, polyomino tilings, tangram, Soma cube, Eternity II, jigsaws, peg-grid placement puzzles, Sudoku itself when reformulated as exact cover.
  • Solver shape: Knuth’s Algorithm X with Dancing Links (DLX). Expressible as CSP, but DLX dominates by an order of magnitude on this shape, because the “choose one constraint to satisfy next, branch over the rows that satisfy it” loop is unusually well-matched to the data structure.
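The branching loop DLX accelerates can be shown with a set-based Algorithm X: no dancing links, so it clones instead of unlinking (correct, but this is exactly the copying DLX exists to avoid). Rows are candidate placements, columns are "cover this cell exactly once" constraints.

```rust
// Set-based Algorithm X sketch: pick the column with fewest satisfying rows
// (DLX's S-heuristic), branch over those rows, drop conflicting rows, recurse.

fn algorithm_x(rows: &[(usize, Vec<usize>)], cols: &[usize]) -> Option<Vec<usize>> {
    if cols.is_empty() { return Some(vec![]); } // everything covered
    let col = *cols.iter()
        .min_by_key(|&&c| rows.iter().filter(|(_, r)| r.contains(&c)).count())?;
    for (id, row) in rows.iter().filter(|(_, r)| r.contains(&col)) {
        // Covering `row` removes its columns and every row sharing one of them.
        let rest_cols: Vec<usize> =
            cols.iter().copied().filter(|c| !row.contains(c)).collect();
        let rest_rows: Vec<(usize, Vec<usize>)> = rows.iter()
            .filter(|(_, r)| r.iter().all(|c| !row.contains(c)))
            .cloned()
            .collect();
        if let Some(mut sol) = algorithm_x(&rest_rows, &rest_cols) {
            sol.push(*id);
            return Some(sol);
        }
    }
    None // the chosen column cannot be covered
}

fn main() {
    // Domino-tile a 2x2 board: cells 0..4, four candidate domino placements.
    let rows = vec![
        (0, vec![0, 1]), (1, vec![2, 3]), // horizontals
        (2, vec![0, 2]), (3, vec![1, 3]), // verticals
    ];
    let mut sol = algorithm_x(&rows, &[0, 1, 2, 3]).unwrap();
    sol.sort();
    println!("{:?}", sol); // an exact cover by two placements
}
```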

Reusable from penmark today:

  • Substrate, region, coord, geometry — exactly the same vocabulary for cells / edges / vertices.
  • Mark works for piece-id labelling.
  • Parsing, dataset infra, eframe paint, CLI scaffolding.
  • Generation patterns (cancel: AtomicBool, time budgets, seeds).

Missing:

  • A DLX solver. Penmark’s Propagator can technically encode exact cover via ExactCount { mark, n: 1 } per cell + per-piece ExactCount, but it’s slow compared to native DLX.
  • A piece-placement vocabulary in Region (arbitrary shape + rotation / reflection orbit).
  • A grader axis for packing puzzles — uniqueness of solution is the usual target, but “how forced is each placement” is the analogue of human technique grading.

This is a real gap. Pentominoes-shaped puzzles are popular and penmark’s current family doesn’t serve them well; a DLX track alongside the CSP track is plausible future work.

4. Word and dictionary puzzles

Constraint satisfaction with a lexical domain.

  • Examples: crosswords (American, cryptic, themeless), word ladders, Scrabble (the legal-move enumerator), Boggle, fill-in puzzles, anagram puzzles.
  • Solver shape: still CSP — slot variables, intersection constraints — but the domain is a 100k–500k-entry dictionary, not a small mark set. Propagation is bitset-AND over per-(length, position, letter) precomputed filter masks; branching is at the cell level rather than the slot level to deduplicate redundant propagation across candidate words sharing a letter at the crossing. See Orca’s writeup for the canonical modern implementation.
  • For playable crosswords (find the right fill given clue scores), the problem becomes MAP inference rather than satisfaction — loopy belief propagation over the slot graph with per-slot answer distributions from a neural clue→answer scorer (Berkeley Crossword Solver, Dr. Fill). That’s a different solver again.
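A toy version of that filter index; the wordlist and the single-u64 masks are illustrative (a real index holds heap-allocated bitsets over hundreds of thousands of words). A slot's candidate set is narrowed by ANDing in the mask for each fixed crossing letter.

```rust
// (length, position, letter) -> bitset filter index, toy scale.
// Each mask has one bit per wordlist entry (a real index would pack per length).

use std::collections::HashMap;

struct FilterIndex {
    masks: HashMap<(usize, usize, u8), u64>,
    words: Vec<&'static str>,
}

impl FilterIndex {
    fn build(words: Vec<&'static str>) -> Self {
        let mut masks: HashMap<(usize, usize, u8), u64> = HashMap::new();
        for (i, w) in words.iter().enumerate() {
            for (pos, b) in w.bytes().enumerate() {
                *masks.entry((w.len(), pos, b)).or_insert(0) |= 1 << i;
            }
        }
        FilterIndex { masks, words }
    }

    /// Candidates of `len` matching every (position, letter) already fixed:
    /// start from all words of that length, AND in one mask per fixed letter.
    fn candidates(&self, len: usize, fixed: &[(usize, u8)]) -> u64 {
        let all: u64 = self.words.iter().enumerate()
            .filter(|(_, w)| w.len() == len)
            .fold(0, |acc, (i, _)| acc | 1 << i);
        fixed.iter().fold(all, |acc, &(pos, b)| {
            acc & self.masks.get(&(len, pos, b)).copied().unwrap_or(0)
        })
    }
}

fn main() {
    let idx = FilterIndex::build(vec!["cat", "car", "cot", "dog"]);
    let m = idx.candidates(3, &[(0, b'c'), (2, b't')]); // pattern c_t
    let hits: Vec<_> = (0..4).filter(|i| m & 1 << i != 0)
        .map(|i| idx.words[i]).collect();
    println!("{:?}", hits);
}
```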

Reusable from penmark today:

  • Substrate (grid with black squares), coord, region (a slot is a row/col slice of cells), parsing, eframe paint.
  • The shape of Propagator: arc-consistency loop + per-arc cache + bitset domain reasoning. The architecture transfers; only the domain backing changes.
  • Tagged-JSON, dataset infra, CLI scaffolding, the Tier taxonomy.
  • Constraint’s shape is right (“variables crossing at this coord must agree on letter”); just a new Rule variant (LetterAgreement or similar) is needed.

Missing:

  • A large-domain domain store. FastGame hardcodes u32 per coord because mark domains are tiny; crosswords need heap-allocated bitsets sized to the wordlist (~52 KB per slot for a 411k-word dictionary).
  • The (length, position, letter) → bitset filter index. This is the data structure that makes propagation fast.
  • A variable model where slots, not coords, are the primary variables. Today penmark’s variables are coords. Crosswords need region-as-variable, with cell-level branching on top.
  • A clue→answer scoring layer for the MAP-inference flavor — that’s a separate concern (NLP), and grid-fillers like Orca skip it entirely.

A crossword genre is genuinely possible inside penmark, but the propagator-internal work is non-trivial.

5. Path and graph puzzles

Find a path or sequence of moves through a graph.

  • Examples: mazes, Hashiwokakero (Hashi — bridges between islands), river-crossing puzzles (missionaries + cannibals), the classic shortest-path puzzles. Hashi is partly CSP (constraint: bridge counts at each island) and partly graph (the bridges must form a connected planar embedding).
  • Solver shape: graph search (Dijkstra, A*) for the pure-pathfinding cases; CSP with topological constraints for Hashi and friends. Mazes are a degenerate planning family with single-cell moves.

Reusable from penmark today:

  • Substrate + coord + region: the graph is implicit in the geometry.
  • For Hashi: Constraint mostly works (bridge-count is ExactCount); Connected { mark } already exists for the global “all islands reachable” check.
  • For mazes: a pure-pathfinding solver doesn’t really need penmark’s machinery, but the parsing / rendering / dataset infra still lands cleanly.
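The global connectivity check behind a rule like Connected can be sketched as a flood fill over the marked cells; the grid representation here is illustrative, not penmark's.

```rust
// "All marked cells connected" check: flood fill from one marked cell,
// then verify every marked cell was reached. Assumes a non-empty rectangle.

fn all_connected(grid: &[Vec<bool>]) -> bool {
    let (h, w) = (grid.len(), grid[0].len());
    let Some(start) = (0..h * w).find(|&i| grid[i / w][i % w]) else {
        return true; // no marked cells: vacuously connected
    };
    let mut seen = vec![false; h * w];
    let mut stack = vec![start];
    seen[start] = true;
    while let Some(i) = stack.pop() {
        let (r, c) = (i / w, i % w);
        // wrapping_sub sends row/col 0 out of bounds, so the `< h`/`< w`
        // checks reject off-grid neighbours in one comparison.
        let neighbours = [
            (r.wrapping_sub(1), c), (r + 1, c),
            (r, c.wrapping_sub(1)), (r, c + 1),
        ];
        for (nr, nc) in neighbours {
            if nr < h && nc < w && grid[nr][nc] && !seen[nr * w + nc] {
                seen[nr * w + nc] = true;
                stack.push(nr * w + nc);
            }
        }
    }
    (0..h * w).all(|i| !grid[i / w][i % w] || seen[i])
}

fn main() {
    let connected = vec![vec![true, true], vec![false, true]];
    let split = vec![vec![true, false], vec![false, true]]; // diagonal only
    println!("{} {}", all_connected(&connected), all_connected(&split));
}
```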

Missing:

  • For Hashi specifically: an edge-multiplicity rule (0/1/2 bridges per slot) and a planarity / non-crossing constraint. The latter is awkward in pure CSP; today’s Propagator would handle it via lazy cuts at the leaf, similar to the Path rule.
  • For mazes: not really anything — they’re so simple that a 50-line BFS is the whole solver. The question is whether penmark adds value over just embedding mazes as a degenerate planning genre.

6. Hidden information and probabilistic puzzles

CSP under uncertainty.

  • Examples: Minesweeper, Battleship (solving), Mastermind, Hangman, Wordle (the strategic-guesser variant). The solver enumerates hidden states consistent with the observations made so far, then picks the maximum-information probe.
  • Solver shape: CSP for consistency checking + expected-information-gain or expected-regret reasoning over the model space on top. Often Monte Carlo if exact enumeration is infeasible.
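The enumerate-and-measure step can be sketched by brute force on a toy Minesweeper fragment; real solvers prune the enumeration with CSP propagation or sample it with Monte Carlo, as above.

```rust
// Enumerate every mine assignment over the unknown cells consistent with the
// numeric clues, then derive per-cell mine probabilities from the model count.

/// `clues`: (mine count, cells it constrains); `n`: number of unknown cells.
/// Returns per-cell mine probability, or None if no model is consistent.
fn mine_probabilities(n: usize, clues: &[(u32, Vec<usize>)]) -> Option<Vec<f64>> {
    let mut models = 0u64;
    let mut mine_count = vec![0u64; n];
    for assignment in 0u32..1 << n {
        let consistent = clues.iter().all(|(count, cells)| {
            cells.iter().filter(|&&c| assignment & 1 << c != 0).count() as u32 == *count
        });
        if consistent {
            models += 1;
            for c in 0..n {
                if assignment & 1 << c != 0 { mine_count[c] += 1; }
            }
        }
    }
    (models > 0).then(|| {
        mine_count.iter().map(|&m| m as f64 / models as f64).collect()
    })
}

fn main() {
    // Three unknown cells; "1 mine in {0,1}" and "2 mines in {1,2}".
    // Only one model survives, so every cell is a forced deduction.
    let probs = mine_probabilities(3, &[(1, vec![0, 1]), (2, vec![1, 2])]).unwrap();
    println!("{:?}", probs);
}
```

Cells at probability exactly 0.0 or 1.0 are logical deductions; everything in between is where the information-gain probe selector would take over.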

Reusable from penmark today:

  • The CSP layer for consistency checking — very directly. Each observation narrows the candidate-state set; that’s exactly what Propagator does.
  • Substrate, region, mark vocabulary, parsing, frontends.

Missing:

  • A probability layer. Game::status() is InProgress / Solved / Contradicted — a binary verdict. Probabilistic puzzles need per-coord probability distributions and an expected-value computation over the consistent-models space.
  • A “next probe” selector that scores candidate moves by information gain, not just legality.
  • A grader that distinguishes “always solvable with logic” from “requires guessing” — this is the famous Minesweeper grader question and is genuinely hard.

A Minesweeper genre that just shows the deduction is feasible today; a Minesweeper player with optimal-guess support would need real new machinery.

7. Adversarial two-player games

  • Examples: Chess, Go, Checkers, Othello, Hex, Connect 4, Nim (when played, not when computed via Sprague–Grundy), TwixT.
  • Solver shape: minimax + alpha-beta, MCTS, neural evaluation. State model is similar to planning but the objective is game-theoretic value (best move under optimal opponent), not goal-reaching.
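The minimax-with-alpha-beta core can be shown on the smallest adversarial toy: subtraction Nim where each player takes 1 or 2 from a pile and taking the last object wins. Negamax form, so the score is always from the player to move.

```rust
// Negamax with alpha-beta pruning. Scores are +1 (win) / -1 (loss) for the
// player to move; call with a window that contains both, e.g. (-2, 2).

fn negamax(pile: u32, mut alpha: i32, beta: i32) -> i32 {
    if pile == 0 { return -1; } // opponent took the last object: we lost
    for take in 1..=2u32.min(pile) {
        let score = -negamax(pile - take, -beta, -alpha);
        if score > alpha { alpha = score; }
        if alpha >= beta { break; } // beta cutoff: opponent avoids this line
    }
    alpha
}

fn main() {
    // Take-1-or-2 Nim is a loss for the mover exactly when pile % 3 == 0.
    for pile in 1..=6 {
        let v = negamax(pile, -2, 2);
        println!("pile {pile}: {}", if v > 0 { "win" } else { "loss" });
    }
}
```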

Reusable from penmark today:

  • Substrate / coord / region / parsing / eframe / dataset / CLI scaffolding. Same level-description machinery.
  • The tagged-JSON envelope.

Missing:

  • Essentially the entire algorithmic layer: search trees with two alternating players, evaluation functions, transposition tables, iterative deepening, opening books, endgame tablebases. Almost no overlap with CSP propagation.
  • A player-identity layer in the game model.
  • For neural-eval games: a totally separate ML stack.

This is realistically a sister project, not an extension. Penmark could host the board and the move format for an adversarial game, but the engine work is far enough afield that mashing it in would dilute the rest of the codebase.

8. Synthesis and programming-game puzzles

Build a program meeting a spec.

  • Examples: SpaceChem, Opus Magnum, Shenzhen IO, TIS-100, Manufactoria, Human Resource Machine, Lightbot, the various Zachtronics catalog. Often optimized along multiple axes (cycles, area, ops).
  • Solver shape: program synthesis — superoptimization, SMT- guided search, RL, neural code generation. The “puzzle” is a program with input/output specs; the “solution” is a program.

Reusable from penmark today: very little. Maybe the CLI scaffolding and PuzzleMeta.

Missing: everything else. Different state model, different search techniques, different evaluation criteria, different authoring tools, different visualization needs.

Out of scope.

9. Combinatorial game theory

  • Examples: Nim and its many variants, Wythoff, Hackenbush, Domineering, Toads-and-Frogs.
  • Solver shape: Sprague–Grundy values for impartial games; surreal-number arithmetic for partizan games. Largely closed-form rather than search-based for the classical instances.
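The Sprague–Grundy computation for an impartial game is short enough to show whole, here for the take-1-2-or-3 subtraction game. The Grundy number of a position is the mex (minimum excludant) of its options' Grundy numbers; positions with Grundy number 0 are losses for the player to move.

```rust
// Grundy number of a pile in the take-1..3 subtraction game.
// (Naive recursion; real code would memoize. For this game the closed form
// is pile % 4, which the recursion reproduces.)

fn grundy(pile: usize) -> u32 {
    let options: Vec<u32> = (1..=3.min(pile)).map(|t| grundy(pile - t)).collect();
    (0..).find(|g| !options.contains(g)).unwrap() // mex of the option values
}

fn main() {
    let values: Vec<u32> = (0..8).map(grundy).collect();
    println!("{:?}", values); // the period-4 pattern
}
```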

Reusable from penmark today: nothing meaningful.

Missing: the entire mathematical apparatus.

Niche, mathematically beautiful, and disjoint from everything else in this catalog. Out of scope.

Summary table

Family                       | Penmark fit    | What’s reusable                               | What’s missing
1. Constraint satisfaction   | Native         | Everything                                    | Per-genre rule variants
2. State-space planning      | Adjacent       | Substrate / level half of Genre / Tier        | Planning trait + BFS/IDA* + plan-length grader + plan generator
3. Exact cover / packing     | Adjacent       | Substrate, region, parsing, frontends         | DLX solver, piece-orbit region kind
4. Word and dictionary       | Adjacent       | Substrate, propagator architecture, frontends | Large-domain store, filter index, slot-as-variable model, optional clue scorer
5. Path / graph              | Mixed          | Substrate, Connected rule                     | Edge-multiplicity, planarity (Hashi); maze BFS overlap is small
6. Hidden / probabilistic    | Adjacent       | CSP layer for consistency check               | Probability layer, info-gain selector, guessing-aware grader
7. Adversarial               | Sister project | Level / paint / parsing                       | Minimax / MCTS / eval / player model
8. Synthesis                 | Out of scope   | CLI scaffolding                               | Everything else
9. Combinatorial game theory | Out of scope   | Nothing                                       | Everything else

“Native” means penmark hosts it today. “Adjacent” means a real but contained body of new work — a sibling solver track or a new domain store, not a rewrite. “Sister project” means the level layer transfers but the engine doesn’t. “Out of scope” means penmark is the wrong home.

The recurring lesson: penmark’s level-description machinery (substrate, coord, region, parsing, dataset, paint, CLI) is genre-agnostic and pays for itself across every family that has a board. The rule and solver layers are family-specific. The clean extension story is to keep the level layer as the shared core and add sister rule-traits and sister solver families as new puzzle classes land — not to force every problem through the CSP propagator.