
Solver System Architecture

Status: Draft architecture baseline.

Date: 2026-05-11.

Audience: backend implementers, solver implementers, index maintainers, frontend implementers, and design reviewers.

Related documents:

  • docs/reference/academic/academic-course-universe-model/
  • docs/reference/academic/multi-solver-academic-planning-model/
  • docs/reference/architecture/backend-runtime-architecture/
  • docs/reference/architecture/offline-index-runtime-architecture/
  • docs/specifications/scraper-pipeline-spec/
  • docs/decisions/0013-multi-engine-solver-layer/
  • docs/decisions/0014-direct-evaluator-as-first-runtime-engine/
  • docs/decisions/0015-solver-result-and-explanation-contract/
  • docs/decisions/0016-planning-vs-evaluation-solver-boundary/
  • docs/decisions/0017-abstract-solver-adapters-before-concrete-engines/
  • docs/decisions/0018-conservative-unknown-and-engine-status-mapping/

1. Thesis

The solver layer is a shared academic contract over multiple specialized engines.

The contract is:

published index + student state + query request
-> normalized evaluation problem
-> engine routing
-> engine result
-> academic result with explanation, unknowns, conflicts, and source references

The backend must not expose raw engine internals as the academic API.

The frontend must not infer academic truth from graph geometry.

The index must remain the source of durable academic facts.

The student state must remain the source of user-specific facts.

The solver layer combines those inputs into bounded query answers.

The solver layer is not one universal solver.

The solver layer is a router, a normalizer, a set of engine_adapter implementations, and a shared result contract.

Version 1 should implement the deterministic direct evaluator first.

Additional adapters should be added only where they answer a class of query more naturally than the direct evaluator.

2. Design Goals

The solver system must preserve academic meaning.

The solver system must expose uncertainty directly.

The solver system must return explanations that can be rendered by the frontend.

The solver system must support current-state evaluation and future-state planning without mixing their contracts.

The solver system must be deterministic for direct evaluation.

The solver system must be testable against small fixtures before it touches the full Waterloo index.

The solver system must keep source references attached to facts, requirements, and derived answers.

The solver system must let the backend choose engine routes without changing endpoint semantics.

The solver system must avoid mandatory commitment to a concrete solver library in the architecture layer.

The solver system must give conservative answers when the index is incomplete, the parser is unsure, or an engine times out.

3. Non-Goals

This document does not define backend HTTP endpoint schemas.

This document does not choose a mandatory Datalog, SAT, SMT, CP, or package-resolution library.

This document does not define frontend 3D geometry.

Frontend runtime, atlas, Rust/WASM layout, WebGL/WebGPU rendering, and non-canvas fallback responsibilities are defined in frontend runtime architecture and the frontend specs.

This document does not define scraper extraction rules.

This document does not replace academic advising.

This document does not treat solver output as authoritative University of Waterloo advice.

This document does not require all engines to ship in version 1.

This document does not require every query to be solved by a formal external engine.

4. External Anchors

SMT-LIB defines common SMT languages, theories, and benchmark conventions for satisfiability modulo theories: smt-lib.org.

Z3’s arithmetic guide is a useful reference for integer and real arithmetic, check-sat, models, and engine incompleteness in harder arithmetic fragments: Z3 arithmetic.

Z3’s fixedpoint guide describes a bottom-up Datalog engine over finite relations: Z3 basic Datalog.

Souffle describes Datalog programs in terms of relation declarations, facts, rules, and directives: Souffle program documentation.

PySAT documents cardinality encodings for translating at most, at least, and equality constraints into CNF: PySAT cardinality encodings.

Google OR-Tools documents CP-SAT as a constraint programming solver over integer variables and constraints: OR-Tools CP-SAT.

OR-Tools also exposes CP-SAT statuses such as OPTIMAL, FEASIBLE, INFEASIBLE, and UNKNOWN; these are engine statuses, not academic statuses: CpSolverStatus.

MiniZinc is a solver-independent constraint modeling language and is useful context for CP-style modeling even if the project does not choose it directly: MiniZinc specification.

Dart Pub’s PubGrub documentation gives a clear package-resolver account of incompatibilities, derived incompatibilities, backtracking, and explanation: PubGrub solver overview.

Waterloo’s Kuali course requisite documentation is the source context for grade thresholds, course choices, academic level requirements, program restrictions, milestones, and antirequisites: Waterloo course requisite rules.

Waterloo’s Kuali plan requirement documentation is the source context for plan requirement lists and nested requirement groups: Waterloo plan requirement lists.

Waterloo’s cross-listed course documentation is the source context for separating user-visible listings from shared credit identity: Waterloo cross-listed courses.

5. System Boundary

The solver layer sits inside the Go backend runtime.

It reads the active published index through a read-only repository interface.

It reads the student state through a state repository interface.

It receives a typed query request from an endpoint handler.

It returns an academic_result.

The solver layer must not scrape upstream catalog data.

The solver layer must not mutate the published index.

The solver layer must not modify parser patches.

The solver layer may write temporary in-memory structures.

The solver layer may cache derived read-only facts keyed by index id.

The solver layer may cache state-dependent results only under strict private-cache rules.

The solver layer must treat state-dependent results as sensitive.

```mermaid
flowchart LR
  Index["published index"] --> Normalize["normalized problem builder"]
  State["student state"] --> Normalize
  Request["query request"] --> Normalize
  Normalize --> Route["engine routing"]
  Route --> Direct["direct evaluator"]
  Route --> Datalog["Datalog adapter"]
  Route --> SAT["SAT adapter"]
  Route --> SMT["SMT adapter"]
  Route --> CP["CP adapter"]
  Route --> Resolver["package-resolver adapter"]
  Direct --> Contract["result normalizer"]
  Datalog --> Contract
  SAT --> Contract
  SMT --> Contract
  CP --> Contract
  Resolver --> Contract
  Contract --> Result["academic_result"]
```

6. Canonical Inputs

The canonical solver inputs are:

  • published_index
  • student_state
  • query_request
  • runtime_policy

published_index is immutable during a backend process lifetime unless the backend explicitly supports a controlled index reload.

student_state is mutable and user-specific.

query_request is endpoint-specific but normalized before engine routing.

runtime_policy controls timeout, routing, feature flags, and completeness requirements.

The solver layer should not read request bodies directly; it should receive only the normalized representation.

Endpoint handlers validate transport-level shape first.

The problem builder validates academic shape second.

Engine adapters validate fragment support third.

7. Published Index Requirements

The solver layer expects the published index to expose stable identifiers for:

  • catalog versions;
  • courses;
  • course listings;
  • course credits;
  • course offerings when included;
  • credentials;
  • requirement groups;
  • requirement conditions;
  • unparsed requirement fragments;
  • source references;
  • equivalence relationships;
  • antirequisite relationships;
  • cross-listing relationships;
  • level and subject predicates;
  • unit values;
  • grade threshold requirements;
  • academic progress requirements;
  • program enrollment requirements;
  • milestone requirements.

The solver layer must not infer identifiers from display strings.

The solver layer may use display strings only in explanations.

Every structured requirement must retain at least one source_reference.

Every unparsed requirement must retain its original text and source_reference.

Every generated derived fact must retain its derivation source.

8. Student State Requirements

The solver layer expects student_state to expose:

  • completed course attempts;
  • planned course attempts;
  • grade records when available;
  • transfer credits when represented;
  • declared programs;
  • desired programs;
  • academic progress such as 1A, 1B, 2A, 2B, 3A, 3B, 4A, or 4B;
  • academic standing such as good standing or probation when represented;
  • user-selected assumptions;
  • ignored or manually excluded attempts when represented.

Academic progress and academic standing are distinct fields.

Academic progress describes progression through the academic plan.

Academic standing describes institutional status.

The solver layer must not use academic standing as a proxy for academic progress.

The solver layer must not infer a grade threshold from completion alone when the grade is missing and the threshold matters.

The solver layer may treat a missing grade as sufficient only when the requirement condition is completion without grade threshold and the course credit is known.
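The missing-grade rules above can be sketched as a single condition check. This is an illustrative Go fragment, not the project's API: the three-valued `TV` type, the function name, and the simplification that a not-yet-completed course evaluates to false for current state are all assumptions made here.

```go
package main

import "fmt"

// TV is a three-valued outcome: the evaluator must be able to say
// "unknown" rather than guess.
type TV string

const (
	True    TV = "true"
	False   TV = "false"
	Unknown TV = "unknown"
)

// meetsGradeThreshold applies the missing-grade rules for a single
// course condition. A nil threshold means the condition is completion
// only; a nil grade means no grade record exists. When the threshold
// matters and the grade is missing, the answer stays unknown.
func meetsGradeThreshold(completed bool, grade, threshold *int) TV {
	if !completed {
		return False
	}
	if threshold == nil {
		return True // completion without grade threshold is enough
	}
	if grade == nil {
		return Unknown // the threshold matters and the grade is missing
	}
	if *grade >= *threshold {
		return True
	}
	return False
}

func main() {
	sixty := 60
	fmt.Println(meetsGradeThreshold(true, nil, &sixty)) // missing grade stays unknown
}
```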

9. Query Request Classes

The solver layer should normalize endpoint requests into one of these problem classes:

  • evaluation_problem
  • closure_problem
  • comparison_problem
  • feasibility_problem
  • planning_problem
  • explanation_problem

An evaluation_problem asks whether a current or hypothetical state satisfies a known requirement.

A closure_problem asks for derived relations such as what a course unlocks or which credentials mention a course.

A comparison_problem asks what changes between two states.

A feasibility_problem asks whether some completion exists under bounded assumptions.

A planning_problem asks for one or more possible future plans.

An explanation_problem asks for why a status was returned or why no plan was found.

Most user-facing requests combine more than one class.

The problem builder should make the classes explicit so routing stays inspectable.

10. Normalized Problem Representation

The normalized representation is the internal contract between endpoint handlers and engine routing.

It should be serializable for debugging and fixtures.

It should use plain operational names.

The core shape is:

```
normalized_problem
  problem_id
  problem_kind
  index_scope
  state_scope
  target
  facts
  requirements
  assumptions
  objectives
  limits
  source_references
```

problem_id is local to the request.

problem_kind is one of the normalized query request classes.

index_scope identifies the catalog version, index id, and relevant indexed objects.

state_scope identifies the user state snapshot and any hypothetical modifications.

target identifies the course, credential, requirement group, or other object being queried.

facts are normalized indexed and state facts.

requirements are structured requirement trees or opaque fragments.

assumptions are explicit user or system assumptions.

objectives are only present for planning or optimization-like queries.

limits include timeouts, horizon bounds, maximum returned plans, and completeness policy.

source_references preserve provenance for facts and requirements.
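Since the normalized representation should be serializable for debugging and fixtures, it maps naturally onto a plain Go struct. Every field name and type below is an illustrative assumption; the real schema is owned by the problem builder, and the target id `course:ABC100` is hypothetical.

```go
package main

import "fmt"

// NormalizedProblem is a sketch of the internal contract between
// endpoint handlers and engine routing.
type NormalizedProblem struct {
	ProblemID        string
	ProblemKind      string // one of the six problem classes
	IndexScope       IndexScope
	StateScope       StateScope
	Target           string // id of the queried course, credential, or group
	Facts            []string
	Requirements     []string
	Assumptions      []string
	Objectives       []string // present only for planning-like queries
	Limits           Limits
	SourceReferences []string
}

type IndexScope struct {
	CatalogVersionID string
	IndexID          string
}

type StateScope struct {
	StateSnapshotID string
	Hypotheticals   []string // explicit hypothetical modifications
}

type Limits struct {
	TimeoutMillis    int
	HorizonTerms     int
	MaxReturnedPlans int
	Completeness     string // completeness policy
}

// newEvaluationProblem shows the minimal fields an evaluation needs.
func newEvaluationProblem(target string, limits Limits) NormalizedProblem {
	return NormalizedProblem{
		ProblemID:   "local-1",
		ProblemKind: "evaluation_problem",
		Target:      target,
		Limits:      limits,
	}
}

func main() {
	p := newEvaluationProblem("course:ABC100", Limits{TimeoutMillis: 200})
	fmt.Println(p.ProblemKind, p.Target)
}
```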

11. Requirement Representation

The normalized requirement representation must preserve:

  • group type;
  • group cardinality;
  • child ordering when source ordering matters for display;
  • child identifiers;
  • typed requirement conditions;
  • unparsed fragments;
  • source references;
  • parser confidence;
  • calendar version;
  • applicability conditions;
  • display text.

The basic group operators are:

  • all_of
  • any_of
  • choose

all_of is satisfied when every applicable child is satisfied.

any_of is satisfied when at least one applicable child is satisfied.

choose is satisfied when the number of satisfied applicable children is at least minimum_count and, when an upper bound exists, at most maximum_count.

The basic condition families are:

  • course completion;
  • course credit completion;
  • course listing completion;
  • grade threshold;
  • unit count;
  • average threshold;
  • level predicate;
  • subject predicate;
  • academic progress predicate;
  • academic standing predicate;
  • program enrollment predicate;
  • milestone predicate;
  • antirequisite exclusion;
  • equivalence;
  • cross-listing;
  • opaque source text.

The project should use requirement_condition for typed leaf-level conditions.

The project should use requirement_group for grouped requirements.

The project should use requirement_expression for the full tree.
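One plausible Go shape for these three names is a tree node that is either a group or a leaf condition. The field names, the pointer-based either/or encoding, and the condition fields shown are assumptions for illustration, not the project schema.

```go
package main

import "fmt"

type GroupType string

const (
	AllOf  GroupType = "all_of"
	AnyOf  GroupType = "any_of"
	Choose GroupType = "choose"
)

// RequirementExpression is either a group node or a leaf condition;
// exactly one of Group and Condition is set in this sketch.
type RequirementExpression struct {
	ID        string
	Group     *RequirementGroup
	Condition *RequirementCondition
}

type RequirementGroup struct {
	Type         GroupType
	MinimumCount int // used by choose
	MaximumCount int // 0 means unbounded in this sketch
	Children     []RequirementExpression
}

type RequirementCondition struct {
	Family           string   // e.g. "course_completion", "grade_threshold"
	TargetID         string   // stable identifier, never a display string
	SourceReferences []string // every structured requirement keeps one
	OpaqueText       string   // non-empty only for opaque source text
	ParserConfidence float64
}

func main() {
	grp := RequirementExpression{
		ID: "grp-1",
		Group: &RequirementGroup{
			Type:         Choose,
			MinimumCount: 2,
			Children: []RequirementExpression{
				{ID: "c1", Condition: &RequirementCondition{Family: "course_completion", TargetID: "course:A"}},
				{ID: "c2", Condition: &RequirementCondition{Family: "course_completion", TargetID: "course:B"}},
				{ID: "c3", Condition: &RequirementCondition{Family: "course_completion", TargetID: "course:C"}},
			},
		},
	}
	fmt.Println(grp.Group.Type, len(grp.Group.Children))
}
```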

12. Academic Result Contract

Every engine route returns an academic_result.

The shared academic statuses are:

  • satisfied
  • not_satisfied
  • partial
  • unknown
  • conflict
  • not_applicable

The result must include:

  • status;
  • target;
  • explanation_tree;
  • unknowns;
  • conflicts;
  • source_references;
  • engine_trace_summary;
  • completeness;
  • assumptions;
  • warnings.

The result may include:

  • satisfied_requirement_ids;
  • unsatisfied_requirement_ids;
  • unknown_requirement_ids;
  • conflicting_requirement_ids;
  • candidate_plans;
  • changed_facts;
  • score_breakdown;
  • debug_reference.

The debug_reference must not expose raw solver internals to ordinary clients.

The debug_reference may point maintainers to server-side logs or fixture artifacts.
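The contract above could be rendered in Go roughly as follows. Field types are illustrative assumptions, only a few of the optional fields are shown, and the constructor exists solely to underline one point from the text: unknown is an ordinary, fully populated result, not an error path. The target id `credential:example` is hypothetical.

```go
package main

import "fmt"

type AcademicStatus string

const (
	Satisfied     AcademicStatus = "satisfied"
	NotSatisfied  AcademicStatus = "not_satisfied"
	Partial       AcademicStatus = "partial"
	Unknown       AcademicStatus = "unknown"
	Conflict      AcademicStatus = "conflict"
	NotApplicable AcademicStatus = "not_applicable"
)

type ExplanationNode struct {
	ID       string
	Kind     string
	Status   AcademicStatus
	Summary  string
	Children []ExplanationNode
}

// AcademicResult lists the required fields from the contract plus a
// couple of the optional ones.
type AcademicResult struct {
	Status             AcademicStatus
	Target             string
	ExplanationTree    ExplanationNode
	Unknowns           []string // machine-readable unknown_reason codes
	Conflicts          []string // machine-readable conflict_reason codes
	SourceReferences   []string
	EngineTraceSummary string
	Completeness       string
	Assumptions        []string
	Warnings           []string

	// Optional fields.
	UnknownRequirementIDs []string
	DebugReference        string // points at server-side artifacts only
}

// newUnknownResult builds an honest unknown answer with its reason.
func newUnknownResult(target, reason string) AcademicResult {
	return AcademicResult{
		Status:       Unknown,
		Target:       target,
		Unknowns:     []string{reason},
		Completeness: "incomplete",
	}
}

func main() {
	r := newUnknownResult("credential:example", "unparsed_requirement")
	fmt.Println(r.Status, r.Unknowns[0])
}
```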

13. Explanation Tree Contract

The explanation_tree is a semantic tree, not a solver proof object.

It should be built from academic objects and requirement identifiers.

It should be stable enough for frontend rendering.

It should include:

  • node id;
  • node kind;
  • status;
  • summary;
  • requirement id when applicable;
  • academic object id when applicable;
  • children;
  • source references;
  • unknown reasons;
  • conflict reasons;
  • engine summary when relevant.

The tree should mirror requirement structure when the answer is an evaluation.

The tree may mirror derived relation paths when the answer is closure.

The tree may mirror decision branches when the answer is planning.

The tree must not require the frontend to understand CNF clauses, SMT variables, Datalog internal tables, or CP search logs.

14. Source Reference Contract

source_reference is mandatory for source-derived claims.

A source_reference should include:

  • source_reference_id;
  • source_kind;
  • catalog_version_id;
  • upstream_object_id when available;
  • path or structured location when available;
  • source_text excerpt when allowed;
  • retrieved_at;
  • parser_version;
  • patch_id when a manual patch affected the fact;
  • confidence.

An answer may cite source references directly.

An answer may cite source references indirectly through explanation nodes.

Every unknown caused by unparsed source text must point back to the opaque source fragment.

Every conflict caused by source inconsistency must point to all known competing source references.

15. Unknown and Conflict Objects

unknown_reason describes semantic uncertainty in the academic model.

Examples include:

  • unparsed_requirement;
  • missing_grade;
  • missing_academic_progress;
  • missing_program_state;
  • catalog_version_unavailable;
  • unsupported_requirement_condition;
  • engine_incomplete;
  • time_limit_reached;
  • state_migration_required;
  • source_conflict_unresolved.

conflict_reason describes mutually incompatible facts, requirements, or state assumptions.

Examples include:

  • antirequisite_conflict;
  • duplicate_credit_conflict;
  • incompatible_program_assumption;
  • catalog_version_conflict;
  • source_patch_conflict;
  • state_fact_conflict.

unknown_reason and conflict_reason must be machine-readable.

Human-readable prose may be generated from them.

The project must not collapse all incomplete answers into one generic message.

16. Academic Status Semantics

satisfied means the solver layer has enough structured evidence to conclude the target condition holds under the stated assumptions.

not_satisfied means the solver layer has enough structured evidence to conclude the target condition does not hold under the stated assumptions.

partial means some relevant parts are satisfied and some relevant parts are not satisfied, with no blocking unknown that prevents partial reporting.

unknown means the solver layer cannot determine the academic status from available structured evidence and allowed engine work.

conflict means the input facts, assumptions, or source-derived rules are mutually inconsistent in a way that prevents a trustworthy ordinary answer.

not_applicable means the target requirement does not apply under the stated scope or assumptions.

unknown is not a failure status.

unknown is an honest academic result.

conflict is not the same as not_satisfied.

not_applicable is not the same as satisfied.

17. Engine Status Mapping

Every engine_adapter returns an engine_result.

The engine_result carries engine-native status separately from academic status.

Common engine-native statuses include:

  • engine_satisfied;
  • engine_not_satisfied;
  • engine_sat;
  • engine_unsat;
  • engine_unknown;
  • engine_timeout;
  • engine_incomplete;
  • engine_unsupported;
  • engine_error.

The result normalizer maps engine_result into academic_result.

The mapping must inspect:

  • whether the encoding was complete for the requested fragment;
  • whether the engine result was exact;
  • whether the engine status means no model exists;
  • whether timeout or incompleteness affected the result;
  • whether source unknowns existed before engine routing;
  • whether adapter assumptions are acceptable for the query.

engine_unsat may become not_satisfied only when the encoding is complete for the relevant academic question.

engine_unknown must not become academic not_satisfied.

engine_timeout must not become academic not_satisfied.

engine_incomplete must not become academic satisfied.

engine_unsupported should become academic unknown with unsupported_requirement_condition or route to another adapter.

engine_error should usually become a server error for operational failure, not academic unknown, unless policy marks it as a recoverable incomplete engine route.
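Two of these conservative rules can be shown directly in code: engine_unsat hardens into not_satisfied only under a complete encoding, and unknown, timeout, and incomplete never harden into a definite status. This is a sketch under those two inspection points only; the real normalizer must also inspect source unknowns, adapter assumptions, and result exactness, and the mapping of engine_sat depends on the problem kind.

```go
package main

import "fmt"

type EngineStatus string
type AcademicStatus string

const (
	EngineSat        EngineStatus = "engine_sat"
	EngineUnsat      EngineStatus = "engine_unsat"
	EngineUnknown    EngineStatus = "engine_unknown"
	EngineTimeout    EngineStatus = "engine_timeout"
	EngineIncomplete EngineStatus = "engine_incomplete"

	Satisfied    AcademicStatus = "satisfied"
	NotSatisfied AcademicStatus = "not_satisfied"
	Unknown      AcademicStatus = "unknown"
)

// mapEngineStatus applies the conservative mapping rules: a definite
// academic status requires both a definite engine status and a
// complete encoding for the relevant academic question.
func mapEngineStatus(s EngineStatus, encodingComplete bool) AcademicStatus {
	switch s {
	case EngineSat:
		if encodingComplete {
			return Satisfied
		}
		return Unknown // incomplete evidence must not become satisfied
	case EngineUnsat:
		if encodingComplete {
			return NotSatisfied
		}
		return Unknown // an incomplete encoding cannot prove not_satisfied
	default: // engine_unknown, engine_timeout, engine_incomplete, ...
		return Unknown
	}
}

func main() {
	fmt.Println(mapEngineStatus(EngineUnsat, true))
	fmt.Println(mapEngineStatus(EngineTimeout, true))
}
```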

18. Completeness Policy

Every result must report completeness.

Completeness should include:

  • complete;
  • incomplete;
  • complete_for_fragment;
  • not_attempted;
  • failed.

complete means all relevant structured requirements in scope were evaluated under the stated assumptions.

incomplete means the answer may change if missing data, parser coverage, or engine work improves.

complete_for_fragment means a solver result is exact for a declared fragment but not for the whole academic object.

not_attempted means a route was intentionally skipped.

failed means an operational error prevented the route from producing a usable engine_result.

The frontend should display academic status and completeness together.

The backend should never hide incomplete routes behind a confident status.

19. Timeout Policy

Direct evaluation should have deterministic bounded runtime over the normalized requirement tree.

Datalog closure should be bounded by the finite relation set and runtime limits.

SAT and SMT queries should have explicit time limits.

CP planning queries should have explicit horizon and search limits.

Package-resolver style search should have explicit branch and explanation limits.

Timeout is an engine execution outcome.

Timeout is not academic evidence.

If a timeout prevents a conclusive answer, the academic status is unknown or partial with engine_incomplete.

If a timeout happens after a sound partial result was produced, the result may include partial evidence and must mark completeness as incomplete.

If a route times out but a cheaper route already produced a complete answer, the backend may ignore the timed-out route for user-facing status and still record a warning for maintainers.
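A minimal sketch of running one engine route under an explicit time limit, using Go's context package. The function name and status strings are assumptions; the goroutine leaked on timeout is tolerated here for brevity, where a real adapter would make the engine cancellable.

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// runRoute executes one engine route under an explicit time limit and
// returns an engine-native status string.
func runRoute(limitMillis int, solve func() string) string {
	limit := time.Duration(limitMillis) * time.Millisecond
	ctx, cancel := context.WithTimeout(context.Background(), limit)
	defer cancel()

	done := make(chan string, 1)
	go func() { done <- solve() }()

	select {
	case status := <-done:
		return status
	case <-ctx.Done():
		// Timeout is an execution outcome, never academic evidence.
		return "engine_timeout"
	}
}

// fastSolve and slowSolve stand in for cheap and expensive routes.
func fastSolve() string { return "engine_sat" }
func slowSolve() string { time.Sleep(500 * time.Millisecond); return "engine_sat" }

func main() {
	fmt.Println(runRoute(200, fastSolve))
	fmt.Println(runRoute(10, slowSolve))
}
```

The caller then feeds `"engine_timeout"` into the result normalizer, which maps it to academic unknown or partial, never to not_satisfied.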

20. Engine Adapter Interface

The architecture-level interface is intentionally abstract.

An engine_adapter should expose:

```
adapter_id
supported_problem_kinds()
supported_requirement_fragments()
prepare(normalized_problem) -> prepared_problem
solve(prepared_problem, runtime_policy) -> engine_result
explain(engine_result, normalized_problem) -> explanation_tree
```

The concrete Go interface can differ.

The semantic obligations cannot differ.

Each adapter must declare the fragments it supports.

Each adapter must reject unsupported fragments explicitly.

Each adapter must report whether its encoding was complete.

Each adapter must return enough trace summary to audit the route.

Each adapter must preserve source_reference identifiers through translation.

Each adapter must include a fixture format for small examples.
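One plausible Go rendering of the abstract interface, together with a stub adapter that shows the explicit-rejection obligation: unsupported fragments fail in prepare, not silently later. As the text notes, the concrete Go interface can differ; all type and method names here, and the fragment names, are illustrative assumptions.

```go
package main

import (
	"errors"
	"fmt"
)

// Minimal stand-ins for the surrounding types; all illustrative.
type NormalizedProblem struct {
	ProblemKind string
	Fragments   []string // requirement condition families in the problem
}
type PreparedProblem struct{ Problem NormalizedProblem }
type RuntimePolicy struct{ TimeoutMillis int }
type EngineResult struct{ Status string }
type ExplanationTree struct{ Summary string }

// EngineAdapter mirrors the abstract architecture-level interface.
type EngineAdapter interface {
	AdapterID() string
	SupportedProblemKinds() []string
	SupportedRequirementFragments() []string
	Prepare(p NormalizedProblem) (PreparedProblem, error)
	Solve(p PreparedProblem, policy RuntimePolicy) (EngineResult, error)
	Explain(r EngineResult, p NormalizedProblem) ExplanationTree
}

type directEvaluator struct{}

func (directEvaluator) AdapterID() string               { return "direct_evaluator" }
func (directEvaluator) SupportedProblemKinds() []string { return []string{"evaluation_problem"} }
func (directEvaluator) SupportedRequirementFragments() []string {
	return []string{"course_completion", "grade_threshold"}
}

// Prepare rejects unsupported fragments explicitly.
func (d directEvaluator) Prepare(p NormalizedProblem) (PreparedProblem, error) {
	supported := map[string]bool{}
	for _, f := range d.SupportedRequirementFragments() {
		supported[f] = true
	}
	for _, f := range p.Fragments {
		if !supported[f] {
			return PreparedProblem{}, errors.New("engine_unsupported: " + f)
		}
	}
	return PreparedProblem{Problem: p}, nil
}

func (directEvaluator) Solve(p PreparedProblem, policy RuntimePolicy) (EngineResult, error) {
	return EngineResult{Status: "engine_satisfied"}, nil
}

func (directEvaluator) Explain(r EngineResult, p NormalizedProblem) ExplanationTree {
	return ExplanationTree{Summary: r.Status}
}

func main() {
	var a EngineAdapter = directEvaluator{}
	_, err := a.Prepare(NormalizedProblem{Fragments: []string{"term_scheduling"}})
	fmt.Println(err)
}
```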

21. Direct Evaluator Role

The direct evaluator is the first runtime engine.

It evaluates structured requirement trees against current or hypothetical student_state.

It supports:

  • all_of;
  • any_of;
  • choose;
  • course completion;
  • course credit completion;
  • grade thresholds;
  • unit counts over explicit sets;
  • level predicates over explicit sets;
  • academic progress predicates;
  • program enrollment predicates;
  • academic standing predicates when modeled;
  • antirequisite conflicts when indexed;
  • equivalence and cross-listing facts when indexed;
  • unparsed requirement propagation.

It should return deterministic academic_result values.

It should produce explanation trees that mirror requirement trees.

It should be small enough to audit by reading.

It should become the oracle for basic fixtures.

The direct evaluator is not a planner.

The direct evaluator should not search future term combinations except for simple local alternatives already present in a requirement group.

22. Datalog Adapter Role

Datalog is a good fit for finite derived relations.

It should support:

  • transitive unlock closure;
  • reverse dependency closure;
  • course-to-credential mention closure;
  • requirement containment closure;
  • equivalence closure;
  • antirequisite reachability;
  • source-derived relation validation;
  • derived explanation paths.

Datalog is attractive because many graph-like academic queries are monotone relation derivations.

Souffle’s model of relations, facts, and rules is a useful implementation reference: Souffle program documentation.

Z3’s fixedpoint engine is another useful reference for finite relations and Datalog-style derivation: Z3 basic Datalog.

Datalog should not be forced to handle every numeric average or future schedule search.

Datalog adapter output should return derived relation facts plus derivation summaries.

23. SAT Adapter Role

SAT is a good fit for finite Boolean choices.

It should support:

  • requirement alternatives;
  • choose cardinality conditions;
  • coexistence constraints;
  • conflict satisfiability;
  • Boolean feasibility under fixed numeric facts;
  • minimal unsatisfied Boolean cores when supported by a concrete library.

SAT translates choices into Boolean variables and clauses.

Cardinality constraints may be encoded into CNF using known encodings; PySAT documents several cardinality encodings as implementation context: PySAT cardinality encodings.

SAT should not be the primary representation for grades, averages, units, or term scheduling unless those domains are deliberately bounded and encoded.

SAT adapter output should include whether the Boolean encoding was complete for the requested academic fragment.

SAT UNSAT is not automatically academic not_satisfied.

It becomes academic evidence only when the encoding covers all relevant requirements and assumptions.
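To make the translation concrete, here is the smallest cardinality case: forcing at most one of a set of Boolean choice variables to be true via the naive pairwise encoding, with clauses as DIMACS-style literal slices. This is illustration only; a production adapter would likely use the stronger encodings PySAT documents, such as sequential counters or totalizers.

```go
package main

import "fmt"

// atMostOne emits CNF clauses forcing at most one of the given
// variables true: for every pair (x, y), add the clause (¬x ∨ ¬y).
// Literals use DIMACS convention, where -v means "v is false".
func atMostOne(vars []int) [][]int {
	var clauses [][]int
	for i := 0; i < len(vars); i++ {
		for j := i + 1; j < len(vars); j++ {
			clauses = append(clauses, []int{-vars[i], -vars[j]})
		}
	}
	return clauses
}

func main() {
	// Three alternative course choices encoded as variables 1, 2, 3.
	for _, c := range atMostOne([]int{1, 2, 3}) {
		fmt.Println(c)
	}
}
```

The pairwise encoding grows quadratically, which is one reason real cardinality encodings matter once choose groups get large.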

24. SMT Adapter Role

SMT is a good fit for numeric academic constraints combined with Boolean structure.

It should support:

  • grade thresholds;
  • averages;
  • unit counts;
  • numeric progress bounds;
  • mixed Boolean and arithmetic feasibility;
  • symbolic what-if over unknown grades when bounded by policy.

SMT-LIB provides standard theories and solver communication conventions: SMT-LIB.

Z3’s arithmetic guide is useful context for integer and real sorts, arithmetic constraints, and cases where a solver may return unknown for harder fragments: Z3 arithmetic.

The SMT adapter should avoid unsupported arithmetic fragments in v1.

The SMT adapter should not treat floating point rounding as academic policy unless the index schema explicitly defines rounding.

The SMT adapter should represent grades and units as exact scaled integers when possible.

SMT unknown is an engine-native status.

It must map to academic unknown with an engine incompleteness reason unless another complete route answers the same question.

25. CP Adapter Role

Constraint programming is a good fit for bounded planning.

It should support:

  • term-by-term course assignment;
  • maximum course load;
  • prerequisites before planned courses;
  • corequisites in the same term;
  • limited offerings if term data is later included;
  • breadth, depth, and credential constraints over a fixed horizon;
  • search for one feasible plan or several diverse plans.

OR-Tools CP-SAT is a candidate ecosystem reference for integer variables and constraints: OR-Tools CP-SAT.

MiniZinc is a candidate modeling reference for solver-independent CP-style models: MiniZinc specification.

The CP adapter must always declare horizon bounds.

The CP adapter must not imply that a returned plan is unique.

The CP adapter must not imply infeasibility beyond the declared horizon, offering assumptions, and state assumptions.

CP UNKNOWN or search timeout must map to academic unknown or incomplete planning result, not academic impossibility.

26. Package-Resolver Style Adapter Role

Package-resolver style search is a good fit for alternative-path exploration and conflict explanation.

It should support:

  • choosing among alternative requirement paths;
  • deriving incompatibilities from conflicts;
  • explaining why a target cannot coexist with current assumptions;
  • backtracking over course choices;
  • producing human-readable conflict chains.

PubGrub’s documentation is useful because it treats dependency solving as a process of deriving and explaining incompatibilities: PubGrub solver overview.

The package-resolver style adapter should not be treated as a complete academic model by itself.

It needs input from the normalized problem.

It needs validation by the direct evaluator or another exact route.

It is especially useful when the user asks why a desired target is blocked.

27. Routing Matrix

The router chooses the cheapest complete route first.

The router can choose multiple routes when a query needs both exact evaluation and derived graph context.

| Query type | Primary route | Secondary route | Notes |
| --- | --- | --- | --- |
| Course unlock for current state | direct evaluator | Datalog for what-unlocks context | Direct answer should be deterministic. |
| Course unlock with unknown grades | direct evaluator | SMT for bounded grade what-if | Missing grade remains academic unknown unless user supplies assumption. |
| Reverse unlock list | Datalog | direct evaluator for current-state highlighting | Closure is relation-heavy. |
| Credential progress | direct evaluator | SMT for averages or unit thresholds | Requirement tree drives explanation. |
| Credential mention of course | Datalog | direct evaluator for contribution status | Mention is not contribution by itself. |
| What changed if I add this course | direct evaluator comparison | Datalog for newly reachable nodes | Compare state snapshots. |
| Boolean alternative feasibility | SAT | direct evaluator validation | SAT handles finite alternatives. |
| Numeric feasibility | SMT | direct evaluator validation | Use exact scaled numeric domains where possible. |
| Degree-CAD bounded plan | CP | SAT, SMT, package-resolver style | Planning requires horizon and assumptions. |
| Conflict explanation | package-resolver style | SAT unsat core when supported | Explanation must map back to academic requirements. |
| Source uncertainty report | direct evaluator | none | This is mostly provenance and parser status. |

The route choice is an implementation decision.

The response contract is not.

28. Hybrid Query Patterns

Many user experiences need hybrid solving.

Example: a user clicks a course object in the global view.

The backend may:

  1. use Datalog to compute prerequisites, reverse unlocks, and credential mentions;
  2. use direct evaluation to mark which prerequisites are already satisfied;
  3. use SMT to identify grade-sensitive unknowns;
  4. use package-resolver style explanation if a conflict is detected.

Example: a user selects a desired credential.

The backend may:

  1. use direct evaluation for current progress;
  2. use Datalog to find relevant courses and requirement containment;
  3. use CP to search bounded future plans;
  4. use SMT for average and unit constraints inside the bounded plan;
  5. use package-resolver style explanation for blocked paths.

The result normalizer must merge these routes without hiding route boundaries.

29. Evaluation vs Planning Boundary

Evaluation answers questions about a fixed state.

Planning searches for a future state.

An evaluation_problem can include hypothetical changes, but those changes must be explicit.

A planning_problem can introduce decision variables for future courses, grades, terms, and path choices.

Planning may produce many valid answers.

Evaluation produces one status for a target under stated assumptions.

The backend should keep evaluation synchronous in v1 when it is fast.

The backend should treat expensive planning as a candidate for asynchronous jobs.

The frontend should not depend on planning endpoints for basic page loads.

30. Direct Evaluator Semantics

The direct evaluator uses three-valued academic evaluation for requirement children:

  • true;
  • false;
  • unknown.

It then maps the tree result to academic statuses.

For all_of:

  • if every child is true, the group is true;
  • if any child is false and no unknown child can change the required result, the group may be false or partial depending on context;
  • if at least one child is unknown and no false child conclusively determines display status, the group is unknown or partial.

For any_of:

  • if any child is true, the group is true;
  • if every child is false, the group is false;
  • if no child is true and at least one child is unknown, the group is unknown.

For choose:

  • if the number of true children satisfies the cardinality bounds, the group is true when additional unknown children cannot create an upper-bound violation;
  • if the maximum possible true count cannot reach the lower bound, the group is false;
  • if the minimum known true count violates an upper bound, the group is conflict or false depending on requirement semantics;
  • otherwise the group is unknown or partial.

The exact display status may distinguish false from partial at the result normalizer layer.

The evaluator must retain child-level evidence either way.
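The three-valued combination rules above can be sketched compactly. Two simplifications are deliberate and labeled: the false versus partial display distinction is left to the result normalizer, and a choose upper-bound violation is reported as false here even though the text allows conflict depending on requirement semantics.

```go
package main

import "fmt"

type TV string

const (
	True    TV = "true"
	False   TV = "false"
	Unknown TV = "unknown"
)

// evalAllOf: every child true -> true; any false -> false;
// otherwise an unknown child leaves the group unknown.
func evalAllOf(children []TV) TV {
	sawUnknown := false
	for _, c := range children {
		if c == False {
			return False
		}
		if c == Unknown {
			sawUnknown = true
		}
	}
	if sawUnknown {
		return Unknown
	}
	return True
}

// evalAnyOf: any child true -> true; all false -> false;
// otherwise unknown.
func evalAnyOf(children []TV) TV {
	sawUnknown := false
	for _, c := range children {
		if c == True {
			return True
		}
		if c == Unknown {
			sawUnknown = true
		}
	}
	if sawUnknown {
		return Unknown
	}
	return False
}

// evalChoose checks cardinality bounds; max <= 0 means unbounded.
func evalChoose(children []TV, min, max int) TV {
	trueCount, unknownCount := 0, 0
	for _, c := range children {
		switch c {
		case True:
			trueCount++
		case Unknown:
			unknownCount++
		}
	}
	if max > 0 && trueCount > max {
		return False // known true count already violates the upper bound
	}
	if trueCount+unknownCount < min {
		return False // lower bound unreachable even if unknowns resolve true
	}
	if trueCount >= min && (max <= 0 || trueCount+unknownCount <= max) {
		return True // bounds hold and unknowns cannot break the upper bound
	}
	return Unknown
}

func main() {
	fmt.Println(evalAllOf([]TV{True, Unknown}))
	fmt.Println(evalChoose([]TV{True, True, Unknown}, 2, 2))
}
```

Note how unknown propagation stays conservative: a satisfied count that meets the lower bound is still not conclusive when unknown children could push the count past a finite upper bound.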

31. Antirequisite and Conflict Semantics

Antirequisite relationships are exclusions.

If a student has completed two course credits that the index says cannot both count, the result should include a conflict or contribution warning.

If one course is completed and another planned course is an antirequisite, the planned state should include a conflict warning.

If an antirequisite affects enrollment rather than credential contribution, the source rule must say so.

The solver layer should avoid guessing the institutional remedy.

It should report the conflict and source references.

32. Cross-Listing and Credit Identity

A course listing is not always the same as a credit identity.

Waterloo’s cross-listing guidance says cross-listed courses may share material and Quest identification while having different subject codes, requisites, and ownership: Waterloo cross-listed courses.

The solver layer must reason over both listing and credit identity.

User-facing views may show listings.

Credential contribution may depend on credit identity.

Requirement source text may refer to listings.

Equivalence and antirequisite rules may bridge listings and credits.

Every conversion between listing and credit identity must be explicit in the explanation tree when it affects the answer.
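A sketch of an explicit listing-to-credit conversion follows. The table contents and the CS371/AMATH341 pairing are hypothetical; what matters is that the conversion is recorded as an explanation step whenever listing and credit identity differ:

```go
package main

import "fmt"

// Explanation accumulates human-readable steps for the explanation tree.
type Explanation struct{ Steps []string }

// creditOf maps a listing id to its credit identity using an index-supplied
// table. When the two differ (a cross-listing), the conversion is recorded
// so it surfaces in the explanation tree.
func creditOf(listing string, crossListings map[string]string, exp *Explanation) string {
	if credit, ok := crossListings[listing]; ok && credit != listing {
		exp.Steps = append(exp.Steps,
			fmt.Sprintf("listing %s counted as credit %s (cross-listed)", listing, credit))
		return credit
	}
	return listing
}

func main() {
	exp := &Explanation{}
	table := map[string]string{"CS371": "AMATH341"} // hypothetical cross-listing
	fmt.Println(creditOf("CS371", table, exp))
	fmt.Println(exp.Steps)
}
```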

33. Unknown Propagation

Unknown propagation is conservative.

An unparsed requirement that can affect the answer must remain visible as unknown.

A missing grade that can affect a grade threshold must remain visible as unknown.

A missing academic progress value that can affect a progress predicate must remain visible as unknown.

An unsupported condition family must remain visible as unknown.

Unknown propagation should not erase known satisfied children.

Unknown propagation should not erase known unsatisfied children.

The best user-facing result is often partial plus precise unknown reasons.

34. Opaque Requirement Semantics

Opaque source text is first-class data.

Opaque text is not parser failure garbage.

Opaque text is a requirement fragment the system cannot yet structure safely.

The direct evaluator must return unknown when an opaque fragment can affect the target result.

The explanation tree must show the opaque fragment and source reference.

The release gate may permit an index with opaque fragments under documented thresholds.

The solver layer must not silently assume opaque fragments are satisfied.

The solver layer must not silently assume opaque fragments are irrelevant unless applicability rules prove irrelevance.

35. Result Normalization

The result normalizer receives one or more engine_result values.

It produces one academic_result.

Normalization steps:

  1. validate the engine route metadata;
  2. validate fragment support declarations;
  3. collect source unknowns from the normalized problem;
  4. collect engine incompleteness;
  5. collect conflicts;
  6. merge explanation trees;
  7. assign academic status;
  8. assign completeness;
  9. attach source references;
  10. attach warnings.

The normalizer should be deterministic.

When two routes disagree, conservative conflict handling wins.

When one route is complete and another route is incomplete but non-conflicting, the complete route may determine status while the incomplete route contributes a warning.

When all routes are incomplete, the academic result is unknown or partial.
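The route-merging precedence can be sketched as follows. Status names and the first-complete-route rule are illustrative simplifications of the full normalization steps above:

```go
package main

import "fmt"

// Status is the academic display status assigned by the normalizer.
type Status int

const (
	StatusUnknown Status = iota
	StatusPartial
	StatusSatisfied
	StatusUnsatisfied
	StatusConflict
)

// RouteResult is the per-engine outcome handed to the normalizer.
type RouteResult struct {
	Route    string
	Status   Status
	Complete bool
}

// AcademicResult is the single user-facing result.
type AcademicResult struct {
	Status   Status
	Warnings []string
}

// mergeRoutes applies the precedence above: any conflict wins outright;
// otherwise the first complete route decides the status and each
// incomplete route contributes a warning; when no route is complete the
// status stays unknown.
func mergeRoutes(routes []RouteResult) AcademicResult {
	for _, r := range routes {
		if r.Status == StatusConflict {
			return AcademicResult{Status: StatusConflict}
		}
	}
	out := AcademicResult{Status: StatusUnknown}
	decided := false
	for _, r := range routes {
		switch {
		case r.Complete && !decided:
			out.Status = r.Status
			decided = true
		case !r.Complete:
			out.Warnings = append(out.Warnings, "incomplete route: "+r.Route)
		}
	}
	return out
}

func main() {
	res := mergeRoutes([]RouteResult{
		{Route: "direct", Status: StatusSatisfied, Complete: true},
		{Route: "sat", Status: StatusUnknown, Complete: false},
	})
	fmt.Println(res.Status == StatusSatisfied, res.Warnings)
}
```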

36. Caching

Index-only derived facts may be cached by index id.

Examples include:

  • prerequisite closure;
  • reverse unlock closure;
  • course-to-credential mention closure;
  • equivalence closure;
  • normalized requirement summaries.

State-dependent results may be cached only under private cache policy.

State-dependent cache keys must include:

  • index id;
  • student state version;
  • query request digest;
  • runtime policy version;
  • solver route version.

Cache hits must not hide stale index ids or migrated state ids.

The direct evaluator should be fast enough that caching is optional for v1 evaluation.
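A state-dependent cache key covering all five required components might be derived like this. The struct and digest scheme are a sketch, not the spec; the point is that dropping any component risks serving results for a stale index or migrated state:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// CacheKey carries every field that must invalidate a state-dependent
// cached result.
type CacheKey struct {
	IndexID       string
	StateVersion  string
	RequestDigest string
	PolicyVersion string
	RouteVersion  string
}

// Digest hashes all five components into a stable cache key.
func (k CacheKey) Digest() string {
	h := sha256.New()
	for _, part := range []string{k.IndexID, k.StateVersion, k.RequestDigest, k.PolicyVersion, k.RouteVersion} {
		h.Write([]byte(part))
		h.Write([]byte{0}) // separator prevents ambiguous concatenation
	}
	return hex.EncodeToString(h.Sum(nil))
}

func main() {
	a := CacheKey{"idx-2026-05", "s1", "q1", "p1", "r1"}
	b := a
	b.StateVersion = "s2"
	fmt.Println(a.Digest() != b.Digest()) // different state version, different key
}
```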

37. Observability

Solver logs should include:

  • request id;
  • index id;
  • state snapshot id;
  • problem kind;
  • target id;
  • selected routes;
  • route duration;
  • route completeness;
  • academic status;
  • unknown reason codes;
  • conflict reason codes.

Solver logs must not include secret state tokens.

Solver logs should avoid full grade histories unless the user has opted into debug capture.

Metrics should count unknowns and conflicts by reason.

Metrics should count route timeouts separately from semantic unknowns.

Metrics should track parser-driven unknowns so index quality work has feedback.

38. Fixture Strategy

The solver layer needs small fixtures before full-index tests.

Required fixture families:

  • single course completion;
  • all_of prerequisites;
  • any_of prerequisites;
  • choose prerequisites;
  • nested groups;
  • grade thresholds;
  • unit totals;
  • average thresholds;
  • academic progress requirements;
  • program enrollment requirements;
  • antirequisite conflict;
  • cross-listed credit identity;
  • opaque requirement unknown;
  • source conflict;
  • Boolean alternative feasibility;
  • numeric feasibility;
  • bounded planning;
  • package-style conflict explanation.

Every fixture should include:

  • source-like input;
  • normalized problem;
  • expected academic_result;
  • expected explanation highlights;
  • expected unknowns;
  • expected conflicts;
  • source references.

Fixture results should be stable across engine upgrades.

When a concrete engine changes behavior, the adapter must preserve the academic contract or explicitly update the fixture expectation with a documented reason.

39. Proof Obligations

Every adapter must have proof obligations.

The direct evaluator must prove structural evaluation soundness for supported requirement groups.

The Datalog adapter must prove derived relations correspond to the finite source relations it encodes.

The SAT adapter must prove Boolean satisfiability answers correspond to the encoded requirement fragment.

The SMT adapter must prove numeric encodings preserve grade, unit, and average semantics.

The CP adapter must prove returned plans satisfy all encoded constraints under stated horizon assumptions.

The package-resolver style adapter must prove each conflict explanation step maps to a requirement, state fact, or derived incompatibility.

The result normalizer must prove it does not upgrade incomplete evidence into a conclusive academic status.

These can be proof sketches in design docs and executable checks in fixtures.

40. Implementation Package Boundaries

The backend can structure solver code around packages such as:

  • solver/problem
  • solver/direct
  • solver/router
  • solver/result
  • solver/explain
  • solver/adapters/datalog
  • solver/adapters/sat
  • solver/adapters/smt
  • solver/adapters/cp
  • solver/adapters/resolver
  • solver/fixtures

This package sketch is not a mandatory final layout.

The important boundary is semantic.

The problem builder owns normalization.

The router owns route selection.

The adapters own engine-specific translation and execution.

The result package owns status mapping.

The explanation package owns stable frontend-facing explanation trees.

Fixtures own regression evidence.

41. Version 1 Slice

Version 1 should include:

  • normalized evaluation_problem;
  • normalized closure_problem for simple relation views if needed;
  • direct evaluator;
  • explanation tree;
  • source references;
  • unknown propagation;
  • conflict reporting for indexed antirequisites;
  • route metadata;
  • fixture set for direct evaluation;
  • stubs or interfaces for future adapters.

Version 1 should not include:

  • mandatory external SMT engine;
  • mandatory external CP engine;
  • asynchronous planning job service;
  • full package-resolver conflict search;
  • user-facing plan optimization.

The direct-evaluator-first slice establishes a correctness baseline.

Future engines can then be validated against it where their fragments overlap.

42. Later Slices

A Datalog slice can add relation closure and graph-heavy query speed.

A SAT slice can add Boolean feasibility and conflict cores.

An SMT slice can add numeric feasibility and grade what-if.

A CP slice can add bounded degree-CAD planning.

A package-resolver style slice can add richer alternative-path explanations.

Each slice should land with:

  • adapter fragment declaration;
  • fixtures;
  • route policy;
  • status mapping tests;
  • explanation tests;
  • performance guardrails.

43. Design Rules

Use operational names in code and schemas.

Do not name durable components after metaphors.

Do not expose raw engine variable names in public API responses.

Do not treat a route timeout as academic evidence.

Do not treat source unknown as operational failure.

Do not treat planning failure outside a bounded horizon as global impossibility.

Do not let visual graph projections define solver truth.

Do not let solver convenience erase source references.

Do not add a concrete solver dependency without an ADR or implementation spec.

Do not make the frontend responsible for combining academic statuses.

44. Open Specification Work

The backend API specs should define endpoint-level request and response schemas.

The solver implementation specs should define exact Go types.

The index schema specs should define serialized requirement representation.

The fixture specs should define file layout and expected result schema.

The planning specs should define asynchronous job behavior if planning becomes expensive.

The frontend specs should define rendering contracts for explanation_tree, unknowns, conflicts, graph projections, atlas layout, and accessibility fallback.

45. Summary

The solver layer is an academic contract first and an engine integration second.

The direct evaluator gives the project a deterministic baseline.

Datalog, SAT, SMT, CP, and package-resolver style adapters each have a natural role.

The router chooses routes.

The result normalizer protects users from engine-native ambiguity.

The explanation tree protects the frontend from solver internals.

The source reference contract protects academic trust.

The multi-engine design is justified only if every route remains subordinate to the shared academic_result contract.