Jos De Roo


The ARC Book

Answer • Reason • Check. ARC is a simple methodology for crafting small, trustworthy programs. Each case presented in this book is far more than a black box that spits out a result; it’s a concise story told in three parts. First comes the Answer to a specific question. This is followed by the Reason Why that answer is correct, articulated in everyday language and supported by the relevant identities, rules, or ideas. Finally, every case includes a Check—a concrete test designed to fail loudly if an assumption doesn’t hold or an edge case bites. The result is a computation with a complete, auditable trail: you can see precisely what was done, why it was valid, and how the page verifies its own work.

This ARC approach starts from three fundamental ingredients we can all recognize: Data, Logic, and a Question. From these, we compose a tiny, end-to-end accountable program. We summarize this habit as P3: Prompt → Program → Proof. Here, the “proof” isn’t merely ceremonial; it’s a practical validation formed by the union of the narrative explanation and the verification code the page carries with it. In short:

Proof = Reason Why + Check
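As a minimal sketch of what such a page can look like, here is a tiny self-contained program in Python. The scenario (a small ancestry question) and all names in it are illustrative assumptions, not an example taken from the book:

```python
# A minimal, self-contained ARC page: Data + Logic + Question -> Answer, Reason, Check.
# The ancestry scenario is illustrative, not from the book.

# Data: known facts.
parents = {"bart": "homer", "homer": "abe"}

# Logic: a rule -- ancestor is the transitive closure of parent.
def ancestors(person):
    out = []
    while person in parents:
        person = parents[person]
        out.append(person)
    return out

# Question: who are bart's ancestors?
answer = ancestors("bart")

# Reason Why: everyday language, citing the rule that was applied.
reason = ("abe is an ancestor of bart because abe is the parent of homer, "
          "homer is the parent of bart, and ancestor is the transitive "
          "closure of parent.")

# Check: a concrete test designed to fail loudly if an assumption breaks.
assert "homer" in answer and "abe" in answer, "transitivity violated"
assert "bart" not in answer, "nobody is their own ancestor here"

print("Answer:", answer)
print("Reason:", reason)
print("Check : PASSED")
```

Changing a single fact in `parents` and re-running the page immediately shows the consequences: the answer changes, the reason no longer matches, or the check fails loudly.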

This book aims to be welcoming. If you are a student, you should be able to follow the line of thought. If you are a practitioner, you should find the steps easy to audit. If you are simply curious, you should be able to tinker, change a value, and immediately watch the consequences unfold. Each page is self-contained, and every run is intended to be repeatable.

P3: Prompt → Program → Proof

Translating raw data into trusted, actionable insight remains a major hurdle. It can be difficult to be sure an AI model’s answer is correct, to audit its reasoning, or to automate processes with confidence. A method called P3 addresses this challenge by teaching a large language model to perform a specific, rigorous task. It transforms raw inputs, such as data, logic, and a specific question, into a single, self-contained program. This program is fully autonomous and, most importantly, is required to deliver a “triad of trust”: the final answer, a clear explanation of the reason why, and an independent check that validates the result, guarding against errors and hallucinations. In this system, the proof consists of both the explanation and the verification.

At its core, P3 is a pattern for question-directed program synthesis. You can think of it as giving a brilliant but literal programmer a complete lesson plan. You provide the task, all necessary materials, and a precise specification for the deliverable. The process involves formulating a prompt that instructs the model on what kind of program to write, declaring a precise question within that prompt, and providing the necessary facts and rules that govern the domain. The model is then instructed to write a self-contained program that ingests these inputs and produces the complete “answer, reason, check” output. The final deliverable isn’t just a code snippet; it’s a trustworthy and auditable artifact that can be executed in an automated pipeline, shared with auditors, and deployed with confidence.
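To make the lesson-plan analogy concrete, one possible shape for such a prompt is sketched below, assembled in Python. The wording, field names, and the invoice example are illustrative assumptions, not a prescribed format:

```python
# One possible shape for a P3 prompt; the wording and the invoice
# scenario are illustrative assumptions, not a prescribed format.
question = "Which invoices in the ledger are overdue as of 2024-07-01?"
data = "invoices.csv  # columns: id, amount, due_date"
logic = "An invoice is overdue when its due_date is before the as-of date."

prompt = f"""Write a single self-contained program.

Question: {question}
Data:     {data}
Rules:    {logic}

The program must print three things:
1. Answer     - the direct answer to the question.
2. Reason Why - a plain-language explanation citing the rules applied.
3. Check      - an independent test that fails loudly if the answer is wrong.
"""
print(prompt)
```

The point of the structure is the contract at the end: whatever the domain, the deliverable is the same “answer, reason, check” triad.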

This method stands out by blending the flexibility of generative AI with the rigor of symbolic systems. Every output is verifiable by design because it is a self-contained program with its own built-in test harness. Each execution produces both a result and an independent verification, moving beyond the “black box” paradigm. It uses the language model for what it does best—understanding intent and synthesizing code structure—while relying on formal logic to ensure the reasoning is explicit and explainable. The generated program is explicitly required to explain its reasoning, so you don’t just know what the answer is; you know why it’s the answer and have the means to prove it. By starting with a precise question, the model creates a concrete, repeatable procedure, turning the resulting program into a durable asset perfect for automation, compliance, and reproducible research.

The architecture is straightforward: inputs flow into the AI synthesizer, which emits a single program. Executing that program produces the three artifacts. This system rests on the principles that runtime verification is mandatory and that the primary output is a portable program. For performance-critical applications, an advanced “mixed computation” pattern can be used. This approach teaches the model to separate stable logic from dynamic data. The AI-guided synthesis acts as a “specializer,” converting declarative logic into a compact, highly efficient driver function. At runtime, this specialized driver is extremely fast, consuming only new facts (like a new user transaction) and applying the pre-compiled logic to emit the standard “answer, reason, check” triad. This preserves the core trust contract while dramatically improving speed, determinism, and auditability. The logic stays declarative, while the execution becomes small, fast, and predictable.
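The mixed-computation split can be sketched as follows; the fraud-flagging rule and all names here are hypothetical. The stable, declarative rules are specialized once into a compact driver, which then consumes only new facts at runtime:

```python
# Mixed computation, sketched: stable logic is specialized ahead of time
# into a fast driver; at runtime the driver sees only the new fact.
# The fraud-flagging rule is an illustrative assumption, not from the book.

RULES = {"limit": 1000, "blocked_countries": {"XX"}}  # stable, declarative

def specialize(rules):
    """Plays the 'specializer': folds the stable rules into a closure."""
    limit = rules["limit"]
    blocked = rules["blocked_countries"]

    def driver(txn):  # consumes only the new fact (one transaction)
        flagged = txn["amount"] > limit or txn["country"] in blocked
        answer = "flag" if flagged else "pass"
        reason = (f"amount {txn['amount']} vs limit {limit}; "
                  f"country {txn['country']} vs blocked {sorted(blocked)}")
        # Check: recompute the verdict independently from the inputs.
        recomputed = "flag" if (txn["amount"] > limit
                                or txn["country"] in blocked) else "pass"
        check = (recomputed == answer)
        return answer, reason, check
    return driver

driver = specialize(RULES)  # done once, ahead of time
answer, reason, check = driver({"amount": 1500, "country": "BE"})  # fast path
assert check
print(answer, "-", reason)
```

The rules stay declarative and editable in one place; the driver is small, fast, and still emits the full triad on every call.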

Adopting this workflow is an iterative process. It begins by clearly defining the question you need to answer. Next, you assemble the relevant data files, the logic that defines your operational rules, and a prompt that explains the task to the model. You then use the prompt to guide the AI in generating the self-contained program. After running the program, you confirm that the answer, reason, and check are all correct. As your data and logic evolve, you simply refine your inputs and re-run the synthesis step to create an updated, validated artifact.

The practical benefits include building unprecedented trust, as the “answer, reason, check” triad makes every output verifiable. This is essential for regulatory and compliance-driven environments. It also enables extreme automation by producing executables that integrate seamlessly into modern development pipelines. Furthermore, it lowers maintenance overhead because policies are maintained as declarative logic, not complex code. Subject matter experts can define operational logic in a high-level format, while the AI handles the complex task of translating it into efficient, verifiable code.
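In an automated pipeline, this loop can be closed by executing the generated artifact and refusing to proceed unless its own check passes. A minimal sketch, in which the embedded program text is merely a stand-in for the synthesizer’s real output:

```python
# Minimal pipeline gate: run the synthesized program, accept only if its
# own Check passes. The embedded program text stands in for the AI output.
import os
import subprocess
import sys
import tempfile

generated = """\
answer = 2 + 2
reason = "4, because 2 + 2 reduces to 4 under ordinary addition."
assert answer == 4, "check failed"
print("Answer:", answer)
print("Reason:", reason)
print("Check : PASSED")
"""

with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write(generated)
    path = f.name
try:
    result = subprocess.run([sys.executable, path],
                            capture_output=True, text=True)
    ok = result.returncode == 0 and "Check : PASSED" in result.stdout
finally:
    os.unlink(path)

print("pipeline gate:", "accept" if ok else "reject")
```

Because the artifact carries its own verification, the gate needs no knowledge of the domain: any failed assertion inside the program makes the run exit non-zero and the pipeline reject it.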

Examples and test cases

Each link below opens a self-contained page that presents the ARC triad in place.

Part A

Part B