Jos De Roo

Prompt → Program → Proof — P3

From Intent to Assurance

Translating raw data into trusted, actionable insight remains a major hurdle. How can we be sure an LLM’s answer is correct? How can we audit its reasoning? How can we automate this process with confidence?

P3 is a method that addresses this challenge by teaching a Large Language Model (LLM) to perform a specific, rigorous task. It transforms raw inputs—Data, Logic, and a Question—into a single, self-contained program. This program is fully autonomous and, most importantly, is required to deliver a “triad of trust”:

  1. The final Answer.
  2. A clear explanation of the Reason Why.
  3. An independent Check that validates the result, guarding against errors and hallucinations.

💡 In P3, Proof = Reason Why + Check. The program must both explain its conclusion and verify it.
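As a concrete illustration, the output shape of such a program for a toy compliance question could look like the following minimal sketch (the data, the limit rule, and all names here are illustrative, not prescribed by P3):

```python
# Minimal sketch of the "triad of trust": Answer, Reason Why, and an
# independent Check. The facts and the rule are hypothetical examples.

DATA = {"amount": 950, "currency": "EUR"}   # the Data (facts)
LIMIT = 1000                                # the Logic (rule): amounts below the limit are compliant

def answer():
    """The final Answer to the Question 'Is this transaction compliant?'"""
    return DATA["amount"] < LIMIT

def reason_why():
    """A clear explanation of why the Answer holds."""
    return (f"amount {DATA['amount']} {DATA['currency']} is below the "
            f"limit of {LIMIT}, so the transaction is compliant")

def check():
    """Independent verification: recompute the result a different way
    (here, by counting rule violations) and compare with the Answer."""
    violations = sum(1 for v in [DATA["amount"]] if v >= LIMIT)
    return (violations == 0) == answer()

if __name__ == "__main__":
    print("Answer:", answer())
    print("Reason:", reason_why())
    print("Check :", "PASS" if check() else "FAIL")
```

The point of the Check is that it does not merely re-run the same computation; it validates the Answer by an independent route, so a hallucinated result would fail at runtime.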


P3 in a Nutshell: Question-directed Program Synthesis

At its core, P3 is a pattern for question-directed program synthesis. Think of it as giving a brilliant but literal programmer a complete lesson plan. You provide the task, all necessary materials, and a precise specification for the deliverable.

The process is straightforward:

  1. Formulate a Prompt that instructs the LLM on what kind of program to write.
  2. Declare a precise Question within that prompt (e.g., “Is this transaction compliant?” or “Find all ancestors of person X.”).
  3. Provide the necessary Data (the facts) and the Logic (the rules) that govern the domain.
  4. Instruct the LLM to write a self-contained program that ingests these inputs and produces the complete “answer, reason, check” output.

The final deliverable isn’t just a code snippet; it’s a trustworthy and auditable artifact you can execute in a CI/CD pipeline, share with auditors, and deploy with confidence.
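For the ancestors question mentioned above, the deliverable could resemble this sketch (the parent facts, names, and helper functions are hypothetical; a real synthesized program would embed your own Data and Logic):

```python
# Hypothetical self-contained program for the Question
# "Find all ancestors of person X". Facts and names are illustrative.

DATA = {("alice", "bob"), ("bob", "carol"), ("carol", "dave")}  # (parent, child) facts
QUESTION_X = "dave"

def ancestors(person):
    """Logic: an ancestor is a parent, or a parent of an ancestor
    (transitive closure over the parent facts)."""
    result, frontier = set(), {person}
    while frontier:
        parents = {p for (p, c) in DATA if c in frontier}
        frontier = parents - result
        result |= parents
    return result

ANSWER = ancestors(QUESTION_X)

def reason_why():
    return (f"derived by repeatedly applying the parent rule "
            f"starting from {QUESTION_X}")

def check():
    """Independent Check: confirm every reported ancestor actually
    reaches X through some chain of parent links."""
    def reaches(a, x):
        kids = {c for (p, c) in DATA if p == a}
        return x in kids or any(reaches(k, x) for k in kids)
    return all(reaches(a, QUESTION_X) for a in ANSWER)

print("Answer:", sorted(ANSWER))
print("Reason:", reason_why())
print("Check :", "PASS" if check() else "FAIL")
```

Because the program carries its own facts, rules, and verification, it can be versioned, re-run, and handed to an auditor as a single file.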


Key Advantages: A Hybrid Approach

P3 stands out by blending the flexibility of generative AI with the rigor of symbolic systems: the LLM supplies flexible synthesis from natural-language prompts, while the generated program’s explicit Logic and mandatory runtime Check supply the symbolic rigor.


Architecture at a Glance

Inputs—Data, Logic, and Question—flow into an LLM-based synthesizer that emits a single, self-contained program. Executing that program produces three artifacts: the Answer, a Reason Why that explains it, and a Check that verifies it.

This architecture rests on two principles:

  1. Runtime verification is mandatory.
  2. The primary output is a portable program—easy to manage, version, and run anywhere.

Advanced Pattern: High-Performance Mixed Computation

For performance-critical applications, P3 supports an advanced “mixed computation” pattern, inspired by foundational computer science principles [1]. This approach teaches the LLM to separate stable Logic from dynamic Data.

The LLM-guided synthesis acts as a “specializer,” converting declarative Logic into a compact, highly efficient Driver function.
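A rough sketch of what that specialization could look like (all names and rules here are illustrative): a generic interpreter walks the declarative rules on every call, while the residual Driver has the stable Logic baked in as straight-line code and only the dynamic Data left as input.

```python
import operator

# Stable Logic, known at synthesis time: declarative rules the
# interpreter must walk on every call. (Rules are hypothetical.)
RULES = [("amount", "<", 1000), ("currency", "==", "EUR")]

def interpret(data):
    """Generic interpreter: flexible, but pays the cost of rule
    dispatch at runtime for every input."""
    ops = {"<": operator.lt, "==": operator.eq}
    return all(ops[op](data[field], value) for (field, op, value) in RULES)

def driver(data):
    """Compact Driver a specializer could emit instead: the rules are
    baked in as straight-line code, with no rule walking at runtime."""
    return data["amount"] < 1000 and data["currency"] == "EUR"

# Both agree on every input; the Driver is the efficient residual program.
sample = {"amount": 950, "currency": "EUR"}
assert interpret(sample) == driver(sample)
```

This is the classic partial-evaluation trade: the interpreter stays general, while the Driver trades that generality for speed by committing to the Logic at synthesis time.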


Getting Started: The P3 Workflow

Adopting P3 is an iterative process:

  1. Define Your Question: Start by clearly stating the decision, conclusion, or question you need to answer.
  2. Assemble Your Inputs: Gather the relevant Data files, the Logic that defines your operational rules, and a Prompt that explains the task to the LLM.
  3. Synthesize the Program: Use the prompt to guide the LLM in generating the self-contained program.
  4. Execute and Validate: Run the program. Confirm that the Answer, Reason Why, and Check are correct.
  5. Iterate and Harden: As your data and logic evolve, simply refine your inputs and re-run the synthesis step to create an updated, validated artifact.
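Step 4 is easy to automate. For instance, a small CI gate could run the synthesized program and fail the build unless its self-check passes (a sketch only: the `validate` helper and the `Check: PASS` output convention are assumptions, not part of P3):

```python
# Hypothetical CI gate: execute a synthesized P3 program and require
# that it exits cleanly and reports a passing Check.
import subprocess
import sys

def validate(program_path):
    """Run the program; return (ok, output). 'ok' requires both a zero
    exit code and a 'Check: PASS' line in the program's output."""
    proc = subprocess.run([sys.executable, program_path],
                          capture_output=True, text=True)
    ok = proc.returncode == 0 and "Check: PASS" in proc.stdout
    return ok, proc.stdout

if __name__ == "__main__" and len(sys.argv) > 1:
    ok, output = validate(sys.argv[1])
    print(output)
    sys.exit(0 if ok else 1)
```

Wired into a pipeline, this makes the “triad of trust” enforceable: an artifact whose Check fails never ships.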

Why P3 Matters: Practical Benefits

[1] Ershov, A. P. (1982). Mixed Computation: Potential Applications and Problems for Study. Theoretical Computer Science, 18, 41–67.