What Eyeling could bring to Agent Spaces:
Symbolic guardrails for agents
The Agent Spaces paper explicitly points toward symbolic rules, constraints, policies, and planning combined with LLMs. Eyeling already has hard “inference fuses”: a rule concluding false fails the run, which is useful for “never allow this” policies.
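Such a fuse can be sketched in N3 with a rule whose conclusion is false; the policy vocabulary below is invented for illustration and is not part of Eyeling:

```n3
@prefix : <http://example.org/policy#>.

# Hypothetical facts an agent run might assert.
:agentA :uses :personalData.
:agentA :purpose :advertising.

# Inference fuse: using personal data for advertising is never allowed.
# If the antecedent matches, the rule concludes false and the run fails.
{ ?agent :uses :personalData. ?agent :purpose :advertising. } => false.
```

Because the fuse fires on any matching derivation, "never allow this" policies cannot be argued around by an upstream LLM: the run simply fails.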
Policy-aware data-space decisions
Agent Spaces will need answers like: “May this agent use this data for this purpose?”, “Is this action permitted under this contract?”, “Which facts justify this recommendation?” Eyeling’s N3 model keeps facts, rules, checks, and answers together in one artifact, which is useful for governed, inspectable decisions.
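One way that single artifact can look, with facts, a contract, and a permission rule side by side — the vocabulary here is an assumption for illustration, not a fixed data-space ontology:

```n3
@prefix : <http://example.org/dataspace#>.

# Facts: the contract binding this agent, and the request it makes.
:agentA :boundBy :contract1.
:contract1 :permitsPurpose :fraudDetection.
:request1 :byAgent :agentA.
:request1 :forPurpose :fraudDetection.

# Rule: a request is permitted when its purpose is covered by the agent's contract.
{ ?r :byAgent ?a. ?r :forPurpose ?p. ?a :boundBy ?c. ?c :permitsPurpose ?p. }
  => { ?r :permitted true. }.
```

A reasoner run over this artifact derives `:request1 :permitted true.`, and the rule itself documents which facts justify that conclusion.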
A deterministic checker beside LLM agents
Eyeling’s handbook explicitly frames it as a “meaning boundary” for LLM workflows: the LLM can draft/refactor N3, but Eyeling decides what follows. That is exactly the division of labor agent spaces will likely need: LLMs plan and communicate; Eyeling verifies, derives, blocks, or explains.
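A toy stand-in for that division of labor — plain JavaScript, not Eyeling’s actual API — where the LLM proposes actions and a deterministic checker decides whether each one may run:

```javascript
// Toy stand-in for the checker role (not Eyeling's actual API):
// the LLM proposes actions; this deterministic function verifies them
// against hard policy rules and either blocks them or lets them pass.
const policy = [
  { id: "no-ads", deny: (a) => a.data === "personal" && a.purpose === "advertising" },
  { id: "contract", deny: (a) => !a.contractPermits.includes(a.purpose) },
];

function checkAction(action) {
  for (const rule of policy) {
    if (rule.deny(action)) {
      return { allowed: false, reason: `blocked by rule ${rule.id}` };
    }
  }
  return { allowed: true, reason: "no policy rule matched" };
}

// An LLM-drafted action is only executed if the checker allows it.
const proposed = { data: "personal", purpose: "advertising", contractPermits: ["fraudDetection"] };
const verdict = checkAction(proposed); // deterministic: same input, same verdict
```

The point is the shape, not the rules: the planner can be stochastic, but the gate in front of every action is a pure function of the action and the policy.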
Explainable “insight refinery” instead of raw data sharing
Eyeling’s ARC pattern (Answer, Reason Why, Check) maps well to data spaces becoming agent spaces. Instead of exposing all raw data to agents, Eyeling can derive a narrow, purpose-limited, auditable insight, show why it follows, and check constraints.
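The ARC shape can be illustrated with a toy insight derivation in plain JavaScript (not Eyeling’s output format): the agent receives an aggregate answer, the facts that justify it, and a constraint check — never the raw records.

```javascript
// Toy illustration of the Answer / Reason Why / Check shape (not Eyeling's format):
// derive a narrow insight from raw records instead of sharing the records themselves.
const records = [
  { id: 1, flagged: true },
  { id: 2, flagged: false },
  { id: 3, flagged: true },
];

function refineInsight(rows, purpose) {
  const flaggedRows = rows.filter((r) => r.flagged);
  return {
    answer: { flaggedCount: flaggedRows.length },              // the narrow, purpose-limited insight
    reasonWhy: flaggedRows.map((r) => `record ${r.id} is flagged`), // justification, fact by fact
    check: purpose === "fraudDetection",                       // constraint: only this purpose may ask
  };
}

const insight = refineInsight(records, "fraudDetection");
// The agent sees the insight and its justification, not the raw rows.
```

An auditor can replay the same derivation and get the same answer, reasons, and check result, which is what makes the refinery auditable.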
Portable JavaScript reasoning layer
Because Eyeling is written in JavaScript, offers CLI/npm usage and browser- and Node-oriented bundles, and supports custom builtins, it could be embedded in agent runtimes, web apps, sandboxed policy engines, or data-space connectors.