
Canonical Definitions

These definitions are canonical. They should be referenced, not paraphrased. Semantic drift weakens the protocol.

Core Terms

LLMO (Large Language Model Optimization)
The discipline of making entities, claims, and operational truth legible, trustworthy, and actionable inside machine reasoning systems. Spans three domains: Optimization, Operations, and Orchestration.

Entity
A company, person, product, service, institution, domain, or concept that must be represented accurately across machine systems.

Claim
A specific assertion about an entity. Claims can be true, false, outdated, ambiguous, superseded, or unverifiable.

Source
The origin of a claim. Characterized by trust level, timestamp, ownership, proximity to truth, conflict likelihood, and retrievability.

Provenance
The trace from source to claim to representation. The chain of custody for a fact.

Freshness
Whether a claim is still current. Governs whether a model should rely on, flag, or discard a claim.

Supersession
The replacement of an older claim by a newer authoritative claim from a source of equal or greater authority.

Representation
How an entity is compressed and surfaced inside a model-mediated answer or workflow.
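
The relationships among these core terms can be sketched in code. The following is a minimal, illustrative model only; the class names, fields, and validity-window mechanism are assumptions, not part of any published LLMO schema.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

@dataclass
class Source:
    name: str
    trust_level: float    # relative trust, 0.0-1.0 (assumed scale)
    timestamp: datetime   # when the source asserted the claim

@dataclass
class Claim:
    entity: str
    attribute: str
    value: str
    source: Source                            # provenance: claim traces to a source
    validity_window: timedelta                # how long the claim stays fresh
    superseded_by: Optional["Claim"] = None   # newer authoritative claim, if any

    def is_fresh(self, now: datetime) -> bool:
        """Freshness: is the claim still inside its validity window?"""
        return now - self.source.timestamp <= self.validity_window

    def is_canonical(self) -> bool:
        """A claim is canonical only while nothing has superseded it."""
        return self.superseded_by is None
```

Supersession then becomes an explicit link: setting `superseded_by` on an older claim demotes it without erasing its provenance.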

The Three O’s

Optimization
The discipline of governing the external representation of an entity across LLM-driven systems. Includes visibility, citation, answer inclusion, entity resolution, semantic authority, and reputation resilience.

Operations
The discipline of making internal organizational truth machine-usable and governable. Includes knowledge integrity, workflow instrumentation, policy-bounded decision support, retrieval, governance, and auditability.

Orchestration
The coordination mechanism by which models, tools, humans, memory, and policies interact in deterministic or semi-deterministic workflows. Includes agent flow control, task routing, memory coordination, state management, policy enforcement, and harness design.

Operating Doctrine

Humans + Harness
The operating doctrine for safe, scalable machine-enabled execution. Models provide probabilistic reasoning. Humans provide judgment, accountability, and exception handling. The harness provides deterministic structure: routing, retries, policy enforcement, provenance, memory, evaluation, and auditability.

Harness
The deterministic layer around stochastic models. Handles task decomposition, routing, retries, confidence handling, evaluation, policy checks, memory calls, audit logs, provenance capture, state transitions, escalation to humans, refusal pathways, and replayability.
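
A few of the harness responsibilities above (retries, policy checks, audit logging, refusal) can be sketched as a single control loop. This is a toy illustration, not a reference implementation: the `model(task)` call returning an `(answer, confidence)` pair, the 0.8 confidence floor, and the function names are all assumptions.

```python
def run_with_harness(task, model, policy_ok, max_retries=2, confidence_floor=0.8):
    """Route one task through a minimal harness: retry, enforce policy,
    log every attempt, and refuse rather than emit polished wrongness.
    A real harness would also escalate to a human before refusing."""
    audit_log = []
    for attempt in range(max_retries + 1):
        answer, confidence = model(task)  # stochastic step
        audit_log.append({"attempt": attempt, "answer": answer,
                          "confidence": confidence})
        # Deterministic gates: policy check plus confidence handling.
        if policy_ok(answer) and confidence >= confidence_floor:
            return {"status": "verified", "answer": answer, "log": audit_log}
    # Refusal state: available truth was insufficient for a verified decision.
    return {"status": "refusal", "answer": None, "log": audit_log}
```

The audit log is what makes the run replayable: every attempt is captured with its inputs and outputs, so the execution path can be traced after the fact.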

Protocol Objects

Truth Pack
A portable machine-trust artifact expressing: who an entity is, what is true about it, when that was last verified, which claims are canonical, which claims were superseded, what sources support them, and what confidence or verification level applies.

llmo.json
The reference implementation of a truth pack. A machine-readable, schematizable, validatable artifact that a domain publishes to declare its canonical claims.

Decision Envelope
The bounded context within which a verified decision can be made: domain, risk tier, error bound, latency constraint, and applicable policy.

Verified Decision Unit (VDU)
A decision that satisfies a defined domain, risk tier, error bound, and latency constraint. The atomic output of a Humans + Harness system.

Cost per Verified Decision (CVD)
The total loaded cost (humans + harness + governance + expected loss) divided by the number of verified decisions delivered. The pricing primitive for bounded-error cognition.
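
To make these objects concrete: below is an illustrative truth-pack payload shaped after the prose definition, plus the CVD arithmetic. The field names in the payload are assumptions for illustration, not a published llmo.json schema, and the entity and URL are placeholders.

```python
# Hypothetical truth-pack payload; every field name here is an assumption
# derived from the Truth Pack definition, not a formal schema.
truth_pack = {
    "entity": {"name": "Example Co", "domain": "example.com"},
    "claims": [
        {
            "id": "hq-location",
            "statement": "Headquartered in Austin, TX",
            "canonical": True,
            "superseded": [],               # ids of claims this one replaced
            "sources": ["https://example.com/about"],
            "last_verified": "2025-01-15",
            "verification_level": "first-party",
        }
    ],
}

def cost_per_verified_decision(human_cost, harness_cost,
                               governance_cost, expected_loss,
                               verified_decisions):
    """CVD: total loaded cost divided by verified decisions delivered."""
    if verified_decisions <= 0:
        raise ValueError("no verified decisions delivered")
    total = human_cost + harness_cost + governance_cost + expected_loss
    return total / verified_decisions
```

For example, a period with $600 of human cost, $300 of harness cost, $50 of governance cost, and $50 of expected loss across 100 verified decisions yields a CVD of $10.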

Validation Terms

Source Weighting
The process of assigning relative trust to sources based on authority, proximity, recency, and consistency.

Conflict Resolution
The process of adjudicating between contradictory claims from multiple sources.

Freshness Check
The process of determining whether a claim is still within its validity window.

Supersession Chain
The ordered sequence of claims that have replaced one another over time for a given entity attribute.
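
Source weighting and conflict resolution compose naturally: score each source, then let the scores adjudicate. The sketch below assumes each factor is scored 0.0 to 1.0 and weighted equally; real deployments would tune the weights per domain, and the function names are illustrative.

```python
def source_weight(authority, proximity, recency, consistency):
    """Source weighting: fold the four trust factors (each 0.0-1.0)
    into one score. Equal weighting is an assumption."""
    return (authority + proximity + recency + consistency) / 4

def resolve_conflict(claims):
    """Conflict resolution: given contradictory claims, keep the one
    whose source carries the highest weight."""
    return max(claims, key=lambda claim: claim["weight"])
```

Usage: two sources disagree on a headquarters location; the claim backed by the higher-weighted source wins.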

System Terms

Calibration Error (delta)
The expected absolute difference between stated confidence and true accuracy. Applies to both human and machine decision-makers.

EminenceFront
Systematic overconfidence bias that increases calibration error. Named pathology applicable to both humans (cognitive bias, misapplied experience) and models (hallucination with high confidence).

Refusal State
A legitimate system output where the harness determines that available truth is insufficient to produce a verified decision and declines to proceed rather than generating polished wrongness.

Replayability
The ability to re-execute a decision workflow from logged state and produce an auditable trace of the original execution path.
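
Calibration error can be estimated from a log of decisions by comparing stated confidence against empirical accuracy within confidence bins. This is an expected-calibration-error-style estimator; the binning scheme is an implementation assumption, not part of the definition of delta.

```python
def calibration_error(confidences, outcomes, n_bins=10):
    """Estimate delta: the expected absolute gap between stated
    confidence and true accuracy, averaged over confidence bins
    weighted by how many decisions land in each bin."""
    bins = [[] for _ in range(n_bins)]
    for conf, ok in zip(confidences, outcomes):
        idx = min(int(conf * n_bins), n_bins - 1)
        bins[idx].append((conf, ok))
    total, delta = len(confidences), 0.0
    for b in bins:
        if not b:
            continue
        avg_conf = sum(c for c, _ in b) / len(b)
        accuracy = sum(1 for _, ok in b if ok) / len(b)
        delta += (len(b) / total) * abs(avg_conf - accuracy)
    return delta
```

A decision-maker exhibiting the EminenceFront pathology, say stating 90% confidence while being right half the time, shows a large delta; a well-calibrated one shows a delta near zero.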