Splendor
An AI Kernel for Self-Managed Neuro-Symbolic Agents
Kernel-grade runtime for persistent, governable agent loops.
Splendor in context
Splendor is an open-source, kernel-grade runtime (Rust core + Python interfaces) that turns autonomous agents from ad-hoc application code into first-class compute. It provides the primitives to run persistent, governable agent loops—with explicit state, feedback/reward channels, neural policies, symbolic constraints, and verified action boundaries—so agents can operate safely and continuously on one machine or across a fleet.
- Runs on top of Unix-based systems (Linux/macOS and other Unix-like environments).
- Not a bare-metal OS kernel (see FAQ).
- Built to augment modern neural AI systems, not replace them.
- Built to define and enforce agent primitives for autonomy, coordination, and long-term evolution (see Primitives).
Today's systems have OS primitives—agents don't.
Modern operating systems standardized primitives like processes, threads, memory, IPC, permissions, and scheduling. That shared substrate is a major reason we can build reliable software at scale.
Autonomous AI systems don't have an equivalent foundation. “Agents” are typically assembled ad-hoc in user space: a model, a prompt, a vector store, a tool wrapper, a planner, a retry loop—plus bespoke glue for safety and observability. The result is fragmented, hard to verify, and difficult to run consistently across devices or across teams. There is no shared substrate for what agents must reliably do:
- Perceiving (kernel adapters that normalize sensors/tools into structured percepts).
- Deciding (hybrid routing across neural generalization and symbolic control).
- Acting (kernel adapters that execute actions through verified boundaries).
- Learning from feedback (reward and evaluation channels).
- Coordinating (multi-tenancy, messaging, and resource allocation).
- Remaining constrained by explicit rules and guarantees.
Splendor aims to provide the missing kernel-level primitives for agents, so we can build the next 100 years of autonomous systems on a stable, auditable foundation.
A systems layer for autonomous agents
Splendor augments modern neural AI systems and enforces the primitives for autonomy, coordination, and long-term evolution.
System space
Stable, enforceable kernel primitives.
Splendor runtime
Kernel-grade orchestration for agent loops.
AI space
Models, planners, tools, and domain code.
What Splendor is:
- A Rust kernel runtime for agents that supplies primitives most agent frameworks lack (see Primitives).
- Interpreters as managed compute: Splendor can host one or more sandboxed interpreter instances per agent or tenant.
- A runtime for closed-loop autonomy: the kernel standardizes how percepts → policies → actions flow, and how reward/feedback channels are recorded and routed back into state and policies.
- Distributed by design: agents can run across machines, exchange structured messages, and coordinate as a fleet, while tenancy and constraints remain enforceable across boundaries.
What Splendor is not:
- A replacement for Unix or your OS (it runs on top of Unix-based systems).
- A new neural architecture (bring your models).
- A single agent framework that dictates how you build (bring your stack).
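The closed-loop flow described above (percepts → policies → actions, with feedback routed back into state) can be sketched in a few lines of Python. Everything here—`Percept`, `Action`, `LoopState`, `step`—is a hypothetical illustration of the loop's shape, not Splendor's actual API:

```python
from dataclasses import dataclass, field
from typing import Any, Callable

@dataclass
class Percept:
    source: str      # where the observation came from (sensor, tool, ...)
    payload: Any     # normalized observation data

@dataclass
class Action:
    name: str
    args: dict

@dataclass
class LoopState:
    history: list = field(default_factory=list)     # explicit, inspectable state
    reward_log: list = field(default_factory=list)  # feedback channel record

def step(state: LoopState,
         percept: Percept,
         policy: Callable[[Percept], Action],
         execute: Callable[[Action], Any],
         reward_fn: Callable[[Any], float]) -> LoopState:
    """One loop iteration: percept -> policy -> action -> feedback."""
    action = policy(percept)          # neural or symbolic decision
    outcome = execute(action)         # would run behind a verified boundary
    reward = reward_fn(outcome)       # reward/evaluation channel
    state.history.append((percept, action, outcome))
    state.reward_log.append(reward)   # routed back into persistent state
    return state
```

The point of the sketch is that the loop's record keeping is part of the runtime's contract, not an afterthought left to each application.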
Splendor is a systems layer that makes agents more consistent to run and reason about. It doesn't claim to solve autonomy by itself; it enforces the primitives that autonomy is built on.
Neuro-symbolic by construction
Splendor treats “neuro-symbolic” as a runtime property, not an architecture choice you bolt on later.
In Splendor, an agent loop is built from four cooperating parts—each with explicit interfaces and enforcement points:
- Learning provides generalization.
- Symbolic structure provides control.
- Verification provides guarantees.
- Feedback provides adaptation.
The point is simple
Splendor’s job is to make these pieces interoperable and enforceable at the runtime level, without prescribing a single model, planner, or training method.
Agents as first-class compute
Operating systems separate kernel space (enforced invariants) from user space (fast-changing applications). Splendor applies the same idea to autonomy: separate what must be stable and governable from what should be iterable and experimental.
System space (enforced by the kernel):
- Tenancy/isolation, resource limits, and scheduling.
- Action gating + verification and constraint enforcement.
- Messaging, audit/observability, and governance.
AI space (iterated by teams):
- Models, policies, planners/solvers, and tools.
- Reward/evaluation logic, memory, and domain code.
- Rapid iteration without breaking system invariants.
Adapters sit at the boundary: they translate environments into structured percepts, expose actuators/actions, and attach constraints and verification. This makes autonomy composable: teams can share adapters and primitives while evolving their models and agent logic independently.
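One way such an adapter might look in Python. The class, methods, and paths below are invented for illustration and are not part of Splendor; the point is the boundary shape: raw input in, structured percept out, actuators with constraints attached:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Percept:
    source: str
    payload: dict

class FileSystemAdapter:
    """Hypothetical adapter: normalizes raw tool output into structured
    percepts and exposes actuators with a constraint checked at the edge."""

    def __init__(self, root: str):
        self.root = root  # constraint: only paths under this root are readable

    def perceive(self, raw_listing: list[str]) -> Percept:
        # Translate a raw directory listing into a structured percept.
        return Percept(source=f"fs:{self.root}",
                       payload={"entries": sorted(raw_listing)})

    def actions(self) -> dict[str, Callable]:
        # Each actuator carries an enforceable constraint at the boundary.
        def read(path: str) -> str:
            if not path.startswith(self.root):
                raise PermissionError(f"{path} is outside {self.root}")
            return f"<contents of {path}>"
        return {"read": read}
```

Because the constraint lives in the adapter rather than in a prompt, two teams sharing this adapter get the same enforcement even with entirely different models behind it.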
Runs on top of Unix-based systems
Splendor is a “kernel for AI” in the architectural sense: it supplies the runtime primitives for agents, while Unix remains the OS.
Runtime kernel in user space
Unix remains the OS.
Splendor runs in user space and leans on the host for drivers, filesystems, networking, and hardware access.
The Rust core manages tenancy, state graphs, scheduling, messaging, and action verification, and delegates low-level device and I/O to the underlying system.
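As a rough illustration of per-tenant isolation and quotas (all names here are hypothetical, sketched in Python rather than the Rust core):

```python
from dataclasses import dataclass, field

@dataclass
class Quota:
    max_actions: int  # a simple resource limit per tenant

@dataclass
class Tenant:
    """Hypothetical tenant context: its own state and its own quota,
    so one tenant exhausting resources cannot starve another."""
    name: str
    quota: Quota
    state: dict = field(default_factory=dict)  # isolated state graph stand-in
    actions_used: int = 0

    def charge(self) -> None:
        # Enforce the quota before an action is allowed to proceed.
        if self.actions_used >= self.quota.max_actions:
            raise RuntimeError(f"tenant {self.name} exceeded its action quota")
        self.actions_used += 1
```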
Distributed by default
Splendor treats “one machine” as an implementation detail.
- Multi-tenant isolation: each tenant/agent runs inside an isolated runtime context with its own state graph, quotas, and policy boundaries.
- Mobility + coordination: agents communicate via structured messages and can be scheduled on different machines while retaining identity and state continuity.
- Fleet feedback: reward/feedback streams and traces can be aggregated across agents to support shared evaluation and learning—without dropping system-level constraints.
- Boundary-aware safety: actions are mediated at execution edges (tools/devices/services) through verification gates before side effects occur.
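A minimal sketch of such a verification gate with pre/post-conditions, in Python. `gated`, `GateError`, and the example actuator are assumptions for illustration, not Splendor's interface:

```python
from typing import Any, Callable

class GateError(Exception):
    """Raised when an action is blocked at the execution edge."""

def gated(precondition: Callable[..., bool],
          postcondition: Callable[[Any], bool]):
    """Wrap an actuator so the side effect only occurs after the
    precondition holds, and the result is checked before release."""
    def wrap(actuator: Callable[..., Any]) -> Callable[..., Any]:
        def run(*args, **kwargs):
            if not precondition(*args, **kwargs):
                raise GateError("precondition failed; action blocked")
            result = actuator(*args, **kwargs)  # the mediated side effect
            if not postcondition(result):
                raise GateError("postcondition failed; result rejected")
            return result
        return run
    return wrap

# Example: a transfer limited to a per-call quota of 100.
@gated(precondition=lambda amount: 0 < amount <= 100,
       postcondition=lambda receipt: receipt["ok"])
def transfer(amount: int) -> dict:
    return {"ok": True, "amount": amount}
```

The gate runs before the side effect, which is the distinction the text draws between mediation at execution edges and post-hoc monitoring.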
The primitives we intend to standardize
Splendor’s goal is to make agent-building look less like glue code and more like building on an OS.
Perception
- Perceptors (sensor + tool observation interfaces)
- Embeddings / representation stores
- Multi-modal encoders
- World-model and environment schemas
Learning and feedback
- Policy networks (pluggable)
- Reward functions + evaluators
- Value estimators / critics
- Feedback channels (human, automated, environment-derived)
Symbolic control
- Constraint solvers (hard/soft constraints)
- Planners (symbolic / hybrid)
- Rules and invariants (“never do X”, “always require Y”)
- Proof/trace artifacts where feasible
Action
- Actuators (tool/action interfaces)
- State machines (structured control)
- Action verifiers (pre/post-conditions)
- Rollback / compensation patterns
Safety and governance
- Guardrails as enforceable runtime objects (not just prompts)
- Alignment signals (first-class telemetry + reward shaping hooks)
- Kill switches / circuit breakers
- Audit logs and reproducibility primitives
Coordination
- Message passing (typed, traceable)
- Consensus & shared state mechanisms
- Resource allocation / scheduling (agent-aware)
- Multi-device identity, permissions, and trust boundaries
These primitives are meant to augment current AI systems—especially neural nets—by adding structure, constraints, and operational guarantees.
Built for reliable autonomy
Splendor targets the bottlenecks you hit when moving from demos to persistent, deployed autonomy.
For researchers and engineers
Persistent, deployed autonomy needs a substrate.
- Operational correctness: a precise record of what ran, with what state, what feedback, and what constraints—so behavior is debuggable and reproducible.
- Coordination at scale: multi-agent, multi-tenant, multi-device execution with shared messaging, resource control, and consistent boundaries.
- Verifiable execution edges: explicit gating for actions and side effects (permissions, invariants, quotas), not post-hoc monitoring.
- Standard primitives: stable interfaces for percepts, actions, state, feedback, and constraints—so components can interoperate across teams and time.
For everyone else
Dependable systems in the real world.
- Clear inputs and actions (what the system can see and do).
- Hard safety boundaries (what it must not do).
- Traceable decisions (what happened and why).
- Coordination across devices without turning into a mess.
If agents are going to run continuously and safely, they need an OS-like substrate for agent loops—not just higher-level frameworks.
It doesn't replace neural networks—it provides the runtime structure around them.
Go to Docs
Start with the primitives and architecture overview.
Quick answers
The essentials, without the fluff.
- Is Splendor a real OS kernel? Does it run on bare metal?
- Why Rust?
- Why interpreters too (e.g., Python)?
- Does Splendor replace existing agent frameworks?
- What does “multi-device” mean?
- What’s “system space vs AI space”?
- What’s the end goal?
Help design the blueprint
If you work on agent runtimes, neuro-symbolic systems, verification, distributed systems, RL/reward modeling, safety/alignment, or operating systems, we’d love your help standardizing the primitives that matter.
Contact: contact@splendor-os.org