Projects
Open-source and proprietary projects advancing the security, auditability, and trustworthiness of AI agent ecosystems.
Model Package Protocol (MPP)
An open specification for secure, signed, and sandboxed AI tool artifacts. MPP brings container-like isolation to AI agent tool execution with WASM sandboxing, Ed25519 signing, and fine-grained permission boundaries.
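To make the signing flow concrete, here is a minimal stdlib-only sketch of manifest signing and verification. The manifest fields are illustrative, and HMAC-SHA256 stands in for Ed25519 so the example needs no third-party library; a real MPP implementation would use an actual Ed25519 keypair.

```python
import hashlib
import hmac
import json

def manifest_digest(manifest: dict) -> bytes:
    # Canonicalize the manifest (sorted keys, no extra whitespace) before
    # hashing, so signer and verifier hash identical bytes.
    blob = json.dumps(manifest, sort_keys=True, separators=(",", ":")).encode()
    return hashlib.sha256(blob).digest()

def sign_manifest(manifest: dict, key: bytes) -> bytes:
    # HMAC-SHA256 is a stand-in for Ed25519 to keep this sketch stdlib-only.
    return hmac.new(key, manifest_digest(manifest), hashlib.sha256).digest()

def verify_manifest(manifest: dict, key: bytes, sig: bytes) -> bool:
    # Constant-time comparison avoids leaking signature bytes via timing.
    return hmac.compare_digest(sign_manifest(manifest, key), sig)

# Hypothetical tool manifest: name, module digest, declared permissions.
manifest = {
    "name": "example-tool",
    "wasm_sha256": "deadbeef",
    "permissions": ["net:api.example.com"],
}
key = b"demo-signing-key"
sig = sign_manifest(manifest, key)
```

Any change to the manifest, including its permission list, invalidates the signature, which is what lets a runtime refuse tampered artifacts before execution.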
Agent Audit Trail
Comprehensive logging and replay system for AI agent actions. Every tool call, decision branch, and data access is cryptographically signed and stored in an append-only audit log for full traceability.
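The append-only property can be sketched as a hash chain, where each entry commits to the hash of the entry before it; the class and field names below are illustrative, and a production log would also sign each entry.

```python
import hashlib
import json

class AuditLog:
    """Append-only log: each entry embeds the previous entry's hash,
    so rewriting history breaks the chain."""

    def __init__(self):
        self._entries = []        # list of (record_json, record_hash)
        self._head = "0" * 64     # genesis hash

    def append(self, event: dict) -> str:
        record = json.dumps({"prev": self._head, "event": event},
                            sort_keys=True)
        digest = hashlib.sha256(record.encode()).hexdigest()
        self._entries.append((record, digest))
        self._head = digest
        return digest

    def verify(self) -> bool:
        # Replay the chain: every record must hash to its stored digest
        # and point at the digest of the record before it.
        prev = "0" * 64
        for record, digest in self._entries:
            data = json.loads(record)
            if data["prev"] != prev:
                return False
            if hashlib.sha256(record.encode()).hexdigest() != digest:
                return False
            prev = digest
        return True
```

Replaying the log with `verify()` detects any retroactive edit, which is the property that makes the trail usable as audit evidence.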
Permission Boundary Engine
A declarative permission system for AI agents. Define exactly what resources, APIs, and data each agent can access with runtime enforcement and automatic violation detection.
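A minimal sketch of declarative permissions with runtime enforcement might look like the following; the policy format, agent name, and action strings are all hypothetical.

```python
from fnmatch import fnmatch

# Hypothetical policy: each agent declares the resource patterns it may touch.
POLICY = {
    "research-agent": ["fs:read:/data/*", "net:get:api.example.com/*"],
}

class PermissionViolation(Exception):
    """Raised when an agent attempts an action outside its declared boundary."""

def check(agent: str, action: str) -> None:
    # Runtime enforcement: every action is matched against the declared
    # patterns before it executes; anything unmatched is a violation.
    allowed = POLICY.get(agent, [])
    if not any(fnmatch(action, pattern) for pattern in allowed):
        raise PermissionViolation(f"{agent} denied: {action}")

check("research-agent", "fs:read:/data/report.csv")  # permitted, no exception
```

Because the policy is data rather than code, violations can be logged automatically and boundaries audited without reading the agent's implementation.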
AI Threat Model Framework
A structured framework for identifying and mitigating threats unique to AI systems: prompt injection, tool poisoning, data exfiltration via agent chains, and supply-chain attacks on AI tool registries.
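One way a structured threat register could be represented in code is sketched below; the schema and entries are illustrative, not part of any published framework.

```python
from dataclasses import dataclass, field

@dataclass
class Threat:
    """One entry in a hypothetical threat register."""
    name: str
    vector: str                     # e.g. "prompt injection", "supply chain"
    affected: list                  # components in the agent chain at risk
    mitigations: list = field(default_factory=list)

register = [
    Threat("Indirect prompt injection", "prompt injection",
           ["tool output parser"],
           ["output sanitization", "permission boundaries"]),
    Threat("Poisoned registry package", "supply chain",
           ["tool registry"],
           ["artifact signing", "digest pinning"]),
]

# A structured register supports queries, e.g. threats not yet covered
# by signing-based mitigations:
unsigned = [t.name for t in register if "artifact signing" not in t.mitigations]
```

Keeping threats as structured data rather than prose lets coverage gaps be found mechanically instead of by manual review.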
Sandboxed Runtime
A WASM-based execution environment that isolates AI tool execution from the host system. Tools run in memory-bounded, capability-restricted sandboxes with deterministic execution guarantees.
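The sandbox contract described above can be sketched conceptually as follows; this is not a real WASM host, and every name and limit is illustrative of how memory bounds and capability restriction fit together.

```python
class SandboxConfig:
    """Illustrative limits a WASM host would enforce on a tool."""

    def __init__(self, max_memory_pages=16, fuel=1_000_000, capabilities=()):
        self.max_memory_pages = max_memory_pages  # WASM pages are 64 KiB each
        self.fuel = fuel                          # instruction budget: bounded,
                                                  # deterministic cutoff
        self.capabilities = frozenset(capabilities)

def run_tool(config: SandboxConfig, requested_host_calls: list) -> str:
    # Capability restriction: reject any host call the tool did not declare,
    # before the module is ever instantiated.
    denied = [c for c in requested_host_calls if c not in config.capabilities]
    if denied:
        raise PermissionError(f"host calls outside sandbox boundary: {denied}")
    # A real runtime would now instantiate the WASM module with the
    # memory and fuel limits applied by the engine.
    return "executed"

cfg = SandboxConfig(capabilities={"clock.now", "http.fetch"})
```

Denying undeclared host calls up front, rather than during execution, is what keeps the host system unreachable even from a compromised tool.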
Security SDK
Developer toolkit for building secure AI integrations. Includes libraries for tool signing, permission declaration, audit logging, and integration with the MPP ecosystem.