Our Work

Projects

Open-source and proprietary projects advancing the security, auditability, and trustworthiness of AI agent ecosystems.

Active

Model Package Protocol (MPP)

An open specification for secure, signed, and sandboxed AI tool artifacts. MPP brings container-like isolation to AI agent tool execution with WASM sandboxing, Ed25519 signing, and fine-grained permission boundaries.

WASM · Ed25519 · Zero-Trust · Open Spec
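To illustrate the signing side of the artifact flow, here is a minimal sketch using Node's built-in Ed25519 support. The manifest fields are hypothetical placeholders, not MPP's actual artifact format:

```typescript
// Sketch: signing and verifying a tool artifact with Ed25519.
// The manifest shape below is illustrative only.
import { generateKeyPairSync, sign, verify } from "node:crypto";

// Publisher side: generate a keypair and sign the artifact bytes.
const { publicKey, privateKey } = generateKeyPairSync("ed25519");
const artifact = Buffer.from(
  JSON.stringify({ name: "example-tool", version: "1.0.0" })
);
// Ed25519 signs the raw message, so the digest argument is null.
const signature = sign(null, artifact, privateKey);

// Consumer side: verify the signature before loading the tool.
const ok = verify(null, artifact, publicKey, signature);
console.log(ok); // true for an untampered artifact
```

Verification failing on any byte change is what lets a runtime refuse to load a tampered tool before it ever executes.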
In Development

Agent Audit Trail

Comprehensive logging and replay system for AI agent actions. Every tool call, decision branch, and data access is cryptographically signed and stored in an append-only audit log for full traceability.

Audit · Cryptographic Logs · Replay
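The append-only property can be sketched with a hash chain, where each entry commits to the previous entry's hash so any tampering breaks every later link. The entry fields here are illustrative, not the project's actual log schema:

```typescript
// Sketch: a hash-chained append-only audit log (illustrative schema).
import { createHash } from "node:crypto";

interface LogEntry {
  action: string;   // e.g. a tool call or data access
  prevHash: string; // hash of the previous entry
  hash: string;     // hash of (prevHash + action)
}

const log: LogEntry[] = [];
const GENESIS = "0".repeat(64);

function append(action: string): void {
  const prevHash = log.length ? log[log.length - 1].hash : GENESIS;
  const hash = createHash("sha256").update(prevHash + action).digest("hex");
  log.push({ action, prevHash, hash });
}

// Recompute every hash from the start; any edit invalidates the chain.
function verifyChain(): boolean {
  let prev = GENESIS;
  for (const e of log) {
    const expected = createHash("sha256").update(prev + e.action).digest("hex");
    if (e.prevHash !== prev || e.hash !== expected) return false;
    prev = e.hash;
  }
  return true;
}

append("tool_call:fetch_url");
append("data_access:user_db");
const okBefore = verifyChain(); // true
log[0].action = "tampered";
const okAfter = verifyChain();  // false: the chain no longer validates
```

A production system would additionally sign each entry, as the description notes; the chain alone only detects after-the-fact edits, while signatures also bind entries to an author.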
In Development

Permission Boundary Engine

A declarative permission system for AI agents. Define exactly what resources, APIs, and data each agent can access with runtime enforcement and automatic violation detection.

Permissions · RBAC · Runtime Enforcement
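A declarative manifest plus a runtime check might look like the following sketch. The manifest shape and `isAllowed` helper are hypothetical, shown only to make the "declare, then enforce" pattern concrete:

```typescript
// Sketch: a declarative permission manifest with a runtime check.
// Resource/action naming here is illustrative only.
interface PermissionManifest {
  allow: { resource: string; actions: string[] }[];
}

const manifest: PermissionManifest = {
  allow: [
    { resource: "api:weather", actions: ["read"] },
    { resource: "fs:/tmp", actions: ["read", "write"] },
  ],
};

// Deny by default: anything not explicitly declared is a violation.
function isAllowed(
  m: PermissionManifest,
  resource: string,
  action: string
): boolean {
  return m.allow.some(
    (r) => r.resource === resource && r.actions.includes(action)
  );
}

const canRead = isAllowed(manifest, "api:weather", "read");  // true
const canDelete = isAllowed(manifest, "fs:/tmp", "delete");  // false
```

Automatic violation detection then reduces to logging every `false` result alongside the agent identity and the attempted access.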
Research

AI Threat Model Framework

A structured framework for identifying and mitigating threats unique to AI systems: prompt injection, tool poisoning, data exfiltration via agent chains, and supply-chain attacks on AI tool registries.

Threat Modelling · Prompt Injection · Supply Chain
Active

Sandboxed Runtime

A WASM-based execution environment that isolates AI tool execution from the host system. Tools run in memory-bounded, capability-restricted sandboxes with deterministic execution guarantees.

WASM · Isolation · Capability Model
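The memory-bounding part of this design can be shown with the standard `WebAssembly.Memory` API, which caps a module's linear memory in 64 KiB pages. The limits below are illustrative, not the runtime's actual defaults:

```typescript
// Sketch: a memory-bounded WASM sandbox allocation.
// One 64 KiB page initially, hard-capped at 16 pages (1 MiB).
const memory = new WebAssembly.Memory({ initial: 1, maximum: 16 });

const initialBytes = memory.buffer.byteLength; // 65536 (one page)
memory.grow(15);                               // grow to the cap
const maxBytes = memory.buffer.byteLength;     // 1048576

// Growing past `maximum` throws a RangeError, so a misbehaving tool
// cannot exhaust host memory.
let capped = false;
try {
  memory.grow(1);
} catch {
  capped = true;
}
```

Capability restriction is the complementary half: the host only links the imports a tool is entitled to, so anything outside that import set is simply unreachable from inside the sandbox.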
In Development

Security SDK

Developer toolkit for building secure AI integrations. Includes libraries for tool signing, permission declaration, audit logging, and integration with the MPP ecosystem.

SDK · Developer Tools · TypeScript · Rust