# agentic-ai-security

Here are 12 public repositories matching this topic...

The dashcam and emergency brake for AI agents. A security proxy that physically blocks rogue LLM commands and generates cryptographically proven audit trails for enterprise compliance.

  • Updated Mar 20, 2026
  • Rust
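The "emergency brake" pattern above can be sketched in a few lines: a proxy that blocks disallowed commands and appends every decision to a hash-chained audit log, so tampering with any past entry is detectable. This is an illustrative Python sketch under assumptions of my own (the blocklist, the `AuditProxy` class, and the record fields are invented here), not the repo's actual Rust implementation.

```python
import hashlib
import json
import time

# Hypothetical blocklist; a real proxy would use a richer policy engine.
BLOCKED_PREFIXES = ("rm -rf", "curl", "ssh")

class AuditProxy:
    def __init__(self):
        self.log = []
        self._prev_hash = "0" * 64  # genesis value for the hash chain

    def _append(self, entry: dict) -> None:
        # Chain each record to the previous one: editing any earlier
        # entry invalidates every later hash.
        entry["prev"] = self._prev_hash
        digest = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        entry["hash"] = digest
        self._prev_hash = digest
        self.log.append(entry)

    def execute(self, command: str) -> bool:
        allowed = not command.startswith(BLOCKED_PREFIXES)
        self._append({"ts": time.time(), "command": command, "allowed": allowed})
        return allowed  # caller only runs the command when True

def verify_chain(log: list) -> bool:
    # Recompute every hash from the genesis value forward.
    prev = "0" * 64
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "hash"}
        if body["prev"] != prev:
            return False
        prev = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if prev != entry["hash"]:
            return False
    return True
```

The chain plays the "dashcam" role: the log proves what was attempted and what was blocked, and `verify_chain` fails on any retroactive edit.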

The definitive open-source reference for AI Trust, Risk, and Security Management (AI TRiSM). 60+ vendor profiles, market sizing, regulatory tracking, and Gartner framework analysis. Structured for machine readability and AI-system extraction.

  • Updated Mar 24, 2026
  • Python

Formal safety framework for AI agents. Pluggable LLM reasoning constrained by mathematically proven budget, invariant, and termination guarantees. 7 theorems enforced by construction, not by prompting. Includes Bayesian belief tracking, causal dependency graphs, sandboxed attestors, environment reconciliation, and a 155-test adversarial suite.

  • Updated Mar 4, 2026
  • Python
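The "enforced by construction, not by prompting" idea can be illustrated with a minimal sketch: the loop structure itself bounds the number of steps and checks an invariant before committing any state, so no prompt compliance is needed. The function name, signature, and invariant here are my own assumptions, not this repo's API.

```python
def run_agent(policy, state: int, budget: int = 10, invariant=lambda s: s >= 0):
    """Run `policy` for at most `budget` steps; stop if the invariant breaks."""
    trace = [state]
    for _ in range(budget):        # termination guaranteed: at most `budget` iterations
        proposal = policy(state)   # pluggable reasoning step (e.g. an LLM call)
        if not invariant(proposal):
            break                  # invariant checked before the state is committed
        state = proposal
        trace.append(state)
    return state, trace
```

For example, with `policy=lambda s: s - 3` and `state=10`, the loop halts as soon as the policy proposes a negative value, regardless of what the policy "wants".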

Risk-Aware Introspective RAG (RAI-RAG) is a safety-aligned RAG framework integrating introspective reasoning, risk-aware retrieval gating, and secure evidence filtering to build trustworthy, robust, and secure LLM and agentic AI systems.

  • Updated Mar 7, 2026
  • Python
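Risk-aware retrieval gating, in miniature: score each retrieved passage for risk and pass only evidence under a threshold to the generator. This toy sketch uses an invented marker-matching heuristic and invented function names; RAI-RAG's actual scoring is certainly more sophisticated.

```python
# Toy injection markers; a real gate would use a learned risk model.
RISKY_MARKERS = ("ignore previous instructions", "system prompt", "exfiltrate")

def risk_score(passage: str) -> float:
    # Heuristic: fraction of known injection markers present in the text.
    text = passage.lower()
    hits = sum(marker in text for marker in RISKY_MARKERS)
    return hits / len(RISKY_MARKERS)

def gate_evidence(passages: list, threshold: float = 0.3) -> list:
    # Keep only passages whose risk falls below the gating threshold.
    return [p for p in passages if risk_score(p) < threshold]
```

The design point is that filtering happens between retrieval and generation, so a poisoned document never reaches the LLM's context at all.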

Signed receipts for agent/tool actions. PolicyGate enforces allow/deny; every decision emits a tamper-evident receipt with hashes, signatures, and optional approvals. Verify in CI, prove what happened, and make agent integrations survivable in regulated environments.

  • Updated Mar 11, 2026
  • Go
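The signed-receipt pattern can be sketched briefly: an allow/deny decision is serialized, hashed, and MACed, so any later tampering with the tool name, arguments, or decision is detectable by a verifier. The repo itself is Go; this Python version only demonstrates the pattern, and the key handling, policy, and receipt fields are assumptions, not PolicyGate's API.

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key-not-for-production"  # illustrative only
ALLOWED_TOOLS = {"search", "read_file"}       # hypothetical allowlist

def decide_and_sign(tool: str, args: dict) -> dict:
    decision = "allow" if tool in ALLOWED_TOOLS else "deny"
    payload = {"tool": tool, "args": args, "decision": decision}
    body = json.dumps(payload, sort_keys=True).encode()
    payload["digest"] = hashlib.sha256(body).hexdigest()
    payload["signature"] = hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest()
    return payload

def verify_receipt(receipt: dict) -> bool:
    # Recompute the digest and MAC over the decision fields; tampering
    # with tool, args, or decision invalidates both.
    body = json.dumps(
        {k: receipt[k] for k in ("tool", "args", "decision")},
        sort_keys=True,
    ).encode()
    digest_ok = receipt["digest"] == hashlib.sha256(body).hexdigest()
    sig_ok = hmac.compare_digest(
        receipt["signature"],
        hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest(),
    )
    return digest_ok and sig_ok
```

Verification like this is what makes the receipts usable in CI: a pipeline can re-check every recorded decision without trusting the process that emitted it.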
