
Why AiSOC is open source

AiSOC is MIT-licensed and self-hostable. The agent loop, prompt templates, and eval harness are in this repo. This page describes what that posture means in practice — for buyers in regulated industries and for everyone else — and is explicit about the trade-offs.

The compliance problem with closed-source AI SOCs

Closed-source AI SOC products typically ask a regulated buyer to accept three things at once: incident data leaves the buyer network for inference in the vendor's cloud, the agent's prompts and policy are not visible outside the vendor, and the accuracy numbers cannot be reproduced by the buyer or their auditor.

The third item is the one that compounds the other two. A number that the buyer cannot reproduce is hard to defend in a SOC 2, ISO 27001, or DORA review. In practice this often results in the AI SOC being deployed for non-regulated tenants and a manual triage queue being kept for regulated ones.

AiSOC takes the opposite approach: the agent is in the repo, the substrate eval is a CI gate, data stays in the buyer network by default, and the MIT licence is permanent.

Three artefacts an auditor can review

Every other claim on this page reduces to one of these. If the three are not reviewable on day one of a deployment review, the agent is not actually auditable end-to-end.

01

A per-investigation event stream

Each investigation can be exported as a JSON event stream listing the steps the agent took: prompts issued, models used, tokens spent, evidence rows cited, and actions executed, in order, with hashes. An auditor reads the events directly rather than relying on a vendor summary.
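As a sketch of how an auditor might check such a stream, the hash chaining can be reproduced in a few lines of Python. The field names and chaining scheme below are illustrative assumptions, not the actual export schema:

```python
import hashlib
import json

# Hypothetical event stream for one investigation; field names are
# illustrative, not the real AiSOC export format.
events = [
    {"step": 1, "type": "prompt", "model": "gpt-4o", "tokens": 412},
    {"step": 2, "type": "evidence", "rows_cited": 3},
    {"step": 3, "type": "action", "name": "isolate_host"},
]

# Chain-hash the events so tampering with any earlier step
# invalidates every later hash.
prev = ""
for event in events:
    payload = prev + json.dumps(event, sort_keys=True)
    event["hash"] = hashlib.sha256(payload.encode()).hexdigest()
    prev = event["hash"]

def verify(events):
    """Recompute the chain the way an auditor would and compare hashes."""
    prev = ""
    for event in events:
        body = {k: v for k, v in event.items() if k != "hash"}
        payload = prev + json.dumps(body, sort_keys=True)
        if hashlib.sha256(payload.encode()).hexdigest() != event["hash"]:
            return False
        prev = event["hash"]
    return True
```

The point of the chain is that the auditor needs only the exported JSON, not vendor cooperation, to detect an edited or reordered step.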

02

A reproducible eval harness

Cloning the repo and running `python3 scripts/run_evals.py` produces the same alert-reduction ratio, MITRE-tactic gate, completeness coverage, and response-quality score that the CI gate produces. The benchmark page documents which numbers are real measurements of the substrate and which are self-consistency gates that would need an online LLM-as-judge run to be called agent accuracy.
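To illustrate the shape of such a CI gate, the sketch below compares harness metrics against pinned thresholds. The metric names and threshold values are placeholders, not AiSOC's actual gate:

```python
# Hypothetical CI gate: fail the build if any harness metric falls
# below a pinned floor. Names and numbers are illustrative only.
THRESHOLDS = {
    "alert_reduction_ratio": 0.80,  # real substrate measurement
    "mitre_tactic_gate": 0.95,      # self-consistency gate
    "completeness_coverage": 0.90,  # self-consistency gate
    "response_quality": 0.85,       # self-consistency gate
}

def gate(metrics: dict) -> list:
    """Return the names of metrics below threshold; an empty list passes."""
    return [name for name, floor in THRESHOLDS.items()
            if metrics.get(name, 0.0) < floor]

failures = gate({
    "alert_reduction_ratio": 0.83,
    "mitre_tactic_gate": 0.97,
    "completeness_coverage": 0.91,
    "response_quality": 0.88,
})
```

Because the thresholds live in the repo alongside the harness, a buyer can rerun the exact comparison CI ran on any historical commit.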

03

Source code for the agent

The orchestrator, planner, prompt templates, tool registry, response policy, and rubric — the components that reason over incident data — are in this repo under MIT. They can be diffed, patched, and shipped as a fork.

How AiSOC differs from the common patterns

The two non-AiSOC columns describe common patterns in the AI SOC market, not specific vendors. Counter-examples are welcome via issue or pull request.

Closed-source AI SOC vendor
  • Agent runs in the vendor's cloud. Incident data leaves the buyer network for inference.
  • Prompts and policy are proprietary. Buyers cannot audit how the agent reasons or what it tells the model about a case.
  • Accuracy claims come from internal evaluation. No reproducible eval harness, no public CI gate, no historical regression record.
  • No fork right. Model, policy, and pricing changes are vendor-controlled.
Open-core with proprietary agent
  • The dashboard is open. The agent component is in a private repo.
  • The licence is typically SSPL or BSL with a CLA, which permits future relicensing of the project.
  • The eval harness is internal. The score is published; the dataset and rubric are not.
  • Self-hosting is permitted on paper but typically supported only for the open shell, not the agent.
AiSOC
  • Agent runs on the buyer's infrastructure. Incident data, by default, does not leave the buyer network.
  • Every prompt, response, tool call, and decision is written to the Investigation Ledger and replayable per case.
  • Substrate behaviour is gated in CI on every PR targeting main / develop. The dataset, harness, rubric, and historical numbers are in the repo and reproducible. The benchmark page is explicit about which metrics measure the substrate and which would need an online LLM-as-judge run to be called agent accuracy.
  • MIT, no CLA. Forks are permanent.

What the licence does and does not allow

Open source is overloaded as a term. AiSOC ships under MIT with no CLA, and the project commitment is to keep the core under MIT. What that means in practice:

  • No CLA. Contributors keep copyright on their patches. The project does not collect an irrevocable licence to relicense under SSPL, BSL, or a proprietary EULA at a later date.
  • No telemetry. Self-hosted deployments emit no analytics back to the project. The only network calls AiSOC initiates are the ones the operator configured (LLM provider, threat intelligence feed, integrations).
  • The agent is in the open repo. The orchestrator, planner, prompt templates, tool registry, response policy, and rubric — the components that reason over incident data — are in this repo. There is no separate enterprise agent.
  • Fork rights are permanent. If a future release changes a model choice, default policy, or UX in a way an operator does not want, the previous commit remains a valid deployment.

Reasons that apply outside regulated environments

The compliance story is the most legible reason to choose an auditable, MIT-licensed agent, but the same structural properties apply to teams that are not in a regulated industry.

Operator agency over the agent

If a hosted LLM provider changes a model and detection behaviour shifts, the operator decides whether to ship the change. With a closed agent, that decision is made by the vendor.

The licence is permanent

MIT means a deployed version stays available indefinitely. No relicensing event, no new tier gating existing features, no community-edition deprecation. Pinning a commit gives operational independence.

Plugins are source code

Plugins, detections, playbooks, and prompts are source code in the same repo. A Python or Go plugin written against the typed SDK ships with the rest of the stack.
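A minimal sketch of what such a plugin can look like, assuming a hypothetical `Alert` type and registration decorator (the real typed SDK may differ):

```python
from dataclasses import dataclass, field

# Hypothetical stand-ins for the SDK: the Alert shape and the
# register() decorator are assumptions for illustration only.
@dataclass
class Alert:
    source: str
    severity: int
    tags: list = field(default_factory=list)

DETECTIONS = {}

def register(name):
    """Minimal stand-in for an SDK registration decorator."""
    def wrap(fn):
        DETECTIONS[name] = fn
        return fn
    return wrap

@register("auth-burst")
def auth_burst(alerts):
    """Flag high-severity auth alerts as triage candidates."""
    return [a for a in alerts if a.source == "auth" and a.severity >= 7]
```

Because the detection is ordinary source code in the same repo, it is reviewed, versioned, and forked with everything else.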

Trade-offs and limitations

A few things this project is explicit about:

  • AiSOC does not claim better agent accuracy than every vendor. The project ships a public, reproducible eval harness over the substrate. The eval harness page is explicit about which metrics measure real substrate behaviour (alert reduction) and which are substrate self-consistency gates (MITRE tactic, completeness, response quality). The relevant property is that the harness exists and is reproducible, not that any specific number beats a vendor claim.
  • Self-hosting has an operational cost. Operators run Postgres, Redis, ClickHouse, and an LLM endpoint. The `pnpm aisoc:demo` command and one-click deploy buttons shorten the on-ramp, but the stack is operated by the deployer.
  • Bring-your-own LLM means bring-your-own trust boundary. The agent sends prompts to whichever LLM endpoint is configured. The Investigation Ledger logs every prompt, so what leaves the network is auditable, but the trust boundary is the configured LLM provider, not AiSOC. A future local-inference mode is on the roadmap.
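One way an operator could audit that trust boundary is to extract every prompt record from the ledger and review exactly what was sent to the configured provider. The JSONL record shape below is an assumption for illustration, not the actual ledger schema:

```python
import json

# Hypothetical ledger audit: the record fields ("type", "model",
# "text") are illustrative assumptions.
def prompts_sent(ledger_lines):
    """Return (model, text) pairs for every prompt record in the ledger."""
    out = []
    for line in ledger_lines:
        rec = json.loads(line)
        if rec.get("type") == "prompt":
            out.append((rec["model"], rec["text"]))
    return out

ledger = [
    '{"type": "prompt", "model": "gpt-4o", "text": "Summarise alert 42"}',
    '{"type": "tool_call", "name": "query_siem"}',
]
```

A review of this output against the provider's data-handling terms is the operator's job; the ledger only guarantees the review is possible.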

Verify it directly

Three artefacts, each reproducible in under a minute: a pre-seeded investigation with the ledger visible, the eval harness on a local machine, and the agent source on GitHub. None require a signup.