// research
The science behind verified AI.

OSS Factory combines neural perception with symbolic verification to produce AI-generated output with formal correctness guarantees. No hallucinations survive the verification pipeline.

// problem
LLMs hallucinate. OSS needs guarantees.
01
Neural networks hallucinate

Pure LLMs generate plausible but incorrect output. For code generation, configuration, and security tasks, hallucination is not acceptable. Symbolic reasoning provides ground truth.

02
OSS requires correctness

Open source projects demand correctness: valid manifests, passing tests, compatible licenses, secure dependencies. Every generated artifact must be formally verified.
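The manifest constraint can be made concrete. A minimal sketch, assuming a hypothetical `validateManifest` checker over required fields and an illustrative license allow-list (not the project's actual API):

```typescript
// Sketch: hard constraints on a generated package manifest.
// Field names follow npm's package.json; the license list is illustrative.

interface Manifest {
  name?: string;
  version?: string;
  license?: string;
}

// Licenses this sketch treats as compatible -- an assumed allow-list.
const COMPATIBLE_LICENSES = new Set(["MIT", "Apache-2.0", "BSD-3-Clause"]);

function validateManifest(m: Manifest): string[] {
  const violations: string[] = [];
  if (!m.name) violations.push("manifest missing 'name'");
  if (!m.version || !/^\d+\.\d+\.\d+$/.test(m.version)) {
    violations.push("'version' must be semver (x.y.z)");
  }
  if (!m.license || !COMPATIBLE_LICENSES.has(m.license)) {
    violations.push("'license' missing or incompatible");
  }
  return violations; // empty array => manifest passes
}
```

A draft manifest is accepted only when the returned violation list is empty; anything else is fed back to the generator.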

03
Existing tools lack verification

Most AI coding assistants generate output without formal verification. There is no proof trail, no constraint checking, no guarantee the output is correct.

--
OSS Factory closes this gap

Every output passes through type checks, test suites, schema validators, and security scanners before delivery.

// approach
Neural proposes. Symbolic verifies.

OSS Factory uses a dual-layer architecture: a neural layer generates candidate output, and a symbolic layer verifies it against formal constraints. If verification fails, violations are fed back for re-generation.

verification-pipeline
Step 1: Neural proposal
const draft = await generate(task, {
  strategy: "draft",
  constraints: task.schema,
});
// draft is plausible but unverified
// output: { files, config, tests }
 
Step 2: Symbolic verification
const result = verify(draft, {
  typeCheck: true,
  testRun: true,
  schemaValidate: true,
  securityScan: true,
});
 
if (result.valid) {
  publish(result.output);
} else {
  retry(task, result.violations);
}
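The retry path generalizes to a bounded propose-verify loop. A sketch under assumed names (`proposeVerifyLoop`, `propose`, `check` are illustrative, not the project's API):

```typescript
// Sketch: bounded propose-verify loop. `propose` stands in for the
// neural layer, `check` for the symbolic layer; names are illustrative.

interface CheckResult<T> {
  valid: boolean;
  output: T;
  violations: string[];
}

async function proposeVerifyLoop<T>(
  propose: (violations: string[]) => Promise<T>,
  check: (draft: T) => CheckResult<T>,
  maxAttempts = 3,
): Promise<T> {
  let violations: string[] = [];
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const draft = await propose(violations); // neural: generate candidate
    const result = check(draft);             // symbolic: verify constraints
    if (result.valid) return result.output;  // only verified output escapes
    violations = result.violations;          // feed violations back
  }
  throw new Error("verification failed after max attempts");
}
```

The bound matters: a draft that keeps failing is surfaced as an error rather than looping forever.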

Neural: propose and generate

The neural layer handles the creative work: understanding natural language, generating code, producing configuration files. It produces a draft that is plausible but not yet verified.

Symbolic: verify and constrain

The symbolic layer defines constraints as types, schemas, tests, and security rules. It runs each verifier against the draft and reports violations. No hallucination survives.
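One way to picture the symbolic layer: constraints as data, one verifier per constraint type, each run against the draft and reporting named violations. A minimal sketch with hypothetical verifiers, not the project's real checkers:

```typescript
// Sketch: a registry of verifiers, each mapping a draft to violations.

type Draft = { code: string };
type Verifier = { name: string; run: (d: Draft) => string[] };

// Illustrative stand-ins for two of the constraint types.
const verifiers: Verifier[] = [
  { name: "type",   run: (d) => (d.code.includes("any") ? ["untyped value"] : []) },
  { name: "schema", run: (d) => (d.code.length > 0 ? [] : ["empty artifact"]) },
];

function runVerifiers(draft: Draft) {
  const violations = verifiers.flatMap((v) =>
    v.run(draft).map((msg) => `${v.name}: ${msg}`),
  );
  return { valid: violations.length === 0, violations };
}
```

Because each violation is tagged with the verifier that produced it, the feedback sent to the neural layer says exactly which constraint failed.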

// benchmarks
Measured on production workloads.
97.2%
Verification success

398 of 409 cycles produced verified output

~11s
Avg. cycle time

Draft generation through full verification

0
Hallucinations

Zero hallucinations in verified output

4
Constraint types

Type, test, schema, security

// references
Foundational work.
references.bib
[1] Garcez & Lamb, "Neurosymbolic AI: The 3rd Wave", arXiv:2012.05876, 2020
[2] Evans & Grefenstette, "Learning Explanatory Rules from Noisy Data", JAIR, 2018
[3] Manhaeve et al., "DeepProbLog: Neural Probabilistic Logic Programming", arXiv:1805.10872, 2018
[4] Rocktäschel & Riedel, "End-to-End Differentiable Proving", NeurIPS, 2017
// contribute
Build verified AI with us.

OSS Factory is open source. Star the repo, explore the code, and start generating verified agents.