Native Rust runtime for adversarial extension workloads, with deterministic replay, cryptographic decision receipts, and fleet-scale containment.
git clone https://github.com/Dicklesworthstone/franken_engine.git
cd franken_engine
cargo build --workspace --release

This repository currently ships Rust workspace crates and source-built utility binaries, not a packaged installer or prebuilt release binaries.
Node and Bun are fast enough for many workloads, but extension-heavy agent systems need a different default posture: active containment, deterministic forensics, and explicit runtime authority boundaries.
FrankenEngine provides one native baseline interpreter with deterministic and throughput execution profiles, a probabilistic guardplane with expected-loss actioning, deterministic replay for high-severity decisions, and signed evidence contracts for every high-impact containment event.
| Capability | What You Get In Practice |
|---|---|
| Native execution profiles | baseline_deterministic_profile for conservative control paths, baseline_throughput_profile for throughput-heavy paths, and adaptive_profile_router when policy routing is enabled |
| Probabilistic Guardplane | Bayesian risk updates and e-process boundaries that trigger allow/challenge/sandbox/suspend/terminate/quarantine |
| Deterministic replay | Bit-stable replay for high-severity decision paths with counterfactual policy simulation |
| Cryptographic governance | Signed decision receipts with transparency-log proofs and optional TEE attestation bindings |
| Fleet immune system | Quarantine and revocation propagation with bounded convergence SLOs |
| Capability-typed execution | TS-first workflow that compiles to capability-typed IR with ambient-authority rejection |
| Cross-repo constitution | Control plane on /dp/asupersync, TUI on /dp/frankentui, SQLite on /dp/frankensqlite |
| Evidence-first operations | Every published performance and security claim ships with reproducible artifact bundles |
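The guardplane's expected-loss actioning (second row above) can be sketched as picking the action that minimizes posterior-weighted loss. This is an illustrative, std-only sketch under stated assumptions: `Action`, `pick_action`, and the loss values are hypothetical, not the engine's real API or policy matrix.

```rust
// Illustrative sketch of expected-loss actioning: NOT the real guardplane API.
// Each loss entry is (action, cost_if_benign, cost_if_malicious).
#[derive(Debug, Clone, Copy, PartialEq)]
enum Action { Allow, Challenge, Sandbox, Quarantine }

fn pick_action(p_malicious: f64, loss: &[(Action, f64, f64)]) -> Action {
    loss.iter()
        .map(|&(a, benign, malicious)| {
            // Expected loss = P(benign) * benign cost + P(malicious) * malicious cost.
            (a, (1.0 - p_malicious) * benign + p_malicious * malicious)
        })
        .min_by(|x, y| x.1.partial_cmp(&y.1).unwrap())
        .map(|(a, _)| a)
        .unwrap()
}

fn main() {
    // Hypothetical loss matrix: allowing a malicious extension is very costly,
    // quarantining a benign one is moderately costly.
    let loss = [
        (Action::Allow, 0.0, 100.0),
        (Action::Challenge, 1.0, 40.0),
        (Action::Sandbox, 3.0, 10.0),
        (Action::Quarantine, 8.0, 1.0),
    ];
    assert_eq!(pick_action(0.01, &loss), Action::Allow);
    assert_eq!(pick_action(0.9, &loss), Action::Quarantine);
    println!("low-risk action: {:?}", pick_action(0.01, &loss));
}
```

As the posterior risk rises, the minimizing action shifts monotonically toward containment, which is the intended shape of allow/challenge/sandbox/quarantine escalation.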
The shipped frankenctl CLI provides core execution surfaces and selective
operator tooling. Shipped surfaces: version, compile, run, doctor,
verify, benchmark, replay, react, gates, reports, test, synth,
orchestrate, and runtime. See Unsupported Surfaces
for production guidance.
The frankenctl examples below document the operator contract:
# 1) Verify the CLI binary and schema version
frankenctl version
# 2) Create a tiny source file and artifact directory
mkdir -p ./artifacts
printf 'const answer = 40 + 2;\n' > ./demo.js
# 3) Compile source to a versioned artifact
frankenctl compile --input ./demo.js --out ./artifacts/demo.compile.json --goal script
# 4) Verify the compile artifact contract
frankenctl verify compile-artifact --input ./artifacts/demo.compile.json
# 5) Execute the same source through the orchestrator
frankenctl run --input ./demo.js --extension-id demo-ext --out ./artifacts/demo.run.json
# 6) Replay execution with validation mode
frankenctl replay run --trace ./artifacts/replay/demo-trace.json --mode validate --out ./artifacts/replay_report.json

- **Runtime ownership over wrappers.** FrankenEngine owns parser-to-scheduler semantics in Rust. Compatibility is a product layer in franken_node, not a hidden wrapper around third-party engines.
- **Security and performance as co-equal constraints.** The project does not trade correctness for speed or speed for policy theater. Optimizations ship with behavior proofs and rollback artifacts.
- **Deterministic first, adaptive second.** Live decisions must replay deterministically from fixed artifacts. Adaptive learning is allowed, but only through signed promoted snapshots.
- **Evidence before claims.** Benchmarks, containment metrics, and policy assertions are tied to reproducible artifacts. No artifact, no claim.
- **Constitutional integration.** FrankenEngine reuses stronger sibling substrates instead of rebuilding them: asupersync control contracts, frankentui operator surfaces, and frankensqlite persistence.
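The deterministic-first principle can be illustrated with a toy decision function whose only entropy source is a recorded randomness transcript: replaying the same inputs and transcript must reproduce the same digest. `decide` and its hashing scheme are hypothetical illustrations, not engine internals.

```rust
// Illustrative sketch: a decision function that draws all "randomness" from
// a fixed recorded transcript is bit-stable on replay by construction.
// This is a toy model, not FrankenEngine's real replay machinery.
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

fn decide(inputs: &[u64], transcript: &[u64]) -> u64 {
    let mut h = DefaultHasher::new();
    // Mix each input with its recorded transcript value; no live RNG anywhere.
    for (x, r) in inputs.iter().zip(transcript) {
        (x ^ r).hash(&mut h);
    }
    h.finish()
}

fn main() {
    let inputs = [3, 1, 4, 1, 5];
    let transcript = [42, 7, 9, 0, 2];
    let live = decide(&inputs, &transcript);
    let replayed = decide(&inputs, &transcript);
    assert_eq!(live, replayed, "replay must be bit-stable");
    println!("decision digest: {live:x}");
}
```

The engine's actual obligation is stronger (fixed code, policy, model snapshot, evidence stream, and transcript together), but the structural idea is the same: capture nondeterminism as data, then replay from that data.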
Runtime governance and native-only execution boundaries are defined in docs/RUNTIME_CHARTER.md.
Donor-harvesting governance boundaries (semantic extraction allowlist + architectural denylist) are defined in docs/DONOR_EXTRACTION_SCOPE.md.
Semantic compatibility source-of-truth entries for donor-observable behavior are defined in docs/SEMANTIC_DONOR_SPEC.md.
Native architecture synthesis derived from that semantic contract is defined in docs/architecture/frankenengine_native_synthesis.md.
This charter is the acceptance gate for architecture changes and codifies:
- native Rust ownership of core execution semantics
- prohibition of binding-led core execution backends
- deterministic replay + evidence-linkage obligations for high-impact actions
- binding claim-language policy tied to reproducible artifact state
- repository split and sibling-reuse constraints
Reproducibility bundle templates (env.json, manifest.json, repro.lock) are defined in docs/REPRODUCIBILITY_CONTRACT.md and shipped under docs/templates/.
| Dimension | FrankenEngine | Node.js | Bun |
|---|---|---|---|
| Core execution ownership | Native Rust baseline interpreter + profile router | V8 embedding | JavaScriptCore + Zig runtime |
| Deterministic replay for high-severity decisions | Built in, mandatory release gate | External tooling only | External tooling only |
| Probabilistic containment policy | Built in guardplane | Not default runtime behavior | Not default runtime behavior |
| Cryptographic decision receipts | First-class runtime artifact | Not a core runtime primitive | Not a core runtime primitive |
| Fleet quarantine convergence model | Explicit SLO + fault-injection gates | App-specific integration | App-specific integration |
| Capability-typed extension contract | Native IR contract | Not native to runtime | Not native to runtime |
| Cross-runtime lockstep oracle | Built in Node/Bun differential harness | N/A | N/A |
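A lockstep differential oracle like the one in the last table row can be sketched as feeding identical inputs to two independent evaluators and collecting every divergence. The closures below stand in for real engine backends (e.g. a reference engine vs. a candidate); the harness shape is an assumption, not the shipped Node/Bun differential harness.

```rust
// Illustrative differential-testing harness: report inputs where two
// "engines" disagree. Real harnesses compare full observable behavior,
// not just a single return value.
fn differential<F, G>(inputs: &[i64], a: F, b: G) -> Vec<i64>
where
    F: Fn(i64) -> i64,
    G: Fn(i64) -> i64,
{
    inputs.iter().copied().filter(|&x| a(x) != b(x)).collect()
}

fn main() {
    let reference = |x: i64| x.wrapping_mul(2); // stand-in "reference engine"
    let candidate = |x: i64| if x == 7 { 15 } else { x * 2 }; // injected divergence
    let diverging = differential(&[1, 2, 7, 9], reference, candidate);
    assert_eq!(diverging, vec![7]);
    println!("diverging inputs: {diverging:?}");
}
```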
FrankenEngine supports two build modes to accommodate different development and deployment environments:
For developers working without the full asupersync repository layout:
# Build without external dependencies
cargo check --no-default-features
cargo build --no-default-features --release
# Test standalone functionality
cargo test --no-default-features

In standalone mode:
- Core interpreter functionality available
- Governance modules compile with fallback behavior
- External policy integration disabled
- Suitable for development and testing
For production deployments with the complete asupersync ecosystem:
# Build with all external dependencies
cargo check --all-features
cargo build --all-features --release
# Test full integration
cargo test --all-features

In full integration mode:
- Complete governance and policy enforcement
- Cross-repository coordination enabled
- TEE attestation and fleet quarantine available
- Cryptographic decision receipts with audit trails
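The decision-receipt idea above can be sketched as a tamper-evident chain in which each receipt commits to the previous receipt's digest. This is a toy illustration only: real receipts use actual signatures and transparency-log proofs, and `DefaultHasher` is NOT cryptographically secure.

```rust
// Illustrative hash-chain sketch of decision receipts. NOT real cryptography:
// DefaultHasher is not collision-resistant; the shipped system uses signed
// receipts with transparency-log proofs.
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

fn receipt_digest(prev: u64, decision: &str) -> u64 {
    let mut h = DefaultHasher::new();
    prev.hash(&mut h);       // commit to the previous receipt
    decision.hash(&mut h);   // commit to this decision's content
    h.finish()
}

fn main() {
    // Build a three-entry chain, then verify it end to end.
    let decisions = ["allow:ext-a", "sandbox:ext-b", "quarantine:ext-c"];
    let mut chain = Vec::new();
    let mut prev = 0u64;
    for d in &decisions {
        prev = receipt_digest(prev, d);
        chain.push(prev);
    }
    // Re-derive the chain; editing any earlier decision changes every later digest.
    let mut check = 0u64;
    for (d, expected) in decisions.iter().zip(&chain) {
        check = receipt_digest(check, d);
        assert_eq!(check, *expected, "chain broken: receipt tampered");
    }
    println!("chain verified, head = {:x}", chain.last().unwrap());
}
```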
Use the provided verification script to test both modes:
./scripts/verify_build_modes.sh

See docs/DEPENDENCY_AUDIT.md for detailed dependency information.
The cross-repo integration suite verifies FrankenEngine sibling boundaries with /dp/asupersync, /dp/frankentui, /dp/frankensqlite, and the service/control contracts around them. The suite is the operator entry point for checking that schema contracts, structured logs, degraded-mode diagnostics, and replay artifacts remain aligned across those repositories.
./scripts/run_cross_repo_integration_suite.sh ci
./scripts/e2e/cross_repo_integration_suite_replay.sh

The machine-readable contract is docs/cross_repo_integration_suite_v1.json, and the operator guide is docs/CROSS_REPO_INTEGRATION_SUITE.md.
The parser phase0 performance artifact contract defines truthful performance evidence requirements and degraded-mode receipt handling. This contract ensures placeholder artifacts are rejected and real capture failures are explicitly documented.
To verify the artifact contract:
./scripts/run_parser_phase0_artifact_contract.sh ci
./scripts/e2e/parser_phase0_artifact_contract_replay.sh ci

See docs/PARSER_PHASE0_ARTIFACT_CONTRACT_V1.md for the complete contract specification.
The parser performance promotion gate verifies declared Boa/peer wins on fixed workloads and quantiles with reproducible artifact bundles. Run the gate through the repo-local RCH target namespace so remote builds do not depend on fragile temporary directories:
CARGO_TARGET_DIR=$PWD/target_rch_parser_performance_promotion_gate_verify \
./scripts/run_parser_performance_promotion_gate.sh ci
./scripts/e2e/parser_performance_promotion_gate_replay.sh

Gate runs emit run_manifest.json, events.jsonl, commands.txt, and step_logs/step_*.log under artifacts/parser_performance_promotion_gate/<timestamp>/.
The replay wrapper prints the latest complete artifact bundle and will skip a
newer incomplete run directory with a warning. If an operator interrupts a
remote step, the manifest stays anchored to the in-flight command instead of
leaving step-log-only output; normal runs still surface step_000.log in the
operator verification commands.
See docs/PARSER_PERFORMANCE_PROMOTION_GATE.md for the full gate contract.
The lowering gap truth invariant defines the authoritative relationship between lowering status fields and execution-readiness flags. This contract ensures that status, parser_ready_syntax, execution_ready_semantics, and prose fields cannot report mutually incompatible states in the lowering gap inventory.
To verify the invariant contract:
./scripts/run_lowering_gap_truth_invariant.sh ci
./scripts/e2e/lowering_gap_truth_invariant_replay.sh ci

See docs/LOWERING_GAP_TRUTH_INVARIANT_V1.md for the complete invariant specification.
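The truth-invariant idea can be sketched as a consistency check between a status field and its readiness flags. The field names below mirror the prose (status, parser_ready_syntax, execution_ready_semantics), but the struct, the status vocabulary, and the implication rules are illustrative assumptions, not the real inventory schema.

```rust
// Illustrative sketch of a status/readiness truth invariant. The real
// inventory schema and status vocabulary may differ.
struct GapEntry {
    status: &'static str, // hypothetical values: "done", "parser_only", "open"
    parser_ready_syntax: bool,
    execution_ready_semantics: bool,
}

fn invariant_holds(e: &GapEntry) -> bool {
    match e.status {
        // A fully-done entry must have both readiness flags set.
        "done" => e.parser_ready_syntax && e.execution_ready_semantics,
        // Parser-only progress must not claim execution readiness.
        "parser_only" => e.parser_ready_syntax && !e.execution_ready_semantics,
        // An open gap must claim neither.
        "open" => !e.parser_ready_syntax && !e.execution_ready_semantics,
        _ => false, // unknown statuses fail closed
    }
}

fn main() {
    let ok = GapEntry { status: "parser_only", parser_ready_syntax: true, execution_ready_semantics: false };
    let bad = GapEntry { status: "done", parser_ready_syntax: true, execution_ready_semantics: false };
    assert!(invariant_holds(&ok));
    assert!(!invariant_holds(&bad)); // mutually incompatible states are rejected
    println!("invariant checks passed");
}
```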
The placeholder closure verification contract defines explicit verification and waiver discipline for closing out the zero-placeholder audit workstream. This contract proves that all audited placeholder/mock/stub findings have been resolved or explicitly waived with proper justification.
To verify the closure contract:
jq empty docs/rgc_placeholder_closure_verification_v1.json
cargo test --test placeholder_closure_verification
./scripts/run_placeholder_closure_matrix.sh generate
./scripts/run_placeholder_closure_verification.sh verify
./scripts/run_placeholder_closure_bundle.sh bundle
./scripts/run_placeholder_waiver_validation.sh check

See docs/RGC_PLACEHOLDER_CLOSURE_VERIFICATION_V1.md for the complete contract specification.
The cross-platform matrix gate establishes deterministic verification for runtime execution and CLI workflows across Linux/macOS/Windows and x64/arm64 targets. This gate ensures user-facing reliability is proven, not assumed.
To verify the cross-platform matrix:
./scripts/run_rgc_cross_platform_matrix_gate.sh ci
./scripts/e2e/rgc_cross_platform_matrix_replay.sh matrix
jq empty docs/rgc_cross_platform_matrix_v1.json

Matrix artifacts are generated at artifacts/rgc_cross_platform_matrix/<timestamp>/matrix_summary.json for each verification run.
See docs/RGC_CROSS_PLATFORM_MATRIX_V1.md for the complete contract specification.
The scientific contribution targets gate tracks FrankenEngine's research deliverables, ensuring that novel contributions become publishable artifacts with reproducible evidence bundles. This gate validates technical reports, external replication claims, and open tool adoption.
To verify scientific contribution targets:
./scripts/run_scientific_contribution_targets.sh bundle
./scripts/run_scientific_contribution_targets.sh ci
./scripts/e2e/scientific_contribution_targets_replay.sh show

Status reports are generated at:

- artifacts/scientific_contribution_targets/<timestamp>/technical_report_status_report.json
- artifacts/scientific_contribution_targets/<timestamp>/external_replication_status_report.json
- artifacts/scientific_contribution_targets/<timestamp>/open_tool_adoption_status_report.json
- artifacts/scientific_contribution_targets/<timestamp>/trace_ids.json
The gate tracks three milestone beads:
- bd-2501.1: Publish reproducible technical reports with artifact bundles
- bd-2501.2: Achieve externally replicated high-impact claims
- bd-2501.3: Release open benchmark or verification tool adopted outside the project
For operator verification:
jq empty docs/scientific_contribution_targets_v1.json
rch exec -- env RUSTUP_TOOLCHAIN=nightly CARGO_TARGET_DIR=$PWD/target_rch_scientific_contribution_targets_verify CARGO_BUILD_JOBS=1 CARGO_INCREMENTAL=0 cargo test -p frankenengine-engine --test scientific_contribution_targets

See docs/SCIENTIFIC_CONTRIBUTION_TARGETS_V1.md, docs/SCIENTIFIC_REPORT_CATALOG_V1.md, docs/EXTERNAL_REPLICATION_CATALOG_V1.md, and docs/OPEN_TOOL_ADOPTION_CATALOG_V1.md for complete catalog specifications.
The docs and help surface audit ensures that README.md and planned CLI help output stay aligned with commands that are implemented before they are described as shipped. This audit prevents aspirational copy from diverging from runtime behavior.
To verify the docs and help surface contract:
./scripts/run_rgc_docs_help_surface_audit.sh ci
./scripts/e2e/rgc_docs_help_surface_audit_replay.sh ci
jq empty docs/rgc_docs_help_surface_audit_v1.json

The replay wrapper resolves the latest complete audit bundle, warns on incomplete runs, and validates that help output matches the audited contract surface.
Audit artifacts are generated at artifacts/rgc_docs_help_surface_audit/<timestamp>/docs_help_surface_report.json for each verification run.
See docs/RGC_DOCS_HELP_SURFACE_AUDIT_V1.md for the complete contract specification.
The CLI and operator workflow verification pack validates the real operator experience of frankenctl workflows across golden-path, failure-path, and observability-mode scenarios with actionable diagnostics. This pack ensures operator workflows are evidence-first and deterministic.
./scripts/run_rgc_cli_operator_workflow_verification_pack.sh ci
./scripts/e2e/rgc_cli_operator_workflow_verification_pack_replay.sh ci
jq empty docs/rgc_cli_operator_workflow_verification_pack_v1.json

Verification artifacts are generated under artifacts/rgc_cli_operator_workflow_verification_pack/<timestamp>/ for each verification run: run_manifest.json, events.jsonl, commands.txt, trace_ids.json, and step_logs/step_*.log. The workflow also generates support bundle artifacts at artifacts/frankenctl_cli_workflow/<timestamp>/support_bundle/index.json.
See docs/RGC_CLI_OPERATOR_WORKFLOW_VERIFICATION_PACK_V1.md for the complete contract specification.
git clone https://github.com/Dicklesworthstone/franken_engine.git
cd franken_engine
cargo build --release --workspace

The workspace currently includes these crates:

- frankenengine-engine
- frankenengine-extension-host
- frankenengine-test-support
- frankenengine-metamorphic

The source tree currently defines these release binaries:

- frankenctl (main CLI binary)
- franken-react-sidecar
- franken-benchmark-evidence-export
There is no root install.sh, prebuilt Linux/macOS/Windows binary bundle, or separate frankenengine-cli Cargo package in this repository at this time.
# Required for advanced TUI views
cd /dp/frankentui && cargo build --release
# Required for SQLite-backed replay/evidence stores
cd /dp/frankensqlite && cargo build --release

- Create a tiny demo source:

mkdir -p ./artifacts
printf 'const answer = 40 + 2;\n' > ./demo.js

- Compile to a deterministic artifact:

frankenctl compile --input ./demo.js --out ./artifacts/demo.compile.json --goal script
frankenctl verify compile-artifact --input ./artifacts/demo.compile.json

- Run the source and persist the execution report:

frankenctl run --input ./demo.js --extension-id demo-ext --out ./artifacts/demo.run.json

- Summarize a captured runtime snapshot:

frankenctl doctor --input ./artifacts/runtime_input.json --summary --out-dir ./artifacts/doctor

- Verify receipt bundles and benchmark publication inputs:

frankenctl verify receipt --input ./artifacts/verifier_input.json --receipt-id rcpt_01J... --summary
frankenctl benchmark score --input ./artifacts/publication_gate_input.json --output ./artifacts/benchmark_score.json

- Run benchmark and replay workflows when you have the required artifacts:

frankenctl benchmark run --profile small --family boot-storm --out-dir ./artifacts/benchmarks
frankenctl benchmark verify --bundle ./artifacts/benchmarks --summary --output ./artifacts/benchmark_verify.json
frankenctl replay run --trace ./artifacts/replay/demo-trace.json --compare-trace ./artifacts/replay/live-trace.json --mode validate --out ./artifacts/replay_report.json

The command table below documents the frankenctl contract and available command surfaces.
| Command | Purpose | Example |
|---|---|---|
| frankenctl version | Print CLI schema and binary version | frankenctl version |
| frankenctl compile | Parse and lower source into a versioned compile artifact | frankenctl compile --input ./demo.js --out ./artifacts/demo.compile.json --goal script |
| frankenctl run | Execute source through the orchestrator and emit an execution report | frankenctl run --input ./demo.js --extension-id demo-ext --out ./artifacts/demo.run.json |
| frankenctl doctor | Summarize runtime diagnostics input and emit operator artifacts | frankenctl doctor --input ./artifacts/runtime_input.json --summary --out-dir ./artifacts/doctor |
| frankenctl verify compile-artifact | Validate compile artifact integrity and schema invariants | frankenctl verify compile-artifact --input ./artifacts/demo.compile.json |
| frankenctl verify receipt | Verify a receipt bundle against a specific receipt ID | frankenctl verify receipt --input ./artifacts/verifier_input.json --receipt-id rcpt_01J... --summary |
| frankenctl benchmark run | Run bundled benchmark families and emit evidence artifacts | frankenctl benchmark run --profile small --family boot-storm --out-dir ./artifacts/benchmarks |
| frankenctl benchmark score | Score a publication-gate input against Node/Bun comparisons | frankenctl benchmark score --input ./artifacts/publication_gate_input.json --output ./artifacts/benchmark_score.json |
| frankenctl benchmark verify | Verify a benchmark claim bundle and render a verdict report | frankenctl benchmark verify --bundle ./artifacts/benchmarks --summary --output ./artifacts/benchmark_verify.json |
| frankenctl replay run | Replay a captured nondeterminism trace; validate mode compares it against --compare-trace | frankenctl replay run --trace ./artifacts/replay/demo-trace.json --compare-trace ./artifacts/replay/live-trace.json --mode validate --out ./artifacts/replay_report.json |
Run the parser operator/developer runbook gate from the repository root:
./scripts/run_parser_operator_developer_runbook.sh ci

The wrapper uses a repo-local target_rch_parser_operator_developer_runbook_ target directory and a timeout-safe cargo test --no-run compile smoke instead of cargo check for the integration-test lane. It emits run_manifest.json, events.jsonl, commands.txt, and step_logs/step_*.log; exact preserved-bundle replay requires step_logs/step_000.log as part of the complete bundle.
Replay current or preserved evidence with:
./scripts/e2e/parser_operator_developer_runbook_replay.sh ci
./scripts/e2e/parser_operator_developer_runbook_replay.sh drill
PARSER_OPERATOR_DEVELOPER_RUNBOOK_REPLAY_RUN_DIR=artifacts/parser_operator_developer_runbook/<timestamp> \
./scripts/e2e/parser_operator_developer_runbook_replay.sh ci

The replay wrapper prints the latest complete artifact bundle, can skip a newer incomplete run directory, and states whether output reflects the current failed invocation or an older complete bundle. Drill mode reuses the latest complete dependency bundles instead of rerunning dependent parser lanes. The emitted run_manifest.json includes operator_verification commands for both the normal rerun path and the preserved-bundle path without rerunning the lane.
For detailed gate documentation, artifact contracts, and operator workflows, see:
- RGC Gates Reference - Complete reference for all RGC gate scripts, artifact paths, and replay commands
For system architecture and design details, see:
- Architecture Overview - High-level system design and component overview
- Runtime Charter - Runtime governance and execution boundaries
For information about contributing to this project, see:
- Contributing Guide - Development setup, testing, and submission guidelines
The following operator capabilities are explicitly not shipped and should not be relied upon in production environments:
- Advanced policy debugging surfaces requiring TEE attestation
- Fleet-wide quarantine orchestration beyond local containment
- Cross-repository governance coordination tools (use asupersync control plane)
- Live policy modification interfaces (use static policy manifests)
- Cryptographic key rotation automation (use dedicated key management)
- Internal execution profile switching without orchestrator mediation
- Direct IR manipulation outside the lowering pipeline contract
- Bypass interfaces for deterministic replay constraints
- Runtime governance policy overrides without evidence retention
- Evidence artifact tampering or retroactive modification
- Multi-tenant isolation boundaries within single runtime instances
- Hardware-specific optimization targeting (beyond baseline profiles)
- Third-party evidence verifier plugin architecture
- Real-time adversarial policy adaptation
- Cross-engine differential execution with live workloads
Important: Undocumented CLI commands, internal library interfaces, and experimental flags may change or be removed without notice. For production integration, use only the explicitly documented surfaces listed in the Quick Example section.
Support Contract: Unsupported surface usage voids reproduction assistance.
Submit issues only for documented surface behaviors with reproducible artifact
bundles following the templates in docs/templates/.
- High-security mode adds measurable overhead on latency-sensitive low-risk workloads.
- Capability-typed extension onboarding requires explicit manifests and policy declarations; this is extra setup for small prototypes.
- Deterministic replay and evidence retention increase storage footprint.
- Full Node ecosystem compatibility remains an active target; edge behavior differences can still appear in low-level module or process APIs.
- Fleet-level immune features assume stable cryptographic identity and time synchronization across participating nodes.
For extension-heavy, high-trust workloads, yes. For broad legacy compatibility-only use cases, franken_node is the product layer that provides migration paths.
Yes, for full control-plane guarantees. FrankenEngine can run with reduced local mode, but constitutional guarantees require /dp/asupersync integration.
To verify both build modes, run ./scripts/test_standalone_build.sh ci. That gate records
artifacts under artifacts/standalone_build_gate/<timestamp>/, sends every heavy Cargo lane
through rch, and treats the standalone mode as the blocking gate:
- cargo check -p frankenengine-engine --no-default-features
- cargo test -p frankenengine-engine --no-default-features
- cargo check -p frankenengine-engine --all-features
If the sibling /dp dependencies needed for full integration are unavailable, the script records
that lane as skipped in the manifest instead of pretending the repo is fully integrated.
The canonical dependency-isolation contract for this split lives in
docs/CROSS_REPO_DEPENDENCY_ISOLATION_V1.md and docs/cross_repo_dependency_isolation_v1.json.
Yes for basic CLI workflows. Advanced operator views, replay dashboards, and policy explanation consoles use /dp/frankentui.
It enforces shared persistence contracts and conformance behavior across replay, evidence, benchmark, and control artifacts.
Through explicit expected-loss matrices, sequential testing boundaries, calibrated posterior models, and shadow promotion gates.
Given fixed code, policy, model snapshot, evidence stream, and randomness transcript, high-severity decision execution replays identically.
Yes. The benchmark harness, manifests, and artifact bundles are designed for third-party reproduction.
Operational target is at or below 250ms median from high-risk threshold crossing to containment action under defined load envelopes.
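The 250ms target above is a median over recorded threshold-cross-to-containment latencies, which suggests a simple SLO check over samples. This sketch is illustrative: the sample values and the `median_ms` helper are assumptions, not the shipped measurement pipeline.

```rust
// Illustrative median-latency SLO check. The 250ms target comes from the
// stated operational goal; sampling and aggregation here are hypothetical.
fn median_ms(samples: &mut Vec<u64>) -> u64 {
    samples.sort_unstable();
    let n = samples.len();
    if n % 2 == 1 {
        samples[n / 2]
    } else {
        // Even count: average the two middle samples (integer division).
        (samples[n / 2 - 1] + samples[n / 2]) / 2
    }
}

fn main() {
    // Hypothetical latencies (ms) from high-risk threshold crossing to action.
    let mut latencies = vec![120, 90, 310, 240, 180];
    let med = median_ms(&mut latencies);
    assert!(med <= 250, "median containment latency exceeds SLO");
    println!("median = {med} ms (target <= 250 ms)");
}
```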
About Contributions: Please don't take this the wrong way, but I do not accept outside contributions for any of my projects. I simply don't have the mental bandwidth to review anything, and it's my name on the thing, so I'm responsible for any problems it causes; thus, the risk-reward is highly asymmetric from my perspective. I'd also have to worry about other "stakeholders," which seems unwise for tools I mostly make for myself for free. Feel free to submit issues, and even PRs if you want to illustrate a proposed fix, but know I won't merge them directly. Instead, I'll have Claude or Codex review submissions via gh and independently decide whether and how to address them. Bug reports in particular are welcome. Sorry if this offends, but I want to avoid wasted time and hurt feelings. I understand this isn't in sync with the prevailing open-source ethos that seeks community contributions, but it's the only way I can move at this velocity and keep my sanity.
MIT, see LICENSE.
