Research & Verification
Methodology
We adhere to a strict “Trust but Verify” doctrine: synthetic data is dangerous if it drifts from reality, so every run must be explainable and reproducible.
Calibration Approach (Operator-Driven)
- Venue-specific funding models live in `src/aleatoric/venues/*`, with tests in `tests/test_venues_models.py`.
- Presets (`src/aleatoric/presets.py`) encode starting parameters; adjust manifests for your venue deltas and rerun with fixed seeds.
- Use `/mcp/simulate_funding_regime` to compare funding paths before committing to a manifest.
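The fixed-seed comparison workflow above can be sketched in isolation. Note that the OU-style funding model, function name, and parameters below are illustrative stand-ins, not the repo's actual venue models or the `/mcp/simulate_funding_regime` endpoint:

```python
import numpy as np

def simulate_funding_path(kappa, theta, sigma, n_steps=1000, dt=1/3, seed=42):
    """Toy mean-reverting (OU-style) funding-rate path; illustrative only."""
    rng = np.random.default_rng(seed)  # fixed seed => reproducible run
    rates = np.empty(n_steps)
    rates[0] = theta
    for t in range(1, n_steps):
        drift = kappa * (theta - rates[t - 1]) * dt
        shock = sigma * np.sqrt(dt) * rng.standard_normal()
        rates[t] = rates[t - 1] + drift + shock
    return rates

# Compare two candidate parameter sets ("venue deltas") under the same seed,
# so any difference between the paths is attributable to the parameters alone.
base = simulate_funding_path(kappa=2.0, theta=0.0001, sigma=0.0005, seed=7)
delta = simulate_funding_path(kappa=2.0, theta=0.0003, sigma=0.0005, seed=7)
print(delta.mean() > base.mean())  # prints True
```

Because both runs share a seed, the shock sequences are identical, and the mean difference between the paths is purely the effect of the changed `theta`.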
Statistical Tests
Suggested checks when comparing against live data (not automated in the repo):
- KS-Test (Kolmogorov–Smirnov): Verifies that the distribution of synthetic returns matches the distribution of historical returns.
- Autocorrelation: Ensures that volatility clustering (periods of high/low vol) mimics real markets.
- Hurst Exponent: Verifies the “memory” of the time series (trending vs. mean-reverting).
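The three checks above can be run with standard tooling; a minimal sketch, assuming both inputs are return series (the function name and thresholds are my own, not part of the repo):

```python
import numpy as np
from scipy.stats import ks_2samp

def check_synthetic_vs_live(synthetic, historical, lags=range(2, 50)):
    """Run the three suggested checks on two return series."""
    # 1) KS test: are the two return distributions plausibly the same?
    ks_stat, ks_pvalue = ks_2samp(synthetic, historical)

    # 2) Volatility clustering: lag-1 autocorrelation of squared returns
    #    (markedly positive in real markets, ~0 for i.i.d. noise).
    sq = synthetic ** 2
    acf1 = np.corrcoef(sq[:-1], sq[1:])[0, 1]

    # 3) Hurst exponent of the cumulative (price-like) path:
    #    ~0.5 random walk, >0.5 trending, <0.5 mean-reverting.
    path = np.cumsum(synthetic)
    tau = [np.std(path[lag:] - path[:-lag]) for lag in lags]
    hurst = np.polyfit(np.log(list(lags)), np.log(tau), 1)[0]

    return {"ks_pvalue": ks_pvalue, "sq_acf_lag1": acf1, "hurst": hurst}

# Smoke-test on i.i.d. Gaussian "returns": expect high KS p-value,
# near-zero squared-return autocorrelation, and Hurst near 0.5.
rng = np.random.default_rng(0)
report = check_synthetic_vs_live(rng.standard_normal(5000) * 0.01,
                                 rng.standard_normal(5000) * 0.01)
```

On real synthetic-vs-historical data you would expect `sq_acf_lag1` well above zero if volatility clustering is reproduced, and a Hurst exponent matching the live series' regime.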
Audits
- Regression coverage is documented in `validation/VALIDATION_REPORT.md` (158 tests, 82% coverage).
- Determinism and manifest/schema parity are enforced in `tests/test_determinism.py` and `tests/test_mcp_manifest_parity.py`.
- Audit logs and metrics include `job_id`, `seed`, and `preset` for traceability (`api/audit.py`, `observability.py`).
- The coverage snapshot reflects the report timestamp (2025-12-10); refresh these numbers when rerunning the suite.
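The determinism guarantee enforced by `tests/test_determinism.py` reduces to a simple property: the same seed and preset must yield byte-identical output. A generic sketch of such a check, where `run_job` and its fields are hypothetical stand-ins for the repo's API (not its actual signatures):

```python
import hashlib
import numpy as np

def run_job(seed, preset):
    """Hypothetical stand-in for one simulation run, returning audit metadata."""
    rng = np.random.default_rng(seed)
    returns = rng.standard_normal(256) * preset["vol"]
    return {
        "job_id": f"job-{seed}-{preset['name']}",  # traceability fields, per api/audit.py
        "seed": seed,
        "preset": preset["name"],
        "digest": hashlib.sha256(returns.tobytes()).hexdigest(),
    }

preset = {"name": "demo", "vol": 0.01}
a = run_job(seed=123, preset=preset)
b = run_job(seed=123, preset=preset)
assert a["digest"] == b["digest"]  # same seed + preset => identical output bytes
```

Hashing the raw output bytes (rather than comparing floats with a tolerance) is what makes the test strict: any nondeterminism, however small, flips the digest.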