# Impact Engine — Orchestrator

Fan-out/fan-in pipeline runner for scaling pilot experiments to full deployment.
Running a single causal study is hard enough. Running a portfolio of pilots — measuring effects, scoring confidence, selecting winners, and validating at scale — means stitching together independent analysis steps, synchronizing results, and managing a fan-out/fan-in execution pattern. Most teams build this glue code from scratch for every engagement.
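For intuition, the hand-rolled version of that pattern often looks something like the minimal sketch below (standard-library Python; `measure_pilot` and the initiative names are hypothetical stand-ins, not part of this project):

```python
# Minimal fan-out/fan-in sketch of the glue code teams typically hand-roll.
# Everything here is illustrative; it is not the orchestrator's API.
from concurrent.futures import ThreadPoolExecutor, as_completed

def measure_pilot(initiative: str) -> dict:
    # Stand-in for one independent causal analysis step.
    return {"initiative": initiative, "effect": 0.0, "confidence": 0.0}

initiatives = ["free_shipping", "loyalty_points", "email_nudges"]

# Fan out: launch each pilot measurement independently.
with ThreadPoolExecutor() as pool:
    futures = [pool.submit(measure_pilot, name) for name in initiatives]
    # Fan in: block until all measurements finish, then hand the
    # synchronized results to the selection step.
    results = [future.result() for future in as_completed(futures)]
```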
Impact Engine — Orchestrator wires the full MEASURE → EVALUATE → ALLOCATE → SCALE pipeline into one config-driven run. A YAML file defines your initiatives, budget, and component settings. The orchestrator fans out pilot measurements in parallel, collects confidence-scored results, runs portfolio selection, then scales the winners — producing an outcome report that compares predicted vs actual impact. Swap any pipeline component by changing one line in the config.
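A config might look roughly like the sketch below; the field names are illustrative guesses based on the description above, not the actual schema:

```yaml
# Hypothetical pipeline.yaml -- field names are illustrative, not the real schema.
initiatives:
  - name: free_shipping
    pilot_data: data/free_shipping.csv
  - name: loyalty_points
    pilot_data: data/loyalty_points.csv

budget: 250000

components:
  measure: diff_in_diff        # swap a pipeline component by changing this line
  evaluate: confidence_scorer
  allocate: knapsack_selector
  scale: staged_rollout
```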
## Quick Start

```bash
pip install git+https://github.com/eisenhauerIO/tools-impact-engine-orchestrator.git
```
```python
from impact_engine_orchestrator.config import load_config
from impact_engine_orchestrator.orchestrator import Orchestrator

# Build the orchestrator from a YAML pipeline definition.
config = load_config("pipeline.yaml")
orchestrator = Orchestrator.from_config(config)

# Run MEASURE -> EVALUATE -> ALLOCATE -> SCALE end to end.
results = orchestrator.run()
```
## Documentation

| Guide | Description |
|---|---|
| | Stage descriptions and data flow contracts |
| | Execution model and design decisions |
| | System design document |
## Component Repositories

| Component | Repository |
|---|---|
| MEASURE | |
| EVALUATE | |
| ALLOCATE | |
| SIMULATE | |
## Development

```bash
hatch run test        # Run tests
hatch run lint        # Run linter
hatch run format      # Format code
hatch run docs:build  # Build documentation
```
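These scripts map to hatch environment definitions in `pyproject.toml`. A sketch of what such a configuration might look like (the tools named here, `pytest`, `ruff`, and `mkdocs`, are assumptions, not confirmed choices of this repo):

```toml
# Hypothetical pyproject.toml excerpt; the tool choices are assumptions.
[tool.hatch.envs.default.scripts]
test = "pytest {args:tests}"
lint = "ruff check ."
format = "ruff format ."

[tool.hatch.envs.docs.scripts]
build = "mkdocs build"
```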