Impact Engine
Science alone doesn't create value. Productionizing, automating, and scaling it does.
Organizations invest in dozens of initiatives but rarely know which ones moved the needle. Impact Engine closes the loop — it measures causal impact, scores evidence quality, optimizes resource allocation, and repeats every cycle. Built, tested, and deployed as an open-source Python ecosystem by Philipp Eisenhauer.
Decision Loop
Start with a cohort of candidate initiatives and run small-scale pilots. Measure the causal effect of each pilot. Evaluate how much to trust each estimate based on methodological rigor and remaining uncertainty. Allocate budget to the most promising initiatives through constrained portfolio optimization. Scale the winners, monitor their performance, and feed the learnings back into the next cycle.
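The loop above can be sketched in a few lines of Python. Everything here is an illustrative stand-in, not the library's actual methods: toy pilot data, a difference-in-means effect estimator, a sample-size-based confidence score, and a greedy value-per-cost budget rule.

```python
# Toy pilot data for three hypothetical initiatives: outcomes for
# treated and control units, plus the cost of scaling each one.
initiatives = {
    "A": {"treated": [5.2, 5.8, 6.1], "control": [5.0, 5.1, 4.9], "cost": 40},
    "B": {"treated": [3.1, 2.9, 3.3], "control": [3.0, 3.2, 2.8], "cost": 25},
    "C": {"treated": [7.5, 8.1, 7.9], "control": [6.0, 6.2, 5.9], "cost": 60},
}
budget = 70

def measure(data):
    # Causal effect as the difference in pilot means (stand-in estimator).
    mean = lambda xs: sum(xs) / len(xs)
    return mean(data["treated"]) - mean(data["control"])

def evaluate(data):
    # Crude confidence score: more pilot observations -> more trust.
    n = len(data["treated"]) + len(data["control"])
    return min(1.0, n / 10)

# Measure and evaluate every pilot.
scored = {
    name: {"effect": measure(d), "confidence": evaluate(d), "cost": d["cost"]}
    for name, d in initiatives.items()
}

# Allocate: greedily fund the best confidence-weighted effect per unit cost.
ranked = sorted(
    scored.items(),
    key=lambda kv: kv[1]["effect"] * kv[1]["confidence"] / kv[1]["cost"],
    reverse=True,
)
funded, spent = [], 0
for name, s in ranked:
    if spent + s["cost"] <= budget:
        funded.append(name)
        spent += s["cost"]

print(funded, spent)  # the winners to scale and feed into the next cycle
```

Scaling the funded initiatives and re-measuring them produces the data that seeds the next cycle of the loop.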
The Impact Engine runs this loop in a single call. Pass a YAML config with your initiatives and budget, and it executes all four stages — pilot measurement, evidence scoring, portfolio selection, and scaled re-measurement — returning structured results for each.
from impact_engine_orchestrator import load_config, Orchestrator

config = load_config("config.yaml")
engine = Orchestrator.from_config(config)
result = engine.run()

# result keys map to each stage:
# pilot_results → per-initiative causal effect estimates
# evaluate_results → confidence scores by methodology
# allocate_result → portfolio selection under budget
# outcome_reports → predicted vs. actual after scaling
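The config schema itself is not reproduced here. Purely as an illustration, a minimal config.yaml might pair a budget with a list of initiatives; all field names below are hypothetical, not the library's actual schema:

```yaml
# Hypothetical config sketch — field names are illustrative only.
budget: 100000
initiatives:
  - name: onboarding_revamp
    pilot_cost: 5000
  - name: referral_bonus
    pilot_cost: 8000
```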
Walk through a full pipeline run →
Source Code
Measure
Causal impact estimation
Evaluate
Evidence quality scoring
Allocate
Constrained portfolio optimization
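Constrained portfolio optimization of this kind can be framed as a 0/1 knapsack: pick the subset of initiatives that maximizes total estimated impact without exceeding the budget. A brute-force sketch over illustrative effect/cost pairs (not the library's implementation) makes the idea concrete:

```python
from itertools import combinations

# Hypothetical candidates: name -> (estimated effect, cost of scaling).
candidates = {"A": (0.7, 40), "B": (0.1, 25), "C": (1.8, 60)}
budget = 85

# Enumerate every subset and keep the feasible one with the highest
# total effect — exact but exponential, fine for small portfolios.
best_value, best_portfolio = 0.0, ()
for r in range(len(candidates) + 1):
    for subset in combinations(candidates, r):
        cost = sum(candidates[k][1] for k in subset)
        value = sum(candidates[k][0] for k in subset)
        if cost <= budget and value > best_value:
            best_value, best_portfolio = value, subset

print(best_portfolio, best_value)
```

Exhaustive search is exact but scales exponentially in the number of candidates; a dynamic-programming or integer-programming solver is the usual choice once portfolios grow beyond a handful of initiatives.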