Science
Three questions drive the pipeline. Each maps to a discipline, a stage, and a concrete output — grounded in established methods from econometrics and statistical decision theory.
Causal Inference — What happened?
You can only ever observe one world. Every causal method reconstructs the one you didn't.
When metrics move after an initiative launches, other things move too — seasonality, competitor actions, market shifts. The causal question is always: how much of that movement was actually caused by the initiative?
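The logic can be made concrete with a minimal difference-in-differences sketch. This is illustrative only, not necessarily the estimator the Engine uses: it assumes a control group that captures what would have happened without the initiative.

```python
# Minimal difference-in-differences sketch (illustrative; assumes the
# control group tracks what the treated group would have done anyway).

def diff_in_diff(treated_before, treated_after, control_before, control_after):
    """Estimate the initiative's effect net of background movement."""
    observed_change = treated_after - treated_before    # includes seasonality, market shifts
    background_change = control_after - control_before  # movement absent the initiative
    return observed_change - background_change

# Metrics rose 12 points after launch, but a comparable untouched market
# rose 5 on its own, so roughly 7 points are attributable to the initiative.
effect = diff_in_diff(100, 112, 100, 105)  # → 7
```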
Impact Engine — Measure
The Impact Engine — Measure manages the full measurement pipeline — data loading, transformation, estimation, and storage — behind a single config-driven interface. Swap the causal method, data source, or storage backend by changing one line in YAML, without rewriting connectors or breaking downstream consumers.
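A hypothetical config conveys the idea. The key names below are invented for illustration; the Engine's actual schema may differ.

```yaml
# Hypothetical config sketch; key names are illustrative, not the
# Engine's actual schema.
measure:
  data_source: warehouse               # swap to e.g. csv or s3
  method: difference_in_differences    # swap to e.g. synthetic_control
  storage: parquet                     # swap to e.g. postgres
```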
Evidence Assessment — What did we learn?
Not all estimates are equal. The method that produced a number determines how much weight it deserves.
How a study is designed — how random the assignment, how clean the control group — sets the ceiling on how much the resulting estimate can be trusted.
Impact Engine — Evaluate
The Impact Engine — Evaluate assigns a confidence score to each initiative based on its measurement design. That score directly penalizes return estimates downstream: low confidence pulls returns toward worst-case scenarios, making the allocator conservative where evidence is weak and aggressive where evidence is strong.
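One simple form such a penalty can take is a convex combination of the estimate and its worst case. This is an assumed functional form for illustration; the document does not specify the Engine's exact penalty.

```python
# Assumed form of confidence-weighted penalization (the Engine's exact
# penalty function is not specified here).

def penalized_return(estimate, worst_case, confidence):
    """Shrink a return estimate toward its worst-case scenario.

    confidence = 1.0 trusts the estimate fully;
    confidence = 0.0 falls all the way back to the worst case.
    """
    return confidence * estimate + (1.0 - confidence) * worst_case

strong_evidence = penalized_return(0.20, -0.05, 0.9)  # stays near 0.20
weak_evidence = penalized_return(0.20, -0.05, 0.2)    # pulled toward -0.05
```

The effect is exactly the behavior described above: a weakly evidenced 20% return is discounted far more aggressively than a well-evidenced one.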
Decision Theory — What should we do?
Knowing what works is not enough. You still have to decide where to bet.
Causal estimates and confidence scores answer what happened. They don't answer what to fund. Decision theory frames allocation as a portfolio problem and resolves it under uncertainty.
Impact Engine — Allocate
The Impact Engine — Allocate solves this with two pluggable decision rules. A minimax-regret solver picks the allocation whose worst-case regret is smallest, where regret is the gap between what a choice earns in a scenario and what the best choice for that scenario would have earned. A Bayesian solver maximizes expected return under user-specified scenario weights. Both consume confidence-penalized returns, so better evidence enables better bets.
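The two rules can be sketched over a small returns table, with rows as candidate allocations and columns as scenarios. The data layout and function names here are invented for illustration; the Engine's actual interfaces are not shown in this document.

```python
# Sketch of the two decision rules over a returns table:
# rows = candidate allocations, columns = scenarios. Illustrative only.

def minimax_regret(returns):
    """Index of the row whose worst-case regret is smallest.

    Regret of a choice in a scenario = best achievable return in that
    scenario minus the return of that choice.
    """
    n_scenarios = len(returns[0])
    best = [max(row[s] for row in returns) for s in range(n_scenarios)]
    worst_regret = [max(best[s] - row[s] for s in range(n_scenarios))
                    for row in returns]
    return min(range(len(returns)), key=lambda i: worst_regret[i])

def bayes(returns, weights):
    """Index of the row with the highest weighted expected return."""
    expected = [sum(w * r for w, r in zip(weights, row)) for row in returns]
    return max(range(len(returns)), key=lambda i: expected[i])

returns = [
    [0.10, 0.10],   # safe bet: same return in both scenarios
    [0.30, -0.20],  # risky bet: great in scenario 0, bad in scenario 1
]
minimax_regret(returns)        # → 0: caps worst-case regret at 0.20
bayes(returns, [0.8, 0.2])     # → 1: expected return 0.20 beats 0.10
```

Note how the rules disagree on the same data: minimax regret takes the safe bet regardless of scenario probabilities, while the Bayesian rule backs the risky bet once scenario 0 is weighted heavily enough.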
Further Reading
The maintainer's work on these topics spans academic research, applied business publications, and course teaching — all collected at peisenha.github.io.
Causal Inference
J.J. Heckman & E.J. Vytlacil — Econometric Evaluation of Social Programs, Parts I & II, in Handbook of Econometrics (2007)
G.W. Imbens & D.B. Rubin — Causal Inference for Statistics, Social, and Biomedical Sciences (2015)
Evidence Assessment
P.R. Rosenbaum — Observational Studies (2nd ed., 2002)
R.A. Howard & A.E. Abbas — Foundations of Decision Analysis (Global Edition, 2016)
Decision Theory
I. Gilboa — Theory of Decision under Uncertainty (Econometric Society Monographs 45, 2009)
C.F. Manski — Public Policy in an Uncertain World: Analysis and Decisions (2013)