Science

Three questions drive the pipeline. Each maps to a discipline, a stage, and a concrete output.

Causal Inference — What happened?

When an initiative launches and metrics move, the natural instinct is to claim credit. But other things changed at the same time — seasonality, competitor actions, market shifts. The fundamental problem of causal inference is that you can never observe both worlds: the one where the intervention happened and the one where it didn't. Every causal method is a different way to reconstruct that missing counterfactual.

CI Docs Impact Engine — Measure

The Impact Engine — Measure implements five causal inference methods behind a single interface: randomized experiments, synthetic control, interrupted time series, matching, and subclassification. Swap the method by changing one line in the config.
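The single-interface idea can be sketched roughly as follows. Everything here is illustrative: the function names, the config shape, and the registry keys are assumptions, not the engine's actual API, and the difference-in-means estimator stands in for the real methods.

```python
# Hypothetical sketch of method dispatch behind one interface.
# None of these identifiers come from the real Impact Engine.

def diff_in_means(treated, control):
    """Stand-in estimator: mean(treated) - mean(control)."""
    return sum(treated) / len(treated) - sum(control) / len(control)

ESTIMATORS = {
    "randomized_experiment": diff_in_means,
    # "synthetic_control": ..., "interrupted_time_series": ...,
    # "matching": ..., "subclassification": ...
}

config = {"method": "randomized_experiment"}  # the one line you'd change

def estimate_effect(config, treated, control):
    """Look up the configured method and run it on the data."""
    return ESTIMATORS[config["method"]](treated, control)

effect = estimate_effect(config, treated=[12, 14, 13], control=[10, 11, 9])
print(round(effect, 2))  # 3.0
```

Swapping `"randomized_experiment"` for another registry key would route the same data through a different estimator without touching the calling code.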

Evidence Assessment — What did we learn?

How much you trust a causal estimate depends on the method that produced it. A randomized experiment with thousands of observations produces stronger evidence than a time series model on sparse data. Evidence assessment scores each estimate for reliability — creating a calibrated hierarchy from experimental designs down to approximations.
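One minimal way to picture such a hierarchy is a lookup from measurement design to a reliability score. The numbers below are made-up placeholders, not the engine's calibrated values; the point is only the ordering from experimental designs down to approximations.

```python
# Illustrative confidence hierarchy; scores are invented placeholders.
CONFIDENCE = {
    "randomized_experiment": 0.95,
    "synthetic_control": 0.75,
    "interrupted_time_series": 0.65,
    "matching": 0.55,
    "subclassification": 0.50,
}

# The ordering, not the exact values, carries the meaning:
print(max(CONFIDENCE, key=CONFIDENCE.get))  # randomized_experiment
```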

CI Docs Impact Engine — Evaluate

The Impact Engine — Evaluate assigns a confidence score to each initiative based on its measurement design. That score directly penalizes return estimates downstream: low confidence pulls returns toward worst-case scenarios, making the allocator conservative where evidence is weak and aggressive where evidence is strong.
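The "low confidence pulls returns toward worst-case" behavior can be sketched as a simple interpolation. This is an assumed functional form, not the engine's actual penalty; it just makes the described behavior concrete.

```python
def penalized_return(estimate, worst_case, confidence):
    """Shrink a point estimate toward its worst-case scenario as
    confidence falls: confidence=1.0 keeps the estimate unchanged,
    confidence=0.0 collapses it to the worst case."""
    return worst_case + confidence * (estimate - worst_case)

# Strong evidence barely moves the estimate; weak evidence drags it down.
print(penalized_return(100.0, 20.0, 0.9))  # 92.0
print(penalized_return(100.0, 20.0, 0.3))  # 44.0
```

Under a rule like this, two initiatives with the same point estimate but different measurement designs enter the allocator with very different effective returns.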

Decision Theory — What should we do?

Knowing what works is not enough — you must decide where to invest under constraints and uncertainty. Decision theory frames this as a portfolio optimization problem: select the set of initiatives that maximizes returns across scenarios while respecting budget and strategic constraints.

CI Docs Impact Engine — Allocate

The Impact Engine — Allocate solves this with two pluggable decision rules. Minimax regret minimizes the maximum regret across all scenarios. A Bayesian solver maximizes expected return under user-specified scenario weights. Both consume confidence-penalized returns — better evidence enables better bets.
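The two rules can be sketched over a small returns table (rows are initiatives, columns are scenarios). The data layout, names, and single-pick simplification are assumptions for illustration; the real allocator selects sets of initiatives under constraints.

```python
# Hypothetical per-scenario, confidence-penalized returns.
returns = {
    "init_a": [10.0, 1.0],  # great in scenario 0, poor in scenario 1
    "init_b": [6.0, 6.0],   # steady in both
}

def minimax_regret(returns):
    """Pick the option whose worst-case regret (shortfall versus the
    best choice in each scenario) is smallest."""
    n = len(next(iter(returns.values())))
    best = [max(r[s] for r in returns.values()) for s in range(n)]
    regret = {k: max(best[s] - r[s] for s in range(n))
              for k, r in returns.items()}
    return min(regret, key=regret.get)

def bayes_choice(returns, weights):
    """Maximize the scenario-weighted expected return."""
    return max(returns,
               key=lambda k: sum(w * r for w, r in zip(weights, returns[k])))

print(minimax_regret(returns))             # init_b (max regret 4 vs 5)
print(bayes_choice(returns, [0.8, 0.2]))   # init_a (8.2 vs 6.0 expected)
```

The two rules can disagree, as here: minimax regret hedges toward the steady option, while a Bayesian solver confident in scenario 0 backs the riskier one.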
