Solver algorithms
Impact Engine Allocate provides two pluggable decision rules for portfolio selection.
Both share a common preprocessing pipeline (confidence filtering and return
penalization) and return the same RuleResult dict.
Minimax regret
The minimax regret rule selects the portfolio that minimizes the maximum regret across all scenarios. Regret is the difference between the best achievable return and the actual portfolio return under each scenario.
Step 1: Optimal scenario returns
For each scenario $j$, solve an independent binary knapsack:
$$V_j^* = \max \sum_i x_i \cdot R^{\text{eff}}_{i,j} \quad \text{s.t.} \quad \sum_i x_i \cdot \text{cost}_i \leq B$$
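For small initiative counts, Step 1 can be checked by brute force. The sketch below enumerates all affordable subsets per scenario; the costs, effective returns, and budget are hypothetical toy values, not shipped defaults, and a real run would use the MILP solver rather than enumeration:

```python
from itertools import combinations

# Hypothetical toy data: 3 initiatives x 2 scenarios of effective returns.
costs = [4, 3, 5]
R_eff = [[10.0, 2.0],   # initiative 0: return under scenarios 0, 1
         [6.0, 7.0],    # initiative 1
         [8.0, 5.0]]    # initiative 2
budget = 8

def optimal_scenario_return(j):
    """Brute-force the binary knapsack optimum V_j* for scenario j."""
    n = len(costs)
    best = 0.0
    for r in range(n + 1):
        for subset in combinations(range(n), r):
            if sum(costs[i] for i in subset) <= budget:
                best = max(best, sum(R_eff[i][j] for i in subset))
    return best

v_j_star = [optimal_scenario_return(j) for j in range(2)]
```

With these toy numbers the per-scenario optima are 16.0 (select initiatives 0 and 1) and 12.0 (select initiatives 1 and 2): no single portfolio achieves both, which is exactly what makes regret non-trivial.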
Step 2: Minimax regret
Minimize the maximum regret $\theta$ across all scenarios:
$$\min \theta \quad \text{s.t.} \quad \theta \geq V_j^* - \sum_i x_i \cdot R^{\text{eff}}_{i,j} \quad \forall j$$
Subject to:
Budget constraint: $\sum_i x_i \cdot \text{cost}_i \leq B$
Minimum worst return: $\sum_i x_i \cdot R_{i,\text{worst}} \geq R_{\min}$
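Both steps can be sketched end-to-end by enumeration. This is an illustrative brute-force version of the formulation (the real rule solves the MILP above); the data, and the choice $R_{\min} = 0$, are assumptions:

```python
from itertools import combinations

# Hypothetical toy data, as in the Step 1 sketch.
costs = [4, 3, 5]
R_eff = [[10.0, 2.0], [6.0, 7.0], [8.0, 5.0]]
budget, r_min = 8, 0.0
n, m = len(costs), 2

def feasible():
    """Yield every subset satisfying budget and worst-return constraints."""
    for r in range(n + 1):
        for s in combinations(range(n), r):
            if (sum(costs[i] for i in s) <= budget and
                    sum(min(R_eff[i]) for i in s) >= r_min):
                yield s

# Step 1: per-scenario optima V_j*.
v_star = [max(sum(R_eff[i][j] for i in s) for s in feasible())
          for j in range(m)]

# Step 2: portfolio minimizing the maximum regret across scenarios.
best = min(feasible(),
           key=lambda s: max(v_star[j] - sum(R_eff[i][j] for i in s)
                             for j in range(m)))
regrets = [v_star[j] - sum(R_eff[i][j] for i in best) for j in range(m)]
```

Here the regret-optimal portfolio is {1, 2}: it gives up 2.0 against the scenario-0 optimum but is itself optimal under scenario 1, so its worst-case regret (2.0) beats every alternative.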
The detail dict includes v_j_star (optimal per-scenario returns) and
regrets (per-scenario regret values) for inspection.

Bayesian expected return
The Bayesian expected return rule maximizes the weighted sum of scenario returns, where weights represent the decision-maker’s prior beliefs about scenario likelihoods.
Formulation
Given scenario weights $w_j$ (non-negative, summing to 1), the weighted return for initiative $i$ is:
$$\bar{R}_i = \sum_j w_j \cdot R^{\text{eff}}_{i,j}$$
The solver maximizes the total weighted return:
$$\max \sum_i x_i \cdot \bar{R}_i$$
Subject to:
Budget constraint: $\sum_i x_i \cdot \text{cost}_i \leq B$
Minimum worst return: $\sum_i x_i \cdot R_{i,\text{worst}} \geq R_{\min}$
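The Bayesian rule reduces to a single knapsack over the weighted returns $\bar{R}_i$. A brute-force sketch of the formulation, with hypothetical toy data and assumed weights (0.7, 0.3), follows:

```python
from itertools import combinations

# Hypothetical toy data, matching the minimax regret sketch.
costs = [4, 3, 5]
R_eff = [[10.0, 2.0], [6.0, 7.0], [8.0, 5.0]]
budget, r_min = 8, 0.0
weights = [0.7, 0.3]   # assumed prior: scenario 0 judged more likely

# Weighted return per initiative: R_bar_i = sum_j w_j * R_eff[i][j]
r_bar = [sum(w * r for w, r in zip(weights, row)) for row in R_eff]

n = len(costs)
best = max(
    (s for r in range(n + 1) for s in combinations(range(n), r)
     if sum(costs[i] for i in s) <= budget
     and sum(min(R_eff[i]) for i in s) >= r_min),
    key=lambda s: sum(r_bar[i] for i in s))
```

With scenario 0 weighted at 0.7, the rule prefers {0, 1} (weighted return 13.9) over the regret-optimal {1, 2} (13.4), showing how the prior shifts the selection.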
Equal weights recover the Laplace criterion (no scenario preference).
Pessimistic weights (placing most weight on the worst scenario) produce
conservative portfolios similar to those from minimax regret.
The detail dict includes weights (the scenario weights used) and
weighted_returns (per-initiative weighted returns for the selected initiatives).
Implementation
Both solvers use PuLP with the CBC (COIN-OR Branch and Cut) backend. Binary decision variables $x_i \in \{0, 1\}$ indicate whether each initiative is selected.
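The PuLP model shape is the same in both rules: binary variables, a linear objective, and linear constraints handed to CBC. A minimal sketch, using hypothetical toy data and the Bayesian objective (the model name, data, and omission of the worst-return constraint are simplifications, not the actual module code):

```python
import pulp

# Hypothetical toy data: per-initiative costs and weighted returns R_bar_i.
costs = [4, 3, 5]
r_bar = [7.6, 6.3, 7.1]
budget = 8
n = len(costs)

prob = pulp.LpProblem("allocate", pulp.LpMaximize)
# Binary decision variables x_i in {0, 1}.
x = pulp.LpVariable.dicts("x", range(n), cat="Binary")
# Objective: maximize total weighted return.
prob += pulp.lpSum(r_bar[i] * x[i] for i in range(n))
# Budget constraint.
prob += pulp.lpSum(costs[i] * x[i] for i in range(n)) <= budget
# Solve with the CBC backend, suppressing solver output.
prob.solve(pulp.PULP_CBC_CMD(msg=False))

selected = [i for i in range(n) if pulp.value(x[i]) > 0.5]
```

The `> 0.5` threshold is the usual guard against CBC returning values like 0.9999999 for binary variables.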