# Impact Engine — Orchestrator


Fan-out/fan-in pipeline runner for scaling pilot experiments to full deployment

Running a single causal study is hard enough. Running a portfolio of pilots — measuring effects, scoring confidence, selecting winners, and validating at scale — means stitching together independent analysis steps, synchronizing results, and managing a fan-out/fan-in execution pattern. Most teams build this glue code from scratch for every engagement.
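
For readers unfamiliar with the pattern, here is a generic sketch of fan-out/fan-in in plain Python. The names and values are illustrative only; this is not the orchestrator's implementation:

```python
# Generic fan-out/fan-in illustration (not the orchestrator's internals):
# fan out independent pilot measurements, then fan in to one collection.
from concurrent.futures import ThreadPoolExecutor

def measure_pilot(pilot: str) -> dict:
    # Hypothetical stand-in for an independent MEASURE step
    return {"pilot": pilot, "effect": 0.12, "confidence": 0.9}

pilots = ["onboarding-revamp", "pricing-test", "referral-bonus"]
with ThreadPoolExecutor() as pool:
    results = list(pool.map(measure_pilot, pilots))      # fan-out: run in parallel
winners = [r for r in results if r["confidence"] > 0.8]  # fan-in: collect and select
```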

Impact Engine — Orchestrator wires the full MEASURE → EVALUATE → ALLOCATE → SCALE pipeline into one config-driven run. A YAML file defines your initiatives, budget, and component settings. The orchestrator fans out pilot measurements in parallel, collects confidence-scored results, runs portfolio selection, then scales the winners — producing an outcome report that compares predicted vs actual impact. Swap any pipeline component by changing one line in the config.
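
As a rough sketch, a `pipeline.yaml` might look like the following. The key names here (`initiatives`, `budget`, `components`) are illustrative assumptions, not the orchestrator's documented schema:

```yaml
# Hypothetical pipeline.yaml — key names are illustrative, not the documented schema
initiatives:
  - name: onboarding-revamp
    pilot_size: 500
  - name: pricing-test
    pilot_size: 300
budget: 250000
components:
  measure: tools-impact-engine-measure    # swap a component by editing this line
  evaluate: tools-impact-engine-evaluate
  allocate: tools-impact-engine-allocate
```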

*(Figure: Impact Engine Orchestrator overview)*

## Quick Start

```bash
pip install git+https://github.com/eisenhauerIO/tools-impact-engine-orchestrator.git
```

```python
from impact_engine_orchestrator.config import load_config
from impact_engine_orchestrator.orchestrator import Orchestrator

# Load the YAML pipeline definition and build an orchestrator from it
config = load_config("pipeline.yaml")
orchestrator = Orchestrator.from_config(config)

# Run the full MEASURE → EVALUATE → ALLOCATE → SCALE pipeline
results = orchestrator.run()
```

## Documentation

| Guide | Description |
| --- | --- |
| Pipeline | Stage descriptions and data flow contracts |
| Architecture | Execution model and design decisions |
| Design | System design document |

## Component Repositories

| Component | Repository |
| --- | --- |
| MEASURE | tools-impact-engine-measure |
| EVALUATE | tools-impact-engine-evaluate |
| ALLOCATE | tools-impact-engine-allocate |
| SIMULATE | tools-catalog-generator |

## Development

```bash
hatch run test        # Run tests
hatch run lint        # Run linter
hatch run format      # Format code
hatch run docs:build  # Build documentation
```