{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# A/A Testing\n", "\n", "Before trusting a measurement model with real treatment effects, we need to confirm it behaves correctly when there is **no effect**. An A/A test assigns treatment labels randomly with no actual intervention — the true treatment effect is 0 by construction. Any model applied to this data should recover an estimate close to 0.\n", "\n", "This notebook answers two questions:\n", "\n", "1. **Model swappability** — Given the same A/A data, do different cross-sectional models all produce estimates near 0?\n", "2. **Sampling variability** — Is a single estimate reliable, or do we need multiple replications to separate bias from noise?\n", "\n", "We use a single-day simulation with random treatment labels assigned to 50% of products. The `quality_boost` is set to 0, so there is no real intervention." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Initial setup" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import copy\n", "import os\n", "from pathlib import Path\n", "\n", "import numpy as np\n", "import pandas as pd\n", "import yaml\n", "from impact_engine_measure import measure_impact, load_results\n", "from impact_engine_measure.core.validation import load_config\n", "from online_retail_simulator import simulate" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Configurable via the N_REPS environment variable for CI (reduced values speed up execution)\n", "N_REPS = int(os.environ.get(\"N_REPS\", \"20\"))\n", "\n", "output_path = Path(\"output/demo_aa_testing\")\n", "output_path.mkdir(parents=True, exist_ok=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Step 1 — Product catalog\n", "\n", "All models will use the same product catalog."
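, "\n", "As a quick, self-contained illustration of the A/A logic before running the pipeline (a hypothetical toy table, not the simulator's output — the column names here are made up):\n", "\n", "```python\n", "import numpy as np\n", "import pandas as pd\n", "\n", "rng = np.random.default_rng(0)\n", "toy = pd.DataFrame({\n", "    \"product_id\": range(1000),\n", "    \"revenue\": rng.gamma(shape=2.0, scale=50.0, size=1000),  # outcomes untouched by any treatment\n", "})\n", "toy[\"enriched\"] = rng.random(1000) < 0.5  # random 50% labels: an A/A assignment\n", "\n", "treated = toy.loc[toy[\"enriched\"], \"revenue\"].mean()\n", "control = toy.loc[~toy[\"enriched\"], \"revenue\"].mean()\n", "print(f\"Naive difference in means: {treated - control:.2f}\")  # sampling noise around 0\n", "```\n", "\n", "With no intervention, any non-zero difference is pure sampling noise — exactly what the models below should report."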
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "with open(\"configs/demo_model_selection_catalog.yaml\") as f:\n", "    catalog_config = yaml.safe_load(f)\n", "\n", "tmp_catalog = output_path / \"catalog_config.yaml\"\n", "with open(tmp_catalog, \"w\") as f:\n", "    yaml.dump(catalog_config, f, default_flow_style=False)\n", "\n", "catalog_job = simulate(str(tmp_catalog), job_id=\"catalog\")\n", "products = catalog_job.load_df(\"products\")\n", "\n", "print(f\"Generated {len(products)} products\")\n", "products.head()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Step 2 — Configuration\n", "\n", "Treatment assignment is controlled by the config's `DATA.ENRICHMENT` section. The `product_detail_boost` function randomly assigns 50% of products to treatment (`enrichment_fraction: 0.5`). Because `quality_boost: 0.0`, there is no actual intervention, so the true treatment effect is 0 by construction." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "config_path = \"configs/demo_model_selection.yaml\"\n", "true_te = 0  # A/A design: no treatment effect by construction" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def run_with_override(base_config, measurement_override, storage_url, job_id, source_seed=None):\n", "    \"\"\"Override MEASUREMENT in base config, write temp YAML, run measure_impact().\n", "\n", "    Optionally override the data-generating seed for Monte Carlo replications.\n", "    Returns the full MeasureJobResult for access to both impact_results and transformed_metrics.\n", "    \"\"\"\n", "    config = copy.deepcopy(base_config)\n", "    config[\"MEASUREMENT\"] = measurement_override\n", "    if source_seed is not None:\n", "        config[\"DATA\"][\"SOURCE\"][\"CONFIG\"][\"seed\"] = source_seed\n", "\n", "    tmp_config_path = Path(storage_url) / f\"config_{job_id}.yaml\"\n", "    tmp_config_path.parent.mkdir(parents=True, exist_ok=True)\n", "    with open(tmp_config_path, \"w\") as f:\n", "        yaml.dump(config, f, default_flow_style=False)\n", "\n", "    job_info = measure_impact(str(tmp_config_path), storage_url, job_id=job_id)\n", "    return load_results(job_info)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "base_config = load_config(config_path)\n", "\n", "model_overrides = {\n", "    \"Experiment (OLS)\": {\n", "        \"MODEL\": \"experiment\",\n", "        \"PARAMS\": {\"formula\": \"revenue ~ enriched + price\"},\n", "    },\n", "    \"Subclassification\": {\n", "        \"MODEL\": \"subclassification\",\n", "        \"PARAMS\": {\n", "            \"treatment_column\": \"enriched\",\n", "            \"covariate_columns\": [\"price\"],\n", "            \"n_strata\": 5,\n", "            \"estimand\": \"att\",\n", "            \"dependent_variable\": \"revenue\",\n", "        },\n", "    },\n", "    \"Nearest Neighbour Matching\": {\n", "        \"MODEL\": \"nearest_neighbour_matching\",\n", "        \"PARAMS\": {\n", "            \"treatment_column\": \"enriched\",\n", "            \"covariate_columns\": [\"price\"],\n", "            \"dependent_variable\": \"revenue\",\n", "            \"caliper\": 0.2,\n", "            \"replace\": True,\n", "            \"ratio\": 1,\n", "        },\n", "    },\n", "}\n", "\n", "\n", "def extract_te(result):\n", "    \"\"\"Extract the treatment effect from a MeasureJobResult regardless of model type.\"\"\"\n", "    estimates = result.impact_results[\"data\"][\"impact_estimates\"]\n", "    model_type = result.impact_results[\"model_type\"]\n", "    if model_type == \"experiment\":\n", "        params = estimates[\"params\"]\n", "        # Coefficient name depends on how `enriched` was coded; raise KeyError rather\n", "        # than silently returning 0, which would fake a perfect A/A result.\n", "        key = \"enriched[T.True]\" if \"enriched[T.True]\" in params else \"enriched\"\n", "        return params[key]\n", "    elif model_type == \"nearest_neighbour_matching\":\n", "        return estimates[\"att\"]\n", "    else:\n", "        return estimates[\"treatment_effect\"]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Step 3 — Model swappability\n", "\n", "We load one base config and override `MEASUREMENT` for each model.\n", "Each iteration writes a temporary YAML and calls 
`measure_impact()`." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "model_results = {}\n", "model_estimates = {}\n", "mean_revenue = None\n", "\n", "for name, measurement in model_overrides.items():\n", "    job_id = measurement[\"MODEL\"]\n", "    result = run_with_override(base_config, measurement, str(output_path), job_id)\n", "    model_results[name] = result\n", "    model_estimates[name] = extract_te(result)\n", "    if mean_revenue is None:\n", "        mean_revenue = result.transformed_metrics[\"revenue\"].mean()\n", "    print(f\"{name}: treatment effect = {model_estimates[name]:.4f}\")\n", "\n", "print(f\"\\nTrue effect: {true_te:.4f}\")\n", "print(f\"Mean revenue: {mean_revenue:.2f}\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "comparison = pd.DataFrame(\n", "    [\n", "        {\n", "            \"Model\": name,\n", "            \"Estimate\": est,\n", "            \"True Effect\": true_te,\n", "            \"Absolute Error\": abs(est - true_te),\n", "            \"Relative Error (%)\": abs(est - true_te) / mean_revenue * 100,\n", "        }\n", "        for name, est in model_estimates.items()\n", "    ]\n", ")\n", "\n", "print(\"=\" * 90)\n", "print(\"CROSS-SECTIONAL MODEL COMPARISON (A/A)\")\n", "print(\"=\" * 90)\n", "print(f\"Mean revenue: {mean_revenue:.2f} (used as denominator for relative error)\")\n", "print(\"-\" * 90)\n", "print(comparison.to_string(index=False, float_format=lambda x: f\"{x:.4f}\"))\n", "print(\"=\" * 90)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from notebook_support import plot_model_comparison\n", "\n", "plot_model_comparison(\n", "    model_names=list(model_estimates.keys()),\n", "    estimates=list(model_estimates.values()),\n", "    true_effect=true_te,\n", "    ylabel=\"Treatment Effect\",\n", "    title=\"A/A Test: Model Estimates (True Effect = 0)\",\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Step 4 — Monte Carlo model comparison\n", "\n", "Step 3 
used a single random seed, making it impossible to distinguish systematic bias from sampling noise. Here we run all three models across multiple replications with varying outcome seeds to obtain sampling distributions. In the A/A setting, every model's sampling distribution should be centered on 0.\n", "\n", "**Design**: We vary `DATA.SOURCE.CONFIG.seed` (outcome noise) while keeping `DATA.ENRICHMENT.PARAMS.seed` fixed (same treatment assignment). This isolates estimator sampling variability from treatment assignment variability." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "rng = np.random.default_rng(seed=2024)\n", "mc_seeds = rng.integers(low=0, high=2**31, size=N_REPS).tolist()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "mc_results = {name: [] for name in model_overrides}\n", "\n", "for i, seed in enumerate(mc_seeds):\n", "    for name, measurement in model_overrides.items():\n", "        job_id = f\"mc_{measurement['MODEL']}_rep{i}\"\n", "        result = run_with_override(\n", "            base_config,\n", "            measurement,\n", "            str(output_path),\n", "            job_id,\n", "            source_seed=seed,\n", "        )\n", "        mc_results[name].append(extract_te(result))\n", "\n", "    if (i + 1) % 5 == 0:\n", "        print(f\"Completed {i + 1}/{N_REPS} replications\")\n", "\n", "print(f\"Monte Carlo simulation complete: {N_REPS} replications x {len(model_overrides)} models\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "rows = []\n", "for name, estimates in mc_results.items():\n", "    est = np.asarray(estimates)\n", "    bias = est.mean() - true_te\n", "    rmse = np.sqrt(np.mean((est - true_te) ** 2))\n", "    rows.append(\n", "        {\n", "            \"Model\": name,\n", "            \"Mean\": est.mean(),\n", "            \"Std\": est.std(ddof=1),\n", "            \"Bias\": bias,\n", "            \"RMSE\": rmse,\n", "            \"Rel. Bias (%)\": bias / mean_revenue * 100,\n", "            \"Rel. RMSE (%)\": rmse / mean_revenue * 100,\n", "            \"Min\": est.min(),\n", "            \"Max\": est.max(),\n", "        }\n", "    )\n", "\n", "mc_summary = pd.DataFrame(rows)\n", "\n", "print(\"=\" * 110)\n", "print(f\"MONTE CARLO MODEL COMPARISON ({N_REPS} replications)\")\n", "print(\"=\" * 110)\n", "print(f\"True treatment effect: {true_te:.4f}\")\n", "print(f\"Mean revenue: {mean_revenue:.2f} (used as denominator for relative metrics)\")\n", "print(\"-\" * 110)\n", "print(mc_summary.to_string(index=False, float_format=lambda x: f\"{x:.4f}\"))\n", "print(\"=\" * 110)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from notebook_support import plot_monte_carlo_distribution\n", "\n", "plot_monte_carlo_distribution(\n", "    model_names=list(mc_results.keys()),\n", "    distributions=mc_results,\n", "    true_effect=true_te,\n", "    ylabel=\"Treatment Effect\",\n", "    title=f\"A/A Monte Carlo Model Comparison ({N_REPS} replications)\",\n", ")" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.12.3" } }, "nbformat": 4, "nbformat_minor": 4 }