Sample Weighting

Nonresponse adjustment, poststratification, calibration, raking, and replicate weights


Sample weighting allows analysts to generalize results from a survey sample to the target population. Design weights (also called base weights) are derived as the inverse of the final probability of selection. In large-scale surveys, these design weights are often further adjusted to correct for nonresponse, extreme values, or to align auxiliary variables with known population controls.

This tutorial covers two main topics:

  1. Weight adjustment techniques — nonresponse adjustment, poststratification, calibration, and raking to improve representativeness and reduce bias
  2. Replicate weights for variance estimation — Bootstrap, Balanced Repeated Replication (BRR), and Jackknife methods

For more on sample-weight adjustments, see Valliant and Dever (2018), which provides a step-by-step guide to calculating survey weights.

Setting Up the Sample Data

This tutorial uses the World Bank (2023) synthetic sample data.

import numpy as np
from rich import print as rprint
import svy

hld_data = svy.load_dataset(name="hld_sample_wb_2023", limit=None)

print(f"The number of records in the household sample data is {hld_data.shape[0]}")
The number of records in the household sample data is 8000

Weight Adjustment Methods

In practice, base weights derived from selection probabilities are routinely adjusted to:

  • Correct for nonresponse and unknown eligibility
  • Temper the influence of extreme or large weights
  • Align the weighted sample with known auxiliary controls

This section demonstrates five key methods available in the svy library:

  • Nonresponse adjustment with adjust_nr(): account for unit nonresponse and unknown eligibility
  • Poststratification with poststratify(): match weights to known control totals
  • Calibration with calibrate(): adjust weights using the GREG framework with auxiliary variables
  • Raking with rake(): rescale weights to match multiple marginal totals
  • Normalization with normalize(): rescale weights to sum to a chosen constant

Creating the Design Weight

The design weight (or base weight) represents the inverse of the overall probability of selection—the product of first-stage and second-stage selection probabilities, as explained in the Sample Selection tutorial.
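
As a quick numerical illustration (the probabilities below are hypothetical, not taken from the dataset), the base weight is simply the reciprocal of the product of the stage-wise selection probabilities:

# Hypothetical two-stage selection probabilities for a single household
p_first_stage = 0.02   # probability that the household's EA (PSU) was selected
p_second_stage = 0.10  # probability that the household was selected within its EA

# Base (design) weight = inverse of the overall selection probability
base_weight = 1 / (p_first_stage * p_second_stage)
print(base_weight)  # 500.0, i.e. this household stands in for about 500 households

In the sample data this base weight has already been computed and stored in the hhweight column, so we only need to register it in the design: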

# Define the sampling design
hld_design = svy.Design(stratum=["geo1", "urbrur"], psu=["ea"], wgt="hhweight")

# Create the sample
hld_sample = svy.Sample(data=hld_data, design=hld_design)

The dataset includes a household-level base weight variable named hhweight. Let’s rename it to base_wgt for clarity:

# Rename hhweight -> base_wgt
hld_sample = hld_sample.wrangling.rename_columns({"hhweight": "base_wgt"})

print(hld_sample)
╭─────────────────────────── Sample ────────────────────────────╮
 Survey Data:                                                  
   Number of rows: 8000                                        
   Number of columns: 52                                       
   Number of strata: 19                                        
   Number of PSUs: 320                                         
                                                               
 Survey Design:                                                
                                                               
    Field               Value                                  
    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━                         
    Row index           svy_row_index                          
    Stratum             (geo1, urbrur)                         
    PSU                 (ea,)                                  
    SSU                 None                                   
    Weight              base_wgt                               
    With replacement    False                                  
    Prob                None                                   
    Hit                 None                                   
    MOS                 None                                   
    Population size     None                                   
    Replicate weights   None                                   
                                                               
╰───────────────────────────────────────────────────────────────╯

Understanding Response Status Categories

The core idea of nonresponse adjustment is to redistribute the survey weights of eligible non-respondents to eligible respondents within defined adjustment classes.
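
As a rough numerical sketch (hypothetical weights, ignoring unknown eligibility for the moment), the adjustment multiplies each respondent's weight in a class by the ratio of the class's total eligible weight to its responding weight:

# Hypothetical base weights within one adjustment class
resp_weights = [250.0, 300.0, 275.0]   # eligible respondents
nonresp_weights = [260.0, 240.0]       # eligible non-respondents

# Adjustment factor: total eligible weight divided by responding weight
factor = (sum(resp_weights) + sum(nonresp_weights)) / sum(resp_weights)

adjusted = [w * factor for w in resp_weights]
print(round(factor, 4))                 # 1.6061
print([round(w, 2) for w in adjusted])  # [401.52, 481.82, 441.67] -> sums to the full eligible total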

In practice, some units have unknown eligibility. How their weights are handled is survey-specific. Common options include:

  • Treat unknowns like eligibles — redistribute their weights to respondents in the same class
  • Partition unknowns — allocate a fraction to ineligibles and the remainder to eligibles
  • Exclude unknowns from redistribution — leave ineligible weights unchanged

The svy library classifies records into four response categories:

  • rr: Respondent
  • nr: Nonrespondent
  • uk: Unknown eligibility
  • in: Ineligible

By default, unknowns are treated as potentially ineligible, so their weights are redistributed to the ineligible group as well.

Simulating Response Status

The World Bank simulated data has a 100% observed response rate. For demonstration purposes, we’ll simulate ineligibility and nonresponse:

import numpy as np
import polars as pl

rng = np.random.default_rng(12345)

RESPONSE_STATUS = rng.choice(
    ("ineligible", "respondent", "non-respondent", "unknown"),
    p=(0.03, 0.82, 0.10, 0.05),
    size=hld_sample.n_records,
)

hld_sample = hld_sample.wrangling.mutate({"resp_status": RESPONSE_STATUS})

# Show 9 eligible non-respondent records in geo_01
print(
    hld_sample.show_records(
        columns=["hid", "geo1", "urbrur", "resp_status"],
        where=[
            svy.col("resp_status") == "non-respondent",
            svy.col("geo1").is_in(["geo_01"]),
        ],
        n=9,
    )
)
shape: (9, 4)
┌─────────────┬────────┬────────┬────────────────┐
│ hid         ┆ geo1   ┆ urbrur ┆ resp_status    │
│ ---         ┆ ---    ┆ ---    ┆ ---            │
│ str         ┆ str    ┆ str    ┆ str            │
╞═════════════╪════════╪════════╪════════════════╡
│ 0424d8c9572 ┆ geo_01 ┆ Urban  ┆ non-respondent │
│ 09a4ff6c721 ┆ geo_01 ┆ Urban  ┆ non-respondent │
│ 0f8b12b37f6 ┆ geo_01 ┆ Urban  ┆ non-respondent │
│ 1bd825df0cc ┆ geo_01 ┆ Urban  ┆ non-respondent │
│ 1e2c7908b70 ┆ geo_01 ┆ Urban  ┆ non-respondent │
│ 1ffddef4ebe ┆ geo_01 ┆ Urban  ┆ non-respondent │
│ 22dfb939642 ┆ geo_01 ┆ Urban  ┆ non-respondent │
│ 258b968d304 ┆ geo_01 ┆ Urban  ┆ non-respondent │
│ 2c62bd0966f ┆ geo_01 ┆ Urban  ┆ non-respondent │
└─────────────┴────────┴────────┴────────────────┘

If your dataset uses different labels, provide a mapping to the canonical values:

# Mapping of canonical response status codes to descriptive labels
status_mapping = {
    "in": "ineligible",
    "rr": "respondent",
    "nr": "non-respondent",
    "uk": "unknown",
}

Nonresponse Adjustment

Tip: Implementation in svy

Use Sample.adjust_nr() to adjust sample weights for nonresponse. The method computes adjusted weights and stores them in the sample object for downstream estimation.

The adjust_nr() method includes an unknown_to_inelig parameter that controls where unknowns' weights go:

  • unknown_to_inelig=True (default) — Unknowns’ weights are redistributed to ineligibles. Respondents’ adjusted weights are generally smaller.
  • unknown_to_inelig=False — Unknowns’ weights are not given to ineligibles. Respondents’ adjusted weights are larger.
hld_sample = hld_sample.weighting.adjust_nr(
    resp_status="resp_status",
    by=svy.Cross("geo1", "geo2"),
    resp_mapping=status_mapping,
    wgt_name="nr_wgt",
    unknown_to_inelig=True,
)

Verify the nr_wgt column was created:

# Show a random sample with the key columns
out = hld_sample.show_data(
    columns=["hid", "geo1", "geo2", "base_wgt", "resp_status", "nr_wgt"],
    how="sample",
    n=10,
    rstate=rng,
)

print(out)
shape: (10, 6)
┌─────────────┬────────┬───────────┬────────────┬─────────────┬────────────┐
│ hid         ┆ geo1   ┆ geo2      ┆ base_wgt   ┆ resp_status ┆ nr_wgt     │
│ ---         ┆ ---    ┆ ---       ┆ ---        ┆ ---         ┆ ---        │
│ str         ┆ str    ┆ str       ┆ f64        ┆ str         ┆ f64        │
╞═════════════╪════════╪═══════════╪════════════╪═════════════╪════════════╡
│ d14502c337c ┆ geo_07 ┆ geo_07_04 ┆ 451.204562 ┆ respondent  ┆ 525.84373  │
│ 52131a405ef ┆ geo_06 ┆ geo_06_02 ┆ 181.170838 ┆ respondent  ┆ 212.265018 │
│ cc88f791301 ┆ geo_05 ┆ geo_05_02 ┆ 322.536609 ┆ respondent  ┆ 400.87603  │
│ d5d78453cb6 ┆ geo_09 ┆ geo_09_05 ┆ 256.068981 ┆ respondent  ┆ 302.678875 │
│ bd3054ac406 ┆ geo_03 ┆ geo_03_04 ┆ 354.29115  ┆ respondent  ┆ 415.913679 │
│ 5d370c9683d ┆ geo_04 ┆ geo_04_04 ┆ 269.028273 ┆ respondent  ┆ 307.475866 │
│ db1cae7c010 ┆ geo_02 ┆ geo_02_08 ┆ 375.278288 ┆ respondent  ┆ 440.417942 │
│ e29dd48d134 ┆ geo_03 ┆ geo_03_02 ┆ 266.333451 ┆ respondent  ┆ 314.332987 │
│ 32200087576 ┆ geo_01 ┆ geo_01_08 ┆ 250.728419 ┆ respondent  ┆ 282.287027 │
│ 10e31214178 ┆ geo_10 ┆ geo_10_09 ┆ 178.671294 ┆ respondent  ┆ 206.240353 │
└─────────────┴────────┴───────────┴────────────┴─────────────┴────────────┘
print(hld_sample)
╭─────────────────────────── Sample ────────────────────────────╮
 Survey Data:                                                  
   Number of rows: 6629                                        
   Number of columns: 54                                       
   Number of strata: 19                                        
   Number of PSUs: 320                                         
                                                               
 Survey Design:                                                
                                                               
    Field               Value                                  
    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━                         
    Row index           svy_row_index                          
    Stratum             (geo1, urbrur)                         
    PSU                 (ea,)                                  
    SSU                 None                                   
    Weight              nr_wgt                                 
    With replacement    False                                  
    Prob                None                                   
    Hit                 None                                   
    MOS                 None                                   
    Population size     None                                   
    Replicate weights   None                                   
                                                               
╰───────────────────────────────────────────────────────────────╯
  • If you don’t specify wgt_name, svy creates the adjusted weight automatically as svy_adjusted_<base_weight_name>
  • Set replace=True to replace the pre-adjusted variable with the adjusted one
  • svy updates the sample design internally so the weight reference points to the adjusted weight

Poststratification

Poststratification compensates for under- or over-representation in the sample by adjusting weights so that weighted sums within poststratification classes match known control totals from reliable sources.

Poststratification classes need not mirror the sampling design—they can be formed from additional variables. Common choices include age group, gender, race/ethnicity, and education.
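
In each poststratum the adjustment factor is just the known control total divided by the current weighted total for that class (toy numbers below, not taken from the sample):

# Hypothetical poststratum: current weighted total vs. census control total
weighted_total_in_class = 325_400.0
control_total = 342_000

ps_factor = control_total / weighted_total_in_class
print(round(ps_factor, 4))  # 1.051 -> weights in this class are scaled up by about 5.1%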

Warning

Use current, reliable controls: Poststratifying to out-of-date or unreliable totals may introduce bias rather than reduce it. Document your sources and reference dates.

Tip: Implementation in svy

Use Sample.poststratify() to adjust sample weights to match known population totals.

Let’s assume we have reliable control totals (e.g., from a recent census) for households per administrative region:

hld_control_totals = {
    "geo_01": 342_000,
    "geo_02": 240_000,
    "geo_03": 282_000,
    "geo_04": 370_000,
    "geo_05": 210_000,
    "geo_06": 185_000,
    "geo_07": 183_000,
    "geo_08": 105_000,
    "geo_09": 300_000,
    "geo_10": 290_000,
}

Apply poststratification to the nonresponse-adjusted weights:

hld_sample = hld_sample.weighting.poststratify(
    controls=hld_control_totals,
    by="geo1",
    wgt_name="ps_wgt",
)

Verify the ps_wgt column was created:

# Show a random sample with the key columns
out = hld_sample.show_data(
    columns=[
        "hid",
        "geo1",
        "nr_wgt",
        "ps_wgt",
    ],
    how="sample",
    n=10,
    sort_by="geo1",
    rstate=rng,
)

print(out)
shape: (10, 4)
┌─────────────┬────────┬────────────┬────────────┐
│ hid         ┆ geo1   ┆ nr_wgt     ┆ ps_wgt     │
│ ---         ┆ ---    ┆ ---        ┆ ---        │
│ str         ┆ str    ┆ f64        ┆ f64        │
╞═════════════╪════════╪════════════╪════════════╡
│ 32200087576 ┆ geo_01 ┆ 282.287027 ┆ 288.819573 │
│ db1cae7c010 ┆ geo_02 ┆ 440.417942 ┆ 453.608104 │
│ bd3054ac406 ┆ geo_03 ┆ 415.913679 ┆ 431.762444 │
│ e29dd48d134 ┆ geo_03 ┆ 314.332987 ┆ 326.310929 │
│ 5d370c9683d ┆ geo_04 ┆ 307.475866 ┆ 314.343939 │
│ cc88f791301 ┆ geo_05 ┆ 400.87603  ┆ 410.804443 │
│ 52131a405ef ┆ geo_06 ┆ 212.265018 ┆ 221.058954 │
│ d14502c337c ┆ geo_07 ┆ 525.84373  ┆ 548.276403 │
│ d5d78453cb6 ┆ geo_09 ┆ 302.678875 ┆ 321.619428 │
│ 10e31214178 ┆ geo_10 ┆ 206.240353 ┆ 209.215739 │
└─────────────┴────────┴────────────┴────────────┘

The sample design is automatically updated with the new weight:

print(hld_sample)
╭─────────────────────────── Sample ────────────────────────────╮
 Survey Data:                                                  
   Number of rows: 6629                                        
   Number of columns: 55                                       
   Number of strata: 19                                        
   Number of PSUs: 320                                         
                                                               
 Survey Design:                                                
                                                               
    Field               Value                                  
    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━                         
    Row index           svy_row_index                          
    Stratum             (geo1, urbrur)                         
    PSU                 (ea,)                                  
    SSU                 None                                   
    Weight              ps_wgt                                 
    With replacement    False                                  
    Prob                None                                   
    Hit                 None                                   
    MOS                 None                                   
    Population size     None                                   
    Replicate weights   None                                   
                                                               
╰───────────────────────────────────────────────────────────────╯

Calibration (GREG)

Calibration adjusts sample weights so that certain totals align with known population values. The Generalized Regression (GREG) approach is a model-assisted version that assumes the survey variable of interest relates to auxiliary variables through a regression-type relationship. See Deville and Särndal (1992) for the foundational work on calibration, and Särndal, Swensson, and Wretman (1992) for a thorough treatment of model-assisted survey sampling.

GREG calibration finds weights that:

  • Stay as close as possible to the original design weights
  • Make the weighted totals of auxiliary variables match their known population values

When auxiliary variables correlate strongly with the study variable, GREG estimates tend to be more stable and efficient than simple design-based estimates.

Tip: Implementation in svy

Use Sample.calibrate() or Sample.calibrate_matrix() to apply GREG calibration.

First, examine the auxiliary variables:

svy.Table.PRINT_WIDTH = 95

hld_sample.categorical.tabulate(rowvar="statocc", colvar="electricity", units=svy.TableUnits.COUNT)
Table(type=TWO_WAY, rowvar='statocc', colvar='electricity', levels=3x2, n=6, alpha=0.05)

Generate a control template to see the required structure:

controls = hld_sample.weighting.control_aux_template(
    x=[svy.Cat("statocc"), svy.Cat("electricity")], by_na="level"
)

rprint(controls)
{'Occupied for free': nan, 'Owned': nan, 'Rented': nan, 'No': nan, 'Yes': nan}

Populate the template with known population totals:

controls.update(
    {
        ("Occupied for free", "No"): 40_000,
        ("Occupied for free", "Yes"): 210_000,
        ("Owned", "No"): 360_000,
        ("Owned", "Yes"): 1_572_000,
        ("Rented", "No"): 25_000,
        ("Rented", "Yes"): 300_000,
    }
)

rprint(controls)
{
    'Occupied for free': nan,
    'Owned': nan,
    'Rented': nan,
    'No': nan,
    'Yes': nan,
    ('Occupied for free', 'No'): 40000,
    ('Occupied for free', 'Yes'): 210000,
    ('Owned', 'No'): 360000,
    ('Owned', 'Yes'): 1572000,
    ('Rented', 'No'): 25000,
    ('Rented', 'Yes'): 300000
}

Tip: Use the by parameter in calibrate() to apply the controls separately within domains.
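
With the controls populated, the calibration call might look like the following sketch. The argument names (controls, x, wgt_name) mirror the other weighting methods in this tutorial but are assumptions; consult the svy API reference for the exact calibrate() signature.

# Hypothetical sketch only: the exact calibrate() signature may differ
hld_sample = hld_sample.weighting.calibrate(
    controls=controls,                              # the populated controls dict from above
    x=[svy.Cat("statocc"), svy.Cat("electricity")], # auxiliary variables
    wgt_name="cal_wgt",
)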

Raking (Iterative Proportional Fitting)

Raking (also called iterative proportional fitting or IPF) adjusts survey weights so that weighted sample distributions match known population margins for several categorical variables.

Unlike calibration, which aligns multiple totals simultaneously, raking updates weights iteratively—adjusting one margin at a time until all specified margins agree with population controls within tolerance.

Raking is especially useful when only marginal totals are available (e.g., totals by age group and totals by gender, but not their cross-tabulation).
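
To make the iteration concrete, here is a minimal, self-contained IPF sketch on a toy 2x2 weighted table (illustrative numbers only; rake() performs the equivalent adjustment on the full sample):

import numpy as np

# Toy weighted counts: rows could be a tenure-like variable, columns an amenity-like one
cell_weights = np.array([[40.0, 160.0],
                         [60.0, 240.0]])

row_targets = np.array([250.0, 250.0])   # known row margins
col_targets = np.array([120.0, 380.0])   # known column margins

for _ in range(50):
    # Step 1: scale each row so the row sums match the row targets
    cell_weights *= (row_targets / cell_weights.sum(axis=1))[:, None]
    # Step 2: scale each column so the column sums match the column targets
    cell_weights *= col_targets / cell_weights.sum(axis=0)

# After convergence, both sets of margins agree with the targets (within tolerance)
print(cell_weights.sum(axis=1), cell_weights.sum(axis=0))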

Tip: Implementation in svy

Use Sample.rake() to apply iterative proportional fitting.

First, create a categorical variable for household size:

hld_sample = hld_sample.wrangling.categorize(
    "hhsize",
    bins=[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 30],
    into="hhsize_cat",
    labels=("1", "2", "3", "4", "5", "6", "7", "8", "9", "10+"),
)
print(
    hld_sample.categorical.tabulate(
        rowvar="hhsize_cat",
        drop_nulls=True,
        units=svy.TableUnits.COUNT,
    )
)
╭─────────────────────────────── Table ───────────────────────────────╮
 Type=One-Way                                                        
 Alpha=0.05                                                          
                                                                     
 Row      Estimate      Std Err       CV         Lower         Upper 
 ─────────────────────────────────────────────────────────────────── 
 1     223585.8102   26841.7337   0.1201   170764.5924   276407.0281 
 2     370807.5011   34866.4367   0.0940   302194.6587   439420.3436 
 3     510909.8103   27845.6883   0.0545   456112.9337   565706.6869 
 4     540769.5741   28386.4805   0.0525   484908.4853   596630.6629 
 5     369959.4288   22050.2962   0.0596   326567.1684   413351.6891 
 6     194964.9906   14982.3973   0.0768   165481.4827   224448.4986 
 7     125078.6089   13765.1648   0.1101    97990.4643   152166.7536 
 8      87823.8785   12582.3734   0.1433    63063.3212   112584.4359 
 9      34838.6570    5924.7016   0.1701    23179.5758    46497.7382 
 10+    48261.7404   12211.3078   0.2530    24231.3943    72292.0865 
╰─────────────────────────────────────────────────────────────────────╯

Define the marginal control totals:

hld_control_totals = {
    "statocc": {
        "Occupied for free": 250_000,
        "Owned": 1_932_000,
        "Rented": 325_000,
    },
    "electricity": {"No": 425_000, "Yes": 2_082_000},
}

rprint(hld_control_totals)
{
    'statocc': {'Occupied for free': 250000, 'Owned': 1932000, 'Rented': 325000},
    'electricity': {'No': 425000, 'Yes': 2082000}
}

Apply raking:

hld_sample = hld_sample.weighting.rake(
    controls=hld_control_totals, wgt_name="rake_wgt"
)
# Show a random sample with the key columns
out = hld_sample.show_data(
    columns=[
        "hid",
        "statocc",
        "electricity",
        "ps_wgt",
        "rake_wgt",
    ],
    how="sample",
    n=10,
    sort_by=("statocc", "electricity"),
    rstate=rng,
)

print(out)
shape: (10, 5)
┌─────────────┬───────────────────┬─────────────┬────────────┬────────────┐
│ hid         ┆ statocc           ┆ electricity ┆ ps_wgt     ┆ rake_wgt   │
│ ---         ┆ ---               ┆ ---         ┆ ---        ┆ ---        │
│ str         ┆ str               ┆ str         ┆ f64        ┆ f64        │
╞═════════════╪═══════════════════╪═════════════╪════════════╪════════════╡
│ cc88f791301 ┆ Occupied for free ┆ No          ┆ 410.804443 ┆ 402.042387 │
│ db1cae7c010 ┆ Occupied for free ┆ Yes         ┆ 453.608104 ┆ 444.876409 │
│ d14502c337c ┆ Owned             ┆ Yes         ┆ 548.276403 ┆ 545.913353 │
│ 52131a405ef ┆ Owned             ┆ Yes         ┆ 221.058954 ┆ 220.106199 │
│ d5d78453cb6 ┆ Owned             ┆ Yes         ┆ 321.619428 ┆ 320.233261 │
│ 5d370c9683d ┆ Owned             ┆ Yes         ┆ 314.343939 ┆ 312.989129 │
│ 32200087576 ┆ Owned             ┆ Yes         ┆ 288.819573 ┆ 287.574772 │
│ 10e31214178 ┆ Owned             ┆ Yes         ┆ 209.215739 ┆ 208.314027 │
│ bd3054ac406 ┆ Rented            ┆ Yes         ┆ 431.762444 ┆ 451.453448 │
│ e29dd48d134 ┆ Rented            ┆ Yes         ┆ 326.310929 ┆ 341.1927   │
└─────────────┴───────────────────┴─────────────┴────────────┴────────────┘

Weight Normalization

Surveys sometimes normalize weights to a convenient constant (e.g., the sample size or 1,000) so results are easier to compare across analyses.

Normalization multiplies every weight by the same factor. It does not change weighted means, proportions, or regression coefficients (the factor cancels), but it does change level estimates such as totals—and their standard errors—by the same factor.
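
Conceptually, the normalization factor is just the target total divided by the current sum of the weights (toy numbers below):

# Hypothetical weights to be normalized to a target total of 1,000
weights = [250.0, 310.0, 275.0, 180.0]
target_total = 1_000

factor = target_total / sum(weights)
normalized = [w * factor for w in weights]

print(round(factor, 6))           # ~0.985222
print(round(sum(normalized), 6))  # 1000.0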

Tip: Implementation in svy

Use Sample.normalize() to scale sample weights to a target sum.

hld_sample = hld_sample.weighting.normalize(
    controls=1_000, wgt_name="norm_wgt"
)

print(hld_sample.data["norm_wgt"].sum())
1000.0

Replicate Weights for Variance Estimation

Replicate weights are constructed primarily for variance (uncertainty) estimation. They are especially useful when:

  • Estimating non-linear parameters where Taylor linearization may be inaccurate
  • The number of PSUs per stratum is small, making linearization unstable

This section demonstrates three replication methods using the svy library:

  • Balanced Repeated Replication (BRR) with create_brr_wgts(): requires exactly 2 PSUs per stratum
  • Jackknife (JK) with create_jk_wgts(): requires 2 or more PSUs per stratum
  • Bootstrap (BS) with create_bs_wgts(): requires 2 or more PSUs per stratum

Sample Data for Replicate Weights

BRR assumes exactly two PSUs per stratum after any collapsing. To demonstrate the syntax without complex data engineering, we’ll construct a small BRR-compatible dummy sample:

rows = []
y_means = {
    "S1_P1": 10,
    "S1_P2": 12,
    "S2_P1": 8,
    "S2_P2": 9,
    "S3_P1": 15,
    "S3_P2": 13,
    "S4_P1": 11,
    "S4_P2": 10,
}

for s in range(1, 5):  # S1..S4
    for p in range(1, 3):  # P1..P2 (2 PSUs per stratum)
        label = f"S{s}_P{p}"
        for i in range(3):  # 3 units per PSU
            rows.append(
                {
                    "unit_id": f"S{s}P{p}U{i + 1}",
                    "stratum": f"S{s}",
                    "cluster": f"P{p}",
                    "weight": 1.0,  # base weight
                    "y": rng.normal(y_means[label], 1.0),  # outcome
                }
            )

df_rep = pl.DataFrame(rows)

print(df_rep)
shape: (24, 5)
┌─────────┬─────────┬─────────┬────────┬───────────┐
│ unit_id ┆ stratum ┆ cluster ┆ weight ┆ y         │
│ ---     ┆ ---     ┆ ---     ┆ ---    ┆ ---       │
│ str     ┆ str     ┆ str     ┆ f64    ┆ f64       │
╞═════════╪═════════╪═════════╪════════╪═══════════╡
│ S1P1U1  ┆ S1      ┆ P1      ┆ 1.0    ┆ 7.780013  │
│ S1P1U2  ┆ S1      ┆ P1      ┆ 1.0    ┆ 9.211001  │
│ S1P1U3  ┆ S1      ┆ P1      ┆ 1.0    ┆ 10.354935 │
│ S1P2U1  ┆ S1      ┆ P2      ┆ 1.0    ┆ 11.277252 │
│ S1P2U2  ┆ S1      ┆ P2      ┆ 1.0    ┆ 12.26242  │
│ …       ┆ …       ┆ …       ┆ …      ┆ …         │
│ S4P1U2  ┆ S4      ┆ P1      ┆ 1.0    ┆ 9.844337  │
│ S4P1U3  ┆ S4      ┆ P1      ┆ 1.0    ┆ 10.741156 │
│ S4P2U1  ┆ S4      ┆ P2      ┆ 1.0    ┆ 10.305219 │
│ S4P2U2  ┆ S4      ┆ P2      ┆ 1.0    ┆ 8.991012  │
│ S4P2U3  ┆ S4      ┆ P2      ┆ 1.0    ┆ 9.013348  │
└─────────┴─────────┴─────────┴────────┴───────────┘

Balanced Repeated Replication (BRR)

BRR forms balanced half-samples within each stratum using a Hadamard design. It requires exactly 2 PSUs per stratum.

By default, svy sets the number of replicates to the smallest multiple of 4 strictly greater than the number of strata. You can request more by passing n_reps.
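
Under that rule, the default replicate count can be reproduced with a one-line helper (a sketch of the stated rule, not svy internals):

def default_brr_reps(n_strata: int) -> int:
    # Smallest multiple of 4 strictly greater than the number of strata
    return 4 * (n_strata // 4 + 1)

print(default_brr_reps(4), default_brr_reps(19))  # 8 20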

rep_sample = svy.Sample(
    data=df_rep,
    design=svy.Design(stratum="stratum", wgt="weight", psu="cluster"),
)

brr_sample = rep_sample.weighting.create_brr_wgts(rep_prefix="brr_rep_wgt")

print(brr_sample)
╭─────────────────────────── Sample ────────────────────────────╮
 Survey Data:                                                  
   Number of rows: 24                                          
   Number of columns: 16                                       
   Number of strata: 4                                         
   Number of PSUs: 8                                           
                                                               
 Survey Design:                                                
                                                               
    Field               Value                                  
    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━  
    Row index           svy_row_index                          
    Stratum             stratum                                
    PSU                 cluster                                
    SSU                 None                                   
    Weight              weight                                 
    With replacement    False                                  
    Prob                None                                   
    Hit                 None                                   
    MOS                 None                                   
    Population size     None                                   
    Replicate weights   RepWeights(method=BRR,                 
                        prefix='brr_rep_wgt', n_reps=8, df=4,  
                        fay=0.0)                               
                                                               
╰───────────────────────────────────────────────────────────────╯

Fay-BRR (Damped BRR)

Fay-BRR is a damped version where each replicate weight combines the full weight and the BRR half-sample weight. Choose a Fay factor ρ ∈ (0,1), commonly between 0.3 and 0.5, to reduce perturbation and improve stability:

fay_sample = rep_sample.weighting.create_brr_wgts(
    n_reps=12, rep_prefix="fay_rep_wgt", fay_coef=0.45
)

print(fay_sample)
╭─────────────────────────── Sample ────────────────────────────╮
 Survey Data:                                                  
   Number of rows: 24                                          
   Number of columns: 28                                       
   Number of strata: 4                                         
   Number of PSUs: 8                                           
                                                               
 Survey Design:                                                
                                                               
    Field               Value                                  
    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━  
    Row index           svy_row_index                          
    Stratum             stratum                                
    PSU                 cluster                                
    SSU                 None                                   
    Weight              weight                                 
    With replacement    False                                  
    Prob                None                                   
    Hit                 None                                   
    MOS                 None                                   
    Population size     None                                   
    Replicate weights   RepWeights(method=BRR,
                        prefix='fay_rep_wgt', n_reps=12, df=4,
                        fay=0.45)
                                                               
╰───────────────────────────────────────────────────────────────╯
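
For reference, the standard Fay-BRR construction multiplies the base weight by (2 - ρ) for units in the selected half-sample and by ρ for the rest, so ρ = 0 recovers plain BRR. This is the textbook convention, shown only as a sketch; the factors svy applies internally should be confirmed in its documentation.

def fay_replicate_factor(in_half_sample: bool, rho: float = 0.45) -> float:
    # Textbook Fay-BRR factor: (2 - rho) for the selected half-sample, rho otherwise
    return (2 - rho) if in_half_sample else rho

print(fay_replicate_factor(True), fay_replicate_factor(False))  # 1.55 0.45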

Jackknife (JK)

Jackknife forms replicates by deleting one PSU within each stratum and re-weighting the remainder. Each stratum must have two or more PSUs.
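
As an illustrative sketch of the usual stratified jackknife (JKn) re-weighting rule (assumed here; confirm against svy's implementation), units in the deleted PSU get weight zero and units in the remaining PSUs of that stratum are scaled by n_h / (n_h - 1), while other strata are left untouched:

def jkn_factor(n_psus_in_stratum: int, is_deleted_psu: bool, in_affected_stratum: bool) -> float:
    # Illustrative JKn adjustment factor for a single replicate
    if not in_affected_stratum:
        return 1.0
    if is_deleted_psu:
        return 0.0
    return n_psus_in_stratum / (n_psus_in_stratum - 1)

# With 2 PSUs per stratum, the retained PSU's weights double in that replicate
print(jkn_factor(2, is_deleted_psu=False, in_affected_stratum=True))  # 2.0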

jk_sample = rep_sample.weighting.create_jk_wgts(rep_prefix="jk_rep_wgt")

print(jk_sample)
╭─────────────────────────── Sample ────────────────────────────╮
 Survey Data:                                                  
   Number of rows: 24                                          
   Number of columns: 36                                       
   Number of strata: 4                                         
   Number of PSUs: 8                                           
                                                               
 Survey Design:                                                
                                                               
    Field               Value                                  
    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━  
    Row index           svy_row_index                          
    Stratum             stratum                                
    PSU                 cluster                                
    SSU                 None                                   
    Weight              weight                                 
    With replacement    False                                  
    Prob                None                                   
    Hit                 None                                   
    MOS                 None                                   
    Population size     None                                   
    Replicate weights   RepWeights(method=Jackknife,
                        prefix='jk_rep_wgt', n_reps=8, df=4)
                                                               
╰───────────────────────────────────────────────────────────────╯

Bootstrap (BS)

Bootstrap replicates are formed by re-sampling PSUs with replacement within each stratum, drawing the same number of PSUs as observed in the sample for every replicate. The selection is independent across replicates, and weights are rescaled (e.g., Rao–Wu rescaled bootstrap) so estimators remain unbiased under the design.
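
For reference, the Rao-Wu rescaling factor has a simple closed form in terms of the number of PSUs observed (n_h), the number drawn per replicate (m_h), and how many times a given PSU was re-drawn. This is the generic formula, not necessarily the exact variant svy implements:

import math

def rao_wu_factor(n_h: int, m_h: int, times_selected: int) -> float:
    # Generic Rao-Wu rescaling factor for the units of one PSU in stratum h
    c = math.sqrt(m_h / (n_h - 1))
    return 1 - c + c * (n_h / m_h) * times_selected

# With m_h = n_h - 1 draws, this reduces to times_selected * n_h / (n_h - 1)
print(rao_wu_factor(n_h=2, m_h=1, times_selected=1))  # 2.0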

If n_reps is omitted, create_bs_wgts() defaults to 500 replicates. Increase this for highly non-linear targets.

bs_sample = rep_sample.weighting.create_bs_wgts(
    n_reps=50, rep_prefix="bs_rep_wgt"
)

print(bs_sample)
╭─────────────────────────── Sample ────────────────────────────╮
 Survey Data:                                                  
   Number of rows: 24                                          
   Number of columns: 86                                       
   Number of strata: 4                                         
   Number of PSUs: 8                                           
                                                               
 Survey Design:                                                
                                                               
    Field               Value                                  
    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━  
    Row index           svy_row_index                          
    Stratum             stratum                                
    PSU                 cluster                                
    SSU                 None                                   
    Weight              weight                                 
    With replacement    False                                  
    Prob                None                                   
    Hit                 None                                   
    MOS                 None                                   
    Population size     None                                   
    Replicate weights   RepWeights(method=Bootstrap,           
                        prefix='bs_rep_wgt', n_reps=50,        
                        df=49)                                 
                                                               
╰───────────────────────────────────────────────────────────────╯

Adjustment of Replicate Weights

Coming Soon: This section will cover how to apply nonresponse and calibration adjustments to replicate weights.

Summary

This tutorial covered the essential techniques for survey weight adjustment and variance estimation:

Weight Adjustments:

  1. Nonresponse adjustment with adjust_nr() — redistributes weights from non-respondents to respondents
  2. Poststratification with poststratify() — aligns weights to known population totals
  3. Calibration with calibrate() — uses GREG framework with auxiliary variables
  4. Raking with rake() — iteratively matches multiple marginal distributions
  5. Normalization with normalize() — scales weights to a convenient total

Replicate Weights:

  1. BRR with create_brr_wgts() — requires exactly 2 PSUs per stratum
  2. Jackknife with create_jk_wgts() — flexible, works with ≥2 PSUs per stratum
  3. Bootstrap with create_bs_wgts() — most flexible, good for complex designs

Next Steps

Now that you understand how to create and adjust survey weights, continue to the Estimation tutorial to learn how to compute point estimates and standard errors using these weights.


References

Deville, Jean-Claude, and Carl-Erik Särndal. 1992. “Calibration Estimators in Survey Sampling.” Journal of the American Statistical Association 87 (418): 376–82. https://doi.org/10.1080/01621459.1992.10475217.
Särndal, Carl-Erik, Bengt Swensson, and Jan Wretman. 1992. Model Assisted Survey Sampling. Springer-Verlag New York, Inc. https://link.springer.com/book/9780387406206.
Valliant, R., and J. A. Dever. 2018. Survey Weights: A Step-by-Step Guide to Calculation. Stata Press. https://www.stata-press.com/books/survey-weights/.
World Bank. 2023. “Synthetic Data for an Imaginary Country, Sample, 2023.” World Bank, Development Data Group. https://doi.org/10.48529/MC1F-QH23.