Learning Invariant Representations with Missing Data

Mark Goldstein *

Aahlad Puli

Rajesh Ranganath

New York University

GOLDSTEIN@NYU.EDU

AAHLAD@NYU.EDU

RAJESHR@CIMS.NYU.EDU

Jörn-Henrik Jacobsen

Olina Chau

Adriel Saporta

Andrew C. Miller

Apple

JHJACOBSEN@APPLE.COM

OLINA@APPLE.COM

ASAPORTA@APPLE.COM

ACMILLER@APPLE.COM

Editors: Bernhard Schölkopf, Caroline Uhler and Kun Zhang

Abstract

Spurious correlations allow flexible models to predict well during training but poorly on related test distributions. Recent work has shown that models that satisfy particular independencies involving correlation-inducing nuisance variables have guarantees on their test performance. Enforcing such independencies requires nuisances to be observed during training. However, nuisances, such as demographics or image background labels, are often missing. Enforcing independence on just the observed data does not imply independence on the entire population. Here we derive MMD estimators used for invariance objectives under missing nuisances. On simulations and clinical data, optimizing through these estimates achieves test performance similar to using estimators that make use of the full data.

Keywords: invariant representations, missing data, doubly robust estimator, MMD

## 1. Introduction

Spurious correlations allow models that predict well on training data to have worse than chance performance on related distributions at test time (Geirhos et al., 2020; Puli et al., 2022; Veitch et al., 2021; Makar et al., 2021; Gulrajani and Lopez-Paz, 2020; Sagawa et al., 2020). For example, diabetes is associated with high body mass index (BMI) in the United States. However, in India and Taiwan, diabetes also frequently co-occurs with low and average BMI (WHO, 2004). Due to their shifting relationship with the label, nuisance variables (e.g., BMI) can cause models to exploit non-causal correlations in training data, leading models to generalize poorly.

Invariant prediction methods are designed to improve performance on a range of test distributions when training data exhibits spurious correlations (Peters et al., 2016; Arjovsky et al., 2019). We focus on methods that enforce independencies between the model and nuisance given some assumed causal structure (Makar et al., 2021; Veitch et al., 2021; Puli et al., 2022). These methods require the nuisance to be specified explicitly and observed. Nuisances must be observed for all samples because independence constraints are enforced via metrics such as Maximum Mean Discrepancy (MMD), which require samples from the fully-observed data. However, in large health datasets, nuisances are often missing. For example, not all people who report diabetes status report other correlated conditions (e.g., hypertension, depression) or demographics (e.g., gender). To improve generalization on a range of test distributions, it is necessary to handle missingness appropriately.

\* Work done in part during an internship at Apple.

We propose MMD estimators for measuring nuisance-model dependence under nuisance missingness. First, we show that enforcing independence on only the nuisance-observed data does not imply independence on the full data distribution, and vice versa. Next, we derive three estimators, including one that is doubly-robust: it is consistent when either the nuisance or its missingness can be consistently modeled given covariates (Bang and Robins, 2005). Using simulations, a semi-simulation based on textured MNIST and clinical data from MIMIC, we show that the estimators perform close to ground-truth estimation with no missingness and that they improve test accuracy relative to computing the original objective only on samples with nuisances observed.

## 2. Notation and background

Notation. Let $X$ denote features. Let $Y$ be a label such as disease status. Let $Z$ denote the nuisance, e.g., another disease correlated with $Y$ , demographics, or image backgrounds. Denote the nuisance missingness indicator as $\Delta$ . Instead of $(X, Y, Z)$ , we observe $(X, Y, \Delta, \tilde{Z} = \Delta Z)$ , where $\tilde{Z} = Z$ when $\Delta = 1$ and $Z$ is unobserved otherwise. We write functions as $f_X = f(X)$ . Let $h_X$ denote a model to predict $Y$ . When conditioning on events involving both $X, Y$ we use $W = (X, Y)$ and $w = (x, y)$ to lighten notation so that $\mathbb{E}[\Delta|X = x, Y = y] = \mathbb{E}[\Delta|W = w]$ and $f(X, Y) = f_{XY} = f_W$ . Let $\mathcal{B}(p)$ denote Bernoulli( $p$ ).

Modeling under spurious correlations. Nuisance-based prediction arises in training data when $Z$ is predictive of $Y$ and associated with $X$ , causing models to use information about $Z$ in $X$ to predict $Y$ . This may not be a problem in all scenarios, but it is when the test distribution is expected to have a different $(Y, Z)$ relationship from the training distribution, as this may imply $P_{train}(Y|X) \neq P_{test}(Y|X)$ and a model built on training data may not perform well on test data. Most approaches to this problem start off by narrowing down the set of possible test distributions and their relationship to the training distribution. In this work, we focus on a family studied in Makar et al. (2021); Veitch et al. (2021); Puli et al. (2022). The distributions are indexed by $D$ and vary only in the factor $P(Z|Y)$ :

$$\mathcal{F} = \{P_D(X, Y, Z) = P(Y)\,P_D(Z|Y)\,P(X|Y, Z)\}_D. \quad (1)$$

When $P_{test}(Z|Y) \neq P_{train}(Z|Y)$ , in general $P_{train}(Y|X) \neq P_{test}(Y|X)$ and a model built on $P_{train}$ can generalize poorly. For example consider, for $a \in \mathbb{R}$ ,

$$Y \sim \mathcal{B}(0.5), \quad \mu_Z = a(2Y - 1), \quad Z \sim \mathcal{N}(\mu_Z, 1).$$

When $P_{train}$ uses $a = 0.5$ and $P_{test}$ uses $a = -0.9$ , Puli et al. (2022) show an example of an $X|Y, Z$ distribution where an ERM model that predicts $Y$ from $X$ with 80% training accuracy only achieves 40% on the test set, due to the changing relationship between $Y, Z$ , even when $Z$ is not an input feature to the model.
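As a concrete check of this shift, the toy model above can be simulated directly; a minimal numpy sketch (variable names are ours) showing how the sign of the $(Y, Z)$ association flips between training and test:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample(a, n):
    # Y ~ Bernoulli(0.5); Z | Y ~ Normal(a * (2Y - 1), 1), as in the example above.
    y = rng.binomial(1, 0.5, size=n)
    z = rng.normal(a * (2 * y - 1), 1.0)
    return y, z

# Training distribution uses a = 0.5; test uses a = -0.9.
y_tr, z_tr = sample(0.5, 100_000)
y_te, z_te = sample(-0.9, 100_000)

p_tr = (z_tr[y_tr == 1] > 0).mean()  # P(Z > 0 | Y = 1) at train: about 0.69
p_te = (z_te[y_te == 1] > 0).mean()  # same quantity at test: about 0.18
```

Any model whose predictions track $Z$ through $X$ will see its error profile change between these two regimes.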

When it is possible to anticipate and observe nuisances during training, enforcing certain independence constraints (Makar et al., 2021; Veitch et al., 2021; Puli et al., 2022) helps guarantee performance regardless of the nuisance-label relationship. For example, for this choice of $\mathcal{F}$ and model $h$, maximum likelihood estimation for $Y|h_X$ while enforcing the constraint $h_X \perp\!\!\!\perp Z | Y$ implies equal performance on all $P_D \in \mathcal{F}$, and better than chance performance (Appendix A).

```mermaid
graph TD
    Y((Y)) --> X((X))
    Y --> Delta((Δ))
    Y -.-> Z((Z))
    Z --> X
    X --> Delta
```

**Figure 1:** Generative process we consider in this work. The $Y \rightarrow Z$ edge is dashed to emphasize that the $Z|Y$ relationship may change at test time. $\Delta$ determines missingness of $Z$ and satisfies $Z \perp\!\!\!\perp \Delta | (X, Y)$.

**Measuring Dependence.** To enforce independencies it is necessary to measure dependence and to minimize this measure. One way to measure dependence is to measure the distance between a joint distribution and the product of its marginals. Integral Probability Metrics (IPMs) form a general class of such distances between distributions. A special case, the kernel-based MMD (Gretton et al., 2012), has a closed form. Let $X_1 \sim P, X_2 \sim Q$. Let $X'_j$ be an independent sample identically distributed as $X_j$. For a kernel $k$,

$$\text{MMD}_k(P, Q) = \mathbb{E}[k(X_1, X'_1)] + \mathbb{E}[k(X_2, X'_2)] - 2\,\mathbb{E}[k(X_1, X_2)]. \quad (2)$$

The MMD is 0 if and only if $P =_d Q$ under certain mild conditions on $P, Q$ and the kernel $k$ (Gretton et al., 2012). Computing the MMD between a joint distribution $P$ of two variables and the product of its marginals $Q$ yields a measure of dependence, which coincides with the Hilbert-Schmidt Independence Criterion (HSIC) (Gretton et al., 2005; Szabó and Sriperumbudur, 2017).
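A biased (V-statistic) Monte Carlo version of Equation (2) with an RBF kernel is only a few lines; this sketch (helper names are ours) illustrates that the estimate is near zero for identically distributed samples and clearly positive otherwise:

```python
import numpy as np

def rbf_gram(a, b, bw=1.0):
    # k(x, x') = exp(-(x - x')^2 / (2 * bw^2)) for 1-D samples
    d = a[:, None] - b[None, :]
    return np.exp(-d ** 2 / (2 * bw ** 2))

def mmd2(x1, x2, bw=1.0):
    # Plug-in estimate of Equation (2): E k(X1,X1') + E k(X2,X2') - 2 E k(X1,X2)
    return (rbf_gram(x1, x1, bw).mean()
            + rbf_gram(x2, x2, bw).mean()
            - 2 * rbf_gram(x1, x2, bw).mean())

rng = np.random.default_rng(0)
same = mmd2(rng.normal(0, 1, 2000), rng.normal(0, 1, 2000))  # near 0
diff = mmd2(rng.normal(0, 1, 2000), rng.normal(2, 1, 2000))  # clearly positive
```

The V-statistic is slightly biased upward but is the convenient form for use inside a differentiable training objective.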

**Assumptions and scope.** The estimators in this work require ignorability: $Z \perp\!\!\!\perp \Delta | W$ (Hernán and Robins, 2021). This holds, e.g., in graphs such as Figure 1. We also require positivity, $0 < \epsilon \leq P(\Delta = 1|W)$, so that $Z$ is observed with non-vanishing probability for every $w$. While we focus on the graph in Figure 1 with binary $Z$, the presented method can extend to continuous $Z$ (Appendix H), to other data generating processes, and to other invariance objectives. The method applies when the data distributions satisfy ignorability, positivity, and any assumptions made by the underlying invariance method.

**Estimation under missingness.** The problem that this work tackles is enforcing conditional independencies such as  $h_X \perp\!\!\!\perp Z | Y$  to improve generalization in families like  $\mathcal{F}$  in Equation (1) even when  $Z$  is subject to missingness. Methodology from causal inference solves a related problem: estimating  $\mathbb{E}[Z]$  when  $Z$  is subject to missingness. In this work, we extend this methodology from estimates of  $\mathbb{E}[Z]$  to estimates of objectives that enforce independencies involving  $Z$ . Here we review estimation of  $\mathbb{E}[Z]$  under missingness. The following assumes ignorability and positivity. Two parts of the data-generating distribution can help. Letting  $W = (X, Y)$ , define:

$$\begin{aligned} G_W &\triangleq \mathbb{E}[\Delta | X, Y] && \text{(missingness process)} \\ m_W &\triangleq \mathbb{E}[Z | X, Y] && \text{(conditional expectation)} \end{aligned}$$

We review estimators of $\mathbb{E}[Z]$ that use $G_W$ (Horvitz and Thompson, 1952; Binder, 1983; Robins et al., 1994) or $m_W$ (Rubin, 1976; Schafer, 1997) in Appendix E. The *doubly-robust* (DR) estimator (Robins and Rotnitzky, 2001; Bang and Robins, 2005; Kang and Schafer, 2007) combines both by noting the following equality:

$$\mathbb{E}[Z] = \mathbb{E} \left[ \frac{\Delta \tilde{Z}}{G_W} - \frac{\Delta - G_W}{G_W}\, m_W \right]. \quad (3)$$

Crucially, the right side of Equation (3) does not require samples of $Z$ when $\Delta = 0$. When $G_W$ or $m_W$ are replaced with estimates $\hat{G}_W, \hat{m}_W$, the equality still holds provided that for all $w$, either $\hat{G}_w = G_w$ or $\hat{m}_w = m_w$ (Appendix F). Moreover, Monte Carlo estimates of the right side of Equation (3) are consistent for $\mathbb{E}[Z]$ when either $\hat{G}_W$ consistently estimates $G_W$ or $\hat{m}_W$ consistently estimates $m_W$ (Robins and Rotnitzky, 2001). This is useful because in practice neither $G_W$ nor $m_W$ is known and both must be estimated.
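To make Equation (3) concrete, the following sketch (a toy data-generating process of our own) checks the double-robustness numerically: the outcome model is deliberately wrong, yet the estimate stays close to $\mathbb{E}[Z]$ because the missingness model is correct, while the complete-case mean is biased:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Toy setup: scalar covariate W; Z | W ~ Bernoulli(sigmoid(W)); Delta | W ~ Bernoulli(G_W).
w = rng.normal(0, 1, n)
m_w = 1 / (1 + np.exp(-w))                    # true E[Z | W]
z = rng.binomial(1, m_w)
g_w = np.where(w > 0, 0.3, 0.8)               # true P(Delta = 1 | W); positivity holds
delta = rng.binomial(1, g_w)
z_tilde = delta * z                           # Z is unobserved when Delta = 0

def dr_mean(z_tilde, delta, g_hat, m_hat):
    # Monte Carlo estimate of the right-hand side of Equation (3)
    return np.mean(delta * z_tilde / g_hat - (delta - g_hat) / g_hat * m_hat)

naive = z_tilde[delta == 1].mean()            # complete-case mean: biased here
est = dr_mean(z_tilde, delta, g_w, np.full(n, 0.9))  # wrong m_hat, correct g_hat
# est is close to E[Z] = 0.5; the complete-case mean is not
```

Swapping in the true $m_W$ and a misspecified $\hat{G}_W$ gives the symmetric check.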

## 3. Invariant representations with missing data

For the graph in Figure 1 and binary  $Z$ , Veitch et al. (2021) enforce  $h_X \perp\!\!\!\perp Z | Y$  for the predictive model  $h$  by maximizing likelihood while minimizing the MMD:

$$\max_h\ \log p(y|h_X) - \lambda \sum_{y \in \{0,1\}} \text{MMD}_k\big(p(h_X|Z=1, Y=y),\ p(h_X|Z=0, Y=y)\big), \quad (4)$$

for  $\lambda \geq 0$ . The first term is the usual maximum-likelihood objective for predicting  $y$  with model  $h_X$ . The second term, because  $Z$  is binary, enforces  $h_X \perp\!\!\!\perp Z | Y$  when minimized. First, we motivate the independence constraint for the choice of assumed family  $\mathcal{F}$  in Equation (1). We then demonstrate what can go wrong when enforcing this MMD penalty only on samples where  $Z$  is observed. We then derive estimators of the full-data MMD under missingness.
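As a reference point, objective (4) on fully observed data can be sketched in a few lines of numpy (helper names are ours; a biased plug-in MMD estimate stands in for the penalty; under missingness, the MMD terms are replaced by the estimators of Section 3.3):

```python
import numpy as np

def rbf_gram(a, b, bw=1.0):
    d = a[:, None] - b[None, :]
    return np.exp(-d ** 2 / (2 * bw ** 2))

def mmd2(a, b):
    return rbf_gram(a, a).mean() + rbf_gram(b, b).mean() - 2 * rbf_gram(a, b).mean()

def objective(h_x, y, z, lam=1.0):
    # Equation (4): Bernoulli log likelihood of y under logits h_x, minus
    # lambda * sum over y of MMD(p(h_X | Z=1, Y=y), p(h_X | Z=0, Y=y)).
    p = 1 / (1 + np.exp(-h_x))
    loglik = np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))
    penalty = sum(mmd2(h_x[(y == v) & (z == 1)], h_x[(y == v) & (z == 0)])
                  for v in (0, 1))
    return loglik - lam * penalty
```

A representation that depends on $Y$ alone incurs essentially no penalty, while one that leaks $Z$ within $Y$ groups is penalized.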

### 3.1. Conditional independence implies equal performance on the anti-causal family

There are at least two distinct usages of the word *invariance* in the literature. One refers to independence (e.g. of a model to a nuisance or environment variable). The other refers to *invariant risk*, i.e., the risk is the same for all test distributions in some family (Arjovsky et al., 2019; Krueger et al., 2021). In some families and for some independence constraints, these can coincide.

**Proposition 1** *Suppose model  $h_X$  satisfies  $h_X \perp\!\!\!\perp Z | Y$  on any  $P_D \in \mathcal{F}$ . Then for all  $P_{D'} \in \mathcal{F}$ ,  $\mathbb{E}_{P_D}[\log p_D(Y|h_X)] = \mathbb{E}_{P_{D'}}[\log p_D(Y|h_X)]$ .*

This has been shown in Veitch et al. (2021); Puli et al. (2022), but we provide a self-contained proof in Appendix A. This result means that estimates of held-out performance on the training distribution (one member of $\mathcal{F}$) also represent test performance on other members of $\mathcal{F}$ under $h_X \perp\!\!\!\perp Z | Y$.

### 3.2. Failures of restricting to observed data

Under missingness, we observe $(X, Y, \Delta, \tilde{Z})$ instead of $(X, Y, Z)$, where $\Delta = 1$ means $\tilde{Z} = Z$ and $Z$ is unobserved when $\Delta = 0$. When $Z$ is subject to missingness, we cannot directly compute empirical estimates of the MMD. What happens when we compute the MMD only on samples with $Z$ observed? We refer to this as the *observed-only* MMD. Restricting computation to data with non-missing $Z$ enforces $h_X \perp\!\!\!\perp Z | Y = y, \Delta = 1$ instead of $h_X \perp\!\!\!\perp Z | Y = y$. We show that these conditions do not imply each other in general.

**Proposition 2** *There exist distributions on $(X, Y, \Delta, Z)$ such that*

$$\exists\, h_X^* \text{ s.t. } h_X^* \perp\!\!\!\perp Z \mid Y = y, \text{ but } h_X^* \not\perp\!\!\!\perp Z \mid Y = y, \Delta = 1,$$

*and there exist distributions on  $(X, Y, \Delta, Z)$  such that*

$$\exists\, h_X^* \text{ s.t. } h_X^* \perp\!\!\!\perp Z \mid Y = y, \Delta = 1, \text{ but } h_X^* \not\perp\!\!\!\perp Z \mid Y = y.$$

The proof is in Appendix D. This existence implies:

1. Optimizing the observed-only MMD can discard a solution to the full-data MMD.
2. Using the observed-only MMD may lead one to believe a model is invariant when it is not.

To keep generalization guarantees, one must enforce independence on the *full data distribution*.

### 3.3. MMD estimation under missingness

We present estimators of the full-data *unconditional* MMD under missing  $Z$ , which enforces  $h_X \perp\!\!\!\perp Z$  (unconditional on  $Y$ ). Everything that follows can be conditioned on  $Y = y$  to enforce  $h_X \perp\!\!\!\perp Z|Y = y$  simply by restricting samples used to estimate the expectations to those with  $Y = y$ . For a kernel  $k$  let  $k_{XX'} \triangleq k(h_X, h_{X'})$ . The MMD can be written as:

$$\text{MMD}_k\big(p(h_X|Z = 1), p(h_X|Z = 0)\big) = \mathbb{E}_{\substack{X|Z=1 \\ X'|Z'=1}} k_{XX'} + \mathbb{E}_{\substack{X|Z=0 \\ X'|Z'=0}} k_{XX'} - 2\, \mathbb{E}_{\substack{X|Z=1 \\ X'|Z'=0}} k_{XX'}. \quad (5)$$

Estimation is challenging due to missingness in the conditioning set. For  $b, b' \in \{0, 1\}$ , let  $N(b, b') \triangleq P(Z = b)P(Z' = b')$  and let  $Z_1 \triangleq Z$  and  $Z_0 \triangleq 1 - Z$ . For any of the expectations, the dependence on  $Z$  can be re-written with indicators in a joint expectation:

$$\mathbb{E}_{\substack{X|Z=b \\ X'|Z'=b'}} k_{XX'} = \frac{1}{N(b, b')}\, \mathbb{E} \left[ k_{XX'} \cdot Z_b \cdot Z'_{b'} \right]. \quad (6)$$

Under no missingness, each expectation could be estimated with Monte Carlo. We now develop three estimators<sup>1</sup> for these expectations: a simple $G_W$-based estimator (Equation (7)), an $m_W$-based estimator (Equation (8)), and a doubly-robust estimator that combines them (Equation (9)).
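Under no missingness, Equation (6) suggests the following plug-in computation (our own V-statistic sketch, with $N(b, b')$ replaced by empirical frequencies), which serves as the full-data baseline the later estimators must match:

```python
import numpy as np

def cond_kernel_term(K, z, b, bp):
    # Plug-in estimate of E_{X|Z=b, X'|Z'=b'}[k_{XX'}] via Equation (6):
    # indicators Z_b = 1[Z = b] replace the conditioning, and N(b, b')
    # is replaced by the product of empirical frequencies.
    zb = (z == b).astype(float)
    zbp = (z == bp).astype(float)
    num = (K * np.outer(zb, zbp)).mean()
    return num / (zb.mean() * zbp.mean())

def full_mmd2(K, z):
    # Equation (5) assembled from the three conditional kernel terms;
    # K is the precomputed kernel matrix k(h_X, h_X') over one sample.
    return (cond_kernel_term(K, z, 1, 1)
            + cond_kernel_term(K, z, 0, 0)
            - 2 * cond_kernel_term(K, z, 1, 0))
```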

**Proposition 3** (*$G_W$-based re-weighted estimator*) Assume positivity, ignorability, and that, for all $w$, $G_w = \mathbb{E}[\Delta|W = w]$. Then,

$$\mathbb{E}_{\substack{X|Z=b \\ X'|Z'=b'}} \left[ k_{XX'} \right] = \frac{1}{N(b, b')}\, \mathbb{E} \left[ \frac{\Delta \Delta' \tilde{Z}_b \tilde{Z}'_{b'}}{G_{WW'}}\, k_{XX'} \right]. \quad (7)$$

**Proposition 4** (*$m_W$-based regression estimator*) Assume ignorability and that, for all $w$, $m_w = \mathbb{E}[Z|W = w]$. Then,

$$\mathbb{E}_{\substack{X|Z=b \\ X'|Z'=b'}} \left[ k_{XX'} \right] = \frac{1}{N(b, b')}\, \mathbb{E} \left[ m_{Wb} \cdot m_{W'b'} \cdot k_{XX'} \right]. \quad (8)$$


---

1. By *estimator*, we really mean that we provide an alternate form of the expectation. The actual estimators we propose are empirical Monte Carlo estimates of the presented expectations, with $G$ and $m$ replaced with models.

Let $m_{W1} \triangleq m_W$, $m_{W0} \triangleq 1 - m_W$, and $G_{WW'} \triangleq G_W G_{W'}$.

**Proposition 5** (*DR estimator*) Assume positivity, ignorability, and that, for all $w$, either $G_w = \mathbb{E}[\Delta|W = w]$ or $m_w = \mathbb{E}[Z|W = w]$. Then,

$$\mathbb{E}_{\substack{X|Z=b \\ X'|Z'=b'}} [k_{XX'}] = \frac{1}{N(b, b')}\, \mathbb{E} \left[ \left( \frac{\Delta \Delta' \tilde{Z}_b \tilde{Z}'_{b'}}{G_{WW'}} - \frac{\Delta \Delta' - G_{WW'}}{G_{WW'}} \cdot m_{Wb} \cdot m_{W'b'} \right) k_{XX'} \right]. \quad (9)$$

The proof is in Appendix G. We can use any of Equations (7) to (9) to estimate the terms in Equation (5). Each of Equations (7) to (9) is a ratio of two expectations: the normalization constant $N(b, b')$ depends on $\mathbb{E}[Z]$ and must itself be estimated under missingness (e.g., with Equation (3)). The ratio of consistent estimates of these quantities is consistent by the weak law of large numbers and Slutsky's theorem. We discuss estimation in practice, trade-offs among the three estimators, and variance in Appendix B. We review recent related work in Section 5.
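Putting the pieces together for the DR case, a Monte Carlo sketch of Equation (9) for binary $Z$ might look as follows (variable names are ours; $N(b, b')$ is itself estimated with Equation (3)):

```python
import numpy as np

def dr_kernel_term(K, zt, delta, g, m, b, bp):
    # Equation (9) for one (b, b') pair. zt = Delta * Z (observed surrogate),
    # g approximates E[Delta | W], m approximates E[Z | W].
    zb  = zt if b == 1 else 1.0 - zt      # Z_b, valid wherever Delta = 1
    zbp = zt if bp == 1 else 1.0 - zt
    mb  = m if b == 1 else 1.0 - m        # m_{Wb}
    mbp = m if bp == 1 else 1.0 - m
    dd = np.outer(delta, delta)           # Delta * Delta'
    gg = np.outer(g, g)                   # G_{WW'} = G_W * G_{W'}
    core = dd * np.outer(zb, zbp) / gg - (dd - gg) / gg * np.outer(mb, mbp)
    # N(b, b') via the DR estimate of E[Z] (Equation (3))
    ez = np.mean(delta * zt / g - (delta - g) / g * m)
    pb, pbp = (ez if b == 1 else 1 - ez), (ez if bp == 1 else 1 - ez)
    return (core * K).mean() / (pb * pbp)

def dr_mmd2(K, zt, delta, g, m):
    # Equation (5) with each conditional term estimated doubly-robustly.
    return (dr_kernel_term(K, zt, delta, g, m, 1, 1)
            + dr_kernel_term(K, zt, delta, g, m, 0, 0)
            - 2 * dr_kernel_term(K, zt, delta, g, m, 1, 0))
```

The $G_W$-only (Equation (7)) and $m_W$-only (Equation (8)) estimators fall out by dropping the second term of `core` or keeping only it.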

## 4. Experiments

We compare accuracy and MMD minimization using different estimators: NONE (MLE only, no MMD), FULL (MLE and MMD using data with  $Z$  fully-observed, which is what could be used as an objective under no missingness), OBS (MLE and observed-only MMD), DR (MLE and DR estimator, called DR+ when using true  $G_W$ ), IP (MLE and re-weighted estimator, called IP+ when using true  $G_W$ ), and REG (MLE and regression estimator).

We first compare these algorithms in a simulation study. We then use textured MNIST to show the utility of the proposed estimators on high-dimensional data. In quantitative tables, we show mean  $\pm$  standard deviation over three seeds. We then predict hospital length of stay in the MIMIC dataset, and compare performance when demographic nuisances are subject to missingness. For the  $Y|X$  predictive loss, we use negative Bernoulli log likelihood with logit equal to model output  $h_X$ .

**Comparing MMDs.** In all tables, the training-set MMD used to evaluate each method is computed with the ground-truth full-data MMD estimation method (see Equation (6)), to show the actual value of MMD achieved regardless of optimization method. This is also what the FULL method directly optimizes. True $Z$ values are available in both simulated and real data because missingness is simulated. However, each model trains and validates the $\log p(Y|X) + \text{MMD}$ loss using its own estimation method for the MMD.

### 4.1. Experiment 1: Simulation

We set up a strong $(Y, Z)$ correlation. With $\bar{Y} = 1 - Y$, the training and validation sets are drawn as:

$$Y \sim \mathcal{B}(0.5), \quad Z \sim \mathcal{B}(0.9Y + 0.1\bar{Y}), \quad X \sim [\mathcal{N}(Y - Z, \sigma_X^2),\ \mathcal{N}(Y + Z, \sigma_X^2)]. \quad (10)$$

The test set has the opposite relationship  $Z \sim \mathcal{B}(.1Y + .9\bar{Y})$ . Here  $h_X^* = (X_1 + X_2)/2$  predicts  $Y$  with smallest MSE among representations satisfying independence. We construct  $\Delta$  to show the failure of computing MMD on the observed-only subset. For this, we use  $\hat{Z} \triangleq -(X_1 - X_2)/2$ , which is correlated with  $Z$ . We draw  $\Delta$  conditional on  $h_X^*$  and  $\hat{Z}$  (both are functions of  $X$ ):

$$Q = \mathbb{1}[h_X^* > 0.6] \cdot \mathbb{1}[\hat{Z} < 0.6], \quad \Delta \sim \mathcal{B}(Q + 0.2\bar{Q}).$$

**Table 1:** Simulation, $\lambda = 1$. NONE has the highest MMD and the lowest test accuracy. OBS improves over this. The DR and REG methods bring the MMD close to 0.0 and attain the best test accuracy.

<table border="1">
<thead>
<tr>
<th></th>
<th>NONE</th>
<th>OBS</th>
<th>FULL</th>
<th>DR</th>
<th>DR+</th>
<th>REG</th>
</tr>
</thead>
<tbody>
<tr>
<td>TR MMD</td>
<td><math>0.21 \pm 0.04</math></td>
<td><math>0.05 \pm 0.04</math></td>
<td><math>0.00 \pm 0.01</math></td>
<td><math>0.00 \pm 0.01</math></td>
<td><math>0.00 \pm 0.01</math></td>
<td><math>0.01 \pm 0.00</math></td>
</tr>
<tr>
<td>TR ACC</td>
<td><math>0.89 \pm 0.00</math></td>
<td><math>0.87 \pm 0.00</math></td>
<td><math>0.86 \pm 0.01</math></td>
<td><math>0.85 \pm 0.01</math></td>
<td><math>0.84 \pm 0.02</math></td>
<td><math>0.86 \pm 0.00</math></td>
</tr>
<tr>
<td>TE ACC</td>
<td><math>0.67 \pm 0.02</math></td>
<td><math>0.77 \pm 0.02</math></td>
<td><math>0.80 \pm 0.01</math></td>
<td><math>0.81 \pm 0.02</math></td>
<td><math>0.81 \pm 0.01</math></td>
<td><math>0.79 \pm 0.02</math></td>
</tr>
</tbody>
</table>

**Figure 2:** Textured MNIST with digits 0, 1 on two textures from the Brodatz dataset.

This example construction leads to  $h_X^* \perp\!\!\!\perp Z|Y$  but  $h_X^* \not\perp\!\!\!\perp Z|Y, \Delta = 1$ . For  $h, G_W$  and  $m_W$  we use small feed-forward neural networks. See the repository<sup>2</sup> for more details.
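The simulation's data and missingness can be generated in a few lines; our own sketch of Equation (10) and the construction above:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n, p=0.9, sigma_x=1.0):
    # Equation (10); the test set instead uses p = 0.1, flipping Z | Y.
    y = rng.binomial(1, 0.5, n)
    z = rng.binomial(1, p * y + (1 - p) * (1 - y))
    x = np.stack([rng.normal(y - z, sigma_x), rng.normal(y + z, sigma_x)], axis=1)
    return x, y, z

x, y, z = simulate(20_000)
h_star = (x[:, 0] + x[:, 1]) / 2      # = Y + noise: invariant to Z
z_hat = -(x[:, 0] - x[:, 1]) / 2      # = Z + noise: correlated with Z

# Missingness: Delta depends only on functions of X, so Z ⊥ Delta | (X, Y)
# (ignorability) holds, but conditioning on Delta = 1 induces h*-Z dependence
# within Y groups, as in Proposition 2.
q = (h_star > 0.6) & (z_hat < 0.6)
delta = rng.binomial(1, np.where(q, 1.0, 0.2))
```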

**Results.** In Table 1, the DR estimators achieve performance indistinguishable from the full-data MMD, in both MMD and accuracy, and better than NONE and OBS. We include more results in Appendix C.

### 4.2. Experiment 2: Textured MNIST

Following Goodfellow et al. (2013)<sup>3</sup>, we correlate MNIST digits 0 and 1 with two textures from the Brodatz dataset (Figure 2). This is an example of invariance to image backgrounds when not all background labels are available. We follow a similar setup to colored MNIST (Arjovsky et al., 2019): because $Y|X$ is essentially deterministic on MNIST, even strong spurious correlations may be ignored by a model. To push $Y|X$ closer to what may be expected in noisier real data, we flip the label with 25% chance. Since $X$ alone then predicts $Y$ only 75% of the time, the model is pushed to use the texture instead. The missingness is based on the average pixel intensity of $X$ and its class. Let $\mu_X$ be the mean pixel value of a 28×28 MNIST image. We set

$$Q = \mathbb{1}[Y = 1] \cdot \mathbb{1}[\mu_X < 0.3], \quad \Delta \sim \mathcal{B}(Q + 0.2\bar{Q}).$$

The choice of $Q$ is correlated with $Z$ through whether the image is light or dark grey. As in Proposition 2 and Experiment 1, this means subsetting to $\Delta = 1$ does not imply independence on the full population and may throw away solutions that do satisfy it. For $h, G_W$ and $m_W$ we use small convolutional networks. We include more details in the repository.

**Results.** In Table 2, NONE and OBS perform poorly on test. In contrast, the DR estimators — including the one with a learned  $G_W, m_W$  — achieve close to FULL’s performance.

2. <https://github.com/marikgoldstein/missing-mmd>

3. We adapt this repository to construct textured MNIST and will make our code available.

**Table 2:** MNIST, $\lambda = 1$. DR and REG estimators achieve close to FULL performance, as measured by full MMD $= 0$ and high test accuracy. NONE and OBS perform poorly on test. OBS is notably high-variance.

<table border="1">
<thead>
<tr>
<th></th>
<th>NONE</th>
<th>OBS</th>
<th>FULL</th>
<th>DR</th>
<th>DR+</th>
<th>REG</th>
</tr>
</thead>
<tbody>
<tr>
<td>TR MMD</td>
<td><math>2.05 \pm 0.18</math></td>
<td><math>0.02 \pm 0.04</math></td>
<td><math>0.00 \pm 0.01</math></td>
<td><math>0.00 \pm 0.01</math></td>
<td><math>0.00 \pm 0.01</math></td>
<td><math>0.00 \pm 0.01</math></td>
</tr>
<tr>
<td>TR ACC</td>
<td><math>0.90 \pm 0.01</math></td>
<td><math>0.74 \pm 0.03</math></td>
<td><math>0.76 \pm 0.01</math></td>
<td><math>0.77 \pm 0.00</math></td>
<td><math>0.76 \pm 0.01</math></td>
<td><math>0.76 \pm 0.01</math></td>
</tr>
<tr>
<td>TE ACC</td>
<td><math>0.13 \pm 0.01</math></td>
<td><math>0.63 \pm 0.17</math></td>
<td><math>0.74 \pm 0.01</math></td>
<td><math>0.72 \pm 0.04</math></td>
<td><math>0.73 \pm 0.01</math></td>
<td><math>0.73 \pm 0.01</math></td>
</tr>
</tbody>
</table>

**Table 3:** MIMIC  $\lambda = 1$ . REG estimator matches FULL’s performance and improves upon OBS while DR does not, due to high objective variance during training (not shown in table).

<table border="1">
<thead>
<tr>
<th></th>
<th>NONE</th>
<th>OBS</th>
<th>FULL</th>
<th>DR</th>
<th>REG</th>
</tr>
</thead>
<tbody>
<tr>
<td>TR MMD</td>
<td><math>0.017 \pm 0.02</math></td>
<td><math>0.002 \pm 0.01</math></td>
<td><math>0.00 \pm 0.00</math></td>
<td><math>0.009 \pm 0.01</math></td>
<td><math>0.00 \pm 0.00</math></td>
</tr>
<tr>
<td>TR ACC</td>
<td><math>0.71 \pm 0.02</math></td>
<td><math>0.68 \pm 0.01</math></td>
<td><math>0.70 \pm 0.01</math></td>
<td><math>0.70 \pm 0.01</math></td>
<td><math>0.71 \pm 0.00</math></td>
</tr>
<tr>
<td>TE ACC</td>
<td><math>0.64 \pm 0.00</math></td>
<td><math>0.64 \pm 0.00</math></td>
<td><math>0.66 \pm 0.00</math></td>
<td><math>0.62 \pm 0.00</math></td>
<td><math>0.66 \pm 0.01</math></td>
</tr>
</tbody>
</table>

### 4.3. Experiment 3: Predicting length of stay in the ICU

We predict length of stay in the intensive care unit (ICU) in MIMIC (Johnson et al., 2016)<sup>4</sup> using demographics and first-day labs/vitals among patients who stay at least one day. The prediction task is whether the stay is longer than 2.5 days. To demonstrate that spurious correlations cause issues at deployment, we choose $Z = 1$ to indicate the patient is recorded as white. While race may be correlated with health outcomes (e.g., due to unobserved socioeconomic factors (Obermeyer et al., 2019)), it may not always be appropriate for a model to use this information (Chen et al., 2018). The test set represents a new population with a different outcome-demographic structure: we split the data so that the training/validation set has mostly samples with $Y \neq Z$ while the test set has mostly samples with $Y = Z$. We set non-male patients to have $Z$ observed with probability 0.2. We include more details in the repository.

**Results.** In this real data setting with strong $(Y, Z)$ correlation, the full-data MMD estimator reported in the table for all methods may have high variance, so we focus on the attained accuracies. The REG estimator matches the ground-truth FULL estimator and performs better than OBS and DR. This is not unexpected, since the REG estimator can have lower variance than DR when the true $G_W$ is small or $G_W$ is not modeled well (Davidian, 2005), especially under strong $(Y, Z)$ correlation (Appendix B).

## 5. Related work

We focus on recent work in fairness and invariant prediction with missing group/environment labels. Motivated by fairness, Wang et al. (2020) study a related problem of optimizing invariance-inducing objectives when the protected group label (analogous to our nuisance variables) is noisy. Given bounds on the level of label noise, this work proposes optimizing an objective based on the *distributionally robust optimization* framework (Namkoong and Duchi, 2016). Additionally, if given a small amount of true labels, the authors suggest fitting a model to de-noise the noisy group labels and re-weight examples in the objective, which is similar in spirit to our work. In our approach, however, we exploit structural assumptions about the missingness process to build a doubly-robust estimator of the MMD penalty used during optimization.

4. The MIMIC critical-care database is available on PhysioNet (Goldberger et al., 2000).

Lahoti et al. (2020) optimize worst-case-over-groups performance without known group labels. They rely on the assumption that groups are *computationally-identifiable* (i.e. that there exists some function on the data that labels their protected group membership) (Hébert-Johnson et al., 2018) and use a model to identify groups on which performance is worst. They pose an adversarial optimization between the group-labeling model — which searches for groups with poor performance — and the primary predictive model. Inspired by this work, Creager et al. (2021) find worst-case group assignments based on an empirical risk minimization (ERM) model that maximizes invariance penalties and Ahmed et al. (2020) illustrate that this objective performs well on a wide range of benchmarks. Relatedly, Liu et al. (2021) run usual ERM training and then a second iteration of ERM that upweights the loss for datapoints on which the model performs badly. This identifies groups with bad model performance without explicit group labels. In both of these works, the groups could be seen either as a nuisance variable or as a confounder that correlates the label and some nuisance variable. However, in our setting (and in that of Makar et al. (2021); Veitch et al. (2021); Puli et al. (2022)), in exchange for being willing to make assumptions on the test distribution family, we do not need to observe samples with poor model performance at training time (and may not see any) to prevent sudden decreases in performance on held-out data at test time.

## 6. Conclusion

We present estimators for the MMD that extend recent invariant prediction methods to missing data. Unlike prior estimators that only leverage data with nuisances observed, or consider worst-case estimation, the presented estimators of the full-data objective are consistent when either auxiliary model can be learned. As we show in Proposition 2, estimation of the full-data objective is necessary to preserve the theoretical properties of invariant prediction methods. In the experiments, the DR and REG estimators are able to match full-data MMD performance and improve test accuracy relative to the OBS estimator. In practice, we recommend exploring the two simpler proposed estimators (REG and IP) in addition to the DR estimator and selecting the model based on the validation metric.

Moving forward, one limitation is that the full-data estimator, used as the ground-truth MMD evaluation in the experiments, may itself have high variance on small datasets with strong nuisance-label correlation. Variance reduction is an important avenue both for optimizing and evaluating with the MMD using smaller batch sizes (in our experiments, the batch sizes of 1500 for MNIST and 4000 for MIMIC are large). Beyond variance reduction, it is a promising direction to apply the methodology in this work to the mutual information objective in Puli et al. (2022), which sidesteps the choice of kernel and may be better suited for continuous and high-dimensional nuisances.

## Acknowledgments

The authors thank Scotty Fleming, Joe Futoma, Leon Gatys, Sean Jewell, Taylor Killian, and Guillermo Sapiro for feedback and discussions. This work was supported in part by NIH/NHLBI Award R01HL148248, NSF Award 1922658 NRT-HDR: FUTURE Foundations, Translation, and Responsibility for Data Science, and NSF Award 1815633 SHF.

## References

Faruk Ahmed, Yoshua Bengio, Harm van Seijen, and Aaron Courville. Systematic generalisation with group invariant predictions. In *International Conference on Learning Representations*, 2020.

Martin Arjovsky, Léon Bottou, Ishaan Gulrajani, and David Lopez-Paz. Invariant risk minimization. *arXiv preprint arXiv:1907.02893*, 2019.

Heejung Bang and James M Robins. Doubly robust estimation in missing data and causal inference models. *Biometrics*, 61(4):962–973, 2005.

David A Binder. On the variances of asymptotically normal estimators from complex surveys. *International Statistical Review/Revue Internationale de Statistique*, pages 279–292, 1983.

Irene Chen, Fredrik D Johansson, and David Sontag. Why is my classifier discriminatory? *arXiv preprint arXiv:1805.12002*, 2018.

Elliot Creager, Jörn-Henrik Jacobsen, and Richard Zemel. Environment inference for invariant learning. In *International Conference on Machine Learning*, pages 2189–2200. PMLR, 2021.

Marie Davidian. Double robustness in estimation of causal treatment effects, 2005.

Robert Geirhos, Jörn-Henrik Jacobsen, Claudio Michaelis, Richard Zemel, Wieland Brendel, Matthias Bethge, and Felix A Wichmann. Shortcut learning in deep neural networks. *Nature Machine Intelligence*, 2(11):665–673, 2020.

Ary L Goldberger, Luis AN Amaral, Leon Glass, Jeffrey M Hausdorff, Plamen Ch Ivanov, Roger G Mark, Joseph E Mietus, George B Moody, Chung-Kang Peng, and H Eugene Stanley. PhysioBank, PhysioToolkit, and PhysioNet: components of a new research resource for complex physiologic signals. *Circulation*, 101(23):e215–e220, 2000.

Ian J. Goodfellow, David Warde-Farley, Pascal Lamblin, Vincent Dumoulin, Mehdi Mirza, Razvan Pascanu, James Bergstra, Frédéric Bastien, and Yoshua Bengio. Pylearn2: a machine learning research library. *arXiv preprint arXiv:1308.4214*, 2013. URL <http://arxiv.org/abs/1308.4214>.

Arthur Gretton, Olivier Bousquet, Alex Smola, and Bernhard Schölkopf. Measuring statistical dependence with hilbert-schmidt norms. In *International conference on algorithmic learning theory*, pages 63–77. Springer, 2005.

Arthur Gretton, Karsten M Borgwardt, Malte J Rasch, Bernhard Schölkopf, and Alexander Smola. A kernel two-sample test. *The Journal of Machine Learning Research*, 13(1):723–773, 2012.

Ishaan Gulrajani and David Lopez-Paz. In search of lost domain generalization. *arXiv preprint arXiv:2007.01434*, 2020.

Ursula Hébert-Johnson, Michael Kim, Omer Reingold, and Guy Rothblum. Multicalibration: Calibration for the (computationally-identifiable) masses. In *International Conference on Machine Learning*, pages 1939–1948. PMLR, 2018.

Miguel A. Hernan and James M. Robins. *Causal Inference: What If*. Chapman & Hall/CRC, 1st edition, 2021.

Daniel G Horvitz and Donovan J Thompson. A generalization of sampling without replacement from a finite universe. *Journal of the American statistical Association*, 47(260):663–685, 1952.

Alistair EW Johnson, Tom J Pollard, Lu Shen, H Lehman Li-wei, Mengling Feng, Mohammad Ghassemi, Benjamin Moody, Peter Szolovits, Leo Anthony Celi, and Roger G Mark. MIMIC-III, a freely accessible critical care database. *Scientific data*, 3:160035, 2016.

Joseph DY Kang and Joseph L Schafer. Demystifying double robustness: A comparison of alternative strategies for estimating a population mean from incomplete data. *Statistical science*, pages 523–539, 2007.

David Krueger, Ethan Caballero, Joern-Henrik Jacobsen, Amy Zhang, Jonathan Binas, Dinghuai Zhang, Remi Le Priol, and Aaron Courville. Out-of-distribution generalization via risk extrapolation (rex). In *International Conference on Machine Learning*, pages 5815–5826. PMLR, 2021.

Preethi Lahoti, Alex Beutel, Jilin Chen, Kang Lee, Flavien Prost, Nithum Thain, Xuezhi Wang, and Ed H Chi. Fairness without demographics through adversarially reweighted learning. *arXiv preprint arXiv:2006.13114*, 2020.

Roderick JA Little and Donald B Rubin. *Statistical analysis with missing data*, volume 793. John Wiley & Sons, 2019.

Evan Z Liu, Behzad Haghgoo, Annie S Chen, Aditi Raghunathan, Pang Wei Koh, Shiori Sagawa, Percy Liang, and Chelsea Finn. Just train twice: Improving group robustness without training group information. In *International Conference on Machine Learning*, pages 6781–6792. PMLR, 2021.

Maggie Makar, Ben Packer, Dan Moldovan, Davis Blalock, Yoni Halpern, and Alexander D’Amour. Causally-motivated shortcut removal using auxiliary labels. *arXiv preprint arXiv:2105.06422*, 2021.

Hongseok Namkoong and John C Duchi. Stochastic gradient methods for distributionally robust optimization with f-divergences. In *NeurIPS*, volume 29, pages 2208–2216, 2016.

Ziad Obermeyer, Brian Powers, Christine Vogeli, and Sendhil Mullainathan. Dissecting racial bias in an algorithm used to manage the health of populations. *Science*, 366(6464):447–453, 2019.

Judea Pearl. *Causality*. Cambridge university press, 2009.

Jonas Peters, Peter Bühlmann, and Nicolai Meinshausen. Causal inference by using invariant prediction: identification and confidence intervals. *Journal of the Royal Statistical Society. Series B (Statistical Methodology)*, pages 947–1012, 2016.

Aahlad Manas Puli, Lily H Zhang, Eric Karl Oermann, and Rajesh Ranganath. Out-of-distribution generalization in the presence of nuisance-induced spurious correlations. In *International Conference on Learning Representations*, 2022. URL <https://openreview.net/forum?id=12RoR2o32T>.

James M Robins, Andrea Rotnitzky, and Lue Ping Zhao. Estimation of regression coefficients when some regressors are not always observed. *Journal of the American statistical Association*, 89 (427):846–866, 1994.

J.M. Robins and A G Rotnitzky. Comment on the bickel and kwon article, ‘inference for semiparametric models: Some questions and an answer’. *Statistica Sinica*, 11:920–936, 01 2001.

Donald B Rubin. Inference and missing data. *Biometrika*, 63(3):581–592, 1976.

Donald B Rubin. *Multiple imputation for nonresponse in surveys*, volume 81. John Wiley & Sons, 2004.

Shiori Sagawa, Aditi Raghunathan, Pang Wei Koh, and Percy Liang. An investigation of why overparameterization exacerbates spurious correlations. In *International Conference on Machine Learning*, pages 8346–8356. PMLR, 2020.

Joseph L Schafer. *Analysis of incomplete multivariate data*. CRC press, 1997.

Zoltán Szabó and Bharath K Sriperumbudur. Characteristic and universal tensor product kernels. *J. Mach. Learn. Res.*, 18:233–1, 2017.

Mark J Van der Laan and James M Robins. *Unified methods for censored longitudinal data and causality*. Springer Science & Business Media, 2003.

Victor Veitch, Alexander D’Amour, Steve Yadlowsky, and Jacob Eisenstein. Counterfactual invariance to spurious correlations in text classification. *Advances in Neural Information Processing Systems*, 34, 2021.

Serena Wang, Wenshuo Guo, Harikrishna Narasimhan, Andrew Cotter, Maya Gupta, and Michael I Jordan. Robust optimization for fairness with noisy protected groups. *arXiv preprint arXiv:2002.09343*, 2020.

WHO Expert Consultation. Appropriate body-mass index for Asian populations and its implications for policy and intervention strategies. *Lancet (London, England)*, 363(9403):157–163, 2004.

## Appendix A. Invariant predictor

Here, we show that  $h_X \perp\!\!\!\perp Z | Y = y$  for all  $y$  implies invariant risk in  $\mathcal{F}$ . The proof uses the following insights:

- Satisfying the independence  $h_X \perp\!\!\!\perp Z | Y$  means  $P_{train}(h_X|Y, Z) = P_{train}(h_X|Y)$ .
- The assumptions on  $\mathcal{F}$  mean  $P_{train}(h_X|Y, Z) = P(h_X|Y, Z)$  for any member of the family.
- Combined, these mean  $P_{train}(h_X|Y, Z) = P_D(h_X|Y)$  for any  $P_D \in \mathcal{F}$  when the model  $P_{train}(Y|h_X)$  satisfies  $h_X \perp\!\!\!\perp Z | Y$ .

**Proposition** Suppose model  $h_X$  satisfies  $h_X \perp\!\!\!\perp Z | Y$  on any  $P_D \in \mathcal{F}$ . Then for all  $P_{D'} \in \mathcal{F}$ ,  $\mathbb{E}_{P_D}[\log p_D(Y|h_X)] = \mathbb{E}_{P_{D'}}[\log p_{D'}(Y|h_X)]$ .

**Proof** Consider test set performance  $\mathbb{E}_{P_{test}(Y,X)}[\log P_{train}(Y|h_X)]$ . By the assumption on the family, by Bayes, and by satisfying the independence constraint:

$$
\begin{aligned}
 \mathbb{E}_{P_{test}(Y,X)}[\log P_{train}(Y|h_X)] &= \mathbb{E}_{P_{test}(Y,X)} \left[ \log \frac{P_{train}(h_X|Y)P(Y)}{P_{train}(h_X)} \right] \\
 &= \mathbb{E}_{P_{test}(Y,X,Z)} \left[ \log \frac{P_{train}(h_X|Y, Z)P(Y)}{\mathbb{E}_{P(Y)}[P_{train}(h_X|Y, Z)]} \right] \\
 &= \mathbb{E}_{P_{test}(Y,h_X,Z)} \left[ \log \frac{P_{train}(h_X|Y, Z)P(Y)}{\mathbb{E}_{P(Y)}[P_{train}(h_X|Y, Z)]} \right] \\
 &= \mathbb{E}_{P_{test}(Y,h_X,Z)} \left[ \log \frac{P(h_X|Y, Z)P(Y)}{\mathbb{E}_{P(Y)}[P(h_X|Y, Z)]} \right] \\
 &= \mathbb{E}_{P_{test}(Y,h_X,Z)} \left[ \log \frac{P(h_X|Y)P(Y)}{\mathbb{E}_{P(Y)}[P(h_X|Y)]} \right] \\
 &= \mathbb{E}_{P_{test}(Y,h_X)} \left[ \log \frac{P(h_X|Y)P(Y)}{\mathbb{E}_{P(Y)}[P(h_X|Y)]} \right] \\
 &= \mathbb{E}_{P_{test}(h_X|Y)P_{test}(Y)} \left[ \log \frac{P(h_X|Y)P(Y)}{\mathbb{E}_{P(Y)}[P(h_X|Y)]} \right] \\
 &= \mathbb{E}_{P(h_X|Y)P(Y)} \left[ \log \frac{P(h_X|Y)P(Y)}{\mathbb{E}_{P(Y)}[P(h_X|Y)]} \right] \\
 &= \mathbb{E}_{P(h_X,Y)} \left[ \log \frac{P(h_X|Y)P(Y)}{\mathbb{E}_{P(Y)}[P(h_X|Y)]} \right]
\end{aligned}
$$

The last quantity does not depend on any specific  $P_D(Z|Y)$ . This means that the performance of the  $P_{train}(Y|h_X)$  model, when the independence is satisfied, is the same on all  $P_{test}$  in  $\mathcal{F}$ . ■

## Appendix B. Estimation in practice

### B.1. Splitting samples

For a given batch, we use 1/4 of the samples for the normalization term and 3/4 for the main term, though this split can be changed. Further, the main term of any of the three estimators is defined on a pair of independent samples, i.e., it is a *U-statistic*. There are two ways to estimate such expectations. One option is to split the samples reserved for the main term into two halves  $S_1$  and  $S_2$  and compute over all pairs  $i \in S_1, j \in S_2$ . The alternative, which has slightly higher sample efficiency and is the method we use, is to compute over all pairs of samples and leave any diagonal terms  $k(X_i, X_i)$  out of the average.
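The two options above can be sketched as follows. This is a minimal numpy sketch; `rbf_kernel`, `u_statistic_mean`, and `split_batch_mean` are illustrative names, not functions from our codebase.

```python
import numpy as np

def rbf_kernel(A, B, bandwidth=1.0):
    """Pairwise RBF kernel matrix between rows of A and rows of B."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2.0 * bandwidth ** 2))

def u_statistic_mean(X, bandwidth=1.0):
    """Average of k(X_i, X_j) over all pairs i != j (diagonal left out)."""
    K = rbf_kernel(X, X, bandwidth)
    n = K.shape[0]
    return (K.sum() - np.trace(K)) / (n * (n - 1))

def split_batch_mean(X, bandwidth=1.0):
    """Alternative: split into halves S1, S2 and average over cross pairs."""
    n = X.shape[0] // 2
    return rbf_kernel(X[:n], X[n:2 * n], bandwidth).mean()
```

Both routines estimate the same pairwise expectation; the all-pairs version simply reuses every sample on both sides of the pair.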

### B.2. Trade-offs among the 3 proposed estimators

For large samples, DR estimates with correct  $G_W$  and correct  $m_W$  have lower variance than the regression estimator with correct  $m_W$ , and lower variance than the re-weighting estimator with correct  $G_W$ . Even when  $m_W$  is mis-specified but  $G_W$  is correct, the DR estimator may still have lower variance than the re-weighting estimator with correct  $G_W$  alone. However, the DR estimator with correct  $m_W$  but mis-specified  $G_W$  may have *higher variance* than the regression estimator with correct  $m_W$  (Davidian, 2005). For this reason, when the missingness model  $G_W$  is wrong, the regression estimator may outperform the DR estimator even at large sample sizes.

The variance of the DR and re-weighting estimators comes from two distinct sources. One is general to missingness: small observation probabilities  $G_W$  in the denominator. The other is generic Monte Carlo error: we need individual samples of  $\tilde{Z}$  in the numerator. This is *especially a problem in the spurious-correlation setting*:  $Y$  and  $Z$  may be strongly correlated. We need to compute the MMD conditional on  $Y = y$ , which involves, for each  $Y = y$ , expectations using samples where  $Z = 1$  and where  $Z = 0$ , but we may have very few samples for one of these  $Z$  values. This second source of variance *also applies to estimates of the full-data MMD* under no missingness (eq. (6)). We compare the mean and variance of these estimators empirically in Appendix B.3.

### B.3. Empirical investigation of variance

As discussed, when  $\mathbb{E}[\Delta|X, Y]$  is small, when  $(Y, Z)$  are highly correlated, or both, all estimators will have high variance. We train a model on the experiment 1 simulation using the NONE method and then study the mean and variance of DR, DR+ (to study the effect of using the true  $G_W$ ), REG (since it yielded better performance on MIMIC), and FULL (since this method is used to report the MMDs in the tables). In this simulation, we are free to generate as many large batches of samples as needed. Keeping the model fixed, for each batch size between 1000 and 10,000, in increments of 250, we draw 100 new batches of that size and estimate the MMD using each method. For each method, we report the mean (fig. 3(a)) and standard deviation (fig. 3(b)) of these estimates.

Notably, we cannot compute an actual ground truth for the MMD of this model, but we can take the mean of the FULL estimate (no missingness) at the largest sample size of 10,000 samples; this is about 0.2. We see that the regression estimator stays closer to this number than the DR methods at all sample sizes. Interestingly, for standard deviation, we see that the DR estimator is better behaved than the DR+ estimator that uses the true  $G_W$ . This has also been observed for learned versus true propensity scores in treatment effect estimation and usually results from models learning less extreme probabilities than the true ones, trading some bias. In this case, there is not a substantial difference in estimated weights or in bias, but there is a large difference in variance. More investigation is required.

**Figure 3:** Figure 3(a): Mean of 100 MMD estimates at each batch size. Figure 3(b): Standard deviation of 100 MMD estimates at each batch size.

The main take-away from both plots is that the regression method seems more stable than DR and that  $G_W$  may be the part of the DR estimator that is not being learned well. On the other hand, the DR estimator may be safer when it is unknown whether  $G_W$  or  $m_W$  is easier to estimate. We recommend using all three of the proposed estimators and comparing validation objectives.
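The repeated-batch protocol behind this study can be sketched as follows; `estimate_mmd` and `sample_batch` are hypothetical placeholders for an MMD estimator and a data-drawing routine, not functions from our codebase.

```python
import numpy as np

def estimator_stability(estimate_mmd, sample_batch, batch_sizes, n_reps=100, seed=0):
    """For each batch size, draw n_reps fresh batches and record the mean and
    standard deviation of the estimate, mirroring the study reported in fig. 3."""
    rng = np.random.default_rng(seed)
    stats = {}
    for b in batch_sizes:
        vals = [estimate_mmd(sample_batch(b, rng)) for _ in range(n_reps)]
        stats[b] = (float(np.mean(vals)), float(np.std(vals)))
    return stats
```

Plugging in any of the proposed estimators for `estimate_mmd` yields the per-batch-size mean and standard deviation curves.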

## Appendix C. Full experiments

**Table 4:** Simulation.  $\lambda = 1$ .

<table border="1">
<thead>
<tr>
<th></th>
<th>NONE</th>
<th>OBS</th>
<th>FULL</th>
<th>DR</th>
<th>DR+</th>
<th>REG</th>
<th>IP</th>
<th>IP+</th>
</tr>
</thead>
<tbody>
<tr>
<td>TR MMD</td>
<td><math>0.21 \pm 0.04</math></td>
<td><math>0.05 \pm 0.04</math></td>
<td><math>0.00 \pm 0.01</math></td>
<td><math>0.00 \pm 0.01</math></td>
<td><math>0.00 \pm 0.01</math></td>
<td><math>0.01 \pm 0.00</math></td>
<td><math>0.00 \pm 0.00</math></td>
<td><math>0.00 \pm 0.01</math></td>
</tr>
<tr>
<td>TR ACC</td>
<td><math>0.89 \pm 0.00</math></td>
<td><math>0.87 \pm 0.00</math></td>
<td><math>0.86 \pm 0.01</math></td>
<td><math>0.85 \pm 0.01</math></td>
<td><math>0.84 \pm 0.02</math></td>
<td><math>0.86 \pm 0.00</math></td>
<td><math>0.84 \pm 0.01</math></td>
<td><math>0.84 \pm 0.01</math></td>
</tr>
<tr>
<td>TE ACC</td>
<td><math>0.67 \pm 0.02</math></td>
<td><math>0.77 \pm 0.02</math></td>
<td><math>0.80 \pm 0.01</math></td>
<td><math>0.81 \pm 0.02</math></td>
<td><math>0.81 \pm 0.01</math></td>
<td><math>0.79 \pm 0.02</math></td>
<td><math>0.82 \pm 0.02</math></td>
<td><math>0.81 \pm 0.00</math></td>
</tr>
</tbody>
</table>

**Table 5:** Simulation.  $\lambda = 5$ .

<table border="1">
<thead>
<tr>
<th></th>
<th>NONE</th>
<th>OBS</th>
<th>FULL</th>
<th>DR</th>
<th>DR+</th>
<th>REG</th>
<th>IP</th>
<th>IP+</th>
</tr>
</thead>
<tbody>
<tr>
<td>TR MMD</td>
<td><math>0.21 \pm 0.04</math></td>
<td><math>0.03 \pm 0.02</math></td>
<td><math>0.00 \pm 0.01</math></td>
<td><math>0.00 \pm 0.00</math></td>
<td><math>0.00 \pm 0.00</math></td>
<td><math>0.00 \pm 0.01</math></td>
<td><math>0.00 \pm 0.00</math></td>
<td><math>0.00 \pm 0.00</math></td>
</tr>
<tr>
<td>TR ACC</td>
<td><math>0.89 \pm 0.00</math></td>
<td><math>0.85 \pm 0.02</math></td>
<td><math>0.84 \pm 0.01</math></td>
<td><math>0.82 \pm 0.01</math></td>
<td><math>0.78 \pm 0.06</math></td>
<td><math>0.84 \pm 0.00</math></td>
<td><math>0.81 \pm 0.02</math></td>
<td><math>0.81 \pm 0.03</math></td>
</tr>
<tr>
<td>TE ACC</td>
<td><math>0.67 \pm 0.02</math></td>
<td><math>0.78 \pm 0.02</math></td>
<td><math>0.83 \pm 0.01</math></td>
<td><math>0.82 \pm 0.02</math></td>
<td><math>0.77 \pm 0.04</math></td>
<td><math>0.82 \pm 0.01</math></td>
<td><math>0.81 \pm 0.02</math></td>
<td><math>0.80 \pm 0.01</math></td>
</tr>
</tbody>
</table>

**Table 6:** MNIST  $\lambda = 1$ .

<table border="1">
<thead>
<tr>
<th></th>
<th>NONE</th>
<th>OBS</th>
<th>FULL</th>
<th>DR</th>
<th>DR+</th>
<th>REG</th>
<th>IP</th>
<th>IP+</th>
</tr>
</thead>
<tbody>
<tr>
<td>TR MMD</td>
<td><math>2.05 \pm 0.18</math></td>
<td><math>0.02 \pm 0.04</math></td>
<td><math>0.00 \pm 0.01</math></td>
<td><math>0.00 \pm 0.01</math></td>
<td><math>0.00 \pm 0.01</math></td>
<td><math>0.00 \pm 0.01</math></td>
<td><math>0.07 \pm 0.12</math></td>
<td><math>0.03 \pm 0.06</math></td>
</tr>
<tr>
<td>TR ACC</td>
<td><math>0.90 \pm 0.01</math></td>
<td><math>0.74 \pm 0.03</math></td>
<td><math>0.76 \pm 0.01</math></td>
<td><math>0.77 \pm 0.00</math></td>
<td><math>0.76 \pm 0.01</math></td>
<td><math>0.76 \pm 0.01</math></td>
<td><math>0.67 \pm 0.16</math></td>
<td><math>0.68 \pm 0.15</math></td>
</tr>
<tr>
<td>TE ACC</td>
<td><math>0.13 \pm 0.01</math></td>
<td><math>0.63 \pm 0.17</math></td>
<td><math>0.74 \pm 0.01</math></td>
<td><math>0.72 \pm 0.04</math></td>
<td><math>0.73 \pm 0.01</math></td>
<td><math>0.73 \pm 0.01</math></td>
<td><math>0.64 \pm 0.14</math></td>
<td><math>0.61 \pm 0.11</math></td>
</tr>
</tbody>
</table>

**Table 7:** MNIST  $\lambda = 5$ .

<table border="1">
<thead>
<tr>
<th></th>
<th>NONE</th>
<th>OBS</th>
<th>FULL</th>
<th>DR</th>
<th>DR+</th>
<th>REG</th>
<th>IP</th>
<th>IP+</th>
</tr>
</thead>
<tbody>
<tr>
<td>TR MMD</td>
<td><math>2.05 \pm 0.18</math></td>
<td><math>0.01 \pm 0.02</math></td>
<td><math>0.00 \pm 0.00</math></td>
<td><math>0.00 \pm 0.00</math></td>
<td><math>0.00 \pm 0.01</math></td>
<td><math>0.00 \pm 0.01</math></td>
<td><math>0.01 \pm 0.01</math></td>
<td><math>0.01 \pm 0.02</math></td>
</tr>
<tr>
<td>TR ACC</td>
<td><math>0.90 \pm 0.01</math></td>
<td><math>0.66 \pm 0.15</math></td>
<td><math>0.75 \pm 0.01</math></td>
<td><math>0.65 \pm 0.14</math></td>
<td><math>0.65 \pm 0.13</math></td>
<td><math>0.75 \pm 0.01</math></td>
<td><math>0.71 \pm 0.08</math></td>
<td><math>0.60 \pm 0.12</math></td>
</tr>
<tr>
<td>TE ACC</td>
<td><math>0.13 \pm 0.01</math></td>
<td><math>0.65 \pm 0.15</math></td>
<td><math>0.75 \pm 0.01</math></td>
<td><math>0.73 \pm 0.02</math></td>
<td><math>0.70 \pm 0.09</math></td>
<td><math>0.75 \pm 0.01</math></td>
<td><math>0.55 \pm 0.30</math></td>
<td><math>0.60 \pm 0.12</math></td>
</tr>
</tbody>
</table>

## Appendix D. Failures of restricting to observed data

**Proposition** *There exist distributions on  $(X, Y, \Delta, Z)$  such that*

$$\exists h_X^* \text{ s.t. } h_X^* \perp\!\!\!\perp Z|Y = y, \text{ but } h_X^* \not\perp\!\!\!\perp Z|Y = y, \Delta = 1$$

*and there exist distributions on  $(X, Y, \Delta, Z)$  such that*

$$\exists h_X^* \text{ s.t. } h_X^* \perp\!\!\!\perp Z|Y = y, \Delta = 1 \text{ but } h_X^* \not\perp\!\!\!\perp Z|Y = y$$

**First direction.** There exist distributions on  $(X, Y, \Delta, Z)$  such that

$$\exists h_X^* \text{ s.t. } h_X^* \perp\!\!\!\perp Z|Y = y, \text{ but } h_X^* \not\perp\!\!\!\perp Z|Y = y, \Delta = 1$$

It suffices to illustrate this even when  $Z, Y$  are not correlated. Consider

$$Y \sim \mathcal{N}(0, 1), \quad Z \sim \mathcal{N}(0, \sigma_Z^2), \quad \epsilon_X \sim \mathcal{N}(0, \sigma_X^2), \quad X = [Y - Z + \epsilon_X, Y + Z]$$

For  $h_X^* = (X_1 + X_2)$ , we first show  $h_X^* \perp\!\!\!\perp Z|Y = y$ . We have

$$h_X^*|Y \sim \mathcal{N}(2Y, \sigma_X^2)$$

and in particular  $h_X^* = 2Y + \epsilon_X$ . Given  $Y = y$ , the only randomness in  $h_X^*$  is due to  $\epsilon_X$ . But  $\epsilon_X$  is independent of the joint variable  $(Z, Y)$ , meaning  $\epsilon_X \perp\!\!\!\perp Z|Y = y$  and therefore  $h_X^* \perp\!\!\!\perp Z|Y = y$ .

We now construct  $\Delta|(X, Y)$  such that  $h_X^* \not\perp\!\!\!\perp Z|Y = y, \Delta = 1$ . Let

$$\Delta = \text{OR}\left(\mathbb{1}[X_1 + X_2 < 0], \mathbb{1}[X_2 - Y < 0]\right).$$

Checking the condition

$$h_X^* \not\perp\!\!\!\perp Z|Y = y, \Delta = 1$$

(using definition of  $h_X^*$ ) is equivalent to checking

$$(X_1 + X_2) \not\perp\!\!\!\perp Z|Y = y, \Delta = 1$$

(using definition of  $\Delta$ ) is equivalent to checking

$$(X_1 + X_2) \not\perp\!\!\!\perp Z \mid Y = y, \ \text{OR}\left(\mathbb{1}[X_1 + X_2 < 0], \mathbb{1}[X_2 - Y < 0]\right) = 1$$

(using definition of  $X_2$ ) is equivalent to checking

$$(X_1 + X_2) \not\perp\!\!\!\perp Z \mid Y = y, \ \text{OR}\left(\mathbb{1}[X_1 + X_2 < 0], \mathbb{1}[Z < 0]\right) = 1$$

To check this, we ask whether the distribution of  $(X_1 + X_2) \mid Y = y, \Delta = 1$  changes when we additionally condition on different events involving  $Z$ , for example  $\mathbb{1}[Z < 0] = 1$  versus  $\mathbb{1}[Z \geq 0] = 1$ :

1.  $(X_1 + X_2) \mid Y = y, \text{OR} \left( \mathbb{1}[X_1 + X_2 < 0], \mathbb{1}[Z < 0] \right) = 1, \mathbb{1}[Z < 0] = 1$
2.  $(X_1 + X_2) \mid Y = y, \text{OR} \left( \mathbb{1}[X_1 + X_2 < 0], \mathbb{1}[Z < 0] \right) = 1, \mathbb{1}[Z \geq 0] = 1.$

We can show these two conditional variables differ in distribution simply by showing they differ in support. The first conditional variable can have full support because the event  $\mathbb{1}[Z < 0] = 1$  satisfies one of the OR conditions, leaving the other condition  $\mathbb{1}[X_1 + X_2 < 0] = \mathbb{1}[h_X^* < 0]$  free to take either value. However, the second conditional variable needs  $X_1 + X_2 = h_X^* < 0$  because  $\mathbb{1}[Z < 0]$  is not satisfied (since we condition on  $\mathbb{1}[Z \geq 0] = 1$ ) but the OR has to be 1. These different supports imply the distributions differ. That the variables differ on two sets of positive measure is enough to show dependence. Then  $(X_1 + X_2) \not\perp\!\!\!\perp Z | Y = y, \Delta = 1$ , which means  $h_X^* \not\perp\!\!\!\perp Z | Y = y, \Delta = 1$ .
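The construction above can also be checked numerically. The sketch below fixes  $Y = y = 0.5$ , draws  $Z$  and  $\epsilon_X$  as specified (unit variances are an illustrative choice), and verifies that the support of  $h_X^*$  under  $\Delta = 1$  differs between  $Z < 0$  and  $Z \geq 0$ .

```python
import numpy as np

rng = np.random.default_rng(0)
n, y = 100_000, 0.5                     # condition on Y = y by fixing it
Z = rng.normal(0, 1, n)
eps = rng.normal(0, 1, n)
h = 2 * y + eps                         # h*_X = X1 + X2 = 2Y + eps_X
delta = (h < 0) | (Z < 0)               # Delta = OR(1[X1+X2<0], 1[X2-Y<0])

# Given Y = y alone, h is a function of eps only, so it is independent of Z.
# Given Y = y and Delta = 1, the support of h depends on the sign of Z:
h_zneg = h[delta & (Z < 0)]             # full support
h_zpos = h[delta & (Z >= 0)]            # forced below 0 by the OR condition
assert h_zpos.max() < 0
assert h_zneg.max() > 0
```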

**Second direction.** There exist distributions on  $(X, Y, \Delta, Z)$  such that

$$\exists h_X^* \quad \text{s.t.} \quad h_X^* \perp\!\!\!\perp Z | Y = y, \Delta = 1 \quad \text{but} \quad h_X^* \not\perp\!\!\!\perp Z | Y = y$$

Let the data be drawn as

$$Y \sim \mathcal{N}(0, 1), \quad Z \sim \mathcal{B}(0.5), \quad X = [Y - Z, Y + Z]$$

Let  $h_X^* = \mathbb{1}[X_1 \geq 0]$ . We first show  $h_X^* \not\perp\!\!\!\perp Z | Y = y$ . We have

$$\begin{aligned} h_X^* &= \mathbb{1}[X_1 \geq 0] \\ &= \mathbb{1}[Y - Z \geq 0] \end{aligned}$$

Given  $Y = y$ , we ask if the random variable  $\mathbb{1}[y - Z \geq 0]$  is independent of  $Z$ . To show dependence, we show that the random variable  $\mathbb{1}[y - Z \geq 0]$  changes in distribution when  $Z$  takes on its two values:

1.  $\mathbb{1}[y - Z \geq 0] | Y = y, Z = 0$
2.  $\mathbb{1}[y - Z \geq 0] | Y = y, Z = 1$

Suppose  $y \in (0, 1)$ . When  $Z = 0$  we have that  $\mathbb{1}[y - Z \geq 0] = 1$  with probability one. When  $Z = 1$ , we have  $\mathbb{1}[y - Z \geq 0] = 0$  with probability one. Therefore the variables are dependent.
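A quick numerical check of this argument, with  $y = 0.5$  fixed (and covering the  $\Delta = 1$  case treated next):

```python
import numpy as np

rng = np.random.default_rng(0)
y = 0.5
Z = (rng.uniform(size=100_000) < 0.5).astype(int)   # Z ~ Bernoulli(0.5)
h = (y - Z >= 0).astype(int)                        # h*_X = 1[X1 >= 0] = 1[y - Z >= 0]

# Given Y = y alone, h is determined by Z, hence dependent on Z:
assert np.all(h[Z == 0] == 1)
assert np.all(h[Z == 1] == 0)

# Given Delta = 1[y - Z >= 0] = 1, h is the constant 1, hence independent of Z:
assert np.unique(h[h == 1]).tolist() == [1]
```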

We now let  $\Delta = \mathbb{1}[X_1 \geq 0] = \mathbb{1}[Y - Z \geq 0]$  and show  $h_X^* \perp\!\!\!\perp Z | Y = y, \Delta = 1$ . Note that  $\Delta(X, Y) = h_X^*$ . We ask whether

$$\mathbb{1}[Y - Z \geq 0] \perp\!\!\!\perp Z \mid Y = y, \ \mathbb{1}[Y - Z \geq 0] = 1$$

The conditioning set fully determines the variable  $\mathbb{1}[Y - Z \geq 0]$ , meaning it is a constant and therefore independent of  $Z$ . Therefore  $h_X^* \perp\!\!\!\perp Z | Y = y, \Delta = 1$  as desired.

## Appendix E. IP and outcome estimators

We review estimation of  $\mathbb{E}[Z]$  under missingness. Two pieces of the data-generation process can help, namely the missingness process  $G_W$  and the conditional expectation  $m_W$  of the missing variable:

$$G_W \triangleq \mathbb{E}[\Delta \mid X, Y], \quad m_W \triangleq \mathbb{E}[Z \mid X, Y]$$

Inverse-weighting estimators use  $G_W$  ([Horvitz and Thompson, 1952](#); [Binder, 1983](#); [Robins et al., 1994](#); [Van der Laan and Robins, 2003](#); [Hernan and Robins, 2021](#))

$$
\begin{aligned}
 \mathbb{E}[Z] &= \mathbb{E}_X \mathbb{E}_{Z|X} [Z] \\
 &= \mathbb{E}_X \mathbb{E}_{Z|X} \left[ \frac{\mathbb{E}[\Delta|X]}{\mathbb{E}[\Delta|X]} Z \right] \\
 &= \mathbb{E}_X \mathbb{E}_{Z|X} \mathbb{E}_{\Delta|X} \left[ \frac{\Delta Z}{\mathbb{E}[\Delta|X]} \right] \\
 &= \mathbb{E}_{X\Delta Z} \left[ \frac{\Delta Z}{\mathbb{E}[\Delta|X]} \right] \\
 &= \mathbb{E}_{X\Delta Z} \left[ \frac{\Delta Z}{G_W} \right] \\
 &= \mathbb{E}_{X\Delta Z} \left[ \frac{\Delta \tilde{Z}}{G_W} \right]
\end{aligned} \tag{11}
$$

This means we can estimate  $\mathbb{E}[Z]$  provided that (1) ignorability and positivity hold and (2)  $G_W$  is known.  $G_W$  can be estimated by regressing  $\Delta$  on  $X$ . Alternatively, standardization estimators use  $m_W$  ([Rubin, 1976](#); [Schafer, 1997](#); [Rubin, 2004](#); [Pearl, 2009](#); [Little and Rubin, 2019](#); [Hernan and Robins, 2021](#)):

$$\mathbb{E}[Z] = \mathbb{E}_X \left[ \mathbb{E}[Z|X] \right] = \mathbb{E}_X \left[ \mathbb{E}[Z|X, \Delta = 1] \right] = \mathbb{E}[m_W] \tag{12}$$

The equality between the middle two terms means that  $m_W$  can be estimated by regressing  $\tilde{Z}$  on  $X$  using just those samples where  $\Delta = 1$ . The equality follows from the ignorability assumption  $Z \perp\!\!\!\perp \Delta | X, Y$ .
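On simulated data where both auxiliary models are known exactly, the inverse-weighting estimator of eq. (11) and the standardization estimator of eq. (12) can be sketched as follows; a DR combination of the two is included for comparison. This is a toy simulation, not our experimental setup: the logistic forms for  $\mathbb{E}[Z|X]$  and  $G_W$  are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
X = rng.normal(size=n)
Z = (rng.uniform(size=n) < 1 / (1 + np.exp(-X))).astype(float)  # E[Z|X] = sigmoid(X)
G = np.clip(1 / (1 + np.exp(-0.5 - X)), 0.05, 0.95)             # P(Delta=1|X), bounded away from 0
Delta = (rng.uniform(size=n) < G).astype(float)
Z_obs = Delta * Z                                               # Z~: observed part only
m = 1 / (1 + np.exp(-X))                                        # correct m_W = E[Z|X]

ip = np.mean(Delta * Z_obs / G)                                 # eq. (11): inverse weighting
reg = np.mean(m)                                                # eq. (12): standardization
dr = np.mean(Delta * Z_obs / G - (Delta - G) / G * m)           # DR combination of the two

truth = Z.mean()                                                # full-data benchmark
```

With both  $G_W$  and  $m_W$  correct, all three estimates recover the full-data mean up to Monte Carlo error.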

## Appendix F. DR estimator of mean of Z

The inverse weighting and regression estimators can be combined. [Equation \(3\)](#) defines the DR estimator of  $\mathbb{E}[Z]$  by

$$\mathbb{E}[Z] = \mathbb{E} \left[ \frac{\Delta \tilde{Z}}{G_W} - \frac{\Delta - G_W}{G_W} m_W \right]$$

Let us rewrite this expectation until we see that it equals  $\mathbb{E}[Z]$  when either  $G$  or  $m$  is correct.

$$
\begin{aligned}
 & \mathbb{E} \left[ \frac{\Delta \tilde{Z}}{G_W} - \frac{\Delta - G_W}{G_W} m_W \right] \\
 &= \mathbb{E} \left[ \frac{\Delta Z}{G_W} - \frac{\Delta - G_W}{G_W} m_W \right] \\
 &= \mathbb{E} \left[ Z + \frac{\Delta Z}{G_W} - Z - \frac{\Delta - G_W}{G_W} m_W \right] \\
 &= \mathbb{E} \left[ Z + \frac{\Delta Z}{G_W} - \frac{G_W}{G_W} Z - \frac{\Delta - G_W}{G_W} m_W \right] \\
 &= \mathbb{E} \left[ Z + \frac{\Delta - G_W}{G_W} Z - \frac{\Delta - G_W}{G_W} m_W \right] \\
 &= \mathbb{E} \left[ Z + \frac{\Delta - G_W}{G_W} (Z - m_W) \right] \\
 &= \mathbb{E} [Z] + \mathbb{E} \left[ \frac{\Delta - G_W}{G_W} (Z - m_W) \right]
\end{aligned}
$$

The first term is what we want, so we just have to check if the second term is 0 when either  $G$  or  $m$  are correct. If  $G$  is correct (regardless of  $m$ ) then:

$$
\begin{aligned}
 \mathbb{E} \left[ \frac{\Delta - G_W}{G_W} (Z - m_W) \right] &= \mathbb{E} \left[ \mathbb{E} \left[ \frac{\Delta - G_W}{G_W} (Z - m_W) \middle| X, Z \right] \right] \\
 &= \mathbb{E} \left[ \frac{\mathbb{E}[\Delta | X, Z] - G_W}{G_W} (Z - m_W) \right] \\
 &= \mathbb{E} \left[ \frac{\mathbb{E}[\Delta | X] - G_W}{G_W} (Z - m_W) \right] \\
 &= \mathbb{E} \left[ \frac{G_W - G_W}{G_W} (Z - m_W) \right] = 0
\end{aligned}
$$

When  $m$  is correct (regardless of  $G$ ):

$$
\begin{aligned}
 \mathbb{E} \left[ \frac{\Delta - G_W}{G_W} (Z - m_W) \right] &= \mathbb{E} \left[ \mathbb{E} \left[ \frac{\Delta - G_W}{G_W} (Z - m_W) \middle| X, \Delta \right] \right] \\
 &= \mathbb{E} \left[ \frac{\Delta - G_W}{G_W} (\mathbb{E}[Z | X, \Delta] - m_W) \right] \\
 &= \mathbb{E} \left[ \frac{\Delta - G_W}{G_W} (\mathbb{E}[Z | X, \Delta] - \mathbb{E}[Z | X, \Delta = 1]) \right] \\
 &= \mathbb{E} \left[ \frac{\Delta - G_W}{G_W} (\mathbb{E}[Z | X] - \mathbb{E}[Z | X]) \right] = 0
\end{aligned}
$$

## Appendix G. Deriving MMD estimators under missingness

### G.1. Deriving the $G_W$ -based re-weighted estimator

Here we start at the target quantity and derive the estimator. We give the derivation for  $Z = 1, Z' = 1$ . The other cases are analogous.

$$
\begin{aligned}
 & \mathbb{E}_{P(X|Z=1)P(X'|Z'=1)} \left[ k_{XX'} \right] \\
 &= \int_{X,X'} k \, P(X|Z=1) P(X'|Z'=1) \, dX dX' \\
 &= \frac{1}{P(Z=1)} \frac{1}{P(Z'=1)} \int_{X,X'} k \, P(Z=1, X) P(Z'=1, X') \, dX dX' \\
 &= \frac{1}{P(Z=1)} \frac{1}{P(Z'=1)} \int_{X,X'} k \, P(Z=1|X) P(Z'=1|X') P(X) P(X') \, dX dX' \\
 &= \frac{1}{P(Z=1)} \frac{1}{P(Z'=1)} \int_{X,X'} k \, \mathbb{E}[Z|X] \, \mathbb{E}[Z'|X'] P(X) P(X') \, dX dX' \\
 &= \frac{1}{P(Z=1)} \frac{1}{P(Z'=1)} \mathbb{E}_{X,Z,X',Z'} \left[ k \cdot Z \cdot Z' \right] \\
 &= \frac{1}{P(Z=1)} \frac{1}{P(Z'=1)} \mathbb{E}_{X,Z,X',Z'} \left[ \frac{\mathbb{E}[\Delta|X] \, \mathbb{E}[\Delta'|X']}{\mathbb{E}[\Delta|X] \, \mathbb{E}[\Delta'|X']} k \cdot Z \cdot Z' \right] \\
 &= \frac{1}{P(Z=1)} \frac{1}{P(Z'=1)} \mathbb{E}_{X,\Delta,Z,X',\Delta',Z'} \left[ \frac{\Delta \Delta'}{G_W G_{W'}} k \cdot Z \cdot Z' \right]
\end{aligned}
$$

### G.2. Deriving the $m_W$ -based standardization estimator

Here we start at the target quantity and derive the estimator. We give the derivation for  $Z = 1, Z' = 1$ . The other cases are analogous.

$$
\begin{aligned}
 & \mathbb{E}_{P(X|Z=1)P(X'|Z'=1)} \left[ k_{XX'} \right] \\
 &= \int_{X,X'} k \, P(X|Z=1) P(X'|Z'=1) \, dX dX' \\
 &= \frac{1}{P(Z=1)} \frac{1}{P(Z'=1)} \int_{X,X'} k \, P(Z=1, X) P(Z'=1, X') \, dX dX' \\
 &= \frac{1}{P(Z=1)} \frac{1}{P(Z'=1)} \int_{X,X'} k \, P(Z=1|X) P(Z'=1|X') P(X) P(X') \, dX dX' \\
 &= \frac{1}{P(Z=1)} \frac{1}{P(Z'=1)} \int_{X,X'} k \, \mathbb{E}[Z|X] \, \mathbb{E}[Z'|X'] P(X) P(X') \, dX dX' \\
 &= \frac{1}{P(Z=1)} \frac{1}{P(Z'=1)} \mathbb{E}_{X,X'} \left[ m_W \cdot m_{W'} \cdot k \right]
\end{aligned}
$$

### G.3. Deriving the DR estimator

Here we start at the estimator and derive the target quantity. We give the derivation for  $Z = 1, Z' = 1$ . The other cases are analogous.

$$
\begin{aligned}
 & \frac{1}{P(Z=1)} \frac{1}{P(Z'=1)} \frac{1}{N(N-1)} \sum_{i \neq j} \left[ \frac{\Delta_{ij} \tilde{Z}_{ij} k_{ij}}{G_{ij}} - \frac{\Delta_{ij} - G_{ij}}{G_{ij}} m_{ij} k_{ij} \right] \\
 & \approx \frac{1}{P(Z=1)} \frac{1}{P(Z'=1)} \mathbb{E}_{X, \Delta, Z, X', \Delta', Z'} \left[ \frac{\Delta \Delta' \tilde{Z} \tilde{Z}'}{G_W G_{W'}} k - \frac{\Delta \Delta' - G_W G_{W'}}{G_W G_{W'}} m_W m_{W'} k \right] \\
 & = \frac{1}{P(Z=1)} \frac{1}{P(Z'=1)} \mathbb{E}_{X, \Delta, Z, X', \Delta', Z'} \left[ \frac{\Delta \Delta' Z Z'}{G_W G_{W'}} k - \frac{\Delta \Delta' - G_W G_{W'}}{G_W G_{W'}} m_W m_{W'} k \right] \\
 & = \frac{1}{P(Z=1)} \frac{1}{P(Z'=1)} \mathbb{E}_{X, \Delta, Z, X', \Delta', Z'} \left[ Z Z' k + \frac{\Delta \Delta'}{G_W G_{W'}} Z Z' k - Z Z' k - \frac{\Delta \Delta' - G_W G_{W'}}{G_W G_{W'}} m_W m_{W'} k \right] \\
 & = \frac{1}{P(Z=1)} \frac{1}{P(Z'=1)} \mathbb{E}_{X, \Delta, Z, X', \Delta', Z'} \left[ Z Z' k + \frac{\Delta \Delta'}{G_W G_{W'}} Z Z' k - \frac{G_W G_{W'}}{G_W G_{W'}} Z Z' k - \frac{\Delta \Delta' - G_W G_{W'}}{G_W G_{W'}} m_W m_{W'} k \right] \\
 & = \frac{1}{P(Z=1)} \frac{1}{P(Z'=1)} \mathbb{E}_{X, \Delta, Z, X', \Delta', Z'} \left[ Z Z' k + \frac{\Delta \Delta' - G_W G_{W'}}{G_W G_{W'}} Z Z' k - \frac{\Delta \Delta' - G_W G_{W'}}{G_W G_{W'}} m_W m_{W'} k \right] \\
 & = \frac{1}{P(Z=1)} \frac{1}{P(Z'=1)} \mathbb{E}_{X, \Delta, Z, X', \Delta', Z'} \left[ Z Z' k + \frac{\Delta \Delta' - G_W G_{W'}}{G_W G_{W'}} (Z Z' - m_W m_{W'}) k \right] \\
 & = \frac{1}{P(Z=1)} \frac{1}{P(Z'=1)} \mathbb{E}_{X, \Delta, Z, X', \Delta', Z'} \left[ Z Z' k \right] + \frac{1}{P(Z=1)} \frac{1}{P(Z'=1)} \mathbb{E}_{X, \Delta, Z, X', \Delta', Z'} \left[ \frac{\Delta \Delta' - G_W G_{W'}}{G_W G_{W'}} (Z Z' - m_W m_{W'}) k \right]
\end{aligned}
$$

Our estimator equals the sum of two terms. We first show that the first term equals the desired quantity, and then show that the second term equals 0 when either auxiliary model is correct.

$$
\begin{aligned}
\frac{1}{P(Z=1)} \frac{1}{P(Z'=1)} \mathbb{E}_{X, \Delta, Z, X', \Delta', Z'} [ZZ'k] &= \frac{1}{P(Z=1)} \frac{1}{P(Z'=1)} \mathbb{E}_{X, X'} \left[ k\, \mathbb{E}[ZZ'|X, X'] \right] \\
&= \frac{1}{P(Z=1)} \frac{1}{P(Z'=1)} \mathbb{E}_{X, X'} \left[ k\, P(Z=1, Z'=1|X, X') \right] \\
&= \frac{1}{P(Z=1)} \frac{1}{P(Z'=1)} \int_{X, X'} k\, P(Z=1, Z'=1|X, X')\, P(X, X')\, dX\, dX' \\
&= \frac{1}{P(Z=1)} \frac{1}{P(Z'=1)} \int_{X, X'} k\, P(Z=1|X)\, P(Z'=1|X')\, P(X)\, P(X')\, dX\, dX' \\
&= \frac{1}{P(Z=1)} \frac{1}{P(Z'=1)} \int_{X, X'} k\, P(Z=1, X)\, P(Z'=1, X')\, dX\, dX' \\
&= \int_{X, X'} k\, P(X|Z=1)\, P(X'|Z'=1)\, dX\, dX' \\
&= \mathbb{E}_{P(X|Z=1)P(X'|Z'=1)} [k]
\end{aligned}
$$

This is exactly the target expectation, so it remains to show that the second term equals zero when either $m$ or $G$ is correct. When $G$ is correct:

$$
\begin{aligned}
\mathbb{E}_{X, \Delta, Z, X', \Delta', Z'} \left[ \frac{\Delta\Delta' - G_W G_{W'}}{G_W G_{W'}} (ZZ' - m_W m_{W'}) k \right] &= \mathbb{E}_{X, Z, X', Z'} \left[ \frac{\mathbb{E}[\Delta\Delta'|X, X', Z, Z'] - G_W G_{W'}}{G_W G_{W'}} (ZZ' - m_W m_{W'}) k \right] \\
&= \mathbb{E}_{X, Z, X', Z'} \left[ \frac{\mathbb{E}[\Delta\Delta'|X, X'] - G_W G_{W'}}{G_W G_{W'}} (ZZ' - m_W m_{W'}) k \right] \\
&= \mathbb{E}_{X, Z, X', Z'} \left[ \frac{\mathbb{E}[\Delta|X]\, \mathbb{E}[\Delta'|X'] - G_W G_{W'}}{G_W G_{W'}} (ZZ' - m_W m_{W'}) k \right] \\
&= \mathbb{E}_{X, Z, X', Z'} \left[ \frac{G_W G_{W'} - G_W G_{W'}}{G_W G_{W'}} (ZZ' - m_W m_{W'}) k \right] = 0
\end{aligned}
$$

Likewise, when $m$ is correct:

$$
\begin{aligned}
& \mathbb{E}_{\substack{X, \Delta, Z \\ X', \Delta', Z'}} \left[ \frac{\Delta \Delta' - G_W G_{W'}}{G_W G_{W'}} (ZZ' - m_W m_{W'}) k \right] \\
&= \mathbb{E}_{\substack{X, \Delta \\ X', \Delta'}} \left[ \frac{\Delta \Delta' - G_W G_{W'}}{G_W G_{W'}} \left( \mathbb{E}[ZZ'|X, X', \Delta, \Delta'] - m_W m_{W'} \right) k \right] \\
&= \mathbb{E}_{\substack{X, \Delta \\ X', \Delta'}} \left[ \frac{\Delta \Delta' - G_W G_{W'}}{G_W G_{W'}} \left( \mathbb{E}[Z|X, \Delta]\, \mathbb{E}[Z'|X', \Delta'] - m_W m_{W'} \right) k \right] \\
&= \mathbb{E}_{\substack{X, \Delta \\ X', \Delta'}} \left[ \frac{\Delta \Delta' - G_W G_{W'}}{G_W G_{W'}} \left( \mathbb{E}[Z|X, \Delta]\, \mathbb{E}[Z'|X', \Delta'] - \mathbb{E}[Z|X, \Delta = 1]\, \mathbb{E}[Z'|X', \Delta' = 1] \right) k \right] \\
&= \mathbb{E}_{\substack{X, \Delta \\ X', \Delta'}} \left[ \frac{\Delta \Delta' - G_W G_{W'}}{G_W G_{W'}} \left( \mathbb{E}[Z|X]\, \mathbb{E}[Z'|X'] - \mathbb{E}[Z|X]\, \mathbb{E}[Z'|X'] \right) k \right] = 0
\end{aligned}
$$

The proof for the other two terms is analogous, using $\bar{Z} = 1 - Z$ in place of $Z$ and $\bar{m} = 1 - m$ when conditioning on $Z = 0$.
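For concreteness, the doubly robust term above can be sketched in NumPy. This is an illustrative sketch, not the paper's implementation; it assumes missing entries of $Z$ are stored as 0 so that $\Delta \tilde{Z} = \Delta Z$, and that fitted models $G(X) \approx P(\Delta = 1 \mid X)$ and $m(X) \approx \mathbb{E}[Z \mid X]$ are available:

```python
import numpy as np

def dr_term(K, Z, Delta, G, m, p_z1):
    """Doubly robust estimate of E_{P(X|Z=1)P(X'|Z'=1)}[k(X, X')].

    K     : (N, N) kernel matrix
    Z     : (N,) nuisance labels; set to 0 where Delta == 0 (assumption)
    Delta : (N,) observation indicators
    G     : (N,) propensity model G(X) ~ P(Delta = 1 | X)   (assumption)
    m     : (N,) outcome model m(X) ~ E[Z | X]              (assumption)
    p_z1  : scalar estimate of P(Z = 1)
    """
    N = K.shape[0]
    Gij = np.outer(G, G)
    ZtD = Delta * Z                                # Delta * Z-tilde
    ipw = np.outer(ZtD, ZtD) * K / Gij             # inverse-propensity term
    Dij = np.outer(Delta, Delta)
    aug = (Dij - Gij) / Gij * np.outer(m, m) * K   # augmentation term
    off = ~np.eye(N, dtype=bool)                   # U-statistic: i != j
    return (ipw - aug)[off].sum() / (N * (N - 1)) / p_z1**2
```

With $\Delta \equiv 1$ and $G \equiv 1$ the augmentation term vanishes and the estimator reduces to the complete-data U-statistic.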

## Appendix H. Kernel MMD between joint and product of marginals

**Continuous nuisances.** In this work we study binary nuisances. We can instead measure the MMD between the joint $P(h_X, Z)$ and the product of marginals $P(h_X)P(Z)$, which allows for continuous nuisances.

The above formulation of MMD between  $h_X|Z = 1$  and  $h_X|Z = 0$  relied on optimizing with respect to  $h$  only:  $P(Z)$  is constant in the optimization, so the distance between conditionals specifies the distance between the joint and the product of marginals, and thus the dependence. However, the more general case of MMD between  $P(h_X, Z)$  and  $P(h_X)P(Z)$  has the advantage that it is not necessary to consider a finite set of conditioning values for  $Z$ . That means the MMD can be extended to continuous nuisances  $Z$ . Let  $X :: Z$  denote the concatenation of  $X$  and  $Z$ . The more general formulation is:

$$
\begin{aligned}
& \mathbb{E}_{\substack{(X, Z) \sim P(X, Z) \\ (X', Z') \sim P(X, Z)}} \left[ k(X :: Z, X' :: Z') \right] + \mathbb{E}_{\substack{(X, Z) \sim P(X)P(Z) \\ (X', Z') \sim P(X)P(Z)}} \left[ k(X :: Z, X' :: Z') \right] \\
& \quad - 2\, \mathbb{E}_{\substack{(X, Z) \sim P(X, Z) \\ (X', Z') \sim P(X)P(Z)}} \left[ k(X :: Z, X' :: Z') \right]
\end{aligned}
$$
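With fully observed data, this quantity can be estimated by pairing $h_X$ with an independently permuted copy of $Z$ to draw samples from the product of marginals. A minimal NumPy sketch, where the RBF kernel and the permutation device are illustrative choices rather than ones prescribed by the text:

```python
import numpy as np

def rbf(A, B, gamma=1.0):
    """RBF kernel matrix between rows of A (N, d) and B (M, d)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2_joint_vs_product(H, Z, gamma=1.0):
    """Biased estimate of MMD^2 between P(h_X, Z) and P(h_X)P(Z),
    using the kernel on concatenations h_X :: Z. Product-of-marginals
    samples come from an independently permuted copy of Z."""
    rng = np.random.default_rng(0)
    joint = np.concatenate([H, Z[:, None]], axis=1)
    prod = np.concatenate([H, rng.permutation(Z)[:, None]], axis=1)
    Kjj = rbf(joint, joint, gamma)
    Kpp = rbf(prod, prod, gamma)
    Kjp = rbf(joint, prod, gamma)
    return Kjj.mean() + Kpp.mean() - 2 * Kjp.mean()
```

When $h_X$ and $Z$ are independent, the estimate concentrates near zero; strong dependence yields a larger value.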

This leads to the following estimator:

$$
\mathbb{E}_{\substack{P(X, Z) \\ P(X', Z')}} \left[ k(X :: Z, X' :: Z') \right] = \mathbb{E} \left[ \frac{\Delta \Delta'\, k(X :: Z, X' :: Z')}{G_{WW'}} - \frac{\Delta \Delta' - G_{WW'}}{G_{WW'}} \mathbb{E}[k \mid X, X'] \right]
$$

and

$$
\begin{aligned}
& \mathbb{E}_{\substack{P(X)P(Z) \\ P(X')P(Z')}} \left[ k(X::Z, X'::Z') \right] \\
&= \mathbb{E}_{\substack{P(X_1)P(X_2, Z_2) \\ P(X_3)P(X_4, Z_4)}} \left[ k(X_1::Z_2, X_3::Z_4) \right] \\
&= \mathbb{E} \left[ \frac{\Delta\Delta'\, k(X_1::Z_2, X_3::Z_4)}{G_{X_1 X_3}} - \frac{\Delta\Delta' - G_{X_1 X_3}}{G_{X_1 X_3}} \mathbb{E}[k(X_1::Z_2, X_3::Z_4) \mid X_1, X_3] \right]
\end{aligned}
$$

and

$$
\begin{aligned}
& \mathbb{E}_{\substack{P(X, Z) \\ P(X')P(Z')}} \left[ k(X::Z, X'::Z') \right] \\
&= \mathbb{E}_{\substack{P(X_1, Z_1) \\ P(X_2)P(X_3, Z_3)}} \left[ k(X_1::Z_1, X_2::Z_3) \right] \\
&= \mathbb{E} \left[ \frac{\Delta\Delta'\, k(X_1::Z_1, X_2::Z_3)}{G_{X_1 X_3}} - \frac{\Delta\Delta' - G_{X_1 X_3}}{G_{X_1 X_3}} \mathbb{E}[k(X_1::Z_1, X_2::Z_3) \mid X_1, X_3] \right]
\end{aligned}
$$

The challenging part of applying this estimator is that, instead of one function  $m_W$ , we now have three functions, each of which estimates the mean of  $k$  under a different sampling distribution. Moreover, these conditional expectations depend on the current representation  $h_X$ , so they must be updated each time  $h$  changes.
