Fair Domain Generalization with Heterogeneous Sensitive Attributes Across Domains
Published in WACV, 2024
Domain generalization (DG) techniques develop models that accurately classify new, unseen domains by learning from multiple source domains. Most DG methods focus on improving predictive performance on the unseen domain. Recent studies also attempt to enforce fairness measures on the unseen domain. However, these studies assume that every domain, including the unseen one, has the same sensitive attribute. In practice, each domain may need to satisfy fairness with respect to its own set of multiple sensitive attributes. Given a set of sensitive attributes ($\mathcal{S}$), current methods need to train $2^n$ models to ensure fairness with respect to any subset of $\mathcal{S}$, where $n=|\mathcal{S}|$. We propose a single-model solution to this new problem setting. We learn two feature representations: one to generalize the model's predictive performance, and another to generalize its fairness. The first representation is made invariant across all domains to generalize predictive performance. The second is kept selectively invariant, i.e., invariant only across domains that share the same sensitive attributes. On multiple real-world vision datasets, our single model achieves superior predictive performance and fairness on unseen domains compared with the current alternative of training $2^n$ models.
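To make the selective-invariance idea concrete, below is a minimal PyTorch sketch, not the paper's actual objective: the encoder names (`content_enc`, `fair_enc`), the mean-matching penalty `mean_gap`, and the loss weights `lam`/`mu` are illustrative assumptions. It shows the structure of the approach: a domain-invariance penalty applied to the first representation for all domain pairs, and to the second representation only for pairs of domains with identical sensitive-attribute sets.

```python
# Illustrative sketch only (not the authors' code): two representations,
# one made invariant across all source domains, one selectively invariant.
import itertools
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoRepDG(nn.Module):
    # Hypothetical two-encoder model; names and sizes are assumptions.
    def __init__(self, in_dim=512, rep_dim=64, n_classes=2):
        super().__init__()
        self.content_enc = nn.Linear(in_dim, rep_dim)  # generalizes accuracy
        self.fair_enc = nn.Linear(in_dim, rep_dim)     # generalizes fairness
        self.clf = nn.Linear(2 * rep_dim, n_classes)

    def forward(self, x):
        zc = torch.relu(self.content_enc(x))
        zf = torch.relu(self.fair_enc(x))
        return self.clf(torch.cat([zc, zf], dim=-1)), zc, zf

def mean_gap(za, zb):
    # Crude invariance proxy: squared distance between per-domain mean embeddings.
    return ((za.mean(0) - zb.mean(0)) ** 2).sum()

def dg_loss(model, batches, sens_attrs, lam=1.0, mu=1.0):
    """batches: {domain: (x, y)}; sens_attrs: {domain: frozenset of attributes}."""
    ce, inv_all, inv_sel = 0.0, 0.0, 0.0
    reps = {}
    for d, (x, y) in batches.items():
        logits, zc, zf = model(x)
        ce = ce + F.cross_entropy(logits, y)
        reps[d] = (zc, zf)
    for da, db in itertools.combinations(reps, 2):
        inv_all = inv_all + mean_gap(reps[da][0], reps[db][0])  # all domain pairs
        if sens_attrs[da] == sens_attrs[db]:  # selective: same sensitive attrs only
            inv_sel = inv_sel + mean_gap(reps[da][1], reps[db][1])
    return ce + lam * inv_all + mu * inv_sel
```

The selective penalty is what lets one model cover every attribute configuration: domains with different sensitive-attribute sets are never forced to share the fairness representation, so a single training run replaces the $2^n$ per-subset models.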
Link