OSCaR: Orthogonal Subspace Correction and Rectification of Biases in Word Embeddings

Sunipa Dev, Tao Li, Jeff Phillips, and Vivek Srikumar
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2021.

Abstract

Language representations are known to carry stereotypical biases and, as a result, lead to biased predictions in downstream tasks. While existing methods are effective at mitigating biases by linear projection, such methods are too aggressive: they not only remove bias but also erase valuable information from word embeddings. We develop new measures for evaluating how much specific information embeddings retain, and use them to demonstrate the tradeoff between bias removal and information retention. To address this challenge, we propose OSCaR (Orthogonal Subspace Correction and Rectification), a bias-mitigating method that focuses on disentangling biased associations between concepts instead of removing concepts wholesale. Our experiments on gender bias show that OSCaR is a well-balanced approach: semantic information is retained in the embeddings while bias is effectively mitigated.
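
For intuition, here is a minimal NumPy sketch contrasting the two ideas the abstract describes: projection-based debiasing, which deletes a concept direction outright, and a simplified OSCaR-style correction, which instead makes two concept directions orthogonal. The function names are illustrative, and the correction below uses a simple linear map that fixes one direction and orthogonalizes the other; the paper itself uses a graded rotation, so this is a sketch of the idea rather than the authors' implementation.

import numpy as np

def project_out(x, v):
    # Hard debiasing by linear projection (the aggressive baseline):
    # removes everything x encodes along the bias direction v.
    v = v / np.linalg.norm(v)
    return x - np.dot(x, v) * v

def oscar_like_correction(x, v1, v2):
    # Simplified OSCaR-style rectification (illustrative, not the
    # authors' code): disentangle two concept directions, e.g. gender
    # v1 and occupation v2, via a map inside span(v1, v2) that fixes
    # v1 and carries v2 onto a direction orthogonal to v1. The part of
    # x outside that plane is left untouched. Assumes v1 and v2 are
    # not parallel.
    v1 = v1 / np.linalg.norm(v1)
    v2 = v2 / np.linalg.norm(v2)
    # Orthonormal basis (v1, u2) for the plane spanned by v1 and v2.
    u2 = v2 - np.dot(v2, v1) * v1
    u2 = u2 / np.linalg.norm(u2)
    cos_a = np.dot(v2, v1)   # cosine of the angle between v1 and v2
    sin_a = np.dot(v2, u2)   # sine of that angle (positive)
    # In-plane coordinates of x, plus its out-of-plane remainder.
    c1, c2 = np.dot(x, v1), np.dot(x, u2)
    x_perp = x - c1 * v1 - c2 * u2
    # Linear map fixing v1 = (1, 0) and sending v2 = (cos_a, sin_a)
    # to (0, 1): check that v1 maps to v1 and v2 maps to u2.
    new_c1 = c1 - (cos_a / sin_a) * c2
    new_c2 = c2 / sin_a
    return x_perp + new_c1 * v1 + new_c2 * u2

The contrast is the point: project_out zeroes out everything a word encodes along the bias direction, whereas the correction keeps, say, a word's gender component intact and only removes the spurious correlation between the gender and occupation directions.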

Bib Entry

@inproceedings{dev2021oscar,
  author = {Dev, Sunipa and Li, Tao and Phillips, Jeff and Srikumar, Vivek},
  title = {{OSCaR: Orthogonal Subspace Correction and Rectification of Biases in Word Embeddings}},
  booktitle = {Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
  year = {2021}
}