Toward Understanding the Unintended Disparate Impacts of Differentially Private Decision and Learning Systems
Friday, November 11, 2022 11 AM to 12 PM
About this Event
6760 Forest Park Pkwy, St. Louis, MO 63105, USA
Assistant Professor
Electrical Engineering and Computer Science
Many agencies release statistics about groups of individuals that are then used as input to critical decision processes. For example, census data is used to allocate funds and distribute resources to states and jurisdictions. Similarly, corporations are increasingly adopting machine learning systems to inform socio-technical decisions, including criminal risk assessment, lending, and hiring. The resulting decisions can have significant societal and economic impacts on the individuals involved.
In many cases, the released data contain sensitive information whose privacy is strictly regulated, and Differential Privacy has become the paradigm of choice for protecting data privacy. However, while differential privacy provides strong privacy guarantees, it has recently become apparent that it may induce biases and fairness issues in downstream decision processes, including the allotment of federal funds, the apportionment of congressional seats, and classification in lending and hiring. These issues may adversely affect many individuals' health, well-being, and sense of belonging, and they are currently poorly understood.
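(As a brief reminder of the standard definition, which the talk will review: a randomized mechanism M is ε-differentially private if, for any two datasets D and D' differing in one individual's record and any set of outputs S, Pr[M(D) ∈ S] ≤ e^ε · Pr[M(D') ∈ S]. The random noise a mechanism must add to meet this guarantee is precisely what can distort the statistics feeding downstream allocations and decisions.)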
This talk will describe our efforts in understanding and addressing these issues at the intersection of privacy, fairness, and decision processes. I will first review the notion of Differential Privacy and discuss its applications in data release and learning tasks. I will then examine the societal impacts of privacy through a fairness lens and propose a theoretical framework to identify which aspects of the private algorithms and data may be responsible for exacerbating unfairness. Finally, I will propose a path to partially address these fairness issues and present some open questions.
Event Details
Dial-In Information
Join Zoom Meeting
https://wustl.zoom.us/j/92696113975?pwd=L0ttSDdic1d5a1N3MmN0cjFJWmowZz09