The Rock Ethics Institute

Rock Ethics Institute Colloquium

"Towards Ethically Bounded Artificial Intelligence Systems: Algorithmic Fairness Through the Lens of Causality" by Vasant Honavar
by Betsy VanNoy, Oct 23, 2019
When: Oct 31, 2019, from 1:30 PM to 2:30 PM
Where: 133 Sparks
The Rock Ethics Institute Colloquium on Thursday, October 31, from 1:30 to 2:30 PM, will feature Vasant Honavar, Professor and Edward Frymoyer Chair of Information Sciences and Technology here at Penn State.
Here are the details for this upcoming talk: 
Towards Ethically Bounded Artificial Intelligence Systems: Algorithmic Fairness Through the Lens of Causality

By Vasant Honavar

WHAT: Rock Ethics Institute Colloquium 
WHEN: Thursday, October 31, 1:30 - 2:30pm 
WHERE: 133 Sparks

This event is open to the public, and all are welcome to attend.

Abstract:

With the impressive successes of artificial intelligence and machine learning on many challenging applications, there is much interest in, and optimism about, using such techniques to automate decision making in high-stakes contexts such as policing, consumer lending, hiring, and college admissions. There is evidence that, in many such settings, algorithms can perpetuate or amplify undesirable biases or discrimination based on race, gender, sexual orientation, religion, and other protected attributes. As virtually all aspects of our lives are increasingly impacted by artificial intelligence (AI), or algorithmic decision-making systems that act autonomously, it is incumbent upon us as a society to ensure that such systems conform to ethical norms. This is, of course, easier said than done. First, such systems operate with a certain degree of autonomy in the real world, making it hard to predict precisely how they would behave in any given circumstance. Second, we largely lack the means to specify, in algorithmic terms, what it means for such systems to conform to ethical norms. Third, we lack effective tools to ensure compliance with the applicable ethical norms.

Consider, for example, the task of ensuring that AI systems are demonstrably fair, that is, that they do not exhibit undesirable bias or discrimination against certain groups. There is a broad range of notions of fairness, e.g., those based on the legal notions of disparate treatment and disparate impact. The first logical and necessary step in ensuring that AI systems are demonstrably fair is to operationalize such notions in algorithmic terms. While there have been several attempts at formalizing algorithmic fairness criteria, most existing criteria can be expressed as a property of the joint distribution of the individual attributes (including protected attributes) and the predicted and/or actual decision outcomes. Recent work has shown that such criteria can fail to detect algorithmic decisions that demonstrably violate our notions of fairness.

We propose to overcome this difficulty by viewing algorithmic fairness questions through the lens of causality. Specifically, we reduce the question "Is the decision discriminatory with respect to a protected attribute, e.g., gender?" to the question "Does the protected attribute have a causal effect on the decision?" We leverage recent advances in causal inference from observational data to determine whether or not there is a causal link between a protected attribute and the decision produced by an AI system.

We illustrate our approach using a definition of group fairness: fair on average causal effect on the treated (FACT). We use the Rubin-Neyman potential outcomes framework to robustly estimate FACT from observational data about the AI system’s input (individual attributes) and output (decisions or predictions). We demonstrate the effectiveness of the proposed approach on synthetic data, and we present results of FACT analyses of three real-world data sets: the Adult income data (with gender as the protected attribute), the NYC Stop and Frisk data (with race as the protected attribute), and the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) data (with race as the protected attribute). We conclude with a brief discussion of some promising directions for further research on how to design, monitor, evaluate, and improve ethically bounded AI systems.
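In the Rubin-Neyman potential-outcomes notation the abstract invokes, the FACT criterion can be sketched as follows (this rendering follows standard usage for an average effect on the treated; it is an assumption, not necessarily the speaker's exact formulation):

\[
\mathrm{FACT} \;=\; \mathbb{E}\big[\, Y(a_1) - Y(a_0) \;\big|\; A = a_1 \,\big],
\]

where \(A\) is the protected attribute with values \(a_1\) (the group of interest) and \(a_0\), and \(Y(a)\) denotes the decision the system would have produced had the protected attribute taken the value \(a\). Under this reading, the system is fair when FACT is (statistically indistinguishable from) zero, i.e., when membership in the protected group has no average causal effect on the decisions that group actually receives.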
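To make the estimation step concrete, here is a minimal, self-contained Python sketch of how a FACT-style quantity might be estimated from observational data, using inverse-propensity weighting, one standard estimator in the Rubin-Neyman framework. This is an illustration under stated assumptions (binary protected attribute, logistic propensity model), not the estimator presented in the talk; the function name estimate_fact and the synthetic data are hypothetical.

# Illustrative sketch only, not the speaker's implementation.
# Estimates an ATT-style causal effect of a binary protected attribute A
# on a system's decisions Y from observational data (X, A, Y), where X
# holds the remaining individual attributes.

import numpy as np
from sklearn.linear_model import LogisticRegression

def estimate_fact(X, A, Y):
    """ATT-style estimate of E[Y(a_1) - Y(a_0) | A = a_1].

    X: (n, d) covariates; A: (n,) binary protected attribute;
    Y: (n,) observed decisions. Uses inverse-propensity weighting.
    """
    # Propensity of belonging to the protected group given covariates.
    ps = LogisticRegression(max_iter=1000).fit(X, A).predict_proba(X)[:, 1]
    treated = A == 1
    # Treated units contribute their observed outcomes directly; control
    # units are reweighted by the odds ps / (1 - ps) so they stand in for
    # the counterfactual outcomes of the treated group.
    w = ps[~treated] / (1.0 - ps[~treated])
    y1 = Y[treated].mean()
    y0 = np.average(Y[~treated], weights=w)
    return y1 - y0

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 5000
    X = rng.normal(size=(n, 3))
    # Group membership depends on a covariate, so groups are not balanced.
    A = (rng.random(n) < 1.0 / (1.0 + np.exp(-X[:, 0]))).astype(int)
    # A synthetic "decision" that uses A directly -> nonzero FACT.
    Y = (X[:, 1] + 0.5 * A + rng.normal(scale=0.5, size=n) > 0).astype(int)
    print("estimated FACT:", estimate_fact(X, A, Y))

In this synthetic example the decision rule uses the protected attribute directly, so the estimated FACT comes out well above zero, which is exactly the kind of violation a criterion like this is designed to expose.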