Episode 3: Fairness and Anti-Discrimination in Machine Learning

Show Notes

We all know what it means for a human to discriminate against another human, but the idea of a predictive model or an artificial intelligence doing so is relatively new. What does it mean for a model or an AI to discriminate against someone? In this episode of Value Driven Data Science, Dr Genevieve Hayes is joined by Dr Fei Huang to discuss the importance of considering fairness and avoiding discrimination when developing machine learning models for your business.

Guest Bio

Dr Fei Huang is a senior lecturer in the School of Risk and Actuarial Studies at the University of New South Wales, who has won awards for both her teaching and her research. Her main research interest is predictive modelling and data analytics, and she has recently been focussing on insurance discrimination and pricing fairness.

Talking Points

  • Direct vs indirect discrimination and how data scientists can create discriminatory machine learning models without ever intending to.
  • What it means for a model to be fair and the trade-off that exists between individual and group fairness (see the sketch after this list for one common group-fairness measure).
  • How fairness and discrimination come up (and have been addressed) in different applications of machine learning, including (but not limited to) insurance.
  • How different jurisdictions are currently addressing algorithmic discrimination, through regulation and other means.
  • What this means for organisations that currently use machine learning models or would like to in the future.
  • Why organisations should start considering fairness and discrimination when using analytics and what they can do about it now.
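To make the group-fairness idea above concrete, here is a minimal sketch (not from the episode, and using synthetic data with assumed variable names) of the demographic parity gap: the difference in favourable-outcome rates, such as approved applications, between groups defined by a protected attribute.

```python
import numpy as np

# Synthetic, illustrative data only: binary model decisions and a
# binary protected attribute for 1,000 hypothetical individuals.
rng = np.random.default_rng(0)
y_pred = rng.integers(0, 2, size=1000)  # 1 = favourable outcome (e.g. approved)
group = rng.integers(0, 2, size=1000)   # protected attribute (group 0 vs group 1)

# Demographic parity compares the favourable-outcome rate per group.
rate_0 = y_pred[group == 0].mean()
rate_1 = y_pred[group == 1].mean()

print(f"Group 0 favourable rate: {rate_0:.3f}")
print(f"Group 1 favourable rate: {rate_1:.3f}")
print(f"Demographic parity gap:  {abs(rate_0 - rate_1):.3f}")
```

A gap near zero means both groups receive favourable outcomes at similar rates; as discussed in the episode, satisfying a group-level measure like this can sit in tension with treating similar individuals similarly.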
