Abstract
Artificial intelligence (AI) is increasingly used for decision-making. Technological developments have significantly increased the performance of AI models, but have also increased their complexity. As a result, IT professionals struggle to develop fair AI implementations. Using a case study, we demonstrate how both gender and age bias can be addressed in practice. We do this by developing a credit default prediction model and detecting and mitigating both age and gender bias within it. A neural network was trained on a real-world credit data set from Taiwan. Existing 'bias' in the data set and bias introduced by this initial model were measured using a combination of previously published methods. A corrected model was created by training and evaluating a series of models to control bias along multiple dimensions. The final model eliminates the measured bias without sacrificing accuracy. It uses a top-down post-processing technique focused on an equal increase of the default rate per group.
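To illustrate the kind of post-processing the abstract describes, the sketch below picks a separate decision threshold per demographic group so that each group's predicted default rate lands at a common target rate. All names, data, and the target rate are invented for this illustration; the paper's actual procedure (and its "top-down" ordering of adjustments) may differ.

```python
import numpy as np

def equalize_default_rates(scores, groups, target_rate):
    """Choose a per-group threshold so that each group's flagged-default
    rate is approximately equal to target_rate (a demographic-parity-style
    post-processing step; illustrative, not the paper's exact method)."""
    thresholds = {}
    for g in np.unique(groups):
        group_scores = scores[groups == g]
        # Thresholding at the (1 - target_rate) quantile flags roughly the
        # top target_rate fraction of this group's risk scores.
        thresholds[g] = np.quantile(group_scores, 1 - target_rate)
    return thresholds

def predict_defaults(scores, groups, thresholds):
    """Apply each individual's group-specific threshold."""
    return np.array([s >= thresholds[g] for s, g in zip(scores, groups)])

# Synthetic example: risk scores for two groups (e.g. a binary gender attribute).
rng = np.random.default_rng(0)
scores = rng.uniform(size=1000)          # model's predicted default probabilities
groups = rng.integers(0, 2, size=1000)   # hypothetical protected attribute

thresholds = equalize_default_rates(scores, groups, target_rate=0.22)
pred = predict_defaults(scores, groups, thresholds)
rates = {g: pred[groups == g].mean() for g in (0, 1)}
```

After post-processing, `rates[0]` and `rates[1]` are both close to the 22% target, so the predicted default rate no longer differs between groups. A real correction would also have to check that overall accuracy is preserved, as the paper reports.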
| Original language | English |
|---|---|
| Article number | 3 |
| Pages (from-to) | 1-12 |
| Number of pages | 12 |
| Journal | Computers and Society Research Journal |
| Volume | 2022 |
| Early online date | 3 May 2022 |
| DOIs | |
| Publication status | Published - May 2022 |
Fingerprint
Dive into the research topics of 'Practical bias correction in neural networks: a credit default prediction case study'. Together they form a unique fingerprint.