Fairness in Artificial Intelligence
As artificial intelligence (AI) systems play an increasingly prominent role in high-stakes decisions, fairness has become central to technological, societal, and regulatory discourse. While advances in machine learning offer new opportunities, prior research has shown that they can also come with the side effects of algorithmic discrimination and unintended bias (e.g., Angwin et al., 2016; Lum & Isaac, 2016).
This challenge has been recognized at the European level: the EU AI Act, the world’s first comprehensive legal framework for AI, introduces a risk-based classification system that groups AI systems into four categories – unacceptable, high, limited, and minimal risk (European Commission, 2021; European Parliament, 2024). High-risk systems – such as those used in credit scoring, job recruitment, or access to public services – are subject to strict obligations, including transparency, human oversight, risk management, and data quality standards. This regulatory framework has far-reaching consequences for organizations, requiring them to proactively assess the risk level of their AI systems, document compliance, and in some cases, obtain external certification before deployment. A helpful resource for navigating the regulation is the European Commission’s AI Act Explorer.
In line with this, our previous research has addressed the tension between normative ideals and practical constraints in algorithmic decision-making. As part of ZEVEDI's interdisciplinary NoKI project (Normordnung Künstlicher Intelligenz, "Normative Order of Artificial Intelligence", 2021–2023), we contributed to a BISE discussion paper outlining challenges such as incompatible fairness definitions, limited guidance on bias mitigation, and trade-offs between fairness and predictive performance (Pfeiffer et al., 2023). We also conducted a conjoint study (manuscript in preparation) examining how varying levels of fairness criteria, transparency, and conformity assessments shape users' perceptions of fairness in high-stakes contexts. Additionally, we have engaged in broader academic discourse, including a BISE editorial on research opportunities arising from new regulatory frameworks (Pfeiffer, 2024) and participation in an interdisciplinary panel on High-Risk AI at WI 2024, together with colleagues from TU Darmstadt, KIT, EnBW, and InformMe GmbH.
Our current research investigates fairness in AI from a user-centered and socio-technical perspective, with a particular focus on how individuals perceive algorithmic fairness, which key dimensions influence this perception, and whether these perceptions align with commonly used fairness metrics in machine learning. We are especially interested in the implications of AI systems for marginalized or disadvantaged groups and in developing frameworks that support ethically grounded and trustworthy system design.
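To make the notion of "commonly used fairness metrics" concrete, the sketch below computes two standard group-fairness measures, demographic parity difference and equal opportunity difference, on binary predictions. It is a minimal illustration with hypothetical toy data, not the specific metrics or implementation used in our studies; the variable names (`y_pred`, `y_true`, `group`) are assumptions for the example.

```python
# Minimal sketch of two common group-fairness metrics for binary predictions.
# Assumes: y_pred (0/1 predictions), y_true (0/1 ground-truth labels), and a
# binary protected attribute `group` (e.g., 0 = reference group, 1 = other).

def demographic_parity_difference(y_pred, group):
    """Difference in positive-prediction rates between the two groups.

    A value of 0 means both groups receive positive predictions at the
    same rate (demographic / statistical parity).
    """
    def rate(g):
        preds = [p for p, a in zip(y_pred, group) if a == g]
        return sum(preds) / max(1, len(preds))
    return rate(1) - rate(0)

def equal_opportunity_difference(y_pred, y_true, group):
    """Difference in true-positive rates (recall) between the two groups.

    A value of 0 means qualified individuals (y_true == 1) are equally
    likely to receive a positive prediction in both groups.
    """
    def tpr(g):
        pos = [p for p, t, a in zip(y_pred, y_true, group) if a == g and t == 1]
        return sum(pos) / max(1, len(pos))
    return tpr(1) - tpr(0)

# Toy example (hypothetical data):
y_true = [1, 0, 1, 1, 0, 1, 0, 1]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_difference(y_pred, group))          # 0.0: parity holds
print(equal_opportunity_difference(y_pred, y_true, group))   # negative: lower TPR for group 1
```

The toy data illustrate why a single metric can mislead: the two groups receive positive predictions at identical rates (demographic parity difference of 0), yet group 1 has a lower true-positive rate, so equal opportunity is violated. Such divergences between metrics are exactly the kind of incompatibility discussed in Pfeiffer et al. (2023).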
References from our research group
- Pfeiffer, J., Gutschow, J., Haas, C., Möslein, F., Maspfuhl, O., Borgers, F., & Alpsancar, S. (2023). Algorithmic Fairness in AI: An Interdisciplinary View. Business & Information Systems Engineering, 65(2), 209-222.
- Pfeiffer, J., Lachenmaier, J. F., Hinz, O., & van der Aalst, W. (2024). New Laws and Regulation. Business & Information Systems Engineering, 1-14.
Additional references
- Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016). Machine Bias. In Ethics of Data and Analytics (pp. 254-264). Auerbach Publications.
- European Commission (2021). Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts. COM(2021) 206 final. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206&qid=1724767511072.
- European Parliament (2024). Regulation (EU) 2024/1689 of the European Parliament and of the Council laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act). PE/24/2024/REV/1. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689.
- Lum, K., & Isaac, W. (2016). To Predict and Serve? Significance, 13(5), 14-19. https://doi.org/10.1111/j.1740-9713.2016.00960