Emma Beauxis-Aussalet

Position
Assistant professor of Ethical Computing, User Centric Data Science group
Lab manager, Civic AI Lab
Research focus

I mainly research methods to model, visualise, and explain AI bias, and to make fairness assessments more transparent and comprehensive.

Research Description
My research is particularly useful for assessing the risks of discrimination, e.g., depending on ethnicity, gender, age, or any combination of features (intersectional fairness). For instance, an AI may fail to detect medical conditions (false negatives) more often for specific populations. My research aims to model such error discrepancies using algorithm-agnostic and clustering methods. I also research Explainable AI (XAI) methods to assess the validity and discrepancies of the data features on which AI decisions are based. Finally, I research means to model the variability of AI bias, due to random variance and to more systematic shifts in distributions (e.g., seasonal patterns).
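As a minimal illustration of what such an error discrepancy looks like in practice (a hypothetical Python sketch with made-up data, not a method from this research), one can compare false negative rates between population groups:

import numpy as np

def false_negative_rate(y_true, y_pred):
    # Share of actual positives that the model missed (false negatives).
    positives = y_true == 1
    return np.mean(y_pred[positives] == 0) if positives.any() else float("nan")

# Toy data: true labels, model predictions, and a sensitive attribute per individual.
y_true = np.array([1, 1, 0, 1, 1, 0, 1, 0, 1, 1])
y_pred = np.array([1, 0, 0, 1, 0, 0, 1, 0, 0, 1])
group  = np.array(["A", "A", "A", "A", "B", "B", "B", "B", "B", "B"])

# A large gap between per-group false negative rates signals an error discrepancy.
for g in np.unique(group):
    mask = group == g
    print(g, false_negative_rate(y_true[mask], y_pred[mask]))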
Pillars

Technological challenges
Implementation challenges
