Upcoming events

AI4Health Meetup

Date: March 27, 2025
Time: 5:00 pm - 7:00 pm
Location: NU-4B47

Speaker: Jacqueline Bereska

Title: SACP: Spatially-Aware Conformal Prediction in Uncertainty Quantification of Medical Image Segmentation

Abstract: Conformal Prediction provides statistical coverage guarantees for uncertainty quantification but fails to account for the spatially varying importance of predictive uncertainty in medical image segmentation. This paper introduces a spatially-aware conformal prediction framework that enhances uncertainty quantification by incorporating spatial context near critical anatomical interfaces such as vessels or critical organs. Our framework consists of three key components: (1) a base nonconformity score derived from segmentation model probabilities, (2) a calibration mechanism that applies structure-specific importance weights based on spatial proximity, and (3) a prediction set construction method that preserves mathematical coverage guarantees while providing targeted uncertainty quantification in critical regions. The calibration mechanism employs a distance-weighted scoring

Speaker: Ihtesham Shah

Title: Evaluation of model-agnostic explanation techniques on a healthcare dataset

Abstract: Explainable AI (XAI) assists clinicians and researchers in understanding the rationale behind the predictions made by data-driven models, helping them make informed decisions and trust the model’s outputs. Providing accurate explanations for breast cancer treatment predictions in the context of a highly imbalanced, multiclass-multioutput classification problem is extremely challenging.
The aim of this study is to perform a comprehensive and detailed analysis of the explanations generated by two post-hoc explanatory methods, Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP), for breast cancer treatment prediction using a highly imbalanced oncological dataset.
We introduced evaluation metrics, including consistency, fidelity, and alignment with established clinical guidelines, together with qualitative analysis, to evaluate the effectiveness and faithfulness of these methods.
By examining the strengths and limitations of LIME and SHAP, we aim to determine their suitability for supporting clinical decision making in multifaceted treatments and complex scenarios.
Our findings provide important insights into the use of these explanation methods, highlighting the importance of transparent and robust predictive models. This experiment showed that SHAP performs better than LIME in terms of fidelity, providing more stable explanations that are better aligned with medical guidelines.
This work provides guidance to practitioners and model developers in selecting the most suitable explanation technique to promote trust and enhance understanding in predictive healthcare models.
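
One common way to make a fidelity comparison like the one described above concrete is a deletion-style test: replace the features an explanation ranks highest with a baseline value and measure how much the model's prediction drops. The sketch below is illustrative only and is not the study's exact metric; the function name, the toy linear model, and the baseline-of-zero convention are assumptions for the example.

```python
import numpy as np

def deletion_fidelity(predict_fn, x, feature_ranking, k=3, baseline=0.0):
    """Deletion-style fidelity sketch (assumed form, not the study's metric).

    Replaces the top-k features named by an explanation with a baseline
    value and returns the drop in the model's output.  A larger drop
    means the explanation pointed at genuinely influential features.
    """
    x_pert = x.copy()
    x_pert[np.asarray(feature_ranking)[:k]] = baseline
    return predict_fn(x) - predict_fn(x_pert)

# Toy usage: a linear model whose true feature importances are known,
# so a "good" ranking (by |weight|) should score higher fidelity than
# a deliberately bad one.
weights = np.array([3.0, 0.5, 0.1, 2.0])
predict = lambda v: float(v @ weights)
x = np.ones(4)
good_ranking = np.argsort(-np.abs(weights))  # most important first
bad_ranking = np.argsort(np.abs(weights))    # least important first
```

Averaging such drops over a held-out set, separately for LIME and SHAP rankings, gives one quantitative axis on which the two methods can be compared; stability can be probed the same way by re-running an explainer on perturbed inputs and comparing the rankings.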