Arrival, mingling & light refreshments
Introduction by CeBIL Director Timo Minssen: Medical AI and the work of (Inter-)CeBIL
Nicholson Price: Medical AI: Contextual bias, liability, and regulation
AI systems perform differently when used in different environments; how should the law respond? Are ex post liability rules adequate to create incentives for safe development and deployment, or can ex ante regulatory oversight address the problem from a different direction? As AI enters use in health care, policymakers, innovators, and health system actors alike should consider how to address the problems of contextual bias and differential performance.
Glenn Cohen: Label Bias, Informed Consent, Explainability, and the Promise and Peril of ChatGPT
It is well known that medical AI sometimes generates outcomes that are worse for certain groups, especially racialized minorities. But while data set bias is easy to conceptualize, how well equipped are regulators and the law to address more subtle forms of bias, such as label bias? When are physicians legally or ethically obligated, as a matter of informed consent, to inform patients that medical AI is involved in their care? Should regulators demand explainability or interpretability in medical AI, or is the black box defensible, and under what circumstances? Finally, how might the integration of Large Language Models (LLMs) such as ChatGPT raise particular challenges?
Katharina Ó Cathaoir: Medical AI: GDPR and informed consent in Nordic law
The explainability requirements of the GDPR have been widely discussed in the academic literature. In European countries, however, healthcare is mainly governed at the national level, and national law may set higher standards than those mandated by EU law. Consequently, healthcare providers must adhere to domestic healthcare regulations when using ML models to offer medical advice. What level of understanding must physicians have of such models in order to integrate them into patient care in a manner compliant with domestic law?