Event details

Date
Mon 17. Jun 2024
Time
09:00 - 11:00
Language
English
Venue
University of Copenhagen, Room 6B-0-06 (Moot Court Room), ground floor
Location
Njalsgade 76
2300
Copenhagen S
If you have any questions leading up to the event, please contact the organizer (see the registration link).

Current Ethical and Legal Issues in Medical AI

Registration Deadline: Fri 14. Jun 2024
University of Copenhagen, Room 6B-0-06 (Moot Court Room), ground floor
Njalsgade 76, 2300 Copenhagen S
Mon 17. Jun 2024

Description

Must Medical AI be Explainable?

Explainability in artificial intelligence and machine learning (“AI/ML”) is emerging as a leading area of academic research and a topic of significant regulatory concern. Indeed, a near-consensus exists in favor of explainable AI/ML among academics, governments, and civil society groups. In this project, we challenge this prevailing trend. We discuss why explainable AI often cannot achieve what it promises. There is, however, an alternative, interpretable AI/ML, which we will distinguish from explainable AI/ML. Interpretable AI/ML can be useful where appropriate, but it involves real trade-offs in algorithmic performance; in some instances (in medicine and elsewhere), adopting an interpretable AI/ML model may mean adopting a less accurate one. We argue that it is better to face those trade-offs head on.

Registration
Register

Organized by Lifesciencelaw.dk and CeBIL - Centre for Advanced Studies in Biomedical Innovation Law, with Timo Minssen as chair and moderator.


Having issues with registration?
Try copying and pasting the link below into your browser:
https://jura.ku.dk/cebil/calendar/2024/current-ethical-and-legal-issues-in-medical-ai/registration/

Audience
The event is aimed at anyone working in the life science industry. Feel free to invite colleagues or others in your network with an interest in the field.

Metro stop “Islands Brygge”

Parking at the University of Copenhagen parking lot

Glenn Cohen, JD, PhD, is a Deputy Dean and the James A. Attwood and Leslie Williams Professor of Law at Harvard Law School, as well as the Faculty Director of the Petrie-Flom Center for Health Law Policy, Biotechnology & Bioethics. His work focuses on how the law grapples with new medical technologies, including reproductive technologies, psychedelics, and artificial intelligence. He is co-PI of the Project on Precision Medicine, Artificial Intelligence, and the Law at the Petrie-Flom Center at Harvard Law School and a Core Partner at the University of Copenhagen’s International Collaborative Bioscience Innovation & Law (Inter-CeBIL) Program.

Will the EU AI Act help to eliminate data bias in medical AI?

It is widely acknowledged that AI systems are only as good as the data they are trained on. Training an AI system on biased datasets (i.e., datasets skewed toward certain subgroups, defined by, for example, age or ethnicity) can lead to underperformance of the system on the subgroups underrepresented in the data. To address this issue, the EU AI Act includes provisions focusing on data quality and bias. The aim of this talk is to explore the implications of these provisions for medical AI systems.

 

Emilia Niemiec is a postdoctoral researcher at the Centre for Advanced Studies in Biomedical Innovation Law at the University of Copenhagen. She specializes in legal and ethical issues in biomedical research and technologies, in particular medical AI and genomics. Emilia has a multidisciplinary background combining degrees and experience in Biotechnology (MSc, BEng, Warsaw University of Life Sciences) and Bioethics (MSc, KU Leuven). She also holds a PhD in Law, Science and Technology from the University of Bologna.

Governance Standards for Medical AI: The Role of Humans in the Loop

As medical AI begins to mature as a health-care tool, the task of governance grows increasingly important. Ensuring that medical AI works, works where it’s used, and works for the patient in the moment is a challenging, multifaceted task. Some of this governance can be centralized, in review by the FDA or by national accreditation labs, for instance. Some must be local, performed by the hospital or health system about to use the product in their own unique environment. But a large amount of governance is left to the individual provider in the room, the human in the loop who presumably knows the patient and the health system environment, and who can ensure that the AI system is being used in a safe and effective manner. Unfortunately, placing such a burden on the physician poorly reflects the reality of modern medical technology and practice, and law and policy must take that reality into account.

 

Nicholson Price is a Professor of Law at the University of Michigan. He studies how law shapes biomedical innovation, especially medical AI. Nicholson teaches patents, health law, innovation in the life sciences, AI and the law, and science fiction and the law. He holds a PhD in Biological Sciences and a JD, both from Columbia, and an AB from Harvard. He is also a research fellow at the University of Copenhagen’s International Collaborative Bioscience Innovation & Law (Inter-CeBIL) Program.

Fit for Purpose? Analyzing the current regulatory landscape for consumer wearables and potential future directions

As Google announces its plans to integrate a Personal Health Large Language Model into its consumer wearable devices and fitness applications, there is an urgent need to explore the current regulatory landscape pertaining to these technologies to ensure that users’ rights and needs are appropriately considered. This talk will highlight some of the gaps, grey areas, and challenges associated with the current regulatory approach, which fails to properly scrutinize 1) how companies may have obfuscated the sensitive nature of some of the data extracted from users under the guise of “wellness” data, and 2) their disruption of traditional relationships and institutions in healthcare settings. These findings are presented as necessary preliminary work to better understand the ways in which aspects of medical AI may enter wider society, and the key actors and their respective roles in its rollout.

 

Hannah Louise Smith is a postdoc working at Inter-CeBIL, where she draws upon her socio-legal background to explore novel and responsive ways to regulate emerging technologies that promote positive societal outcomes. She holds a DPhil, MSt, BCL, and BA from the University of Oxford and previously worked at the University of Western Australia’s Tech & Policy Lab.

Timo Minssen (chair and moderator of this event) is Professor of Law at the University of Copenhagen and the Founder and Managing Director of the Centre for Advanced Studies in Biomedical Innovation Law (CeBIL). As a leading European authority with close to 200 publications in the area, Timo is also an LML Research Affiliate with the University of Cambridge. His research and advisory practice concentrates on Intellectual Property, Competition, Data Protection & Regulatory Law, with a special focus on new technologies in the health & life sciences, including artificial intelligence and quantum technology. His studies span a wide range of legal and ethical issues emerging in the lifecycle of relevant products and processes, from the regulation of responsible research and incentives for innovation to sustainable drug development, technology transfer and commercialization.

Program

09:00-09:05  Introduction by CeBIL Director Timo Minssen
09:05-09:25  Must Medical AI be Explainable? by Glenn Cohen
09:25-09:45  Will the EU AI Act help to eliminate data bias in medical AI? by Emilia Niemiec
09:45-10:05  Governance Standards for Medical AI: The Role of Humans in the Loop by Nicholson Price
10:05-10:25  Fit for Purpose? Analyzing the current regulatory landscape for consumer wearables and potential future directions by Hannah Louise Smith
10:25-10:50  Panel Discussion
10:50-11:30  Mingle with light refreshments (open end)

