Bumblekite 2024
schedule in detail
Antonija writes:
When we considered how to curate Bumblekite 2024, we wanted to showcase the diversity of the health, care and biomedicine ecosystem and its intersection with technology. We sought to do so through the breadth of our lecturers' backgrounds, the heterogeneity of their current functions, and the variety of roles their organisations play in nurturing health systems and the health of our populations, alongside simply being their basic building blocks.
Health systems are gorgeous in their multifacetedness; they are complex, challenging, layered, at times frustrating, and deeply inspiring -- just like we are as human beings. We are also their beating heart. Whether we use an abacus, a calculator or an LLM, what matters in the end is the positive impact on humans, our health and our societies that we can achieve with the ever-evolving technology tools at our disposal.
Below you can learn more about our Bumblekite 2024 sessions, including additional materials that our lecturers recommend for thoughtful consumption before their respective sessions.
Day 1: June 30th, Sunday - arrivals, social programme
15:30 - 18:30 hike to Uetliberg with Anja and Mirna
18:00 - 20:00 guided city tour
Day 2: July 1st, Monday
09:45 - 10:30 welcome session
Anja Hartewig, Bumblekite 2024 host
In this session we will ensure you start your Bumblekite week with all the necessary logistics information, full of energy and enthusiasm.
11:00 - 12:00 ETH tour
13:00 - 14:30 opening engineering keynote lecture
Leo Celi, principal research scientist, MIT; staff physician, Beth Israel Deaconess Medical Center; associate professor of medicine, Harvard Medical School
All data are biased, but AI does not have to be.
The context of the data is key. Data should only be used to train artificial intelligence when there is sufficient understanding of issues such as missing confounders, measurement errors from medical-device bias, and systematic exclusion of patients from the database, to name a few. This requires that the data be curated by a large community with different perspectives and lived experiences, which entails data sharing at its core. It also requires a drastic shift from a culture of competition to a culture of collaboration.
15:00 - 15:30 closing the day
Mirna Šmidt, founder, Happiness Academy
In this session you can reflect on the day and your learnings, as well as set intentions for the following day.
15:30 - 17:00 leadership Q&A with Janet Adeyemi (Roche), Leo Celi (MIT / Harvard), Tobias Heimann (Siemens Healthineers) and Kortine Kleinheinz (Bayer)
Q&A theme: how to build and nurture data science and engineering teams, and how to build your own career trajectory in our health, care and technology space.
Antonija writes:
I am excited about this Q&A not only because it is our very first one and, by its nature, the one that sets the tone for the rest of our Bumblekite week, but also because it is the first Q&A Anja will be moderating. This Q&A two summers ago was the start of Anja’s leadership journey, and I could not be prouder of the heights to which it has led her. We hope this session will be the start of, or a new step towards, leadership for many of you as well. With our selection of lecturers for this Q&A we wanted to show that there is not one path to leading: e.g. Janet has built a stellar, diverse career history with experiences in technology, hospital and biopharma organisations, while Tobias is a human treasure trove of the history of medical imaging and the applications of AI within it.
Day 3: July 2nd, Tuesday
08:45 - 09:00 intro to the day, Mirna Šmidt
In this session we will prepare ourselves for the day ahead. It is an opportunity to continue building relationships as well as consciously start the day.
09:00 - 10:30 engineering keynote lecture, Sileye Ba
Machine learning for beauty and skin care at L'Oréal.
Founded in 1909, the L'Oréal Group has become the largest cosmetics and beauty company in the world. This leadership is partly based on strong research and development in its traditional fields, such as chemistry, biology, and optics, and more recently in machine learning and artificial intelligence. In this talk, I will present research efforts conducted in the computer vision and artificial intelligence team at L'Oréal, considering three use cases. First, skin color estimation using deep learning; these models are crucial for foundation recommendation. Then, for virtual try-on, which allows customers to try makeup virtually on their phones, I will present a controllable makeup-transfer method based on generative adversarial networks (GANs) developed by the team. Finally, I will present face-aging modelling using GANs. These models allow us to infer what a person may look like in the future and help them decide whether they might need specific anti-aging products.
References
- Kips R, Jiang R, Ba S, Duke B, et al. "Real-time Virtual-Try-On from a Single Example Image through Deep Inverse Graphics and Learned Differentiable Renderers." 2022.
- Kips R, Tran L, Malherbe E, and Perrot M. "Beyond color correction: Skin color estimation in the wild through deep learning." Electronic Imaging. 2020.
- Despois J, Flament F, and Perrot M. "AgingMapGAN (AMGAN): High-Resolution Controllable Face Aging with Spatially-Aware Conditional GANs." 2020.
Antonija writes:
When inviting and welcoming Sileye to take part in Bumblekite 2024, we looked to challenge what is usually the first thing that comes to one’s mind when thinking about AI that impacts our biology and health, biopharma R&D or academic research, and to widen our collective aperture. Nurturing and maintaining our health starts with how we take care of ourselves on a daily basis, long before any disease state takes place. In our daily care, personal care and cosmetics play a significant role. This is also why, in comparison to years prior, we broke “healthcare” into two pieces in the title of Bumblekite 2024: “health” and “care”.
11:00 - 12:00 communication workshop, Paul Clemencon
12:00 - 13:00 leadership conversation series, Aishwarya Parthasarathy, Nina Sesto and Valeria De Luca
14:00 - 17:00 hands-on tutorial, Aya El Mir and Leo Celi
Uncovering Bias in Clinical Data: SpO2, SaO2, and Hidden Hypoxemia
SpO2 (peripheral oxygen saturation) and SaO2 (arterial oxygen saturation) both measure blood oxygen levels but can yield different readings. This tutorial highlights the importance of understanding the origins and limitations of these measurements in the context of clinical biases. By examining data from diverse racial and ethnic groups, participants will learn how biases in SpO2 measurements can perpetuate societal disparities, leading to misdiagnoses and poor clinical outcomes. The tutorial aims to make both clinicians and data scientists aware of the problem, possible solutions, and the importance of addressing biases in SpO2 and SaO2 measurement data. By identifying and understanding these biases, participants will gain critical insights, enhancing their awareness of and approach to working with medical data, leading to more informed and equitable healthcare practices.
References
- Wong AI, Charpignon M, Kim H, et al. "Analysis of Discrepancies Between Pulse Oximetry and Arterial Oxygen Saturation Measurements by Race and Ethnicity and Association With Organ Dysfunction and Mortality." JAMA Netw Open. 2021.
Relevance: This cross-sectional study of 5 databases with 87,971 patients highlights significant disparities in pulse oximetry accuracy across racial and ethnic subgroups. The findings are crucial for understanding hidden hypoxemia's impact on mortality, organ dysfunction, and lab results. This study is foundational to the tutorial as it provides a comprehensive analysis of the biases in SpO2 measurements and their clinical implications.
- Sjoding MW, Dickinson RP, Iwashyna TJ, et al. "Racial Bias in Pulse Oximetry Measurement." N Engl J Med. 2020.
Relevance: This study of 48,097 measurement pairs from the University of Michigan shows nearly three times more occult hypoxemia in Black patients compared to White patients. This article helps participants understand the systemic biases in medical device readings and their impact on clinical outcomes. It's relevant to the tutorial as it provides evidence of racial disparities in SpO2 measurements, underscoring the importance of addressing these biases.
- Gottlieb ER, Ziegler J, Morley K, Rush B, Celi LA. "Assessment of Racial and Ethnic Differences in Oxygen Supplementation Among Patients in the Intensive Care Unit." JAMA Intern Med. 2022.
Relevance: This cohort study of 3,069 ICU patients shows that Asian, Black, and Hispanic patients had higher average pulse oximetry readings and received less supplemental oxygen than White patients. This study is relevant for illustrating how biases in oxygen measurement affect clinical decisions and patient care, reinforcing the tutorial’s focus on the implications of measurement inaccuracies.
- Jamali H, Castillo LT, Morgan CC, et al. "Racial Disparity in Oxygen Saturation Measurements by Pulse Oximetry: Evidence and Implications." Ann Am Thorac Soc. 2022.
Relevance: This article reviews the accuracy of pulse oximeters for individuals with dark skin tones and highlights that overestimation of oxygen saturation can lead to occult hypoxemia and poorer clinical outcomes. The study underscores the necessity for awareness and corrective measures among clinicians and device manufacturers. This paper supports the tutorial’s aim by emphasizing the impact of racial bias in medical devices and advocating for stricter regulatory standards and improved device calibration to address these disparities.
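To make the central quantity concrete, here is a minimal, hypothetical sketch of the kind of analysis the tutorial builds toward. The data, column names and thresholds below are purely illustrative; hidden (occult) hypoxemia is commonly operationalised as SaO2 < 88% despite a paired SpO2 reading of 92% or above:

```python
import pandas as pd

# Hypothetical paired SpO2/SaO2 readings; real analyses use large
# datasets such as those in the references above.
df = pd.DataFrame({
    "race": ["Black", "Black", "White", "White", "Black", "White"],
    "spo2": [94, 96, 93, 97, 95, 92],   # pulse oximeter reading (%)
    "sao2": [86, 91, 90, 96, 87, 93],   # arterial blood gas reading (%)
})

# Hidden hypoxemia: the oximeter suggests adequate oxygenation
# (SpO2 >= 92%) while the arterial measurement shows it is not (SaO2 < 88%).
df["hidden_hypoxemia"] = (df["sao2"] < 88) & (df["spo2"] >= 92)

# Comparing the rate per group is what exposes unevenly distributed error.
rates = df.groupby("race")["hidden_hypoxemia"].mean()
print(rates)
```

Computing this rate stratified by race and ethnicity, as the studies above do at scale, is the basic move behind the tutorial's bias analysis.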
17:30 - 18:00 closing the day, Mirna Šmidt
In this session you can reflect on the day and your learnings, as well as set intentions for the following day.
18:00 - 19:00 hands-on tutorial, Aya El Mir and Leo Celi, continued
Day 4: July 3rd, Wednesday
08:45 - 09:00 intro to the day, Mirna Šmidt
In this session we will prepare ourselves for the day ahead. It is an opportunity to continue building relationships as well as consciously start the day.
09:00 - 10:30 engineering keynote lecture, Valeria De Luca
Omics data science: from research to clinical trials
In this lecture we will go through a few examples of how patient omics data can provide new, detailed insights across different stages of pharma R&D. We will highlight the most common scientific questions, analytical tools, challenges and opportunities for data scientists in this field.
References
- Micheel CM, Nass SJ, Omenn GS, editors. "Evolution of Translational Omics: Lessons Learned and the Path Forward." National Academies Press (US); 2012 Mar 23.
- Tsukita K, Sakamaki-Tsukita H, Kaiser S, et al. "High-Throughput CSF Proteomics and Machine Learning to Identify Proteomic Signatures for Parkinson Disease Development and Progression." Neurology. 2023.
- Dimitrieva S, Janssens R, Li G, et al. "Biologically relevant integration of transcriptomics profiles from cancer cell lines, patient-derived xenografts and clinical tumors using deep learning." bioRxiv. 2022.
- Cardner M, Tuckwell D, Kostikova A, et al. "Analysis of serum proteomics data identifies a quantitative association between beta-defensin 2 at baseline and clinical response to IL-17 blockade in psoriatic arthritis." RMD Open. 2023.
11:00 - 12:00 communication workshop, Paul Clemencon
12:00 - 13:00 hands-on tutorial, Drago Plečko
Causal Fairness Analysis
Decision-making systems based on AI and machine learning have been used throughout a wide range of real-world scenarios, including healthcare, law enforcement, education, and finance. It is no longer far-fetched to envision a future where autonomous systems will drive entire business decisions and, more broadly, support large-scale decision-making infrastructure to solve society’s most challenging problems. Issues of unfairness and discrimination are pervasive when decisions are being made by humans, and remain (or are potentially amplified) when decisions are made using machines with little transparency, accountability, and fairness.
In this tutorial, we describe the framework for causal fairness analysis with the intent of filling in this gap, i.e., understanding, modeling, and possibly solving issues of fairness in decision-making settings. The main insight of our approach will be to link the quantification of the disparities present in the observed data with the underlying, often unobserved, collection of causal mechanisms that generate the disparity in the first place, a challenge we call the Fundamental Problem of Causal Fairness Analysis (FPCFA). In order to solve the FPCFA, we study the problem of decomposing variations and empirical measures of fairness that attribute such variations to structural mechanisms and different units of the population. Our effort culminates in the Fairness Map, the first systematic attempt to organize and explain the relationship between various criteria found in the literature. Finally, we discuss which causal assumptions are minimally needed for performing causal fairness analysis and propose the Fairness Cookbook, which allows data scientists to assess the existence of disparate impact and disparate treatment in practice.
References
- Plečko D, and Bareinboim E. "Causal Fairness Analysis: A Causal Toolkit for Fair Machine Learning." Foundations and Trends® in Machine Learning. 2024.
- Plečko D, and Bareinboim E. "Causal Fairness for Outcome Control." Advances in Neural Information Processing Systems. 2023.
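As a toy illustration of the starting point of such an analysis (the simulated data and coefficients below are invented for this sketch, not drawn from the references), the observed disparity that the causal framework then decomposes into direct, indirect and spurious components is the total variation measure, TV = P(Y = 1 | X = 1) - P(Y = 1 | X = 0):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: protected attribute X, a mediator Z (e.g. a score
# partly influenced by X), and a binary decision Y depending on both.
n = 10_000
x = rng.integers(0, 2, n)                       # protected attribute
z = 0.5 * x + rng.normal(size=n)                # mediator influenced by X
y = (0.8 * z + 0.3 * x + rng.normal(size=n) > 0.5).astype(int)

# Total variation: the raw, purely observational disparity in outcomes.
tv = y[x == 1].mean() - y[x == 0].mean()
print(f"TV disparity: {tv:.3f}")
```

The tutorial's framework goes beyond this single number: attributing how much of the disparity flows through the direct X → Y path versus the mediated X → Z → Y path requires the causal assumptions discussed in the session.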
14:00 - 17:00 hands-on tutorial, Drago Plečko, continued
17:30 - 18:00 closing the day, Mirna Šmidt
In this session you can reflect on the day and your learnings, as well as set intentions for the following day.
18:00 - 19:00 leadership conversation series, Gian-Reto Grond, Haris Shuaib, Bettina Wapf
Day 5: July 4th, Thursday
08:30 - 09:30 kickboxing with Leo Celi
11:00 - 13:00 hands-on tutorial, Lito Kriara
Making sense from passive monitoring sensor data in clinical trials
In this session, we will explore the utilization of diverse passive monitoring data collected during a clinical trial and demonstrate how to derive meaningful insights from it. Our discussion will cover the challenges associated with data quality issues from certain sensors and the importance of addressing these issues before extracting features from such signals. The primary goal of this tutorial is to guide you through the development of a deep learning model designed to filter out noise from sensor signals (i.e., PPG), thereby enhancing the accuracy of feature extraction and subsequent clinical assessments. Participants will gain practical knowledge in applying deep learning techniques to improve data integrity and reliability in clinical research contexts.
References
- Lee H, Chung H, and Lee J. "Motion Artifact Cancellation in Wearable Photoplethysmography Using Gyroscope." IEEE Sensors Journal. 2019.
- Kriara L, Zanon M, Lipsmeier F, and Lindemann, M. "Physiological sensor data cleaning with autoencoders." Physiol Meas. 2023.
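A minimal sketch of the idea, assuming nothing about the tutorial's actual architecture: a one-hidden-layer denoising autoencoder, trained here with plain gradient descent on windows of a synthetic noisy periodic signal standing in for PPG data:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic "PPG-like" periodic signal plus noise; the tutorial works with
# real sensor recordings, this is only an illustrative stand-in.
t = np.linspace(0, 40 * np.pi, 8000)
clean = np.sin(t)
noisy = clean + 0.3 * rng.normal(size=t.size)

# Slice into fixed-length windows: inputs are noisy, targets are clean.
win = 50
X = noisy[: t.size // win * win].reshape(-1, win)
Y = clean[: t.size // win * win].reshape(-1, win)

# Autoencoder weights: window -> latent bottleneck -> window.
latent = 8
W1 = rng.normal(scale=0.1, size=(win, latent))
W2 = rng.normal(scale=0.1, size=(latent, win))
lr = 0.01

def forward(X):
    H = np.tanh(X @ W1)          # encoder
    return H, H @ W2             # linear decoder

_, out0 = forward(X)
err0 = np.mean((out0 - Y) ** 2)  # reconstruction error before training

for _ in range(500):
    H, out = forward(X)
    G = 2 * (out - Y) / X.shape[0]       # gradient of loss w.r.t. output
    gW2 = H.T @ G
    gH = G @ W2.T * (1 - H ** 2)         # backprop through tanh
    gW1 = X.T @ gH
    W2 -= lr * gW2
    W1 -= lr * gW1

_, out = forward(X)
err = np.mean((out - Y) ** 2)
print(f"MSE before: {err0:.3f}, after: {err:.3f}")
```

The bottleneck forces the network to keep the structured, periodic part of the window and discard the noise; the same principle underlies the autoencoder-based cleaning in the Kriara et al. reference above.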
14:00 - 16:00 hands-on tutorial, Lito Kriara, continued
16:30 - 17:00 closing the day, Mirna Šmidt
In this session you can reflect on the day and your learnings, as well as set intentions for the following day.
17:00 - 18:00 leadership conversation, María Cervera de la Rosa, Shez Partovi, Fabian Rudolf
Day 6: July 5th, Friday
08:45 - 09:00 intro to the day, Mirna Šmidt
In this session we will prepare ourselves for the day ahead. It is an opportunity to continue building relationships as well as consciously start the day.
09:00 - 10:30 engineering keynote lecture, Ece Özkan Elsen
What Do I Expect from Machine Learning for Healthcare? - Interpretable, Fair, Data-Efficient and Generalizable Models (and Much More)
In recent years, modern machine learning (ML) algorithms have broken records, achieving impressive performance. They have been used in various fields, such as image generation, translation, and autonomous driving. Because computer-based systems have become an integral part of modern hospitals, numerous machine-learning-based methods have also been developed for healthcare. In this talk, I will discuss the limitations, challenges, and opportunities of ML for healthcare and showcase my current research.
References
- Marcinkevičs R, and Vogt JE. "Interpretability and Explainability: A Machine Learning Zoo Mini-tour."
This is a literature review of explainable AI.
- Ricci Lara MA, Echeveste R, and Ferrante E. "Addressing fairness in artificial intelligence for medical imaging." Nat Commun. 2022.
This is a comment on fair AI.
- Gichoya JW, Banerjee I, Bhimireddy AR, et al. "AI recognition of patient race in medical imaging: a modelling study." Lancet Digit Health. 2022.
This is interesting work showing that models can detect race from chest X-rays.
11:00 - 12:15 hands-on tutorial, Krishna Chaitanya, Pushpak Pati
Foundation models for medical imaging applications
Foundation models have revolutionized medical imaging by leveraging self-supervised learning techniques to analyze vast and diverse datasets. These models, such as vision transformers, excel in capturing complex patterns in medical images, facilitating tasks like medical image classification and segmentation. This tutorial aims to introduce the principles and advancements in foundation models, highlighting their transformative impact on medical imaging applications and their potential to enhance downstream applications such as diagnostic accuracy and clinical outcomes. The participants will explore cutting-edge methodologies, practical applications, and future directions in this rapidly evolving field.
References
- This is the first part of a 4-part series, which is essential for understanding transformers and self-attention mechanisms, providing the core concepts of foundation models.
- The Vision Transformer tutorial is crucial for comprehending how transformers are applied in computer vision, a key area in medical imaging.
- This tutorial on DINO, a self-supervised learning method, is important for learning about advanced training techniques that improve foundation model performance without labeled data.
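The scaled dot-product self-attention at the heart of the transformer and Vision Transformer tutorials above can be sketched in a few lines of NumPy. The token count and dimensions below are arbitrary, and the random projection matrices stand in for learned weights:

```python
import numpy as np

rng = np.random.default_rng(0)

def scaled_dot_product_attention(Q, K, V):
    """Single-head attention: softmax(Q K^T / sqrt(d)) V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                    # (n_tokens, n_tokens)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V, weights

# In a Vision Transformer, an image is split into patches and each patch is
# linearly embedded into a token; here, 16 tokens of dimension 32.
tokens = rng.normal(size=(16, 32))

# Learned projections (random stand-ins here) produce queries, keys, values.
Wq, Wk, Wv = (rng.normal(scale=0.1, size=(32, 32)) for _ in range(3))
out, weights = scaled_dot_product_attention(tokens @ Wq, tokens @ Wk, tokens @ Wv)

print(out.shape)   # one updated embedding per patch
```

Each output row is a weighted mixture of all value vectors, which is how every patch can attend to every other patch in a single layer.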
13:15 - 15:15 hands-on tutorial, Krishna Chaitanya, Pushpak Pati, continued
15:45 - 16:45 leadership conversation series, Tracy Glass, Rebecca Kaufmann