Susan Margulies: Good afternoon. I'm Susan Margulies, the Assistant Director for Engineering at the United States National Science Foundation, and I'm very pleased to welcome you today to this very special symposium, Smart Health Frontiers: Driving Innovation with Fundamental Science, Precision Medicine through Engineering.

We're hosting this symposium to celebrate the ten-year anniversary of the Smart Health program. We have three distinguished speakers for you today, all of whom are working at the cutting edge of biomedical technology and engineering. But before we get to their talks, I want to take a moment to tell you about the Smart Health program, which funded their work.

Smart Health is a multidisciplinary collaboration between NSF and the National Institutes of Health. The program is designed to accelerate innovative research that bridges the gap between technological and biomedical research, to help transform biomedical and public health research in the United States. A key component of the program is to create sustainable partnerships between engineers, scientists, and medical practitioners, to ensure that the most innovative and transformative technical research targets the most critical biomedical questions. The goal is to ensure that practitioners and patients are able to take full advantage of advances in engineering and computing, because they have been part of the research from the start.

Engineering, artificial intelligence, and other technological advances are poised to expand and accelerate positive contributions to multiple sectors of American society.
Integrating them into health-related fields and applications is a unique challenge, as you all have appreciated firsthand. Today's symposium will showcase how advances in engineering can help create targeted and tailored solutions for critical medical issues facing the American public, including chronic conditions like diabetes, HIV, and infertility. Increasingly, novel scientific approaches and new technologies are providing healthcare providers with better tools for disease detection and better data on disease progression and management, which in turn guide more effective diagnostic and intervention protocols. Together, engineering, science, and the medical community can create a healthier tomorrow for us all.

So what I'd like to do now is introduce our speakers, and then they will each speak in turn. I'm going to introduce all three of them at the start. Our first speaker is Ioannis Paschalidis. Dr. Paschalidis is a Distinguished Professor of Electrical and Computer Engineering, Systems Engineering, and Biomedical Engineering, and a founding professor of Computing and Data Sciences at Boston University. He is the director of the Hariri Institute for Computing and Computational Science and Engineering, Boston University's federation and convergence accelerator of centers and initiatives in this area of research. He has a PhD from MIT in electrical engineering and computer science, and his current research interests lie in the fields of optimization, control, stochastic systems, robust learning, computational medicine, and computational biology.
His work was recognized at the very start with a CAREER award from NSF, and he has won numerous awards for his research since that time.

Our second speaker is Dr. Chitra Gopalappa. Dr. Gopalappa is an associate professor of industrial engineering in the Department of Mechanical and Industrial Engineering and Commonwealth Honors College at the University of Massachusetts Amherst. She is a guest researcher at the U.S. Centers for Disease Control and Prevention. She has a PhD in industrial engineering from the University of South Florida, Tampa, and her work is in the areas of simulation, simulation-based optimization, reinforcement learning, and stochastic processes, all applied to health and social equity research. Her lab is funded with grants from the National Institutes of Health, the National Science Foundation, the U.S. CDC, and the World Health Organization.

And our final speaker today is Ramesh Johari. Dr. Johari is a professor at Stanford University with a full-time appointment in the Department of Management Science and Engineering and a courtesy appointment in the Department of Electrical Engineering. He is an associate director of the Stanford Data Science Center and serves on the executive committee of the Stanford Causal Science Center. He is a member of the Operations Research group, the Social Algorithms Lab, the Information Systems Laboratory, and the Institute for Computational and Mathematical Engineering. He received his PhD in electrical engineering and computer science from MIT, and his recent research includes data-driven decision-making,
causal inference, and experimental design, particularly as applied to online platforms and digital health interventions. So please join me today in this webinar, and our first speaker is Dr. Paschalidis.

Ioannis Paschalidis: Thank you. Thank you for inviting me to be part of this exciting symposium. Let me share my slides, and let me ask for confirmation that you can see them in slide mode. Perfect, thank you.

So today I'm going to talk about predictive models of pregnancy and health conditions associated with infertility. This is work that has been funded by the Smart Health program at NSF. Let me start with some basic facts that I think provide evidence of accelerating challenges for family planning. First, the average age of first mothers in the U.S. has been increasing steadily, from about 21 years in 1972 to 27.3 years in 2021, and the probability of conceiving naturally drops quite drastically with age. Currently in the U.S., about 15% of couples suffer from infertility or subfertility, with about 12% of women and about 9.4% of men using fertility services. There are technologies out there that come under the name assisted reproductive technologies, ART for short. IVF, in vitro fertilization, is one of those technologies, and these technologies can help couples conceive. An IVF cycle, however, has a limited probability of success: generically, about one out of three IVF cycles succeeds, and there is a significant cost associated with these services. The cost of an IVF cycle is anywhere between $15,000 and $30,000.
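As a back-of-the-envelope illustration of the one-in-three per-cycle success rate mentioned above, the chance of at least one success over several cycles can be sketched as follows. This assumes, simplistically, independent cycles with a constant success probability, which real clinical data do not fully support:

```python
# Illustrative only: assumes independent IVF cycles with a constant
# per-cycle success probability, a simplification of real-world data.
def cumulative_success(p_cycle: float, n_cycles: int) -> float:
    """Probability of at least one success in n_cycles attempts."""
    return 1.0 - (1.0 - p_cycle) ** n_cycles

p = 1.0 / 3.0  # roughly one in three cycles succeeds, per the talk
for n in range(1, 4):
    print(f"{n} cycle(s): {cumulative_success(p, n):.1%}")
```

At $15,000-$30,000 per cycle, even this simplified arithmetic makes clear why several attempts, and hence substantial out-of-pocket cost, are often needed.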
And because IVF is not universally covered by insurance, this creates obvious disparities. There are also certain conditions, for instance polycystic ovary syndrome (PCOS), which affects about 15% of women and is the leading cause of anovulatory infertility; it also causes long-term health issues, at a cost that has been estimated on the order of $4.4 billion annually in the U.S.

So with these facts in mind, the question we set out to answer is: what can we, the NSF Smart Health community, do to help? I would like to thank NSF, and particularly Wendy Nilsen, who had the vision to start this program, understanding that people closer to the computational and engineering sciences can have a significant impact on smart health types of applications.

In terms of what I think we can do: we can leverage streams of data and harness the power of AI and machine learning algorithms to develop predictive analytics. In this presentation I'm going to discuss three models: one for predicting pregnancy outcomes, one for predicting IVF outcomes, and a third for predicting PCOS. The first model enhances family planning, and as you will see, it highlights a number of important factors that affect fertility. Predicting IVF outcomes likewise enhances family planning, and also the decision-making that couples have to go through when deciding whether to seek these types of treatments. And finally, with respect to polycystic ovary syndrome, there is a significantly large number of undiagnosed cases in the U.S.,
and predictive models can help enable earlier intervention, addressing the long-term health risks associated with this condition.

Starting with the first model: we use data from a study called the Pregnancy Study Online, PRESTO for short. This is a web-based preconception cohort study of women in the U.S. and Canada, women of reproductive age actively trying to conceive. It is led by Lauren Wise at Boston University, who is a co-PI on the grant that has funded this work. Women interested in participating submit an online questionnaire at enrollment, and then every two months for up to 12 months or until they get pregnant. There is also an opportunity for male partners to participate. The dataset we used had about 4,000 participants, and we extracted from the questionnaires 163 variables covering a variety of factors: sociodemographic factors, lifestyle, diet, medical history, and selected male partner characteristics.

Earlier work in this space was more in the mold of epidemiological research, looking at one factor at a time. The contribution here is to look at all the factors at the same time and optimize for the factors that are most predictive of the outcome of interest. I'll present two models: one predicting pregnancy within 6 months (again, these are women trying to conceive without using ART technologies), and the second predicting pregnancy within 12 months.
You can see in these tables that the AUC, the area under the curve, which is a metric of accuracy, is on the order of 70-71%: a moderate ability to predict. It is also interesting, I think, that what we label the sparse and robust model gives better predictive performance than a full model that uses the entire set of variables available to us. The models we developed are interpretable, and you can see some of the factors that are most significant in making the prediction. In green we highlight factors that you can argue are potentially modifiable; they give some ideas about what women could possibly control in order to improve their chances of conceiving.

The paper was published in 2021 in Human Reproduction, and it was fairly well received by the community: the European Society of Human Reproduction and Embryology selected this paper for a journal club, as they call it, where it was discussed for about 24 hours; that discussion generated about 1,000 tweets and about 1 million impressions.

Moving on to the second model. Here we set out to predict whether the first IVF cycle for a participant is going to be successful, that is, lead to a pregnancy. We used data from a number of different IVF clinics that are part of the eIVF platform; these were clinics from Massachusetts, New York, and California. We had about 22,000 cycles, and we had many variables, including demographics and also variables that had to do with the specific IVF cycle, such as the number of oocytes retrieved and the number of embryos that were prepared,
as well as lifestyle factors during that cycle and medical diagnoses associated with the women participating in the study. The outcome to predict was pregnancy after the first IVF cycle. Existing models so far used basic demographics and medical history; there were no models out there that also used the in-cycle information, which, as you will see, is very important in making a prediction.

The performance of this model is on the order of 67-68%. It is interesting that what we call a linear and robust model does pretty much as well as a very complex, not-so-interpretable nonlinear model. You can see on the right side of the slide certain key predictors of the success of the IVF cycle. In green, again, are predictors you could argue are possibly modifiable, and these give some ideas about what could be done to improve the chances of success. What is also interesting is that some racial variables ended up being very significant, and I think they point to the fact that these procedures have been optimized for certain races, and certain other races are disadvantaged because of the way the procedures have been optimized. This is food for thought.

The final model I'll present tries to assess cases of PCOS that are undiagnosed. About 30,000 women were included in the dataset we had access to, and we selected women who should have had a PCOS diagnosis but were not checked for so-called polycystic ovarian morphology,
and therefore did not have a diagnosis in their electronic health record, even though they should have, based on other factors. Existing models in this space focused more on predicting the development of PCOS, rather than helping identify undiagnosed cases, which our models do. You can see the performance of these models here. It is interesting again that what we call the linear, robust model has performance very close to a nonlinear, very complex, not-so-interpretable model. You can see the different variables that are important in this model, including, again, some possibly modifiable factors but also some racial factors that play a role.

Given that this is NSF, in the last two minutes or so I would like to peek behind the curtain and give you some idea of the methods behind the models we were developing. In general, our work has been motivated by working in healthcare: in developing these sorts of models there is usually a large number of predictor variables; we need to avoid overfitting; we need to optimize out-of-sample performance; and interpretability is quite important. So sparse models are interesting, and linear models are interesting, for these reasons. One meta-observation is that, with proper feature selection, linear models can do as well as much more complex, nonlinear, non-interpretable models. And there is also a need for robustness.
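The kind of sparse, interpretable linear model described here can be sketched with L1-regularized logistic regression evaluated by AUC. This is a generic illustration on synthetic data, not the speaker's actual pipeline or the PRESTO dataset; the variable count and the setup are purely illustrative:

```python
# Generic sketch of a sparse (L1-penalized) linear classifier evaluated
# by AUC, in the spirit of the models described; synthetic data only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, d = 2000, 163                       # e.g., 163 candidate predictors
X = rng.normal(size=(n, d))
beta = np.zeros(d)
beta[:10] = 1.0                        # only a few truly informative factors
y = (X @ beta + rng.normal(size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
model.fit(X_tr, y_tr)

auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
n_selected = int(np.count_nonzero(model.coef_))
print(f"AUC={auc:.2f}, nonzero coefficients={n_selected} of {d}")
```

The L1 penalty drives most coefficients to exactly zero, so the fitted model is both sparse (few predictors survive) and interpretable (each surviving coefficient has a sign and magnitude one can inspect), echoing the talk's point that sparse linear models can match far more complex ones.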
To that end, we have developed a line of work on what we call distributionally robust learning. To give you an idea of what this is: typically, when you develop a model, you are trying to minimize some expected loss with respect to the probability distribution that generates the data. You do not know that distribution, so you use the empirical distribution derived from a training dataset to assess the expected loss. In the methodology we have developed, we essentially let the distribution that generates the data be anything it can be: you can argue that nature is doing the worst possible thing for the model developer by selecting a worst-case probability distribution. We, as modelers, come after this, and we learn a model that minimizes the worst-case expected loss that nature has selected for us. Because of this min-max structure, there is robustness embedded in the development of the model, and this is part of what is known as distributionally robust optimization, or robust learning. What is, I think, interesting, and what also leads to mathematical facility in dealing with these types of problems, is that we use an ambiguity set for selecting these probability distributions that is based on the Wasserstein metric, centered around the empirical probability distribution generated by the training set.
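In symbols, the min-max structure just described can be sketched as follows (standard Wasserstein-DRO notation; a generic statement of the approach, not a transcription of the speaker's slides):

```latex
\min_{\theta}\;
\sup_{\mathbb{Q}\,:\,W(\mathbb{Q},\,\hat{\mathbb{P}}_N)\le\varepsilon}\;
\mathbb{E}_{(x,y)\sim\mathbb{Q}}\!\left[\ell(\theta; x, y)\right]
```

Here \(\hat{\mathbb{P}}_N\) is the empirical distribution of the \(N\) training samples, \(W\) is the Wasserstein metric, \(\varepsilon\) is the radius of the ambiguity set, and \(\ell\) is the loss. Nature picks the worst-case distribution \(\mathbb{Q}\) in the Wasserstein ball around the empirical distribution, and the learner minimizes against that worst case.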
I will conclude by thanking the many students who contributed to this work, who are listed on this slide, and also my collaborators, Shruthi Mahalingaiah at Harvard, Lauren Wise at BU, and Alex Zefsky at BU, who are also co-PIs on the grant that supported this work. Thank you so much for your attention.

Moderator: Thank you very much. Our next speaker is Chitra Gopalappa.

Chitra Gopalappa: Thank you, everyone, for joining, and thanks to NSF for the funding and the invitation to present. The title of our talk, and of all of our work, is simulation and decision analysis for integrated modeling of diseases: a healthy-lives-for-all approach.

The motivation: decades of research using observational studies has shown that socioeconomic factors, on the y-axis of the bottom graph, are key determinants of health, and are thus simply called social determinants of health. However, intervention analysis models, the ones on the right, have traditionally focused on behavioral and pharmaceutical interventions, not modeling social determinants, and this siloed, single-disease approach is myopic, as it overlooks the root causes of diseases. On the other hand, observational studies help inform structural intervention programs; however, the fragmented approach of different sectors targeting different social determinants may not be effective for disease prevention. Now, many community-based organizations already strive to provide combined care to their community. However, the lack of a systemic approach means suboptimal use of resources,
and, as the burden of planning for coordinated care is shifted to the community, there is the risk that the more disadvantaged the community, the higher the strain on its system, likely compounding disparities.

Our long-term goal is to develop a decision-support tool that helps predict potential health risks specific to a population, and joint intervention needs, to inform a more systemic change in the allocation of public health resources that optimizes health and social equity. This could help inform interagency programs and policies at the federal or state agency level, and at the care-facility level it could help with infrastructure and operational planning. This is a complex problem and a long-term goal; our short-term goal is to advance the modeling research needed to achieve it.

Advancements in AI can play a critical role in addressing this problem. However, purely data-driven AI lacks the necessary robustness. Therefore, our approach is to combine physics-based methods with AI to build mechanistic models, which we can then use in conjunction with AI for scalability. For this grant, we use sexually and intravenously transmitted infections as a case study. Observational studies have shown that many social determinants are drivers of risky sexual and injecting drug use behaviors. These behaviors are common modes of transmission for multiple infections, which in turn lead to other severe diseases, and there are biological interactions between these diseases. It is thus necessary to intervene on each of these mechanisms to effectively avert infection risk.
So our goal is to develop a model that simulates all these mechanisms, so that we can predict populations at risk of infection and determine optimal combinations of interventions.

There are multiple challenges in developing such a model; I will provide an overview of three of them. The first is complex system dynamics: we need a suitable method to jointly model diseases of varying epidemiology. The second is complex dependency between features: we mostly have data on pairwise associations between a social determinant and a behavior, and we want to estimate the multivariate joint distributions. Third, we need new algorithms to identify optimal combinations of interventions in the context of disease syndemics and epidemic dynamics. I will provide a brief overview of work on these three challenges. There are many collaborators on each of these projects; I will only highlight the lead student authors on the papers I cite.

Challenge one: the goal is a technique to jointly model interacting diseases that may vary epidemiologically but share behavioral transmission mechanisms. The status quo is that we either have compartmental models, which are sufficient for fast-spreading but not suitable for slower-spreading diseases, or network models, which are suitable for all but face challenges in application to multiple diseases because of the varying epidemiology.
Taking an example: if you simulate 100,000 nodes, we would have 3,900 cases of HPV but only 10 cases of HIV. Scaling 10 times is still not sufficient for the slow-spreading diseases, but it drastically increases the computational burden, which is of concern in this context of multiple diseases. Our approach is thus a mixed model. The red nodes are persons infected with at least one slow-spreading disease; the green nodes are their susceptible contacts; and the rest of the population is in the compartmental model. As new infections occur, that is, as green nodes become infected with a slow-spreading disease, their contacts move from the compartmental model to the network. This makes it easy to generate large sample sizes in the network. The novelty is a network generation algorithm that maintains the many dynamics, including the network structure, which is not random: age correlation, geographical correlation, partnership changes over time, and so on.

To demonstrate the significance of such a model, we applied it to HIV, HPV, and cervical cancer. HPV is 1.7 times higher, and cervical cancer 3.6 times higher, in women with HIV compared to women without, which is already known. What the study adds is the percent attribution to biological versus behavioral factors, and thus the need for structural plus behavioral interventions versus pharmaceutical interventions.
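The mixed compartmental/network idea described above can be sketched in a toy form: only persons infected with (or exposed to) the slow-spreading disease are tracked as explicit network nodes, while everyone else stays in an aggregate compartmental pool. All parameters and the contact process below are illustrative inventions, not the talk's fitted network-generation algorithm:

```python
# Toy sketch of a mixed compartmental/network epidemic model:
# a fast-spreading disease evolves in aggregate compartments, while the
# slow-spreading disease is simulated on explicit nodes that are pulled
# out of the compartmental pool only when they become relevant contacts.
import random

random.seed(1)

POP = 100_000        # total population
BETA_FAST = 0.3      # per-step transmission rate, fast disease (compartmental)
BETA_SLOW = 0.02     # per-contact transmission prob., slow disease (network)
CONTACTS = 3         # contacts drawn per infected node per step

fast_S, fast_I = POP - 100, 100        # fast disease: simple S-I compartments
slow_infected = set(range(10))         # seed 10 slow-disease cases as nodes
network_edges = {i: set() for i in slow_infected}

for step in range(52):
    # Compartmental update for the fast-spreading disease.
    new_fast = BETA_FAST * fast_S * fast_I / POP
    fast_S -= new_fast
    fast_I += new_fast

    # Network update: each infected node draws contacts from the pool,
    # moving them from the compartmental bulk into the explicit network.
    for node in list(slow_infected):
        for _ in range(CONTACTS):
            contact = random.randrange(POP)
            network_edges.setdefault(contact, set()).add(node)
            if contact not in slow_infected and random.random() < BETA_SLOW:
                slow_infected.add(contact)

print(f"fast-disease infections: {fast_I:.0f}, "
      f"slow-disease infections: {len(slow_infected)}, "
      f"explicit network nodes: {len(network_edges)}")
```

The point of the design is visible in the counts: the explicit network stays tiny relative to the 100,000-person population, so the slow disease gets individual-level resolution without paying the cost of a full 100,000-node network simulation.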
What we realized later was that the new model can also generate cluster outbreaks, which led us to build the first national-level model for HIV in the U.S. that accurately replicated cluster outbreaks. I state this here because model validity is extremely important if you want to use the model to inform policy decisions. Up until this work, we had validated it on many demographic-based surveillance metrics; this application allowed us to more intricately validate the network structure and features from the micro to the macro scales, which really increased confidence in the model. Aside from validity, the application is also significant and novel: cluster detection and response are currently conducted using DNA sequences of the virus from recent outbreaks, so this model helps us move from cluster detection and response to cluster prediction and prevention.

Challenge two is estimation of the joint distributions. The overall goal is to use the available data, which are mainly pairwise associations or marginal distributions, to estimate the joint distributions that we then use to parameterize a simulation model. In the simulation, in addition to simulating the various behavioral risk elements, we jointly add social determinants of health (SDH) features. We still simulate infection risk as a function of behaviors,

(Pausing to check some questions.)
Okay. But as social determinants are jointly modeled with behaviors, we can evaluate joint intervention needs. Here I will only focus on the joint distribution estimation.

Very briefly, the novelty here was developing a method to integrate disparate data sets. We have multiple large national data sets, but the relevant variables are located in different data sets collected by different agencies. We integrate them by applying continuous copulas to learn the dependency, transforming them to discrete copulas to extract the bivariate joints, and then combining all of the bivariate associations in graphical models to estimate the multivariate joint distribution.

To demonstrate the significance of such a model, we conducted hypothetical analyses, and these are sample results. The two graphs show SDH burden in people with HIV on the left, and in men who exchange sex on the right. On the y-axis is the distribution as a percent of people, and on the x-axis is the number of disadvantaged SDH. These graphs tell us what percent of people have zero SDH, one SDH, and so on. We also know what those different combinations are, which can shed light on intervention needs. We can also do threshold impact analysis to estimate the maximum impact from structural interventions.
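Returning for a moment to the copula construction described above: the core idea, combining known marginals with a pairwise dependence structure to obtain a joint distribution, can be sketched with a generic Gaussian copula. This is not the talk's full continuous-to-discrete copula plus graphical-model pipeline, and the two marginals below (a lognormal and a Poisson) are invented stand-ins for survey variables:

```python
# Generic Gaussian-copula sketch: combine two marginals with one pairwise
# correlation to sample a joint distribution. The talk's method further
# maps to discrete copulas and graphical models, not reproduced here.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Assumed inputs: the kind of quantities available from separate
# national surveys - two marginals plus one pairwise correlation.
rho = 0.6
corr = np.array([[1.0, rho], [rho, 1.0]])

# Sample from the Gaussian copula, then push through inverse CDFs.
z = rng.multivariate_normal(mean=[0.0, 0.0], cov=corr, size=50_000)
u = stats.norm.cdf(z)                          # dependent uniforms
x_one = stats.lognorm(s=0.8).ppf(u[:, 0])      # marginal 1 (continuous)
x_two = stats.poisson(mu=2.0).ppf(u[:, 1])     # marginal 2 (discrete count)

emp_rho = stats.spearmanr(x_one, x_two).correlation
print(f"empirical rank correlation of the joint sample: {emp_rho:.2f}")
```

The sampled pairs have exactly the specified marginals while preserving (approximately) the specified rank dependence, which is precisely what is needed to parameterize a simulation when only marginals and pairwise associations are observed.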
As an example of such a threshold analysis: what if an intervention improved HIV care behavior in people with more than zero disadvantaged SDH to match the behavior of people with zero SDH? What is its impact not only on HIV, but on HPV and cervical cancer?

So what is the significance of these joint distributions and hypothetical analyses? As the number of combination interventions increases, even designing or testing interventions through controlled trials becomes a challenge. This type of data analysis can help identify and streamline viable joint interventions that can then be tested in controlled trials, which in turn provide the evidence base for model-driven analysis of interventions.

The last challenge is AI for control optimization. The goal is identifying intervention combinations, but there are multiple scales of complexity. First, for each disease we have multiple levels of interventions, structural, behavioral, and pharmaceutical, specific to demographics. Second, we have multiple diseases. And third, we want policy specific to a jurisdiction, while considering the fact that epidemics are not restricted to geographical boundaries; that is, there is mixing across jurisdictions. We use deep reinforcement learning algorithms: first a single agent for a single disease, and then multi-agent for multi-jurisdiction. The novelty here was mainly efficient formulations to improve convergence, which is a concern for such a complex problem. This work also helped us test different algorithms and identify research gaps; the main challenge was that, as the number of agents increases, the algorithms become harder to converge.
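The "intervention selection as reinforcement learning" framing can be illustrated with a deliberately tiny, tabular toy: the state is a coarse prevalence level, actions are intervention intensities, and the reward trades infections against intervention cost. This is entirely illustrative; the talk's work uses deep RL and multi-agent formulations that this sketch does not attempt to reproduce:

```python
# Toy single-agent Q-learning sketch of epidemic intervention as control.
# States: discretized prevalence levels; actions: intervention intensity.
import random

random.seed(0)
LEVELS = 5                    # prevalence levels: 0 (low) .. 4 (high)
ACTIONS = [0, 1, 2]           # none / moderate / intensive intervention
COST = {0: 0.0, 1: 0.3, 2: 0.8}

Q = [[0.0] * len(ACTIONS) for _ in range(LEVELS)]
alpha, gamma, eps = 0.1, 0.95, 0.1

def step(state, action):
    """Stronger interventions push prevalence down, with noise."""
    drift = action - 1 + random.choice([-1, 0, 1])
    nxt = min(LEVELS - 1, max(0, state - drift))
    reward = -nxt - COST[action]   # penalize both prevalence and cost
    return nxt, reward

for episode in range(3000):
    s = random.randrange(LEVELS)
    for _ in range(30):
        a = (random.randrange(len(ACTIONS)) if random.random() < eps
             else max(range(len(ACTIONS)), key=lambda i: Q[s][i]))
        s2, r = step(s, a)
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

policy = [max(range(len(ACTIONS)), key=lambda a: Q[s][a]) for s in range(LEVELS)]
print("learned intervention intensity by prevalence level:", policy)
```

Even this toy shows the convergence issue the speaker raises: the table has only 15 state-action values, yet it needs thousands of episodes; with multiple diseases and multiple jurisdictional agents the state-action space explodes, motivating the efficient formulations and coordinated-learning algorithms described in the talk.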
In ongoing work on this specific grant, we are developing new algorithms. Very briefly, we are mainly trying to leverage the system's knowledge to coordinate the learning of the policy or value networks of the reinforcement learning, while each agent is still able to execute its own policies.

I'll just conclude by summarizing that our work developed novel methods in multi-scale simulation, statistical machine learning, and multi-agent reinforcement learning. I would like to acknowledge the many collaborators on this work.

Moderator: Thank you very much, Chitra. Our final speaker is Professor Ramesh Johari from the Department of Management Science and Engineering at Stanford University.

Ramesh Johari: Great. Thank you, George, and thank you to the National Science Foundation for hosting us today, and also, of course, for the funding that supported this type of work. I'm going to be talking today about a project that is in collaboration with Stanford Children's Health. I actually deliberately set my background to be the Stanford Children's Health background, even though I'm in the engineering school at Stanford, in part to highlight a major point I'm going to try to make today: it's essential to start in partnership with clinicians and care providers when you're trying to actually impact care on the ground. So with that in mind, let's jump in.
First, diabetes. I think all of us know that diabetes is prevalent with an incredibly large disease burden, but just to put some numbers on it. 00:37:06.000 --> 00:37:11.000 There are 38 million individuals that are at least diagnosed with some form of diabetes. 00:37:11.000 --> 00:37:12.000 There are about 98 million adults that have prediabetes. I actually am among them. 00:37:12.000 --> 00:37:24.000 And specific to this talk, I'm going to be focusing on some changes in the care model for type 1 diabetes. 00:37:24.000 --> 00:37:31.000 And there are 300,000 youth, so, you know, under 18 years old, diagnosed with type 1 diabetes. 00:37:31.000 --> 00:37:38.000 Unsurprisingly, just like you've heard in the other talks, disparities abound: by race, by socioeconomic status, by education. 00:37:38.000 --> 00:37:44.000 And so, you know, any care model that you change needs to also be accounting for that. 00:37:44.000 --> 00:37:56.000 Okay, let me start with a slide that's actually the end of the talk at some level, which is, you know, what is amazing about this project that I'm a part of at Stanford. 00:37:56.000 --> 00:37:58.000 It's an incredibly large team, and I'll have a slide at the end that shows you the team, that's had an amazing effect on caring for type 1 diabetes. 00:37:58.000 --> 00:38:17.000 So what you're seeing here is 12-month outcomes in newly diagnosed youth. This is a recent paper that came out in Nature Medicine, just in April of this year. 00:38:17.000 --> 00:38:24.000 And in this work, what we're reporting are 12-month outcomes under a new care model from the study, the 4T study. 00:38:24.000 --> 00:38:29.000 The 4T stands for teamwork, targets, technology, and tight control. You know, what I really want to highlight in this picture
00:38:29.000 --> 00:38:42.000 is that if you look at the historical control cohort that we're comparing against, the HbA1c values in that cohort were about 7.6 to 7.7% 12 months after diagnosis. 00:38:42.000 --> 00:38:55.000 Whereas in the 4T study group, in study one in particular, that dropped to 6.5%. And one of the things I want to point out here, without getting into the details of how to interpret HbA1c: 00:38:55.000 --> 00:39:13.000 What I really like is that one of my collaborators, the PI on the NIH grant that funded this, David Maahs, is fond of saying that if the 4T study intervention was a pill, it would be worth billions of dollars, because that's the type of thing that is extremely difficult to accomplish in diabetes care. 00:39:13.000 --> 00:39:18.000 So I think of that as kind of the end of the story here. You know, this is the type of improvement that's been happening. 00:39:18.000 --> 00:39:22.000 What is the talk about? The talk is really about thinking about, well, what is the role of technology? 00:39:22.000 --> 00:39:24.000 And so, you know, here's where I really want to talk as an engineer: what is it that technology brings to the table in achieving these kinds of outcomes? 00:39:24.000 --> 00:39:35.000 You know, I'd say that the way I think about those outcomes 00:39:35.000 --> 00:39:36.000 is that they are 00:39:36.000 --> 00:39:47.000 on par with the best available. So again, you know, I really want to emphasize here that the grounding has to be that you start with the clinical context. 00:39:47.000 --> 00:39:52.000 You start with patients, providers, devices, data. What are the interventions available to you? From there, you start asking yourself, what are feasible and efficient innovations that could be implemented?
And fundamentally in our work, it's thinking about what kinds of algorithm-augmented improvements in care we could create. 00:40:05.000 --> 00:40:11.000 And then what you do is you get this virtuous cycle going, where you can go back and say, okay, from what we've implemented, what have we learned, what worked, what didn't work? 00:40:11.000 --> 00:40:25.000 That actually changes the clinical context; that changes the data; that changes how you think about interventions; and that should be changing your thoughts on what feasible and efficient innovations in the next stage could look like. 00:40:25.000 --> 00:40:31.000 So this is my viewpoint on how we do effective engineering within the clinical context. And you can see, I hope, that the partnership here is kind of a key piece of that. 00:40:31.000 --> 00:40:46.000 All right. In this talk, I'm going to talk a little bit about an instantiation of this viewpoint, specifically for improving care for children with type 1 diabetes. 00:40:46.000 --> 00:40:56.000 So the status quo in care for type 1 diabetes is, you know, there are a number of different ways of measuring your glucose levels over time; the best you could hope for is what's called a continuous glucose monitor, which continuously measures your glucose levels. 00:40:56.000 --> 00:41:15.000 But even with these monitors, historically, data was really only reviewed every 3 months, which might mean you would have these moments in between the three-month visits where control is deteriorating, and it's completely missed until the next 3-month visit. 00:41:15.000 --> 00:41:20.000 So obviously what you'd love to be able to do is to say, hey, why can't I review this data more frequently? 00:41:20.000 --> 00:41:30.000 Why isn't there a way to actually intervene with patients more frequently, in which case you could potentially detect and address that deteriorating control sooner?
00:41:30.000 --> 00:41:35.000 So in twenty-, working closely with the clinical care providers, 00:41:35.000 --> 00:41:37.000 and by closely I really mean specifically talking to them about what kind of tool would be useful, 00:41:37.000 --> 00:41:52.000 there's the launch of the first version of what we call TIDE: Timely Interventions for Diabetes Excellence. 00:41:52.000 --> 00:42:02.000 Now, here's really what I want to point out: you know, we're fond of using the most advanced technology, but the truth here was that, in talking to the care providers, the most important thing was not to build the world's fanciest algorithm. 00:42:02.000 --> 00:42:13.000 It was actually to just build something that works and that they could use and engage with and provide feedback on. So this first dashboard was deployed in less than a month, 00:42:13.000 --> 00:42:19.000 using R Shiny, which is one of the easiest open-source packages to use, just on a single laptop, not even on the cloud. 00:42:19.000 --> 00:42:27.000 And, you know, this is what it looked like. It contained a number of different diagnostic criteria, notably things like the time within target glucose range. 00:42:27.000 --> 00:42:53.000 By 2021, we had been able to use continuous, iterative improvement and buy-in from providers to scale up to a new version of TIDE that looks a lot more like the types of platforms that you would expect for digital health, where it's deployed on a server now so that all providers can access it without needing their own laptop, with contextual ranking and risk stratification of patients into different groups.
00:42:53.000 --> 00:43:08.000 What that meant was that we had taken a pain point for the providers, being able to provide this continuous kind of view to patients without being overburdened, worked closely with them to iterate, and deployed something that not only was delivering better care, as you saw in the initial chart on HbA1c, but actually was delivering more efficient care. 00:43:08.000 --> 00:43:19.000 And what you can see here is that, you know, this is depicting improvements 00:43:19.000 --> 00:43:26.000 in the time of review with a couple of successive iterative improvements that we made to the platform, changing the flagging and ranking criteria and changing our deployment to use Tableau. 00:43:26.000 --> 00:43:37.000 And you can see that that led to a 36% and then a 37% reduction in the time spent per patient review. 00:43:37.000 --> 00:43:50.000 And so, you know, with these kinds of improvements, I think what was striking for me as an engineer is that historically, I think I would have put the cart before the horse: build the thing and then hope that somebody uses it. 00:43:50.000 --> 00:43:57.000 And what was amazing to me was that starting with the absolute simplest thing, something that arguably you wouldn't even consider as a technological innovation given the tools available to us, 00:43:57.000 --> 00:44:07.000 was so important from a human standpoint to get buy-in from the clinical team. But here's the thing: so that gets us from left to right. 00:44:07.000 --> 00:44:11.000 That gets us from the context to something deployed. What's great is that once you're deployed, once you have buy-in, once you have a tool that's being used, that tool is generating its own data. 00:44:11.000 --> 00:44:31.000 And that data allows us to learn, and allows us to think about the next step.
So in going back from here to here and thinking about how the context has changed, we can think about what the next stage of the evolution of these kinds of tools can look like. 00:44:31.000 --> 00:44:38.000 So in the next part of this talk, I want to talk to you very quickly about a couple of lessons of what we can learn from TIDE. 00:44:38.000 --> 00:44:54.000 Now, fundamentally, I think, you know, if you think about what's challenging about health care, and especially this kind of digital health intervention, I'd say one of the biggest reasons that interventions that claim they can improve patient care fail, or fail to be adopted, 00:44:54.000 --> 00:45:02.000 is because providers are on the front line of caring for patients. And fundamentally, there's a very limited number of providers and many, many patients. 00:45:02.000 --> 00:45:14.000 And even if you believe that digital health interventions can automate some part of that process, the gating factor will always be the ability of providers to be intermediaries to that care. 00:45:14.000 --> 00:45:25.000 So when you're optimizing care under capacity constraints, the question you're really asking is: how do I maximize and efficiently target the resources that these providers are giving to the system? 00:45:25.000 --> 00:45:36.000 So the first thing that we were thinking about here is something along the lines of learning personalized treatment policies, and a simple way to think about that is: given the patient state, which patients should you prioritize for review or contact? 00:45:36.000 --> 00:45:47.000 So when I showed you that dashboard earlier, each row is a patient with their statistics, and kind of one of the questions we're asking ourselves is, well, who should be higher or lower within the different risk strata? 00:45:47.000 --> 00:45:54.000 And that's important because it's affecting the attention of the providers and who they think needs the most care that week.
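The capacity-constrained prioritization just described can be sketched in a few lines. This is a hypothetical illustration, not the TIDE algorithm: the patient records, the scoring rule, and the weighting on a recent drop are all invented for the example.

```python
# Hypothetical sketch: weekly prioritization under a provider capacity
# constraint. Lower time-in-range and a recent drop both raise priority;
# the scoring rule here is invented, not the deployed TIDE logic.

def prioritize(patients, capacity):
    """Rank patients for review and return the top `capacity` ids."""
    def risk(p):
        drop = max(0.0, p["tir_prev_week"] - p["tir_this_week"])
        return (1.0 - p["tir_this_week"]) + 2.0 * drop
    ranked = sorted(patients, key=risk, reverse=True)
    return [p["id"] for p in ranked[:capacity]]

patients = [
    {"id": "A", "tir_this_week": 0.82, "tir_prev_week": 0.80},
    {"id": "B", "tir_this_week": 0.55, "tir_prev_week": 0.75},  # big drop
    {"id": "C", "tir_this_week": 0.48, "tir_prev_week": 0.50},
    {"id": "D", "tir_this_week": 0.90, "tir_prev_week": 0.88},
]
print(prioritize(patients, capacity=2))  # → ['B', 'C']
```

The capacity argument captures the gating factor the speaker mentions: no matter how good the scoring, only `capacity` patients get provider attention that week.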
00:45:54.000 --> 00:46:05.000 Now, a key challenge here is that the patient state every week is actually extremely high-dimensional. It's a weekly time series of five-minute intervals of continuous glucose monitor data, potentially also activity 00:46:05.000 --> 00:46:23.000 monitor data, etc. And so learning those representations is actually quite challenging. And you know, as a computer scientist, as an engineer, we would think to ourselves, well, why don't we just use black-box methods to learn representations of what the state of the patient is 00:46:23.000 --> 00:46:32.000 and what treatment intervention would make the most sense for them? And what you learn, what you see if you use black-box representations, is that you get some very weird behavior in the prioritization of patients. 00:46:32.000 --> 00:46:42.000 For example, just to take a look at something like time in range: so time in range is measuring what fraction of time in the past week you were within a target range. 00:46:42.000 --> 00:46:53.000 The higher that number is, the better you're doing. And arguably, for a patient who's higher on time in range, clinical guidelines say, you know, that's probably not a patient who needs intervention this week. 00:46:53.000 --> 00:46:54.000 Conversely, if your time in range is low, this is probably a patient that providers want to reach out to, 00:46:54.000 --> 00:47:02.000 to intervene on, to at least have a conversation with about, you know, what might be leading 00:47:02.000 --> 00:47:09.000 to that low time in range. And you can see that the black-box representations, you know, they're actually at best sort of flat, and at worst actually learning the wrong kind of relationship here. 00:47:09.000 --> 00:47:19.000 You can see some similar effects in some of these other metrics that I won't go through in detail.
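The time-in-range metric defined above can be computed directly from CGM samples. A minimal sketch: the 70-180 mg/dL band is the commonly cited consensus target, and the readings here are synthetic stand-ins for a real week of data.

```python
# Minimal sketch of the "time in range" metric described in the talk:
# the fraction of CGM readings in the past week inside a target band.
# 70-180 mg/dL is the commonly cited consensus target; the readings
# below are made up for illustration.

TARGET_LOW, TARGET_HIGH = 70, 180  # mg/dL

def time_in_range(readings):
    in_range = sum(TARGET_LOW <= g <= TARGET_HIGH for g in readings)
    return in_range / len(readings)

# a full week of 5-minute CGM samples would be 7 * 24 * 12 = 2016
# readings; a short synthetic series stands in here
week = [65, 90, 110, 150, 175, 190, 210, 140, 130, 100]
print(round(time_in_range(week), 2))  # → 0.7
```

The full weekly state is the raw 2016-point series (plus activity data), which is why learning a good low-dimensional representation of it is the hard part, not computing summary metrics like this one.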
00:47:19.000 --> 00:47:29.000 By contrast, one of the things that we spent time on is actually taking advantage of inductive bias in the process of learning representations, using clinician-informed guidance. 00:47:29.000 --> 00:47:41.000 And what that led to then is that the prioritization you get out aligns with clinical guidelines and actually performs better as well. 00:47:41.000 --> 00:47:52.000 There's a reason the clinical guidelines look the way they do. And so in particular, what you see is that when patients have higher time in range, they're less likely to be prioritized when we use these clinician-informed representations. 00:47:52.000 --> 00:48:08.000 I also want to mention that where we're headed with this is that we want to be able to, on an ongoing basis, inject randomization into the system, so that we can optimize the performance of the prioritization policies and the capacity allocation policies that we use. 00:48:08.000 --> 00:48:27.000 And I think one thing I want to point out to you is that when you have these kinds of platforms available and running, then the implicit and explicit randomization that's present in these kinds of platforms can actually enable prospective simulations of trials, so that you can actually maximize the targeting efficacy of the policy design. 00:48:27.000 --> 00:48:42.000 So just as a simple example of this, in a recent paper we talked about estimation of the predicted probability of success of an optimization trial, depending on which targeting policy you would choose to test in the next trial. 00:48:42.000 --> 00:48:47.000 And I think one of the things I want to point out here is that this is saying, you know, how many patients do you need to reach a particular predicted probability of success in your trial?
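The predicted-probability-of-success idea can be sketched as a Monte Carlo simulation. This is a hypothetical illustration, not the method from the paper the speaker cites: the population, effect sizes, noise level, and the 0.3 success threshold are all invented, and "targeting" here simply means enrolling the patients with the largest latent expected effect.

```python
import random

# Hypothetical Monte Carlo sketch: estimate the probability that a trial
# of n patients "succeeds" under random targeting versus targeting the
# patients with the largest expected effect. All numbers are made up.

random.seed(1)

def simulate_trial(n, targeted, trials=2000):
    """Each patient has a latent expected effect; 'success' means the
    mean observed effect in the enrolled group exceeds 0.3."""
    population = [random.gauss(0.2, 0.3) for _ in range(500)]
    successes = 0
    for _ in range(trials):
        pool = (sorted(population, reverse=True)[:n] if targeted
                else random.sample(population, n))
        observed = [mu + random.gauss(0, 0.5) for mu in pool]
        if sum(observed) / n > 0.3:
            successes += 1
    return successes / trials

p_random = simulate_trial(50, targeted=False)
p_target = simulate_trial(50, targeted=True)
print(p_target > p_random)  # targeting raises predicted success probability
```

Sweeping `n` in a simulation like this is the spirit of the question in the talk: how many patients do you need to enroll, under each targeting policy, to reach a given predicted probability of success.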
00:48:47.000 --> 00:48:59.000 And if you use random targeting, you know, you may need somewhere between 100 to 150 patients, if you're running a 1-week trial, to reach a reasonable level 00:48:59.000 --> 00:49:02.000 of predicted probability of success, and that can be cut down significantly if you use a better targeting policy, like one which, for example, targets patients with the largest expected effect. 00:49:02.000 --> 00:49:13.000 There's a caveat here, of course: you want to be careful, because targeting should not be at the expense of equitable care, and that's kind of an important aspect of our current policy and trial design. 00:49:13.000 --> 00:49:27.000 In the future, we're working towards scaling the model with the nonprofit Tidepool, which has deployed a version of TIDE that we're now piloting in the Stanford clinic, and it's turnkey available to 12,000 clinics nationwide. 00:49:27.000 --> 00:49:41.000 It enables all sorts of additional features and data logging. And one of the great things here is that Tidepool is something that, you know, is that next iterative step. 00:49:41.000 --> 00:49:49.000 And so I kind of want to wrap up here by just noting that what we've been able to see with TIDE and with 4T is that continuous improvement is really essential to achieving the impact of a digital health intervention. 00:49:49.000 --> 00:49:59.000 And I think a big part of that I want to emphasize from this talk is starting with the care team and the patients. 00:49:59.000 --> 00:50:14.000 The failure modes for these kinds of things happen because we inject ourselves at the algorithms first. And our main learning is really that if you start here and you get in and get something that's working, then the flywheel can get started much more effectively. 00:50:14.000 --> 00:50:20.000 So with that, I've got a partial list of references here that will be in the slides. And a very, very large team.
00:50:20.000 --> 00:50:26.000 I also want to thank the NIH, the Stanford Maternal and Child Health Research Institute, the Human-Centered AI Institute, the Helmsley Trust, and of course the National Science Foundation for all the support. 00:50:26.000 --> 00:50:36.000 So thank you. 00:50:36.000 --> 00:50:41.000 Thank you all. These are excellent, excellent talks. So we have a little bit of time for Q&A. 00:50:41.000 --> 00:50:56.000 I'd like to address the first question first to Chitra, and the question is: how do you go about validating the models using surveillance data to get a distribution of disease dynamics? 00:50:56.000 --> 00:50:57.000 Sure. In this case, you know, I'll give you an example for HIV. 00:50:57.000 --> 00:51:10.000 So we have HIV surveillance systems, which look at how many people were diagnosed every year by different risk groups. 00:51:10.000 --> 00:51:40.000 We can also have incidence estimations from different methods, so how many new people were infected in a year by demographics. And third, we can also look at the stage at diagnosis: in the model we are tracking the disease stage, so we can look at how long it has been since the persons were infected. And yeah, so there are many demographic factors for incidence, prevalence, and stage at diagnosis that we looked at at the 00:51:49.000 --> 00:51:58.000 macro scale. And at the network level, we mainly use molecular cluster-based data. 00:51:58.000 --> 00:52:07.000 What that is: so for people who are diagnosed with HIV, the DNA sequence of the viruses 00:52:07.000 --> 00:52:22.000 can be used to look at how closely the viruses are connected in transmissions. So that gives us some insight on the transmission networks, and we were able to use that to compare our model against for validation. 00:52:22.000 --> 00:52:35.000 And this is done every year, so we have ongoing data that we can use for validating. 00:52:35.000 --> 00:52:40.000 Wonderful, thank you. Yannis, do you have anything to add to that?
00:52:40.000 --> 00:52:46.000 Not really. I think it's important to actually take some of these models and validate them 00:52:46.000 --> 00:53:10.000 in larger populations and, you know, multiple sites, because typically there may be differences in the way some of these datasets are constructed, specifically if the dataset is coming from a single site. So that's always an important consideration in developing these models. 00:53:10.000 --> 00:53:27.000 I think another important consideration, which is now very standard practice, is that when one is looking at the performance of these models, clearly performance metrics should be evaluated out of sample, 00:53:27.000 --> 00:53:37.000 so on a part of the data that has not been used to develop or train the model, to test the generalizability of the model beyond the training data. 00:53:37.000 --> 00:53:43.000 Thank you very much. Ramesh, any comments on model validation? 00:53:43.000 --> 00:53:48.000 Yeah, I think... I mean, I don't have a lot to add to what the other two said. 00:53:48.000 --> 00:53:56.000 I think one thing I will say is that it's part of the reason that we're so excited about micro-randomization, about injecting a little bit of randomization. 00:53:56.000 --> 00:54:12.000 I think Chitra pointed out sort of having controlled and observational studies alongside each other. And I think having a little bit of randomization in the data that you're getting back is a great way to be able to get that kind of model validation on an ongoing basis and in these continuous systems. 00:54:12.000 --> 00:54:36.000 Thank you very much. I'd like to address this next question first to Yannis.
Can your research on disparities, such as racial disparities in IVF treatment, impact how the biomedical community deals with other instances of health disparities, rather than just in IVF? 00:54:36.000 --> 00:54:40.000 Yes, thank you for this question. So I think there are two types of disparities that one can talk about. 00:54:40.000 --> 00:55:04.000 One is what perhaps we can call algorithmic disparity, in the sense that an algorithm may exploit disparities that are present in the data, and perhaps by doing so accentuate that disparity, developing models that are not equitable. 00:55:04.000 --> 00:55:15.000 So there is ongoing work in that space, and we have done some related work on how essentially to correct models for these types of disparities 00:55:15.000 --> 00:55:25.000 and produce models that are more equitable across different cohorts of patients. Now, there is the larger question: 00:55:25.000 --> 00:55:35.000 how do you deal with disparities that may be present in the data? For instance, in my presentation I talked about some disparities that are present 00:55:35.000 --> 00:55:43.000 in terms of IVF outcomes. So this is a disparity that is fairly well recognized, and there are other studies 00:55:43.000 --> 00:55:59.000 pointing to different success rates for different cohorts, based on race, based on other factors. And my collaborator at Harvard, who is an OB/GYN physician, 00:55:59.000 --> 00:56:06.000 has two hypotheses. One is that there are potentially differences in terms of education and information that is available. 00:56:06.000 --> 00:56:19.000 And there may also be potential issues with the way these IVF protocols have been optimized, so to speak. 00:56:19.000 --> 00:56:36.000 So I think the first is being addressed, in the sense that there are foundations and practices out there that are trying to improve education and information available to different types of cohorts.
00:56:36.000 --> 00:56:47.000 I'm not aware of any efforts to modify IVF protocols so that you make them more specific for particular cohorts. 00:56:47.000 --> 00:57:03.000 Thank you. Ramesh, a similar question on equity and how it relates to your work specifically, and in general about the biomedical implications. 00:57:03.000 --> 00:57:20.000 Yeah, I think I wanted to say two things on that question. One is about access. And so there's a policy question here about ensuring access even just to the basic technological infrastructure that enables a patient to participate in remote monitoring. 00:57:20.000 --> 00:57:23.000 One of the great ironies is that I think the patients who stand to benefit most are exactly those who live in regions where access to specialty care is extremely difficult. 00:57:23.000 --> 00:57:35.000 On the other hand, those are also often the geographic regions where technological barriers are the highest. 00:57:35.000 --> 00:57:46.000 So I think that's one major issue I want to highlight. And there are some aspects of this project that have tried to tackle that from a policy perspective, but admittedly that remains a huge and challenging national problem. 00:57:46.000 --> 00:57:50.000 I think the other one I want to highlight, though, on the intervention side, is that I do think it's critical when one is thinking about personalized interventions: 00:57:50.000 --> 00:58:11.000 the way you provide equitable care is that you ensure that if your targeting policy is favoring some patients over others, there are other interventions that are appropriate for the patients who are, let's say, less prioritized according to that rule. 00:58:11.000 --> 00:58:21.000 In our clinic, for example, that means certain patients may be less responsive to messaging, in which case the providers will often be reaching out to those particular patients with phone calls instead.
00:58:21.000 --> 00:58:32.000 So having multimodal care allows you to actually target the intervention better to the individual. I think sometimes we can mistake personalized interventions for a denial 00:58:32.000 --> 00:58:41.000 of access for certain patients, and I think that would be a disaster. So that's again where starting with the clinical context is fundamental. 00:58:41.000 --> 00:58:43.000 Thank you. Chitra, equity concerns are very central to your work. 00:58:43.000 --> 00:58:50.000 What are your thoughts on that question? 00:58:50.000 --> 00:59:02.000 Yeah, so in our work, you know, like what I showed, we do see that a lot of risky behavior is correlated with social determinants. 00:59:02.000 --> 00:59:12.000 And in this one, we only looked at the joint distributions, but not necessarily the causality, you know, like what are the causes? 00:59:12.000 --> 00:59:23.000 We do see that disparities for certain racial groups are much, much higher than for other populations. So what is the reason? 00:59:23.000 --> 00:59:36.000 You know, is it socioeconomic factors, or is it discrimination, or is it just a lack of research in the understanding of the specific risks associated with a certain population? 00:59:36.000 --> 00:59:55.000 And I think, you know, as the other two panelists mentioned, it is really collectively thinking about all of these features, and, you know, understanding or estimating the causality is a very complex challenge. 00:59:55.000 --> 01:00:06.000 And even conducting such controlled studies may not be feasible, but how can we use these large datasets to observe longitudinal changes? 01:00:06.000 --> 01:00:16.000 You know, there are several programs that have been implemented over time to address the several social determinants, or the structural intervention programs. 01:00:16.000 --> 01:00:38.000 So can we use that data, even if it's just cross-sectional data,
just to see the longitudinal changes of the programs, to sort of try and infer what the causal factors are, both from the perspective of what interventions are working, but also to understand the causality between these different features. 01:00:38.000 --> 01:00:44.000 Thank you. We have a number of pressing questions, but I think unfortunately we're out of time. 01:00:44.000 --> 01:00:54.000 So I really, really want to thank the excellent speakers that we have, and thank you to all the participants for attending the webinar. 01:00:54.000 --> 01:00:56.000 Thank you. 01:00:56.000 --> 01:01:00.000 Thank you.