99 research outputs found
Embryonic Death Is Linked to Maternal Identity in the Leatherback Turtle (Dermochelys coriacea)
Leatherback turtles have an average global hatching success rate of ∼50%, lower than other marine turtle species. Embryonic death has been linked to environmental factors such as precipitation and temperature, although much variability remains unexplained. We examined how nesting season, the time of nesting within each season, the relative position of each clutch laid by each female each season, maternal identity, and associated factors such as the reproductive experience of the female (new nester versus remigrant) and the period of egg retention between clutches (interclutch interval) affected hatching success and the stage of embryonic death in failed eggs of leatherback turtles nesting at Playa Grande, Costa Rica. Data were collected during five nesting seasons, from 2004/05 to 2008/09. Mean hatching success was 50.4%. Nesting season significantly influenced hatching success as well as early- and late-stage embryonic death. Neither clutch position nor nesting time during the season had a significant effect on hatching success or the stage of embryonic death. Some leatherback females consistently produced nests with higher hatching success rates than others. Remigrant females arrived earlier to nest, produced more clutches, and had higher rates of hatching success than new nesters. Reproductive experience did not affect the stage of death or the duration of the interclutch interval. The length of the interclutch interval had a significant effect on the proportion of eggs that failed in each clutch and the developmental stage at which they died. Intrinsic factors such as maternal identity thus play a role in embryonic death in the leatherback turtle.
Federated learning enables big data for rare cancer boundary detection
Although machine learning (ML) has shown promise across disciplines, out-of-sample generalizability remains a concern. This is currently addressed by sharing multi-site data, but such centralization is challenging or infeasible to scale due to various limitations. Federated ML (FL) provides an alternative paradigm for accurate and generalizable ML by sharing only numerical model updates. Here we present the largest FL study to date, involving data from 71 sites across 6 continents, to generate an automatic tumor boundary detector for the rare disease of glioblastoma, reporting the largest such dataset in the literature (n = 6,314). We demonstrate a 33% delineation improvement for the surgically targetable tumor, and 23% for the complete tumor extent, over a publicly trained model. We anticipate our study to: 1) enable more healthcare studies informed by large and diverse data, ensuring meaningful results for rare diseases and underrepresented populations, 2) facilitate further analyses for glioblastoma by releasing our consensus model, and 3) demonstrate the effectiveness of FL at such scale and task complexity as a paradigm shift for multi-site collaborations, alleviating the need for data sharing.
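The paradigm this abstract alludes to, in which only numerical model updates leave each site, is essentially federated averaging. The sketch below is a toy illustration of that idea under stated assumptions (a flat parameter vector, a toy linear model, and synthetic per-site data); it is not the study's actual glioblastoma segmentation workflow or infrastructure.

```python
# Minimal sketch of "share model updates, not data" (FedAvg-style averaging).
# The model is a flat NumPy parameter vector purely for illustration.
import numpy as np

def local_update(global_params, local_data, lr=0.01, steps=10):
    """Each site refines the global parameters on its own private data.
    Only the resulting parameters ever leave the site."""
    params = global_params.copy()
    X, y = local_data
    for _ in range(steps):
        # toy linear-model gradient step; stands in for full network training
        grad = X.T @ (X @ params - y) / len(y)
        params -= lr * grad
    return params, len(y)

def federated_round(global_params, site_datasets):
    """Aggregator combines per-site parameters, weighted by local sample count."""
    updates = [local_update(global_params, data) for data in site_datasets]
    total = sum(n for _, n in updates)
    return sum(params * (n / total) for params, n in updates)

# Illustrative run with synthetic per-site data (never pooled centrally).
rng = np.random.default_rng(0)
sites = [(rng.normal(size=(50, 5)), rng.normal(size=50)) for _ in range(3)]
global_params = np.zeros(5)
for _ in range(20):
    global_params = federated_round(global_params, sites)
```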
Federated Benchmarking of Medical Artificial Intelligence With MedPerf
Medical artificial intelligence (AI) has tremendous potential to advance healthcare by supporting and contributing to the evidence-based practice of medicine, personalizing patient treatment, reducing costs, and improving both healthcare provider and patient experience. Unlocking this potential requires systematic, quantitative evaluation of the performance of medical AI models on large-scale, heterogeneous data capturing diverse patient populations. Here, to meet this need, we introduce MedPerf, an open platform for benchmarking AI models in the medical domain. MedPerf focuses on enabling federated evaluation of AI models by securely distributing them to different facilities, such as healthcare organizations. This process of bringing the model to the data empowers each facility to assess and verify the performance of AI models in an efficient and human-supervised process, while prioritizing privacy. We describe the current challenges the healthcare and AI communities face, the need for an open platform, the design philosophy of MedPerf, its current implementation status and real-world deployment, our roadmap, and, importantly, the use of MedPerf with multiple international institutions in cloud-based and on-premises scenarios. Finally, we welcome new contributions by researchers and organizations to further strengthen MedPerf as an open benchmarking platform.
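To make the "bring the model to the data" flow concrete, here is a minimal sketch of federated evaluation in Python. The Facility and federated_evaluation names, the accuracy metric, and the aggregation scheme are illustrative assumptions only; they are not MedPerf's actual API or workflow.

```python
# Hedged sketch of federated evaluation: a model is shipped to each facility,
# the facility runs it on its own private data, and only summary metrics
# are returned. Names here are hypothetical, not MedPerf's API.
from dataclasses import dataclass
from typing import Callable, Dict, List, Sequence, Tuple

Example = Tuple[list, int]       # (features, label), kept on-site
Model = Callable[[list], int]    # any callable prediction function

@dataclass
class Facility:
    name: str
    private_data: List[Example]  # never leaves this object

    def evaluate(self, model: Model) -> Dict[str, float]:
        """Run the model locally; return only aggregate metrics."""
        correct = sum(model(x) == y for x, y in self.private_data)
        return {"n": float(len(self.private_data)),
                "accuracy": correct / len(self.private_data)}

def federated_evaluation(model: Model, facilities: Sequence[Facility]) -> Dict[str, float]:
    """Combine per-facility metrics into a sample-weighted overall score."""
    reports = {f.name: f.evaluate(model) for f in facilities}
    total = sum(r["n"] for r in reports.values())
    overall = sum(r["accuracy"] * r["n"] for r in reports.values()) / total
    return {"overall_accuracy": overall,
            **{name: r["accuracy"] for name, r in reports.items()}}

# Illustrative usage with a trivial threshold "model" and toy on-site data.
toy_model: Model = lambda x: int(sum(x) > 0)
sites = [Facility("hospital_a", [([1.0, 2.0], 1), ([-1.0, -2.0], 0)]),
         Facility("hospital_b", [([0.5, 0.5], 1), ([-0.5, 0.1], 0), ([2.0, 1.0], 1)])]
print(federated_evaluation(toy_model, sites))
```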
Author Correction: Federated learning enables big data for rare cancer boundary detection.
10.1038/s41467-023-36188-7, Nature Communications, 14
MedPerf: Open Benchmarking Platform for Medical Artificial Intelligence using Federated Evaluation
Medical AI has tremendous potential to advance healthcare by supporting the evidence-based practice of medicine, personalizing patient treatment, reducing costs, and improving provider and patient experience. We argue that unlocking this potential requires a systematic way to measure the performance of medical AI models on large-scale, heterogeneous data. To meet this need, we are building MedPerf, an open framework for benchmarking machine learning in the medical domain. MedPerf will enable federated evaluation, in which models are securely distributed to different facilities for evaluation, thereby empowering healthcare organizations to assess and verify the performance of AI models in an efficient and human-supervised process, while prioritizing privacy. We describe the current challenges the healthcare and AI communities face, the need for an open platform, the design philosophy of MedPerf, its current implementation status, and our roadmap. We call for researchers and organizations to join us in creating the MedPerf open benchmarking platform.
- …