18 research outputs found

    Federated Learning for Breast Density Classification: A Real-World Implementation

    Full text link
    Building robust deep learning-based models requires large quantities of diverse training data. In this study, we investigate the use of federated learning (FL) to build medical imaging classification models in a real-world collaborative setting. Seven clinical institutions from across the world joined this FL effort to train a model for breast density classification based on the Breast Imaging Reporting and Data System (BI-RADS). We show that despite substantial differences among the sites' datasets (mammography system, class distribution, and dataset size) and without centralizing data, we can successfully train AI models in federation. The results show that models trained using FL perform on average 6.3% better than their counterparts trained on an institute's local data alone. Furthermore, we show a 45.8% relative improvement in the models' generalizability when evaluated on the other participating sites' testing data. Accepted at the 1st MICCAI Workshop on "Distributed And Collaborative Learning".
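
    The federated setup described above can be illustrated with a minimal federated-averaging (FedAvg) sketch: each site trains a copy of the global model on its own data, and only the resulting weights are aggregated, so no data leaves the institution. The model, data loaders, and hyperparameters below are hypothetical placeholders, not the configuration used in the study.

```python
# Minimal FedAvg sketch in PyTorch. Hypothetical: model, client loaders, and
# hyperparameters are illustrative, not the paper's actual setup.
import copy
import torch
import torch.nn as nn

def local_update(model, loader, epochs=1, lr=1e-3):
    """Train a copy of the global model on one client's local data."""
    local = copy.deepcopy(model)
    opt = torch.optim.SGD(local.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    local.train()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(local(x), y).backward()
            opt.step()
    return local.state_dict(), len(loader.dataset)

def fed_avg(global_model, client_loaders, rounds=10):
    """Each round: train locally at every site, then average the weights,
    weighted by local dataset size, to update the global model."""
    for _ in range(rounds):
        states, sizes = zip(*[local_update(global_model, dl) for dl in client_loaders])
        total = sum(sizes)
        avg = {k: sum(s[k].float() * (n / total) for s, n in zip(states, sizes))
               for k in states[0]}
        global_model.load_state_dict(avg)
    return global_model
```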

    System on Chip for Sensor Network Security: A Proposed Architecture

    No full text

    Surgical Management of Locoregional Recurrence in Breast Cancer

    Full text link

    Use of LFSR for Sensor Network Security: A New Approach

    No full text

    Proposed design for simultaneous measurement of wall and near-wall temperatures in gas microflows

    No full text
    Gas behavior in systems at the microscale has received significant attention from researchers over the last two decades [1-4]. Today, there is an enhanced emphasis on developing new experimental techniques to capture local temperature profiles in gases under rarefied conditions. The main reason behind this focus is the interesting physics exhibited by gases under rarefied conditions, especially in the transition regime, where the onset of local thermodynamic disequilibrium manifests as velocity slip and temperature jump at the wall [1-4]. However, there is limited experimental evidence with which to understand these phenomena. With advances in experimental facilities, it is now possible, at least in principle, to map local temperature profiles in gases under rarefied conditions. The molecular tagging approach is one such technique, which has shown the potential to map temperature profiles at low pressure [5]. In this approach, a very small percentage of tracer molecules is introduced into the gas of interest, referred to as the carrier gas. In gas flow studies, the typical tracers employed are acetone and biacetyl. These tracer molecules, assumed to be in equilibrium with the carrier gas, are excited with an energy source at a specific wavelength, typically a laser. The excited molecules are unstable and de-excite both radiatively and non-radiatively, which manifests as fluorescence and phosphorescence. Following the deformation of a tagged line over time yields the flow velocity. In addition, the dependence of the phosphorescence and fluorescence intensity on the gas temperature could also allow this technique to be used for local temperature measurements. The objective of this study is to develop an experimental setup capable of simultaneously mapping the wall and near-wall fluid temperatures, with the final goal of measuring the temperature jump at the wall when rarefied conditions are reached. The originality of this setup, shown in Figure 1, is to couple surface temperature measurements using an infrared camera with Molecular Tagging Thermometry (MTT) for gas temperature measurements. The bottom wall of the channel will be made of a sapphire substrate of 650 µm thickness coated with a thin film of Indium Tin Oxide (ITO); the average roughness of this ITO layer is about 3 nm. The top wall of the channel will be made of SU8 and bonded to the bottom wall with a layer of PDMS. The channel will be filled with acetone vapor.
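
    As an illustration of the thermometry principle described above (phosphorescence intensity depending on gas temperature), the sketch below inverts an assumed intensity-temperature calibration to estimate temperature from a measured signal. The calibration points and the exponential functional form are purely hypothetical assumptions, not values taken from the study.

```python
# Illustrative sketch of intensity-based molecular tagging thermometry (MTT).
# Hypothetical calibration data and functional form; the actual tracer response
# and calibration procedure in the study may differ.
import numpy as np

# Assumed calibration: normalized phosphorescence intensity measured at known
# temperatures under fixed pressure and excitation conditions.
cal_temperature_K = np.array([300.0, 320.0, 340.0, 360.0, 380.0])
cal_intensity = np.array([1.00, 0.82, 0.66, 0.53, 0.42])  # decreases with T

# Fit a simple monotonic model I(T) = a * exp(-b * T) to the calibration points
# by linear regression on log(I).
slope, intercept = np.polyfit(cal_temperature_K, np.log(cal_intensity), 1)
a, b = np.exp(intercept), -slope

def temperature_from_intensity(intensity):
    """Invert the fitted calibration to estimate gas temperature from intensity."""
    return -np.log(intensity / a) / b

# Example: a tagged region whose normalized intensity is 0.6 after excitation.
print(temperature_from_intensity(0.6))  # roughly 348 K under these assumptions
```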

    A Hybrid Model for Soybean Yield Prediction Integrating Convolutional Neural Networks, Recurrent Neural Networks, and Graph Convolutional Networks

    No full text
    Soybean yield prediction is one of the most critical activities for increasing agricultural productivity and ensuring food security. Traditional models often underperform because of the limitations of single data sources and simplistic model architectures, which prevent the complex, multifaceted factors influencing crop growth and yield from being captured. Accordingly, this work fuses multi-source data (satellite imagery, weather data, and soil properties) through multi-modal fusion using Convolutional Neural Networks and Recurrent Neural Networks. Satellite imagery provides spatial information on crop health, weather data provides temporal insights, and soil properties provide important fertility information. Fusing these heterogeneous data sources gives the model an overall understanding of yield-determining factors, decreasing RMSE by 15% and improving R2 by 20% over single-source models. We further advance the feature engineering by using Temporal Convolutional Networks (TCNs) and Graph Convolutional Networks (GCNs) to capture time-series trends, geographic and topological information, and pest/disease incidence. TCNs capture long-range temporal dependencies well, while the GCN models complex spatial relationships and enhances the features used for yield prediction. This increases prediction accuracy by 10% and boosts the F1 score for low-yield area identification by 5%. Additionally, we introduce further improved model architectures: a custom UNet with attention mechanisms, Heterogeneous Graph Neural Networks (HGNNs), and Variational Autoencoders (VAEs). The attention mechanism enables more effective spatial feature encoding by focusing on critical image regions, the HGNN captures complex interaction patterns between diverse data types, and the VAEs generate robust feature representations. These architectures achieve a 12% improvement in MAE, while R2 for yield prediction improves by 25%. In this paper, the state of the art in yield prediction is advanced through multi-source data fusion, sophisticated feature engineering, and advanced neural network architectures, providing a more accurate and reliable soybean yield forecast. Thus, the fusion of Convolutional Neural Networks with Recurrent Neural Networks and Graph Networks enhances the efficiency of the prediction process.
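
    A minimal sketch of the CNN/RNN multi-modal fusion idea described above is given below: a CNN branch embeds satellite image patches, a GRU branch encodes weather sequences, and an MLP branch encodes soil properties, with the concatenated embeddings regressed to yield. All layer sizes, input shapes, and names are illustrative assumptions; the paper's full pipeline additionally uses TCN, GCN, UNet-with-attention, HGNN, and VAE components that are not reproduced here.

```python
# Multi-modal fusion sketch in PyTorch for yield regression.
# Hypothetical architecture sizes and input shapes, not the paper's configuration.
import torch
import torch.nn as nn

class YieldFusionNet(nn.Module):
    def __init__(self, weather_features=5, soil_features=8):
        super().__init__()
        # CNN branch: spatial crop-health features from multispectral patches.
        self.cnn = nn.Sequential(
            nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # RNN branch: temporal weather dynamics over the growing season.
        self.rnn = nn.GRU(weather_features, 32, batch_first=True)
        # MLP branch: static soil fertility descriptors.
        self.soil = nn.Sequential(nn.Linear(soil_features, 16), nn.ReLU())
        # Fusion head: regress yield from the concatenated embeddings.
        self.head = nn.Sequential(nn.Linear(32 + 32 + 16, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, image, weather, soil):
        img_emb = self.cnn(image)            # (B, 32)
        _, h = self.rnn(weather)             # h: (1, B, 32), last hidden state
        fused = torch.cat([img_emb, h[-1], self.soil(soil)], dim=1)
        return self.head(fused).squeeze(-1)  # predicted yield per sample

# Example with dummy tensors: 4-band 64x64 patches, 20 weekly weather steps, 8 soil features.
model = YieldFusionNet()
yhat = model(torch.randn(2, 4, 64, 64), torch.randn(2, 20, 5), torch.randn(2, 8))
```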