
    Vicinal Feature Statistics Augmentation for Federated 3D Medical Volume Segmentation

    Federated learning (FL) enables multiple client medical institutes to collaboratively train a deep learning (DL) model with privacy protection. However, the performance of FL can be constrained by the limited availability of labeled data in small institutes and the heterogeneous (i.e., non-i.i.d.) data distribution across institutes. Though data augmentation is a proven technique for boosting the generalization of conventional centralized DL as a "free lunch", its application in FL is largely underexplored. Notably, 3D medical segmentation, constrained by costly labeling, generally relies on data augmentation. In this work, we develop a vicinal feature-level data augmentation (VFDA) scheme to efficiently alleviate the local feature shift and facilitate collaborative training for privacy-aware FL segmentation. We take both the inner- and inter-institute divergence into consideration, without the need for cross-institute transfer of raw data or their mixup. Specifically, we exploit the batch-wise feature statistics (e.g., mean and standard deviation) in each institute to abstractly represent the discrepancy of data, and model each feature statistic probabilistically via a Gaussian prototype, with the mean corresponding to the original statistic and the variance quantifying the augmentation scope. From the vicinal risk minimization perspective, novel feature statistics can be drawn from the Gaussian distribution to fulfill augmentation. The variance is explicitly derived from the data bias in each individual institute and the underlying feature statistics characterized by all participating institutes. The added-on VFDA consistently yielded marked improvements over six advanced FL methods on both 3D brain tumor and cardiac segmentation.
    Comment: 28th biennial international conference on Information Processing in Medical Imaging (IPMI 2023): Oral Paper
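
    The core resampling step can be sketched in a few lines. Below is a minimal PyTorch sketch of the idea, not the authors' implementation: it perturbs the channel-wise mean and standard deviation of a feature batch with Gaussian noise and re-normalizes. The prototype variances `sigma_mu` and `sigma_std` are taken as inputs here; in the paper they are derived from intra- and inter-institute divergence.

```python
import torch

def vicinal_stat_augment(feat: torch.Tensor,
                         sigma_mu: torch.Tensor,
                         sigma_std: torch.Tensor) -> torch.Tensor:
    """Resample batch-wise feature statistics from Gaussian vicinities.

    feat:      (N, C, D, H, W) feature maps of a 3D segmentation network
    sigma_mu:  (C,) std of the Gaussian prototype around each channel mean
    sigma_std: (C,) std of the Gaussian prototype around each channel std
    """
    dims = (0, 2, 3, 4)                          # reduce over batch + spatial dims
    mu = feat.mean(dim=dims, keepdim=True)       # original statistics
    std = feat.std(dim=dims, keepdim=True) + 1e-6

    shape = (1, feat.shape[1], 1, 1, 1)
    # Draw novel statistics from the Gaussian vicinity of the originals.
    new_mu = mu + torch.randn_like(mu) * sigma_mu.view(shape)
    new_std = std + torch.randn_like(std) * sigma_std.view(shape)

    # Standardize with the old statistics, re-style with the sampled ones.
    return (feat - mu) / std * new_std.clamp(min=1e-6) + new_mu
```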

    Hidden Gibbs random fields model selection using Block Likelihood Information Criterion

    Performing model selection between Gibbs random fields is a very challenging task. Indeed, due to the Markovian dependence structure, the normalizing constant of the fields cannot be computed using standard analytical or numerical methods. Furthermore, such unobserved fields cannot be integrated out, and the likelihood evaluation is a doubly intractable problem. This is the central obstacle to picking the model that best fits the observed data. We introduce a new approximate version of the Bayesian Information Criterion. We partition the lattice into contiguous rectangular blocks and approximate the probability measure of the hidden Gibbs field by the product of Gibbs distributions over the blocks. On that basis, we estimate the likelihood and derive the Block Likelihood Information Criterion (BLIC), which answers model choice questions such as the selection of the dependency structure or the number of latent states. We study the performance of BLIC for those questions. In addition, we present a comparison with ABC algorithms to show that the novel criterion offers a better trade-off between time efficiency and reliable results.
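
    As a rough illustration of the block-likelihood construction (under simplifying assumptions not in the abstract: a hidden Potts prior with Gaussian emissions, blocks small enough to enumerate exactly, and within-block interactions only; the paper's construction is more general):

```python
import numpy as np
from itertools import product

def block_loglik(y_blk, beta, mus, sigma):
    """Exact log-likelihood of one small block of a hidden Potts model,
    with the hidden labels integrated out by brute-force enumeration."""
    mus = np.asarray(mus)
    h, w = y_blk.shape
    K = len(mus)
    log_joint, log_Z = -np.inf, -np.inf
    for labels in product(range(K), repeat=h * w):
        x = np.asarray(labels).reshape(h, w)
        # Potts energy over within-block neighbour pairs only.
        pairs = (x[:, :-1] == x[:, 1:]).sum() + (x[:-1, :] == x[1:, :]).sum()
        log_prior = beta * pairs
        log_Z = np.logaddexp(log_Z, log_prior)           # block normalizer
        log_emit = (-0.5 * ((y_blk - mus[x]) ** 2).sum() / sigma**2
                    - 0.5 * h * w * np.log(2 * np.pi * sigma**2))
        log_joint = np.logaddexp(log_joint, log_prior + log_emit)
    return log_joint - log_Z

def blic(y, beta, mus, sigma, block=(2, 2), n_params=4):
    """BLIC = -2 * (block log-likelihood) + d * log(n): the BIC template
    with the intractable likelihood replaced by its block approximation."""
    bh, bw = block
    ll = sum(block_loglik(y[i:i + bh, j:j + bw], beta, mus, sigma)
             for i in range(0, y.shape[0], bh)
             for j in range(0, y.shape[1], bw))
    return -2.0 * ll + n_params * np.log(y.size)
```

    Model choice then amounts to computing this criterion for each candidate (e.g., each number of latent states) and keeping the minimizer.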

    Estimating the granularity coefficient of a Potts-Markov random field within an MCMC algorithm

    This paper addresses the problem of estimating the Potts parameter β jointly with the unknown parameters of a Bayesian model within a Markov chain Monte Carlo (MCMC) algorithm. Standard MCMC methods cannot be applied to this problem because performing inference on β requires computing the intractable normalizing constant of the Potts model. In the proposed MCMC method, the estimation of β is conducted using a likelihood-free Metropolis-Hastings algorithm. Experimental results obtained for synthetic data show that estimating β jointly with the other unknown parameters leads to estimation results that are as good as those obtained with the actual value of β. On the other hand, assuming that the value of β is known can degrade estimation performance significantly if this value is incorrect. To illustrate the interest of this method, the proposed algorithm is successfully applied to real 2D SAR and 3D ultrasound images.
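
    A minimal sketch of such a likelihood-free update for β alone (the paper embeds this step inside a larger MCMC over all model unknowns) is an exchange-algorithm-style move: an auxiliary field simulated at the proposed β makes the intractable normalizing constants cancel in the acceptance ratio. A flat prior on β and a symmetric random-walk proposal are assumed here.

```python
import numpy as np

def like_pairs(x):
    """Sufficient statistic of the Potts model: number of equal neighbours."""
    return (x[:, :-1] == x[:, 1:]).sum() + (x[:-1, :] == x[1:, :]).sum()

def gibbs_potts(shape, beta, K, sweeps=200, rng=None):
    """Draw an (approximate) Potts sample via single-site Gibbs sweeps."""
    rng = rng or np.random.default_rng()
    H, W = shape
    x = rng.integers(K, size=shape)
    for _ in range(sweeps):
        for i in range(H):
            for j in range(W):
                nbrs = [x[a, b] for a, b in
                        ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                        if 0 <= a < H and 0 <= b < W]
                logits = beta * np.array([sum(n == k for n in nbrs)
                                          for k in range(K)])
                p = np.exp(logits - logits.max())
                x[i, j] = rng.choice(K, p=p / p.sum())
    return x

def exchange_step(x, beta, step=0.05, K=2, rng=None):
    """One likelihood-free MH update of beta given a label field x: the
    normalizing constants cancel against an auxiliary field drawn at the
    proposed value, so only the sufficient statistics appear."""
    rng = rng or np.random.default_rng()
    beta_new = beta + step * rng.standard_normal()
    if beta_new < 0:
        return beta                      # flat prior on beta >= 0
    w = gibbs_potts(x.shape, beta_new, K, rng=rng)
    log_a = (beta_new - beta) * (like_pairs(x) - like_pairs(w))
    return beta_new if np.log(rng.random()) < log_a else beta
```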

    Simultaneous multi-band detection of Low Surface Brightness galaxies with Markovian modelling

    We present an algorithm for the detection of Low Surface Brightness (LSB) galaxies in images, called MARSIAA (MARkovian Software for Image Analysis in Astronomy), which is based on multi-scale Markovian modeling. MARSIAA can be applied simultaneously to different bands. It segments an image into a user-defined number of classes, according to their surface brightness and surroundings; typically, one or two classes contain the LSB structures. We have developed an algorithm, called DetectLSB, which allows the efficient identification of LSB galaxies from among the candidate sources selected by MARSIAA. To assess its robustness, the method was applied to a set of 18 B and I band images (covering 1.3 square degrees in total) of the Virgo cluster. To further assess its completeness, both MARSIAA/DetectLSB and SExtractor/DetectLSB were applied to search for (i) mock Virgo LSB galaxies inserted into a set of deep Next Generation Virgo Survey (NGVS) gri-band subimages and (ii) Virgo LSB galaxies identified by eye in a full set of NGVS square-degree gri images. MARSIAA/DetectLSB recovered ~20% more mock LSB galaxies and ~40% more LSB galaxies identified by eye than SExtractor/DetectLSB. With a 90% fraction of false positives from an entirely unsupervised pipeline, a completeness of 90% is reached for sources with r_e > 3" at a mean surface brightness level of mu_g = 27.7 mag/arcsec^2 and a central surface brightness of mu_g^0 = 26.7 mag/arcsec^2. About 10% of the false positives are artifacts, the rest being background galaxies. We have found our method to be complementary to the application of matched filters and an optimized use of SExtractor, and to have the following advantages: it is scale-free, can be applied simultaneously to several bands, and is well adapted for crowded regions on the sky.
    Comment: 39 pages, 18 figures, accepted for publication in A
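
    To give a flavour of the Markovian segmentation at the heart of MARSIAA (though not its actual multi-scale estimator), here is a toy single-scale sketch using iterated conditional modes: each pixel's class minimizes a Gaussian data term over all bands plus a Potts smoothness penalty over its four neighbours, so classes reflect both surface brightness and surroundings.

```python
import numpy as np

def icm_segment(img, K=3, beta=1.0, n_iter=10):
    """Toy single-scale Markovian segmentation via iterated conditional
    modes (ICM). img: (H, W, B) multi-band image; K classes."""
    H, W, B = img.shape
    flat = img.reshape(-1, B)
    # Initialize classes from quantiles of total brightness.
    bright = flat.sum(axis=1)
    edges = np.quantile(bright, np.linspace(0, 1, K + 1)[1:-1])
    labels = np.digitize(bright, edges)
    for _ in range(n_iter):
        lab2d = labels.reshape(H, W)
        # Class means over all bands (global mean if a class empties).
        mus = np.array([flat[labels == k].mean(axis=0) if (labels == k).any()
                        else flat.mean(axis=0) for k in range(K)])
        new = lab2d.copy()
        for i in range(H):
            for j in range(W):
                data = ((img[i, j] - mus) ** 2).sum(axis=1)   # (K,) data term
                nbrs = [lab2d[a, b] for a, b in
                        ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                        if 0 <= a < H and 0 <= b < W]
                smooth = np.array([sum(n != k for n in nbrs) for k in range(K)])
                new[i, j] = np.argmin(data + beta * smooth)   # local MAP update
        labels = new.ravel()
    return labels.reshape(H, W)
```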

    Determining the Success of NCAA Basketball Teams through Team Characteristics

    Every year, much of the nation becomes engulfed in the NCAA basketball postseason tournament, more affectionately known as “March Madness.” The tournament has earned the name because any team can win a single game and advance to the next round. The purpose of this study is to determine whether concrete statistical measures can be used to predict the final outcome of the tournament. The data collected in the study include 13 independent variables spanning the 2003-2004 season through the 2009-2010 season. Different tests were run in an attempt to achieve the most accurate predictive model. First, the data were input into Excel and ordinary least squares regressions were run for each year. Then the data were compiled into one file and an ordinary least squares regression was run on that collection of data in Excel. Next, the data were input into Minitab and a stepwise regression was run in order to keep only the significant independent variables. Following that, a regression analysis was run in Minitab. The coefficients from that regression analysis were input into a file with the 2009-2010 data in order to test the model’s predictions against the actual results. All of the models developed, except the one for the 2005-2006 season, were determined to be significant. Six independent variables were found to be significant. The final results showed that although the model developed through the study was significant, accurately predicting the tournament’s outcomes remains very difficult.
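
    The Excel/Minitab workflow described above maps onto a short script. As a hedged sketch (column names are hypothetical, since the abstract does not list the 13 variables), a backward-elimination stepwise OLS in the spirit of Minitab's procedure might look like:

```python
import pandas as pd
import statsmodels.api as sm

def backward_stepwise(df, target, alpha=0.05):
    """Backward-elimination stepwise OLS: repeatedly drop the least
    significant predictor until every remaining p-value is below alpha
    (in the spirit of Minitab's stepwise procedure)."""
    preds = [c for c in df.columns if c != target]
    while preds:
        model = sm.OLS(df[target], sm.add_constant(df[preds])).fit()
        pvals = model.pvalues.drop("const")
        worst = pvals.idxmax()
        if pvals[worst] <= alpha:
            return model            # all remaining predictors significant
        preds.remove(worst)
    return None

# Hypothetical usage -- file and column names are illustrative only:
# df = pd.read_csv("ncaa_2003_2010.csv")
# model = backward_stepwise(df, target="tournament_wins")
# print(model.summary())
```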