Granger causality-based information fusion applied to electrical measurements from power transformers
White Matter, Gray Matter and Cerebrospinal Fluid Segmentation from Brain 3D MRI Using B-UNET
The accurate segmentation of brain tissues in Magnetic Resonance (MR) images is an important step in the detection and treatment planning of brain diseases. Among brain tissues, Gray Matter, White Matter, and Cerebrospinal Fluid are commonly segmented for the diagnosis of Alzheimer's disease, and different algorithms for segmenting these tissues in MR image scans have been proposed over the years. With the rise of deep learning, many methods are now trained to learn important features and extract information from the data, leading to very promising segmentation results. In this work, we propose an effective approach to segmenting these three tissues in 3D brain MR images based on B-UNET. The method applies the Bitplane method in each convolution of the UNET model. We evaluated the proposed approach on two public databases with very promising results. (c) Springer Nature Switzerland AG 2019
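The abstract does not detail how the bitplanes feed into the convolutions, but the underlying decomposition is standard: each 8-bit intensity splits into eight binary planes. A minimal pure-Python sketch (the 2x2 "image" is an illustrative assumption, not data from the paper):

```python
def bitplanes(value, n_bits=8):
    """Decompose an 8-bit intensity into its binary bit-planes (MSB first)."""
    return [(value >> b) & 1 for b in range(n_bits - 1, -1, -1)]

# Decompose a tiny 2x2 "image" into 8 binary planes.
image = [[200, 35], [128, 7]]
planes = [[[bitplanes(px)[b] for px in row] for row in image]
          for b in range(8)]

# planes[0] is the most significant bit-plane: 1 wherever intensity >= 128.
print(planes[0])  # [[1, 0], [1, 0]]
```

In practice each binary plane would be stacked as an extra input channel before the convolution, which is one plausible reading of "using the Bitplane method in each convolution."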
Case-Based Statistical Learning: A Non-Parametric Implementation with a Conditional-Error Rate SVM
© 2013 IEEE. Machine learning has been successfully applied to many areas of science and engineering. Examples include time series prediction, optical character recognition, and signal and image classification in biomedical applications for diagnosis and prognosis. In the theory of semi-supervised learning, a training set and unlabeled data are employed to fit a prediction model, or learner, with the help of an iterative algorithm such as the expectation-maximization algorithm. In this paper, a novel non-parametric approach to so-called case-based statistical learning is proposed for a low-dimensional classification problem. This supervised feature selection scheme analyzes the discrete set of outcomes of the classification problem by hypothesis testing, and makes assumptions on these outcome values to obtain the most likely prediction model at the training stage. A novel prediction model is described in terms of the output scores of a confidence-based support vector machine classifier under class-hypothesis testing. For a more accurate prediction that takes the unlabeled points into account, the distribution of the unlabeled examples must be relevant to the classification problem. Estimating the error rates from a well-trained support vector machine allows us to propose a non-parametric approach that avoids the use of Gaussian density-function-based models in the likelihood ratio test.
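The case-based idea can be sketched as follows: for each candidate label of a new point, add the point with that hypothesized label, refit, and keep the hypothesis with the lower empirical error rate. This is only a toy illustration under assumptions: a 1-D nearest-mean classifier stands in for the paper's confidence-based SVM, and all data and helper names are invented for the example.

```python
def nearest_mean_predict(x, means):
    """Assign x to the class whose mean is closest (toy stand-in for an SVM)."""
    return min(means, key=lambda c: abs(x - means[c]))

def training_error(points, labels, means):
    """Empirical error rate of the nearest-mean rule on the labeled set."""
    wrong = sum(nearest_mean_predict(x, means) != y
                for x, y in zip(points, labels))
    return wrong / len(points)

def case_based_label(x_new, points, labels):
    """Test each class hypothesis for x_new; keep the one yielding the
    lower empirical error rate on the augmented training set."""
    best, best_err = None, float("inf")
    for hyp in sorted(set(labels)):
        pts, lbs = points + [x_new], labels + [hyp]
        means = {c: sum(p for p, l in zip(pts, lbs) if l == c) /
                    sum(1 for l in lbs if l == c) for c in set(lbs)}
        err = training_error(pts, lbs, means)
        if err < best_err:
            best, best_err = hyp, err
    return best

points = [0.0, 0.2, 0.9, 1.1]
labels = [0, 0, 1, 1]
print(case_based_label(1.0, points, labels))  # prints 1
```

The paper's actual scheme replaces the training-error criterion with conditional error rates estimated from SVM output scores, but the hypothesis-testing loop has the same shape.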
Statistical Agnostic Mapping: A framework in neuroimaging based on concentration inequalities
© 2020 The Authors. In the 1970s a novel branch of statistics emerged, focusing its effort on selecting a function for the pattern recognition problem that would fulfill a relationship between the quality of the approximation and its complexity. This theory is mainly devoted to problems of estimating dependencies with limited sample sizes, and comprises all the empirical out-of-sample generalization approaches, e.g. cross-validation (CV). In this paper a data-driven approach based on concentration inequalities is designed for testing competing hypotheses or comparing different models. In this sense we derive a Statistical Agnostic (non-parametric) Mapping (SAM) for neuroimages at the voxel or regional level which is able to: (i) relieve the problem of instability with limited sample sizes when estimating the actual risk via CV; and (ii) provide an alternative to family-wise error (FWE)-corrected p-value maps in inferential statistics for hypothesis testing. Using several neuroimaging datasets (containing large and small effects) and random task group analyses to compute empirical FWE rates, this novel framework yields a model validation method for small sample-to-dimension ratios, and a less conservative procedure than FWE p-value correction for determining significance maps from inferences made using small upper bounds on the actual risk.
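Bounds of the kind the abstract describes relate the actual risk to the empirical risk through a concentration inequality. A minimal sketch using Hoeffding's inequality for losses bounded in [0, 1] (the function name and the sample numbers are illustrative assumptions; the paper's own bounds may differ):

```python
import math

def hoeffding_upper_bound(empirical_risk, n, delta=0.05):
    """Upper bound on the actual risk R for a fixed model:
    R <= R_emp + sqrt(ln(1/delta) / (2n)), holding with probability
    at least 1 - delta, by Hoeffding's inequality for [0, 1]-bounded losses."""
    return empirical_risk + math.sqrt(math.log(1.0 / delta) / (2.0 * n))

# With 100 samples and 10% empirical error, the 95%-confidence bound:
bound = hoeffding_upper_bound(0.10, n=100, delta=0.05)
print(round(bound, 3))  # prints 0.222
```

A significance decision then compares such upper bounds across competing models or hypotheses instead of a CV point estimate, which is what makes the bound small enough to be useful only when n is not too small relative to the effect.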