
    Tsallis generalized entropy for Gaussian mixture model parameter estimation on brain segmentation application

    Among statistical models, Gaussian Mixture Models (GMMs) have been used in numerous applications to model data that are well described by a mixture of Gaussian curves. Several methods have been introduced to estimate the optimal parameters of a GMM fitted to the data, and the accuracy of these estimation methods is crucial for interpreting the data. In this paper, we propose a new approach that estimates the parameters of a GMM using critical points of the Tsallis entropy to adjust each parameter's accuracy. To evaluate the proposed method, seven GMMs of simulated random (noisy) samples generated in MATLAB were used; each simulated model was repeated 1000 times to generate 1000 random values obeying the GMM. In addition, five GMM-shaped samples extracted from magnetic resonance brain images were used, aiming at an image segmentation application. For comparison, Expectation-Maximization (EM), K-means, and Shannon's estimator were applied to the same datasets. The four estimation methods were evaluated using accuracy, the Akaike information criterion (AIC), the Bayesian information criterion (BIC), and the Mean Squared Error (MSE). For the simulated data, the mean accuracies of the Tsallis-estimator for the mean values, variances, and proportions were 99.9(±0.1), 99.8(±0.2), and 99.7(±0.3)%, respectively. For both datasets, the Tsallis-estimator accuracies were significantly higher than those of EM, K-means, and Shannon's estimator. By increasing the accuracy of the estimated parameters, the Tsallis-estimator can be used in statistical approaches and machine learning.
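    The Tsallis-based estimator itself is the paper's contribution and is not detailed in the abstract. As a hedged sketch of the ingredients named above, the snippet below computes the Tsallis entropy of a discrete distribution and fits a GMM with the EM baseline, reporting AIC and BIC; the q value, bin count, and two-component setup are illustrative assumptions.

```python
# Minimal sketch (not the paper's Tsallis-estimator): it only illustrates the
# Tsallis entropy functional and the EM baseline the abstract compares against.
# Names and parameters here (q=1.5, 50 bins, 2 components) are illustrative assumptions.
import numpy as np
from sklearn.mixture import GaussianMixture

def tsallis_entropy(p, q=1.5):
    """Tsallis entropy S_q = (1 - sum_i p_i**q) / (q - 1) of a discrete distribution."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    if np.isclose(q, 1.0):                          # q -> 1 recovers Shannon entropy
        return -np.sum(p * np.log(p))
    return (1.0 - np.sum(p ** q)) / (q - 1.0)

# Simulated two-component GMM sample, loosely mirroring the abstract's setup.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 1.0, 500), rng.normal(5.0, 2.0, 500)]).reshape(-1, 1)

# EM baseline: fit the GMM and report AIC/BIC, two of the criteria used in the paper.
gmm = GaussianMixture(n_components=2, random_state=0).fit(x)
print("EM means:", gmm.means_.ravel())
print("AIC:", gmm.aic(x), "BIC:", gmm.bic(x))

# Tsallis entropy of the empirical histogram (normalized to a probability vector).
hist, _ = np.histogram(x, bins=50)
print("Tsallis entropy (q=1.5):", tsallis_entropy(hist / hist.sum()))
```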

    Biomimetic phantom with anatomical accuracy for evaluating brain volumetric measurements with magnetic resonance imaging

    Purpose: Brain volumetric measurement (BVM) methods have been used to quantify brain tissue volumes from magnetic resonance imaging (MRI) when investigating abnormalities. Although BVM methods are widely used, they need to be evaluated to quantify their reliability. Currently, the gold-standard reference for evaluating a BVM is manual labeling, which is a time-consuming and expensive task, and the confidence level ascribed to it is not absolute. We describe and evaluate a biomimetic brain phantom as an alternative to manual validation of BVM. Methods: We printed a three-dimensional (3D) brain mold using an MRI scan of a three-year-old boy diagnosed with Sturge-Weber syndrome. We then prepared three different mixtures of styrene-ethylene/butylene-styrene gel and paraffin to mimic white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF), and filled the mold with these three mixtures in known volumes. We scanned the brain phantom using two MRI scanners, at 1.5 and 3.0 Tesla. We propose a new, challenging model for evaluating BVM, comprising the measured volumes of the phantom compartments and its MRI scans. We investigated the performance of an automatic BVM method, the expectation-maximization (EM) method, to estimate its accuracy. Results: The automatic BVM using the EM method showed relative errors (with respect to the phantom volumes) of 0.08, 0.03, and 0.13 (±0.03 uncertainty) percent for the GM, CSF, and WM volumes, respectively, in good agreement with results reported for manual segmentation. Conclusions: The phantom can be a potential quantifier for a wide range of segmentation methods.
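    As a minimal sketch of the evaluation step described above, the snippet below computes a tissue volume from a labeled segmentation and its relative error against a known phantom volume; the label values, voxel size, and volumes are hypothetical placeholders, not the study's numbers.

```python
# Minimal sketch of the evaluation step only: relative error of a segmented tissue
# volume against the known phantom volume. All numbers here are toy placeholders.
import numpy as np

def tissue_volume_ml(label_map, label, voxel_size_mm=(1.0, 1.0, 1.0)):
    """Volume of one tissue class in millilitres from a labeled segmentation."""
    voxel_ml = np.prod(voxel_size_mm) / 1000.0      # mm^3 -> mL
    return np.count_nonzero(label_map == label) * voxel_ml

# label_map would come from the automatic BVM (e.g., EM segmentation of the phantom MRI).
label_map = np.zeros((10, 10, 10), dtype=int)       # toy stand-in for a real segmentation
label_map[:5] = 1                                    # pretend label 1 = white matter

known_wm_ml = 0.55                                   # hypothetical phantom-mixture volume
estimated_wm_ml = tissue_volume_ml(label_map, label=1)
relative_error = abs(estimated_wm_ml - known_wm_ml) / known_wm_ml
print(f"WM relative error: {relative_error:.2%}")
```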

    Prototype of a low-cost 3D breast ultrasound imaging system

    This work describes the setup of a new acquisition system for 3D ultrasound (B-mode) images for breast tomography. Since early and precise diagnosis of breast lesions leads to more efficient treatment and saves lives, we aim for more precise and less painful exams with a reduced dose to the patient. To this end, a low-cost scanner mechanism was built to accommodate the breast under water while the patient lies on a bed, with a robotic arm guiding the ultrasound probe to acquire 2D images. A 3D image is then reconstructed from the 2D images to render the mammary volume and search for lesions. The low-cost scanner was built using a regular ultrasound machine with a linear probe, with the main controls implemented on an Arduino Uno. We compared the acquired phantom images with gold-standard images for mammary tissue diagnostics, i.e., Computed Tomography and Magnetic Resonance Imaging. The study was evaluated using a paraffin-gel and mineral oil control phantom. Results show that the proposed module is convincing enough to be used in a local hospital as the next step of this study.
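    As a rough sketch of the reconstruction step mentioned above, the snippet below stacks parallel 2D B-mode frames into a 3D volume and resamples it toward isotropic voxels; the frame spacing, pixel size, and use of linear interpolation are assumptions, since the prototype's acquisition geometry is not given in the abstract.

```python
# Minimal sketch of the reconstruction idea only: stacking parallel 2D B-mode frames
# into a 3D volume. Frame spacing and array shapes are illustrative assumptions; the
# prototype's actual acquisition geometry and interpolation are not described here.
import numpy as np
from scipy.ndimage import zoom

def stack_frames(frames, step_mm, pixel_mm):
    """Stack parallel B-mode frames and resample so slice spacing matches pixel size."""
    volume = np.stack(frames, axis=0)                # (n_frames, rows, cols)
    scale = (step_mm / pixel_mm, 1.0, 1.0)
    return zoom(volume, scale, order=1)              # linear interpolation between slices

frames = [np.random.rand(256, 256) for _ in range(60)]   # toy stand-in for acquired images
volume = stack_frames(frames, step_mm=0.5, pixel_mm=0.2)
print(volume.shape)
```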

    Quantitative Estimation of the Nonstationary Behavior of Neural Spontaneous Activity

    The “stationarity time” (ST) of neuronal spontaneous activity signals of rat embryonic cortical cells, measured by means of a planar Multielectrode Array (MEA), was estimated based on Detrended Fluctuation Analysis (DFA). The ST is defined as the mean time interval during which the signal under analysis keeps its statistical characteristics constant. An upgrade to the DFA method is proposed, leading to a more accurate procedure. A strong statistical correlation was obtained between the ST, estimated from the Absolute Amplitude of Neural Spontaneous Activity (AANSA) signals, and the Mean Interburst Interval (MIB), calculated by classical spike-sorting methods applied to the interspike interval time series. Consequently, the MIB may be estimated by means of the ST, which additionally carries relevant biological information arising from basal activity. The results indicate that the average ST of MEA signals lies between 2 and 3 seconds. Furthermore, it was shown that a neural culture presents signals with different statistical behaviors depending on the relative geometric position of each electrode and the cells. Such behaviors may disclose physiological phenomena, possibly associated with different adaptation/facilitation mechanisms.
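    The paper's upgraded DFA procedure and its stationarity-time estimate are not described in the abstract; the snippet below is a sketch of standard DFA only (fluctuation function F(n) and scaling exponent alpha), with window sizes and the averaging of per-window RMS values chosen for simplicity.

```python
# Minimal sketch of standard DFA only: the fluctuation function F(n) and the scaling
# exponent alpha. The paper's DFA upgrade and ST estimation are not reproduced here.
import numpy as np

def dfa(x, scales):
    """Return the DFA fluctuation F(n) for each window size n in `scales`."""
    y = np.cumsum(x - np.mean(x))                    # integrated (profile) series
    F = []
    for n in scales:
        n_windows = len(y) // n
        segments = y[:n_windows * n].reshape(n_windows, n)
        t = np.arange(n)
        rms = []
        for seg in segments:
            coef = np.polyfit(t, seg, 1)             # local linear trend
            rms.append(np.sqrt(np.mean((seg - np.polyval(coef, t)) ** 2)))
        F.append(np.mean(rms))
    return np.asarray(F)

x = np.random.randn(5000)                            # white noise: expected alpha ~ 0.5
scales = np.array([16, 32, 64, 128, 256])
F = dfa(x, scales)
alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
print("DFA exponent alpha ~", round(alpha, 2))
```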

    Quantification of fractal dimension and Shannon’s entropy in histological diagnosis of prostate cancer

    Background: Prostate cancer is a serious public health problem that affects quality of life and has a significant mortality rate. The aim of the present study was to quantify the fractal dimension and Shannon’s entropy in the histological diagnosis of prostate cancer. Methods: Thirty-four patients with prostate cancer aged 50 to 75 years who had undergone radical prostatectomy participated in the study. Histological slides of normal (N), hyperplastic (H) and tumor (T) areas of the prostate were digitally photographed at three different magnifications (40x, 100x and 400x) and analyzed. The fractal dimension (FD), Shannon’s entropy (SE) and number of cell nuclei (NCN) in these areas were compared. Results: FD analysis demonstrated the following significant differences between groups: T vs. N and H vs. N (p < 0.05) at a magnification of 40x; T vs. N (p < 0.01) at 100x; and H vs. N (p < 0.01) at 400x. SE analysis revealed the following significant differences between groups: T vs. H and T vs. N (p < 0.05) at 100x; and T vs. H and T vs. N (p < 0.001) at 400x. NCN analysis demonstrated the following significant differences between groups: T vs. H and T vs. N (p < 0.05) at 40x; T vs. H and T vs. N (p < 0.0001) at 100x; and T vs. H and T vs. N (p < 0.01) at 400x. Conclusions: The quantification of the FD and SE, together with the number of cell nuclei, has potential clinical applications in the histological diagnosis of prostate cancer.
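    As a hedged illustration of the two quantifiers used above, the snippet below estimates a box-counting fractal dimension and the Shannon entropy of a gray-level histogram on a toy image; the thresholding, box sizes, and binning are assumptions, and the study's nuclei-counting pipeline is not reproduced.

```python
# Minimal sketch of the two quantifiers named in the abstract: box-counting fractal
# dimension and Shannon entropy of the gray-level histogram, on a toy image.
import numpy as np

def box_counting_dimension(binary_img, box_sizes=(2, 4, 8, 16, 32)):
    """Estimate the fractal dimension as the slope of log N(s) versus log(1/s)."""
    counts = []
    for s in box_sizes:
        h = binary_img.shape[0] // s * s
        w = binary_img.shape[1] // s * s
        blocks = binary_img[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(np.count_nonzero(blocks.any(axis=(1, 3))))   # occupied boxes
    slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
    return slope

def shannon_entropy(gray_img, bins=256):
    """Shannon entropy (bits) of the image gray-level histogram."""
    hist, _ = np.histogram(gray_img, bins=bins, range=(0, 255))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

gray = np.random.randint(0, 256, (512, 512))         # toy stand-in for a histology image
binary = gray > 128                                    # hypothetical nuclei mask
print("FD ~", round(box_counting_dimension(binary), 2))
print("SE ~", round(shannon_entropy(gray), 2))
```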

    Evaluation of physiologic complexity in time series using generalized sample entropy and surrogate data analysis

    Complexity in time series is an intriguing feature of living dynamical systems, with potential use for identification of system state. Although various methods have been proposed for measuring physiologic complexity, uncorrelated time series are often assigned high values of complexity, erroneously classifying them as complex physiological signals. Here, we propose and discuss a method for complex system analysis based on a generalized statistical formalism and surrogate time series. Sample entropy (SampEn) was rewritten, inspired by the Tsallis generalized entropy, as a function of the q parameter (qSampEn). qSDiff curves were calculated, which consist of the differences between the qSampEn of the original and surrogate series. We evaluated qSDiff for 125 real heart rate variability (HRV) dynamics, divided into groups of 70 healthy, 44 congestive heart failure (CHF), and 11 atrial fibrillation (AF) subjects, and for simulated series of stochastic and chaotic processes. The evaluations showed that, for nonperiodic signals, qSDiff curves have a maximum point (qSDiff(max)) at q ≠ 1. The values of q at which the maximum occurs and at which qSDiff is zero were also evaluated. Only the qSDiff(max) values were capable of distinguishing the HRV groups (p-values of 5.10 × 10^-3, 1.11 × 10^-7, and 5.50 × 10^-7 for healthy vs. CHF, healthy vs. AF, and CHF vs. AF, respectively), consistent with the concept of physiologic complexity, and suggest a potential use for chaotic system analysis. (C) 2012 American Institute of Physics. [http://dx.doi.org/10.1063/1.4758815]
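    The exact qSampEn and qSDiff definitions are not given in the abstract; the sketch below assumes one plausible reading, replacing the natural logarithm in SampEn with the Tsallis q-logarithm and taking qSDiff as the surrogate-minus-original difference, with the sign convention, template length, and tolerance all being illustrative choices.

```python
# Minimal sketch under stated assumptions: standard SampEn with the natural log
# replaced by the Tsallis q-logarithm, and qSDiff taken as shuffled-surrogate minus
# original. The paper's exact qSampEn/qSDiff definitions may differ from this reading.
import numpy as np

def q_log(x, q):
    """Tsallis q-logarithm: ln_q(x) = (x**(1-q) - 1) / (1 - q), with ln(x) as q -> 1."""
    return np.log(x) if np.isclose(q, 1.0) else (x ** (1.0 - q) - 1.0) / (1.0 - q)

def q_sampen(x, m=2, r_frac=0.2, q=1.0):
    """Sample entropy with the q-logarithm: -ln_q(A/B) for template length m."""
    x = np.asarray(x, float)
    r = r_frac * np.std(x)

    def count_pairs(mm):
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        dist = np.max(np.abs(templates[:, None] - templates[None, :]), axis=2)
        return (np.sum(dist <= r) - len(templates)) / 2       # exclude self-matches

    B, A = count_pairs(m), count_pairs(m + 1)
    return -q_log(A / B, q)

rng = np.random.default_rng(0)
x = np.sin(np.linspace(0, 40 * np.pi, 600)) + 0.3 * rng.standard_normal(600)
surrogate = rng.permutation(x)                                 # shuffled surrogate
for q in (0.5, 1.0, 1.5):
    print(q, q_sampen(surrogate, q=q) - q_sampen(x, q=q))      # one point of a qSDiff curve
```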

    Changes in the Complexity of Heart Rate Variability with Exercise Training Measured by Multiscale Entropy-Based Measurements

    Quantifying complexity from heart rate variability (HRV) series is a challenging task, and multiscale entropy (MSE), along with its variants, has been demonstrated to be one of the most robust approaches to achieve this goal. Although physical training is known to be beneficial, there is little information about the long-term complexity changes induced by physical conditioning. The present study aimed to quantify the changes in physiological complexity elicited by physical training through multiscale entropy-based complexity measurements. Rats were subjected to a protocol of medium-intensity training (n = 13) or a sedentary protocol (n = 12). One-hour HRV series were obtained from all conscious rats five days after the experimental protocol. We estimated MSE, multiscale dispersion entropy (MDE) and multiscale SDiffq from the HRV series. Multiscale SDiffq is a recent approach that accounts for entropy differences between a given time series and its shuffled dynamics. From SDiffq, three attributes (q-attributes) were derived, namely SDiffq_max, q_max and q_zero. MSE, MDE and the multiscale q-attributes presented similar profiles, except for SDiffq_max. q_max showed significant differences between the trained and sedentary groups on time scales 6 to 20. The results suggest that physical training increases the system complexity and that the multiscale q-attributes provide valuable information about physiological complexity.
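    As a small sketch of two pieces named above, the snippet below shows standard MSE coarse-graining and the extraction of the three q-attributes (SDiffq_max, q_max, q_zero) from a qSDiff curve; the curve itself is a toy placeholder rather than one computed from data.

```python
# Minimal sketch of two components named in the abstract: MSE-style coarse-graining
# and the extraction of the three q-attributes from a qSDiff curve. The qSDiff curve
# below is a toy placeholder, not one computed from an HRV series.
import numpy as np

def coarse_grain(x, scale):
    """Non-overlapping averages of length `scale` (standard MSE coarse-graining)."""
    n = len(x) // scale
    return np.asarray(x[:n * scale]).reshape(n, scale).mean(axis=1)

def q_attributes(q_values, qsdiff):
    """SDiffq_max (peak value), q_max (peak location), q_zero (first zero crossing)."""
    i = int(np.argmax(qsdiff))
    crossings = np.where(np.diff(np.sign(qsdiff)) != 0)[0]
    q_zero = q_values[crossings[0]] if crossings.size else np.nan
    return qsdiff[i], q_values[i], q_zero

q_values = np.linspace(-1.0, 3.0, 81)
qsdiff = np.exp(-(q_values - 0.5) ** 2) - 0.3        # toy curve with a peak and a zero crossing
print(q_attributes(q_values, qsdiff))

rr = np.random.default_rng(1).standard_normal(600)   # toy stand-in for an RR-interval series
print(len(coarse_grain(rr, 5)))                       # coarse-grained length at time scale 5
```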