
    A novel approach for breast ultrasound classification using two-dimensional empirical mode decomposition and multiple features

    Aim: Breast cancer is a leading cause of female mortality worldwide, underscoring the critical need for precise and efficient diagnostic techniques. This research enriches the body of knowledge on breast cancer classification from breast ultrasound (BUS) images by introducing a novel method rooted in two-dimensional empirical mode decomposition (biEMD). The classification performance of various texture features of BUS images and of their corresponding biEMD sub-bands is evaluated. Methods: A total of 437 benign and 210 malignant breast ultrasound images were preprocessed and decomposed into three biEMD sub-bands. Gray Level Co-occurrence Matrix (GLCM), Local Binary Pattern (LBP), and Histogram of Oriented Gradients (HOG) features were extracted, and feature selection was performed using the least absolute shrinkage and selection operator (LASSO). Machine learning techniques, including artificial neural networks (ANN), k-nearest neighbors (kNN), ensemble methods, and statistical discriminant analysis, were used to classify benign and malignant cases. Classification performance, measured by the area under the curve (AUC), accuracy, and F1 score, was evaluated with 10-fold cross-validation. Results: The ANN with hybrid features (GLCM+LBP+HOG) extracted from the biEMD sub-bands of BUS images achieved excellent performance: an AUC of 0.9945, an accuracy of 0.9644, and an F1 score of 0.9668. Conclusion: The results demonstrate the effectiveness of the biEMD method for high-performance classification of breast tumor types from ultrasound images.
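    A minimal sketch of the feature-extraction and classification pipeline described above, assuming Python with scikit-image and scikit-learn. The biEMD decomposition has no standard library implementation and is omitted; `texture_features` is applied to the raw image as a stand-in for each sub-band, and the hyperparameters (LBP radius, HOG cell size, LASSO alpha, ANN size) are illustrative choices, not the paper's values.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops, local_binary_pattern, hog
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import Lasso
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def texture_features(img):
    """GLCM + LBP + HOG feature vector for one uint8 grayscale image
    (applied here to the raw image as a stand-in for a biEMD sub-band)."""
    glcm = graycomatrix(img, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    glcm_feats = np.hstack([graycoprops(glcm, p).ravel()
                            for p in ("contrast", "homogeneity",
                                      "energy", "correlation")])
    lbp = local_binary_pattern(img, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    hog_feats = hog(img, orientations=9, pixels_per_cell=(16, 16),
                    cells_per_block=(2, 2))
    return np.hstack([glcm_feats, lbp_hist, hog_feats])

# LASSO feature selection feeding a small ANN, scored with 10-fold CV.
clf = make_pipeline(StandardScaler(),
                    SelectFromModel(Lasso(alpha=0.01)),
                    MLPClassifier(hidden_layer_sizes=(50,), max_iter=1000))
# X = np.vstack([texture_features(im) for im in images])  # images: BUS set
# print(cross_val_score(clf, X, y, cv=10, scoring="roc_auc").mean())
```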

    Neutro-Connectedness Theory, Algorithms and Applications

    Connectedness is an important topological property and has been widely studied in digital topology. However, three main challenges arise in applying connectedness to real-world problems: (1) definitions of connectedness based on classic and fuzzy logic cannot model the “hidden factors” that influence decision-making; (2) these definitions are too general to solve complex problems; and (3) many measurements of connectedness depend heavily on the shape (spatial distribution of vertices) of the graph and violate the intuitive idea of connectedness. This research addressed these challenges by redesigning the connectedness theory, developing fast algorithms for computing connectedness, and applying the newly proposed theory and algorithms to real problems. The proposed Neutro-Connectedness (NC) generalizes the conventional definitions of connectedness, models uncertainty, and describes the relationship between the part and the whole. Using a dynamic programming strategy, a fast algorithm was proposed to calculate NC for general datasets; beyond computing the NC map, the output NC forest can reveal a dataset’s topological structure with respect to connectedness. In the first application, interactive image segmentation, two approaches were proposed to address the two most difficult challenges: dependence on user interaction and the intensity of that interaction. The first approach, named NC-Cut, models global topological properties among image regions and reduces the dependence of segmentation performance on the appearance models generated by user interactions; it is less sensitive to the initial region of interest (ROI) than four state-of-the-art ROI-based methods. The second approach, named EISeg, provides the user with NC-based visual cues that guide interaction toward the regions where it will produce the best segmentation results, greatly reducing the interaction required. In the second application, NC was used to address the weak-boundary problem in breast ultrasound image segmentation. The approach models the indeterminacy resulting from weak boundaries better than fuzzy connectedness, and it achieved more accurate and robust results on our dataset of 131 breast tumor cases.
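    The NC computation itself is specific to this thesis, but the dynamic-programming idea it builds on can be illustrated with the classical fuzzy-connectedness map that NC generalizes: the strength of a path is its weakest affinity, and a Dijkstra-style sweep propagates the best path strength outward from a seed. A minimal sketch in Python; the intensity-similarity affinity and `sigma` are illustrative assumptions, and NC's indeterminacy component is not modeled here.

```python
import heapq
import numpy as np

def fuzzy_connectedness(img, seed, sigma=0.1):
    """Connectedness map of `img` (2D float array) to the seed pixel (y, x)."""
    h, w = img.shape
    conn = np.zeros((h, w))
    conn[seed] = 1.0
    heap = [(-1.0, seed)]                      # max-heap via negated strengths
    while heap:
        neg_c, (y, x) = heapq.heappop(heap)
        c = -neg_c
        if c < conn[y, x]:                     # stale queue entry, skip
            continue
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                # Edge affinity: high when neighboring intensities are similar.
                aff = np.exp(-((img[y, x] - img[ny, nx]) ** 2) / (2 * sigma**2))
                cand = min(c, aff)             # path strength = weakest link
                if cand > conn[ny, nx]:
                    conn[ny, nx] = cand
                    heapq.heappush(heap, (-cand, (ny, nx)))
    return conn
```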

    Optical coherence tomography assessment of vessel wall degradation in thoracic aortic aneurysms

    Optical coherence tomography images of human thoracic aorta from aneurysms reveal elastin disorders and smooth muscle cell alterations when the media layer of the aortic wall is visualized. These disorders can be employed as indicators of wall degradation and therefore become a hallmark for diagnosing aneurysm risk under intraoperative conditions. Two approaches are followed to evaluate this risk: analysis of the reflectivity decay along the penetration depth, and textural analysis of a two-dimensional spatial distribution of the aortic wall backscattering. Both techniques require preprocessing stages to identify the air–sample interface and to segment the media layer. Results show that the alterations in the media layer of the aortic wall are highlighted better by the textural approach, which also agrees with a semiquantitative histopathological grading of the degree of wall degradation. The correlation feature of the co-occurrence matrix attains a sensitivity of 0.906 and a specificity of 0.864 when automatic aneurysm diagnosis is evaluated with a receiver operating characteristic curve.
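    A minimal sketch of the first approach, the reflectivity-decay analysis, assuming Python with NumPy. The single-exponential model I(z) = I0·exp(-μz), the linear-scale input, and the name `attenuation_per_aline` are assumptions for illustration; the study's actual decay model is not specified in the abstract.

```python
import numpy as np

def attenuation_per_aline(media, dz=0.01):
    """media: 2D array (depth x lateral) of linear-scale OCT intensities from
    the segmented media layer. Returns one attenuation estimate per A-line."""
    depth = np.arange(media.shape[0]) * dz          # depth axis, e.g. in mm
    log_i = np.log(np.clip(media, 1e-12, None))     # log-intensity, guard zeros
    # Least-squares line per column: slope of log-intensity vs depth is -mu.
    slopes = np.polyfit(depth, log_i, deg=1)[0]
    return -slopes
```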

    Independent component analysis (ICA) applied to ultrasound image processing and tissue characterization

    Speckle, a complicated and ubiquitous phenomenon in ultrasound imaging, can be treated either as noise to be reduced or as a source of diagnostic information that reveals the underlying properties of tissue. This study investigated the application of Independent Component Analysis (ICA), a relatively recent statistical signal processing tool, to both speckle texture analysis and despeckling of B-mode ultrasound images. Higher-order statistics are believed to carry information about speckle texture beyond that provided by first- and second-order statistics alone. However, the higher-order statistics of speckle texture are still not well understood and are very difficult to model analytically, and any direct computation with them is prohibitively expensive. Many conventional ultrasound speckle texture analysis algorithms use only first- or second-order statistics, while many multichannel filtering approaches use predefined analytical filters that are not adaptive to the data. In this study, an ICA-based multichannel filtering texture analysis algorithm, which exploits higher-order statistics while adapting to the data, was proposed and tested on numerically simulated homogeneous speckle textures. The ICA filters were learned directly from the training images. Histogram regularization was applied to make the speckle images quasi-stationary in the wide sense, so that an ICA algorithm could be applied. Both Principal Component Analysis (PCA) and a greedy algorithm were used to reduce the dimension of the feature space. Support Vector Machines (SVM) with a Radial Basis Function (RBF) kernel were chosen as the classifier to achieve the best classification accuracy. Several representative conventional methods, based on both low- and high-order statistics and covering both filtering and non-filtering approaches, were chosen for the comparison study. The numerical experiments showed that the proposed ICA-based algorithm outperforms the comparison algorithms in many cases. In two-component texture segmentation experiments, the proposed algorithm showed a strong capability to segment two visually very similar yet different texture regions with rather fuzzy boundaries and almost identical means and variances. By simulating speckle whose first-order statistics approach the Rayleigh model gradually from different non-Rayleigh models, the experiments reveal, to some extent, how the behavior of higher-order statistics changes with the underlying tissue properties. When the speckle approaches the Rayleigh model, both second- and higher-order statistics lose their texture differentiation capability; when the speckle follows non-Rayleigh models, methods based on higher-order statistics show a strong advantage over those based solely on first- or second-order statistics. The proposed algorithm may find clinical application in the early detection of soft tissue disease and may also help clarify the ultrasound speckle phenomenon from the perspective of higher-order statistics.

    For the despeckling problem, an algorithm was proposed that adapts the ICA Sparse Code Shrinkage (ICA-SCS) method to B-mode ultrasound image despeckling by applying an appropriate preprocessing step proposed by other researchers. The preprocessing step makes the speckle noise much closer to true white Gaussian noise (WGN) and hence more amenable to a denoising algorithm such as ICA-SCS, which was designed strictly for additive WGN. A discussion is given of the various ways of obtaining noise-free training image samples. The experimental results showed that the proposed method outperforms several classical methods chosen for comparison, including first- and second-order statistics based methods (such as the Wiener filter) and multichannel filtering methods (such as wavelet shrinkage), in both speckle reduction and edge preservation.
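    A minimal sketch of the ICA-based multichannel filtering idea, assuming Python with scikit-learn: learn data-adaptive filters from training patches with FastICA (whitening reduces the dimension internally), then use filter-response energies as texture features for an RBF-kernel SVM. Patch size, filter count, and the energy pooling are illustrative choices; the thesis's histogram regularization and greedy selection steps are omitted.

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.feature_extraction.image import extract_patches_2d
from sklearn.svm import SVC

def ica_texture_features(train_imgs, imgs, patch=8, n_filters=16):
    """Learn ICA filters from training patches; return per-image energy features."""
    # Sample patches from the training images and remove each patch's mean.
    patches = np.vstack([
        extract_patches_2d(im, (patch, patch), max_patches=2000,
                           random_state=0).reshape(-1, patch * patch)
        for im in train_imgs]).astype(float)
    patches -= patches.mean(axis=1, keepdims=True)
    ica = FastICA(n_components=n_filters, whiten="unit-variance",
                  random_state=0).fit(patches)
    filters = ica.components_                          # learned filter bank
    feats = []
    for im in imgs:
        p = extract_patches_2d(im, (patch, patch), max_patches=2000,
                               random_state=0).reshape(-1, patch * patch)
        responses = p.astype(float) @ filters.T        # multichannel filtering
        feats.append(np.sqrt((responses ** 2).mean(axis=0)))  # channel energies
    return np.array(feats)

# svm = SVC(kernel="rbf").fit(ica_texture_features(train_imgs, train_imgs),
#                             labels)   # train_imgs, labels: hypothetical data
```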

    Novel Hypertrophic Cardiomyopathy Diagnosis Index Using Deep Features and Local Directional Pattern Techniques

    Hypertrophic cardiomyopathy (HCM) is a genetic disorder that exhibits a wide spectrum of clinical presentations, including sudden death, which early diagnosis and intervention may avert. Left ventricular hypertrophy on heart imaging is an important diagnostic criterion for HCM, and the most common imaging modality is heart ultrasound (US). US is operator-dependent, and its interpretation is subject to human error and variability. We proposed an automated computer-aided diagnostic tool to discriminate HCM from healthy subjects on US images. We used the local directional pattern and the ResNet-50 pretrained network to classify heart US images acquired from 62 known HCM patients and 101 healthy subjects. Deep features were ranked using Student's t-test, and the most significant feature (SigFea) was identified. An integrated index derived from the simulation was defined for each subject as 100·log₁₀(SigFea/√2), and a diagnostic threshold was empirically calculated as the mean of the minimum integrated index among HCM subjects and the maximum among healthy subjects. An integrated index above a threshold of 0.5 separated HCM from healthy subjects with 100% accuracy on our test dataset.
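    A minimal sketch of the index-and-threshold step described above, assuming Python with SciPy. Extraction of the ResNet-50 deep features is not shown; `feats` (subjects × features, assumed positive-valued so the logarithm is defined) and `labels` are hypothetical inputs.

```python
import numpy as np
from scipy.stats import ttest_ind

def integrated_index(feats, labels):
    """feats: (n_subjects, n_features) deep-feature matrix; labels: 1 = HCM."""
    t, _ = ttest_ind(feats[labels == 1], feats[labels == 0], axis=0)
    sigfea = feats[:, np.argmax(np.abs(t))]       # most significant feature
    return 100.0 * np.log10(sigfea / np.sqrt(2))  # per-subject integrated index

# idx = integrated_index(feats, labels)
# Threshold as described: mean of the HCM minimum and the healthy maximum.
# threshold = 0.5 * (idx[labels == 1].min() + idx[labels == 0].max())
```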

    Two-Dimensional EspEn: A New Approach to Analyze Image Texture by Irregularity

    Image processing plays an important role in various industries, where the main challenge is to extract specific features from images. Texture, in particular, characterizes the occurrence of patterns along the spatial distribution of pixel intensities and has been applied in classification and segmentation tasks. Several feature extraction methods have been proposed in recent decades, but few of them rely on entropy, a measure of uncertainty, and entropy algorithms have been little explored for bidimensional data. There is nevertheless growing interest in algorithms that address current limitations: Shannon entropy does not consider spatial information, and SampEn2D generates unreliable values for small image sizes. We introduce EspEn (Espinosa Entropy), an algorithm to measure the irregularity present in two-dimensional data; its calculation requires setting three parameters: m (length of the square window), r (tolerance threshold), and ρ (percentage of similarity). Three experiments were performed: the first two used simulated images contaminated with different noise levels, and the last used grayscale images from the Normalized Brodatz Texture database (NBT). First, we compared the performance of EspEn against Shannon entropy and SampEn2D. Second, we evaluated the dependence of EspEn on variations of the parameters m, r, and ρ. Third, we evaluated the EspEn algorithm on NBT images. The results revealed that EspEn can discriminate images with different sizes and degrees of noise. EspEn thus provides an alternative algorithm to quantify the irregularity in 2D data; the recommended parameters for best performance are m = 3, r = 20, and ρ = 0.7.
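    A minimal sketch of an EspEn-style irregularity measure, assuming Python with NumPy and based only on the parameter description above (m: window side, r: tolerance, ρ: fraction of matching pixels). The published definition may differ in detail; this is an illustrative reading, not a reference implementation, and the pairwise comparison is quadratic in the number of windows, so it suits only small images.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def espen_like(img, m=3, r=20, rho=0.7):
    """Irregularity of a 2D grayscale array under an EspEn-style reading."""
    # All overlapping m x m windows, flattened to vectors.
    win = sliding_window_view(img.astype(float), (m, m)).reshape(-1, m * m)
    n = len(win)
    similar = 0
    for i in range(n - 1):
        # Two windows "match" if at least rho of their pixel pairs differ <= r.
        frac = (np.abs(win[i + 1:] - win[i]) <= r).mean(axis=1)
        similar += int((frac >= rho).sum())
    total = n * (n - 1) // 2
    # Irregularity: negative log-probability that two random windows match.
    return -np.log(similar / total) if similar else np.inf
```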