
    Feature Selection Mammogram based on Breast Cancer Mining

    Very dense breast tissue in mammogram images often makes it difficult for radiologists to interpret mammography objectively and accurately. One of the key success factors of a computer-aided diagnosis (CADx) system is the use of the right features. This research therefore emphasizes the feature selection process by performing data mining on the results of mammogram image feature extraction. Two algorithms are used to perform the mining: decision tree and rule induction. The selected features produced by these algorithms are then tested with three classification algorithms — k-nearest neighbors, decision tree, and naive Bayes — under a 10-fold cross-validation scheme with stratified sampling. Five descriptors emerge as the best features contributing to the classification of benign and malignant lesions: slice, integrated density, area fraction, modal gray value, and center of mass. The best classification results based on these five features are produced by the decision tree algorithm, with accuracy, sensitivity, specificity, FPR, and TPR of 93.18%, 87.5%, 3.89%, 6.33%, and 92.11%, respectively.
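The evaluation metrics quoted above (accuracy, sensitivity/TPR, specificity, FPR) all follow from the binary confusion matrix. A minimal, paper-independent sketch — the labels and toy data are illustrative, not from the study:

```python
def confusion_metrics(y_true, y_pred, positive="malignant"):
    """Compute accuracy, sensitivity (TPR), specificity, and FPR
    from two lists of binary labels. Label names are illustrative."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    tn = sum(t != positive and p != positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    n = len(y_true)
    return {
        "accuracy": (tp + tn) / n,
        "sensitivity": tp / (tp + fn) if tp + fn else 0.0,  # TPR
        "specificity": tn / (tn + fp) if tn + fp else 0.0,
        "fpr": fp / (fp + tn) if fp + tn else 0.0,          # 1 - specificity
    }

# toy example: four lesions, one malignant case missed
truth = ["malignant", "benign", "malignant", "benign"]
pred  = ["malignant", "benign", "benign", "benign"]
m = confusion_metrics(truth, pred)
```

Note that specificity and FPR are complementary (FPR = 1 − specificity), which is a useful sanity check when reading reported metric tables.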

    Comparative Study on Local Binary Patterns for Mammographic Density and Risk Scoring

    Breast density is considered to be one of the major risk factors in developing breast cancer. High breast density can also affect the accuracy of mammographic abnormality detection due to the breast tissue characteristics and patterns. We reviewed variants of local binary pattern descriptors, which are widely used as texture descriptors for local feature extraction, to classify breast tissue. In our study, we compared the classification results for variants of local binary patterns such as classic LBP (Local Binary Pattern), ELBP (Elliptical Local Binary Pattern), Uniform ELBP, LDP (Local Directional Pattern), and M-ELBP (Mean-ELBP). A wider comparison with alternative texture analysis techniques was conducted to investigate the potential of LBP variants in density classification. In addition, we investigated the effect on classification of computing descriptors for the fibroglandular disk region versus the whole breast region. We also studied the effect of the Region-of-Interest (ROI) size and location, the descriptor size, and the choice of classifier. The classification results were evaluated on the MIAS database using a ten-run ten-fold cross-validation approach. The experimental results showed that the Elliptical Local Binary Pattern descriptors and Local Directional Patterns extracted the most relevant features for mammographic tissue classification, indicating the relevance of directional filters. Similarly, the study showed that classification of features from ROIs of the fibroglandular disk region performed better than classification based on the whole breast region.
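To illustrate the classic LBP operator that the variants above build on: each pixel's 3x3 neighborhood is thresholded against the center and read off as an 8-bit code. The clockwise sampling order below is one common convention, not necessarily the exact one used in the paper:

```python
def lbp_code(patch):
    """Classic 3x3 LBP: threshold the 8 neighbours against the centre
    pixel and pack the results, clockwise from the top-left, into an
    8-bit code. `patch` is a 3x3 list of gray levels."""
    c = patch[1][1]
    # clockwise order starting at the top-left neighbour
    neighbours = [patch[0][0], patch[0][1], patch[0][2],
                  patch[1][2], patch[2][2], patch[2][1],
                  patch[2][0], patch[1][0]]
    code = 0
    for bit, v in enumerate(neighbours):
        if v >= c:          # neighbour at least as bright as the centre
            code |= 1 << bit
    return code

# toy neighbourhood: centre value 6
patch = [[6, 5, 2],
         [7, 6, 1],
         [9, 8, 7]]
code = lbp_code(patch)
```

A histogram of these codes over an image (or ROI) gives the texture descriptor; the ELBP, LDP, and M-ELBP variants change the sampling geometry or the thresholding rule while keeping this histogram-of-codes structure.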

    Breast Density Estimation and Micro-Calcification Classification


    WiFi-Based Human Activity Recognition Using Attention-Based BiLSTM

    Recently, significant efforts have been made to explore human activity recognition (HAR) techniques that use information gathered by existing indoor wireless infrastructures through WiFi signals, without demanding that the monitored subject carry a dedicated device. The key intuition is that different activities introduce different multi-paths in WiFi signals and generate different patterns in the time series of channel state information (CSI). In this paper, we propose and evaluate a full pipeline for a CSI-based human activity recognition framework covering 12 activities in three different spatial environments, using two deep learning models: ABiLSTM and CNN-ABiLSTM. Evaluation experiments demonstrate that the proposed models outperform state-of-the-art models. The experiments also show that the proposed models can be applied to other environments with different configurations, albeit with some caveats. The proposed ABiLSTM model achieves overall accuracies of 94.03%, 91.96%, and 92.59% across the three target environments, while the proposed CNN-ABiLSTM model reaches accuracies of 98.54%, 94.25%, and 95.09% in the same environments.
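The attention layer on top of a BiLSTM is typically a score/softmax/weighted-sum pooling over the per-timestep hidden states. The sketch below is a generic, framework-free illustration of that pooling step, with a fixed scoring vector standing in for learned parameters — it is not the authors' implementation:

```python
import math

def attention_pool(hidden_states, weights):
    """Attention pooling over a sequence of hidden states.
    Each timestep's state is scored by a dot product with a scoring
    vector (learned in practice, fixed here), the scores are passed
    through a softmax, and the states are combined as a weighted sum.
    Returns (attention weights, context vector)."""
    scores = [sum(h * w for h, w in zip(state, weights))
              for state in hidden_states]
    mx = max(scores)                       # subtract max for stability
    exps = [math.exp(s - mx) for s in scores]
    total = sum(exps)
    alphas = [e / total for e in exps]     # softmax attention weights
    dim = len(hidden_states[0])
    context = [sum(a * state[d] for a, state in zip(alphas, hidden_states))
               for d in range(dim)]
    return alphas, context

# two timesteps with 2-dim hidden states; the scoring vector favours dim 0
states = [[1.0, 0.0], [0.0, 1.0]]
alphas, context = attention_pool(states, weights=[1.0, 0.0])
```

In the full model this context vector, rather than only the final BiLSTM state, is fed to the classification head, letting the network weight the timesteps most indicative of an activity.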

    Optimization of discrete wavelet transform features using artificial bee colony algorithm for texture image classification

    Selection of appropriate image texture properties is one of the major issues in texture classification. This paper presents an optimization technique for the automatic selection of multi-scale discrete wavelet transform features using an artificial bee colony algorithm, aiming at robust texture classification performance. The artificial bee colony algorithm is used to find the best combination of wavelet filters together with the correct number of decomposition levels in the discrete wavelet transform. A multi-layer perceptron neural network is employed as the image texture classifier. The proposed method was tested on the high-resolution UMD texture database. The texture classification results show that the proposed method provides an automated approach for finding the input parameter combination for discrete wavelet transform features that leads to the best classification accuracy.
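A minimal sketch of the kind of wavelet feature such a search optimizes over, using the Haar filter (the simplest wavelet) and detail-coefficient energy per level — the paper searches over many filters and decomposition levels, so this is illustrative only:

```python
import math

def haar_dwt(signal):
    """One decomposition level of the Haar DWT on a 1-D signal:
    pairwise averages (approximation) and pairwise differences
    (detail), each scaled by 1/sqrt(2) to preserve energy."""
    s = 1 / math.sqrt(2)
    approx = [(signal[i] + signal[i + 1]) * s for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) * s for i in range(0, len(signal), 2)]
    return approx, detail

def dwt_energy_features(signal, levels):
    """Energy of the detail coefficients at each decomposition level --
    a common texture feature that a search procedure such as the
    artificial bee colony could tune the level count for."""
    feats = []
    for _ in range(levels):
        signal, detail = haar_dwt(signal)   # recurse on the approximation
        feats.append(sum(d * d for d in detail))
    return feats

# a signal that is flat within pairs but varies across them:
# level 1 detail energy is zero, level 2 captures the step
feats = dwt_energy_features([4, 4, 2, 2], levels=2)
```

The optimization then amounts to picking the filter pair and the value of `levels` (and, in 2-D, the subband combination) that maximize classifier accuracy on a validation set.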

    Radon Projections as Image Descriptors for Content-Based Retrieval of Medical Images

    Clinical analysis and medical diagnosis of diverse diseases adopt medical imaging techniques that empower specialists to visualize internal body organs and tissues, so that diseases can be classified and treated at an early stage. Content-Based Image Retrieval (CBIR) systems are a set of computer vision techniques for retrieving similar images from a large database based on proper image representations. Particularly in radiology and histopathology, CBIR is a promising approach to effectively screen, understand, and retrieve images with a similar level of semantic description from a database of previously diagnosed cases, providing physicians with reliable assistance for diagnosis, treatment planning, and research. Over the past decade, the development of CBIR systems in medical imaging has accelerated due to the increase in digitized modalities, greater computational efficiency (e.g., the availability of GPUs), and progress in computer vision and artificial intelligence algorithms. Hence, medical specialists may use CBIR prototypes to query similar cases from a large image database based solely on image content (and no text). Understanding the semantics of an image requires an expressive descriptor that can capture and represent the unique and invariant features of an image. The Radon transform, one of the oldest techniques widely used in medical imaging, can capture the shape of organs in the form of a one-dimensional histogram by projecting parallel rays through a two-dimensional object of concern at a specific angle. In this work, the Radon transform is re-designed to (i) extract features and (ii) generate a descriptor for content-based retrieval of medical images. The Radon transform is applied to feed a deep neural network instead of raw images in order to improve the generalization of the network.
Specifically, the framework provides the Radon projections of an image to a deep autoencoder, from which the deepest layer is isolated and fed into a multi-layer perceptron for classification. This approach enables the network to (a) train much faster, as the Radon projections are computationally inexpensive compared to raw input images, and (b) perform more accurately, as Radon projections present more pronounced and salient features to the network than raw images. The framework is validated on a publicly available radiography data set called "Image Retrieval in Medical Applications" (IRMA), consisting of 12,677 training and 1,733 test images, on which a classification accuracy of approximately 82% is achieved, outperforming all autoencoder strategies reported on the IRMA data set. The classification accuracy is calculated by dividing the total IRMA error, a calculation outlined by the authors of the data set, by the total number of test images. Finally, a compact handcrafted image descriptor based on the Radon transform, called "Forming Local Intersections of Projections" (FLIP), was designed in this work. The FLIP descriptor has been designed, through numerous experiments, for representing histopathology images. It applies parallel Radon projections in local 3x3 neighborhoods with a two-pixel overlap on gray-level images (the staining of histopathology images is ignored). Using four equidistant projection directions in each window, the characteristics of the neighborhood are quantified by taking an element-wise minimum between each pair of adjacent projections in each window. Thereafter, the FLIP histogram (descriptor) for each image is constructed. A multi-resolution FLIP (mFLIP) scheme is also proposed, which is observed to outperform many state-of-the-art methods, including deep features, when applied to the histopathology data set KIMIA Path24.
Experiments show a total classification accuracy of approximately 72% using SVM classification, which surpasses the current benchmark of approximately 66% on the KIMIA Path24 data set.
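The local-projection idea behind FLIP can be sketched in a few lines. This toy version uses only two projection directions (row sums and column sums, i.e. 0 and 90 degrees) rather than the four equidistant directions of the published descriptor, and a naive binning scheme, so it is illustrative of the structure only:

```python
def flip_feature(window):
    """Toy FLIP step on a 3x3 window: two Radon-style projections
    (row sums and column sums) combined by an element-wise minimum.
    The published descriptor uses four equidistant directions."""
    rows = [sum(window[r]) for r in range(3)]                      # 0 degrees
    cols = [sum(window[r][c] for r in range(3)) for c in range(3)] # 90 degrees
    return [min(a, b) for a, b in zip(rows, cols)]

def flip_histogram(image, bins, max_val):
    """Slide a 3x3 window with a two-pixel overlap (stride 1) over a
    gray-level image and histogram the combined projection values.
    `max_val` is the largest projection value expected (assumption
    used only for this toy binning)."""
    hist = [0] * bins
    h, w = len(image), len(image[0])
    for r in range(h - 2):
        for c in range(w - 2):
            win = [row[c:c + 3] for row in image[r:r + 3]]
            for v in flip_feature(win):
                hist[min(bins - 1, int(v * bins / max_val))] += 1
    return hist

# uniform 4x4 image of ones: 4 window positions, 3 values each
img = [[1] * 4 for _ in range(4)]
hist = flip_histogram(img, bins=4, max_val=3)
```

The real descriptor additionally takes minima between each pair of adjacent projections across all four directions and, in the mFLIP variant, concatenates histograms computed at several resolutions.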