18 research outputs found

    Brain image clustering by wavelet energy and CBSSO optimization algorithm

    Early diagnosis of brain abnormalities is important for saving social and hospital resources. Wavelet energy is known as an effective feature-extraction technique with proven efficiency in a range of applications. This paper suggests a new wavelet-energy-based method to automatically classify magnetic resonance imaging (MRI) brain images into two groups (normal and abnormal), using a support vector machine (SVM) classifier whose weights are optimized by chaotic binary shark smell optimization (CBSSO). The results of the suggested CBSSO-based kernel SVM (KSVM) compare favorably with several other methods in terms of sensitivity and reliability. The proposed CAD system can additionally be used to categorize images with various pathological conditions, types, and illness modes.
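    The wavelet-energy features mentioned above can be sketched in plain NumPy. This is a minimal illustration, not the paper's implementation: it applies an unnormalized 2-level Haar decomposition to a synthetic image and collects subband energies into a feature vector that could then feed an SVM. The image and level count are invented for illustration.

```python
import numpy as np

def haar_level(img):
    # One level of a 2D Haar transform: average/difference along
    # rows, then along columns (image sides assumed even).
    lo = (img[:, 0::2] + img[:, 1::2]) / 2.0
    hi = (img[:, 0::2] - img[:, 1::2]) / 2.0
    ll = (lo[0::2, :] + lo[1::2, :]) / 2.0
    lh = (lo[0::2, :] - lo[1::2, :]) / 2.0
    hl = (hi[0::2, :] + hi[1::2, :]) / 2.0
    hh = (hi[0::2, :] - hi[1::2, :]) / 2.0
    return ll, lh, hl, hh

def wavelet_energy_features(img, levels=2):
    # Energy (sum of squared coefficients) of each detail subband
    # at each level, plus the final approximation energy.
    feats = []
    ll = img.astype(float)
    for _ in range(levels):
        ll, lh, hl, hh = haar_level(ll)
        feats.extend([np.sum(lh ** 2), np.sum(hl ** 2), np.sum(hh ** 2)])
    feats.append(np.sum(ll ** 2))
    return np.array(feats)

rng = np.random.default_rng(0)
img = rng.random((64, 64))          # stand-in for an MRI slice
f = wavelet_energy_features(img)
print(f.shape)                      # (7,): 3 detail energies per level + LL
```

    In practice a library such as PyWavelets would be used for the decomposition; the point here is only that the feature vector is a handful of subband energies.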

    The Contour Extraction of Cup in Fundus Images for Glaucoma Detection

    Glaucoma is the second leading cause of blindness in the world, so its detection is essential. Glaucoma detection distinguishes whether a patient's eye is normal or glaucomatous. Experts detect glaucoma by observing the structure of the retina in fundus images. In this research, we propose a feature-extraction method based on the contour of the cup area in fundus images to detect glaucoma. Our proposed method has been evaluated on 44 fundus images, consisting of 23 normal and 21 glaucoma cases. The data are divided into two parts: the first used for the learning phase and the second for the testing phase. To classify the fundus images as normal or glaucoma, we applied the Support Vector Machine (SVM) method. Our method achieves an accuracy of 94.44%.
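    The cup-contour features can be illustrated with a toy sketch (not the authors' algorithm): threshold a synthetic fundus-like image to isolate a bright "cup" region, then measure its area and an approximate contour length; features like these would then feed the SVM classifier. The threshold value and image are invented for illustration.

```python
import numpy as np

def cup_region_features(fundus, thresh=0.7):
    # Hypothetical sketch: treat the brightest pixels as the cup,
    # then measure area and contour length of the binary mask.
    mask = fundus >= thresh
    area = mask.sum()
    # Contour pixels: mask pixels with at least one 4-neighbour outside.
    padded = np.pad(mask, 1)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1]
                & padded[1:-1, :-2] & padded[1:-1, 2:])
    contour = mask & ~interior
    return np.array([area, contour.sum()], dtype=float)

# Synthetic "fundus" with a bright disc of radius 10 at the centre.
yy, xx = np.mgrid[:64, :64]
img = np.where((yy - 32) ** 2 + (xx - 32) ** 2 <= 10 ** 2, 0.9, 0.1)
feats = cup_region_features(img)
print(feats)   # [area, contour length] of the thresholded region
```

    A real pipeline would segment the optic disc and cup with far more care (illumination correction, active contours, etc.); the sketch only shows how contour-derived scalars become classifier features.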

    Pathological Brain Detection Using Wiener Filtering, 2D-Discrete Wavelet Transform, Probabilistic PCA, and Random Subspace Ensemble Classifier

    Accurate diagnosis of pathological brain images is important for patient care, particularly in the early phase of the disease. Although numerous studies have used machine-learning techniques for the computer-aided diagnosis (CAD) of pathological brains, previous methods encountered challenges in diagnostic efficiency owing to deficiencies in the choice of proper filtering techniques and neuroimaging biomarkers, and to limited learning models. Magnetic resonance imaging (MRI) provides enhanced information about soft tissues, and MR images are therefore used in the proposed approach. In this study, we propose a new model that combines Wiener filtering for noise reduction, the 2D discrete wavelet transform (2D-DWT) for feature extraction, probabilistic principal component analysis (PPCA) for dimensionality reduction, and a random subspace ensemble (RSE) classifier with the K-nearest neighbors (KNN) algorithm as its base classifier to label brain images as pathological or normal. The proposed method yields a significant improvement in classification results compared with other studies. Under 5×5 cross-validation (CV), it outperforms 21 state-of-the-art algorithms in classification accuracy, sensitivity, and specificity on all four datasets used in the study.
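    The random subspace ensemble with a KNN base classifier can be sketched in a few lines of NumPy. This is an illustrative stand-in, not the authors' pipeline (no Wiener filtering, DWT, or PPCA stages): each base KNN votes using only a random subset of the features, and votes are averaged. All data here are synthetic.

```python
import numpy as np

def rse_knn_fit_predict(Xtr, ytr, Xte, n_estimators=15, subspace=0.5,
                        k=3, seed=0):
    # Random subspace ensemble: each base KNN sees a random subset of
    # the features; predictions are combined by (soft) majority vote.
    rng = np.random.default_rng(seed)
    d = Xtr.shape[1]
    m = max(1, int(subspace * d))
    votes = np.zeros(Xte.shape[0])
    for _ in range(n_estimators):
        idx = rng.choice(d, size=m, replace=False)
        dist = np.linalg.norm(Xte[:, None, idx] - Xtr[None, :, idx], axis=2)
        nn = np.argsort(dist, axis=1)[:, :k]     # k nearest training points
        votes += ytr[nn].mean(axis=1)            # fraction of "pathological" votes
    return (votes / n_estimators >= 0.5).astype(int)

# Two well-separated synthetic classes (0 = normal, 1 = pathological).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (40, 10)),
               rng.normal(4.0, 1.0, (40, 10))])
y = np.array([0] * 40 + [1] * 40)
pred = rse_knn_fit_predict(X, y, X)
print((pred == y).mean())   # training accuracy on separable data
```

    scikit-learn's `BaggingClassifier` with `bootstrap_features=True` and a `KNeighborsClassifier` base estimator provides the same idea off the shelf.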

    A systematic review of deep learning methods applied to ocular images

    Artificial intelligence is having an important effect on different areas of medicine, and ophthalmology has been no exception. In particular, deep learning methods have been applied successfully to the detection of clinical signs and the classification of ocular diseases, with great potential to increase the number of people correctly diagnosed. In ophthalmology, deep learning methods have primarily been applied to eye fundus images and optical coherence tomography. On the one hand, these methods have achieved outstanding performance in the detection of ocular diseases such as diabetic retinopathy, glaucoma, diabetic macular degeneration, and age-related macular degeneration. On the other hand, several worldwide challenges have shared large eye-imaging datasets with segmentations of parts of the eye, clinical signs, and ocular diagnoses performed by experts. In addition, these methods are breaking the stigma of black-box models by delivering clinically interpretable information. This review provides an overview of the state-of-the-art deep learning methods used in ophthalmic images, databases, and potential challenges for ocular diagnosis.

    Explainable machine learning to enable high-throughput electrical conductivity optimization of doped conjugated polymers

    The combination of high-throughput experimentation techniques and machine learning (ML) has recently ushered in a new era of accelerated materials discovery, enabling the identification of materials with cutting-edge properties. However, the measurement of certain physical quantities remains challenging to automate. Specifically, meticulous process control, experimentation, and laborious measurements are required to achieve optimal electrical conductivity in doped polymer materials. We propose an ML approach, relying on readily measured absorbance spectra, to accelerate the workflow associated with measuring electrical conductivity. The first ML model (a classification model) accurately identifies samples whose conductivity exceeds a threshold in the range of roughly 25 to 100 S/cm, achieving up to 100% accuracy. For the subset of highly conductive samples, we employed a second ML model (a regression model) to predict their conductivities, yielding an impressive test R2 value of 0.984. To validate the approach, we showed that the models, neither of which was trained on the samples with the two highest conductivities of 498 and 506 S/cm, were able to correctly classify and predict them in an extrapolative manner at satisfactory error levels. The proposed ML workflow improves the efficiency of the conductivity measurements by 89% of the maximum achievable using our experimental techniques. Furthermore, our approach addresses the common challenge of the lack of explainability in ML models by exploiting bespoke mathematical properties of the descriptors and the ML model, allowing us to gain corroborated insights into the spectral influences on conductivity. Through this study, we offer an accelerated pathway for optimizing the properties of doped polymer materials while showcasing the valuable insights that can be derived from purposeful use of ML in experimental science. (33 pages, 17 figures)
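    The two-stage workflow (classify high-conductivity samples from spectra, then regress conductivity on that subset) can be sketched on synthetic data. Everything below, the spectra, the conductivity relationship, and the threshold, is invented for illustration; the paper's actual models and data are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(2)
n, bands = 200, 50
spectra = rng.random((n, bands))                 # fake absorbance spectra
w_true = rng.normal(0, 1, bands)                 # invented spectral weights
cond = np.exp(0.3 * (spectra @ w_true)) * 50     # fake conductivities, S/cm
thresh = np.median(cond)                         # synthetic stage-1 cutoff
high = cond > thresh

# Stage 1: linear classifier fitted by least squares on +/-1 labels.
X = np.hstack([spectra, np.ones((n, 1))])
beta = np.linalg.lstsq(X, np.where(high, 1.0, -1.0), rcond=None)[0]
stage1_acc = ((X @ beta > 0) == high).mean()

# Stage 2: linear regression of log-conductivity on the "high" subset only.
Xh, yh = X[high], np.log(cond[high])
theta = np.linalg.lstsq(Xh, yh, rcond=None)[0]
r2 = 1 - np.sum((Xh @ theta - yh) ** 2) / np.sum((yh - yh.mean()) ** 2)
print(stage1_acc, r2)
```

    Because the synthetic log-conductivity is exactly linear in the spectra, stage 2 fits almost perfectly here; on real data both stages would use properly held-out test sets.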

    Adaptive Feature Engineering Modeling for Ultrasound Image Classification for Decision Support

    Ultrasonography is considered a relatively safe option for diagnosing benign and malignant cancer lesions because of the low-energy sound waves used. However, visual interpretation of ultrasound images is time-consuming and often produces false alerts due to speckle noise. Improved methods of collecting image-based data have been proposed to reduce noise in the images; however, this has not solved the problem, owing to the complex nature of the images and the exponential growth of biomedical datasets. Secondly, the target class in real-world biomedical datasets, i.e., the focus of interest of a biopsy, is usually significantly underrepresented compared with the non-target class. This makes it difficult to train standard classification models such as Support Vector Machines (SVM), decision trees, and nearest-neighbor techniques on biomedical datasets, because these models assume an equal class distribution or an equal misclassification cost. Resampling techniques that oversample the minority class or under-sample the majority class have been proposed to mitigate the class imbalance problem, but with minimal success. We propose to resolve the class imbalance problem with a novel data-adaptive feature engineering model for extracting, selecting, and transforming textural features into a feature space that is inherently relevant to the application domain. We hypothesize that maximizing the variance and preserving as much variability as possible in well-engineered features, prior to applying a classifier, will improve the differentiation of thyroid nodules (benign or malignant) through effective model building. We propose a hybrid approach that applies regression and rule-based techniques to build our feature engineering model and a Bayesian classifier, respectively.
    In the feature engineering model, we transformed image pixel intensity values into a high-dimensional structured dataset and fitted a regression model to estimate the relevant kernel parameters for the proposed filter method. We adopted an elastic net regularization path to control the maximum-likelihood estimation of the regression model. Finally, we applied Bayesian network inference to estimate a subset of textural features with significant conditional dependency in the classification of the thyroid lesion. This establishes the conditional influence of the textural features on the random factors generated through our feature engineering model and evaluates the success criterion of our approach. The proposed approach was tested and evaluated on a public dataset of thyroid cancer ultrasound diagnostic data. The results showed that classification performance improved significantly overall, in both accuracy and area under the curve, when the proposed feature engineering model was applied to the data. We achieved a high performance of 96.00% accuracy, with a sensitivity of 99.64% and a specificity of 90.23%, for a filter size of 13 × 13.
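    The Bayesian classification stage can be illustrated with a minimal Gaussian naive Bayes classifier on synthetic "textural" features. This is a simplified stand-in for the Bayesian network inference described above, with invented data and class separation; it only shows the probabilistic decision rule, not the feature engineering pipeline.

```python
import numpy as np

class GaussianNB:
    # Minimal Gaussian naive Bayes: per-class feature means/variances
    # plus class priors, combined via log-likelihoods at prediction time.
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.mu = np.array([X[y == c].mean(0) for c in self.classes])
        self.var = np.array([X[y == c].var(0) + 1e-9 for c in self.classes])
        self.prior = np.array([(y == c).mean() for c in self.classes])
        return self

    def predict(self, X):
        # Log-likelihood of each sample under each class's Gaussian.
        ll = -0.5 * (((X[:, None, :] - self.mu) ** 2) / self.var
                     + np.log(2 * np.pi * self.var)).sum(2)
        return self.classes[np.argmax(ll + np.log(self.prior), axis=1)]

rng = np.random.default_rng(3)
benign = rng.normal(0.0, 1.0, (50, 6))    # invented textural features
malign = rng.normal(2.0, 1.0, (50, 6))
X = np.vstack([benign, malign])
y = np.array([0] * 50 + [1] * 50)         # 0 = benign, 1 = malignant
model = GaussianNB().fit(X, y)
acc = (model.predict(X) == y).mean()
print(acc)
```

    A full Bayesian network would also model conditional dependencies between features, which naive Bayes deliberately ignores; that independence assumption is what keeps this sketch short.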