Medical image classification based on artificial intelligence approaches: A practical study on normal and abnormal confocal corneal images
Corneal images can be acquired using confocal microscopes, which provide detailed views of the different layers inside a human cornea. Some corneal problems and diseases can occur in one or more of the main corneal layers: the epithelium, stroma and endothelium. Consequently, for automatically extracting clinical information associated with corneal diseases, identifying abnormality or evaluating the normal cornea, it is important to be able to recognise these layers reliably and automatically. Artificial intelligence (AI) approaches can provide improved accuracy over conventional processing techniques and substantially reduce the manual analysis time required by clinical experts. Artificial neural networks (ANNs), adaptive neuro-fuzzy inference systems (ANFIS) and a committee machine (CM) have been investigated and tested to improve the recognition accuracy of the main corneal layers and identify abnormality in these layers. The CM, formed from an ANN and an ANFIS, achieves an accuracy of 100% for some classes in the processed data sets. Three normal corneal data sets and seven abnormal corneal images associated with diseases in the main corneal layers have been investigated with the proposed system. Statistical analysis of these data sets is performed to track any change in the processed images. The system is able to pre-process (quality enhancement, noise removal), classify corneal images, identify abnormalities in the analysed data sets, and visualise corneal stroma images, as well as each individual keratocyte cell, in a 3D volume for further clinical analysis.
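The committee-machine fusion described above can be illustrated with a minimal sketch. The fusion rule (averaging the members' class probabilities) and the layer names are assumptions for illustration only; the paper's CM may combine its ANN and ANFIS members differently.

```python
def committee_predict(prob_ann, prob_anfis):
    """Fuse the per-class probability outputs of two committee members by
    simple averaging and return the index of the winning class."""
    fused = [(a + b) / 2.0 for a, b in zip(prob_ann, prob_anfis)]
    return max(range(len(fused)), key=fused.__getitem__)

# Hypothetical outputs over classes 0=epithelium, 1=stroma, 2=endothelium
ann_out   = [0.2, 0.7, 0.1]
anfis_out = [0.1, 0.6, 0.3]
print(committee_predict(ann_out, anfis_out))  # -> 1 (stroma)
```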
Prediction of Extreme Ultraviolet Variability Experiment (EVE)/Extreme Ultraviolet Spectro-Photometer (ESP) Irradiance from Solar Dynamics Observatory (SDO)/Atmospheric Imaging Assembly (AIA) Images Using Fuzzy Image Processing and Machine Learning
The cadence and resolution of solar images have been increasing dramatically with the launch of new spacecraft such as STEREO and SDO. This increase in data volume provides new opportunities for solar researchers, but the efficient processing and analysis of these data create new challenges. We introduce a fuzzy-based solar feature-detection system in this article. The proposed system processes SDO/AIA images using fuzzy rules to detect coronal holes and active regions. The system is fast and can handle images of different sizes. It is tested on six months of solar data (1 October 2010 to 31 March 2011) to generate filling factors (the ratio of the area of a solar feature to the area of the rest of the solar disc) for active regions and coronal holes. These filling factors are then compared to SDO/EVE/ESP irradiance measurements. The correlation between active-region filling factors and irradiance measurements is found to be very high, which has encouraged us to design a time-series prediction system using Radial Basis Function Networks to predict ESP irradiance measurements from our generated filling factors.
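The filling-factor computation can be sketched minimally, assuming binary masks for the detected feature and the solar disc (the function name and the flattened 1-D mask representation are illustrative assumptions):

```python
def filling_factor(feature_mask, disc_mask):
    """Ratio of detected-feature pixels to the remaining solar-disc pixels,
    following the definition quoted above (feature area over the area of
    the rest of the disc)."""
    feature = sum(1 for f, d in zip(feature_mask, disc_mask) if f and d)
    rest = sum(disc_mask) - feature
    return feature / rest if rest else 0.0

# Toy flattened masks: 10 disc pixels, 2 of them flagged as coronal hole
disc    = [1] * 10
feature = [1, 1] + [0] * 8
print(filling_factor(feature, disc))  # -> 0.25
```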
Automated Prediction of CMEs Using Machine Learning of CME – Flare Associations
In this work, machine-learning algorithms are applied to explore the relation between significant flares and their associated CMEs. The NGDC flares catalogue and the SOHO/LASCO CMEs catalogue are processed to associate X- and M-class flares with CMEs based on timing information. Automated systems are created to process and associate years of flare and CME data, which are then arranged in numerical training vectors and fed to machine-learning algorithms to extract the embedded knowledge and provide learning rules that can be used for the automated prediction of CMEs. Different properties are extracted from all the associated (A) and not-associated (NA) flares, representing the intensity, flare duration, duration of decline and duration of growth. Cascade Correlation Neural Networks (CCNN) are used in our work. The flare properties are converted to numerical formats that are suitable for CCNN. The CCNN predicts whether a certain flare is likely to initiate a CME after input of its properties. Intensive experiments using the jack-knife technique are carried out, and it is concluded that our system provides an accurate prediction rate of 65.3%. The prediction performance is analysed and recommendations for enhancing the performance are provided.
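The jack-knife evaluation mentioned above amounts to a leave-one-out loop. In this sketch, the trivial majority-class learner is purely a self-contained stand-in for the CCNN, and the toy flare data are invented for illustration:

```python
def jackknife_accuracy(samples, labels, train_and_predict):
    """Leave-one-out evaluation: train on all but one event, test on the
    held-out event, and average the hit rate."""
    hits = 0
    for i in range(len(samples)):
        train_x = samples[:i] + samples[i + 1:]
        train_y = labels[:i] + labels[i + 1:]
        hits += train_and_predict(train_x, train_y, samples[i]) == labels[i]
    return hits / len(samples)

def majority_learner(train_x, train_y, test_x):
    """Stand-in classifier: predict the majority training label
    (ties resolve to 1, i.e. 'CME-associated')."""
    ones = sum(train_y)
    return 1 if ones >= len(train_y) - ones else 0

# Toy flare set: label 1 = associated with a CME, 0 = not associated
flares = [[5.2], [3.1], [7.8], [1.0]]
labels = [1, 1, 1, 0]
print(jackknife_accuracy(flares, labels, majority_learner))  # -> 0.75
```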
Machine learning-based investigation of the association between CMEs and filaments
In this work we study the association between eruptive filaments/prominences and coronal mass ejections (CMEs) using machine-learning-based algorithms that analyse the solar data available between January 1996 and December 2001. The Support Vector Machine (SVM) learning algorithm is used for the purpose of knowledge extraction from the association results. The aim is to identify patterns of associations that can be represented using SVM learning rules for subsequent use in near-real-time and reliable CME prediction systems. Timing and location data in the NGDC filament catalogue and the SOHO/LASCO CME catalogue are processed to associate filaments with CMEs. In previous studies, which classified CMEs into gradual and impulsive CMEs, the associations were refined based on CME speed and acceleration, and the associated pairs were then refined manually to increase the accuracy of the training dataset. In the current study, a data-mining system has been created to process and associate filament and CME data, which are arranged in numerical training vectors. The data are then fed to SVMs to extract the embedded knowledge and provide learning rules that could have the potential, in the future, to provide automated predictions of CMEs. The features representing the event time (the average of the start and end times), duration, type and extent of the filaments are extracted from all the associated and not-associated filaments and converted to a numerical format that is suitable for SVM use. Several validation and verification methods are used on the extracted dataset to determine whether CMEs can be predicted solely and efficiently based on the associated filaments. More than 14,000 experiments are carried out to optimise the SVM and determine the input features that provide the best performance.
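The conversion of filament properties to numeric SVM inputs can be sketched as follows. The property names, the two-category type encoding and the hour-based units are illustrative assumptions, not the paper's exact feature set:

```python
def encode_filament(start_h, end_h, ftype, extent_deg,
                    known_types=("quiescent", "active")):
    """Build a numeric training vector: event time (mean of start and end),
    duration, one-hot filament type, and angular extent."""
    event_time = (start_h + end_h) / 2.0
    duration = end_h - start_h
    type_onehot = [1.0 if ftype == t else 0.0 for t in known_types]
    return [event_time, duration] + type_onehot + [extent_deg]

print(encode_filament(2.0, 4.0, "active", 12.0))
# -> [3.0, 2.0, 0.0, 1.0, 12.0]
```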
Learning to self-manage by intelligent monitoring, prediction and intervention
Despite the growing prevalence of multimorbidities, current digital self-management approaches still prioritise single conditions. The future of out-of-hospital care requires researchers to expand their horizons; integrated assistive technologies should enable people to live their life well regardless of their chronic conditions. Yet many of the current digital self-management technologies are not equipped to handle this problem. In this position paper, we suggest the solution to these issues is a model-aware and data-agnostic platform formed on the basis of a tailored self-management plan and three integral concepts: Monitoring (M) multiple information sources to empower Predictions (P) and trigger intelligent Interventions (I). Here we present our ideas for the formation of such a platform, and its potential impact on quality of life for sufferers of chronic conditions.
Joint Associations of Activity Energy Expenditure and Sedentary Behaviors with Adolescent's Obesity and Dietary Habits
PURPOSE: Physical inactivity and sedentary behaviors are thought to be independent entities that associate differently with adverse health outcomes. Thus, the aim of this study was to investigate the joint associations of physical activity and sedentary behaviors with obesity indices and dietary habits among adolescents from the Gulf Cooperation Council (GCC) countries.
METHODS: Data were from the Arab Teens Lifestyle Study (ATLS), a school-based, cross-sectional lifestyle study. The present analysis included 6279 adolescents (49.4% males) aged 14-20 years, randomly selected from eight major cities in the GCC countries using a multistage stratified cluster sampling technique. Anthropometric and self-reported lifestyle data were obtained from participants. Adolescents were classified into four categories: high active & low sedentary (HA-LS), high active & high sedentary (HA-HS), low active & low sedentary (LA-LS) and low active & high sedentary (LA-HS), based on cut-off scores of total activity energy expenditure above or below 1680 METs-min/week and daily screen time above or below 3 hours/day, respectively.
RESULTS: Results of MANCOVA tests controlling for age revealed that, compared with the LA-HS group, adolescents in the HA-LS group had significantly (p < 0.001) lower mean (SD) values for BMI (22.6 (5.5) vs 23.7 (6.2)) and waist-to-height ratio (0.45 (0.07) vs 0.48 (0.08)), less frequent intakes of sugar-sweetened drinks (3.7 (2.5) vs 4.5 (2.3)), fast foods (2.2 (1.9) vs 2.9 (1.9)), French fries/potato chips (2.1 (2.0) vs 2.9 (2.1)), cakes/donuts (2.4 (2.1) vs 2.7 (2.1)) and sweets (2.5 (2.1) vs 3.8 (2.3)), but more frequent intakes of breakfast (3.9 (2.7) vs 3.3 (2.6)), vegetables (4.4 (2.3) vs 3.5 (2.4)), fruits (4.2 (2.3) vs 2.8 (2.2)) and milk (4.3 (2.5) vs 3.6 (2.5)).
CONCLUSIONS: Adolescents combining high activity energy expenditure with low sedentary behaviors tend to have a lower risk of obesity and more favorable (healthy) dietary habits. These findings carry important implications for adolescent health promotion and obesity prevention. American College of Sports Medicine.
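The four-group classification in the Methods section reduces to two threshold tests, sketched here with the stated cut-offs (1680 METs-min/week and 3 hours/day). Whether values exactly at a cut-off fall in the high or low group is an assumption of this sketch:

```python
def activity_group(aee_mets_min_week, screen_hours_day,
                   aee_cut=1680, screen_cut=3):
    """Classify an adolescent as HA/LA (activity energy expenditure) crossed
    with HS/LS (daily screen time), using the study's cut-off scores."""
    activity = "HA" if aee_mets_min_week > aee_cut else "LA"
    sedentary = "HS" if screen_hours_day > screen_cut else "LS"
    return activity + "-" + sedentary

print(activity_group(2000, 2))  # -> 'HA-LS'
print(activity_group(1500, 5))  # -> 'LA-HS'
```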
Solar flare prediction using advanced feature extraction, machine learning and feature selection
Novel machine-learning and feature-selection algorithms have been developed to study: (i) the flare prediction capability of magnetic feature (MF) properties generated by the recently developed Solar Monitor Active Region Tracker (SMART); (ii) SMART's MF properties that are most significantly related to flare occurrence. Spatio-temporal association algorithms are developed to associate MFs with flares from April 1996 to December 2010 in order to differentiate flaring and non-flaring MFs and enable the application of machine-learning and feature-selection algorithms. A machine-learning algorithm is applied to the associated datasets to determine the flare prediction capability of all 21 SMART MF properties. The prediction performance is assessed using standard forecast verification measures and compared with the prediction measures of one of the industry's standard machine-learning-based technologies for flare prediction, Automated Solar Activity Prediction (ASAP). The comparison shows that the combination of SMART MFs with machine learning has the potential to achieve more accurate flare prediction than ASAP. Feature-selection algorithms are then applied to determine the MF properties that are most related to flare occurrence. It is found that a reduced set of 6 MF properties can achieve a similar degree of prediction accuracy as the full set of 21 SMART MF properties.
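One common way to reduce a full set of properties to a small informative subset, as described above, is greedy forward selection. This is a generic sketch: the scoring function and the MF property names are toy stand-ins for the cross-validated prediction score and the actual SMART properties:

```python
def forward_select(candidates, score_fn, k):
    """Greedy forward selection: repeatedly add the candidate feature that
    most improves the score of the currently selected subset."""
    selected, remaining = [], list(candidates)
    while remaining and len(selected) < k:
        best = max(remaining, key=lambda f: score_fn(selected + [f]))
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy score: each hypothetical MF property contributes a fixed weight.
weights = {"total_flux": 3.0, "max_gradient": 2.0, "r_value": 1.0}
score = lambda subset: sum(weights[f] for f in subset)
print(forward_select(weights, score, 2))  # -> ['total_flux', 'max_gradient']
```

In practice the scoring function would retrain the predictor on each candidate subset, which is why such searches are expensive and why stopping at a small k is attractive.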
Progress in space weather modeling in an operational environment
This paper aims to provide an overview of the latest advances in space weather modeling in an operational environment in Europe, including both the introduction of new models and improvements to existing codes and algorithms that address the broad range of space-weather prediction requirements from the Sun to the Earth. For each case, we consider the model's input data, the output parameters, products or services, its operational status, and whether it is supported by validation results, in order to build a solid basis for future developments. This work is the output of Sub-Group 1.3 "Improvement of operational models" of the European Cooperation in Science and Technology (COST) Action ES0803 "Developing Space Weather Products and Services in Europe", and therefore this review focuses on the progress achieved by European research teams involved in the Action.
Fundus-DeepNet: Multi-Label Deep Learning Classification System for Enhanced Detection of Multiple Ocular Diseases through Data Fusion of Fundus Images
Detecting multiple ocular diseases in fundus images is crucial in ophthalmic diagnosis. This study introduces the Fundus-DeepNet system, an automated multi-label deep learning classification system designed to identify multiple ocular diseases by integrating feature representations from pairs of fundus images (e.g., left and right eyes). The study begins with a comprehensive image pre-processing procedure, including circular border cropping, image resizing, contrast enhancement, noise removal, and data augmentation. Subsequently, discriminative deep feature representations are extracted using multiple deep learning blocks, namely the High-Resolution Network (HRNet) and Attention Block, which serve as feature descriptors. The SENet Block is then applied to further enhance the quality and robustness of feature representations from a pair of fundus images, ultimately consolidating them into a single feature representation. Finally, a sophisticated classification model, known as a Discriminative Restricted Boltzmann Machine (DRBM), is employed. By incorporating a Softmax layer, this DRBM generates a probability distribution that identifies eight different ocular diseases. Extensive experiments were conducted on the challenging Ophthalmic Image Analysis-Ocular Disease Intelligent Recognition (OIA-ODIR) dataset, comprising diverse fundus images depicting eight different ocular diseases.
The Fundus-DeepNet system demonstrated F1-scores, Kappa scores, AUC, and final scores of 88.56%, 88.92%, 99.76%, and 92.41% on the off-site test set, and 89.13%, 88.98%, 99.86%, and 92.66% on the on-site test set. In summary, the Fundus-DeepNet system exhibits outstanding proficiency in accurately detecting multiple ocular diseases, offering a promising solution for early diagnosis and treatment in ophthalmology. Funding: European Union under the REFRESH – Research Excellence for Region Sustainability and High-tech Industries project, number CZ.10.03.01/00/22_003/0000048, via the Operational Program Just Transition; and the Ministry of Education, Youth, and Sports of the Czech Republic - Technical University of Ostrava, Czechia, under Grants SP2023/039 and SP2023/042.
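The Softmax layer mentioned above maps the classifier's per-disease scores to a probability distribution over the eight classes. A numerically stable sketch (the example scores are invented for illustration):

```python
import math

def softmax(scores):
    """Convert raw scores into probabilities that sum to 1, subtracting the
    maximum score first for numerical stability."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical raw scores for the eight ocular-disease classes
probs = softmax([2.0, 0.5, -1.0, 0.0, 1.0, -0.5, 0.3, 0.8])
print(round(sum(probs), 6))     # -> 1.0
print(probs.index(max(probs)))  # -> 0 (the highest raw score wins)
```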
A multimodal deep learning framework using local feature representations for face recognition
The most recent face recognition systems are mainly dependent on feature representations obtained using either local handcrafted descriptors, such as local binary patterns (LBP), or a deep learning approach, such as the deep belief network (DBN). However, the former usually suffers from the wide variations in face images, while the latter usually discards the local facial features, which are proven to be important for face recognition. In this paper, a novel framework based on merging the advantages of the local handcrafted feature descriptors with the DBN is proposed to address the face recognition problem in unconstrained conditions. Firstly, a novel multimodal local feature extraction approach based on merging the advantages of the Curvelet transform with the Fractal dimension is proposed and termed the Curvelet–Fractal approach. The main motivation of this approach is that the Curvelet transform, a new anisotropic and multidirectional transform, can efficiently represent the main structure of the face (e.g., edges and curves), while the Fractal dimension is one of the most powerful texture descriptors for face images. Secondly, a novel framework, termed the multimodal deep face recognition (MDFR) framework, is proposed to add feature representations by training a DBN on top of the local feature representations instead of the pixel intensity representations. We demonstrate that the representations acquired by the proposed MDFR framework are complementary to those acquired by the Curvelet–Fractal approach. Finally, the performance of the proposed approaches has been evaluated by conducting a number of extensive experiments on four large-scale face datasets: the SDUMLA-HMT, FERET, CAS-PEAL-R1, and LFW databases. The results obtained from the proposed approaches outperform other state-of-the-art approaches (e.g., LBP, DBN, WPCA) by achieving new state-of-the-art results on all the employed datasets.
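The Fractal-dimension texture descriptor mentioned above is commonly estimated by box counting; this is a minimal, generic sketch over a set of 2-D pixel coordinates (the box sizes are illustrative, and this is not the paper's exact estimator):

```python
import math

def box_count_dimension(points, sizes=(1, 2, 4)):
    """Estimate the fractal (box-counting) dimension of a set of 2-D integer
    points: count occupied boxes at each scale, then take the least-squares
    slope of log N(s) against log(1/s)."""
    xs, ys = [], []
    for s in sizes:
        occupied = {(x // s, y // s) for x, y in points}
        xs.append(math.log(1.0 / s))
        ys.append(math.log(len(occupied)))
    n = len(sizes)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# A completely filled 4x4 patch behaves like a 2-D surface: dimension ~ 2
patch = [(x, y) for x in range(4) for y in range(4)]
print(round(box_count_dimension(patch), 6))  # -> 2.0
```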