    Undergraduate Catalog of Studies, 2023-2024

    Advanced framework for epilepsy detection through image-based EEG signal analysis

    Background: Recurrent and unpredictable seizures characterize epilepsy, a neurological disorder affecting millions worldwide. Epilepsy diagnosis is crucial for timely treatment and better outcomes. Electroencephalography (EEG) time-series data analysis is essential for epilepsy diagnosis and surveillance. Complex signal processing methods used in traditional EEG analysis are computationally demanding and difficult to generalize across patients. Researchers are using machine learning to improve epilepsy detection, particularly visual feature extraction from EEG time-series data. Objective: This study examines the application of a Gramian Angular Summation Field (GASF) approach for the analysis of EEG signals. Additionally, it explores the utilization of image features, specifically the Scale-Invariant Feature Transform (SIFT) and Oriented FAST and Rotated BRIEF (ORB) techniques, for the purpose of epilepsy detection in EEG data. Methods: The proposed methodology encompasses the transformation of EEG signals into images based on GASF, followed by the extraction of features utilizing SIFT and ORB techniques, and ultimately, the selection of relevant features. A state-of-the-art machine learning classifier is employed to classify GASF images into two categories: normal EEG patterns and focal EEG patterns. Bern-Barcelona EEG recordings were used to test the proposed method. Results: This method classifies EEG signals with 96% accuracy using SIFT features and 94% using ORB features. The Random Forest (RF) classifier surpasses state-of-the-art approaches in precision, recall, F1-score, specificity, and Area Under Curve (AUC). The Receiver Operating Characteristic (ROC) curve shows that Random Forest outperforms Support Vector Machine (SVM) and k-Nearest Neighbors (k-NN) classifiers. Significance: The suggested method has many advantages over the time-series EEG analysis and machine learning classifiers used in previous epilepsy detection studies. A novel image-based preprocessing pipeline using GASF for robust image synthesis and SIFT and ORB for feature extraction is presented here. The study found that the suggested method can accurately discriminate between normal and focal EEG signals, improving patient outcomes through early and accurate epilepsy diagnosis.
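
    The pipeline described above can be sketched in a few lines of Python. The following is a minimal, illustrative version only, assuming the pyts library for the GASF transform, OpenCV for ORB descriptors, and scikit-learn's Random Forest; the `segments` and `labels` arrays, the descriptor pooling, and the omitted SIFT/feature-selection steps are simplified placeholders rather than the authors' exact implementation.

```python
# Minimal sketch of a GASF -> ORB -> Random Forest pipeline (illustrative only).
# Assumes: `segments` is an (n_samples, n_timesteps) array of EEG windows and
# `labels` marks each window as normal (0) or focal (1).
import numpy as np
import cv2
from pyts.image import GramianAngularField
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def gasf_images(segments, size=64):
    """Encode 1-D EEG windows as Gramian Angular Summation Field images."""
    gasf = GramianAngularField(image_size=size, method="summation")
    imgs = gasf.fit_transform(segments)                 # (n, size, size) in [-1, 1]
    return ((imgs + 1.0) * 127.5).astype(np.uint8)      # rescale to 8-bit for OpenCV

def orb_feature_vector(img, n_keypoints=32):
    """Fixed-length descriptor: mean ORB descriptor over detected keypoints."""
    orb = cv2.ORB_create(nfeatures=n_keypoints)
    _, desc = orb.detectAndCompute(img, None)
    if desc is None:                                    # no keypoints found
        return np.zeros(32, dtype=np.float32)
    return desc.mean(axis=0).astype(np.float32)

# X: one ORB-based feature vector per GASF image
X = np.stack([orb_feature_vector(img) for img in gasf_images(segments)])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.2, stratify=labels)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```

    In the study itself, SIFT descriptors and an explicit feature-selection stage are also used; mean-pooling the ORB descriptors here is just one simple way to obtain a fixed-length feature vector per image.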

    Graduate Catalog of Studies, 2023-2024

    Assessing the advancement of artificial intelligence and drones’ integration in agriculture through a bibliometric study

    Integrating artificial intelligence (AI) with drones has emerged as a promising paradigm for advancing agriculture. This bibliometric analysis investigates the current state of research in this transformative domain by comprehensively reviewing 234 pertinent articles from the Scopus and Web of Science databases. The problem involves harnessing the potential of AI-driven drones to address agricultural challenges effectively. To address this, we conducted a bibliometric review, examining critical components such as prominent journals, co-authorship patterns across countries, highly cited articles, and the co-citation network of keywords. Our findings underscore a growing interest in using AI-integrated drones to revolutionize various agricultural practices. Noteworthy applications include crop monitoring, precision agriculture, and environmental sensing, indicative of the field’s transformative capacity. This pioneering bibliometric study presents a comprehensive synthesis of the dynamic research landscape, representing the first extensive exploration of AI and drones in agriculture. The identified knowledge gaps point to future research opportunities, fostering the adoption and implementation of these technologies for sustainable farming practices and resource optimization. Our analysis provides essential insights for researchers and practitioners, laying the groundwork for steering agricultural advancements toward an era of enhanced efficiency and innovation.
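
    As a rough illustration of one of the bibliometric components mentioned (the keyword co-occurrence/co-citation map), the sketch below builds a keyword network from records exported from Scopus or Web of Science. The CSV file name, the "Author Keywords" column, and the semicolon separator are assumptions about the export format, not details taken from the study.

```python
# Illustrative sketch: build a keyword co-occurrence network from exported records.
# Assumes a CSV export (e.g. from Scopus) with an "Author Keywords" column where
# keywords are separated by semicolons; the file and column names are hypothetical.
from itertools import combinations
import pandas as pd
import networkx as nx

records = pd.read_csv("scopus_export.csv")
G = nx.Graph()

for cell in records["Author Keywords"].dropna():
    keywords = sorted({k.strip().lower() for k in cell.split(";") if k.strip()})
    for a, b in combinations(keywords, 2):
        # increment the edge weight each time two keywords co-occur in a record
        G.add_edge(a, b, weight=G.get_edge_data(a, b, {"weight": 0})["weight"] + 1)

# strongest co-occurrence links, a rough proxy for the keyword co-citation map
top = sorted(G.edges(data=True), key=lambda e: e[2]["weight"], reverse=True)[:10]
for a, b, d in top:
    print(f"{a} -- {b}: {d['weight']}")
```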

    Flood dynamics derived from video remote sensing

    Flooding is by far the most pervasive natural hazard, with the human impacts of floods expected to worsen in the coming decades due to climate change. Hydraulic models are a key tool for understanding flood dynamics and play a pivotal role in unravelling the processes that occur during a flood event, including inundation flow patterns and velocities. In the realm of river basin dynamics, video remote sensing is emerging as a transformative tool that can offer insights into flow dynamics and thus, together with other remotely sensed data, has the potential to be deployed to estimate discharge. Moreover, the integration of video remote sensing data with hydraulic models offers a pivotal opportunity to enhance the predictive capacity of these models. Hydraulic models are traditionally built with accurate terrain, flow and bathymetric data and are often calibrated and validated using observed data to obtain meaningful and actionable model predictions. Data for accurately calibrating and validating hydraulic models are not always available, leaving the assessment of the predictive capabilities of some models deployed in flood risk management in question. Recent advances in remote sensing have heralded the availability of vast video datasets of high resolution. The parallel evolution of computing capabilities, coupled with advancements in artificial intelligence, is enabling the processing of data at unprecedented scales and complexities, allowing us to glean meaningful insights into datasets that can be integrated with hydraulic models. The aims of the research presented in this thesis were twofold. The first aim was to evaluate and explore the potential applications of video from air- and space-borne platforms to comprehensively calibrate and validate two-dimensional hydraulic models. The second aim was to estimate river discharge using satellite video combined with high-resolution topographic data. In the first of three empirical chapters, non-intrusive image velocimetry techniques were employed to estimate river surface velocities in a rural catchment. For the first time, a 2D hydraulic model was fully calibrated and validated using velocities derived from Unpiloted Aerial Vehicle (UAV) image velocimetry approaches. This highlighted the value of these data in mitigating the limitations associated with traditional data sources used in parameterizing two-dimensional hydraulic models. This finding inspired the subsequent chapter, where river surface velocities, derived using Large Scale Particle Image Velocimetry (LSPIV), and flood extents, derived using deep neural network-based segmentation, were extracted from satellite video and used to rigorously assess the skill of a two-dimensional hydraulic model. Harnessing the ability of deep neural networks to learn complex features and deliver accurate and contextually informed flood segmentation, the potential value of satellite video for validating two-dimensional hydraulic model simulations is exhibited. In the final empirical chapter, the convergence of satellite video imagery and high-resolution topographical data bridges the gap between visual observations and quantitative measurements by enabling the direct extraction of velocities from video imagery, which is used to estimate river discharge. Overall, this thesis demonstrates the significant potential of emerging video-based remote sensing datasets and offers approaches for integrating these data into hydraulic modelling and discharge estimation practice. The incorporation of LSPIV techniques into flood modelling workflows signifies a methodological progression, especially in areas lacking robust data collection infrastructure. Satellite video remote sensing heralds a major step forward in our ability to observe river dynamics in real time, with potentially significant implications in the domain of flood modelling science.
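
    At the core of the image velocimetry techniques referred to above (PIV/LSPIV) is a cross-correlation of image patches between consecutive frames. The sketch below illustrates that single step under simplifying assumptions: two co-registered grayscale frames, a known time interval `dt`, and a known ground sampling distance `gsd`. It is not the UAV or satellite workflow developed in the thesis.

```python
# Minimal cross-correlation sketch of the PIV/LSPIV idea: track a patch of the
# water surface between two frames and convert its pixel shift to a velocity.
# `frame_a`, `frame_b` (uint8 grayscale), `dt` (s) and `gsd` (m/pixel) are assumed inputs.
import numpy as np
import cv2

def patch_velocity(frame_a, frame_b, top_left, size=64, dt=1.0, gsd=0.1):
    """Estimate the surface velocity (m/s) of one interrogation window."""
    y, x = top_left
    template = frame_a[y:y + size, x:x + size]

    # normalized cross-correlation of the patch against the second frame
    response = cv2.matchTemplate(frame_b, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, best = cv2.minMaxLoc(response)             # best = (x, y) of the peak

    dx, dy = best[0] - x, best[1] - y                   # displacement in pixels
    speed = np.hypot(dx, dy) * gsd / dt                 # metres per second
    return speed, (dx, dy)

# Example usage (hypothetical frames one second apart, 0.1 m ground sampling distance):
# speed, shift = patch_velocity(frame_a, frame_b, top_left=(200, 300), dt=1.0, gsd=0.1)
```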

    Using Image Translation To Synthesize Amyloid Beta From Structural MRI

    Amyloid-beta and brain atrophy are known hallmarks of Alzheimer’s Disease (AD) and can be quantified with positron emission tomography (PET) and structural magnetic resonance imaging (MRI), respectively. PET uses radiotracers that bind to amyloid-beta, whereas MRI can measure brain morphology. PET scans have limitations, including cost, invasiveness (they involve injections and exposure to ionizing radiation), and limited accessibility, making PET impractical for screening early-onset AD. Conversely, MRI is cheaper, less invasive (free from ionizing radiation), and more widely available; however, it cannot provide the necessary molecular information. There is a known relationship between amyloid-beta and brain atrophy. This thesis aims to synthesize amyloid-beta PET images from structural MRI using image translation, an advanced form of machine learning. The developed models achieve high similarity metrics between the real and synthetic PET images and a high degree of accuracy in radiotracer quantification. The results are highly impactful as they enable amyloid-beta measurements from every MRI at no additional cost.
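
    One common way to set up this kind of paired image translation is a pix2pix-style conditional GAN with an added L1 reconstruction term. The sketch below shows a single training step under that assumption; the `generator`, the conditional `discriminator`, and the `loader` of paired MRI/PET tensors are placeholders and are not the architecture developed in this thesis.

```python
# Illustrative pix2pix-style training step for paired MRI -> PET synthesis.
# `generator`, `discriminator` (conditioned on the MRI), and `loader` (yielding
# mri, pet tensor pairs) are assumed to be defined elsewhere.
import torch
import torch.nn.functional as F

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4, betas=(0.5, 0.999))
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4, betas=(0.5, 0.999))
l1_weight = 100.0   # weight of the reconstruction term, as in pix2pix

for mri, pet in loader:
    fake_pet = generator(mri)

    # --- discriminator: real (MRI, PET) pairs vs. generated pairs ---
    d_real = discriminator(mri, pet)
    d_fake = discriminator(mri, fake_pet.detach())
    d_loss = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real)) +
              F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # --- generator: fool the discriminator and stay close to the real PET ---
    d_fake = discriminator(mri, fake_pet)
    g_loss = (F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake)) +
              l1_weight * F.l1_loss(fake_pet, pet))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```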

    Gene expression insights: Chronic stress and bipolar disorder: A bioinformatics investigation

    Bipolar disorder (BD) is a psychiatric disorder that affects an increasing number of people worldwide. The mechanisms of BD are unclear, but some studies have suggested that it may be related to genetic factors with high heritability. Moreover, research has shown that chronic stress can contribute to the development of major illnesses. In this paper, we used bioinformatics methods to analyze the possible mechanisms by which chronic stress affects BD. We obtained gene expression data from postmortem brains of BD patients and healthy controls in datasets GSE12649 and GSE53987, and we identified 11 chronic stress-related genes (CSRGs) that were differentially expressed in BD. We then screened five biomarkers (IGFBP6, ALOX5AP, MAOA, AIF1 and TRPM3) using machine learning models. We further validated the expression and diagnostic value of the biomarkers in other datasets (GSE5388 and GSE78936) and performed functional enrichment analysis, regulatory network analysis and drug prediction based on the biomarkers. Our bioinformatics analysis revealed that chronic stress can affect the occurrence and development of BD through many pathways, including monoamine oxidase production and decomposition, neuroinflammation, ion permeability, pain perception and others. In this paper, we confirm the importance of studying the genetic influences of chronic stress on BD and other psychiatric disorders and suggest that biomarkers related to chronic stress may be potential diagnostic tools and therapeutic targets for BD.
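
    The biomarker-screening flavour of this analysis can be illustrated with a short sketch: filter differentially expressed genes, intersect them with a chronic stress-related gene set, and rank the candidates with a machine learning model. The expression matrix `expr`, labels `is_bd`, and gene list `csrg_list` are assumed inputs, and a simple per-gene t-test plus a random forest stand in for the study's actual differential-expression and model-screening tooling.

```python
# Sketch of the biomarker-screening idea. `expr` is an assumed (genes x samples)
# DataFrame, `is_bd` a boolean array marking BD samples, and `csrg_list` a list
# of chronic stress-related gene symbols.
import pandas as pd
from scipy import stats
from sklearn.ensemble import RandomForestClassifier

# 1) crude differential-expression filter (t-test per gene, BD vs. control)
pvals = expr.apply(lambda row: stats.ttest_ind(row[is_bd], row[~is_bd]).pvalue, axis=1)
deg = expr.index[pvals < 0.05]

# 2) intersect with the chronic stress-related gene set
csrg = set(csrg_list)
candidates = [g for g in deg if g in csrg]

# 3) rank candidate genes by importance in a classifier of BD vs. control
X = expr.loc[candidates].T.values              # samples x candidate genes
clf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, is_bd)
ranking = pd.Series(clf.feature_importances_, index=candidates).sort_values(ascending=False)
print(ranking.head())
```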

    Adversarial sketch-photo transformation for enhanced face recognition accuracy: a systematic analysis and evaluation

    This research provides a strategy for enhancing the precision of face sketch identification through adversarial sketch-photo transformation. The approach uses a generative adversarial network (GAN) to learn to convert sketches into photographs, which may subsequently be utilized to enhance the precision of face sketch identification. The suggested method is evaluated against state-of-the-art face sketch recognition and synthesis techniques, such as SketchyGAN, similarity-preserving GAN (SPGAN), and super-resolution GAN (SRGAN). Possible domains of use for the proposed adversarial sketch-photo transformation approach include law enforcement, where reliable face sketch recognition is essential for the identification of suspects. The suggested approach can be generalized to various contexts, such as the creation of artistic photographs from drawings or the conversion of pictures between modalities. The suggested method outperforms state-of-the-art face sketch recognition and synthesis techniques, confirming the usefulness of adversarial learning in this context. Our method is highly efficient for photo-sketch synthesis, achieving a structural similarity index (SSIM) of 0.65 on the Chinese University of Hong Kong (CUHK) dataset and 0.70 on the custom-generated dataset.
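
    The SSIM figures quoted above can be computed for any real/synthesized image pair with a few lines of Python; the sketch below uses scikit-image, and the file names are placeholders.

```python
# Sketch of the reported evaluation step: SSIM between a real photograph and the
# photograph synthesized from a sketch. File paths are placeholders; the two
# images are assumed to have the same size.
import cv2
from skimage.metrics import structural_similarity as ssim

real = cv2.imread("real_photo.png", cv2.IMREAD_GRAYSCALE)
fake = cv2.imread("synthesized_photo.png", cv2.IMREAD_GRAYSCALE)

score, diff = ssim(real, fake, full=True)   # diff is the per-pixel similarity map
print(f"SSIM: {score:.2f}")                 # values such as 0.65-0.70 are reported above
```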

    Meta-learning algorithms and applications

    Meta-learning, in the broader context, concerns how an agent learns about its own learning, allowing it to improve its learning process. Learning how to learn is not only beneficial for humans; it has also shown vast benefits for improving how machines learn. In the context of machine learning, meta-learning enables models to improve their learning process by selecting suitable meta-parameters that influence the learning. For deep learning specifically, the meta-parameters typically describe details of the training of the model but can also include a description of the model itself: the architecture. Meta-learning is usually done with specific goals in mind, for example improving the ability to generalize or to learn new concepts from only a few examples. Meta-learning can be powerful, but it comes with a key downside: it is often computationally costly. If these costs were alleviated, meta-learning could be more accessible to developers of new artificial intelligence models, allowing them to achieve greater goals or save resources. As a result, one key focus of our research is on significantly improving the efficiency of meta-learning. We develop two approaches: EvoGrad and PASHA, both of which significantly improve meta-learning efficiency in two common scenarios. EvoGrad allows us to efficiently optimize the values of a large number of differentiable meta-parameters, while PASHA enables us to efficiently optimize any type of meta-parameter, but fewer in number. Meta-learning is a tool that can be applied to solve various problems. Most commonly it is applied to learning new concepts from only a small number of examples (few-shot learning), but other applications exist too. To showcase the practical impact that meta-learning can make in the context of neural networks, we use meta-learning as a novel solution for two selected problems: more accurate uncertainty quantification (calibration) and general-purpose few-shot learning. Both are practically important problems, and using meta-learning approaches we can obtain better solutions than those obtained using existing approaches. Calibration is important for safety-critical applications of neural networks, while general-purpose few-shot learning tests a model's ability to generalize few-shot learning across diverse tasks such as recognition, segmentation and keypoint estimation. More efficient algorithms as well as novel applications enable the field of meta-learning to make a more significant impact on the broader area of deep learning and potentially solve problems that were too challenging before. Ultimately, both allow us to better utilize the opportunities that artificial intelligence presents.
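
    The bilevel structure underlying differentiable meta-parameter optimization can be illustrated with a toy example: a meta-parameter (here a regularization strength) is tuned by differentiating the validation loss through one inner gradient step. This is only a minimal sketch of the general idea on synthetic data, not an implementation of EvoGrad or PASHA.

```python
# Toy sketch of differentiable meta-learning: the validation loss of the
# once-updated model parameters drives the update of a meta-parameter.
import torch

torch.manual_seed(0)
x_tr, y_tr = torch.randn(64, 5), torch.randn(64, 1)   # synthetic "training" split
x_va, y_va = torch.randn(64, 5), torch.randn(64, 1)   # synthetic "validation" split

w = torch.zeros(5, 1, requires_grad=True)             # model parameters
log_reg = torch.tensor(0.0, requires_grad=True)       # meta-parameter: log regularization
inner_lr = 0.1
meta_opt = torch.optim.Adam([log_reg], lr=0.05)

for step in range(100):
    # inner step: parameters after one regularized gradient update (kept differentiable)
    train_loss = ((x_tr @ w - y_tr) ** 2).mean() + log_reg.exp() * (w ** 2).sum()
    (grad_w,) = torch.autograd.grad(train_loss, w, create_graph=True)
    w_updated = w - inner_lr * grad_w

    # outer step: validation loss of the updated parameters updates the meta-parameter
    val_loss = ((x_va @ w_updated - y_va) ** 2).mean()
    meta_opt.zero_grad()
    val_loss.backward()
    meta_opt.step()

    with torch.no_grad():                              # also commit the inner update
        w.copy_(w_updated.detach())

print("learned regularization strength:", log_reg.exp().item())
```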

    MRI radiomics-based decision support tool for a personalized classification of cervical disc degeneration: a two-center study

    Objectives: To develop and validate an MRI radiomics-based decision support tool for the automated grading of cervical disc degeneration. Methods: The retrospective study included 2,610 cervical disc samples from 435 patients at two hospitals. Cervical magnetic resonance imaging (MRI) of the patients confirmed cervical disc degeneration grades using the Pfirrmann grading system. The data were divided into a training set (1,830 samples from 305 patients) for model construction and an independent test set (780 samples from 130 patients) for validation. We provided a fine-tuned MedSAM model for automated cervical disc segmentation. We then extracted 924 radiomic features from each segmented disc in the T1 and T2 MRI modalities. All features were processed and selected using minimum redundancy maximum relevance (mRMR) and multiple machine learning algorithms. Radiomics models based on the various machine learning algorithms and MRI modalities were constructed and compared. Finally, a combined radiomics model was constructed on the training set and validated on the test set. Radiomic feature mapping was provided for auxiliary diagnosis. Results: Of the 2,610 cervical disc samples, 794 (30.4%) were classified as low grade and 1,816 (69.6%) as high grade. The fine-tuned MedSAM model achieved good segmentation performance, with a mean Dice coefficient of 0.93. Higher-order texture features were the dominant contributors to the diagnostic task (80%). Among the machine learning models, random forest performed better than the other algorithms (p < 0.01), and the T2 MRI radiomics model showed better diagnostic performance than the T1 model (p < 0.05). The final combined radiomics model had an area under the receiver operating characteristic curve (AUC) of 0.95, an accuracy of 89.51%, a precision of 87.07%, a recall of 98.83%, and an F1 score of 0.93 on the test set, all better than those of the other models (p < 0.05). Conclusion: The radiomics-based decision support tool using the T1 and T2 MRI modalities can be used for cervical disc degeneration grading, facilitating individualized management.
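
    The radiomics portion of such a pipeline can be sketched as follows, assuming pyradiomics for feature extraction from image/mask pairs and scikit-learn for selection and classification. The `cases` and `grades` inputs are placeholders, a mutual-information filter stands in for the mRMR selection used in the study, and segmentation (the fine-tuned MedSAM step) is assumed to have produced the masks already.

```python
# Sketch of a radiomics-style pipeline: extract features from image/mask pairs
# with pyradiomics, select features, and classify with a random forest.
# `cases` (list of (image_path, mask_path) tuples) and `grades` (0 = low, 1 = high)
# are assumed inputs.
import pandas as pd
from radiomics import featureextractor
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

extractor = featureextractor.RadiomicsFeatureExtractor()   # default feature classes

rows = []
for image_path, mask_path in cases:
    result = extractor.execute(image_path, mask_path)
    # keep the numeric radiomic features, drop the diagnostic metadata entries
    rows.append({k: float(v) for k, v in result.items()
                 if not k.startswith("diagnostics_")})
features = pd.DataFrame(rows)

# simple univariate filter as a stand-in for mRMR feature selection
selector = SelectKBest(mutual_info_classif, k=30)
X = selector.fit_transform(features.values, grades)

clf = RandomForestClassifier(n_estimators=300, random_state=0)
auc = cross_val_score(clf, X, grades, cv=5, scoring="roc_auc").mean()
print(f"cross-validated AUC: {auc:.2f}")
```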