7,156 research outputs found

    Advanced framework for epilepsy detection through image-based EEG signal analysis

    Background: Recurrent and unpredictable seizures characterize epilepsy, a neurological disorder affecting millions worldwide. Timely epilepsy diagnosis is crucial for treatment and better outcomes. Electroencephalography (EEG) time-series data analysis is essential for epilepsy diagnosis and surveillance. The complex signal processing methods used in traditional EEG analysis are computationally demanding and difficult to generalize across patients. Researchers are therefore turning to machine learning to improve epilepsy detection, particularly visual feature extraction from EEG time-series data.
    Objective: This study examines the application of a Gramian Angular Summation Field (GASF) approach to the analysis of EEG signals. It also explores the use of image features, specifically the Scale-Invariant Feature Transform (SIFT) and Oriented FAST and Rotated BRIEF (ORB) techniques, for epilepsy detection in EEG data.
    Methods: The proposed methodology encompasses the transformation of EEG signals into GASF-based images, followed by feature extraction using SIFT and ORB and the selection of relevant features. A state-of-the-art machine learning classifier is employed to classify the GASF images into two categories: normal EEG patterns and focal EEG patterns. The Bern-Barcelona EEG recordings were used to test the proposed method.
    Results: The method classifies EEG signals with 96% accuracy using SIFT features and 94% using ORB features. The Random Forest (RF) classifier surpasses state-of-the-art approaches in precision, recall, F1-score, specificity, and Area Under the Curve (AUC). The Receiver Operating Characteristic (ROC) curve shows that Random Forest outperforms Support Vector Machine (SVM) and k-Nearest Neighbors (k-NN) classifiers.
    Significance: The suggested method has many advantages over the time-series EEG analyses and machine learning classifiers used in previous epilepsy detection studies. A novel image-based preprocessing pipeline using GASF for robust image synthesis and SIFT and ORB for feature extraction is presented here. The study found that the method can accurately discriminate between normal and focal EEG signals, improving patient outcomes through early and accurate epilepsy diagnosis.
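The GASF transform at the core of this pipeline can be sketched in a few lines. Below is a minimal NumPy version, assuming simple min-max rescaling to [-1, 1]; the study's exact preprocessing (windowing, channel handling, any library used) is not specified in the abstract, so this is an illustrative sketch only.

```python
import numpy as np

def gasf(signal):
    """Gramian Angular Summation Field of a 1-D signal.

    Rescales the signal to [-1, 1], maps each sample to an angle
    phi = arccos(x), and returns the matrix cos(phi_i + phi_j).
    """
    x = np.asarray(signal, dtype=float)
    # Min-max rescale into [-1, 1] so arccos is defined everywhere.
    x = (2 * x - x.max() - x.min()) / (x.max() - x.min())
    x = np.clip(x, -1.0, 1.0)  # guard against floating-point overshoot
    phi = np.arccos(x)
    # cos(phi_i + phi_j) via an outer sum of the polar angles.
    return np.cos(phi[:, None] + phi[None, :])

# A toy "EEG window": each window yields one GASF image for the
# downstream SIFT/ORB feature extraction.
window = np.sin(np.linspace(0, 4 * np.pi, 64))
img = gasf(window)  # 64x64 symmetric image with values in [-1, 1]
```

The resulting matrix is symmetric by construction, which is what makes keypoint detectors designed for natural images (SIFT, ORB) applicable to the time series.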

    RLS-LCD: an efficient Loop Closure Detection for Rotary-LiDAR Scans


    Adversarial sketch-photo transformation for enhanced face recognition accuracy: a systematic analysis and evaluation

    This research provides a strategy for enhancing the precision of face sketch identification through adversarial sketch-photo transformation. The approach uses a generative adversarial network (GAN) to learn to convert sketches into photographs, which may subsequently be utilized to enhance the precision of face sketch identification. The suggested method is evaluated in comparison to state-of-the-art face sketch recognition and synthesis techniques, such as SketchyGAN, similarity-preserving GAN (SPGAN), and super-resolution GAN (SRGAN). Possible domains of use for the proposed adversarial sketch-photo transformation approach include law enforcement, where reliable face sketch recognition is essential for the identification of suspects. The approach can also be generalized to other contexts, such as the creation of creative photographs from drawings or the conversion of pictures between modalities. The suggested method outperforms state-of-the-art face sketch recognition and synthesis techniques, confirming the usefulness of adversarial learning in this context. Our method is highly efficient for photo-sketch synthesis, with a structural similarity index (SSIM) of 0.65 on The Chinese University of Hong Kong dataset and 0.70 on the custom-generated dataset.
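The reported SSIM scores can be illustrated with a simplified, single-window version of the metric. Published evaluations normally use the sliding-window SSIM of Wang et al. (e.g. `skimage.metrics.structural_similarity`), so the sketch below only captures global image statistics and is not the paper's evaluation code.

```python
import numpy as np

def global_ssim(a, b, data_range=1.0):
    """Single-window SSIM over whole images (no sliding window),
    with the standard constants C1 = (0.01*L)^2 and C2 = (0.03*L)^2."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / (
        (mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2)
    )

rng = np.random.default_rng(0)
photo = rng.random((64, 64))       # stand-in for a ground-truth photo
score = global_ssim(photo, photo)  # identical images score exactly 1.0
```

Identical inputs give a score of 1.0, and the score decreases as luminance, contrast, or structure diverge, which is why SSIM is a common fidelity measure for photo-sketch synthesis.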

    Self-supervised learning for transferable representations

    Machine learning has undeniably achieved remarkable advances thanks to large labelled datasets and supervised learning. However, this progress is constrained by the labour-intensive annotation process. It is not feasible to generate extensive labelled datasets for every problem we aim to address. Consequently, there has been a notable shift in recent times toward approaches that solely leverage raw data. Among these, self-supervised learning has emerged as a particularly powerful approach, offering scalability to massive datasets and showcasing considerable potential for effective knowledge transfer. This thesis investigates self-supervised representation learning with a strong focus on computer vision applications. We provide a comprehensive survey of self-supervised methods across various modalities, introducing a taxonomy that categorises them into four distinct families while also highlighting practical considerations for real-world implementation. Our focus thereafter is on the computer vision modality, where we perform a comprehensive benchmark evaluation of state-of-the-art self-supervised models against many diverse downstream transfer tasks. Our findings reveal that self-supervised models often outperform supervised learning across a spectrum of tasks, albeit with correlations weakening as tasks transition beyond classification, particularly for datasets with distribution shifts. Digging deeper, we investigate the influence of data augmentation on the transferability of contrastive learners, uncovering a trade-off between spatial and appearance-based invariances that generalise to real-world transformations. This begins to explain the differing empirical performances achieved by self-supervised learners on different downstream tasks, and it showcases the advantages of specialised representations produced with tailored augmentation.
Finally, we introduce a novel self-supervised pre-training algorithm for object detection, aligning pre-training with downstream architecture and objectives, leading to reduced localisation errors and improved label efficiency. In conclusion, this thesis contributes a comprehensive understanding of self-supervised representation learning and its role in enabling effective transfer across computer vision tasks.
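The contrastive learners discussed above typically optimise an InfoNCE-style objective over pairs of augmented views. A minimal NT-Xent sketch in NumPy follows; the batch size, embedding dimensionality, and temperature are illustrative values, not the thesis's settings.

```python
import numpy as np

def nt_xent(z1, z2, tau=0.5):
    """Minimal NT-Xent (InfoNCE) loss: z1 and z2 hold one embedding per
    image for each of two augmented views; matching rows are positives."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    z = np.concatenate([z1, z2])                  # (2N, d) unit vectors
    sim = z @ z.T / tau                           # scaled cosine similarity
    np.fill_diagonal(sim, -np.inf)                # a sample is not its own negative
    n = len(z1)
    # Index of each row's positive partner in the concatenated batch.
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    # Numerically stable row-wise log-sum-exp over all candidates.
    m = sim.max(axis=1, keepdims=True)
    lse = m[:, 0] + np.log(np.exp(sim - m).sum(axis=1))
    return -(sim[np.arange(2 * n), pos] - lse).mean()

rng = np.random.default_rng(0)
views = rng.normal(size=(8, 16))
loss = nt_xent(views, views + 0.05 * rng.normal(size=(8, 16)))
```

Because the augmentations define which pairs count as positives, the choice of augmentation directly shapes which invariances the representation acquires, which is the trade-off the thesis investigates.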

    The Council of Europe’s Framework of Competences for Democratic Culture: Hope for democracy or an allusive Utopia?

    Democracies around the world are increasingly polarized along political and cultural lines. To address these challenges, in 2016, the Council of Europe (CoE) produced a model of twenty competences for democratic culture. In 2018, this same model became the basis of the Reference Framework of Competences for Democratic Culture (RFCDC). The RFCDC provides pedagogical instructions to help implement these competences. Together, I call this set of materials “the Framework”. This thesis begins with the premise that utopia has long played an important role in the way power is maintained or resisted in democratic education. It questions the assumption that democratic culture can be cultivated instrumentally through policy-based competences without imposing power on subjects and views this assumption to be utopian. It thus excavates the potential utopian ideals at play in the Framework using ‘hidden utopias’ as a conceptual lens and method, which draws inspiration from the theories of Michel Foucault, Ernst Bloch and Ruth Levitas. It investigates how using ‘hidden utopias’ as a theoretical lens might facilitate a deeper understanding of the nature and purpose of the Framework, how implicit utopias might be at play, how this could be problematic and how these theories might shed light on the application of the Framework in pedagogical contexts. The contribution of this thesis is to make visible potential utopias at the heart of the Framework. It suggests that making implicit utopias visible in democratic education can help educators and learners engage with these discourses in critical and innovative ways and think beyond them.

    A novel segmentation approach for crop modeling using a plenoptic light-field camera: going from 2D to 3D

    Crop phenotyping is a desirable task in crop characterization since it allows the farmer to make early decisions and therefore be more productive. This research is motivated by the generation of tools for rice crop phenotyping within the OMICAS research ecosystem framework. It proposes implementing image processing technologies and artificial intelligence techniques through a multisensory approach with multispectral information. Three main stages are covered: (i) a segmentation approach that identifies the biological material associated with plants, whose main contribution is the GFKuts segmentation approach; (ii) a strategy for sensory fusion between three different cameras (a 3D camera, an infrared multispectral camera, and a thermal multispectral camera), developed through a complex object detection approach; and (iii) the characterization of a 4D model that generates topological relationships from the point-cloud information; the main contribution of this stage is the improvement of the point cloud captured by the 3D sensor, so it improves the acquisition of any 3D sensor. This research presents a development that receives information from multiple sensors, especially infrared 2D sensors, and generates a single 4D model in geometric space [X, Y, Z]. This model integrates the color information of 5 channels and topological information relating the points in space. Overall, the research allows the integration of 3D information from any sensor/technology and the multispectral channels from any multispectral camera to generate direct non-invasive measurements on the plant.

    Automatic Characterization of Block-In-Matrix Rock Outcrops through Segmentation Algorithms and Its Application to an Archaeo-Mining Case Study

    The mechanical behavior of block-in-matrix materials is heavily dependent on their block content. This parameter is in most cases obtained through visual analyses of the ground using digital imagery, which provides the areal block proportion (ABP) of the area analyzed. Nowadays, computer vision models have the capability to extract knowledge from the information stored in these images. In this research, we analyze and compare classical feature-detection algorithms with state-of-the-art models for the automatic calculation of the ABP parameter in images from surface and underground outcrops. The outcomes of this analysis result in the development of a framework for ABP calculation based on the Segment Anything Model (SAM), which performs this task at a human level when compared with the results of 32 experts in the field. Consequently, this model can help reduce human bias in the estimation of mechanical properties of block-in-matrix materials as well as prevent underground technical problems caused by mischaracterization of rock block quantities and dimensions. The methodology used to obtain the ABP at different outcrops is combined with estimates of the rock matrix properties and other characterization techniques to mechanically characterize the block-in-matrix materials. The combination of all these techniques has been applied to analyze, understand and, for the first time, attempt to model Roman gold-mining strategies at an archaeological site in NW Spain. This mining method is explained through a 2D finite-element method numerical model.
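Once per-block masks are available (e.g. from a segmentation model such as SAM), the ABP itself reduces to a pixel-count ratio over their union. A minimal sketch, with hypothetical masks and shapes standing in for real segmentation output:

```python
import numpy as np

def areal_block_proportion(masks, image_shape):
    """Areal block proportion: fraction of image pixels covered by the
    union of per-block binary masks (overlaps are counted only once)."""
    union = np.zeros(image_shape, dtype=bool)
    for m in masks:
        union |= m.astype(bool)
    return union.sum() / union.size

# Two hypothetical 100x100 block masks, partially overlapping.
m1 = np.zeros((100, 100), dtype=bool)
m1[10:30, 10:30] = True   # 400-pixel block
m2 = np.zeros((100, 100), dtype=bool)
m2[20:50, 20:50] = True   # 900-pixel block, 100 px overlap with m1
abp = areal_block_proportion([m1, m2], (100, 100))  # -> 0.12
```

Taking the union before counting matters: summing per-mask areas would double-count overlapping detections and inflate the ABP.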

    Modern computing: Vision and challenges

    Over the past six decades, the computing systems field has experienced significant transformations, profoundly impacting society with transformational developments, such as the Internet and the commodification of computing. Underpinned by technological advancements, computer systems, far from being static, have been continuously evolving and adapting to cover multifaceted societal niches. This has led to new paradigms such as cloud, fog, edge computing, and the Internet of Things (IoT), which offer fresh economic and creative opportunities. Nevertheless, this rapid change poses complex research challenges, especially in maximizing potential and enhancing functionality. As such, to maintain an economical level of performance that meets ever-tighter requirements, one must understand the drivers of new model emergence and expansion, and how contemporary challenges differ from past ones. To that end, this article investigates and assesses the factors influencing the evolution of computing systems, covering established systems and architectures as well as newer developments, such as serverless computing, quantum computing, and on-device AI on edge devices. Trends emerge when one traces the technological trajectory, including the rapid obsolescence of frameworks due to business and technical constraints, a move towards specialized systems and models, and varying approaches to centralized and decentralized control. This comprehensive review of modern computing systems looks ahead to the future of research in the field, highlighting key challenges and emerging trends, and underscoring their importance in cost-effectively driving technological progress.