31 research outputs found

    Impact of Machine Learning Pipeline Choices in Autism Prediction from Functional Connectivity Data

    Autism Spectrum Disorder (ASD) is a highly prevalent neurodevelopmental condition with a major social and economic impact that affects the entire life of families. There is an intense search for biomarkers that can be assessed as early as possible in order to initiate treatment and prepare the family to deal with the challenges imposed by the condition. Brain imaging biomarkers are of special interest. Specifically, functional connectivity data extracted from resting-state functional magnetic resonance imaging (rs-fMRI) should allow the detection of brain connectivity alterations. Machine learning pipelines encompass the estimation of the functional connectivity matrix from brain parcellations, feature extraction, and the building of classification models for ASD prediction. The works reported in the literature are very heterogeneous from the computational and methodological points of view. In this paper, we carry out a comprehensive computational exploration of the impact of the choices involved in building these machine learning pipelines. Specifically, we consider six brain parcellation definitions, five methods for functional connectivity matrix construction, six feature extraction/selection approaches, and nine classifier building algorithms. We report the prediction performance sensitivity to each of these choices, as well as best results that are comparable with the state of the art. This work has been partially supported by FEDER funds through MINECO project TIN2017-85827-P. This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No. 777720
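As a rough illustration of such a pipeline (not the authors' exact configuration), the sketch below builds correlation-based connectivity features from synthetic time series and cross-validates a linear SVM with univariate feature selection in scikit-learn; all sizes, the random labels, and the specific estimator choices are hypothetical:

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def connectivity_features(ts):
    """Vectorize the upper triangle of a Pearson correlation matrix."""
    corr = np.corrcoef(ts.T)               # ROIs x ROIs connectivity matrix
    iu = np.triu_indices_from(corr, k=1)   # off-diagonal upper triangle
    return corr[iu]

# Synthetic stand-in data: 40 subjects, 120 time points, 20 ROIs.
X = np.array([connectivity_features(rng.standard_normal((120, 20)))
              for _ in range(40)])
y = rng.integers(0, 2, size=40)            # placeholder ASD/control labels

# One possible pipeline instance: scaling -> ANOVA selection -> linear SVM.
pipe = make_pipeline(StandardScaler(),
                     SelectKBest(f_classif, k=50),
                     SVC(kernel="linear"))
scores = cross_val_score(pipe, X, y, cv=5)
print(scores.mean())
```

Swapping the parcellation size, the connectivity estimator, the selector, or the classifier in this skeleton is exactly the kind of pipeline-choice exploration the paper reports.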

    Behavioral Activity Recognition Based on Gaze Ethograms

    Noninvasive behavior observation techniques allow more natural human behavior assessment experiments with higher ecological validity. We propose the use of gaze ethograms in the context of user interaction with a computer display to characterize the user's behavioral activity. A gaze ethogram is a time sequence of the screen regions the user is looking at, and it can be used for behavioral modeling of the user. Given a rough partition of the display space, we are able to extract gaze ethograms that allow the discrimination of three common user behavioral activities: reading a text, viewing a video clip, and writing a text. A gaze tracking system is used to build the gaze ethogram. User behavioral activity is modeled by a classifier of gaze ethograms able to recognize the user activity after training. Conventional commercial gaze trackers used for research in neuroscience and psychology are expensive and intrusive, and sometimes require wearing uncomfortable appliances. For the purposes of our behavioral research, we have developed an open source gaze tracking system that runs on conventional laptop computers using their low-quality cameras. Some of the gaze tracking pipeline elements have been borrowed from the open source community. However, we have developed innovative solutions to some of the key issues that arise in the gaze tracker. Specifically, we have proposed texture-based eye features that are quite robust to low-quality images. These features are the input for a classifier predicting the screen target area the user is looking at. We report comparative results of several classifier architectures, carried out in order to select the classifier used to extract the gaze ethograms for our behavioral research. We perform another classifier selection at the level of ethogram classification. Finally, we report encouraging results of user behavioral activity recognition experiments carried out over an in-house dataset. This work has been supported by FEDER funds through MINECO project TIN2017-85827-P. This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No. 777720. Additional support comes from grant IT1284-19 of the Basque Country Government
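The ethogram-classification stage can be sketched as follows; the region count, the feature encoding (region occupancy plus transition frequencies), and the synthetic activity patterns are illustrative assumptions, not the paper's actual design:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

N_REGIONS = 6  # hypothetical rough partition of the display

def ethogram_features(seq):
    """Region occupancy histogram plus flattened transition frequencies."""
    seq = np.asarray(seq)
    occupancy = np.bincount(seq, minlength=N_REGIONS) / len(seq)
    trans = np.zeros((N_REGIONS, N_REGIONS))
    for a, b in zip(seq[:-1], seq[1:]):
        trans[a, b] += 1
    trans /= max(len(seq) - 1, 1)
    return np.concatenate([occupancy, trans.ravel()])

# Synthetic ethograms: 'reading' sweeps regions in order, 'video'
# fixates a single region, 'writing' alternates between two regions.
def make(activity, n=100):
    if activity == 0:
        return np.arange(n) % N_REGIONS
    if activity == 1:
        return np.full(n, 2)
    return np.tile([1, 4], n // 2)

X = np.array([ethogram_features(make(a)) for a in [0, 1, 2] * 10])
y = np.array([0, 1, 2] * 10)
clf = RandomForestClassifier(random_state=0).fit(X, y)
print(clf.score(X, y))  # the toy classes are trivially separable
```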

    Energy demands of diverse spiking cells from the neocortex, hippocampus, and thalamus

    It has long been known that neurons in the brain are not physiologically homogeneous. In response to a current stimulus, they can fire several distinct patterns of action potentials associated with different physiological classes: regular-spiking cells, fast-spiking cells, intrinsically bursting cells, and low-threshold cells. In this work we show that the high degree of variability in the firing characteristics of action potentials among these cells is accompanied by a significant variability in the energy demands required to restore the concentration gradients after an action potential. The values of the metabolic energy were calculated for a wide range of cell temperatures and stimulus intensities following two different approaches. The first is based on the amount of Na+ load crossing the membrane during a single action potential, while the second focuses on the electrochemical energy functions deduced from the dynamics of the computational neuron models. The results show that the thalamocortical relay neuron is the most energy-efficient cell, consuming between 7 and 18 nJ/cm² for each spike generated, while both the regular- and fast-spiking cells from the somatosensory cortex and the intrinsically bursting cell from the cat visual cortex are the least energy-efficient, and can consume up to 100 nJ/cm² per spike. The lowest energy demands were achieved at higher temperatures and high external stimuli. The authors received support from UFI11/07 of the UPV/EHU, SandS project EU grant agreement 317947, and MINECO projects TIN2011-28753-C02-02 and TIN2011-23823, which are partially funded by FEDER
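The first (Na+-load) approach can be illustrated with a back-of-the-envelope estimate: the Na+ charge entering during a spike is approximated from the membrane capacitance and the action potential amplitude, and the pump cost follows from 3 Na+ extruded per ATP. All constants below are textbook order-of-magnitude assumptions, not values from the paper, but the result lands in the same tens-of-nJ/cm² range quoted above:

```python
# Order-of-magnitude estimate of the Na+-load energy cost per spike.
FARADAY = 96485.0    # C/mol
ATP_ENERGY = 5.0e4   # J/mol, approximate free energy of ATP hydrolysis
CM = 1e-6            # F/cm^2, typical membrane capacitance
DV = 0.1             # V, action potential amplitude

def energy_per_spike(overlap=4.0):
    """nJ/cm^2 needed to pump out the Na+ entering during one spike.

    `overlap` scales the minimal capacitive charge to account for
    simultaneous Na+ and K+ currents partially cancelling each other.
    """
    charge = overlap * CM * DV           # C/cm^2 of Na+ influx
    na_moles = charge / FARADAY          # mol/cm^2 of Na+ ions
    atp_moles = na_moles / 3.0           # pump extrudes 3 Na+ per ATP
    return atp_moles * ATP_ENERGY * 1e9  # J -> nJ

print(energy_per_spike(1.0))  # no overlap: most efficient case
print(energy_per_spike(4.0))  # substantial overlap: least efficient
```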

    Deep transfer learning-based gaze tracking for behavioral activity recognition

    Computational Ethology focused on human beings is usually referred to as Human Activity Recognition (HAR). Specifically, this paper belongs to a line of work on the identification of broad cognitive activities that users carry out with computers. The keystone of this kind of system is the noninvasive detection of the subject's gaze fixations in selected display areas. Noninvasiveness is ensured by using conventional laptop cameras without additional illumination or tracking devices. The gaze ethograms, composed as sequences of gaze fixations, are the basis for identifying the user activities. To determine the gaze fixation display areas with the highest accuracy, this paper explores the use of a transfer learning approach applied to several well-known deep learning network (DLN) architectures whose input is the eye area extracted from the face image and whose output is the identification of the gaze fixation area on the computer screen. Two different datasets are created and used in the validation experiments. We report encouraging results that may allow the general use of the system. This work has been supported by FEDER funds through MINECO project TIN2017-85827-P. This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No. 777720. XinZhe Jin contributed some early computational experiments
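The transfer learning step amounts to freezing a pretrained DLN as a feature extractor and training only a lightweight head that maps its embeddings to screen areas. The sketch below stands in for that head with a logistic regression over synthetic embeddings; the area count, the embedding size, and the data are hypothetical placeholders for the real network's penultimate-layer outputs:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
N_AREAS = 9    # hypothetical 3x3 grid of screen fixation areas
EMB_DIM = 128  # stand-in for a frozen DLN's embedding size

# Stand-in embeddings: in the real system these would come from the
# frozen pretrained network fed with eye-area crops of the face image.
centers = rng.standard_normal((N_AREAS, EMB_DIM))
y = rng.integers(0, N_AREAS, size=600)
X = centers[y] + 0.3 * rng.standard_normal((600, EMB_DIM))

# Only this head is trained; the feature extractor stays fixed.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
head = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(head.score(X_te, y_te))
```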

    An ongoing review of speech emotion recognition

    User emotional status recognition is becoming a key feature in advanced Human Computer Interfaces (HCI). A key source of emotional information is spoken expression, which may be part of the interaction between the human and the machine. Speech emotion recognition (SER) is a very active area of research that involves the application of current machine learning and neural network tools. This ongoing review covers recent and classical approaches to SER reported in the literature. This work has been carried out with the support of project PID2020-116346GB-I00, funded by the Spanish MICIN

    Automatic feedback and assessment of team-coding assignments in a DevOps context

    We describe an automated assessment process for team-coding assignments based on DevOps best practices. This system and methodology include the definition of Team Performance Metrics, which measure properties of the software developed by each team and its correct use of DevOps techniques, and track the progress of each group on each metric. The methodology also defines Individual Performance Metrics to measure the impact of individual student contributions on the increase in the Team Performance Metrics. Periodically scheduled reports using these metrics provide students with valuable feedback. This process also facilitates the assessment of the assignments. Although this method is not intended to produce the final grade of each student, it provides very valuable information to the lecturers. We have used it as the main source of information for student and team assessment in one programming course. Additionally, we use other assessment methods to compute the final grade: written conceptual tests to check the students' understanding of the development processes, and cross-evaluations. Qualitative evaluations from the students, collected through questionnaires, are very positive and encouraging. Open Access funding provided thanks to the CRUE-CSIC agreement with Springer Nature

    Clustering Cities over Features Extracted from Multiple Virtual Sensors Measuring Micro-Level Activity Patterns Allows One to Discriminate Large-Scale City Characteristics

    The impact of micro-level people’s activities on urban macro-level indicators is a complex question that has been the subject of much interest among researchers and policymakers. Transportation preferences, consumption habits, communication patterns and other individual-level activities can significantly impact large-scale urban characteristics, such as the potential for innovation generation of the city. Conversely, large-scale urban characteristics can also constrain and determine the activities of their inhabitants. Therefore, understanding the interdependence and mutual reinforcement between micro- and macro-level factors is critical to defining effective public policies. The increasing availability of digital data sources, such as social media and mobile phones, has opened up new opportunities for the quantitative study of this interdependency. This paper aims to detect meaningful city clusters on the basis of a detailed analysis of the spatiotemporal activity patterns of each city. The study is carried out on a worldwide city dataset of spatiotemporal activity patterns obtained from geotagged social media data. Clustering features are obtained from unsupervised topic analyses of the activity patterns. Our study compares state-of-the-art clustering models, selecting the model achieving a Silhouette Score 2.7% greater than that of the next-best model. Three well-separated city clusters are identified. Additionally, the study of the distribution of the City Innovation Index over these three city clusters shows that low-performing cities are discriminated from high-performing cities relative to innovation, with the low-performing cities identified in one well-separated cluster. Therefore, it is possible to correlate micro-scale individual-level activities with large-scale urban characteristics. This work would not have been accomplished without the financial support of CONICYT-PFCHA/DOCTORADO BECAS CHILE/2019-21190345. The last author received research funds from the Basque Government as the head of the Grupo de Inteligencia Computacional, Universidad del Pais Vasco, UPV/EHU, from 2007 until 2025. The current code of the grant is IT1689-22. Additionally, the author participates in Elkartek projects KK-2022/00051 and KK-2021/00070. The Spanish MCIN has also granted the author a research project under code PID2020-116346GB-I00
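The model-selection step, comparing candidate clustering models by Silhouette Score, can be sketched as follows; the two candidate models and the synthetic stand-in for per-city topic-activation features are illustrative assumptions:

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering, KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

# Synthetic stand-in for per-city topic-activation features.
X, _ = make_blobs(n_samples=90, centers=3, cluster_std=1.0, random_state=0)

# Candidate clustering models compared on the same feature matrix.
candidates = {
    "kmeans": KMeans(n_clusters=3, n_init=10, random_state=0),
    "agglomerative": AgglomerativeClustering(n_clusters=3),
}
scores = {name: silhouette_score(X, model.fit_predict(X))
          for name, model in candidates.items()}
best = max(scores, key=scores.get)  # keep the best-separated clustering
print(best, round(scores[best], 3))
```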

    Photometric Stereo-Based Defect Detection System for Steel Components Manufacturing Using a Deep Segmentation Network

    This paper presents an automatic system for the quality control of metallic components using a photometric stereo-based sensor and a customized semantic segmentation network. The system is designed from interoperable modules and allows capturing the knowledge of the operators in order to apply it later in automatic defect detection. A salient contribution is the compact representation of the surface information, achieved by combining the photometric stereo images into an RGB image that is fed to a convolutional segmentation network trained for surface defect detection. We demonstrate the advantage of this compact surface imaging representation over the use of each photometric imaging source of information in isolation. An empirical analysis of the performance of the segmentation network on imaging samples of materials with diverse surface reflectance properties is carried out, achieving Dice performance index values above 0.83 in all cases. The results support the potential of photometric stereo in conjunction with our semantic segmentation network.
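A minimal sketch of the compact representation, assuming three grayscale photometric stereo captures mapped one-to-one onto the RGB channels (the abstract does not specify the paper's exact combination rule, so this mapping and the per-channel normalization are illustrative):

```python
import numpy as np

def compose_rgb(captures):
    """Pack three grayscale photometric stereo captures (same shape)
    into one H x W x 3 image, normalizing each channel to [0, 1]."""
    channels = []
    for img in captures[:3]:
        img = img.astype(float)
        span = img.max() - img.min()
        channels.append((img - img.min()) / span if span else img * 0.0)
    return np.stack(channels, axis=-1)

# Synthetic stand-in for three directional-illumination captures.
rng = np.random.default_rng(0)
captures = [rng.integers(0, 255, size=(64, 64)) for _ in range(3)]
rgb = compose_rgb(captures)  # single image fed to the segmentation net
print(rgb.shape, rgb.min(), rgb.max())
```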

    Generative Adversarial Networks to Improve the Robustness of Visual Defect Segmentation by Semantic Networks in Manufacturing Components

    This paper describes the application of Semantic Networks for the detection of defects in images of metallic manufactured components in a situation where the number of available defect samples is small, which is rather common in real practical environments. In order to overcome this shortage of data, the common approach is to use conventional data augmentation techniques. We resort instead to Generative Adversarial Networks (GANs), which have shown the capability to generate highly convincing samples of a specific class as the result of a game between a discriminator and a generator module. Here, we apply GANs to generate samples of images of metallic manufactured components with specific defects in order to improve the training of the Semantic Networks (specifically, DeepLabV3+ and Pyramid Attention Network (PAN)) that carry out the defect detection and segmentation. Our process generates defect images using StyleGAN2 with the DiffAugment method, followed by conventional data augmentation over the entire enriched dataset, achieving a large balanced dataset that allows robust training of the Semantic Network. We demonstrate the approach on a private dataset generated for an industrial client, where images are captured by an ad hoc photometric stereo image acquisition system, and on a public dataset, the Northeastern University surface defect database (NEU). The proposed approach achieves improvements of 7% and 6% in the intersection over union (IoU) measure of detection performance on the two datasets, respectively, over conventional data augmentation.

    Methodologically Grounded Semantic Analysis of Large Volume of Chilean Medical Literature Data Applied to the Analysis of Medical Research Funding Efficiency in Chile

    Background: Medical knowledge is accumulated in scientific research papers over time. In order to exploit this knowledge with automated systems, there is a growing interest in developing text mining methodologies to extract, structure, and analyze, in the shortest time possible, the knowledge encoded in the large volume of medical literature. In this paper, we use the Latent Dirichlet Allocation approach to analyze the correlation between funding efforts and actually published research results, in order to provide policy makers with a systematic and rigorous tool to assess the efficiency of funding programs in the medical area. Results: We have tested our methodology on the Revista Medica de Chile, years 2012-2015. Fifty relevant semantic topics were identified within 643 medical scientific research papers. Relationships between the identified semantic topics were uncovered using visualization methods. We have also been able to analyze the funding patterns of the scientific research underlying these publications. We found that only 29% of the publications declare funding sources, and we identified five topic clusters that concentrate 86% of the declared funds. Conclusions: Our methodology allows analyzing and interpreting the current state of medical research at a national level. The funding source analysis may be useful at the policy-making level in order to assess the impact of actual funding policies and to design new policies. This research was partially funded by CONICYT, Programa de Formacion de Capital Humano Avanzado (CONICYT-PCHA/Doctorado Nacional/2015-21150115). MG's work in this paper has been partially supported by FEDER funds for the MINECO project TIN2017-85827-P, and by projects KK-2018/00071 and KK-2018/00082 of the Elkartek 2018 funding program. This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No. 777720. No role has been played by the funding bodies in the design of the study; the collection, analysis, or interpretation of data; or the writing of the manuscript.
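The core of the methodology, fitting an LDA topic model to a document-term matrix and reading off per-document topic weights, can be sketched with scikit-learn; the toy corpus and the three-topic setting are illustrative stand-ins (the study used 643 papers and 50 topics):

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# Tiny stand-in corpus of medical-paper-like snippets.
docs = [
    "diabetes glucose insulin treatment clinical trial",
    "insulin glucose metabolic diabetes patients",
    "cancer tumor chemotherapy oncology survival",
    "tumor cancer screening oncology biopsy",
    "funding grant research policy evaluation",
    "research funding policy national program",
]
counts = CountVectorizer().fit_transform(docs)   # document-term matrix
lda = LatentDirichletAllocation(n_components=3, random_state=0).fit(counts)
doc_topics = lda.transform(counts)  # per-document topic distributions
print(doc_topics.shape)
```

Aggregating these per-document topic weights by declared funding source is the kind of cross-tabulation that supports the funding-pattern analysis described above.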