
    A Mixed Data-Based Deep Neural Network to Estimate Leaf Area Index in Wheat Breeding Trials

    Get PDF
    Remote and non-destructive estimation of leaf area index (LAI) has been a challenge in the last few decades, as the direct and indirect methods available are laborious and time-consuming. The recent emergence of high-throughput plant phenotyping platforms has increased the need to develop new phenotyping tools for better decision-making by breeders. In this paper, a novel model based on artificial intelligence algorithms and nadir-view red green blue (RGB) images taken from a terrestrial high-throughput phenotyping platform is presented. The model mixes numerical data collected in a wheat breeding field and visual features extracted from the images to make rapid and accurate LAI estimations. Model-based LAI estimations were validated against LAI measurements determined non-destructively using an allometric relationship obtained in this study. The model performance was also compared with LAI estimates obtained by other classical indirect methods based on bottom-up hemispherical images and gap fraction theory. Model-based LAI estimations were highly correlated with ground-truth LAI. The model performance was slightly better than that of the hemispherical image-based method, which tended to underestimate LAI. These results show the great potential of the developed model for near real-time LAI estimation, which can be further improved in the future by increasing the dataset used to train the model
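    The mixed-data idea in the abstract, combining field-collected numerical variables with image-derived features before prediction, can be pictured with a minimal sketch. All names and data below are invented, and a linear least-squares head stands in for the paper's deep network:

```python
import numpy as np

# Hedged sketch, not the paper's model: all data below is synthetic.
# "Mixed data" here means concatenating field-measured numerical
# variables with image-derived features into one input matrix.
rng = np.random.default_rng(0)
n_plots = 50
numeric = rng.normal(size=(n_plots, 3))      # e.g. agronomic measurements per plot
image_feats = rng.normal(size=(n_plots, 8))  # e.g. features from nadir RGB images
X = np.hstack([numeric, image_feats])        # mixed-data input matrix
true_w = rng.normal(size=X.shape[1])
lai = X @ true_w + 0.01 * rng.normal(size=n_plots)  # synthetic "ground-truth" LAI

w, *_ = np.linalg.lstsq(X, lai, rcond=None)  # fit the linear stand-in model
pred = X @ w
rmse = float(np.sqrt(np.mean((pred - lai) ** 2)))
print(round(rmse, 4))
```

    The same concatenation pattern carries over when the linear head is replaced by a neural network, which is what allows numerical and visual inputs to be learned jointly.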

    Revisiting and rereading the verse epitaphs of Anse (Rhône): CIL XIII 1655, CIL XIII 1655, CIL XIII 1660

    Get PDF
    This work includes the critical edition, philological commentary and Spanish translation of three Latin inscriptions preserved at Anse, together with an analysis for their metrical classification. The first of them, composed in elegiac couplets, would have served as a model for the second, which in the process of adaptation departed from any known metrical form; the third, despite its fragmentary state, can be considered a carmen epigraphicum with dactylic rhythm

    Eight reasons why cybersecurity on novel generations of brain-computer interfaces must be prioritized

    Full text link
    This article presents eight neural cyberattacks affecting spontaneous neural activity, inspired by well-known cyberattacks from the computer science domain: Neural Flooding, Neural Jamming, Neural Scanning, Neural Selective Forwarding, Neural Spoofing, Neural Sybil, Neural Sinkhole and Neural Nonce. These cyberattacks exploit vulnerabilities existing in the new generation of Brain-Computer Interfaces. After presenting their formal definitions, the cyberattacks were implemented, and their impact evaluated, in a Convolutional Neural Network (CNN) simulating a portion of a mouse's visual cortex. This implementation is based on existing literature indicating the similarities that CNNs have with neuronal structures from the visual cortex. Some conclusions are also provided, indicating that Neural Nonce and Neural Jamming are the most impactful cyberattacks in terms of short-term effects, while Neural Scanning and Neural Nonce are the most damaging in terms of long-term effects
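    As a toy picture only (not the paper's neuronal simulator, and with invented parameters), a flooding-style attack on spontaneous activity can be imagined as an attacker forcing extra spikes into a recorded spike train, raising its firing rate:

```python
import numpy as np

# Toy illustration with invented parameters; not the paper's simulation.
# "Flooding" is sketched as forcing spikes into attacker-chosen time bins.
rng = np.random.default_rng(3)
spikes = (rng.random(1000) < 0.05).astype(int)         # baseline ~5% firing rate
attacked = spikes.copy()
flood_idx = rng.choice(1000, size=200, replace=False)  # attacker-chosen bins
attacked[flood_idx] = 1                                # forced spikes
print(spikes.sum(), attacked.sum())                    # firing count rises
```

    The other attacks in the taxonomy (jamming, spoofing, sinkhole, etc.) would manipulate the spike train in analogous but distinct ways, which is what makes their short- and long-term impacts comparable.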

    Excellence and Quality in Andalusia University Library System

    Get PDF
    From 1996 onwards, the Quality Assessment National Plan and the adoption of its agenda by regional authorities and universities alike have resulted in a growing acceptance by the Spanish academic community of the challenges and opportunities offered by evaluation and quality assurance activities. Academic librarians have been committed to this culture of quality from the very beginning and in most cases have been leading the way in their own institutions. General tools like the Evaluation Guide referred to above, developed to be applied to administration and services alike, were of little use for libraries, so academic libraries have been the first units to develop their own evaluation guides at local and regional levels. The University System in Andalusia (Spain) is formed by 10 universities financed by the regional government. In 2000, the Quality Unit of the Andalusian Universities convened an Assessment of University Libraries Pilot Plan to carry out a global analysis of the library system. This Pilot Plan has had three steps:
- During 2000-2002, a technical committee drafted a new evaluation guide for academic libraries, based on the EFQM model because of its growing influence in the evaluation of public-sector and not-for-profit organizations across Europe. During the course of our work we were delighted to see that we basically concurred with the approach taken by LISIM. The Guide is divided into 5 parts, as follows: an analysis and description of 9 criteria adapted to the library scenario; 35 tables for data collection; a set of 30 quality and performance indicators; an excellence-rating matrix, an objective tool to determine the level of excellence achieved by the library on a scale from 0 to 10; and general guidelines for the Assessment Committees of university departments (the basic unit of research assessment undertaken by the university) and of degree courses (the basic unit of assessment of teaching personnel).
- In 2002-2004, a coordination committee drove the assessment process of 9 libraries and tested the materials and evaluation methodology. The Pilot Plan concluded with an external evaluation by 5 external committees formed by librarians, faculty members and EFQM methodology specialists. The aim of this paper is to explain the different parts and strong points of this process and how the EFQM model is suitable for all kinds of libraries

    Analyzing the impact of Driving tasks when detecting emotions through brain–computer interfaces

    Get PDF
    Traffic accidents are the leading cause of death among young people, a problem that claims an enormous number of victims every year. Several technologies have been proposed to prevent accidents, with brain–computer interfaces (BCIs) among the most promising. In this context, BCIs have been used to detect emotional states, concentration issues, or stressful situations, which could play a fundamental role on the road since they are directly related to drivers’ decisions. However, there is no extensive literature on applying BCIs to detect subjects’ emotions in driving scenarios. In such a context, some challenges remain to be solved, such as (i) the impact of performing a driving task on emotion detection and (ii) which emotions are more detectable in driving scenarios. To address these challenges, this work proposes a framework focused on detecting emotions using electroencephalography with machine learning and deep learning algorithms. In addition, a use case has been designed where two scenarios are presented. The first scenario consists of listening to sounds as the primary task to perform, while in the second scenario listening to sounds becomes a secondary task, the primary task being the use of a driving simulator. In this way, it is intended to demonstrate whether BCIs are useful in this driving scenario. The results improve on those existing in the literature, achieving 99% accuracy for the detection of two emotions (non-stimuli and angry), 93% for three emotions (non-stimuli, angry and neutral) and 75% for four emotions (non-stimuli, angry, neutral and joy)
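    EEG-based emotion detection of the kind described above is commonly cast as classification over band-power-like features. As a minimal sketch with entirely synthetic data (the labels match the abstract, but the features and the nearest-centroid rule are invented stand-ins for the paper's ML/DL models):

```python
import numpy as np

# Hedged sketch: synthetic feature vectors clustered per emotion, and a
# nearest-centroid classifier standing in for the paper's ML/DL models.
rng = np.random.default_rng(2)
centers = {"non-stimuli": 0.0, "angry": 1.0, "neutral": 2.0}

def make_features(label, n):
    # synthetic 4-channel "band power" vectors for a given emotion
    return rng.normal(loc=centers[label], scale=0.2, size=(n, 4))

# "train": average the feature vectors of each emotion into a centroid
centroids = {e: make_features(e, 30).mean(axis=0) for e in centers}

def classify(x):
    # assign the emotion whose centroid is nearest in feature space
    return min(centroids, key=lambda e: float(np.linalg.norm(x - centroids[e])))

sample = make_features("angry", 1)[0]
print(classify(sample))
```

    Real pipelines differ mainly in the feature extraction (spectral bands per electrode) and in replacing the centroid rule with trained ML or deep models, but the classification framing is the same.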

    Single-board device individual authentication based on hardware performance and autoencoder transformer models

    Get PDF
    The proliferation of the Internet of Things (IoT) has led to the emergence of crowdsensing applications, where a multitude of interconnected devices collaboratively collect and analyze data. Ensuring the authenticity and integrity of the data collected by these devices is crucial for reliable decision-making and maintaining trust in the system. Traditional authentication methods are often vulnerable to attacks or can be easily duplicated, posing challenges to securing crowdsensing applications. Moreover, current solutions leveraging device behavior are mostly focused on device identification, which is a simpler task than authentication. To address these issues, an individual IoT device authentication framework based on hardware behavior fingerprinting and Transformer autoencoders is proposed in this work. To support the design, a threat model details the security problems faced when performing hardware-based authentication in IoT. This solution leverages the inherent imperfections and variations in IoT device hardware to differentiate between devices with identical specifications. By monitoring and analyzing the behavior of key hardware components, such as the CPU, GPU, RAM, and Storage on devices, unique fingerprints for each device are created. The performance samples are considered as time series data and used to train outlier detection transformer models, one per device and aiming to model its normal data distribution. Then, the framework is validated within a spectrum crowdsensing system leveraging Raspberry Pi devices. After a pool of experiments, the model from each device is able to individually authenticate it among the 45 devices employed for validation. An average True Positive Rate (TPR) of 0.74±0.13 and an average maximum False Positive Rate (FPR) of 0.06±0.09 demonstrate the effectiveness of this approach in enhancing authentication, security, and trust in crowdsensing applications
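    The authentication decision described above, one outlier-detection model per device, can be sketched in simplified form. All data below is synthetic, and a mean performance profile stands in for the paper's Transformer autoencoder; only the thresholding logic is illustrated:

```python
import numpy as np

# Simplified, assumed sketch of per-device authentication: a model fit
# on one device's hardware-performance windows should reconstruct that
# device's samples with low error, so windows whose error exceeds a
# threshold are rejected as coming from another device.
rng = np.random.default_rng(1)

def reconstruction_error(sample, profile):
    return float(np.mean((sample - profile) ** 2))

train = rng.normal(loc=0.0, scale=0.1, size=(200, 16))  # device A windows
profile = train.mean(axis=0)                            # "learned" normal behavior
errors = [reconstruction_error(s, profile) for s in train]
threshold = float(np.percentile(errors, 95))            # accept ~95% of own samples

own = rng.normal(loc=0.0, scale=0.1, size=16)    # new window from device A
other = rng.normal(loc=0.5, scale=0.1, size=16)  # window from a different device
print(reconstruction_error(own, profile), reconstruction_error(other, profile))
```

    The reported TPR/FPR trade-off corresponds directly to where this per-device threshold is placed on the distribution of reconstruction errors.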

    SAFECAR: A Brain–Computer Interface and intelligent framework to detect drivers’ distractions

    Full text link
    As recently reported by the World Health Organization (WHO), the heavy use of intelligent devices such as smartphones, multimedia systems, or billboards causes an increase in distraction and, consequently, fatal accidents while driving. The use of EEG-based Brain–Computer Interfaces (BCIs) has been proposed as a promising way to detect distractions. However, existing solutions are not well suited for driving scenarios. They do not consider complementary data sources, such as contextual data, nor guarantee realistic scenarios with real-time communications between components. This work proposes an automatic framework for detecting distractions using BCIs and a realistic driving simulator. The framework employs different supervised Machine Learning (ML)-based models to classify the different types of distractions using Electroencephalography (EEG) and contextual driving data collected by car sensors, such as line crossings or object detection. This framework has been evaluated using a driving scenario without distractions and a similar one where visual and cognitive distractions are generated for ten subjects. The proposed framework achieved an 83.9% F1-score with a binary model and 73% with a multiclass model using EEG, improving by 7% in binary classification and 8% in multi-class classification when incorporating contextual driving data into the training dataset. Finally, the results were confirmed by a neurophysiological study, which revealed significantly higher voltages associated with selective attention and multitasking
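    For reference, the F1-score used to report these results combines precision and recall. A self-contained computation on invented labels and predictions (1 = distracted, 0 = not distracted; these are illustrative, not the paper's data):

```python
# Standard F1-score definition, evaluated on hypothetical predictions.
def f1_score(y_true, y_pred):
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))  # true positives
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))  # false positives
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))  # false negatives
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

y_true = [1, 1, 1, 0, 0, 1, 0, 1]  # hypothetical ground truth
y_pred = [1, 1, 0, 0, 1, 1, 0, 1]  # hypothetical classifier output
print(round(f1_score(y_true, y_pred), 3))  # → 0.8
```

    Unlike plain accuracy, this metric penalizes both missed distractions (false negatives) and false alarms (false positives), which matters when the two classes are imbalanced in driving data.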