10 research outputs found

    DropIn: Making Reservoir Computing Neural Networks Robust to Missing Inputs by Dropout

    The paper presents a novel, principled approach to training recurrent neural networks from the Reservoir Computing family that are robust to missing input features at prediction time. By building on the ensembling properties of Dropout regularization, we propose a methodology, named DropIn, which efficiently trains a neural model as a committee machine of subnetworks, each capable of predicting with a subset of the original input features. We discuss the application of the DropIn methodology in the context of Reservoir Computing models, targeting applications characterized by input sources that are unreliable or prone to disconnection, such as pervasive wireless sensor networks and ambient intelligence. We provide an experimental assessment using real-world data from such application domains, showing how the DropIn methodology maintains predictive performance comparable to that of a model without missing features, even when 20%-50% of the inputs are unavailable.
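    As an illustration of the DropIn idea, the following minimal sketch (the toy task, constants, and all names are ours, not the paper's) trains a single ridge readout on echo state network states collected under random input masks, so the shared readout learns to predict from feature subsets and a missing feature at prediction time is simply a zero in the mask:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res, T = 4, 50, 200

# Fixed random reservoir, rescaled to spectral radius 0.9 for echo-state stability.
W_in = 0.5 * rng.normal(size=(n_res, n_in))
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))

def run_reservoir(X, mask, leak=0.3):
    """Leaky ESN state trajectory; `mask` zeroes dropped input features."""
    h, states = np.zeros(n_res), []
    for x in X:
        h = (1 - leak) * h + leak * np.tanh(W_in @ (x * mask) + W @ h)
        states.append(h.copy())
    return np.asarray(states)

def dropin_mask(p_drop=0.3):
    """Binary input mask; each feature dropped with prob p_drop, one always kept."""
    m = (rng.random(n_in) >= p_drop).astype(float)
    if m.sum() == 0:
        m[rng.integers(n_in)] = 1.0
    return m

# Toy task: reconstruct the sum of all inputs at each time step.
X = rng.normal(size=(T, n_in))
y = X.sum(axis=1)

# DropIn-style training: collect states under many random input masks,
# so the single shared readout acts as a committee over input subsets.
S = np.vstack([run_reservoir(X, dropin_mask()) for _ in range(20)])
t = np.tile(y, 20)

# Ridge-regression readout shared by all masked "subnetworks".
W_out = np.linalg.solve(S.T @ S + 1e-3 * np.eye(n_res), S.T @ t)

# Prediction with one feature missing: just zero its mask entry.
pred = run_reservoir(X, np.array([1.0, 1.0, 0.0, 1.0])) @ W_out
```

    Compared to imputation, nothing is estimated at test time: the model was trained to cope with absent features by construction.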

    Detecting Adversarial Examples through Nonlinear Dimensionality Reduction

    Deep neural networks are vulnerable to adversarial examples, i.e., carefully perturbed inputs aimed at misleading classification. This work proposes a detection method combining non-linear dimensionality reduction and density estimation techniques. Our empirical findings show that the proposed approach effectively detects adversarial examples crafted by non-adaptive attackers, i.e., attackers not specifically tuned to bypass the detection method. Given these promising results, we plan to extend our analysis to adaptive attackers in future work. Comment: European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN) 201
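    The detection recipe (embed, estimate density, threshold) can be sketched as follows; note that the random tanh projection is only a stand-in for the paper's actual non-linear dimensionality reduction step, and all names, constants, and data are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Placeholder non-linear embedding (a random tanh projection); the actual
# method uses a proper non-linear DR technique, which this does NOT reproduce.
P = rng.normal(size=(64, 2))
def embed(X):
    return np.tanh(X @ P)

def kde_log_density(Z_train, Z_query, bandwidth=0.2):
    """Gaussian kernel density estimate of queries w.r.t. training points."""
    d2 = ((Z_query[:, None, :] - Z_train[None, :, :]) ** 2).sum(-1)
    k = np.exp(-d2 / (2 * bandwidth**2))
    return np.log(k.mean(axis=1) + 1e-12)

# Legitimate samples cluster near the origin in this toy setup.
X_train = 0.1 * rng.normal(size=(500, 64))
Z_train = embed(X_train)

# Threshold at the 5th percentile of training densities: flag anything rarer.
thr = np.percentile(kde_log_density(Z_train, Z_train), 5)

def is_adversarial(X):
    return kde_log_density(Z_train, embed(X)) < thr

# "Adversarial" stand-ins: points far from the training manifold.
X_adv = 3.0 * rng.normal(size=(10, 64))
```

    A non-adaptive attacker does not see `thr` or the embedding; an adaptive one could craft inputs that stay in high-density regions, which is why the abstract defers that case to future work.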

    Perplexity-free Parametric t-SNE

    The t-distributed Stochastic Neighbor Embedding (t-SNE) algorithm is a ubiquitously employed dimensionality reduction (DR) method. Its non-parametric nature and impressive efficacy motivated its parametric extension. It is, however, bound to a user-defined perplexity parameter, restricting its DR quality compared to recently developed multi-scale perplexity-free approaches. This paper hence proposes a multi-scale parametric t-SNE scheme, relieved from perplexity tuning and with a deep neural network implementing the mapping. It produces reliable embeddings with out-of-sample extensions, competitive with the best perplexity adjustments in terms of neighborhood preservation on multiple data sets. Comment: ESANN 2020 proceedings, European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning. Online event, 2-4 October 2020, i6doc.com publ., ISBN 978-2-87587-074-2. Available from http://www.i6doc.com/en
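    One way to read the "perplexity-free" multi-scale ingredient is that input affinities are averaged over dyadic neighborhood sizes instead of being tied to a single user-chosen perplexity. The sketch below (our own simplified bandwidth choice, not the paper's exact recipe) builds such a multi-scale affinity matrix; the parametric part, a deep network minimizing the KL divergence between these affinities and Student-t output similarities, is omitted:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 5))

def sq_dists(X):
    """Pairwise squared Euclidean distances."""
    G = X @ X.T
    d = np.diag(G)
    return np.maximum(d[:, None] + d[None, :] - 2 * G, 0.0)

def multiscale_affinities(X):
    """Average row-normalised Gaussian affinities over dyadic scales
    K_h = 2^h, h = 1..floor(log2(N/2)), so no perplexity is tuned."""
    D = sq_dists(X)
    N = len(X)
    H = int(np.log2(N / 2))
    P = np.zeros((N, N))
    for h in range(1, H + 1):
        K = 2 ** h
        # Per-point bandwidth: mean squared distance to the K nearest points
        # (index 0 of each sorted row is the point itself, hence 1:K+1).
        sigma2 = np.sort(D, axis=1)[:, 1:K + 1].mean(axis=1)
        A = np.exp(-D / (2 * sigma2[:, None]))
        np.fill_diagonal(A, 0.0)
        P += A / A.sum(axis=1, keepdims=True)
    P /= H
    return (P + P.T) / (2 * N)   # symmetrise and normalise as in t-SNE
```

    The result is a valid joint probability matrix (non-negative, symmetric, summing to one) that mixes small and large neighborhood scales.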

    Neuroticism and Conscientiousness Moderate the Effect of Oral Medication Beliefs on Adherence of People with Mental Illness during the Pandemic

    Background. After the declaration of pandemic status in several countries, the continuity of face-to-face visits in psychiatric facilities was delayed or even interrupted to reduce viral spread. Little is known about the personality factors associated with medication beliefs and adherence amongst individuals with mental illness during the COVID-19 pandemic. This brief report describes a preliminary naturalistic longitudinal study that explored whether the Big Five personality traits prospectively moderate the effects of medication beliefs on changes in adherence during the pandemic for a group of outpatients with psychosis or bipolar disorder. Methods. Thirteen outpatients undergoing routine face-to-face follow-up assessments during the pandemic were included (41 observations overall) and completed the Revised Italian Version of the Ten-Item Personality Inventory, the Beliefs about Medicines Questionnaire, the Morisky Medication Adherence Scale-8-item and the Beck Depression Inventory-II. Results. Participants had stronger concerns about their psychiatric medications than beliefs about their necessity, and adherence to medications was generally low. Participants who had more necessity beliefs than concerns had better adherence to medications. People scoring higher in Conscientiousness and Neuroticism traits and more concerned about medication side effects had poorer adherence. Conclusions. These preliminary data suggest the importance of a careful assessment of adherence to medications amongst people with psychosis/bipolar disorder during the pandemic. Interventions aimed at improving adherence might focus on patients' medication beliefs and their Conscientiousness and Neuroticism personality traits.

    Deep Learning Safety under Non-Stationarity Assumptions

    Deep Learning (DL) is having a transformational effect in critical areas such as finance, healthcare, transportation, and defense, impacting nearly every aspect of our lives. Many businesses, eager to capitalize on advancements in DL, may not have scrutinized the security issues induced by including such intelligent components in their systems. Building a trustworthy DL system requires enforcing key properties, including robustness, privacy, and accountability. This thesis aims to contribute to enhancing DL models' robustness to input distribution drifts, i.e., situations where the training and test distributions differ. Notably, input distribution drifts may happen either naturally, induced by missing input data (e.g., due to a sensor fault), or adversarially, i.e., caused by an attacker to steer model behavior as desired. In this thesis, we first provide a technique for making DL models robust to missing inputs by design, inducing resilience even in the case of sequential tasks. Then, we propose a detection framework for adversarial attacks accommodating both many techniques from the literature and novel proposals, such as our new detector exploiting non-linear dimensionality reduction techniques at its core. Finally, by abstracting the analyzed defenses into our framework, we identified common drawbacks, which we propose to overcome with a fast adversarial example detection technique capable of a substantial overhead reduction without sacrificing detector accuracy on either clean data or data under attack.

    Augmenting Recurrent Neural Networks Resiliency by Dropout

    This thesis presents a novel, principled approach to training recurrent neural networks that are robust to missing input features at prediction time. By building on the ensembling properties of Dropout regularization, we propose a methodology, named DropIn, which efficiently trains a neural network model as a committee machine of subnetworks, each capable of predicting with a subset of the original input features. We discuss the application of the DropIn methodology to the most representative recurrent neural models, ranging from the simplest recurrent networks to Reservoir Computing models, and targeting applications characterized by input sources that might be unreliable or prone to intermittent measurements, leading to missingness in input data (e.g., as in pervasive wireless sensor networks and IoT contexts). We provide an experimental assessment using real-world data from ambient assisted living and healthcare application domains, showing how the DropIn methodology maintains predictive performance comparable to that of a model without missing features, even when 20%-50% of the inputs are not available.

    Augmenting Recurrent Neural Networks Resilience by Dropout

    This brief discusses the simple idea that dropout regularization can be used to efficiently induce resilience to missing inputs at prediction time in a generic neural network. We show how the approach can be effective on tasks where imputation strategies often fail, namely, those involving recurrent neural networks and scenarios where whole sequences of input observations are missing. The experimental analysis provides an assessment of the accuracy-resilience tradeoff in multiple recurrent models, including reservoir computing methods, on real-world ambient intelligence and biomedical time series.

    FADER: Fast Adversarial Example Rejection

    Deep neural networks are vulnerable to adversarial examples, i.e., carefully crafted inputs that mislead classification at test time. Recent defenses have been shown to improve adversarial robustness by detecting anomalous deviations from legitimate training samples at different layer representations, a behavior normally exhibited by adversarial attacks. Despite technical differences, all the aforementioned methods share a common backbone structure that we formalize and highlight in this contribution, as it can help in identifying promising research directions and drawbacks of existing methods. The first main contribution of this work is a review of these detection methods in the form of a unifying framework designed to accommodate both existing defenses and newer ones to come. In terms of drawbacks, the aforementioned defenses require comparing input samples against a large number of reference prototypes, possibly at different representation layers, dramatically worsening test-time efficiency. Moreover, such defenses are typically based on ensembling classifiers with heuristic methods, rather than optimizing the whole architecture end-to-end to better perform detection. As a second main contribution, we introduce FADER, a novel technique for speeding up detection-based methods. FADER overcomes the issues above by employing RBF networks as detectors: by fixing the number of required prototypes, the runtime complexity of adversarial example detectors can be controlled. Our experiments show up to a 73× prototype reduction compared to the analyzed detectors on the MNIST dataset, up to 50× on CIFAR10, and up to 82× on ImageNet10, without sacrificing classification accuracy on either clean or adversarial data.
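    The fixed-prototype idea can be sketched as follows; the class below is our own toy construction (not the paper's implementation), using K RBF prototypes and a least-squares readout so that scoring a query costs O(K) regardless of training-set size:

```python
import numpy as np

rng = np.random.default_rng(3)

class RBFDetector:
    """Fixed-budget detector: k prototypes plus a linear readout.
    Runtime per query depends on k, not on the number of training samples."""

    def __init__(self, k=10, gamma=1.0, reg=1e-3):
        self.k, self.gamma, self.reg = k, gamma, reg

    def _phi(self, X):
        """RBF activations of X w.r.t. the k prototypes."""
        d2 = ((X[:, None, :] - self.C[None, :, :]) ** 2).sum(-1)
        return np.exp(-self.gamma * d2)

    def fit(self, X, y):
        # Prototypes: an evenly spaced subset of the training points, as a
        # simple deterministic placeholder (k-means is a common alternative).
        idx = np.linspace(0, len(X) - 1, self.k).astype(int)
        self.C = X[idx]
        Phi = self._phi(X)
        # Ridge-regularised least-squares readout over the RBF features.
        self.w = np.linalg.solve(Phi.T @ Phi + self.reg * np.eye(self.k),
                                 Phi.T @ y)
        return self

    def score(self, X):
        return self._phi(X) @ self.w   # high score = legitimate, low = reject

# Toy data: a "clean" cluster (label 1) and off-distribution points (label 0).
X_clean = 0.5 * rng.normal(size=(200, 8))
X_adv = 0.5 * rng.normal(size=(200, 8)) + 3.0
X = np.vstack([X_clean, X_adv])
y = np.r_[np.ones(200), np.zeros(200)]

det = RBFDetector(k=10, gamma=0.5).fit(X, y)
```

    Because the number of prototypes is fixed up front, the detector's memory and inference cost do not grow with the reference set, which is the source of the prototype reductions reported in the abstract.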
