A meta-learning approach for training explainable graph neural networks
In this article, we investigate the degree of explainability of graph neural networks (GNNs). Existing explainers work by finding global/local subgraphs to explain a prediction, but they are applied after a GNN has already been trained. Here, we propose a meta-explainer for improving the level of explainability of a GNN directly at training time, by steering the optimization procedure toward minima that allow post hoc explainers to achieve better results, without sacrificing the overall accuracy of the GNN. Our framework (called MATE, MetA-Train to Explain) jointly trains a model to solve the original task, e.g., node classification, and to provide easily processable outputs for downstream algorithms that explain the model's decisions in a human-friendly way. In particular, we meta-train the model's parameters to quickly minimize the error of an instance-level GNNExplainer trained on-the-fly on randomly sampled nodes. The final internal representation relies on a set of features that can be "better" understood by an explanation algorithm, e.g., another instance of GNNExplainer. Our model-agnostic approach can improve the explanations produced for different GNN architectures and can use any instance-based explainer to drive this process. Experiments on synthetic and real-world datasets for node and graph classification show that we can produce models that are consistently easier to explain by different algorithms. Furthermore, this increase in explainability comes at no cost to the accuracy of the model.
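The bilevel idea described above can be sketched in a few lines. In this minimal numpy sketch, all stand-ins are ours, not the paper's: a linear classifier plays the GNN, a learnable soft feature mask plays the on-the-fly GNNExplainer, and the outer loop descends the task loss plus the explainer's post-training loss.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "node classification": a linear model stands in for the GNN.
X = rng.normal(size=(64, 8))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)

def sig(z):
    return 1.0 / (1.0 + np.exp(-z))

def task_loss(w):
    p = sig(X @ w)
    return -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))

def ngrad(f, v, eps=1e-4):
    # central-difference gradient, enough for a toy first-order meta step
    g = np.zeros_like(v)
    for i in range(len(v)):
        e = np.zeros_like(v)
        e[i] = eps
        g[i] = (f(v + e) - f(v - e)) / (2 * eps)
    return g

def mask_loss(w, m):
    # fidelity of masked predictions to full predictions + sparsity pressure
    fid = np.mean((sig((X * sig(m)) @ w) - sig(X @ w)) ** 2)
    return fid + 0.01 * sig(m).mean()

def explainer_loss(w, steps=8, lr=2.0):
    # train a soft feature mask on-the-fly (stand-in for GNNExplainer),
    # return how well it can explain the current model
    m = np.zeros(8)
    for _ in range(steps):
        m -= lr * ngrad(lambda mm: mask_loss(w, mm), m)
    return mask_loss(w, m)

# Outer loop: minimise the task loss PLUS the explainer's post-training
# loss, steering the model toward weights that are easy to explain.
w = rng.normal(scale=0.1, size=8)
for _ in range(40):
    w -= 0.5 * (ngrad(task_loss, w) + 0.2 * ngrad(explainer_loss, w))
```

The real method differentiates through the inner explainer training with automatic differentiation; numerical gradients are used here only to keep the sketch dependency-free.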
SmartFog: Training the Fog for the energy-saving analytics of Smart-Meter data
In this paper, we characterize the main building blocks and numerically verify the classification accuracy and energy performance of SmartFog, a distributed and virtualized networked Fog technological platform supporting Stacked Denoising Auto-Encoder (SDAE)-based anomaly detection in data flows generated by Smart-Meters (SMs). In SmartFog, the various layers of an SDAE are pretrained at different Fog nodes, in order to distribute the overall computational effort and thus save energy. For this purpose, a new Adaptive Elitist Genetic Algorithm (AEGA) is "ad hoc" designed to find the optimized allocation of the SDAE layers to the Fog nodes. Interestingly, the proposed AEGA implements a (novel) mechanism that adaptively tunes its exploration and exploitation capabilities, in order to quickly escape the attraction basins of local minima of the underlying energy objective function and thus speed up convergence towards global minima. As a matter of fact, the main distinguishing feature of the resulting SmartFog paradigm is that it jointly integrates, on a distributed Fog computing platform, the anomaly detection functionality and the minimization of the resulting energy consumption. The reported numerical tests support the effectiveness of the designed technological platform and point out that the attained performance improvements over some state-of-the-art competing solutions are around 5%, 68% and 30% in terms of detection accuracy, execution time and energy consumption, respectively.
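An elitist genetic algorithm with stall-driven adaptive mutation, as the AEGA description suggests, can be sketched as follows. The problem instance, cost model, and all constants below are illustrative placeholders, not the paper's actual formulation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy instance (costs are made up): assign 6 SDAE layers to 3 Fog nodes.
L, F = 6, 3
comp = rng.uniform(1, 5, size=(L, F))        # energy of layer l on node f
link = rng.uniform(0.1, 1.0, size=(F, F))    # transfer cost between nodes
np.fill_diagonal(link, 0.0)

def energy(assign):
    e = comp[np.arange(L), assign].sum()
    # consecutive layers on different nodes pay a communication cost
    e += sum(link[assign[i], assign[i + 1]] for i in range(L - 1))
    return e

def aega(pop_size=40, gens=120, elite=4):
    pop = rng.integers(0, F, size=(pop_size, L))
    best, best_e, stall = None, np.inf, 0
    for _ in range(gens):
        fit = np.array([energy(ind) for ind in pop])
        order = np.argsort(fit)
        if fit[order[0]] < best_e - 1e-12:
            best, best_e, stall = pop[order[0]].copy(), fit[order[0]], 0
        else:
            stall += 1
        # adaptive exploration: raise the mutation rate when the search
        # stalls, to escape the attraction basin of a local minimum
        p_mut = min(0.5, 0.1 * (1 + stall))
        elites = pop[order[:elite]]
        children = []
        while len(children) < pop_size - elite:
            a, b = pop[rng.integers(0, pop_size, 2)]
            cut = rng.integers(1, L)
            child = np.concatenate([a[:cut], b[cut:]])  # one-point crossover
            mut = rng.random(L) < p_mut
            child[mut] = rng.integers(0, F, mut.sum())
            children.append(child)
        pop = np.vstack([elites, np.array(children)])
    return best, best_e

best, best_e = aega()
```

Elitism guarantees the incumbent is never lost, while the stall counter trades exploitation for exploration only when progress stops.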
PHYDI: Initializing parameterized hypercomplex neural networks as identity functions
Neural models based on hypercomplex algebra systems are growing and proliferating across a plethora of applications, ranging from computer vision to natural language processing. Hand in hand with their adoption, parameterized hypercomplex neural networks (PHNNs) are growing in size, and no techniques have been adopted so far to control their convergence at a large scale. In this paper, we study PHNN convergence and propose parameterized hypercomplex identity initialization (PHYDI), a method to improve their convergence at different scales, leading to more robust performance when the number of layers scales up, while also reaching the same performance with fewer iterations. We show the effectiveness of this approach on different benchmarks and with common PHNNs based on ResNet and Transformer architectures. The code is available at https://github.com/ispamm/PHYDI
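The core effect of an identity initialization is easy to demonstrate on ordinary real-valued residual blocks (a simplification of the hypercomplex setting; the block structure and scales below are our choices, not PHYDI's exact scheme): zeroing each block's last projection makes the whole stack the identity map at initialization, so the signal cannot drift regardless of depth.

```python
import numpy as np

rng = np.random.default_rng(0)

def block_forward(x, W1, W2):
    # simple residual block: x + W2 @ relu(W1 @ x)
    return x + W2 @ np.maximum(W1 @ x, 0.0)

d, depth = 16, 32
x = rng.normal(size=d)

# Standard random init: a deep stack drifts far from the input signal.
std_blocks = [(rng.normal(scale=0.3, size=(d, d)),
               rng.normal(scale=0.3, size=(d, d))) for _ in range(depth)]

# Identity-style init (in the spirit of PHYDI): the second projection is
# zero, so every block reduces to the identity at initialization.
idn_blocks = [(rng.normal(scale=0.3, size=(d, d)),
               np.zeros((d, d))) for _ in range(depth)]

def run(blocks):
    h = x.copy()
    for W1, W2 in blocks:
        h = block_forward(h, W1, W2)
    return h

drift_std = np.linalg.norm(run(std_blocks) - x)  # large for deep stacks
drift_idn = np.linalg.norm(run(idn_blocks) - x)  # exactly zero at init
```

During training the zeroed projection receives nonzero gradients immediately, so the blocks move away from the identity as soon as learning starts.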
Compressing deep-quaternion neural networks with targeted regularisation
In recent years, hyper-complex deep networks (such as complex-valued and quaternion-valued neural networks, QVNNs) have received renewed interest in the literature. They find applications in multiple fields, ranging from image reconstruction to 3D audio processing. Similar to their real-valued counterparts, quaternion neural networks require custom regularisation strategies to avoid overfitting. In addition, for many real-world applications and embedded implementations, there is a need to design sufficiently compact networks, with few weights and neurons. However, the problem of regularising and/or sparsifying QVNNs has not been properly addressed in the literature as of now. In this study, the authors show how to address both problems by designing targeted regularisation strategies, which can minimise the number of connections and neurons of the network during training. To this end, they investigate two extensions of ℓ1 and structured regularisation to the quaternion domain. In the authors' experimental evaluation, they show that these tailored strategies significantly outperform classical (real-valued) regularisation approaches, resulting in small networks especially suitable for low-power and real-time applications.
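A structured quaternion penalty can be sketched by grouping the four real components of each quaternion weight, so that a weight is pruned as a whole rather than one component at a time. The storage layout, threshold, and proximal update below are our illustrative choices, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# A quaternion weight matrix stored as four real parts (our layout choice).
# Shape (out, in, 4): components (r, i, j, k) of each quaternion weight.
W = rng.normal(size=(8, 8, 4))
W[2:5] *= 0.05          # some rows carry almost no signal -> prunable

def quat_group_penalty(W):
    # l2 norm over the 4 components, l1 across weights: a quaternion is
    # either kept or removed whole, never one component at a time
    return np.sqrt((W ** 2).sum(-1)).sum()

def prox_step(W, lam):
    """Group soft-threshold acting on whole quaternions."""
    norms = np.sqrt((W ** 2).sum(-1, keepdims=True))
    scale = np.maximum(0.0, 1.0 - lam / np.maximum(norms, 1e-12))
    return W * scale

W_sparse = prox_step(W, lam=0.3)
dead = (np.abs(W_sparse).sum(-1) == 0)   # quaternions pruned whole
```

In training, the proximal step would follow each gradient update, shrinking weak quaternions to exactly zero so the corresponding connections can be removed.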
Combined Sparse Regularization for Nonlinear Adaptive Filters
Nonlinear adaptive filters often exhibit sparse behavior, since not all coefficients are equally useful for modeling a given nonlinearity. Recently, a class of proportionate algorithms has been proposed for nonlinear filters to leverage the sparsity of their coefficients. However, the choice of norm penalty in the cost function may not always be appropriate for the problem at hand. In this paper, we introduce an adaptive combined scheme, based on a block-based approach involving two nonlinear filters with different regularizations, that always achieves performance superior to that of the individual rules. The proposed method is assessed on nonlinear system identification problems, showing its effectiveness in exploiting the online combined regularization.
Comment: This is a corrected version of the paper presented at EUSIPCO 2018 and published on IEEE https://ieeexplore.ieee.org/document/855295
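The convex-combination idea can be illustrated on a linear sparse system identification toy problem (the nonlinear blocks, step sizes, and constants of the actual paper are replaced here by our own simple stand-ins): two filters with different regularizations run in parallel, and a mixing parameter is adapted online from the combined error.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sparse unknown system: only 3 of 16 taps are active.
N, taps = 4000, 16
h = np.zeros(taps)
h[[1, 5, 9]] = [1.0, -0.7, 0.4]
x = rng.normal(size=N + taps)
d = np.convolve(x, h)[taps - 1:taps - 1 + N] + 0.01 * rng.normal(size=N)

mu, rho = 0.01, 1e-4
w1 = np.zeros(taps)                         # rule 1: plain LMS
w2 = np.zeros(taps)                         # rule 2: zero-attracting (l1) LMS
a = 0.0                                     # mixing logit, lambda = sigma(a)
for n in range(N):
    u = x[n:n + taps][::-1]                 # current input regressor
    lam = 1.0 / (1.0 + np.exp(-a))
    y1, y2 = w1 @ u, w2 @ u
    e = d[n] - (lam * y1 + (1 - lam) * y2)  # error of the combination
    w1 += mu * (d[n] - y1) * u
    w2 += mu * (d[n] - y2) * u - rho * np.sign(w2)  # l1 shrinkage term
    # adapt the mixture toward whichever rule is currently better
    a = np.clip(a + 0.1 * e * (y1 - y2) * lam * (1 - lam), -4.0, 4.0)

lam = 1.0 / (1.0 + np.exp(-a))
w = lam * w1 + (1 - lam) * w2               # overall combined filter
err = np.linalg.norm(w - h)
```

Because the mixture is adapted from the combined error, the scheme tracks whichever regularization suits the current signal conditions, which is what lets it match or exceed the better of the two individual rules.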
Digital networks and collaborative processes in dance: toward an ecology of the body, corpomídia
This article is an excerpt from the doctoral thesis "Deslocar para permanecer: implicações políticas das redes digitais nos processos criativos colaborativos" ("Moving in order to remain: political implications of digital networks in collaborative creative processes"), defended in 2016 at PUC-SP. The problem motivating this research is the paradox produced by the promise of democratization that networks brought: they reconfigured the very concepts of democracy, participation, and sharing, yet did not prevent the mythification of horizontal communication as a synonym for the Enlightenment ideal of liberty, equality, and fraternity (KATZ, 2014). The research draws on Corpomídia Theory (KATZ & GREINER) to investigate the role of the body as an ecology of alternative possibilities of existence.
Deep belief network based audio classification for construction sites monitoring
In this paper, we propose a Deep Belief Network (DBN) based approach for the classification of audio signals to improve work activity identification and remote surveillance of construction projects. The aim of the work is to obtain an accurate and flexible tool for consistently executing and managing the unmanned monitoring of construction sites by using distributed acoustic sensors. In this paper, ten classes of construction equipment and tools, frequently and broadly used in construction sites, have been collected and examined to conduct and validate the proposed approach. The input provided to the DBN consists of the concatenation of several statistics computed from a set of spectral features, such as MFCCs and the mel-scaled spectrogram. The proposed architecture, along with the preprocessing and feature extraction steps, has been described in detail, while the effectiveness of the proposed idea has been demonstrated by numerical results evaluated on real-world recordings. The final overall accuracy on the test set is up to 98%, a significant improvement over other state-of-the-art approaches. A practical and real-time application of the presented method has also been proposed, in order to apply the classification scheme to sound data recorded in different environmental scenarios.
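The "concatenation of several statistics of spectral features" can be sketched with plain numpy. The band layout and the particular statistics below are simplified stand-ins for the MFCC/mel-spectrogram pipeline of the paper, chosen only to show how a variable-length clip becomes one fixed-length DBN input vector.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 1-second clip at 16 kHz standing in for a site recording.
sr = 16000
t = np.arange(sr) / sr
clip = np.sin(2 * np.pi * 440 * t) + 0.1 * rng.normal(size=sr)

def frame_spectra(x, win=512, hop=256, n_bands=20):
    # frame the signal, window it, and take magnitude spectra
    frames = np.lib.stride_tricks.sliding_window_view(x, win)[::hop]
    mag = np.abs(np.fft.rfft(frames * np.hanning(win), axis=1))
    # crude fixed bands (a stand-in for a mel filterbank): average
    # contiguous groups of FFT bins
    edges = np.linspace(0, mag.shape[1], n_bands + 1).astype(int)
    bands = np.stack([mag[:, a:b].mean(1)
                      for a, b in zip(edges[:-1], edges[1:])], axis=1)
    return np.log(bands + 1e-9)              # (frames, n_bands)

def clip_features(x):
    F = frame_spectra(x)
    # per-band statistics concatenated into one fixed-length vector,
    # mirroring the "concatenation of several statistics" DBN input
    return np.concatenate([F.mean(0), F.std(0), F.min(0), F.max(0)])

v = clip_features(clip)   # shape (80,): 4 statistics x 20 bands
```

Whatever the clip length, the statistics collapse the time axis, so every recording maps to the same input dimensionality expected by the network.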
Group sparse regularization for deep neural networks
In this paper, we address the challenging task of simultaneously optimizing (i) the weights of a neural network, (ii) the number of neurons for each hidden layer, and (iii) the subset of active input features (i.e., feature selection). While these problems are traditionally dealt with separately, we propose an efficient regularized formulation enabling their simultaneous parallel execution, using standard optimization routines. Specifically, we extend the group Lasso penalty, originally proposed in the linear regression literature, to impose group-level sparsity on the network's connections, where each group is defined as the set of outgoing weights from a unit. Depending on the specific case, the weights can be related to an input variable, to a hidden neuron, or to a bias unit, thus performing all the aforementioned tasks simultaneously in order to obtain a compact network. We carry out an extensive experimental evaluation, in comparison with classical weight decay and Lasso penalties, both on a toy dataset for handwritten digit recognition and on multiple realistic mid-scale classification benchmarks. Comparative results demonstrate the potential of our proposed sparse group Lasso penalty in producing extremely compact networks, with a significantly lower number of input features, at a classification accuracy equal to, or only slightly below, that of standard regularization terms.
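The group definition above, "the set of outgoing weights from a unit", implies that zeroing a group removes the unit itself; for an input unit this is exactly feature selection. A minimal numpy sketch of the penalty and its proximal (soft-threshold) update, with sizes and threshold chosen by us for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# First-layer weights of a toy network: 10 input features, 6 hidden units.
# Group = the row of outgoing weights from one input unit, so zeroing a
# group removes that input feature entirely.
W = rng.normal(size=(10, 6))
W[[3, 7]] *= 0.02           # two features carry almost no signal

def group_lasso(W):
    # l2 norm within each group, l1 norm across groups
    return np.sqrt((W ** 2).sum(1)).sum()

def prox(W, lam):
    """Group soft-threshold: rows with small norm collapse to zero."""
    norms = np.sqrt((W ** 2).sum(1, keepdims=True))
    return W * np.maximum(0.0, 1.0 - lam / np.maximum(norms, 1e-12))

W_new = prox(W, lam=0.5)
kept = np.abs(W_new).sum(1) != 0   # surviving input features
```

The same row-grouping applied to a hidden layer's weight matrix prunes whole neurons instead of input features, which is how one penalty handles tasks (i)-(iii) at once.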
Compliance with international guidelines for chronic inflammatory neuropathies
Rare diseases' management guidelines are produced with the primary aim of improving practice and standards of care for patients, and may represent a useful framework for clinical practice. The EFNS/PNS (European Federation of Neurological Societies/Peripheral Nerve Society) guidelines for CIDP (chronic inflammatory demyelinating polyneuropathy) and MMN (multifocal motor neuropathy) were last published in 2010 (1, 2). Enthusiasm of the audience for whom they are produced, arguably primarily non-sub-specialists, is however largely unexplored. Compliance with these guidelines by neuromuscular and/or peripheral nerve specialists has not been investigated.
- …