28 research outputs found

    Idiopathic and acquired pedophilia as two distinct disorders: an insight from neuroimaging

    Pedophilia is a disorder of public concern because of its association with child sexual offense and recidivism. Previous neuroimaging studies of potential brain abnormalities underlying pedophilic behavior, whether idiopathic or acquired (i.e., emerging after brain damage), have led to inconsistent results. This study sought to explore the neural underpinnings of pedophilic behavior and to determine the extent to which brain alterations may be related to distinct psychopathological features in pedophilia. To this aim, we ran a coordinate-based meta-analysis on previously published papers reporting whole-brain analyses, and a lesion network analysis using brain lesions as seeds in a resting-state connectivity analysis. A behavioral profiling approach was applied to link the identified regions with the corresponding psychological processes. While no consistent neuroanatomical alterations were identified in idiopathic pedophilia, the current results indicate that all the lesions causing acquired pedophilia are localized within a shared resting-state network that includes posterior midline structures, the right inferior temporal gyrus, and the bilateral orbitofrontal cortex. These regions are associated with action inhibition and social cognition, abilities that are consistently and severely impaired in acquired pedophiles. This study suggests that idiopathic and acquired pedophilia may be two distinct disorders, in line with their distinctive clinical features, including age of onset, reversibility, and modus operandi. Understanding the neurobiological underpinnings of pedophilic behavior may contribute to a more comprehensive clinical characterization of these individuals, a pivotal step toward the development of more effective therapeutic and rehabilitation strategies.

    Approximated Neighbours MinHash Graph Node Kernel

    No full text
    In this paper, we propose a scalable kernel for nodes in a (huge) graph. In contrast with other state-of-the-art kernels that scale more than quadratically in the number of nodes, our approach scales linearly in the average out-degree and quadratically in the number of nodes (for the Gram matrix computation). The kernel presented in this paper treats neighbourhoods as sets, thus ignoring edge weights. Nevertheless, experiments on real-world datasets show promising results.
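    As a toy illustration of the neighbours-as-sets idea, the sketch below estimates the Jaccard similarity between two nodes' neighbour sets via MinHash signatures; the salted-MD5 hashing scheme, the signature length, and the toy graph are illustrative assumptions, not the paper's actual kernel.

```python
import hashlib

def minhash_signature(items, num_hashes=64):
    """Build a MinHash signature for a set of node neighbours.
    Each of the num_hashes slots keeps the minimum hash value
    seen over all items under an independently salted hash."""
    sig = []
    for seed in range(num_hashes):
        sig.append(min(
            int(hashlib.md5(f"{seed}-{x}".encode()).hexdigest(), 16)
            for x in items))
    return sig

def estimated_jaccard(sig_a, sig_b):
    """The fraction of matching signature slots approximates the
    Jaccard similarity of the underlying neighbour sets."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

# Toy graph: neighbour sets of two nodes u and v.
neigh_u = {1, 2, 3, 4}
neigh_v = {2, 3, 4, 5}
sig_u = minhash_signature(neigh_u)
sig_v = minhash_signature(neigh_v)
# True Jaccard similarity is 3/5 = 0.6; the estimate should be close.
print(round(estimated_jaccard(sig_u, sig_v), 2))
```

    Because each signature has a fixed length, comparing two nodes costs O(num_hashes) regardless of their degrees, which is what makes this kind of approximation attractive on huge graphs.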

    An efficient graph kernel method for non-coding RNA functional prediction

    The importance of RNA protein-coding gene regulation is by now well appreciated. Non-coding RNAs (ncRNAs) are known to regulate gene expression at practically every stage, ranging from chromatin packaging to mRNA translation. However, the functional characterization of specific instances remains a challenging task in genome-scale settings. For this reason, automatic annotation approaches are of interest. Existing computational methods are either efficient but inaccurate, or they offer increased precision but present scalability problems.

    Conditional Constrained Graph Variational Autoencoders for Molecule Design

    No full text
    In recent years, deep generative models for graphs have been used to generate new molecules. These models have produced good results, leading to several proposals in the literature. However, they may have trouble learning some of the complex laws governing the chemical world. In this work, we explore the use of the histogram of atom valences to drive the generation of molecules in such models. We present the Conditional Constrained Graph Variational Autoencoder (CCGVAE), a model that implements this key idea in a state-of-the-art model and shows improved results on several evaluation metrics on two commonly adopted datasets for molecule generation.
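    To make the conditioning signal concrete, the sketch below computes a histogram of atom valences from a bond-order adjacency matrix; the heavy-atom toy molecule and the `max_valence` cutoff are assumptions for illustration, not CCGVAE's actual preprocessing.

```python
from collections import Counter

def valence_histogram(adjacency, max_valence=4):
    """Histogram of atom valences (sum of bond orders per atom),
    usable as a conditioning vector for a generative model.
    adjacency[i][j] holds the bond order between atoms i and j."""
    valences = [sum(row) for row in adjacency]
    counts = Counter(valences)
    return [counts.get(v, 0) for v in range(1, max_valence + 1)]

# Toy heavy-atom graph for an ethanol-like chain C-C-O with
# single bonds only (implicit hydrogens are ignored here).
adj = [
    [0, 1, 0],
    [1, 0, 1],
    [0, 1, 0],
]
print(valence_histogram(adj))  # [2, 1, 0, 0]: two valence-1 atoms, one valence-2
```

    A fixed-length vector like this can be concatenated to the latent code of a conditional generative model, steering it toward molecules with a chosen valence distribution.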

    A Tree-Based Kernel for Graphs

    No full text
    This paper proposes a new tree-based kernel for graphs. Graphs are decomposed into multisets of ordered Directed Acyclic Graphs (DAGs), and a family of kernels is computed by applying tree kernels extended to the DAG domain. We focus our attention on the efficient development of one member of this family. A technique for speeding up the computation is given, together with theoretical bounds and practical evidence of its feasibility. State-of-the-art results on various benchmark datasets prove the effectiveness of our approach.

    Hyper-parameter tuning for graph kernels via multiple kernel learning

    No full text
    Kernelized learning algorithms have seen a steady growth in popularity during the last decades. The procedure to estimate the performance of these kernels in real applications is typically computationally demanding, due to the process of hyper-parameter selection. This is especially true for graph kernels, which are computationally quite expensive. In this paper, we study an approach that replaces the commonly adopted procedure for kernel hyper-parameter selection with a multiple kernel learning procedure that learns a linear combination of kernel matrices obtained by the same kernel with different values of its hyper-parameters. Empirical results on real-world graph datasets show that the proposed methodology is faster than the baseline method when the number of parameter configurations is large, while maintaining comparable and in some cases superior performance.
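    The core idea can be sketched as follows: instead of cross-validating a kernel hyper-parameter, one Gram matrix is built per candidate value and an MKL solver learns a linear combination of them. In the sketch below, the RBF kernel on 1-D points stands in for a graph kernel, and the combination weights are hand-picked placeholders for the ones an MKL solver would learn.

```python
import math

def rbf_gram(xs, gamma):
    """Gram matrix of an RBF kernel on 1-D points, for one gamma."""
    return [[math.exp(-gamma * (a - b) ** 2) for b in xs] for a in xs]

def combine_grams(grams, weights):
    """Linear combination sum_k w_k * K_k of Gram matrices; in the
    MKL setting, the weights would be learned from data."""
    n = len(grams[0])
    return [[sum(w * K[i][j] for w, K in zip(weights, grams))
             for j in range(n)] for i in range(n)]

xs = [0.0, 0.5, 1.0, 3.0]
# One Gram matrix per candidate hyper-parameter value, instead of
# cross-validating gamma separately.
grams = [rbf_gram(xs, g) for g in (0.1, 1.0, 10.0)]
combined = combine_grams(grams, [0.2, 0.5, 0.3])  # placeholder MKL weights
print(combined[0][0])  # diagonal stays 1.0 since the weights sum to 1
```

    The point of the trade-off is that all candidate Gram matrices are computed once, and the (comparatively cheap) weight learning replaces repeated train/validate cycles over the hyper-parameter grid.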

    Multiple graph-kernel learning

    No full text
    Kernels for structures, including graphs, generally suffer from the diagonally dominant Gram matrix issue: the number of sub-structures, or features, shared between two different instances is very small compared to the number shared by an instance with itself. A parametric rule is typically used to reduce the weights of the largest (most complex) sub-structures. The particular rule that is adopted is in fact a strong external bias that may strongly affect the resulting predictive performance. Thus, in principle, the applied rule should be validated in addition to the other hyper-parameters of the kernel. Nevertheless, for the majority of graph kernels proposed in the literature, the parameters of the weighting rule are fixed a priori. The contribution of this paper is two-fold. First, we propose a Multiple Kernel Learning (MKL) approach to learn different weights for different bunches of features, grouped by complexity. Second, we define a notion of kernel complexity, namely Kernel Spectral Complexity, and we show how it relates to the well-known Empirical Rademacher Complexity for a natural class of functions that includes SVMs. The proposed approach is applied to a recently defined graph kernel and evaluated on several real-world datasets. The obtained results show that our approach outperforms the original kernel on all the considered tasks.