2,834 research outputs found

    Revisiting Robustness in Graph Machine Learning

    Many works show that node-level predictions of Graph Neural Networks (GNNs) are not robust to small, often termed adversarial, changes to the graph structure. However, because manual inspection of a graph is difficult, it is unclear whether the studied perturbations always preserve a core assumption of adversarial examples: that of unchanged semantic content. To address this problem, we introduce a more principled notion of an adversarial graph, which is aware of semantic content change. Using Contextual Stochastic Block Models (CSBMs) and real-world graphs, our results uncover: (i) for a majority of nodes, the prevalent perturbation models include a large fraction of perturbed graphs violating the unchanged-semantics assumption; (ii) surprisingly, all assessed GNNs show over-robustness, that is, robustness beyond the point of semantic change. We find this to be a phenomenon complementary to adversarial examples and show that including the label structure of the training graph in the inference process of GNNs significantly reduces over-robustness, while having a positive effect on test accuracy and adversarial robustness. Theoretically, leveraging our new semantics-aware notion of robustness, we prove that there is no robustness-accuracy tradeoff for inductively classifying a newly added node.
    Comment: Published as a conference paper at ICLR 2023. Preliminary version accepted as an oral at the NeurIPS 2022 TSRML workshop and at the NeurIPS 2022 ML Safety workshop.
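    The CSBM setting above can be illustrated with a small sketch. The sampler, the edge perturbation, and the neighborhood-majority test below are illustrative assumptions; the paper's actual semantics-aware notion is more principled than a simple neighborhood-majority check.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_csbm(n=200, p=0.10, q=0.02, mu=1.0, d=8):
    """Sample a two-class Contextual Stochastic Block Model.
    Edges appear with prob p within a class and q across classes;
    features are Gaussians centered at +/-mu depending on the class."""
    y = rng.integers(0, 2, size=n)
    probs = np.where(y[:, None] == y[None, :], p, q)
    A = (rng.random((n, n)) < probs).astype(int)
    A = np.triu(A, 1); A = A + A.T            # undirected, no self-loops
    X = rng.normal(0, 1, (n, d)) + np.where(y[:, None] == 1, mu, -mu) / np.sqrt(d)
    return A, X, y

def majority_label_flipped(A, y, v, A_pert):
    """Crude semantics check: did the majority class among v's neighbors flip?"""
    def maj(adj):
        nb = np.flatnonzero(adj[v])
        return bool(y[nb].mean() > 0.5) if nb.size else None
    return maj(A) != maj(A_pert)

A, X, y = sample_csbm()
# Perturb node 0: connect it to 5 nodes of the opposite class.
v = 0
others = np.flatnonzero((y != y[v]) & (A[v] == 0))[:5]
A_pert = A.copy()
A_pert[v, others] = A_pert[others, v] = 1
print("semantics changed for node 0:", majority_label_flipped(A, y, v, A_pert))
```

    A perturbation model that only bounds the number of flipped edges would accept this perturbed graph regardless of whether the check above fires, which is the mismatch the abstract points at.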

    Concavity of Eigenvalue Sums and the Spectral Shift Function

    It is well known that the sum of negative (positive) eigenvalues of a finite Hermitian matrix $V$ is concave (convex) with respect to $V$. Using the theory of the spectral shift function, we generalize this property to self-adjoint operators on a separable Hilbert space with an arbitrary spectrum. More precisely, we prove that the spectral shift function integrated with respect to the spectral parameter from $-\infty$ to $\lambda$ (from $\lambda$ to $+\infty$) is concave (convex) with respect to trace-class perturbations. The case of relative trace-class perturbations is also considered.
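    The finite-dimensional fact in the first sentence is easy to check numerically. The sketch below tests midpoint concavity of the sum of negative eigenvalues on random Hermitian matrices; it illustrates only the matrix special case, not the spectral-shift-function generalization.

```python
import numpy as np

rng = np.random.default_rng(1)

def neg_eig_sum(V):
    """Sum of the negative eigenvalues of a Hermitian matrix V.
    Equals min over projections P of tr(PV), a pointwise minimum of
    linear functions of V, hence concave."""
    w = np.linalg.eigvalsh(V)
    return w[w < 0].sum()

def rand_hermitian(n):
    M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (M + M.conj().T) / 2

# Midpoint concavity: f((A+B)/2) >= (f(A)+f(B))/2, up to float tolerance.
for _ in range(100):
    A, B = rand_hermitian(6), rand_hermitian(6)
    assert neg_eig_sum((A + B) / 2) >= (neg_eig_sum(A) + neg_eig_sum(B)) / 2 - 1e-9
print("midpoint concavity held on 100 random Hermitian pairs")
```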

    Topology-Matching Normalizing Flows for Out-of-Distribution Detection in Robot Learning

    To facilitate reliable deployments of autonomous robots in the real world, Out-of-Distribution (OOD) detection capabilities are often required. A powerful approach to OOD detection is based on density estimation with Normalizing Flows (NFs). However, we find that prior work with NFs attempts to match the complex target distribution topologically with naive base distributions, leading to adverse implications. In this work, we circumvent this topological mismatch using an expressive class-conditional base distribution trained with an information-theoretic objective to match the required topology. The proposed method enjoys wide compatibility with existing learned models, without performance degradation and with minimal computational overhead, while enhancing OOD detection capabilities. We demonstrate superior results on density estimation and 2D object detection benchmarks in comparison with extensive baselines. Moreover, we showcase the applicability of the method in a real-robot deployment.
    Comment: Accepted at CoRL202
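    The density-estimation route to OOD detection can be sketched independently of the flow architecture: fit a density model on in-distribution data, calibrate a log-likelihood threshold on a held-out split, and flag low-likelihood inputs. The GaussianDensity class below is a hypothetical stand-in for a trained NF (any model exposing log_prob fits), not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(2)

class GaussianDensity:
    """Stand-in for a trained Normalizing Flow: anything with log_prob works."""
    def fit(self, X):
        self.mu = X.mean(0)
        self.cov = np.cov(X.T) + 1e-6 * np.eye(X.shape[1])
        self.inv = np.linalg.inv(self.cov)
        self.logdet = np.linalg.slogdet(self.cov)[1]
        return self

    def log_prob(self, X):
        d = X - self.mu
        k = X.shape[1]
        return -0.5 * (np.einsum('ij,jk,ik->i', d, self.inv, d)
                       + self.logdet + k * np.log(2 * np.pi))

# In-distribution data and a held-out split for threshold calibration.
X_train = rng.normal(0, 1, (2000, 4))
model = GaussianDensity().fit(X_train)
X_val = rng.normal(0, 1, (500, 4))
tau = np.quantile(model.log_prob(X_val), 0.05)   # ~5% false-positive rate

X_ood = rng.normal(6, 1, (500, 4))               # far-away samples
flagged = (model.log_prob(X_ood) < tau).mean()
print(f"fraction of OOD samples flagged: {flagged:.2f}")
```

    The topological-mismatch issue the abstract describes arises because a unimodal base density like this one cannot assign calibrated likelihoods to a multimodal or disconnected target; the paper's class-conditional base distribution is its remedy.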

    Three Weeks of Detraining Does Not Decrease Muscle Thickness, Strength or Sport Performance in Adolescent Athletes

    International Journal of Exercise Science 13(6): 633-644, 2020. The purpose of this study was to examine the effects of detraining following block (BLOCK) or daily undulating periodized (DUP) resistance training (RT) on hypertrophy, strength, and athletic performance in adolescent athletes. Twenty-one males (age = 16 ± 0.7 years; range 15-18 years) were randomly assigned to one of two 12-week intervention groups (three full-body RT sessions per week): BLOCK (n = 9) or DUP (n = 12). Subsequently, a three-week detraining period was applied. Body mass, fat mass (FM), fat-free mass (FFM), muscle mass, muscle thickness (rectus femoris, vastus lateralis, and triceps brachii), one-repetition maximum squat and bench press, countermovement jump (CMJ), peak power calculated from the CMJ (Ppeak), medicine ball put (MBP) distance, and 36.58 m sprint were recorded before and after RT as well as after detraining. BLOCK and DUP were equally effective for improving athletic performance in young athletes. Both groups displayed significantly (p ≤ 0.05) higher values on all measures after RT except FM, which was unchanged. Only FM increased (p = 0.010; ES = 0.14) and FFM decreased (p = 0.018; ES = -0.18) after detraining. All other measurements were unaffected by the complete cessation of training, and values remained elevated compared to pre-training. Linear regression showed a strong correlation between the percentage change from resistance training and the decrease during detraining for CMJ (R² = 0.472) and MBP (R² = 0.629). BLOCK and DUP RT seem to be equally effective in adolescent athletes for increasing strength, muscle mass, and sport performance. In addition, three weeks of detraining did not affect muscle thickness, strength, or sport performance in adolescent athletes, independent of the periodization model used in the preceding resistance training.

    On the Adversarial Robustness of Graph Contrastive Learning Methods

    Contrastive learning (CL) has emerged as a powerful framework for learning representations of images and text in a self-supervised manner while enhancing model robustness against adversarial attacks. More recently, researchers have extended the principles of contrastive learning to graph-structured data, giving birth to the field of graph contrastive learning (GCL). However, whether GCL methods can deliver the same advantages in adversarial robustness as their counterparts in the image and text domains remains an open question. In this paper, we introduce a comprehensive robustness evaluation protocol tailored to assess the robustness of GCL models. We subject these models to adaptive adversarial attacks targeting the graph structure, specifically in the evasion scenario. We evaluate node and graph classification tasks using diverse real-world datasets and attack strategies. With our work, we aim to offer insights into the robustness of GCL methods and hope to open avenues for potential future research directions.
    Comment: Accepted at the NeurIPS 2023 New Frontiers in Graph Learning Workshop (NeurIPS GLFrontiers 2023).
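    A minimal sketch of the kind of structure-evasion attack such a protocol applies: greedily flip the edge incident to a target node that most decreases its classification margin. The 1-layer linear GCN surrogate is an illustrative simplification, not the paper's adaptive attacks.

```python
import numpy as np

def normalize(A):
    """Symmetric GCN normalization of an adjacency with self-loops added."""
    A_hat = A + np.eye(A.shape[0])
    D = np.diag(1 / np.sqrt(A_hat.sum(1)))
    return D @ A_hat @ D

def margin(A, X, W, y, v):
    """Margin of node v under a 1-layer linear GCN surrogate with weights W."""
    z = (normalize(A) @ X @ W)[v]
    return z[y[v]] - np.max(np.delete(z, y[v]))

def greedy_evasion(A, X, W, y, v, budget=2):
    """Evasion attack: flip, one at a time, the edge at v hurting its margin most."""
    A = A.copy()
    for _ in range(budget):
        best, best_m = None, margin(A, X, W, y, v)
        for u in range(A.shape[0]):
            if u == v:
                continue
            A[v, u] = A[u, v] = 1 - A[v, u]          # tentatively flip
            m = margin(A, X, W, y, v)
            A[v, u] = A[u, v] = 1 - A[v, u]          # undo
            if m < best_m:
                best, best_m = u, m
        if best is None:                              # no flip helps; stop early
            break
        A[v, best] = A[best, v] = 1 - A[v, best]
    return A
```

    Because the attack happens at inference time (evasion), the surrogate's weights stay fixed and only the adjacency changes; by construction the target node's margin never increases.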

    Transformers Meet Directed Graphs

    Transformers were originally proposed as a sequence-to-sequence model for text but have become vital for a wide range of modalities, including images, audio, video, and undirected graphs. However, transformers for directed graphs are a surprisingly underexplored topic, despite their applicability to ubiquitous domains including source code and logic circuits. In this work, we propose two direction- and structure-aware positional encodings for directed graphs: (1) the eigenvectors of the Magnetic Laplacian, a direction-aware generalization of the combinatorial Laplacian; (2) directional random walk encodings. Empirically, we show that the extra directionality information is useful in various downstream tasks, including correctness testing of sorting networks and source code understanding. Together with a data-flow-centric graph construction, our model outperforms the prior state of the art on the Open Graph Benchmark Code2 by 14.7% relative.
    Comment: 29 pages.
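    The Magnetic Laplacian named in (1) can be written down directly. A minimal sketch using the common convention L(q) = D_s - A_s * exp(i*Theta) with Theta = 2*pi*q*(A - A^T); the function name and the potential q = 0.25 are illustrative choices, not the paper's exact setup.

```python
import numpy as np

def magnetic_laplacian(A, q=0.25):
    """Magnetic Laplacian of a directed graph with 0/1 adjacency A (no self-loops).

    A_s is the symmetrized adjacency, D_s its degree matrix, and the phase
    Theta = 2*pi*q*(A - A^T) encodes edge direction. Theta is antisymmetric,
    so L is Hermitian and its eigenvalues are real (and non-negative)."""
    A_s = ((A + A.T) > 0).astype(float)
    Theta = 2 * np.pi * q * (A - A.T)
    D_s = np.diag(A_s.sum(1))
    return D_s - A_s * np.exp(1j * Theta)

# Directed 4-cycle: 0 -> 1 -> 2 -> 3 -> 0
A = np.zeros((4, 4))
for u in range(4):
    A[u, (u + 1) % 4] = 1
L = magnetic_laplacian(A, q=0.25)
evals, evecs = np.linalg.eigh(L)   # Hermitian eigensolver: real spectrum
print(np.round(evals, 6))
```

    The columns of evecs are the direction-aware positional encodings; for an undirected graph (A symmetric) Theta vanishes and L reduces to the ordinary combinatorial Laplacian.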

    Hall Conductance of a Two-Dimensional Electron Gas in Periodic Lattice with Triangular Antidots

    The topic of this contribution is the investigation of quantum states and the quantum Hall effect in an electron gas subjected to the periodic potential of a lateral lattice. The potential is formed by triangular quantum antidots located on the sites of a square lattice. In such a system, the inversion center and the four-fold rotation symmetry are absent. The topological invariants that characterize the different magnetic subbands and their Hall conductances are calculated. It is shown that the details of the antidot geometry are crucial for the Hall conductance quantization rule. The critical values of the lattice parameters defining the shape of the triangular antidots, at which the Hall conductance changes drastically, are determined. We demonstrate that the quantum states and the Hall conductance quantization law for the triangular antidot lattice differ from those for a square lattice with cylindrical antidots. As an example, the Hall conductances of the magnetic subbands for different antidot geometries are calculated for the case when the number of magnetic flux quanta per unit cell is equal to three.
    Comment: 6 pages, 5 figures.