280 research outputs found

    EDEN: A Plug-in Equivariant Distance Encoding to Beyond the 1-WL Test

    The message-passing scheme is the core of graph representation learning. While most existing message-passing graph neural networks (MPNNs) are permutation-invariant in graph-level representation learning and permutation-equivariant in node- and edge-level representation learning, their expressive power is commonly limited by the 1-Weisfeiler-Lehman (1-WL) graph isomorphism test. Recently proposed expressive graph neural networks (GNNs) with specially designed, complex message-passing mechanisms are impractical. To bridge the gap, we propose a plug-in Equivariant Distance ENcoding (EDEN) for MPNNs. EDEN is derived from a series of interpretable transformations of the graph's distance matrix. We theoretically prove that EDEN is permutation-equivariant at all levels of graph representation learning, and we empirically illustrate that EDEN's expressive power can reach up to the 3-WL test. Extensive experiments on real-world datasets show that combining EDEN with conventional GNNs surpasses recent advanced GNNs.
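    The abstract does not spell out EDEN's transformations, but they all start from the graph's distance matrix. As a minimal illustration of the raw ingredient (not the paper's method), hop distances can be computed with breadth-first search; relabelling the nodes merely permutes the rows and columns of the result, which is why distance-based features are a natural source of permutation-equivariant encodings:

    ```python
    from collections import deque

    def distance_matrix(adj):
        """All-pairs shortest-path (hop) distances via BFS.

        adj: dict mapping node -> list of neighbour nodes.
        Returns D with D[u][v] = hop distance from u to v
        (unreachable pairs are simply absent).
        """
        D = {}
        for src in adj:
            dist = {src: 0}
            queue = deque([src])
            while queue:
                u = queue.popleft()
                for v in adj[u]:
                    if v not in dist:
                        dist[v] = dist[u] + 1
                        queue.append(v)
            D[src] = dist
        return D

    # A 4-cycle: opposite nodes are 2 hops apart, neighbours 1 hop.
    cycle = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
    D = distance_matrix(cycle)
    ```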

    Position-Aware Subgraph Neural Networks with Data-Efficient Learning

    Data-efficient learning on graphs (GEL) is essential in real-world applications. Existing GEL methods focus on learning useful representations for nodes, edges, or entire graphs with "small" labeled data, but data-efficient learning for subgraph prediction has not been explored. The challenges of this problem lie in the following aspects: 1) It is crucial for subgraphs to learn positional features that capture structural information in the base graph in which they exist. Although the existing subgraph neural network method can learn disentangled position encodings, its overall computational complexity is very high. 2) Prevailing graph augmentation methods for GEL, including rule-based, sample-based, adaptive, and automated methods, are not suitable for augmenting subgraphs, because a subgraph contains fewer nodes but richer information such as position, neighborhood, and structure; subgraph augmentation is therefore more susceptible to undesirable perturbations. 3) Only a small number of nodes in the base graph are contained in subgraphs, which leads to a potential "bias" problem: subgraph representation learning is dominated by these "hot" nodes, while the remaining nodes fail to be fully learned, reducing its generalization ability. In this paper, we address the challenges above and propose PADEL, a Position-Aware Data-Efficient Learning framework for subgraph neural networks. Specifically, we propose a novel anchor-free node position encoding method, design a new generative subgraph augmentation method based on a diffused variational subgraph autoencoder, and introduce exploratory and exploitable views for subgraph contrastive learning. Extensive experimental results on three real-world datasets show the superiority of our proposed method over state-of-the-art baselines.
    Comment: 9 pages, 7 figures, accepted by WSDM 2
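    PADEL's anchor-free position encoding is not detailed in the abstract. To see why position in the base graph matters at all, here is a deliberately naive positional feature (not the paper's method): each base-graph node's hop distance to a given subgraph, computed with a multi-source BFS:

    ```python
    from collections import deque

    def position_feature(adj, subgraph_nodes):
        """Hop distance from every reachable base-graph node to a
        subgraph (multi-source BFS). A naive positional feature,
        used here only to illustrate that a subgraph's location in
        the base graph is informative structural context.
        """
        dist = {v: 0 for v in subgraph_nodes}
        queue = deque(subgraph_nodes)
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        return dist

    # Path graph 0-1-2-3 with the subgraph {0}: node 3 is 3 hops away.
    path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
    feat = position_feature(path, [0])
    ```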

    Sparse Group Variable Selection for Gene-Environment Interactions in the Longitudinal Stud

    Recently, regularized variable selection has emerged as a powerful tool to identify and dissect gene-environment (G×E) interactions. Nevertheless, in longitudinal studies with high-dimensional genetic factors, regularization methods for G×E interactions have not been systematically developed. In this package, we provide the implementation of sparse group variable selection, based on both the quadratic inference function (QIF) and the generalized estimating equation (GEE), to accommodate bi-level selection for longitudinal G×E studies with high-dimensional genomic features. Alternative methods conducting only group- or individual-level selection have also been included. The core modules of the package have been developed in C++.
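    The bi-level selection the package describes combines an individual-level lasso penalty with a group-level penalty on each gene's block of coefficients. The QIF/GEE estimating equations are beyond a short sketch, but the penalty term itself can be illustrated as follows (a generic sparse-group-lasso penalty; the package's exact weighting may differ):

    ```python
    import numpy as np

    def sparse_group_penalty(beta, groups, lam1, lam2):
        """Bi-level penalty:
            lam1 * sum_j |beta_j|                      (individual level)
          + lam2 * sum_g sqrt(p_g) * ||beta_g||_2      (group level)
        where groups is a list of index arrays and p_g = len(group g).
        """
        individual = lam1 * np.sum(np.abs(beta))
        group = sum(
            lam2 * np.sqrt(len(idx)) * np.linalg.norm(beta[idx])
            for idx in groups
        )
        return individual + group

    beta = np.array([1.0, -2.0, 0.0, 3.0])
    groups = [np.array([0, 1]), np.array([2, 3])]
    val = sparse_group_penalty(beta, groups, lam1=0.1, lam2=0.5)
    ```

    Setting `lam2 = 0` recovers plain individual-level (lasso) selection, and `lam1 = 0` recovers pure group-level selection, matching the "alternative methods" the package also ships.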

    UNIDEAL: Curriculum Knowledge Distillation Federated Learning

    Federated Learning (FL) has emerged as a promising approach to enable collaborative learning among multiple clients while preserving data privacy. However, cross-domain FL tasks, where clients possess data from different domains or distributions, remain a challenging problem due to the inherent heterogeneity. In this paper, we present UNIDEAL, a novel FL algorithm specifically designed to tackle the challenges of cross-domain scenarios and heterogeneous model architectures. The proposed method introduces Adjustable Teacher-Student Mutual Evaluation Curriculum Learning, which significantly enhances the effectiveness of knowledge distillation in FL settings. We conduct extensive experiments on various datasets, comparing UNIDEAL with state-of-the-art baselines. Our results demonstrate that UNIDEAL achieves superior performance in terms of both model accuracy and communication efficiency. Additionally, we provide a convergence analysis of the algorithm, showing a convergence rate of O(1/T) under non-convex conditions.
    Comment: Submitted to ICASSP 202
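    UNIDEAL's curriculum mechanism is not specified in the abstract, but it builds on the standard knowledge-distillation loss: a KL divergence between temperature-softened teacher and student distributions (Hinton et al.). A minimal sketch of that base loss, which a mutual-evaluation curriculum would then reweight per client and round:

    ```python
    import numpy as np

    def softmax(z, T=1.0):
        """Temperature-scaled softmax with the usual max-shift for
        numerical stability."""
        z = np.asarray(z, dtype=float) / T
        z = z - z.max()
        e = np.exp(z)
        return e / e.sum()

    def distillation_kl(teacher_logits, student_logits, T=2.0):
        """KL(teacher || student) on softened logits, scaled by T^2
        so gradient magnitudes are comparable across temperatures."""
        p = softmax(teacher_logits, T)
        q = softmax(student_logits, T)
        return float(np.sum(p * (np.log(p) - np.log(q)))) * T * T

    # Identical logits give zero loss; disagreement gives a positive loss.
    loss_same = distillation_kl([2.0, 0.5, -1.0], [2.0, 0.5, -1.0])
    loss_diff = distillation_kl([2.0, 0.5, -1.0], [-1.0, 0.5, 2.0])
    ```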

    UI Layout Generation with LLMs Guided by UI Grammar

    The recent advances in Large Language Models (LLMs) have stimulated interest among researchers and industry professionals, particularly in their application to tasks concerning mobile user interfaces (UIs). This position paper investigates the use of LLMs for UI layout generation. Central to our exploration is the introduction of UI grammar, a novel approach we propose for representing the hierarchical structure inherent in UI screens. The aim of this approach is to guide the generative capacities of LLMs more effectively and improve the explainability and controllability of the process. Initial experiments conducted with GPT-4 showed the promising capability of LLMs to produce high-quality user interfaces via in-context learning. Furthermore, our preliminary comparative study suggested the potential of the grammar-based approach in improving the quality of generative results in specific aspects.
    Comment: ICML 2023 Workshop on AI and HC
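    The paper's actual grammar is not reproduced in the abstract. To make the idea concrete, a toy context-free grammar over UI element types can be expanded into a layout tree; constraining an LLM to emit only derivations of such a grammar is what makes the output structure controllable (the element names below are invented for illustration):

    ```python
    # Toy CFG: each production expands a container into an ordered
    # list of children; symbols with no production are leaf widgets.
    GRAMMAR = {
        "Screen":  [["Toolbar", "Content"]],
        "Toolbar": [["Button", "Button"]],
        "Content": [["List"], ["Text", "Button"]],
    }

    def expand(symbol, choose=lambda rules: rules[0]):
        """Recursively expand a symbol into a nested layout tree.
        `choose` picks among alternative productions; an LLM-guided
        generator would make this choice, here we take the first rule.
        """
        rules = GRAMMAR.get(symbol)
        if rules is None:
            return symbol  # leaf widget
        return {symbol: [expand(child, choose) for child in choose(rules)]}

    layout = expand("Screen")
    ```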

    From Awareness to Action: Exploring End-User Empowerment Interventions for Dark Patterns in UX

    The study of UX dark patterns, i.e., UI designs that seek to manipulate user behaviors, often for the benefit of online services, has drawn significant attention in the CHI and CSCW communities in recent years. To complement previous studies that address dark patterns from (1) the designer's perspective, through education and advocacy for ethical designs, and (2) the policymaker's perspective, through new regulations, we propose an end-user-empowerment intervention approach that helps users (1) raise their awareness of dark patterns and understand the underlying design intents, and (2) take action to counter the effects of dark patterns through web augmentation. Through a two-phase co-design study, comprising 5 co-design workshops (N=12) and a 2-week technology probe study (N=15), we report findings on users' needs, preferences, and challenges in handling dark patterns, and we investigate users' feedback and reactions when their awareness of, and action on, dark patterns are empowered in a realistic in-situ setting.
    Comment: Conditionally Accepted at CSCW 202

    Spraying exogenous hormones alleviate impact of weak-light on yield by improving leaf carbon and nitrogen metabolism in fresh waxy maize

    Insufficient light during the growth period has become one of the main factors restricting maize yield under global climate change. Applying exogenous hormones is a feasible measure for alleviating the effects of abiotic stresses on crop productivity. In this study, a field trial was conducted to investigate the effects of spraying exogenous hormones on yield, dry matter (DM) and nitrogen (N) accumulation, and leaf carbon and N metabolism of fresh waxy maize under weak-light stress in 2021 and 2022. Five treatments were set up using the two hybrids suyunuo5 (SYN5) and jingkenuo2000 (JKN2000): natural light (CK), weak light after pollination (Z), and spraying water (ZP1), exogenous Phytase Q9 (ZP2), or 6-benzyladenine (ZP3) under weak light after pollination. Results showed that weak-light stress significantly reduced the average fresh ear yield (49.8%), fresh grain yield (47.9%), and DM (53.3%) and N (59.9%) accumulation, and increased grain moisture content. The net photosynthetic rate (Pn) and transpiration rate (Tr) of the ear leaf after pollination decreased under Z. Furthermore, weak light decreased the activities of RuBPCase, PEPCase, nitrate reductase (NR), glutamine synthetase (GS), glutamate synthase (GOGAT), superoxide dismutase (SOD), catalase (CAT) and peroxidase (POD) in ear leaves, and increased malondialdehyde (MDA) accumulation; the decrease was greater in JKN2000. In contrast, the ZP2 and ZP3 treatments increased fresh ear yield (17.8%, 25.3%), fresh grain yield (17.2%, 29.5%), and DM (35.8%, 44.6%) and N (42.5%, 52.4%) accumulation, and decreased grain moisture content compared with Z. Pn and Tr increased under ZP2 and ZP3. Moreover, the ZP2 and ZP3 treatments improved the activities of RuBPCase, PEPCase, NR, GS, GOGAT, SOD, CAT and POD in ear leaves, and decreased MDA content during the grain-filling stage. The results also showed that the mitigative effect of ZP3 was greater than that of ZP2, and that the improvement was more pronounced in JKN2000.

    Efficient Deformable ConvNets: Rethinking Dynamic and Sparse Operator for Vision Applications

    We introduce Deformable Convolution v4 (DCNv4), a highly efficient and effective operator designed for a broad spectrum of vision applications. DCNv4 addresses the limitations of its predecessor, DCNv3, with two key enhancements: 1. removing softmax normalization in spatial aggregation to enhance its dynamic property and expressive power, and 2. optimizing memory access to minimize redundant operations for speedup. These improvements result in significantly faster convergence compared to DCNv3 and a substantial increase in processing speed, with DCNv4 achieving more than three times the forward speed. DCNv4 demonstrates exceptional performance across various tasks, including image classification, instance and semantic segmentation, and, notably, image generation. When integrated into generative models such as the U-Net in the latent diffusion model, DCNv4 outperforms its baseline, underscoring its potential to enhance generative models. In practical applications, replacing DCNv3 with DCNv4 in the InternImage model to create FlashInternImage results in up to an 80% speed increase and further performance improvements without other modifications. The advancements in speed and efficiency of DCNv4, combined with its robust performance across diverse vision tasks, show its potential as a foundational building block for future vision models.
    Comment: Tech report; Code: https://github.com/OpenGVLab/DCNv
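    The first enhancement, dropping softmax normalization, is easy to see in isolation. With softmax, the per-point aggregation weights form a convex combination, so the output is bounded by the sampled values; without it, the weights have unbounded dynamic range. A one-dimensional sketch (not the DCNv4 kernel itself, which also involves learned offsets and bilinear sampling):

    ```python
    import numpy as np

    def aggregate(values, weights, normalize):
        """Weighted aggregation over K sampled points.
        normalize=True: softmax the weights first (DCNv3-style,
        output is a convex combination, bounded by max(values)).
        normalize=False: use raw weights (DCNv4-style, the output
        can exceed any sampled value)."""
        w = np.asarray(weights, dtype=float)
        if normalize:
            e = np.exp(w - w.max())
            w = e / e.sum()
        return float(np.dot(w, np.asarray(values, dtype=float)))

    vals = [1.0, 2.0, 4.0]
    y_softmax = aggregate(vals, [0.0, 0.0, 3.0], normalize=True)
    y_free = aggregate(vals, [0.5, 0.5, 2.0], normalize=False)  # 9.5 > max(vals)
    ```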