98 research outputs found

    An Information-Theoretic Framework for Out-of-Distribution Generalization

    We study Out-of-Distribution (OOD) generalization in machine learning and propose a general framework that provides information-theoretic generalization bounds. Our framework interpolates freely between the Integral Probability Metric (IPM) and f-divergence, which naturally recovers some known results (including Wasserstein- and KL-based bounds) and also yields new generalization bounds. Moreover, we show that our framework admits an optimal transport interpretation. When evaluated on two concrete examples, the proposed bounds either strictly improve upon existing bounds in some cases or recover the best among existing OOD generalization bounds.
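
    As a rough illustration of the kinds of bounds such a framework recovers (a sketch of the classical results, not the paper's own statements; the notation for the source distribution mu, target distribution nu, sub-Gaussian parameter sigma, and Lipschitz constant L is ours), the KL- and Wasserstein-type comparisons take the following forms:

    % Classical divergence-based comparisons between a source distribution mu and
    % a target distribution nu; these are the well-known bounds that KL- and
    % Wasserstein-style OOD analyses recover, written in our own notation.
    \documentclass{article}
    \usepackage{amsmath,amssymb}
    \begin{document}
    If the loss $\ell(h,\cdot)$ is $\sigma$-sub-Gaussian under the source distribution $\mu$,
    the Donsker--Varadhan change of measure gives, for any target distribution $\nu$,
    \[
      \bigl|\mathbb{E}_{z\sim\nu}\,\ell(h,z)-\mathbb{E}_{z\sim\mu}\,\ell(h,z)\bigr|
      \;\le\;\sqrt{2\sigma^{2}\,D_{\mathrm{KL}}(\nu\,\|\,\mu)}.
    \]
    If instead $\ell(h,\cdot)$ is $L$-Lipschitz, Kantorovich--Rubinstein duality yields the
    IPM-type bound
    \[
      \bigl|\mathbb{E}_{z\sim\nu}\,\ell(h,z)-\mathbb{E}_{z\sim\mu}\,\ell(h,z)\bigr|
      \;\le\;L\cdot W_{1}(\mu,\nu).
    \]
    \end{document}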

    Clinical application of minimal invasive arthroscope on patella fracture surgery

    The aim of this research is to evaluate the application of minimally invasive arthroscopy in patella fracture surgery. A total of 100 patients with patella fractures were selected from our hospital and the Second Xiangya Hospital’s Orthopaedic Ward. These patients were divided into an ‘Observation Group’ and a ‘Comparison Group’. The ‘Comparison Group’ was treated with traditional open surgery, whereas the ‘Observation Group’ received arthroscopic surgery. The postsurgical scores of the two groups showed statistically significant differences on the Lysholm Knee Pain Scale (P < 0.05) and the Oswestry Low Back Pain Scale (P < 0.05). By performing arthroscopic surgery on patella fractures, patients’ recovery was enhanced and pain was greatly reduced, which in turn improved patients’ quality of life and demonstrates valuable clinical benefit.
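
    As a minimal sketch of the kind of two-sample significance test behind a "P < 0.05" group comparison such as the Lysholm score difference above (the score arrays below are synthetic placeholders, not the study's data):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # Hypothetical postsurgical Lysholm scores (0-100, higher is better);
    # synthetic placeholders, NOT the study's data.
    observation_group = rng.normal(loc=88, scale=6, size=50)  # arthroscopic surgery
    comparison_group = rng.normal(loc=80, scale=8, size=50)   # traditional open surgery

    # Welch's two-sample t-test (does not assume equal variances).
    t_stat, p_value = stats.ttest_ind(observation_group, comparison_group, equal_var=False)

    print(f"t = {t_stat:.2f}, P = {p_value:.4f}")
    print("significant at the 0.05 level" if p_value < 0.05 else "not significant")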

    SwinGNN: Rethinking Permutation Invariance in Diffusion Models for Graph Generation

    Diffusion models based on permutation-equivariant networks can learn permutation-invariant distributions for graph data. However, in comparison to their non-invariant counterparts, we find that these invariant models encounter greater learning challenges, since 1) their effective target distributions exhibit more modes, and 2) their optimal one-step denoising scores are the score functions of Gaussian mixtures with more components. Motivated by this analysis, we propose a non-invariant diffusion model, called SwinGNN, which employs an efficient edge-to-edge 2-WL message passing network and utilizes shifted-window self-attention inspired by SwinTransformer. Further, through systematic ablations, we identify several critical training and sampling techniques that significantly improve the sample quality of graph generation. Finally, we introduce a simple post-processing trick, i.e., randomly permuting the generated graphs, which provably converts any graph generative model into a permutation-invariant one. Extensive experiments on synthetic and real-world protein and molecule datasets show that SwinGNN achieves state-of-the-art performance. Our code is released at https://github.com/qiyan98/SwinGNN.
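
    The post-processing trick is simple enough to sketch directly: relabel each sampled graph with a uniformly random permutation, which makes the induced output distribution permutation invariant. The function name and array-based graph representation below are our own choices, not taken from the released code.

    import numpy as np

    def random_permute_graph(adj, node_feat=None):
        """Relabel a sampled graph's nodes with a uniformly random permutation."""
        n = adj.shape[0]
        perm = np.random.permutation(n)       # uniform over all n! node orderings
        adj_perm = adj[np.ix_(perm, perm)]    # permute rows and columns together
        if node_feat is None:
            return adj_perm
        return adj_perm, node_feat[perm]      # keep node attributes aligned

    # Usage: permute a sampled 4-node adjacency matrix.
    A = np.array([[0, 1, 0, 0],
                  [1, 0, 1, 1],
                  [0, 1, 0, 0],
                  [0, 1, 0, 0]])
    print(random_permute_graph(A))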

    Joint Generative Modeling of Scene Graphs and Images via Diffusion Models

    In this paper, we present a novel generative task: joint scene graph and image generation. While previous works have explored image generation conditioned on scene graphs or layouts, our task is distinctive and important as it involves generating scene graphs themselves unconditionally from noise, enabling efficient and interpretable control over image generation. The task is challenging, as it requires generating plausible scene graphs with heterogeneous attributes for nodes (objects) and edges (relations among objects), including continuous object bounding boxes and discrete object and relation categories. We introduce a novel diffusion model, DiffuseSG, that jointly models the adjacency matrix along with the heterogeneous node and edge attributes. We explore various types of encodings for the categorical data, relaxing it into a continuous space. With a graph transformer as the denoiser, DiffuseSG successively denoises the scene graph representation in a continuous space and discretizes the final representation to produce the clean scene graph. Additionally, we introduce an IoU regularization term to enhance empirical performance. Our model significantly outperforms existing methods in scene graph generation on the Visual Genome and COCO-Stuff datasets, on both standard and newly introduced metrics that better capture the problem complexity. Moreover, we demonstrate the additional benefits of our model in two downstream applications: 1) excelling in a series of scene graph completion tasks, and 2) improving scene graph detection models by using extra training samples generated from DiffuseSG.
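
    As a hedged sketch of two ingredients mentioned above, the snippet below shows one way to relax discrete node/edge categories into a continuous space and discretize them back with an argmax, together with the IoU of two axis-aligned boxes; the one-hot encoding and the function names are our own illustration, not necessarily DiffuseSG's exact formulation.

    import numpy as np

    def relax_categories(labels, num_classes):
        """Map integer categories to continuous one-hot vectors in [0, 1]."""
        return np.eye(num_classes)[labels]

    def discretize_categories(continuous):
        """Recover integer categories from (possibly noisy) continuous vectors."""
        return continuous.argmax(axis=-1)

    def iou(box_a, box_b):
        """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
        x1, y1 = np.maximum(box_a[:2], box_b[:2])
        x2, y2 = np.minimum(box_a[2:], box_b[2:])
        inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
        area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
        area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
        return inter / (area_a + area_b - inter + 1e-8)

    # Round trip: categories typically survive moderate additive noise.
    labels = np.array([2, 0, 1])
    noisy = relax_categories(labels, num_classes=3) + 0.1 * np.random.randn(3, 3)
    print(discretize_categories(noisy))                          # usually [2 0 1]
    print(iou(np.array([0, 0, 2, 2]), np.array([1, 1, 3, 3])))   # 1/7, about 0.143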
    • …