
    Link Prediction via Matrix Completion

    Inspired by the practical importance of social, economic, and biological networks, studies of large and complex networks have attracted a surge of attention in recent years. Link prediction is a fundamental problem for understanding the mechanisms by which new links are added to networks. We introduce robust principal component analysis (robust PCA) into link prediction and estimate the missing entries of the adjacency matrix. Our algorithm exploits the sparsity and low-rank structure of the matrix, yet it also performs well when the network is dense, because even a relatively dense real network is sparse in comparison to the complete graph. Extensive experiments on real networks from disparate fields show that when the target network is connected and sufficiently dense, whether weighted or unweighted, our method is very effective, with prediction accuracy considerably improved over many state-of-the-art algorithms.
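    The abstract does not spell out the solver, but robust PCA is commonly computed with an inexact augmented Lagrange multiplier (ALM) scheme that splits the observed matrix into a low-rank part L and a sparse part S. A minimal NumPy sketch of that standard scheme (an illustration, not the authors' implementation):

```python
import numpy as np

def shrink(X, tau):
    # Soft-thresholding: proximal operator of the elementwise L1 norm.
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svd_shrink(X, tau):
    # Singular value thresholding: proximal operator of the nuclear norm.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * shrink(s, tau)) @ Vt

def robust_pca(M, lam=None, tol=1e-7, max_iter=500):
    """Decompose M ~ L + S (L low-rank, S sparse) by inexact ALM."""
    m, n = M.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    norm_M = np.linalg.norm(M)
    mu = 1.25 / np.linalg.norm(M, 2)  # common initialization
    mu_max = mu * 1e7
    S = np.zeros_like(M, dtype=float)
    Y = np.zeros_like(M, dtype=float)
    L = np.zeros_like(M, dtype=float)
    for _ in range(max_iter):
        L = svd_shrink(M - S + Y / mu, 1.0 / mu)
        S = shrink(M - L + Y / mu, lam / mu)
        R = M - L - S                 # primal residual
        Y += mu * R
        mu = min(mu * 1.5, mu_max)
        if np.linalg.norm(R) / norm_M < tol:
            break
    return L, S
```

    For link prediction, unobserved pairs (i, j) would then be ranked by the recovered low-rank entries L[i, j], treating large values as likely missing links.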

    A Dynamical Graph Prior for Relational Inference

    Relational inference aims to identify interactions between parts of a dynamical system from the observed dynamics. Current state-of-the-art methods fit a graph neural network (GNN) on a learnable graph to the dynamics. They use one-step message-passing GNNs -- intuitively the right choice, since the non-locality of multi-step or spectral GNNs may confuse direct and indirect interactions. But the effective interaction graph depends on the sampling rate and is rarely localized to direct neighbors, leading to local minima for the one-step model. In this work, we propose a dynamical graph prior (DYGR) for relational inference. We call it a prior because, contrary to established practice, it constructively uses error amplification in high-degree non-local polynomial filters to generate good gradients for graph learning. To deal with non-uniqueness, DYGR simultaneously fits a "shallow" one-step model with shared graph topology. Experiments show that DYGR reconstructs graphs far more accurately than earlier methods, with remarkable robustness to under-sampling. Since appropriate sampling rates for unknown dynamical systems are not known a priori, this robustness makes DYGR suitable for real applications in scientific machine learning.
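    For intuition, a degree-K polynomial graph filter of the kind the abstract contrasts with one-step message passing can be written as y = Σ_k c_k A^k x: each extra power of the adjacency matrix A extends the receptive field one hop further. A minimal sketch of that general construction (illustrative only, not the DYGR architecture):

```python
import numpy as np

def poly_filter(A, x, coeffs):
    """Apply y = sum_k coeffs[k] * A^k @ x, a multi-step (non-local)
    polynomial graph filter on node signal x over adjacency A."""
    y = np.zeros_like(x, dtype=float)
    z = np.asarray(x, dtype=float)
    for c in coeffs:
        y += c * z   # accumulate the current power's contribution
        z = A @ z    # advance to the next power of A
    return y
```

    With coeffs = [1, 1] this reduces to x + A x, i.e. a node plus its direct neighbors; higher-degree coefficient lists mix in multi-hop (indirect) neighborhoods.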

    Statistical Mechanics of Generalization In Graph Convolution Networks

    Graph neural networks (GNNs) have become the default machine learning model for relational datasets, including protein interaction networks, biological neural networks, and scientific collaboration graphs. We use tools from statistical physics and random matrix theory to precisely characterize generalization in simple graph convolution networks on the contextual stochastic block model. The derived curves are phenomenologically rich: they explain the distinction between learning on homophilic and heterophilic graphs, and they predict double descent, whose existence in GNNs has been questioned by recent work. Our results are the first to accurately explain the behavior not only of a stylized graph learning model but also of complex GNNs on messy real-world datasets. To wit, we use our analytic insights about homophily and heterophily to improve the performance of state-of-the-art graph neural networks on several heterophilic benchmarks by the simple addition of negative self-loop filters.
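    A toy illustration of why a negative self-loop helps on heterophilic graphs: a standard convolution with A + I averages a node with its neighbors and erases label contrast when neighbors tend to disagree, while A − I acts as a high-pass filter that preserves it. The sketch below uses a simple degree-normalized step (an assumption for illustration, not the paper's exact filter bank):

```python
import numpy as np

def conv_step(A, X, self_loop=1.0):
    """One graph-convolution step H = D^{-1} (A + s*I) X, where s is a
    signed self-loop weight (+1: standard low-pass, -1: high-pass)."""
    n = A.shape[0]
    Af = A + self_loop * np.eye(n)
    deg = np.abs(Af).sum(axis=1, keepdims=True)  # normalization
    return (Af @ X) / deg
```

    On a two-node heterophilic graph (connected nodes with opposite features), the +1 self-loop collapses both features to zero, while the −1 self-loop keeps their magnitude, so a downstream classifier can still separate them.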

    Scaling and Alpha-Helix Regulation of Protein Relaxation in a Lipid Bilayer

    Protein conformation and orientation in the lipid membrane play a key role in many cellular processes. Here we use molecular dynamics simulation to investigate the relaxation and C-terminus diffusion of a model helical peptide, beta-amyloid (Aβ), in a lipid membrane. We observed that, after the helical peptide was initially half-embedded in the extracellular leaflet of a phosphatidylcholine (PC) or PC/cholesterol (PC/CHOL) membrane, the C-terminus diffused across the membrane and anchored to PC headgroups of the cytofacial lipid leaflet. In some cases, the membrane insertion domain of the Aβ was observed to partially unfold. Applying a sigmoidal fit to the process, we found that the characteristic velocity of the C-terminus, as it moved to its anchor site, scaled with θu^(−4/3), where θu is the fraction of the original helix lost during the helix-to-coil transition. Comparing this scaling with that of bead-spring models of polymer relaxation suggests that the C-terminus velocity is strongly regulated by the peptide's helical content but independent of the amino acid type. The Aβ was stabilized by the attachment of the positive Lys28 side chain to the negative phosphate of PC or the 3β oxygen of CHOL in the extracellular lipid leaflet, and of the C-terminus to its anchor site in the cytofacial lipid leaflet.
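    The abstract does not give the exact sigmoid form used, but a common choice is the logistic curve, whose maximum slope a·k/4 at the midpoint is a natural stand-in for the characteristic velocity extracted from the fit. A hedged sketch of that workflow (the function form, parameter names, and velocity definition are assumptions for illustration):

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(t, a, k, t0):
    # Logistic curve: amplitude a, steepness k, midpoint t0.
    return a / (1.0 + np.exp(-k * (t - t0)))

def characteristic_velocity(a, k):
    # Maximum slope of the logistic, attained at t = t0.
    return a * k / 4.0

def fit_sigmoid(t, y, p0=(1.0, 1.0, 1.0)):
    # Least-squares fit of the logistic to an observed trajectory.
    popt, _ = curve_fit(sigmoid, t, y, p0=p0)
    return popt
```

    The reported scaling then means the velocity so extracted behaves as v ∝ θu^(−4/3) across simulations with different amounts of helix loss.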

    Molecular Dynamics Simulations Reveal the Protective Role of Cholesterol in β-Amyloid Protein-Induced Membrane Disruptions in Neuronal Membrane Mimics

    Interactions of β-amyloid (Aβ) peptides with neuronal membranes have been associated with the pathogenesis of Alzheimer's disease (AD); however, the molecular details remain unclear. We used atomistic molecular dynamics (MD) simulations to study the interactions of Aβ40 and Aβ42 with model neuronal membranes. The differences between cholesterol-enriched and cholesterol-depleted lipid domains were investigated using model phosphatidylcholine (PC) lipid bilayers with and without 40 mol % cholesterol. A total of 16 independent 200 ns simulation replicates were investigated. The surface area per lipid, bilayer thickness, water permeability barrier, and lipid order parameter, which are sensitive indicators of membrane disruption, were significantly altered by the inserted state of the protein. We conclude that cholesterol protects against Aβ-induced membrane disruption and inhibits β-sheet formation of Aβ on the lipid bilayer. The latter could represent a two-dimensional (2D) seeding template for the formation of toxic oligomeric Aβ in the pathogenesis of AD.
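    Of the disruption indicators listed, the lipid order parameter has a simple standard definition, S_CD = ⟨(3 cos²θ − 1)/2⟩, where θ is the angle between each C–H bond vector and the bilayer normal. A minimal sketch of that textbook formula as it might be computed from trajectory vectors (an illustration, not the authors' analysis code):

```python
import numpy as np

def order_parameter(ch_vectors, normal=(0.0, 0.0, 1.0)):
    """Deuterium order parameter S_CD = <(3 cos^2(theta) - 1) / 2>,
    where theta is the angle between each C-H bond vector and the
    bilayer normal. S_CD = 1 for bonds along the normal, -0.5 for
    bonds perpendicular to it, and 0 for an isotropic distribution."""
    v = np.asarray(ch_vectors, dtype=float)
    n = np.asarray(normal, dtype=float)
    cos_t = (v @ n) / (np.linalg.norm(v, axis=1) * np.linalg.norm(n))
    return float(np.mean((3.0 * cos_t**2 - 1.0) / 2.0))
```

    A drop in |S_CD| for lipids near the inserted peptide is the kind of signature that would register as the order-parameter perturbation described above.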

    Prompt Sapper: LLM-Empowered Software Engineering Infrastructure for AI-Native Services

    Foundation models such as GPT-4 and DALL-E have brought an unprecedented AI "operating system" effect and new forms of human-AI interaction, sparking a wave of innovation in AI-native services, where natural language prompts serve directly as executable "code" (prompt as executable code), eliminating the need for a programming language as an intermediary and opening the door to personal AI. Prompt Sapper has emerged in response, committed to supporting the development of AI-native services through AI chain engineering. It provides a large language model (LLM)-empowered software engineering infrastructure for authoring AI chains through human-AI collaborative intelligence, unleashing the AI innovation potential of every individual and forging a future where everyone can be a master of AI innovation. This article introduces the R&D motivation behind Prompt Sapper, along with its corresponding AI chain engineering methodology and technical practices.