
    An Approach for Fast Fault Detection in Virtual Network

    The diversity of applications in cloud computing and the dynamic nature of environment deployment mean that virtual machines, containers, and distributed software systems often suffer software failures that prevent them from providing external services normally. Whether detection is performed by the cloud management layer or by the distributed application itself, protocol-based detection on the management or control plane of a distributed application takes seconds to discover a fault, protocol-based detection on user interfaces takes hundreds of milliseconds, and most of the time between failure and recovery of a distributed software system is spent detecting the fault. Timely discovery of faults (in virtual machines, containers, and software) is therefore the key to subsequent fault diagnosis, isolation, and recovery. Because virtual machines and containers in cloud infrastructure increasingly connect to virtual network elements (virtual routers or virtual switches) through intelligent virtual network cards, this paper studies a fault detection mechanism for virtual machines, containers, and distributed software based on the message-driven mode of virtual network elements. It exploits the VIRTIO message queue memory shared between the front end and back end of the virtual network card when the virtual network element and the virtual machine or container it monitors reside on the same server: whenever the virtual network element sends packets to the virtual machine or container, it quickly checks whether the message at the head of the previously sent VIRTIO queue has been received and processed. If the message remains unprocessed beyond a configured time threshold, the virtual machine, container, or distributed software is considered to have failed. The proposed method significantly improves fault detection performance for virtual machines, containers, and distributed applications (from the second level to the millisecond level) in scenarios with high volumes of business traffic, enabling faster fault detection for rapid convergence of virtual network traffic, migration of computing nodes, and high availability of distributed applications.
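
    A minimal sketch of the queue-head check described above, assuming a hypothetical shared view of a VIRTIO ring; the sent_idx and used_idx fields, the polling interval, and the threshold are illustrative placeholders, not the paper's implementation:

```python
import time
from dataclasses import dataclass

@dataclass
class VirtioRingView:
    """Hypothetical read-only view of a VIRTIO queue shared between the
    virtual network element (back end) and the guest VM/container (front end)."""
    sent_idx: int = 0   # last descriptor index the back end made available
    used_idx: int = 0   # last descriptor index the front end consumed

def detect_fault(ring: VirtioRingView, threshold_ms: float = 50.0,
                 poll_ms: float = 5.0) -> bool:
    """Return True if the guest fails to consume the queue head within threshold_ms.

    The back end runs this right after enqueuing packets for the guest: if the
    front end's used index never catches up with the sent index, the VM, container,
    or the application draining the queue is assumed to have failed.
    """
    deadline = time.monotonic() + threshold_ms / 1000.0
    while time.monotonic() < deadline:
        if ring.used_idx >= ring.sent_idx:   # head of the sent batch was processed
            return False                     # guest is alive
        time.sleep(poll_ms / 1000.0)
    return True                              # unprocessed beyond threshold: fault

ring = VirtioRingView(sent_idx=4, used_idx=1)   # back end sent 4 descriptors, guest consumed 1
print(detect_fault(ring))                       # True after ~50 ms: guest looks faulty
```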

    Cross-dimensional magnitude interactions arise from memory interference

    Magnitudes from different dimensions (e.g., space and time) interact with each other in perception, but how these interactions occur remains unclear. In four experiments, we investigated whether cross-dimensional magnitude interactions arise from memory interference. In Experiment 1, participants perceived a constant-length line consisting of two segments of complementary lengths, presented for a variable stimulus duration; they then received a cue indicating which of the two segment lengths to later reproduce. Participants first reproduced the stimulus duration and then the cued length. Reproduced durations increased as a function of the cued length if the cue was given before the duration was retrieved for reproduction (i.e., before duration reproduction; Experiment 1) but not if it was given after the duration had already been retrieved from memory (i.e., after the start of duration reproduction; Experiment 2). These findings demonstrate that the space-time interaction arises from memory interference when length and duration information co-exist in working memory. Experiment 3 further demonstrated spatial interference on duration memories from memories of filled lengths (i.e., solid line segments) but not from noisier memories of unfilled lengths (demarcated empty spatial intervals), highlighting the role of memory noise in the space-time interaction. Finally, Experiment 4 showed that time also exerted memory interference on space when space was presented as (relatively noisy) unfilled lengths. Taken together, these findings suggest that cross-dimensional magnitude interactions arise from memory interference, and that the extent and direction of the interaction depend on the relative memory noise of the target and interfering dimensions. We propose a Bayesian model whereby the estimation of a magnitude is based on the integration of the noisily encoded percept of the target magnitude with the prior knowledge that magnitudes co-vary across dimensions (e.g., space and time). We discuss implications for cross-dimensional magnitude interactions in general.
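
    A toy sketch of the kind of Bayesian integration proposed above, assuming Gaussian percept noise and a bivariate Gaussian prior in which duration and length co-vary; all parameter values and function names are illustrative assumptions, not fitted to the experiments:

```python
def estimate_duration(duration_percept, remembered_length,
                      sigma_percept=0.15,       # noise of the duration percept
                      prior_mean=(1.0, 1.0),    # prior means of (duration, length)
                      prior_sd=(0.3, 0.3),
                      rho=0.6):                 # prior co-variation of duration and length
    """Posterior-mean duration estimate under a bivariate Gaussian prior.

    The prior encodes the belief that magnitudes co-vary across dimensions, so a
    longer remembered length pulls the duration estimate upward, reproducing the
    cross-dimensional interference described above.
    """
    mu_d, mu_l = prior_mean
    sd_d, sd_l = prior_sd
    # Condition the prior on the remembered length (standard Gaussian conditioning).
    cond_mean = mu_d + rho * sd_d / sd_l * (remembered_length - mu_l)
    cond_var = (1 - rho**2) * sd_d**2
    # Combine the conditioned prior with the noisy duration percept.
    w = cond_var / (cond_var + sigma_percept**2)
    return w * duration_percept + (1 - w) * cond_mean

# The same percept yields a shorter or longer estimate depending on the cued length:
print(estimate_duration(1.0, remembered_length=0.7))
print(estimate_duration(1.0, remembered_length=1.3))
```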

    The stability and instability of the language control network: a longitudinal resting-state functional magnetic resonance imaging study

    The language control network is vital among language-related networks, as it solves the problem of switching between multiple languages. Researchers have expressed concerns about the instability of the language control network when exposed to external influences (e.g., long-term second language learning). However, some studies have suggested that the language control network is stable. Whether the language control network is stable therefore remains unclear. In the present study, we directly evaluated the stability and instability of the language control network using resting-state functional magnetic resonance imaging (rs-fMRI). We employed two cohorts of Chinese first-year college students: English majors who took second language (L2) acquisition courses at university, and students who did not. Two resting-state fMRI scans were acquired approximately 1 year apart. We found that the language control network was both moderately stable and unstable. We further investigated the morphological coexistence patterns of stability and instability within the language control network. First, we extracted connections representing stability and instability from the entire network. We then evaluated whether the coexistence patterns were modular (stability and instability involve different brain regions) or non-modular (stability and instability involve the same brain regions but have distinct connectivity patterns). We found that stability and instability coexisted in a non-modular pattern, and that the English major group showed a more strongly non-modular coexistence pattern than the non-English major group. These findings provide preliminary evidence for the coexistence of stability and instability in the language control network.
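
    A rough illustration of how stable and unstable connections could be separated across two scans and then checked for a modular versus non-modular coexistence pattern; the correlation-based connectivity, the thresholds, and the simulated data are assumptions for illustration, not the authors' pipeline:

```python
import numpy as np

def connectivity(timeseries):
    """Functional connectivity as pairwise Pearson correlation (regions x time)."""
    return np.corrcoef(timeseries)

def classify_edges(fc_t1, fc_t2, stable_thr=0.1, unstable_thr=0.3):
    """Label each edge as stable or unstable from its change across two sessions."""
    change = np.abs(fc_t2 - fc_t1)
    np.fill_diagonal(change, np.nan)        # ignore self-connections
    return change < stable_thr, change > unstable_thr

def coexistence_pattern(stable, unstable):
    """Modular if stable and unstable edges touch disjoint sets of regions,
    non-modular if the same regions participate in both kinds of edges."""
    stable_nodes = set(np.where(stable.any(axis=1))[0])
    unstable_nodes = set(np.where(unstable.any(axis=1))[0])
    overlap = stable_nodes & unstable_nodes
    return ("non-modular" if overlap else "modular"), overlap

# Two simulated sessions for 10 regions x 200 time points:
rng = np.random.default_rng(0)
scan1, scan2 = rng.standard_normal((10, 200)), rng.standard_normal((10, 200))
stable, unstable = classify_edges(connectivity(scan1), connectivity(scan2))
print(coexistence_pattern(stable, unstable))
```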

    Neoastilbin ameliorates sepsis-induced liver and kidney injury by blocking the TLR4/NF-κB pathway

    Sepsis frequently causes systemic inflammatory response syndrome and multiple organ failure in patients. Neoastilbin (NAS) is a flavonoid that plays vital roles in inflammation. This work aims to investigate the protective effects of NAS against sepsis-induced liver and kidney injury and to elucidate its underlying mechanisms. A mouse model was established by cecal ligation and puncture (CLP). NAS was given to mice by gavage for 7 consecutive days before surgery. Liver and kidney function, oxidative stress, and inflammatory factors in serum or tissues were examined by ELISA or related kits, and the expression of relevant proteins was assessed by Western blot. Hematoxylin and eosin and/or periodic acid-Schiff staining revealed that NAS ameliorated the pathological damage in liver and kidney tissues of CLP-induced mice. NAS improved liver and kidney function, as evidenced by reductions in the elevated serum levels of blood urea nitrogen, creatinine, ALT, and AST in septic mice. TUNEL assays and the expression of Bcl-2 and Bax showed that NAS markedly reduced apoptosis in liver and renal tissues. NAS treatment lowered the levels of myeloperoxidase and malondialdehyde while elevating the superoxide dismutase content in liver and kidney tissues of CLP-induced mice. The levels of inflammatory cytokines (IL-6, TNF-α, and IL-1β) in the serum and both tissues of CLP-injured mice were markedly decreased by NAS. Mechanistically, NAS downregulated TLR4 expression and inhibited NF-κB activation, and overexpression of TLR4 reversed the protective effects of NAS against liver and kidney injury. Collectively, NAS attenuated CLP-induced apoptosis, oxidative stress, inflammation, and dysfunction in the liver and kidney by restraining the TLR4/NF-κB pathway.

    Task Aligned Meta-learning based Augmented Graph for Cold-Start Recommendation

    The cold-start problem is a long-standing challenge in recommender systems: the lack of user-item interactions significantly hurts recommendation quality for new users and items. Recently, meta-learning based methods have attempted to learn globally shared prior knowledge across all users that can be rapidly adapted to new users and items with very few interactions. Despite significant performance improvements, the globally shared parameters may lead to local optima. Moreover, these methods are oblivious to the inherent information and feature interactions of new users and items, which are critical in cold-start scenarios. In this paper, we propose a Task aligned Meta-learning based Augmented Graph (TMAG) approach to cold-start recommendation. Specifically, a fine-grained task-aligned constructor clusters similar users and divides tasks for meta-learning, enabling a consistent optimization direction. In addition, an augmented graph neural network with two graph-enhancement approaches is designed to alleviate data sparsity and capture high-order user-item interactions. We validate our approach on three real-world datasets in various cold-start scenarios, showing the superiority of TMAG over state-of-the-art methods for cold-start recommendation.
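
    A minimal sketch of what a task-aligned constructor could look like, assuming user profile embeddings and k-means clustering; the clustering choice, task sizes, and data layout are illustrative assumptions rather than the TMAG implementation:

```python
import numpy as np
from sklearn.cluster import KMeans

def build_aligned_tasks(user_embeddings, interactions, n_tasks=8,
                        support_size=5, seed=0):
    """Group similar users into clusters and build one meta-learning task per cluster.

    user_embeddings: (n_users, d) array of user profile/attribute embeddings.
    interactions:    dict mapping user id -> list of interacted item ids.
    Each task holds a support set (a few interactions for fast adaptation) and a
    query set (the remaining interactions for evaluating the adapted model), so
    every task optimizes in a consistent direction for a group of similar users.
    """
    labels = KMeans(n_clusters=n_tasks, random_state=seed, n_init=10).fit_predict(user_embeddings)
    tasks = []
    for t in range(n_tasks):
        support, query = [], []
        for u in interactions:
            if labels[u] != t:
                continue
            items = interactions[u]
            support += [(u, i) for i in items[:support_size]]
            query += [(u, i) for i in items[support_size:]]
        tasks.append({"support": support, "query": query})
    return tasks

# Toy usage: 100 users with random embeddings and interaction lists.
rng = np.random.default_rng(0)
emb = rng.standard_normal((100, 16))
inter = {u: list(rng.integers(0, 500, size=10)) for u in range(100)}
tasks = build_aligned_tasks(emb, inter)
print(len(tasks), len(tasks[0]["support"]), len(tasks[0]["query"]))
```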

    Automated Machine Learning for Deep Recommender Systems: A Survey

    Deep recommender systems (DRS) are critical for current commercial online service providers: they address information overload by recommending items tailored to a user's interests and preferences, and they offer unprecedented effectiveness in feature representation and the capacity to model non-linear relationships between users and items. Despite these advances, DRS models, like other deep learning models, employ sophisticated neural network architectures and other vital components that are typically designed and tuned by human experts. This article gives a comprehensive summary of automated machine learning (AutoML) for developing DRS models. We first provide an overview of AutoML for DRS models and the related techniques. We then discuss state-of-the-art AutoML approaches that automate feature selection, feature embeddings, feature interactions, and system design in DRS. Finally, we discuss appealing research directions and summarize the survey.

    Understanding the planning of LLM agents: A survey

    As Large Language Models (LLMs) have shown significant intelligence, leveraging them as the planning module of autonomous agents has attracted increasing attention. This survey provides the first systematic view of LLM-based agent planning, covering recent works that aim to improve planning ability. We provide a taxonomy of existing works on LLM-agent planning, categorized into Task Decomposition, Plan Selection, External Module, Reflection, and Memory. Comprehensive analyses are conducted for each direction, and further challenges for this field of research are discussed.