
    A fast restart mechanism for checkpoint/recovery protocols in networked environments

    Checkpoint/recovery has been studied extensively, and various optimization techniques have been proposed to improve it. Despite considerable research effort, however, little work has been done on reducing restart latency. The time spent retrieving and loading the checkpoint image during recovery is non-trivial, especially in networked environments, and with ever-increasing application memory footprints and system failure rates it is becoming more of an issue. In this paper, we present FREM, a Fast REstart Mechanism. It allows a failed process to restart quickly without requiring the entire checkpoint image to be available. By dynamically tracking the process's data accesses after each checkpoint, FREM masks restart latency by overlapping the computation of the resumed process with the retrieval of the rest of its checkpoint image. We have implemented FREM with the BLCR checkpointing tool on Linux systems. Our experiments with the SPEC benchmarks indicate that it reduces restart latency by 61.96% on average in networked environments.
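
    The mechanism lends itself to a short illustration. The following is a minimal sketch, not the authors' BLCR implementation: it assumes a checkpoint object with hypothetical fetch_page()/all_pages() accessors, loads only the tracked working set before resuming, and pulls the remaining pages in a background thread so retrieval overlaps with computation.

        # Hypothetical sketch of FREM-style restart-latency masking; names and
        # storage layout are assumptions, not the authors' BLCR implementation.
        import threading

        def restart(checkpoint, tracked_pages):
            """Resume a process from a partial checkpoint image.

            checkpoint    -- object with fetch_page(page_id) and all_pages()
            tracked_pages -- page ids observed to be accessed right after the
                             last checkpoint (the predicted working set)
            """
            memory = {}

            # 1. Synchronously load only the predicted working set.
            for pid in tracked_pages:
                memory[pid] = checkpoint.fetch_page(pid)

            # 2. Fetch the remaining pages in the background, overlapping the
            #    retrieval with the resumed computation.
            remaining = [p for p in checkpoint.all_pages() if p not in memory]

            def prefetch():
                for pid in remaining:
                    memory.setdefault(pid, checkpoint.fetch_page(pid))

            threading.Thread(target=prefetch, daemon=True).start()

            # 3. Hand control back to the application; an access to a page that
            #    is not yet loaded would block on the corresponding fetch.
            return memory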

    LLM-empowered Chatbots for Psychiatrist and Patient Simulation: Application and Evaluation

    Empowering chatbots in the field of mental health is receiving an increasing amount of attention, yet the development and evaluation of chatbots for psychiatric outpatient scenarios remain largely unexplored. In this work, we explore the potential of ChatGPT to power chatbots for psychiatrist and patient simulation. We collaborate with psychiatrists to identify objectives and iteratively develop the dialogue system to closely align with real-world scenarios. In the evaluation experiments, we recruit real psychiatrists and patients to engage in diagnostic conversations with the chatbots and collect their ratings for assessment. Our findings demonstrate the feasibility of ChatGPT-powered chatbots in psychiatric scenarios and explore the impact of prompt design on chatbot behavior and user experience.
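
    For concreteness, a ChatGPT-powered patient-simulation chatbot might be prompted as sketched below. This is an illustrative example built on the OpenAI chat completions API; the persona text and model name are assumptions, not the prompts or configuration used in the paper.

        # Illustrative prompt design for a simulated psychiatric patient; the
        # persona text and model choice are assumptions, not the paper's prompts.
        from openai import OpenAI

        client = OpenAI()  # reads OPENAI_API_KEY from the environment

        system_prompt = (
            "You are role-playing an outpatient in a psychiatric clinic. "
            "Answer the psychiatrist's questions in character, revealing "
            "symptoms gradually and never breaking role."
        )

        history = [{"role": "system", "content": system_prompt}]

        def patient_reply(psychiatrist_utterance: str) -> str:
            """Append the psychiatrist's turn and return the simulated patient's reply."""
            history.append({"role": "user", "content": psychiatrist_utterance})
            response = client.chat.completions.create(
                model="gpt-3.5-turbo",  # any chat-capable model
                messages=history,
            )
            reply = response.choices[0].message.content
            history.append({"role": "assistant", "content": reply})
            return reply

        print(patient_reply("What brings you in today?"))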

    Evaluation of oral Lanzhou lamb rotavirus vaccine via passive transfusion with CD4+/CD8+ T lymphocytes

    The Lanzhou Lamb-derived Rotavirus (RV) vaccine (LLR) for children is used only in China. Since there have been no reports evaluating LLR, not even data from a phase IV clinical trial, we evaluated LLR by focusing on T cells to investigate whether it can induce the protective functions expected of a vaccine. Four groups of nude mice were transfused intraperitoneally (i.p.) with CD4+ or CD8+ T cells isolated from LLR-immunized (primed) or LLR-unimmunized (naïve) mice. The recipient mice were then challenged with the mouse-origin wild rotavirus EDIM (Epizootic Diarrhea of Infant Mice) by intragastric administration. Serial fecal and serum samples were collected, and viral shedding, serum IgA/IgG, and secretory IgA were assayed. Compared with mice transfused with T lymphocytes from naïve mice, nude mice transfused with CD4+ T lymphocytes from primed mice showed a more rapid increase in fecal and serum IgA and a shorter duration of virus shedding. In contrast, no significant difference in virus clearance was found between mice transfused with CD8+ T lymphocytes from primed and naïve mice. These results clarify the distinct roles of the transfused CD4+ and CD8+ T lymphocytes in rotavirus clearance in nude mice: viral clearance is mediated by CD4+ T lymphocytes, which also help induce LLR-specific immunogenicity. Comparing transfusions of cells from primed and naïve mice shows that LLR induces CD4+ T-lymphocyte memory, a potential index of immunogenicity and protection, whereas CD8+ T lymphocytes remove rotavirus through CTL activity with little memory capacity.

    System log pre-processing to improve failure prediction

    Log preprocessing, a process applied to the raw log before applying a predictive method, is of paramount importance to failure prediction and diagnosis. While existing filtering methods have demonstrated good compression rates, they fail to preserve important failure patterns that are crucial for failure analysis. To address this problem, in this paper we present a log preprocessing method. It consists of three integrated steps: (1) event categorization to uniformly classify system events and identify fatal events; (2) event filtering to remove temporally and spatially redundant records while preserving the failure patterns necessary for failure analysis; and (3) causality-related filtering to combine correlated events through apriori association rule mining. We demonstrate the effectiveness of our preprocessing method using real failure logs collected from the Cray XT4 at ORNL and the Blue Gene/L system at SDSC. Experiments show that our method preserves more failure patterns for failure analysis, thereby improving failure prediction by up to 174%.
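
    The three-step pipeline could look roughly like the following sketch. The record format, fatal-event keywords, and redundancy window are assumptions for illustration, and the association rules in step (3) are taken as given rather than mined here.

        # Hypothetical sketch of the three-step log preprocessing pipeline;
        # record format, categories, and thresholds are assumptions.
        FATAL_KEYWORDS = ("panic", "fatal", "machine check")

        def categorize(record):
            """Step 1: uniformly classify a raw record and flag fatal events."""
            text = record["msg"].lower()
            record["fatal"] = any(k in text for k in FATAL_KEYWORDS)
            return record

        def filter_redundant(records, window=60.0):
            """Step 2: drop temporally/spatially redundant records. Identical
            messages from the same node within `window` seconds are collapsed,
            but fatal records are always preserved for failure analysis."""
            kept, last_seen = [], {}
            for r in sorted(records, key=lambda r: r["time"]):
                key = (r["node"], r["msg"])
                if r["fatal"] or r["time"] - last_seen.get(key, -window - 1) > window:
                    kept.append(r)
                    last_seen[key] = r["time"]
            return kept

        def combine_correlated(records, rules):
            """Step 3: combine events linked by mined association rules
            (antecedent message -> consequent message) into single entries."""
            merged, skip = [], set()
            for i, r in enumerate(records):
                if i in skip:
                    continue
                for j in range(i + 1, len(records)):
                    if (r["msg"], records[j]["msg"]) in rules:
                        skip.add(j)
                merged.append(r)
            return merged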

    A Novel Workload Migration Scheme for Heterogeneous Distributed Computing

    Dynamic partitioning of adaptive applications and migration of excess workload from overloaded to underloaded processors during execution are critical techniques for distributed computing. Distributed systems differ from traditional parallel systems in that they consist of heterogeneous resources connected by shared networks, which prevents existing schemes from benefiting large-scale applications. In particular, the cost of workload migration is significant when excess workload is transferred across heterogeneous distributed platforms. This paper introduces a novel distributed data migration scheme for large-scale adaptive applications. The major contributions of the paper are: (1) a hierarchical data migration scheme that accounts for the heterogeneous and dynamic features of distributed computing environments; and (2) a linear programming algorithm that effectively reduces the overhead of migrating excess workload across heterogeneous distributed platforms. Experimental results show that the proposed migration scheme outperforms commonly used schemes in reducing both communication cost and application execution time.
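
    As a toy illustration of contribution (2), the transfer decision can be cast as a small linear program: ship excess workload from an overloaded node to underloaded nodes so that the total per-unit transfer cost over the heterogeneous links is minimized. The cost model below is an assumption, not the paper's exact formulation; it uses scipy.optimize.linprog.

        # Illustrative LP for moving excess workload from one overloaded node to
        # underloaded nodes over heterogeneous links; the cost model is assumed.
        import numpy as np
        from scipy.optimize import linprog

        excess = 120.0                       # workload units to migrate away
        cost = np.array([1.0, 3.5, 2.0])     # per-unit transfer cost to each target
        cap = np.array([80.0, 60.0, 50.0])   # spare capacity of each target

        # minimize  cost . x   subject to  sum(x) == excess,  0 <= x_j <= cap_j
        res = linprog(
            c=cost,
            A_eq=np.ones((1, len(cost))),
            b_eq=[excess],
            bounds=[(0.0, c) for c in cap],
            method="highs",
        )
        print("amount sent to each target:", res.x)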

    Adaptive Fault Management of Parallel Applications for High-Performance Computing

    Dynamic load balancing of SAMR applications on distributed systems

    Dynamic load balancing (DLB) for parallel systems has been studied extensively; however, DLB for distributed systems is relatively new. To efficiently utilize the computing resources provided by distributed systems, an underlying DLB scheme must address both the heterogeneous and dynamic features of such systems. In this paper, we propose a DLB scheme for Structured Adaptive Mesh Refinement (SAMR) applications on distributed systems. While the proposed scheme can take into consideration (1) the heterogeneity of processors and (2) the heterogeneity and dynamic load of the networks, the focus of this paper is on the latter. The load-balancing process is divided into two phases: global load balancing and local load balancing. We also provide a heuristic method to evaluate the computational gain and redistribution cost of global redistribution. Experiments show that our distributed DLB scheme reduces execution time by 9%-46% compared to a parallel DLB scheme that does not consider the heterogeneous and dynamic features of distributed systems.
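
    A minimal sketch of the gain/cost heuristic behind the global phase appears below: global redistribution is triggered only when the estimated computational gain over the remaining iterations exceeds the estimated cost of shipping the redistributed data. All estimates and parameter names are hypothetical, not the paper's exact model.

        # Hypothetical sketch of the global-vs-local decision in a two-phase DLB
        # scheme: trigger global redistribution only when the estimated
        # computational gain outweighs the data redistribution cost.
        def estimated_gain(loads, speeds, remaining_steps):
            """Gain = (current makespan - balanced makespan) * remaining steps,
            where the balanced makespan assumes load proportional to speed."""
            current = max(l / s for l, s in zip(loads, speeds))
            balanced = sum(loads) / sum(speeds)
            return (current - balanced) * remaining_steps

        def redistribution_cost(moved_bytes, bandwidth, latency):
            """Cost of shipping `moved_bytes` over the (possibly loaded) network."""
            return latency + moved_bytes / bandwidth

        def should_rebalance_globally(loads, speeds, remaining_steps,
                                      moved_bytes, bandwidth, latency):
            return (estimated_gain(loads, speeds, remaining_steps)
                    > redistribution_cost(moved_bytes, bandwidth, latency))

        # Example: four heterogeneous processors, one heavily overloaded.
        print(should_rebalance_globally(
            loads=[40.0, 10.0, 10.0, 10.0], speeds=[1.0, 1.0, 2.0, 2.0],
            remaining_steps=50, moved_bytes=2e8, bandwidth=1e7, latency=0.5))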
