13 research outputs found

    Predictability of Process Resource Usage: A Measurement-Based Study of UNIX

    Get PDF
    Coordinated Science Laboratory was formerly known as Control Systems Laboratory. AT&T Metropolitan Networks / 1-5-13411; NASA / NAG-1-61

    Usage Analysis of User Files in UNIX

    Get PDF
    Coordinated Science Laboratory was formerly known as Control Systems Laboratory. AT&T Metropolitan Networks Grant; NASA / NAG-1-61

    Clinical Trial Recommendations Using Semantics-Based Inductive Inference and Knowledge Graph Embeddings

    Full text link
    Designing a new clinical trial entails many decisions, such as defining a cohort and setting the study objectives, to name a few, and can therefore benefit from recommendations based on exhaustive mining of past clinical trial records. Here, we propose a novel recommendation methodology based on neural embeddings trained on a first-of-its-kind knowledge graph of clinical trials. We addressed several important research questions in this context, including designing a knowledge graph (KG) for clinical trial data, the effectiveness of various KG embedding (KGE) methods on it, a novel inductive inference method using KGE, and its use in generating recommendations for clinical trial design. We used publicly available data from clinicaltrials.gov for the study. Results show that our recommendation approach achieves relevance scores of 70%-83%, measured as text similarity to actual clinical trial elements, and that the most relevant recommendation can be found near the top of the list. Our study also suggests potential improvements in training KGE using node semantics.
    Comment: 13 pages (w/o bibliography), 4 figures, 6 tables
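
    The abstract above gives no implementation details, so the following is a hedged illustration only: a generic TransE-style KG embedding with a simple embedding-based recommendation step in Python. The triples, entity names, embedding dimension, and the recommend_outcomes helper are all hypothetical and are not taken from the paper or from clinicaltrials.gov.

        # Minimal sketch of KG-embedding-based recommendation (TransE-style).
        # Illustrative only; the paper's actual KG schema, embedding method,
        # and inductive-inference step are not reproduced here. All triples
        # below are invented examples, not real clinicaltrials.gov data.
        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical (head, relation, tail) triples for a tiny clinical-trial KG.
        triples = [
            ("trial:NCT_A", "hasCondition", "cond:diabetes"),
            ("trial:NCT_A", "hasOutcome",   "out:HbA1c_change"),
            ("trial:NCT_B", "hasCondition", "cond:diabetes"),
            ("trial:NCT_B", "hasOutcome",   "out:fasting_glucose"),
            ("trial:NCT_C", "hasCondition", "cond:hypertension"),
            ("trial:NCT_C", "hasOutcome",   "out:systolic_bp"),
        ]

        entities  = sorted({h for h, _, _ in triples} | {t for _, _, t in triples})
        relations = sorted({r for _, r, _ in triples})
        e_idx = {e: i for i, e in enumerate(entities)}
        r_idx = {r: i for i, r in enumerate(relations)}

        dim = 16
        E = rng.normal(scale=0.1, size=(len(entities), dim))   # entity embeddings
        R = rng.normal(scale=0.1, size=(len(relations), dim))  # relation embeddings

        def score(h, r, t):
            # TransE: a plausible triple should satisfy h + r close to t.
            return -np.linalg.norm(E[e_idx[h]] + R[r_idx[r]] - E[e_idx[t]])

        # Margin-based training with randomly corrupted tails.
        lr, margin = 0.05, 1.0
        for epoch in range(200):
            for h, r, t in triples:
                t_neg = entities[rng.integers(len(entities))]
                if t_neg == t:
                    continue
                pos = E[e_idx[h]] + R[r_idx[r]] - E[e_idx[t]]
                neg = E[e_idx[h]] + R[r_idx[r]] - E[e_idx[t_neg]]
                if margin + np.linalg.norm(pos) - np.linalg.norm(neg) > 0:
                    g_pos = pos / (np.linalg.norm(pos) + 1e-9)
                    g_neg = neg / (np.linalg.norm(neg) + 1e-9)
                    E[e_idx[h]]     -= lr * (g_pos - g_neg)
                    R[r_idx[r]]     -= lr * (g_pos - g_neg)
                    E[e_idx[t]]     += lr * g_pos
                    E[e_idx[t_neg]] -= lr * g_neg

        def recommend_outcomes(trial, k=2):
            """Rank candidate outcome entities for a trial by TransE plausibility."""
            candidates = [e for e in entities if e.startswith("out:")]
            return sorted(candidates, key=lambda c: -score(trial, "hasOutcome", c))[:k]

        print(recommend_outcomes("trial:NCT_C"))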

    Reducing the environmental impact of surgery on a global scale: systematic review and co-prioritization with healthcare workers in 132 countries

    Get PDF
    Background: Healthcare cannot achieve net-zero carbon without addressing operating theatres. The aim of this study was to prioritize feasible interventions to reduce the environmental impact of operating theatres.
    Methods: This study adopted a four-phase Delphi consensus co-prioritization methodology. In phase 1, a systematic review of published interventions and a global consultation of perioperative healthcare professionals were used to longlist interventions. In phase 2, iterative thematic analysis consolidated comparable interventions into a shortlist. In phase 3, the shortlist was co-prioritized based on patient and clinician views on acceptability, feasibility, and safety. In phase 4, ranked lists of interventions were presented by their relevance to high-income countries and low–middle-income countries.
    Results: In phase 1, 43 interventions were identified, which had low uptake in practice according to 3042 professionals globally. In phase 2, a shortlist of 15 intervention domains was generated. In phase 3, interventions were deemed acceptable by more than 90 per cent of patients, except for reducing general anaesthesia (84 per cent) and re-sterilization of ‘single-use’ consumables (86 per cent). In phase 4, the top three shortlisted interventions for high-income countries were: introducing recycling; reducing use of anaesthetic gases; and appropriate clinical waste processing. The top three shortlisted interventions for low–middle-income countries were: introducing reusable surgical devices; reducing use of consumables; and reducing the use of general anaesthesia.
    Conclusion: This is a step toward environmentally sustainable operating environments, with actionable interventions applicable to both high-income and low–middle-income countries.
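
    The study itself is a consensus exercise rather than a computation, but as a rough, hypothetical illustration of how a phase-3-style co-prioritization could combine ratings, the Python sketch below ranks a few invented interventions by equally weighted acceptability, feasibility, and safety scores. The Intervention class, the numbers, and the equal weighting are assumptions for illustration only, not the study's scoring rules or data.

        # Hypothetical sketch of a phase-3-style co-prioritization: combine patient
        # and clinician ratings of acceptability, feasibility, and safety into a
        # ranked shortlist. All numbers and weights are invented for illustration;
        # they are not the study's data or scoring rules.
        from dataclasses import dataclass

        @dataclass
        class Intervention:
            name: str
            patient_acceptability: float   # fraction of patients rating it acceptable
            clinician_feasibility: float   # fraction of clinicians rating it feasible
            clinician_safety: float        # fraction of clinicians rating it safe

        candidates = [
            Intervention("introduce recycling",             0.97, 0.90, 0.95),
            Intervention("reduce anaesthetic gas use",      0.92, 0.85, 0.90),
            Intervention("re-sterilize 'single-use' items", 0.86, 0.70, 0.80),
        ]

        def co_priority(i: Intervention) -> float:
            # Equal weighting of the three criteria (an assumption, not the study's rule).
            return (i.patient_acceptability + i.clinician_feasibility + i.clinician_safety) / 3

        ranked = sorted(candidates, key=co_priority, reverse=True)
        for i in ranked:
            flag = "" if i.patient_acceptability >= 0.90 else "  (below 90% patient acceptability)"
            print(f"{co_priority(i):.2f}  {i.name}{flag}")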

    File Usage Analysis and Resource Usage Prediction: A Measurement-Based Study

    No full text
    90 p. Thesis (Ph.D.)--University of Illinois at Urbana-Champaign, 1988.
    This thesis demonstrates a practical methodology for file usage analysis and resource usage prediction using trace data from a production system. A VAX 11/780 system running Berkeley UNIX was instrumented to gather file usage data, in the form of file-related system calls, and resource usage data for each process.
    First, a user-oriented analysis was done using the file usage data collected from the first measurement. The key aspect of this analysis is a characterization of users and files. Two characterization measures are employed: accesses-per-byte and file size. This new approach is shown to distinguish differences in files as well as in users, which can be used in efficient file system design and in creating realistic test workloads for simulations. A multi-stage gamma distribution is shown to closely model the file usage measures. Even though overall file sharing is small, some files belonging to a bulletin board system are accessed by many users, simultaneously and otherwise.
    Next, the file usage data from the second measurement is analyzed using a few simple measures based on the notion of a file reference. The measures used are: fraction referenced, file size, reference time, number of references, and inter-reference time. Neither the users nor the files were characterized in this analysis. It was shown that in most references, files were accessed completely, substantiating the argument for using the accesses-per-byte measure in user-oriented analysis. It was also shown that most file references lasted for a short time, and that inter-reference time was 2 to 3 orders of magnitude larger than reference time.
    Finally, a probabilistic resource usage prediction scheme was developed, using the process resource usage data. Given the identity of the program being run, the scheme predicts the CPU time, file I/O, and memory requirements of a process at the beginning of its life. The scheme uses a state-transition model of a program's resource usage in its past executions for prediction. The states of the model are the resource regions obtained from an off-line cluster analysis of processes run on the system. The proposed method is shown to work on data collected from a VAX 11/780 running 4.3 BSD UNIX. (Abstract shortened with permission of author.)
    U of I Only. Restricted to the U of I community indefinitely during batch ingest of legacy ETD.
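
    As a hedged, simplified reading of the prediction scheme described above (not the thesis's actual algorithm or data), the Python sketch below clusters past per-process resource vectors into "resource regions" and predicts a new execution's CPU, I/O, and memory as a probability-weighted mix of region centroids keyed by program name. The programs, numbers, cluster count, and the kmeans/predict helpers are all invented for illustration.

        # Simplified, hypothetical sketch: off-line clustering of past resource
        # usage into regions, then per-program prediction of the next execution's
        # (cpu, io, mem) from the program's historical distribution over regions.
        import numpy as np
        from collections import defaultdict

        rng = np.random.default_rng(1)

        # Hypothetical history: (program name, [cpu_seconds, io_blocks, mem_kb]).
        history = [("cc",    rng.normal([5.0, 200, 800],   [1.0, 30, 80]))  for _ in range(30)] \
                + [("grep",  rng.normal([0.3, 50, 120],    [0.1, 10, 20]))  for _ in range(30)] \
                + [("troff", rng.normal([12.0, 500, 1500], [2.0, 60, 150])) for _ in range(30)]
        X = np.array([v for _, v in history])

        def kmeans(X, k=3, iters=50):
            """Plain k-means: returns region centroids and each sample's region index."""
            centroids = X[rng.choice(len(X), size=k, replace=False)]
            for _ in range(iters):
                labels = np.argmin(((X[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
                for j in range(k):
                    if np.any(labels == j):
                        centroids[j] = X[labels == j].mean(axis=0)
            return centroids, labels

        centroids, regions = kmeans(X, k=3)

        # For each program, estimate a distribution over resource regions from its
        # past executions (a degenerate "transition model" keyed only on program name).
        region_counts = defaultdict(lambda: np.zeros(len(centroids)))
        for (prog, _), r in zip(history, regions):
            region_counts[prog][r] += 1

        def predict(prog):
            """Expected (cpu, io, mem) for a new execution of `prog`, as a
            probability-weighted mix of region centroids."""
            counts = region_counts[prog]
            probs = counts / counts.sum()
            return probs @ centroids

        print("predicted cpu/io/mem for cc:", np.round(predict("cc"), 1))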

    Data cache management using frequency-based replacement

    No full text

    Glue factors, likelihood computation, and filtering in state space models

    No full text
    Factor graphs of statistical models can be augmented by a glue factor that expresses some additional (initial, final, or otherwise “local”) condition. That applies, in particular, to (otherwise time-invariant) linear Gaussian state space models, which are thus generalized to pulse-like models that are localized anywhere in time. The model likelihood can then be computed by (forward-backward or forward-only) sum-product message passing, which leads to the concept of a likelihood filter. We propose to build (forward-only) likelihood filters from a bank of second-order linear systems. We also observe that such likelihood filters can be cascaded into a new sort of neural network that works naturally with multichannel time signals at multiple time scales.
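
    As a hedged illustration of forward-only likelihood computation in a linear Gaussian state space model, the Python sketch below uses the standard Kalman prediction-error recursion, which is one instance of the forward sum-product message passing mentioned in the abstract; the paper's glue-factor construction and bank of second-order likelihood filters are not reproduced. The model matrices and the log_likelihood helper are arbitrary illustrative choices.

        # Forward-only likelihood of y_1..y_T under a linear Gaussian state space
        # model via the Kalman prediction-error decomposition. Illustrative only.
        import numpy as np

        def log_likelihood(y, A, C, Q, R, m0, P0):
            """log p(y_1..y_T) for x_{t+1} = A x_t + w_t, y_t = C x_t + v_t,
            with w ~ N(0, Q), v ~ N(0, R), and x_1 ~ N(m0, P0)."""
            m, P = m0, P0
            ll = 0.0
            for yt in y:
                # Predicted observation and its innovation covariance.
                S = C @ P @ C.T + R
                innov = yt - C @ m
                ll += -0.5 * (innov.T @ np.linalg.solve(S, innov)
                              + np.log(np.linalg.det(2 * np.pi * S)))
                # Measurement update, then time update (forward message passing).
                K = P @ C.T @ np.linalg.inv(S)
                m = m + K @ innov
                P = P - K @ C @ P
                m = A @ m
                P = A @ P @ A.T + Q
            return float(ll)

        # Illustrative 2-state model observed through a scalar channel.
        A  = np.array([[0.9, 0.2], [0.0, 0.8]])
        C  = np.array([[1.0, 0.0]])
        Q  = 0.01 * np.eye(2)
        R  = np.array([[0.1]])
        m0 = np.zeros(2)
        P0 = np.eye(2)

        rng = np.random.default_rng(2)
        y = [np.array([v]) for v in rng.normal(size=50)]
        print("log-likelihood:", log_likelihood(y, A, C, Q, R, m0, P0))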