
    Stochastic models for cloud data backups and video streaming on virtual reality headsets


    A queueing-theoretic analysis of the threshold-based exhaustive data-backup scheduling policy

    We analyse the threshold-based exhaustive data-backup scheduling mechanism by means of a queueing-theoretic approach. Data packets that have not yet been backed up are modelled as customers waiting for service (back-up). We obtain the probability generating function of the system content (backlog size) at random slot boundaries in steady state.
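    For orientation, the central object computed here is the steady-state probability generating function (PGF) of the backlog; a generic illustration of such a PGF and of how performance measures follow from it (the paper's actual expression is model-specific) is:

```latex
U(z) = \sum_{n=0}^{\infty} \Pr[U = n]\, z^n, \qquad |z| \le 1,
\qquad \mathrm{E}[U] = U'(1).
```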

    Analysis of the age of data in data backup systems

    Cloud infrastructures are becoming a common platform for storage and workload operations for industries. With the increasing rate of data generation, the cloud storage industry has already grown into a multi-billion dollar industry. This industry offers services with very strict service level agreements (SLAs) to ensure a high Quality of Service (QoS) for its clients. A breach of these SLAs results in a heavy economic loss for the service provider. We study a queueing model of data backup systems with a focus on the age of data. The age of data is roughly defined as the time for which data has not been backed up and is therefore a measure of uncertainty for the user. We precisely define this performance measure and compute the generating function of its distribution. It is critical to ensure that the tail probabilities are small so that the system stays within SLAs with a high probability. Therefore, we also analyze the tail distribution of the age of data by performing dominant singularity analysis of its generating function. Our formulas can help service providers set the system parameters adequately.
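    The tail analysis referred to here is the standard dominant-singularity argument for generating functions. As a generic sketch (not the paper's specific formula): if the PGF $A(z)$ of the age of data has a single dominant pole at $z_0 > 1$, then

```latex
\Pr[A = n] \sim c\, z_0^{-n} \quad (n \to \infty)
\quad\Longrightarrow\quad
\Pr[A > n] \sim \frac{c}{z_0 - 1}\, z_0^{-n},
```

    so tail probabilities decay geometrically at rate $1/z_0$, and verifying an SLA of the form $\Pr[A > n_{\mathrm{SLA}}] \le \varepsilon$ reduces to locating $z_0$ and the constant $c$.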

    Drilling Down into the Discourse Structure with LLMs for Long Document Question Answering

    We address the task of evidence retrieval for long document question answering, which involves locating relevant paragraphs within a document to answer a question. We aim to assess the applicability of large language models (LLMs) to zero-shot long document evidence retrieval, owing to their unprecedented performance across various NLP tasks. However, current LLMs can consume only limited context lengths as input, so providing document chunks as inputs might overlook the global context and miss inter-segment dependencies. Moreover, directly feeding large input sets can incur significant computational costs, particularly when processing the entire document (and potentially incurring monetary expenses with enterprise APIs like OpenAI's GPT variants). To address these challenges, we propose a suite of techniques that exploit the discourse structure commonly found in documents. By utilizing this structure, we create a condensed representation of the document, enabling a more comprehensive understanding and analysis of relationships between its parts. We retain 99.6% of the best zero-shot approach's performance while processing only 26% of the total tokens used by that approach in the information-seeking evidence retrieval setup. We also show how our approach can be combined with a self-ask reasoning agent to achieve the best zero-shot performance on complex multi-hop question answering, just ≈4% short of zero-shot performance using gold evidence.
    Comment: Accepted to the Findings of EMNLP 202
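    As a rough illustration of the idea (the names and the keyword-overlap selection stub below are ours, standing in for the paper's LLM-based selection), the discourse structure lets one show only a condensed per-section view of the document, then fetch full paragraphs from the selected sections:

```python
# Sketch: discourse-structure-guided evidence retrieval.
# "Section", "condense", and "select_sections" are illustrative names,
# not the paper's actual API; an LLM would normally do the selection.
from dataclasses import dataclass

@dataclass
class Section:
    title: str
    paragraphs: list

def condense(sections):
    """One line per section: title plus the lead sentence of each paragraph."""
    return "\n".join(
        f"[{i}] {s.title}: " + " ".join(p.split(". ")[0] for p in s.paragraphs)
        for i, s in enumerate(sections)
    )

def select_sections(question, sections, top_k=2):
    """Stand-in scorer: word overlap between the question and each section.
    In the paper's setting, a zero-shot LLM reads condense(sections)
    and names the relevant sections instead."""
    q = set(question.lower().split())
    scored = sorted(
        (len(q & set((s.title + " " + " ".join(s.paragraphs)).lower().split())), i)
        for i, s in enumerate(sections)
    )
    return [i for _, i in scored[-top_k:]]

def retrieve_evidence(question, sections):
    # Only the chosen sections' full paragraphs go downstream,
    # which is where the token savings come from.
    return [p for i in select_sections(question, sections)
            for p in sections[i].paragraphs]

if __name__ == "__main__":
    doc = [Section("Methods", ["We fine-tuned a transformer. Training took 3 days."]),
           Section("Results", ["Accuracy improved by 4% on the test set."])]
    print(condense(doc))
    print(retrieve_evidence("How much did accuracy improve?", doc, ))
```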

    A clinical study of intestinal stomas: its indications and complications

    Background: Intestinal stoma is an opening for fecal diversion. The purpose of the present study was to identify indications for commonly performed intestinal stomas and to study the complications related to them. Methods: This prospective study was carried out in a surgical unit of Hamidia Hospital, Gandhi Medical College, Bhopal, from January 2012 to December 2012. Data were collected by meticulous history taking (including age, gender, indication, type of stoma, and type of surgery), careful clinical examination, relevant operative findings, and follow-up of the cases. The results were collected, analyzed, and compared with other studies. Results: A total of 100 patients were evaluated; age ranged between 12 and 85 years (50.5 ± 29.01 years) and the male to female ratio was 7:3. Of the 100 patients, 97 were admitted through the emergency department and 3 through the out-patient department. The most common type of stoma made was loop ileostomy (64%), followed by sigmoid colostomy (11%) and transverse loop colostomy (9%). The main indication for stoma formation was enteric perforation (38%), followed by Koch's abdomen (18%). Of the various complications encountered with intestinal stomas, peristomal skin irritation (36%) was the most consistent, followed by laparotomy wound infection (13%). Conclusions: Despite the vast exposure of general surgeons to stoma formation, complications are inevitable. Early detection of complications and their timely management is the keystone.

    Intra peritoneal ascending colon in parastomal hernial sac

    The reported rate of parastomal hernia varies from 5% to 80%. A parastomal hernia forms when the abdominal wall defect is continually stretched by the tangential forces applied along the circumference of the abdominal wall opening. A parastomal hernia containing intraperitoneal ascending colon, caecum, and terminal ileum, together with an ileal perforation, is a rare entity.

    Towards Optimizing the Costs of LLM Usage

    Generative AI, and LLMs in particular, are heavily used nowadays for various document processing tasks such as question answering and summarization. However, different LLMs come with different capabilities for different tasks as well as with different costs, tokenization, and latency. In fact, enterprises are already incurring huge costs of operating or using LLMs for their respective use cases. In this work, we propose optimizing the usage costs of LLMs by estimating their output quality (without actually invoking the LLMs), and then solving an optimization routine for LLM selection to either keep costs under a budget or minimize them, in a quality- and latency-aware manner. We propose a model to predict the output quality of LLMs on document processing tasks like summarization, followed by an LP rounding algorithm to optimize the selection of LLMs. We study the optimization problems trading off quality and cost, both theoretically and empirically. We further propose a sentence simplification model for reducing the number of tokens in a controlled manner. Additionally, we propose several deterministic heuristics for reducing tokens in a quality-aware manner, and study the related optimization problem of applying the heuristics to optimize the quality and cost trade-off. We perform extensive empirical validation of our methods on not only enterprise datasets but also on open-source datasets annotated by us, and show that we perform much better than the closest baselines. Our methods reduce costs by 40%-90% while improving quality by 4%-7%. We will release the annotated open-source datasets to the community for further research and exploration.
    Comment: 8 pages + Appendix, 12 pages total
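    A minimal sketch of the selection step, assuming predicted per-task quality and cost matrices (the numbers, and the simple per-task argmax rounding, are ours; the paper's quality predictor and rounding scheme are more involved):

```python
# Sketch: choose an LLM per task to maximize predicted quality
# under a cost budget, via an LP relaxation plus naive rounding.
import numpy as np
from scipy.optimize import linprog

quality = np.array([[0.90, 0.70],   # predicted quality of task i on LLM j
                    [0.85, 0.80]])  # (illustrative numbers, not real data)
cost = np.array([[5.0, 1.0],        # cost of running task i on LLM j
                 [4.0, 1.5]])
budget = 6.0
n_tasks, n_llms = quality.shape

# LP relaxation: maximize total quality (minimize its negative),
# one unit of assignment per task, total cost under budget, x in [0, 1].
c = -quality.ravel()
A_ub = cost.ravel()[None, :]
A_eq = np.zeros((n_tasks, n_tasks * n_llms))
for i in range(n_tasks):
    A_eq[i, i * n_llms:(i + 1) * n_llms] = 1.0
res = linprog(c, A_ub=A_ub, b_ub=[budget], A_eq=A_eq,
              b_eq=np.ones(n_tasks), bounds=[(0, 1)] * (n_tasks * n_llms))

# Naive rounding: each task takes its largest LP weight.
# A real rounding scheme must re-check the budget afterwards.
x = res.x.reshape(n_tasks, n_llms)
choice = x.argmax(axis=1)
print("assignment:", choice,
      "cost:", cost[np.arange(n_tasks), choice].sum())
```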