
    An adaptive control system to deliver Interactive Virtual Environment content to handheld devices

    Wireless communication advances have enabled emerging video streaming applications on mobile handheld devices. For example, it is possible to display and interact with complex 3D virtual environments on mobile devices that lack sufficient computational and storage capabilities (e.g. smart phones, PDAs) through remote rendering techniques, where a server renders the 3D data and streams the corresponding image flow to the client. However, due to fluctuations in bandwidth characteristics and the limited CPU capabilities of mobile devices, it is extremely challenging to design effective systems for streaming interactive multimedia over wireless networks. This paper presents a novel approach based on a controller that can automatically adjust streaming parameters based on feedback measurements from the client device. Experimental results prove the effectiveness of the proposed solution in coping with bandwidth changes, thus providing high Quality of Service (QoS) in remote visualization.
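
    The controller's design is not detailed in this abstract; a minimal sketch of the general idea, a feedback loop that adapts the encoding bitrate to client-reported latency, with purely hypothetical parameter names and thresholds, might look like the following.

        # Sketch of a feedback controller that adapts remote-rendering stream
        # parameters to client-side feedback. Names, gains, and limits are
        # illustrative, not taken from the paper.
        class StreamController:
            def __init__(self, target_latency_ms=100.0, kp=0.05):
                self.target = target_latency_ms   # desired end-to-end latency
                self.kp = kp                      # proportional gain
                self.bitrate_kbps = 2000.0        # current encoding bitrate

            def update(self, measured_latency_ms):
                # Proportional control: shrink the bitrate when latency exceeds
                # the target, grow it again when there is headroom.
                error = self.target - measured_latency_ms
                self.bitrate_kbps *= (1.0 + self.kp * error / self.target)
                self.bitrate_kbps = max(200.0, min(8000.0, self.bitrate_kbps))
                return self.bitrate_kbps

        # Example: the client reports 150 ms latency, so the bitrate is reduced.
        controller = StreamController()
        new_bitrate = controller.update(measured_latency_ms=150.0)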

    Modeling virtualized application performance from hypervisor counters

    Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2011. Cataloged from PDF version of thesis. Includes bibliographical references (p. 61-64). Managing a virtualized datacenter has grown more challenging, as each virtual machine's service level agreement (SLA) must be satisfied even though the service levels are generally inaccessible to the hypervisor. To aid in VM consolidation and service level assurance, we develop a modeling technique that generates accurate models of service level. Using only hypervisor counters as inputs, we train models to predict application response times and to predict SLA violations. To collect training data, we conduct a simulation phase that stresses the application across many workload levels and records each response time; hypervisor performance counters are collected simultaneously. Afterwards, the data is synchronized and used as training data in ensemble-based genetic programming for symbolic regression. This modeling technique is efficient at dealing with high-dimensional datasets, and it also generates interpretable models. After training models for web servers and virtual desktops, we test generalization across different content. In our experiments, we found that our technique could distill small subsets of important hypervisor counters from over 700 counters. This was tested for both Apache web servers and Windows-based virtual desktop infrastructures. For the web servers, we accurately modeled both the breakdown points and the service levels. Our models could predict service levels with 90.5% accuracy on a test set. On an untrained scenario with completely different contending content, our models predicted service levels with 70% accuracy but predicted SLA violations with 92.7% accuracy. For the virtual desktops, on test scenarios similar to the training scenarios, model accuracy was 97.6%. Our main contribution is demonstrating that a completely data-driven approach to application performance modeling can be successful. In contrast to many other works, our models do not use workload level or response times as inputs, yet they nevertheless predict service level accurately. Our approach also lets the models determine which inputs are important to a particular model's performance, rather than hand-picking a few inputs to train on. by Lawrence L. Chan. M.Eng.
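
    The thesis's ensemble-based genetic programming for symbolic regression is not reproduced here; the sketch below only illustrates the overall pipeline described above (hypervisor counters in, response times out, SLA violation derived from a threshold), using a scikit-learn ensemble regressor as a stand-in and purely synthetic placeholder data.

        # Sketch of the data-driven pipeline: counters as features, measured
        # response times as labels, SLA violation as a derived binary target.
        # RandomForestRegressor stands in for the thesis's genetic programming.
        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        SLA_THRESHOLD_MS = 500.0           # hypothetical response-time SLA

        # X: one row per sampling interval, one column per hypervisor counter.
        # y: application response time measured during the same interval.
        X = np.random.rand(1000, 700)      # placeholder for ~700 counters
        y = np.random.rand(1000) * 1000    # placeholder response times (ms)

        model = RandomForestRegressor(n_estimators=100).fit(X, y)

        # Feature importances suggest which small subset of counters matters.
        top_counters = np.argsort(model.feature_importances_)[-10:]

        # Predicting an SLA violation from counters alone.
        predicted_rt = model.predict(X[:1])
        violation = predicted_rt[0] > SLA_THRESHOLD_MS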

    Discrete-time dynamic modeling for software and services composition as an extension of the Markov chain approach

    Discrete Time Markov Chains (DTMCs) and Continuous Time Markov Chains (CTMCs) are often used to model various types of phenomena, such as the behavior of software products. In that case, Markov chains are widely used to describe the possible time-varying behavior of “self-adaptive” software systems, where the transition from one state to another represents an alternative choice at the software code level, taken according to a certain probability distribution. From a control-theoretical standpoint, some of these probabilities can be interpreted as control signals, while others can only be observed. However, the translation between a DTMC or CTMC model and a corresponding first-principles model that can be used to design a control system is not immediate. This paper investigates a possible solution for translating a CTMC model into a dynamic system, with a focus on the control of computing system components. Notice that DTMC models can be translated as well, providing additional information.
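
    The paper's own translation is not given in this abstract; a minimal sketch of one standard way to obtain a discrete-time dynamic system from a CTMC, by sampling the forward equation dπ/dt = πQ with period T so that π[k+1] = π[k]·exp(QT), could look like the following (the generator matrix and sampling period are illustrative).

        # Sketch: translate a CTMC into a discrete-time dynamic system by
        # sampling its continuous-time evolution with period T.
        import numpy as np
        from scipy.linalg import expm

        Q = np.array([[-0.5,  0.3,  0.2],
                      [ 0.1, -0.4,  0.3],
                      [ 0.2,  0.2, -0.4]])   # CTMC generator (rows sum to 0)
        T = 0.1                               # sampling period

        A = expm(Q * T)    # discrete-time transition matrix: pi[k+1] = pi[k] @ A

        pi = np.array([1.0, 0.0, 0.0])        # start in state 0
        for _ in range(10):                   # simulate the sampled dynamics
            pi = pi @ A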

    Population pharmacokinetic modelling to address the gaps in knowledge of commonly used HIV and TB drugs in children

    The epidemiology of HIV and TB overlaps, particularly in sub-Saharan Africa, and TB infection remains common in HIV-positive children. The combined administration of anti-tubercular and antiretroviral therapies (ART) may lead to drug-drug interactions that potentially need to be addressed by adjusting doses. This thesis assessed the pharmacokinetics of abacavir and ethambutol and evaluated the influence of covariates such as age and concomitant medication on the PK parameters across different studies using nonlinear mixed-effects modelling. The models developed were used to estimate the area under the concentration-time curve (AUC) and maximum concentrations (Cmax) achieved with the currently recommended weight-adjusted doses. A web-based paediatric dosing tool, which is meant to be used as a first step in the design of clinical trials for paediatric dosing, was also developed. The model describing the pharmacokinetics of abacavir found: a) abacavir exposure was 18.4% larger (CI: 7.50-32.2) after the first dose of ART compared to abacavir co-treated with standard lopinavir/ritonavir for over 7 days, possibly indicating that clearance is induced with time on ART; b) malnourished HIV-infected children had much higher exposures, but this effect waned with a half-life of 12.2 (CI: 9.87-16.8) days as children stayed on nutritional rehabilitation and recovered; c) during co-administration of rifampicin-containing antituberculosis treatment and super-boosted lopinavir/ritonavir, abacavir exposure was decreased by 29.4% (CI: 24.3-35.8); d) children receiving efavirenz had 12.1% (CI: 2.57-20.1) increased abacavir clearance compared to standard lopinavir/ritonavir. These effects did not result in abacavir exposures lower or higher than those reported in adults and are not likely to be clinically important. The ethambutol model found lower concentrations than those reported in adults. The predicted ethambutol median (IQR) Cmax was 1.66 (1.21-2.15) mg/L for children on ethambutol with or without ART (excluding super-boosted lopinavir/ritonavir) and 0.882 (0.669-1.28) mg/L for children on ethambutol with super-boosted lopinavir/ritonavir; these are below the lower limit of the recommended Cmax range of 2 mg/L. During co-administration with super-boosted lopinavir, ethambutol exposure was decreased by 32% (CI: 23.8-38.9), likely due to a drug-drug interaction involving absorption, metabolism or elimination. Bioavailability was decreased in children who were administered ethambutol in crushed form, with an estimated decrease of 30.8% at birth and an increase of 9.6% for each year of age up to 3.2 years, at which point bioavailability was similar to that of children taking the EMB full tablet. The developed paediatric dosing tool contains two major sections: a) the ‘generic module’, which uses allometric scaling-based predictions to explore the expected AUC for a generic drug; b) the ‘drug-specific module’, which can simulate entire pharmacokinetic profiles (concentration over time after dose) by using a drug-specific population pharmacokinetic model. In summary, under the current weight-adjusted doses, abacavir exposure remained within the adult recommended levels. Ethambutol dose adjustment would be required in order to achieve adult exposures. A web-based paediatric dosing tool that uses allometric scaling-based predictions as well as drug-specific predictions based on published pharmacokinetic models was successfully developed.
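
    As a hedged illustration of the allometric scaling-based idea behind the ‘generic module’, the sketch below scales an adult clearance to a child's weight using the conventional 0.75 exponent and a 70 kg adult reference, then derives the expected AUC for a weight-adjusted dose; all numbers are illustrative and none are the thesis's estimates.

        # Sketch of allometric scaling of clearance and the resulting AUC
        # prediction for a weight-adjusted dose (illustrative values only).
        def pediatric_auc(dose_mg, weight_kg, adult_cl_l_per_h, adult_weight_kg=70.0):
            # Allometric scaling of clearance with the conventional 0.75 exponent.
            cl_child = adult_cl_l_per_h * (weight_kg / adult_weight_kg) ** 0.75
            # For a dose-proportional drug, AUC = dose / clearance.
            return dose_mg / cl_child          # mg*h/L

        # Example: 300 mg dose in a 12 kg child, assumed adult clearance 40 L/h.
        auc = pediatric_auc(dose_mg=300, weight_kg=12, adult_cl_l_per_h=40.0)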

    The control of fleet management systems' server model

    Our article deals with load control of server systems that can be represented as M/M/1 queuing models. It introduces the results of a state-space-based control of the service structure, which has also been realized in practice, the system model, and the control that guarantees the availability of the system.
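
    For context, the standard M/M/1 relations that such a load controller builds on are sketched below, together with an illustrative admission rule; the rule is a simple stand-in, not the article's state-space-based control.

        # Sketch of M/M/1 quantities a server load controller would monitor.
        def mm1_metrics(arrival_rate, service_rate):
            rho = arrival_rate / service_rate              # server utilization
            if rho >= 1.0:                                 # unstable queue
                return rho, float("inf"), float("inf")
            mean_in_system = rho / (1.0 - rho)             # E[N], jobs in system
            mean_response = 1.0 / (service_rate - arrival_rate)   # E[T]
            return rho, mean_in_system, mean_response

        def admit(arrival_rate, service_rate, max_utilization=0.8):
            # Illustrative rule: throttle load so utilization stays below a
            # safety threshold, keeping the server available.
            rho, _, _ = mm1_metrics(arrival_rate, service_rate)
            return rho < max_utilization

        rho, n, t = mm1_metrics(arrival_rate=8.0, service_rate=10.0)   # rho = 0.8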

    Functional Cohesion of Gene Sets Determined by Latent Semantic Indexing of PubMed Abstracts

    High-throughput genomic technologies enable researchers to identify genes that are co-regulated with respect to specific experimental conditions. Numerous statistical approaches have been developed to identify differentially expressed genes. Because each approach can produce distinct gene sets, it is difficult for biologists to determine which statistical approach yields biologically relevant gene sets and is appropriate for their study. To address this issue, we implemented Latent Semantic Indexing (LSI) to determine the functional coherence of gene sets. An LSI model was built using over 1 million Medline abstracts for over 20,000 mouse and human genes annotated in Entrez Gene. The gene-to-gene LSI-derived similarities were used to calculate a literature cohesion p-value (LPv) for a given gene set using Fisher's exact test. We tested this method against genes in more than 6,000 functional pathways annotated in Gene Ontology (GO) and found that approximately 75% of gene sets in the GO biological process category and 90% of the gene sets in the GO molecular function and cellular component categories were functionally cohesive (LPv<0.05). These results indicate that the LPv methodology is both robust and accurate. Application of this method to previously published microarray datasets demonstrated that LPv can be helpful in selecting the appropriate feature extraction methods. To enable real-time calculation of LPv for mouse or human gene sets, we developed a web tool called Gene-set Cohesion Analysis Tool (GCAT). GCAT can complement other gene set enrichment approaches by determining the overall functional cohesion of data sets, taking into account both explicit and implicit gene interactions reported in the biomedical literature.
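
    A hedged sketch of the two ingredients, an LSI space built by truncated SVD over TF-IDF vectors and a cohesion p-value from Fisher's exact test over gene-to-gene similarities, is shown below; the similarity threshold, contingency-table construction, and synthetic corpus are illustrative and not GCAT's exact procedure.

        # Sketch: LSI over per-gene literature and a literature cohesion p-value.
        import numpy as np
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.decomposition import TruncatedSVD
        from sklearn.metrics.pairwise import cosine_similarity
        from scipy.stats import fisher_exact

        # Synthetic stand-in corpus: one "document" of concatenated abstract
        # text per gene (the real model used over 1 million Medline abstracts).
        rng = np.random.default_rng(0)
        vocab = [f"term{i}" for i in range(200)]
        abstracts = [" ".join(rng.choice(vocab, size=50)) for _ in range(100)]

        tfidf = TfidfVectorizer().fit_transform(abstracts)
        gene_vectors = TruncatedSVD(n_components=20).fit_transform(tfidf)  # LSI space
        sim = cosine_similarity(gene_vectors)          # gene-to-gene similarities

        def cohesion_pvalue(gene_idx, sim, threshold=0.6):
            # Count "cohesive" pairs (similarity above threshold) inside the
            # gene set versus all gene pairs, then test for enrichment.
            in_iu = np.triu_indices(len(gene_idx), k=1)
            all_iu = np.triu_indices(sim.shape[0], k=1)
            in_pairs = sim[np.ix_(gene_idx, gene_idx)][in_iu]
            hits_in, total_in = int((in_pairs > threshold).sum()), in_pairs.size
            hits_all, total_all = int((sim[all_iu] > threshold).sum()), sim[all_iu].size
            table = [[hits_in, total_in - hits_in],
                     [hits_all - hits_in, (total_all - total_in) - (hits_all - hits_in)]]
            return fisher_exact(table, alternative="greater")[1]

        lpv = cohesion_pvalue([0, 3, 7, 9], sim)   # LPv < 0.05 suggests cohesion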

    Energy-Aware Data Management on NUMA Architectures

    The ever-increasing need for more computing and data processing power demands a continuous and rapid growth of power-hungry data center capacities all over the world. As a first study in 2008 revealed, the energy consumption of such data centers is becoming a critical problem, since their power consumption is about to double every 5 years. However, a recently (2016) released follow-up study points out that this threatening trend was dramatically throttled within the past years, due to the increased energy efficiency actions taken by data center operators. Furthermore, the authors of the study emphasize that making and keeping data centers energy-efficient is a continuous task, because more and more computing power is demanded from the same or an even lower energy budget, and because this threatening energy consumption trend will resume as soon as energy efficiency research efforts and their market adoption are reduced. An important class of applications running in data centers are data management systems, which are a fundamental component of nearly every application stack. While those systems were traditionally designed as disk-based databases optimized for keeping disk accesses as low as possible, modern state-of-the-art database systems are main memory-centric and store the entire data pool in main memory, which replaces the disk as the main bottleneck. To scale up such in-memory database systems, non-uniform memory access (NUMA) hardware architectures are employed, which face a decreased bandwidth and an increased latency when accessing remote memory compared to local memory. In this thesis, we investigate energy awareness aspects of large scale-up NUMA systems in the context of in-memory data management systems. To do so, we pick up the idea of a fine-grained data-oriented architecture and improve the concept so that it keeps pace with the increased absolute performance numbers of a pure in-memory DBMS and scales up on large NUMA systems. To achieve this goal, we design and build ERIS, the first scale-up in-memory data management system that is designed from scratch to implement a data-oriented architecture. With the help of the ERIS platform, we explore our novel core concept for energy awareness, which is Energy Awareness by Adaptivity. The concept describes that software, and especially database systems, has to quickly respond to environmental changes (i.e., workload changes) by adapting itself to enter a state of low energy consumption. We present the hierarchically organized Energy-Control Loop (ECL), a reactive control loop that provides two concrete implementations of our Energy Awareness by Adaptivity concept, namely the hardware-centric Resource Adaptivity and the software-centric Storage Adaptivity. Finally, we give an exhaustive evaluation regarding the scalability of ERIS as well as our adaptivity facilities.
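
    A minimal sketch of the Energy Awareness by Adaptivity idea, a reactive loop that scales active resources with observed load, is given below; the hooks read_load and set_active_cores are hypothetical, and the real Energy-Control Loop is hierarchical and considerably more elaborate.

        # Sketch of a reactive control loop: claim cores when utilization is
        # high, release them when it is low, so idle periods are spent in a
        # low-energy state. Thresholds and hooks are illustrative.
        import time

        def energy_control_loop(read_load, set_active_cores, max_cores,
                                high=0.75, low=0.40, period_s=1.0, steps=60):
            cores = max_cores
            for _ in range(steps):
                load = read_load()                    # observed utilization, 0..1
                if load > high and cores < max_cores:
                    cores += 1                        # scale up to protect latency
                elif load < low and cores > 1:
                    cores -= 1                        # scale down to save energy
                set_active_cores(cores)
                time.sleep(period_s)

        # Example with dummy hooks standing in for real monitoring and control.
        energy_control_loop(read_load=lambda: 0.3,
                            set_active_cores=lambda n: None,
                            max_cores=8, steps=5)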