    The Imperative of Synthetic Biology: A Proposed National Research Initiative

    A 2.5-page report outlining why the United States should launch a strategic national research initiative in synthetic biology.

    D-SPACE4Cloud: A Design Tool for Big Data Applications

    Recent years have seen a steep rise in data generation worldwide, together with the development and widespread adoption of several software projects targeting the Big Data paradigm. Many companies currently engage in Big Data analytics as part of their core business activities; nonetheless, there are no tools and techniques to support the design of the underlying hardware configuration backing such systems. In particular, this report focuses on Cloud-deployed clusters, which represent a cost-effective alternative to on-premises installations. We propose a novel tool implementing a battery of optimization and prediction techniques, integrated so as to efficiently assess several alternative resource configurations and determine the minimum-cost cluster deployment satisfying QoS constraints. Further, an experimental campaign conducted on real systems shows the validity and relevance of the proposed method.
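
    For illustration, a minimal sketch of the kind of search such a tool performs: enumerate candidate cloud configurations, predict job duration with a performance model, and keep the cheapest configuration that meets the QoS deadline. All names, the candidate list, and the linear speed model below are assumptions made for the sketch, not D-SPACE4Cloud's actual interface.

        # Hypothetical sketch: minimum-cost cluster search under a QoS deadline.
        # The candidate list, cost figures, and predictor are illustrative only.
        from dataclasses import dataclass

        @dataclass
        class Config:
            vm_type: str
            nodes: int
            hourly_cost: float   # $/hour for the whole cluster
            speed: float         # relative per-node throughput

        def predict_duration(cfg: Config, work_units: float) -> float:
            """Toy performance model: duration shrinks with aggregate speed."""
            return work_units / (cfg.nodes * cfg.speed)

        def cheapest_feasible(candidates, work_units, deadline_h):
            """Min-cost configuration whose predicted duration meets the deadline."""
            feasible = [c for c in candidates
                        if predict_duration(c, work_units) <= deadline_h]
            return min(feasible, key=lambda c: c.hourly_cost, default=None)

        candidates = [
            Config("m4.xlarge", 4, 0.80, 1.0),
            Config("m4.xlarge", 8, 1.60, 1.0),
            Config("m4.2xlarge", 4, 1.60, 1.9),
        ]
        print(cheapest_feasible(candidates, work_units=40.0, deadline_h=6.0))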

    Product-form in G-networks

    The introduction of the class of queueing networks called G-networks by Gelenbe has been a breakthrough in the field of stochastic modeling, since it has greatly expanded the class of models that are analytically or numerically tractable. From a theoretical point of view, the introduction of G-networks has led to two very important observations: first, a product-form queueing network may have non-linear traffic equations; secondly, a product-form equilibrium distribution may exist even if customer routing is defined in such a way that more than two queues can change their states at the same time epoch. In this work, we review some of the classes of product-forms introduced for the analysis of G-networks, with special attention to these two aspects. We propose a methodology that, coherently with the product-form result, allows for a modular analysis of the G-queues to derive the equilibrium distribution of the network.
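
    For reference, both aspects are visible in the classical G-network with positive and negative customers (the standard Gelenbe result; notation assumed here): the traffic equations are non-linear in the utilizations q_i, yet the equilibrium distribution still factorizes.

        % Traffic equations of a G-network with positive and negative customers.
        % r_i: service rate of queue i; \Lambda_i, \lambda_i: external arrival
        % rates of positive and negative customers; P^+_{ji}, P^-_{ji}: routing
        % probabilities for positive and negative movements.
        q_i = \frac{\lambda^+_i}{r_i + \lambda^-_i}, \qquad
        \lambda^+_i = \Lambda_i + \sum_j q_j \, r_j \, P^+_{ji}, \qquad
        \lambda^-_i = \lambda_i + \sum_j q_j \, r_j \, P^-_{ji}
        % If q_i < 1 for all i, the stationary distribution has product form:
        \pi(k_1, \dots, k_N) = \prod_{i=1}^{N} (1 - q_i) \, q_i^{k_i}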

    File Access Performance of Diskless Workstations

    This paper studies the performance of single-user workstations that access files remotely over a local area network. From the environmental, economic, and administrative points of view, workstations that are diskless or that have limited secondary storage are desirable at the present time. Even with changing technology, access to shared data will continue to be important. It is likely that some performance penalty must be paid for remote rather than local file access. Our objectives are to assess this penalty and to explore a number of design alternatives that can serve to minimize it. Our approach is to use the results of measurement experiments to parameterize queuing network performance models. These models then are used to assess performance under load and to evaluate design alternatives. The major conclusions of our study are: (1) A system of diskless workstations with a shared file server can have satisfactory performance. By this, we mean performance comparable to that of a local disk in the lightly loaded case, and the ability to support substantial numbers of client workstations without significant degradation. As with any shared facility, good design is necessary to minimize queuing delays under high load. (2) The key to efficiency is protocols that allow volume transfers at every interface (e.g., between client and server, and between disk and memory at the server) and at every level (e.g., between client and server at the level of logical request/response and at the level of local area network packet size). However, the benefits of volume transfers are limited to moderate sizes (8-16 kbytes) by several factors. (3) From a performance point of view, augmenting the capabilities of the shared file server may be more cost-effective than augmenting the capabilities of the client workstations. (4) Network contention should not be a performance problem for a 10-Mbit network and 100 active workstations in a software development environment.
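
    By way of illustration, a minimal sketch of the modeling approach, using exact Mean Value Analysis for a closed queueing network of clients cycling between local "think" time and a shared file server. The service demands and think time below are invented placeholders, not the paper's measurements.

        # Illustrative exact MVA: N client workstations share a file server + LAN.
        # Demands (seconds per client interaction) are placeholder values.
        def mva(demands, think_time, n_clients):
            """Exact Mean Value Analysis; returns response time and throughput."""
            queue = [0.0] * len(demands)          # mean queue length per center
            for n in range(1, n_clients + 1):
                # Residence time at each center given queues left behind.
                resid = [d * (1 + q) for d, q in zip(demands, queue)]
                response = sum(resid)
                throughput = n / (response + think_time)
                queue = [throughput * r for r in resid]
            return response, throughput

        # Centers: file server disk, file server CPU, LAN transfer.
        demands = [0.030, 0.010, 0.005]
        for n in (1, 20, 50, 100):
            r, x = mva(demands, think_time=2.0, n_clients=n)
            print(f"{n:3d} clients: response {r*1000:.0f} ms, throughput {x:.1f}/s")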

    Performance Degradation and Cost Impact Evaluation of Privacy Preserving Mechanisms in Big Data Systems

    Big Data is an emerging area concerned with managing datasets whose size is beyond the ability of commonly used software tools to capture, process, and analyze in a timely way. The Big Data software market is growing at a 32% compound annual rate, almost four times faster than the whole ICT market, and the quantity of data to be analyzed is expected to double every two years. Security and privacy are becoming very urgent Big Data aspects that need to be tackled. Indeed, users share more and more personal data and user-generated content through their mobile devices and computers with social networks and cloud services, losing control over their data and content, with a serious impact on their own privacy. Privacy has been the subject of serious debate recently, and many governments require data providers and companies to protect users' sensitive data. To mitigate these problems, many solutions have been developed to provide data privacy but, unfortunately, they introduce some computational overhead when data is processed. The goal of this paper is to quantitatively evaluate the performance and cost impact of multiple privacy protection mechanisms. A real industry case study concerning tax fraud detection has been considered. Many experiments have been performed to analyze the performance degradation and the additional cost (required to provide a given service level) of running applications in a cloud system.
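
    As a minimal illustration of the degradation-and-cost arithmetic such an evaluation involves (all numbers below are placeholders, not the paper's measurements): given a measured slowdown factor for a privacy mechanism, estimate the extra capacity, and hence extra cost, needed to restore the original service level.

        # Placeholder arithmetic: extra cloud cost to absorb privacy overhead.
        baseline_time_h = 1.00    # job duration without privacy protection
        protected_time_h = 1.35   # duration with, e.g., anonymization enabled
        nodes = 20                # baseline cluster size
        node_cost_h = 0.50        # $/node/hour

        slowdown = protected_time_h / baseline_time_h
        # Naive linear-scaling assumption: add nodes in proportion to slowdown.
        extra_nodes = round(nodes * (slowdown - 1))
        extra_cost = extra_nodes * node_cost_h * baseline_time_h

        print(f"slowdown x{slowdown:.2f}: ~{extra_nodes} extra nodes, "
              f"${extra_cost:.2f} extra per run to keep the service level")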

    Modeling performance of Hadoop applications: A journey from queueing networks to stochastic well formed nets

    Nowadays, many enterprises commit to the extraction of actionable knowledge from huge datasets as part of their core business activities. Applications belong to very different domains, such as fraud detection or one-to-one marketing, and encompass business analytics and support to decision making in both private and public sectors. In these scenarios, a central place is held by the MapReduce framework and in particular by its open source implementation, Apache Hadoop. In such environments, new challenges arise in the area of job performance prediction, with the need to provide Service Level Agreement guarantees to the end user and to avoid waste of computational resources. In this paper we provide performance analysis models to estimate MapReduce job execution times in Hadoop clusters governed by the YARN Capacity Scheduler. We propose models of increasing complexity and accuracy, ranging from queueing networks to stochastic well-formed nets, able to estimate job performance under a number of scenarios of interest, including unreliable resources. The accuracy of our models is evaluated against the TPC-DS industry benchmark, running experiments on Amazon EC2 and at the CINECA Italian supercomputing center. The results show that the average accuracy (relative error) we can achieve is in the range of 9–14%.
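
    For illustration only, the simplest end of such a model hierarchy is a first-order "waves of tasks" estimate of job duration; the function and all inputs below are assumptions made for the sketch, not the paper's models.

        # Illustrative first-order estimate of a MapReduce job's duration:
        # each phase runs ceil(tasks/slots) waves of tasks in sequence.
        # Task counts, slot counts, and mean task times are placeholder inputs.
        import math

        def job_time_estimate(n_map, n_red, map_slots, red_slots, t_map, t_red):
            """Waves-of-tasks approximation of map + reduce phase durations."""
            map_phase = math.ceil(n_map / map_slots) * t_map
            red_phase = math.ceil(n_red / red_slots) * t_red
            return map_phase + red_phase

        est = job_time_estimate(n_map=400, n_red=60, map_slots=80, red_slots=30,
                                t_map=22.0, t_red=35.0)   # seconds
        print(f"estimated job time: {est:.0f} s")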

    Ecotopia: An Ecological Framework for Change Management in Distributed Systems

    Dynamic change management in an autonomic, service-oriented infrastructure is likely to disrupt the critical services delivered by the infrastructure. Furthermore, change management must accommodate complex real-world systems, where dependability and performance objectives are managed across multiple distributed service components and have specific criticality/value models. In this paper, we present Ecotopia, a framework for change management in complex service-oriented architectures (SOA) that is ecological in its intent: it schedules change operations with the goal of minimizing service-delivery disruptions by accounting for their impact on the SOA environment. The change-planning functionality of Ecotopia is split between multiple objective advisors and a system-level change-orchestrator component. The objective advisors assess the change impact on service delivery by estimating the expected values of the Key Performance Indicators (KPIs) during and after the change. The orchestrator uses the KPI estimations to assess the per-objective and overall business-value changes over a long time horizon and to identify the scheduling plan that maximizes the overall business value. Ecotopia handles both external change requests, such as software upgrades, and internal change requests, such as fault-recovery actions. We evaluate the Ecotopia framework using two realistic change-management scenarios in distributed enterprise systems.
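
    A minimal sketch, with invented interfaces, of the orchestrator's core decision: ask each objective advisor for the estimated business value of a candidate schedule over the horizon and pick the schedule maximizing the total.

        # Hypothetical orchestrator core: choose the change schedule with the
        # highest total business value, summed across per-objective advisors.
        from typing import Callable, Iterable

        Advisor = Callable[[tuple], float]   # schedule -> estimated business value

        def best_schedule(schedules: Iterable[tuple],
                          advisors: list[Advisor]) -> tuple:
            return max(schedules, key=lambda s: sum(a(s) for a in advisors))

        # Toy advisors: performance prefers off-peak hours; availability
        # penalizes every operation for its downtime risk.
        perf_advisor = lambda s: -sum(10.0 if 9 <= hour <= 17 else 1.0
                                      for _, hour in s)
        avail_advisor = lambda s: -2.0 * len(s)

        candidates = [(("upgrade-db", 2), ("patch-app", 3)),
                      (("upgrade-db", 14), ("patch-app", 15))]
        print(best_schedule(candidates, [perf_advisor, avail_advisor]))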

    Helicobacter pylori Infection in Pediatric Patients Living in Europe: Results of the EuroPedHP Registry 2013 to 2016

    Objectives: The aim of the study was to assess clinical presentation, endoscopic findings, antibiotic susceptibility, and treatment success in Helicobacter pylori (H. pylori)-infected pediatric patients. Methods: Between 2013 and 2016, 23 pediatric hospitals from 17 countries prospectively submitted data on consecutive H. pylori-infected (culture-positive) patients to the EuroPedHP Registry. Results: Of 1333 patients recruited (55.1% girls, median age 12.6 years), 1168 (87.6%) were therapy naïve (group A) and 165 (12.4%) had failed treatment (group B). Patients resided in North/Western (29.6%), Southern (34.1%), and Eastern Europe (23.0%), or Israel/Turkey (13.4%). The main indications for endoscopy were abdominal pain or dyspepsia (81.2%, 1078/1328). Antral nodularity was reported in 77.8% (1031/1326) of patients; gastric or duodenal ulcers and erosions in 5.1% and 12.8%, respectively. Primary resistance to clarithromycin (CLA) and metronidazole (MET) occurred in 25% and 21%, respectively, and increased after failed therapy. Bacterial strains were fully susceptible in 60.5% of group A, but in only 27.4% of group B. Primary CLA resistance was higher in Southern and Eastern Europe (adjusted odds ratio [ORadj] = 3.44, 95% confidence interval [CI]: 2.22–5.32, P < 0.001 and ORadj = 2.62, 95% CI: 1.63–4.22, P < 0.001, respectively) compared with Northern/Western Europe. Children born outside Europe showed higher primary MET resistance (ORadj = 3.81, 95% CI: 2.25–6.45, P < 0.001). Treatment success in group A reached only 79.8% (568/712) with 7- to 14-day triple therapy tailored to antibiotic susceptibility. Conclusions: Peptic ulcers are rare in dyspeptic H. pylori-infected children. Primary resistance to CLA and MET is markedly dependent on the geographical regions of birth and residence. The ongoing survey will show whether implementation of the updated ESPGHAN/NASPGHAN guidelines will improve eradication success. What Is Known: Antibiotic susceptibility and treatment adherence are crucial for successful Helicobacter pylori eradication. In 2006, we reported antibiotic resistance in 1233 infected children (1033 treatment-naïve) living in 14 European countries; primary resistance rates to clarithromycin and metronidazole were 20% and 23%, respectively. What Is New: This second survey in 1333 culture-positive children revealed increasing primary resistance to clarithromycin (25%), but not to metronidazole (21%). Antibiotic resistance depended significantly on geographical region and migration status, questioning country-based recommendations. Prescribed drug doses were too low, particularly for proton pump inhibitors (PPIs). Improved eradication rates can be expected if current European Society of Pediatric Gastroenterology, Hepatology and Nutrition/North American Society of Pediatric Gastroenterology, Hepatology and Nutrition guidelines are followed.
    • …