
    Competing by Saving Lives: How Pharmaceutical and Medical Device Companies Create Shared Value in Global Health

    This report looks at how pharmaceutical and medical device companies can create shared value in global health by addressing unmet health needs in low- and middle-income countries. Companies have already begun to reap business value and are securing competitive advantages in the markets of tomorrow.

    On a course on computer cluster configuration and administration

    Computer clusters are today a cost-effective way of providing high performance, high availability, or both. The flexibility of their configuration aims to fit the needs of multiple environments, from small SME servers to large Internet servers. For these reasons, their usage has expanded not only in academia but also in many companies. However, each environment needs a different "cluster flavour". High-performance and high-throughput computing are required in universities and research centres, while high-performance service and high availability are usually reserved for companies. Despite this fact, most university cluster computing courses continue to cover only high-performance computing, usually ignoring other possibilities. In this paper, a master-level course which attempts to fill this gap is discussed. It explores the different types of cluster computing as well as their functional basis, from a very practical point of view. As part of the teaching methodology, each student builds a computer cluster from scratch using a virtualization tool. The entire process is designed to be scalable: the goal is to be able to apply it to an actual computer cluster with a larger number of nodes, such as those the students may subsequently encounter in their professional life.

    This work was supported in part by the Spanish Ministerio de Economia y Competitividad (MINECO) and by FEDER funds under Grant TIN2015-66972-C5-1-R.

    López Rodríguez, PJ.; Baydal Cardona, ME. (2017). On a course on computer cluster configuration and administration. Journal of Parallel and Distributed Computing. 105:127-137. https://doi.org/10.1016/j.jpdc.2017.01.009
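    To give a flavour of the kind of hands-on exercise such a course involves, the sketch below checks that each node of a freshly built virtual cluster answers on its SSH port before configuration proceeds. The hostnames and port are illustrative assumptions, not material from the paper.

```python
# Hypothetical sketch: verify every node of a small virtual cluster is
# reachable over SSH before cluster configuration begins. Hostnames and
# the port are assumptions made for this example.
import socket

NODES = ["node01", "node02", "node03", "node04"]  # hypothetical hostnames
SSH_PORT = 22

def reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for node in NODES:
        status = "up" if reachable(node, SSH_PORT) else "DOWN"
        print(f"{node}: {status}")
```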

    Dependability of the NFV Orchestrator: State of the Art and Research Challenges

    © 2018 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.

    The introduction of network function virtualisation (NFV) represents a significant change in networking technology, which may create new opportunities in terms of cost efficiency, operations, and service provisioning. Although not explicitly stated as an objective, the dependability of the services provided using this technology should be at least as good as that of conventional solutions. Logical centralisation, off-the-shelf computing platforms, and increased system complexity represent new dependability challenges relative to the state of the art. The core function of the network, with respect to failure and service management, is orchestration. Failure and misoperation of the NFV orchestrator (NFVO) will have huge network-wide consequences, while at the same time the NFVO is vulnerable to overload and design faults. Thus, the objective of this paper is to give a tutorial on the dependability challenges of the NFVO, and to give insight into the required future research. The paper provides the necessary background information, reviews the available literature, outlines the proposed solutions, and identifies some design and research problems that must be addressed.
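    As a minimal illustration of one dependability mechanism in this space, the sketch below implements a heartbeat-based failure detector of the kind used to notice a silent orchestrator. The class, component names and timeout are assumptions for this example; the paper surveys such mechanisms rather than prescribing code.

```python
# Minimal sketch of a heartbeat-based failure detector, a classic building
# block for monitoring a component such as an NFV orchestrator. Names and
# the timeout threshold are illustrative assumptions.
import time

HEARTBEAT_TIMEOUT = 3.0  # seconds of silence before a component is suspected

class HeartbeatMonitor:
    def __init__(self, timeout: float = HEARTBEAT_TIMEOUT):
        self.timeout = timeout
        self.last_seen: dict[str, float] = {}

    def heartbeat(self, component: str) -> None:
        """Record a heartbeat from a monitored component (e.g. the NFVO)."""
        self.last_seen[component] = time.monotonic()

    def suspected(self) -> list[str]:
        """Return components whose heartbeats have gone silent."""
        now = time.monotonic()
        return [c for c, t in self.last_seen.items() if now - t > self.timeout]

monitor = HeartbeatMonitor()
monitor.heartbeat("nfvo-primary")
time.sleep(0.1)
print(monitor.suspected())  # [] while heartbeats are fresh
```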

    Teaching high-performance service in a cluster computing course

    Most courses on cluster computing in graduate and postgraduate studies are focused on parallel programming and high-performance/high-throughput computing. This is the typical usage of clusters in academia and research centres. Nowadays, however, many companies provide web, mail and, in general, Internet services using computer clusters. These services require a different "cluster flavour": high-performance service and high availability. Despite the fact that computer clusters for each environment demand a different configuration, most university cluster computing courses keep focusing only on high-performance computing, ignoring other possibilities. In this paper, we propose several teaching strategies for a course on cluster computing that could fill this gap. The content developed here would be taught as part of the course. The subject presents several strategies for configuring, testing and evaluating a high-availability/load-balanced Internet server. A virtualization-based platform is used to build a cluster prototype, using Linux as its operating system. Evaluation of the course shows that students' knowledge and skills on the subject improve by the end of the course. Regarding the teaching methodology, the results obtained in the yearly survey of the University confirm student satisfaction.

    This work was supported in part by the Spanish Ministerio de Economia y Competitividad (MINECO) and by FEDER funds under Grant TIN2015-66972-C5-1-R.

    López Rodríguez, PJ.; Baydal Cardona, ME. (2018). Teaching high-performance service in a cluster computing course. Journal of Parallel and Distributed Computing. 117:138-147. https://doi.org/10.1016/j.jpdc.2018.02.027
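    As an illustration of the load-balancing concept such a course covers, here is a minimal sketch of a round-robin dispatcher that skips back ends failing a TCP health check. The back-end addresses are hypothetical, and a real course setup would rely on a production balancer such as Linux Virtual Server or HAProxy; this only shows the idea.

```python
# Conceptual sketch of round-robin load balancing with health checks.
# Back-end addresses are invented for the example.
import itertools
import socket

BACKENDS = [("10.0.0.11", 80), ("10.0.0.12", 80), ("10.0.0.13", 80)]

def healthy(addr, timeout=1.0):
    """Health check: does the back end accept a TCP connection?"""
    try:
        with socket.create_connection(addr, timeout=timeout):
            return True
    except OSError:
        return False

def next_backend(rr=itertools.cycle(BACKENDS)):
    """Return the next healthy back end in round-robin order."""
    for _ in range(len(BACKENDS)):
        addr = next(rr)
        if healthy(addr):
            return addr
    raise RuntimeError("no healthy back ends available")
```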

    Adaptive and Scalable High Availability for Infrastructure Clouds


    High Availability Framework for Mix-Cloud Secure Applications

    Having a service, such as a web application, database or telephony system, unavailable because of a single server failure is a very annoying yet very common issue, especially if the service is deployed on-premises. The simplest way to address it is to introduce redundancy into the system. In that case, however, the number of physical machines needed rises while their efficiency drops, as most services do not use 100% of a machine's capabilities. A better way to solve the availability issue is to logically separate the service from the underlying hardware, balancing the load between instances and migrating them between physical machines in case of failure. This approach is much more effective, but it also brings a number of challenges, such as configuration difficulty and inter-service request routing. The HA framework discussed in this thesis was designed to mitigate those issues. The key goal of the HA framework is to raise the scalability and reliability of the service while keeping the configuration as simple as possible. The framework binds together a number of existing technologies, automatically installing and managing them with a single goal in mind: to provide an automated, easy-to-use, reliable, and scalable High Availability solution. In addition, the framework provides a distributed yet unified point of control over the whole installation, regardless of the physical location of its components, including cloud and PaaS deployments. The framework is meant to be used by small-to-medium sized enterprises.
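    To make the failover step concrete, the following sketch shows one possible policy: when a physical machine stops responding, its service instances are moved to the least-loaded surviving machine. The data structures and names are assumptions made for this illustration, not the thesis framework's actual API.

```python
# Illustrative failover policy: migrate instances off dead machines onto
# the least-loaded live machine. All names and values are invented.
machines = {
    "host-a": {"alive": True,  "load": 0.40, "instances": ["web-1", "db-1"]},
    "host-b": {"alive": False, "load": 0.00, "instances": ["web-2"]},
    "host-c": {"alive": True,  "load": 0.25, "instances": []},
}

def failover(machines: dict) -> None:
    """Move instances off dead machines onto the least-loaded live one."""
    live = {name: m for name, m in machines.items() if m["alive"]}
    for name, m in machines.items():
        if m["alive"] or not m["instances"]:
            continue
        target = min(live, key=lambda n: live[n]["load"])
        live[target]["instances"].extend(m["instances"])
        print(f"migrating {m['instances']} from {name} to {target}")
        m["instances"] = []

failover(machines)  # migrating ['web-2'] from host-b to host-c
```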

    Enhancing Availability of Marine Bigdata Repository with a New Fault Tolerance Technique

    System availability is one of the crucial properties of a dependable knowledge repository system, allowing it to withstand and recover from minor outages in a short timespan through an automated process. The National Marine Bioinformatics System, or NABTICS, is a Marine Microbial Bigdata Repository that unites integrated information on genomic sequences and associated metadata. It is projected to be a large and growing database, as well as a metadata system providing inputs for research analysis and for solving community issues. It is therefore essential to maintain the availability of the system by accurately detecting failures in a timely manner and taking prompt recovery action when a failure occurs. A failure in any of NABTICS' system components can be devastating, leaving the system inaccessible for a period of time. In this paper, we integrate NABTICS with Cloud-based Neighbour Replication and Failure Recovery (NRFR) in order to enhance the availability of the system. We show that the implementation results in a better user experience with minimal system downtime, making the online database application highly available. Furthermore, NABTICS also achieves better resource utilization and higher application responsiveness at runtime.
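    As a rough conceptual sketch of neighbour replication, assume nodes arranged in a ring where each node's successor keeps a replica of its data; a read can then be served from the neighbour while a failed node recovers. This is only an illustration of the general idea; the actual NRFR protocol is defined in the paper and may differ in detail.

```python
# Conceptual neighbour-replication sketch: node i's data is also held by
# its ring successor, so a single failure can be masked. Illustrative only.
NODES = ["n0", "n1", "n2", "n3"]

def neighbour(i: int) -> int:
    """Successor on the ring holds the replica."""
    return (i + 1) % len(NODES)

# primary[i] holds node i's data; its copy lives at neighbour(i)
primary = {i: f"partition-{i}" for i in range(len(NODES))}
replica = {neighbour(i): primary[i] for i in range(len(NODES))}

def read(partition: int, failed: set[int]) -> str:
    """Serve a read from the primary, or from its neighbour if it failed."""
    if partition not in failed:
        return primary[partition]
    backup = neighbour(partition)
    if backup in failed:
        raise RuntimeError("both primary and neighbour are down")
    return replica[backup]

print(read(2, failed={2}))  # served from node 3's replica: 'partition-2'
```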

    Fault-tolerant FPGA for mission-critical applications

    One of the devices that plays a great role in electronic circuit design, specifically in safety-critical applications, is the Field Programmable Gate Array (FPGA). This is because of its high performance, re-configurability and low development cost. FPGAs are used in many applications such as data processing, networking, automotive, space and industrial applications. Moving to smaller feature sizes in the latest FPGA architectures has a negative impact on the reliability of such applications. This increases the need for fault-tolerant techniques to improve reliability and extend the system lifetime of FPGA-based applications. In this thesis, two fault-tolerant techniques for FPGA-based applications are proposed, each with a built-in fault detection region. A low-cost fault detection scheme, based on the detection region used in both techniques, is proposed. It primarily detects open faults in the programmable interconnect resources of the FPGA; in addition, stuck-at faults and Single Event Upsets (SEUs) can be detected. For fault recovery, each scheme has its own approach. The first approach uses a spare module and a 2-to-1 multiplexer to recover from any detected fault. The second approach recovers from any detected fault using the Partial Reconfiguration (PR) capability of the FPGA: it relies on identifying a Partially Reconfigurable block (P_b) in the FPGA that is used in the recovery process after the first faulty module is identified in the system. This technique uses only one location to recover from faults in any of the FPGA's modules and interconnects. Simulation results show that both techniques can detect and recover from open faults, and can additionally detect stuck-at faults and SEUs. Finally, both techniques require low area overhead.
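    The first recovery approach lends itself to a simple behavioural model: a 2-to-1 multiplexer routes the input through the spare module once the detection region flags the primary as faulty. The Python sketch below is a software analogy of that control flow, written only to illustrate the idea, not a hardware description of the thesis design.

```python
# Behavioural analogy of spare-module recovery via a 2-to-1 multiplexer.
# The "modules" stand in for identical FPGA logic blocks.
def primary_module(x: int) -> int:
    return x + 1          # stands in for the primary logic block

def spare_module(x: int) -> int:
    return x + 1          # identical spare, used after a fault

def mux(select_spare: bool, x: int) -> int:
    """2-to-1 multiplexer: route the input through primary or spare."""
    return spare_module(x) if select_spare else primary_module(x)

fault_detected = False    # driven by the fault-detection region in hardware
print(mux(fault_detected, 41))   # 42, via the primary
fault_detected = True     # detection region reports an open/stuck-at fault
print(mux(fault_detected, 41))   # 42, via the spare
```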

    Health promotion intervention in mental health care: design and baseline findings of a cluster preference randomized controlled trial

    Background: Growing attention is being given to the effects of health promotion programs targeting physical activity and healthy eating in individuals with mental disorders. The design of evaluation studies of public health interventions poses several problems, and the current literature appears to provide only limited evidence on the effectiveness of such programs. The aim of the study is to examine the effectiveness and cost-effectiveness of a health promotion intervention targeting physical activity and healthy eating in individuals with mental disorders living in sheltered housing. In this paper, the design of the study and baseline findings are described.

    Methods/design: The design consists of a cluster preference randomized controlled trial. All sheltered housing organisations in the Flanders region (Belgium) were asked whether they were interested in participating in the study and whether they had a preference to serve as intervention or control group. Those without a preference were randomly assigned to the intervention or control group. Individuals in the intervention group receive a 10-week health promotion intervention on top of their treatment as usual. Outcome assessments occur at baseline, at 10 weeks and at 36 weeks. The primary outcomes include body weight, Body Mass Index, waist circumference, and fat mass. Secondary outcomes consist of physical activity levels, eating habits, health-related quality of life and psychiatric symptom severity. Cost-effectiveness of the intervention will be examined by calculating the cost-effectiveness ratio and through economic modeling. Twenty-five sheltered housing organisations agreed to participate. On the individual level, 324 patients were willing to participate, including 225 individuals in the intervention group and 99 individuals in the control group. At baseline, no statistically significant differences between the two groups were found for the primary outcome variables.

    Discussion: This is the first trial evaluating both the effectiveness and cost-effectiveness of a health promotion intervention targeting physical activity and healthy eating in mental health care using a cluster preference randomized controlled design. The baseline characteristics already demonstrate the unhealthy condition of the study population.
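    Since the protocol announces a cost-effectiveness ratio calculation, a small worked sketch may help: the incremental cost-effectiveness ratio (ICER) divides the extra cost of the intervention by its extra effect. All numbers below are invented placeholders, not trial results.

```python
# Hedged sketch of an incremental cost-effectiveness ratio (ICER).
# All figures are invented placeholders for illustration.
def icer(cost_intervention, cost_control, effect_intervention, effect_control):
    """ICER = (C1 - C0) / (E1 - E0), e.g. euros per kg of weight lost."""
    return (cost_intervention - cost_control) / (effect_intervention - effect_control)

# placeholder values: 300 vs 100 euros per patient, 2.5 vs 0.5 kg lost
print(icer(300.0, 100.0, 2.5, 0.5))  # 100.0 euros per additional kg lost
```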