
    Cost-Effective Cloud Computing: A Case Study Using the Comparative Genomics Tool, Roundup

    Background Comparative genomics resources, such as ortholog detection tools and repositories, are rapidly increasing in scale and complexity. Cloud computing is an emerging technological paradigm that enables researchers to dynamically build a dedicated virtual cluster and may represent a valuable alternative for large computational tools in bioinformatics. In the present manuscript, we optimize the computation of a large-scale comparative genomics resource, Roundup, using cloud computing, describe the operating principles required to achieve computational efficiency on the cloud, and detail important procedures for improving cost-effectiveness to ensure maximal computation at minimal cost.
    Methods Using the comparative genomics tool Roundup as a case study, we computed orthologs among 902 fully sequenced genomes on Amazon's Elastic Compute Cloud. To manage the ortholog processes, we designed a strategy for deploying the Elastic MapReduce web service that maximizes use of the cloud while minimizing costs. Specifically, we created a model that estimates cloud runtime from the size and complexity of the genomes being compared, and that determines in advance the optimal order in which jobs should be submitted.
    Results We computed orthologous relationships for 245,323 genome-to-genome comparisons on Amazon's computing cloud, a computation that required just over 200 hours and cost $8,000 USD, at least 40% less than expected under a strategy in which genome comparisons were submitted to the cloud in random order with respect to runtime. Our cost-savings projections were based on a model that not only demonstrates the optimal strategy for deploying RSD to the cloud, but also finds the optimal cluster size to minimize waste and maximize usage. Our cost-reduction model is readily adaptable to other comparative genomics tools and is potentially of significant benefit to labs seeking to use the cloud as an alternative to local computing infrastructure.

    NOMA-based resource allocation and mobility enhancement framework for IoT in next generation cellular networks

    With the unprecedented technological advances of the last two decades, ever more devices are connected to the internet, forming what is called the internet of things (IoT). IoT devices with heterogeneous characteristics and quality of experience (QoE) requirements may engage in a dynamic spectrum market due to the scarcity of radio resources. We propose a framework that efficiently quantifies and supplies radio resources to IoT devices by developing intelligent systems. The primary goal of the paper is to study the characteristics of the next generation of cellular networks with non-orthogonal multiple access (NOMA) to enable connectivity for clustered IoT devices. First, we demonstrate how the distribution and QoE requirements of IoT devices affect the number of radio resources required in real time. Second, we show that an extended auction algorithm, implemented through a series of complementary functions, enhances radio resource utilization efficiency. The results show a substantial reduction in the number of sub-carriers required compared to conventional orthogonal multiple access (OMA), and that the intelligent clustering is scalable and adaptable to the cellular environment. The ability to move borrowed spectrum from one cluster to others when a cluster has fewer users, or when users move out of its boundary, further contributes to the reported radio resource utilization efficiency. Moreover, the proposed framework provides IoT service providers with cost estimates to control their spectrum acquisition and achieve the required quality of service (QoS) for both guaranteed bit rate (GBR) and non-guaranteed bit rate (non-GBR) traffic.
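    In power-domain NOMA, users with dissimilar channel gains can share one sub-carrier, which is why clustering reduces the sub-carrier count relative to OMA. The sketch below illustrates only this gain-based pairing idea with invented gain values; it is not the paper's extended auction algorithm.

```python
def noma_pairing(channel_gains, cluster_size=2):
    """Pair users with dissimilar channel gains onto shared sub-carriers:
    sort users by gain, then group the weakest remaining user with the
    strongest remaining ones. Returns a list of clusters of user indices."""
    order = sorted(range(len(channel_gains)), key=lambda i: channel_gains[i])
    clusters = []
    while order:
        cluster = [order.pop(0)]                 # weakest remaining user
        for _ in range(cluster_size - 1):
            if order:
                cluster.append(order.pop())      # strongest remaining user
        clusters.append(cluster)
    return clusters

gains = [0.9, 0.1, 0.5, 0.7, 0.2, 0.4]   # invented channel gains
clusters = noma_pairing(gains)
# OMA needs one sub-carrier per user (6); here NOMA needs one per cluster (3).
```

    The large gain gap within each pair is what lets successive interference cancellation separate the two superimposed signals.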

    Adaptable processes

    We propose the concept of adaptable processes as a way of overcoming the limitations that process calculi have in describing patterns of dynamic process evolution. Such patterns rely on direct ways of controlling the behavior and location of running processes, and so they are at the heart of the adaptation capabilities present in many modern concurrent systems. Adaptable processes have a location and are susceptible to dynamic update actions at runtime; this makes it possible to express a wide range of evolvability patterns for concurrent processes. We introduce a core calculus of adaptable processes and propose two verification problems for it: bounded and eventual adaptation. The former ensures that the number of consecutive erroneous states that can be traversed during a computation is bounded by a given number k; the latter ensures that if the system enters a state with errors, then a state without errors will eventually be reached. We study the (un)decidability of these two problems in several variants of the calculus, which result from considering dynamic and static topologies of adaptable processes as well as different evolvability patterns. Rather than a specification language, our calculus is intended as a basis for investigating the fundamental properties of evolvable processes and for developing richer languages with evolvability capabilities.
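    At the level of a single finite trace, the two verification problems can be illustrated as simple checks over a sequence of states flagged as erroneous or not. This is only a sketch of what the properties assert; the paper studies them over a process calculus, where their (un)decidability is the actual question.

```python
def bounded_adaptation(trace, k):
    """True iff at most k consecutive erroneous states occur in the trace
    (each entry marks whether that state is erroneous)."""
    run = 0
    for erroneous in trace:
        run = run + 1 if erroneous else 0
        if run > k:
            return False
    return True

def eventual_adaptation(trace):
    """True iff every error is eventually corrected, i.e. this finite
    trace does not end while still in an erroneous state."""
    return not trace or not trace[-1]
```

    Bounded adaptation is the stronger guarantee: it caps how long the system may misbehave, whereas eventual adaptation only promises recovery at some point.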

    Attribute Identification and Predictive Customisation Using Fuzzy Clustering and Genetic Search for Industry 4.0 Environments

    Today's factories involve more services and customisation. A paradigm shift towards “Industry 4.0” (i4) aims at realising mass customisation at a mass-production cost. However, there is a lack of tools for customer informatics. This paper addresses this issue and develops a predictive analytics framework integrating big data analysis and business informatics, using Computational Intelligence (CI). In particular, fuzzy c-means clustering is used for pattern recognition and for managing relevant big data, feeding potential customer needs and wants into the design stage to improve productivity in customised mass production. The selection of patterns from big data is performed using a genetic algorithm combined with fuzzy c-means, which helps with clustering and the selection of optimal attributes. The case study shows that fuzzy c-means is able to assign new clusters as knowledge of customer needs and wants grows. The dataset has three types of entities: specifications of various characteristics, an assigned insurance risk rating, and normalised losses in use compared with other cars. The fuzzy c-means tool offers a number of features suitable for smart designs in an i4 environment.
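    The core fuzzy c-means update, which gives each data point a membership degree in every cluster rather than a hard label, can be sketched as follows. This is a minimal one-dimensional toy (invented data, c >= 2); the paper pairs the clustering with a genetic algorithm for attribute selection, which is not shown here.

```python
def fuzzy_c_means(points, c, m=2.0, iters=50):
    """One-dimensional fuzzy c-means with fuzzifier m: alternate between
    updating memberships and recomputing membership-weighted centers."""
    # Initialise centers spread evenly across the data range.
    lo, hi = min(points), max(points)
    centers = [lo + (hi - lo) * j / (c - 1) for j in range(c)]
    for _ in range(iters):
        # Membership update: u[i][j] depends on relative distances to centers.
        u = []
        for x in points:
            d = [abs(x - ck) or 1e-12 for ck in centers]  # guard zero distance
            u.append([1.0 / sum((d[j] / d[l]) ** (2 / (m - 1)) for l in range(c))
                      for j in range(c)])
        # Center update: membership-weighted mean of all points.
        centers = [sum(u[i][j] ** m * points[i] for i in range(len(points)))
                   / sum(u[i][j] ** m for i in range(len(points)))
                   for j in range(c)]
    return centers, u

centers, u = fuzzy_c_means([1.0, 1.1, 0.9, 5.0, 5.1, 4.9], c=2)
```

    Because memberships are graded, a point near a cluster boundary contributes partially to several centers, which is what lets new clusters emerge as customer data grows.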

    An adaptable implementation package targeting evidence-based indicators in primary care: a pragmatic cluster-randomised evaluation

    Background In primary care, multiple priorities and system pressures make closing the gap between evidence and practice challenging. Most implementation studies focus on single conditions, limiting generalisability. We compared an adaptable implementation package against an implementation control and assessed effects on adherence to four different evidence-based quality indicators.
    Methods and findings We undertook two parallel, pragmatic cluster-randomised trials using balanced incomplete block designs in general practices in West Yorkshire, England. We used ‘opt-out’ recruitment, and we randomly assigned practices that did not opt out to an implementation package targeting either diabetes control or risky prescribing (Trial 1); or blood pressure (BP) control or anticoagulation in atrial fibrillation (AF) (Trial 2). Within trials, each arm acted as the implementation control comparison for the other targeted indicator. For example, practices assigned to the diabetes control package acted as the comparison for practices assigned to the risky prescribing package. The implementation package embedded behaviour change techniques within audit and feedback, educational outreach, and computerised support, with content tailored to each indicator. Respective patient-level primary endpoints at 11 months comprised the following: achievement of all recommended levels of haemoglobin A1c (HbA1c), BP, and cholesterol; risky prescribing levels; achievement of recommended BP; and anticoagulation prescribing. Between February and March 2015, we recruited 144 general practices collectively serving over 1 million patients. We stratified computer-generated randomisation by area, list size, and pre-intervention outcome achievement. In April 2015, we randomised 80 practices to Trial 1 (40 per arm) and 64 to Trial 2 (32 per arm). Practices and trial personnel were not blind to allocation. Two practices were lost to follow-up but provided some outcome data.
    We analysed the intention-to-treat (ITT) population, adjusted for potential confounders at patient level (sex, age) and practice level (list size, locality, pre-intervention achievement against primary outcomes, total quality scores, and levels of patient co-morbidity), and analysed cost-effectiveness. The implementation package reduced risky prescribing (odds ratio [OR] 0.82; 97.5% confidence interval [CI] 0.67–0.99, p = 0.017) with an incremental cost-effectiveness ratio of £1,359 per quality-adjusted life year (QALY), but there was insufficient evidence of effect on other primary endpoints (diabetes control OR 1.03, 97.5% CI 0.89–1.18, p = 0.693; BP control OR 1.05, 97.5% CI 0.96–1.16, p = 0.215; anticoagulation prescribing OR 0.90, 97.5% CI 0.75–1.09, p = 0.214). No statistically significant effects were observed in any secondary outcome except for reduced co-prescription of aspirin and clopidogrel without gastro-protection in patients aged 65 and over (adjusted OR 0.62; 97.5% CI 0.39–0.99; p = 0.021). Main study limitations concern our inability to make any inferences about the relative effects of individual intervention components, given the multifaceted nature of the implementation package, and that the composite endpoint for diabetes control may have been too challenging to achieve.
    Conclusions In this study, we observed that a multifaceted implementation package was clinically effective and cost-effective for targeting prescribing behaviours within the control of clinicians, but not for more complex behaviours that also required patient engagement.
    Trial registration The study is registered with the ISRCTN registry (ISRCTN91989345).
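    The trial reports odds ratios with 97.5% confidence intervals, wider than the usual 95% to account for the two primary comparisons per trial. A minimal sketch of how such an interval is computed from a 2x2 table (the counts below are invented, and the trial's actual estimates came from adjusted models, not raw tables):

```python
from math import exp, log, sqrt

def odds_ratio_ci(a, b, c, d, conf=0.975):
    """Odds ratio for a 2x2 table
        [[a, b],   intervention: events / non-events
         [c, d]]   control:      events / non-events
    with a Wald confidence interval computed on the log-odds scale."""
    z = {0.95: 1.960, 0.975: 2.241}[conf]   # standard normal quantiles
    or_ = (a * d) / (b * c)
    half_width = z * sqrt(1/a + 1/b + 1/c + 1/d)
    return or_, exp(log(or_) - half_width), exp(log(or_) + half_width)

or_, lo, hi = odds_ratio_ci(50, 950, 60, 940)   # invented counts
```

    An interval that excludes 1 (as for risky prescribing, 0.67–0.99) indicates an effect at the chosen confidence level; the intervals for the other three endpoints all straddled 1.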