
    Reporting an Experience on Design and Implementation of e-Health Systems on Azure Cloud

    Electronic Health (e-Health) technology has brought significant transformation to the world, from traditional paper-based medical practice to Information and Communication Technologies (ICT)-based systems for the automatic management (storage, processing, and archiving) of information. Traditionally, e-Health systems have been designed to operate within stovepipes on dedicated networks, physical computers, and locally managed software platforms, which makes them susceptible to several serious limitations, including: 1) lack of on-demand scalability during critical situations; 2) high administrative overheads and costs; and 3) inefficient resource utilization and energy consumption due to lack of automation. In this paper, we present an approach to migrate ICT systems in the e-Health sector from the traditional in-house Client/Server (C/S) architecture to a virtualised cloud computing environment. To this end, we developed two cloud-based e-Health applications (a Medical Practice Management System and a Telemedicine Practice System) to demonstrate how cloud services can be leveraged for developing and deploying such applications. The Windows Azure cloud computing platform is selected as an example public cloud platform for our study. We conducted several performance evaluation experiments to understand the Quality of Service (QoS) tradeoffs of our applications under variable workload on Azure.
    Comment: Submitted to the Third IEEE International Conference on Cloud and Green Computing (CGC 2013)
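
    As a rough illustration of the kind of QoS experiment described above, the following minimal Python sketch replays increasing workload levels against a cloud-hosted endpoint and reports latency percentiles. The endpoint URL, workload levels, and helper names are invented for the example; this is not the paper's actual test harness.

```python
# Illustrative load-test sketch: probe a cloud-hosted e-Health endpoint
# under increasing workload levels and report response-time percentiles.
# The endpoint URL and workload levels are hypothetical assumptions.
import concurrent.futures
import statistics
import time

import requests

ENDPOINT = "https://example-ehealth-app.azurewebsites.net/api/patients"  # hypothetical

def timed_request():
    """Issue one GET request and return its latency in seconds."""
    start = time.perf_counter()
    requests.get(ENDPOINT, timeout=10)
    return time.perf_counter() - start

def run_workload(concurrent_users, requests_per_user=5):
    """Replay one workload level and print latency percentiles."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        futures = [pool.submit(timed_request)
                   for _ in range(concurrent_users * requests_per_user)]
        latencies = [f.result() for f in concurrent.futures.as_completed(futures)]
    q = statistics.quantiles(latencies, n=100)     # 99 percentile cut points
    print(f"{concurrent_users:4d} users: median {q[49]*1e3:.0f} ms, "
          f"95th percentile {q[94]*1e3:.0f} ms")

for users in (10, 50, 100, 200):                   # increasing workload
    run_workload(users)
```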

    Scalable algorithms for QoS-aware virtual network mapping for cloud services

    Both business and consumer applications increasingly depend on cloud solutions. Yet, many are still reluctant to move to cloud-based solutions, mainly due to concerns about service quality and reliability. Since cloud platforms depend both on IT resources (located in data centers, DCs) and on the network infrastructure connecting to them, both QoS and resilience should be offered with end-to-end guarantees up to and including the server resources. The latter is currently largely impeded by the fact that the network and cloud DC domains are typically operated by disjoint entities. Network virtualization, together with combined control of network and IT resources, can solve that problem. Here, we formally state the combined network and IT provisioning problem for a set of virtual networks, incorporating resilience as well as QoS in the physical and virtual layers. We provide a scalable column generation model to address real-world network sizes. We analyze the latter in extensive case studies, to answer the question at which layer to provision QoS and resilience in virtual networks for cloud services.
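
    To give some intuition for the combined network and IT mapping problem, the sketch below shows a deliberately simple greedy baseline: place virtual nodes on data centers with spare CPU, then route each virtual link over the shortest physical path with sufficient bandwidth. All graph data is made up, and this stands in for the problem setting only; the paper's scalable column generation model, QoS classes, and resilience constraints are not reproduced here.

```python
# Simplified greedy baseline for embedding one virtual network onto a
# physical substrate of data centers and links. Capacities and demands
# below are invented for illustration; this is not the paper's model.
import networkx as nx

phys = nx.Graph()
phys.add_nodes_from([("DC1", {"cpu": 8}), ("DC2", {"cpu": 4}), ("DC3", {"cpu": 6})])
phys.add_edge("DC1", "DC2", bw=10)
phys.add_edge("DC2", "DC3", bw=10)
phys.add_edge("DC1", "DC3", bw=5)

virt_nodes = {"v1": 4, "v2": 3}          # virtual node -> CPU demand
virt_links = [("v1", "v2", 4)]           # (src, dst, bandwidth demand)

# Node mapping: greedily pick the DC with the most remaining CPU.
placement = {}
for vnode, cpu in virt_nodes.items():
    host = max(phys.nodes, key=lambda n: phys.nodes[n]["cpu"])
    if phys.nodes[host]["cpu"] < cpu:
        raise RuntimeError(f"no DC can host {vnode}")
    phys.nodes[host]["cpu"] -= cpu
    placement[vnode] = host

# Link mapping: shortest physical path, then reserve bandwidth hop by hop.
for src, dst, bw in virt_links:
    path = nx.shortest_path(phys, placement[src], placement[dst])
    hops = list(zip(path, path[1:]))
    if any(phys[u][v]["bw"] < bw for u, v in hops):
        raise RuntimeError(f"insufficient bandwidth for {src}-{dst}")
    for u, v in hops:
        phys[u][v]["bw"] -= bw

print("node placement:", placement)
```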

    Improved electromagnetic compatibility standards for the interconnected wireless world

    The future is wireless, a world where everything is interconnected. However, the current standards for ensuring the electromagnetic compatibility (EMC) and coexistence of such wireless systems are in urgent need of a major update. It is shown how novel statistical approaches based on the amplitude probability distribution detector and time-domain measurements are better suited for estimating the degradation caused by electromagnetic interference on digital communication systems than the established practice of determining compliance according to quasi-peak detector levels using a pass/fail criterion. Therefore, a redefinition of the test methods and of the compliance requirements in EMC standards must be a priority for the international standardization bodies. Finally, a discussion of the fundamental challenges involved in this standardization breakthrough for EMC is delivered.
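
    For readers unfamiliar with the amplitude probability distribution (APD) detector mentioned above, the following sketch computes an empirical APD, i.e., the fraction of time the disturbance envelope exceeds each amplitude level, from time-domain samples. The synthetic impulsive-noise signal and threshold grid are illustrative assumptions, not measurement data from the article.

```python
# Empirical amplitude probability distribution (APD): for each amplitude
# threshold, the fraction of time the disturbance envelope exceeds it.
# The impulsive-noise samples below are synthetic, for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
envelope = np.abs(rng.normal(0.0, 1.0, n))      # background noise envelope
impulses = rng.random(n) < 0.001                # rare impulsive events
envelope[impulses] += rng.exponential(10.0, impulses.sum())

thresholds = np.linspace(0.0, envelope.max(), 200)
# P(envelope > threshold) for each threshold level
apd = (envelope[None, :] > thresholds[:, None]).mean(axis=1)

for t, p in zip(thresholds[::50], apd[::50]):
    print(f"amplitude > {t:6.2f}: exceeded {p:8.5f} of the time")
```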

    Joint in-network video rate adaptation and measurement-based admission control: algorithm design and evaluation

    The important new revenue opportunities that multimedia services offer to network and service providers come with important management challenges. For providers, it is important to control the video quality that is offered to and perceived by the user, typically known as the quality of experience (QoE). Both admission control and scalable video coding techniques can control the QoE by blocking connections or adapting the video rate, but they influence each other's performance. In this article, we propose an in-network video rate adaptation mechanism that enables a provider to define a policy on how the video rate adaptation should be performed to maximize the provider's objective (e.g., maximization of revenue or QoE). We discuss the need for a close interaction of the video rate adaptation algorithm with a measurement-based admission control system, making it possible to effectively orchestrate both algorithms and to switch from video rate adaptation to the blocking of connections in a timely manner. We propose two different rate adaptation decision algorithms that calculate which videos need to be adapted: an optimal one in terms of the provider's policy, and a heuristic based on the utility of each connection. Through an extensive performance evaluation, we show the impact of both algorithms on the rate adaptation, network utilisation and the stability of the video rate adaptation. We show that both algorithms outperform other configurations by at least 10%. Moreover, we show that the proposed heuristic is about 500 times faster than the optimal algorithm, at a performance drop of only approximately 2% for the investigated video delivery scenario.
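
    As a rough illustration of the utility-based heuristic idea, the sketch below downgrades the connections with the lowest utility loss per freed megabit first, until the aggregate video rate fits the link capacity, and blocks a new connection only when adaptation cannot free enough capacity. The two-layer (high/low rate) model, utility values, and capacities are invented for the example; the paper's actual policy model is richer.

```python
# Illustrative utility-based heuristic: under congestion, degrade the
# connections that lose the least utility per Mbps freed; block the new
# request only if adaptation is insufficient. All numbers are invented.
from dataclasses import dataclass

LINK_CAPACITY = 100.0  # Mbps, hypothetical

@dataclass
class Connection:
    name: str
    high_rate: float   # Mbps at full quality
    low_rate: float    # Mbps after a scalable-coding downgrade
    utility: float     # provider-defined value of keeping full quality
    degraded: bool = False

    @property
    def rate(self):
        return self.low_rate if self.degraded else self.high_rate

def admit(active, new_conn):
    """Try to admit new_conn, degrading existing connections if needed."""
    candidate = active + [new_conn]
    overload = sum(c.rate for c in candidate) - LINK_CAPACITY
    plan = []
    # Cheapest utility loss per Mbps freed is degraded first.
    for c in sorted(candidate, key=lambda c: c.utility / (c.high_rate - c.low_rate)):
        if overload <= 0:
            break
        if not c.degraded:
            plan.append(c)
            overload -= c.high_rate - c.low_rate
    if overload > 0:
        return False              # adaptation cannot help: block the request
    for c in plan:
        c.degraded = True         # apply the downgrades only on success
    active.append(new_conn)
    return True

active = []
for i in range(7):
    conn = Connection(f"video{i}", high_rate=30, low_rate=15, utility=1.0 + i)
    print(f"video{i}: {'admitted' if admit(active, conn) else 'blocked'}")
```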

    Multiplexing regulated traffic streams: design and performance

    The main network solutions for supporting QoS rely on traffic policing (conditioning, shaping). In particular, for IP networks the IETF has developed IntServ (individual flows regulated) and DiffServ (only aggregates regulated). The proposed regulator could be based on the (dual) leaky-bucket mechanism. This explains the interest in network element performance (loss, delay) for leaky-bucket regulated traffic. This paper describes a novel approach to the above problem. Explicitly using the correlation structure of the sources’ traffic, we derive approximations for both small and large buffers. Importantly, for small (large) buffers the short-term (long-term) correlations are dominant. The large buffer result decomposes the traffic stream into a stream of constant rate and a periodic impulse stream, allowing direct application of the Brownian bridge approximation. Combining the small and large buffer results by a concave majorization, we propose a simple, fast and accurate technique to statistically multiplex homogeneous regulated sources. To address heterogeneous inputs, we present similarly efficient techniques to evaluate the performance of multiple classes of traffic, each with distinct characteristics and QoS requirements. These techniques, applicable under more general conditions, are based on optimal resource (bandwidth and buffer) partitioning. They can also be directly applied to set GPS (Generalized Processor Sharing) weights and buffer thresholds in a shared resource system.
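
    To make the (dual) leaky-bucket regulator concrete, here is a minimal token-bucket conformance sketch: a packet conforms only if both a peak-rate bucket and a sustained-rate bucket (with burst tolerance) can cover it. The parameter values are invented; the paper analyzes the multiplexing performance of streams shaped by such regulators, not this code.

```python
# Dual leaky-bucket (dual token-bucket) regulator sketch. A packet conforms
# only if BOTH buckets, one enforcing the peak rate and one the sustained
# rate plus burst tolerance, hold enough tokens. Parameters are invented.
class TokenBucket:
    def __init__(self, rate, depth):
        self.rate = rate            # token refill rate (bytes/s)
        self.depth = depth          # bucket capacity (bytes)
        self.tokens = depth
        self.last = 0.0

    def refill(self, t):
        self.tokens = min(self.depth, self.tokens + (t - self.last) * self.rate)
        self.last = t

class DualLeakyBucket:
    def __init__(self, peak_rate, sustained_rate, burst, mtu=1500):
        self.buckets = [TokenBucket(peak_rate, depth=mtu),
                        TokenBucket(sustained_rate, depth=burst)]

    def conforms(self, t, size):
        # Refill both buckets, check both, and spend tokens only if
        # the packet conforms to both constraints.
        for b in self.buckets:
            b.refill(t)
        if all(b.tokens >= size for b in self.buckets):
            for b in self.buckets:
                b.tokens -= size
            return True
        return False

reg = DualLeakyBucket(peak_rate=1e6, sustained_rate=2e5, burst=6000)
for i in range(8):                  # a burst of 1500-byte packets, 2 ms apart
    t = i * 0.002
    print(f"t={t*1000:4.1f} ms: {'pass' if reg.conforms(t, 1500) else 'drop'}")
```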