Assessing the Performance of Virtualization Technologies for NFV: a Preliminary Benchmarking
The NFV paradigm transforms applications that have been executed for decades on dedicated appliances into software images consolidated on standard servers.
Although NFV is implemented through cloud computing technologies (e.g., virtual machines, virtual switches), the network traffic that such components have to handle in NFV differs from the traffic they process in a typical cloud computing scenario.
This paper therefore provides a preliminary benchmarking of widespread virtualization technologies as used in NFV, that is, when they are exploited to run so-called virtual network functions (VNFs) and to chain them in order to create complex services.
Sandboxed, Online Debugging of Production Bugs for SOA Systems
Short time-to-bug localization is extremely important for any 24x7 service-oriented application. To this end, we introduce a new debugging paradigm called live debugging. There are two goals that any live debugging infrastructure must meet: Firstly, it must offer real-time insight for bug diagnosis and localization, which is paramount when errors happen in user-facing applications. Secondly, live debugging should not impact user-facing performance for normal events. In large distributed applications, bugs which impact only a small percentage of users are common. In such scenarios, debugging a small part of the application should not impact the entire system.
With the above-stated goals in mind, this thesis presents a framework called Parikshan, which leverages user-space containers (OpenVZ) to launch application instances for the express purpose of live debugging. Parikshan is driven by a live-cloning process, which generates a replica (called the debug container) of a production service, cloned from a production container that continues to provide the real output to the user. The debug container provides a sandboxed environment for safe execution of monitoring and debugging by users, without any perturbation to the production execution environment. As part of this framework, we have designed customized network proxies that replicate inputs from clients to both the production and debug containers, and safely discard all outputs from the debug container. Together, the network duplicator and the debug container ensure both compute and network isolation of the debugging environment. We believe this work provides the first practical real-time debugging of large multi-tier and cloud applications, requiring no application downtime and incurring only minimal performance impact.
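The duplication idea described above can be sketched minimally as follows. This is an illustrative model, not Parikshan's actual implementation: the backend callables and their names are hypothetical stand-ins for the production and debug containers.

```python
# Sketch of a duplicating proxy: client input is forwarded to both a
# production backend and a debug backend, but only the production
# backend's response reaches the client. Backends are plain callables here.

def duplicate_request(request: bytes, production, debug) -> bytes:
    """Forward `request` to both backends; return only production's reply."""
    prod_reply = production(request)
    try:
        debug(request)  # the debug container's reply is discarded
    except Exception:
        pass  # a failing debug container must never affect the user
    return prod_reply

# Usage with stand-in backends:
prod = lambda req: b"OK:" + req
dbg_log = []
dbg = lambda req: dbg_log.append(req)

reply = duplicate_request(b"GET /", prod, dbg)
```

The key design point mirrored here is asymmetry: the production path is on the critical path of the user's request, while the debug path is fire-and-forget.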
A Personal Virtual Computer Recorder
Continuing advances in hardware technology have enabled the proliferation of faster, cheaper, and more capable personal computers. Users of all backgrounds rely on their computers to handle ever-expanding information, communication, and computation needs. As users spend more time interacting with their computers, it is becoming increasingly important to archive and later search the knowledge, ideas and information that they have viewed through their computers. However, existing state-of-the-art web and desktop search tools fail to provide a suitable solution, as they focus on static, accessible documents in isolation. Thus, finding the information one has viewed among the ever-increasing and chaotic sea of data available from a computer remains a challenge. This dissertation introduces DejaView, a personal virtual computer recorder that enhances personal computers with the ability to process display-centric content to help users with all the information they see through their computers. DejaView continuously records a user's session to provide a complete WYSIWYS (What You Search Is What You've Seen) record of a desktop computing experience, enabling users to playback, browse, search, and revive records, making it easier to retrieve and interact with information they have seen before. DejaView records visual output, checkpoints corresponding application and file system states, and captures onscreen text with contextual information to index the record. A user can then browse and search the record for any visual information that has been previously displayed on the desktop, and revive and interact with the desktop computing state corresponding to any point in the record. DejaView introduces new, transparent operating system, display and file system virtualization techniques and novel semantic display-centric information recording, and combines them to provide its functionality without any modifications to applications, window systems, or operating system kernels. 
Our results demonstrate that DejaView can provide continuous low-overhead recording without any user-noticeable performance degradation, and allows users to play back, browse, search, and time-travel back to records fast enough for interactive use. This dissertation also demonstrates how DejaView's execution virtualization and recording extend beyond the desktop recorder context. We introduce a coordinated, parallel checkpoint-restart mechanism for distributed applications that minimizes synchronization overhead and uniquely supports complete checkpoint and restart of network state in a transport-protocol-independent manner, for both reliable and unreliable protocols. We introduce a scalable system that enables significant energy savings by migrating network state and applications off of idle hosts, allowing the hosts to enter a low-power suspend state while preserving their network presence. Finally, we show how our techniques can be integrated into a commodity operating system, mainline Linux, thereby allowing the entire operating systems community to benefit from mature checkpoint-restart that is transparent, secure, reliable, efficient, and integral to the Linux kernel.
Improving the performance of Virtualized Network Services based on NFV and SDN
Network Functions Virtualisation (NFV) proposes to move traditional network appliances, which require dedicated physical machines, into a virtualised environment (e.g., virtual machines).
In this way, many of the physical devices currently present in the infrastructure are replaced with standard high-volume servers, which may be located in data centers, at the edge of the network, or on the end user's premises.
This enables a reduction of the required physical resources thanks to the use of virtualization technologies, already used in cloud computing, and allows services to be more dynamic and scalable.
However, unlike traditional cloud applications, which are rather demanding in terms of CPU power, network applications are mostly I/O bound; hence the virtualization technologies in use (either standard VM-based or lightweight ones) need to be improved to maximize network performance.
A series of Virtual Network Functions (VNFs) can be connected to each other thanks to Software-Defined Networks (SDN) technologies (e.g., OpenFlow) to create a Network Function Forwarding Graph (NF-FG) that processes the network traffic in the configured order of the graph.
Using NF-FGs it is possible to create arbitrary chains of services and to transparently configure different virtualized network services, which can be dynamically instantiated and rearranged depending on the requested service and its requirements.
However, the above virtualization technologies are rather demanding in terms of hardware resources (mainly CPU and memory), which may have a non-negligible impact on the cost of providing services according to this paradigm.
This thesis investigates this problem, proposing a set of solutions that enable the novel NFV paradigm to be used efficiently, hence guaranteeing both flexibility and efficiency in future network services.
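The forwarding-graph idea above can be illustrated with a toy model in which each VNF is a function over a packet and a service chain applies them in the configured order. The VNF names and packet fields below are hypothetical, chosen only to show the chaining mechanism, not any real NF-FG API.

```python
# Toy model of an NF-FG service chain: each VNF transforms a packet
# (modeled as a dict), and chain() composes VNFs left to right.
from functools import reduce

def firewall(pkt):
    pkt = dict(pkt)
    pkt["allowed"] = pkt.get("port") != 23  # e.g., block telnet
    return pkt

def nat(pkt):
    pkt = dict(pkt)
    pkt["src"] = "192.0.2.1"  # rewrite the source address
    return pkt

def chain(*vnfs):
    """Compose VNFs into a service chain applied in graph order."""
    return lambda pkt: reduce(lambda p, f: f(p), vnfs, pkt)

service = chain(firewall, nat)
out = service({"src": "10.0.0.5", "port": 80})
```

In a real deployment the "composition" is realized by SDN forwarding rules (e.g., OpenFlow flow entries) steering traffic between VNF instances, rather than by in-process function calls.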
Virtualization services: scalable methods for virtualizing multicore systems
Multi-core technology is bringing parallel processing capabilities from servers to laptops and even handheld devices. At the same time, platform support for system virtualization is making it easier to consolidate server and client resources, when and as needed by applications. This consolidation is achieved by dynamically mapping the virtual machines on which applications run to underlying physical machines and their processing cores. Low-cost processor and I/O virtualization methods that scale efficiently to different numbers of processing cores and I/O devices are key enablers of such consolidation.
This dissertation develops and evaluates new methods for scaling virtualization functionality to multi-core and future many-core systems. Specifically, it re-architects virtualization functionality to improve scalability and better exploit multi-core system resources. Results from this work include a self-virtualized I/O abstraction, which virtualizes I/O so as to flexibly use different platforms' processing and I/O resources. This flexibility affords improved performance and resource usage and, most importantly, better scalability than that offered by current I/O virtualization solutions. Further, by describing system virtualization as a service provided to virtual machines and the underlying computing platform, this service can be enhanced to provide new and innovative functionality. For example, a virtual device may provide obfuscated data to guest operating systems to maintain data privacy; it could mask differences in device APIs or properties to deal with heterogeneous underlying resources; or it could control access to data based on the "trust" properties of the guest VM.
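The "virtualization service" examples above can be sketched as a virtual block device that conditions what it returns on the requesting guest's trust level. This is purely illustrative: the trust model, class shape, and XOR masking are assumptions for the sketch, not the dissertation's design.

```python
# Sketch: a virtual block device that returns obfuscated data to
# guests that are not on its trusted list. Real systems would use
# proper encryption; XOR masking here just makes the idea concrete.

class VirtualBlockDevice:
    def __init__(self, blocks, trusted_guests):
        self._blocks = blocks              # block id -> bytes
        self._trusted = set(trusted_guests)

    def read(self, guest_id, block_id):
        data = self._blocks[block_id]
        if guest_id in self._trusted:
            return data
        # untrusted guests see obfuscated data (toy masking)
        return bytes(b ^ 0xFF for b in data)

dev = VirtualBlockDevice({0: b"secret"}, trusted_guests={"vm1"})
plain = dev.read("vm1", 0)    # trusted guest: real data
masked = dev.read("vm2", 0)   # untrusted guest: obfuscated data
```

Placing this policy at the hypervisor level, as the thesis argues, means it applies uniformly to all guests without modifying guest operating systems.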
This thesis demonstrates that extended virtualization services are superior to existing operating-system or user-level implementations of such functionality, for multiple reasons. First, this solution technique makes more efficient use of the key performance-limiting resources in multi-core systems: memory and I/O bandwidth. Second, it better exploits the parallelism inherent in multi-core architectures and exhibits good scalability properties, in part because at the hypervisor level there is greater control over precisely which resources are used, and how, to realize extended virtualization services. Improved control over resource usage makes it possible to provide value-added functionality for both guest VMs and the platform.
Specific instances of virtualization services described in this thesis are: a network virtualization service that exploits heterogeneous processing cores; a storage virtualization service that provides location-transparent access to block devices by extending the functionality provided by the network virtualization service; a multimedia virtualization service that allows efficient media-device sharing based on semantic information; and an object-based storage service with enhanced access control.
Ph.D. Committee Chair: Schwan, Karsten; Committee Members: Ahamad, Mustaq; Fujimoto, Richard; Gavrilovska, Ada; Owen, Henry; Xenidis, Jim
QoE-Centric Control and Management of Multimedia Services in Software Defined and Virtualized Networks
Multimedia services consumption has increased tremendously since the deployment of 4G/LTE networks. Mobile video services (e.g., YouTube and Mobile TV) on smart devices are expected to continue to grow with the emergence and evolution of future networks such as 5G. The end user’s demand for services with better quality from service providers has triggered a trend towards Quality of Experience (QoE) - centric network management through efficient utilization of network resources. However, existing network technologies are either unable to adapt to diverse changing network conditions or limited in available resources.
This has posed challenges to service providers for the provisioning of QoE-centric multimedia services. New networking solutions such as Software Defined Networking (SDN) and Network Function Virtualization (NFV) can provide better solutions in terms of QoE control and management of multimedia services in emerging and future networks. The features of SDN, such as adaptability, programmability and cost-effectiveness, make it suitable for bandwidth-intensive multimedia applications such as live video streaming, 3D/HD video and video gaming. However, the delivery of multimedia services over SDN/NFV networks to achieve optimized QoE, and overall QoE-centric network resource management, remain open questions, especially with the advent of future softwarized networks.
The work in this thesis intends to investigate, design and develop novel approaches for QoE-centric control and management of multimedia services (with a focus on video streaming services) over software defined and virtualized networks.
First, a video quality management scheme based on traffic intensity under Dynamic Adaptive Streaming over HTTP (DASH) using SDN is developed. The proposed scheme can mitigate virtual-port queue congestion, which may cause buffering or stalling events during video streaming and thus reduce video quality.
A QoE-driven resource allocation mechanism is designed and developed for improving the end user's QoE for video streaming services. The aim of this approach is to find the best combination of network node functions that can provide an optimized QoE level to end users through network node cooperation. Furthermore, a novel QoE-centric management scheme is proposed and developed, which utilizes Multipath TCP (MPTCP) and Segment Routing (SR) to enhance QoE for video streaming services over SDN/NFV-based networks. The goal of this strategy is to enable service providers to route network traffic through multiple disjoint, bandwidth-satisfying paths and to meet specific service QoE guarantees for end users. Extensive experiments demonstrated that the proposed schemes improve video quality significantly compared with state-of-the-art approaches. The thesis further proposes a path-protection and link-failure-free MPTCP/SR-based architecture that increases the survivability, resilience, availability and robustness of future networks. The proposed path protection and dynamic link recovery scheme achieves a minimal time to recover from a failed link and avoids link congestion in softwarized networks.
Cloud Computing cost and energy optimization through Federated Cloud SoS
Fall 2017. Includes bibliographical references. The two most significant differentiators amongst contemporary Cloud Computing service providers are increased green energy use and improved datacenter resource utilization. This work addresses these two issues from a system-architectural optimization viewpoint. The proposed approach allows multiple cloud providers to utilize their individual computing resources in three ways: (1) cutting the number of datacenters needed; (2) scheduling available datacenter grid energy via aggregators to reduce costs and power outages; and (3) utilizing, where appropriate, more renewable and carbon-free energy sources. Altogether, our proposed approach creates an alternative paradigm: a Federated Cloud System-of-Systems (SoS). The proposed paradigm employs a novel control methodology that is tuned to obtain both financial and environmental advantages. It also supports dynamic expansion and contraction of computing capabilities for handling sudden variations in service demand, as well as for maximizing usage of time-varying green energy supplies. Herein we analyze the core SoS requirements, concept synthesis, and functional architecture with an eye on avoiding inadvertent cascading conditions. We suggest a physical architecture that simulates the primary SoS emergent behavior to diminish unwanted outcomes while encouraging desirable results. Finally, in our approach, the constituent cloud services retain their independent ownership, objectives, funding, and sustainability means.
The report also analyzes optimal computing generation methods and optimal energy utilization for computing, as well as a procedure for building optimal datacenters using a unique hardware computing system design, based on the OpenCompute community as an illustrative collaboration platform. Finally, the research concludes with the security features a cloud federation must support to protect its constituents, its constituents' tenants, and itself from security risks.
Edge Computing for Extreme Reliability and Scalability
The massive number of Internet of Things (IoT) devices and their continuous data collection will lead to a rapid increase in the scale of collected data. Processing all these data at a central cloud server is inefficient, and may even be infeasible or unnecessary. Hence, the task of processing the data is pushed to the network edges, introducing the concept of Edge Computing. Processing the information closer to the source of the data (e.g., on gateways and on edge micro-servers) not only reduces the huge workload of the central cloud but also decreases the latency for real-time applications by avoiding the unreliable and unpredictable network latency of communicating with the central cloud.