
    Operating systems data transfer optimization

    Graduation Project (Master's in Computer Engineering), Instituto Tecnológico de Costa Rica, Escuela de Ingeniería en Computación, 2018. Balancing algorithms challenge the state of the art in how data is exchanged as messages between programs executing in the kernel and applications running above them in user space on a modern Operating System. There is always room to improve how applications that span different Operating System spaces interact, and such algorithms must be kept in view when considering next-generation interaction problems and the solutions they require. Artificial Intelligence, Computer Vision, Internet of Things, and Autonomous Driving are all data-centric applications that require data to be transported quickly and efficiently between programs, whether those programs reside in the kernel or in user space. Chip designs and physical limits are putting pressure on software solutions that can virtualize and optimize how data is exchanged. This research proposes to demonstrate, via experimentation techniques, designs, measurement, and simulation, that in-place solutions for optimizing data transfer between applications residing in different Operating System spaces can be compared and revised to improve their performance for a data-centric technology world. Specifically, it uses a simulated environment to create a set of archetypical scenarios under an experimental design which demonstrates that PF_RING optimizes message exchange between the Operating System kernel and user-space applications.
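
    The copy overhead this work targets can be illustrated without PF_RING itself, whose zero-copy ring is exposed through a C API. The following minimal Python sketch, with illustrative payload sizes and iteration counts of our own choosing, contrasts a pipe-based transfer, in which every byte is copied through the kernel, with a shared-memory segment that both processes access in place, the same idea PF_RING applies to packet buffers.

    # Minimal sketch: copy-based vs in-place (shared-memory) data exchange.
    # Payload size and iteration count are illustrative, not from the thesis.
    import time
    from multiprocessing import Pipe, Process, shared_memory

    PAYLOAD = b"x" * (1 << 20)  # 1 MiB message
    ITERS = 200

    def pipe_consumer(conn):
        for _ in range(ITERS):
            conn.recv_bytes()              # every byte copied through the kernel

    def shm_consumer(name, done):
        shm = shared_memory.SharedMemory(name=name)
        for _ in range(ITERS):
            _ = shm.buf[0]                 # data is read in place, no copy
        shm.close()
        done.send_bytes(b"ok")

    if __name__ == "__main__":
        parent, child = Pipe()             # copy-based transfer over a pipe
        p = Process(target=pipe_consumer, args=(child,))
        p.start()
        t0 = time.perf_counter()
        for _ in range(ITERS):
            parent.send_bytes(PAYLOAD)
        p.join()
        print(f"pipe (copy):   {time.perf_counter() - t0:.3f}s")

        shm = shared_memory.SharedMemory(create=True, size=len(PAYLOAD))
        shm.buf[:len(PAYLOAD)] = PAYLOAD   # producer writes once, in place
        a, b = Pipe()
        p = Process(target=shm_consumer, args=(shm.name, b))
        t0 = time.perf_counter()
        p.start()
        a.recv_bytes()                     # wait until the consumer is done
        p.join()
        print(f"shared memory: {time.perf_counter() - t0:.3f}s")
        shm.close()
        shm.unlink()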

    Algorithms for advance bandwidth reservation in media production networks

    Media production generally requires many geographically distributed actors (e.g., production houses, broadcasters, advertisers) to exchange huge amounts of raw video and audio data. Traditional distribution techniques, such as dedicated point-to-point optical links, are highly inefficient in terms of installation time and cost. To improve efficiency, shared media production networks, which connect all involved actors over a large geographical area, are currently being deployed. The traffic in such networks is often predictable, as the timing and bandwidth requirements of data transfers are generally known hours or even days in advance. As such, the use of advance bandwidth reservation (AR) can greatly increase resource utilization and cost efficiency. In this paper, we propose an Integer Linear Programming formulation of the bandwidth scheduling problem that takes into account the specific characteristics of media production networks. Two novel optimization algorithms based on this model are thoroughly evaluated and compared by means of in-depth simulation results.
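
    As a toy illustration of what such a formulation can look like (the paper's actual model, with its media-specific characteristics, is richer), the sketch below decides which advance reservations to admit on a single link of fixed capacity. The requests, capacity, and time horizon are hypothetical; it uses the PuLP modeling library with its bundled CBC solver.

    # Toy advance-reservation ILP on one link: maximize admitted
    # bandwidth-slots without exceeding capacity in any time slot.
    # Requests, capacity, and horizon are hypothetical.
    from pulp import (LpBinary, LpMaximize, LpProblem, LpVariable,
                      PULP_CBC_CMD, lpSum)

    CAPACITY = 10  # Gb/s on the shared link
    # (name, first_slot, last_slot, bandwidth) -- all known in advance
    requests = [("raw_video", 0, 3, 6), ("audio", 1, 2, 2), ("backup", 2, 5, 7)]
    slots = range(6)

    prob = LpProblem("advance_reservation", LpMaximize)
    admit = {n: LpVariable(f"admit_{n}", cat=LpBinary)
             for n, _, _, _ in requests}

    # Objective: total admitted bandwidth-slots.
    prob += lpSum(admit[n] * bw * (t1 - t0 + 1) for n, t0, t1, bw in requests)

    # The link capacity must hold in every time slot.
    for t in slots:
        prob += lpSum(admit[n] * bw for n, t0, t1, bw in requests
                      if t0 <= t <= t1) <= CAPACITY

    prob.solve(PULP_CBC_CMD(msg=False))
    for n, *_ in requests:
        print(n, "admitted" if admit[n].value() == 1 else "rejected")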

    Virtual Networking Performance in OpenStack Platform for Network Function Virtualization

    The emerging Network Function Virtualization (NFV) paradigm, coupled with the highly flexible and programmatic control of network devices offered by Software Defined Networking solutions, enables unprecedented levels of network virtualization that will definitely change the shape of future network architectures, where legacy telco central offices will be replaced by cloud data centers located at the edge. On the one hand, this software-centric evolution of telecommunications will allow network operators to take advantage of the increased flexibility and reduced deployment costs typical of cloud computing. On the other hand, it will pose a number of challenges in terms of virtual network performance and customer isolation. This paper intends to provide some insights on how an open-source cloud computing platform such as OpenStack implements multitenant network virtualization and how it can be used to deploy NFV, focusing in particular on packet forwarding performance issues. To this purpose, a set of experiments is presented that refers to a number of scenarios inspired by the cloud computing and NFV paradigms, considering both single-tenant and multitenant deployments. The results of the evaluation highlight the potential and the limitations of running NFV on OpenStack.
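
    Experiments of this kind ultimately reduce to pushing packets through a chain of virtual switches and measuring the rate that survives. The sketch below is a minimal UDP sender/receiver pair for such a measurement between two endpoints (for instance, two OpenStack instances); the port, packet size, and count are placeholders of ours, and serious evaluations rely on dedicated traffic generators rather than a script like this.

    # Minimal UDP packet-rate probe between two hosts (e.g., two VMs).
    # Run "recv" on one endpoint, then "send <receiver_ip>" on the other.
    # Port, packet size, and packet count are illustrative placeholders.
    import socket
    import sys
    import time

    PORT, PKT_SIZE, COUNT = 5005, 1400, 100_000

    def send(dst):
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        payload = b"\0" * PKT_SIZE
        t0 = time.perf_counter()
        for _ in range(COUNT):
            s.sendto(payload, (dst, PORT))
        dt = time.perf_counter() - t0
        print(f"sent {COUNT} pkts in {dt:.2f}s ({COUNT / dt:,.0f} pps)")

    def recv():
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.bind(("", PORT))
        s.settimeout(5.0)          # stop after 5 s of silence
        n = 0
        try:
            while True:
                s.recvfrom(PKT_SIZE + 64)
                n += 1
        except socket.timeout:
            print(f"received {n} pkts ({100 * n / COUNT:.1f}% of offered load)")

    if __name__ == "__main__":
        send(sys.argv[2]) if sys.argv[1] == "send" else recv()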

    Improving Multiple-CMP Systems Using Token Coherence

    Improvements in semiconductor technology now enable Chip Multiprocessors (CMPs). As many future computer systems will use one or more CMPs and support shared memory, such systems will have caches that must be kept coherent. Coherence is a particular challenge for Multiple-CMP (M-CMP) systems. One approach is to use a hierarchical protocol that explicitly separates the intra-CMP coherence protocol from the inter-CMP protocol, but couples them hierarchically to maintain coherence. However, hierarchical protocols are complex, leading to subtle, difficult-to-verify race conditions. Furthermore, most previous hierarchical protocols use directories at one or both levels, incurring indirections, and thus extra latency, for sharing misses, which are common in commercial workloads. In contrast, this paper exploits the separation of correctness substrate and performance policy in the recently proposed token coherence protocol to develop the first M-CMP coherence protocol that is flat for correctness, but hierarchical for performance. Via model checking studies, we show that flat correctness eases verification. Via simulation with micro-benchmarks, we show that the new protocol variants are more robust under contention. Finally, via simulation with commercial workloads on a commercial operating system, we show that the new protocol variants can be 10-50% faster than a hierarchical directory protocol.
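
    The correctness substrate of token coherence rests on a counting invariant: a fixed number of tokens exists for each memory block, a processor may write a block only while holding all of its tokens, and may read it only while holding at least one. A minimal sketch of that invariant follows; the owner token, data movement, and persistent requests are omitted, and the class and method names are ours, not the paper's.

    # Minimal model of the token-counting invariant behind token coherence:
    # T tokens per block; writing requires all T, reading requires at least
    # one. Owner token, data transfer, and persistent requests are omitted.

    class TokenBlock:
        def __init__(self, num_caches):
            self.total = num_caches                  # fixed token count T
            self.tokens = [0] * num_caches
            self.tokens[0] = self.total              # cache 0 starts with all

        def transfer(self, src, dst, n):
            assert self.tokens[src] >= n, "cannot send tokens it does not hold"
            self.tokens[src] -= n
            self.tokens[dst] += n
            assert sum(self.tokens) == self.total    # never created or lost

        def can_read(self, cache):
            return self.tokens[cache] >= 1           # shared access

        def can_write(self, cache):
            return self.tokens[cache] == self.total  # exclusive access

    blk = TokenBlock(num_caches=4)
    blk.transfer(0, 1, 1)    # cache 1 obtains one token: it may read
    assert blk.can_read(1) and not blk.can_write(1)
    blk.transfer(0, 1, 3)    # cache 1 collects all tokens: it may write
    assert blk.can_write(1) and not blk.can_read(0)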

    Understand Your Chains: Towards Performance Profile-based Network Service Management

    Allocating resources to virtualized network functions and services to meet service level agreements is a challenging task for NFV management and orchestration systems. This becomes even more challenging when agile development methodologies, like DevOps, are applied. In such scenarios, management and orchestration systems continuously face new versions of functions and services, which makes it hard to decide how many resources have to be allocated to them to provide the expected service performance. One solution to this problem is to support resource allocation decisions with performance behavior information obtained by profiling techniques applied to such network functions and services. In this position paper, we analyze and discuss the components needed to generate such performance behavior information within the NFV DevOps workflow. We also outline research questions that identify open issues and missing pieces for a fully integrated NFV profiling solution. Further, we introduce a novel profiling mechanism that is able to profile virtualized network functions and entire network service chains under different resource constraints before they are deployed on production infrastructure.
    Comment: Submitted to and accepted by the European Workshop on Software Defined Networks (EWSDN) 201
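
    A profiling mechanism of this kind sweeps a function or chain across resource configurations before deployment and records the observed performance, yielding a profile an orchestrator can later consult when allocating resources. The sketch below shows that loop in miniature; deploy_with_cpu_limit and measure_throughput are hypothetical stand-ins for the platform-specific deployment and load-generation steps, and the numbers are synthetic.

    # Sketch of a profiling sweep: run a network function under different
    # CPU limits and record mean throughput. Both helpers are hypothetical
    # placeholders for platform-specific deployment and load generation.
    import random

    def deploy_with_cpu_limit(vnf_image, cpu_share):
        """Placeholder: deploy vnf_image restricted to cpu_share CPUs."""
        return {"image": vnf_image, "cpu": cpu_share}

    def measure_throughput(instance):
        """Placeholder: offer load, return measured throughput in Mb/s."""
        return 900 * instance["cpu"] * random.uniform(0.9, 1.0)

    def profile(vnf_image, cpu_shares, repetitions=3):
        """Map each CPU configuration to its mean measured throughput."""
        results = {}
        for share in cpu_shares:
            instance = deploy_with_cpu_limit(vnf_image, share)
            runs = [measure_throughput(instance) for _ in range(repetitions)]
            results[share] = sum(runs) / repetitions
        return results

    if __name__ == "__main__":
        for cpu, tput in profile("firewall-vnf:v2", [0.25, 0.5, 1.0, 2.0]).items():
            print(f"{cpu:>5} CPU -> {tput:7.1f} Mb/s")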

    On the placement of security-related Virtualised Network Functions over data center networks

    Middleboxes are typically hardware-accelerated appliances such as firewalls, proxies, WAN optimizers, and NATs that play an important role in service provisioning over today's data centers. Reports show that the number of middleboxes is on par with the number of routers; consequently, middleboxes represent a significant commitment from an operator's capital and operational expenditure budgets. Over the past few years, software middleboxes known as Virtual Network Functions (VNFs) have been replacing the hardware appliances to reduce cost, improve the flexibility of deployment, and allow network functionality to be extended on short timescales. This dissertation aims at identifying the unique characteristics of implementing security modules as VNFs in virtualised environments. We focus on the placement of security VNFs to minimise resource usage without violating the security-imposed constraints, a challenge faced by operators today who want to increase the usable capacity of their infrastructures. The work presented here focuses on the multi-tenant environment, where customised security services are provided to tenants. The services are implemented as software modules deployed as VNFs collocated with network switches to reduce overhead. Furthermore, the thesis presents a formalisation for the resource-aware placement of security VNFs and provides a constraint programming solution, alongside heuristic, meta-heuristic, and near-optimal/subset-sum solutions for solving larger problem instances in reduced time. The results of this work identify the unique and vital constraints of the placement of security functions. They demonstrate that the traffic granularity required by the security functions imposes traffic constraints that increase the resource overhead of the deployment. The work identifies north-south traffic in data centers, rather than east-west traffic, as the traffic targeted for processing by security functions. It asserts that a non-sharing strategy for security modules reduces complexity in the multi-tenant environment. Furthermore, the work adopts an on-path deployment strategy for security VNF traffic, which is shown to reduce resource overhead compared to previous approaches.
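
    To give a flavour of the heuristic end of the solution spectrum examined here, the sketch below greedily places each flow's security VNF on a switch along that flow's path, choosing the on-path switch with the most spare capacity, which reflects the on-path deployment strategy in miniature. The topology, capacities, and demands are hypothetical, and the dissertation's constraint programming and subset-sum formulations capture far more of the real constraints.

    # Greedy on-path placement sketch: each flow's security VNF must sit on
    # a switch along the flow's path; pick the on-path switch with the most
    # remaining capacity. Capacities, paths, and demands are hypothetical.

    switch_capacity = {"s1": 4, "s2": 2, "s3": 3}   # VNF slots per switch

    flows = [  # (flow id, switches on the flow's path, slots the VNF needs)
        ("tenantA-web", ["s1", "s2"], 1),
        ("tenantB-db",  ["s2", "s3"], 2),
        ("tenantC-app", ["s1", "s3"], 2),
    ]

    def place(flows, capacity):
        placement, residual = {}, dict(capacity)
        for fid, path, demand in flows:
            # Only on-path switches with enough spare capacity qualify.
            candidates = [s for s in path if residual[s] >= demand]
            if not candidates:
                placement[fid] = None                # rejected: infeasible
                continue
            best = max(candidates, key=residual.get) # most spare capacity
            residual[best] -= demand
            placement[fid] = best
        return placement, residual

    placement, residual = place(flows, switch_capacity)
    print(placement)   # {'tenantA-web': 's1', 'tenantB-db': 's3', 'tenantC-app': 's1'}
    print(residual)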

    Design and Mining of Health Information Systems for Process and Patient Care Improvement

    In healthcare facilities, health information systems (HISs) are used to serve different purposes. The radiology department adopts multiple HISs in managing its operations and patient care. In general, the HISs that touch radiology fall into two categories: tracking HISs and archive HISs. Electronic Health Records (EHR) is a typical tracking HIS, which tracks the care each patient receives across multiple encounters and facilities. Archive HISs are typically specialized databases that store large-size data collected as part of patient care. A typical example of an archive HIS is the Picture Archive and Communication System (PACS), which provides economical storage and convenient access to diagnostic images from multiple modalities. How to integrate such HISs and best utilize their data remains a challenging problem due to the disparity of HISs as well as the high dimensionality and heterogeneity of the data. My PhD dissertation research includes three inter-connected and integrated topics and focuses on designing integrated HISs and further developing statistical models and machine learning algorithms for process and patient care improvement. Topic 1: Design of a super-HIS and tracking of quality of care (QoC). My research developed an information technology that integrates multiple HISs in radiology, and proposed QoC metrics defined upon the data that measure various dimensions of care. The DDD assisted the clinical practices and enabled an effective intervention for reducing lengthy radiologist turnaround times for patients. Topic 2: Monitoring and change detection of QoC data streams for process improvement. With the super-HIS in place, high-dimensional data streams of QoC metrics are generated. I developed a statistical model for monitoring high-dimensional data streams that integrates Singular Value Decomposition (SVD) and process control. The algorithm was applied to QoC metrics data, and additionally extended to another application of monitoring traffic data in communication networks. Topic 3: Deep transfer learning of archive HIS data for computer-aided diagnosis (CAD). The novelty of the CAD system is the development of a deep transfer learning algorithm that combines the ideas of transfer learning and multi-modality image integration under the deep learning framework. Our system achieved high accuracy in breast cancer diagnosis compared with conventional machine learning algorithms.
    Doctoral Dissertation, Industrial Engineering, 201
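
    The SVD-plus-process-control idea in Topic 2 can be illustrated with a standard residual-based monitoring scheme: learn a low-rank subspace from in-control data, then flag any new sample whose residual energy outside that subspace exceeds a control limit. The sketch below follows that generic recipe on synthetic data; the dissertation's actual model, metrics, and control limits differ.

    # Generic SVD/residual monitoring sketch on synthetic data: learn a
    # low-rank subspace from an in-control window, then alarm when the
    # residual energy of a new sample exceeds an empirical control limit.
    import numpy as np

    rng = np.random.default_rng(0)

    # In-control window: 200 samples of 30 metrics near a rank-3 subspace.
    basis = rng.normal(size=(30, 3))
    X = rng.normal(size=(200, 3)) @ basis.T + 0.1 * rng.normal(size=(200, 30))

    # Principal subspace from the SVD of the centered reference data.
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    P = Vt[:3].T                              # top-3 right singular vectors

    def residual(x):
        """Squared energy of x outside the learned subspace (SPE statistic)."""
        d = x - mu
        return float(d @ d - (d @ P) @ (P.T @ d))

    # Control limit: empirical 99th percentile of in-control residuals.
    limit = np.percentile([residual(x) for x in X], 99)

    normal = rng.normal(size=3) @ basis.T + 0.1 * rng.normal(size=30)
    shifted = normal + 1.5                    # mean shift on every metric
    print(residual(normal) > limit)           # almost surely False
    print(residual(shifted) > limit)          # True: shift leaves the subspace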

    Full-scale system impact analysis: Digital document storage project

    The Digital Document Storage (DDS) Full Scale System can provide cost-effective electronic document storage, retrieval, hard copy reproduction, and remote access for users of NASA Technical Reports. The desired functionality of the DDS system is highly dependent on the assumed requirements for remote access used in this impact analysis. It is highly recommended that NASA proceed with a phased communications requirements analysis to ensure that adequate communications service can be supplied at a reasonable cost, in order to validate the recent working assumptions upon which the success of the DDS Full Scale System depends.

    Acquisition plan for Digital Document Storage (DDS) prototype system

    NASA Headquarters maintains a continuing interest in, and commitment to, exploring the use of new technology to support productivity improvements in meeting the service requirements tasked to the NASA Scientific and Technical Information (STI) Facility, and to support cost-effective approaches to the development and delivery of enhanced levels of service provided by the STI Facility. The DDS project has been pursued with this interest and commitment in mind. It is believed that DDS will provide improved archival blowback quality and service for ad hoc requests for paper copies of documents archived and serviced centrally at the STI Facility. It will also develop an operating capability to scan, digitize, store, and reproduce paper copies of the 5000 NASA technical reports archived annually at the STI Facility and serviced to the user community. Additionally, it will provide NASA Headquarters and field installations with on-demand, remote, electronic retrieval of digitized, bilevel, bit-mapped report images, along with branched, nonsequential retrieval of report subparts.