
    Towards Autonomic Service Provisioning Systems

    This paper discusses our experience in building SPIRE, an autonomic system for service provisioning. The architecture consists of a set of hosted Web Services subject to QoS constraints and a number of servers used to run session-based traffic. Customers pay to have their jobs run, but require certain quality guarantees in return: different SLAs specify charges for running jobs and penalties for failing to meet the promised performance metrics. The system is driven by a utility function aimed at optimizing the average earned revenue per unit time. Demand and performance statistics are collected, and traffic parameters are estimated, in order to make dynamic decisions concerning server allocation and admission control. Different utility functions are introduced and a number of experiments testing their performance are discussed. The results show that revenues can be dramatically improved by imposing suitable conditions for accepting incoming traffic; the proposed system performs well under different traffic settings and successfully adapts to changes in the operating environment.
    Comment: 11 pages, 9 figures, http://www.wipo.int/pctdb/en/wo.jsp?WO=201002636
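    The revenue-driven admission control described above can be illustrated with a small sketch. The SLA fields, the violation-probability input and the acceptance rule below are assumptions made for illustration, not the paper's actual algorithm.

# Minimal sketch of utility-driven admission control, assuming a simple
# SLA model (per-job charge, penalty if the response-time target is missed).
# Names and the acceptance rule are illustrative, not taken from the paper.
from dataclasses import dataclass

@dataclass
class SLA:
    charge: float           # revenue earned per completed job
    penalty: float          # penalty paid if the target is violated
    target_response: float  # promised response time (seconds)

def expected_utility(sla: SLA, p_violation: float) -> float:
    """Expected revenue per job given an estimated violation probability."""
    return sla.charge * (1.0 - p_violation) - sla.penalty * p_violation

def admit(sla: SLA, p_violation: float) -> bool:
    """Accept the incoming session only if its expected utility is positive."""
    return expected_utility(sla, p_violation) > 0.0

if __name__ == "__main__":
    gold = SLA(charge=1.0, penalty=5.0, target_response=0.5)
    # With a 10% estimated chance of missing the target, the job is worth taking.
    print(admit(gold, p_violation=0.10))  # True  (0.9 - 0.5 = 0.4 > 0)
    # With a 40% chance, the expected penalty outweighs the charge.
    print(admit(gold, p_violation=0.40))  # False (0.6 - 2.0 = -1.4)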

    Hierarchical video surveillance architecture: a chassis for video big data analytics and exploration

    There is increasing reliance on video surveillance systems for systematic derivation, analysis and interpretation of the data needed for predicting, planning, evaluating and implementing public safety. This is evident from the massive number of surveillance cameras deployed across public locations. For example, in July 2013, the British Security Industry Association (BSIA) reported that over 4 million CCTV cameras had been installed in Britain alone. The BSIA also reveal that only 1.5% of these are state owned. In this paper, we propose a framework that allows access to data from privately owned cameras, with the aim of increasing the efficiency and accuracy of public safety planning, security activities, and decision support systems that are based on video integrated surveillance systems. The accuracy of results obtained from government-owned public safety infrastructure would improve greatly if privately owned surveillance systems ‘expose’ relevant video-generated metadata events, such as triggered alerts and also permit query of a metadata repository. Subsequently, a police officer, for example, with an appropriate level of system permission can query unified video systems across a large geographical area such as a city or a country to predict the location of an interesting entity, such as a pedestrian or a vehicle. This becomes possible with our proposed novel hierarchical architecture, the Fused Video Surveillance Architecture (FVSA). At the high level, FVSA comprises of a hardware framework that is supported by a multi-layer abstraction software interface. It presents video surveillance systems as an adapted computational grid of intelligent services, which is integration-enabled to communicate with other compatible systems in the Internet of Things (IoT)
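    The kind of federated metadata query described above can be sketched in a few lines. The event schema, the class names and the query interface are assumptions introduced for illustration; they are not taken from the paper.

# Minimal sketch of querying a unified metadata repository for camera alert
# events, in the spirit of the FVSA description above. The AlertEvent schema
# and the query API are illustrative assumptions, not the paper's design.
from dataclasses import dataclass
from datetime import datetime
from typing import Iterable, List

@dataclass
class AlertEvent:
    camera_id: str
    owner: str          # "public" or "private"
    kind: str           # e.g. "pedestrian", "vehicle"
    location: str       # e.g. a city district identifier
    timestamp: datetime

class MetadataRepository:
    """A toy unified repository over public and private camera metadata."""
    def __init__(self, events: Iterable[AlertEvent]) -> None:
        self._events = list(events)

    def query(self, kind: str, location: str, since: datetime) -> List[AlertEvent]:
        """Return matching alert events, regardless of camera ownership."""
        return [e for e in self._events
                if e.kind == kind and e.location == location
                and e.timestamp >= since]

if __name__ == "__main__":
    repo = MetadataRepository([
        AlertEvent("cam-17", "private", "vehicle", "district-3",
                   datetime(2013, 7, 1, 10, 0)),
        AlertEvent("cam-02", "public", "vehicle", "district-3",
                   datetime(2013, 7, 1, 11, 30)),
    ])
    hits = repo.query("vehicle", "district-3", datetime(2013, 7, 1, 9, 0))
    print([e.camera_id for e in hits])  # ['cam-17', 'cam-02']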

    Peer to Peer Information Retrieval: An Overview

    Peer-to-peer technology is widely used for file sharing. In the past decade a number of prototype peer-to-peer information retrieval systems have been developed. Unfortunately, none of these have seen widespread real-world adoption and thus, in contrast with file sharing, information retrieval is still dominated by centralised solutions. In this paper we provide an overview of the key challenges for peer-to-peer information retrieval and the work done so far. We want to stimulate and inspire further research to overcome these challenges. This will open the door to the development and large-scale deployment of real-world peer-to-peer information retrieval systems that rival existing centralised client-server solutions in terms of scalability, performance, user satisfaction and freedom.

    A survey of communication protocols for internet of things and related challenges of fog and cloud computing integration

    The rapid increase in the number of IoT (Internet of Things) devices is accelerating the research on new solutions to make cloud services scalable. In this context, the novel concept of fog computing, as well as the combined fog-to-cloud computing paradigm, is becoming essential to decentralize the cloud while bringing the services closer to the end system. This article surveys the application layer communication protocols that fulfill the IoT communication requirements and examines their potential for implementation in fog- and cloud-based IoT systems. To this end, the article first briefly presents potential protocol candidates, including request-reply and publish-subscribe protocols. After that, the article surveys these protocols based on their main characteristics as well as the main performance issues, including latency, energy consumption, and network throughput. These findings are thereafter used to place the protocols in each segment of the system (IoT, fog, cloud), opening up the discussion on their choice, interoperability, and wider system integration. The survey is expected to be useful to system architects and protocol designers when choosing the communication protocols in an integrated IoT-to-fog-to-cloud system architecture.
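    The two interaction styles named above can be contrasted with a small, self-contained sketch. The in-process broker, topic string and sensor class are illustrative assumptions; real IoT deployments would use protocols such as CoAP or HTTP for request-reply and MQTT for publish-subscribe.

# Sketch: request-reply (the client pulls a reading when it needs one)
# versus publish-subscribe (a broker pushes updates to interested parties).
from collections import defaultdict
from typing import Callable, Dict, List

# --- request-reply: the reader explicitly asks the device for its reading ---
class TemperatureSensor:
    def __init__(self) -> None:
        self._celsius = 21.5

    def handle_request(self) -> float:
        """Reply with the current reading when asked."""
        return self._celsius

# --- publish-subscribe: readers register interest; the broker fans out data ---
class Broker:
    def __init__(self) -> None:
        self._subs: Dict[str, List[Callable[[float], None]]] = defaultdict(list)

    def subscribe(self, topic: str, callback: Callable[[float], None]) -> None:
        self._subs[topic].append(callback)

    def publish(self, topic: str, value: float) -> None:
        for cb in self._subs[topic]:
            cb(value)

if __name__ == "__main__":
    # Request-reply: one synchronous round trip per reading.
    sensor = TemperatureSensor()
    print("request-reply:", sensor.handle_request())

    # Publish-subscribe: the sensor pushes; any number of subscribers receive it.
    broker = Broker()
    broker.subscribe("farm/greenhouse/temp", lambda v: print("subscriber A:", v))
    broker.subscribe("farm/greenhouse/temp", lambda v: print("subscriber B:", v))
    broker.publish("farm/greenhouse/temp", 22.0)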

    High-Speed Mobile Networks for Modern Farming and Agricultural Systems

    High-speed mobile networks are necessary for agriculture to inventory individual plant health, maximize yield and minimize the resources applied. More specifically, real-time information on individual plant status is critical to decisions regarding the management of resources reserved and expended. This need can be met by environmental sensors (such as humidity, temperature, and pH sensors) whose data are kept on storage servers connected to static and mobile local area networks, which in turn are connected to cellular, core and satellite networks. For instance, agricultural experts remotely working on vast acreage farms from business offices or while traveling can easily connect their notebook computers and other portable devices to these networks in order to check farm status, send email, read industry news or arrange a visit to neighboring farms or suppliers. Today, several mobile phone companies offer broadband service with a 2 Mbps downlink in rural and dense urban areas; however, such coverage does not typically exist in farm areas. Although these networks (such as 802.11ac/n, 3G, and 4G) are significant achievements, they do not meet the projected needs of the agricultural industry. The present use model of high-speed networks for email and multimedia content, together with agriculture’s expected intensive use of real-time plant and environmental condition monitoring, with statistics/plots and real-time high-resolution video, necessitates a highly integrated and highly available networked system. For agricultural experts, attentive to market needs, seamless high-speed wireless communication ‘anywhere, anytime at any speed’ is critical to enhancing their productivity and crop yields.

    Design And Implementation Of A Hardware Level Content Networking Front End Device

    The bandwidth and speed of network connections are continually increasing. The speed increase in network technology is set to soon outpace the speed increase in CMOS technology. This asymmetrical growth is beginning to cause software applications that once worked with then-current levels of network traffic to flounder under the new high data rates. Processes that were once executed in software now have to be executed, partially if not wholly, in hardware. One such application that could benefit from hardware implementation is higher-layer routing. By allowing a network device to peer into higher layers of the OSI model, the device can scan for viruses, provide a higher quality of service (QoS), and efficiently route packets. This thesis proposes an architecture for a device that uses hardware-level string matching to distribute incoming requests across a server farm. The proposed architecture is implemented in VHDL, synthesized, and laid out on an Altera FPGA.
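    The idea of string-matching-based request distribution can be shown with a short software sketch. The pattern table and pool names are invented for illustration, and the thesis itself implements the matching in hardware (VHDL), not in software.

# Software sketch of content-aware dispatch: match patterns in the request
# payload (here, the URL path in an HTTP request line) and steer the request
# to a back-end server pool. Table entries and pool names are illustrative.
from typing import Dict, List

# Pattern -> back-end pool (longest matching prefix of the URL path wins).
DISPATCH_TABLE: Dict[str, str] = {
    "/images/": "image-servers",
    "/video/": "streaming-servers",
    "/": "web-servers",          # default pool
}

def dispatch(request_line: str) -> str:
    """Pick a server pool from the URL path of an HTTP request line."""
    # e.g. "GET /images/logo.png HTTP/1.1"
    parts = request_line.split()
    path = parts[1] if len(parts) >= 2 else "/"
    candidates: List[str] = [p for p in DISPATCH_TABLE if path.startswith(p)]
    best = max(candidates, key=len)   # "/" always matches, so this is safe
    return DISPATCH_TABLE[best]

if __name__ == "__main__":
    print(dispatch("GET /images/logo.png HTTP/1.1"))  # image-servers
    print(dispatch("GET /video/clip.mp4 HTTP/1.1"))   # streaming-servers
    print(dispatch("GET /index.html HTTP/1.1"))       # web-servers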

    Design and Evaluation of Content Distribution Networks for Multimedia Streaming Services

    Traditional Internet-based services for distributing files, such as Web browsing and sending e-mail, are offered via a single central server. More recent network services such as interactive digital television or video-on-demand, however, require strong quality-of-service (QoS) guarantees, such as a low and constant network delay, and consume a considerable amount of network bandwidth. Architectures with a single central server can hardly provide these guarantees and therefore no longer meet the high demands of the next generation of multimedia applications. This research therefore studies new network architectures that can support such service quality. Both peer-to-peer mechanisms, as used for exchanging music files between end users, and server-based solutions, such as distributed caches and content distribution networks (CDNs), are considered. Depending on the service under study and on the network technologies and architecture used, centralized algorithms for network design are proposed. These algorithms optimize the placement of the servers or network caches and determine the required capacity of the servers and network links. The dynamic placement of the offered content across the various network elements is adapted to the prevailing state of the network and to the varying request patterns of the end users. Server selection, request rerouting and the distribution of load across the entire network are also addressed.
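    The placement problem mentioned above can be illustrated with a greedy sketch: choose k cache sites so that the demand-weighted distance from clients to their nearest cache is small. The topology, demands and the greedy heuristic are illustrative assumptions, not the thesis's actual algorithms.

# Greedy cache/server placement sketch: pick k sites minimizing the total
# demand-weighted distance from each client to its nearest chosen site.
from typing import Dict, Set

def greedy_placement(distance: Dict[str, Dict[str, float]],
                     demand: Dict[str, float],
                     k: int) -> Set[str]:
    """Greedily pick k cache sites minimizing demand-weighted distance."""
    sites = set(distance)              # candidate cache locations
    chosen: Set[str] = set()
    for _ in range(k):
        def cost_with(extra: str) -> float:
            placed = chosen | {extra}
            return sum(demand[c] * min(distance[s][c] for s in placed)
                       for c in demand)
        chosen.add(min(sites - chosen, key=cost_with))
    return chosen

if __name__ == "__main__":
    # Symmetric distances between three candidate nodes (also the clients).
    dist = {
        "A": {"A": 0, "B": 1, "C": 4},
        "B": {"A": 1, "B": 0, "C": 2},
        "C": {"A": 4, "B": 2, "C": 0},
    }
    load = {"A": 10.0, "B": 1.0, "C": 6.0}
    print(greedy_placement(dist, load, k=2))  # {'B', 'C'}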

    Management, Optimization and Evolution of the LHCb Online Network

    The LHCb experiment is one of the four large particle detectors running at the Large Hadron Collider (LHC) at CERN. It is a forward single-arm spectrometer dedicated to testing the Standard Model through precision measurements of Charge-Parity (CP) violation and rare decays in the b quark sector. The LHCb experiment will operate at a luminosity of 2x10^32 cm^-2 s^-1, with a proton-proton bunch crossing rate of approximately 10 MHz. To select the interesting events, a two-level trigger scheme is applied: the first-level trigger (L0) and the high-level trigger (HLT). The L0 trigger is implemented in custom hardware, while the HLT is implemented in software running on the CPUs of the Event Filter Farm (EFF). The L0 trigger rate is defined at about 1 MHz, and the event size is about 35 kByte. Handling the resulting data rate (35 GByte/s) is a serious challenge. The Online system is a key part of the LHCb experiment, providing all the IT services. It consists of three major components: the Data Acquisition (DAQ) system, the Timing and Fast Control (TFC) system and the Experiment Control System (ECS). To provide these services, two large dedicated networks based on Gigabit Ethernet are deployed: one for the DAQ and another for the ECS, jointly referred to as the Online network. A large network needs sophisticated monitoring for its successful operation. Commercial network management systems are quite expensive and difficult to integrate into the LHCb ECS. A custom network monitoring system has therefore been implemented based on the Supervisory Control And Data Acquisition (SCADA) system PVSS, which is used by the LHCb ECS; the monitoring system is thus a homogeneous part of the LHCb ECS. In this thesis, it is demonstrated how a large-scale network can be monitored and managed using tools originally made for industrial supervisory control. The thesis is organized as follows: Chapter 1 gives a brief introduction to the LHC and B physics at the LHC, then describes all sub-detectors and the trigger and DAQ system of LHCb from structure to performance. Chapter 2 first introduces the LHCb Online system and the dataflow, then focuses on the Online network design and its optimization. In Chapter 3, the SCADA system PVSS is introduced briefly, then the architecture and implementation of the network monitoring system are described in detail, including the front-end processes, the data communication and the supervisory layer. Chapter 4 first discusses packet sampling theory and one packet sampling mechanism, sFlow, then demonstrates the application of sFlow to network troubleshooting, traffic monitoring and anomaly detection. In Chapter 5, the upgrade of the LHC and LHCb is introduced, the possible architecture of the DAQ is discussed, and two candidate internetworking technologies (high-speed Ethernet and InfiniBand) are compared in different aspects for the DAQ. Three schemes based on 10 Gigabit Ethernet are presented and studied. Chapter 6 is a general summary of the thesis.
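    The quoted data rate follows directly from the L0 rate and event size, and packet-sampling monitoring such as sFlow relies on the same kind of scaling. The small check below uses the figures from the abstract; the sampling ratio and sampled counts are illustrative assumptions.

# Worked check of the figures quoted above, plus the basic scaling rule
# behind packet sampling: if 1 packet in N is sampled, totals are estimated
# by multiplying the sampled counts by N. The 1-in-2048 ratio is illustrative.

# Event-building data rate: L0 accept rate x event size.
l0_rate_hz = 1.0e6          # ~1 MHz L0 trigger rate
event_size_bytes = 35e3     # ~35 kByte per event
data_rate = l0_rate_hz * event_size_bytes
print(f"data rate ~ {data_rate / 1e9:.0f} GByte/s")   # ~35 GByte/s

# sFlow-style estimation: scale sampled observations by the sampling ratio.
sampling_ratio = 2048       # 1 out of every 2048 packets is sampled
sampled_packets = 1500      # packets seen by the collector in one interval
sampled_bytes = 1.2e6
print(f"estimated packets ~ {sampled_packets * sampling_ratio:,}")
print(f"estimated bytes   ~ {sampled_bytes * sampling_ratio / 1e9:.2f} GByte")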

    Scalable and Adaptive Load Balancing on IBM PowerNP

    Web and other Internet-based server farms are a critical company resource. A solution to the increased complexity of server farms and to the need to improve server performance in terms of scalability, fault tolerance and management is to implement a load balancing technique: a front-end machine intelligently redirects traffic to several Real Servers. We discuss the feasibility of implementing adaptive load balancing with minimal flow disruption on the IBM PowerNP Network Processor. We focus our attention on the steady-state part of the algorithm and propose a PowerNP-tailored mapping algorithm derived from Robust Hash Mapping. We propose and demonstrate a fast solution (despite the simple arithmetic logic of the PowerNP), as well as a scalable approach aimed at minimizing the packet processing time, and finally we present some initial performance results.
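    The minimal-flow-disruption property can be illustrated with a sketch in the spirit of robust hash mapping (highest-random-weight hashing): each flow goes to the server with the largest hash of (flow, server), so removing one server only remaps the flows that were on it. The hash choice and server names are illustrative; this is not the PowerNP implementation.

# Sketch of highest-random-weight (rendezvous) style flow-to-server mapping.
import hashlib
from typing import List

def weight(flow_id: str, server: str) -> int:
    digest = hashlib.sha256(f"{flow_id}|{server}".encode()).digest()
    return int.from_bytes(digest[:8], "big")

def pick_server(flow_id: str, servers: List[str]) -> str:
    return max(servers, key=lambda s: weight(flow_id, s))

if __name__ == "__main__":
    servers = ["rs1", "rs2", "rs3", "rs4"]
    flows = [f"10.0.0.{i}:5000->80" for i in range(8)]
    before = {f: pick_server(f, servers) for f in flows}

    # Remove one Real Server: only the flows mapped to it are disrupted.
    reduced = [s for s in servers if s != "rs3"]
    after = {f: pick_server(f, reduced) for f in flows}
    moved = [f for f in flows if before[f] != after[f]]
    print("flows moved:", moved)                    # only those previously on rs3
    print(all(before[f] == "rs3" for f in moved))   # True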