12,734 research outputs found

    LTE Spectrum Sharing Research Testbed: Integrated Hardware, Software, Network and Data

    Full text link
    This paper presents Virginia Tech's wireless testbed supporting research on long-term evolution (LTE) signaling and radio frequency (RF) spectrum coexistence. LTE is continuously refined and new features are released. As the communications contexts for LTE expand, new research problems arise, including operation in harsh RF signaling environments and coexistence with other radios. Our testbed provides an integrated research tool for investigating these and other research problems; it allows analyzing the severity of a problem, designing and rapidly prototyping solutions, and assessing them with standards-compliant equipment and test procedures. The modular testbed integrates general-purpose software-defined radio hardware, LTE-specific test equipment, RF components, free open-source and commercial LTE software, a configurable RF network, and recorded radar waveform samples. It supports both RF channel-emulated and over-the-air radiated modes and can be remotely accessed and configured. An RF switching network allows for designing many different experiments involving a variety of real and virtual radios, with support for multiple-input multiple-output (MIMO) antenna operation. We present the testbed, the research it has enabled, and some valuable lessons we learned that may help in designing, developing, and operating future wireless testbeds.
    Comment: In Proceedings of the 10th ACM International Workshop on Wireless Network Testbeds, Experimental Evaluation & Characterization (WiNTECH), Snowbird, Utah, October 201
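    Purely as an illustration of the experiment-composition idea described above, the sketch below shows how a script might map real and virtual radios onto switched RF paths in either a channel-emulated or an over-the-air mode. The class, port names, and methods are assumptions made for this sketch and are not the testbed's actual API.

"""Illustrative sketch (hypothetical API): composing a spectrum-sharing
experiment over an RF switching network, as described in the abstract.
Port names, classes and methods are assumptions, not the testbed's API."""

from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class ExperimentConfig:
    """One experiment = a set of RF paths plus a propagation mode."""
    mode: str                          # "emulated" (channel emulator) or "ota" (radiated)
    paths: List[Tuple[str, str]] = field(default_factory=list)

    def connect(self, src: str, dst: str) -> None:
        """Request a switch path between two RF ports (e.g. eNB TX -> UE RX)."""
        self.paths.append((src, dst))


def build_coexistence_experiment() -> ExperimentConfig:
    """LTE eNB and a replayed radar waveform sharing one emulated channel, 2x2 MIMO."""
    cfg = ExperimentConfig(mode="emulated")
    # LTE downlink, two MIMO branches routed through the channel emulator.
    cfg.connect("enb0.tx0", "emu.in0")
    cfg.connect("enb0.tx1", "emu.in1")
    cfg.connect("emu.out0", "ue0.rx0")
    cfg.connect("emu.out1", "ue0.rx1")
    # Interfering radar samples replayed from an SDR into the same channel.
    cfg.connect("sdr_radar.tx0", "emu.in2")
    return cfg


if __name__ == "__main__":
    experiment = build_coexistence_experiment()
    print(experiment.mode, len(experiment.paths), "RF paths")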

    EYES - Energy Efficient Sensor Networks

    Get PDF
    The EYES project (IST-2001-34734) is a three-year European research project on self-organizing and collaborative energy-efficient sensor networks. It will address the convergence of distributed information processing, wireless communications, and mobile computing. The goal of the project is to develop the architecture and the technology that enable the creation of a new generation of sensors that can effectively network together, so as to provide a flexible platform for the support of a large variety of mobile sensor network applications. This document gives an overview of the EYES project.

    Context-aware adaptation in DySCAS

    Get PDF
    DySCAS is a dynamically self-configuring middleware for automotive control systems. The addition of autonomic, context-aware dynamic configuration to automotive control systems brings potential for a wide range of benefits in terms of robustness, flexibility, upgradability, and so on. However, automotive systems represent a particularly challenging domain for the deployment of autonomics concepts, as they combine real-time performance constraints, severe resource limitations, safety-critical aspects, and cost pressures. For these reasons, current systems are statically configured. This paper describes the dynamic run-time configuration aspects of DySCAS, focusing on the extent to which context-aware adaptation has been achieved in DySCAS and on the ways in which the various design and implementation challenges are met.
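    As a rough illustration of the general pattern of context-aware run-time adaptation in a resource-constrained node, a minimal rule-driven sketch follows. The context fields, thresholds, and service names are assumptions made for the sketch; they are not taken from the DySCAS middleware.

"""Minimal sketch of rule-based context-aware reconfiguration (not DySCAS code).
Context fields, thresholds and service names are illustrative assumptions."""

from dataclasses import dataclass


@dataclass
class Context:
    cpu_load: float        # 0.0 .. 1.0
    free_memory_kb: int
    vehicle_moving: bool


def select_configuration(ctx: Context) -> dict:
    """Map the current context to one of a small set of pre-validated configurations.

    Only optional services are enabled or disabled here, reflecting the
    real-time and safety constraints named in the abstract.
    """
    config = {"diagnostics_upload": True, "infotainment_prefetch": True}
    if ctx.cpu_load > 0.8 or ctx.free_memory_kb < 512:
        # Shed optional load under resource pressure.
        config["infotainment_prefetch"] = False
    if ctx.vehicle_moving:
        # Defer bulk uploads while control traffic has priority.
        config["diagnostics_upload"] = False
    return config


if __name__ == "__main__":
    print(select_configuration(Context(cpu_load=0.9, free_memory_kb=2048, vehicle_moving=True)))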

    Hierarchical video surveillance architecture: a chassis for video big data analytics and exploration

    Get PDF
    There is increasing reliance on video surveillance systems for systematic derivation, analysis, and interpretation of the data needed for predicting, planning, evaluating, and implementing public safety. This is evident from the massive number of surveillance cameras deployed across public locations. For example, in July 2013, the British Security Industry Association (BSIA) reported that over 4 million CCTV cameras had been installed in Britain alone. The BSIA also revealed that only 1.5% of these are state owned. In this paper, we propose a framework that allows access to data from privately owned cameras, with the aim of increasing the efficiency and accuracy of public safety planning, security activities, and decision support systems that are based on video-integrated surveillance systems. The accuracy of results obtained from government-owned public safety infrastructure would improve greatly if privately owned surveillance systems ‘expose’ relevant video-generated metadata events, such as triggered alerts, and also permit queries of a metadata repository. Subsequently, a police officer, for example, with an appropriate level of system permission, can query unified video systems across a large geographical area such as a city or a country to predict the location of an entity of interest, such as a pedestrian or a vehicle. This becomes possible with our proposed novel hierarchical architecture, the Fused Video Surveillance Architecture (FVSA). At the high level, FVSA comprises a hardware framework supported by a multi-layer abstraction software interface. It presents video surveillance systems as an adapted computational grid of intelligent services, which is integration-enabled to communicate with other compatible systems in the Internet of Things (IoT).
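    The query scenario in the abstract, an authorized officer searching alert metadata exposed by federated, privately owned cameras, can be sketched roughly as below. The event schema, permission model, and function names are assumptions made for illustration and do not come from the FVSA design.

"""Illustrative sketch of the abstract's query scenario: an authorized user
searching alert metadata exposed by federated, privately owned cameras.
Schema, permission model and function names are assumptions, not FVSA's API."""

from dataclasses import dataclass
from datetime import datetime
from typing import Iterable, List


@dataclass
class MetadataEvent:
    camera_id: str
    owner: str              # "private" or "state"
    entity_type: str        # e.g. "pedestrian", "vehicle"
    location: str
    timestamp: datetime


def query_events(repo: Iterable[MetadataEvent],
                 entity_type: str,
                 area: str,
                 authorized: bool) -> List[MetadataEvent]:
    """Return matching events only if the caller holds the required permission."""
    if not authorized:
        raise PermissionError("caller lacks the required system permission")
    return [e for e in repo
            if e.entity_type == entity_type and e.location.startswith(area)]


if __name__ == "__main__":
    repo = [
        MetadataEvent("cam-17", "private", "vehicle", "London/Camden", datetime(2013, 7, 1, 9, 30)),
        MetadataEvent("cam-02", "state", "pedestrian", "London/Soho", datetime(2013, 7, 1, 9, 31)),
    ]
    for hit in query_events(repo, "vehicle", "London", authorized=True):
        print(hit.camera_id, hit.location, hit.timestamp.isoformat())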

    Management and Service-aware Networking Architectures (MANA) for Future Internet Position Paper: System Functions, Capabilities and Requirements

    Get PDF
    Future Internet (FI) research and development threads have recently been gaining momentum all over the world, and the international race to create a new-generation Internet is in full swing: GENI, Asia Future Internet, Future Internet Forum Korea, and the European Union Future Internet Assembly (FIA). This is a position paper identifying the research orientation, with a time horizon of 10 years, together with the key challenges for the capabilities in the Management and Service-aware Networking Architectures (MANA) part of the Future Internet (FI), allowing for parallel and federated Internet(s).

    Device-Centric Monitoring for Mobile Device Management

    Full text link
    The ubiquity of computing devices has led to an increased need to ensure not only that the applications deployed on them are correct with respect to their specifications, but also that the devices are used in an appropriate manner, especially in situations where the device is provided by a party other than the actual user. Much of the work done on runtime verification for mobile devices and operating systems is application-centric, making global, device-centric properties (e.g. the user may not send more than 100 messages per day across all applications) difficult or impossible to verify. In this paper we present a device-centric approach to runtime verification of device behaviour against a device policy, with the different applications acting as independent components contributing to the overall behaviour of the device. We also present an implementation for Android devices and evaluate it on a number of device-centric policies, reporting the empirical results obtained.
    Comment: In Proceedings FESCA 2016, arXiv:1603.0837
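    As a concrete illustration of the device-centric policy quoted in the abstract (at most 100 messages per day across all applications), a minimal monitor might aggregate per-application events into one device-level counter, roughly as sketched below. The event format and class names are assumptions; the paper's actual Android implementation is not reproduced here.

"""Minimal sketch of a device-centric runtime monitor for the policy quoted in
the abstract: at most 100 messages per day across all applications.
Event format and class names are illustrative assumptions, not the paper's tool."""

from collections import defaultdict
from datetime import date


class MessageQuotaMonitor:
    """Aggregates 'message sent' events from every application on the device."""

    def __init__(self, daily_limit: int = 100):
        self.daily_limit = daily_limit
        self.counts = defaultdict(int)   # day -> messages sent by all apps combined

    def on_message_sent(self, app_id: str, day: date) -> bool:
        """Record one message; return False if the device-level policy is violated."""
        self.counts[day] += 1
        within_policy = self.counts[day] <= self.daily_limit
        if not within_policy:
            print(f"policy violation: {self.counts[day]} messages on {day} "
                  f"(last sender: {app_id})")
        return within_policy


if __name__ == "__main__":
    monitor = MessageQuotaMonitor(daily_limit=100)
    today = date(2016, 3, 1)
    # Two different apps contribute to the same device-wide count.
    for i in range(101):
        app = "app.sms" if i % 2 == 0 else "app.chat"
        monitor.on_message_sent(app, today)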