    Radio Map Interpolation using Graph Signal Processing

    Interpolating a radio map is a problem of great relevance in many scenarios, such as network planning, network optimization, and localization. In this work the problem is tackled by leveraging recent results from the emerging field of signal processing on graphs. A technique for interpolating graph-structured data is adapted to the problem at hand by using different graph-creation strategies, including ones that explicitly account for NLOS propagation conditions. Extensive experiments in a realistic large-scale urban scenario demonstrate that the proposed technique outperforms traditional methods such as IDW, RBF, and model-based interpolation.
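
    The abstract does not spell out the interpolation machinery, but one standard graph-signal approach of this kind is harmonic (Laplacian-regularized) interpolation over a k-NN graph of measurement locations. A minimal sketch under that assumption, with all parameters (k, sigma, the toy path-loss field) purely illustrative:

```python
import numpy as np

def knn_graph(coords, k=5, sigma=50.0):
    """Gaussian-weighted k-NN adjacency over measurement locations."""
    d = np.linalg.norm(coords[:, None] - coords[None, :], axis=2)
    W = np.exp(-d**2 / (2 * sigma**2))
    np.fill_diagonal(W, 0.0)
    # keep only the k strongest edges per node, then symmetrize
    mask = np.zeros_like(W, dtype=bool)
    idx = np.argsort(-W, axis=1)[:, :k]
    mask[np.arange(len(W))[:, None], idx] = True
    return np.where(mask | mask.T, W, 0.0)

def interpolate(W, values, known):
    """Harmonic interpolation: unknown samples minimize the graph
    Laplacian quadratic form given the known measurements."""
    L = np.diag(W.sum(axis=1)) - W
    unknown = ~known
    s = values.astype(float).copy()
    s[unknown] = np.linalg.solve(L[np.ix_(unknown, unknown)],
                                 -L[np.ix_(unknown, known)] @ values[known])
    return s

# toy usage: 200 random locations, RSS "measured" at the first 50
rng = np.random.default_rng(0)
coords = rng.uniform(0, 500, size=(200, 2))
rss = -40 - 20 * np.log10(np.linalg.norm(coords - 250, axis=1) + 1)
known = np.zeros(200, dtype=bool); known[:50] = True
estimate = interpolate(knn_graph(coords), rss, known)
```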

    Beyond cellular green generation: Potential and challenges of the network separation

    This article introduces the ideas investigated in the BCG2 project of the GreenTouch consortium. The basic concept is to separate signaling and data in the wireless access network. Transmitting the signaling information separately maintains coverage even while the data network is adapted to the current load situation. Such network-wide adaptation can power down base stations when no data transmission is needed and thus promises a tremendous increase in energy efficiency. We highlight the advantages of the separation approach and discuss technical challenges that open new research directions. Moreover, we propose two analytical models to assess the potential energy-efficiency improvement of the BCG2 approach.
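
    The two analytical models themselves are not reproduced in the abstract. As a purely hypothetical stand-in, a first-order energy model illustrates why sleeping data cells under a small always-on signaling layer can pay off; all power figures, cell counts, and the load profile below are assumed, not taken from the paper:

```python
# Illustrative (not the paper's) energy model: a dense data layer that can
# sleep under low load vs. an always-on signaling layer.
P_DATA_ON, P_DATA_SLEEP, P_SIGNALING = 1000.0, 50.0, 500.0  # watts, assumed
N_DATA, N_SIGNALING = 100, 10                               # base stations

def daily_energy(load_profile):
    """Energy (Wh) over 24 hourly load samples in [0, 1]; the number of
    active data cells scales with load, signaling cells never sleep."""
    e = 0.0
    for load in load_profile:
        active = round(load * N_DATA)
        e += active * P_DATA_ON + (N_DATA - active) * P_DATA_SLEEP
        e += N_SIGNALING * P_SIGNALING
    return e

always_on = 24 * (N_DATA * P_DATA_ON + N_SIGNALING * P_SIGNALING)
profile = [0.1] * 8 + [0.6] * 8 + [0.9] * 8   # night / day / evening
print(f"energy saving: {1 - daily_energy(profile) / always_on:.0%}")
```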

    Rate-accuracy optimization of binary descriptors

    Binary descriptors have recently emerged as low-complexity alternatives to state-of-the-art descriptors such as SIFT. The descriptor is represented as a binary string in which each bit is the result of a pairwise comparison of smoothed pixel values properly selected in a patch around each keypoint. Previous works have focused on the construction of the descriptor, neglecting the opportunity of performing lossless compression. In this paper we propose two contributions. First, we design an entropy coding scheme that seeks the internal ordering of the descriptor that minimizes the number of bits necessary to represent it. Second, we compare different selection strategies that can be adopted to identify which pairwise comparisons to use when building the descriptor. Unlike previous works, we evaluate the discriminative power of descriptors as a function of rate, in order to investigate the trade-offs in a bandwidth-constrained scenario.
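
    The pairwise-comparison construction described above is the BRIEF-style recipe. A minimal sketch of that construction (random test pairs stand in for the paper's selection strategies, and the proposed entropy coder is omitted):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def binary_descriptor(patch, pairs, sigma=2.0):
    """BRIEF-style descriptor: one bit per pairwise comparison of
    smoothed pixel intensities in a patch around a keypoint."""
    smooth = gaussian_filter(patch.astype(float), sigma)
    return np.array([smooth[ya, xa] < smooth[yb, xb]
                     for (ya, xa), (yb, xb) in pairs], dtype=np.uint8)

# 256 random test pairs inside a 32x32 patch (one common selection strategy)
rng = np.random.default_rng(0)
pairs = [(tuple(rng.integers(0, 32, 2)), tuple(rng.integers(0, 32, 2)))
         for _ in range(256)]
patch = rng.integers(0, 256, (32, 32))
bits = binary_descriptor(patch, pairs)   # 256-bit descriptor
```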

    Hybrid coding of visual content and local image features

    Distributed visual analysis applications, such as mobile visual search or Visual Sensor Networks (VSNs), require the transmission of visual content over a bandwidth-limited network, from a peripheral node to a processing unit. Traditionally, a Compress-Then-Analyze approach has been pursued, in which sensing nodes acquire and encode a pixel-level representation of the visual content, which is subsequently transmitted to a sink node in order to be processed. This approach might not represent the most effective solution, since several analysis applications leverage a compact representation of the content, thus resulting in an inefficient usage of network resources. Furthermore, coding artifacts might significantly impact the accuracy of the visual task at hand. To tackle such limitations, an orthogonal approach named Analyze-Then-Compress has been proposed. According to this paradigm, sensing nodes are responsible for the extraction of visual features, which are encoded and transmitted to a sink node for further processing. In spite of improved task efficiency, this paradigm implies that the central processing node cannot reconstruct a pixel-level representation of the visual content. In this paper we propose an effective compromise between the two paradigms, namely Hybrid-Analyze-Then-Compress (HATC), which aims at jointly encoding visual content and local image features. Furthermore, we show how a target trade-off between image quality and task accuracy might be achieved by carefully allocating the bitrate to either visual content or local features. (Submitted to the IEEE International Conference on Image Processing.)
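
    The paper's actual allocation scheme is not detailed in the abstract, but the trade-off it describes can be illustrated with a toy grid search over the split of a rate budget between visual content and features. The saturating rate curves below are hypothetical stand-ins for measured quality/accuracy curves:

```python
import numpy as np

# Hypothetical, monotone rate curves standing in for measured ones:
# image quality and task accuracy both saturate as rate grows.
quality  = lambda r: 1 - np.exp(-r / 200.0)   # r in kbit, assumed shape
accuracy = lambda r: 1 - np.exp(-r / 50.0)

def allocate(budget_kbit, alpha=0.5, step=1.0):
    """Grid-search the image/feature split maximizing a weighted sum of
    quality and accuracy; alpha sets the target trade-off."""
    splits = np.arange(0, budget_kbit + step, step)
    score = alpha * quality(splits) + (1 - alpha) * accuracy(budget_kbit - splits)
    best = splits[np.argmax(score)]
    return best, budget_kbit - best   # (image kbit, feature kbit)

print(allocate(300, alpha=0.7))
```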

    Coding video sequences of visual features

    Visual features provide a convenient representation of the image content, which is exploited in several applications, such as visual search and object tracking. In many cases, visual features need to be transmitted over a bandwidth-limited network, thus calling for coding techniques that reduce the required rate while attaining a target efficiency for the task at hand. Although the literature has recently addressed the problem of coding local features extracted from still images, in this paper we propose, for the first time, a coding architecture designed for local features extracted from video content. We exploit both spatial and temporal redundancy by means of intra-frame and inter-frame coding modes. In addition, we propose a coding mode decision based on rate-distortion optimization. Experimental results demonstrate that, in the case of SIFT descriptors, exploiting temporal redundancy leads to substantial gains in coding efficiency.
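
    A rate-distortion mode decision of the kind described can be sketched as a Lagrangian cost comparison between intra coding (quantizing the descriptor itself) and inter coding (quantizing the residual against the matched descriptor in the previous frame). The quantizer, rate proxy, and lambda below are all illustrative, not the paper's:

```python
import numpy as np

LAMBDA = 0.1  # Lagrange multiplier trading rate for distortion (assumed)

def cost(bits, distortion):
    return distortion + LAMBDA * bits

def choose_mode(desc, ref, quant=4.0):
    """Toy mode decision for one SIFT descriptor: intra codes the
    quantized descriptor, inter codes the quantized residual against the
    matched descriptor in the previous frame."""
    intra = np.round(desc / quant)
    resid = np.round((desc - ref) / quant)
    # crude rate proxy: nonzero symbols cost more than zeros
    r_intra = 8 * np.count_nonzero(intra) + len(desc)
    r_inter = 8 * np.count_nonzero(resid) + len(desc)
    d_intra = np.sum((desc - intra * quant) ** 2)
    d_inter = np.sum((desc - (ref + resid * quant)) ** 2)
    return ('intra' if cost(r_intra, d_intra) <= cost(r_inter, d_inter)
            else 'inter')

rng = np.random.default_rng(0)
prev = rng.integers(0, 256, 128).astype(float)   # matched descriptor
curr = prev + rng.normal(0, 3, 128)              # temporally correlated
print(choose_mode(curr, prev))                   # typically 'inter'
```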

    Enabling visual analysis in wireless sensor networks

    This demo showcases some of the results obtained by the GreenEyes project, whose main objective is to enable visual analysis on resource-constrained multimedia sensor networks. The demo features a multi-hop visual sensor network operated by BeagleBone Linux computers with IEEE 802.15.4 communication capabilities, capable of recognizing and tracking objects according to two different visual paradigms. In the traditional compress-then-analyze (CTA) paradigm, JPEG-compressed images are transmitted through the network from a camera node to a central controller, where the analysis takes place. In the alternative analyze-then-compress (ATC) paradigm, the camera node extracts and compresses local binary visual features from the acquired images (either locally or in a distributed fashion) and transmits them to the central controller, where they are used to perform object recognition and tracking. We show that, in a bandwidth-constrained scenario, the latter paradigm achieves higher application frame rates while still ensuring excellent analysis performance.
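
    The claimed frame-rate advantage of ATC follows directly from payload size on a fixed-rate link. A back-of-the-envelope comparison, with payload sizes assumed for illustration (the demo's measured numbers are not given here):

```python
# Frame-rate comparison on a bandwidth-limited link; only the 802.15.4
# nominal rate is a known figure, the payload sizes are assumptions.
BANDWIDTH_KBPS = 250   # IEEE 802.15.4 nominal rate at 2.4 GHz
JPEG_KB = 30           # compressed image per frame (assumed)
FEATURES_KB = 4        # compressed binary features per frame (assumed)

fps_cta = BANDWIDTH_KBPS / (8 * JPEG_KB)      # compress-then-analyze
fps_atc = BANDWIDTH_KBPS / (8 * FEATURES_KB)  # analyze-then-compress
print(f"CTA: {fps_cta:.2f} fps, ATC: {fps_atc:.2f} fps")
```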