1,729 research outputs found
Stochastic characterization of the spectrum sharing game in ad-hoc networks
This work focuses on infrastructure-less ad hoc wireless networks where multiple transmitter/receiver pairs share the same radio resources (spectrum); transmitters must choose how to split a total power budget across orthogonal spectrum bands with the goal of maximizing their sum rate under the cumulative interference from concurrent transmissions. We start by introducing and characterizing the non-cooperative game among transmitter/receiver pairs when the network topology is deterministically given. The corresponding Nash equilibria are derived, highlighting their dependence on the topological parameters (distances between wireless nodes, propagation model, and background noise power). The analysis is then extended to the case of random network topologies drawn from a given spatial stochastic process. Tools of stochastic geometry are leveraged to derive a statistical characterization of the equilibria of the spectrum sharing game. Finally, a distributed algorithm is proposed to let the players of the spectrum sharing game converge to equilibrium conditions. Numerical simulations show that the proposed algorithm drives the users to stable points that are close to the equilibria of the game while requiring limited information exchange among nodes.
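As a rough illustration of the utility at play in such a game (a generic sum-rate formulation under additive interference; the channel gains, noise power, and budget symbols below are assumptions for illustration, not taken from the paper), each transmitter i chooses how to split its power across the K orthogonal bands:

```latex
% Generic sum-rate utility of transmitter/receiver pair i over K orthogonal bands
% (illustrative; g_{ji} channel gains, \sigma^2 noise power, P_i power budget are assumptions)
\[
  u_i(\mathbf{p}_i, \mathbf{p}_{-i}) \;=\;
    \sum_{k=1}^{K} \log_2\!\left(
      1 + \frac{g_{ii}\, p_i^{(k)}}{\sigma^2 + \sum_{j \neq i} g_{ji}\, p_j^{(k)}}
    \right),
  \qquad
  \text{s.t.} \;\; \sum_{k=1}^{K} p_i^{(k)} \le P_i, \quad p_i^{(k)} \ge 0 .
\]
```

At a Nash equilibrium, no pair can improve its own utility by unilaterally reallocating its power budget; it is the statistics of such points, over random topologies, that the stochastic-geometry analysis characterizes.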
Network Selection and Resource Allocation Games for Wireless Access Networks
Wireless access networks are often characterized by the interaction of different end users, communication technologies, and network operators. This paper analyzes the dynamics among these "actors" by focusing on the processes of wireless network selection, where end users may choose among multiple available access networks to get connectivity, and resource allocation, where network operators may set their radio resources to provide connectivity. The interaction among end users is modeled as a non-cooperative congestion game where players (end users) selfishly select the access network that minimizes their perceived selection cost. A method based on mathematical programming is proposed to find Nash equilibria and characterize their optimality under three cost functions, which are representative of different technological scenarios. System-level simulations are then used to evaluate the actual throughput and fairness of the equilibrium points. The interaction between end users and network operators is then assessed through a two-stage multi-leader/multi-follower game, where network operators (leaders) play in the first stage by properly setting their radio resources so as to maximize the number of their users, and end users (followers) play the aforementioned network selection game in the second stage. The existence of exact and approximate subgame perfect Nash equilibria of the two-stage game is thoroughly assessed, and numerical results are provided on the "quality" of such equilibria.
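A minimal sketch of the lower-stage selection game (generic congestion-game notation, assumed here for illustration): each end user picks the available access network with the lowest perceived cost, and the cost grows with the number of users sharing that network.

```latex
% Lower-stage selection game (illustrative notation): x_n = number of users on network n,
% c_n(.) = selection cost of network n, N_u = set of networks available to user u
\[
  a_u^{\star} \in \arg\min_{n \in \mathcal{N}_u} c_n(x_n),
  \qquad
  \text{equilibrium: } \; c_{a_u}(x_{a_u}) \;\le\; c_n(x_n + 1)
  \quad \forall u, \; \forall n \in \mathcal{N}_u .
\]
```

In the two-stage game, each operator first fixes its radio resources, which shape the cost functions c_n, anticipating the equilibrium that the users' selection game will then reach.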
Bamboo: A fast descriptor based on AsymMetric pairwise BOOsting
A robust hash, or content-based fingerprint, is a succinct representation of the perceptually most relevant parts of a multimedia object. A key requirement of fingerprinting is that elements with perceptually similar content should map to the same fingerprint, even if their bit-level representations differ. In this work we propose BAMBOO (Binary descriptor based on AsymMetric pairwise BOOsting), a binary local descriptor that exploits a combination of content-based fingerprinting techniques and computationally efficient filters (box filters, Haar-like features, etc.) applied to image patches. In particular, we define a possibly large set of filters and iteratively select the most discriminative ones by means of an asymmetric pairwise boosting technique. The output values of the filtering process are quantized to one bit, leading to a very compact binary descriptor. Results show that such a descriptor yields compelling performance, significantly outperforming binary descriptors of comparable complexity (e.g., BRISK) and approaching the discriminative power of state-of-the-art descriptors that are significantly more complex (e.g., SIFT and BinBoost).
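A minimal sketch of the descriptor computation described above (the patch size and filter geometry below are hypothetical; in BAMBOO the filters are the ones selected by the boosting stage):

```python
import numpy as np

def box_filter_response(patch, top, left, h, w):
    """Mean intensity over a rectangular region of the patch (a box filter)."""
    return patch[top:top + h, left:left + w].mean()

def bamboo_like_descriptor(patch, filter_pairs):
    """Binarize the difference of paired box-filter responses, one bit per pair.

    filter_pairs: list of ((top, left, h, w), (top, left, h, w)) tuples, standing in
    for the filters that asymmetric pairwise boosting would select.
    """
    bits = []
    for rect_a, rect_b in filter_pairs:
        diff = box_filter_response(patch, *rect_a) - box_filter_response(patch, *rect_b)
        bits.append(1 if diff > 0 else 0)  # quantize the filter output to a single bit
    return np.array(bits, dtype=np.uint8)

# Example: a random 32x32 patch and a toy set of 8 filter pairs
rng = np.random.default_rng(0)
patch = rng.random((32, 32))
pairs = [(((i * 2) % 24, 4, 8, 8), (4, (i * 3) % 24, 8, 8)) for i in range(8)]
print(bamboo_like_descriptor(patch, pairs))
```

Matching such descriptors reduces to a Hamming-distance computation, which is what makes them attractive for resource-constrained devices.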
Optimal Content Placement in ICN Vehicular Networks
Information Centric Networking (ICN) is a networking framework for content distribution. Communication is based on a request/response model where the attention is centered on the content: the user sends interest messages naming the content it desires, and the network chooses the best node from which to deliver it. This way of retrieving content naturally fits a context where users continuously change their location. One of the main problems caused by user mobility is intermittent connectivity, which leads to packet losses. This work shows how, in a Vehicle-to-Infrastructure scenario, the network can exploit the ICN architecture with content pre-distribution to maximize the probability that the user retrieves the desired content. We give an ILP formulation of the problem of optimally distributing the contents across the network nodes and discuss how the system assumptions impact the success probability. Moreover, we validate our model by means of simulations with ndnSIM.
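The placement problem described above can be sketched as follows (generic notation assumed for illustration; the paper's exact ILP may include additional constraints, e.g., on vehicle trajectories and contact times):

```latex
% Illustrative placement ILP (not necessarily the paper's exact formulation):
% x_{c,n} = 1 if content c is pre-distributed to roadside node n,
% q_c = request probability, s_c = content size, C_n = cache capacity of node n,
% \pi_n = probability that the vehicle gets enough connectivity to node n,
% y_c = union-bound style proxy for the retrieval success probability of content c
\[
  \max_{x,\,y} \; \sum_{c} q_c\, y_c
  \quad \text{s.t.} \quad
  y_c \le \sum_{n} \pi_n\, x_{c,n}, \;\; y_c \le 1 \;\; \forall c,
  \qquad
  \sum_{c} s_c\, x_{c,n} \le C_n \;\; \forall n,
  \qquad
  x_{c,n} \in \{0,1\}.
\]
```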
Energy consumption of visual sensor networks: impact of spatio-temporal coverage
Wireless visual sensor networks (VSNs) are expected to play a major role in future IEEE 802.15.4 personal area networks (PANs) under recently established collision-free medium access control (MAC) protocols, such as the IEEE 802.15.4e-2012 MAC. In such environments, the VSN energy consumption is affected by the number of camera sensors deployed (spatial coverage), as well as by the number of captured video frames for which each node processes and transmits data (temporal coverage). In this paper we explore this aspect for uniformly formed VSNs, that is, networks comprising identical wireless visual sensor nodes connected to a collection node via a balanced cluster-tree topology, with each node producing independent, identically distributed bitstream sizes after processing the video frames captured within each network activation interval. We derive analytic results for the energy-optimal spatio-temporal coverage parameters of such VSNs under a priori known bounds for the number of frames to process per sensor and the number of nodes to deploy within each tier of the VSN. Our results are parametric to the probability density function characterizing the bitstream size produced by each node and the energy consumption rates of the system of interest. Experimental results are derived from a deployment of TelosB motes and reveal that our analytic results are always within 7% of the energy consumption measurements for a wide range of settings. In addition, results obtained via motion JPEG encoding and feature extraction on a multimedia subsystem (BeagleBone Linux Computer) show that the optimal spatio-temporal settings derived by our framework allow for a substantial reduction of energy consumption in comparison with ad hoc settings.
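A heavily simplified caricature of the trade-off being optimized (illustrative notation only; the paper's model is parametric in the bitstream-size distribution and in measured energy rates, and its constraints are bounds on frames per sensor and nodes per tier):

```latex
% Illustrative energy model: n nodes per tier, f frames processed per node,
% \mu_B = E[B] mean bitstream size per frame, e_proc / e_tx per-frame and per-bit costs
\[
  \mathbb{E}\big[E_{\mathrm{tot}}(n, f)\big] \;\approx\;
    n \big( f\, e_{\mathrm{proc}} + f\, \mu_B\, e_{\mathrm{tx}} \big) + E_{\mathrm{net}}(n),
  \qquad
  \min_{n,\, f} \; \mathbb{E}\big[E_{\mathrm{tot}}(n, f)\big]
  \;\; \text{s.t.} \;\; n\, f \ge F_{\min},
\]
% E_net(n) collects topology-dependent overhead (beacons, relaying in the cluster tree);
% the coverage requirement F_min is what creates the spatial-versus-temporal trade-off.
```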
Coding binary local features extracted from video sequences
Local features are a powerful tool exploited in several applications, such as visual search, object recognition, and tracking. In this context, binary descriptors provide an efficient alternative to real-valued descriptors, due to their low computational complexity, limited memory footprint, and fast matching algorithms. The descriptor consists of a binary vector in which each bit is the result of a pairwise comparison between smoothed pixel intensities. In several cases, visual features need to be transmitted over a bandwidth-limited network. To this end, it is useful to compress the descriptor to reduce the required rate, while attaining a target accuracy for the task at hand. The past literature has thoroughly addressed the problem of coding visual features extracted from still images and, only very recently, the problem of coding real-valued features (e.g., SIFT, SURF) extracted from video sequences. In this paper we propose a coding architecture specifically designed for binary local features extracted from video content. We exploit both spatial and temporal redundancy by means of intra-frame and inter-frame coding modes, showing that significant coding gains can be attained for a target level of accuracy of the visual analysis task.
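As a minimal illustration of why an inter-frame mode pays off (a sketch under assumed helper names, not the paper's codec): consecutive descriptors of the same tracked feature tend to differ in only a few bits, so coding the XOR residual against a reference descriptor is far cheaper than coding each descriptor from scratch.

```python
import numpy as np

def xor_residual_rate(prev_desc: np.ndarray, curr_desc: np.ndarray) -> float:
    """Rough bit cost of inter-frame coding a binary descriptor.

    Models the XOR residual as i.i.d. Bernoulli(p) bits, with p the fraction of
    flipped bits, and uses the binary entropy as a lower bound on the coding rate.
    """
    residual = np.bitwise_xor(prev_desc, curr_desc)
    p = residual.mean()
    if p in (0.0, 1.0):
        return 0.0  # trivial residual: essentially free to code
    h = -p * np.log2(p) - (1 - p) * np.log2(1 - p)  # binary entropy, bits per bit
    return h * residual.size

# Example: a 512-bit descriptor whose successor differs in 10 positions
rng = np.random.default_rng(0)
d0 = rng.integers(0, 2, 512, dtype=np.uint8)
d1 = d0.copy()
d1[rng.choice(512, 10, replace=False)] ^= 1
print(f"intra cost ~ {d1.size} bits, inter cost ~ {xor_residual_rate(d0, d1):.1f} bits")
```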
A visual sensor network for object recognition: Testbed realization
This work describes the implementation of an object recognition service on top of energy and resource-constrained hardware. A complete pipeline for object recognition based on the BRISK visual features is implemented on Intel Imote2 sensor devices. The reference implementation is used to assess the performance of the object recognition pipeline in terms of processing time and recognition accuracy
Effects of solar activity on noise in CALIOP profiles above the South Atlantic Anomaly
We show that nighttime dark noise measurements from the spaceborne lidar CALIOP contain valuable information about the evolution of upwelling high-energy radiation levels. Above the South Atlantic Anomaly (SAA), CALIOP dark noise levels fluctuate by ±6% between 2006 and 2013, and follow the known anticorrelation of local particle flux with the 11-year cycle of solar activity (with a 1-year lag). By analyzing the geographic distribution of noisy profiles, we are able to reproduce known findings about the SAA region. Over the considered period, it shifts westward by 0.3° per year, and changes in size by 6° meridionally and 2° zonally, becoming larger with weaker solar activity. All results are in strong agreement with previous works. We predict SAA noise levels will increase anew after 2014, and will affect future spaceborne lidar missions most near 2020.
Rate-energy-accuracy optimization of convolutional architectures for face recognition
Face recognition systems based on Convolutional Neural Networks (CNNs) or convolutional architectures currently represent the state of the art, achieving an accuracy comparable to that of humans. Nonetheless, there are two issues that might hinder their adoption on distributed battery-operated devices (e.g., visual sensor nodes, smartphones, and wearable devices). First, convolutional architectures are usually computationally demanding, especially when the depth of the network is increased to maximize accuracy. Second, transmitting the output features produced by a CNN might require a bitrate higher than the one needed for coding the input image. Therefore, in this paper we address the problem of optimizing the energy-rate-accuracy characteristics of a convolutional architecture for face recognition. We carefully profile a CNN implementation on a Raspberry Pi device and optimize the structure of the neural network, achieving a 17-fold speedup without significantly affecting recognition accuracy. Moreover, we propose a coding architecture custom-tailored to features extracted by such a model.
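A minimal sketch of the kind of layer-by-layer timing that guides such an optimization (a generic harness around a toy NumPy convolution; not the paper's actual network, device, or measurement setup):

```python
import time
import numpy as np

def conv2d_naive(image, kernel):
    """Valid 2-D convolution, kept deliberately simple so its cost is easy to reason about."""
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def profile(fn, *args, repeats=5):
    """Median wall-clock time of fn(*args) over a few repeats."""
    times = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn(*args)
        times.append(time.perf_counter() - t0)
    return sorted(times)[len(times) // 2]

rng = np.random.default_rng(0)
image = rng.random((64, 64))
for k in (3, 5, 7):  # per-layer cost grows with kernel size (and, in a CNN, with depth/width)
    kernel = rng.random((k, k))
    print(f"{k}x{k} conv: {profile(conv2d_naive, image, kernel) * 1e3:.2f} ms")
```

On an embedded target, the same harness would wrap each layer's forward pass, and the measured per-layer times indicate where trimming the architecture buys the most energy for the least loss in accuracy.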
Compress-then-analyze vs. analyze-then-compress: Two paradigms for image analysis in visual sensor networks
We compare two paradigms for image analysis in visual sensor networks (VSN). In the compress-then-analyze (CTA) paradigm, images acquired from camera nodes are compressed and sent to a central controller for further analysis. Conversely, in the analyze-then-compress (ATC) approach, camera nodes perform visual feature extraction and transmit a compressed version of these features to a central controller. We focus on state-of-the-art binary features, which are particularly suitable for resource-constrained VSNs, and we show that the "winning" paradigm depends primarily on the network conditions. Indeed, while the ATC approach might be the only possible way to perform analysis at low available bitrates, the CTA approach reaches the best results when the available bandwidth enables the transmission of high-quality images.
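The paradigm choice can be summarized with a back-of-the-envelope comparison (the numbers and the decision rule below are illustrative only, not the paper's measurements):

```python
def choose_paradigm(available_kbit, image_kbit, n_features, bits_per_feature):
    """Pick CTA or ATC for one query, based purely on what fits the bit budget.

    available_kbit:   per-query budget (channel rate times the latency constraint)
    image_kbit:       size of a compressed image of acceptable quality (CTA payload)
    n_features, bits_per_feature: size of the compressed feature set (ATC payload)
    """
    atc_kbit = n_features * bits_per_feature / 1000
    if available_kbit >= image_kbit:
        return "CTA"  # enough bandwidth for a good image: analyze at the controller
    if available_kbit >= atc_kbit:
        return "ATC"  # only the compact feature set fits: analyze at the camera node
    return "neither fits within the budget"

# Example: 300 binary features of 256 bits (~77 kbit) vs. a ~400 kbit compressed image
for budget in (50, 100, 500):
    print(budget, "kbit ->", choose_paradigm(budget, image_kbit=400,
                                             n_features=300, bits_per_feature=256))
```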
