The EXtended Reality Quality Riddle: A Technological and Sociological Survey
This position paper surveys recent research on the Quality of Experience (QoE) in XR from two distinct angles. Firstly, we present recent technical outcomes concerning media quality. Secondly, we extend our investigation to user experience from a sociological perspective. This multidisciplinary approach establishes a connection between the two methodologies, enabling a more comprehensive understanding of XR quality and opening up new possibilities for XR service design and performance measurement.
Efficient Compressive Sampling of Spatially Sparse Fields in Wireless Sensor Networks
Wireless sensor networks (WSNs), i.e., networks of autonomous, wireless sensing nodes spatially deployed over a geographical area, are often faced with the acquisition of spatially sparse fields. In this paper, we present a novel bandwidth- and energy-efficient compressive sampling (CS) scheme for the acquisition of spatially sparse fields in a WSN. The contribution of the paper is twofold. Firstly, we introduce a sparse, structured CS matrix and analytically show that it allows accurate reconstruction of bidimensional spatially sparse signals, such as those occurring in several surveillance applications. Secondly, we analytically evaluate the energy and bandwidth consumption of our CS scheme when it is applied to data acquisition in a WSN. Numerical results demonstrate that our CS scheme achieves significant energy and bandwidth savings with respect to state-of-the-art approaches when employed for sensing a spatially sparse field by means of a WSN.
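To make the acquisition pipeline concrete, here is a minimal Python sketch of compressive sampling of a sparse field through a sparse binary sensing matrix, with recovery by orthogonal matching pursuit. The matrix construction, the OMP recovery, and all dimensions are illustrative stand-ins, not the structured matrix analyzed in the paper.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: recover a k-sparse x from y = A @ x."""
    residual, support = y.copy(), []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

rng = np.random.default_rng(0)
n, k, m = 256, 5, 64                     # 16x16 field (flattened), 5 active cells
field = np.zeros(n)
field[rng.choice(n, k, replace=False)] = rng.uniform(1.0, 2.0, k)

# Sparse binary sensing matrix: each measurement aggregates only a few nodes,
# so each sensor transmits rarely (the source of the bandwidth/energy saving).
A = np.zeros((m, n))
for i in range(m):
    A[i, rng.choice(n, 8, replace=False)] = 1.0 / np.sqrt(8)

x_hat = omp(A, A @ field, k)
print("relative error:", np.linalg.norm(x_hat - field) / np.linalg.norm(field))
```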
CLEVER: a cooperative and cross-layer approach to video streaming in HetNets
We investigate the problem of providing a video streaming service to mobile users in a heterogeneous cellular network composed of micro e-NodeBs (eNBs) and macro e-NodeBs (MeNBs). In more detail, we target a cross-layer dynamic allocation of the bandwidth resources available over a set of eNBs and one MeNB, with the goal of reducing the per-chunk delay experienced by users. After optimally formulating the problem of minimizing the chunk delay, we detail the Cross LayEr Video stReaming (CLEVER) algorithm to tackle it in practice. CLEVER makes allocation decisions on the basis of information retrieved from the application layer as well as from lower layers. Results, obtained over two representative case studies, show that CLEVER is able to limit the chunk delay while also reducing both the amount of bandwidth reserved for offloaded users on the MeNB and the number of offloaded users. In addition, we show that CLEVER clearly outperforms two selected reference algorithms while remaining very close to a best bound. Finally, we show that our solution achieves high fairness indexes and good levels of Quality of Experience (QoE).
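As an illustration of the kind of cross-layer decision described above, the following hypothetical Python sketch greedily assigns each user's next chunk to the base station with the lowest estimated chunk delay, combining application-layer chunk sizes with lower-layer achievable rates. The class names, delay model, and numbers are assumptions for illustration, not the paper's formulation.

```python
from dataclasses import dataclass

@dataclass
class BaseStation:
    name: str
    capacity_mbps: float          # bandwidth budget at this cell
    load_mbps: float = 0.0        # bandwidth already granted

    def est_delay(self, chunk_mbit, phy_rate_mbps):
        """Estimated chunk delay if served here: size over effective rate."""
        free = max(self.capacity_mbps - self.load_mbps, 1e-9)
        return chunk_mbit / min(phy_rate_mbps, free)

def assign_chunks(stations, requests):
    """requests: (user, chunk size in Mbit, {station name: PHY rate in Mb/s})."""
    plan = []
    for user, chunk, rates in sorted(requests, key=lambda r: -r[1]):
        best = min((s for s in stations if s.name in rates),
                   key=lambda s: s.est_delay(chunk, rates[s.name]))
        grant = max(min(rates[best.name],
                        best.capacity_mbps - best.load_mbps), 1e-9)
        best.load_mbps += grant
        plan.append((user, best.name, chunk / grant))
    return plan

cells = [BaseStation("MeNB", 100.0), BaseStation("eNB1", 30.0), BaseStation("eNB2", 30.0)]
reqs = [("u1", 8.0, {"MeNB": 20.0, "eNB1": 35.0}),
        ("u2", 4.0, {"MeNB": 15.0, "eNB2": 25.0}),
        ("u3", 6.0, {"MeNB": 10.0, "eNB1": 12.0})]
for user, cell, delay in assign_chunks(cells, reqs):
    print(f"{user} -> {cell}: est. chunk delay {delay:.2f} s")
```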
Blind Image Deblurring Driven by Nonlinear Processing in the Edge Domain
This work addresses the problem of blind image deblurring, that is, of recovering an original image observed through one or more unknown linear channels and corrupted by additive noise. We resort to an iterative algorithm, belonging to the class of Bussgang algorithms, based on alternating a linear and a nonlinear image estimation stage. Specifically, we investigate the design of a novel nonlinear processing stage acting on the Radon transform of the image edges. This choice is motivated by the fact that the Radon transform of the image edges well describes the structural image features and the effect of blur, thus simplifying the design of the nonlinearity. The effect of the nonlinear processing is to thin the blurred image edges and to drive the overall blind restoration algorithm toward a sharp, focused image. The performance of the algorithm is assessed by experimental results pertaining to the restoration of blurred natural images.
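The Python sketch below shows the alternating structure of a Bussgang-type scheme: a linear Wiener deconvolution stage followed by a nonlinear edge-sharpening stage, with a crude spectral re-fit of the blur kernel. Note that the paper's nonlinearity operates on the Radon transform of the edge map; the unsharp-masking stand-in and the kernel update here are simplifying assumptions, not the paper's operators.

```python
import numpy as np
from numpy.fft import fft2, ifft2
from scipy.ndimage import gaussian_filter

def sharpen_edges(x, alpha=0.8):
    """Nonlinear stage (stand-in): unsharp masking to thin blurred edges."""
    return x + alpha * (x - gaussian_filter(x, sigma=1.0))

def bussgang_deblur(y, h0, iters=5, snr=100.0):
    """Alternate a linear (Wiener) and a nonlinear (edge) estimation stage."""
    H, Y = fft2(h0, s=y.shape), fft2(y)
    for _ in range(iters):
        # linear stage: Wiener deconvolution with the current kernel spectrum
        x = np.real(ifft2(np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr) * Y))
        # nonlinear stage: drive edges toward a sharp, focused image
        x = sharpen_edges(x)
        # blind step: least-squares re-fit of the kernel against the output
        X = fft2(x)
        H = Y * np.conj(X) / (np.abs(X) ** 2 + 1e-6)
    return x
```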
Green compressive sampling reconstruction in IoT networks
In this paper, we address the problem of green Compressed Sensing (CS) reconstruction within Internet of Things (IoT) networks, both in terms of computing architecture and reconstruction algorithms. The approach is novel since, unlike most of the literature dealing with energy-efficient gathering of the CS measurements, we focus on the energy efficiency of the signal reconstruction stage given the CS measurements. As a first novel contribution, we present an analysis of the energy consumption within the IoT network under two computing architectures. In the first, reconstruction takes place within the IoT network and the reconstructed data are encoded and transmitted out of the IoT network; in the second, all the CS measurements are forwarded to off-network devices for reconstruction and storage, i.e., reconstruction is off-loaded. Our analysis shows that the two architectures differ significantly in the energy they consume, and it outlines a theoretically motivated criterion for selecting a green CS reconstruction computing architecture. Specifically, we present a decision function to determine which architecture outperforms the other in terms of energy efficiency. The decision function depends on a few IoT network features, such as the network size, the sink connectivity, and other system parameters. As a second novel contribution, we show how to move beyond the classical performance comparison of CS reconstruction algorithms, usually carried out w.r.t. the achieved accuracy alone. Specifically, we consider the consumed energy and analyze the energy vs. accuracy trade-off. The approach presented herein, jointly considering signal processing and IoT network issues, is a relevant contribution to the design of green compressive sampling architectures in IoT networks.
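A toy version of such a decision function might look as follows; the energy model, the per-operation and per-bit costs, and the iterative-solver cost estimate are illustrative assumptions, not the paper's analysis.

```python
def energy_in_network(n, m, rec_ops, e_op, e_tx_bit, bits_out):
    """Reconstruct inside the IoT network, then transmit the encoded signal."""
    return rec_ops(n, m) * e_op + bits_out * e_tx_bit

def energy_offloaded(m, bits_per_meas, e_tx_bit, hops_to_sink):
    """Forward every CS measurement to off-network devices for reconstruction."""
    return m * bits_per_meas * e_tx_bit * hops_to_sink

def choose_architecture(n, m, hops_to_sink,
                        e_op=1e-9, e_tx_bit=5e-8,
                        bits_per_meas=16, bits_out=2048):
    rec_ops = lambda n, m: 20 * n * m      # rough iterative-solver op count
    e_in = energy_in_network(n, m, rec_ops, e_op, e_tx_bit, bits_out)
    e_off = energy_offloaded(m, bits_per_meas, e_tx_bit, hops_to_sink)
    return ("in-network" if e_in < e_off else "off-loaded"), e_in, e_off

# sink connectivity (hop count) tips the balance between the architectures
for hops in (2, 8, 32):
    arch, e_in, e_off = choose_architecture(n=1024, m=256, hops_to_sink=hops)
    print(f"{hops:2d} hops: {arch} (in={e_in:.4f} J, off={e_off:.4f} J)")
```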
Joint Analysis and Segmentation of Time-Varying Data with Outliers
Principal-Component Analysis (PCA) is a fundamental tool in data science and machine learning, used for compressing, analyzing, visualizing, and processing large datasets. At the same time, temporal segmentation is important for coherent component analysis of big data collections generated by time-varying distributions. However, both segmentation and PCA can be critically affected and misled by corrupted points that often exist in big data collections. To address these issues, we propose a novel and robust method for joint segmentation and principal-component analysis of time-varying data, based on L1-norm formulations. Our proposed method estimates robust L1-norm principal components (L1-PCs) over different temporal horizons and combines them to perform outlier detection, data segmentation, and subspace estimation. Numerical studies on real-world data, including videos and smartphone-sensed human body motion measurements, corroborate the merits of the proposed method in terms of segmentation, PCA, and outlier detection/removal.
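For concreteness, here is a rough Python sketch of the windowed L1-PC idea: the first L1-norm principal component of each temporal window is estimated with a standard sign fixed-point iteration, and samples with large residuals against their window's component are flagged as outliers. The window length, residual threshold, and single-component restriction are simplifying assumptions, not the paper's method.

```python
import numpy as np

def l1_pc(X, iters=50):
    """First L1-norm principal component of the columns of X (d x n)."""
    w = X[:, 0] / (np.linalg.norm(X[:, 0]) + 1e-12)
    for _ in range(iters):
        b = np.sign(X.T @ w)      # optimal signs given the current direction
        v = X @ b                 # fixed-point update maximizing ||X^T w||_1
        w = v / (np.linalg.norm(v) + 1e-12)
    return w

def segment_and_flag(X, win=100, res_thresh=3.0):
    """Per-window L1-PC estimation; flag columns with large residuals."""
    d, n = X.shape
    flags, comps = np.zeros(n, dtype=bool), []
    for start in range(0, n, win):
        Xw = X[:, start:start + win]
        w = l1_pc(Xw)
        comps.append(w)
        resid = np.linalg.norm(Xw - np.outer(w, w @ Xw), axis=0)
        flags[start:start + win] = resid > res_thresh * np.median(resid)
    return comps, flags

# toy data: two regimes with different dominant directions, plus two outliers
rng = np.random.default_rng(1)
X = np.hstack([np.outer([1.0, 0.2], rng.normal(size=100)),
               np.outer([0.2, 1.0], rng.normal(size=100))])
X += 0.05 * rng.normal(size=X.shape)
X[:, [30, 150]] += 5.0
comps, flags = segment_and_flag(X)
print("flagged columns:", np.where(flags)[0])
```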
Protein-Protein Interaction Prediction via Graph Signal Processing
This paper tackles the problem of predicting the protein-protein interactions that arise in all living systems. Inference of protein-protein interactions is of paramount importance for understanding fundamental biological phenomena, including cross-species protein-protein interactions such as those behind the 2020-21 pandemic caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). It is also relevant for applications such as drug repurposing, where a known, authorized drug is applied to novel diseases. On the other hand, a large fraction of existing protein interactions is unknown, and their experimental measurement is resource-consuming. To this end, we adopt a Graph Signal Processing (GSP) based approach, modeling the protein-protein interaction (PPI) network (a.k.a. the interactome) as a graph and some connectivity-related node features as a signal on the graph. We then leverage the signal-on-graph features to infer links between graph nodes, corresponding to interactions between proteins. Specifically, we develop a Markovian model of the signal on the graph that enables the representation of connectivity properties of the nodes, and exploit it to derive an algorithm that infers the graph edges. Performance assessment by several metrics recognized in the literature shows that the proposed approach, named GRAph signal processing Based PPI prediction (GRABP), effectively captures underlying, biologically grounded properties of the PPI network.
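As a flavor of GSP-based link prediction (not the GRABP algorithm itself), the Python sketch below treats multi-step random-walk (Markov) transition profiles as node features on a toy interactome and scores missing protein pairs by the cosine similarity of those profiles; the graph and all parameters are illustrative.

```python
import numpy as np

def random_walk_operator(A):
    """Row-stochastic Markov operator P = D^-1 A of the PPI graph."""
    deg = A.sum(axis=1)
    return A / np.maximum(deg[:, None], 1e-12)

def link_scores(A, steps=3):
    """Score non-edges by similarity of k-step random-walk profiles."""
    n = A.shape[0]
    P, S, profiles = random_walk_operator(A), np.eye(n), []
    for _ in range(steps):
        S = S @ P                  # k-step transition probabilities
        profiles.append(S)
    F = np.hstack(profiles)        # node embedding from walk statistics
    F /= np.linalg.norm(F, axis=1, keepdims=True) + 1e-12
    sim = F @ F.T                  # cosine similarity of diffused profiles
    sim[A > 0] = -np.inf           # keep only candidate (missing) links
    np.fill_diagonal(sim, -np.inf)
    return sim                     # high score => predicted interaction

# toy interactome: 5 proteins; the missing triangle edge should score high
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0],
              [1, 1, 0, 0, 0],
              [0, 1, 0, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)
scores = link_scores(A)
i, j = np.unravel_index(np.argmax(scores), scores.shape)
print(f"top predicted interaction: proteins {i} and {j}")
```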