Decision Fusion in Space-Time Spreading aided Distributed MIMO WSNs
In this letter, we propose space-time spreading (STS) of local sensor
decisions before reporting them over a wireless multiple access channel (MAC),
in order to achieve flexible balance between diversity and multiplexing gain as
well as eliminate any chance of intrinsic interference inherent in MAC
scenarios. Spreading of the sensor decisions using dispersion vectors exploits
the benefits of multi-slot decision to improve low-complexity diversity gain
and opportunistic throughput. On the other hand, at the receive side of the
reporting channel, we formulate and compare optimum and sub-optimum fusion
rules for arriving at a reliable conclusion. Simulation results demonstrate
performance gains with STS-aided transmission ranging from a minimum of 3 times
to a maximum of 6 times compared with transmission without STS.
Comment: 5 pages, 5 figures
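The spreading-and-fusion idea can be sketched with orthogonal dispersion vectors. This is a minimal sketch assuming BPSK-mapped ±1 local decisions and Walsh-Hadamard dispersion vectors, an illustrative choice rather than the letter's exact construction:

```python
import numpy as np

def hadamard(n):
    # Sylvester construction of an n x n Walsh-Hadamard matrix (n a power of 2);
    # its rows serve as mutually orthogonal dispersion vectors.
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def spread_and_superimpose(decisions, H):
    # Each sensor k transmits its +/-1 decision spread by dispersion vector H[k]
    # over n reporting slots; the multiple access channel superimposes all
    # transmissions (noise omitted for clarity).
    return decisions @ H

def correlate_fuse(received, H):
    # Sub-optimum fusion: correlate the received slot sequence with each
    # dispersion vector and take the sign.
    return np.sign(received @ H.T).astype(int)
```

Because the dispersion vectors are orthogonal, the superimposed slot sums decouple per sensor, which is how intrinsic MAC interference is avoided in this toy setting.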
Objective assessment of region of interest-aware adaptive multimedia streaming quality
Adaptive multimedia streaming relies on controlled
adjustment of content bitrate, and the consequent variation in video quality, in order to meet the bandwidth constraints of the communication
link used for content delivery to the end-user. The values of easy-to-measure network-related Quality of Service metrics have no direct relationship with the way moving images are
perceived by the human viewer. Consequently, variations in the video stream bitrate are not clearly linked to similar variations in user-perceived quality. This is especially true if human visual system-based adaptation techniques are employed. As research has shown, there are certain image regions in each frame of a video sequence in which users are more interested than in others. This paper presents the Region of Interest-based Adaptive Scheme (ROIAS), which adjusts the regions within each frame of the streamed multimedia content differently based on the user's interest in them. ROIAS is presented and discussed in terms of the adjustment algorithms employed and their impact on human-perceived video quality. Comparisons with existing approaches, including a constant-quality adaptation scheme across the whole frame area, are performed employing two objective metrics which estimate user-perceived video quality.
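Region-differentiated adjustment of the kind described above can be sketched as a budget split. The weighting rule below is hypothetical (ROIAS's actual adjustment algorithms are more elaborate); it only illustrates distributing a frame's bitrate budget in proportion to user-interest weights:

```python
def allocate_region_bitrate(region_weights, frame_bitrate):
    # Split a frame's bitrate budget across regions in proportion to
    # user-interest weights (hypothetical weighting; regions with higher
    # interest receive a larger share and thus higher quality).
    weights = [max(0.0, w) for w in region_weights]
    total = sum(weights)
    if total == 0:  # no ROI information: fall back to a uniform split
        return [frame_bitrate / len(weights)] * len(weights)
    return [frame_bitrate * w / total for w in weights]
```

A constant-quality scheme corresponds to equal weights, so this formulation contains the whole-frame baseline as a special case.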
VSSA-NET: Vertical Spatial Sequence Attention Network for Traffic Sign Detection
Although traffic sign detection has been studied for years and great progress
has been made with the rise of deep learning techniques, many problems
remain to be addressed. For complicated real-world traffic scenes,
there are two main challenges. Firstly, traffic signs are usually small-size
objects, which makes them more difficult to detect than larger ones. Secondly, it
is hard to distinguish false targets that resemble real traffic signs in
complex street scenes without context information. To handle these problems, we
propose a novel end-to-end deep learning method for traffic sign detection in
complex environments. Our contributions are as follows: 1) We propose a
multi-resolution feature fusion network architecture that exploits densely
connected deconvolution layers with skip connections and can learn more
effective features for small-size objects; 2) We frame traffic sign
detection as a spatial sequence classification and regression task, and propose
a vertical spatial sequence attention (VSSA) module to gain more context
information for better detection performance. To comprehensively evaluate the
proposed method, we conduct experiments on several traffic sign datasets as well
as a general object detection dataset, and the results demonstrate the
effectiveness of our proposed method.
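Sequence attention over vertical positions can be illustrated with a toy dot-product attention over a column of feature vectors. This is illustrative of sequence attention generally, not the exact VSSA module:

```python
import numpy as np

def vertical_attention(column_features, query):
    # column_features: (H, C) feature vectors for H vertical positions of an
    # image column; query: (C,) vector. Scores are dot products, normalized
    # with a numerically stable softmax, yielding a context vector that blends
    # information from all vertical positions.
    scores = column_features @ query
    scores -= scores.max()                 # shift for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum()
    context = weights @ column_features
    return context, weights
```

The context vector lets the detector at one vertical position draw on features from the rest of the column, which is the sort of context information the abstract argues is needed to reject false targets.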
Turbo NOC: a framework for the design of Network On Chip based turbo decoder architectures
This work proposes a general framework for the design and simulation of
network on chip based turbo decoder architectures. Several parameters in the
design space are investigated, namely the network topology, the parallelism
degree, the rate at which messages are sent by processing nodes over the
network and the routing strategy. The main results of this analysis are: i) the
most suited topologies for achieving high throughput with limited complexity
overhead are the generalized de Bruijn and generalized Kautz topologies; ii)
depending on the throughput requirements, different parallelism degrees, message
injection rates and routing algorithms can be used to minimize the network area
overhead.
Comment: submitted to IEEE Trans. on Circuits and Systems I (submission date
27 May 2009)
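The generalized Kautz topology mentioned above has a compact arithmetic description. A minimal sketch of one standard construction (Imase-Itoh), which is an assumption about the specific variant rather than a detail taken from the paper:

```python
def generalized_kautz_successors(node, degree, n):
    # Imase-Itoh construction of a generalized Kautz digraph on n nodes with
    # out-degree d: node i has arcs to (-d*i - j) mod n for j = 1..d. This
    # family achieves near-optimal diameter for a given degree, which is why
    # such topologies suit high-throughput, low-overhead interconnects.
    return [(-degree * node - j) % n for j in range(1, degree + 1)]
```

Unlike the classical Kautz digraph, this construction works for any number of nodes n, not just n = d^k + d^(k-1), which is convenient when the parallelism degree of the decoder fixes n.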
Data-Driven Assisted Chance-Constrained Energy and Reserve Scheduling with Wind Curtailment
Chance-constrained optimization (CCO) has been widely used for uncertainty
management in power system operation. With the prevalence of wind energy, it
becomes possible to treat wind curtailment as a dispatch variable in
CCO. However, wind curtailment introduces an impulse into the uncertainty
distribution, which poses challenges for chance-constraint modeling. To deal
with this, a data-driven framework is developed. By modeling the wind
curtailment as a cap enforced on the wind power output, the proposed framework
constructs a Gaussian process (GP) surrogate to describe the relationship
between wind curtailment and the chance constraints. This allows us to
reformulate the CCO with wind curtailment as a mixed-integer second-order cone
programming (MI-SOCP) problem. An error correction strategy is developed by
solving a convex linear programming (LP) to improve the modeling accuracy. Case
studies performed on the PJM 5-bus and IEEE 118-bus systems demonstrate that
the proposed method is capable of accurately accounting for the influence of wind
curtailment dispatch in CCO.
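The chance-constraint machinery underlying such frameworks can be illustrated with the standard Gaussian tightening. The paper itself builds a GP surrogate and an MI-SOCP; this sketch only shows how a single Gaussian chance constraint reduces to a deterministic margin:

```python
from statistics import NormalDist

def gaussian_margin(sigma, epsilon):
    # For w ~ N(0, sigma^2), the chance constraint P(g(x) + w <= 0) >= 1 - eps
    # is equivalent to the deterministic constraint
    #     g(x) <= -sigma * Phi^{-1}(1 - eps).
    # The returned value is that tightening margin.
    return sigma * NormalDist().inv_cdf(1.0 - epsilon)

def satisfies_chance_constraint(g_value, sigma, epsilon):
    # Check the tightened deterministic constraint for a candidate dispatch.
    return g_value <= -gaussian_margin(sigma, epsilon)
```

The impulse that curtailment puts at the cap breaks this clean Gaussian picture, which is precisely why the paper resorts to a data-driven surrogate instead of a closed-form quantile.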
Eyes in the Sky: Decentralized Control for the Deployment of Robotic Camera Networks
This paper presents a decentralized control strategy for positioning and orienting multiple robotic cameras to collectively monitor an environment. The cameras may have various degrees of mobility, from six degrees of freedom to one degree of freedom. The control strategy is proven to locally minimize a novel metric representing information loss over the environment. It can accommodate groups of cameras with heterogeneous degrees of mobility (e.g., some that only translate and some that only rotate), and is adaptive to robotic cameras being added to or removed from the group, and to changing environmental conditions. The robotic cameras share information for their controllers over a wireless network using a specially designed multihop networking algorithm. The control strategy is demonstrated in repeated experiments with three flying quadrotor robots indoors, and with five flying quadrotor robots outdoors. Simulation results for more complex scenarios are also presented.
Funding: United States Army Research Office MURI (grant W911NF-05-1-0219); United States Office of Naval Research MURI SMARTS (grant N000140911051); National Science Foundation (grant EFRI-0735953); Lincoln Laboratory; Boeing Company; United States Department of the Air Force (contract FA8721-05-C-0002)
Approximate MIMO Iterative Processing with Adjustable Complexity Requirements
Always targeting the best achievable bit error rate (BER) performance in
iterative receivers operating over multiple-input multiple-output (MIMO)
channels may result in significant waste of resources, especially when the
achievable BER is orders of magnitude better than the target performance (e.g.,
under good channel conditions and at high signal-to-noise ratio (SNR)). In
contrast to typical iterative schemes, a practical iterative decoding
framework that approximates the soft-information exchange is proposed, which
allows reduced-complexity sphere and channel decoding, adjustable to the
transmission conditions and the required bit error rate. With the proposed
approximate soft-information exchange, the performance of exact soft-information
exchange can still be reached, with significant complexity gains.
Comment: The final version of this paper appears in IEEE Transactions on
Vehicular Technology
PEA265: Perceptual Assessment of Video Compression Artifacts
The most widely used video encoders share a common hybrid coding framework
that includes block-based motion estimation/compensation and block-based
transform coding. Despite their high coding efficiency, the encoded videos
often exhibit visually annoying artifacts, denoted as Perceivable Encoding
Artifacts (PEAs), which significantly degrade the visual Quality-of-Experience
(QoE) of end users. To monitor and improve visual QoE, it is crucial to develop
subjective and objective measures that can identify and quantify various types
of PEAs. In this work, we make the first attempt to build a large-scale
subject-labelled database composed of H.265/HEVC compressed videos containing
various PEAs. The database, namely the PEA265 database, includes 4 types of
spatial PEAs (i.e. blurring, blocking, ringing and color bleeding) and 2 types
of temporal PEAs (i.e. flickering and floating), each containing at least
60,000 image or video patches with positive and negative labels. To objectively
identify these PEAs, we train Convolutional Neural Networks (CNNs) using the
PEA265 database. It appears that the state-of-the-art ResNeXt is capable of
identifying each type of PEA with high accuracy. Furthermore, we define PEA
pattern and PEA intensity measures to quantify PEA levels of compressed video
sequences. We believe that the PEA265 database and our findings will benefit the
future development of video quality assessment methods and perceptually
motivated video encoders.
Comment: 10 pages, 15 figures, 4 tables
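The patch-level labelling described above presupposes splitting frames into fixed-size units. A minimal sketch of that preprocessing step, illustrative rather than the PEA265 authors' exact pipeline:

```python
import numpy as np

def extract_patches(frame, size):
    # Tile a frame (H x W array) into non-overlapping size x size patches,
    # the kind of unit one would label positive/negative when training a
    # per-patch PEA classifier (illustrative preprocessing only).
    h, w = frame.shape[:2]
    return [frame[r:r + size, c:c + size]
            for r in range(0, h - size + 1, size)
            for c in range(0, w - size + 1, size)]
```

Each patch then becomes one training example for a CNN such as the ResNeXt classifier mentioned in the abstract.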