Joint Cooperative Spectrum Sensing and MAC Protocol Design for Multi-channel Cognitive Radio Networks
In this paper, we propose a semi-distributed cooperative spectrum sensing
(SDCSS) and channel access framework for multi-channel cognitive radio networks
(CRNs). In particular, we consider a SDCSS scheme where secondary users (SUs)
perform sensing and exchange sensing outcomes with each other to locate
spectrum holes. In addition, we devise a p-persistent CSMA-based cognitive
MAC protocol integrating the SDCSS to enable efficient spectrum sharing among
SUs. We then perform throughput analysis and develop an algorithm to determine
the spectrum sensing and access parameters that maximize the throughput for a
given allocation of channel sensing sets. Moreover, we consider the spectrum
sensing set optimization problem for SUs to maximize the overall system
throughput. We present both exhaustive search and low-complexity greedy
algorithms to determine the sensing sets for SUs and analyze their complexity.
We also show how our design and analysis can be extended to account for reporting
errors. Finally, extensive numerical results are presented to demonstrate the
significant performance gain of our optimized design framework over
non-optimized designs, as well as the impacts of different protocol parameters
on the throughput performance.
Comment: accepted for publication in EURASIP Journal on Wireless Communications
and Networking, 201
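The channel access rule in a p-persistent CSMA MAC can be sketched in a few lines: in each slot where the sensed channel is idle, the SU transmits with probability p and otherwise defers. This is a generic illustration of p-persistent CSMA, not the paper's full protocol (which integrates cooperative sensing and parameter optimization); the function names and the slot-based channel model are assumptions for illustration.

```python
import random

def p_persistent_csma(p, channel_idle, rng=random.Random(0), max_slots=100):
    """Toy p-persistent CSMA: in each slot where the sensed channel is idle,
    transmit with probability p; otherwise defer to the next slot.
    channel_idle(slot) -> bool models the (sensed) channel state.
    Returns the slot index at which transmission occurs, or None."""
    for slot in range(max_slots):
        if channel_idle(slot) and rng.random() < p:
            return slot
    return None

# Example: a channel that is idle on even slots; with p = 1.0 the SU
# transmits in the first idle slot (slot 0).
slot = p_persistent_csma(1.0, lambda s: s % 2 == 0)
```

Smaller p reduces collision probability when several SUs contend for the same spectrum hole, at the cost of access delay; the paper's algorithm tunes such parameters jointly with the sensing configuration to maximize throughput.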
Real-time decoding of question-and-answer speech dialogue using human cortical activity.
Natural communication often occurs in dialogue, differentially engaging auditory and sensorimotor brain regions during listening and speaking. However, previous attempts to decode speech directly from the human brain typically consider listening or speaking tasks in isolation. Here, human participants listened to questions and responded aloud with answers while we used high-density electrocorticography (ECoG) recordings to detect when they heard or said an utterance and to then decode the utterance's identity. Because certain answers were only plausible responses to certain questions, we could dynamically update the prior probabilities of each answer using the decoded question likelihoods as context. We decode produced and perceived utterances with accuracy rates as high as 61% and 76%, respectively (chance is 7% and 20%). Contextual integration of decoded question likelihoods significantly improves answer decoding. These results demonstrate real-time decoding of speech in an interactive, conversational setting, which has important implications for patients who are unable to communicate.
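The contextual-integration step described above is essentially a Bayesian update: the decoded question probabilities weight an answer prior through a question-to-answer plausibility matrix, and that prior is combined with the decoded answer likelihoods. A minimal sketch, assuming hypothetical sizes (4 questions, 5 answers) and an illustrative plausibility matrix that is not the paper's actual stimulus set:

```python
import numpy as np

# plaus[q, a] = 1 if answer a is a plausible response to question q
# (illustrative structure only).
plaus = np.array([
    [1, 1, 0, 0, 0],
    [0, 0, 1, 1, 0],
    [0, 0, 0, 0, 1],
    [1, 0, 0, 0, 1],
], dtype=float)

def context_posterior(question_probs, answer_likelihoods):
    """Weight the answer prior by decoded question probabilities, then
    combine with decoded answer likelihoods and renormalize (Bayes' rule)."""
    prior = question_probs @ plaus        # P(answer | question context)
    prior /= prior.sum()
    post = prior * answer_likelihoods     # unnormalized posterior
    return post / post.sum()

q = np.array([0.9, 0.05, 0.03, 0.02])    # decoder is confident in question 0
a = np.array([0.3, 0.3, 0.2, 0.1, 0.1])  # ambiguous answer likelihoods
post = context_posterior(q, a)
```

Even when the answer likelihoods are ambiguous, a confidently decoded question concentrates the posterior on the answers plausible for that question, which is why the contextual integration improves answer decoding.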
SCMA with Low Complexity Symmetric Codebook Design for Visible Light Communication
Sparse code multiple access (SCMA) is currently attracting significant research
interest and is considered a promising multiple access technique for 5G
systems. It is a good candidate for future communication networks with massive
numbers of nodes due to its capability of handling user overloading.
Introducing SCMA to visible light communication (VLC) provides another
opportunity for designing transmission protocols for networks with massive
numbers of nodes, since the limited communication range of VLC reduces the
interference intensity. However, when applying SCMA to VLC systems, the SCMA
codebook must be modified to accommodate the real and positive signal
requirement of VLC. We apply multidimensional constellation design methods to
the SCMA codebook. To reduce the design complexity, we also propose a symmetric
codebook design. For all the proposed design approaches, the aim is to maximize
the minimum Euclidean distance. Our symmetric codebook design reduces design
and detection complexity simultaneously. Simulation results show that our
design converges quickly with respect to the number of iterations and
outperforms designs that simply adapt existing approaches to the VLC signal
requirements.
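The design objective above, maximizing the minimum Euclidean distance among codewords, is straightforward to evaluate for a candidate codebook. A minimal sketch with toy 2-D, real and non-negative codewords (the VLC intensity constraint); the values are illustrative, not an actual SCMA codebook:

```python
import itertools
import numpy as np

def min_euclidean_distance(codebook):
    """Smallest pairwise Euclidean distance among multidimensional
    codewords; a codebook design would search for the codebook that
    maximizes this quantity."""
    return min(np.linalg.norm(np.asarray(a) - np.asarray(b))
               for a, b in itertools.combinations(codebook, 2))

# Toy codebook: real, non-negative entries as required for intensity
# modulation in VLC.
codebook = [(0.0, 1.0), (1.0, 0.0), (1.0, 1.0), (2.0, 1.0)]
d_min = min_euclidean_distance(codebook)
```

A symmetric design, as proposed in the abstract, shrinks the search space (and the detection effort) by constraining codewords to mirror one another, so fewer free parameters need to be optimized for the same max-min-distance objective.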
Fast YOLO: A Fast You Only Look Once System for Real-time Embedded Object Detection in Video
Object detection is considered one of the most challenging problems in the
field of computer vision, as it involves the combination of object
classification and object localization within a scene. Recently, deep neural
networks (DNNs) have been demonstrated to achieve superior object detection
performance compared to other approaches, with YOLOv2 (an improved You Only
Look Once model) being one of the state-of-the-art DNN-based object
detection methods in terms of both speed and accuracy. Although YOLOv2 can
achieve real-time performance on a powerful GPU, it remains very
challenging to leverage this approach for real-time object detection in
video on embedded computing devices with limited computational power and
limited memory. In this paper, we propose a new framework called Fast YOLO, a
fast You Only Look Once framework which accelerates YOLOv2 to be able to
perform object detection in video on embedded devices in a real-time manner.
First, we leverage the evolutionary deep intelligence framework to evolve the
YOLOv2 network architecture and produce an optimized architecture (referred to
as O-YOLOv2 here) that has 2.8X fewer parameters with just a ~2% IOU drop. To
further reduce power consumption on embedded devices while maintaining
performance, a motion-adaptive inference method is introduced into the proposed
Fast YOLO framework to reduce the frequency of deep inference with O-YOLOv2
based on temporal motion characteristics. Experimental results show that the
proposed Fast YOLO framework can reduce the number of deep inferences by an
average of 38.13% and achieve an average speedup of ~3.3X for object detection
in video compared to the original YOLOv2, allowing Fast YOLO to run at an
average of ~18 FPS on an Nvidia Jetson TX1 embedded system.
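The motion-adaptive inference idea can be sketched as a simple scheduler: run the deep detector only on frames whose motion relative to the last processed frame exceeds a threshold, and reuse the previous detections otherwise. The frame-differencing metric and threshold here are illustrative assumptions, not the paper's exact motion model:

```python
import numpy as np

def motion_adaptive_schedule(frames, threshold):
    """Return indices of frames sent to the deep detector: run inference
    only when the mean absolute difference from the last processed frame
    exceeds `threshold`; otherwise reuse the previous detections."""
    run = [0]                              # always process the first frame
    last = frames[0].astype(float)
    for i in range(1, len(frames)):
        cur = frames[i].astype(float)
        if np.mean(np.abs(cur - last)) > threshold:
            run.append(i)
            last = cur
    return run

frames = [np.zeros((4, 4), np.uint8),
          np.zeros((4, 4), np.uint8),      # no motion -> detector skipped
          np.full((4, 4), 50, np.uint8)]   # large change -> detector runs
idx = motion_adaptive_schedule(frames, threshold=5.0)
```

Skipping static frames is where the reported ~38% reduction in deep inferences comes from: surveillance-style video is often temporally redundant, so detections stay valid across many consecutive frames.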