
    Byzantine Attack and Defense in Cognitive Radio Networks: A Survey

    The Byzantine attack in cooperative spectrum sensing (CSS), also known as the spectrum sensing data falsification (SSDF) attack in the literature, is one of the key adversaries to the success of cognitive radio networks (CRNs). In the past couple of years, research on Byzantine attack and defense strategies has gained increasing worldwide attention. In this paper, we provide a comprehensive survey and tutorial on the recent advances in the Byzantine attack and defense for CSS in CRNs. Specifically, we first briefly present the preliminaries of CSS for general readers, including signal detection techniques, hypothesis testing, and data fusion. Second, we analyze the spear-and-shield relation between Byzantine attack and defense from three aspects: the vulnerability of CSS to attack, the obstacles in CSS to defense, and the games between attack and defense. Then, we propose a taxonomy of the existing Byzantine attack behaviors and elaborate on the corresponding attack parameters, which determine where, who, how, and when to launch attacks. Next, from the perspective of homogeneous or heterogeneous scenarios, we classify the existing defense algorithms and provide an in-depth tutorial on the state-of-the-art Byzantine defense schemes, commonly known as robust or secure CSS in the literature. Furthermore, we highlight the unsolved research challenges and outline future research directions. Comment: Accepted by IEEE Communications Surveys and Tutorials
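
    To make the SSDF setting concrete, here is a minimal sketch (not from the survey) of hard-decision cooperative spectrum sensing with a k-out-of-N fusion rule, where a few "always-flip" Byzantine sensors falsify their 1-bit reports. All sensor counts and probabilities below are illustrative assumptions.

```python
# Minimal sketch: hard-decision CSS with majority fusion under an SSDF attack.
# Sensor counts and per-sensor ROC probabilities are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

N_SENSORS = 20                      # cooperating secondary users
N_BYZANTINE = 6                     # attackers that invert their reports
P_DETECT = 0.9                      # per-sensor detection probability (signal present)
P_FALSE_ALARM = 0.1                 # per-sensor false-alarm probability (signal absent)
K_THRESHOLD = N_SENSORS // 2 + 1    # majority (k-out-of-N) fusion rule

def local_decisions(signal_present: bool) -> np.ndarray:
    """Simulate each sensor's 1-bit local decision from its operating point."""
    p = P_DETECT if signal_present else P_FALSE_ALARM
    return (rng.random(N_SENSORS) < p).astype(int)

def apply_ssdf(decisions: np.ndarray) -> np.ndarray:
    """Byzantine sensors report the opposite of what they actually sensed."""
    attacked = decisions.copy()
    attacked[:N_BYZANTINE] ^= 1
    return attacked

def fuse(reports: np.ndarray) -> int:
    """k-out-of-N fusion at the fusion center: declare 'occupied' if enough 1s."""
    return int(reports.sum() >= K_THRESHOLD)

trials = 10_000
honest_hits = attacked_hits = 0
for _ in range(trials):
    d = local_decisions(signal_present=True)
    honest_hits += fuse(d)
    attacked_hits += fuse(apply_ssdf(d))

print(f"global detection prob, no attack : {honest_hits / trials:.3f}")
print(f"global detection prob, SSDF      : {attacked_hits / trials:.3f}")
```

    Running the toy simulation shows how a handful of falsified reports can noticeably degrade the fusion center's global detection probability, which is the vulnerability the surveyed defense schemes aim to close.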

    DeepFuse: A Deep Unsupervised Approach for Exposure Fusion with Extreme Exposure Image Pairs

    We present a novel deep learning architecture for fusing static multi-exposure images. Current multi-exposure fusion (MEF) approaches use hand-crafted features to fuse the input sequence. However, these weak hand-crafted representations are not robust to varying input conditions, and they perform poorly for extreme exposure image pairs. Thus, it is highly desirable to have a method that is robust to varying input conditions and capable of handling extreme exposure without artifacts. Deep representations are known to be robust to input conditions and have shown phenomenal performance in supervised settings. However, the stumbling block in using deep learning for MEF has been the lack of sufficient training data and of an oracle to provide ground truth for supervision. To address these issues, we gathered a large dataset of multi-exposure image stacks for training and, to circumvent the need for ground-truth images, we propose an unsupervised deep learning framework for MEF that uses a no-reference quality metric as the loss function. The proposed approach uses a novel CNN architecture trained to learn the fusion operation without a reference ground-truth image. The model fuses a set of common low-level features extracted from each image to generate artifact-free, perceptually pleasing results. We perform extensive quantitative and qualitative evaluation and show that the proposed technique outperforms existing state-of-the-art approaches on a variety of natural images. Comment: ICCV 201
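
    The core idea, training a fusion CNN with a no-reference quality score instead of ground truth, can be sketched as below. This is only an illustration of the training setup under assumptions of our own: the paper's loss is MEF-SSIM, whereas the stand-in loss and the tiny architecture here are not the authors' (PyTorch is assumed).

```python
# Illustrative sketch of unsupervised MEF training with a no-reference loss.
# The network and the SSIM-style stand-in loss are assumptions, not DeepFuse.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyFuseNet(nn.Module):
    """Fuses the luminance channels of an under/over-exposed pair."""
    def __init__(self):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(2, 16, 5, padding=2), nn.ReLU(),
            nn.Conv2d(16, 16, 5, padding=2), nn.ReLU(),
        )
        self.decode = nn.Conv2d(16, 1, 5, padding=2)

    def forward(self, under, over):
        feats = self.encode(torch.cat([under, over], dim=1))
        return torch.sigmoid(self.decode(feats))

def no_reference_loss(fused, under, over, eps=1e-6):
    """Stand-in for MEF-SSIM: compare the fused image, patch-wise, against the
    input pixel that is closer to mid-gray (a crude well-exposedness proxy)."""
    target = torch.where((under - 0.5).abs() < (over - 0.5).abs(), under, over)
    mu_f, mu_t = F.avg_pool2d(fused, 8), F.avg_pool2d(target, 8)
    var_f = F.avg_pool2d(fused ** 2, 8) - mu_f ** 2
    var_t = F.avg_pool2d(target ** 2, 8) - mu_t ** 2
    cov = F.avg_pool2d(fused * target, 8) - mu_f * mu_t
    ssim = (2 * mu_f * mu_t + eps) * (2 * cov + eps) / (
        (mu_f ** 2 + mu_t ** 2 + eps) * (var_f + var_t + eps))
    return 1 - ssim.mean()   # no ground-truth fused image is ever needed

net = TinyFuseNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-4)
under = torch.rand(4, 1, 64, 64) * 0.3        # dummy under-exposed batch
over = 0.7 + torch.rand(4, 1, 64, 64) * 0.3   # dummy over-exposed batch
for step in range(5):
    loss = no_reference_loss(net(under, over), under, over)
    opt.zero_grad(); loss.backward(); opt.step()
```

    The point of the sketch is the supervision signal: because the loss scores the fused output directly against the inputs, training needs only exposure stacks, never a reference fused image.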

    Depth Fields: Extending Light Field Techniques to Time-of-Flight Imaging

    A variety of techniques such as light field, structured illumination, and time-of-flight (TOF) are commonly used for depth acquisition in consumer imaging, robotics and many other applications. Unfortunately, each technique suffers from its individual limitations preventing robust depth sensing. In this paper, we explore the strengths and weaknesses of combining light field and time-of-flight imaging, particularly the feasibility of an on-chip implementation as a single hybrid depth sensor. We refer to this combination as depth field imaging. Depth fields combine light field advantages such as synthetic aperture refocusing with TOF imaging advantages such as high depth resolution and coded signal processing to resolve multipath interference. We show applications including synthesizing virtual apertures for TOF imaging, improved depth mapping through partial and scattering occluders, and single frequency TOF phase unwrapping. Utilizing space, angle, and temporal coding, depth fields can improve depth sensing in the wild and generate new insights into the dimensions of light's plenoptic function. Comment: 9 pages, 8 figures, Accepted to 3DV 201
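
    As a rough illustration of the two ingredients being combined, the sketch below computes per-pixel depth from ToF phase at one modulation frequency and applies light-field-style shift-and-sum synthetic aperture refocusing over the angular samples. This is not the authors' pipeline; the array shapes, modulation frequency, and integer-pixel shifts are simplifying assumptions.

```python
# Illustrative sketch of a "depth field": ToF depth from phase plus
# light-field shift-and-sum refocusing. Parameters are assumptions.
import numpy as np

C = 3e8        # speed of light, m/s
F_MOD = 30e6   # assumed ToF modulation frequency, Hz

def tof_depth_from_phase(phase: np.ndarray) -> np.ndarray:
    """Depth from ToF phase; unambiguous only within c / (2 * f_mod)."""
    return (C * phase) / (4 * np.pi * F_MOD)

def refocus(lightfield: np.ndarray, alpha: float) -> np.ndarray:
    """Shift-and-sum refocusing of a (U, V, H, W) light field.
    alpha selects the synthetic focal plane; shifts are whole pixels here
    for simplicity (a real implementation would interpolate)."""
    U, V, H, W = lightfield.shape
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            du = int(round(alpha * (u - U // 2)))
            dv = int(round(alpha * (v - V // 2)))
            out += np.roll(lightfield[u, v], shift=(du, dv), axis=(0, 1))
    return out / (U * V)

# Toy usage: a random phase light field -> depth at every (u, v, x, y)
# -> depth refocused at one synthetic focal plane.
phase_lf = np.random.uniform(0, 2 * np.pi, size=(5, 5, 64, 64))
depth_lf = tof_depth_from_phase(phase_lf)
refocused_depth = refocus(depth_lf, alpha=1.0)
```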