
    Listen-and-Talk: Full-duplex Cognitive Radio Networks

    In traditional cognitive radio networks, secondary users (SUs) typically access the spectrum of primary users (PUs) through a two-stage "listen-before-talk" (LBT) protocol, i.e., SUs sense for spectrum holes in the first stage before transmitting in the second. In this paper, we propose a novel "listen-and-talk" (LAT) protocol that uses the full-duplex (FD) technique to allow SUs to sense and access the vacant spectrum simultaneously. We analyze the sensing performance and the SU throughput of the proposed LAT protocol and find that, due to the self-interference caused by FD, increasing the SU transmit power does not always improve SU throughput, which implies the existence of a power-throughput tradeoff. Moreover, although the LAT protocol suffers from self-interference, it allows a longer transmission time, whereas the traditional LBT protocol is limited by channel spatial correlation and a relatively shorter transmission period. We therefore also present an adaptive scheme that improves SU throughput by switching between the LAT and LBT protocols. Numerical results are provided to verify the proposed methods and the theoretical results.
    Comment: in proceedings of IEEE Globecom 201

    Listen-and-Talk: Protocol Design and Analysis for Full-duplex Cognitive Radio Networks

    In traditional cognitive radio networks, secondary users (SUs) typically access the spectrum of primary users (PUs) through a two-stage "listen-before-talk" (LBT) protocol, i.e., SUs sense for spectrum holes in the first stage before transmitting in the second. However, two major problems exist: 1) transmission time is reduced by sensing, and 2) sensing accuracy is impaired by data transmission. In this paper, we propose a "listen-and-talk" (LAT) protocol that uses the full-duplex (FD) technique to allow SUs to sense and access the vacant spectrum simultaneously. We carefully analyze the spectrum utilization performance and provide closed-form expressions for the spectrum waste ratio and the collision ratio with the PU. Regarding the secondary throughput, we show that a tradeoff exists between the secondary transmit power and throughput, and based on this power-throughput tradeoff we derive the analytical locally optimal transmit power that lets SUs achieve both high throughput and satisfactory sensing accuracy. Numerical results are given to verify the proposed protocol and the theoretical results.
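    The power-throughput tradeoff described above can be sketched numerically. The toy model below is not the paper's exact analysis, and all constants (`tau`, `chi`, `N0`, `samples`) are illustrative assumptions: a fixed-threshold energy detector's noise floor rises with the residual self-interference, so false alarms grow with transmit power while the achievable rate grows only logarithmically, producing an interior optimum.

```python
import numpy as np
from math import erfc, sqrt, log2

# Toy model of the LAT power-throughput tradeoff (illustrative constants,
# NOT the paper's exact analysis): residual self-interference raises the
# energy detector's noise floor as transmit power P grows, so false alarms
# become more frequent while the rate grows only logarithmically.

def false_alarm_prob(P, tau=2.0, chi=0.1, N0=1.0, samples=100):
    """Fixed-threshold energy detector on an idle channel. Under a Gaussian
    approximation the test statistic has mean = noise floor (N0 + chi*P)
    and std ~ floor*sqrt(2/samples); Pfa = Q((tau - floor)/std)."""
    floor = N0 + chi * P
    z = (tau - floor) / (floor * sqrt(2.0 / samples))
    return 0.5 * erfc(z / sqrt(2.0))

def secondary_throughput(P, g=1.0, N0=1.0):
    """The SU earns rate on an idle slot only when no false alarm occurs."""
    return (1.0 - false_alarm_prob(P)) * log2(1.0 + g * P / N0)

powers = np.linspace(0.5, 30.0, 300)
tput = np.array([secondary_throughput(P) for P in powers])
print(f"throughput peaks near P = {powers[tput.argmax()]:.2f}")
```

    Sweeping the transmit power shows throughput rising, peaking, then collapsing as false alarms dominate, which is the qualitative shape of the tradeoff the abstract reports.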

    Defense against Adversarial Attacks Using High-Level Representation Guided Denoiser

    Neural networks are vulnerable to adversarial examples, which poses a threat to their application in security-sensitive systems. We propose the high-level representation guided denoiser (HGD) as a defense for image classification. A standard denoiser suffers from the error amplification effect, in which small residual adversarial noise is progressively amplified and leads to wrong classifications. HGD overcomes this problem by using a loss function defined as the difference between the target model's activations on the clean image and on the denoised image. Compared with ensemble adversarial training, the state-of-the-art defense on large images, HGD has three advantages. First, with HGD as a defense, the target model is more robust to both white-box and black-box adversarial attacks. Second, HGD can be trained on a small subset of the images and generalizes well to other images and unseen classes. Third, HGD can be transferred to defend models other than the one guiding it. In the NIPS competition on defense against adversarial attacks, our HGD solution won first place and outperformed the other entries by a large margin.
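    The guided loss can be illustrated with a toy stand-in for the target model: a single hypothetical ReLU layer with random weights (the real guide in HGD is a deep classifier; this sketch only shows the shape of the loss, not the paper's implementation).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical one-layer ReLU "target model" (the real guide is a deep
# network; this only illustrates the representation-space loss).
W1 = rng.normal(size=(16, 8))

def high_level(x):
    """'High-level representation' of the toy target model."""
    return np.maximum(x @ W1, 0.0)

def hgd_loss(clean, denoised):
    """HGD trains the denoiser on the difference between the target model's
    activations on the clean image and on the denoised image, instead of a
    pixel-space error, so noise the classifier would amplify is penalized."""
    return np.abs(high_level(clean) - high_level(denoised)).mean()

clean = rng.normal(size=16)
adv = clean + 0.3 * rng.normal(size=16)   # adversarially perturbed input
print(hgd_loss(clean, adv))               # positive: representations differ
```

    A perfect denoiser would drive this loss to zero, while residual noise that still moves the high-level features is penalized even when it is small in pixel space.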

    U-shaped fusion convolutional transformer based workflow for fast optical coherence tomography angiography generation in lips

    Oral disorders, including oral cancer, pose substantial diagnostic challenges due to late-stage diagnosis, invasive biopsy procedures, and the limitations of existing non-invasive imaging techniques. Optical coherence tomography angiography (OCTA) shows potential for delivering non-invasive, real-time, high-resolution vasculature images. However, the quality of OCTA images is often compromised by motion artifacts and noise, necessitating more robust and reliable image reconstruction approaches. To address these issues, we propose a novel model, the U-shaped fusion convolutional transformer (UFCT), for reconstructing high-quality, low-noise OCTA images from two repeated OCT scans. UFCT integrates the strengths of convolutional neural networks (CNNs) and transformers, proficiently capturing both local and global image features. According to qualitative and quantitative analysis under normal and pathological conditions, the proposed pipeline outperforms traditional OCTA generation methods when only two repeated B-scans are performed. We further provide a comparative study with various CNN and transformer models and conduct ablation studies to validate the effectiveness of our proposed strategies. Based on these results, the UFCT model holds the potential to significantly enhance the clinical workflow in oral medicine by facilitating early detection, reducing the need for invasive procedures, and improving overall patient outcomes.
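    The local/global split that such CNN-transformer fusion exploits can be sketched in miniature: a small convolution gathers local context, single-head self-attention lets every position see the whole sequence, and the two feature maps are concatenated. This is a schematic analogy only, not the UFCT architecture itself.

```python
import numpy as np

def conv1d_same(x, kernel):
    """Local branch: 1-D convolution with 'same' padding (CNN-like)."""
    k = len(kernel)
    xp = np.pad(x, k // 2)
    return np.array([xp[i:i + k] @ kernel for i in range(len(x))])

def self_attention(x):
    """Global branch: single-head self-attention over the sequence, so every
    position aggregates information from every other (transformer-like)."""
    q = k = v = x[:, None]                       # trivial 1-feature projections
    scores = q @ k.T
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability
    w = np.exp(scores)
    w /= w.sum(axis=1, keepdims=True)            # attention rows sum to 1
    return (w @ v).ravel()

def fused_features(x, kernel=np.array([0.25, 0.5, 0.25])):
    """Concatenate one local and one global feature per position."""
    return np.stack([conv1d_same(x, kernel), self_attention(x)], axis=1)

sig = np.sin(np.linspace(0, 3 * np.pi, 32))
print(fused_features(sig).shape)   # (32, 2)
```

    In the full model the same idea operates on 2-D feature maps at multiple scales of a U-shaped encoder-decoder rather than on a single 1-D signal.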

    A hand‐held optical coherence tomography angiography scanner based on angiography reconstruction transformer networks

    Optical coherence tomography angiography (OCTA) has successfully demonstrated its viability for clinical applications in dermatology. Due to the high optical scattering of skin, extracting high-quality OCTA images from skin tissue requires at least six repeated scans, and motion artifacts from the patient and the free-moving hand-held probe can lead to low-quality OCTA images. Our deep-learning-based scan pipeline enables fast, high-quality OCTA imaging with 0.3-s data acquisition. We utilize a fast scanning protocol with a 60 μm/pixel spatial interval and introduce the angiography reconstruction transformer (ART) for 4× super-resolution of low-transverse-resolution OCTA images. ART outperforms state-of-the-art networks in OCTA image super-resolution with a smaller network size; it can restore microvessels while reducing processing time by 85% and maintaining improvements in structural similarity and peak signal-to-noise ratio. This study demonstrates that ART can achieve fast and flexible skin OCTA imaging while maintaining image quality.
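    For intuition about what 4× transverse super-resolution replaces, plain linear interpolation of a low-resolution profile looks like this (a hypothetical baseline for comparison; ART learns this mapping instead of interpolating):

```python
import numpy as np

def upsample4x_linear(row):
    """Hypothetical baseline: 4x transverse upsampling of one intensity
    profile by linear interpolation -- the kind of interpolation baseline a
    learned super-resolution network such as ART is compared against."""
    n = len(row)
    fine = np.linspace(0, n - 1, 4 * (n - 1) + 1)
    return np.interp(fine, np.arange(n), row)

coarse = np.array([0.0, 1.0, 0.0, 1.0])    # low-resolution intensity profile
print(upsample4x_linear(coarse).shape)     # (13,)
```

    Interpolation cannot recover microvessels lost to the coarse 60 μm/pixel sampling, which is the gap a learned network is trained to close.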

    A Fast Optical Coherence Tomography Angiography Image Acquisition and Reconstruction Pipeline for Skin Application

    Traditional high-quality OCTA images require multiple repeated scans (e.g., 4-8 repeats) at the same position, which causes patient discomfort. We propose a deep-learning-based pipeline that can extract high-quality OCTA images from only two repeated OCT scans. The proposed Image Reconstruction U-Net (IRU-Net) outperforms the state-of-the-art UNet vision transformer and UNet in OCTA image reconstruction from a two-repeat OCT signal. The results show that the mean peak signal-to-noise ratio increased from 15.7 to 24.2 and the mean structural similarity index measure improved from 0.28 to 0.59, while OCT data acquisition time was reduced from 21 seconds to 3.5 seconds (an 83% reduction).
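    For reference, the reported PSNR figures follow the standard definition; a minimal sketch (not the authors' evaluation code):

```python
import numpy as np

def psnr(ref, img, max_val=1.0):
    """Peak signal-to-noise ratio in dB (standard definition). A rise from
    ~15.7 dB to ~24.2 dB, as the abstract reports, means the mean squared
    error fell by a factor of 10**(8.5/10), about 7x."""
    mse = np.mean((np.asarray(ref, float) - np.asarray(img, float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)

ref = np.zeros((8, 8))
print(psnr(ref, ref + 0.1))   # uniform error 0.1 -> MSE 0.01 -> ~20 dB
```

    Because the scale is logarithmic, each 10 dB of PSNR corresponds to a tenfold reduction in mean squared error relative to the reference.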