
    Noncoherent Multi-Way Relay Based on Fast Frequency-Hopping M-ary Frequency-Shift Keying

    Information exchange among a group of users is implemented with the aid of fast frequency-hopping M-ary frequency-shift keying multi-way relay (FFH/MFSK MWR). The FFH/MFSK MWR scheme uses two time-slots per symbol to achieve the information exchange, regardless of the number of users involved. During the first time-slot, all the users communicate with a relay based on the FFH/MFSK principles. Then, without recovering the received symbols, the relay forms a time-frequency (TF) matrix, which it forwards to all the users during the second time-slot. During the second time-slot, each user receives the relay's signal and, based on it, detects the other users' information. In the FFH/MFSK MWR scheme, both the relay and the users use square-law assisted noncoherent techniques for detection. While the relay uses simple threshold detection, three types of detectors are considered for detection at the users: the maximum likelihood multiuser detector (ML-MUD), the sub-optimum ML-MUD (SML-MUD), and the majority vote based single-user detector (MV-SUD). Finally, the error performance of the FFH/MFSK MWR systems is investigated by simulation, assuming communication over Rayleigh fading channels.
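The square-law noncoherent detection mentioned above can be sketched for a single point-to-point MFSK link: the receiver computes the energy in each of the M orthogonal tone bins and picks the largest, requiring no phase or channel estimate. This is a minimal illustration, not the multi-way relay protocol itself; the alphabet size, SNR, and Rayleigh fading model are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

M = 4            # MFSK alphabet size (assumed for illustration)
n_sym = 20000    # number of symbols to simulate
snr_db = 12.0    # per-symbol average SNR (assumed)
snr = 10 ** (snr_db / 10)

# Transmit: each symbol activates one of M orthogonal tones.
tx = rng.integers(0, M, n_sym)

# Matched-filter bank output: a length-M complex vector per symbol with
# signal energy only in the transmitted tone, plus unit-variance noise.
h = (rng.normal(size=n_sym) + 1j * rng.normal(size=n_sym)) / np.sqrt(2)  # Rayleigh fading gain
y = (rng.normal(size=(n_sym, M)) + 1j * rng.normal(size=(n_sym, M))) / np.sqrt(2)
y[np.arange(n_sym), tx] += np.sqrt(snr) * h

# Square-law (noncoherent) detection: choose the tone with maximum energy.
rx = np.argmax(np.abs(y) ** 2, axis=1)

ser = np.mean(rx != tx)
print(f"SER at {snr_db} dB over Rayleigh fading: {ser:.4f}")
```

The same energy-per-bin statistic is what the relay thresholds to build its TF matrix and what the user-side detectors operate on.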

    Auto-Encoding Scene Graphs for Image Captioning

    We propose the Scene Graph Auto-Encoder (SGAE), which incorporates the language inductive bias into the encoder-decoder image captioning framework for more human-like captions. Intuitively, we humans use the inductive bias to compose collocations and contextual inference in discourse. For example, when we see the relation `person on bike', it is natural to replace `on' with `ride' and infer `person riding bike on a road' even though the `road' is not evident. Therefore, exploiting such bias as a language prior is expected to make conventional encoder-decoder models less likely to overfit to the dataset bias and to focus on reasoning. Specifically, we use the scene graph --- a directed graph (G) where an object node is connected by adjective nodes and relationship nodes --- to represent the complex structural layout of both image (I) and sentence (S). In the textual domain, we use SGAE to learn a dictionary (D) that helps to reconstruct sentences in the S → G → D → S pipeline, where D encodes the desired language prior; in the vision-language domain, we use the shared D to guide the encoder-decoder in the I → G → D → S pipeline. Thanks to the scene graph representation and shared dictionary, the inductive bias is transferred across domains in principle. We validate the effectiveness of SGAE on the challenging MS-COCO image captioning benchmark, e.g., our SGAE-based single model achieves a new state-of-the-art 127.8 CIDEr-D on the Karpathy split, and a competitive 125.5 CIDEr-D (c40) on the official server even compared to ensemble models.
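The scene-graph structure described above can be sketched as a minimal, hypothetical data structure: object nodes, attribute (adjective) nodes attached to objects, and relationship nodes connecting object pairs. The class name and fields are illustrative assumptions, not the paper's implementation.

```python
from dataclasses import dataclass, field

@dataclass
class SceneGraph:
    """Minimal sketch of a scene graph for 'person riding bike on road'."""
    objects: set = field(default_factory=set)
    attributes: dict = field(default_factory=dict)   # object -> list of adjective nodes
    relations: list = field(default_factory=list)    # (subject, predicate, object) triples

g = SceneGraph()
g.objects |= {"person", "bike", "road"}
g.attributes["bike"] = ["red"]
g.relations.append(("person", "ride", "bike"))
g.relations.append(("bike", "on", "road"))

# A language prior, as in the abstract's example, can rewrite a generic
# predicate into a richer one, e.g. (person, on, bike) -> (person, ride, bike),
# and infer nodes such as 'road' that are not visually evident.
print(g.relations)
```

Both pipelines (S → G → D → S and I → G → D → S) operate on graphs of this shape; the shared dictionary D re-encodes the node embeddings rather than the symbolic triples shown here.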

    Kervolutional Neural Networks

    Convolutional neural networks (CNNs) have enabled state-of-the-art performance in many computer vision tasks. However, little effort has been devoted to establishing convolution in non-linear space. Existing works mainly leverage activation layers, which can only provide point-wise non-linearity. To solve this problem, a new operation, kervolution (kernel convolution), is introduced to approximate complex behaviors of human perception systems by leveraging the kernel trick. It generalizes convolution, enhances model capacity, and captures higher-order interactions of features via patch-wise kernel functions, without introducing additional parameters. Extensive experiments show that kervolutional neural networks (KNN) achieve higher accuracy and faster convergence than baseline CNNs. (Oral paper at CVPR 2019.)
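The patch-wise kernel idea can be sketched directly: ordinary convolution computes an inner product between each patch and the filter, while kervolution replaces that inner product with a kernel function such as a polynomial kernel, whose hyperparameters are fixed rather than learned (hence no extra parameters). The function below is a minimal single-channel sketch under assumed hyperparameter names (`cp`, `dp`), not the paper's implementation.

```python
import numpy as np

def kervolution2d(img, w, kernel="polynomial", cp=1.0, dp=2):
    """2-D single-channel kervolution sketch: replaces the patch/filter
    inner product of convolution with a fixed-hyperparameter kernel.
    Real KNN layers batch this over channels and learn w by backprop."""
    kh, kw = w.shape
    H, W = img.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = img[i:i + kh, j:j + kw].ravel()
            ip = patch @ w.ravel()
            if kernel == "polynomial":      # (x.w + c_p)^d_p: injects higher-order terms
                out[i, j] = (ip + cp) ** dp
            elif kernel == "gaussian":      # RBF kernel, bandwidth fixed to 1 here
                out[i, j] = np.exp(-np.sum((patch - w.ravel()) ** 2))
            else:                           # linear kernel recovers plain convolution
                out[i, j] = ip
    return out

img = np.arange(16, dtype=float).reshape(4, 4)
w = np.ones((2, 2)) / 4                     # 2x2 averaging filter
print(kervolution2d(img, w, kernel="linear"))
```

With the linear kernel the operation reduces to ordinary cross-correlation, which is why kervolution is a strict generalization of convolution.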

    Performance Analysis of Multihop Wireless Links over Generalized-K Fading Channels

    The performance of multihop links is studied in this contribution by both analysis and simulations, when communicating over Generalized-K (K_G) fading channels. The performance metrics considered include symbol error rate (SER), outage probability, level crossing rate (LCR) and average outage duration (AOD). First, the expressions for both the SER and outage probability are derived by approximating the probability density function (PDF) of the end-to-end signal-to-noise ratio (SNR) with an equivalent end-to-end PDF. We show that this equivalent end-to-end PDF is accurate for analyzing the outage probability. Then, the second-order statistics of the LCR and AOD of multihop links are analyzed. Finally, the performance of multihop links is investigated both by simulations and by evaluation of the derived expressions. Our results show that the analytical expressions agree closely with the simulation results. The studies show that the K_G channel model, as well as the expressions derived in this paper, are highly efficient for predicting the performance metrics and statistics needed for the design of multihop communication links.
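The K_G fading model composes two effects: Nakagami-m multipath and Gamma-distributed shadowing, so the instantaneous SNR is a product of two independent Gamma variates. The sketch below generates such samples and estimates multihop outage probability by Monte Carlo, using the simplifying assumption that the link is in outage when any single hop drops below the threshold (a decode-and-forward-style bound, not the paper's equivalent end-to-end PDF). The shape parameters, thresholds, and function names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def kg_snr_samples(n, m=2.0, k=1.5, avg_snr_db=10.0):
    """Generalized-K (K_G) instantaneous SNR: the product of a unit-mean
    Gamma(m) multipath variate and a unit-mean Gamma(k) shadowing variate,
    scaled to the desired average SNR."""
    avg_snr = 10 ** (avg_snr_db / 10)
    multipath = rng.gamma(shape=m, scale=1.0 / m, size=n)   # E[.] = 1
    shadowing = rng.gamma(shape=k, scale=1.0 / k, size=n)   # E[.] = 1
    return avg_snr * multipath * shadowing

def multihop_outage(n_hops, thresh_db=5.0, n_trials=200_000):
    """Monte Carlo outage estimate: outage occurs when the weakest hop's
    SNR falls below the threshold (assumed min-SNR end-to-end model)."""
    thresh = 10 ** (thresh_db / 10)
    hop_snr = kg_snr_samples(n_trials * n_hops).reshape(n_trials, n_hops)
    return np.mean(np.min(hop_snr, axis=1) < thresh)

for hops in (1, 2, 3):
    print(f"{hops}-hop outage: {multihop_outage(hops):.4f}")
```

As expected, outage probability grows with hop count, since every additional hop adds another independent chance of a deep fade; the paper's analytical PDF expressions predict the same trend without simulation.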