    Wireless Interference Identification with Convolutional Neural Networks

    The steadily growing use of license-free frequency bands requires reliable coexistence management for deterministic medium utilization. For interference mitigation, proper wireless interference identification (WII) is essential. In this work we propose the first WII approach based upon deep convolutional neural networks (CNNs). The CNN learns its features directly from the data through self-optimization during an extensive, data-driven, GPU-based training process. We propose a CNN that operates on sensing snapshots with a limited duration of 12.8 µs and an acquisition bandwidth of 10 MHz. The CNN distinguishes between 15 classes, which represent packet transmissions of IEEE 802.11 b/g, IEEE 802.15.4, and IEEE 802.15.1 with overlapping frequency channels within the 2.4 GHz ISM band. We show that the CNN outperforms state-of-the-art WII approaches and achieves a classification accuracy greater than 95% for signal-to-noise ratios of at least -5 dB.
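    The abstract does not spell out the network layout, so the following is only an illustrative sketch: a small 1-D CNN (written in PyTorch) that maps an IQ sensing snapshot to one of the 15 interference classes. The snapshot length of 128 samples is an assumption (12.8 µs at a 10 MHz complex sampling rate), and all layer sizes are placeholders rather than the authors' architecture.

```python
# Illustrative sketch (not the authors' exact architecture): a small 1-D CNN
# that maps a 12.8 µs sensing snapshot -- assumed to be ~128 complex IQ samples
# at a 10 MHz acquisition bandwidth -- to one of 15 interference classes.
import torch
import torch.nn as nn

NUM_CLASSES = 15      # IEEE 802.11 b/g, 802.15.4 and 802.15.1 channel classes
SNAPSHOT_LEN = 128    # assumed: 12.8 µs * 10 MHz complex sampling

class WiiCnn(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(2, 32, kernel_size=7, padding=3),   # I/Q as 2 input channels
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * (SNAPSHOT_LEN // 4), 128),
            nn.ReLU(),
            nn.Linear(128, NUM_CLASSES),
        )

    def forward(self, x):                         # x: (batch, 2, SNAPSHOT_LEN)
        return self.classifier(self.features(x))

model = WiiCnn()
iq_batch = torch.randn(8, 2, SNAPSHOT_LEN)        # synthetic IQ snapshots
logits = model(iq_batch)                          # (8, 15) class scores
```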

    Raw Multi-Channel Audio Source Separation using Multi-Resolution Convolutional Auto-Encoders

    Supervised multi-channel audio source separation requires extracting useful spectral, temporal, and spatial features from the mixed signals. The success of many existing systems is therefore largely dependent on the choice of features used for training. In this work, we introduce a novel multi-channel, multi-resolution convolutional auto-encoder neural network that works on raw time-domain signals to determine appropriate multi-resolution features for separating the singing voice from stereo music. Our experimental results show that the proposed method can achieve multi-channel audio source separation without the need for hand-crafted features or any pre- or post-processing.
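    A minimal sketch of the multi-resolution idea, assuming parallel 1-D convolution branches with different kernel lengths over the raw stereo waveform; the kernel sizes, channel counts, and decoder below are illustrative placeholders, not the paper's network.

```python
# Sketch of the multi-resolution idea (assumed layer sizes, not the paper's exact
# network): parallel 1-D convolutions with different kernel lengths act as learned
# analysis filter banks over the raw stereo waveform; a convolutional decoder maps
# the concatenated features back to a stereo singing-voice estimate.
import torch
import torch.nn as nn

class MultiResConvAE(nn.Module):
    def __init__(self, kernel_sizes=(9, 65, 257), channels=32):
        super().__init__()
        # One encoder branch per temporal resolution; odd kernels with symmetric
        # padding keep the sequence length unchanged.
        self.branches = nn.ModuleList(
            nn.Conv1d(2, channels, k, padding=k // 2) for k in kernel_sizes
        )
        self.decoder = nn.Sequential(
            nn.Conv1d(channels * len(kernel_sizes), channels, 9, padding=4),
            nn.ReLU(),
            nn.Conv1d(channels, 2, 9, padding=4),   # back to 2 channels (stereo)
        )

    def forward(self, mix):                         # mix: (batch, 2, samples)
        feats = torch.cat([torch.relu(b(mix)) for b in self.branches], dim=1)
        return self.decoder(feats)                  # estimated vocal waveform

model = MultiResConvAE()
mixture = torch.randn(4, 2, 16384)                  # ~0.37 s of 44.1 kHz stereo audio
voice_estimate = model(mixture)                      # same shape as the input
# Training would minimise e.g. an L1/MSE loss against the clean vocal stem.
```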

    An Experimental Platform for Multi-spacecraft Phase-Array Communications

    The emergence of small satellites and CubeSats for interplanetary exploration will mean hundreds if not thousands of spacecraft exploring every corner of the solar system. Current methods for communication and tracking of deep space probes use ground-based systems such as the Deep Space Network (DSN). However, the increased communication demand will require radically new methods to ease communication congestion. Networks of communication relay satellites located at strategic locations such as geostationary orbit and Lagrange points are potential solutions. Instead of one large communication relay satellite, we could have scores of small satellites that utilize phased arrays to effectively operate as one large satellite. Excess payload capacity on rockets can be used to warehouse more small satellites in the communication network. The advantage of this network is that even if one or a few of the satellites are damaged or destroyed, the network still operates, albeit with degraded performance. The satellite network would operate in a distributed architecture, and some satellites may be dynamically repurposed to split off and communicate with multiple targets at once. The potential of this alternate communication architecture is significant, but it requires the development of satellite formation flying and networking technologies. Our research has found that neural-network control approaches such as the Artificial Neural Tissue can be used effectively to control multi-robot/multi-spacecraft systems and can produce human-competitive controllers. We have been developing a laboratory experiment platform called Athena to develop critical spacecraft control algorithms and cognitive communication methods. We briefly report on the development of the platform and our plans to gain insight into communication phased arrays for space.
    Comment: 4 pages, 10 figures, IEEE Cognitive Communications for Aerospace Applications Workshop
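    As a rough numerical illustration of why many small satellites phased together can stand in for one large relay (and not a model of the Athena platform itself), the sketch below computes the array factor of a uniform linear array: when the elements are phased toward the target, their fields add coherently, so the received power scales as N^2 relative to a single element.

```python
# Back-of-the-envelope illustration of coherent combining in a phased array of
# N cooperating transmitters (element geometry and spacing are assumptions).
import numpy as np

def array_factor(n_elements, spacing_wl, steer_deg, look_deg):
    """Normalized field magnitude of a uniform linear array steered to steer_deg."""
    n = np.arange(n_elements)
    k = 2 * np.pi                      # wavenumber with distances in wavelengths
    phase = k * spacing_wl * n * (np.sin(np.radians(look_deg))
                                  - np.sin(np.radians(steer_deg)))
    return np.abs(np.exp(1j * phase).sum())

N = 16                                  # number of cooperating satellites
peak = array_factor(N, spacing_wl=0.5, steer_deg=20.0, look_deg=20.0)
print(f"coherent field gain: {peak:.1f}x  -> power gain: {peak**2:.0f}x")
# -> 16x field gain, 256x (= N^2) power gain toward the steered direction.
```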

    Large-scale Isolated Gesture Recognition Using Convolutional Neural Networks

    This paper proposes three simple, compact yet effective representations of depth sequences, referred to respectively as Dynamic Depth Images (DDI), Dynamic Depth Normal Images (DDNI) and Dynamic Depth Motion Normal Images (DDMNI). These dynamic images are constructed from a sequence of depth maps using bidirectional rank pooling to effectively capture the spatio-temporal information. Such image-based representations enable us to fine-tune existing ConvNet models trained on image data for the classification of depth sequences, without introducing a large number of parameters to learn. Upon the proposed representations, a convolutional neural network (ConvNet) based method is developed for gesture recognition and evaluated on the Large-scale Isolated Gesture Recognition task of the ChaLearn Looking at People (LAP) challenge 2016. The method achieved 55.57% classification accuracy and ranked 2nd in this challenge, yet was very close to the best performance even though we only used depth data.
    Comment: arXiv admin note: text overlap with arXiv:1608.0633
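    The dynamic images rest on rank pooling over the depth sequence. Below is a minimal sketch of approximate rank pooling, applied forward and backward to mimic the bidirectional variant; it uses the common closed-form approximation over running means and is not the authors' exact ranking machine or the full DDI/DDNI/DDMNI pipeline.

```python
# Minimal sketch of (approximate) bidirectional rank pooling over a depth-map
# sequence, in the spirit of the dynamic-image construction; the paper's exact
# ranking machine and the DDI/DDNI/DDMNI preprocessing are not reproduced here.
import numpy as np

def approx_rank_pool(frames):
    """frames: (T, H, W) depth maps -> single (H, W) dynamic image.

    Uses the closed-form approximation d = sum_t (2t - T - 1) * V_t,
    where V_t is the running mean of the first t frames.
    """
    T = frames.shape[0]
    running_mean = np.cumsum(frames, axis=0) / np.arange(1, T + 1)[:, None, None]
    alpha = (2 * np.arange(1, T + 1) - T - 1)[:, None, None]
    return (alpha * running_mean).sum(axis=0)

def bidirectional_dynamic_images(frames):
    """Forward and backward dynamic depth images, returned as a pair of 2-D maps."""
    return approx_rank_pool(frames), approx_rank_pool(frames[::-1])

depth_seq = np.random.rand(32, 240, 320).astype(np.float32)   # synthetic sequence
ddi_fwd, ddi_bwd = bidirectional_dynamic_images(depth_seq)
# Each map can be rescaled to [0, 255] and fed to an image ConvNet for fine-tuning.
```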