
    Seismic Performance and Design of Bridge Foundations in Liquefiable Ground with a Frozen Crust

    INE/AUTC 12.3

    Convergence of martingale solution to slow-fast systems with jumps modulated by Markovian switching

    This paper investigates the convergence of martingale solutions to slow-fast systems with jumps modulated by Markovian switching over weakly irreducible classes. The key point is to handle the slow-fast dynamics and the two-time-scale Markovian switching simultaneously: because the fast component and the Markovian switching coexist, averaging the slow component requires two invariant measures, one for each. We first study slow-fast systems modulated by Markov chains with a single weakly irreducible class and prove existence and uniqueness of the solution. Weak convergence is then established with the martingale method, based on tightness and the exponential ergodicity of the fast component, where appropriately chosen perturbed test functions play a decisive role. Finally, we extend the results to the case of multiple weakly irreducible classes.
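
    As a hedged illustration only (the coefficients b, sigma, f, g, c and the jump measure below are generic notational assumptions, not the paper's exact model), a slow-fast system with jumps modulated by a switching process alpha_t^epsilon is typically written as

    \[
    \begin{aligned}
    dX_t^{\varepsilon} &= b\bigl(X_t^{\varepsilon}, Y_t^{\varepsilon}, \alpha_t^{\varepsilon}\bigr)\,dt
      + \sigma\bigl(X_t^{\varepsilon}, \alpha_t^{\varepsilon}\bigr)\,dW_t
      + \int_{\mathcal{Z}} c\bigl(X_{t-}^{\varepsilon}, \alpha_{t-}^{\varepsilon}, z\bigr)\,\widetilde{N}(dt, dz), \\
    dY_t^{\varepsilon} &= \frac{1}{\varepsilon}\, f\bigl(X_t^{\varepsilon}, Y_t^{\varepsilon}, \alpha_t^{\varepsilon}\bigr)\,dt
      + \frac{1}{\sqrt{\varepsilon}}\, g\bigl(X_t^{\varepsilon}, Y_t^{\varepsilon}, \alpha_t^{\varepsilon}\bigr)\,dB_t,
    \end{aligned}
    \]

    where X^epsilon is the slow component, Y^epsilon the fast component, and alpha^epsilon a two-time-scale Markov chain. In the averaged limit the slow drift is integrated against both the invariant measure of the fast component and the quasi-stationary distribution of the switching chain, which is why two invariant measures appear in the averaging step.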

    Deep Neural Network Architectures for Modulation Classification

    In this work, we investigate the value of employing deep learning for the task of wireless signal modulation recognition. Recently, a framework was introduced in [1] that generates a dataset using GNU Radio, mimicking the imperfections of a real wireless channel, and uses 10 different modulation types. Further, a convolutional neural network (CNN) architecture was developed and shown to deliver performance exceeding that of expert-based approaches. Here, we follow the framework of [1] and find deep neural network architectures that deliver higher accuracy than the state of the art. We tested the architecture of [1] and found that it achieves an accuracy of approximately 75% in correctly recognizing the modulation type. We first tune the CNN architecture of [1] and find a design with four convolutional layers and two dense layers that gives an accuracy of approximately 83.8% at high SNR. We then develop architectures based on the recently introduced ideas of Residual Networks (ResNet [2]) and Densely Connected Networks (DenseNet [3]) to achieve high-SNR accuracies of approximately 83.5% and 86.6%, respectively. Finally, we introduce a Convolutional Long Short-term Deep Neural Network (CLDNN [4]) to achieve an accuracy of approximately 88.5% at high SNR.
    Comment: 5 pages, 10 figures, in Proc. Asilomar Conference on Signals, Systems, and Computers, Nov. 201
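
    As a minimal, hypothetical sketch of the kind of tuned architecture described above (the 2x128 I/Q input shape follows the dataset of [1]; the filter counts, kernel sizes, and dropout rate are assumptions, not the authors' exact configuration), a four-convolutional-layer, two-dense-layer classifier could be written in Keras as follows:

    # Hypothetical 4-conv / 2-dense CNN for 10-class modulation recognition
    # on 2x128 I/Q samples; layer sizes are illustrative assumptions only.
    from tensorflow.keras import layers, models

    def build_cnn(input_shape=(2, 128, 1), num_classes=10):
        model = models.Sequential([
            layers.Input(shape=input_shape),
            # Four convolutional layers (filter counts assumed, not from the paper)
            layers.Conv2D(256, (1, 3), padding="same", activation="relu"),
            layers.Conv2D(128, (2, 3), padding="same", activation="relu"),
            layers.Conv2D(64, (1, 3), padding="same", activation="relu"),
            layers.Conv2D(64, (1, 3), padding="same", activation="relu"),
            layers.Flatten(),
            # Two dense layers; the final one is the softmax classifier (layout assumed)
            layers.Dense(128, activation="relu"),
            layers.Dropout(0.5),
            layers.Dense(num_classes, activation="softmax"),
        ])
        model.compile(optimizer="adam",
                      loss="categorical_crossentropy",
                      metrics=["accuracy"])
        return model

    model = build_cnn()
    model.summary()

    Training such a sketch on the GNU Radio dataset of [1] would then amount to calling model.fit on the labeled I/Q frames, with accuracy evaluated per SNR bin as in the paper.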