
    Stability analysis for delayed quaternion-valued neural networks via nonlinear measure approach

    In this paper, the existence and stability analysis of quaternion-valued neural networks (QVNNs) with time delay are considered. Firstly, the QVNNs are equivalently transformed into four real-valued systems. Then, based on Lyapunov theory, the nonlinear measure approach, and inequality techniques, some sufficient criteria are derived to ensure the existence and uniqueness of the equilibrium point as well as the global stability of delayed QVNNs. In addition, the provided criteria are presented in the form of linear matrix inequalities (LMIs), which can be easily checked with the LMI toolbox in MATLAB. Finally, two simulation examples are given to verify the effectiveness of the obtained results. Moreover, the reduced conservatism of the obtained results is also shown by two comparison examples.
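    The transformation into real-valued systems mentioned above rests on the standard real matrix representation of quaternion multiplication. The sketch below illustrates only that generic decomposition, not the paper's specific QVNN model; the function names (real_block, hamilton) are hypothetical.

```python
import numpy as np

def real_block(q):
    """Real 4x4 representation of left-multiplication by the
    quaternion q = (q0, q1, q2, q3) = q0 + q1*i + q2*j + q3*k."""
    q0, q1, q2, q3 = q
    return np.array([
        [q0, -q1, -q2, -q3],
        [q1,  q0, -q3,  q2],
        [q2,  q3,  q0, -q1],
        [q3, -q2,  q1,  q0],
    ])

def hamilton(a, b):
    """Quaternion (Hamilton) product computed via the real block."""
    return real_block(a) @ np.asarray(b)

# A quaternion-valued connection weight acting on a quaternion state thus
# reduces to a real 4x4 (or 4n x 4n) linear map, which is why a QVNN can be
# rewritten as four coupled real-valued systems.
a = np.array([1.0, 2.0, 0.5, -1.0])
b = np.array([0.0, 1.0, -2.0, 3.0])
print(hamilton(a, b))
```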

    Adaptive land classification and new class generation by unsupervised double-stage learning in Poincare sphere space for polarimetric synthetic aperture radars

    Polarimetric satellite-borne synthetic aperture radar (PolSAR) is expected to provide land-usage information globally and precisely. In this paper, we propose an unsupervised double-stage learning land state classification system using a self-organizing map (SOM) that utilizes ensemble variation vectors. We find that the Poincare sphere parameters representing the polarization state of the scattered wave carry specific features of the land state, in particular in their ensemble variation rather than their spatial variation. Experiments demonstrate that the proposed PolSAR double-stage SOM system generates new classes appropriately, resulting in successful fine land classification and/or appropriate new class generation.
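    For readers unfamiliar with SOMs, the following is a minimal single-stage SOM training loop over generic feature vectors (for example, ensemble statistics of Poincare sphere parameters). It is only a sketch of the basic algorithm, not the proposed double-stage system; all names and hyperparameter values are illustrative.

```python
import numpy as np

def train_som(features, grid=(6, 6), epochs=20, lr0=0.5, sigma0=2.0, seed=0):
    """Minimal self-organizing map; features is (n_samples, n_dims)."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.normal(size=(h * w, features.shape[1]))
    coords = np.array([(r, c) for r in range(h) for c in range(w)], float)
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)            # decaying learning rate
        sigma = sigma0 * (1 - epoch / epochs) + 1e-3
        for x in rng.permutation(features):
            bmu = np.argmin(np.linalg.norm(weights - x, axis=1))  # best-matching unit
            d2 = np.sum((coords - coords[bmu]) ** 2, axis=1)
            nbh = np.exp(-d2 / (2 * sigma ** 2))                  # neighborhood weights
            weights += lr * nbh[:, None] * (x - weights)
    return weights
```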

    A Quaternion Gated Recurrent Unit Neural Network for Sensor Fusion

    Recurrent Neural Networks (RNNs) are known for their ability to learn relationships within temporal sequences. Gated Recurrent Unit (GRU) networks have found use in challenging time-dependent applications such as Natural Language Processing (NLP), financial analysis and sensor fusion due to their capability to cope with the vanishing gradient problem. GRUs are also known to be more computationally efficient than their variant, the Long Short-Term Memory neural network (LSTM), due to their less complex structure and, as such, are more suitable for applications requiring more efficient management of computational resources. Many such applications require a stronger mapping of their features to further enhance the prediction accuracy. A novel Quaternion Gated Recurrent Unit (QGRU) is proposed in this paper, which leverages the internal and external dependencies within the quaternion algebra to map correlations within and across multidimensional features. The QGRU can be used to efficiently capture the inter- and intra-dependencies within multidimensional features, unlike the GRU, which only captures the dependencies within the sequence. Furthermore, the performance of the proposed method is evaluated on a sensor fusion problem involving navigation in Global Navigation Satellite System (GNSS)-deprived environments as well as a human activity recognition problem. The results obtained show that the QGRU produces competitive results with almost 3.7 times fewer parameters compared to the GRU.
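    The parameter saving reported above comes from weight sharing in the Hamilton product. The sketch below shows a generic quaternion-valued dense operation of the kind that underlies quaternion gates; it is not the authors' QGRU implementation, and the function name quaternion_dense is hypothetical.

```python
import numpy as np

def quaternion_dense(x, w):
    """Quaternion-valued dense layer.
    x: (..., 4*in_units)  real/i/j/k parts concatenated
    w: (4, in_units, out_units) one real array per quaternion component."""
    r, i, j, k = np.split(x, 4, axis=-1)
    wr, wi, wj, wk = w
    # Hamilton product of the input quaternion with the weight quaternion,
    # applied to every (in_unit, out_unit) pair.
    out_r = r @ wr - i @ wi - j @ wj - k @ wk
    out_i = r @ wi + i @ wr + j @ wk - k @ wj
    out_j = r @ wj - i @ wk + j @ wr + k @ wi
    out_k = r @ wk + i @ wj - j @ wi + k @ wr
    return np.concatenate([out_r, out_i, out_j, out_k], axis=-1)

# A real dense layer mapping 4*n -> 4*m features needs 16*n*m weights, while
# the quaternion layer reuses its 4 component matrices (4*n*m weights), which
# is the source of the roughly 4x per-layer parameter reduction.
x = np.random.randn(8, 4 * 16)        # batch of 8, 16 quaternion features
w = np.random.randn(4, 16, 32) * 0.1  # maps 16 -> 32 quaternion units
y = quaternion_dense(x, w)            # shape (8, 4 * 32)
```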

    Alternating Deep Low Rank Approach for Exponential Function Reconstruction and Its Biomedical Magnetic Resonance Applications

    The exponential function is a fundamental signal form in general signal processing and biomedical applications, such as magnetic resonance spectroscopy and imaging. How to reduce the sampling time of these signals is an important problem. Sub-Nyquist sampling can accelerate signal acquisition but introduces artifacts. Recently, the low-rankness of these exponentials has been applied to implicitly constrain the deep learning network through the unrolling of a low-rank Hankel factorization algorithm. However, depending only on the implicit low-rank constraint cannot provide robust reconstruction under, for example, sampling-rate mismatches. In this work, by introducing an explicit low-rank prior to constrain the deep learning, we propose an Alternating Deep Low Rank approach (ADLR) that utilizes deep learning and optimization solvers alternately. The former solver accelerates the reconstruction while the latter corrects the reconstruction error arising from the mismatch. Experiments on both general exponential functions and realistic biomedical magnetic resonance data show that, compared with state-of-the-art methods, ADLR achieves much lower reconstruction error and effectively alleviates the decrease of reconstruction quality under sampling-rate mismatches.
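    The explicit low-rank prior on exponential signals can be illustrated with a Hankel-matrix truncated-SVD projection, in the spirit of Cadzow-style denoising. This is a generic sketch of that prior, not the ADLR network; the helper names (hankel, low_rank_project) and the rank/size choices are illustrative.

```python
import numpy as np

def hankel(x, rows):
    """Build a Hankel matrix from a 1-D (possibly complex) signal x."""
    cols = len(x) - rows + 1
    return np.array([x[i:i + cols] for i in range(rows)])

def low_rank_project(x, rank, rows=None):
    """One explicit low-rank step: truncate the Hankel spectrum to `rank`
    (the number of exponential components) and average the anti-diagonals
    back into a signal."""
    rows = rows or len(x) // 2
    H = hankel(x, rows)
    U, s, Vh = np.linalg.svd(H, full_matrices=False)
    Hr = (U[:, :rank] * s[:rank]) @ Vh[:rank]
    out = np.zeros(len(x), dtype=complex)
    counts = np.zeros(len(x))
    for i in range(Hr.shape[0]):
        for j in range(Hr.shape[1]):
            out[i + j] += Hr[i, j]
            counts[i + j] += 1
    return out / counts

# Example: a noisy sum of two damped exponentials is well approximated
# by a rank-2 Hankel projection.
n = np.arange(128)
sig = np.exp((-0.02 + 0.4j) * n) + 0.7 * np.exp((-0.01 - 0.9j) * n)
noisy = sig + 0.05 * (np.random.randn(128) + 1j * np.random.randn(128))
denoised = low_rank_project(noisy, rank=2)
```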