
    Surface Networks

    We study data-driven representations for three-dimensional triangle meshes, one of the most prevalent objects used to represent 3D geometry. Recent works have developed models that exploit the intrinsic geometry of manifolds and graphs, namely Graph Neural Networks (GNNs) and their spectral variants, which learn from the local metric tensor via the Laplacian operator. Despite offering excellent sample complexity and built-in invariances, intrinsic geometry alone is invariant to isometric deformations, making it unsuitable for many applications. To overcome this limitation, we propose several upgrades to GNNs that leverage extrinsic differential geometry properties of three-dimensional surfaces, increasing their modeling power. In particular, we propose to exploit the Dirac operator, whose spectrum detects principal curvature directions, in stark contrast with the classical Laplace operator, which directly measures mean curvature. We coin the resulting models Surface Networks (SN). We prove that these models define shape representations that are stable to deformation and to discretization, and we demonstrate the efficiency and versatility of SNs on two challenging tasks: temporal prediction of mesh deformations under non-linear dynamics and generative modeling using a variational autoencoder framework with encoders/decoders given by SNs.
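    To make the intrinsic ingredient concrete, the following minimal sketch (not the paper's Surface Network implementation) builds a combinatorial graph Laplacian from triangle-mesh connectivity and applies one Laplacian-based feature-propagation step; the cotangent Laplacian and the Dirac operator described in the abstract would replace this simple operator.

```python
# Minimal sketch (not the paper's implementation): build a combinatorial graph
# Laplacian from triangle-mesh connectivity and apply one Laplacian-based
# feature-propagation step, the basic ingredient of the spectral GNN layers
# the abstract contrasts with the Dirac-based Surface Networks.
import numpy as np

def mesh_graph_laplacian(num_vertices, faces):
    """Unnormalized graph Laplacian L = D - A from triangle connectivity."""
    A = np.zeros((num_vertices, num_vertices))
    for i, j, k in faces:
        for a, b in ((i, j), (j, k), (k, i)):
            A[a, b] = A[b, a] = 1.0
    D = np.diag(A.sum(axis=1))
    return D - A

def laplacian_layer(L, X, W, alpha=0.1):
    """One GNN-style update: smooth features with L, then mix channels with W."""
    return np.tanh((X - alpha * L @ X) @ W)

# Tiny example: a single tetrahedron with 3-D vertex positions as input features.
faces = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]
X = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
L = mesh_graph_laplacian(4, faces)
rng = np.random.default_rng(0)
W = rng.standard_normal((3, 8)) * 0.1
H = laplacian_layer(L, X, W)
print(H.shape)  # (4, 8): 8 output features per vertex
```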

    Isotropization of Quaternion-Neural-Network-Based PolSAR Adaptive Land Classification in Poincare-Sphere Parameter Space

    Quaternion neural networks (QNNs) achieve high accuracy in polarimetric synthetic aperture radar (PolSAR) classification for various observation data by working in Poincare-sphere parameter space. The high performance arises from the good generalization realized by a QNN acting as a 3-D rotation combined with amplification/attenuation, which is consistent with the isotropy of the polarization-state representation it operates on. However, two anisotropic factors remain that degrade the classification capability from its ideal performance. In this letter, we propose an isotropic variation vector and an isotropic activation function to improve the classification ability. Experiments demonstrate the resulting enhancement of the QNN's ability.
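    The following is an illustrative sketch only, not the letter's network: it shows the 3-D rotation plus amplification/attenuation that a quaternion operation realizes on a Poincare-sphere point, using a hand-picked axis, angle, and gain.

```python
# Illustrative sketch only: a unit quaternion rotating (and a scalar gain
# amplifying/attenuating) a 3-D point on the Poincare sphere, the kind of
# geometric transform a quaternion neuron realizes according to the abstract.
import numpy as np

def q_mul(p, q):
    """Hamilton product of quaternions given as (w, x, y, z) arrays."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def rotate(v, axis, angle, gain=1.0):
    """Rotate 3-D vector v about `axis` by `angle`, then scale by `gain`."""
    axis = axis / np.linalg.norm(axis)
    q = np.concatenate(([np.cos(angle / 2)], np.sin(angle / 2) * axis))
    q_conj = q * np.array([1, -1, -1, -1])
    v_q = np.concatenate(([0.0], v))
    return gain * q_mul(q_mul(q, v_q), q_conj)[1:]

# A fully polarized state on the Poincare sphere, rotated 90 degrees about S3.
s = np.array([1.0, 0.0, 0.0])
print(rotate(s, np.array([0.0, 0.0, 1.0]), np.pi / 2))  # approximately [0, 1, 0]
```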

    Quaternion Neuro-Fuzzy Learning Algorithm for Fuzzy Rule Generation

    Neuro-fuzzy learning algorithms with Gaussian-type membership functions based on the gradient-descent method are well known for generating or tuning fuzzy rules. In this paper, we propose a new learning approach, the quaternion neuro-fuzzy learning algorithm. This method extends the conventional one to four-dimensional space by using a quaternion neural network that maps quaternions to real values. The inputs, antecedent membership functions, and consequent singletons are quaternions, while the output is real; four-dimensional inputs are better represented by quaternions than by real values. We compared the proposed method with the conventional one on several function-identification problems and found that it outperformed the counterpart: in the best cases, the number of rules was reduced from 625 to 5, the number of epochs to one fortieth, and the error to one tenth. The Second International Conference on Robot, Vision and Signal Processing, December 10-12, 2013, Kitakyushu, Japan.
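    Below is a hedged sketch of the inference side described in the abstract (quaternion inputs, Gaussian antecedent membership functions, real consequent singletons); the rule centers, widths, and singletons are invented for illustration, and the gradient-descent learning rule is omitted.

```python
# Hedged sketch of the ingredients named in the abstract: Gaussian membership
# functions over a quaternion input, product firing strengths, and a weighted
# average of real consequent singletons. Centers/widths here are made up for
# illustration; the paper's gradient-descent learning rule is omitted.
import numpy as np

def gaussian_membership(x, center, sigma):
    """Componentwise Gaussian membership of a quaternion (4-vector) input."""
    return np.exp(-((x - center) ** 2) / (2.0 * sigma ** 2))

def fuzzy_inference(x, centers, sigmas, singletons):
    """Sugeno-style output: weighted average of real consequent singletons."""
    # Firing strength of each rule = product of its four membership values.
    strengths = np.array([
        np.prod(gaussian_membership(x, c, s)) for c, s in zip(centers, sigmas)
    ])
    return float(strengths @ singletons / strengths.sum())

# Two hypothetical rules over a quaternion input (w, x, y, z).
centers = np.array([[0.0, 0.0, 0.0, 0.0], [1.0, 1.0, 1.0, 1.0]])
sigmas = np.array([[1.0] * 4, [1.0] * 4])
singletons = np.array([-1.0, 1.0])   # real-valued consequents
print(fuzzy_inference(np.array([0.8, 0.9, 1.0, 1.1]), centers, sigmas, singletons))
```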

    A Quaternion Gated Recurrent Unit Neural Network for Sensor Fusion

    Recurrent Neural Networks (RNNs) are known for their ability to learn relationships within temporal sequences. Gated Recurrent Unit (GRU) networks have found use in challenging time-dependent applications such as Natural Language Processing (NLP), financial analysis, and sensor fusion due to their ability to cope with the vanishing gradient problem. GRUs are also more computationally efficient than their variant, the Long Short-Term Memory network (LSTM), owing to their simpler structure, and are therefore better suited to applications requiring efficient management of computational resources. Many such applications require a stronger mapping of their features to further enhance prediction accuracy. This paper proposes a novel Quaternion Gated Recurrent Unit (QGRU), which leverages the internal and external dependencies within the quaternion algebra to map correlations within and across multidimensional features. Unlike the GRU, which only captures dependencies within the sequence, the QGRU efficiently captures both inter- and intra-dependencies within multidimensional features. The performance of the proposed method is evaluated on a sensor fusion problem involving navigation in Global Navigation Satellite System (GNSS)-deprived environments as well as on a human activity recognition problem. The results show that the QGRU produces competitive results with almost 3.7 times fewer parameters than the GRU.
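    As a hedged illustration of the quaternion building block such a QGRU relies on (not the authors' code), the sketch below applies a single quaternion weight to a quaternion input via the Hamilton product and passes the result through a sigmoid, as a gate inside a quaternion GRU might.

```python
# Minimal sketch, not the paper's code: a quaternion linear map built from the
# Hamilton product, the building block that typically replaces the real matrix
# products inside a quaternion GRU's gates. Shapes and values are illustrative.
import numpy as np

def hamilton_product_matrix(w, x, y, z):
    """4x4 real matrix representing left multiplication by quaternion (w,x,y,z)."""
    return np.array([
        [w, -x, -y, -z],
        [x,  w, -z,  y],
        [y,  z,  w, -x],
        [z, -y,  x,  w],
    ])

def quaternion_linear(q_in, weight):
    """Apply a single quaternion weight to a quaternion input (both 4-vectors)."""
    return hamilton_product_matrix(*weight) @ q_in

# One GRU-style gate value from a quaternion input and a quaternion weight:
# four real parameters are shared across the gate's 4-D output, which is where
# the parameter savings cited in the abstract come from.
q_in = np.array([0.5, -0.1, 0.3, 0.2])
weight = np.array([0.2, 0.1, -0.3, 0.05])
gate = 1.0 / (1.0 + np.exp(-quaternion_linear(q_in, weight)))  # sigmoid
print(gate)
```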

    Forward and Inverse Kinematics Solution of A 3-DOF Articulated Robotic Manipulator Using Artificial Neural Network

    In this research paper, a multilayer feedforward neural network (MLFFNN) is designed and described for solving the forward and inverse kinematics of a 3-DOF articulated robot. For forward kinematics, the joint variables are used as inputs to the network, and the positions and orientations of the robot end-effector are the outputs. For inverse kinematics, the network uses only the positions of the robot end-effector as inputs, with the joint variables as outputs. In both cases, the proposed multilayer network is trained with the Levenberg-Marquardt (LM) method. A sinusoidal motion with variable frequencies is commanded to the three joints of the articulated manipulator, and the resulting data are collected for training, testing, and validation. The experimental simulation results demonstrate that the proposed artificial neural network, inspired by biological processes, is trained very effectively, as indicated by a mean squared error (MSE) approximately equal to zero: the smallest MSE is 4.592×10^(-8) for the forward kinematics and 9.071×10^(-7) for the inverse kinematics. This shows that the proposed MLFFNN is highly reliable and robust in minimizing error. The proposed method is applied to a 3-DOF manipulator and could be extended to more complex robots such as 6-DOF or 7-DOF manipulators.
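    Below is a minimal sketch of the data-generation step only, assuming a planar 3-link arm with made-up link lengths; it produces joint-angle/end-effector pairs from sinusoidal joint commands of the kind described, while the Levenberg-Marquardt-trained MLFFNN itself is not reproduced.

```python
# Hedged sketch of the data-generation step only: forward kinematics of a
# hypothetical 3-DOF planar articulated arm (link lengths are assumptions),
# sampled with sinusoidal joint trajectories as described in the abstract.
# The Levenberg-Marquardt-trained MLFFNN itself is not reproduced here.
import numpy as np

L1, L2, L3 = 0.3, 0.25, 0.15  # assumed link lengths in metres

def forward_kinematics(q1, q2, q3):
    """End-effector (x, y) and orientation for a planar 3-link arm."""
    x = L1*np.cos(q1) + L2*np.cos(q1+q2) + L3*np.cos(q1+q2+q3)
    y = L1*np.sin(q1) + L2*np.sin(q1+q2) + L3*np.sin(q1+q2+q3)
    return x, y, q1 + q2 + q3

# Sinusoidal joint commands at different frequencies, as in the experiment setup.
t = np.linspace(0.0, 10.0, 1000)
q = np.stack([np.sin(0.5*t), 0.5*np.sin(1.0*t), 0.25*np.sin(2.0*t)], axis=1)
targets = np.array([forward_kinematics(*row) for row in q])
print(q.shape, targets.shape)  # (1000, 3) inputs, (1000, 3) outputs
```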