21 research outputs found

    Modification of Gesture-Determined-Dynamic Function with Consideration of Margins for Motion Planning of Humanoid Robots

    Full text link
    The gesture-determined-dynamic function (GDDF) offers an effective way to handle the control problems of humanoid robots. Specifically, the GDDF is used to constrain the movements of a humanoid robot's dual arms and to steer them into specific gestures for demanding tasks under certain conditions. However, the scheme still has a deficiency: through experiments, we found that the joints of the dual arms, which can be regarded as redundant manipulators, could slightly exceed their limits at the joint-angle level. The performance depends directly on the GDDF parameters designed beforehand, which leaves the method without adaptability in practical applications. In this paper, a modified GDDF scheme with consideration of margins (MGDDF) is proposed. The MGDDF scheme is based on the quadratic programming (QP) framework, which is widely applied to the redundancy-resolution problems of robot arms. Moreover, three margins are introduced into the proposed MGDDF scheme to avoid joint limits. With these margins, the joints of the humanoid robot's manipulators will not exceed their limits, and the potential damage that might be caused by exceeding them is completely avoided. Computer simulations conducted in MATLAB further verify the feasibility and superiority of the proposed MGDDF scheme.
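As a rough illustration of the kind of QP formulation the abstract refers to (this is a generic redundancy-resolution sketch, not the paper's MGDDF scheme; the Jacobian, joint limits and margin below are invented), one step can be posed as a bounded least-squares problem: minimise the end-effector velocity error subject to joint bounds shrunk by a safety margin.

```python
# Sketch only: one redundancy-resolution step as a QP,
#   min ||J*dq - v||^2  s.t.  q_min + margin <= q + dq*dt <= q_max - margin,
# solved as bounded least squares. All numbers are hypothetical.
import numpy as np
from scipy.optimize import lsq_linear

J = np.array([[1.0, 0.5, 0.2],
              [0.0, 1.0, 0.4]])      # hypothetical 2x3 arm Jacobian
v = np.array([0.3, -0.1])            # desired end-effector velocity
q = np.array([0.0, 1.2, -0.8])      # current joint angles
q_min, q_max = -1.5, 1.5             # joint-angle limits
margin = 0.1                         # margin kept clear of the limits
dt = 0.01                            # integration step

# Joint-velocity bounds that keep q + dq*dt inside the shrunken limits.
lb = (q_min + margin - q) / dt
ub = (q_max - margin - q) / dt

dq = lsq_linear(J, v, bounds=(lb, ub)).x
print(dq)
```

Because the bounds already include the margin, the integrated joint angles can never reach the hard limits, which is the intuition behind introducing margins into the QP.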

    Cascading Neural Networks for Upper-body Gesture Recognition

    Get PDF
    Abstract - Gesture recognition has many applications, ranging from health care to entertainment. However, for it to be a feasible method of human-computer interaction, it is essential that only intentional movements are interpreted and that the system works for a wide variety of users. To date, very few systems have been tested for the real world, where users are inexperienced in gesture performance, resulting in data that is noisier in terms of gesture starts, gesture motion and gesture ends. In addition, few systems have taken into consideration the dominant hand used when performing gestures. The work presented in this paper takes this into consideration by first selecting key-frames from a gesture sequence and then cascading neural networks for left- and right-hand gesture classification. The first neural network determines which hand is being used for gesture performance, and the second neural network then recognises the gesture. The performance of the system is tested using the VisApp2013 gesture dataset, which consists of four left- and right-hand gestures. This dataset is unique in that the test gesture samples were performed by untrained users to simulate a real-world environment. Through key-frame selection and cascading neural networks, the system accuracy improves from 79.8% to 95.6%.
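The cascade structure described in the abstract can be sketched as a two-stage pipeline: a first-stage model decides which hand performs the gesture, and a hand-specific second-stage model then classifies the gesture. The stub "classifiers" and feature vectors below are invented placeholders for the paper's neural networks.

```python
# Sketch of the cascading idea with stand-in classifiers (not the paper's
# networks): stage 1 routes by hand, stage 2 names the gesture.
import numpy as np

def hand_classifier(keyframes):
    # Stand-in for the first network: decide left vs right hand from the
    # mean horizontal position of the key-frame features.
    return "left" if keyframes[:, 0].mean() < 0 else "right"

def make_gesture_classifier(hand):
    # Stand-in for a hand-specific second network (four gesture classes).
    def classify(keyframes):
        return f"{hand}-gesture-{int(keyframes.sum()) % 4}"
    return classify

second_stage = {h: make_gesture_classifier(h) for h in ("left", "right")}

def recognise(keyframes):
    hand = hand_classifier(keyframes)     # stage 1: which hand?
    return second_stage[hand](keyframes)  # stage 2: which gesture?
```

Routing to a hand-specific model means each second-stage classifier only has to separate that hand's four gestures, which is the design motivation the paper gives for cascading.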

    A developmental approach to robotic pointing via human–robot interaction

    Get PDF
    This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/3.0/). The ability to point is recognised as an essential skill for a robot in its communication and social interaction. This paper introduces a developmental learning approach to robotic pointing that exploits the interactions between a human and a robot. The approach is inspired by observing the process of human infant development. It works by first applying a reinforcement learning algorithm to guide the robot to make attempted movements towards a salient object that is outside the robot's initial reachable space. Through such movements, a human demonstrator is able to understand that the robot desires to touch the target and, consequently, to assist the robot in eventually reaching the object successfully. The human-robot interaction helps establish an understanding of pointing gestures in the perception of both the human and the robot. From this, the robot can collect the successful pointing gestures in an effort to learn how to interact with humans. Developmental constraints are utilised to drive the entire learning procedure. The work is supported by experimental evaluation, demonstrating that the proposed approach can lead the robot to gradually gain the desired pointing ability. It also shows that the resulting robot system exhibits developmental progress and features similar to those of human infants.
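A toy version of the learning loop the abstract describes might look as follows. This is a generic bandit-style reinforcement sketch, not the paper's algorithm or robot platform; the candidate angles, reward rule and learning rate are all invented.

```python
# Toy sketch: the robot tries pointing directions, a simulated "human"
# rewards attempts aimed near an out-of-reach target, and action values
# are updated with a simple incremental rule.
import numpy as np

rng = np.random.default_rng(0)
directions = np.linspace(0, np.pi, 8)   # candidate pointing angles
target_angle = np.pi / 3                # direction of the salient object
values = np.zeros_like(directions)      # learned value of each attempt
alpha = 0.5                             # learning rate

for step in range(500):
    # epsilon-greedy choice of an attempt movement
    if rng.random() < 0.2:
        i = int(rng.integers(len(directions)))
    else:
        i = int(values.argmax())
    # human feedback: reward attempts aimed close to the target
    reward = 1.0 if abs(directions[i] - target_angle) < 0.2 else 0.0
    values[i] += alpha * (reward - values[i])

best = directions[values.argmax()]      # learned pointing direction
```

Over time the highest-valued direction converges on the one the human rewards, mirroring how the demonstrator's assistance shapes the robot's pointing attempts.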

    Is There Any Hope for Developing Automated Translation Technology for Sign Languages?

    Get PDF
    This article discusses the prerequisites for the machine translation of sign languages. The topic is complex, involving questions of technology, interaction design, linguistics and culture. At the moment, despite the affordances provided by the technology, automated translation between signed and spoken languages, or between sign languages, is not possible. The very need for such translation and its associated technology can also be questioned. Yet we believe that contributing to the improvement of sign language detection, processing and even sign language translation to spoken languages in the future is a matter that should not be abandoned. However, we argue that this work should focus on all necessary aspects of sign languages and sign language user communities. Thus, a more diverse and critical perspective on these issues is needed in order to avoid the generalisations and bias that are often manifested within dominant research paradigms, particularly in the fields of spoken language research and the speech community.

    Proceedings. 22. Workshop Computational Intelligence, Dortmund, 6. - 7. Dezember 2012

    Get PDF
    This proceedings volume contains the contributions to the 22nd workshop "Computational Intelligence" of Technical Committee 5.14 of the VDI/VDE Society for Measurement and Automatic Control (GMA), held in Dortmund on 6-7 December 2012. The focus is on methods, applications and tools for fuzzy systems, artificial neural networks, evolutionary algorithms and data-mining techniques, as well as on the comparison of methods using industrial applications and benchmark problems.

    Locally regularized sliced inverse regression based 3D hand gesture recognition on a dance robot

    Full text link
    Gesture recognition plays an important role in human-machine interactions (HMIs) for multimedia entertainment. In this paper, we present a dimension-reduction-based approach for dynamic real-time hand gesture recognition. The hand gestures are recorded as acceleration signals by a handheld device with a built-in 3-axis accelerometer and represented by discrete cosine transform (DCT) coefficients. To recognize different hand gestures, we develop a new dimension reduction method, locally regularized sliced inverse regression (LR-SIR), to find an effective low-dimensional subspace in which different hand gestures are well separable; recognition can then be performed with simple and efficient classifiers, e.g., the nearest-mean rule, the k-nearest-neighbour rule and the support vector machine. LR-SIR is built upon the well-known sliced inverse regression (SIR) but overcomes its limitation of ignoring the local geometry of the data distribution. Moreover, LR-SIR can be effectively and efficiently solved by eigen-decomposition. Finally, we apply LR-SIR-based gesture recognition to control our recently developed dance robot for multimedia entertainment. Thorough empirical studies on 'digits' gesture recognition suggest the effectiveness of the new gesture recognition scheme for HMI. © 2012 Elsevier Inc. All rights reserved.
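For intuition, here is a minimal sketch of plain SIR, the base method that LR-SIR builds on; the local-geometry regularization itself is not shown, and the synthetic data stands in for the paper's accelerometer/DCT features.

```python
# Minimal sliced inverse regression (SIR), solved by eigen-decomposition:
# whiten X, average the whitened predictors within slices of sorted y,
# and take the principal eigenvectors of the slice-mean covariance.
import numpy as np

def sir_directions(X, y, n_slices=5, n_dirs=1):
    n, p = X.shape
    mu, cov = X.mean(axis=0), np.cov(X, rowvar=False)
    # whiten the predictors
    evals, evecs = np.linalg.eigh(cov)
    W = evecs @ np.diag(evals ** -0.5) @ evecs.T
    Z = (X - mu) @ W
    # slice the sorted response, average whitened X within each slice
    order = np.argsort(y)
    M = np.zeros((p, p))
    for idx in np.array_split(order, n_slices):
        m = Z[idx].mean(axis=0)
        M += (len(idx) / n) * np.outer(m, m)
    # top eigenvectors of M, mapped back to the original scale
    _, vecs = np.linalg.eigh(M)
    return W @ vecs[:, ::-1][:, :n_dirs]

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 3))
y = X @ np.array([1.0, -1.0, 0.0]) + 0.1 * rng.normal(size=400)
d = sir_directions(X, y)[:, 0]   # should align with [1, -1, 0]
```

The whole computation reduces to two symmetric eigen-decompositions, which is why the abstract can note that the (regularized) method remains efficient to solve.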

    Gaze-Based Human-Robot Interaction by the Brunswick Model

    Get PDF
    We present a new paradigm for human-robot interaction based on social signal processing, and in particular on the Brunswick model. Originally, the Brunswick model deals with face-to-face dyadic interaction, assuming that the interactants communicate through a continuous exchange of non-verbal social signals in addition to the spoken messages. Social signals have to be interpreted through a proper recognition phase that considers visual and audio information. The Brunswick model allows the quality of the interaction to be evaluated quantitatively, using statistical tools that measure how effective the recognition phase is. In this paper, we cast this theory in a setting where one of the interactants is a robot; in this case, the recognition phases performed by the robot and by the human have to be revised with respect to the original model. The model is applied to Berrick, a recent open-source, low-cost robotic head platform, where gaze is the social signal to be considered.
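One plausible form such "statistical tools" could take is scoring the recognition phase by comparing ground-truth annotations with the recogniser's output; the binary gaze labels below are invented for illustration and are not the paper's data.

```python
# Hedged sketch: rate a gaze-recognition phase by agreement between
# human-annotated labels and the robot's recognised labels.
import numpy as np

truth     = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])  # annotated gaze events
predicted = np.array([1, 0, 1, 0, 0, 1, 0, 1, 1, 1])  # recogniser output

accuracy = float((truth == predicted).mean())
# Pearson correlation between the annotated and recognised signals
r = float(np.corrcoef(truth, predicted)[0, 1])
```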

    OBSERVER-BASED-CONTROLLER FOR INVERTED PENDULUM MODEL

    Get PDF
    This paper presents a state-space control technique for an inverted pendulum system. The system is a classical control problem that has been widely used to test control algorithms because of its nonlinear and unstable behavior. Full state feedback based on pole placement and on optimal control is applied to the inverted pendulum system to achieve the desired design specifications: a settling time of 4 seconds and an overshoot of 5%. The simulation and optimization of the full state feedback controller based on the pole placement and optimal control techniques, as well as a performance comparison between these techniques, are described comprehensively. The comparison is made to choose the technique that gives the system the best trade-off between settling time and overshoot. Besides that, the observer design is analyzed to see the effect of pole location and of noise present in the system.
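The optimal-control branch of such a design can be sketched as follows. The linearised cart-pole matrices and weights below are invented for illustration, not taken from the paper; the gain comes from the continuous-time algebraic Riccati equation (LQR).

```python
# Sketch: full state feedback u = -K x for a hypothetical linearised
# inverted pendulum, with K from the continuous-time LQR design.
import numpy as np
from scipy.linalg import solve_continuous_are

# Invented linearised model, state x = [cart pos, cart vel, angle, rate]
A = np.array([[0.0, 1.0,  0.0, 0.0],
              [0.0, 0.0, -1.0, 0.0],
              [0.0, 0.0,  0.0, 1.0],
              [0.0, 0.0, 11.0, 0.0]])
B = np.array([[0.0], [1.0], [0.0], [-1.0]])

Q = np.diag([10.0, 1.0, 100.0, 1.0])   # state weights
R = np.array([[1.0]])                  # input weight

P = solve_continuous_are(A, B, Q, R)   # Riccati solution
K = np.linalg.solve(R, B.T @ P)        # optimal feedback gain

# closed-loop poles of A - B K; stability requires all in the left half plane
poles = np.linalg.eigvals(A - B @ K)
```

Pole placement would instead choose the closed-loop pole locations directly (e.g. to hit a 4 s settling time and 5% overshoot) and solve for K; LQR trades those specifications off through the weights Q and R.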