
    A Framework for Vision-based Static Hand Gesture Recognition

    In today’s technical world, intelligent computing for efficient human-computer interaction (HCI) or human alternative and augmentative communication (HAAC) is essential in our lives. Hand gesture recognition is one of the most important techniques that can be used to build a gesture-based interface system for HCI or HAAC applications. Suitable gesture recognition methods are therefore necessary to design advanced hand gesture recognition systems for applications such as robotics, assistive systems, sign language communication and virtual reality. However, variation in the illumination, rotation, position and size of gesture images, efficient feature representation, and classification are the main challenges in developing a real-time gesture recognition system. The aim of this work is to develop a framework for vision-based static hand gesture recognition which overcomes the challenges of illumination, rotation, size and position variation of the gesture images. In general, a framework for a gesture recognition system consisting of preprocessing, feature extraction, feature selection, and classification stages is developed in this thesis work. The preprocessing stage involves the following sub-stages: image enhancement, which compensates for illumination variation; segmentation, which separates the hand region from the background and transforms it into a binary silhouette; image rotation, which makes the segmented gesture rotation invariant; and filtering, which effectively removes background noise and object noise from the binary image and provides a well-defined segmented hand gesture. This work proposes an image rotation technique that aligns the first principal component of the segmented hand gesture with the vertical axis to make it rotation invariant. In the feature extraction stage, this work extracts localized contour sequence (LCS) and block-based features, and proposes a combined feature set formed by appending LCS features to block-based features to represent static hand gesture images. A discrete wavelet transform (DWT) and Fisher ratio (F-ratio) based feature set is also proposed for better representation of static hand gesture images. To extract this feature set, the DWT is applied to the resized and enhanced grayscale image, and the important DWT coefficient matrices are then selected as features using the proposed F-ratio based coefficient matrix selection technique. Subsequently, a modified radial basis function neural network (RBF-NN) classifier based on the k-means and least mean square (LMS) algorithms is proposed in this work (a minimal sketch is given after this abstract). In the proposed RBF-NN classifier, the centers are automatically selected using the k-means algorithm and the estimated weight matrix is updated using the LMS algorithm for better recognition of hand gesture images. A sigmoidal activation function based RBF-NN classifier is also proposed for further improvement of recognition performance; its activation function is formed from a set of composite sigmoidal functions. Finally, the extracted features are applied as input to the classifier to recognize the class of static hand gesture images. A feature vector optimization technique based on a genetic algorithm (GA) is also proposed to remove redundant and irrelevant features.
    The proposed algorithms are tested on three static hand gesture databases, which include grayscale images with a uniform background (Databases I and II) and color images with a non-uniform background (Database III). Database I is a repository database consisting of hand gesture images of 25 Danish/International Sign Language (D/ISL) hand alphabets. Databases II and III were developed indigenously using a VGA Logitech Webcam (C120) and contain 24 American Sign Language (ASL) hand alphabets.
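    The abstract describes the classifier only at a high level, so the following Python sketch merely illustrates one plausible reading of an RBF-NN whose centres are chosen by k-means and whose output weights are updated with the LMS rule. All hyperparameters (k, sigma, learning rate, epochs) and function names are illustrative assumptions, and the sigmoidal-composite activation variant mentioned above is not reproduced here.

    import numpy as np

    def kmeans_centers(X, k, iters=50, seed=0):
        # Plain k-means used only to place the RBF centres (random init from the data).
        rng = np.random.default_rng(seed)
        centers = X[rng.choice(len(X), size=k, replace=False)].astype(float)
        for _ in range(iters):
            dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
            labels = dist.argmin(axis=1)
            for j in range(k):
                if np.any(labels == j):
                    centers[j] = X[labels == j].mean(axis=0)
        return centers

    def rbf_hidden(X, centers, sigma):
        # Gaussian hidden-layer activations.
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        return np.exp(-d2 / (2.0 * sigma ** 2))

    def train_rbf_lms(X, Y, k=8, sigma=2.0, lr=0.05, epochs=100, seed=0):
        # Centres from k-means; output weights updated sample-by-sample with the LMS rule.
        centers = kmeans_centers(X, k, seed=seed)
        H = rbf_hidden(X, centers, sigma)
        W = np.zeros((k, Y.shape[1]))
        for _ in range(epochs):
            for h, y in zip(H, Y):
                e = y - h @ W              # per-sample prediction error
                W += lr * np.outer(h, e)   # LMS / delta-rule weight update
        return centers, W

    def predict(X, centers, W, sigma=2.0):
        # Class scores; argmax over columns gives the predicted gesture class.
        return rbf_hidden(X, centers, sigma) @ W

    # Toy usage with two synthetic "gesture" classes standing in for real feature vectors.
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(0, 1, (50, 16)), rng.normal(3, 1, (50, 16))])
    Y = np.vstack([np.tile([1.0, 0.0], (50, 1)), np.tile([0.0, 1.0], (50, 1))])
    centers, W = train_rbf_lms(X, Y)
    accuracy = (predict(X, centers, W).argmax(axis=1) == Y.argmax(axis=1)).mean()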

    Comprehensive review of vision-based fall detection systems

    Vision-based fall detection systems have experienced fast development over recent years. To determine the course of their evolution and to help new researchers, the main audience of this paper, a comprehensive review of all articles published in this area in the main scientific databases during the last five years has been made. After a selection process, detailed in the Materials and Methods Section, eighty-one systems were thoroughly reviewed. Their characterization and classification techniques were analyzed and categorized. Their performance data were also studied, and comparisons were made to determine which classification methods work best in this field. The evolution of artificial vision technology, very positively influenced by the incorporation of artificial neural networks, has allowed fall characterization to become more resistant to noise resulting from illumination phenomena or occlusion. Classification has also taken advantage of these networks, and the field is starting to use robots to make these systems mobile. However, the datasets used to train them lack real-world data, raising doubts about their performance when facing real falls among the elderly. In addition, there is no evidence of strong connections between the elderly and the research community.

    Human face detection techniques: A comprehensive review and future research directions

    Face detection, an effortless task for humans, is complex to perform on machines. The recent proliferation of computational resources is paving the way for rapid advancement of face detection technology. Many astutely developed algorithms have been proposed to detect faces, yet little heed has been paid to producing a comprehensive survey of the available algorithms. This paper provides a fourfold discussion of face detection algorithms. First, we explore a wide variety of available face detection algorithms in five steps: history, working procedure, advantages, limitations, and use in fields other than face detection. Secondly, we include a comparative evaluation among the different algorithms within each method. Thirdly, we provide detailed comparisons among the algorithms to give an all-inclusive outlook. Lastly, we conclude this study with several promising research directions to pursue. Earlier survey papers on face detection algorithms are limited to technical details and popularly used algorithms. In our study, however, we cover detailed technical explanations of face detection algorithms and various recent sub-branches of neural networks. We present detailed comparisons among the algorithms both overall and within sub-branches, and we provide the strengths and limitations of these algorithms together with a novel literature survey covering their use beyond face detection.

    State of the Art in Face Recognition

    Notwithstanding the tremendous effort to solve the face recognition problem, it is not yet possible to design a face recognition system with a potential close to human performance. New computer vision and pattern recognition approaches need to be investigated. Even new knowledge and perspectives from different fields, such as psychology and neuroscience, must be incorporated into the current field of face recognition to design a robust face recognition system. Indeed, many more efforts are required to arrive at a human-like face recognition system. This book makes an effort to narrow the gap between the previous state of face recognition research and its future state.

    Multi Agent Systems

    Research on multi-agent systems is enlarging our future technical capabilities as humans and as an intelligent society. In recent years, many effective applications have been implemented and have become part of our daily life. These applications have agent-based models and methods as an important ingredient. Markets, finance, robotics, medical technology, social negotiation, video games, big-data science and other branches rely on the knowledge gained through multi-agent simulations, and new software engineering tools are continuously created and tested in order to achieve an effective technology transfer that impacts our lives. This book brings together researchers working in several fields that cover the techniques, the challenges and the applications of multi-agent systems in a wide variety of aspects related to learning algorithms for different devices such as vehicles, robots and drones, computational optimization to reach a more efficient energy distribution in power grids, and the use of social networks and decision strategies applied to smart learning and education environments in emerging countries. We hope that this book can be useful and become a guide or reference for an audience interested in the developments and applications of multi-agent systems.

    User Experience Enhanced Interface and Controller Design for Human-Robot Interaction

    Robotic technologies have developed rapidly in recent years in various fields, such as medical services, industrial manufacturing and aerospace. Despite this rapid development, how to deal effectively with uncertain environments during human-robot interaction still remains unresolved. Current artificial intelligence (AI) technology does not enable robots to fulfil complex tasks without human guidance. Thus, teleoperation, which means remote control of a robot by a human operator, is indispensable in many scenarios and is an important and useful tool in research. This thesis focuses on the design of a user experience (UX) enhanced robot controller and of human-robot interaction interfaces that aim to provide human operators with an immersive perception of teleoperation. Several works have been carried out to achieve this goal. First, to control a telerobot smoothly, a customised variable gain control method is proposed in which the stiffness of the telerobot varies with the muscle activation level extracted from signals collected by surface electromyography (sEMG) devices. Second, two main works are conducted to improve the user-friendliness of the interaction interfaces. One is that force feedback is incorporated into the framework, providing operators with haptic feedback for remotely manipulating target objects; given the high cost of force sensors, a haptic force estimation algorithm is proposed in this part of the work so that a force sensor is no longer needed. The other main work is the development of a visual servo control system, in which a stereo camera mounted on the head of a dual-arm robot offers operators a real-time view of the working situation. In order to compensate for internal and external uncertainties and accurately track the stereo camera's view angles along planned trajectories, a deterministic learning technique is utilised, which enables reuse of the knowledge learnt before the current dynamics change and thus increases learning efficiency. Third, instead of sending commands to the telerobots via joysticks, keyboards or demonstrations, the telerobots in this thesis are controlled directly by the upper limb motion of the human operator. An algorithm is designed that utilises motion signals from an inertial measurement unit (IMU) sensor to capture the human's upper limb motion. The skeleton of the operator is detected by a Kinect V2 and then transformed and mapped into the joint positions of the controlled robot arm. In this way, the upper limb motion signals from the operator act as reference trajectories for the telerobots. A superior neural network (NN) based trajectory controller is also designed to track the generated reference trajectory. Fourth, to further enhance the operator's immersive perception of teleoperation, virtual reality (VR) technology is incorporated so that the operator can interact with and adjust the robots more easily and accurately from the robot's perspective. Comparative experiments have been performed to demonstrate the effectiveness of the proposed design scheme, and tests with human subjects were also carried out to evaluate the interface design.
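    The thesis abstract does not spell out the exact sEMG-to-stiffness mapping, so the following Python sketch only illustrates the general idea of a variable gain controller whose stiffness tracks the operator's muscle activation. All function names, thresholds and gains (emg_rest, emg_max, k_min, k_max, the damping rule) are hypothetical placeholders, not the author's implementation.

    import numpy as np

    def muscle_activation(emg_window, emg_rest=0.01, emg_max=0.5):
        # Normalised activation level in [0, 1] from a window of raw sEMG samples:
        # rectify, take the mean absolute value, then normalise between assumed
        # rest and maximum-contraction levels (both placeholders here).
        mav = np.mean(np.abs(emg_window))
        return float(np.clip((mav - emg_rest) / (emg_max - emg_rest), 0.0, 1.0))

    def variable_stiffness(activation, k_min=50.0, k_max=400.0):
        # A relaxed arm yields a compliant robot; a tensed arm yields a stiff one.
        return k_min + activation * (k_max - k_min)

    def variable_gain_command(q, q_ref, dq, activation, damping_ratio=0.7):
        # Impedance-style command: tau = K(a) * (q_ref - q) - D * dq, where the
        # stiffness K follows the operator's muscle activation and the damping is
        # chosen here (an assumption) to keep the error dynamics well damped.
        k = variable_stiffness(activation)
        d = 2.0 * damping_ratio * np.sqrt(k)
        return k * (q_ref - q) - d * dq

    # Toy usage with synthetic data.
    emg = 0.2 * np.abs(np.random.randn(200))   # one window of sEMG samples
    a = muscle_activation(emg)
    tau = variable_gain_command(q=np.array([0.1]), q_ref=np.array([0.3]),
                                dq=np.array([0.0]), activation=a)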

    Human action recognition using spatial-temporal analysis.

    Masters Degree. University of KwaZulu-Natal, Durban. In the past few decades, human action recognition (HAR) from video has gained a lot of attention in the computer vision domain. The analysis of human activities in videos spans a variety of applications, including security and surveillance, entertainment, and the monitoring of the elderly. The task of recognizing human actions in any scenario is a difficult and complex one, characterized by challenges such as self-occlusion, noisy backgrounds and variations in illumination. However, the literature provides various techniques and approaches for action recognition that deal with these challenges. This dissertation focuses on a holistic approach to the human action recognition problem, with specific emphasis on spatial-temporal analysis. Spatial-temporal analysis is achieved by using the Motion History Image (MHI) approach to solve the human action recognition problem. Three variants of MHI are investigated: Original MHI, Modified MHI and Timed MHI. An MHI is a single image describing a silhouette's motion over a period of time; brighter pixels in the resultant MHI show the most recent motion. One of the key problems of MHI is that it is not easy to know the conditions needed to obtain an MHI silhouette that will result in a high recognition rate. These conditions are often neglected and thus pose a problem for human action recognition systems, as they could affect overall performance. Two methods are proposed to solve the human action recognition problem and to show the conditions needed to obtain high recognition rates using the MHI approach. The first uses the concept of MHI with the Bag of Visual Words (BOVW) approach to recognize human actions. The second approach combines MHI with Local Binary Patterns (LBP). The Weizmann and KTH datasets are then used to validate the proposed methods. Results from experiments show promising recognition rates when compared to some existing methods. The BOVW approach used in combination with the three variants of MHI achieved higher recognition rates than the LBP method. The Original MHI method resulted in the highest recognition rate of 87% on the Weizmann dataset, while a recognition rate of 81.6% was achieved on the KTH dataset using the Modified MHI approach.
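    For readers unfamiliar with the MHI representation the abstract relies on, the Python sketch below shows the standard Bobick-Davis update (motion pixels set to a duration tau, everything else decaying by one frame per step). It is a generic illustration, not the dissertation's Original, Modified or Timed variants; the duration tau and the decay step are assumptions of this sketch.

    import numpy as np

    def update_mhi(mhi, silhouette, tau=30.0):
        # One MHI update step: pixels where motion occurs are set to the maximum
        # duration tau; everywhere else the history decays by one frame.
        return np.where(silhouette > 0, tau, np.maximum(mhi - 1.0, 0.0))

    def motion_history_image(silhouettes, tau=30.0):
        # Collapse a sequence of binary silhouettes into a single MHI; brighter
        # pixels correspond to more recent motion.
        mhi = np.zeros(silhouettes[0].shape, dtype=np.float32)
        for sil in silhouettes:
            mhi = update_mhi(mhi, sil, tau)
        return mhi / tau   # normalise to [0, 1] before feature extraction

    # Toy usage: a rectangular silhouette moving to the right over 10 frames.
    frames = []
    for t in range(10):
        sil = np.zeros((64, 64), dtype=np.uint8)
        sil[20:40, 5 + 4 * t: 15 + 4 * t] = 1
        frames.append(sil)
    mhi = motion_history_image(frames, tau=10.0)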

    Bio-Inspired Robotics

    Modern robotic technologies have enabled robots to operate in a variety of unstructured and dynamically changing environments, in addition to traditional structured environments. Robots have thus become an important element in our everyday lives. One key approach to developing such intelligent and autonomous robots is to draw inspiration from biological systems. Biological structures, mechanisms, and underlying principles have the potential to provide new ideas to support the improvement of conventional robotic designs and control. Such biological principles usually originate from animal or even plant models and yield robots that can sense, think, walk, swim, crawl, jump or even fly. It is therefore believed that these bio-inspired methods are becoming increasingly important in the face of complex applications. Bio-inspired robotics is leading to the study of innovative structures and computing with sensory–motor coordination and learning to achieve intelligence, flexibility, stability, and adaptation for emergent robotic applications, such as manipulation, learning, and control. This Special Issue invites original papers presenting innovative ideas and concepts, new discoveries and improvements, and novel applications and business models relevant to the selected topics of "Bio-Inspired Robotics". Bio-inspired robotics is a broad topic and a continually expanding field. This Special Issue collates 30 papers that address some of the important challenges and opportunities in this broad and expanding field.