204 research outputs found
Real-time human action recognition on an embedded, reconfigurable video processing architecture
Copyright © 2008 Springer-Verlag. In recent years, automatic human motion recognition has been widely researched within the computer vision and image processing communities. Here we propose a real-time embedded vision solution for human motion recognition implemented on a ubiquitous device. There are three main contributions in this paper. Firstly, we have developed a fast human motion recognition system with simple motion features and a linear Support Vector Machine (SVM) classifier. The method has been tested on a large, public human action dataset and achieved competitive performance for the temporal template (e.g. "motion history image") class of approaches. Secondly, we have developed a reconfigurable, FPGA-based video processing architecture. One advantage of this architecture is that the system's processing performance can be reconfigured for a particular application by the addition of new or replicated processing cores. Finally, we have successfully implemented a human motion recognition system on this reconfigurable architecture. With a small number of human actions (hand gestures), this stand-alone system performs reliably, with an 80% average recognition rate using limited training data. This type of system has applications in security systems, man-machine communication and intelligent environments.
DTI and Broadcom Ltd
FPGA implementation of real-time human motion recognition on a reconfigurable video processing architecture
In recent years, automatic human motion recognition has been widely researched within the computer vision and image processing communities. Here we propose a real-time embedded vision solution for human motion recognition implemented on a ubiquitous device. There are three main contributions in this paper. Firstly, we have developed a fast human motion recognition system with simple motion features and a linear Support Vector Machine (SVM) classifier. The method has been tested on a large, public human action dataset and achieved competitive performance for the temporal template (e.g. "motion history image") class of approaches. Secondly, we have developed a reconfigurable, FPGA-based video processing architecture. One advantage of this architecture is that the system's processing performance can be reconfigured for a particular application by the addition of new or replicated processing cores. Finally, we have successfully implemented a human motion recognition system on this reconfigurable architecture. With a small number of human actions (hand gestures), this stand-alone system performs reliably, with an 80% average recognition rate using limited training data. This type of system has applications in security systems, man-machine communication and intelligent environments.
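The two outputs above build on temporal-template motion features. As an illustrative sketch only (not the papers' exact implementation), a motion history image (MHI) can be updated frame by frame as below; the `tau` and `delta` values are assumptions for the example, not values from the papers:

```python
import numpy as np

def update_mhi(mhi, prev_frame, frame, tau=30, delta=15):
    """Update a motion history image with one new frame.

    Pixels that changed by more than `delta` are set to the maximal
    duration `tau`; all other pixels decay by one step toward zero,
    so recent motion appears brighter than old motion.
    (tau and delta are illustrative values, not taken from the papers.)
    """
    motion = np.abs(frame.astype(int) - prev_frame.astype(int)) > delta
    return np.where(motion, tau, np.maximum(mhi - 1, 0))

# Toy example: a bright square "moves" one pixel to the right.
f0 = np.zeros((8, 8), dtype=np.uint8); f0[2:5, 2:5] = 200
f1 = np.zeros((8, 8), dtype=np.uint8); f1[2:5, 3:6] = 200
mhi = update_mhi(np.zeros((8, 8)), f0, f1)
print(int(mhi.max()))  # newest motion pixels carry the value tau
```

Simple statistics of such an MHI (e.g. moments of the bright region) are the kind of cheap motion feature a linear SVM can classify in real time.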
Real-time Emotional State Detection from Facial Expression on Embedded Devices
Over the last decade, research on human facial emotion recognition has shown that computing models built on regression modelling can achieve usable performance. However, many systems require extensive computing power to run, which prevents wide application in platforms such as robots and smart devices. In the proposed system, a real-time automatic facial expression recognition system was designed, implemented and tested on an embedded device (an FPGA), as a first step toward a dedicated facial expression recognition chip for a social robot. The system was first built and simulated in MATLAB and then implemented on the FPGA; it carries out continuous real-time emotional state recognition at 30 fps with 47.44% accuracy. The proposed graphical user interface displays the participant video together with the two-dimensional predicted emotion labels in real time.
The research presented in this paper was supported partially by the Slovak Research and Development Agency under the research projects APVV-15-0517 and APVV-15-0731, and by the Ministry of Education, Science, Research and Sport of the Slovak Republic under the project VEGA 1/0075/15.
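The abstract above describes a regression model that maps facial features to two-dimensional emotion labels (e.g. valence/arousal). A minimal, hypothetical sketch of that idea using synthetic data and ordinary least squares; the feature dimensionality, data, and label space are all assumptions, since the paper's actual pipeline is not given here:

```python
import numpy as np

# Synthetic stand-in for extracted facial features and 2-D emotion labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))         # 100 samples, 8 facial features (assumed)
W_true = rng.normal(size=(8, 2))      # hidden ground-truth mapping
Y = X @ W_true                        # 2-D labels, e.g. (valence, arousal)

# Closed-form least-squares fit: this kind of fixed matrix multiply is
# the sort of model that is cheap enough to evaluate on an FPGA.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)
pred = X @ W
print(np.allclose(pred, Y, atol=1e-6))  # exact fit on noiseless data
```

At inference time the model is just one matrix-vector product per frame, which is consistent with the abstract's claim of 30 fps operation on an embedded device.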
Raspberry Based Hand Gesture Recognition Using Haar Cascade and Local Binary Pattern Histogram
Many companies and even public institutions currently use photo-taking to record attendance. However, this strategy is considered ineffective, since employees can still fake attendance by preparing their own photos and leaving them at their desks. An alternative that can complement the current face detection method is therefore needed, so that employee attendance can be monitored directly. One such method is hand gesture detection. This research aims to detect hand gestures made by employees to verify whether they have actually come to work, making the chance of manipulation using a photo or fake GPS quite small. For hand gesture recognition, this study used the Local Binary Pattern Histogram (LBPH) algorithm. The hand gesture image was first captured with a Raspberry Pi camera and then processed by the device to check whether it matches the registered ID. The results showed that ID recognition using hand gestures is feasible; number recognition in the hand gestures covers the numbers 1 to 10. Over 5 trials, the average time required to read a hand gesture was 9.2 seconds on a laptop and 14.2 seconds on the Raspberry Pi; motion reading on the Raspberry Pi takes longer because the laptop's performance is higher. The system also cannot distinguish which hand is read first, so numbers composed of the same digits, such as 81 and 18, are treated as identical.
HAND GESTURE AND FACE DETECTION USING RASPBERRY PI
Face detection is currently used for various purposes, one of which is to record employee attendance. This strategy is ineffective, since employees can still fake attendance by preparing their own photos and leaving them at their desks; if they are unable to come to the office, they can always ask colleagues to submit their already-available photos. An alternative that can complement the current face detection method is therefore needed. One such method is hand gesture detection. This study aims to detect hand gestures made by employees to verify whether they really come to work, so that the chance of manipulation is quite small. For hand gesture recognition, this study used the Local Binary Pattern Histogram (LBPH) algorithm, which matches images captured in real time against previously trained images. The hand gesture image was first captured with a Raspberry Pi camera and then processed to check whether it matches the registered ID. The results showed that ID recognition using hand gestures works and agrees with the registered ID; number recognition in the hand gestures covers the numbers 1 to 10. The average time required to read a hand gesture was 9.2 seconds on a laptop and 14.2 seconds on the Raspberry Pi; motion reading on the Raspberry Pi takes longer because the laptop's performance is higher.
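Both Raspberry Pi outputs above rely on the Local Binary Pattern Histogram. A minimal sketch of the basic 3x3 LBP code and its histogram; real LBPH implementations typically add bilinear interpolation, uniform-pattern mapping, and per-region histograms, which are omitted here:

```python
import numpy as np

def lbp_histogram(img):
    """Local Binary Pattern histogram of a grayscale image (3x3, 8 neighbours).

    Each interior pixel is encoded as an 8-bit number: one bit per
    neighbour, set when the neighbour is >= the centre pixel. The
    histogram of these codes is the texture descriptor that LBPH
    matches against enrolled templates.
    """
    img = img.astype(int)
    c = img[1:-1, 1:-1]                      # interior (centre) pixels
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]  # clockwise from top-left
    code = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offsets):
        n = img[1 + dy : img.shape[0] - 1 + dy,
                1 + dx : img.shape[1] - 1 + dx]   # shifted neighbour view
        code |= (n >= c).astype(int) << bit
    hist, _ = np.histogram(code, bins=256, range=(0, 256))
    return hist

# A flat image: every neighbour equals the centre, so all 8 bits are set
# and every interior pixel gets code 255.
flat = np.full((5, 5), 7, dtype=np.uint8)
h = lbp_histogram(flat)
print(h[255])  # all 9 interior pixels share code 255
```

Matching then reduces to comparing histograms (e.g. by chi-square distance) between the live capture and each enrolled ID.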
Hand gesture recognition using deep learning neural networks
This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London. Human Computer Interaction (HCI) is a broad field involving different types of interaction, including gestures. Gesture recognition concerns non-verbal motions used as a means of communication in HCI: a system may identify human gestures and use them to convey information or to control devices. This represents a significant field within HCI involving device interfaces and users. The aim of gesture recognition is to record gestures that are formed in a certain way and then detected by a device such as a camera. Hand gestures can be used as a form of communication in many different applications; for example, people with different disabilities, including hearing impairments, speech impairments and the effects of stroke, may use them to communicate and fulfil their basic needs.
Various studies have previously been conducted on hand gestures, proposing different techniques for hand gesture experiments. For image processing there are multiple tools to extract image features, and Artificial Intelligence offers varied classifiers for different types of data. 2D and 3D hand gestures require an effective algorithm to extract image features and classify various small gestures and movements. This research addresses this issue using several algorithms. To detect 2D or 3D hand gestures, this research used image processing tools such as Wavelet Transforms (WT) and Empirical Mode Decomposition (EMD) to extract image features, and Artificial Neural Network (ANN) and Convolutional Neural Network (CNN) classifiers to train on and classify the data. These methods were examined in terms of multiple parameters such as execution time, accuracy, sensitivity, specificity, positive predictive value, negative predictive value, positive likelihood, negative likelihood, receiver operating characteristic, area under the ROC curve and root mean square. This research makes four original contributions in the field of hand gestures. The first is an implementation of two experiments using 2D hand gesture video, in which ten different gestures are detected at short and long distances using an iPhone 6 Plus with 4K resolution; the experiments use WT and EMD for feature extraction and ANN and CNN for classification. The second comprises 3D hand gesture video experiments in which twelve gestures are recorded using a holoscopic imaging system camera. The third pertains to experimental work carried out to detect seven common hand gestures. Finally, disparity experiments were performed using the left and right 3D hand gesture videos to discover disparities. The comparison shows CNN reaching 100% accuracy, higher than the other techniques.
CNN is clearly the most appropriate method to be used in a hand gesture system.
Imam Abdulrahman Bin Faisal University
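The thesis above evaluates its classifiers with sensitivity, specificity and predictive values, among other metrics. For reference, those formulas applied to a hypothetical binary confusion matrix; the counts are made up purely for illustration:

```python
# Hypothetical binary confusion-matrix counts (illustrative only):
# true positives, false negatives, false positives, true negatives.
tp, fn, fp, tn = 45, 5, 10, 40

sensitivity = tp / (tp + fn)  # true positive rate (recall)
specificity = tn / (tn + fp)  # true negative rate
ppv = tp / (tp + fp)          # positive predictive value (precision)
npv = tn / (tn + fn)          # negative predictive value

print(sensitivity, specificity, ppv, npv)  # 0.9, 0.8, ~0.818, ~0.889
```

The likelihood ratios the thesis also reports follow directly from these: positive likelihood is sensitivity / (1 - specificity) and negative likelihood is (1 - sensitivity) / specificity.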