
    Human Computer Interaction Employing Hand Gestures in Lieu of Mouse Movements

    Since the advent of computers, we have reached a stage where it is hard to imagine a day without interacting with them. Given how central this interaction has become, it is worth enhancing the way we interact with the computer. The mechanism we propose enhances this interaction without the aid of any external device. The proposed system uses the webcam of the computer system to capture hand gestures as input, and the cursor responds accordingly. A vision-based approach is used for skin detection. To reduce the effect of illumination on the image, the HSV color space is used in the proposed design. An edge detection technique is used for the contour extraction process. A bounding rectangle is then drawn to find the center of the hand; as the center of the hand moves, the cursor moves. To detect gestures from the hand, the fingers have to be recognized, which can be done with functions such as cvConvexityDefects() in OpenCV. Thus, with the help of two gestures, right and left clicks can be performed.
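    The following is a minimal sketch, not the authors' code, of the pipeline the abstract describes: an HSV skin mask, the largest contour, a bounding-rectangle center that would drive the cursor, and convexity defects to estimate extended fingers. The skin thresholds and the mapping from defect count to clicks are assumptions.

```python
import cv2
import numpy as np

# Assumed HSV skin range; real values depend on lighting and skin tone.
LOWER_SKIN = np.array([0, 30, 60], dtype=np.uint8)
UPPER_SKIN = np.array([25, 180, 255], dtype=np.uint8)

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_SKIN, UPPER_SKIN)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        hand = max(contours, key=cv2.contourArea)       # assume largest blob is the hand
        x, y, w, h = cv2.boundingRect(hand)
        cx, cy = x + w // 2, y + h // 2                  # center that would move the cursor
        hull = cv2.convexHull(hand, returnPoints=False)
        if len(hand) > 3 and hull is not None and len(hull) > 3:
            defects = cv2.convexityDefects(hand, hull)
            n_defects = 0 if defects is None else len(defects)
            # e.g. one deep defect -> left click, two -> right click (illustrative mapping)
        cv2.circle(frame, (cx, cy), 5, (0, 255, 0), -1)
    cv2.imshow("hand", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```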

    A Multi-Level Colour Thresholding Based Segmentation Approach for Improved Identification of the Defective Region in Leather Surfaces

    Vision systems have recently been adopted for defect detection on leather surfaces to overcome the labour-intensive, time-consuming manual inspection process. Suitable image processing techniques need to be developed for accurate detection of leather defects. Existing research has focused on grayscale-based image processing, which requires converting colour images using an averaging method and lacks sensitivity for detecting leather defects because of the random, textured surface of leather. This work presents a colour processing approach for improved identification of leather defects using a multi-level thresholding function. The colour leather images are processed in the 'Lab' colour domain to improve the human perception of discriminating leather defects. The specific ranges of values of the colour attributes for different leather defects in colour leather samples are identified using the colour histogram. A MATLAB software routine is developed for identifying defects within these specific ranges of colour attributes, and the results are presented. From the results, it is found that the proposed method provides a simpler approach for identifying defective regions based on the colour attributes of the surface, with improved human perception. The proposed methodology can be implemented on graphics processing units to efficiently detect several types of defects using specific thresholds for automated real-time inspection of leather defects.
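    As an illustration of the multi-level thresholding idea (the paper itself uses a MATLAB routine), the sketch below applies one Lab-space range per assumed defect class and merges the resulting masks. The file name, defect classes and numeric ranges are placeholders; the paper derives its ranges from colour histograms of real samples.

```python
import cv2
import numpy as np

img = cv2.imread("leather_sample.png")            # hypothetical input image
lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)        # 8-bit Lab: L, a, b all in 0..255

# Assumed (L, a, b) ranges for two hypothetical defect classes.
defect_ranges = {
    "scratch": ((0, 120), (118, 145), (118, 150)),
    "stain":   ((0, 90),  (125, 160), (130, 170)),
}

combined = np.zeros(img.shape[:2], dtype=np.uint8)
for name, ((l0, l1), (a0, a1), (b0, b1)) in defect_ranges.items():
    lower = np.array([l0, a0, b0], dtype=np.uint8)
    upper = np.array([l1, a1, b1], dtype=np.uint8)
    mask = cv2.inRange(lab, lower, upper)         # one threshold level per defect class
    combined = cv2.bitwise_or(combined, mask)

cv2.imwrite("defect_mask.png", combined)          # white pixels mark candidate defects
```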

    Hand Pointing Detection Using Live Histogram Template of Forehead Skin

    Hand pointing detection has applications in many fields, such as virtual reality and the control of devices in smart homes. In this paper, we propose a novel approach to detect the pointing vector in the 2D space of a room. After background subtraction, the face and forehead are detected. In the second step, H-S plane histograms of the forehead skin in HSV space are calculated. Using these histogram templates of the user's skin and the back-projection method, skin areas are detected. The contours of the hand are extracted using the Freeman chain code algorithm. The next step is finding fingertips: points on the hand contour that are candidates for the fingertip can be found among the convexity defects of the convex hull and contour. We introduce a novel method for finding the fingertip based on special points on the contour and their relationships. Our approach detects hand-pointing vectors in live video from a common webcam with 94% TP and 85% TN. Comment: Accepted for oral presentation in DSP201
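    A rough sketch of the skin-detection step described above, assuming the forehead box is already known: build an H-S histogram from the forehead region and back-project it onto the whole frame. The input file name, the fixed forehead coordinates and the threshold value are assumptions standing in for the paper's face-detection and tuning steps.

```python
import cv2

frame = cv2.imread("frame.png")                          # hypothetical input frame
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

# In the paper this box comes from face/forehead detection; fixed here for illustration.
fx, fy, fw, fh = 200, 80, 80, 40
forehead = hsv[fy:fy + fh, fx:fx + fw]

# Live H-S plane histogram template of the user's forehead skin.
hist = cv2.calcHist([forehead], [0, 1], None, [30, 32], [0, 180, 0, 256])
cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)

# Back-project the template onto the whole frame to highlight skin-like pixels.
back_proj = cv2.calcBackProject([hsv], [0, 1], hist, [0, 180, 0, 256], 1)
_, skin_mask = cv2.threshold(back_proj, 50, 255, cv2.THRESH_BINARY)

# Hand contour extraction and fingertip search (convexity defects) would run on skin_mask.
cv2.imwrite("skin_mask.png", skin_mask)
```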

    Hand Detection using HSV Model

    Natural Human Computer Interaction (HCI) is a demand of today's technology-oriented world. Detecting and tracking the face and hands is important for gesture recognition. Skin detection is a very popular and useful technique for detecting and tracking human body parts. It has received much attention, mainly because of its vast range of applications, such as face detection and tracking, naked people detection, hand detection and tracking, and people retrieval in databases and on the Internet. Many models and algorithms are used for detecting the face, the hand and its gestures. Hand detection using a model or classifier amounts to building a decision rule that discriminates between skin and non-skin pixels. Identifying skin color pixels involves finding the range of values into which most skin pixels fall in a given color space. All external factors are eliminated to detect the hand and its color in an image with a complex background.
    Keywords: image segmentation, hand detection, hci, computer vision, RGB, HS
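    A minimal sketch of the kind of range-based skin/non-skin decision rule the abstract describes, combining an HSV range with a heuristic RGB check. The numeric bounds are common heuristics, not values taken from the paper.

```python
import cv2
import numpy as np

def skin_mask(bgr_image):
    """Return a binary mask of pixels that fall inside assumed skin ranges."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    # HSV rule: low-to-mid hue, moderate saturation and brightness (assumed range).
    hsv_mask = cv2.inRange(hsv, (0, 40, 60), (25, 255, 255))
    # RGB rule (heuristic): red channel dominant over green and blue.
    b, g, r = cv2.split(bgr_image.astype(np.int16))
    rgb_mask = ((r > 95) & (g > 40) & (b > 20) &
                (r > g) & (r > b) & (np.abs(r - g) > 15)).astype(np.uint8) * 255
    # A pixel is classified as skin only if both rules agree.
    return cv2.bitwise_and(hsv_mask, rgb_mask)

if __name__ == "__main__":
    img = cv2.imread("hand.jpg")                  # hypothetical input image
    cv2.imwrite("hand_mask.png", skin_mask(img))
```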

    Glove defect detection via YOLO V5

    Malaysia is one of the biggest producers and exporters of gloves in the world. To meet and exceed customer expectations, a predictive defect model is necessary to minimize defective gloves. There are three crucial parts to developing an effective glove defect detection model: data collection, model development and model evaluation. The data provided should be of good quality; the algorithm used to develop the model should reach high accuracy with a fast inference time, owing to the fast glove production line; and the developed model must be compared with other high-quality models to prove its robustness and effectiveness. This paper focuses on employing the YOLO V5 model for glove defect detection as well as investigating the efficiency of several other deep learning approaches. The dataset collected in this research consisted of 493 images with three classes: normal glove, tear glove and unstripped glove. To avoid overfitting due to the small dataset, augmentation processes such as saturation, exposure and noise were applied to increase the dataset to 1148 images. Data were then split using a 70:20:10 training-validation-test ratio. The parameter setup was 100 epochs with 129 iterations. YOLO V5 was compared with Scaled YOLO V4, Detectron2 and EfficientDet in terms of training time, model size, accuracy and inference time. In conclusion, the best model was YOLO V5 because it reached the lowest training time (0.259 hours) and inference time (0.0095 seconds), the smallest model size (14,418 KB) and the highest accuracy (mAP = 0.9951).
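    For illustration only, and not the authors' setup: YOLOv5 models trained with the ultralytics/yolov5 repository can be loaded through torch.hub for inference, as sketched below. The checkpoint name "glove_best.pt", the test image and the confidence threshold are assumptions; only the three class names come from the abstract.

```python
import torch

# Training would use the YOLOv5 repo's own script, e.g.:
#   python train.py --img 640 --epochs 100 --data gloves.yaml --weights yolov5s.pt
# Here we only sketch inference with a custom checkpoint (hypothetical file name).
model = torch.hub.load("ultralytics/yolov5", "custom", path="glove_best.pt")
model.conf = 0.25                       # confidence threshold for reported detections

# Classes in the paper: normal glove, tear glove, unstripped glove.
results = model("glove_sample.jpg")     # hypothetical test image
results.print()                         # prints class, confidence and box per detection
detections = results.pandas().xyxy[0]   # detections as a pandas DataFrame
print(detections[["name", "confidence"]])
```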

    Vision-Based Three Dimensional Hand Interaction In Markerless Augmented Reality Environment

    The advent of augmented reality (AR) enables virtual objects to be superimposed on the real world and provides a new way to interact with virtual objects. An AR system requires an indicator, such as a marker, to determine how the virtual objects are aligned in the real world. The indicator must first be obtained to access a particular AR system, and it may be inconvenient to have the indicator within reach at all times. The human hand, which is part of the human body, may be a solution to this. Moreover, the hand is a promising tool for interacting with virtual objects in an AR environment. This thesis presents a markerless augmented reality system which utilizes the outstretched hand for registration of virtual objects in the real environment and enables users to have three-dimensional (3D) interaction with the augmented virtual objects. To employ the hand for registration and interaction in AR, the hand postures and gestures that the user performs have to be recognized.

    Human-computer interaction based on hand gestures using RGB-D sensors

    In this paper we present a new method for hand gesture recognition based on an RGB-D sensor. The proposed approach takes advantage of depth information to cope with the most common problems of traditional video-based hand segmentation methods: cluttered backgrounds and occlusions. The algorithm also uses colour and semantic information to accurately identify any number of hands present in the image. Ten different static hand gestures are recognised, including all different combinations of spread fingers. Additionally, movements of an open hand are followed and six dynamic gestures are identified. The main advantage of our approach is that the user's hands are free to be at any position in the image, without the need to wear any specific clothing or additional devices. Moreover, the whole method can be executed without any initial training or calibration. Experiments carried out with different users and in different environments prove the accuracy and robustness of the method, which can additionally be run in real time.
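    A toy sketch, under assumed thresholds, of how depth can simplify hand segmentation in an RGB-D pipeline of this kind: keep only pixels within a working depth range, refine with a colour check on the registered RGB frame, and return large blobs as hand candidates. The depth limits, skin range and area threshold are illustrative, and the paper's colour/semantic identification step is not reproduced here.

```python
import cv2
import numpy as np

def segment_hands(rgb, depth_mm, near=300, far=900):
    """rgb: HxWx3 BGR image; depth_mm: HxW depth map in millimetres (registered to rgb)."""
    # Keep only pixels inside an assumed working range of the sensor.
    depth_mask = ((depth_mm > near) & (depth_mm < far)).astype(np.uint8) * 255
    # Refine with an assumed skin-colour range on the RGB frame.
    hsv = cv2.cvtColor(rgb, cv2.COLOR_BGR2HSV)
    colour_mask = cv2.inRange(hsv, (0, 30, 60), (25, 255, 255))
    mask = cv2.bitwise_and(depth_mask, colour_mask)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # Keep sufficiently large blobs as hand candidates (arbitrary area threshold).
    return [c for c in contours if cv2.contourArea(c) > 1500]
```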