
    Modeling Shapes for Pattern Recognition: A Simple Low-Cost Spline-based Approach

    We present a simple procedure for modeling shapes and trajectories of points using cubic polynomial splines. The procedure may prove useful for researchers working in the field of pattern recognition who are in search of a simple functional representation for shapes and who are not particularly interested in diving into the high theoretical aspects of more complex representations. The use of splines brings a few advantages with regard to data dimensionality, speed, and accuracy of processing, with minimal effort required for implementation. We describe several algorithms for data reduction, spline creation, and querying, for which we provide pseudocode procedures in order to demonstrate the ease of implementation. We also provide measurements of the approximation error and the rate of data reduction.
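
    Below is a minimal sketch, assuming SciPy and NumPy are available, of the general approach the abstract describes (it is not the paper's exact algorithms): a 2-D shape is parameterised by chord length, only a small, evenly spaced subset of points is kept as spline knots (data reduction), parametric cubic splines are fitted through them, and the reduced representation can be queried at arbitrary parameter values; the residual at the dropped points serves as the approximation error. All function and parameter names are illustrative.

    import numpy as np
    from scipy.interpolate import CubicSpline

    def fit_shape_spline(points, n_knots=16):
        """Fit parametric cubic splines to an (N, 2) array of shape points.

        Parameterise by cumulative chord length, keep only n_knots evenly
        spaced samples as knots (data reduction), and return the spline
        together with the per-point approximation error on the original data.
        """
        points = np.asarray(points, dtype=float)
        # chord-length parameterisation, normalised to [0, 1]
        seg = np.linalg.norm(np.diff(points, axis=0), axis=1)
        t = np.concatenate(([0.0], np.cumsum(seg)))
        t /= t[-1]
        # data reduction: keep a small, evenly spaced subset as spline knots
        idx = np.linspace(0, len(points) - 1, n_knots).round().astype(int)
        spline = CubicSpline(t[idx], points[idx])            # fits x(t) and y(t) at once
        error = np.linalg.norm(spline(t) - points, axis=1)   # residual at the dropped points
        return spline, t, error

    # usage: build the reduced representation of a test shape and query it
    theta = np.linspace(0.0, 2.0 * np.pi, 200)
    shape = np.column_stack((np.cos(theta), np.sin(theta)))  # a unit circle as test data
    spline, t, err = fit_shape_spline(shape, n_knots=12)
    print("max approximation error:", err.max())
    print("resampled point at t = 0.5:", spline(0.5))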

    Effects of Moving Speed and Phone Location on Eyes-Free Gesture Input with Mobile Devices

    Using smartphones while moving is challenging and can be dangerous. Eyes-free input gestures can provide a means to use smartphones without requiring visual attention from users. In this study, we investigated the effect of different moving speeds (standing, walking, or jogging) and different phone locations (held freely in the hand, or placed inside a shoulder bag) on eyes-free input gestures with a smartphone. Our results from 12 male participants showed that gesture entry duration is not affected by moving speed or phone location; however, other gesture features, such as length, height, width, area, and phone orientation, are mostly affected by moving speed or phone location. Eyes-free gesture features therefore vary significantly as the user's environmental factors, such as moving speed or phone location, change, and this should be taken into account by designers.
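
    The stroke-level features named above are straightforward to compute from raw touch samples. The sketch below uses illustrative names (it is not the study's instrument) and derives duration, path length, and bounding-box width, height, and area from a list of timestamped touch points; phone orientation would come from the device's motion sensors and is omitted here.

    import math

    def gesture_features(samples):
        """Compute simple stroke features from a list of (x, y, t_ms) touch samples."""
        xs = [s[0] for s in samples]
        ys = [s[1] for s in samples]
        ts = [s[2] for s in samples]
        path_length = sum(
            math.hypot(xs[i + 1] - xs[i], ys[i + 1] - ys[i])
            for i in range(len(samples) - 1)
        )
        width = max(xs) - min(xs)
        height = max(ys) - min(ys)
        return {
            "duration_ms": ts[-1] - ts[0],  # gesture entry duration
            "length": path_length,          # total path length of the stroke
            "width": width,                 # bounding-box width
            "height": height,               # bounding-box height
            "area": width * height,         # bounding-box area
        }

    # usage: a short diagonal stroke sampled at four points
    print(gesture_features([(0, 0, 0), (10, 5, 30), (25, 12, 60), (40, 20, 90)]))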

    Gesture Heatmaps


    An Efficient Solution for Hand Gesture Recognition from Video Sequence

    The paper describes a hand gesture recognition system based on image processing for human-robot interaction. The recognition and interpretation of hand postures acquired through a video camera allow control of the robotic arm's activity: motion (translation and rotation in 3D) and tightening/releasing the clamp. A gesture dictionary was defined, and heuristic recognition algorithms were developed and tested. The system can be used for academic and industrial purposes, especially for activities where the movements of the robotic arm are not scheduled in advance, making it easier to train the robot than using a remote control. Besides the gesture dictionary, the novelty of the paper consists in a new technique for detecting the relative positions of the fingers in order to recognize the various hand postures, and in the achievement of a robust system for controlling robots through hand postures.
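
    As a point of reference, the sketch below shows a common baseline for posture recognition from a segmented hand silhouette (it is not the paper's finger-position technique): take the largest contour, compute its convex hull, and count extended fingers from deep convexity defects. It assumes OpenCV 4.x and a pre-segmented grayscale frame in which the hand is the brightest region; the depth threshold is illustrative.

    import cv2

    def count_extended_fingers(gray_frame, depth_thresh=10000):
        """Estimate the number of extended fingers in a grayscale frame."""
        _, mask = cv2.threshold(gray_frame, 0, 255,
                                cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return 0
        hand = max(contours, key=cv2.contourArea)        # assume the hand is the largest blob
        hull = cv2.convexHull(hand, returnPoints=False)  # hull as contour indices
        defects = cv2.convexityDefects(hand, hull)
        if defects is None:
            return 0
        # each sufficiently deep defect is the valley between two extended fingers
        deep = sum(1 for d in defects[:, 0] if d[3] > depth_thresh)
        return deep + 1 if deep > 0 else 0

    # usage: estimate the posture from one captured video frame
    # frame = cv2.cvtColor(cv2.imread("hand.png"), cv2.COLOR_BGR2GRAY)
    # print(count_extended_fingers(frame))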