
    A survey of face detection, extraction and recognition

    The goal of this paper is to present a critical survey of the existing literature on human face recognition over the last four to five years. Interest and research activity in face recognition have increased significantly over the past few years, especially after the September 11, 2001 airliner attacks in the United States. While this growth is largely driven by growing application demands, ranging from static matching of controlled photographs, as in mug-shot matching and credit-card verification, to surveillance video images, identification for law enforcement, and authentication for banking and security-system access, advances in signal analysis techniques, such as wavelets and neural networks, are also important catalysts. As the number of proposed techniques increases, survey and evaluation become increasingly important.
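Among the static-matching techniques a survey like this typically covers, the classic eigenface approach projects face images onto a low-dimensional PCA subspace and matches by nearest neighbour. The sketch below is purely illustrative and not taken from the survey: random vectors stand in for face images, and the 8-component cutoff is an arbitrary assumption.

```python
import numpy as np

# Hypothetical data: each row is a flattened grayscale face image.
rng = np.random.default_rng(0)
faces = rng.random((20, 64))          # 20 training "faces", 64 pixels each

# 1. Centre the data on the mean face.
mean_face = faces.mean(axis=0)
centred = faces - mean_face

# 2. PCA via SVD: rows of Vt are the eigenfaces.
_, _, Vt = np.linalg.svd(centred, full_matrices=False)
eigenfaces = Vt[:8]                   # keep the 8 strongest components

# 3. Project a probe image and match by nearest neighbour in face space.
def project(img):
    return (img - mean_face) @ eigenfaces.T

gallery = project(faces)
probe = project(faces[3] + 0.01 * rng.random(64))  # noisy copy of face 3
match = np.argmin(np.linalg.norm(gallery - probe, axis=1))
print(match)  # should recover index 3
```

Real systems of the surveyed era differed mainly in the features fed into this matching step (raw pixels, wavelet coefficients, or neural-network embeddings).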

    Doctor of Philosophy

    Most humans have difficulty performing precision tasks, such as writing and painting, without additional physical support to help steady the arm or offload its weight. To alleviate this problem, various passive and active devices have been developed. However, such devices often have a small workspace and lack scalable gravity compensation throughout the workspace and/or diversity in their applications. This dissertation describes the development of a Spatial Active Handrest (SAHR), a large-workspace manipulation aid that offloads the weight of the user's arm and increases the user's accuracy over a large three-dimensional workspace. The device has four degrees of freedom and allows the user to perform dexterous tasks within a large workspace that matches that of a human arm performing daily tasks. Users can move the device to a desired position and orientation using force or position inputs, or a combination of both. The SAHR converts the given input(s) into a desired velocity.

    A vision-based approach for human hand tracking and gesture recognition.

    Hand gesture interfaces have become an active topic in human-computer interaction (HCI). The use of hand gestures in a human-computer interface enables human operators to interact with computer environments in a natural and intuitive manner. In particular, bare-hand interpretation techniques free users from the cumbersome but typically required devices for communicating with computers, offering ease and naturalness in HCI. Meanwhile, virtual assembly (VA) applies virtual reality (VR) techniques to mechanical assembly. It constructs computer tools to help product engineers plan, evaluate, optimize, and verify the assembly of mechanical systems without the need for physical objects. However, traditional devices such as keyboards and mice are no longer adequate due to their inefficiency in handling three-dimensional (3D) tasks, so special VR devices, such as data gloves, have been mandatory in VA. This thesis proposes a novel gesture-based interface for VA. It develops a hybrid approach that combines an appearance-based hand localization technique with a skin-tone filter to support gesture recognition and hand tracking in 3D space. With this interface, bare hands become a convenient substitute for special VR devices. Experimental results demonstrate the flexibility and robustness the proposed method brings to HCI.
    Dept. of Computer Science. Paper copy at Leddy Library: Theses & Major Papers - Basement, West Bldg. / Call Number: Thesis2004 .L8. Source: Masters Abstracts International, Volume: 43-03, page: 0883. Adviser: Xiaobu Yuan. Thesis (M.Sc.)--University of Windsor (Canada), 2004.
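The abstract does not give the parameters of the thesis's skin-tone filter, so the sketch below uses one classic explicit RGB skin rule (Peer et al.) as a plausible stand-in for the filtering step that precedes appearance-based hand localization:

```python
import numpy as np

def skin_mask(rgb):
    """Classic explicit RGB skin-tone rule; an illustrative stand-in,
    not the thesis's actual filter."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    return ((r > 95) & (g > 40) & (b > 20) &
            (r - g > 15) & (r > b) &
            (rgb.max(axis=-1) - rgb.min(axis=-1) > 15))

# Tiny synthetic frame: one skin-coloured pixel, one blue pixel.
frame = np.array([[[200, 120, 90], [30, 40, 200]]], dtype=np.uint8)
print(skin_mask(frame))   # [[ True False]]
```

In a full pipeline, the binary mask would gate candidate regions before the (more expensive) appearance-based hand localizer and tracker run on them.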

    Doctor of Philosophy

    Humans generally have difficulty performing precision tasks with their unsupported hands. To compensate, people often support or rest the hand and arm on a fixed surface. However, when a precision task must be performed over a workspace larger than what can be reached from a fixed position, a fixed support is no longer useful. This dissertation describes the development of the Active Handrest, a device that expands its user's dexterous workspace by providing ergonomic support and precise repositioning motions over a large workspace. The prototype Active Handrest is a planar computer-controlled support for the user's hand and arm. The device can be controlled through force input from the user, position input from a grasped tool, or a combination of inputs. The control algorithm of the Active Handrest converts the input(s) into device motions through admittance control, where the device's desired velocity is calculated proportionally to the input force or its equivalent. A robotic 2-axis admittance device was constructed as the initial Planar Active Handrest (PAHR) prototype. Experiments were conducted to optimize the device's control input strategies. Large-workspace shape-tracing experiments compared the PAHR to unsupported, fixed-support, and passive movable-support conditions. The Active Handrest was found to reduce task error and provide better speed-accuracy performance. Next, virtual fixture strategies were explored for the device. From the options considered, a virtual spring fixture strategy was chosen based on its effectiveness. An experiment compared the PAHR and its virtual spring fixture strategy to traditional virtual fixture techniques for a grasped stylus. Virtual fixtures implemented on the Active Handrest were found to be as effective as fixtures implemented on a grasped tool.
    Finally, a higher degree-of-freedom Enhanced Planar Active Handrest (E-PAHR) was constructed to support large-workspace precision tasks while more closely following the planar motions of the human arm. Experiments investigated appropriate control strategies and device utility. The E-PAHR was found to provide a skill level equal to that of the PAHR with reduced user force input and lower perceived exertion.
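The admittance law described above (desired velocity proportional to input force) can be sketched in a few lines. The gain and deadband values here are hypothetical, chosen for illustration; the dissertation's tuned parameters are not given in the abstract.

```python
import numpy as np

# Desired velocity is proportional to the user's input force.
ADMITTANCE_GAIN = 0.02   # (m/s) per newton -- assumed, not from the dissertation

def desired_velocity(force_xy, deadband=0.5):
    """Map a 2-D force input to a planar device velocity.
    A small deadband ignores sensor noise and unintentional contact."""
    f = np.asarray(force_xy, dtype=float)
    if np.linalg.norm(f) < deadband:
        return np.zeros(2)
    return ADMITTANCE_GAIN * f

print(desired_velocity([10.0, 0.0]))   # [0.2 0. ]
print(desired_velocity([0.1, 0.1]))    # [0. 0.] (inside deadband)
```

Position input from a grasped tool fits the same law by substituting a position error (times a stiffness) for the measured force, which is presumably the "or its equivalent" in the abstract.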

    Pedestrian detection and tracking using stereo vision techniques

    Automated pedestrian detection, counting, and tracking have received significant attention from the computer vision community of late. Many of the person-detection techniques described in the literature work well in controlled environments, such as laboratory settings with a small number of people, which allows various simplifying assumptions to be made about this complex problem. The performance of these techniques, however, tends to deteriorate in unconstrained environments where pedestrian appearances, numbers, orientations, movements, occlusions, and lighting conditions violate these convenient assumptions. Recently, 3D stereo information has been proposed as a way to overcome some of these issues and to guide pedestrian detection. This thesis presents such an approach: after obtaining robust 3D information via a novel disparity estimation technique, pedestrian detection is performed via a 3D point clustering process within a region-growing framework. This clustering process avoids hard thresholds by using biometrically inspired constraints and a number of plan-view statistics. The detection technique requires no external training and robustly handles challenging real-world unconstrained environments from various camera positions and orientations. In addition, this thesis presents a continuous detect-and-track approach, with additional kinematic constraints and explicit occlusion analysis, to obtain robust temporal tracking of pedestrians over time. These approaches are experimentally validated on challenging datasets consisting of both synthetic data and real-world sequences gathered from a number of environments. In each case, the techniques are evaluated using both 2D and 3D ground-truth methodologies.
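The core of such a region-growing clusterer can be sketched as a greedy flood fill over 3D points. Note the simplification: the sketch uses a hard distance threshold, which is exactly what the thesis avoids by substituting biometrically inspired constraints and plan-view statistics; the radius here is illustrative only.

```python
import numpy as np

def grow_clusters(points, radius=0.3, min_size=3):
    """Greedy region growing over 3-D points: seed an unvisited point,
    then absorb every point within `radius` of the growing cluster.
    The hard radius is a simplification of the thesis's approach."""
    points = np.asarray(points, dtype=float)
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        frontier = [unvisited.pop()]
        member = list(frontier)
        while frontier:
            i = frontier.pop()
            near = [j for j in list(unvisited)
                    if np.linalg.norm(points[j] - points[i]) < radius]
            for j in near:
                unvisited.remove(j)
            frontier.extend(near)
            member.extend(near)
        if len(member) >= min_size:          # reject tiny noise clusters
            clusters.append(sorted(member))
    return sorted(clusters)

# Two well-separated blobs of points -> two pedestrian candidates.
blob_a = [[0, 0, 0], [0.1, 0, 0], [0.2, 0.1, 0]]
blob_b = [[5, 5, 1], [5.1, 5, 1], [5, 5.2, 1]]
print(grow_clusters(blob_a + blob_b))   # [[0, 1, 2], [3, 4, 5]]
```

In the thesis's setting, each surviving cluster would then be scored against plan-view statistics (e.g. projected height and footprint area) before being accepted as a pedestrian.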

    Multiple Cues used in Model-Based Human Motion Capture

    Human motion capture has lately been the object of much attention due to commercial interest. A "touch-free" computer vision solution to the problem is desirable to avoid the intrusiveness of standard capture devices. The object to be monitored is known a priori, which suggests including a human model in the capture process. In this paper we use a model-based approach known as analysis-by-synthesis. This approach is powerful but suffers from a potentially huge search space. Using multiple cues, we reduce the search space by introducing constraints through the 3D locations of salient points and a silhouette of the subject. Both data types are relatively easy to derive and require only limited computational effort, so the approach remains suitable for real-time applications. The approach is tested on 3D movements of a human arm, and the results show that we can successfully estimate the pose of the arm using the reduced search space.
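Analysis-by-synthesis in miniature: synthesise observations from candidate model poses and keep the candidate that best explains the measurement. The toy below uses a one-link planar "arm" with an assumed length, a deliberate reduction of the paper's 3D, multi-cue setting; the narrowed search interval plays the role of the salient-point constraint.

```python
import numpy as np

LINK = 1.0  # link length in metres -- assumed for illustration

def synthesise(theta):
    """Forward model: fingertip position for a candidate joint angle."""
    return np.array([LINK * np.cos(theta), LINK * np.sin(theta)])

def estimate_pose(observed, candidates):
    """Pick the candidate whose synthesised fingertip best matches
    the observed salient point."""
    errs = [np.linalg.norm(synthesise(t) - observed) for t in candidates]
    return candidates[int(np.argmin(errs))]

true_theta = 0.9
observed = synthesise(true_theta)          # "measured" salient point
# Cues constrain the search to a narrow interval instead of the full circle:
search_space = np.linspace(0.8, 1.0, 21)
print(round(float(estimate_pose(observed, search_space)), 2))  # 0.9
```

The silhouette cue in the paper works the same way at the scoring step: rendered and observed silhouettes are compared instead of (or in addition to) point distances.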

    Using biomechanical constraints to improve video-based motion capture

    In motion capture applications that aim to recover human body postures from various inputs, the high dimensionality of the problem makes it desirable to reduce the size of the search space by eliminating a priori impossible configurations. This can be done by constraining the posture recovery process in various ways. Most recent work in this area has focused on applying camera viewpoint-related constraints to eliminate erroneous solutions. When camera calibration parameters are available, they provide an extremely efficient tool for disambiguating not only posture estimation but also 3D reconstruction and data segmentation. Increased robustness is indeed gained from enforcing such constraints, which we demonstrate in the context of an optical motion capture framework. Our contribution in this respect lies in applying such constraints consistently to each main step of the motion capture process, namely marker reconstruction and segmentation, followed by posture recovery. These steps are made inter-dependent, each one constraining the other. A more application-independent approach is to encode constraints directly within the human body model, such as limits on the rotational joints. As this is an almost unexplored research subject, our efforts were mainly directed at determining a new method for measuring, representing, and applying such joint limits. To date, the few existing range-of-motion boundary representations have severe drawbacks that call for an alternative formulation. The joint limits paradigm we propose not only overcomes these drawbacks but also captures intra- and inter-joint rotation dependencies, which are essential to realistic joint motion representation. The range-of-motion boundary is defined by an implicit surface, whose analytical expression lets us readily establish whether a given joint rotation is valid.
    Furthermore, its continuous and differentiable nature provides a means of elegantly incorporating such a constraint within an optimisation process for posture recovery. Applying constrained optimisation to our body model and stereo data extracted from video sequences, we demonstrate a clear decrease in posture estimation errors. As a bonus, we have integrated our joint limits representation into character animation packages to show how motion can be naturally constrained in this manner.
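The implicit-surface idea reduces joint-limit checking to a sign test: a rotation q is valid when f(q) <= 0, with f zero exactly on the range-of-motion boundary. The sketch below uses a simple ellipse coupling flexion and abduction, with assumed limit values; the thesis instead fits the surface to measured range-of-motion data and couples twist as well.

```python
import numpy as np

# Assumed limits for illustration -- not the thesis's measured values.
FLEX_MAX = np.radians(120.0)
ABD_MAX = np.radians(45.0)

def limit_surface(flexion, abduction):
    """Implicit function: negative inside the valid range, zero on the
    boundary, positive outside.  Because it is continuous and
    differentiable, it can serve directly as an inequality constraint
    in gradient-based posture optimisation."""
    return (flexion / FLEX_MAX) ** 2 + (abduction / ABD_MAX) ** 2 - 1.0

def is_valid(flexion, abduction):
    return limit_surface(flexion, abduction) <= 0.0

print(is_valid(np.radians(60), np.radians(20)))    # True  (well inside)
print(is_valid(np.radians(110), np.radians(40)))   # False (outside the ellipse)
```

An ellipse already captures the intra-joint dependency the abstract mentions: the permitted abduction shrinks as flexion approaches its extreme, unlike independent box limits on each angle.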