
    Implementation of target tracking in Smart Wheelchair Component System

    Independent mobility is critical to individuals of any age. While the needs of many individuals with disabilities can be satisfied with power wheelchairs, some members of the disabled community find it difficult or impossible to operate a standard power wheelchair. This population includes, but is not limited to, individuals with low vision, visual field neglect, spasticity, tremors, or cognitive deficits. To meet the needs of this population, our group is developing cost-effective, modularly designed Smart Wheelchairs. Our objective is an assistive navigation system that integrates seamlessly into the lifestyle of individuals with disabilities and provides safe, independent mobility and navigation without imposing an excessive physical or cognitive load. The Smart Wheelchair Component System (SWCS) can be added to a variety of commercial power wheelchairs with minimal modification to provide navigation assistance. Previous versions of the SWCS used acoustic and infrared rangefinders to identify and avoid obstacles, but these sensors do not lend themselves to many desirable higher-level behaviors. To achieve these higher-level behaviors, we integrated a Continuously Adaptive Mean Shift (CAMSHIFT) target tracking algorithm into the SWCS, along with the Minimal Vector Field Histogram (MVFH) obstacle avoidance algorithm. The target tracking algorithm provides the basis for two distinct operating modes: (1) a "follow-the-leader" mode, and (2) a "move to stationary target" mode. The ability to track a stationary or moving target will make smart wheelchairs more useful as a mobility aid, and is also expected to be useful for wheeled mobility training and evaluation. Beyond the users themselves, the caregivers, clinicians, and transporters who assist wheelchair users also stand to benefit, since safe and independent mobility reduces the level of assistance those users need.
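
    As a rough illustration of the tracking component named above, the sketch below drives OpenCV's CamShift implementation from a camera feed; the camera source, initial target window, and controller hookup are assumptions, not details from the SWCS.

```python
# Minimal sketch of CAMSHIFT-style target following, assuming OpenCV.
# Camera index, initial window, and controller hookup are hypothetical;
# this is not the SWCS code.
import cv2

cap = cv2.VideoCapture(0)                      # any camera source
ok, frame = cap.read()
x, y, w, h = 300, 200, 80, 120                 # assumed initial target window

# Hue histogram of the target region, back-projected onto later frames.
hsv_roi = cv2.cvtColor(frame[y:y+h, x:x+w], cv2.COLOR_BGR2HSV)
hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])
cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)

track_window = (x, y, w, h)
criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    back_proj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
    # CAMSHIFT adapts the search window's size and orientation each frame.
    rot_rect, track_window = cv2.CamShift(back_proj, track_window, criteria)
    (cx, cy), _, _ = rot_rect
    # A "follow-the-leader" controller would steer toward (cx, cy),
    # with an obstacle-avoidance layer such as MVFH vetoing unsafe motion.
```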

    Computer vision in target pursuit using a UAV

    Research in target pursuit using Unmanned Aerial Vehicles (UAVs) has gained attention in recent years, primarily due to the decreasing cost of and increasing demand for small UAVs in many sectors. In computer vision, target pursuit is a complex problem, as it involves many sub-problems typically concerned with detecting, tracking, and following the object of interest. At present, the majority of existing methods are developed in computer simulation under the assumption of ideal environmental factors, while the few practical methods are mainly developed to track and follow simple objects with monochromatic colours and very little texture variance. Current research in this topic lacks practical vision-based approaches, so the aim of this research is to fill that gap by developing a real-time algorithm capable of following a person continuously, given only a photo as input.

    As this research treats the whole procedure as an autonomous system, the drone is activated automatically upon receiving a photo of a person over Wi-Fi; the whole system can thus be triggered by simply emailing a single photo from any device, anywhere. This is done by first implementing image fetching to automatically connect to Wi-Fi, download the image, and decode it. Then, human detection is performed to extract a template from the upper body of the person, and the intended target is acquired using both human detection and template matching. Finally, target pursuit is achieved by tracking the template continuously while sending motion commands to the drone. In the target pursuit system, detection is mainly accomplished using a proposed human detection method capable of detecting, extracting, and segmenting the human body figure robustly from the background without prior training; this involves detecting the face, head, and shoulders separately, mainly using gradient maps. Tracking is mainly accomplished using a proposed generic, non-learning template matching method that combines intensity template matching with a colour histogram model and employs a three-tier system for template management. A flight controller is also developed; it supports three types of control (keyboard, mouse, and text messages), and the drone is programmed with three different modes: standby, sentry, and search.

    To improve the detection and tracking of coloured objects, this research also proposes several colour-related methods. One of them is a colour model for colour detection consisting of three components: hue, purity, and brightness, where hue represents the colour angle, purity the colourfulness, and brightness the intensity. The model can be represented in three different geometric shapes (sphere, hemisphere, and cylinder), each with two variations. Experimental results show that the target pursuit algorithm identifies and follows the target person robustly given only a photo input, as evidenced by live tracking and mapping of intended targets in different clothing in both indoor and outdoor environments. The various methods developed in this research could also enhance the performance of practical vision-based applications, especially in detecting and tracking objects.
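
    To make the tracking stage concrete, the sketch below fuses intensity template matching with a colour-histogram check using standard OpenCV calls; the function name, gating thresholds, and histogram size are assumptions, and the thesis's three-tier template management is not reproduced.

```python
# Sketch of fusing intensity template matching with a colour-histogram
# check, assuming OpenCV. Names, thresholds, and bin counts are
# hypothetical illustrations, not the author's implementation.
import cv2

def locate_target(frame_bgr, tmpl_bgr, tmpl_hist, ncc_gate=0.6, hist_gate=0.5):
    """Return (x, y, w, h) of the best match, or None if either cue fails.
    tmpl_hist: normalized 32-bin hue histogram of the template."""
    frame_gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    tmpl_gray = cv2.cvtColor(tmpl_bgr, cv2.COLOR_BGR2GRAY)
    th, tw = tmpl_gray.shape

    # Cue 1: normalized cross-correlation on intensity.
    scores = cv2.matchTemplate(frame_gray, tmpl_gray, cv2.TM_CCOEFF_NORMED)
    _, ncc, _, (x, y) = cv2.minMaxLoc(scores)
    if ncc < ncc_gate:
        return None

    # Cue 2: hue-histogram similarity of the candidate region.
    cand_hsv = cv2.cvtColor(frame_bgr[y:y+th, x:x+tw], cv2.COLOR_BGR2HSV)
    cand_hist = cv2.calcHist([cand_hsv], [0], None, [32], [0, 180])
    cv2.normalize(cand_hist, cand_hist, 0, 1, cv2.NORM_MINMAX)
    if cv2.compareHist(tmpl_hist, cand_hist, cv2.HISTCMP_CORREL) < hist_gate:
        return None
    return (x, y, tw, th)
```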

    A Review and Analysis of Eye-Gaze Estimation Systems, Algorithms and Performance Evaluation Methods in Consumer Platforms

    In this paper, a review is presented of the research on eye-gaze estimation techniques and applications, which has progressed in diverse ways over the past two decades. Several generic eye-gaze use-cases are identified: desktop, TV, head-mounted, automotive, and handheld devices. Analysis of the literature leads to the identification of several platform-specific factors that influence gaze tracking accuracy. A key outcome of this review is the realization of a need to develop standardized methodologies for performance evaluation of gaze tracking systems and to achieve consistency in their specification and comparative evaluation. To address this need, the concept of a methodological framework for practical evaluation of different gaze tracking systems is proposed. (Comment: 25 pages, 13 figures. Accepted for publication in IEEE Access in July 2017.)
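
    As a concrete example of what such a framework must standardize, the sketch below converts on-screen gaze error into degrees of visual angle, a metric commonly reported for gaze trackers; the pixel pitch and viewing distance used are hypothetical, not values from the paper.

```python
# One metric a standardized evaluation would need: on-screen gaze error
# expressed in degrees of visual angle. Example numbers are hypothetical.
import math

def angular_error_deg(err_px, px_pitch_mm, view_dist_mm):
    """On-screen gaze error (pixels) -> error in degrees of visual angle."""
    return math.degrees(math.atan2(err_px * px_pitch_mm, view_dist_mm))

# Example: a 50 px error on a 0.25 mm/px display viewed from 600 mm.
print(round(angular_error_deg(50, 0.25, 600), 2))  # ~1.19 degrees
```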

    Lovotics: Love + Robotics, Sentimental Robot With Affective Artificial Intelligence

    Ph.D. thesis (Doctor of Philosophy).

    Object Tracking Using Local Binary Descriptors

    Visual tracking has become an increasingly important topic of research in the field of Computer Vision (CV). There are currently many tracking methods based on the detect-then-track paradigm. This type of approach may allow a system to track a random object with just a single initialization phase, but it often relies on constructing models to follow the object. Another limitation of these methods is that they are computationally and memory intensive, which hinders their application to resource-constrained platforms such as mobile devices. Under these conditions, the implementation of Augmented Reality (AR) or complex multi-part systems is not possible. In this thesis, we explore a variety of interest point descriptors for generic object tracking. The SIFT descriptor is considered a benchmark and is compared with binary descriptors such as BRIEF, ORB, BRISK, and FREAK. The accuracy of these descriptors is benchmarked against the ground truth of the object's location. We use dictionaries of descriptors to track regions with small error under variations due to occlusions, illumination changes, scaling, and rotation. This is accomplished by using a Dense-to-Sparse Search Pattern, Locality Constraints, and Scale Adaptation. A benchmarking system is created to test the descriptors' accuracy, speed, robustness, and distinctness. This data offers a comparison of the tracking system against current state-of-the-art systems such as the Multiple Instance Learning tracker (MILTrack), Tracking-Learning-Detection (TLD), and Continuously Adaptive Mean Shift (CAMSHIFT).
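
    A minimal sketch of this kind of pipeline, assuming OpenCV: ORB keypoints (a binary descriptor) matched by Hamming distance, with the tracked box shifted by the median keypoint motion. The parameters and the median-shift heuristic are illustrative stand-ins for the thesis's dictionary-based approach.

```python
# Frame-to-frame region tracking with a binary descriptor (ORB) and
# Hamming-distance matching; illustrative parameters, not the thesis code.
import cv2
import numpy as np

orb = cv2.ORB_create(nfeatures=500)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def track_region(prev_gray, cur_gray, box):
    """Shift box=(x, y, w, h) by the median motion of matched keypoints."""
    x, y, w, h = box
    mask = np.zeros(prev_gray.shape, np.uint8)
    mask[y:y+h, x:x+w] = 255                     # detect only inside the box

    kp1, des1 = orb.detectAndCompute(prev_gray, mask)
    kp2, des2 = orb.detectAndCompute(cur_gray, None)
    if des1 is None or des2 is None:
        return box                               # tracking lost; keep box

    matches = matcher.match(des1, des2)
    if not matches:
        return box
    shifts = np.array([np.array(kp2[m.trainIdx].pt) - np.array(kp1[m.queryIdx].pt)
                       for m in matches])
    dx, dy = np.median(shifts, axis=0)           # robust to outlier matches
    return (int(x + dx), int(y + dy), w, h)
```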

    Real-time synthetic primate vision


    Augmented Reality

    Augmented Reality (AR) is a natural development from Virtual Reality (VR), which emerged several decades earlier, and it complements VR in many ways. Because the user can see real and virtual objects simultaneously, AR is far more intuitive, although it remains subject to human factors and other restrictions. AR also demands less time and effort per application, since it does not require constructing the entire virtual scene and environment. In this book, several new and emerging application areas of AR are presented, divided into three sections. The first section contains applications in outdoor and mobile AR, such as construction, restoration, security, and surveillance. The second section deals with AR in medical and biological applications and the human body. The third and final section contains a number of new and useful applications in daily living and learning.

    Mobile Robots Navigation

    Mobile robot navigation comprises several interrelated activities: (i) perception, obtaining and interpreting sensory information; (ii) exploration, the strategy that guides the robot in selecting the next direction to go; (iii) mapping, the construction of a spatial representation from the sensory information perceived; (iv) localization, the strategy for estimating the robot's position within the spatial map; (v) path planning, the strategy for finding a path towards a goal location, optimal or not; and (vi) path execution, where motor actions are determined and adapted to environmental changes. The book addresses these activities by integrating results from the research of authors all over the world. Research cases are documented in 32 chapters organized into seven categories, described next.
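
    Of the six activities, path planning (v) is the simplest to show self-contained: the toy below runs breadth-first search over a small occupancy grid, yielding a step-optimal path. It is an illustrative sketch, not code from the book.

```python
# Activity (v), path planning, in its simplest runnable form:
# breadth-first search on an occupancy grid (illustrative toy).
from collections import deque

def bfs_path(grid, start, goal):
    """grid: list of strings, '#' = obstacle. Returns [(r, c), ...] or None."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}                         # also serves as visited set
    q = deque([start])
    while q:
        r, c = q.popleft()
        if (r, c) == goal:
            path = []                            # walk back to the start
            while (r, c) != start:
                path.append((r, c))
                r, c = prev[(r, c)]
            return [start] + path[::-1]
        for nr, nc in ((r+1, c), (r-1, c), (r, c+1), (r, c-1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] != '#' \
                    and (nr, nc) not in prev:
                prev[(nr, nc)] = (r, c)
                q.append((nr, nc))
    return None                                  # goal unreachable

grid = ["....",
        ".##.",
        "...."]
print(bfs_path(grid, (0, 0), (2, 3)))            # shortest path around the wall
```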