Implementation of target tracking in Smart Wheelchair Component System
Independent mobility is critical to individuals of any age. While the needs of many individuals with disabilities can be satisfied with power wheelchairs, some members of the disabled community find it difficult or impossible to operate a standard power wheelchair. This population includes, but is not limited to, individuals with low vision, visual field neglect, spasticity, tremors, or cognitive deficits. To meet the needs of this population, our group is developing cost-effective, modularly designed Smart Wheelchairs. Our objective is to develop an assistive navigation system that integrates seamlessly into the lifestyles of individuals with disabilities and provides safe, independent mobility and navigation without imposing an excessive physical or cognitive load. The Smart Wheelchair Component System (SWCS) can be added to a variety of commercial power wheelchairs with minimal modification to provide navigation assistance. Previous versions of the SWCS used acoustic and infrared rangefinders to identify and avoid obstacles, but these sensors do not lend themselves to many desirable higher-level behaviors. To achieve these higher-level behaviors, we integrated a Continuously Adaptive Mean Shift (CAMSHIFT) target-tracking algorithm into the SWCS, along with the Minimal Vector Field Histogram (MVFH) obstacle avoidance algorithm. The target-tracking algorithm provides the basis for two distinct operating modes: (1) a "follow-the-leader" mode, and (2) a "move to stationary target" mode. The ability to track a stationary or moving target will make smart wheelchairs more useful as a mobility aid, and is also expected to be useful for wheeled mobility training and evaluation.
In addition to wheelchair users themselves, the caregivers, clinicians, and transporters who assist them will also benefit: providing safe, independent mobility reduces the level of assistance that wheelchair users need.
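The CAMSHIFT tracker behind these operating modes repeatedly shifts a search window toward the centroid of a colour-probability (back-projection) map, then adapts the window. A minimal sketch of the core mean-shift step, run on a synthetic probability map rather than a real camera back-projection (the function and data here are illustrative, not the SWCS code):

```python
import numpy as np

def mean_shift(prob, window, n_iter=20, eps=1.0):
    """One tracking update: shift a search window to the centroid of the
    probability (back-projection) map under it, as in mean shift / CAMSHIFT."""
    x, y, w, h = window  # top-left corner and size
    for _ in range(n_iter):
        roi = prob[y:y + h, x:x + w]
        m00 = roi.sum()
        if m00 == 0:                      # no target mass under the window
            break
        ys, xs = np.mgrid[0:h, 0:w]
        cx = (roi * xs).sum() / m00       # centroid inside the window
        cy = (roi * ys).sum() / m00
        dx = int(round(cx - w / 2))
        dy = int(round(cy - h / 2))
        x = min(max(x + dx, 0), prob.shape[1] - w)   # stay inside the map
        y = min(max(y + dy, 0), prob.shape[0] - h)
        if abs(dx) < eps and abs(dy) < eps:          # converged
            break
    return (x, y, w, h)

# Synthetic back-projection: a bright blob (the "target") centred near (60, 40)
prob = np.zeros((100, 100))
prob[35:45, 55:65] = 1.0
win = mean_shift(prob, (40, 20, 30, 30))  # initial window overlapping the blob
```

The full CAMSHIFT adds a window-resizing step using the zeroth moment, which is what lets the tracked region grow and shrink as the leader moves nearer or farther.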
Visual control of a mobile robot using an overhead camera (Control visual de un robot móvil mediante una cámara cenital)
This research project addresses the problem of controlling the motion of a small
mobile robot by means of visual feedback provided by an overhead camera. This visual
servoing problem has been previously addressed by many researchers due to its multiple
applications to real world problems. In this document, we propose a software platform
that relies on low-cost hardware components to solve it. Based on the imagery supplied by
the overhead camera, the proposed system is capable of precisely locating and tracking
the robot within a planar ground workspace, using the CAMShift algorithm, as well as
finding out its orientation at every moment. Then, an error measurement is defined
between current and desired positions of the robot in the Cartesian plane (Position-Based
Visual Servoing). In order to generate the suitable motion commands that lead the robot
towards its destination, we make use of mathematical equations that model the control
of the robot. The platform has been designed especially with real-time applications in mind.
One of the central goals of this work is to analyze the viability of the proposed system
and the level of accuracy it can achieve, given the low-cost components on which it is
based. The system was validated through real-time experiments. First, an exhaustive
battery of 1,400 experiments was conducted to find a suitable set of parameter values to
tune the control equations. Second, we implemented three different applications to test
these new control values: tracing a trajectory defined by a fixed set of points, pursuing
a mobile target, and integrating our system with a block-programming platform from
which it receives a set of destination points to follow.
Having successfully completed all these tasks, we conclude that the proposed robotic
system has proven its feasibility and effectiveness for the visual servoing problem
addressed.
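In a position-based visual servoing loop like the one described, the overhead camera supplies the robot's pose, an error is formed against the desired position, and control equations turn that error into motion commands. A minimal sketch, assuming a differential-drive robot and simple proportional gains (the gains and function name are illustrative, not the thesis's tuned values):

```python
import numpy as np

def pbvs_command(pose, target, k_v=0.5, k_w=1.5):
    """Compute (linear, angular) velocity commands that drive a
    differential-drive robot toward a target point in the ground plane.
    pose = (x, y, theta) measured by the overhead camera; target = (x, y)."""
    x, y, theta = pose
    dx, dy = target[0] - x, target[1] - y
    rho = np.hypot(dx, dy)                          # PBVS position error norm
    alpha = np.arctan2(dy, dx) - theta              # heading error
    alpha = (alpha + np.pi) % (2 * np.pi) - np.pi   # wrap to [-pi, pi)
    v = k_v * rho * np.cos(alpha)                   # slow down when misaligned
    w = k_w * alpha                                 # turn toward the target
    return v, w

# Robot at the origin facing +x, target 1 m straight ahead:
v, w = pbvs_command((0.0, 0.0, 0.0), (1.0, 0.0))
```

Re-evaluating this at each camera frame closes the loop; the proportional structure is a common starting point for such controllers, which the thesis's 1,400-experiment battery would then refine.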
Real-Time, Multiple Pan/Tilt/Zoom Computer Vision Tracking and 3D Positioning System for Unmanned Aerial System Metrology
The study of structural characteristics of Unmanned Aerial Systems (UASs) continues to be an important field of research for developing state-of-the-art nano/micro systems. Development of a metrology system using computer vision (CV) tracking and 3D point extraction would provide an avenue for advancing these theoretical developments. This work provides a portable, scalable system capable of real-time tracking, zooming, and 3D position estimation of a UAS using multiple cameras. Current state-of-the-art photogrammetry systems use retro-reflective markers or single-point lasers to obtain object poses and/or positions over time. Using a CV pan/tilt/zoom (PTZ) system has the potential to circumvent their limitations. The system developed in this paper exploits parallel processing and the GPU for CV tracking, using optical flow and known camera motion, in order to capture a moving object using two PTU cameras. The parallel-processing technique developed in this work is versatile, allowing other CV methods to be tested with a PTZ system using known camera motion. Utilizing known camera poses, the object's 3D position is estimated, and focal lengths are estimated to fill the image to a desired amount. This system is tested against truth data obtained using an industrial system.
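Once each camera's pose (and hence 3×4 projection matrix) is known, the UAS's 3D position can be recovered by triangulating the two image observations. A standard linear (DLT) triangulation sketch with a toy two-camera setup; the matrices and values below are illustrative, not the paper's calibration:

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one 3D point from pixel observations in
    two cameras with known 3x4 projection matrices P1, P2 -- the kind of step
    a multi-camera metrology system needs once each camera's pose is known."""
    u1, v1 = uv1
    u2, v2 = uv2
    A = np.stack([
        u1 * P1[2] - P1[0],   # each observation contributes two
        v1 * P1[2] - P1[1],   # linear constraints on the homogeneous point
        u2 * P2[2] - P2[0],
        v2 * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                # null-space solution
    return X[:3] / X[3]       # dehomogenize

# Toy setup: identical intrinsics, second camera shifted 1 m along x.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.2, -0.1, 4.0])
uv1 = P1 @ np.append(X_true, 1); uv1 = uv1[:2] / uv1[2]
uv2 = P2 @ np.append(X_true, 1); uv2 = uv2[:2] / uv2[2]
X_est = triangulate(P1, P2, uv1, uv2)
```

With PTZ cameras, P1 and P2 must be recomputed from the known pan/tilt angles and estimated focal lengths at every frame before triangulating.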
Computer vision in target pursuit using a UAV
Research in target pursuit using Unmanned Aerial Vehicles (UAVs) has gained attention in recent years, primarily due to the decreasing cost of and increasing demand for small UAVs in many sectors. In computer vision, target pursuit is a complex problem, as it involves solving many sub-problems typically concerned with the detection, tracking, and following of the object of interest. At present, the majority of related existing methods are developed using computer simulation under the assumption of ideal environmental factors, while the remaining few practical methods are mainly developed to track and follow simple objects that contain monochromatic colours with very little texture variance. Current research in this topic lacks practical vision-based approaches. Thus the aim of this research is to fill the gap by developing a real-time algorithm capable of following a person continuously given only a photo input.
As this research treats the whole procedure as an autonomous system, the drone is activated automatically upon receiving a photo of a person through Wi-Fi. This means that the whole system can be triggered by simply emailing a single photo from any device anywhere. This is done by first implementing image fetching to automatically connect to Wi-Fi, download the image, and decode it. Then, human detection is performed to extract a template from the upper body of the person, and the intended target is acquired using both human detection and template matching. Finally, target pursuit is achieved by tracking the template continuously while sending motion commands to the drone.
In the target pursuit system, detection is mainly accomplished using a proposed human detection method that is capable of detecting, extracting, and segmenting the human body figure robustly from the background without prior training. This involves detecting the face, head, and shoulders separately, mainly using gradient maps. Tracking is mainly accomplished using a proposed generic, non-learning template matching method that combines intensity template matching with a colour histogram model and employs a three-tier system for template management. A flight controller is also developed; it supports three types of control: keyboard, mouse, and text messages. Furthermore, the drone is programmed with three different modes: standby, sentry, and search.
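The intensity half of such a tracker can be sketched with plain normalized cross-correlation; a brute-force, illustrative version (not the thesis's implementation, which adds the colour-histogram model and three-tier template management on top):

```python
import numpy as np

def match_template_ncc(image, template):
    """Brute-force normalized cross-correlation template matching: return the
    top-left corner where the template best matches the grayscale image, plus
    the match score in [-1, 1]."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t * t).sum())
    best, best_xy = -np.inf, (0, 0)
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            w = image[y:y + th, x:x + tw]
            wz = w - w.mean()                       # zero-mean window
            denom = np.sqrt((wz * wz).sum()) * t_norm
            score = (wz * t).sum() / denom if denom > 0 else 0.0
            if score > best:
                best, best_xy = score, (x, y)
    return best_xy, best

# Illustrative use: cut the "target" template straight out of a test image,
# then recover its location.
rng = np.random.default_rng(0)
img = rng.random((40, 40))
tpl = img[12:20, 25:33].copy()
(x, y), score = match_template_ncc(img, tpl)
```

Running this every frame on full images is slow; practical trackers restrict the search to a window around the last known position, which is also where a template-management tier decides when the template itself should be refreshed.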
To improve the detection and tracking of colour objects, this research has also proposed several colour related methods. One of them is a colour model for colour detection which consists of three colour components: hue, purity and brightness. Hue represents the colour angle, purity represents the colourfulness and brightness represents intensity. It can be represented in three different geometric shapes: sphere, hemisphere and cylinder, each of these shapes also contains two variations.
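The abstract does not give the model's exact formulas, so the sketch below substitutes standard ones purely to illustrate the decomposition: hue as the usual colour angle, purity as HSV-style saturation (colourfulness), and brightness as mean intensity.

```python
import colorsys

def hue_purity_brightness(r, g, b):
    """Decompose an RGB triple (components in 0..1) into (hue angle in
    degrees, purity, brightness). The formulas are standard stand-ins, not
    the thesis's own definitions."""
    h, s, _v = colorsys.rgb_to_hsv(r, g, b)
    hue = h * 360.0                   # colour angle in degrees
    purity = s                        # colourfulness
    brightness = (r + g + b) / 3.0    # mean intensity
    return hue, purity, brightness

# Pure red: hue 0 degrees, fully colourful, one third of full intensity.
hue, purity, brightness = hue_purity_brightness(1.0, 0.0, 0.0)
```

Bounding such a space with a sphere, hemisphere, or cylinder then amounts to thresholding these three components jointly rather than independently.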
Experimental results have shown that the target pursuit algorithm is capable of identifying and following the target person robustly given only a photo input. This is evidenced by the live tracking and mapping of the intended targets, wearing different clothing, in both indoor and outdoor environments. Additionally, the various methods developed in this research could enhance the performance of practical vision-based applications, especially in the detection and tracking of objects.
Computational intelligence approaches to robotics, automation, and control [Volume guest editors]
No abstract available