
    Real-Time, Multiple Pan/Tilt/Zoom Computer Vision Tracking and 3D Positioning System for Unmanned Aerial System Metrology

    The study of the structural characteristics of Unmanned Aerial Systems (UASs) continues to be an important field of research for developing state-of-the-art nano/micro systems. Development of a metrology system using computer vision (CV) tracking and 3D point extraction would provide an avenue for advancing these theoretical developments. This work provides a portable, scalable system capable of real-time tracking, zooming, and 3D position estimation of a UAS using multiple cameras. Current state-of-the-art photogrammetry systems use retro-reflective markers or single-point lasers to obtain object poses and/or positions over time. Using a CV pan/tilt/zoom (PTZ) system has the potential to circumvent their limitations. The system developed in this paper exploits parallel processing and the GPU for CV tracking, using optical flow and known camera motion, in order to capture a moving object with two PTU cameras. The parallel-processing technique developed in this work is versatile, allowing other CV methods to be tested with a PTZ system using known camera motion. Utilizing the known camera poses, the object's 3D position is estimated, and focal lengths are estimated so that the object fills the image to a desired amount. This system is tested against truth data obtained using an industrial system.
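
    Since the abstract describes estimating the object's 3D position from two cameras with known poses, the following is a minimal sketch of linear (DLT) triangulation from two pixel observations; the intrinsics, poses, and target point are hypothetical values assumed for illustration, not the paper's actual pipeline.

    ```python
    # Hypothetical two-view triangulation sketch (not the paper's implementation):
    # given intrinsics K and known camera poses (R, t), recover a 3D point from
    # its pixel observations in both views via linear (DLT) triangulation.
    import numpy as np

    def projection_matrix(K, R, t):
        """Build the 3x4 projection matrix P = K [R | t]."""
        return K @ np.hstack([R, t.reshape(3, 1)])

    def triangulate(P1, P2, x1, x2):
        """Linear triangulation of a pixel correspondence (x1 in view 1, x2 in view 2)."""
        A = np.vstack([
            x1[0] * P1[2] - P1[0],
            x1[1] * P1[2] - P1[1],
            x2[0] * P2[2] - P2[0],
            x2[1] * P2[2] - P2[1],
        ])
        _, _, Vt = np.linalg.svd(A)
        X = Vt[-1]
        return X[:3] / X[3]                      # dehomogenize to 3D coordinates

    # Assumed example: two PTZ-like cameras with known poses observing one target.
    K = np.array([[1000., 0., 640.], [0., 1000., 360.], [0., 0., 1.]])
    R1, t1 = np.eye(3), np.zeros(3)
    R2 = np.array([[0., 0., -1.], [0., 1., 0.], [1., 0., 0.]])   # 90 degree yaw
    t2 = np.array([4., 0., 0.])
    P1, P2 = projection_matrix(K, R1, t1), projection_matrix(K, R2, t2)

    X_true = np.array([1.0, 0.2, 4.0, 1.0])      # homogeneous ground-truth point
    x1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]
    x2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
    print(triangulate(P1, P2, x1, x2))           # recovers approximately [1.0, 0.2, 4.0]
    ```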

    Modeling and Optimizing the Coverage of Multi-Camera Systems

    This thesis approaches the problem of modeling a multi-camera system's performance from system and task parameters by describing the relationship in terms of coverage. This interface allows a substantial separation of the two concerns: the ability of the system to obtain data from the space of possible stimuli, according to task requirements, and the description of the set of stimuli required for the task. The conjecture is that for any particular system it is in principle possible to develop such a model with ideal prediction of performance. Accordingly, a generalized structure and tool set is built around the core mathematical definitions of task-oriented coverage, without tying it to any particular model. A family of problems related to coverage in the context of multi-camera systems is identified and described. A comprehensive survey of the state of the art in approaching such problems concludes that by coupling the representation of coverage to narrow problem cases and applications, and by attempting to simplify the models to fit optimization techniques, both the generality and the fidelity of the models are reduced. It is noted that models exhibiting practical levels of fidelity are well beyond the point where only metaheuristic optimization techniques are applicable. Armed with these observations and a promising set of ideas from the surveyed sources, a new high-fidelity model for multi-camera vision based on the general coverage framework is presented. This model is intended to be more general in scope than previous work, and despite the complexity introduced by the multiple criteria required for fidelity, it conforms to the framework and is thus tractable for certain optimization approaches. Furthermore, it is readily extended to different types of vision systems. This thesis substantiates all of these claims. The model's fidelity and generality are validated and compared to some of the more advanced models from the literature. Three of the aforementioned coverage problems are then approached in application cases using the model. In one case, a bistatic variant of the sensing modality is used, requiring a modification of the model; the compatibility of this modification, both conceptually and mathematically, illustrates the generality of the framework.
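
    As a rough, low-fidelity illustration of scoring coverage from system and task parameters, the sketch below marks a task point as covered when it satisfies a camera's field-of-view and range criteria and reports the fraction of points seen by at least k cameras; the Camera fields, thresholds, and layout are assumptions and fall far short of the model developed in the thesis.

    ```python
    # Simplified task-oriented coverage sketch (assumed criteria: FOV and range only).
    import numpy as np
    from dataclasses import dataclass

    @dataclass
    class Camera:
        position: np.ndarray      # 3D position of the camera
        direction: np.ndarray     # unit viewing direction
        half_fov: float           # half field-of-view angle (radians)
        max_range: float          # farthest distance meeting the resolution requirement

    def covers(cam: Camera, point: np.ndarray) -> bool:
        """True if the task point satisfies this camera's FOV and range criteria."""
        v = point - cam.position
        dist = np.linalg.norm(v)
        if dist == 0 or dist > cam.max_range:
            return False
        angle = np.arccos(np.clip(v @ cam.direction / dist, -1.0, 1.0))
        return angle <= cam.half_fov

    def coverage(cams, task_points, k=1):
        """Fraction of task points seen by at least k cameras (k=2 approximates stereo)."""
        counts = [sum(covers(c, p) for c in cams) for p in task_points]
        return np.mean([c >= k for c in counts])

    # Hypothetical two-camera layout and a line of task points on the floor.
    cams = [Camera(np.array([0., 0., 3.]), np.array([0., 0., -1.]), np.radians(30), 6.0),
            Camera(np.array([4., 0., 3.]), np.array([-0.6, 0., -0.8]), np.radians(30), 6.0)]
    points = [np.array([x, 0., 0.]) for x in np.linspace(0, 4, 9)]
    print(coverage(cams, points, k=1), coverage(cams, points, k=2))
    ```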

    Pattern Recognition

    A wealth of advanced pattern recognition algorithms is emerging at the intersection of effective visual feature technologies and the study of the human brain's cognition process. Effective visual features are made possible by rapid developments in appropriate sensor equipment, novel filter designs, and viable information processing architectures, while the understanding of the human cognition process broadens the ways in which computers can perform pattern recognition tasks. The present book is intended to collect representative research from around the globe focusing on low-level vision, filter design, features and image descriptors, data mining and analysis, and biologically inspired algorithms. The 27 chapters covered in this book disclose recent advances and new ideas in promoting the techniques, technology and applications of pattern recognition.

    Robust computational intelligence techniques for visual information processing

    This Ph.D. thesis is about image processing by computational intelligence techniques. Firstly, a general overview of this work is given, in which the motivation, the hypothesis, the objectives, and the methodology employed are described; the use and analysis of different mathematical norms is our goal. After that, the state of the art focused on the applications of the image processing proposals is presented. In addition, the fundamentals of the image modalities, with particular attention to magnetic resonance, and the learning techniques used in this research, mainly based on neural networks, are summarized. To end the introduction, the mathematical framework on which this work is based, ℓp-norms, is defined. Three parts associated with image processing techniques follow. The first non-introductory part collects the developments concerning image segmentation. Two of them are applications for video surveillance tasks and try to model the background of a scenario using a specific camera. The other work is centered on the medical field, where the goal of segmenting diabetic wounds in a very heterogeneous dataset is addressed. The second part focuses on the optimization and implementation of new models for curve and surface fitting in two and three dimensions, respectively. The first work presents a parabola fitting algorithm based on measuring the distances of interior and exterior points to the focus and the directrix. The second work changes to an ellipse shape and ensembles the information of multiple fitting methods. Last, the ellipsoid problem is addressed in a similar way to the parabola. The third part is exclusively dedicated to the super-resolution of magnetic resonance images. In one of these works, an algorithm based on the random shifting technique is developed; in addition, noise removal and resolution enhancement are studied simultaneously. To end, the cost function of deep networks is modified with different combinations of norms in order to improve their training. Finally, the general conclusions of the research are presented and discussed, as well as the possible future research lines that can make use of the results obtained in this Ph.D. thesis.
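
    To illustrate the last point, a training cost built from a combination of ℓp-norms, here is a minimal numpy sketch; the particular norms, weights, and function names are assumptions for illustration, not the thesis' actual loss.

    ```python
    # Minimal mixed-norm cost sketch (assumed weights and exponents, not the thesis' loss).
    import numpy as np

    def lp_norm(e, p):
        """Entrywise lp-norm of the error tensor e (p >= 1)."""
        return np.sum(np.abs(e) ** p) ** (1.0 / p)

    def mixed_norm_cost(pred, target, terms=((1, 0.5), (2, 0.5))):
        """Weighted combination of lp-norms of the error, e.g. 0.5*l1 + 0.5*l2."""
        e = pred - target
        return sum(w * lp_norm(e, p) for p, w in terms)

    # Example: the l1 term is less sensitive to outliers than the l2 term alone.
    rng = np.random.default_rng(0)
    target = rng.normal(size=100)
    pred = target + 0.1 * rng.normal(size=100)
    print(mixed_norm_cost(pred, target))
    ```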

    Proceedings of the 9th Conference on Autonomous Robot Systems and Competitions

    Welcome to ROBOTICA 2009. This is the 9th edition of the conference on Autonomous Robot Systems and Competitions, the third time with IEEE Robotics and Automation Society technical co-sponsorship. Previous editions have been held since 2001 in Guimarães, Aveiro, Porto, Lisboa, Coimbra and Algarve. ROBOTICA 2009 is held on the 7th of May, 2009, in Castelo Branco, Portugal. ROBOTICA received 32 paper submissions from 10 countries in South America, Asia and Europe. Each submission was evaluated with three reviews by the international program committee. 23 papers were published in the proceedings and presented at the conference; of these, 14 papers were selected for oral presentation and 9 for poster presentation, for a global acceptance ratio of 72%. After the conference, eight papers will be published in the Portuguese journal Robótica, and the best student paper will be published in the IEEE Multidisciplinary Engineering Education Magazine. Three prizes will be awarded at the conference: for the best conference paper, the best student paper and the best presentation; the last two are sponsored by the IEEE Education Society Student Activities Committee. We would like to express our thanks to all participants. First of all to the authors, whose quality work is the essence of this conference. Next, to all the members of the international program committee and reviewers, who helped us with their expertise and valuable time. We would also like to deeply thank the invited speaker, Jean-Paul Laumond, LAAS-CNRS, France, for his excellent contribution in the field of humanoid robots. Finally, a word of appreciation for the hard work of the secretariat and volunteers. Our deep gratitude goes to the scientific organisations that kindly agreed to sponsor the conference, and made it come true. We look forward to seeing more results of R&D work on robotics at ROBOTICA 2010, somewhere in Portugal.

    Intelligent strategies for mobile robotics in laboratory automation

    In this thesis a new intelligent framework is presented for mobile robots in laboratory automation, which includes: a new multi-floor indoor navigation method together with an intelligent multi-floor path planning approach; a new signal filtering method that lets the robots forecast their indoor coordinates; a new human-feature-based strategy for smart robot-human collision avoidance; a new robot power forecasting method used to decide on distributed transportation tasks; and a new blind approach to arm manipulation for the robots.
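
    As a purely illustrative sketch of multi-floor path planning, the code below models rooms and an elevator as nodes of a weighted graph, with the elevator edge linking the floors, and searches it with Dijkstra's algorithm; the layout, node names, and edge costs are hypothetical and this is not the planner developed in the thesis.

    ```python
    # Hypothetical multi-floor path planning sketch: weighted graph + Dijkstra search.
    import heapq

    def dijkstra(graph, start, goal):
        """Shortest path on a dict-of-dicts weighted graph; returns (cost, path)."""
        queue = [(0.0, start, [start])]
        seen = set()
        while queue:
            cost, node, path = heapq.heappop(queue)
            if node == goal:
                return cost, path
            if node in seen:
                continue
            seen.add(node)
            for nxt, w in graph.get(node, {}).items():
                if nxt not in seen:
                    heapq.heappush(queue, (cost + w, nxt, path + [nxt]))
        return float("inf"), []

    # Assumed lab layout: two floors connected only through the elevator nodes.
    graph = {
        "F1_lab": {"F1_corridor": 5},
        "F1_corridor": {"F1_lab": 5, "F1_elevator": 10},
        "F1_elevator": {"F1_corridor": 10, "F2_elevator": 30},   # elevator ride cost
        "F2_elevator": {"F1_elevator": 30, "F2_corridor": 10},
        "F2_corridor": {"F2_elevator": 10, "F2_analysis": 8},
        "F2_analysis": {"F2_corridor": 8},
    }
    print(dijkstra(graph, "F1_lab", "F2_analysis"))   # cost 63 and the room sequence
    ```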

    Effective Step to Real-time Implementation of Accident Detection System Using Image Processing

    Studies in the past have shown that the number of traffic-related fatalities is highly dependent on the emergency response time after the occurrence of an accident. Traffic intersections have also been found to be among the most vulnerable places for accidents to occur. There is therefore a need to reduce the emergency response time by alerting the emergency response team through an automated accident detection system at traffic intersections as soon as an accident is detected. The goal of this project is to develop an accident detection system for traffic intersections that is capable of operating in real time with a good performance rate. An accident detection system was therefore developed which uses vehicle parameters such as speed and trajectory, together with other features such as the area, orientation and position of the vehicle. Since one of the key elements in the accident detection step is accurate tracking of moving vehicles, particular focus was given to the vehicle detection and tracking step. In this work, a tracking algorithm is implemented that uses a weighted combination of low-level features extracted from moving vehicles and low-level vision analysis of vehicle regions extracted from different frames. The speed of the tracked vehicles is calculated and, along with the features extracted from the tracked vehicles, an accident detection system is designed which validates the factors cueing the occurrence of an accident. Once an accident is detected, the user is signaled about its occurrence. The detection and tracking performance of the algorithm was around 90% for the two test videos used, and the collision detection system produced a correct detection rate of 87.5% for the test crashes simulated in the laboratory test-bed setup. Overall the algorithm shows promise, since it has a processing rate of 5 frames/sec with good collision detection performance. With training on more test crashes and real-crash data, the performance of the algorithm is expected to improve. (School of Electrical & Computer Engineering)
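
    As a simplified stand-in for the speed computation and collision cueing described above, the sketch below derives per-frame speed from tracked centroids and flags sudden decelerations; the frame rate, pixel-to-metre scale, threshold, and sample track are assumed values, not those of the implemented system.

    ```python
    # Hypothetical sketch: vehicle speed from tracked centroids plus a simple
    # sudden-deceleration cue, as one possible input to accident detection.
    import numpy as np

    def speeds(centroids_px, fps, metres_per_px):
        """Per-frame speed (m/s) from the pixel centroids of one tracked vehicle."""
        c = np.asarray(centroids_px, float)
        d_px = np.linalg.norm(np.diff(c, axis=0), axis=1)     # pixel displacement per frame
        return d_px * metres_per_px * fps

    def sudden_stop(v, drop_ratio=0.5):
        """Indices of frames where speed falls below drop_ratio of the previous speed."""
        return [i + 1 for i in range(1, len(v)) if v[i] < drop_ratio * v[i - 1]]

    # Assumed track: steady motion followed by an abrupt stop.
    track = [(100, 200), (110, 200), (120, 201), (130, 201), (132, 202), (132, 202)]
    v = speeds(track, fps=5, metres_per_px=0.05)
    print(v, sudden_stop(v))
    ```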

    Mobile robot navigation using a vision-based approach

    PhD Thesis. This study addresses the issue of vision-based mobile robot navigation in a partially cluttered indoor environment using a mapless navigation strategy. The work focuses on two key problems, namely vision-based obstacle avoidance and a vision-based reactive navigation strategy. The estimation of optical flow plays a key role in vision-based obstacle avoidance; however, the current view is that this technique is too sensitive to noise and distortion under real conditions, and practical applications in real-time robotics accordingly remain scarce. This dissertation presents a novel methodology for vision-based obstacle avoidance, using a hybrid architecture. This integrates an appearance-based obstacle detection method into an optical flow architecture based upon a behavioural control strategy that includes a new arbitration module. This enhances the overall performance of conventional optical-flow-based navigation systems, enabling a robot to successfully move around without experiencing collisions. Behaviour-based approaches have become the dominant methodologies for designing control strategies for robot navigation. Two different behaviour-based navigation architectures are proposed for the second problem, using monocular vision as the primary sensor together with a 2D range finder. Both utilize an accelerated version of the Scale Invariant Feature Transform (SIFT) algorithm. The first architecture employs a qualitative control algorithm to steer the robot towards a goal whilst avoiding obstacles, whereas the second employs an intelligent control framework. This allows components of soft computing to be integrated into the proposed SIFT-based navigation architecture while conserving the same set of behaviours and system structure as the previously defined architecture. The intelligent framework incorporates a novel distance estimation technique using the scale parameters obtained from the SIFT algorithm. The technique employs the scale parameters and a corresponding zooming factor as inputs to train a neural network, which results in the determination of physical distance. Furthermore, a fuzzy controller is designed and integrated into this framework to estimate linear velocity, and a neural network based solution is adopted to estimate the steering direction of the robot. As a result, this intelligent approach allows the robot to successfully complete its task in a smooth and robust manner without experiencing collisions. MS Robotics Studio software was used to simulate the systems, and a modified Pioneer 3-DX mobile robot was used for real-time implementation. Several realistic scenarios were developed and comprehensive experiments conducted to evaluate the performance of the proposed navigation systems. KEY WORDS: Mobile robot navigation using vision, Mapless navigation, Mobile robot architecture, Distance estimation, Vision for obstacle avoidance, Scale Invariant Feature Transforms, Intelligent framework
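
    The distance estimation technique above trains a neural network on SIFT scale parameters and a zooming factor; as a purely geometric stand-in, the sketch below uses the pinhole-camera relation that apparent feature scale is inversely proportional to distance, with hypothetical reference values.

    ```python
    # Geometric stand-in for SIFT-scale distance estimation (the thesis instead
    # trains a neural network on scale and zoom inputs).
    def estimate_distance(scale_now, scale_ref, dist_ref, zoom_factor=1.0):
        """Estimate metric distance from the ratio of SIFT keypoint scales.

        scale_ref, dist_ref : feature scale and known distance at a reference view
        zoom_factor         : relative focal-length change between the two views
        """
        return dist_ref * (scale_ref / scale_now) * zoom_factor

    # A landmark whose keypoint scale was 8.0 at a known 2 m now appears at scale
    # 4.0 with the same zoom, so it is roughly twice as far away.
    print(estimate_distance(scale_now=4.0, scale_ref=8.0, dist_ref=2.0))   # ~4.0 m
    ```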