7 research outputs found

    An Image Understanding System for Detecting Indoor Features

    The capability of identifying physical structures in an unknown environment is very important for vision-based robot navigation and scene understanding. Among the physical structures in indoor environments, corridor lines and doors are important visual landmarks for robot navigation, since they reveal the topological structure of an indoor environment and establish connections between its different places or regions. Furthermore, they provide clues for understanding the image. In this thesis, I present two algorithms that use a single digital video camera: one to detect the vanishing point and corridor lines, and one to detect doors. Both algorithms use a hypothesis-generation-and-verification method to detect corridor and door structures from low-level linear features. The proposed method consists of low-, intermediate-, and high-level processing stages, which correspond to the extraction of low-level features, the formation of hypotheses, and the verification of the hypotheses by actively seeking evidence. In particular, we extend this single-pass framework with a feedback strategy for more robust hypothesis generation and verification. We demonstrate the robustness of the proposed methods on a large number of real video images from a variety of corridor environments, acquired under different illumination and reflection conditions, at different moving speeds, and from different camera viewpoints. Experimental results for the corridor line detection algorithm validate that the method can detect corridor line locations in the presence of many spurious line features in about one second. Experimental results for the door detection algorithm show that the system can detect visually important doors in an image with a very high accuracy rate as a robot navigates along a corridor environment.
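The hypothesis-generation step for corridor lines typically hinges on finding the vanishing point where the corridor edges converge in the image. As a rough illustration of that general idea only (not the thesis's actual algorithm; the line representation, grid size, and function names below are all hypothetical), pairwise line intersections can be voted into a coarse grid, with the densest cell taken as the vanishing-point hypothesis:

```python
# Sketch: estimate a corridor vanishing point by intersecting detected
# line segments and voting for the densest intersection cluster.
# Illustrative only; not the thesis's implementation.
from itertools import combinations
from collections import Counter

def intersect(l1, l2):
    """Intersection of two lines given as (a, b, c) with a*x + b*y = c."""
    a1, b1, c1 = l1
    a2, b2, c2 = l2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-9:          # (near-)parallel lines: no intersection
        return None
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    return (x, y)

def vanishing_point(lines, cell=10):
    """Vote pairwise intersections into a coarse grid; return the centre
    of the winning cell as the vanishing-point hypothesis."""
    votes = Counter()
    for l1, l2 in combinations(lines, 2):
        p = intersect(l1, l2)
        if p is not None:
            votes[(round(p[0] / cell), round(p[1] / cell))] += 1
    (gx, gy), _ = votes.most_common(1)[0]
    return (gx * cell, gy * cell)

# Three corridor edges converging near (100, 50), plus one spurious line
# (the image's horizontal floor edge, y = 0).
lines = [
    (1.0, -2.0, 0.0),     # x - 2y = 0
    (1.0, 2.0, 200.0),    # x + 2y = 200
    (1.0, 0.0, 100.0),    # x = 100
    (0.0, 1.0, 0.0),      # y = 0 (spurious)
]
print(vanishing_point(lines))  # → (100, 50)
```

The spurious line contributes only scattered single votes, so the three-vote cell around the true convergence point still wins, which is the same robustness-to-outliers property the abstract claims for its voting stage.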

    Fast and robust image feature matching methods for computer vision applications

    Service robotic systems are designed to solve tasks such as recognizing and manipulating objects, understanding natural scenes, and navigating in dynamic and populated environments. It is immediately evident that such tasks cannot be modeled in all necessary detail as easily as industrial robot tasks can; therefore, a service robotic system has to be able to sense and interact with the surrounding physical environment through a multitude of sensors and actuators. Environment sensing is one of the core problems that limit the deployment of mobile service robots, since existing sensing systems are either too slow or too expensive. Visual sensing is the most promising way to provide a cost-effective solution to the mobile robot sensing problem. It is usually achieved using one or several digital cameras placed on the robot or distributed in its environment. Digital cameras are information-rich, relatively inexpensive sensors that can be used to solve a number of key problems for robotics and other autonomous intelligent systems, such as visual servoing, robot navigation, object recognition, pose estimation, and much more. The key challenge in taking advantage of this powerful and inexpensive sensor is to come up with algorithms that can reliably and quickly extract and match the useful visual information necessary to interpret the environment automatically in real time. Although considerable research has been conducted in recent years on algorithms for computer and robot vision problems, open research challenges remain in terms of reliability, accuracy, and processing time. The Scale Invariant Feature Transform (SIFT) is one of the most widely used methods and has recently attracted much attention in the computer vision community, because SIFT features are highly distinctive and invariant to scale, rotation, and illumination changes. In addition, SIFT features are relatively easy to extract and to match against a large database of local features. However, the SIFT algorithm has two main drawbacks. The first is that its computational complexity increases rapidly with the number of keypoints, especially at the matching step, due to the high dimensionality of the SIFT feature descriptor. The second is that SIFT features are not robust to large viewpoint changes. These drawbacks limit the practical use of the SIFT algorithm for robot vision applications, since such applications often require real-time performance and must cope with large viewpoint changes. This dissertation proposes three new approaches to address these constraints: speeded-up SIFT feature matching, robust SIFT feature matching, and the inclusion of a closed-loop control structure in object recognition and pose estimation systems. The proposed methods are implemented and tested on the FRIEND II/III service robotic system. The achieved results are valuable for adapting the SIFT algorithm to robot vision applications.
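The matching-step bottleneck mentioned above comes from comparing every 128-dimensional query descriptor against every descriptor in the database. A minimal sketch of the standard brute-force baseline with Lowe's ratio test (a common SIFT matching scheme, not necessarily the dissertation's speeded-up variant; the function names and toy 2-D descriptors are illustrative) looks like this:

```python
# Sketch: brute-force descriptor matching with Lowe's ratio test.
# Real SIFT descriptors are 128-dimensional; toy 2-D vectors are used
# here so the example stays self-contained. Illustrative names only.
import math

def euclidean(d1, d2):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(d1, d2)))

def match(query, train, ratio=0.8):
    """For each query descriptor, find its two nearest train descriptors
    and keep the match only if the nearest is clearly better than the
    second nearest (the ratio test), rejecting ambiguous matches."""
    matches = []
    for qi, q in enumerate(query):
        dists = sorted((euclidean(q, t), ti) for ti, t in enumerate(train))
        (d1, t1), (d2, _) = dists[0], dists[1]
        if d1 < ratio * d2:
            matches.append((qi, t1))
    return matches

# Two unambiguous matches are kept...
print(match([[0, 0], [5, 5]], [[0.1, 0], [4, 4], [5, 5.1]]))
# ...while a query that is nearly equidistant to two candidates is dropped.
print(match([[4.5, 4.5]], [[4, 4], [5, 5.1]]))
```

Note the cost: for N query and M train descriptors of dimension D, this loop is O(N·M·D), which is exactly why matching dominates runtime as the number of keypoints grows and why the dissertation targets that step for speedup.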

    A new classification approach based on geometrical model for human detection in images

    In recent years, object detection and classification have gained increasing attention, and several human detection algorithms are used to locate and recognize human objects in images. Image processing and analysis based on human shape is a hot research topic due to its wide applicability in real applications. In this research, we present a new shape-based classification approach to categorize a detected object in an image as human or non-human. The classification in this approach is based on applying a geometrical model that contains a set of parameters related to the object's upper portion. Based on the values of these geometric parameters, our approach can simply classify the detected object as human or non-human. In general, the classification process of this new approach is based on generating a geometrical model by observing unique geometrical relations between the upper-portion shape points (neck, head, shoulders) of humans; this observation rests on an analysis of the change in the histogram of the x-coordinate values of the human upper-portion shape. To represent the change in x-coordinate values, we use histograms with mathematical smoothing functions to avoid small spurious angles. As a result, we identified four parameters of human objects to be used in building the classifier; by applying these four parameters of the geometrical model, our approach can distinguish human objects from other objects. The proposed approach has been tested and compared with machine learning approaches such as Artificial Neural Networks (ANN), the Support Vector Machine (SVM) model, and Random Forest, a well-known decision-tree ensemble, using 358 images of several object classes taken from the INRIA dataset (a set of human and non-human objects in digital images). In this comparison of accuracy performance, the proposed approach achieved the highest accuracy rate (93.85%), with the lowest miss detection rate (11.245%) and false discovery rate (9.34%). These testing and comparison results show the efficiency of the presented approach.
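The histogram-and-smoothing step described above can be sketched as follows. This is a hedged illustration of the general idea only: the bin layout, smoothing window, and function names are assumptions, and the paper's four geometric parameters are not reproduced here.

```python
# Sketch: histogram of the x-coordinates of an object's upper-contour
# points, moving-average smoothing to suppress small spurious angles,
# and a peak count on the smoothed curve. A head-and-shoulders
# silhouette would yield a central head peak flanked by shoulder mass.
# All names and thresholds are illustrative, not the paper's.

def histogram(xs, bins, lo, hi):
    """Count x-coordinates into equal-width bins over [lo, hi)."""
    width = (hi - lo) / bins
    counts = [0] * bins
    for x in xs:
        i = min(int((x - lo) / width), bins - 1)
        counts[i] += 1
    return counts

def smooth(counts, window=3):
    """Moving-average smoothing of the histogram."""
    half = window // 2
    out = []
    for i in range(len(counts)):
        seg = counts[max(0, i - half): i + half + 1]
        out.append(sum(seg) / len(seg))
    return out

def count_peaks(counts):
    """Number of strict local maxima in the (smoothed) histogram."""
    return sum(
        1 for i in range(1, len(counts) - 1)
        if counts[i] > counts[i - 1] and counts[i] > counts[i + 1]
    )

raw = [1, 2, 5, 2, 1, 4, 1]          # toy contour histogram with noise
print(count_peaks(raw))               # 2 peaks: true peak + noise spike
print(count_peaks(smooth(raw)))       # 1 peak after smoothing
```

The point of the smoothing pass, as in the abstract, is that small noisy bumps in the raw x-coordinate histogram would otherwise read as extra geometric features; after smoothing, only the dominant shape structure survives for the parameter extraction.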

    Modellbasierte Lokalisation und Verfolgung für sichtsystemgestützte Regelungen [Model-Based Localization and Tracking for Vision-System-Based Control] [online]
