
    Near range path navigation using LGMD visual neural networks

    In this paper, we propose a method for near-range path navigation for a mobile robot using a pair of biologically inspired visual neural networks – lobula giant movement detectors (LGMDs). In the proposed binocular-style visual system, each LGMD processes images covering part of the wide field of view and extracts relevant visual cues as its output. The outputs of the two LGMDs are compared and translated into executable motor commands that control the wheels of the robot in real time. A stronger signal from the LGMD on one side pushes the robot away from that side step by step, so the robot can navigate a visual environment naturally with the proposed vision system. Our experiments showed that this bio-inspired system worked well in different scenarios.
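    The side-to-side comparison described above maps naturally onto a differential-drive steering rule. The sketch below is illustrative only: the normalised LGMD excitation interface, base speed and gain are assumptions, not values from the paper.

```python
def steer_from_lgmd(left_lgmd, right_lgmd, base_speed=0.2, gain=0.5):
    """Turn the robot away from the side whose LGMD output is stronger.

    left_lgmd / right_lgmd: hypothetical normalised excitations in [0, 1]
    from the two LGMD networks. Returns (left_wheel, right_wheel) speeds:
    speeding up the wheel on the threatened side turns the robot away.
    """
    diff = left_lgmd - right_lgmd          # > 0: threat on the left, steer right
    left_wheel = max(0.0, base_speed + gain * diff)
    right_wheel = max(0.0, base_speed - gain * diff)
    return left_wheel, right_wheel
```

    Calling this rule once per frame yields the step-by-step avoidance behaviour the abstract describes: equal excitations leave both wheels at the base speed, while an imbalance steers the robot off the threatened side.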

    Development of a bio-inspired vision system for mobile micro-robots

    In this paper, we present a new bio-inspired vision system for mobile micro-robots. The processing method takes inspiration from the vision of locusts in detecting fast-approaching objects. Research suggests that locusts use a wide-field visual neuron, the lobula giant movement detector, to respond to imminent collisions. We apply the locusts' vision mechanism to the motion control of a mobile robot. The selected image-processing method is implemented on a purpose-built extension module using a low-cost, fast ARM processor. The vision module is mounted on top of a micro-robot to control its trajectory and to avoid obstacles. Results from several experiments demonstrate that the developed extension module and the inspired vision system are feasible as a vision module for obstacle avoidance and motion control.
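    A full LGMD model is beyond an abstract, but the core cue it exploits – an approaching object causing rapid, expanding luminance change – can be sketched with simple frame differencing. This is a crude stand-in for the paper's embedded implementation; the change and trigger thresholds are assumed, not taken from the paper.

```python
import numpy as np

def collision_cue(prev_frame, frame, change=25, threshold=0.15):
    """Crude approach-detection cue from two consecutive greyscale frames.

    Computes the fraction of pixels whose 8-bit brightness changed by more
    than `change`; an object looming towards the camera expands in the
    image and drives this fraction up. Returns (excitation, triggered).
    """
    diff = np.abs(frame.astype(int) - prev_frame.astype(int))
    excitation = float((diff > change).mean())
    return excitation, excitation > threshold
```

    Cheap per-pixel operations like this are the kind of workload that fits a low-cost ARM processor running at camera frame rate.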

    Designing a machine vision system for a mobile robot to detect and mark dangerous areas

    There is no doubt that machine vision systems offer benefits in many applications, as they improve the ability of machines to adapt and learn. When implementing a new application it is necessary to design a vision system that matches the requirements of that application, as a wide range of parameters must be considered during the design. Our goal in this paper is to examine these parameters and define the requirements for designing a machine vision system for a mobile robot whose task is to examine different environments autonomously, detect hazardous materials, and mark high-risk areas, in various weather conditions and around the clock.

    Object Classification Techniques using Tree Based Classifiers

    Object recognition is presently one of the most active research areas in computer vision, pattern recognition, artificial intelligence and human activity analysis. In object detection and classification, attention habitually focuses on changes in the location of an object with respect to time, since appearance information can sensibly describe the object category. In this paper, a feature set obtained from Gray-Level Co-occurrence Matrices (GLCM) represents different statistical variations of each object category. Experiments are carried out on the Caltech 101 dataset, considering seven objects (airplanes, camera, chair, elephant, laptop, motorbike and bonsai tree); the extracted GLCM feature sets are modelled by tree-based classifiers, namely Naive Bayes Tree and Random Forest. In the experimental results, the Random Forest classifier demonstrates the accuracy and effectiveness of the proposed method with an overall accuracy rate of 89.62%, outperforming the Naive Bayes Tree classifier.
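    The GLCM feature extraction step can be sketched directly in NumPy. The offset (one pixel, horizontal), quantisation level and the three Haralick-style statistics below are common illustrative choices; the paper does not specify its exact configuration.

```python
import numpy as np

def glcm_features(img, levels=8):
    """GLCM for a horizontal offset of one pixel, plus three classic
    texture statistics (contrast, energy, homogeneity).

    img: 2-D uint8 greyscale image, quantised to `levels` grey levels.
    The resulting feature vector is what a tree-based classifier such
    as a Random Forest would be trained on.
    """
    q = (img.astype(int) * levels // 256).clip(0, levels - 1)
    glcm = np.zeros((levels, levels))
    for i, j in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[i, j] += 1                       # count co-occurring pairs
    glcm /= glcm.sum()                        # normalise to probabilities
    r, c = np.indices(glcm.shape)
    contrast = ((r - c) ** 2 * glcm).sum()
    energy = (glcm ** 2).sum()
    homogeneity = (glcm / (1.0 + np.abs(r - c))).sum()
    return np.array([contrast, energy, homogeneity])
```

    A perfectly uniform patch gives zero contrast and maximal energy and homogeneity, which is one reason such statistics discriminate textured object categories.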

    Improving the mobility performance of autonomous unmanned ground vehicles by adding the ability to 'Sense/Feel' their local environment.

    This paper follows on from earlier work detailed in output one and critically reviews the sensor technologies used in autonomous vehicles, including robots, to ascertain the physical properties of the environment, including terrain sensing. The paper reports on a comprehensive study of terrain types, how these can be determined, and the sensor technologies appropriate to the task. It also reports on work currently in progress applying these sensor technologies and gives details of a prototype reconfigurable mobility system built at Middlesex University, demonstrating the success of the proposed strategies. This full paper was subject to a blind refereed review process and was presented at the 12th HCI International 2007, Beijing, China, July 22-27, 2007, which incorporated 8 other international thematic conferences, involved over 250 parallel sessions and was attended by 2000 delegates. It is published in the proceedings of the Second International Conference on Virtual Reality (ICVR 2007), held as part of HCI International 2007, a collection of 81 papers issued by Springer in the Lecture Notes in Computer Science (LNCS) series as part of a 17-volume paperback edition, and available on-line through the LNCS Digital Library, readily accessible by all subscribing libraries around the world.

    SIFT and color feature fusion using localized maximum-margin learning for scene classification

    Published or final version. The 3rd International Conference on Machine Vision (ICMV 2010), Hong Kong, China, 28-30 December 2010. In Proceedings of the 3rd ICMV, 2010, p. 56-6

    A Visual AGV-Urban Car using Fuzzy Control

    The goal of the work described in this paper is to develop a visual line-guided system for use on board an Autonomous Guided Vehicle (AGV) commercial car, controlling the steering using only the visual information of a line painted below the car. To control the vehicle, a Fuzzy Logic controller has been implemented that is robust against curvature changes and velocity changes. The only input to the controller is the visual distance from the image centre to the guiding line on the road, captured by a downward-pointing camera at a commercial frequency of 30 Hz. The good performance of the controller has been successfully demonstrated in a real environment at urban velocities. The presented results demonstrate the capability of the Fuzzy controller to follow a circuit in urban environments without prior information about the path or input from any additional sensors.
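    A single-input fuzzy controller of this kind can be sketched with triangular membership functions and weighted-average defuzzification. The breakpoints (in pixels) and output range below are illustrative assumptions, not the paper's tuned values.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_steering(error):
    """Map the lateral pixel error of the guide line (negative = line left
    of the image centre) to a steering command in [-1, 1].

    Three rules: line far left -> steer left, centred -> straight,
    line far right -> steer right; defuzzified by weighted average.
    """
    rules = [
        (tri(error, -100, -50, 0), -1.0),   # line left  -> steer left
        (tri(error, -50, 0, 50), 0.0),      # centred    -> go straight
        (tri(error, 0, 50, 100), 1.0),      # line right -> steer right
    ]
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0
```

    Because the rule firings blend smoothly, the command varies continuously with the measured offset, which is what gives fuzzy control its robustness to curvature and speed changes without an explicit vehicle model.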

    From On-Road to Off: Transfer Learning within a Deep Convolutional Neural Network for Segmentation and Classification of Off-Road Scenes

    Real-time road-scene understanding is a challenging computer vision task, with recent advances in convolutional neural networks (CNN) achieving results that notably surpass prior feature-driven approaches. Here, we take an existing CNN architecture, pre-trained for urban road-scene understanding, and retrain it for the task of classifying off-road scenes, assessing the network's performance within the training cycle. Within the paradigm of transfer learning we analyse the effects on CNN classification of varying levels of prior training and of varying subsets of our off-road training data. For each of these configurations, we evaluate the network at multiple points during its training cycle, allowing us to analyse in depth exactly how the training process is affected by these variations. Finally, we compare this CNN to a more traditional approach using a feature-driven Support Vector Machine (SVM) classifier, and demonstrate state-of-the-art results on this particularly challenging problem of off-road scene understanding.
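    The transfer-learning idea above – reuse a pre-trained body, retrain only a new classification head on the target-domain labels – can be shown with a toy stand-in: the frozen CNN is reduced to a fixed feature extractor, and a fresh softmax head is trained by gradient descent. Everything here (shapes, learning rate, epochs) is illustrative, not the paper's setup.

```python
import numpy as np

def finetune_head(features, labels, n_classes, lr=0.5, epochs=300):
    """Train a new softmax classification head on frozen features.

    features: (N, D) outputs of a pre-trained, frozen feature extractor
              (standing in for the CNN body).
    labels:   (N,) integer class labels from the new (off-road) domain.
    Returns the learned head weights W of shape (D, n_classes).
    """
    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.01, size=(features.shape[1], n_classes))
    onehot = np.eye(n_classes)[labels]
    for _ in range(epochs):
        logits = features @ W
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)          # softmax probabilities
        # Cross-entropy gradient w.r.t. W; only the head is updated,
        # the feature extractor stays fixed (the "frozen" body).
        W -= lr * features.T @ (p - onehot) / len(labels)
    return W
```

    Varying how much of the body stays frozen versus retrained, and how much target data the head sees, is exactly the axis of variation the paper explores at full CNN scale.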