51 research outputs found

    Vision for Looking at Traffic Lights: Issues, Survey, and Perspectives


    Machine Vision-based Obstacle Avoidance for Mobile Robot

    Obstacle avoidance for mobile robots, especially humanoid robots, is an essential ability for a robot to operate in its environment. This ability is based on colour recognition of the barrier (obstacle) and the field, together with the ability to perform avoidance movements when the robot detects an obstacle in its path. This research develops a detection system for barrier objects and the field using a colour range in HSV format, and extracts the edges of barrier objects with the findContours method at a threshold filter value. The filter results are then processed with the boundingRect method to extract the coordinates of the detected object. In testing, the barrier object's colour was detected with OpenCV with 100% accuracy; robot movement was driven by the processed colour image and the contour area: when the contour area exceeded 12,500 pixels, the robot performed an edging motion past the red barrier object in 80% of trials, and when the contour area was below 12,500 pixels, the robot moved forward toward the barrier object in 70% of trials.
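The colour-mask, contour-extraction, bounding-rectangle, and area-threshold steps described in this abstract can be sketched as follows. This is a minimal NumPy stand-in for the OpenCV calls the paper names (the HSV range, frame size, and the use of mask pixel count as a proxy for contour area are assumptions for illustration; only the 12,500-pixel threshold comes from the abstract):

```python
import numpy as np

# Hypothetical HSV range for the red barrier; the abstract does not publish
# its exact thresholds, so these values are assumptions.
LOWER = np.array([0, 120, 80])
UPPER = np.array([10, 255, 255])

def mask_in_range(hsv, lower, upper):
    """Binary mask of pixels inside [lower, upper], mimicking cv2.inRange."""
    return np.all((hsv >= lower) & (hsv <= upper), axis=-1)

def bounding_rect(mask):
    """Axis-aligned bounding box (x, y, w, h) of the mask's nonzero pixels,
    mimicking cv2.boundingRect on the extracted contour."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return (int(xs.min()), int(ys.min()),
            int(xs.max() - xs.min() + 1), int(ys.max() - ys.min() + 1))

def choose_motion(mask, area_threshold=12500):
    """Decision rule from the abstract: edge around the barrier when the
    detected area exceeds 12,500 px, otherwise keep moving forward.
    (Mask pixel count stands in for the contour area.)"""
    return "edge_around" if int(mask.sum()) > area_threshold else "move_forward"

# Synthetic 480x640 HSV frame with a 150x150 "red" patch (22,500 px).
frame = np.zeros((480, 640, 3), dtype=np.uint8)
frame[100:250, 200:350] = (5, 200, 200)
mask = mask_in_range(frame, LOWER, UPPER)
print(bounding_rect(mask))   # (200, 100, 150, 150)
print(choose_motion(mask))   # edge_around
```

In a real OpenCV pipeline, `cv2.inRange`, `cv2.findContours`, and `cv2.boundingRect` replace the NumPy stand-ins, but the thresholding logic stays the same.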


    A mathematical model for computerized car crash detection using computer vision techniques

    My proposed approach to the automatic detection of traffic accidents at a signalized intersection is presented here. In this method, a digital camera is strategically placed to view the entire intersection. Images are captured, processed, and analyzed for the presence of vehicles and pedestrians in the proposed detection zones, and then further processed to detect whether an accident has occurred. The mathematical model presented is a Poisson distribution that predicts the number of accidents at an intersection per week, which can be used as an approximation for modeling the crash process. We believe the crash process can be modeled with a two-state method, in which the intersection is in one of two states: clear (no accident) or obstructed (accident). A rule-based AI system can then be incorporated to help identify that a crash has taken place or may possibly take place. We model the intersection as a service facility that processes vehicles in a relatively small amount of time; a traffic accident is then perceived as an interruption of that service.
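The Poisson model of weekly accident counts, and the derived probability that the intersection leaves the "clear" state, can be illustrated in a few lines. The weekly rate λ below is a made-up value for illustration; the thesis would estimate it from observed intersection data:

```python
import math

# Illustrative weekly accident rate (an assumption, not a value from the
# thesis; in practice lambda is estimated from historical crash counts).
LAM = 2.0

def poisson_pmf(k, lam):
    """P(N = k) for the weekly accident count N ~ Poisson(lam)."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

def prob_obstructed(lam):
    """Probability the intersection enters the 'obstructed' state at least
    once in a week, i.e. P(N >= 1) = 1 - P(N = 0)."""
    return 1.0 - poisson_pmf(0, lam)

print(round(prob_obstructed(LAM), 4))   # 0.8647
```

The two-state view then reduces to asking, for each observation window, whether at least one arrival of the Poisson process has occurred.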


    Motion tracking on embedded systems: vision-based vehicle tracking using image alignment with symmetrical function.

    Cheung, Lap Chi. Thesis (M.Phil.)--Chinese University of Hong Kong, 2007. Includes bibliographical references (leaves 91-95). Abstracts in English and Chinese.
    Contents:
    Chapter 1. INTRODUCTION
        1.1. Background
            1.1.1. Introduction to Intelligent Vehicle
            1.1.2. Typical Vehicle Tracking Systems for Rear-end Collision Avoidance
            1.1.3. Passive vs. Active Vehicle Tracking
            1.1.4. Vision-based Vehicle Tracking Systems
            1.1.5. Characteristics of Computing Devices on Vehicles
        1.2. Motivation and Objectives
        1.3. Major Contributions
            1.3.1. A 3-phase Vision-based Vehicle Tracking Framework
            1.3.2. Camera-to-vehicle Distance Measurement by Single Camera
            1.3.3. Real Time Vehicle Detection
            1.3.4. Real Time Vehicle Tracking using Simplified Image Alignment
        1.4. Evaluation Platform
        1.5. Thesis Organization
    Chapter 2. RELATED WORK
        2.1. Stereo-based Vehicle Tracking
        2.2. Motion-based Vehicle Tracking
        2.3. Knowledge-based Vehicle Tracking
        2.4. Commercial Systems
    Chapter 3. 3-PHASE VISION-BASED VEHICLE TRACKING FRAMEWORK
        3.1. Introduction to the 3-phase Framework
        3.2. Vehicle Detection
            3.2.1. Overview of Vehicle Detection
            3.2.2. Locating the Vehicle Center - Symmetrical Measurement
            3.2.3. Locating the Vehicle Roof and Bottom
            3.2.4. Locating the Vehicle Sides - Over-complete Haar Transform
        3.3. Vehicle Template Tracking by Image Alignment
            3.3.5. Overview of Vehicle Template Tracking
            3.3.6. Goal of Image Alignment
            3.3.7. Alternative Image Alignment - Compositional Image Alignment
            3.3.8. Efficient Image Alignment - Inverse Compositional Algorithm
        3.4. Vehicle Template Update
            3.4.1. Situation of Vehicle Lost
            3.4.2. Template Filling by Updating the Positions of Vehicle Features
        3.5. Experiments and Discussions
            3.5.1. Experiment Setup
            3.5.2. Successful Tracking Percentage
        3.6. Comparing with Other Tracking Methodologies
            3.6.1. 1-phase Vision-based Vehicle Tracking
            3.6.2. Image Correlation
            3.6.3. Continuously Adaptive Mean Shift
    Chapter 4. CAMERA-TO-VEHICLE DISTANCE MEASUREMENT BY SINGLE CAMERA
        4.1. The Principle of the Law of Perspective
        4.2. Distance Measurement by Single Camera
    Chapter 5. REAL TIME VEHICLE DETECTION
        5.1. Introduction
        5.2. Timing Analysis of Vehicle Detection
        5.3. Symmetrical Measurement Optimization
            5.3.1. Diminished Gradient Image for Symmetrical Measurement
            5.3.2. Replacing Division by Multiplication Operations
        5.4. Over-complete Haar Transform Optimization
            5.4.1. Characteristics of Over-complete Haar Transform
            5.4.2. Pre-computation of Haar Block
        5.5. Summary
    Chapter 6. REAL TIME VEHICLE TRACKING USING SIMPLIFIED IMAGE ALIGNMENT
        6.1. Introduction
        6.2. Timing Analysis of Original Image Alignment
        6.3. Simplified Image Alignment
            6.3.1. Reducing the Number of Parameters in Affine Transformation
            6.3.2. Size Reduction of Image Alignment Matrices
        6.4. Experiments and Discussions
            6.4.1. Successful Tracking Percentage
            6.4.2. Timing Improvement
    Chapter 7. CONCLUSIONS
    Chapter 8. BIBLIOGRAPHY
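Chapter 4 of the thesis covers camera-to-vehicle distance measurement by a single camera via the law of perspective. A common way to realize this is the flat-road pinhole relation Z = f·h/(y − y0); whether the thesis uses exactly this form is an assumption, so the sketch below is illustrative only (all numeric values are made up):

```python
def ground_distance(f_px, cam_height_m, y_bottom_px, y_horizon_px):
    """Camera-to-vehicle distance from a single camera via the law of
    perspective: assuming a flat road, a point at image row y_bottom
    (where the vehicle touches the road) lies at distance
        Z = f * h / (y_bottom - y_horizon),
    with f the focal length in pixels, h the camera height above the road,
    and y_horizon the image row of the horizon line."""
    dy = y_bottom_px - y_horizon_px
    if dy <= 0:
        raise ValueError("contact point must lie below the horizon line")
    return f_px * cam_height_m / dy

# A vehicle whose bottom edge is 60 rows below the horizon, seen by an
# 800 px focal-length camera mounted 1.2 m above the road:
print(ground_distance(800, 1.2, 300, 240))   # 16.0
```

The formula also explains why single-camera ranging degrades with distance: far vehicles project only a few rows below the horizon, so one row of error changes Z substantially.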

    Real-Time Traffic Light Recognition Based on C-HOG Features

    This paper proposes a real-time traffic light detection and recognition algorithm for recognizing traffic signals in intelligent vehicles. The algorithm is based on C-HOG features (colour plus HOG features) and a Support Vector Machine (SVM). It accurately extracts red and green areas from the video and then screens the eligible candidate regions, after which the C-HOG features of each kind of light are extracted. Finally, an SVM classifier is built for the corresponding light categories. The algorithm obtains accurate real-time information from the judgment of the decision function, and experimental results show that it achieves good accuracy and real-time performance.
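The C-HOG-plus-SVM pipeline above can be sketched with a toy descriptor and a linear decision function. This is not the paper's exact feature layout: the colour cue, the single whole-patch orientation histogram, and the class labels are all simplifications chosen for illustration (real HOG uses cells with block normalisation, and a trained SVM supplies the weights):

```python
import numpy as np

def hog_histogram(gray, n_bins=9):
    """Toy HOG-style descriptor: one orientation histogram over the whole
    patch, weighted by gradient magnitude (real HOG computes per-cell
    histograms with block normalisation)."""
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0
    hist, _ = np.histogram(ang, bins=n_bins, range=(0.0, 180.0), weights=mag)
    s = hist.sum()
    return hist / s if s > 0 else hist

def c_hog(patch_rgb, gray):
    """Crude colour cue (overall red and green fractions) concatenated with
    the orientation histogram -- a stand-in for the paper's C-HOG features."""
    rgb = patch_rgb.astype(float)
    total = rgb.sum() + 1e-9
    colour = np.array([rgb[..., 0].sum() / total, rgb[..., 1].sum() / total])
    return np.concatenate([colour, hog_histogram(gray)])

def svm_decide(x, w, b):
    """Linear SVM decision function sign(w.x + b); in the paper's setting
    the classes would be light categories such as red vs. green, and
    (w, b) would come from training."""
    return 1 if float(np.dot(w, x) + b) >= 0.0 else -1
```

A candidate region from the colour screening step would be described by `c_hog` and passed to `svm_decide` with the trained weights for each light category.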