
    FishEye8K: A Benchmark and Dataset for Fisheye Camera Object Detection

    With the advance of AI, road object detection has been a prominent topic in computer vision, mostly using perspective cameras. A fisheye lens provides omnidirectional wide coverage, allowing fewer cameras to monitor road intersections, albeit with view distortions. To our knowledge, there is no existing open dataset prepared for traffic surveillance with fisheye cameras. This paper introduces the open FishEye8K benchmark dataset for road object detection tasks, which comprises 157K bounding boxes across five classes (Pedestrian, Bike, Car, Bus, and Truck). In addition, we present benchmark results of state-of-the-art (SoTA) models, including variations of YOLOv5, YOLOR, YOLOv7, and YOLOv8. The dataset comprises 8,000 images recorded in 22 videos using 18 fisheye cameras for traffic monitoring in Hsinchu, Taiwan, at resolutions of 1080×1080 and 1280×1280. The data annotation and validation process was arduous and time-consuming due to the ultra-wide panoramic and hemispherical fisheye camera images with large distortion and numerous road participants, particularly people riding scooters. To avoid bias, frames from a particular camera were assigned to either the training or the test set, maintaining a ratio of about 70:30 for both the number of images and the bounding boxes in each class. Experimental results show that YOLOv8 and YOLOR perform best at input sizes 640×640 and 1280×1280, respectively. The dataset will be available on GitHub with PASCAL VOC, MS COCO, and YOLO annotation formats. The FishEye8K benchmark will provide significant contributions to fisheye video analytics and smart city applications.
    Comment: CVPR Workshops 2023
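    The camera-wise 70:30 split described above can be sketched as follows. This is a minimal illustration, not the authors' released code: the frame records, the `camera` key, and the greedy assignment strategy are assumptions for the example.

    ```python
    import random

    def split_by_camera(frames, train_ratio=0.7, seed=0):
        """Assign all frames from one camera to either train or test,
        so no camera's images appear in both splits (avoids leakage).
        `frames` is a list of dicts with a 'camera' key (assumed layout)."""
        by_cam = {}
        for f in frames:
            by_cam.setdefault(f["camera"], []).append(f)

        cams = sorted(by_cam)
        random.Random(seed).shuffle(cams)  # randomize camera order reproducibly

        total = len(frames)
        train, test = [], []
        for cam in cams:
            # Greedily send whole cameras to train until ~70% of images are covered.
            if len(train) < train_ratio * total:
                train.extend(by_cam[cam])
            else:
                test.extend(by_cam[cam])
        return train, test
    ```

    A real split would also balance per-class bounding-box counts, as the paper describes; this sketch only balances image counts.
    
    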

    Highly Curved Lane Detection Algorithms Based on Kalman Filter

    No full text
    The purpose of the self-driving car is to minimize the number of casualties from traffic accidents. One cause of traffic accidents is improper vehicle speed, especially at road turns. If the road turn can be anticipated, such accidents can be avoided. This paper presents a cutting-edge curve lane detection algorithm based on the Kalman filter for the self-driving car. It uses a parabola-equation model and a circle-equation model inside the Kalman filter to estimate the parameters of the curved lane. The proposed algorithm was tested with a self-driving vehicle. Experimental results show that the curve lane detection algorithm has a high success rate. The paper also presents simulation results of the autonomous vehicle controlling steering and speed using the output of the full curve lane detection algorithm.
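    A Kalman filter over parabola lane parameters, as the abstract describes, can be sketched like this. The state layout, noise values, and measurement model are assumptions for illustration, not the paper's implementation; the circle-equation variant would swap in a different measurement function.

    ```python
    import numpy as np

    class ParabolaLaneKF:
        """Kalman filter tracking parabola lane coefficients (a, b, c)
        in x = a*y**2 + b*y + c, updated from detected lane points."""

        def __init__(self):
            self.x = np.zeros(3)       # state: [a, b, c]
            self.P = np.eye(3) * 1e3   # state covariance (uncertain start)
            self.Q = np.eye(3) * 1e-4  # process noise: coefficients drift slowly
            self.R = 4.0               # measurement noise variance (pixels^2)

        def predict(self):
            # Lane shape changes slowly between frames: identity motion model.
            self.P = self.P + self.Q

        def update(self, y, x_meas):
            # Measurement model: x = [y^2, y, 1] @ [a, b, c]
            H = np.array([y**2, y, 1.0])
            S = H @ self.P @ H + self.R         # innovation variance (scalar)
            K = (self.P @ H) / S                # Kalman gain
            self.x = self.x + K * (x_meas - H @ self.x)
            self.P = self.P - np.outer(K, H @ self.P)
    ```

    Feeding the filter one detected lane point per `update` call lets it smooth the curve estimate across frames, which is what makes the anticipation of upcoming turns possible.
    
    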

    Hybrid Motion Planning Method for Autonomous Robots Using Kinect Based Sensor Fusion and Virtual Plane Approach in Dynamic Environments

    A new reactive motion planning method for an autonomous vehicle in dynamic environments is proposed. The method combines a virtual-plane-based reactive motion planning technique with a sensor-fusion-based obstacle detection approach, improving the robustness and autonomy of vehicle navigation in unpredictable dynamic environments. Its key feature is a local observer in the virtual plane, which allows the effective transformation of complex dynamic planning problems into simple stationary ones in the virtual plane. In addition, the sensor-fusion-based obstacle detection technique estimates the pose of moving obstacles using a Kinect sensor and a sonar sensor, which improves the accuracy and robustness of the reactive motion planning approach in uncertain dynamic environments. The performance of the proposed method was demonstrated through both simulation studies and field experiments with multiple moving obstacles, including hostile environments where a conventional method failed.
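    The abstract does not specify how the Kinect and sonar readings are combined, so the following is only a sketch of one common fusion technique (inverse-variance weighting) applied to a single range measurement; the function name and variance values are assumptions.

    ```python
    def fuse_range(kinect_r, kinect_var, sonar_r, sonar_var):
        """Inverse-variance weighted fusion of two range readings to the
        same obstacle: the more certain sensor dominates the estimate."""
        w_k = 1.0 / kinect_var   # weight of the Kinect depth reading
        w_s = 1.0 / sonar_var    # weight of the sonar reading
        fused = (w_k * kinect_r + w_s * sonar_r) / (w_k + w_s)
        fused_var = 1.0 / (w_k + w_s)  # fused estimate is more certain than either
        return fused, fused_var
    ```

    For example, a Kinect reading of 2.0 m (variance 0.01) fused with a sonar reading of 2.2 m (variance 0.04) yields an estimate pulled toward the Kinect, with lower variance than either sensor alone.
    
    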