
    PARTICLE SWARM OPTIMIZATION AND SUPPORT VECTOR MACHINE FOR VEHICLE TYPE CLASSIFICATION IN MOVING VIDEO

    Detection, tracking, and classification of vehicles are the most important stages of computer vision applications in Intelligent Transportation Systems (ITS). At present, the use of radar and magnetic sensors poses problems for classifying and counting vehicles. This research proposes the use of video cameras to detect and classify vehicle types. The video data is converted into a sequence of image frames, and a median filter is applied to remove noise. Vehicle detection is performed using a background subtraction method in which the background is modeled by frame differencing, with contours as models of the objects. A morphological opening followed by dilation is then applied to the detected contours to eliminate unwanted pixels and fill holes in the contours. Geometric features extracted from the vehicle contours are used as input to a Support Vector Machine (SVM) with a Radial Basis Function (RBF) kernel to classify vehicle types. SVM classification is strongly influenced by the choice of the parameter C and the kernel function used, so Particle Swarm Optimization (PSO) is used to select the best C and gamma parameters for the SVM. A sensitivity of 79%, specificity of 41%, accuracy of 68%, and error of 32% are obtained in classifying vehicle types from video data, with an average object-tracking accuracy of 87.5%. Keywords: Intelligent Transportation System, Vehicle Classification, Background Subtraction, Particle Swarm Optimization, Support Vector Machine
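    The PSO-over-(C, gamma) tuning step described above can be sketched as follows. This is a minimal sketch, not the paper's implementation: the `objective` below is a stand-in quadratic error surface, whereas a real run would train an RBF-SVM with C=10**log_c and gamma=10**log_gamma for each particle and return the cross-validated error; all hyperparameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def objective(log_c, log_gamma):
    # Stand-in for the cross-validated SVM error surface; a real run
    # would train an RBF-SVM with C=10**log_c, gamma=10**log_gamma here.
    return (log_c - 2.0) ** 2 + (log_gamma + 3.0) ** 2

def pso(n_particles=20, n_iters=60, w=0.7, c1=1.5, c2=1.5):
    # Particles search log-space so C and gamma can span decades.
    pos = rng.uniform(-5.0, 5.0, size=(n_particles, 2))  # (log C, log gamma)
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([objective(*p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(n_iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        # Standard velocity update: inertia + cognitive + social terms.
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        vals = np.array([objective(*p) for p in pos])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = pos[better], vals[better]
        gbest = pbest[pbest_val.argmin()].copy()
    return 10.0 ** gbest[0], 10.0 ** gbest[1]  # back to (C, gamma)

best_c, best_gamma = pso()
```

The returned pair would then be passed to the final SVM training; searching in log-space is a common choice because SVM performance is far more sensitive to the order of magnitude of C and gamma than to their exact values.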

    Are object detection assessment criteria ready for maritime computer vision?

    Maritime vessels equipped with visible and infrared cameras can complement other conventional sensors for object detection. However, the application of computer vision techniques in the maritime domain has received attention only recently. The maritime environment poses its own unique requirements and challenges. Assessing the quality of detections is a fundamental need in computer vision, yet the conventional assessment metrics suited to generic object detection are deficient in the maritime setting. Thus, a large body of related work in computer vision appears inapplicable to the maritime setting at first sight. We discuss the problem of defining assessment metrics suitable for maritime computer vision and consider new bottom edge proximity metrics for this purpose. These metrics indicate that existing computer vision approaches are indeed promising for maritime computer vision and can play a foundational role in the emerging field of maritime computer vision.
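    The abstract does not define the metrics precisely; one plausible reading of a bottom edge proximity score (horizontal overlap of the detection's and ground truth's bottom edges, discounted by their vertical offset, since in maritime scenes the box bottom approximates the waterline) might look like this hypothetical sketch, where the function name and normalisation are assumptions:

```python
def bottom_edge_proximity(det, gt):
    """Score in [0, 1] comparing the bottom edges of two boxes.
    det/gt are (x1, y1, x2, y2) with y growing downward.
    Horizontal overlap of the bottom edges, discounted by the
    vertical separation normalised by the ground-truth height."""
    dx1, _, dx2, dy2 = det
    gx1, gy1, gx2, gy2 = gt
    overlap = max(0.0, min(dx2, gx2) - max(dx1, gx1))
    union = max(dx2, gx2) - min(dx1, gx1)
    horiz = overlap / union if union > 0 else 0.0
    vert = abs(dy2 - gy2) / max(gy2 - gy1, 1e-9)
    return horiz * max(0.0, 1.0 - vert)
```

Under this reading, a detection whose box is too tall (missing the waterline) is penalised even when its area overlap with the ground truth is high, which conventional IoU would not capture.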

    Rain Removal in Traffic Surveillance: Does it Matter?

    Varying weather conditions, including rainfall and snowfall, are generally regarded as a challenge for computer vision algorithms. One proposed solution to the challenges induced by rain and snowfall is to artificially remove the rain from images or video using rain removal algorithms. The promise of these algorithms is that the rain-removed image frames will improve the performance of subsequent segmentation and tracking algorithms. However, rain removal algorithms are typically evaluated only on their ability to remove synthetic rain on a small subset of images, and their behavior on real-world videos, when integrated with a typical computer vision pipeline, is currently unknown. In this paper, we review the existing rain removal algorithms and propose a new dataset of 22 traffic surveillance sequences under a broad variety of weather conditions that all include either rain or snowfall. We propose a new evaluation protocol that evaluates rain removal algorithms on their ability to improve the performance of subsequent segmentation, instance segmentation, and feature tracking algorithms under rain and snow. If successful, the de-rained frames of a rain removal algorithm should improve segmentation performance and increase the number of accurately tracked features. The results show that a recent single-frame-based rain removal algorithm increases segmentation performance by 19.7% on our proposed dataset, but it decreases feature tracking performance and shows mixed results with recent instance segmentation methods. However, the best video-based rain removal algorithm improves feature tracking accuracy by 7.72%. (Published in IEEE Transactions on Intelligent Transportation Systems.)
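    The core of such an evaluation protocol is comparing downstream segmentation quality on rainy frames against the same frames after de-raining, for a fixed ground truth. A minimal sketch of the comparison, assuming boolean masks (the synthetic masks and names below are illustrative, not from the paper's dataset):

```python
import numpy as np

def mask_iou(pred, gt):
    """Intersection-over-union of two boolean segmentation masks."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 1.0

# Synthetic example: a prediction on the rainy frame recovers only
# part of the ground-truth region; the de-rained prediction should
# score higher under the protocol.
gt = np.zeros((10, 10), bool); gt[2:8, 2:8] = True
pred_rainy = np.zeros((10, 10), bool); pred_rainy[2:8, 2:6] = True
```

The protocol's verdict for a rain removal algorithm is then simply whether the mean IoU (and the count of accurately tracked features) rises or falls once de-raining is inserted into the pipeline.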

    Automatic Vehicle Trajectory Extraction by Aerial Remote Sensing

    Research in road users’ behaviour typically depends on the availability of detailed observational data, particularly when the interest is in driving behaviour modelling. Among this type of data, vehicle trajectories are an important source of information for traffic flow theory, driving behaviour modelling, innovation in traffic management, and safety and environmental studies. Recent developments in sensing technologies and image processing algorithms have reduced the resources (time and costs) required for detailed traffic data collection, promoting the feasibility of site-based and vehicle-based naturalistic driving observation. For testing the core models of a traffic microsimulation application for safety assessment, vehicle trajectories were collected by remote sensing on a typical Portuguese suburban motorway. Multiple short flights over a stretch of an urban motorway allowed the collection of several partial vehicle trajectories. In this paper, the technical details of each step of the methodology are presented: image collection, image processing, vehicle identification, and vehicle tracking. To collect the images, a high-resolution camera was mounted on an aircraft's gyroscopic platform. The camera was connected to a DGPS for extraction of the camera position and allowed the collection of high-resolution images at a low frame rate (one frame every 2 s). After generic image orthorectification using the flight details and the terrain model, computer vision techniques were used for fine rectification: the scale-invariant feature transform (SIFT) algorithm was used for detection and description of image features, and the random sample consensus (RANSAC) algorithm for feature matching. Vehicle detection was carried out by median-based background subtraction. After computation of the detected foreground and shadow detection using a spectral ratio technique, region segmentation was used to identify candidates for vehicle positions. Finally, vehicles were tracked using a k-shortest disjoint paths algorithm. This approach allows for the optimization of an entire set of trajectories against all possible position candidates using motion-based optimization. Besides the importance of a new trajectory dataset that allows the development of new behavioural models and the validation of existing ones, this paper also describes the application of state-of-the-art algorithms and methods that significantly reduce the resources needed for such data collection. Keywords: Vehicle trajectory extraction, Driver behaviour, Remote sensing
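    The median-based background subtraction step named in the pipeline above can be sketched in a few lines: the background is taken as the per-pixel temporal median of the frame stack, and pixels that deviate strongly from it become vehicle candidates. The threshold and the toy frame stack are illustrative assumptions, not the paper's values.

```python
import numpy as np

def median_background_subtraction(frames, thresh=25):
    """frames: (T, H, W) grayscale stack. The background is the
    per-pixel temporal median; pixels deviating from it by more
    than `thresh` are marked as foreground (vehicle candidates)."""
    bg = np.median(frames, axis=0)
    return np.abs(frames.astype(np.int16) - bg) > thresh

# Toy stack: a bright "vehicle" moving right over a dark road.
frames = np.zeros((5, 8, 8), np.uint8)
for t in range(5):
    frames[t, 3:5, t:t + 2] = 200
fg = median_background_subtraction(frames)  # True exactly at the blob
```

The median works here because a moving vehicle occupies any given pixel in only a minority of the frames, so the per-pixel median recovers the empty road; the same property is what makes the method attractive for aerial sequences with short dwell times.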

    3D Vehicle Extraction and Tracking from Multiple Viewpoints for Traffic Monitoring by using Probability Fusion Map

    This paper presents a novel solution for vehicle occlusion and 3D measurement in traffic monitoring by fusing data from multiple stationary cameras. Compared with conventional single-camera methods for traffic monitoring, our approach fuses video data from different viewpoints into a common probability fusion map (PFM) and extracts targets from it. The proposed PFM concept efficiently handles and fuses data in order to estimate the probability of vehicle appearance, which real outdoor experiments verify to be more reliable than a single-camera solution. An AMF-based shadow modeling algorithm is also proposed in order to remove shadows on the road area and extract the proper vehicle regions.
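    The paper's PFM construction is not given here; a common way to fuse per-camera probability estimates on a shared ground-plane grid, assuming conditionally independent cameras, is to combine their odds. This is a minimal sketch of that independent-sensor fusion rule, not the paper's exact formulation:

```python
import numpy as np

def fuse_probability_maps(maps):
    """maps: list of (H, W) arrays, each one camera's estimate of
    P(vehicle) on a common ground-plane grid, projected beforehand.
    Independent-sensor fusion: multiply odds, then convert back."""
    maps = np.clip(np.stack(maps), 1e-6, 1 - 1e-6)  # avoid 0/1 extremes
    odds = np.prod(maps / (1 - maps), axis=0)
    return odds / (1 + odds)
```

Two cameras that each report 0.8 at a cell reinforce each other (fused value above 0.9), while a camera reporting 0.5 is neutral and leaves the other camera's estimate unchanged, which is the behaviour one wants when a viewpoint is occluded.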

    Computer vision based traffic monitoring system for multi-track freeways

    Nowadays, development is synonymous with the construction of infrastructure, and such road infrastructure needs constant attention in terms of traffic monitoring, as even a single incident on a major artery can disrupt daily life. Humans cannot be expected to monitor these massive infrastructures around the clock, so computer vision is increasingly being used to develop automated strategies that notify human observers of impending slowdowns and traffic bottlenecks. However, given the extreme costs associated with current state-of-the-art networked computer vision monitoring systems, innovative standalone systems can be developed that efficiently analyze traffic flow and track vehicles for speed detection. In this article, a traffic monitoring system is proposed that counts vehicles and tracks their speeds in real time on multi-track freeways in Australia. The proposed algorithm uses a Gaussian mixture model for foreground detection, tracks vehicle trajectories, and extracts useful traffic information for vehicle counting. This stationary surveillance system uses a fixed overhead camera to monitor traffic.
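    The Gaussian-model foreground detection used above can be illustrated with a simplified single-Gaussian-per-pixel version; the actual system uses a full Gaussian *mixture* model (several modes per pixel, as in MOG2-style subtractors), and the learning rate and threshold below are illustrative assumptions:

```python
import numpy as np

class RunningGaussianBackground:
    """Single Gaussian per pixel: a simplified stand-in for the GMM
    background model. A pixel is foreground when it deviates from
    the background mean by more than k standard deviations."""
    def __init__(self, first_frame, alpha=0.05, k=2.5):
        self.mean = first_frame.astype(float)
        self.var = np.full(first_frame.shape, 15.0 ** 2)  # initial variance
        self.alpha, self.k = alpha, k

    def apply(self, frame):
        frame = frame.astype(float)
        d2 = (frame - self.mean) ** 2
        fg = d2 > (self.k ** 2) * self.var   # squared-distance test
        upd = ~fg
        # Adapt the model only where the pixel matched the background,
        # so passing vehicles do not get absorbed into it.
        self.mean[upd] += self.alpha * (frame[upd] - self.mean[upd])
        self.var[upd] += self.alpha * (d2[upd] - self.var[upd])
        return fg
```

Counting and speed estimation would then operate on connected components of the returned foreground mask across frames; the mixture extension mainly adds robustness to repetitive background motion and lighting changes.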

    A Knowledge Graph Framework for Detecting Traffic Events Using Stationary Cameras

    With the rapid increase in urban development, it is critical to utilize dynamic sensor streams for traffic understanding, especially in larger cities where route and infrastructure planning are more critical. This creates a strong need to understand traffic patterns from ubiquitous sensors, so that city officials are better informed when planning urban construction and gain insight into the traffic dynamics of the city. In this study, we propose the ITSKG (Imagery-based Traffic Sensing Knowledge Graph) framework, which uses stationary traffic cameras as sensors to understand traffic patterns. The proposed system extracts image-based features from traffic camera images, adds a semantic layer to the sensor data for traffic information, and then labels traffic imagery with semantic labels such as congestion. We share a prototype example to highlight the novelty of our system and provide an online demo to enable users to gain a better understanding of it. This framework adds a new dimension to existing traffic modeling systems by incorporating dynamic image-based features and by creating a knowledge graph that adds a layer of abstraction for understanding and interpreting concepts like congestion in traffic event detection.
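    The layered idea above (image-derived features at the bottom, semantic traffic labels on top, all as graph edges) can be sketched with a toy triple store. Every identifier and predicate name below is a hypothetical stand-in, not vocabulary from ITSKG:

```python
# Minimal triple store: (subject, predicate, object) tuples.
triples = set()

def add(s, p, o):
    triples.add((s, p, o))

def query(p, o):
    """Subjects that have predicate p with object o."""
    return sorted(s for s, pp, oo in triples if pp == p and oo == o)

# Sensor layer: a camera frame and a feature extracted from its image.
add("cam:I75_12", "observes", "frame:t0800")
add("frame:t0800", "hasImageFeature", "highVehicleDensity")

# Semantic layer: a toy rule attaches the abstract traffic label
# (the kind of "congestion" concept the framework reasons about).
for s, p, o in list(triples):
    if p == "hasImageFeature" and o == "highVehicleDensity":
        add(s, "hasTrafficLabel", "congestion")
```

A production system would use an RDF store and a shared ontology instead of a Python set, but the separation is the same: perception writes the lower-layer triples, and rules or learned classifiers materialise the semantic layer that event-detection queries run against.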