Weighted Bayesian Gaussian Mixture Model for Roadside LiDAR Object Detection
Background modeling is widely used for intelligent surveillance systems to
detect moving targets by subtracting the static background components. Most
roadside LiDAR object detection methods filter out foreground points by
comparing new data points to pre-trained background references based on
descriptive statistics over many frames (e.g., voxel density, number of
neighbors, maximum distance). However, these solutions are inefficient under
heavy traffic, and parameter values are hard to transfer from one scenario to
another. In early studies, the probabilistic background modeling methods widely
used for the video-based system were considered unsuitable for roadside LiDAR
surveillance systems due to the sparse and unstructured point cloud data. In
this paper, the raw LiDAR data were transformed into a structured
representation based on the elevation and azimuth value of each LiDAR point.
With this high-order tensor representation, we break the barrier to allow
efficient high-dimensional multivariate analysis for roadside LiDAR background
modeling. The Bayesian Nonparametric (BNP) approach integrates the intensity
value and 3D measurements to exploit the measurement data using 3D and
intensity info entirely. The proposed method was compared against two
state-of-the-art roadside LiDAR background models, computer vision benchmark,
and deep learning baselines, evaluated at point, object, and path levels under
heavy traffic and challenging weather. This multimodal Weighted Bayesian
Gaussian Mixture Model (GMM) can handle dynamic backgrounds with noisy
measurements and substantially enhances the infrastructure-based LiDAR object
detection, whereby various 3D modeling for smart city applications could be
created
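The pipeline the abstract describes can be sketched in two steps: project each Cartesian return onto an (elevation, azimuth) grid, then keep per-cell range statistics and flag returns that deviate from them. The sketch below is illustrative only: it uses a single running Gaussian per cell as a stand-in for the paper's weighted Bayesian GMM, and the bin counts, elevation range, and function names (`spherical_bin`, `CellBackground`) are assumptions, not the paper's values.

```python
import numpy as np

def spherical_bin(points, az_bins=360, el_bins=32, el_range=(-25.0, 15.0)):
    """Map Cartesian LiDAR returns onto an (elevation, azimuth) grid.

    Mirrors the abstract's structured tensor representation; bin counts
    and the elevation range are illustrative, not the paper's values.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.sqrt(x ** 2 + y ** 2 + z ** 2)
    az = np.degrees(np.arctan2(y, x)) % 360.0
    el = np.degrees(np.arcsin(z / np.maximum(r, 1e-9)))
    ai = np.clip((az / 360.0 * az_bins).astype(int), 0, az_bins - 1)
    lo, hi = el_range
    ei = np.clip(((el - lo) / (hi - lo) * el_bins).astype(int), 0, el_bins - 1)
    return ai, ei, r

class CellBackground:
    """Per-cell running Gaussian over range; a large deviation marks foreground.

    A simplified single-Gaussian stand-in for the paper's weighted Bayesian GMM.
    """
    def __init__(self, az_bins=360, el_bins=32, lr=0.05, k=3.0):
        self.mean = np.zeros((el_bins, az_bins))
        self.var = np.ones((el_bins, az_bins))
        self.lr, self.k = lr, k

    def initialize(self, ai, ei, r):
        # Seed each observed cell with its first range measurement.
        self.mean[ei, ai] = r

    def update_and_classify(self, ai, ei, r):
        mu, var = self.mean[ei, ai], self.var[ei, ai]
        fg = np.abs(r - mu) > self.k * np.sqrt(var)
        bg = ~fg  # adapt statistics only with background-consistent returns
        d = r[bg] - mu[bg]
        self.mean[ei[bg], ai[bg]] += self.lr * d
        self.var[ei[bg], ai[bg]] += self.lr * (d ** 2 - var[bg])
        return fg
```

A vehicle passing through a cell whose background sits at a longer range then shows up as a shorter-range return and is flagged as foreground.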
Sea-Surface Object Detection Based on Electro-Optical Sensors: A Review
Sea-surface object detection is critical for the navigation safety of autonomous ships. Electro-optical (EO) sensors, such as video cameras, complement on-board radar in detecting small sea-surface obstacles. Traditionally, researchers have used horizon detection, background subtraction, and foreground segmentation techniques to detect sea-surface objects. Recently, deep learning-based object detection technologies have gradually been applied to sea-surface object detection. This article provides a comprehensive overview of sea-surface object-detection approaches, comparing the advantages and drawbacks of each technique across four essential aspects: EO sensors and image types, traditional object-detection methods, deep learning methods, and maritime dataset collection. In particular, sea-surface object detection based on deep learning methods is thoroughly analyzed, and highly influential public datasets are introduced as benchmarks to verify the effectiveness of these approaches.
Advanced traffic video analytics for robust traffic accident detection
Automatic traffic accident detection is an important task in traffic video analysis due to its key applications in developing intelligent transportation systems. Reducing the time delay between the occurrence of an accident and the dispatch of the first responders to the scene may help lower the mortality rate and save lives. Since 1980, many approaches have been presented for the automatic detection of incidents in traffic videos. In this dissertation, some challenging problems for accident detection in traffic videos are discussed and a new framework is presented in order to automatically detect single-vehicle and intersection traffic accidents in real-time.
First, a new foreground detection method is applied in order to detect the moving vehicles and subtract the ever-changing background in traffic video frames captured by static or non-stationary cameras. For traffic videos captured during daytime, cast shadows degrade the performance of the foreground detection and road segmentation. A novel cast shadow detection method is therefore presented to detect and remove the shadows cast by moving vehicles, as well as the shadows cast by static objects on the road.
Second, a new method is presented to detect the region of interest (ROI): it uses the locations of the moving vehicles and the initial road samples and extracts discriminating features to segment the road region. After detecting the ROI, the moving direction of the traffic is estimated based on the rationale that crashed vehicles often make a rapid change of direction. Lastly, single-vehicle traffic accidents and trajectory conflicts are detected using a first-order logic decision-making system.
The experimental results using publicly available videos and a dataset provided by the New Jersey Department of Transportation (NJDOT) demonstrate the feasibility of the proposed methods. Additionally, the main challenges and future directions are discussed regarding (i) improving the performance of the foreground segmentation, (ii) reducing the computational complexity, and (iii) detecting other types of traffic accidents.
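The "rapid change of direction" cue above lends itself to a simple trajectory rule. The sketch below is a minimal stand-in for one rule in the dissertation's first-order logic decision system, assuming a 2D centroid trajectory per vehicle; the function names and the 45-degree threshold are illustrative, not the dissertation's parameters.

```python
import numpy as np

def heading_deg(traj):
    """Per-step heading angles (degrees) of a 2D centroid trajectory."""
    d = np.diff(traj, axis=0)
    return np.degrees(np.arctan2(d[:, 1], d[:, 0]))

def rapid_turn_events(traj, max_turn_deg=45.0):
    """Flag steps whose heading change exceeds a threshold.

    Illustrative stand-in for the rule that crashed vehicles often make
    a rapid change of direction; threshold is a hypothetical value.
    """
    h = heading_deg(traj)
    # Wrap heading differences into [0, 180] so a 350-degree jump reads as 10.
    turn = np.abs((np.diff(h) + 180.0) % 360.0 - 180.0)
    return np.where(turn > max_turn_deg)[0] + 1  # step index where the turn occurs
```

Run over each tracked vehicle's trajectory, any flagged step would then be combined with other predicates (sudden stop, trajectory conflict) before declaring an accident.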
INTELLIGENT OPTIMIZER SELECTION TECHNIQUE: A COMPARATIVE STUDY OF A MODIFIED DENSENET201 MODEL WITH OTHER DEEP LEARNING MODELS
The rapid growth and development of AI-based applications has introduced a wide range of deep and transfer learning model architectures. Selecting an optimal optimizer remains challenging for improving the performance efficiency and accuracy of any type of classification. This paper proposes an intelligent optimizer selection technique using a new search algorithm to overcome this difficulty. The dataset used in this work was collected and customized for controlling and monitoring roads, especially when emergency vehicles are approaching. In this regard, several deep and transfer learning models were compared for accurate detection and classification. Furthermore, the DenseNet201 layers are frozen while the best optimizer is chosen. The main goal is to improve the accuracy of emergency vehicle classification by testing various optimization methods, including Adam, Adamax, Nadam, and RMSProp. The evaluation metrics used to compare the model with other deep learning techniques are classification accuracy, precision, recall, and F1-score. Test results show that the proposed selection-based optimizer increased classification accuracy, reaching 98.84%.
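The core idea of trying several optimizers and keeping the best-scoring one can be shown without the DenseNet201 backbone. The toy sketch below trains a NumPy logistic regression under hand-rolled Adam and RMSProp updates and picks the winner by accuracy; it is a stand-in for the paper's search over Keras optimizers on frozen DenseNet201 features, and all names (`train_logreg`, `select_optimizer`) and hyperparameters are illustrative assumptions.

```python
import numpy as np

def train_logreg(X, y, opt, lr=0.05, steps=300):
    """Train logistic regression with one optimizer; return training accuracy."""
    w = np.zeros(X.shape[1]); b = 0.0
    mw, vw = np.zeros_like(w), np.zeros_like(w)  # first/second moment buffers
    mb, vb = 0.0, 0.0
    for t in range(1, steps + 1):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        gw = X.T @ (p - y) / len(y)
        gb = float(np.mean(p - y))
        if opt == "rmsprop":
            vw = 0.9 * vw + 0.1 * gw ** 2
            vb = 0.9 * vb + 0.1 * gb ** 2
            w -= lr * gw / (np.sqrt(vw) + 1e-8)
            b -= lr * gb / (np.sqrt(vb) + 1e-8)
        elif opt == "adam":
            mw = 0.9 * mw + 0.1 * gw; vw = 0.999 * vw + 0.001 * gw ** 2
            mb = 0.9 * mb + 0.1 * gb; vb = 0.999 * vb + 0.001 * gb ** 2
            # Bias-corrected moment estimates, as in standard Adam.
            w -= lr * (mw / (1 - 0.9 ** t)) / (np.sqrt(vw / (1 - 0.999 ** t)) + 1e-8)
            b -= lr * (mb / (1 - 0.9 ** t)) / (np.sqrt(vb / (1 - 0.999 ** t)) + 1e-8)
        else:
            raise ValueError(opt)
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    return float(np.mean((p > 0.5) == y))

def select_optimizer(X, y, candidates=("adam", "rmsprop")):
    """Score each candidate optimizer and return the best -- a toy version
    of search-based optimizer selection."""
    scores = {opt: train_logreg(X, y, opt) for opt in candidates}
    return max(scores, key=scores.get), scores
```

In the paper's setting the inner loop would instead fit the unfrozen head of DenseNet201 under each Keras optimizer and compare held-out accuracy.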
Vehicle detection and tracking using wireless sensors and video cameras
This thesis presents the development of a surveillance testbed using wireless sensors and video cameras for vehicle detection and tracking. The experimental study includes the testbed design and discusses some of the implementation issues in using wireless sensors and video cameras for a practical application. A group of sensor devices equipped with light sensors is used to detect and localize the position of a moving vehicle. A background subtraction method is used to detect the moving vehicle in the video sequences. The vehicle centroid is calculated in each frame. A non-linear minimization method is used to estimate the perspective transformation which projects 3D points to 2D image points. Vehicle location estimates from three cameras are fused to form a single trajectory representing the vehicle motion. Experimental results using both sensors and cameras are presented. The average error between vehicle location estimates from the cameras and the wireless sensors is around 0.5 ft.
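The per-frame camera chain above (foreground mask, centroid, planar projection, multi-camera fusion) can be sketched in a few lines. This is a minimal illustration, not the thesis's code: simple averaging stands in for its nonlinear-minimization-based fusion, and the function names are assumptions.

```python
import numpy as np

def centroid(mask):
    """Centroid (x, y) of a binary foreground mask, i.e. the per-frame
    vehicle location after background subtraction."""
    ys, xs = np.nonzero(mask)
    return np.array([xs.mean(), ys.mean()])

def project(H, pt):
    """Apply a 3x3 homography (e.g. image plane -> ground plane) to a 2D point."""
    v = H @ np.array([pt[0], pt[1], 1.0])
    return v[:2] / v[2]

def fuse(estimates):
    """Fuse per-camera ground-plane estimates into one trajectory point.
    Averaging is an illustrative stand-in for the thesis's fusion step."""
    return np.mean(np.asarray(estimates), axis=0)
```

Running `fuse` on the three cameras' projected centroids, frame by frame, yields the single trajectory the thesis compares against the wireless-sensor localization.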
Automated Detection and Counting of Pedestrians on an Urban Roadside
This thesis implements an automated system that counts pedestrians with 85% accuracy. Two approaches have been considered and evaluated in terms of count accuracy, cost, and ease of deployment. The first approach employs the Autoscope Solo Terra, a traffic camera widely used to monitor vehicular traffic. The Solo Terra supports an image processing-based detector that counts the number of objects crossing user-defined areas in the captured image. Because the count is updated based only on the amount of movement across the selected regions, it cannot distinguish pedestrians from other moving objects. A second approach has therefore been considered that uses a histogram of oriented gradients (HoG), an advanced vision-based algorithm proposed by Dalal et al., which distinguishes a pedestrian from a non-pedestrian based on the omega shape formed by the head and shoulders of a human being. The implemented detection software processes video frames streamed from a low-cost digital camera. The frames are divided into sub-regions, which are scanned for an omega shape whenever movement is detected in those regions. It has been found that the HoG-based approach degrades in performance due to occlusion under dense pedestrian traffic conditions, whereas the Solo Terra approach appears to be more robust, although undercounts and overcounts were still encountered. To combat the disadvantages of both approaches, they were integrated into a single system in which the count is incremented predominantly by the Solo Terra and the HoG-based approach corrects the obtained count under certain conditions. A preliminary prototype of the integrated system has been verified.
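The building block of the HoG detector mentioned above is an orientation histogram computed over small image cells. The sketch below shows that single step in NumPy as an illustration; a real Dalal-style detector additionally groups cells into blocks, normalizes them, and feeds the concatenated descriptor to a trained SVM, none of which is reproduced here.

```python
import numpy as np

def hog_cell_histogram(patch, n_bins=9):
    """Orientation histogram for one HoG cell (unsigned gradients, 0-180 deg).

    Minimal sketch of the descriptor behind the Dalal et al. pedestrian
    detector; block normalization and the SVM classifier are omitted.
    """
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)                       # gradient magnitude per pixel
    ang = np.degrees(np.arctan2(gy, gx)) % 180.0  # unsigned orientation
    bins = np.minimum((ang / 180.0 * n_bins).astype(int), n_bins - 1)
    hist = np.bincount(bins.ravel(), weights=mag.ravel(), minlength=n_bins)
    return hist / (np.linalg.norm(hist) + 1e-9)   # L2-normalized histogram
```

A vertical intensity edge, for instance, produces horizontal gradients and therefore piles its weight into the 0-degree bin; the head-and-shoulders omega shape yields a characteristic mix of bins across neighboring cells.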
Camera Planning and Fusion in a Heterogeneous Camera Network
Wide-area camera networks are becoming more and more common. They have a wide range of commercial and military applications, from video surveillance to smart homes and from traffic monitoring to anti-terrorism. The design of such a camera network is a challenging problem due to the complexity of the environment, self- and mutual occlusion of moving objects, diverse sensor properties, and a myriad of performance metrics for different applications. In this dissertation, we consider two such challenges: camera planning and camera fusion. Camera planning determines the optimal number and placement of cameras for a target cost function. Camera fusion describes the task of combining images collected by heterogeneous cameras in the network to extract information pertinent to a target application.
I tackle the camera planning problem by developing a new unified framework based on binary integer programming (BIP) to relate the network design parameters and the performance goals of a variety of camera network tasks. Most BIP formulations are NP-hard, and various approximate algorithms have been proposed in the literature. In this dissertation, I develop a comprehensive framework for comparing the entire spectrum of approximation algorithms, from Greedy and Markov Chain Monte Carlo (MCMC) to various relaxation techniques. The key contribution is to provide not only a generic formulation of the camera planning problem but also novel approaches to adapt the formulation to powerful approximation schemes, including Simulated Annealing (SA) and Semi-Definite Programming (SDP). The accuracy, efficiency, and scalability of each technique are analyzed and compared in depth. Extensive experimental results are provided to illustrate the strengths and weaknesses of each method.
The second problem, heterogeneous camera fusion, is very complex. Information can be fused at different levels, from pixels or voxels to semantic objects, with large variation in accuracy, communication, and computation costs. My focus is on the geometric transformation of shapes between objects observed on different camera planes. This so-called geometric fusion approach is usually the most reliable, at the expense of high computation and communication costs. To tackle the complexity, a hierarchy of camera models with different levels of complexity was proposed to balance the effectiveness and efficiency of the camera network operation. Different calibration and registration methods are then proposed for each camera model. Finally, I provide two specific examples to demonstrate the effectiveness of the model: 1) a fusion system that improves the segmentation of the human body in a camera network consisting of thermal and regular visible-light cameras, and 2) a view-dependent rendering system that combines information from depth and regular cameras to collect scene information and generate new views in real time.
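Of the approximation schemes the dissertation compares, the Greedy baseline is simple enough to sketch: repeatedly pick the candidate camera that covers the most still-uncovered targets. The code below is an illustrative set-cover-style greedy under an assumed input format (a dict from candidate positions to sets of covered target ids); it is not the dissertation's BIP, SA, or SDP formulation.

```python
def greedy_camera_placement(coverage, budget):
    """Greedy approximation for camera planning: at each step, choose the
    candidate camera covering the most still-uncovered targets.

    `coverage` maps candidate camera positions to sets of target ids
    (an assumed encoding); `budget` caps the number of cameras placed.
    """
    chosen, covered = [], set()
    for _ in range(budget):
        # Candidate with the largest marginal coverage gain.
        best = max(coverage, key=lambda c: len(coverage[c] - covered), default=None)
        if best is None or not (coverage[best] - covered):
            break  # nothing left to gain
        chosen.append(best)
        covered |= coverage[best]
    return chosen, covered
```

This greedy rule carries the classic (1 - 1/e) approximation guarantee for coverage objectives, which is one reason it serves as the baseline against MCMC and the relaxation techniques in the comparison.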