
    Analysis of Illegal Parking Behavior in Lisbon: Predicting and Analyzing Illegal Parking Incidents in Lisbon's Top 10 Critical Streets

    Dissertation presented as the partial requirement for obtaining a Master's degree in Information Management, specialization in Knowledge Management and Business Intelligence. Illegal parking represents a costly and pervasive problem for most cities: it not only increases traffic congestion and the emission of air pollutants but also compromises pedestrian, cycling, and driving safety. Moreover, it obstructs the flow of emergency vehicles, delivery services, and other essential functions, posing a significant risk to public safety and impeding the efficient operation of urban services. These detrimental effects ultimately diminish the cleanliness, security, and overall attractiveness of cities, affecting the well-being of residents and visitors alike. Traditionally, decision-support systems for addressing illegal parking have relied heavily on costly camera systems and complex video-processing algorithms to detect and monitor infractions in real time. However, such systems are often challenging and expensive to deploy, particularly given diverse and dynamic road conditions. Research focusing on spatiotemporal features for predicting parking infractions offers a more efficient and cost-effective alternative. This project develops a machine learning model to predict illegal parking incidents in the ten most critical streets of Lisbon Municipality, taking into account the hour period and whether the day is a weekend or holiday. A comprehensive evaluation of several machine learning algorithms was conducted, and the k-nearest neighbors (KNN) algorithm emerged as the top-performing model. The KNN model exhibited robust predictive capabilities, effectively estimating the occurrence of illegal parking in the most critical streets. Together with an interactive, user-friendly dashboard, this project provides valuable insights for urban planners, policymakers, and law enforcement agencies, empowering them to enhance public safety and security through informed decision-making.
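    The setup this abstract describes (a KNN classifier over the hour period and a weekend/holiday flag) can be sketched with scikit-learn. The feature encoding, label rule, and data below are assumptions for illustration only; the dissertation's actual features and data are not reproduced here.

    ```python
    # Hedged sketch of KNN-based illegal-parking prediction; data is synthetic.
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 1000
    hour = rng.integers(0, 24, n)       # hour period of the day (assumed feature)
    weekend = rng.integers(0, 2, n)     # 1 = weekend or holiday (assumed feature)
    # Hypothetical label: infractions cluster around midday on weekdays.
    y = ((hour >= 10) & (hour <= 16) & (weekend == 0)).astype(int)

    X = np.column_stack([hour, weekend])
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    knn = KNeighborsClassifier(n_neighbors=5)
    knn.fit(X_tr, y_tr)
    print(knn.score(X_te, y_te))        # held-out accuracy
    ```

    With only two low-cardinality features, neighbors are usually exact feature matches, which is one reason KNN can do well on this kind of spatiotemporal grid.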

    A computer vision system for detecting and analysing critical events in cities

    Whether for commuting or leisure, cycling is a growing transport mode in many cities worldwide. However, it is still perceived as a dangerous activity. Although serious cycling incidents leading to major injuries are rare, the fear of being hit or falling hinders the expansion of cycling as a major transport mode. Indeed, it has been shown that focusing on serious injuries touches only the tip of the iceberg. Near-miss data can provide much more information about potential problems and how to avoid risky situations that may lead to serious incidents. Unfortunately, there is a knowledge gap in identifying and analysing near misses, which hinders drawing statistically significant conclusions and providing built-environment measures that ensure a safer environment for people on bikes. In this research, we develop a method to detect and analyse near misses and their risk factors using artificial intelligence. This is accomplished by analysing video streams linked to near-miss incidents within a novel framework relying on deep learning and computer vision. This framework automatically detects near misses and extracts their risk factors from video streams before analysing their statistical significance. It also provides practical solutions, implemented in a camera with embedded AI (URBAN-i Box) and a cloud-based service (URBAN-i Cloud), to tackle the stated issue in real-world settings for use by researchers, policy-makers, or citizens. The research aims to provide human-centred evidence that may enable policy-makers and planners to provide a safer built environment for cycling in London or elsewhere. More broadly, this research aims to contribute to the scientific literature the theoretical and empirical foundations of a computer vision system that can be utilised for detecting and analysing other critical events in a complex environment. Such a system can be applied to a wide range of events, such as traffic incidents, crime, or overcrowding.

    Bounding Box-Free Instance Segmentation Using Semi-Supervised Learning for Generating a City-Scale Vehicle Dataset

    Vehicle classification is a hot computer vision topic, with studies ranging from ground-view up to top-view imagery. In remote sensing, the usage of top-view images allows for understanding city patterns, vehicle concentration, traffic management, and others. However, there are some difficulties when aiming for pixel-wise classification: (a) most vehicle classification studies use object detection methods, and most publicly available datasets are designed for this task; (b) creating instance segmentation datasets is laborious; and (c) traditional instance segmentation methods underperform on this task since the objects are small. Thus, the present research objectives are to: (1) propose a novel semi-supervised iterative learning approach using GIS software, (2) propose a box-free instance segmentation approach, and (3) provide a city-scale vehicle dataset. The iterative learning procedure consisted of: (1) labelling a small number of vehicles, (2) training on those samples, (3) using the model to classify the entire image, (4) converting the image prediction into a polygon shapefile, (5) correcting areas with errors and including them in the training data, and (6) repeating until the results are satisfactory. To separate instances, we considered vehicle interior and vehicle borders, and the DL model was a U-net with an EfficientNet-B7 backbone. When removing the borders, the vehicle interior becomes isolated, allowing for unique object identification. To recover the deleted 1-pixel borders, we propose a simple method to expand each prediction. The results show better pixel-wise metrics when compared to Mask-RCNN (82% against 67% IoU). On per-object analysis, the overall accuracy, precision, and recall were greater than 90%. This pipeline applies to any remote sensing target and is very efficient for segmentation and for generating datasets. (Comment: 38 pages, 10 figures, submitted to a journal)
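    The border-removal idea described above (delete the predicted 1-pixel vehicle borders so each interior becomes an isolated blob, label the blobs, then grow every label back by one pixel) can be sketched with basic morphology. The toy mask and the 3x3 dilation below are assumptions for illustration, not the paper's exact expansion method.

    ```python
    # Hedged sketch of box-free instance separation via border removal.
    import numpy as np
    from scipy import ndimage

    # 0 = background, 1 = vehicle interior, 2 = vehicle border (two touching cars)
    pred = np.array([
        [0, 2, 2, 2, 2, 2, 2, 0],
        [0, 2, 1, 1, 2, 1, 2, 0],
        [0, 2, 1, 1, 2, 1, 2, 0],
        [0, 2, 2, 2, 2, 2, 2, 0],
    ])

    interior = pred == 1
    labels, n = ndimage.label(interior)        # isolated blobs -> unique instance ids
    # Recover the deleted 1-pixel borders by expanding each label outward.
    grown = ndimage.grey_dilation(labels, size=(3, 3))
    grown[pred == 0] = 0                       # never grow into background
    print(n)                                   # two separate vehicles recovered
    ```

    Although the two cars touch in `pred`, removing their shared border first lets connected-component labelling assign them distinct ids before the expansion step restores their full extent.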

    Reliable Navigational Scene Perception for Autonomous Ships in Maritime Environment

    Due to significant advances in robotics and transportation, research on autonomous ships has attracted considerable attention. The most critical task is to make the ships capable of accurately, reliably, and intelligently detecting their surroundings to achieve high levels of autonomy. Three deep learning-based models are constructed in this thesis to perform complex perceptual tasks such as identifying ships, analysing encounter situations, and recognising water-surface objects. In this thesis, sensors, including the Automatic Identification System (AIS) and cameras, provide critical information for scene perception. Specifically, the AIS enables mid-range and long-range detection, assisting the decision-making system to take suitable and decisive action. A Convolutional Neural Network-Ship Movement Modes Classification (CNN-SMMC) is used to detect ships or objects. Following that, a Semi-Supervised Convolutional Encoder-Decoder Network (SCEDN) is developed to classify ship encounter situations and make a collision-avoidance plan for the moving ships or objects. Additionally, cameras are used to detect short-range objects, a supplementary solution for ships or objects not equipped with an AIS. A Water Obstacle Detection Network based on Image Segmentation (WODIS) is developed to find potential threat targets. A series of quantifiable experiments demonstrated that these models can provide reliable scene perception for autonomous ships.

    Data Collection and Machine Learning Methods for Automated Pedestrian Facility Detection and Mensuration

    Large-scale collection of pedestrian facility (crosswalks, sidewalks, etc.) presence data is vital to the success of efforts to improve pedestrian facility management, safety analysis, and road network planning. However, this kind of data is typically not available on a large scale due to the high labor and time costs that are the result of relying on manual data collection methods. Therefore, methods for automating this process using techniques such as machine learning are currently being explored by researchers. In our work, we mainly focus on machine learning methods for the detection of crosswalks and sidewalks from both aerial and street-view imagery. We test data from these two viewpoints individually and with an ensemble method that we refer to as our “dual-perspective prediction model”. In order to obtain this data, we developed a data collection pipeline that combines crowdsourced pedestrian facility location data with aerial and street-view imagery from Bing Maps. In addition to the Convolutional Neural Network used to perform pedestrian facility detection using this data, we also trained a segmentation network to measure the length and width of crosswalks from aerial images. In our tests with a dual-perspective image dataset that was heavily occluded in the aerial view but relatively clear in the street view, our dual-perspective prediction model was able to increase prediction accuracy, recall, and precision by 49%, 383%, and 15%, respectively (compared to using a single perspective model based on only aerial view images). In our tests with satellite imagery provided by the Mississippi Department of Transportation, we were able to achieve accuracies as high as 99.23%, 91.26%, and 93.7% for aerial crosswalk detection, aerial sidewalk detection, and aerial crosswalk mensuration, respectively. 
The final system that we developed packages all of our machine learning models into an easy-to-use system that enables users to process large batches of imagery or examine individual images in a directory through a graphical interface. Our data collection and filtering guidelines can also guide future research in this area by establishing standards for data quality and labelling.
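The "dual-perspective prediction model" above combines aerial and street-view predictions, but the abstract does not state the fusion rule. The sketch below assumes a simple probability-averaging ensemble with a 0.5 threshold; the scores, the rule, and the `fuse` helper are all illustrative assumptions, not the thesis's method.

```python
# Hedged sketch of fusing two per-image views into one presence prediction.
aerial_prob = [0.2, 0.9, 0.4]   # per-image crosswalk probability, aerial view
street_prob = [0.8, 0.7, 0.3]   # same images, street view

def fuse(p_aerial, p_street, threshold=0.5):
    """Predict presence when the mean of both views clears the threshold."""
    return [int((a + s) / 2 >= threshold)
            for a, s in zip(p_aerial, p_street)]

print(fuse(aerial_prob, street_prob))   # -> [1, 1, 0]
```

An averaging rule like this lets a clear street view (0.8) rescue a heavily occluded aerial view (0.2), which matches the abstract's observation that the ensemble helped most when the aerial imagery was occluded.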

    Deep Neural Networks and Data for Automated Driving

    This open access book brings together the latest developments from industry and research on automated driving and artificial intelligence. Environment perception for highly automated driving heavily employs deep neural networks, facing many challenges. How much data do we need for training and testing? How can synthetic data save labeling costs for training? How do we increase robustness and decrease memory usage? For inevitably poor conditions: how do we know that the network is uncertain about its decisions? Can we understand a bit more about what actually happens inside neural networks? This leads to a very practical problem, particularly for DNNs employed in automated driving: what are useful validation techniques, and what about safety? This book unites the views of academia and industry, where computer vision and machine learning meet environment perception for highly automated driving. Naturally, aspects of data, robustness, uncertainty quantification, and, last but not least, safety are at its core. This book is unique: its first part provides an extended survey of all the relevant aspects, and the second part contains the detailed technical elaboration of the various questions mentioned above.
