
    The Right (Angled) Perspective: Improving the Understanding of Road Scenes Using Boosted Inverse Perspective Mapping

    Full text link
    Many tasks performed by autonomous vehicles, such as road marking detection, object tracking, and path planning, are simpler in bird's-eye view. Hence, Inverse Perspective Mapping (IPM) is often applied to remove the perspective effect from a vehicle's front-facing camera and to remap its images into a 2D domain, resulting in a top-down view. Unfortunately, this leads to unnatural blurring and stretching of objects at greater distances, due to the resolution of the camera, limiting applicability. In this paper, we present an adversarial learning approach for generating a significantly improved IPM from a single camera image in real time. The generated bird's-eye-view images contain sharper features (e.g. road markings) and a more homogeneous illumination, while (dynamic) objects are automatically removed from the scene, thus revealing the underlying road layout in an improved fashion. We demonstrate our framework using real-world data from the Oxford RobotCar Dataset and show that scene understanding tasks directly benefit from our boosted IPM approach. Comment: equal contribution of first two authors, 8 full pages, 6 figures, accepted at IV 201
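    For context, the classical IPM that this approach improves upon can be sketched as a single homography warp of the road plane. The sketch below uses OpenCV; the image path and the four corner correspondences are hypothetical placeholders, and this is the plain geometric baseline, not the paper's adversarial model.

```python
# Minimal sketch of classical Inverse Perspective Mapping (IPM) via a homography.
# The four point correspondences are illustrative; in practice they come from
# camera calibration or manual annotation of a flat road region.
import cv2
import numpy as np

def classical_ipm(frame, out_size=(400, 600)):
    h, w = frame.shape[:2]
    # Trapezoid covering the road surface in the front-facing image (assumed values).
    src = np.float32([
        [w * 0.45, h * 0.60],   # top-left of the road region
        [w * 0.55, h * 0.60],   # top-right
        [w * 0.90, h * 0.95],   # bottom-right
        [w * 0.10, h * 0.95],   # bottom-left
    ])
    # Corresponding rectangle in the bird's-eye-view output image.
    ow, oh = out_size
    dst = np.float32([[0, 0], [ow, 0], [ow, oh], [0, oh]])
    H = cv2.getPerspectiveTransform(src, dst)       # 3x3 road-plane homography
    return cv2.warpPerspective(frame, H, out_size)  # top-down remap

if __name__ == "__main__":
    img = cv2.imread("front_camera.png")            # hypothetical input frame
    if img is not None:
        cv2.imwrite("ipm_top_down.png", classical_ipm(img))
```

    The blurring and stretching mentioned in the abstract arises because rows near the horizon are spread over many output pixels by this fixed warp, which is the shortcoming the learned approach targets.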

    Smart streetlights: a feasibility study

    Get PDF
    The world's cities are growing. The effects of population growth and urbanisation mean that more people are living in cities than ever before, a trend set to continue. This urbanisation poses problems for the future. With a growing population comes more strain on local resources, increased traffic and congestion, and environmental decline, including more pollution, loss of green spaces, and the formation of urban heat islands. Thankfully, many of these stressors can be alleviated with better management and procedures, particularly in the context of road infrastructure. For example, with better traffic data, signalling can be smoothed to reduce congestion, parking can be made easier, and streetlights can be dimmed in real time to match real-world road usage. However, obtaining this information on a citywide scale is prohibitively expensive due to the high costs of labour and materials associated with installing sensor hardware. This study investigated the viability of a streetlight-integrated sensor system to affordably obtain traffic and environmental information. The investigation was conducted in two stages: 1) the development of a hardware prototype, and 2) evaluation of an evolved prototype system.

    In Stage 1, the prototype sensor system was developed over three design iterations. Iteration 1 involved a live deployment of the prototype in an urban setting to select and evaluate sensors for environmental monitoring; iterations 2 and 3 involved deployments on roads with live and controlled traffic to develop and test sensors for remote traffic detection. In the final iteration, which involved over 600 controlled vehicle passes, 600 pedestrian passes, and 400 cyclist passes, the developed system, comprising passive-infrared motion detectors, lidar, and thermal sensors, could detect and count traffic from a streetlight-integrated configuration with 99%, 84%, and 70% accuracy, respectively. With the finalised sensor system design, Stage 1 showed that traffic and environmental sensing from a streetlight-integrated configuration was feasible and effective using on-board processing with commercially available and inexpensive components.

    In Stage 2, financial and social assessments of the developed sensor system were conducted to evaluate its viability and value in a community. An evaluation tool for simulating streetlight installations was created to measure the effects of implementing the smart streetlight system. The evaluation showed that the on-demand, traffic-adaptive dimming enabled by the smart streetlight system reduced the electrical and maintenance costs of lighting installations. As a result, a 'smart' LED streetlight system was shown to outperform conventional always-on streetlight configurations in terms of financial value within a period of five to 12 years, depending on the installation's local traffic characteristics. A survey on the public acceptance of smart streetlight systems was also conducted to assess the factors that influenced support for its applications. In particular, the Australia-wide survey investigated applications around road traffic improvement, streetlight dimming, and walkability, and quantified participants' support through willingness-to-pay assessments for each application. Community support for smart road applications was generally positive, especially in areas with a high dependence on personal road transport and among participants adversely affected by spill light in their homes.

    Overall, the findings of this study indicate that our cities, and roads in particular, can and should be made smarter. The technology currently exists and is becoming more affordable, allowing communities of all sizes to implement smart streetlight systems for the betterment of city services, resource management, and civilian health and wellbeing. The sooner these technologies are embraced, the sooner they can be adapted to the specific needs of the community and environment for a more sustainable and innovative future.
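    As a rough illustration of the financial comparison described above (not the thesis's evaluation tool; every figure below is hypothetical), a payback period for a traffic-adaptive smart LED retrofit can be estimated by comparing annual energy and maintenance costs against the extra capital cost:

```python
# Hypothetical payback-period estimate for a traffic-adaptive smart LED streetlight
# versus a conventional always-on luminaire. All numbers are illustrative only.
def annual_cost(power_kw, hours_on, tariff_per_kwh, maintenance_per_year):
    return power_kw * hours_on * tariff_per_kwh + maintenance_per_year

def payback_years(extra_capex, conventional_cost, smart_cost):
    savings = conventional_cost - smart_cost          # annual saving of the smart option
    return float("inf") if savings <= 0 else extra_capex / savings

# Assumed: 100 W always-on lamp vs. 40 W LED dimmed to an effective 60% burn time.
conventional = annual_cost(power_kw=0.10, hours_on=4380, tariff_per_kwh=0.25,
                           maintenance_per_year=40.0)
smart = annual_cost(power_kw=0.04, hours_on=4380 * 0.6, tariff_per_kwh=0.25,
                    maintenance_per_year=15.0)
pb = payback_years(extra_capex=800.0, conventional_cost=conventional, smart_cost=smart)
print(f"Estimated payback period: {pb:.1f} years")    # ~7 years with these assumptions
```

    With these illustrative inputs the payback lands inside the five-to-12-year range reported above; higher traffic (and therefore less dimming) lengthens it, matching the dependence on local traffic characteristics noted in the abstract.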

    Driving in the Rain: A Survey toward Visibility Estimation through Windshields

    Get PDF
    Rain can significantly impair a driver's sight and affect their performance when driving in wet conditions. Evaluation of driver visibility in harsh weather, such as rain, has attracted considerable research since the advent of autonomous vehicles and the emergence of intelligent transportation systems. In recent years, advances in computer vision and machine learning have led to a significant number of new approaches to address this challenge. However, the literature is fragmented and should be reorganised and analysed in order to progress in this field. There is still no comprehensive survey article that summarises driver visibility methodologies, including classic and recent data-driven/model-driven approaches on the windshield in rainy conditions, and compares their generalisation performance fairly. Most advanced driver-assistance systems (ADAS) and autonomous driving (AD) systems are based on object detection; thus, rain visibility plays a key role in the efficiency of ADAS/AD functions used in semi- or fully autonomous driving. This study fills this gap by reviewing current state-of-the-art solutions in rain visibility estimation used to reconstruct the driver's view for object detection-based autonomous driving. These solutions are classified as rain visibility estimation systems that work on (1) the perception components of the ADAS/AD function, (2) the control and other hardware components of the ADAS/AD function, and (3) the visualisation and other software components of the ADAS/AD function. Limitations and unsolved challenges are also highlighted for further research.

    A Hardware and Software Framework for Automotive Intelligent Lighting

    Get PDF

    Fast and robust road sign detection in driver assistance systems

    Full text link
    Road sign detection plays a critical role in automatic driver assistance systems. Road signs possess a number of unique visual qualities in images due to their specific colors and symmetric shapes. In this paper, road signs are detected by a two-level hierarchical framework that considers both the color and the shape of the signs. To address the problem of low image contrast, we propose a new color visual saliency segmentation algorithm, which uses the ratios of enhanced and normalized color values to capture color information. To improve computational efficiency and reduce the false alarm rate, we modify the fast radial symmetry transform (RST) algorithm and propose an edge pairwise voting scheme to group feature points based on their underlying symmetry in the candidate regions. Experimental results on several benchmark datasets demonstrate the superiority of our method over the state of the art in terms of both efficiency and robustness.
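    As an illustration of the kind of colour-ratio cue such a segmentation stage relies on, the sketch below computes a generic normalised red-enhancement map and extracts candidate regions. It is not the paper's saliency algorithm or its modified RST; the thresholds and the specific ratio are assumptions for demonstration.

```python
# Sketch of a colour-ratio "red enhancement" map commonly used to localise red road
# signs, followed by simple connected-component candidate extraction.
import cv2
import numpy as np

def red_enhancement(bgr):
    b, g, r = [c.astype(np.float32) for c in cv2.split(bgr)]
    total = b + g + r + 1e-6                       # per-pixel intensity sum (avoid /0)
    enh = np.maximum(0.0, np.minimum(r - g, r - b)) / total
    return (255 * enh / (enh.max() + 1e-6)).astype(np.uint8)

def candidate_regions(bgr, thresh=40, min_area=200):
    mask = (red_enhancement(bgr) > thresh).astype(np.uint8) * 255
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, np.ones((5, 5), np.uint8))
    n, _, stats, _ = cv2.connectedComponentsWithStats(mask)
    # Keep components large enough to plausibly be a sign; thresholds are arbitrary.
    return [tuple(stats[i, :4]) for i in range(1, n)
            if stats[i, cv2.CC_STAT_AREA] > min_area]
```

    A shape stage such as the radial symmetry transform would then be run only inside these candidate regions, which is what keeps a two-level colour-then-shape framework fast.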

    Probing streets and the built environment with ambient and community sensing

    Get PDF
    Data has become an important currency in today's world economy. Ephemeral and real-time data from Twitter, Facebook, Google, urban sensors, weather stations, and the Web contain hidden patterns of the city that are useful for informing architectural and urban design.

    APPLICATIONS OF MACHINE LEARNING AND COMPUTER VISION FOR SMART INFRASTRUCTURE MANAGEMENT IN CIVIL ENGINEERING

    Get PDF
    Machine learning and computer vision are two technologies with innovative applications in diverse fields, including engineering, medicine, agriculture, astronomy, sports, and education. The idea of enabling machines to make human-like decisions is not a recent one. It dates to the early 1900s, when analogies were drawn between neurons in the human brain and a machine's capability to function like a human. However, major advances in the specifics of this theory did not come until the 1950s, when the first experiments were conducted to determine whether machines could support artificial intelligence. As computational power increased, in the form of parallel computing and GPU computing, the time required to train the algorithms decreased significantly, and machine learning is now used in almost every day-to-day activity. This research demonstrates the use of machine learning and computer vision for smart infrastructure management. Its contribution includes two case studies: a) occupancy detection using vibration sensors and machine learning, and b) traffic detection, tracking, classification, and counting on the Memorial Bridge in Portsmouth, NH using computer vision and machine learning. Each case study includes controlled experiments with a verification data set. Both studies yielded results that validated the approach of using machine learning and computer vision, and both present a scenario wherein machine learning is applied to a civil engineering challenge to create a more objective basis for decision-making. This work also includes a summary of the current state of the practice of machine learning in civil engineering and, based on this research, suggested steps to advance its application so that the technology can be used more effectively.
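    As a simple illustration of the second case study's computer-vision side (not the thesis's actual pipeline; the video path, subtractor settings, and blob-size threshold are assumptions), moving-vehicle detection can be sketched with background subtraction:

```python
# Minimal sketch of vision-based vehicle detection using background subtraction.
# A full counter would also track blobs across frames so each vehicle is counted once.
import cv2

cap = cv2.VideoCapture("bridge_traffic.mp4")          # hypothetical video file
backsub = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=32)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = backsub.apply(frame)                       # foreground (moving) pixels
    mask = cv2.medianBlur(mask, 5)                    # suppress speckle noise
    # OpenCV 4.x signature: findContours returns (contours, hierarchy)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    detections = [cv2.boundingRect(c) for c in contours
                  if cv2.contourArea(c) > 1500]       # arbitrary minimum blob size
    for x, y, w, h in detections:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
cap.release()
```

    Counting and classification would then be layered on top, for example by associating detections across frames and registering a count when a track crosses a virtual line.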

    On driver behavior recognition for increased safety: A roadmap

    Get PDF
    Advanced Driver-Assistance Systems (ADASs) are used to increase safety in the automotive domain, yet current ADASs notably operate without taking into account drivers' states, e.g., whether the driver is emotionally apt to drive. In this paper, we first review the state of the art of emotional and cognitive analysis for ADAS: we consider psychological models, the sensors needed for capturing physiological signals, and the typical algorithms used for human emotion classification. Our investigation highlights a lack of advanced Driver Monitoring Systems (DMSs) for ADASs, which could increase driving quality and safety for both drivers and passengers. We then provide our view on a novel perception architecture for driver monitoring, built around the concept of the Driver Complex State (DCS). The DCS relies on multiple non-obtrusive sensors and Artificial Intelligence (AI) to uncover the driver's state, and uses it to implement innovative Human–Machine Interface (HMI) functionalities. This concept will be implemented and validated in the recently EU-funded NextPerception project, which is briefly introduced.

    Pedestrian Detection and Tracking in Video Surveillance System: Issues, Comprehensive Review, and Challenges

    Get PDF
    Pedestrian detection and monitoring in a surveillance system are critical for numerous application areas, including unusual event detection, human gait analysis, congestion or crowd evaluation, gender classification, fall detection in elderly people, etc. Researchers' primary focus is to develop surveillance systems that can work in a dynamic environment, but there are major issues and challenges involved in designing such systems. These challenges occur at three different levels of pedestrian detection: video acquisition, human detection, and tracking. The challenges in acquiring video include illumination variation, abrupt motion, complex backgrounds, shadows, object deformation, etc.; human detection and tracking challenges include varied poses, occlusion, tracking in crowded, high-density areas, etc. These challenges result in lower recognition rates. This chapter presents a brief summary of surveillance systems along with a comparison of pedestrian detection and tracking techniques in video surveillance. Publicly available pedestrian benchmark databases and future research directions for pedestrian detection are also discussed.
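    For reference, one classical baseline from this literature, a HOG-plus-linear-SVM pedestrian detector, can be run directly with OpenCV's built-in model; the sketch below is illustrative (the image paths and detection parameters are assumptions), not any specific method from the surveyed works.

```python
# Minimal sketch of a classical pedestrian detector: HOG features with OpenCV's
# built-in linear SVM people model.
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

frame = cv2.imread("surveillance_frame.png")       # hypothetical input frame
if frame is not None:
    boxes, _ = hog.detectMultiScale(frame, winStride=(8, 8),
                                    padding=(8, 8), scale=1.05)
    for x, y, w, h in boxes:                       # one box per detected pedestrian
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imwrite("detections.png", frame)
```

    More recent pipelines replace the HOG+SVM stage with deep detectors and add a tracker, but the detect-then-track structure discussed in this chapter stays the same.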