6,886 research outputs found

    Mitigating blind spot collision utilizing ultrasonic gap perimeter sensor

    Get PDF
    Failure to detect a vehicle alongside one's own, that is, in the blind spot area, especially for larger vehicles, is one of the causes of accidents. For some drivers, the simple solution is to fit an additional side mirror. However, this is not the best solution, because an additional side mirror does not give an accurate picture of the actual or estimated distance to an object or another vehicle. The objectives of this project are to identify the causes of automobile collisions, notably side-impact collisions caused by the blind spot; to develop a system that can detect the presence of vehicles to the side; and to develop a system that is affordable for ordinary car users. To achieve these objectives, a flow chart was designed to guide both the coding, written in Arduino 1.0.2, and the hardware design. The system can detect an obstacle within a range of 2 cm to 320 cm from the edge of the project vehicle. Before the system was developed, a survey was conducted to determine what drivers want, after which the design process was carried out. A Ping ultrasonic sensor provides the input to the system, while an LCD, an LED, and a siren form the output: the LCD and LED display the distance to the detected vehicle, and the siren switches on to warn the driver when an obstacle is in the blind spot area. In conclusion, the Mitigating Blind Spot Collision Utilizing Ultrasonic Gap Perimeter Sensor system was successfully completed. It is able to detect the presence of other vehicles beside the project vehicle, especially in the blind spot area, and alerts the driver when a vehicle is nearby while the alarm system is operating. The efficiency of the system in detecting objects in the blind spot area is 79.82%. In addition, it displays the measured value less than one second after an obstacle appears in front of the sensor. This operating time is critical: if the system were slow, its main function of detecting obstacles in the blind spot area would not be achieved.
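    The detection logic described here is simple enough to sketch. Below is a minimal Python illustration (not the project's Arduino code) of the core idea: a Ping-style sensor reports a round-trip echo time, which is converted to a distance and checked against the reported 2 cm to 320 cm window. Constant names and echo values are illustrative assumptions.

```python
# Minimal sketch of the blind-spot logic: echo time -> distance -> alert.
SPEED_OF_SOUND_CM_PER_US = 0.0343  # ~343 m/s at 20 degrees C

MIN_RANGE_CM = 2.0    # detection limits reported in the abstract
MAX_RANGE_CM = 320.0

def echo_to_distance_cm(echo_us: float) -> float:
    """Round-trip echo time (microseconds) -> one-way distance (cm)."""
    return echo_us * SPEED_OF_SOUND_CM_PER_US / 2.0

def blind_spot_alert(echo_us: float) -> bool:
    """True when an obstacle sits inside the sensor's usable window."""
    d = echo_to_distance_cm(echo_us)
    return MIN_RANGE_CM <= d <= MAX_RANGE_CM

if __name__ == "__main__":
    for echo in (100.0, 5000.0, 25000.0):  # hypothetical echo times
        d = echo_to_distance_cm(echo)
        print(f"echo={echo:>8.1f} us  distance={d:7.1f} cm  alert={blind_spot_alert(echo)}")
```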

    What Am I Testing and Where? Comparing Testing Procedures based on Lightweight Requirements Annotations

    Get PDF
    [Context] The testing of software-intensive systems is performed in different test stages, each having a large number of test cases. These test cases are commonly derived from requirements. Each test stage exhibits specific demands and constraints with respect to its degree of detail and what can be tested. Therefore, specific test suites are defined for each test stage. In this paper, the focus is on the domain of embedded systems, where, among others, typical test stages are Software- and Hardware-in-the-loop. [Objective] Monitoring and controlling which requirements are verified in which detail and in which test stage is a challenge for engineers. However, this information is necessary to assure a certain test coverage, to minimize redundant testing procedures, and to avoid inconsistencies between test stages. In addition, engineers are reluctant to state their requirements in terms of structured languages or models that would facilitate the relation of requirements to test executions. [Method] With our approach, we close the gap between requirements specifications and test executions. Previously, we have proposed a lightweight markup language for requirements which provides a set of annotations that can be applied to natural language requirements. The annotations are mapped to events and signals in test executions. As a result, meaningful insights from a set of test executions can be directly related to artifacts in the requirements specification. In this paper, we use the markup language to compare different test stages with one another. [Results] We annotate 443 natural language requirements of a driver assistance system with the means of our lightweight markup language. The annotations are then linked to 1300 test executions from a simulation environment and 53 test executions from test drives with human drivers. Based on the annotations, we are able to analyze how similar the test stages are and how well test stages and test cases are aligned with the requirements. Further, we highlight the general applicability of our approach through this extensive experimental evaluation. [Conclusion] With our approach, the results of several test levels are linked to the requirements, enabling the evaluation of complex test executions. By this means, practitioners can easily evaluate how well a system performs with regard to its specification and, additionally, can reason about the expressiveness of the applied test stage. TU Berlin, Open-Access-Mittel - 202
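    As a rough illustration of the idea (requirement IDs, signal names, and stage names below are invented, not the paper's markup language), annotations can be modeled as a mapping from requirements to the signals they reference; a test stage then covers a requirement when its executions record all of those signals:

```python
# Hedged sketch: compare per-stage requirement coverage via annotated signals.

# Requirement id -> signals referenced by its annotations (hypothetical).
annotations = {
    "REQ-12": {"ego_speed", "brake_request"},
    "REQ-27": {"lane_offset"},
    "REQ-31": {"ego_speed", "steering_angle"},
}

# Test stage -> signals actually recorded in its executions (hypothetical).
stage_signals = {
    "SiL": {"ego_speed", "brake_request", "lane_offset"},
    "test_drive": {"ego_speed", "steering_angle"},
}

def covered_requirements(stage: str) -> set[str]:
    """Requirements whose annotated signals are all observable in a stage."""
    observed = stage_signals[stage]
    return {req for req, sigs in annotations.items() if sigs <= observed}

for stage in stage_signals:
    covered = covered_requirements(stage)
    print(f"{stage}: {len(covered)}/{len(annotations)} requirements covered -> {sorted(covered)}")
```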

    A Study on Recent Developments and Issues with Obstacle Detection Systems for Automated Vehicles

    Get PDF
    This paper reviews current developments and discusses some critical issues with obstacle detection systems for automated vehicles. The concept of autonomous driving is the driving force behind future mobility, and obstacle detection systems play a crucial role in implementing and deploying autonomous driving on our roads and city streets. The current review looks at the technology and existing systems for obstacle detection. Specifically, we look at the performance of LIDAR, RADAR, vision cameras, ultrasonic sensors, and IR, and review their capabilities and behaviour in a number of different situations: during daytime, at night, in extreme weather conditions, in urban areas, in the presence of smooth surfaces, in situations where emergency service vehicles need to be detected and recognised, and in situations where potholes need to be observed and measured. It is suggested that combining different technologies for obstacle detection gives a more accurate representation of the driving environment. In particular, for obstacle detection in extreme weather conditions (rain, snow, fog) and in some specific urban situations (shadows, reflections, potholes, insufficient illumination), the current developments, although already quite advanced, appear not to be sophisticated enough to guarantee 100% precision and accuracy, so further substantial effort is needed.
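    The suggestion to combine technologies can be made concrete with a toy example. The sketch below (weights and confidence values are illustrative assumptions, not figures from any reviewed system) fuses independent per-sensor detection confidences with a noisy-OR rule, showing how a RADAR return can compensate for a fog-degraded camera:

```python
# Toy fusion sketch: independent per-sensor confidences combined via noisy-OR.
def noisy_or_fusion(confidences: dict[str, float]) -> float:
    """P(obstacle) assuming each sensor misses independently."""
    p_all_miss = 1.0
    for p in confidences.values():
        p_all_miss *= (1.0 - p)
    return 1.0 - p_all_miss

# Fog degrades the camera but barely affects RADAR or ultrasonics (assumed values).
fog_scene = {"camera": 0.30, "lidar": 0.45, "radar": 0.90, "ultrasonic": 0.60}
print(f"fused confidence in fog: {noisy_or_fusion(fog_scene):.3f}")
```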

    An intra-vehicular wireless multimedia sensor network for smartphone-based low-cost advanced driver-assistance systems

    Get PDF
    Advanced driver-assistance systems (ADAS) are more prevalent in high-end vehicles than in low-end vehicles. Wired solutions of vision sensors in ADAS already exist, but they are costly and do not cater for low-end vehicles. General ADAS use wired harnessing for communication; a wireless approach eliminates the need for cable harnessing, and therefore the practicality of a novel wireless ADAS solution was tested. A low-cost alternative is proposed that extends a smartphone's sensor perception, using a camera-based wireless sensor network. This paper presents the design of a low-cost ADAS alternative that uses an intra-vehicle wireless sensor network structured by a Wi-Fi Direct topology, with a smartphone as the processing platform. The proposed system makes ADAS features accessible to cheaper vehicles and investigates the possibility of using a wireless network to communicate ADAS information in an intra-vehicle environment. Other smartphone ADAS approaches make use of a smartphone's onboard sensors; in contrast, this paper shows essential ADAS features developed in the smartphone's ADAS application, carrying out both lane detection and collision detection on a vehicle by using wireless sensor data. The smartphone's processing power was harnessed and used as a generic object detector through a convolutional neural network, using the sensor network's video streams. The network's performance was analysed to ensure that it could carry out detection in real time. A low-cost CMOS camera sensor network with a smartphone thus found an application, using Wi-Fi Direct to create an intra-vehicle wireless network as a low-cost advanced driver-assistance system. DATA AVAILABILITY STATEMENT: Publicly available datasets were analysed in this study. The data can be found at https://github.com/TuSimple/tusimple-benchmark and https://boxy-dataset.com/boxy/, accessed on 25 November 2021. https://www.mdpi.com/journal/sensors. am2023. Electrical, Electronic and Computer Engineering.
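    A minimal sketch of such a receiving pipeline, under stated assumptions: frames arrive from a camera node over the Wi-Fi Direct network as an MJPEG stream (the URL is a placeholder) and are passed through a generic CNN detector via OpenCV's DNN module. The ONNX model file is hypothetical; decoding of the raw output is model-specific and omitted.

```python
# Sketch: consume a camera node's video stream and run a CNN detector on it.
import cv2

STREAM_URL = "http://192.168.49.2:8080/video"   # placeholder node address
net = cv2.dnn.readNetFromONNX("detector.onnx")  # hypothetical model file

cap = cv2.VideoCapture(STREAM_URL)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # Preprocess the frame into the network's expected input tensor.
    blob = cv2.dnn.blobFromImage(frame, scalefactor=1 / 255.0, size=(640, 640), swapRB=True)
    net.setInput(blob)
    detections = net.forward()  # raw detector output; decoding depends on the model
    print("output shape:", detections.shape)
cap.release()
```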

    Object Detection in 20 Years: A Survey

    Full text link
    Object detection, as one of the most fundamental and challenging problems in computer vision, has received great attention in recent years. Its development in the past two decades can be regarded as an epitome of computer vision history. If we think of today's object detection as a technical aesthetic under the power of deep learning, then turning back the clock 20 years we would witness the wisdom of the cold-weapon era. This paper extensively reviews 400+ papers on object detection in the light of its technical evolution, spanning over a quarter-century (from the 1990s to 2019). A number of topics are covered, including the milestone detectors in history, detection datasets, metrics, fundamental building blocks of the detection system, speed-up techniques, and the recent state-of-the-art detection methods. The paper also reviews some important detection applications, such as pedestrian detection, face detection, and text detection, and offers an in-depth analysis of their challenges as well as technical improvements in recent years. Comment: This work has been submitted to the IEEE TPAMI for possible publication.
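    Among the fundamentals such surveys cover are detection metrics; the standard building block there is Intersection over Union (IoU) between a predicted and a ground-truth box, shown below as a generic reference implementation (not code from the survey), with boxes given as (x1, y1, x2, y2):

```python
# Standard IoU between two axis-aligned boxes (x1, y1, x2, y2).
def iou(a: tuple, b: tuple) -> float:
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ~= 0.143
```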

    ๊ต์ฐจ๋กœ์—์„œ ์ž์œจ์ฃผํ–‰ ์ฐจ๋Ÿ‰์˜ ์ œํ•œ๋œ ๊ฐ€์‹œ์„ฑ๊ณผ ๋ถˆํ™•์‹ค์„ฑ์„ ๊ณ ๋ คํ•œ ์ข…๋ฐฉํ–ฅ ๊ฑฐ๋™๊ณ„ํš

    Get PDF
    Ph.D. dissertation, Seoul National University Graduate School, College of Engineering, Department of Mechanical Engineering, February 2023. Advisor: ์ด๊ฒฝ์ˆ˜. This dissertation presents a novel longitudinal motion planning approach for autonomous vehicles at urban intersections that overcomes the limited visibility caused by complicated road structures and sensor specifications, guaranteeing safety from potential collisions with vehicles appearing from the occluded region. Intersection autonomous driving requires a high level of safety due to congested traffic and environmental complexity. Because of complicated road structures and the detection range of perception sensors, occluded regions arise in urban autonomous driving. The virtual target is one of the motion planning methods used to react to the sudden appearance of vehicles from the blind spot. Gaussian Process Regression (GPR) is implemented to train the virtual target model to generate various future driving trajectories interacting with the motion of the ego vehicle. The GPR model provides not only the predicted trajectories of the virtual target but also the uncertainty of its future motion. Therefore, prediction results from the GPR can be utilized as a position constraint for the Model Predictive Control (MPC), and the uncertainties are taken into account as a chance constraint in the MPC. In order to comprehend the surrounding environment, including dynamic objects, a region of interest (ROI) is defined to determine the targets of interest. With the pre-determined driving route of the ego vehicle and the route information of the intersection, driving lanes intersecting the ego driving lane can be determined; these intersecting lanes are defined as the ROI, reducing the computational load by eliminating irrelevant targets. The future motion of each selected target is then predicted by a Long Short-Term Memory Recurrent Neural Network (LSTM-RNN). Driving data for training are obtained directly from two different autonomous vehicles, which provide their odometry information regardless of the limited field of view (FOV). For widely known autonomous driving datasets such as Waymo and nuScenes, vehicle odometry information is collected from the perception sensors mounted on the test vehicle, so information on targets outside the FOV of the test vehicle cannot be obtained. The obtained training data are organized in target-centered coordinates for better input-domain adaptation and generalization. The mean squared error and negative log-likelihood loss functions are adopted to train the network and to provide the uncertainty information of the target vehicle for the motion planning of the autonomous vehicle. The MPC with a chance constraint is formulated to optimize the longitudinal motion of the autonomous vehicle. The dynamic and actuator constraints are designed to provide ride comfort and safety to drivers. The position constraint with the chance constraint guarantees safety and prevents potential collisions with target vehicles. The position constraint on the travel distance over the prediction horizon is determined from the clearance between the predicted trajectories of the target and ego vehicles at every prediction sample time. The performance and feasibility of the proposed algorithm are evaluated via computer simulation and test-data-based simulation. The offline simulation validates the safety of the proposed algorithm, and the suggested motion planner has been implemented on an autonomous vehicle and tested on a real road.
Through this implementation on an actual vehicle, the suggested algorithm is confirmed to be applicable to real-life autonomous driving.

Table of contents:
Chapter 1. Introduction
  1.1. Research Background and Motivation of Intersection Autonomous Driving
  1.2. Previous Researches on Intersection Autonomous Driving
    1.2.1. Research on Trajectory Prediction and Intention Inference at Urban Intersection
    1.2.2. Research on Intersection Motion Planning
  1.3. Thesis Objectives
  1.4. Thesis Outline
Chapter 2. Overall Architecture of Intersection Autonomous Driving System
  2.1. Software Configuration of Intersection Autonomous Driving
  2.2. Hardware Configuration of Autonomous Driving and Test Vehicle
  2.3. Vehicle Test Environment for Intersection Autonomous Driving
Chapter 3. Virtual Target Modelling for Intersection Motion Planning
  3.1. Limitation of Conventional Virtual Target Model for Intersection
  3.2. Virtual Target Generation for Intersection Occlusion
  3.3. Intersection Virtual Target Modeling
    3.3.1. Gaussian Process Regression based Virtual Target Model at Intersection
    3.3.2. Data Processing for Gaussian Process Regression based Virtual Target Model
    3.3.3. Definition of Visibility Index of Virtual Target at Intersection
    3.3.4. Long Short-Term Memory based Virtual Target Model at Intersection
Chapter 4. Surrounding Vehicle Motion Prediction at Intersection
  4.1. Intersection Surrounding Vehicle Classification
  4.2. Data-driven Vehicle State based Motion Prediction at Intersection
    4.2.1. Network Architecture of Motion Predictor
    4.2.2. Dataset Processing of the Network
Chapter 5. Intersection Longitudinal Motion Planning
  5.1. Outlines of Longitudinal Motion Planning with Model Predictive Control
  5.2. Stochastic Model Predictive Control of Intersection Motion Planner
    5.2.1. Definition of System Dynamics Model
    5.2.2. Ego Vehicle Prediction and Reference States Definition
    5.2.3. Safety Clearance Decision for Intersection Collision Avoidance
    5.2.4. Driving Mode Decision of Intersection Motion Planning
    5.2.5. Formulation of Model Predictive Control with the Chance Constraint
Chapter 6. Performance Evaluation of Intersection Longitudinal Motion Planning
  6.1. Performance Evaluation of Virtual Target Prediction at Intersection
    6.1.1. GPR based Virtual Target Model Prediction Results
    6.1.2. Intersection Autonomous Driving Computer Simulation Environment
      6.1.2.1. Simulation Result of Effect of Virtual Target in Intersection Autonomous Driving
      6.1.2.2. Virtual Target Simulation Result of the Right Turn Across Path Scenario in the Intersection
      6.1.2.3. Virtual Target Simulation Result of the Straight Across Path Scenario in the Intersection
      6.1.2.4. Virtual Target Simulation Result of the Left Turn Across Path Scenario in the Intersection
      6.1.2.5. Virtual Target Simulation Result of Crooked T-shaped Intersection
  6.2. Performance Evaluation of Data-driven Vehicle State based Motion Prediction at Intersection
    6.2.1. Data-driven Motion Prediction Accuracy Analysis
    6.2.2. Prediction Trajectory Accuracy Analysis
  6.3. Vehicle Test for Intersection Autonomous Driving
    6.3.1. Test Vehicle Configuration for Intersection Autonomous Driving
    6.3.2. Software Configuration for Autonomous Vehicle Operation
    6.3.3. Vehicle Test Environment for Intersection Autonomous Driving
    6.3.4. Vehicle Test Result of Intersection Autonomous Driving
Chapter 7. Conclusion and Future Work
  7.1. Conclusion
  7.2. Future Work
Bibliography
Abstract in Korean
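    A minimal sketch of the prediction-to-constraint chain the dissertation describes, assuming scikit-learn's GaussianProcessRegressor stands in for the virtual-target model and the chance constraint is realized by tightening the position bound with a Gaussian quantile. All data, the safety clearance, and the 95% level are illustrative assumptions:

```python
# Sketch: GP predicts the virtual target's travelled distance with a standard
# deviation; the chance constraint tightens the ego position bound by z * std.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Synthetic training data: time (s) -> target's travelled distance (m).
t_train = np.linspace(0.0, 3.0, 15).reshape(-1, 1)
s_train = 8.0 * t_train.ravel() + 0.3 * np.random.default_rng(0).standard_normal(15)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=0.09)
gp.fit(t_train, s_train)

# Predict over the MPC horizon and tighten the bound by the uncertainty.
t_horizon = np.linspace(0.0, 3.0, 7).reshape(-1, 1)
mean, std = gp.predict(t_horizon, return_std=True)
z = norm.ppf(0.95)                      # 95% chance constraint
safety_clearance = 5.0                  # assumed clearance (m)
ego_position_bound = mean - z * std - safety_clearance
print(np.round(ego_position_bound, 2))  # per-step upper bound on ego travel
```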

    An intra-vehicular wireless multimedia sensor network for smartphone-based low-cost advanced driver-assistance systems

    Get PDF
    Advanced driver-assistance systems (ADAS) are more prevalent in high-end vehicles than in low-end vehicles. This research proposes an alternative for drivers who would otherwise wait years to gain access to the safety ADAS offers. Wireless multimedia sensor networks (WMSNs) for ADAS applications in collaboration with smartphones are non-existent, and intra-vehicle environments cause difficulties in data transfer for wireless networks; the performance of such networks in an intra-vehicle setting is therefore investigated. A low-cost alternative was proposed that extends a smartphone's sensor perception, using a camera-based wireless sensor network. This dissertation presents the design of a low-cost ADAS alternative that uses an intra-vehicle wireless sensor network structured by a Wi-Fi Direct topology, with a smartphone as the processing platform. In addition, to expand on the smartphone's other commonly available wireless protocols, the Bluetooth protocol was used to collect blind-spot sensory data, processed by the smartphone. Both protocols form part of the intra-vehicular wireless sensor network (IVWSN). Essential ADAS features developed in the smartphone ADAS application carried out both lane detection and collision detection on a vehicle. The smartphone's processing power was harnessed and used as a generic object detector through a convolutional neural network, using the sensor network's video streams. Blind-spot sensors on the lateral sides of the vehicle provided sensory data transmitted to the smartphone through Bluetooth. IVWSNs are complex environments with many reflective materials that may impede communication, and a network in a vehicle environment should be reliable. The network's performance was analysed to ensure that it could carry out detection in real time, which is essential for the driver's safety. General ADAS use wired harnessing for communication and, therefore, the practicality of a novel wireless ADAS solution was tested. It was found that a low-cost advanced driver-assistance system alternative can be conceptualised by using object detection techniques processed on a smartphone from multiple streams sourced from an IVWSN composed of camera sensors. A low-cost CMOS camera sensor network with a smartphone thus found an application, using Wi-Fi Direct to create an intra-vehicle wireless network as a low-cost advanced driver-assistance system. Dissertation (MEng (Computer Engineering)), University of Pretoria, 2021. Electrical, Electronic and Computer Engineering. MEng (Computer Engineering). Unrestricted.
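    The real-time requirement can be phrased as a simple deadline check. The sketch below (latency values are synthetic, not measurements from the dissertation) tests whether per-frame end-to-end latencies, from capture through Wi-Fi transfer to inference, stay within the budget implied by the camera frame rate:

```python
# Sketch: verify real-time operation against a per-frame deadline.
FRAME_RATE_HZ = 30
DEADLINE_MS = 1000.0 / FRAME_RATE_HZ  # ~33.3 ms per frame at 30 fps

def is_real_time(latencies_ms: list[float], tolerance: float = 0.05) -> bool:
    """True if at most `tolerance` of frames miss the deadline."""
    misses = sum(1 for t in latencies_ms if t > DEADLINE_MS)
    return misses / len(latencies_ms) <= tolerance

sample = [28.0, 31.5, 29.9, 35.2, 30.1, 27.4, 32.8, 30.6]  # synthetic latencies
print(f"deadline={DEADLINE_MS:.1f} ms, real-time={is_real_time(sample)}")
```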

    Evaluation of surface defect detection in reinforced concrete bridge decks using terrestrial LiDAR

    Get PDF
    Routine bridge inspections require labor-intensive and highly subjective visual interpretation to determine bridge deck surface condition. Light Detection and Ranging (LiDAR), a relatively new class of survey instrument, has become a popular and increasingly used technology for providing as-built and inventory data in civil applications. While an increasing number of private and governmental agencies possess terrestrial and mobile LiDAR systems, an understanding of the technology's capabilities and potential applications continues to evolve. LiDAR is a line-of-sight instrument; as such, care must be taken when establishing scan locations and resolution to allow the capture of data at a resolution adequate for defining the features that contribute to the analysis of bridge deck surface condition. Information such as the location, area, and volume of spalling on deck surfaces, undersides, and support columns can be derived from properly collected LiDAR point clouds. Such point clouds contain information that can provide quantitative surface condition measures, resulting in more accurate structural health monitoring. LiDAR scans were collected at three study bridges, each displaying a different degree of degradation. A variety of commercially available analysis tools and an independently developed algorithm written in ArcGIS Python (ArcPy) were used to locate and quantify surface defects, reporting the location, volume, and area of spalls. The results were displayed visually and numerically in a user-friendly web-based decision support tool that integrates prior bridge condition metrics for comparison. LiDAR data processing procedures, along with the strengths and limitations of point clouds for defining features useful in assessing bridge deck condition, are discussed. Point cloud density and incidence angle are two attributes that must be managed carefully to ensure the data collected are of high quality and useful for bridge condition evaluation. When collected properly to ensure effective evaluation of bridge surface condition, LiDAR data can be analyzed to provide a useful data set from which to derive bridge deck condition information.
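    A simplified sketch of this kind of defect quantification, assuming the LiDAR point cloud has already been rasterized into an elevation grid (this is not the paper's ArcPy algorithm; the grid size, depth threshold, and synthetic deck are assumptions): cells lying more than a threshold below the deck's median plane are grouped into connected regions, and each region's area and volume are reported.

```python
# Sketch: locate spall-like depressions in a gridded deck surface and report
# their area and volume.
import numpy as np
from scipy import ndimage

CELL_SIZE_M = 0.02      # 2 cm raster resolution (assumed)
DEPTH_THRESH_M = 0.01   # 1 cm below the deck surface counts as a defect

rng = np.random.default_rng(1)
deck = rng.normal(0.0, 0.002, size=(200, 200))  # synthetic deck surface (m)
deck[80:95, 40:70] -= 0.03                      # synthetic spall, 3 cm deep

depth = np.median(deck) - deck                  # depression below median plane
labels, n = ndimage.label(depth > DEPTH_THRESH_M)
for region in range(1, n + 1):
    mask = labels == region
    area_m2 = mask.sum() * CELL_SIZE_M ** 2
    volume_m3 = depth[mask].sum() * CELL_SIZE_M ** 2
    print(f"spall {region}: area={area_m2:.3f} m^2, volume={volume_m3:.5f} m^3")
```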
    • โ€ฆ
    corecore