608 research outputs found

    Assessment of human factors as a main contributory factor to youth-related road fatalities, and countermeasures to address the problem

    Get PDF
    Abstract in English, Afrikaans and Sepedi. Road-related deaths contribute significantly to the death rate in South Africa (SA) and across the world, especially among the youth. The current study investigated road fatalities in the age group classified as youth, between 20 and 35 years, from the perspective of traffic officers and crash investigators, and sought countermeasures to address the problem. This cross-sectional, quantitative study used a questionnaire to collect the data. It found that the main human factors contributing to fatalities among youth aged 20 to 35 are drunken driving, speeding, disregard for traffic lights, unsafe overtaking, and not wearing a seatbelt, and it found a strong correlation between speeding and drunken driving. The study also captured strategies that could be implemented, chief among them stricter requirements for obtaining a driver's licence, increased visibility of traffic officers in high-accident zones, stronger enforcement of traffic fines, and road safety education as part of the school curriculum. These results could assist policymakers in developing programmes aimed at this specific 20-35 age group and in passing laws that close the legislative gaps identified in the current study.

    2022 Annual Report: 75th Anniversary Edition

    Get PDF
    This 2022 Annual Report documents the 75th year of the Richard King Mellon Foundation, and the second year of our 2021-2030 Strategic Plan. We received 646 applications for funding in 2022. In response, we awarded 303 grants and program-related investments (PRIs), totaling more than $152 million. And we continued in 2022 to broaden significantly the circle of visionary grantees with whom we work. The 2022 grant and PRI recipients included 71 organizations that had never before received Foundation funding, eclipsing the record for new grantees set the year before. In the pages that follow, you will read stories of some of the visionaries we funded in 2022. The leaders and organizations you will meet in those stories are inspiring representatives of our remarkable grantees. Yet they are only a small fraction of the extraordinary people and groups we worked with in 2022, all of whom are worthy of such stories.

    WiDEVIEW: An UltraWideBand and Vision Dataset for Deciphering Pedestrian-Vehicle Interactions

    Full text link
    Robust and accurate tracking and localization of road users like pedestrians and cyclists is crucial to ensure safe and effective navigation of Autonomous Vehicles (AVs), particularly so in urban driving scenarios with complex vehicle-pedestrian interactions. Existing datasets that are useful to investigate vehicle-pedestrian interactions are mostly image-centric and thus vulnerable to vision failures. In this paper, we investigate Ultra-wideband (UWB) as an additional modality for road users' localization to enable a better understanding of vehicle-pedestrian interactions. We present WiDEVIEW, the first multimodal dataset that integrates LiDAR, three RGB cameras, GPS/IMU, and UWB sensors for capturing vehicle-pedestrian interactions in an urban autonomous driving scenario. Ground truth image annotations are provided in the form of 2D bounding boxes, and the dataset is evaluated on standard 2D object detection and tracking algorithms. The feasibility of UWB is evaluated for typical traffic scenarios in both line-of-sight and non-line-of-sight conditions using LiDAR as ground truth. We establish that UWB range data has accuracy comparable to LiDAR, with an error of 0.19 meters, and provides reliable anchor-tag range data for up to 40 meters in line-of-sight conditions. UWB performance in non-line-of-sight conditions is subject to the nature of the obstruction (trees vs. buildings). Further, we provide a qualitative analysis of UWB performance for scenarios susceptible to intermittent vision failures. The dataset can be downloaded via https://github.com/unmannedlab/UWB_Dataset.
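The 0.19 m figure suggests a per-sample range-error evaluation of UWB against LiDAR-derived ground truth. A minimal sketch of such a comparison, with invented sample values rather than data from WiDEVIEW:

```python
import math

def range_errors(uwb_ranges, lidar_ranges):
    # Per-sample absolute error and RMSE of UWB anchor-tag ranges
    # against LiDAR-derived ground-truth distances (both in meters).
    errs = [abs(u - l) for u, l in zip(uwb_ranges, lidar_ranges)]
    rmse = math.sqrt(sum(e * e for e in errs) / len(errs))
    return errs, rmse

# Illustrative line-of-sight samples (meters); not taken from the dataset.
uwb = [10.12, 20.31, 29.87, 39.95]
lidar = [10.00, 20.10, 30.05, 40.10]
errs, rmse = range_errors(uwb, lidar)
```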

    Studying Pedestrian’s Unmarked Midblock Crossing Behavior on a Multilane Road When Interacting With Autonomous Vehicles Using Virtual Reality

    Get PDF
    This dissertation focuses on the challenge of pedestrian interaction with autonomous vehicles (AVs) at unmarked midblock locations where the right-of-way is unspecified. A virtual reality (VR) simulation was developed to replicate an urban unmarked midblock environment where pedestrians cross a four-lane arterial roadway and interact with AVs. One research goal is to investigate the impact of roadway centerline features (undivided, two-way left-turn lane, and median) and AV operational schemes portrayed through on-vehicle signals (no signal, yellow negotiating indication, and yellow/blue negotiating/no-yield indications) on pedestrian crossing behavior. Results demonstrate that both roadway centerline design features and AV operations and signaling significantly impact pedestrians' unmarked midblock crossing behavior, including the waiting time at the curb, waiting time in the middle of the road, and the total crossing time. However, only the roadway centerline design features significantly impact the walking time, and only the AV operations and signaling significantly impact the accepted gap. Participants in the undivided centerline scene spent longer waiting at the curb and walking on the road. Also, pedestrians are more likely to display risky behavior and cross in front of AVs indicating blue signals with non-yielding behavior in the median centerline scene. The inclusion of a yellow signal, which indicates the detection of pedestrians and signifies that the AVs will negotiate with them, resulted in a significant reduction in pedestrian waiting time both at the curb and in the middle of the road, compared to AVs without a signal. Interaction effects between roadway centerline design features and AV operations and signaling are significant only for waiting time in the middle of the road. 
It is also found that older pedestrians tend to wait longer at the curb and are less likely to cross in front of AVs showing a blue signal with non-yielding behavior. Another research goal is to investigate how this VR experience changes pedestrians' perception of AVs. Results demonstrated that both pedestrians' overall attitude toward AVs and trust in the effectiveness of AV systems significantly improved after the VR experience. It is also found that the more pedestrians trust the yellow signals, the more likely they are to improve their perception of AVs. Further, pedestrians who exhibit more aggressive crossing behavior are less likely to change their perception of AVs than pedestrians who display rule-conforming crossing behaviors. Also, if the experiment made pedestrians feel motion sick, they were less likely to experience increased trust in the AV system's effectiveness.

    2023- The Twenty-seventh Annual Symposium of Student Scholars

    Get PDF
    The full program book from the Twenty-seventh Annual Symposium of Student Scholars, held on April 18-21, 2023. Includes abstracts from the presentations and posters.

    IEOM Society International

    Get PDF
    IEOM Society International

    Data simulation in deep learning-based human recognition

    Get PDF
    Human recognition is an important part of perception systems, such as those used in autonomous vehicles or robots. These systems often use deep neural networks for this purpose, which rely on large amounts of data that ideally cover various situations, movements, visual appearances, and interactions. However, obtaining such data is typically complex and expensive. In addition to raw data, labels are required to create training data for supervised learning. Thus, manual annotation of bounding boxes, keypoints, orientations, or actions performed is frequently necessary. This work addresses whether the laborious acquisition and creation of data can be simplified through targeted simulation. If data are generated in a simulation, information such as positions, dimensions, orientations, surfaces, and occlusions is already known, and appropriate labels can be generated automatically. A key question is whether deep neural networks trained with simulated data can be applied to real data. This work explores the use of simulated training data using examples from the field of pedestrian detection for autonomous vehicles. On the one hand, it is shown how existing systems can be improved by targeted retraining with simulation data, for example to better recognize corner cases. On the other hand, the work focuses on the generation of data that occur rarely or not at all in real standard datasets. It is demonstrated how training data with finely graded action labels can be generated by targeted acquisition and combination of motion data and 3D models, allowing even complex pedestrian situations to be recognized. Through the diverse annotation data that simulations provide, it becomes possible to train deep neural networks for a wide variety of tasks with one dataset. 
In this work, such simulated data is used to train a novel deep multitask network that brings together diverse, previously mostly independently considered but related tasks, such as 2D and 3D human pose recognition and body and orientation estimation.
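The point that simulation yields labels automatically can be made concrete: if the simulator knows a pedestrian's 3D bounding box and the camera intrinsics, a 2D box label follows from projection alone. A minimal sketch with assumed pinhole intrinsics (the values of `fx`, `fy`, `cx`, `cy` and the pedestrian geometry are illustrative):

```python
def project_point(p, fx=800.0, fy=800.0, cx=640.0, cy=360.0):
    # Pinhole projection of a camera-frame point (x right, y down, z forward)
    # into pixel coordinates.
    x, y, z = p
    return (fx * x / z + cx, fy * y / z + cy)

def auto_bbox(corners_3d):
    # 2D bounding-box label from the projected corners of a simulated
    # pedestrian's 3D box -- the label comes for free because the
    # simulator knows the exact geometry.
    pts = [project_point(c) for c in corners_3d]
    us = [u for u, _ in pts]
    vs = [v for _, v in pts]
    return (min(us), min(vs), max(us), max(vs))

# A 0.6 m wide, 1.8 m tall pedestrian 10 m in front of the camera.
corners = [(sx * 0.3, sy * 0.9, 10.0) for sx in (-1, 1) for sy in (-1, 1)]
box = auto_bbox(corners)  # (u_min, v_min, u_max, v_max) in pixels
```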

    Imitation Learning-Based Autonomous Navigation Using Look-ahead Points in Semi-Structured Environments

    Get PDF
    Doctoral dissertation -- Seoul National University, Graduate School of Convergence Science and Technology, Department of Convergence Science (Intelligent Convergence Systems major), 2023. 2. Jaeheung Park. This thesis proposes methods for performing autonomous navigation with a topological map and a vision sensor in a parking lot. 
These methods are necessary to complete fully autonomous driving and can be conveniently used by humans. To implement them, a method of generating a path and tracking it with localization data is commonly studied. However, in such environments, the localization data is inaccurate because the distance between roads is narrow and obstacles are distributed in complex arrangements, which increases the possibility of collisions between the vehicle and obstacles. Therefore, instead of tracking the path with the localization data, a method is proposed in which the vehicle drives toward a drivable area obtained with a low-cost vision sensor. In the parking lot, there are various static/dynamic obstacles in complex arrangements and no lanes, so it is necessary to obtain an occupancy grid map by segmenting the drivable/non-drivable areas. To navigate intersections, one branch road according to a global plan is configured as the drivable area. The branch road is detected as a rotated bounding box and is obtained through a multi-task network that simultaneously recognizes the drivable area. For driving, imitation learning is used, which can handle various and complex environments without parameter tuning and is more robust to inaccurate perception results than model-based motion-planning algorithms. In addition, unlike existing imitation learning methods that obtain control commands from an image, a new imitation learning method is proposed that learns a look-ahead point that the vehicle will reach on an occupancy grid map. By using this point, the data aggregation (DAgger) algorithm that improves the performance of imitation learning can be applied to autonomous navigation without a separate joystick, and the expert can select the optimal action well even in the human-in-the-loop DAgger training process. Additionally, DAgger variant algorithms improve DAgger's performance by sampling data for unsafe or near-collision situations. 
However, if the data ratio for these situations in the entire training dataset is small, additional DAgger iterations and human effort are required. To deal with this problem, a new DAgger training method using a weighted loss function (WeightDAgger) is proposed, which can more accurately imitate the expert action in the aforementioned situations with fewer DAgger iterations. To extend DAgger to dynamic situations, an adversarial agent policy competing with the agent is proposed, and a training framework to apply this policy to DAgger is suggested. The agent can be trained not only for a variety of situations not covered in previous DAgger training steps, but also progressively from easy to difficult situations. Through vehicle navigation experiments in real indoor and outdoor parking lots, the limitations of model-based motion-planning algorithms and the effectiveness of the proposed method in dealing with them are analyzed. Besides, it is shown that the proposed WeightDAgger requires fewer DAgger iterations and less human effort than the existing DAgger algorithms, and that the vehicle can safely avoid dynamic obstacles with the DAgger training framework using the adversarial agent policy. 
Additionally, the appendix introduces a vision-based autonomous parking system and a method to quickly generate the parking path, completing the vision-based autonomous valet parking system that performs driving as well as parking. Contents (chapter level): 1 Introduction; 2 Multi-Task Perception Network for Vision-Based Navigation; 3 Data Aggregation (DAgger) Algorithm with Look-ahead Point for Autonomous Driving in Semi-Structured Environment; 4 WeightDAgger Algorithm for Reducing Imitation Learning Iterations; 5 DAgger Using Adversarial Agent Policy for Dynamic Situations; 6 Conclusions; Appendix A: Vision-based Re-plannable Autonomous Parking System, and Biased Target-tree* with RRT* Algorithm for Fast Parking Path Planning.
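The look-ahead-point formulation lends itself to a compact illustration. The sketch below is a heavily simplified, stdlib-only stand-in: the occupancy grid map is a 1-D list of free (0) / occupied (1) cells, the "policy" is a lookup table rather than the neural network trained in the thesis, and `expert_lookahead` plays the human expert; only the aggregation loop mirrors the actual DAgger algorithm:

```python
def expert_lookahead(state):
    # Stand-in expert: label each state with the farthest free cell
    # before the first obstacle (the "look-ahead point" the vehicle
    # should steer toward on the occupancy grid map).
    goal = 0
    for i, cell in enumerate(state):
        if cell == 1:
            break
        goal = i
    return goal

def train(dataset):
    # Toy "policy": memorize state -> look-ahead point. A real
    # implementation fits a network; WeightDAgger would additionally
    # up-weight unsafe or near-collision samples in the loss here.
    return dict(dataset)

def dagger(states, iterations=3):
    dataset, policy = [], {}
    for _ in range(iterations):
        # Core DAgger step: visit states under the current policy,
        # query the expert for labels, and aggregate them.
        for state in states:
            dataset.append((tuple(state), expert_lookahead(state)))
        policy = train(dataset)
    return policy

policy = dagger([[0, 0, 1, 0], [0, 0, 0, 1]])
```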

    Collaborative Dynamic 3D Scene Graphs for Automated Driving

    Full text link
    Maps have played an indispensable role in enabling safe and automated driving. Although there have been many advances on different fronts ranging from SLAM to semantics, building an actionable hierarchical semantic representation of urban dynamic scenes from multiple agents is still a challenging problem. In this work, we present Collaborative URBan Scene Graphs (CURB-SG) that enable higher-order reasoning and efficient querying for many functions of automated driving. CURB-SG leverages panoptic LiDAR data from multiple agents to build large-scale maps using an effective graph-based collaborative SLAM approach that detects inter-agent loop closures. To semantically decompose the obtained 3D map, we build a lane graph from the paths of ego agents and their panoptic observations of other vehicles. Based on the connectivity of the lane graph, we segregate the environment into intersecting and non-intersecting road areas. Subsequently, we construct a multi-layered scene graph that includes lane information, the position of static landmarks and their assignment to certain map sections, other vehicles observed by the ego agents, and the pose graph from SLAM including 3D panoptic point clouds. We extensively evaluate CURB-SG in urban scenarios using a photorealistic simulator. We release our code at http://curb.cs.uni-freiburg.de.
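The connectivity-based segregation step can be pictured with a toy adjacency structure. This is a hypothetical sketch, not CURB-SG's implementation: the node names and the degree-3 threshold are illustrative choices for classifying junctions in a lane graph:

```python
def segregate(lane_graph):
    # Classify lane-graph nodes: nodes where three or more lane segments
    # meet are treated as intersection areas; the rest as
    # non-intersecting road. The graph is an adjacency dict
    # {node: set_of_neighbors}.
    intersections = {n for n, nbrs in lane_graph.items() if len(nbrs) >= 3}
    roads = set(lane_graph) - intersections
    return intersections, roads

# Hypothetical toy lane graph: a four-way junction at node "X".
g = {
    "A": {"X"}, "B": {"X"}, "C": {"X"}, "D": {"X"},
    "X": {"A", "B", "C", "D"},
}
intersections, roads = segregate(g)
```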