4,933 research outputs found
Longitudinal Motion Planning for Autonomous Driving Based on Pedestrian Behavior and Driver Driving Characteristics
Thesis (Master's) -- Seoul National University Graduate School: College of Engineering, Department of Mechanical Engineering, August 2020. Advisor: 이경수.
This thesis presents a pedestrian model that accounts for uncertainty in the direction of future movement, together with a human-like longitudinal motion planning algorithm for autonomous vehicles interacting with pedestrians. Interactive driving with pedestrians is essential for autonomous driving in urban environments, yet it is very challenging because pedestrians' movement directions are difficult to predict. Even under this behavioral uncertainty, an autonomous vehicle must plan motions that ensure pedestrian safety while responding to pedestrians as smoothly as a human driver. To this end, a probabilistic pedestrian yaw model is introduced based on observed behavioral characteristics, and human driving parameters are identified for the interaction situation. The thesis consists of three main parts: pedestrian model definition, prediction-based collision risk assessment, and human-like longitudinal motion planning. The key idea behind the pedestrian model is the correlation between a pedestrian's speed and direction change. These behavioral characteristics were investigated statistically from pedestrian tracking data perceived with a light detection and ranging (LiDAR) sensor and a front camera. From them, the probability of movement in each direction is derived as a function of pedestrian speed, and the effective moving area is bounded by a validity criterion on that probability. The model thus tells the autonomous vehicle which area a pedestrian may head toward with a given probability in future steps, allowing it to plan motion under pedestrian yaw-state uncertainty and to act in advance for pedestrians that may pose a risk.
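The probabilistic yaw model summarized above can be illustrated with a small numerical sketch. The Gaussian shape, the bin count, and the speed-dependent spread below are illustrative assumptions rather than the distributions estimated in the thesis; they only mimic the reported tendency that slower pedestrians change direction more freely.

```python
import numpy as np

def heading_change_distribution(speed, n_bins=36, k=2.0):
    """Discrete probability of a pedestrian's heading change per time step.

    Illustrative assumption: the spread of heading changes shrinks as
    walking speed grows (fast pedestrians rarely turn sharply).
    """
    angles = np.linspace(-np.pi, np.pi, n_bins, endpoint=False)  # radians
    sigma = 1.5 / (1.0 + k * speed)   # hypothetical speed-dependent spread
    p = np.exp(-0.5 * (angles / sigma) ** 2)
    return angles, p / p.sum()

def effective_moving_area(angles, probs, mass=0.954):
    """Directions retained by a validity criterion: keep the most likely
    bins until roughly 2-sigma of the probability mass is covered."""
    order = np.argsort(probs)[::-1]
    keep = order[np.cumsum(probs[order]) <= mass]
    if keep.size == 0:            # degenerate case: keep the mode only
        keep = order[:1]
    return angles[np.sort(keep)]

angles, probs = heading_change_distribution(speed=1.2)  # typical walking speed, m/s
area = effective_moving_area(angles, probs)
```

Under these assumptions a pedestrian at 2 m/s gets a much narrower effective moving area than one at 0.3 m/s, which is the property the planner exploits when deciding which pedestrians could pose a risk.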
Secondly, a risk assessment based on the pedestrian model is performed. The dynamic states of the pedestrian and the subject vehicle are predicted: the pedestrian is assumed to move in the most dangerous direction within the effective moving area obtained above, and the vehicle is predicted with a lane keeping model in which it follows a given path. Based on the prediction results, the algorithm checks whether a collision between the pedestrian and the vehicle would occur if no deceleration were taken. Finally, longitudinal motion planning is determined for target pedestrians with a possibility of collision. Human driving data is first examined to obtain a proper longitudinal deceleration and deceleration starting point in the interaction situation with pedestrians; several human driving parameters are defined from it and applied in determining the longitudinal acceleration of the vehicle. The longitudinal motion planning algorithm is verified via vehicle tests, whose results confirm that, based on the pedestrian model and prediction, the proposed algorithm makes deceleration decisions and produces longitudinal motion similar to a human driver's.
Chapter 1. Introduction
1.1. Background and Motivation
1.2. Previous Research
1.3. Thesis Objective and Outline
Chapter 2. Probabilistic Pedestrian Yaw Model
2.1. Pedestrian Behavior Characteristics
2.2. Probability Movement Range
Chapter 3. Prediction Based Risk Assessment
3.1. Lane Keeping Behavior Model
3.2. Subject Vehicle Prediction
3.3. Safety Region Based on Prediction
Chapter 4. Human-like Longitudinal Motion Planning
4.1. Human Driving Parameters Definition
4.1.1 Hard Mode Distance
4.1.2 Soft Mode Distance and Velocity
4.1.3 Time-To-Collision
4.2. Driving Mode and Acceleration Decision
4.2.1 Acceleration of Each Mode
4.2.2 Mode Selection
Chapter 5. Vehicle Test Result
5.1. Configuration of Experimental Vehicle
5.2. Longitudinal Motion Planning for Pedestrian
5.2.1 Soft Mode Scenario
5.2.2 Hard Mode Scenario
Chapter 6. Conclusion
Bibliography
Abstract (Korean)
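The driving-mode logic outlined in Chapter 4 (hard and soft mode distances, time-to-collision, per-mode acceleration, mode selection) can be sketched roughly as follows. The 8 m / 25 m / 3 s thresholds and the braking values are placeholders, not the parameters identified from human driving data in the thesis.

```python
def time_to_collision(gap, closing_speed):
    """TTC to the predicted conflict point (infinite if the gap is opening)."""
    return gap / closing_speed if closing_speed > 1e-6 else float("inf")

def select_mode(gap, ttc, d_hard=8.0, d_soft=25.0, ttc_crit=3.0):
    """Pick a driving mode from distance and TTC thresholds (placeholder values)."""
    if gap < d_hard or ttc < ttc_crit:
        return "hard"        # strong braking, imminent risk
    if gap < d_soft:
        return "soft"        # comfortable anticipatory deceleration
    return "cruise"

def longitudinal_accel(mode, speed, gap):
    """Acceleration command (m/s^2) for the selected mode."""
    if mode == "hard":
        return -4.0
    if mode == "soft":
        # stop roughly at the conflict point: v^2 = 2*a*d  =>  a = -v^2/(2d)
        return -(speed ** 2) / (2.0 * max(gap, 1.0))
    return 0.0
```

For example, a 20 m gap with a 10 s TTC would select the soft mode and command about -2.5 m/s^2 at 10 m/s under these placeholder values.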
Human Motion Trajectory Prediction: A Survey
With growing numbers of intelligent autonomous systems in human environments,
the ability of such systems to perceive, understand and anticipate human
behavior becomes increasingly important. Specifically, predicting future
positions of dynamic agents and planning considering such predictions are key
tasks for self-driving vehicles, service robots and advanced surveillance
systems. This paper provides a survey of human motion trajectory prediction. We
review, analyze and structure a large selection of work from different
communities and propose a taxonomy that categorizes existing methods based on
the motion modeling approach and level of contextual information used. We
provide an overview of the existing datasets and performance metrics. We
discuss limitations of the state of the art and outline directions for further
research.
Comment: Submitted to the International Journal of Robotics Research (IJRR), 37 pages
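Among the performance metrics such a survey reviews, average and final displacement error (ADE/FDE) are the most widely used for trajectory prediction. A minimal sketch, assuming a predicted and a ground-truth trajectory given as (T, 2) arrays of positions:

```python
import numpy as np

def ade(pred, gt):
    """Average Displacement Error: mean Euclidean distance over all timesteps."""
    return float(np.linalg.norm(pred - gt, axis=1).mean())

def fde(pred, gt):
    """Final Displacement Error: Euclidean distance at the last timestep."""
    return float(np.linalg.norm(pred[-1] - gt[-1]))

gt   = np.array([[0, 0], [1, 0], [2, 0], [3, 0]], dtype=float)
pred = np.array([[0, 0], [1, 1], [2, 0], [3, 2]], dtype=float)
# per-step errors: 0, 1, 0, 2  ->  ADE 0.75, FDE 2.0
```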
Pedestrian Environment Model for Automated Driving
Besides interacting correctly with other vehicles, automated vehicles should
also be able to react in a safe manner to vulnerable road users like
pedestrians or cyclists. For a safe interaction between pedestrians and
automated vehicles, the vehicle must be able to interpret the pedestrian's
behavior. Common environment models do not contain information like body poses
used to understand the pedestrian's intent. In this work, we propose an
environment model that includes the position of the pedestrians as well as
their pose information. We only use images from a monocular camera and the
vehicle's localization data as input to our pedestrian environment model. We
extract the skeletal information with a neural network human pose estimator
from the image. Furthermore, we track the skeletons with a simple tracking
algorithm based on the Hungarian algorithm and an ego-motion compensation. To
obtain the 3D information of the position, we aggregate the data from
consecutive frames in conjunction with the vehicle position. We demonstrate our
pedestrian environment model on data generated with the CARLA simulator and the
nuScenes dataset. Overall, we reach a relative position error of around 16% on
both datasets.
Comment: Accepted for presentation at the 26th IEEE International Conference on Intelligent Transportation Systems (ITSC 2023), 24-28 September 2023, Bilbao, Bizkaia, Spain
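The tracking step described above, Hungarian matching after ego-motion compensation, could look roughly like this. It is not the authors' implementation: the pixel-shift compensation and the gating distance are illustrative stand-ins, and SciPy's linear_sum_assignment supplies the Hungarian step.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate_skeletons(tracks, detections, ego_shift, max_dist=50.0):
    """Match tracked skeleton positions to new detections.

    tracks, detections: (N, 2) and (M, 2) image-plane root-joint positions.
    ego_shift: (2,) expected pixel shift induced by the vehicle's own motion
    (a simple stand-in for the paper's ego-motion compensation).
    Returns a list of (track_index, detection_index) pairs.
    """
    if len(tracks) == 0 or len(detections) == 0:
        return []
    shifted = tracks + ego_shift                     # compensate ego-motion
    cost = np.linalg.norm(shifted[:, None, :] - detections[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)         # Hungarian algorithm
    # gate away implausibly long matches
    return [(int(r), int(c)) for r, c in zip(rows, cols) if cost[r, c] <= max_dist]
```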
Tracking by Prediction: A Deep Generative Model for Multi-Person Localisation and Tracking
Current multi-person localisation and tracking systems rely heavily on appearance models for target re-identification, and almost no approaches employ a complete deep learning solution for both objectives. We
present a novel, complete deep learning framework for multi-person localisation
present a novel, complete deep learning framework for multi-person localisation
and tracking. In this context we first introduce a lightweight sequential Generative Adversarial Network architecture for person localisation, which overcomes issues related to occlusions and noisy detections typically found in a multi-person environment. In the proposed tracking framework we build upon
recent advances in pedestrian trajectory prediction approaches and propose a
novel data association scheme based on predicted trajectories. This removes the
need for computationally expensive person re-identification systems based on
appearance features and generates human-like trajectories with minimal fragmentation. The proposed method is evaluated on multiple public benchmarks, including both static and dynamic cameras, and achieves outstanding performance, especially among other recently proposed deep neural
network-based approaches.
Comment: To appear in IEEE Winter Conference on Applications of Computer Vision (WACV), 201
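The paper's central idea, associating detections to tracks through predicted positions rather than appearance features, can be sketched with a constant-velocity extrapolation standing in for the learned trajectory predictor, and a greedy nearest match standing in for the paper's association scheme:

```python
import numpy as np

def predict_constant_velocity(history, horizon=1):
    """Extrapolate each track's last step (a crude stand-in for the
    learned trajectory predictor). history: (N, T, 2) past positions."""
    vel = history[:, -1] - history[:, -2]
    return history[:, -1] + horizon * vel

def associate_by_prediction(history, detections, gate=2.0):
    """Assign detections to tracks by distance to *predicted* positions,
    so no appearance-based re-identification is needed."""
    pred = predict_constant_velocity(history)
    assignments, taken = {}, set()
    for i, p in enumerate(pred):                 # greedy nearest match
        d = np.linalg.norm(detections - p, axis=1)
        d[list(taken)] = np.inf                  # detection already claimed
        j = int(np.argmin(d))
        if d[j] <= gate:
            assignments[i] = j
            taken.add(j)
    return assignments
```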
Modelling shared space users via rule-based social force model
The promotion of space sharing to raise the quality of community living and the safety of street surroundings is an increasingly accepted feature of modern urban design. In this context, the development of a shared space simulation tool is essential for determining whether particular shared space schemes are suitable alternatives to traditional street layouts: such a tool lets urban designers visualise pedestrian and car trajectories, extract flow-density relations in a new shared space design, and settle on optimal design features before implementation. This paper presents a three-layered microscopic mathematical model, implemented in a traffic simulation tool, that represents the behaviour of pedestrians and vehicles in shared space layouts. The top layer calculates route maps based on static obstacles in the environment, planning the shortest path towards each agent's destination by generating one or more intermediate targets. In the second layer, the Social Force Model (SFM) is modified and extended for mixed traffic to produce feasible trajectories; since vehicle movements are not as flexible as pedestrian movements, velocity angle constraints are included for vehicles. Conflicts are resolved in the third layer by rule-based constraints for shared space users. An optimisation algorithm is applied to determine the interaction parameters of the force-based model from empirical data. This three-layer microscopic model can be used to simulate shared space environments and assess, for example, new street designs.
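The second layer's force computation can be illustrated with a minimal single-pedestrian Social Force update. The parameters v0, tau, A, and B are textbook-style placeholders, not the interaction parameters obtained by the paper's optimisation step:

```python
import numpy as np

def social_force(pos, vel, goal, neighbors, v0=1.3, tau=0.5, A=2.0, B=0.3):
    """One social-force update for a single pedestrian.

    F = (desired velocity - current velocity) / tau          (driving term)
      + sum_j A * exp(-d_ij / B) * unit(pos - pos_j)         (repulsion)
    """
    direction = goal - pos
    direction = direction / (np.linalg.norm(direction) + 1e-9)
    force = (v0 * direction - vel) / tau          # steer toward the goal
    for q in neighbors:                           # repulsion from other agents
        diff = pos - q
        d = np.linalg.norm(diff) + 1e-9
        force += A * np.exp(-d / B) * diff / d
    return force
```

A nearby neighbor directly ahead reduces the forward component of the force, reproducing the basic avoidance behaviour the full model builds on.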