5 research outputs found

    ์ž์œจ ์ฃผํ–‰ ์ฐจ๋Ÿ‰์˜ ์‹ฌ์ธต๊ฐ•ํ™”ํ•™์Šต ๊ธฐ๋ฐ˜ ๊ธด๊ธ‰ ์ฐจ์„  ๋ณ€๊ฒฝ ๊ฒฝ๋กœ ์ตœ์ ํ™”

    Get PDF
    Thesis (Ph.D.) -- Seoul National University Graduate School : College of Engineering, Department of Mechanical and Aerospace Engineering, 2021. 8. ์ตœ์˜ˆ๋ฆผ.

An emergency lane change is inherently risky because it must be made instantaneously in an emergency, such as a sudden stop of the vehicle ahead in the driving lane. Optimization of the lane change trajectory is therefore an essential research area for autonomous vehicles. This research proposes a deep reinforcement learning (DRL) based path optimization for the emergency lane change of autonomous vehicles. The algorithm is developed with a focus on fast and safe evasive maneuvers and lane changes in an emergency. As the first step of algorithm development, a simulation environment was established. IPG CARMAKER was selected for reliable vehicle dynamics simulation and for constructing driving scenarios for reinforcement learning; it is a highly reliable program that can reproduce vehicle behavior close to that of a real vehicle. In this research, simulations were performed using the Hyundai I30-PDe full car model. As the simulator for DRL and vehicle control, Matlab Simulink, which can encompass control, measurement, and artificial intelligence, was selected. By coupling the two simulators, the emergency lane change trajectory is optimized based on DRL. The lane change trajectory is modeled as a third-order polynomial: the start and end points of the lane change are set, and the polynomial coefficients are expressed as functions of the lane change distance. To optimize the coefficients, a DRL architecture is constructed. Twelve types of driving environment data form the observation space, and the lane change distance, the free variable of the polynomial, is selected as the action. The reward space is designed to maximize learning performance: dynamic and static rewards and penalties are given at each simulation time step, so that optimization proceeds in the direction that maximizes the accumulated reward. A Deep Deterministic Policy Gradient (DDPG) agent is used as the optimization algorithm. An algorithm is also developed for driving the vehicle in the dynamic simulation program. First, an algorithm is developed that determines when, at what velocity, and in which direction to change lanes in an emergency situation. By estimating the maximum tire-road friction coefficient in real time, the minimum stopping distance of the driving vehicle is calculated to determine the risk of a longitudinal collision with the vehicle in front. Using Gipps' safety distance formula, an algorithm is developed that detects a possible collision with a vehicle approaching in the target lane and decides whether to overtake it and merge ahead or to merge behind it after being overtaken. Based on these, the final lane change decision-making algorithm is developed by judging the collision risk and safety of the left and right lanes. An integrated algorithm is then developed that outputs, according to the situation, either the emergency lane change trajectory from the trained reinforcement learning structure or a normal driving trajectory such as lane keeping or adaptive cruise control, and drives the ego vehicle through an adaptive model predictive controller. As the last step of the research, DRL was performed to optimize the developed emergency lane change path optimization algorithm.
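The abstract states that the trajectory is a third-order polynomial whose coefficients depend on the lane change distance, but it does not give the boundary conditions. A minimal sketch, assuming zero lateral offset and zero lateral slope at both ends of the maneuver (a common choice, not necessarily the thesis's):

```python
import numpy as np

def lane_change_trajectory(lane_width: float, lc_distance: float, n: int = 50):
    """Lateral path y(x) = a3*x^3 + a2*x^2 + a1*x + a0 for a lane change.

    Assumed boundary conditions (the thesis only says the coefficients are
    functions of the lane change distance L):
        y(0) = 0, y'(0) = 0, y(L) = lane_width, y'(L) = 0,
    which give a0 = a1 = 0, a2 = 3*w/L**2, a3 = -2*w/L**3.
    """
    w, L = lane_width, lc_distance
    x = np.linspace(0.0, L, n)
    y = (3.0 * w / L**2) * x**2 - (2.0 * w / L**3) * x**3
    return x, y

# Example: a 3.5 m lateral offset completed over 40 m of travel.
x, y = lane_change_trajectory(3.5, 40.0)
```

Under these boundary conditions the lane change distance L is the only free parameter of the path, which is consistent with the abstract's choice of the lane change distance as the single continuous action.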
A total of 60,000 trial-and-error training episodes are performed to develop the algorithm for each driving situation, and performance is evaluated through test driving.

An emergency lane change carries risk in itself because it is made instantaneously when an emergency, such as a sudden stop of the preceding vehicle, occurs in the driving lane. If the steering is too slow, the vehicle cannot avoid a collision with the obstacle ahead; if the steering is too fast, the forces between the vehicle and the road exceed the tire friction limit, degrading handling stability and causing other types of accidents such as a spin or rollover. Optimizing the lane change path is therefore essential for autonomous vehicles to cope with emergencies. This dissertation optimizes the emergency lane change path of an autonomous vehicle based on deep reinforcement learning, focusing on fast and safe evasive maneuvers and lane changes in emergencies such as a sudden stop of the preceding vehicle or the appearance of an obstacle.

As the first step of algorithm development, a simulation environment was built. IPG CARMAKER was selected for reliable vehicle dynamics simulation and for constructing driving scenarios for reinforcement learning; it is a highly reliable program used in industry that can analyze vehicle behavior close to that of a real vehicle. Simulations were performed with Hyundai's I30-PDe model. Matlab Simulink, which covers control, measurement, and artificial intelligence, was selected as the program for reinforcement learning and vehicle control. The two simulators were coupled, and the emergency lane change trajectory was optimized with deep reinforcement learning. The lane change trajectory was modeled as a third-order polynomial: the start and end points of the lane change were set, and the polynomial coefficients were interpreted as functions of the lane change distance. To optimize the coefficients with deep reinforcement learning, a reinforcement learning architecture was constructed. The observation space used 12 types of driving environment data, and the lane change distance, a variable of the cubic function, was selected as the output. A reward space was designed to maximize learning: dynamic rewards, static rewards, dynamic penalties, and static penalties were given at every simulation step, so that learning proceeded in the direction that maximizes the total accumulated reward. A Deep Deterministic Policy Gradient agent was used as the optimization algorithm.

Together with the reinforcement learning architecture, an algorithm was developed for driving the vehicle in the dynamics simulation program. First, a decision-making algorithm was developed that determines when, at what speed, and in which direction to change lanes in an emergency. The maximum tire-road friction coefficient is estimated in real time and used to compute the minimum stopping distance of the driving vehicle, from which the risk of a longitudinal collision with the preceding vehicle is judged. In addition, using Gipps' safety distance formula, an algorithm was developed that detects the possibility of a collision with a vehicle approaching in the target lane and decides whether to overtake that vehicle and merge ahead or to merge behind it after being overtaken.
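The abstract gives the decision logic but not the formulas. A sketch of the two checks, assuming the textbook friction-limited stopping distance and a Gipps-style safe gap; parameter names and default values are illustrative, not taken from the thesis:

```python
G = 9.81  # gravitational acceleration, m/s^2

def min_stopping_distance(v: float, mu_est: float, t_react: float = 0.0) -> float:
    """Friction-limited stopping distance d = v*t_react + v**2 / (2*mu*g),
    where mu_est is the real-time estimate of the tire-road friction coefficient."""
    return v * t_react + v**2 / (2.0 * mu_est * G)

def longitudinal_collision_risk(gap_to_lead: float, v_ego: float, mu_est: float) -> bool:
    """True when the ego vehicle can no longer stop within the current gap."""
    return gap_to_lead < min_stopping_distance(v_ego, mu_est)

def gipps_safe_gap(v_follower: float, v_leader: float,
                   b: float = 4.0, tau: float = 0.7) -> float:
    """Gipps-style safe gap: the distance the follower needs, reacting after tau
    seconds and braking at rate b, to avoid hitting a leader braking at b."""
    return v_follower * tau + v_follower**2 / (2.0 * b) - v_leader**2 / (2.0 * b)

def lateral_decision(gap_to_approaching: float, v_ego: float, v_approaching: float) -> str:
    """Merge ahead of the vehicle approaching in the target lane only if that
    vehicle would still be left with a safe gap; otherwise merge behind it."""
    if gap_to_approaching > gipps_safe_gap(v_approaching, v_ego):
        return "merge_ahead"
    return "merge_behind"
```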
Based on this, a decision-making algorithm for the final lane change was developed by judging the collision risk and the stability of the left and right lanes. An integrated algorithm was then developed that outputs, according to the situation, either the emergency lane change trajectory obtained through the constructed reinforcement learning structure or a normal driving trajectory such as lane keeping or adaptive cruise control, and that drives the vehicle through an adaptive model predictive controller. As the last step of the research, deep reinforcement learning was performed to optimize the developed emergency lane change path generation algorithm. Through a total of 60,000 trial-and-error training episodes, an optimal lane change control algorithm was developed for each driving situation, and an optimal lane change trajectory for each driving situation was presented.

Chapter 1. Introduction
  1.1. Research Background
  1.2. Previous Research
  1.3. Research Objective
  1.4. Dissertation Overview
Chapter 2. Simulation Environment
  2.1. Simulator
  2.2. Scenario
Chapter 3. Methodology
  3.1. Reinforcement Learning
  3.2. Deep Reinforcement Learning
  3.3. Neural Network
Chapter 4. DRL-enhanced Lane Change
  4.1. Necessity of Evasive Steering Trajectory Optimization
  4.2. Trajectory Planning
  4.3. DRL Structure
    4.3.1. Observation
    4.3.2. Action
    4.3.3. Reward
    4.3.4. Neural Network Architecture
    4.3.5. Deep Deterministic Policy Gradient (DDPG) Agent
Chapter 5. Autonomous Driving Algorithm Integration
  5.1. Lane Change Decision Making
    5.1.1. Longitudinal Collision Detection
    5.1.2. Lateral Collision Detection
    5.1.3. Lane Change Direction Decision
  5.2. Path Planning
  5.3. Vehicle Controller
  5.4. Algorithm Integration
Chapter 6. Training & Results
Chapter 7. Conclusion
References
Abstract (in Korean)
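The reward classes named in the abstract (dynamic and static rewards and penalties at every simulation step) are not specified further. A purely illustrative per-step reward in that spirit, with made-up signals and weights, might look like:

```python
def step_reward(lateral_error: float, collision: bool, maneuver_done: bool,
                w_track: float = 0.1, r_done: float = 10.0,
                p_crash: float = -100.0) -> float:
    """Illustrative DRL step reward: dynamic terms scale with the state at
    each simulation step, static terms are fixed event bonuses/penalties."""
    r = -w_track * abs(lateral_error)  # dynamic penalty: path tracking error
    if maneuver_done:
        r += r_done                    # static reward: lane change completed
    if collision:
        r += p_crash                   # static penalty: collision ends episode
    return r
```

The DDPG agent then maps the 12-dimensional observation to the single continuous action, the lane change distance, and is trained to maximize the discounted sum of such step rewards.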

    ํ•œ๊ตญ์˜ ์ฃผํƒ๊ฐ€๊ฒฉ๊ณผ ์ž„์ฐจ๋ฃŒ์˜ ๋ณ€๋™ ์š”์ธ ๋ฐ ๊ฐ€๊ตฌ์˜ ํ›„์ƒ์— ๋ฏธ์น˜๋Š” ์˜ํ–ฅ์— ๋Œ€ํ•œ ๋ถ„์„

    Get PDF
    Thesis (Ph.D.) -- Seoul National University Graduate School : College of Social Sciences, Department of Economics, 2019. 8. ํ™์žฌํ™”.

This study quantitatively identifies the relative contributions of the factors driving changes in Korean housing prices and rents from 2001 to 2016, and quantitatively analyzes how a tightening of the loan-to-value (LTV) regulation and possible future increases in the property holding tax or abolition of the acquisition tax affect housing prices, rents, and household welfare. To this end, a general equilibrium model of heterogeneous households is constructed. The model economy consists of households that may differ in income, housing tenure, owned area, occupied area, assets and debt, and consumption level, and a market equilibrium is set up in which housing prices and rents are determined endogenously by the individual choices of these households. Using the model, the effects of exogenous changes in economic conditions and policy factors - a fall in the real interest rate, a rise in real income, an increase in housing supply, tighter LTV and DTI lending regulations, a cut in the holding tax, and changes in the acquisition tax - on housing prices, rents, and related variables at the steady-state equilibrium are quantified and compared with the data, to assess the relative contribution of each factor over the period. In addition, a future tightening of the LTV regulation, a sufficiently large holding tax increase, and an abolition of the acquisition tax are each considered, and the resulting changes in housing prices, rents, and household welfare are examined.

From 2001 to 2016, the real housing price per unit area rose 27.6% in the data, while the real rent per unit area fell 2.1%. In the model economy combining the effects of the fall in the real interest rate, the rise in real income, the increase in housing supply, the tighter LTV and DTI regulations, the holding tax cut, and the acquisition tax changes, the real housing price per unit area rises 34.8% and the real rent per unit area falls 2.5%; that is, the model matches the observed changes in housing prices and rents fairly closely. The changes over 2001-2016 are mostly explained by the fall in the real interest rate, the rise in real income, and the increase in housing supply, while changes in the LTV and DTI lending regulations and in housing-related taxes (the holding and acquisition taxes) turn out to have had relatively little effect. When the LTV cap is tightened from 100% to 70%, or from 70% to 40%, housing prices fall by a comparatively small amount in the long run, but when the cap is lowered from 40% to 0%, housing prices fall sharply. As in the data, more than 90% of households in the model have an LTV ratio of 40% or less, so once the cap is sufficiently tight, further tightening produces an increasingly large fall in housing prices. Tighter LTV regulation raises household welfare economy-wide, with the gains concentrated in high-income groups; when the regulation is sufficiently tight, low-income households also gain, though by a smaller margin than high-income households.
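The nonlinearity described above - small price effects until the cap approaches the leverage levels where most households actually sit - can be illustrated with a toy calculation. The LTV distribution below is made up solely to mimic the reported fact that over 90% of households are at or below a 40% LTV ratio:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical household LTV ratios, skewed toward low leverage (illustrative).
ltv = np.clip(rng.exponential(scale=0.15, size=100_000), 0.0, 1.0)

for cap in (1.0, 0.7, 0.4, 0.0):
    share = (ltv > cap).mean()
    print(f"LTV cap {cap:.0%}: {share:.1%} of households directly constrained")
```

Tightening the cap to 70% or 40% binds for only a few percent of these hypothetical households, while tightening it to 0% binds for nearly all of them, which is the mechanism behind the disproportionate price response.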
ํ–ฅํ›„ ๋ณด์œ ์„ธ์˜ ๊ณผ์„ธํ‘œ์ค€์ด ์‹ค๊ฑฐ๋ž˜๊ฐ€๋กœ ์ธ์ƒ๋˜๋Š” ๊ฒฝ์šฐ๋ฅผ ์ƒ์ •ํ•œ ์‹คํ—˜์—์„œ๋Š” ์žฅ๊ธฐ์ ์œผ๋กœ ์ฃผํƒ๊ฐ€๊ฒฉ์ด 9.3% ํ•˜๋ฝํ•˜๊ณ , ์ž„์ฐจ๋ฃŒ๋Š” 4.1% ํ•˜๋ฝํ•˜๋Š” ๊ฒƒ์œผ๋กœ ๋‚˜ํƒ€๋‚ฌ๋‹ค. ๊ฐ€๊ตฌ๋Š” ํ‰๊ท ์ ์œผ๋กœ ๋งค๊ธฐ ์†Œ๋น„๊ฐ€ 0.7% ๊ฐ์†Œํ•˜๋Š” ํฌ๊ธฐ์˜ ํ›„์ƒ ๋ณ€ํ™”๋ฅผ ๋ณด์˜€๋‹ค. ํŠนํžˆ ๊ณ ์†Œ๋“์ธต์˜ ๊ฒฝ์šฐ ํ›„์ƒ ๊ฐ์†Œ ํญ์ด ๋”์šฑ ํฌ๊ฒŒ ๋‚˜ํƒ€๋‚ฌ๋‹ค. ์ทจ๋“์„ธ๊ฐ€ ํ์ง€๋˜๋Š” ๊ฒฝ์šฐ๋ฅผ ์ƒ์ •ํ•œ ์‹คํ—˜์—์„œ๋Š” ์žฅ๊ธฐ์ ์œผ๋กœ ์ฃผํƒ๊ฐ€๊ฒฉ์ด 7.4% ์ƒ์Šนํ•˜๊ณ , ์ž„์ฐจ๋ฃŒ๋Š” 2.9% ์ƒ์Šนํ•˜๋Š” ๊ฒƒ์œผ๋กœ ๋‚˜ํƒ€๋‚ฌ๋‹ค. ๊ธฐ์กด์˜ ํ†ต๋…๊ณผ๋Š” ๋‹ฌ๋ฆฌ ๊ฐ€๊ตฌ๋Š” ํ‰๊ท ์ ์œผ๋กœ ๋งค๊ธฐ ์†Œ๋น„๊ฐ€ 0.1% ๊ฐ์†Œํ•˜๋Š” ์ •๋„์˜ ํ›„์ƒ ๋ณ€ํ™”๋ฅผ ๋ณด์˜€๊ณ , ๊ณ ์†Œ๋“์ธต์ผ์ˆ˜๋ก ํ›„์ƒ ๊ฐ์†Œ์˜ ํญ์ด ๋”์šฑ ํฌ๊ฒŒ ๋‚˜ํƒ€๋‚ฌ๋‹ค. ์ทจ๋“์„ธ๊ฐ€ ํ์ง€๋˜๋Š” ๊ฒฝ์šฐ ์ฃผํƒ๊ฐ€๊ฒฉ์ด ์ƒ์Šนํ•˜๋ฉด์„œ ๊ธฐ์กด์˜ ์ฃผํƒ๋ณด์œ ๊ฐ€๊ตฌ ์ค‘ ๋Œ€์ถœ๋น„์ค‘์ด ํฐ ๊ฐ€๊ตฌ๋Š” LTV ๊ทœ์ œํ•œ๋„๋กœ ์ธํ•ด ์ฃผํƒ์˜ ํฌ๊ธฐ๋ฅผ ์ค„์ด๊ฑฐ๋‚˜ ์ž„์ฐจ๊ฐ€๊ตฌ๋กœ ์ „ํ™˜ํ•˜๊ฒŒ ๋˜์–ด ์ด๋“ค ๊ณ„์ธต์˜ ํ›„์ƒ ๊ฐ์†Œ๋กœ ์ธํ•ด ๊ฒฝ์ œ ์ „์ฒด๋กœ๋Š” ๋น„๋ก ์ž‘์€ ํฌ๊ธฐ์ด์ง€๋งŒ ํ›„์ƒ ๊ฐ์†Œ๊ฐ€ ๋‚˜ํƒ€๋‚ฌ๋‹ค.This paper quantitatively evaluates the relative contribution of macroeconomic fundamentals and housing-related policies to the changes in real house prices, rents, and household welfare in Korea. We show that the observed changes in real house prices and rents in 2001-2016 are mainly attributed to a decrease in the real interest rate, an increase in real income, and an increase in aggregate house supply. However, housing-related policies turn out to have little impact on the price changes in contrast to the common belief that those policies greatly affected the housing market over the period. We also find that the welfare implications of fundamentals and housing-related policies vary by household income. This study also finds that if LTV regulation is tightened further to less than 40%, the effects on housing prices start to increase disproportionately because more than 90% of the households LTV ratio is less than 40%. In addition, tighter LTV regulations lead to higher household welfare on average, although larger portions of welfare gains fall to high income groups. If the tax base of property holding tax is expanded to the prevailing prices, the housing prices decrease by 9.3% and the rents decrease by 4.1% in the long run. The welfare decreases by 0.7% in terms of CEV on average and higher income groups get worse off more. If the acquisition tax is abolished, the housing prices increase by 7.4% and the rents increase by 2.9% in the long run. The welfare decreases by 0.1% in terms of CEV on average and higher income groups get worse off more due to interactions of the LTV regulation and the higher housing prices.์ œ1์žฅ ํ•œ๊ตญ์˜ ์ฃผํƒ๊ฐ€๊ฒฉ๊ณผ ์ž„์ฐจ๋ฃŒ์˜ ๋ณ€๋™ ์š”์ธ ๋ถ„์„ ...... 1 ์ œ1์ ˆ ์„œ๋ก  ....................................................................... 1 ์ œ2์ ˆ ๋ชจํ˜•๊ฒฝ์ œ ................................................................. 4 1. ๊ฐ€๊ตฌ์˜ ์„ ํ˜ธ ๋ฐ ์ฃผ๊ฑฐ ......................................................... 5 2. ๊ฐ€๊ตฌ์˜ ์†Œ๋“ ..................................................................... 7 3. ๊ฐ€๊ตฌ์˜ ์ž์‚ฐ ๋ฐ ๋Œ€์ถœ๊ทœ์ œ ................................................... 7 4. ์ •๋ถ€ ๋ฐ ๊ฑฐ๋ž˜๋น„์šฉ .............................................................. 8 5. ๊ฐ€๊ตฌ์˜ ๋ฌธ์ œ ๋ฐ ๊ท ์ œ๊ท ํ˜• ................................................... 10 ์ œ3์ ˆ ๋ชจ์ˆ˜์„ค์ • ................................................................ 11 1. 
๊ฒฝ์ œํ™˜๊ฒฝ ๋ฐ ์ •์ฑ…์„ ๋ฐ˜์˜ํ•œ ๋ชจ์ˆ˜ ........................................ 11 2. ์™ธ๋ถ€์  ๋ชจ์ˆ˜ ..................................................................... 20 3. ๋‚ด๋ถ€์  ๋ชจ์ˆ˜ ..................................................................... 23 4. 2016๋…„ ๋ชจํ˜•๊ฒฝ์ œ์˜ ๊ท ์ œ์ƒํƒœ ๋ถ„์„ ...................................... 25 ์ œ4์ ˆ ์ฃผํƒ๊ฐ€๊ฒฉ๊ณผ ์ž„์ฐจ๋ฃŒ์˜ ๋ณ€ํ™”์— ์˜ํ–ฅ์„ ์ฃผ๋Š” ์š”์ธ ๋ถ„์„ ... 30 1. ๋ชจํ˜•๊ฒฝ์ œ์˜ ๋ณ€ํ™”์™€ ์ž๋ฃŒ์˜ ๋ณ€ํ™” ๋น„๊ต ๋ถ„์„ .......................... 31 2. ๊ฐœ๋ณ„์š”์ธ์˜ ํšจ๊ณผ ๋ถ„์„ ....................................................... 36 ์ œ5์ ˆ ๊ฒฐ๋ก  ......................................................................... 49 ์ œ2์žฅ ์ฃผํƒ๋‹ด๋ณด์ธ์ •๋น„์œจ(LTV)์˜ ๋ณ€ํ™”๊ฐ€ ์ฃผํƒ๊ฐ€๊ฒฉ, ์ž„์ฐจ๋ฃŒ ๋ฐ ๊ฐ€๊ตฌ์˜ ํ›„์ƒ์— ๋ฏธ์น˜๋Š” ์˜ํ–ฅ์— ๋Œ€ํ•œ ๋ถ„์„ .............. 51 ์ œ1์ ˆ ์„œ๋ก  ..................................................................... 51 ์ œ2์ ˆ ์ฃผํƒ๋‹ด๋ณด์ธ์ •๋น„์œจ ๋ณ€ํ™”์˜ ํšจ๊ณผ ................................ 52 ์ œ3์ ˆ ๊ฒฐ๋ก  ..................................................................... 55 ์ œ3์žฅ ๋ณด์œ ์„ธ ์ธ์ƒ๊ณผ ์ทจ๋“์„ธ ํ์ง€๊ฐ€ ์ฃผํƒ๊ฐ€๊ฒฉ, ์ž„์ฐจ๋ฃŒ ๋ฐ ๊ฐ€๊ตฌ์˜ ํ›„์ƒ์— ๋ฏธ์น˜๋Š” ์˜ํ–ฅ์— ๋Œ€ํ•œ ๋ถ„์„ ................... 58 ์ œ1์ ˆ ์„œ๋ก  ..................................................................... 58 ์ œ2์ ˆ ๋ณด์œ ์„ธ์˜ ๊ณผ์„ธํ‘œ์ค€์ด ์‹ค๊ฑฐ๋ž˜๊ฐ€๋กœ ์ธ์ƒ๋˜๋Š” ๊ฒฝ์šฐ ........ 59 ์ œ3์ ˆ ์ทจ๋“์„ธ๊ฐ€ ํ์ง€๋˜๋Š” ๊ฒฝ์šฐ .......................................... 63 ์ œ4์ ˆ ๊ฒฐ๋ก  ..................................................................... 67 ์ฐธ๊ณ ๋ฌธํ—Œ ............................................................... 69 Abstract .............................................................. 71Docto
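The welfare figures in the abstract above are consumption-equivalent variations. A minimal sketch of the usual CEV computation, assuming CRRA period utility u(c) = c^(1-sigma)/(1-sigma) with sigma != 1 (the listing does not state the thesis's exact utility function):

```python
def cev(v_baseline: float, v_policy: float, sigma: float) -> float:
    """Consumption-equivalent variation: the uniform fraction by which baseline
    consumption would have to change to deliver the policy's lifetime utility.
    Scaling consumption by (1 + lam) scales CRRA lifetime utility by
    (1 + lam)**(1 - sigma), so lam = (V_policy / V_baseline)**(1/(1-sigma)) - 1."""
    return (v_policy / v_baseline) ** (1.0 / (1.0 - sigma)) - 1.0

# Example with sigma = 2 (CRRA lifetime values are negative when sigma > 1):
# cev(-100.0, -100.7) -> about -0.007, i.e. a 0.7% consumption-equivalent loss.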

    On the effect of rule 408 with mutual private information in pre-trial negotiation

    No full text
    Thesis (Master's) -- Seoul National University Graduate School : Department of Economics, Economics Major, 2001.