265 research outputs found

    A Method of the Coverage Ratio of Street Trees Based on Deep Learning

    The street tree coverage ratio provides reliable data support for urban ecological environment assessment and plays an important part in calculating ecological environment indices. To estimate the urban street tree coverage ratio, an integrated model based on YOLOv4 and a U-Net network is proposed for detecting and extracting street trees from remote sensing images, yielding an accurate estimate of the street tree coverage ratio. Experiments were carried out on a self-made dataset, and the results show that street tree detection accuracy is 94.91%, with street tree coverage ratios of 16.30% and 13.81% in the two experimental urban scenes. The MIoU of contour extraction is 98.25%, and the estimated coverage accuracy is improved by 6.89% and 5.79%, respectively. The results indicate that the proposed model automates street tree contour extraction and estimates the street tree coverage ratio more accurately.
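The abstract does not spell out how the ratio itself is computed. A minimal sketch, assuming the segmentation network's output is reduced to a binary street-tree mask, is simply the fraction of tree pixels in the image (the function name and toy mask here are illustrative, not from the paper):

```python
import numpy as np

def coverage_ratio(mask: np.ndarray) -> float:
    """Fraction of pixels labeled as street tree in a binary mask."""
    return float(np.count_nonzero(mask)) / mask.size

# Toy 4x4 mask: 4 of 16 pixels are tree pixels
mask = np.zeros((4, 4), dtype=np.uint8)
mask[1:3, 1:3] = 1
print(coverage_ratio(mask))  # 0.25
```

In practice the ratio would be computed over a whole scene's mosaic of segmentation masks rather than a single tile.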

    LiDAR Segmentation-based Adversarial Attacks on Autonomous Vehicles

    Autonomous vehicles utilizing LiDAR-based 3D perception systems are susceptible to adversarial attacks. This paper focuses on a specific attack scenario that relies on creating adversarial point clusters with the intention of fooling the segmentation model used on LiDAR data into misclassifying the point cloud. This can be translated into the real world by placing objects (such as road signs or cardboard) at the adversarial point cluster locations, which are generated through an optimization algorithm performed on the adversarial point clusters introduced by the attacker.
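The abstract does not describe the optimization itself. A generic sketch of the idea, assuming a black-box setting, is to perturb the cluster's points by random search so as to maximize a misclassification score; the `score` function below is a stand-in for querying the real segmentation model, which is not specified here:

```python
import numpy as np

rng = np.random.default_rng(0)

def score(points: np.ndarray) -> float:
    # Stand-in objective: in a real attack this would query the LiDAR
    # segmentation model and return, e.g., the confidence of the wrong class.
    center = points.mean(axis=0)
    return -float(np.linalg.norm(center - np.array([5.0, 0.0, 0.5])))

def random_search(points: np.ndarray, steps: int = 200, sigma: float = 0.1) -> np.ndarray:
    """Greedy random search: keep a perturbed cluster only if it scores higher."""
    best, best_s = points, score(points)
    for _ in range(steps):
        candidate = best + rng.normal(0.0, sigma, best.shape)
        s = score(candidate)
        if s > best_s:
            best, best_s = candidate, s
    return best

cluster = rng.normal(0.0, 1.0, (16, 3))   # initial adversarial point cluster
adv = random_search(cluster)
```

The resulting cluster positions would then correspond to the physical placement locations the paper describes.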

    Automated Safety Assessment of Rural Roadways Using Computer Vision

    Roadside elements play an important role in the number and severity of crashes. Rigid obstacles (trees, rocks, embankments, etc.), guardrails, clear zones, and side slopes are among the factors that can affect roadside safety. The Federal Highway Administration (FHWA) presented a rating system to help DOTs and transportation agencies make better decisions about improving road segments. However, the manual process of rating road segments is time-consuming, inconsistent, and labor-intensive. To this end, this project proposed an automated rating system based on images taken from Utah roadways. Utilizing machine-learning algorithms and Mandli images, the developed approach employs the FHWA rating system as the primary standard for assessing roadside safety. To provide more detailed information about safety conditions on the roadside, various computer vision algorithms have been developed to detect each roadside feature. Pre-trained models for clear zone detection and side slope classification have also been established. A shapefile has been generated by assigning a safety ranking to road segments on five state roads. This product can assist traffic engineers in decision-making to improve road safety by prioritizing projects that address problematic locations. The results show a promising approach to enhancing road safety and preventing crashes.

    City-Scale Tree Mapping and Species Detection Using Multiple Sensing Platforms and Deep Learning

    Thesis (Master's), Graduate School of Seoul National University: College of Agriculture and Life Sciences, Department of Landscape Architecture and Rural Systems Engineering (Landscape Architecture), February 2023. Advisor: Youngryel Ryu.
    Precise estimation of the number of trees, individual tree locations, and species information across an entire city forms a solid foundation for enhancing ecosystem services. However, mapping individual trees at the city scale remains challenging due to the heterogeneous distribution of urban trees. Here, we present a novel framework that merges multiple sensing platforms and leverages various deep neural networks to produce a fine-grained urban tree map. We mapped trees and detected species relying only on RGB images taken by multiple sensing platforms (airborne, citizen, and vehicle), which fueled six deep learning models. Since each platform has its own strengths, we divided the entire process into three steps. First, we produced individual tree location maps by converting the central points of detection bounding boxes in airborne imagery into actual coordinates. Because many trees were obscured by building shadows, we applied a Generative Adversarial Network (GAN) to delineate hidden trees in the airborne images. Second, we used tree bark photos collected by citizens for species mapping in urban parks and forests; the species in each bark photo was classified automatically after non-tree parts of the image were segmented out. Third, we classified the species of roadside trees using a camera mounted on a car, augmenting our species mapping framework with street-level tree data; the distance from the car to street trees was estimated from the number of lanes detected in the images. Finally, we assessed our results by comparing them with Light Detection and Ranging (LiDAR), GPS, and field data. We estimated that over 1.2 million trees exist in the 121.04 kmΒ² city and generated more accurate individual tree positions than conventional field survey methods. Among them, we detected the species of more than 63,000 trees. The most frequently detected species was Prunus yedoensis (21.43 %), followed by Ginkgo biloba (19.44 %), Zelkova serrata (18.68 %), Pinus densiflora (7.55 %), and Metasequoia glyptostroboides (5.97 %). Comprehensive experimental results demonstrate that tree bark photos and street-level imagery taken by citizens and vehicles are conducive to delivering accurate and quantitative information on the distribution of urban tree species.
    Contents: 1. Introduction; 2. Methodology (2.1. Data collection; 2.2. Deep learning overall; 2.3. Tree counting and mapping; 2.4. Tree species detection; 2.5. Evaluation); 3. Results (3.1. Evaluation of deep learning performance; 3.2. Tree counting and mapping; 3.3. Tree species detection); 4. Discussion (4.1. Multiple sensing platforms for urban areas; 4.2. Potential of citizen and vehicle sensors; 4.3. Implications); 5. Conclusion; Bibliography; Abstract in Korean.