1,031 research outputs found

    Cataloging Public Objects Using Aerial and Street-Level Images – Urban Trees

    Get PDF
    Each corner of the inhabited world is imaged from multiple viewpoints with increasing frequency. Online map services like Google Maps or Here Maps provide direct access to huge amounts of densely sampled, georeferenced images from street-view and aerial perspectives. There is an opportunity to design computer vision systems that will help us search, catalog and monitor public infrastructure, buildings and artifacts. We explore the architecture and feasibility of such a system. The main technical challenge is combining test-time information from multiple views of each geographic location (e.g., aerial and street views). We implement two modules: det2geo, which detects the set of locations of objects belonging to a given category, and geo2cat, which computes the fine-grained category of the object at a given location. We introduce a solution that adapts state-of-the-art CNN-based object detectors and classifiers. We test our method on β€œPasadena Urban Trees”, a new dataset of 80,000 trees with geographic and species annotations, and show that combining multiple views significantly improves both tree detection and tree species classification, rivaling human performance.
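
    The central idea of fusing test-time evidence from several views of the same location can be illustrated with a simple late-fusion rule. The Python sketch below is an illustration under assumptions, not the paper's actual geo2cat module: it multiplies per-view softmax outputs (one aerial crop and several street-view crops of the same tree) and renormalizes to obtain a fused species distribution; all names and numbers are hypothetical.

        import numpy as np

        def fuse_multiview_scores(per_view_probs, eps=1e-12):
            # per_view_probs: (n_views, n_classes) softmax outputs for the
            # aerial crop and the street-view crops of one tree. A naive
            # Bayes-style fusion sums the per-view log-probabilities and
            # renormalizes into a single species distribution.
            log_scores = np.log(np.asarray(per_view_probs) + eps).sum(axis=0)
            log_scores -= log_scores.max()      # numerical stability
            fused = np.exp(log_scores)
            return fused / fused.sum()

        # Hypothetical example: one aerial view and two street views.
        aerial = [0.50, 0.30, 0.20]
        street_1 = [0.40, 0.45, 0.15]
        street_2 = [0.55, 0.25, 0.20]
        fused = fuse_multiview_scores([aerial, street_1, street_2])
        predicted_class = int(np.argmax(fused))  # index of the fused top species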

    닀쀑 μ„Όμ‹± ν”Œλž«νΌκ³Ό λ”₯λŸ¬λ‹μ„ ν™œμš©ν•œ λ„μ‹œ 규λͺ¨μ˜ 수λͺ© 맡핑 및 μˆ˜μ’… 탐지

    Get PDF
    Master's thesis -- Seoul National University Graduate School: College of Agriculture and Life Sciences, Department of Landscape Architecture and Rural Systems Engineering (Landscape Architecture), February 2023. Advisor: λ₯˜μ˜λ ¬. Precise estimation of the number of trees, individual tree locations, and species information across an entire city forms a solid foundation for enhancing ecosystem services. However, mapping individual trees at the city scale remains challenging due to the heterogeneous patterns of urban tree distribution. Here, we present a novel framework that merges multiple sensing platforms and leverages various deep neural networks to produce a fine-grained urban tree map. We mapped trees and detected species relying only on RGB images taken by multiple sensing platforms (airborne, citizen and vehicle), which fed six deep learning models. Since each platform has its own strengths, we divided the entire process into three steps. First, we produced individual tree location maps by converting the center points of bounding boxes detected in airborne imagery into real-world coordinates. Since many trees were obscured by building shadows, we applied a Generative Adversarial Network (GAN) to delineate hidden trees in the airborne images. Second, we used tree bark photos collected by citizens for species mapping in urban parks and forests. The species in all tree bark photos were classified automatically after non-tree parts of the images were segmented out. Third, we classified the species of roadside trees using a camera mounted on a car, augmenting our species-mapping framework with street-level tree data. We estimated the distance from the car to street trees from the number of lanes detected in the images. Finally, we assessed our results by comparing them with Light Detection and Ranging (LiDAR), GPS and field data. We estimated that over 1.2 million trees exist in the 121.04 kmΒ² city and generated more accurate individual tree positions than conventional field survey methods. Among them, we detected the species of more than 63,000 trees. The most frequently detected species was Prunus yedoensis (21.43 %), followed by Ginkgo biloba (19.44 %), Zelkova serrata (18.68 %), Pinus densiflora (7.55 %) and Metasequoia glyptostroboides (5.97 %). Comprehensive experimental results demonstrate that tree bark photos and street-level imagery taken by citizens and vehicles are conducive to delivering accurate and quantitative information on the distribution of urban tree species.
λ§Žμ€ 수λͺ©μ΄ λ„μ‹œ λ‚΄ κ³ μΈ΅ λΉŒλ”©μ˜ κ·Έλ¦Όμžμ— μ˜ν•΄ κ°€λ €μ‘ŒκΈ° λ•Œλ¬Έμ—, μš°λ¦¬λŠ” 생정적 μ λŒ€μ  신경망 (Generative Adversarial Network, GAN)을 톡해 항곡사진 상에 μˆ¨κ²¨μ§„ 수λͺ©μ„ κ·Έλ €λ‚΄κ³ μž ν•˜μ˜€λ‹€. λ‘˜μ§Έ, μš°λ¦¬λŠ” μ‹œλ―Όλ“€μ΄ μˆ˜μ§‘ν•œ 수λͺ©μ˜ μˆ˜ν”Ό 사진을 ν™œμš©ν•˜μ—¬ λ„μ‹œ 곡원 및 λ„μ‹œ 숲 μΌλŒ€μ— μˆ˜μ’… 정보λ₯Ό λ§΅ν•‘ν•˜μ˜€λ‹€. μˆ˜ν”Ό μ‚¬μ§„μœΌλ‘œλΆ€ν„°μ˜ μˆ˜μ’… μ •λ³΄λŠ” λ”₯λŸ¬λ‹ λ„€νŠΈμ›Œν¬μ— μ˜ν•΄ μžλ™μœΌλ‘œ λΆ„λ₯˜λ˜μ—ˆμœΌλ©°, 이 κ³Όμ •μ—μ„œ 이미지 λΆ„ν•  λͺ¨λΈ λ˜ν•œ μ μš©λ˜μ–΄ λ”₯λŸ¬λ‹ λΆ„λ₯˜ λͺ¨λΈμ΄ μ˜€λ‘œμ§€ μˆ˜ν”Ό λΆ€λΆ„μ—λ§Œ 집쀑할 수 μžˆλ„λ‘ ν•˜μ˜€λ‹€. μ…‹μ§Έ, μš°λ¦¬λŠ” μ°¨λŸ‰μ— νƒ‘μž¬λœ 카메라λ₯Ό ν™œμš©ν•˜μ—¬ λ„λ‘œλ³€ κ°€λ‘œμˆ˜μ˜ μˆ˜μ’…μ„ νƒμ§€ν•˜μ˜€λ‹€. 이 κ³Όμ •μ—μ„œ μ°¨λŸ‰μœΌλ‘œλΆ€ν„° κ°€λ‘œμˆ˜κΉŒμ§€μ˜ 거리 정보가 ν•„μš”ν•˜μ˜€λŠ”λ°, μš°λ¦¬λŠ” 이미지 μƒμ˜ μ°¨μ„  κ°œμˆ˜λ‘œλΆ€ν„° 거리λ₯Ό μΆ”μ •ν•˜μ˜€λ‹€. λ§ˆμ§€λ§‰μœΌλ‘œ, λ³Έ 연ꡬ κ²°κ³ΌλŠ” 라이닀 (Light Detection and Ranging, LiDAR)와 GPS μž₯λΉ„, 그리고 ν˜„μž₯ μžλ£Œμ— μ˜ν•΄ ν‰κ°€λ˜μ—ˆλ‹€. μš°λ¦¬λŠ” 121.04 kmΒ² 면적의 λŒ€μƒμ§€ 내에 μ•½ 130λ§Œμ—¬ 그루의 수λͺ©μ΄ μ‘΄μž¬ν•˜λŠ” 것을 ν™•μΈν•˜μ˜€μœΌλ©°, λ‹€μ–‘ν•œ 선행연ꡬ보닀 높은 μ •ν™•λ„μ˜ κ°œλ³„ 수λͺ© μœ„μΉ˜ 지도λ₯Ό μ œμž‘ν•˜μ˜€λ‹€. νƒμ§€λœ λͺ¨λ“  수λͺ© 쀑 μ•½ 6만 3μ²œμ—¬ 그루의 μˆ˜μ’… 정보가 νƒμ§€λ˜μ—ˆμœΌλ©°, 이쀑 κ°€μž₯ 빈번히 νƒμ§€λœ 수λͺ©μ€ μ™•λ²šλ‚˜λ¬΄ (Prunus yedoensis, 21.43 %)μ˜€λ‹€. μ€ν–‰λ‚˜λ¬΄ (Ginkgo biloba, 19.44 %), λŠν‹°λ‚˜λ¬΄ (Zelkova serrata, 18.68 %), μ†Œλ‚˜λ¬΄ (Pinus densiflora, 7.55 %), 그리고 메타세쿼이어 (Metasequoia glyptostroboides, 5.97 %) 등이 κ·Έ λ’€λ₯Ό μ΄μ—ˆλ‹€. 포괄적인 검증이 μˆ˜ν–‰λ˜μ—ˆκ³ , λ³Έ μ—°κ΅¬μ—μ„œλŠ” μ‹œλ―Όμ΄ μˆ˜μ§‘ν•œ μˆ˜ν”Ό 사진과 μ°¨λŸ‰μœΌλ‘œλΆ€ν„° μˆ˜μ§‘λœ λ„λ‘œλ³€ μ΄λ―Έμ§€λŠ” λ„μ‹œ μˆ˜μ’… 뢄포에 λŒ€ν•œ μ •ν™•ν•˜κ³  μ •λŸ‰μ μΈ 정보λ₯Ό μ œκ³΅ν•œλ‹€λŠ” 것을 κ²€μ¦ν•˜μ˜€λ‹€.1. Introduction 6 2. Methodology 9 2.1. Data collection 9 2.2. Deep learning overall 12 2.3. Tree counting and mapping 15 2.4. Tree species detection 16 2.5. Evaluation 21 3. Results 22 3.1. Evaluation of deep learning performance 22 3.2. Tree counting and mapping 23 3.3. Tree species detection 27 4. Discussion 30 4.1. Multiple sensing platforms for urban areas 30 4.2. Potential of citizen and vehicle sensors 34 4.3. Implications 48 5. Conclusion 51 Bibliography 52 Abstract in Korean 61석

    Archeological Survey For The Temple-Belton Regional Sewer System Improvement Project, Bell County, Texas

    Get PDF
    Prewitt and Associates, Inc. (PAI) was contracted by Kasberg, Patrick, and Associates to perform an intensive archeological survey prior to the proposed installation of new sewer lines, the expansion of one lift station, and the construction of another lift station in Bell County, Texas. This investigation was conducted in April 2013 in compliance with the Texas Antiquities Code. The Temple-Belton Regional Sewer System (TBRSS) Improvement Project will construct a new 1.7-mile-long (8,730-ft-long) Shallowford Force Main sewer line from the Temple-Belton Wastewater Treatment Plant on FM 93 to the Shallowford Lift Station just north of the Leon River. The project also calls for small expansions of the Shallowford Lift Station and Belton Lift Station. The survey recorded two new archeological sites, 41BL1380 and 41BL1381, and revisited one previously recorded site, 41BL260. All three sites are recommended as not eligible for listing in the National Register or for designation as State Antiquities Landmarks. The TBRSS project was put on hold in 2013 and 2014, and PAI archeologists did not conduct trenching in a 2,788-ft-long section of the force main alignment where the landowner denied right of entry. This segment in the Leon River valley has the potential to contain intact buried archeological remains in Holocene-age alluvium. Consequently, this section of the force main alignment will need to be investigated with mechanical trenching if the TBRSS project is resurrected.

    Geocoding of trees from street addresses and street-level images

    Get PDF
    We introduce an approach for updating older tree inventories with geographic coordinates using street-level panorama images and a global optimization framework for tree instance matching. Geolocations of trees in inventories until the early 2000s were recorded using street addresses, whereas newer inventories use GPS. Our method retrofits older inventories with geographic coordinates so they can be linked with newer inventories, facilitating long-term studies on tree mortality and related questions. What makes this problem challenging is the varying number of trees per street address, the heterogeneous appearance of different tree instances in the images, ambiguous tree positions when viewed from multiple images, and occlusions. To solve this assignment problem, we (i) detect trees in Google Street View panoramas using deep learning, (ii) combine multi-view detections per tree into a single representation, and (iii) match detected trees with the given trees per street address using a global optimization approach. Experiments for trees in 5 cities in California, USA, show that we are able to assign geographic coordinates to 38% of the street trees, which is a good starting point for long-term studies on the ecosystem services value of street trees at large scale.
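
    Step (iii) is a classic assignment problem. The sketch below is a simplified Python stand-in rather than the paper's actual global optimization: it builds a distance cost matrix between detected tree positions and address-level inventory records and solves it with the Hungarian algorithm; the function name, the 30 m gating threshold and the toy coordinates are assumptions for illustration.

        import numpy as np
        from scipy.optimize import linear_sum_assignment

        def match_detections_to_addresses(det_xy, addr_xy, max_dist=30.0):
            # det_xy:  (n_det, 2) projected coordinates of detected street trees.
            # addr_xy: (n_rec, 2) coordinates of the address centroid, repeated
            #          once per inventory tree recorded at that address.
            # Returns (detection_index, record_index) pairs whose matched
            # distance stays below max_dist (in the units of the coordinates).
            cost = np.linalg.norm(det_xy[:, None, :] - addr_xy[None, :, :], axis=2)
            rows, cols = linear_sum_assignment(cost)   # Hungarian algorithm
            return [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= max_dist]

        # Toy example: three detections, two inventory trees at one address.
        detections = np.array([[10.0, 5.0], [12.5, 6.0], [80.0, 40.0]])
        records = np.array([[11.0, 5.5], [11.0, 5.5]])
        pairs = match_detections_to_addresses(detections, records)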

    Individual Tree Detection in Large-Scale Urban Environments using High-Resolution Multispectral Imagery

    Full text link
    We introduce a novel deep learning method for detection of individual trees in urban environments using high-resolution multispectral aerial imagery. We use a convolutional neural network to regress a confidence map indicating the locations of individual trees, which are localized using a peak finding algorithm. Our method provides complete spatial coverage by detecting trees in both public and private spaces, and can scale to very large areas. We performed a thorough evaluation of our method, supported by a new dataset of over 1,500 images and almost 100,000 tree annotations, covering eight cities, six climate zones, and three image capture years. We trained our model on data from Southern California, and achieved a precision of 73.6% and recall of 73.3% using test data from this region. We generally observed similar precision and slightly lower recall when extrapolating to other California climate zones and image capture dates. We used our method to produce a map of trees in the entire urban forest of California, and estimated the total number of urban trees in the state to be about 43.5 million. Our study indicates the potential for deep learning methods to support future urban forestry studies at unprecedented scales.
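
    The localization step described here, regressing a confidence map and extracting its peaks, can be sketched with a simple local-maximum filter. The Python below is a minimal, generic peak finder under assumed parameter values, not necessarily the peak-finding algorithm used in the paper.

        import numpy as np
        from scipy.ndimage import maximum_filter

        def find_tree_peaks(confidence, min_conf=0.5, neighborhood=11):
            # Keep a pixel if it is the maximum of its square neighborhood
            # and its confidence exceeds min_conf; returns (row, col) indices.
            local_max = maximum_filter(confidence, size=neighborhood)
            peaks = (confidence == local_max) & (confidence >= min_conf)
            return np.argwhere(peaks)

        # Toy confidence map with two Gaussian-like bumps standing in for trees.
        yy, xx = np.mgrid[0:100, 0:100]
        conf = (np.exp(-((yy - 30) ** 2 + (xx - 40) ** 2) / 50.0)
                + np.exp(-((yy - 70) ** 2 + (xx - 75) ** 2) / 50.0))
        tree_pixels = find_tree_peaks(conf)   # two peak locations expected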

    Automatic Large Scale Detection of Red Palm Weevil Infestation using Aerial and Street View Images

    Full text link
    The spread of the Red Palm Weevil has dramatically affected date growers, homeowners and governments, forcing them to deal with a constant threat to their palm trees. Early detection of palm tree infestation has proven critical for enabling treatment that may save trees from irreversible damage, and is most commonly performed through local physical access for individual tree monitoring. Here, we present a novel method for surveillance of Red Palm Weevil-infested palm trees utilizing state-of-the-art deep learning algorithms with aerial and street-level imagery data. To detect infested palm trees, we analyzed over 100,000 aerial and street-level images, mapping the locations of palm trees in urban areas. Using this procedure, we discovered and verified infested palm trees at various locations.