448 research outputs found

    Deep-learning feature descriptor for tree bark re-identification

    Get PDF
    The ability to visually re-identify objects is a fundamental capability in vision systems. Oftentimes, it relies on collections of visual signatures based on descriptors, such as SIFT or SURF. However, these traditional descriptors were designed for a certain domain of surface appearances and geometries (limited relief). Consequently, highly-textured surfaces such as tree bark pose a challenge to them. In turn, this makes it more difficult to use trees as identifiable landmarks for navigational purposes (robotics) or to track felled lumber along a supply chain (logistics). We thus propose to use data-driven descriptors trained on bark images for tree surface re-identification. To this effect, we collected a large dataset containing 2,400 bark images with strong illumination changes, annotated by surface and with the ability to pixel-align them. We used this dataset to sample more than 2 million 64×64 pixel patches to train our novel local descriptors DeepBark and SqueezeBark. Our DeepBark method has shown a clear advantage against the hand-crafted descriptors SIFT and SURF. For instance, we demonstrated that DeepBark can reach a mAP of 87.2% when retrieving the 11 relevant bark images, i.e. those corresponding to the same physical surface, for a bark query against 7,900 images. Our work thus suggests that re-identifying tree surfaces in a challenging illumination context is possible. We also make public our dataset, which can be used to benchmark surface re-identification techniques.
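    As an illustration of the data-driven descriptor idea, below is a minimal PyTorch sketch of a patch-descriptor network trained with a triplet margin loss on 64×64 patches. The architecture, descriptor dimension, and loss are illustrative assumptions, not the authors' exact DeepBark or SqueezeBark recipe.

```python
# Minimal sketch of training a local patch descriptor with a triplet loss.
# The network, loss, and sampling here are illustrative assumptions, not
# the authors' exact DeepBark/SqueezeBark recipe.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchDescriptor(nn.Module):
    """Maps a 1x64x64 grayscale bark patch to a unit-norm 128-D descriptor."""
    def __init__(self, dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(128, dim)

    def forward(self, x):
        z = self.head(self.features(x).flatten(1))
        return F.normalize(z, dim=1)  # L2-normalize for distance-based matching

def triplet_step(net, anchor, positive, negative, margin=1.0):
    """One training step: pull matching patches together, push others apart."""
    a, p, n = net(anchor), net(positive), net(negative)
    return F.triplet_margin_loss(a, p, n, margin=margin)

# Toy usage with random tensors standing in for sampled bark patches.
net = PatchDescriptor()
a = torch.randn(8, 1, 64, 64)
loss = triplet_step(net, a, a + 0.05 * torch.randn_like(a), torch.randn(8, 1, 64, 64))
```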

    Fine-Grained Object Recognition and Zero-Shot Learning in Remote Sensing Imagery

    Full text link
    Fine-grained object recognition, which aims to identify the type of an object among a large number of subcategories, is an emerging application as increasing resolution exposes new details in image data. Traditional fully supervised algorithms fail to handle this problem, where the classes of interest have low between-class variance, high within-class variance, and small sample sizes. We study an even more extreme scenario named zero-shot learning (ZSL), in which no training example exists for some of the classes. ZSL aims to build a recognition model for new unseen categories by relating them to seen classes that were previously learned. We establish this relation by learning a compatibility function between image features extracted via a convolutional neural network and auxiliary information that describes the semantics of the classes of interest, using training samples from the seen classes. Then, we show how knowledge transfer can be performed for the unseen classes by maximizing this function during inference. We introduce a new data set that contains 40 different types of street trees in 1-ft spatial resolution aerial data, and evaluate the performance of this model with manually annotated attributes, a natural language model, and a scientific taxonomy as auxiliary information. The experiments show that the proposed model achieves 14.3% recognition accuracy for the classes with no training examples, which is significantly better than the 6.3% random-guess accuracy for the 16 test classes, and outperforms three other ZSL algorithms.
    Comment: G. Sumbul, R. G. Cinbis, S. Aksoy, "Fine-Grained Object Recognition and Zero-Shot Learning in Remote Sensing Imagery", IEEE Transactions on Geoscience and Remote Sensing (TGRS), in press, 201
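    The compatibility-function idea can be made concrete with a small NumPy sketch: a bilinear score between CNN image features and auxiliary class embeddings, maximized over unseen-class embeddings at inference. All dimensions, names, and the random toy data are hypothetical, not the paper's exact formulation.

```python
# Hedged sketch of a bilinear compatibility function for zero-shot learning:
# score(image, class) = f(image)^T W a(class), with W learned on seen classes
# and the score maximized over unseen-class embeddings at inference.
import numpy as np

def scores(W, img_feat, class_embeds):
    """Compatibility of one image feature with every class embedding."""
    return img_feat @ W @ class_embeds.T          # shape: (n_classes,)

def predict_unseen(W, img_feat, unseen_embeds, unseen_labels):
    """Knowledge transfer: pick the unseen class maximizing compatibility."""
    return unseen_labels[np.argmax(scores(W, img_feat, unseen_embeds))]

# Toy usage: 2048-D CNN features, 50-D auxiliary class embeddings (hypothetical).
rng = np.random.default_rng(0)
W = rng.normal(size=(2048, 50)) * 0.01            # would be learned on seen classes
x = rng.normal(size=2048)                         # CNN feature of a test image
A_unseen = rng.normal(size=(16, 50))              # embeddings of 16 unseen classes
print(predict_unseen(W, x, A_unseen, np.arange(16)))
```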

    Forest canopy mortality during the 2018-2020 summer drought years in Central Europe: The application of a deep learning approach on aerial images across Luxembourg

    Get PDF
    Efficient monitoring of tree canopy mortality requires data that cover large areas and capture changes over time while being precise enough to detect changes at the canopy level. In the development of automated approaches, aerial images represent an under-exploited scale between high-resolution drone images and satellite data. Our aim herein was to use a deep learning model to automatically detect canopy mortality from high-resolution aerial images after the severe drought events in the summers of 2018–2020 in Luxembourg. We analysed canopy mortality for the years 2017–2020 using EfficientUNet++, a state-of-the-art convolutional neural network. Training data were acquired for the years 2017 and 2019 only, in order to test the robustness of the model for years with no reference data. We found a severe increase in canopy mortality, from 0.64 km² in 2017 to 7.49 km² in 2020, with conifers being affected at a much higher rate than broadleaf trees. The model was able to classify canopy mortality with an F1-score of 66%–71%, and we found that for years without training data, we were able to transfer the model trained on other years to predict canopy mortality, provided illumination conditions did not deviate severely. We conclude that aerial images hold much potential for automated regular monitoring of canopy mortality over large areas at the canopy level when analysed with deep learning approaches. We consider the suggested approach a cost-efficient and effective alternative to drone- and field-based sampling.
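    For reference, the F1-score reported above can be computed pixel-wise between a predicted and a reference binary mortality mask, as in this short sketch (the study's exact aggregation across tiles or years may differ):

```python
# Pixel-wise F1-score between a predicted and a reference binary mask.
import numpy as np

def f1_score(pred: np.ndarray, ref: np.ndarray) -> float:
    """F1 = 2*TP / (2*TP + FP + FN) for binary masks of equal shape."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    tp = np.logical_and(pred, ref).sum()
    fp = np.logical_and(pred, ~ref).sum()
    fn = np.logical_and(~pred, ref).sum()
    # Convention: F1 is 1.0 when both masks are empty (nothing to detect).
    return 2 * tp / (2 * tp + fp + fn) if (tp + fp + fn) else 1.0

pred = np.array([[1, 0], [1, 1]])
ref  = np.array([[1, 0], [0, 1]])
print(f"F1 = {f1_score(pred, ref):.2f}")  # F1 = 0.80
```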

    SOFTWARE FOR FOREST SPECIES RECOGNITION BASED ON DIGITAL IMAGES OF WOOD

    Get PDF
    Classifying forest species is an essential process for the correct management of wood and for forest control. After the tree trunk is cut, many of the characteristics of the species are lost, and identifying them becomes a much more difficult task. In this context, an anatomical analysis of the wood becomes necessary, which must be performed by specialists who know the cellular structures of each species very well. However, this methodology involves few automated techniques, making it a slow and error-prone activity. These factors undermine environmental control and decision-making. Computer vision is an alternative for automatic recognition, since it allows the development of intelligent systems that can detect features in images and perform a final classification. One such technique is the use of Convolutional Neural Networks, which represent the state of the art in this area and enable the construction of models capable of interpreting patterns in images. This research reports experiments using convolutional neural networks to recognize forest species from digital images. Two original datasets were used, one with macroscopic images and the other with microscopic images, from which three models were created: scale recognition, species recognition from macroscopic images, and species recognition from microscopic images. The best models achieve recognition rates of 100% on the scale dataset, 98.73% on the macroscopic dataset, and 99.11% on the microscopic dataset, which made it possible to develop software as a final product using these three models.
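    A common way to build such image classifiers is transfer learning from a pretrained CNN; the sketch below shows one plausible setup. The backbone, input size, and species count are assumptions, not the actual models behind the reported recognition rates.

```python
# Hedged sketch of species classification from wood images via transfer
# learning. Backbone, input size, and class count are assumptions.
import torch
import torch.nn as nn
from torchvision import models

def build_species_classifier(num_species: int) -> nn.Module:
    """ResNet-18 pretrained on ImageNet, with a new classification head."""
    net = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    net.fc = nn.Linear(net.fc.in_features, num_species)
    return net

model = build_species_classifier(num_species=41)       # hypothetical class count
model.eval()
with torch.no_grad():
    logits = model(torch.randn(1, 3, 224, 224))        # one macroscopic image
print(logits.argmax(dim=1))                            # predicted species index
```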

    ๋‹ค์ค‘ ์„ผ์‹ฑ ํ”Œ๋žซํผ๊ณผ ๋”ฅ๋Ÿฌ๋‹์„ ํ™œ์šฉํ•œ ๋„์‹œ ๊ทœ๋ชจ์˜ ์ˆ˜๋ชฉ ๋งตํ•‘ ๋ฐ ์ˆ˜์ข… ํƒ์ง€

    Get PDF
    ํ•™์œ„๋…ผ๋ฌธ(์„์‚ฌ) -- ์„œ์šธ๋Œ€ํ•™๊ต๋Œ€ํ•™์› : ๋†์—…์ƒ๋ช…๊ณผํ•™๋Œ€ํ•™ ์ƒํƒœ์กฐ๊ฒฝยท์ง€์—ญ์‹œ์Šคํ…œ๊ณตํ•™๋ถ€(์ƒํƒœ์กฐ๊ฒฝํ•™), 2023. 2. ๋ฅ˜์˜๋ ฌ.Precise estimation of the number of trees and individual tree location with species information all over the city forms solid foundation for enhancing ecosystem service. However, mapping individual trees at the city scale remains challenging due to heterogeneous patterns of urban tree distribution. Here, we present a novel framework for merging multiple sensing platforms with leveraging various deep neural networks to produce a fine-grained urban tree map. We performed mapping trees and detecting species by relying only on RGB images taken by multiple sensing platforms such as airborne, citizens and vehicles, which fueled six deep learning models. We divided the entire process into three steps, since each platform has its own strengths. First, we produced individual tree location maps by converting the central points of the bounding boxes into actual coordinates from airborne imagery. Since many trees were obscured by the shadows of the buildings, we applied Generative Adversarial Network (GAN) to delineate hidden trees from the airborne images. Second, we selected tree bark photos collected by citizen for species mapping in urban parks and forests. Species information of all tree bark photos were automatically classified after non-tree parts of images were segmented. Third, we classified species of roadside trees by using a camera mounted on a car to augment our species mapping framework with street-level tree data. We estimated the distance from a car to street trees from the number of lanes detected from the images. Finally, we assessed our results by comparing it with Light Detection and Ranging (LiDAR), GPS and field data. We estimated over 1.2 million trees existed in the city of 121.04 kmยฒ and generated more accurate individual tree positions, outperforming the conventional field survey methods. Among them, we detected the species of more than 63,000 trees. The most frequently detected species was Prunus yedoensis (21.43 %) followed by Ginkgo biloba (19.44 %), Zelkova serrata (18.68 %), Pinus densiflora (7.55 %) and Metasequoia glyptostroboides (5.97 %). Comprehensive experimental results demonstrate that tree bark photos and street-level imagery taken by citizens and vehicles are conducive to delivering accurate and quantitative information on the distribution of urban tree species.๋„์‹œ ์ „์—ญ์— ์กด์žฌํ•˜๋Š” ๋ชจ๋“  ์ˆ˜๋ชฉ์˜ ์ˆซ์ž์™€ ๊ฐœ๋ณ„ ์œ„์น˜, ๊ทธ๋ฆฌ๊ณ  ์ˆ˜์ข… ๋ถ„ํฌ๋ฅผ ์ •ํ™•ํ•˜๊ฒŒ ํŒŒ์•…ํ•˜๋Š” ๊ฒƒ์€ ์ƒํƒœ๊ณ„ ์„œ๋น„์Šค๋ฅผ ํ–ฅ์ƒ์‹œํ‚ค๊ธฐ ์œ„ํ•œ ํ•„์ˆ˜์กฐ๊ฑด์ด๋‹ค. ํ•˜์ง€๋งŒ, ๋„์‹œ์—์„œ๋Š” ์ˆ˜๋ชฉ์˜ ๋ถ„ํฌ๊ฐ€ ๋งค์šฐ ๋ณต์žกํ•˜๊ธฐ ๋•Œ๋ฌธ์— ๊ฐœ๋ณ„ ์ˆ˜๋ชฉ์„ ๋งตํ•‘ํ•˜๋Š” ๊ฒƒ์€ ์–ด๋ ค์› ๋‹ค. ๋ณธ ์—ฐ๊ตฌ์—์„œ๋Š”, ์—ฌ๋Ÿฌ๊ฐ€์ง€ ์„ผ์‹ฑ ํ”Œ๋žซํผ์„ ์œตํ•ฉํ•จ๊ณผ ๋™์‹œ์— ๋‹ค์–‘ํ•œ ๋”ฅ๋Ÿฌ๋‹ ๋„คํŠธ์›Œํฌ๋“ค์„ ํ™œ์šฉํ•˜์—ฌ ์„ธ๋ฐ€ํ•œ ๋„์‹œ ์ˆ˜๋ชฉ ์ง€๋„๋ฅผ ์ œ์ž‘ํ•˜๋Š” ์ƒˆ๋กœ์šด ํ”„๋ ˆ์ž„์›Œํฌ๋ฅผ ์ œ์•ˆํ•œ๋‹ค. ์šฐ๋ฆฌ๋Š” ์˜ค์ง ํ•ญ๊ณต์‚ฌ์ง„, ์‹œ๋ฏผ, ์ฐจ๋Ÿ‰ ๋“ฑ์˜ ํ”Œ๋žซํผ์œผ๋กœ๋ถ€ํ„ฐ ์ˆ˜์ง‘๋œ RGB ์ด๋ฏธ์ง€๋งŒ์„ ์‚ฌ์šฉํ•˜์˜€์œผ๋ฉฐ, 6๊ฐ€์ง€ ๋”ฅ๋Ÿฌ๋‹ ๋ชจ๋ธ์„ ํ™œ์šฉํ•˜์—ฌ ์ˆ˜๋ชฉ์„ ๋งตํ•‘ํ•˜๊ณ  ์ˆ˜์ข…์„ ํƒ์ง€ํ•˜์˜€๋‹ค. ๊ฐ๊ฐ์˜ ํ”Œ๋žซํผ์€ ์ €๋งˆ๋‹ค์˜ ๊ฐ•์ ์ด ์žˆ๊ธฐ ๋•Œ๋ฌธ์— ์ „ ๊ณผ์ •์„ ์„ธ ๊ฐ€์ง€ ์Šคํ…์œผ๋กœ ๊ตฌ๋ถ„ํ•  ์ˆ˜ ์žˆ๋‹ค. ์ฒซ์งธ, ์šฐ๋ฆฌ๋Š” ํ•ญ๊ณต์‚ฌ์ง„ ์ƒ์—์„œ ํƒ์ง€๋œ ์ˆ˜๋ชฉ์˜ ๋”ฅ๋Ÿฌ๋‹ ๋ฐ”์šด๋”ฉ ๋ฐ•์Šค๋กœ๋ถ€ํ„ฐ ์ค‘์‹ฌ์ ์„ ์ถ”์ถœํ•˜์—ฌ ๊ฐœ๋ณ„ ์ˆ˜๋ชฉ์˜ ์œ„์น˜ ์ง€๋„๋ฅผ ์ œ์ž‘ํ•˜์˜€๋‹ค. 
๋งŽ์€ ์ˆ˜๋ชฉ์ด ๋„์‹œ ๋‚ด ๊ณ ์ธต ๋นŒ๋”ฉ์˜ ๊ทธ๋ฆผ์ž์— ์˜ํ•ด ๊ฐ€๋ ค์กŒ๊ธฐ ๋•Œ๋ฌธ์—, ์šฐ๋ฆฌ๋Š” ์ƒ์ •์  ์ ๋Œ€์  ์‹ ๊ฒฝ๋ง (Generative Adversarial Network, GAN)์„ ํ†ตํ•ด ํ•ญ๊ณต์‚ฌ์ง„ ์ƒ์— ์ˆจ๊ฒจ์ง„ ์ˆ˜๋ชฉ์„ ๊ทธ๋ ค๋‚ด๊ณ ์ž ํ•˜์˜€๋‹ค. ๋‘˜์งธ, ์šฐ๋ฆฌ๋Š” ์‹œ๋ฏผ๋“ค์ด ์ˆ˜์ง‘ํ•œ ์ˆ˜๋ชฉ์˜ ์ˆ˜ํ”ผ ์‚ฌ์ง„์„ ํ™œ์šฉํ•˜์—ฌ ๋„์‹œ ๊ณต์› ๋ฐ ๋„์‹œ ์ˆฒ ์ผ๋Œ€์— ์ˆ˜์ข… ์ •๋ณด๋ฅผ ๋งตํ•‘ํ•˜์˜€๋‹ค. ์ˆ˜ํ”ผ ์‚ฌ์ง„์œผ๋กœ๋ถ€ํ„ฐ์˜ ์ˆ˜์ข… ์ •๋ณด๋Š” ๋”ฅ๋Ÿฌ๋‹ ๋„คํŠธ์›Œํฌ์— ์˜ํ•ด ์ž๋™์œผ๋กœ ๋ถ„๋ฅ˜๋˜์—ˆ์œผ๋ฉฐ, ์ด ๊ณผ์ •์—์„œ ์ด๋ฏธ์ง€ ๋ถ„ํ•  ๋ชจ๋ธ ๋˜ํ•œ ์ ์šฉ๋˜์–ด ๋”ฅ๋Ÿฌ๋‹ ๋ถ„๋ฅ˜ ๋ชจ๋ธ์ด ์˜ค๋กœ์ง€ ์ˆ˜ํ”ผ ๋ถ€๋ถ„์—๋งŒ ์ง‘์ค‘ํ•  ์ˆ˜ ์žˆ๋„๋ก ํ•˜์˜€๋‹ค. ์…‹์งธ, ์šฐ๋ฆฌ๋Š” ์ฐจ๋Ÿ‰์— ํƒ‘์žฌ๋œ ์นด๋ฉ”๋ผ๋ฅผ ํ™œ์šฉํ•˜์—ฌ ๋„๋กœ๋ณ€ ๊ฐ€๋กœ์ˆ˜์˜ ์ˆ˜์ข…์„ ํƒ์ง€ํ•˜์˜€๋‹ค. ์ด ๊ณผ์ •์—์„œ ์ฐจ๋Ÿ‰์œผ๋กœ๋ถ€ํ„ฐ ๊ฐ€๋กœ์ˆ˜๊นŒ์ง€์˜ ๊ฑฐ๋ฆฌ ์ •๋ณด๊ฐ€ ํ•„์š”ํ•˜์˜€๋Š”๋ฐ, ์šฐ๋ฆฌ๋Š” ์ด๋ฏธ์ง€ ์ƒ์˜ ์ฐจ์„  ๊ฐœ์ˆ˜๋กœ๋ถ€ํ„ฐ ๊ฑฐ๋ฆฌ๋ฅผ ์ถ”์ •ํ•˜์˜€๋‹ค. ๋งˆ์ง€๋ง‰์œผ๋กœ, ๋ณธ ์—ฐ๊ตฌ ๊ฒฐ๊ณผ๋Š” ๋ผ์ด๋‹ค (Light Detection and Ranging, LiDAR)์™€ GPS ์žฅ๋น„, ๊ทธ๋ฆฌ๊ณ  ํ˜„์žฅ ์ž๋ฃŒ์— ์˜ํ•ด ํ‰๊ฐ€๋˜์—ˆ๋‹ค. ์šฐ๋ฆฌ๋Š” 121.04 kmยฒ ๋ฉด์ ์˜ ๋Œ€์ƒ์ง€ ๋‚ด์— ์•ฝ 130๋งŒ์—ฌ ๊ทธ๋ฃจ์˜ ์ˆ˜๋ชฉ์ด ์กด์žฌํ•˜๋Š” ๊ฒƒ์„ ํ™•์ธํ•˜์˜€์œผ๋ฉฐ, ๋‹ค์–‘ํ•œ ์„ ํ–‰์—ฐ๊ตฌ๋ณด๋‹ค ๋†’์€ ์ •ํ™•๋„์˜ ๊ฐœ๋ณ„ ์ˆ˜๋ชฉ ์œ„์น˜ ์ง€๋„๋ฅผ ์ œ์ž‘ํ•˜์˜€๋‹ค. ํƒ์ง€๋œ ๋ชจ๋“  ์ˆ˜๋ชฉ ์ค‘ ์•ฝ 6๋งŒ 3์ฒœ์—ฌ ๊ทธ๋ฃจ์˜ ์ˆ˜์ข… ์ •๋ณด๊ฐ€ ํƒ์ง€๋˜์—ˆ์œผ๋ฉฐ, ์ด์ค‘ ๊ฐ€์žฅ ๋นˆ๋ฒˆํžˆ ํƒ์ง€๋œ ์ˆ˜๋ชฉ์€ ์™•๋ฒš๋‚˜๋ฌด (Prunus yedoensis, 21.43 %)์˜€๋‹ค. ์€ํ–‰๋‚˜๋ฌด (Ginkgo biloba, 19.44 %), ๋Šํ‹ฐ๋‚˜๋ฌด (Zelkova serrata, 18.68 %), ์†Œ๋‚˜๋ฌด (Pinus densiflora, 7.55 %), ๊ทธ๋ฆฌ๊ณ  ๋ฉ”ํƒ€์„ธ์ฟผ์ด์–ด (Metasequoia glyptostroboides, 5.97 %) ๋“ฑ์ด ๊ทธ ๋’ค๋ฅผ ์ด์—ˆ๋‹ค. ํฌ๊ด„์ ์ธ ๊ฒ€์ฆ์ด ์ˆ˜ํ–‰๋˜์—ˆ๊ณ , ๋ณธ ์—ฐ๊ตฌ์—์„œ๋Š” ์‹œ๋ฏผ์ด ์ˆ˜์ง‘ํ•œ ์ˆ˜ํ”ผ ์‚ฌ์ง„๊ณผ ์ฐจ๋Ÿ‰์œผ๋กœ๋ถ€ํ„ฐ ์ˆ˜์ง‘๋œ ๋„๋กœ๋ณ€ ์ด๋ฏธ์ง€๋Š” ๋„์‹œ ์ˆ˜์ข… ๋ถ„ํฌ์— ๋Œ€ํ•œ ์ •ํ™•ํ•˜๊ณ  ์ •๋Ÿ‰์ ์ธ ์ •๋ณด๋ฅผ ์ œ๊ณตํ•œ๋‹ค๋Š” ๊ฒƒ์„ ๊ฒ€์ฆํ•˜์˜€๋‹ค.1. Introduction 6 2. Methodology 9 2.1. Data collection 9 2.2. Deep learning overall 12 2.3. Tree counting and mapping 15 2.4. Tree species detection 16 2.5. Evaluation 21 3. Results 22 3.1. Evaluation of deep learning performance 22 3.2. Tree counting and mapping 23 3.3. Tree species detection 27 4. Discussion 30 4.1. Multiple sensing platforms for urban areas 30 4.2. Potential of citizen and vehicle sensors 34 4.3. Implications 48 5. Conclusion 51 Bibliography 52 Abstract in Korean 61์„