Aerial Semantic Mapping for Precision Agriculture using Multispectral Imagery
Constant technological evolution now serves many needs and daily tasks in our society. In particular, drones, with their wide view of the terrain surface, can collect large amounts of image data with high efficiency, performance and accuracy.
The main purpose of this master's dissertation is the analysis, classification and mapping of different terrain types and characteristics using multispectral imagery.
Solar radiation reflected from the surface is captured by the different lenses of the multispectral camera used (a RedEdge-M, made by MicaSense). Each of its five lenses captures a different colour band (Blue, Green, Red, Near-Infrared and RedEdge). Various spectral indices can then be computed from the collected imagery by combining the coloured bands (e.g. NDVI, ENDVI, RDVI).
The project develops a ROS (Robot Operating System) framework capable of correcting the captured imagery and computing the implemented spectral indices. Several parametrisations of terrain analysis were carried out throughout the project, and the resulting information was represented in layered semantic maps (e.g. vegetation, water, soil, rocks).
The experimental results were validated within several projects under PDR2020, with success rates between 70% and 90%.
The framework has multiple technical applications, not only in precision agriculture but also in autonomous vehicle navigation and multi-robot cooperation.
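Spectral indices such as NDVI are simple per-pixel combinations of band reflectances; a minimal sketch (the function name and reflectance values are illustrative, not the framework's actual API):

```python
def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).
    `eps` guards against division by zero on dark pixels."""
    return (nir - red) / (nir + red + eps)

# Illustrative per-pixel reflectances in [0, 1]:
vegetation = ndvi(0.45, 0.08)   # healthy vegetation reflects strongly in NIR
bare_soil  = ndvi(0.30, 0.25)   # soil: NIR and Red similar, NDVI near zero
```

Applied to whole band images, the same formula yields the vegetation layer of a semantic map by thresholding the result.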
Vision-Based Autonomous Human Tracking Mobile Robot
Tracking moving objects is one of the most important yet problematic aspects of motion analysis and understanding. For robots to interact effectively with people in close proximity, they must first be able to detect, track, and follow people. Following a human with a mobile robot arises in many different service robotics applications. This paper proposes an autonomous human-tracking mobile robot that can handle the occlusion problem during tracking. The robot tracks a human efficiently by analysing the information obtained from a camera mounted on top of the robot. The system performs human detection using Histogram of Oriented Gradients (HOG) features and a Support Vector Machine (SVM) classifier, and then uses the HSV (Hue, Saturation, Value) colour space to identify strangers. If the detected human is a stranger, the robot begins tracking. During this process the robot must not lose the target, so a Kalman filter is used: it can estimate the target's position even when the person is occluded by walls or other obstacles. This paper describes the processing and experimental results of a mobile robot that tracks an unmarked human efficiently and handles occlusion using a vision sensor and a Kalman filter.
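The occlusion handling described above rests on the Kalman filter's predict step, which keeps extrapolating the target's motion while no measurement arrives. A minimal one-dimensional constant-velocity sketch (the paper's actual state model and noise settings are not given, so the values here are illustrative):

```python
class Kalman1D:
    """Constant-velocity Kalman filter for one image coordinate of the target."""

    def __init__(self, pos, q=1.0, r=4.0):
        self.x = [pos, 0.0]                   # state: [position, velocity]
        self.P = [[10.0, 0.0], [0.0, 10.0]]   # state covariance
        self.q, self.r = q, r                 # process / measurement noise

    def predict(self, dt=1.0):
        # x <- F x with F = [[1, dt], [0, 1]]; P <- F P F' + Q
        self.x = [self.x[0] + dt * self.x[1], self.x[1]]
        (p00, p01), (p10, p11) = self.P
        self.P = [[p00 + dt * (p01 + p10) + dt * dt * p11 + self.q, p01 + dt * p11],
                  [p10 + dt * p11, p11 + self.q]]
        return self.x[0]

    def update(self, z):
        # measurement is position only: H = [1, 0]
        s = self.P[0][0] + self.r                     # innovation covariance
        k0, k1 = self.P[0][0] / s, self.P[1][0] / s   # Kalman gain
        y = z - self.x[0]                             # innovation
        self.x = [self.x[0] + k0 * y, self.x[1] + k1 * y]
        (p00, p01), (p10, p11) = self.P
        self.P = [[(1 - k0) * p00, (1 - k0) * p01],
                  [p10 - k1 * p00, p11 - k1 * p01]]

kf = Kalman1D(pos=100.0)
for z in (102.0, 104.0, 106.0):   # target walks to the right
    kf.predict()
    kf.update(z)
predicted = kf.predict()          # occluded: keep predicting without a measurement
```

While the detector reports positions, predict/update alternate; during occlusion only predict runs, so the estimate keeps moving with the learned velocity until the person reappears.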
Panoramic Video Stitching
Digital camera and smartphone technologies have made high-quality images and video pervasive and abundant. Combining, or stitching, collections of images from a variety of viewpoints into an extended panoramic image is a common and popular function for such devices. Extending this functionality to video, however, poses many new challenges due to the demand for both spatial and temporal continuity. Multi-view video stitching (also called panoramic video stitching) is an emerging research area in computer vision, image/video processing and computer graphics, with wide applications in virtual reality, virtual tourism, surveillance, and human-computer interaction. In this thesis, I explore the technical and practical problems in the complete process of stitching a high-resolution multi-view video into a high-resolution panoramic video. The challenges addressed include video stabilization, efficient multi-view video alignment and panoramic video stitching, color correction, and blurred frame detection and repair.
Specifically, I propose a continuity aware Kalman filtering scheme for rotation angles for video stabilization and jitter removal. For efficient stitching of long, high-resolution panoramic videos, I propose constrained and multigrid SIFT matching schemes, concatenated image projection and warping and min-space feathering. These three approaches together can greatly reduce the computational time and memory requirement in panoramic video stitching, which makes it feasible to stitch high-resolution (e.g., 1920x1080 pixels) and long panoramic video sequences using standard workstations.
Color correction is the emphasis of my research. On this topic I first performed a systematic survey and performance evaluation of nine state-of-the-art color correction approaches in the context of two-view image stitching. My evaluation work not only gives useful insights and conclusions about the relative performance of these approaches, but also points out the remaining challenges and possible directions for future color correction research. Based on the conclusions from this evaluation work, I proposed a hybrid and scalable color correction approach for general n-view image stitching, and designed a two-view video color correction approach for panoramic video stitching.
For blurred frame detection and repair, I have completed preliminary work on image partial-blur detection and classification, in which I proposed an SVM-based blur block classifier using improved and new local blur features. Then, based on the partial-blur classification results, I designed a statistical thresholding scheme for blurred frame identification. I repaired the detected blurred frames using polynomial data fitting from neighboring unblurred frames.
Many of the techniques and ideas in this thesis are novel and general solutions to technical or practical problems in panoramic video stitching. At the end of the thesis, I summarize the contributions made to the research and popularization of panoramic video stitching, and describe the remaining open research issues.
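The statistical thresholding scheme for blurred-frame identification can be sketched generically: score each frame's sharpness, then flag frames that fall well below the sequence mean. The scores and the k = 2 cut-off below are illustrative, not the thesis's actual blur features:

```python
from statistics import mean, stdev

def blurred_frames(sharpness, k=2.0):
    """Flag frames whose sharpness falls k standard deviations below the mean.
    `sharpness` holds one score per frame (e.g. gradient energy); higher = sharper."""
    mu, sigma = mean(sharpness), stdev(sharpness)
    threshold = mu - k * sigma
    return [i for i, s in enumerate(sharpness) if s < threshold]

scores = [10.2, 9.8, 10.5, 2.1, 10.1, 9.9, 10.3]   # frame 3 is blurred
print(blurred_frames(scores))   # → [3]
```

The flagged frames would then be repaired by fitting a polynomial through the corresponding pixels of neighbouring unblurred frames.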
An Infrared Source Catalogue of the North Ecliptic Pole Region Observed with the AKARI Space Telescope and the Mid-Infrared Luminosity Function of Nearby Galaxies
Doctoral dissertation, Department of Physics and Astronomy (Astronomy major), Seoul National University, February 2013. Advisor: Hyung Mok Lee. The AKARI infrared space telescope successfully carried out one of its missions, a survey of the North Ecliptic Pole (NEP) region, producing a large volume of data. This study processes these observational data and, based on a reliable analysis, investigates the luminosity distribution of relatively nearby (local) galaxies to discuss galaxy evolution. AKARI observed an area of roughly 5.4 sq. deg centred on the NEP through the nine photometric bands of its InfraRed Camera (IRC). To understand the various extragalactic sources observed in the near-infrared (NIR) to mid-infrared (MIR) bands, a large number (~100,000) of sources were examined photometrically, an infrared source catalogue was built, and their statistical properties were studied. Extensive effort went into removing spurious signals and sources, such as cosmic-ray hits and the mux-bleeding effect, arising both from the nature of astronomical observation and from the characteristics of the infrared wavelength range and the instruments. The number of detected sources varies with filter band: about 100,000 in the NIR but roughly 15,000 in the MIR, with 5-σ detection limits of about 21 mag (AB) in the NIR and 19.5–18.5 mag (AB) in the MIR. To validate the detections, spurious sources were screened out as far as possible by cross-matching sources across all bands, including the available ancillary data. The final band-merged catalogue lists about 114,800 sources, including Galactic stars as well as various extragalactic objects. Optical (r', R) stellarity and magnitude provide a good criterion for statistically separating Galactic stars, and it is shown that the statistical properties and classification of galaxies can be discussed efficiently using AKARI's NIR and MIR colours. Using the sources with spectroscopic redshifts (z), the infrared luminosity distribution of nearby (z < 0.3) galaxies was investigated. Nearby galaxies appear to be dominated by ordinary galaxies, with few very luminous ones. Measuring the MIR luminosity function with the widely used 1/Vmax method yields results that agree well at the bright end with recently reported studies. Compared with the NEP-Deep results (Goto et al. 2010), this can be interpreted as an observational signature of luminosity evolution with z, suggesting that galaxies follow a down-sizing evolutionary pattern all the way to the local universe. At the faint end, however, the results carry uncertainties that make trends hard to judge, being affected by the detection probability of relatively faint galaxies, sample selection bias, and the photometric errors of faint sources; these issues must be treated carefully in luminosity-function studies.
Abstract
List of Figures
List of Tables
1 Introduction
1.1 AKARI space telescope
1.2 The North Ecliptic Pole (NEP) Survey
1.2.1 NEP-Wide survey
1.2.2 Survey Strategies for NEP-Wide Field
1.3 Purpose of the Present Study
2 Data Characteristics of NEP-Wide survey
2.1 Data Reduction Methodology
2.1.1 Standard Reduction with IRC pipeline
2.1.2 Astrometry
2.1.3 Post-processing and image correction
2.2 Photometric Properties of Data
2.2.1 Source detection and photometry
2.2.2 Detection Limits and Completeness
3 Construction of the Point Source Catalogue
3.1 Confirmation of the Sources
3.1.1 Supplementary data
3.1.2 Overview of the source matching
3.1.3 Summary of the source matching
3.2 Point Source Catalog
3.2.1 Band-merging to catalog
3.2.2 Catalog format
3.3 Nature of the sources
3.3.1 Number counts and the source matching ratio
3.3.2 Color-Color and Color-Magnitude diagrams
4 Mid-infrared Luminosity Function of Local Galaxies
4.1 Introduction
4.2 Galaxy sample
4.3 SED fitting and SFG/AGN separation
4.4 Luminosity Function Procedure
4.4.1 K-correction for AKARI bands
4.4.2 1/Vmax method
4.4.3 Completeness and the selection function
4.5 Results and Discussion
4.5.1 8μm luminosity function
4.5.2 AKARI Mid-IR bands luminosity functions
5 Summary and Future Studies
References
Summary (in Korean)
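The 1/Vmax estimator (Section 4.4.2) weights each galaxy by the inverse of the maximum volume within which it would still exceed the survey's flux limit, so rare bright sources and common faint ones contribute on a common footing. A schematic sketch, with the cosmology simplified to a toy Euclidean volume (the function and arguments are illustrative, not the thesis's code):

```python
from math import pi

def vmax_lf(luminosities, d_max, bins, survey_area_frac=1.0):
    """Toy 1/Vmax luminosity function.
    d_max[i]: maximum distance at which source i still exceeds the flux limit.
    bins: list of (L_low, L_high) luminosity bin edges.
    Returns the number density per bin (sources per unit volume)."""
    phi = []
    for lo, hi in bins:
        total = 0.0
        for L, d in zip(luminosities, d_max):
            if lo <= L < hi:
                # Euclidean Vmax; a real analysis uses comoving volume,
                # k-corrections and the completeness/selection function.
                vmax = survey_area_frac * (4.0 / 3.0) * pi * d ** 3
                total += 1.0 / vmax   # each source contributes 1/Vmax
        phi.append(total)
    return phi

# Three toy galaxies: one faint source detectable to 10, two brighter ones to 20.
phi = vmax_lf([1.0, 2.0, 2.5], [10.0, 20.0, 20.0], [(0.5, 1.5), (1.5, 3.0)])
```

Faint sources get small Vmax and hence large weights, which is how the estimator corrects for them being detectable in only a small part of the surveyed volume.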
Hyperspectral Imaging for Fine to Medium Scale Applications in Environmental Sciences
The aim of the Special Issue “Hyperspectral Imaging for Fine to Medium Scale Applications in Environmental Sciences” was to present a selection of innovative studies using hyperspectral imaging (HSI) in different thematic fields. This intention reflects the technical developments of the last three decades, which have given HSI the capacity to provide spectrally, spatially and temporally detailed data, favoured by, e.g., hyperspectral snapshot technologies, miniaturized hyperspectral sensors and hyperspectral microscopy imaging. The present book comprises a suite of papers in various fields of environmental sciences: geology/mineral exploration, digital soil mapping, mapping and characterization of vegetation, and sensing of water bodies (including under-ice and underwater applications). In addition, two more methodically/technically oriented contributions deal with the optimized processing of UAV data and with the design and test of a multi-channel optical receiver for ground-based applications. All in all, this compilation documents that HSI is a multi-faceted research topic and will remain so in the future.
Artificial intelligence and center pivot irrigation systems: development of strategies and techniques to improve automatic mapping
Doctoral thesis, Universidade de Brasília, Instituto de Ciências Humanas, Departamento de Geografia, Programa de Pós-Graduação em Geografia, 2022.
Irrigation is primarily responsible for increasing crop productivity. Center pivot irrigation
systems (CPIS) are leaders in mechanized irrigation in Brazil, with significant growth in recent
decades and a projected increase of more than 134% in area by 2040. The most used method
for identifying CPIS is based on visual interpretation and manual mapping of circular
features, making the task time-consuming and laborious. In this context, methods based on
Deep Learning (DL) have great potential in the classification of remote sensing images, using
Convolutional Neural Networks (CNNs). The use of Deep Learning has revolutionized
image classification, surpassing traditional methods and achieving greater precision and
efficiency, allowing regional and continuous monitoring with low cost and agility. This research
aimed to apply DL techniques using CNN-based algorithms to identify CPIS in remote
sensing images. The present work was divided into three main chapters: (a) identification of
CPIS in Landsat-8/OLI images, using semantic segmentation with three CNN algorithms (U-Net,
Deep ResUnet and SharpMask); (b) CPIS detection using instance segmentation of multitemporal
Sentinel-1/SAR images (two polarizations, VV and VH) with the Mask-RCNN
algorithm and the ResNeXt-101-32x8d backbone; and (c) CPIS detection using multitemporal
Sentinel-2/MSI images with different percentages of cloud cover and instance segmentation
using Mask-RCNN, with the ResNeXt-101 backbone. The methodological steps were different
between the chapters and all presented high metric values and great CPIS detection capacity.
The classifications using Landsat-8/OLI images and the U-Net, Deep ResUnet and SharpMask
algorithms achieved Kappa coefficients of 0.96, 0.95 and 0.92, respectively. Classifications using
Sentinel-1/SAR images showed better metrics in the combination of the two VV+VH
polarizations (75%AP, 91%AP50 and 86%AP75). The classification of Sentinel-2/MSI images
with clouds presented metrics in the set of 6 images without clouds (80%AP and 93%AP50)
very close to the values of the set of images with an extreme cloud scenario (74%AP and
88%AP50), demonstrating that the use of multitemporal images increases the predictive power
in learning. A significant contribution of the research was the proposed reconstruction of
large-area images using a sliding-window algorithm, allowing multiple overlaps of
classified images and a better per-pixel pivot estimate. The present study made it possible
to establish an adequate methodology for automatic center pivot detection using three different
types of remote sensing images, which are freely available, in addition to a database with CPIS
vectors in Central Brazil.
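The sliding-window reconstruction with overlapping classified patches can be sketched in one dimension (the thesis operates on 2-D rasters; the predictor and averaging here are a simplified illustration):

```python
def sliding_window_merge(length, window, stride, predict):
    """Reconstruct per-pixel scores for a long strip by sliding a window
    with overlap (stride < window) and averaging every prediction that
    covers each position. `predict(start)` returns `window` scores for
    the patch beginning at `start`."""
    scores = [0.0] * length
    counts = [0] * length
    start = 0
    while True:
        start = min(start, length - window)   # clamp the last window to the edge
        for i, p in enumerate(predict(start)):
            scores[start + i] += p            # accumulate overlapping predictions
            counts[start + i] += 1
        if start + window >= length:
            break
        start += stride
    return [s / c for s, c in zip(scores, counts)]

# Toy predictor: constant score 1.0 everywhere -> the merged map is 1.0 everywhere.
merged = sliding_window_merge(10, 4, 2, lambda start: [1.0] * 4)
```

Averaging the overlapping predictions smooths patch-border artefacts, which is what improves the per-pixel pivot estimate over a single non-overlapping pass.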
High resolution aerial and field mapping of thermal features in Ragged Hills, Yellowstone National Park
High-resolution aerial images, taken in a cost- and time-effective way from low-flying platforms, were used to map a hydrothermal area in Yellowstone National Park. The mapping area, called Ragged Hills, is located in the Norris Geyser Basin, a major hydrothermal basin of the park famous for its great diversity and number of thermal features. Because of increasing thermal activity since the early 1990s, numerous hydrothermal features of different sizes developed in Ragged Hills. Various changes in the size and chemistry of the thermal features were observed during sporadic ground surveys. No detailed maps of the thermal inventory existed because of the difficulty of mapping this rapidly changing area by standard ground survey methods. The goal of the present work was to map the features in a short time to capture the status quo of their form and size.
Two different low-flying platforms were used during this project: a helium-filled balloon and a single-engine airplane (Cessna 172). To georeference the aerial photos later, a grid of ground control points was laid out, and the points were surveyed by differential GPS as well as by theodolite. Deviations between the two methods averaged 37 cm (northing) and 61 cm (easting). The airplane overflights were more cost-intensive, requiring aircraft rental and trained pilots. Because the obtained images were in most cases blurred, they served as an overview only; nevertheless, the pixel resolution was quite good, averaging 6 cm. Besides the true color images taken by a digital camera, thermal pictures were also taken from the airplane, with a spatial resolution of 1.2 m. The balloon survey provided a cost-effective and easy-to-handle alternative; its major restrictions are the transport of the helium bottles to the study site and the need for calm wind conditions. From an altitude of 50 to 80 m, sharp, high-resolution images were obtained. About 45 pictures were used to create a mosaic of the whole study area with a pixel resolution of 2.5 cm. No high-resolution thermal pictures could be taken from the balloon because the weight of the camera (3.9 kg) exceeded the balloon's lifting capacity (1.5 kg). The resulting high-resolution aerial overview was included in a digital atlas together with topographic and geological maps, older low-resolution aerial pictures, and hydrochemical data.
This diploma thesis gives an overview of available low-flying platforms and their individual advantages and disadvantages, describes the methods used in detail, and evaluates them with respect to the expenditure and time required for the individual working steps. Furthermore, an interpretation of the mapping and hydrochemical data is presented.
Förderkreis Freiberger Geologie e.V.
Image Registration Workshop Proceedings
Automatic image registration has often been considered a preliminary step for higher-level processing, such as object recognition or data fusion. But with the unprecedented amounts of data that are being, and will continue to be, generated by newly developed sensors, automatic image registration has itself become an important research topic. This workshop presents a collection of very high quality work grouped into four main areas: (1) theoretical aspects of image registration; (2) applications to satellite imagery; (3) applications to medical imagery; and (4) image registration for computer vision research.
Modular space station phase B extension preliminary system design. Volume 3: Experiment analyses
Experiment analysis tasks performed during the program definition study are described. Experiment accommodation and scheduling, and the definition and implementation of the laboratory evolution, are discussed. The general-purpose laboratory requirements and concepts are defined, and supplemental studies are reported.