1,495 research outputs found

    Super-Resolution Textured Digital Surface Map (DSM) Formation by Selecting the Texture From Multiple Perspective Texel Images Taken by a Low-Cost Small Unmanned Aerial Vehicle (UAV)

    A Textured Digital Surface Model (TDSM) is a three-dimensional terrain map with texture overlaid on it. Utah State University has developed a texel camera that can capture a 3D image called a texel image. A TDSM can be constructed by combining multiple texel images, which is much cheaper than the traditional method. The overall goal is to create a TDSM for a larger area that is cheaper than, and equally accurate as, a TDSM created using a high-cost system. The images obtained from such an inexpensive camera contain many errors, and to create a scientifically accurate TDSM, the errors present in the images must be corrected. An automatic process to create a TDSM is presented that can handle a large number of input texel images. The advantage of using such a large set of input images is that they can cover a large area on the ground, making the algorithm suitable for large-scale applications. This is done by processing and correcting the images in a windowing manner. Furthermore, the appearance of the final 3D terrain map is improved by selecting the texture from many candidate images, ensuring that the best texture is chosen; the selection criteria are discussed. Lastly, a method to increase the resolution of the final image is presented. The methods described in this dissertation improve the current technique of creating TDSMs, and the results are shown and analyzed.
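The abstract does not spell out the selection criteria; a common heuristic, shown here only as an illustrative sketch and not the dissertation's actual method, is to pick, for each surface patch, the candidate image whose viewing direction is most nearly head-on to the surface:

```python
import numpy as np

def pick_best_texture(face_normal, camera_positions, face_center):
    """Return the index of the candidate camera whose viewing direction
    is most anti-parallel to the face normal (i.e. the most head-on view)."""
    n = face_normal / np.linalg.norm(face_normal)
    best_idx, best_score = -1, -np.inf
    for i, cam in enumerate(camera_positions):
        view = face_center - cam            # direction camera -> face
        view = view / np.linalg.norm(view)
        score = -np.dot(view, n)            # 1.0 when viewing straight on
        if score > best_score:
            best_idx, best_score = i, score
    return best_idx

# A face in the z=0 plane with normal +z is seen best from directly above.
normal = np.array([0.0, 0.0, 1.0])
center = np.array([0.0, 0.0, 0.0])
cams = [np.array([10.0, 0.0, 5.0]),   # oblique view
        np.array([0.0, 0.0, 20.0])]   # nadir view
print(pick_best_texture(normal, cams, center))  # -> 1
```

Real systems typically combine such an angle score with resolution and occlusion checks before committing to a texture source.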

    DeepSurveyCam — A Deep Ocean Optical Mapping System

    Underwater photogrammetry, and in particular systematic visual surveys of the deep sea, is far less developed than similar techniques on land or in space. The main challenges are the rough conditions with extremely high pressure, the accessibility of target areas (container and ship deployment of robust sensors, then diving for hours to the ocean floor), and the limitations of localization technologies (no GPS). The absence of natural light complicates energy budget considerations for deep-diving, flash-equipped drones. Refraction effects influence geometric image formation with respect to field of view and focus, while attenuation and scattering degrade the radiometric image quality and limit the effective visibility. To address these issues, we present an AUV-based optical system intended for autonomous visual mapping of large areas of the seafloor (square kilometers) at up to 6000 m water depth. We compare it to existing systems, discuss tradeoffs such as resolution vs. mapped area, and show results from a recent deployment that mapped 90,000 square meters of deep ocean floor.
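A rough illustration of why attenuation limits effective visibility and drives the energy budget: a simplified Beer-Lambert model with an assumed attenuation coefficient (illustrative only, not the system's actual radiometric calibration):

```python
import math

def surviving_fraction(distance_m, attenuation_coeff=0.05):
    """Fraction of light surviving a one-way path through water
    (Beer-Lambert law; the coefficient in 1/m is an assumed value)."""
    return math.exp(-attenuation_coeff * distance_m)

# Flash light travels to the seafloor and back to the camera, so the
# round-trip path length doubles the effective attenuation.
for d in (2, 5, 10):
    print(f"{d:>3} m altitude: {surviving_fraction(2 * d):.1%} of light returns")
```

The exponential falloff is why flying lower (at the cost of a smaller mapped footprint per image) pays off so strongly in image quality and flash energy.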

    Digital Multispectral Map Reconstruction Using Aerial Imagery

    Advances in the computer vision field have allowed the establishment of faster and more accurate photogrammetry techniques. Structure from Motion (SfM) is a photogrammetric technique focused on the digital spatial reconstruction of objects from a sequence of images. Unmanned Aerial Vehicle (UAV) platforms make it possible to acquire high-fidelity imagery intended for environmental mapping, and have consequently become a heavily adopted survey method. The combination of SfM and recent improvements in UAV platforms grants greater flexibility and applicability, opening the path to a new remote sensing technique aimed at replacing more traditional and laborious approaches often associated with high monetary costs. The continued development of digital reconstruction software and advances in computer processing allow for a more affordable and higher-resolution solution compared to traditional methods. The present work proposes a digital reconstruction algorithm based on images taken by a UAV platform, inspired by the work made available by the open-source project OpenDroneMap. The aerial images are fed into the computer vision program and several operations are applied to them, including detection and matching of features, point cloud reconstruction, meshing, and texturing, which results in a final product that represents the surveyed site. Additionally, the study found that the OpenDroneMap codebase did not include an implementation addressing the processing of thermal images; their work was therefore altered to allow the reconstruction of thermal maps without sacrificing the resolution of the final model.
Standard methods to process thermal images require a larger image footprint (the area of ground captured in a frame), because these images lack invariant features; increasing the footprint raises the number of features present in each frame, but this method of image capture lowers the resolution of the final product. The algorithm was developed using open-source libraries. To validate the results, the model was compared to data obtained from commercial products such as Pix4D. Furthermore, due to circumstances brought about by the current pandemic, it was not possible to conduct a field study for the comparison and assessment of the results; validation of the models was therefore performed by verifying that the geographic location of the model was correct and by visually assessing the generated maps.
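The feature detection and matching stage mentioned above can be sketched as follows. This is an illustrative nearest-neighbour descriptor matcher with Lowe's ratio test written in plain NumPy; it is not the project's actual implementation, which relies on dedicated computer vision libraries:

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.75):
    """Match each descriptor in desc_a to its nearest neighbour in desc_b,
    keeping only matches that pass Lowe's ratio test (best distance must be
    clearly smaller than the second-best, otherwise the match is ambiguous)."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:
            matches.append((i, best))
    return matches

# Two tiny synthetic descriptor sets: rows 0 and 2 of A have clear
# counterparts in B; row 1 is ambiguous and is rejected by the ratio test.
A = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]])
B = np.array([[0.0, 1.01], [0.51, 0.49], [0.49, 0.51], [1.0, 0.02]])
print(match_descriptors(A, B))  # -> [(0, 3), (2, 0)]
```

The ratio test is what suffers most on thermal imagery: with few distinctive features, the best and second-best candidates have similar distances and most putative matches are discarded.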

    Optimizing the Distribution of Tie Points for the Bundle Adjustment of HRSC Image Mosaics

    For a systematic mapping of the Martian surface, the Mars Express orbiter is equipped with a multi-line scanner: since the beginning of 2004, the High Resolution Stereo Camera (HRSC) has regularly acquired long image strips. By now more than 4,000 strips covering nearly the whole planet are available. Due to the nine channels, each with a different viewing direction and partly with different optical filters, each strip provides 3D and color information and allows the generation of digital terrain models (DTMs) and orthophotos. To map larger regions, neighboring HRSC strips can be combined into DTM and orthophoto mosaics. The global mapping scheme Mars Chart 30 (MC-30) is used to define the extent of these mosaics. To avoid unreasonably large data volumes, each MC-30 tile is divided into two parts, each combining about 90 strips. To ensure a seamless fit of these strips, several radiometric and geometric corrections are applied in the photogrammetric process. A simultaneous bundle adjustment of all strips as a block is carried out to estimate their precise exterior orientation. Because the size, position, resolution, and image quality of the strips in these blocks are heterogeneous, the quality and distribution of the tie points also vary. In the absence of ground control points, heights of a global terrain model are used as reference information, and for this task a regular distribution of tie points is preferable. In addition, their total number should be limited for computational reasons. In this paper, we present an algorithm that optimizes the distribution of tie points under these constraints. A large number of input tie points is reduced without affecting the geometric stability of the block, by preserving connections between strips. This stability is achieved by using a regular grid in object space and discarding, for each grid cell, points that are redundant for the block adjustment.
The set of tie points filtered by the algorithm shows a more homogeneous distribution and is considerably smaller. Used for the block adjustment, it yields results of equal quality with significantly shorter computation time. In this work, we present experiments with MC-30 half-tile blocks, which confirm our approach for reaching a stable and faster bundle adjustment. The described method is used for the systematic processing of HRSC data. (Funding: DLR/50 QM 1601; BMWi/50 QM 160)
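The paper's exact redundancy criterion is not given in the abstract; a minimal sketch of the grid-based thinning idea, under the assumed rule of keeping one tie point per object-space grid cell for each pair of strips it connects, might look like:

```python
from collections import defaultdict

def thin_tie_points(points, cell_size):
    """points: list of (x, y, strip_a, strip_b) tie points in object space.
    Keep at most one point per (grid cell, strip pair): redundant
    observations are dropped while every strip-to-strip connection that
    exists in a cell is preserved, so block stability is maintained."""
    kept, seen = [], set()
    for x, y, sa, sb in points:
        cell = (int(x // cell_size), int(y // cell_size))
        key = (cell, frozenset((sa, sb)))
        if key not in seen:
            seen.add(key)
            kept.append((x, y, sa, sb))
    return kept

pts = [(1.0, 1.0, "s1", "s2"),
       (1.2, 1.1, "s2", "s1"),   # same cell, same strip pair -> redundant
       (1.3, 1.2, "s1", "s3"),   # same cell, different pair  -> kept
       (9.0, 9.0, "s1", "s2")]   # different cell             -> kept
print(len(thin_tie_points(pts, cell_size=5.0)))  # -> 3
```

A production version would rank points within each cell by match quality rather than keeping the first one encountered.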

    Methods for Real-time Visualization and Interaction with Landforms

    This thesis presents methods to enrich data modeling and analysis in the geoscience domain, with a particular focus on geomorphological applications. First, a short overview of the relevant characteristics of the remote sensing data used, and the basics of its processing and visualization, is provided. Then, two new methods for the visualization of vector-based maps on digital elevation models (DEMs) are presented. The first method uses a texture-based approach that generates a texture from the input maps at runtime, taking into account the current viewpoint. In contrast, the second method utilizes the stencil buffer to create a mask in image space that is then used to render the map on top of the DEM. A particular challenge in this context is posed by the view-dependent level-of-detail representation of the terrain geometry. After suitable visualization methods for vector-based maps have been investigated, two landform mapping tools for the interactive generation of such maps are presented. The user can carry out the mapping directly on the textured digital elevation model and thus benefit from the 3D visualization of the relief. Additionally, semi-automatic image segmentation techniques are applied to reduce the amount of user interaction required, making the mapping process more efficient and convenient. The challenge in the adaptation of these methods lies in transferring the algorithms to the quadtree representation of the data and in applying out-of-core and hierarchical methods to ensure interactive performance. Although high-resolution remote sensing data are often available today, their effective resolution at steep slopes is rather low due to the oblique acquisition angle. For this reason, remote sensing data are suitable only to a limited extent for visualization and landform mapping purposes.
To provide an easy way to supply additional imagery, an algorithm for registering uncalibrated photos to a textured digital elevation model is presented. A particular challenge in registering the images is posed by large variations among the photos concerning resolution, lighting conditions, seasonal changes, etc. The registered photos can be used to increase the visual quality of the textured DEM, particularly at steep slopes. To this end, a method is presented that combines several georegistered photos into textures for the DEM. The difficulty in this compositing process is to create a consistent appearance and avoid visible seams between the photos. In addition, the photos provide valuable means to improve landform mapping; an extension of the landform mapping methods is therefore presented that allows the registered photos to be utilized during mapping. In this way, a detailed and exact mapping becomes feasible even at steep slopes.
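The seam-avoidance problem in compositing can be illustrated with distance-based feathering: each photo's contribution is weighted by its distance to the photo border, so weights fall smoothly to zero exactly where a photo ends. This is a generic sketch of the idea, not the thesis's actual compositing algorithm:

```python
import numpy as np

def feather_weight(h, w):
    """Per-pixel weight: distance (in pixels) to the nearest image border,
    so contributions fade out smoothly toward the edges."""
    ys = np.minimum(np.arange(h), np.arange(h)[::-1])
    xs = np.minimum(np.arange(w), np.arange(w)[::-1])
    return np.minimum.outer(ys, xs).astype(float) + 1.0

def blend(photos):
    """Weighted average of equally sized, already georegistered photos."""
    acc = np.zeros_like(photos[0], dtype=float)
    wsum = np.zeros(photos[0].shape, dtype=float)
    for img in photos:
        w = feather_weight(*img.shape)
        acc += w * img
        wsum += w
    return acc / wsum

a = np.full((5, 5), 100.0)   # photo with brightness 100
b = np.full((5, 5), 200.0)   # overlapping photo with brightness 200
out = blend([a, b])
print(out[2, 2])  # equal weights -> midpoint value, no hard seam
```

In practice the photos cover different DEM regions and have different exposures, so the weights differ per photo and the blend transitions gradually across overlaps instead of jumping at a cut line.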

    Relating Multimodal Imagery Data in 3D

    This research develops and improves the fundamental mathematical approaches and techniques required to relate imagery and imagery-derived multimodal products in 3D. Image registration, in a 2D sense, will always be limited by the 3D effects of viewing geometry on the target. Effects such as occlusion, parallax, shadowing, and terrain/building elevation can therefore often be mitigated with even a modest amount of 3D target modeling. Additionally, the imaged scene may appear radically different depending on the sensed modality of interest; this is evident from the differences between visible, infrared, polarimetric, and radar imagery of the same site. This thesis develops a 'model-centric' approach to relating multimodal imagery in a 3D environment. By correctly modeling a site of interest, both geometrically and physically, it is possible to remove or mitigate some of the most difficult challenges associated with multimodal image registration. To accomplish this, the mathematical framework necessary to relate imagery to geometric models is thoroughly examined. Since geometric models may need to be generated to apply this 'model-centric' approach, this research develops methods to derive 3D models from imagery and LIDAR data. Of critical note is the implementation of complementary techniques for relating multimodal imagery that use the geometric model in concert with physics-based modeling to simulate scene appearance under diverse imaging scenarios. Finally, the often-neglected final phase, mapping localized image registration results back to the world coordinate system model for final data archival, is addressed. In short, once a target site is properly modeled, both geometrically and physically, it is possible to orient the 3D model to the same viewing perspective as a captured image to enable proper registration.
If done accurately, the synthetic model's physical appearance can simulate the imaged modality of interest while simultaneously removing the 3D ambiguity between the model and the captured image. Once registered, the captured image can then be archived as a texture map on the geometric site model. In this way, the 3D information that was lost when the image was acquired can be regained and properly related with other datasets for data fusion and analysis.
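Orienting a model to a captured image's viewing perspective amounts to projecting model points through the camera's pose; a minimal pinhole-camera sketch (the intrinsics here are illustrative values, not from the thesis):

```python
import numpy as np

def project(points_world, R, t, focal, cx, cy):
    """Project 3D world points into pixel coordinates with a pinhole camera.
    R, t: world-to-camera rotation and translation; focal/cx/cy: intrinsics."""
    pts_cam = (R @ points_world.T).T + t      # world frame -> camera frame
    uv = pts_cam[:, :2] / pts_cam[:, 2:3]     # perspective divide by depth
    return uv * focal + np.array([cx, cy])    # apply focal length and center

# A point 10 m straight ahead of an un-rotated camera lands at the
# principal point (image center).
pt = np.array([[0.0, 0.0, 10.0]])
uv = project(pt, np.eye(3), np.zeros(3), focal=1000.0, cx=320.0, cy=240.0)
print(uv)  # -> [[320. 240.]]
```

With the model rendered from this same pose, the synthetic view and the captured image share a pixel grid and a standard 2D registration step can align them.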

    How to Build a 2D and 3D Aerial Multispectral Map? All Steps Deeply Explained

    Funding: UIDB/04111/2020; PCIF/SSI/0102/2017; IF/00325/2015; UIDB/00066/2020.
    The increased development of camera resolution, processing power, and aerial platforms has helped to create more cost-efficient approaches to capture and generate point clouds that assist various scientific fields. The continuous development of methods that produce three-dimensional models from two-dimensional images, such as Structure from Motion (SfM) and Multi-View Stereopsis (MVS), has improved the resolution of the produced models by a significant amount. Taking inspiration from the free and accessible workflow made available by OpenDroneMap, this paper presents a detailed analysis of the processes involved. As of the writing of this paper, no literature was found that described in detail the necessary steps and processes for creating digital models in two or three dimensions from aerial images. With this in mind, and based on the workflow of OpenDroneMap, a detailed study was performed. The digital model reconstruction process takes the initial aerial images obtained from the field survey and passes them through a series of stages, each of which produces a product used by the following stage; for example, the initial stage produces a sparse reconstruction, obtained by extracting features from the images and matching them, which the following step uses to increase the model's resolution. Additionally, from the analysis of the workflow, adaptations were made to the standard workflow to increase the compatibility of the developed system with different types of image sets. In particular, adaptations focused on thermal imagery were made: because thermal images contain few strong features, making feature matching across them difficult, a modification was implemented so that thermal models could be produced alongside the already implemented processes for multispectral and RGB image sets.
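One common way to make low-contrast thermal frames more amenable to feature matching is histogram equalization before feature extraction. This is a generic illustration of that idea in plain NumPy; the paper's actual thermal modification is not spelled out in the abstract:

```python
import numpy as np

def equalize(img):
    """Histogram-equalize an 8-bit image so that scarce thermal contrast
    is spread over the full 0-255 intensity range before feature extraction."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]                 # CDF at the lowest used level
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255.0)
    return lut.astype(np.uint8)[img]          # apply lookup table per pixel

# A synthetic low-contrast "thermal" frame occupying only levels 100-110
rng = np.random.default_rng(0)
frame = rng.integers(100, 111, size=(64, 64), dtype=np.uint8)
eq = equalize(frame)
print(frame.max() - frame.min(), eq.max() - eq.min())  # contrast before/after
```

Stretching the intensity range makes gradients (and therefore detected keypoints) far more distinctive, at the cost of also amplifying sensor noise.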

    Virtual reconstruction of a seventeenth-century Portuguese nau

    This interdisciplinary research project combines the fields of nautical archaeology and computer visualization to create an interactive virtual reconstruction of the 1606 Portuguese vessel Nossa Senhora dos Mártires, also known as the Pepper Wreck. Using reconstruction information provided by Dr. Filipe Castro (Texas A&M Department of Anthropology), a detailed 3D computer model of the ship was constructed and filled with cargo to demonstrate how the ship might have been loaded on the return voyage from India. The models are realistically shaded, lighted, and placed into an appropriate virtual environment. The scene can be viewed using the real-time immersive and interactive system developed by Dr. Frederic Parke (Texas A&M Department of Visualization). The process developed to convert the available information and data into a reconstructed 3D model is documented. This documentation allows future projects to adapt the process for other archaeological visualizations and informs archaeologists about the type of data most useful for computer visualizations of this kind.

    A Pipeline of 3D Scene Reconstruction from Point Clouds

    3D technologies are becoming increasingly popular as their applications in the industrial, consumer, entertainment, healthcare, education, and governmental sectors increase in number. According to market predictions, the total 3D modeling and mapping market is expected to grow from $1.1 billion in 2013 to $7.7 billion by 2018. Thus, 3D modeling techniques for different data sources are urgently needed. This thesis addresses techniques for automated point cloud classification and the reconstruction of 3D scenes (including terrain models, 3D buildings, and 3D road networks). First, georeferenced binary image processing techniques were developed for various point cloud classifications. Second, robust methods for the pipeline from the original point cloud to 3D model construction were proposed. Third, the reconstruction of 3D models at levels of detail (LoDs) 1-3 (as defined by CityGML) was demonstrated. Fourth, different data sources for 3D model reconstruction were studied, and the strengths and weaknesses of using each were addressed. Mobile laser scanning (MLS), unmanned aerial vehicle (UAV) images, airborne laser scanning (ALS), and the Finnish National Land Survey's open geospatial data sources, e.g. a topographic database, were employed as test data. Among these data sources, MLS data from three different systems were explored, and three different densities of ALS point clouds (0.8, 8, and 50 points/m²) were studied. The results were compared with reference data, such as an orthophoto with a ground sample distance of 20 cm or measured reference points from existing software, to evaluate their quality. The results showed that 74.6% of building roofs were reconstructed with the automated process. The resulting building models had an average height deviation of 15 cm. A total of 6% of model points had a greater than one-pixel deviation from laser points, and 2.5% had a deviation of greater than two pixels.
The pixel size was determined by the average distance between input laser points. The 3D roads were reconstructed with an average width deviation of 22 cm and an average height deviation of 14 cm. The results demonstrated that 93.4% of building roofs were correctly classified from sparse ALS and that 93.3% of power line points were detected from the six sets of dense ALS data located in forested areas. This study demonstrates the operability of 3D model construction for LoDs 1-3 via the proposed methodologies and datasets. The study is beneficial to future applications, such as 3D-model-based navigation, the updating of 2D topographic databases into 3D maps, and rapid, large-area 3D scene reconstruction.
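The georeferenced binary image idea can be sketched as rasterizing a point cloud into an occupancy grid, on which standard binary image operations (connected components, morphology) then drive the classification. This is a minimal illustration of the rasterization step, not the thesis's actual pipeline:

```python
import numpy as np

def rasterize(points_xy, cell_size, origin=(0.0, 0.0)):
    """Rasterize 2D point coordinates into a georeferenced binary
    occupancy image: a cell is 1 if any point falls inside it. The
    origin and cell size tie pixel indices back to world coordinates."""
    ij = np.floor((points_xy - np.asarray(origin)) / cell_size).astype(int)
    h, w = ij[:, 1].max() + 1, ij[:, 0].max() + 1
    img = np.zeros((h, w), dtype=np.uint8)
    img[ij[:, 1], ij[:, 0]] = 1
    return img

# Points forming a small cluster plus one isolated return
pts = np.array([[0.2, 0.3], [0.4, 0.1], [1.1, 0.2], [4.6, 4.7]])
img = rasterize(pts, cell_size=1.0)
print(img.shape, int(img.sum()))  # grid extent and occupied-cell count
```

Because the grid is georeferenced, any region labeled in image space (e.g. a roof footprint found by connected-component analysis) maps directly back to the 3D points that fall in those cells.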