16 research outputs found

    Mobile Panoramic Mapping using CCD-Line Camera and Laser Scanner

    The fusion of panoramic camera data with laser scanner data is a new approach that allows the combination of high-resolution image and depth data. Application areas are city modelling, virtual reality and the documentation of cultural heritage. Panoramic recording of image data is realized by a CCD line that is precisely rotated around the projection centre. For other kinds of motion, the actual position of the projection centre and the viewing direction have to be measured. Linearly moving panoramas, e.g. along a wall, are an interesting extension of such rotational panoramas. Here, the instantaneous position and orientation can be determined with an integrated navigation system comprising differential GPS and an inertial measurement unit. This paper investigates the combination of a panoramic camera and a laser scanner with a navigation system for indoor and outdoor applications. First, laboratory experiments are reported, which were carried out to obtain valid parameters for the surveying accuracy achievable with both sensors, the panoramic camera and the laser scanner. Then, outdoor surveying results using a position and orientation system as the navigation sensor are presented and discussed.
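    A minimal sketch (Python) of the geometry described above may help: it converts a pixel of an ideal rotating CCD-line panorama into a viewing ray and transfers it into the world frame with a pose such as the one delivered by the DGPS/IMU navigation system. The idealised cylindrical model and all names and parameters are illustrative assumptions, not the authors' sensor model, which would also account for eccentricity and lens distortion.
        import numpy as np
        def panorama_pixel_to_ray(col, row, n_cols, focal_px, v0, R_world, t_world):
            # Azimuth of the CCD-line exposure that produced this panorama column
            theta = 2.0 * np.pi * col / n_cols
            # Ray in the panorama frame: horizontal direction from the rotation angle,
            # vertical component from the pixel's position along the CCD line
            d = np.array([np.cos(theta), np.sin(theta), (row - v0) / focal_px])
            d = d / np.linalg.norm(d)
            # Transfer into the world frame with the measured pose of the projection
            # centre (e.g. interpolated from the DGPS/IMU trajectory)
            return t_world, R_world @ d
        # Illustrative call for a 10000-column panorama with a 3500 px principal distance
        origin, direction = panorama_pixel_to_ray(1200, 800, 10000, 3500.0, 1024.0, np.eye(3), np.zeros(3))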

    Accurate Calibration Scheme for a Multi-Camera Mobile Mapping System

    Mobile mapping systems (MMS) are increasingly used for many photogrammetric and computer vision applications, encouraged especially by fast and accurate geospatial data generation. The accuracy of point positioning in an MMS depends mainly on the quality of calibration, the accuracy of sensor synchronization, the accuracy of georeferencing and the stability of the geometric configuration of space intersections. In this study, we focus on multi-camera calibration (interior and relative orientation parameter estimation) and MMS calibration (mounting parameter estimation). The objective of this study was to develop a practical scheme for rigorous and accurate system calibration of a photogrammetric mapping station equipped with a multi-projective camera (MPC) and a global navigation satellite system (GNSS) and inertial measurement unit (IMU) for direct georeferencing. The proposed technique comprises two steps. First, the interior orientation parameters of each individual camera in an MPC and the relative orientation parameters of each camera of the MPC with respect to the first camera are estimated. In the second step, the offset and misalignment between the MPC and the GNSS/IMU are estimated. The global accuracy of the proposed method was assessed using independent check points. A correspondence map for a panorama is introduced that provides metric information. Our results highlight that the proposed calibration scheme reaches centimeter-level global accuracy for 3D point positioning. This level of global accuracy demonstrates the feasibility of the proposed technique and its potential to serve accurate mapping purposes.
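    To make the role of the estimated mounting parameters concrete, the following Python fragment sketches one common form of the direct-georeferencing equation that such a two-step calibration feeds into. The names, frame conventions and the assumption that the scale along the image ray is already known (e.g. from a space intersection) are illustrative and not taken from the paper.
        import numpy as np
        def direct_georeference(ray_cam, scale, R_cam_body, lever_arm, R_body_world, t_gnss):
            # Camera ray (from the interior/relative orientation of one MPC camera),
            # scaled to the object distance and moved into the IMU body frame using
            # the mounting parameters estimated in the second calibration step
            p_body = R_cam_body @ (scale * ray_cam) + lever_arm
            # Body frame to mapping frame using the GNSS/IMU position and attitude
            return t_gnss + R_body_world @ p_body
        # Illustrative call: identity mounting, 25 m object distance
        X = direct_georeference(np.array([0.0, 0.0, 1.0]), 25.0, np.eye(3), np.zeros(3), np.eye(3), np.array([400000.0, 6700000.0, 50.0]))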

    Multi-Projective Camera-Calibration, Modeling, and Integration in Mobile-Mapping Systems

    Optical systems are vital parts of most modern systems such as mobile mapping systems, autonomous cars, unmanned aerial vehicles (UAV), and game consoles. Multi-camera systems (MCS) are commonly employed for precise mapping, including aerial and close-range applications. In the first part of this thesis a simple and practical calibration model and a calibration scheme for multi-projective cameras (MPC) are presented. The calibration scheme is enabled by implementing a camera test field equipped with customized coded targets as FGI's camera calibration room. The first hypothesis was that a test field is necessary to calibrate an MPC. Two commercially available MPCs with 6 and 36 cameras were successfully calibrated in FGI's calibration room. The calibration results suggest that the proposed model is able to estimate the parameters of the MPCs with high geometric accuracy and reveals the internal structure of the MPCs. In the second part, the applicability of an MPC calibrated by the proposed approach was investigated in a mobile mapping system (MMS). The second hypothesis was that a system calibration is necessary to achieve high geometric accuracies in a multi-camera MMS. The MPC model was updated to consider mounting parameters with respect to GNSS and IMU. A system calibration scheme for an MMS was proposed. The results showed that the proposed system calibration approach was able to produce accurate results by direct georeferencing of multi-images in an MMS. Results of geometric assessments suggested that centimeter-level accuracy is achievable by employing the proposed approach. A novel correspondence map is demonstrated for MPCs that helps to create metric panoramas. In the third part, the problem of real-time trajectory estimation of a UAV equipped with a projective camera was studied. The main objective of this part was to address the problem of real-time monocular simultaneous localization and mapping (SLAM) of a UAV. An angular framework was discussed to address the gimbal-lock singularity. The results suggest that the proposed solution is an effective and rigorous monocular SLAM for aerial cases where the object is near-planar. In the last part, the problem of tree-species classification by a UAV equipped with hyperspectral and RGB cameras was studied. The objective of this study was to investigate different aspects of a precise tree-species classification problem by employing state-of-the-art methods. A 3D convolutional neural network (3D-CNN) and a multi-layered perceptron (MLP) were proposed and compared. Both classifiers were highly successful in their tasks, while the 3D-CNN was superior in performance. The classification results were the most accurate published in comparison to other works.
Optical imaging devices play a central role in modern machine-vision-based systems such as autonomous cars, unmanned aerial vehicles (UAV) and game consoles. Such applications typically make use of multi-camera systems. The first part of the thesis develops a simple and practical mathematical model and calibration method for multi-camera systems. Coded targets are artificial images that can be printed, for example, on A4 paper sheets and measured automatically by computer algorithms. The mathematical model is determined using a three-dimensional camera calibration room in which the developed coded targets are installed. Two commercial multi-camera systems, consisting of 6 and 36 individual cameras, were successfully calibrated with the proposed method. The results showed that the method produced accurate estimates of the geometric parameters of the multi-camera systems and that the estimated parameters corresponded well to the internal structure of the cameras. The second part of the work investigated the use of a multi-camera system calibrated with the proposed method for measurement in a mobile mapping system (MMS). The goal was to develop and study mapping measurements of high geometric accuracy. The multi-camera model was extended with parameters relating to the positioning and attitude sensors (GNSS/IMU) of the navigation equipment, and a system calibration method for a mobile mapping system was proposed. Centimetre-level accuracy in direct georeferencing was achieved with the calibrated system. The work also presented a correspondence map for multi-images that enables the creation of metric panoramas from the images of the multi-camera system. The third part studied real-time estimation of the trajectory of a UAV using a single-camera method. The main goal was to develop a real-time simultaneous localization and mapping (SLAM) method based on monocular imaging. A matching method based on multi-resolution image pyramids and progressive rectangular regions was proposed. The proposed approach was able to lower the cost of matching while keeping the matching accuracy unchanged. A new angular framework was implemented to handle the gimbal-lock situation. The results showed that the proposed solution was efficient and accurate in situations where the object is nearly planar. The performance evaluation showed that the developed method met the time and accuracy targets set for real-time trajectory estimation of a UAV. The last part of the work studied tree-species classification using a UAV system equipped with hyperspectral and RGB cameras. The goal was to study the use of new machine-learning methods in accurate tree-species classification and, in addition, to compare the performance of hyperspectral and RGB data. A 3D convolutional neural network (3D-CNN) and a multi-layered perceptron (MLP) were compared. Both classifiers produced good classification accuracy, but the 3D-CNN produced the most accurate results. The achieved accuracy was better than previously published results on comparable data. The combination of hyperspectral and RGB data produced the best accuracy, but the RGB camera alone also produced an accurate result and is an inexpensive and effective data source for many classification applications.
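    A minimal Python sketch of how the two groups of parameters estimated in the first part (interior orientation of each camera and its relative orientation with respect to the first camera) act together when a world point is projected into one camera of an MPC. The frame conventions and names are illustrative assumptions, and lens distortion is omitted.
        import numpy as np
        def project_into_mpc_camera(X_world, R_ref, t_ref, R_rel, t_rel, K):
            # World point into the frame of the reference (first) camera
            X_c1 = R_ref @ (X_world - t_ref)
            # Reference camera frame into camera i via its relative orientation
            X_ci = R_rel @ X_c1 + t_rel
            # Interior orientation of camera i (pinhole part only, no distortion)
            x = K @ X_ci
            return x[:2] / x[2]
        K = np.array([[1500.0, 0.0, 960.0], [0.0, 1500.0, 540.0], [0.0, 0.0, 1.0]])
        uv = project_into_mpc_camera(np.array([2.0, 1.0, 10.0]), np.eye(3), np.zeros(3), np.eye(3), np.array([0.1, 0.0, 0.0]), K)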

    Geometrische und stochastische Modelle für die integrierte Auswertung terrestrischer Laserscannerdaten und photogrammetrischer Bilddaten

    The use of terrestrial laser scanning has grown in popularity in recent years, and it replaces and complements previous measuring methods, as well as opening new fields of application. If data from terrestrial laser scanners are combined with photogrammetric image data, this yields promising possibilities, as the properties of both types of data can be considered mainly complementary: terrestrial laser scanners produce fast and reliable three-dimensional representations of object surfaces from only one position, while two-dimensional photogrammetric image data are characterised by a high visual quality, ease of interpretation, and high lateral accuracy. Consequently, numerous approaches exist, both hardware- and software-based, in which this combination is realised. However, in most approaches the image data are only used to add additional characteristics, such as colouring point clouds or texturing object surfaces generated from laser scanner data. A thorough exploitation of the complementary characteristics of both types of sensors provides much more potential. For this reason a calculation method – the integrated bundle adjustment – was developed within this thesis, in which the observations of discrete object points derived from terrestrial laser scanner data and photogrammetric image data are utilised equally. This approach has several advantages: using the individual characteristics of both types of data, they mutually strengthen each other in terms of 3D object coordinate determination, so that a higher accuracy can be achieved; all involved data sets are optimally co-registered; and each instrument is simultaneously calibrated. Due to the (spherical) field of view of most terrestrial laser scanners of 360° in the horizontal direction and up to 180° in the vertical direction, the integration with rotating line panoramic cameras or cameras with fisheye lenses is very appropriate, as they have a wider field of view compared to central perspective cameras. The basis for the combined processing of terrestrial laser scanner and photogrammetric image data is the strict geometric modelling of the recording instruments. Therefore geometric models, consisting of a basic model and additional parameters for the compensation of systematic errors, were developed and verified for terrestrial laser scanners and different types of cameras.
Regarding the geometric laser scanner model, different approaches described in the literature were considered, and correction models known from theodolites and total stations were applied. A particular consideration within the combined processing is the definition of the stochastic model. Since different types of observations with different underlying geometric models and different stochastic properties have to be adjusted simultaneously, adequate weights have to be assigned to the measurements. An unfavourable weighting can have a negative influence on the adjustment results. Therefore a variance component estimation procedure was implemented in the integrated bundle adjustment, which allows for an automatic determination of optimal observation weights. Hence, it becomes possible to fully exploit the potential of the combination of terrestrial laser scanner and photogrammetric image data. For the calculation of the integrated bundle adjustment, software was developed allowing various algorithmic configurations of the different data types to be applied. Numerous laser scanner, panoramic image, fisheye image and central perspective image data were recorded in different test fields and processed using the developed software. Several calculation alternatives were analysed, demonstrating the advantages and limitations of the presented method. An application example from the field of geology illustrates the potential of the algorithm in practice.
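    The weighting step described above can be summarised by a short Python sketch showing only the re-estimation of variance components that is typically iterated with the adjustment: for each observation group (e.g. image coordinates, scanner ranges, scanner angles) a new variance factor is computed from the weighted residuals and the group redundancy. The interfaces are illustrative and do not reproduce the software developed in the thesis.
        import numpy as np
        def reestimate_variance_components(residuals, weights, redundancies):
            # residuals[g], weights[g]: residual vector and current weights of group g
            # redundancies[g]: redundancy number contributed by observation group g
            sigma2 = {}
            for g in residuals:
                v = np.asarray(residuals[g], dtype=float)
                p = np.asarray(weights[g], dtype=float)
                sigma2[g] = float(v @ (p * v)) / redundancies[g]
            return sigma2  # new weights are 1 / sigma2[g]; iterate until all factors are close to 1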

    Spherical Image Processing for Immersive Visualisation and View Generation

    This research presents the study of processing panoramic spherical images for immersive visualisation of real environments and the generation of in-between views based on two acquired views. For visualisation based on one spherical image, the surrounding environment is modelled by a unit sphere mapped with the spherical image, and the user is then allowed to navigate within the modelled scene. For visualisation based on two spherical images, a view generation algorithm is developed for modelling an indoor man-made environment, and new views can be generated at an arbitrary position with respect to the existing two. This allows the scene to be modelled using multiple spherical images and the user to move smoothly from one sphere-mapped image to another by passing through generated in-between sphere-mapped images.
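    The unit-sphere mapping can be illustrated with a short Python sketch, assuming the spherical images are stored in the common equirectangular layout (the thesis does not prescribe a storage format here); all names are illustrative.
        import numpy as np
        def pixel_to_direction(u, v, width, height):
            # Column maps to longitude, row to latitude of the equirectangular image
            lon = 2.0 * np.pi * u / width - np.pi
            lat = np.pi / 2.0 - np.pi * v / height
            return np.array([np.cos(lat) * np.cos(lon), np.cos(lat) * np.sin(lon), np.sin(lat)])
        def direction_to_pixel(d, width, height):
            # Inverse mapping, used when resampling a rendered view from the sphere
            d = d / np.linalg.norm(d)
            lon, lat = np.arctan2(d[1], d[0]), np.arcsin(d[2])
            return (lon + np.pi) * width / (2.0 * np.pi), (np.pi / 2.0 - lat) * height / np.pi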

    Modélisation et développement d'une plateforme intelligente pour la capture d'images panoramiques cylindriques

    In most robotic applications, vision systems can significantly improve the perception of the environment. The panoramic view is particularly attractive because it allows omnidirectional perception. However, it is rarely used because the methods that provide panoramic views also have significant drawbacks. Most of these omnidirectional vision systems involve the combination of a matrix camera and a mirror, rotating matrix cameras or a wide-angle lens. The major drawbacks of this type of sensor are the great distortions of the images and the heterogeneity of the resolution. Some other methods, while providing homogeneous resolution, produce a huge data flow that is difficult to process in real time and are either too slow or lack precision. To address these problems, we propose a smart panoramic vision system that presents technological improvements over rotating linear sensor methods. It allows homogeneous 360-degree cylindrical imaging with a resolution of 6600 × 2048 pixels and a precision turntable to synchronize position with acquisition. We also propose a solution to the bandwidth problem with the implementation of a feature extractor that selects only the invariant features of the image, so that the camera produces a panoramic view at high speed while delivering only relevant information. A general geometric model has been developed to describe the image formation process, and a calibration method specially designed for this kind of sensor is presented. Finally, localisation and structure-from-motion experiments are described to show a practical use of the system in SLAM applications.
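    The bandwidth argument can be made concrete with a toy Python sketch: instead of transmitting every 2048-sample line, the camera head keeps only samples that pass a simple saliency test and sends them on together with the turntable angle at which the line was exposed. The gradient test below is only a stand-in for the actual on-board invariant-feature extractor, and all names and thresholds are illustrative.
        import numpy as np
        def select_salient_samples(line, platform_angle, grad_thresh=30.0):
            # line: one CCD-line exposure (e.g. 2048 intensity samples)
            grad = np.abs(np.gradient(line.astype(float)))
            rows = np.flatnonzero(grad > grad_thresh)
            # Transmit only (row, turntable angle, intensity) for the retained samples
            return [(int(r), platform_angle, float(line[r])) for r in rows]
        line = np.random.default_rng(0).integers(0, 255, 2048).astype(np.uint8)
        kept = select_salient_samples(line, platform_angle=0.0123)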

    Assessment of the FARO 3D focus laser scanner for forest inventory

    The research project focused on the use of terrestrial laser scanning (TLS) in forest inventory for the monitoring of forests. Typically, TLS has been used in the built environment, e.g. building information modelling, mining, architecture and engineering. In recent years, advancements in TLS technology have allowed it to be applied to a greater range of applications, including natural resource management and monitoring of forest resources. Forests are one of Australia's most valuable natural assets. They are highly valued and have many uses and benefits for our society. Just over 20% of Australia is covered by native forest. There is an increasing need to measure and monitor the extent and condition of Australia's forests. A National Forest Inventory (2003) highlighted the importance of consistency in the collection of forest data. In 2006, the Continental Forest Monitoring Framework (CFMF) was identified by the State and Federal governments of Australia as a means to streamline and standardise the collection methods of forest-related information. Could TLS be of use here? A stand of planted rainforest trees in central Murwillumbah, northern NSW, was chosen as the scan site. Approximately 7 scans were undertaken on the 60 m x 60 m site in an attempt to create a 3D point cloud of the entire stand (see Figure 1 for a view from the scanner). Despite some issues with registration, enough data was captured to be representative of a 40 m radius sample plot. Many measurements and forest characteristics could be extracted to an accurate standard. For long-term use in gathering forest-related data, further investigation is required to ascertain the optimal type of instrument, plot size, resolution settings, number of scans per site, software, targets used and other best practices required to generate an optimal, repeatable method for forest data capture, comparison and storage. TLS would be suitable for under-canopy measurements and long-term monitoring, measurement and extraction of forest-based data. TLS is able to capture a snapshot of a particular forest location, repeat that snapshot later and compare the change over time. If a rigorous system is developed for monitoring forestry, TLS would definitely have advantages.
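    The abstract does not spell out how the individual tree measurements were extracted; as an illustration, a common way to derive a stem diameter at breast height (DBH) from a registered TLS point cloud is a circle fit to a thin slice of stem points, sketched below in Python. The slice height, thickness and the algebraic (Kasa) fit are illustrative choices, not necessarily those used in the project.
        import numpy as np
        def estimate_dbh(stem_points, ground_z, slice_height=1.3, slice_thickness=0.1):
            # Keep the points in a thin horizontal slice around breast height (1.3 m)
            z = stem_points[:, 2] - ground_z
            xy = stem_points[np.abs(z - slice_height) < slice_thickness / 2.0, :2]
            # Algebraic (Kasa) circle fit: x^2 + y^2 = 2*a*x + 2*b*y + c
            A = np.column_stack([2.0 * xy[:, 0], 2.0 * xy[:, 1], np.ones(len(xy))])
            (a, b, c), *_ = np.linalg.lstsq(A, (xy ** 2).sum(axis=1), rcond=None)
            radius = np.sqrt(c + a * a + b * b)
            return 2.0 * radius  # diameter, in the units of the point cloud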