Visual Localization of Mobile Robot
This work examines the current state of the art in visual localization and proposes a suitable solution for an indoor mobile robotic platform equipped with a single upward-looking RGB camera with a fisheye lens. The system should be able to perform long-term global localization in changing indoor industrial and office environments. A dataset of localized omnidirectional images was captured and used to evaluate the performance of the selected methods. VLAD and NetVLAD descriptors were tested in combination with a tiled panorama representation. A simple localization method that takes the position of the most similar database image is proposed as the solution.
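The retrieval step described above, taking the pose of the most similar database image, can be sketched as a nearest-neighbour search over image descriptors. The sketch below is illustrative only, assuming L2-normalized VLAD-style descriptors and a small hypothetical database; it is not the thesis's actual implementation.

```python
import numpy as np

def localize(query_desc, db_descs, db_poses):
    """Return the pose of the database image whose descriptor is most
    similar (cosine similarity) to the query descriptor."""
    # L2-normalize so that the dot product equals cosine similarity,
    # as is standard for VLAD/NetVLAD descriptors.
    q = query_desc / np.linalg.norm(query_desc)
    db = db_descs / np.linalg.norm(db_descs, axis=1, keepdims=True)
    sims = db @ q                      # one similarity per database image
    best = int(np.argmax(sims))
    return db_poses[best], sims[best]

# Toy example: 3 database images with 4-D descriptors and (x, y, yaw) poses.
db_descs = np.array([[1.0, 0.0, 0.0, 0.0],
                     [0.0, 1.0, 0.0, 0.0],
                     [0.0, 0.0, 1.0, 0.0]])
db_poses = [(0.0, 0.0, 0.0), (2.5, 1.0, 90.0), (5.0, 3.0, 180.0)]
query = np.array([0.1, 0.9, 0.05, 0.0])   # most similar to image 1

pose, sim = localize(query, db_descs, db_poses)
```

In practice the database descriptors would be indexed (e.g. with an approximate nearest-neighbour structure) rather than scanned linearly.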
Imaging methods for understanding and improving visual training in the geosciences
Experience in the field is a critical educational component for every student studying geology. However, it is typically difficult to ensure that every student gets the necessary experience because of monetary and scheduling limitations. Thus, we proposed to create a virtual field trip based on an existing 10-day field trip to California taken as part of an undergraduate geology course at the University of Rochester. To assess the effectiveness of this approach, we also proposed to analyze the learning and observation processes of both students and experts during the real and virtual field trips. At sites intended for inclusion in the virtual field trip, we captured gigapixel-resolution panoramas by taking hundreds of images using custom-built robotic imaging systems. We gathered data to analyze the learning process by fitting each geology student and expert with a portable eye-tracking system that records a video of their eye movements and a video of the scene they are observing. An important component of analyzing the eye-tracking data is mapping the gaze of each observer into a common reference frame. We have made progress towards developing a software tool that helps automate this procedure by using image feature tracking and registration methods to map the scene video frames from each eye-tracker onto a reference panorama for each site. For the purpose of creating a virtual field trip, we have a large-scale semi-immersive display system that consists of four tiled projectors, which have been colorimetrically and photometrically calibrated, and a curved widescreen display surface. We use this system to present the previously captured panoramas, which simulates the experience of visiting the sites in person. In terms of broader geology education and outreach, we have created an interactive website that uses Google Earth as the interface for visually exploring the panoramas captured for each site.
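Mapping an observer's gaze into the common reference frame amounts to applying a frame-to-panorama transform to each gaze point. A minimal sketch, assuming the transform has already been estimated as a planar homography (e.g. from matched image features with RANSAC); the matrix here is hypothetical, not one produced by the tool described above.

```python
import numpy as np

def map_gaze_to_panorama(gaze_xy, H):
    """Map a gaze point from an eye-tracker scene-video frame into the
    reference panorama using a 3x3 homography H."""
    x, y = gaze_xy
    p = H @ np.array([x, y, 1.0])      # homogeneous coordinates
    return p[0] / p[2], p[1] / p[2]    # perspective divide

# Hypothetical homography: scale the frame by 2 and shift it by (100, 50),
# standing in for one estimated from feature matches.
H = np.array([[2.0, 0.0, 100.0],
              [0.0, 2.0,  50.0],
              [0.0, 0.0,   1.0]])

u, v = map_gaze_to_panorama((30.0, 40.0), H)   # -> (160.0, 130.0)
```

Repeating this per video frame, with H re-estimated as the observer moves, accumulates all gaze samples in the panorama's coordinate system.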
Differently stained whole slide image registration technique with landmark validation
Abstract. One of the most significant tasks in digital pathology is to visually compare and fuse successive, differently stained tissue sections, also called slides. Doing so requires aligning the different images to a common frame, the ground truth. Current sample-scanning tools can create images full of informative layers of digitized tissue, stored at high resolution as whole slide images. However, only a limited number of automatic alignment tools handle such large images precisely within an acceptable processing time. The idea of this study is to propose a deep learning solution for histopathology image registration. The main focus is on understanding landmark validation and the impact of stain augmentation on differently stained histopathology images. The developed registration method is also compared with state-of-the-art algorithms that utilize whole slide images in the field of digital pathology.
This study references previous work on histopathology, digital pathology, whole slide imaging, image registration, color staining, data augmentation, and deep learning. The goal is to develop a learning-based registration framework specifically for high-resolution histopathology image registration. Different whole slide tissue sample images are used at magnifications of up to 40x. The images are organized into sets of consecutive, differently stained sections, and the aim is to register the images based only on the visible tissue, ignoring the background. Significant structures in the tissue are marked with landmarks.
The quality measures include, for example, the relative target registration error (rTRE), the structural similarity index metric (SSIM), visual evaluation, landmark-based evaluation, matching points, and image details. These results are comparable and can also be used in future research and in the development of new tools. Moreover, the results are expected to show how theory and practice combine in whole slide image registration challenges. The DeepHistReg algorithm is studied as the basis for the stain color feature augmentation-based image registration tool developed in this study. Matlab and Aperio ImageScope are the tools used to annotate and validate the images, and Python is used to develop the algorithm of this new registration tool.
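The relative target registration error mentioned above is commonly defined as the Euclidean distance between corresponding landmarks after registration, normalized by the image diagonal. A minimal sketch under that assumption, with hypothetical landmark coordinates:

```python
import numpy as np

def rtre(warped_pts, ref_pts, image_shape):
    """Relative target registration error: Euclidean landmark distance
    after registration, normalized by the image diagonal."""
    h, w = image_shape[:2]
    diag = np.hypot(h, w)
    dists = np.linalg.norm(np.asarray(warped_pts) - np.asarray(ref_pts), axis=1)
    return dists / diag

# Toy landmarks on a hypothetical 3000x4000-pixel slide (5000-pixel diagonal).
ref    = [(100.0, 200.0), (1500.0, 2500.0)]
warped = [(103.0, 204.0), (1500.0, 2500.0)]   # 5-pixel and 0-pixel errors

errs = rtre(warped, ref, (3000, 4000))        # -> [0.001, 0.0]
```

Normalizing by the diagonal makes errors comparable across slides scanned at different resolutions, which is why rTRE rather than raw pixel error is reported.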
As cancer is globally a serious disease regardless of age or lifestyle, it is important to find ways to develop the systems experts use while working with patients’ data. There is still a lot to improve in the field of digital pathology, and this study is one step toward it.
Viewpoint-Free Photography for Virtual Reality
Viewpoint-free photography, i.e., interactively controlling the viewpoint of a photograph after capture, is a standing challenge. In this thesis, we investigate algorithms to enable viewpoint-free photography for virtual reality (VR) from casual capture, i.e., from footage easily captured with consumer cameras. We build on an extensive body of work in image-based rendering (IBR). Given images of an object or scene, IBR methods aim to predict the appearance of an image taken from a novel perspective. Most IBR methods focus on full or near-interpolation, where the output viewpoints either lie directly between captured images or nearby. These methods are not suitable for VR, where the user has a significant range of motion and can look in all directions. Thus, it is essential to create viewpoint-free photos with a wide field-of-view and sufficient positional freedom to cover the range of motion a user might experience in VR. We focus on two VR experiences: 1) Seated VR experiences, where the user can lean in different directions. This simplifies the problem, as the scene is only observed from a small range of viewpoints. Thus, we focus on easy capture, showing how to turn panorama-style capture into 3D photos, a simple representation for viewpoint-free photos, and also how to speed up processing so users can see the final result on-site. 2) Room-scale VR experiences, where the user can explore vastly different perspectives. This is challenging: more input footage is needed, maintaining real-time display rates becomes difficult, and view-dependent appearance and object backsides need to be modelled, all while preventing noticeable mistakes. We address these challenges by: (1) creating refined geometry for each input photograph, (2) using a fast tiled rendering algorithm to achieve real-time display rates, and (3) using a convolutional neural network to hide visual mistakes during compositing.
Overall, we provide evidence that viewpoint-free photography is feasible from casual capture. We thoroughly compare with the state of the art, showing that our methods achieve both a numerical improvement and a clear increase in visual quality for both seated and room-scale VR experiences.
Place Recognition by Per-Location Classifiers
Place recognition is formulated as a task of finding the location where the query image
was captured. This is an important task that has many practical applications in robotics,
autonomous driving, augmented reality, 3D reconstruction or systems that organize imagery
in geographically structured manner. Place recognition is typically done by finding
a reference image in a large structured geo-referenced database.
In this work, we first address the problem of building a geo-referenced dataset for
place recognition. We describe a framework for building the dataset from the street-side
imagery of the Google Street View that provides panoramic views from positions along
many streets, cities, and rural areas worldwide. Besides downloading the panoramic
views and transforming them into sets of perspective images, the framework is also
capable of retrieving the underlying scene depth information.
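Converting a downloaded equirectangular panorama into a perspective image can be done by casting a ray through each output pixel and sampling the panorama at the ray's longitude and latitude. A simplified sketch (horizontal rotation only, nearest-neighbour sampling), not the framework's actual code:

```python
import numpy as np

def pano_to_perspective(pano, fov_deg, yaw_deg, out_w, out_h):
    """Cut a pinhole-camera view out of an equirectangular panorama
    (horizontal rotation only, nearest-neighbour sampling)."""
    H, W = pano.shape[:2]
    f = (out_w / 2) / np.tan(np.radians(fov_deg) / 2)   # focal length in px
    u = np.arange(out_w) - (out_w - 1) / 2
    v = np.arange(out_h) - (out_h - 1) / 2
    uu, vv = np.meshgrid(u, v)
    # Ray direction per output pixel (x right, y down, z forward),
    # rotated about the vertical axis by the requested yaw.
    lon = np.arctan2(uu, f) + np.radians(yaw_deg)
    lat = np.arctan2(vv, np.hypot(uu, f))
    # Longitude/latitude -> equirectangular pixel coordinates.
    px = ((lon / (2 * np.pi) + 0.5) * W).astype(int) % W
    py = np.clip(((lat / np.pi + 0.5) * H).astype(int), 0, H - 1)
    return pano[py, px]

# Toy panorama whose pixel value encodes the column index, to check the
# mapping: the centre of a yaw-0 view should look at the middle column.
pano = np.tile(np.arange(360), (180, 1))
view = pano_to_perspective(pano, fov_deg=90, yaw_deg=0, out_w=90, out_h=60)
```

A full implementation would also rotate the rays for pitch and roll and use bilinear interpolation instead of nearest-neighbour lookup.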
Second, we aim at localizing a query photograph by finding other images depicting
the same place in a large geotagged image database. This is a challenging task due
to changes in viewpoint, imaging conditions and the large size of the image database.
The contribution of this work is two-fold: (i) we cast the place recognition problem as a
classification task and use the available geotags to train a classifier for each location in the
database in a similar manner to per-exemplar SVMs in object recognition, and (ii) as only
a few positive training examples are available for each location, we propose two methods
to calibrate all the per-location SVM classifiers without the need for additional positive
training data. The first method relies on p-values from statistical hypothesis testing and
uses only the available negative training data. The second method performs an affine
calibration by appropriately normalizing the learned classifier hyperplane and does not
need any additional labeled training data. We test the proposed place recognition method
with the bag-of-visual-words and Fisher vector image representations suitable for large
scale indexing.
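The affine calibration in (ii) can be illustrated by normalizing each classifier's hyperplane so that scores become signed distances to the hyperplane and are therefore comparable across locations. The classifiers and numbers below are hypothetical toy values, not the trained per-location SVMs of this work:

```python
import numpy as np

def calibrated_scores(query, classifiers):
    """Score a query descriptor against per-location linear classifiers.

    Raw SVM scores w.x + b are not comparable across independently trained
    classifiers; dividing by ||w|| turns each score into a signed distance
    to the hyperplane -- a simple affine calibration."""
    return {loc: (np.dot(w, query) + b) / np.linalg.norm(w)
            for loc, (w, b) in classifiers.items()}

# Two hypothetical locations whose hyperplanes have very different scales.
classifiers = {
    "loc_A": (np.array([10.0, 0.0]), -5.0),   # large ||w|| inflates raw score
    "loc_B": (np.array([0.0, 1.0]),  0.0),
}
q = np.array([1.0, 2.0])

raw = {loc: np.dot(w, q) + b for loc, (w, b) in classifiers.items()}
cal = calibrated_scores(q, classifiers)
# Raw scores favour loc_A (5.0 vs 2.0); after normalization loc_B wins
# (0.5 vs 2.0), because the query lies further on the positive side of
# loc_B's hyperplane.
```

This is why some form of calibration is needed before ranking locations by classifier score, especially when each classifier is trained on only a few positives.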
Experiments are performed on three datasets: 25,000 and 55,000 geotagged street
view images of Pittsburgh, and the 24/7 Tokyo benchmark containing 76,000 images with
varying illumination conditions. The results show improved place recognition accuracy of
the learned image representation over direct matching of raw image descriptors.
Multi-Projective Camera-Calibration, Modeling, and Integration in Mobile-Mapping Systems
Optical systems are vital components of most modern platforms such as mobile mapping systems, autonomous cars, unmanned aerial vehicles (UAVs), and game consoles. Multi-camera systems (MCS) are commonly employed for precise mapping, including aerial and close-range applications.
In the first part of this thesis, a simple and practical calibration model and calibration scheme for multi-projective cameras (MPCs) is presented. The calibration scheme is enabled by implementing a camera test field, FGI’s camera calibration room, equipped with customized coded targets. The first hypothesis was that a test field is necessary to calibrate an MPC.
Two commercially available MPCs with 6 and 36 cameras were successfully calibrated in FGI’s calibration room. The calibration results suggest that the proposed model is able to estimate parameters of the MPCs with high geometric accuracy, and reveals the internal structure of the MPCs.
In the second part, the applicability of an MPC calibrated by the proposed approach was investigated in a mobile mapping system (MMS). The second hypothesis was that a system calibration is necessary to achieve high geometric accuracies in a multi-camera MMS. The MPC model was updated to consider mounting parameters with respect to GNSS and IMU. A system calibration scheme for an MMS was proposed. The results showed that the proposed system calibration approach was able to produce accurate results by direct georeferencing of multi-images in an MMS. Results of geometric assessments suggested that a centimeter-level accuracy is achievable by employing the proposed approach. A novel correspondence map is demonstrated for MPCs that helps to create metric panoramas.
In the third part, the problem of real-time trajectory estimation of a UAV equipped with a projective camera was studied. The main objective of this part was to address the problem of real-time monocular simultaneous localization and mapping (SLAM) for a UAV. A matching method based on multi-resolution image pyramids and progressive rectangular regions was proposed, which lowered the matching cost while leaving the matching accuracy unchanged. An angular framework was discussed to address the gimbal-lock singularity. The results suggest that the proposed solution is an effective and rigorous monocular SLAM approach for aerial cases where the object is near-planar.
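The gimbal-lock singularity that the angular framework addresses can be reproduced directly with z-y-x Euler angles: at 90° pitch the yaw and roll axes align, so only their difference matters and one rotational degree of freedom is lost. A small demonstration (standard Euler-angle composition, not the thesis's framework):

```python
import numpy as np

def euler_zyx(yaw, pitch, roll):
    """Rotation matrix R = Rz(yaw) @ Ry(pitch) @ Rx(roll) (aerospace z-y-x)."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

# At pitch = 90 degrees, shifting yaw and roll by the same amount yields
# the exact same rotation: two distinct parameter triples, one attitude.
R1 = euler_zyx(0.3, np.pi / 2, 0.1)
R2 = euler_zyx(0.5, np.pi / 2, 0.3)   # yaw and roll both shifted by 0.2
# np.allclose(R1, R2) -> True
```

Away from ±90° pitch the parameterization is unambiguous, which is why the singularity only needs special handling near those attitudes.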
In the last part, the problem of tree-species classification by a UAV equipped with a hyperspectral and an RGB camera was studied. The objective of this study was to investigate different aspects of the precise tree-species classification problem by employing state-of-the-art methods. A 3D convolutional neural network (3D-CNN) and a multi-layer perceptron (MLP) were proposed and compared. Both classifiers were highly successful in their tasks, while the 3D-CNN was superior in performance. The combination of hyperspectral and RGB data gave the best accuracy, but the RGB camera alone also produced accurate results and is a low-cost, efficient data source for many classification applications. The classification results were the most accurate published in comparison to other works.
Towards Data-Driven Large Scale Scientific Visualization and Exploration
Technological advances have enabled us to acquire extremely large
datasets but it remains a challenge to store, process, and extract
information from them. This dissertation builds upon recent advances
in machine learning, visualization, and user interactions to
facilitate exploration of large-scale scientific datasets. First, we
use data-driven approaches to computationally identify regions of
interest in the datasets. Second, we use visual presentation for
effective user comprehension. Third, we provide interactions for
human users to integrate domain knowledge and semantic information
into this exploration process.
Our research shows how to extract, visualize, and explore informative
regions on very large 2D landscape images, 3D volumetric datasets,
high-dimensional volumetric mouse brain datasets with thousands of
spatially-mapped gene expression profiles, and geospatial trajectories
that evolve over time. The contributions of this dissertation include:
(1) We introduce a sliding-window saliency model that discovers
regions of user interest in very large images; (2) We develop visual
segmentation of intensity-gradient histograms to identify meaningful
components from volumetric datasets; (3) We extract boundary surfaces
from a wealth of volumetric gene expression mouse brain profiles to
personalize the reference brain atlas; (4) We show how to efficiently
cluster geospatial trajectories by mapping each sequence of locations
to a high-dimensional point with the kernel distance framework.
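Contribution (4) can be illustrated with an explicit feature map whose Euclidean distances approximate a kernel distance between trajectories: each trajectory is lifted to a fixed-length vector of RBF responses against a fixed landmark set. The landmark grid, bandwidth, and data below are hypothetical, not the dissertation's pipeline:

```python
import numpy as np

def trajectory_feature(traj, landmarks, sigma=1.0):
    """Lift a trajectory (sequence of 2-D points) to a fixed-length vector:
    the mean RBF kernel response of its points against a fixed landmark set.
    Euclidean distance between such features approximates a kernel
    distance between the trajectories."""
    traj = np.asarray(traj, dtype=float)                      # (n, 2)
    d2 = ((traj[:, None, :] - landmarks[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2)).mean(axis=0)        # (m,)

# Hypothetical 6x6 landmark grid covering the spatial domain.
gx, gy = np.meshgrid(np.linspace(0, 10, 6), np.linspace(0, 10, 6))
landmarks = np.stack([gx.ravel(), gy.ravel()], axis=1)

a = [(1, 1.0), (2, 1.0), (3, 1.0)]   # two similar trajectories...
b = [(1, 1.2), (2, 1.1), (3, 0.9)]
c = [(7, 8.0), (8, 8.0), (9, 9.0)]   # ...and one far away

fa, fb, fc = (trajectory_feature(t, landmarks) for t in (a, b, c))
# ||fa - fb|| is much smaller than ||fa - fc||, so ordinary k-means or
# hierarchical clustering on these vectors groups a with b.
```

Because each trajectory becomes one high-dimensional point, standard vector-space clustering algorithms apply directly, regardless of the trajectories' differing lengths.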
We aim to discover patterns, relationships, and anomalies that would
lead to new scientific, engineering, and medical advances. This work
represents one of the first steps toward better visual understanding
of large-scale scientific data by combining machine learning and human
intelligence.
Camera positioning for 3D panoramic image rendering
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University London. Virtual camera realisation and the proposition of a trapezoidal camera architecture are the two broad contributions of this thesis. Firstly, multiple cameras and their arrangement constitute a critical component which affects the integrity of visual content acquisition for multi-view video. Currently, linear, convergent, and divergent arrays are the prominent camera topologies adopted. However, the large number of cameras required and their synchronisation are two of the prominent challenges usually encountered. The use of virtual cameras can significantly reduce the number of physical cameras used with respect to any of the known
camera structures, hence adequately reducing some of the other implementation issues. This thesis explores the use of image-based rendering, with and without geometry, in implementations leading to the realisation of virtual cameras. The virtual camera implementation was carried out from the perspective of a depth map (geometry) and the use of multiple image samples (no geometry). Prior to the virtual camera realisation, the generation of depth maps was investigated using region match measures widely known for solving the image point correspondence problem. The constructed depth maps have been compared with ones generated
using the dynamic programming approach. In both the geometry and no-geometry approaches, the virtual cameras lead to the rendering of views from a textured depth map, the construction of a 3D panoramic image of a scene by stitching multiple image samples and performing superposition on them, and the computation
of a virtual scene from a stereo pair of panoramic images. The quality of these rendered images was assessed through objective or subjective analysis in the Imatest software. Furthermore, metric reconstruction of a scene was performed by re-projection of the pixel points from multiple image samples with
a single centre of projection, using a sparse bundle adjustment algorithm. The statistical summary obtained after applying this algorithm gauges the efficiency of the optimisation step. The optimised data was then visualised in the Meshlab software environment, providing the reconstructed scene. Secondly, with any of the well-established camera arrangements, all cameras are usually constrained to the same horizontal plane. Therefore, occlusion becomes an extremely challenging problem, and a robust camera set-up is required to recover the hidden parts of scene objects.
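The region match measures mentioned above for the image point correspondence problem can be illustrated with the classic sum-of-squared-differences window search along a scanline, which yields a disparity (and hence depth) estimate per pixel. A toy sketch with synthetic data, not the thesis's implementation:

```python
import numpy as np

def ssd_disparity(left, right, x, y, half=2, max_disp=10):
    """Estimate the disparity at pixel (x, y) of the left image by sliding
    a (2*half+1)^2 window over the right image along the same scanline and
    minimizing the sum of squared differences (SSD), one of the classic
    region match measures for point correspondence."""
    patch = left[y - half:y + half + 1, x - half:x + half + 1]
    best_d, best_cost = 0, np.inf
    for d in range(0, max_disp + 1):
        if x - d - half < 0:           # window would leave the image
            break
        cand = right[y - half:y + half + 1, x - d - half:x - d + half + 1]
        cost = float(((patch - cand) ** 2).sum())
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d

# Synthetic pair: the right image is the left shifted 3 pixels to the left,
# so the true disparity is 3 wherever it is observable.
rng = np.random.default_rng(0)
left = rng.random((20, 40))
right = np.roll(left, -3, axis=1)

d = ssd_disparity(left, right, x=20, y=10)   # -> 3
```

Dynamic-programming stereo, against which the constructed depth maps were compared, instead optimises all disparities along a scanline jointly with a smoothness cost.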
To adequately meet the visibility condition for scene objects, given that occlusion of the same scene objects can occur, a multi-plane camera structure is highly desirable. Therefore, this thesis also explores a trapezoidal camera structure for image acquisition. The approach here is to assess the feasibility and potential
of several physical cameras of the same model being sparsely arranged on the edges of an efficient trapezoid graph. This is implemented in both Matlab and Maya. The depth maps rendered in Matlab are better in quality
- …