
    Error-Concealing Image-Based Rendering Methods (Fehlerkaschierte Bildbasierte Darstellungsverfahren)

    Creating photo-realistic images has been one of the major goals of computer graphics since its early days. Instead of modeling the complexity of nature with standard modeling tools, image-based approaches exploit real-world footage directly, as it is photo-realistic by definition. A long-standing drawback of these approaches is that composing or combining different sources is a non-trivial task, often resulting in distracting visible artifacts. In this thesis we focus on different techniques to diminish visible artifacts when combining multiple images in a common image domain. The results are either novel images, when dealing with the composition of multiple images, or novel video sequences rendered in real time, when dealing with video footage from multiple cameras.
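A common building block for reducing visible seams when compositing two images is to cross-fade them over their overlap region rather than cutting hard. The sketch below is only an illustrative example of that general idea (linear alpha feathering on single-channel images), not a method from the thesis itself; the function name `feather_blend` and the fixed-width-overlap assumption are mine.

```python
import numpy as np

def feather_blend(img_a, img_b, overlap):
    """Blend two equally sized single-channel images along a vertical seam:
    keep img_a on the left, img_b on the right, and cross-fade over the
    last `overlap` columns with a linear alpha ramp to soften the seam."""
    h, w = img_a.shape
    alpha = np.linspace(1.0, 0.0, overlap)   # weight of img_a inside the seam
    out = np.empty_like(img_a, dtype=float)
    out[:, :w - overlap] = img_a[:, :w - overlap]
    seam_a = img_a[:, w - overlap:]
    seam_b = img_b[:, :overlap]
    out[:, w - overlap:] = alpha[None, :] * seam_a + (1.0 - alpha[None, :]) * seam_b
    return out
```

More elaborate schemes (multi-band blending, gradient-domain compositing) follow the same pattern but split the blend across frequency bands or solve for the result in the gradient domain.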

    Enhanced Ultrasound Visualization for Procedure Guidance

    Intra-cardiac procedures often involve fast-moving anatomic structures with large spatial extent and high geometrical complexity. Real-time visualization of the moving structures and of instrument-tissue contact is crucial to the success of these procedures. Real-time 3D ultrasound is a promising modality for procedure guidance, as it offers improved spatial orientation information relative to 2D ultrasound. Imaging rates of 30 fps enable good visualization of instrument-tissue interactions, far faster than the volumetric imaging alternatives (MR/CT). Unlike fluoroscopy, 3D ultrasound also offers better soft-tissue contrast and avoids the use of ionizing radiation.

    Radiometric Correction and 3D Integration of Long-Range Ground-Based Hyperspectral Imagery for Mineral Exploration of Vertical Outcrops

    Recently, ground-based hyperspectral imaging has come to the fore, supporting the arduous task of mapping near-vertical, difficult-to-access geological outcrops. The application of outcrop sensing within a range of one to several hundred metres, including geometric corrections and integration with accurate terrestrial laser scanning models, is already developing rapidly. However, there are few studies dealing with ground-based imaging of distant targets (i.e., in the range of several kilometres) such as mountain ridges, cliffs, and pit walls. In particular, the extreme influence of atmospheric effects and topography-induced illumination differences has remained an unmet challenge for the spectral data. These effects cannot be corrected by means of common correction tools for nadir satellite or airborne data. Thus, this article presents an adapted workflow to overcome the challenges of long-range outcrop sensing, including straightforward atmospheric and topographic corrections. Using two datasets with different characteristics, we demonstrate the application of the workflow and highlight the importance of the presented corrections for a reliable geological interpretation. The achieved spectral mapping products are integrated with 3D photogrammetric data to create large-scale "hyperclouds", i.e., geometrically correct representations of the hyperspectral datacube. The presented workflow opens up a new range of application possibilities for hyperspectral imagery by significantly enlarging the scale of ground-based measurements.
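The simplest form of topographic illumination correction referred to in work like this is the Lambertian cosine correction, which rescales each pixel by the ratio of the cosine of the solar zenith angle to the cosine of the local incidence angle. The snippet below is a minimal sketch of that textbook correction, not the adapted workflow the article itself develops; the clipping threshold is an assumption to avoid division blow-up at grazing incidence.

```python
import numpy as np

def cosine_topographic_correction(radiance, solar_zenith_deg, incidence_deg):
    """Lambertian cosine correction: scale radiance by
    cos(solar zenith) / cos(local incidence angle).  Angles in degrees;
    `incidence_deg` is the angle between the sun direction and the local
    surface normal (e.g. derived from a photogrammetric terrain model)."""
    cos_sz = np.cos(np.deg2rad(solar_zenith_deg))
    cos_i = np.cos(np.deg2rad(incidence_deg))
    cos_i = np.clip(cos_i, 0.1, None)   # assumed floor against grazing angles
    return radiance * (cos_sz / cos_i)
```

For long-range, near-horizontal viewing geometries, this simple model is exactly what breaks down, which motivates the adapted corrections the article presents.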

    Situated Displays in Telecommunication

    In face-to-face conversation, numerous cues of attention, eye contact, and gaze direction provide important channels of information. These channels create cues that regulate turn taking, establish a sense of engagement, and indicate the focus of conversation. However, some subtleties of gaze can be lost in common videoconferencing systems, because the single perspective view of the camera does not preserve the spatial characteristics of the face-to-face situation. In particular, in group conferencing, the 'Mona Lisa effect' makes all observers feel that they are being looked at when the remote participant looks at the camera. In this thesis, we present designs and evaluations of four novel situated teleconferencing systems, which aim to improve the teleconferencing experience. Firstly, we demonstrate the effectiveness of a spherical video telepresence system in that it allows a single observer at multiple viewpoints to accurately judge where the remote user is placing their gaze. Secondly, we demonstrate the gaze-preserving capability of a cylindrical video telepresence system, but for multiple observers at multiple viewpoints. Thirdly, we demonstrate the further improvement of a random-hole autostereoscopic multiview telepresence system in conveying gaze by adding stereoscopic cues. Lastly, we investigate the influence of display type and viewing angle on how people place their trust during avatar-mediated interaction. The results show that the spherical avatar telepresence system can be viewed qualitatively similarly from all angles, and demonstrate how trust can be altered depending on how one views the avatar. Together these demonstrations motivate the further study of novel display configurations and suggest parameters for the design of future teleconferencing systems.

    The Need for Accurate Pre-processing and Data Integration for the Application of Hyperspectral Imaging in Mineral Exploration

    Hyperspectral imaging (HSI) is one of the key technologies in current non-invasive material analysis. Recent developments in sensor design and computer technology allow the acquisition and processing of high spectral and spatial resolution datasets. In contrast to active spectroscopic approaches such as X-ray fluorescence or laser-induced breakdown spectroscopy, passive hyperspectral reflectance measurements in the visible and infrared parts of the electromagnetic spectrum are considered rapid, non-destructive, and safe. Compared to true-color or multi-spectral imagery, a much larger range of substances, and even small compositional changes, can be differentiated and analyzed. Applications of hyperspectral reflectance imaging can be found in a wide range of scientific and industrial fields, especially when physically inaccessible or sensitive samples and processes need to be analyzed. 
In geosciences, this method offers a possibility to obtain spatially continuous compositional information of samples, outcrops, or regions that might otherwise be inaccessible, or too large, dangerous, or environmentally valuable for traditional exploration at reasonable expenditure. Depending on the spectral range and resolution of the deployed sensor, HSI can provide information about the distribution of rock-forming and alteration minerals, specific chemical compounds, and ions. Traditional operational applications comprise space-, airborne, and lab-scale measurements with a usually (near-)nadir viewing angle. The diversity of available sensors, in particular the ongoing miniaturization, enables their usage from a wide range of distances and viewing angles on a large variety of platforms. Many recent approaches focus on the application of hyperspectral sensors at an intermediate to close sensor-target distance (one to several hundred meters) between airborne and lab-scale, usually implying exceptional acquisition parameters. These comprise unusual viewing angles, such as for the imaging of vertical targets; specific geometric and radiometric distortions associated with the deployment of small moving platforms such as unmanned aerial systems (UAS); or the extreme size and complexity of data created by large imaging campaigns. Accurate geometric and radiometric data corrections using established methods are often not possible. Another important challenge results from the overall variety of spatial scales, sensors, and viewing angles, which often impedes a combined interpretation of datasets, such as in a 2D geographic information system (GIS). Recent studies have mostly had to work with at least partly uncorrected data, which cannot place the results in a meaningful spatial context. These major unsolved challenges of hyperspectral imaging in mineral exploration motivated this work. 
The core aim is the development of tools that bridge data acquisition and interpretation, by providing full image processing workflows from the acquisition of raw data in the field or lab, to fully corrected, validated, and spatially registered at-target reflectance datasets, which are valuable for subsequent spectral analysis, image classification, or fusion in different operational environments at multiple scales. I focus on promising emerging HSI approaches, i.e.: (1) the use of lightweight UAS platforms, (2) mapping of inaccessible vertical outcrops, sometimes at up to several kilometers distance, (3) multi-sensor integration for versatile sample analysis in the near-field or lab-scale, and (4) the combination of reflectance HSI with other spectroscopic methods such as photoluminescence (PL) spectroscopy for the characterization of valuable elements in low-grade ores. For each topic, the state of the art is analyzed, tailored workflows are developed to meet key challenges, and the potential of the resulting datasets is showcased on prominent mineral-exploration examples. Combined in a Python toolbox, the developed workflows aim to be versatile with regard to the utilized sensors and desired applications.
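A standard step in radiance-to-reflectance workflows of the kind described above is the empirical line method: per spectral band, fit a linear gain and offset from reference panels of known reflectance, then apply that line to the whole scene. The sketch below shows only this generic textbook step, not the thesis's Python toolbox; the function name `empirical_line` and the array layout are my assumptions.

```python
import numpy as np

def empirical_line(radiance, panel_radiance, panel_reflectance):
    """Per-band empirical line correction.

    radiance:          image cube, shape (H, W, B)
    panel_radiance:    measured radiance of P reference panels, shape (P, B)
    panel_reflectance: known reflectance of the same panels, shape (P, B)
    Returns the image converted to estimated at-target reflectance."""
    bands = radiance.shape[-1]
    gain = np.empty(bands)
    offset = np.empty(bands)
    for b in range(bands):
        # degree-1 fit: reflectance = gain * radiance + offset
        gain[b], offset[b] = np.polyfit(panel_radiance[:, b],
                                        panel_reflectance[:, b], 1)
    return radiance * gain + offset
```

With only two panels the fit is exact; with more, `polyfit` gives the least-squares line, which is more robust to panel measurement noise.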

    Light field image processing: an overview

    Light field imaging has emerged as a technology for capturing richer visual information from our world. As opposed to traditional photography, which captures a 2D projection of the light in the scene by integrating over the angular domain, light fields collect radiance from rays in all directions, demultiplexing the angular information lost in conventional photography. On the one hand, this higher-dimensional representation of visual data offers powerful capabilities for scene understanding, and substantially improves the performance of traditional computer vision tasks such as depth sensing, post-capture refocusing, segmentation, video stabilization, and material classification. On the other hand, the high dimensionality of light fields also brings up new challenges in terms of data capture, data compression, content editing, and display. Taking these two elements together, research in light field image processing has become increasingly popular in the computer vision, computer graphics, and signal processing communities. In this paper, we present a comprehensive overview and discussion of research in this field over the past 20 years. We cover all aspects of light field image processing, including basic light field representation and theory, acquisition, super-resolution, depth estimation, compression, editing, processing algorithms for light field display, and computer vision applications of light field data.
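Post-capture refocusing, one of the applications listed above, is classically done by shift-and-sum synthetic-aperture rendering: each sub-aperture view is translated in proportion to its offset from the central view and the results are averaged, bringing one depth plane into focus. The following is a minimal integer-shift sketch of that standard algorithm, not code from the survey; the `(U, V, H, W)` layout is an assumption.

```python
import numpy as np

def refocus(lightfield, shift):
    """Shift-and-sum refocusing of a 4D light field.

    lightfield: sub-aperture views, shape (U, V, H, W)
    shift:      pixels of translation per unit of angular offset;
                varying it sweeps the synthetic focal plane through depth."""
    U, V, H, W = lightfield.shape
    u0, v0 = U // 2, V // 2                 # central view
    acc = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            du = int(round(shift * (u - u0)))
            dv = int(round(shift * (v - v0)))
            acc += np.roll(lightfield[u, v], (du, dv), axis=(0, 1))
    return acc / (U * V)
```

Objects at the depth matching `shift` align across views and stay sharp, while everything else is averaged over misaligned copies and blurs, mimicking a wide synthetic aperture.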

    Stereoscopic high dynamic range imaging

    Two modern technologies show promise to dramatically increase immersion in virtual environments. Stereoscopic imaging captures two images representing the views of the two eyes and allows for better depth perception. High dynamic range (HDR) imaging accurately represents real-world lighting, as opposed to traditional low dynamic range (LDR) imaging, and provides better contrast and more natural-looking scenes. The combination of the two technologies, in order to gain the advantages of both, has been until now mostly unexplored due to current limitations in the imaging pipeline. This thesis reviews both fields, proposes a stereoscopic high dynamic range (SHDR) imaging pipeline outlining the challenges that need to be resolved to enable SHDR, and focuses on the capture and compression aspects of that pipeline. The problems of capturing SHDR images, which would potentially require two HDR cameras and introduce ghosting, are mitigated by capturing an HDR-LDR pair and using it to generate SHDR images. A detailed user study compared four different methods of generating SHDR images; the results demonstrated that one of the methods can produce images perceptually indistinguishable from the ground truth. Insights obtained while developing the static image operators guided the design of SHDR video techniques. Three methods for generating SHDR video from an HDR-LDR video pair are proposed and compared to ground-truth SHDR videos; the results showed little overall error and identified the method with the least error. Once captured, SHDR content needs to be efficiently compressed. Five backward-compatible SHDR compression methods are presented. The proposed methods can encode SHDR content at little more than the size of a single traditional LDR image (18% larger for one method), and the backward-compatibility property encourages early adoption of the format. The work presented in this thesis has introduced and advanced capture and compression methods for the adoption of SHDR imaging. In general, this research paves the way for the novel field of SHDR imaging, which should lead to improved and more realistic representation of captured scenes.
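One naive way to turn an HDR-LDR pair into a stereo HDR pair, in the spirit of the generation problem described above, is to inverse-tone-map the LDR view: linearise it with an assumed camera gamma and rescale it to the exposure of the companion HDR view. This sketch is a deliberately crude stand-in, not one of the four methods evaluated in the thesis; the gamma value and mean-matching rule are assumptions.

```python
import numpy as np

def expand_ldr_to_hdr(ldr, hdr_ref, gamma=2.2):
    """Naive inverse tone mapping for the second stereo view.

    ldr:     gamma-encoded LDR view with values in [0, 1]
    hdr_ref: linear-radiance HDR view of the other eye
    Linearises the LDR image, then scales it so its mean matches the
    companion HDR view's mean (a crude exposure alignment)."""
    linear = np.power(ldr, gamma)                     # undo display gamma
    scale = hdr_ref.mean() / max(linear.mean(), 1e-9)
    return linear * scale
```

Real methods must also handle clipped highlights in the LDR view, where no scaling can recover the lost radiance; that is where the HDR view's information becomes essential.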

    Videos in Context for Telecommunication and Spatial Browsing

    The research presented in this thesis explores the use of videos embedded in panoramic imagery to transmit spatial and temporal information describing remote environments and their dynamics. Virtual environments (VEs) through which users can explore remote locations are rapidly emerging as a popular medium for presence and remote collaboration. However, capturing a visual representation of a location for use in a VE is usually a tedious process that requires either manual modelling of the environment or the employment of specific hardware. Capturing environment dynamics is not straightforward either, and is usually performed with specific tracking hardware. Similarly, browsing large unstructured video collections with available tools is difficult, as the abundance of spatial and temporal information makes them hard to comprehend. On a spectrum between 3D VEs and 2D images, panoramas lie in between: they offer the accessibility of 2D images while preserving the surrounding representation of 3D virtual environments. For this reason, panoramas are an attractive basis for videoconferencing and browsing tools, as they can relate several videos temporally and spatially. This research explores methods to acquire, fuse, render, and stream data coming from heterogeneous cameras, with the help of panoramic imagery. Three distinct but interrelated questions are addressed. First, the thesis considers how spatially localised video can be used to increase the spatial information transmitted during video-mediated communication, and whether this improves the quality of communication. Second, the research asks whether videos in panoramic context can be used to convey spatial and temporal information about a remote place and the dynamics within it, and whether this improves users' performance in tasks that require spatio-temporal thinking. Finally, the thesis considers whether display type has an impact on reasoning about events within videos in panoramic context. 
These research questions were investigated over three experiments, covering scenarios common to computer-supported cooperative work and video browsing. To support the investigation, two distinct video+context systems were developed. The first telecommunication experiment compared our videos-in-context interface with fully panoramic video and conventional webcam video conferencing in an object placement scenario. The second experiment investigated the impact of videos in panoramic context on the quality of spatio-temporal thinking during localization tasks. To support the experiment, a novel interface to video collections in panoramic context was developed and compared with common video-browsing tools. The final experimental study investigated the impact of display type on reasoning about events. The study explored three adaptations of our video-collection interface to three display types. The overall conclusion is that videos in panoramic context offer a valid solution to spatio-temporal exploration of remote locations. Our approach presents a richer visual representation in terms of space and time than standard tools, showing that providing panoramic context to video collections makes spatio-temporal tasks easier. To this end, videos in context are a suitable alternative to more difficult, and often more expensive, solutions. These findings are beneficial to many applications, including teleconferencing, virtual tourism, and remote assistance.
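Embedding a video in a panorama requires mapping view directions into panorama pixel coordinates. The sketch below shows the standard equirectangular mapping that such video-in-context interfaces commonly rely on; it is a generic illustration, not code from the thesis, and the axis convention (+z forward, +x right, +y up) is an assumption.

```python
import numpy as np

def direction_to_equirect(direction, width, height):
    """Map a unit view direction (x, y, z) to pixel coordinates in an
    equirectangular panorama of size width x height.
    Longitude spans [-pi, pi] across the width, latitude [-pi/2, pi/2]
    down the height."""
    x, y, z = direction
    lon = np.arctan2(x, z)                    # azimuth around the vertical axis
    lat = np.arcsin(np.clip(y, -1.0, 1.0))    # elevation above the horizon
    px = (lon / (2.0 * np.pi) + 0.5) * width
    py = (0.5 - lat / np.pi) * height
    return px, py
```

Anchoring a localised video is then a matter of projecting its camera's viewing direction (and frustum corners) with this mapping and compositing the frame at the resulting pixel region.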

    Advances in Object and Activity Detection in Remote Sensing Imagery

    The recent revolution in deep learning has enabled considerable development in the fields of object and activity detection. Visual object detection tries to find objects of target classes with precise localisation in an image and to assign each object instance a corresponding class label. At the same time, activity recognition aims to determine the actions or activities of an agent or group of agents based on sensor or video observation data. Detecting, identifying, tracking, and understanding the behaviour of objects through images and videos taken by various cameras is a very important and challenging problem. Together, object and activity recognition in imagery captured by remote sensing platforms is a highly dynamic and challenging research topic. During the last decade, there has been significant growth in the number of publications in the field of object and activity recognition. In particular, many researchers have proposed application domains for identifying objects and their specific behaviours from air- and spaceborne imagery. This Special Issue includes papers that explore novel and challenging topics for object and activity detection in remote sensing images and videos acquired by diverse platforms.
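The "precise localisation" mentioned above is conventionally scored with intersection-over-union (IoU) between a predicted and a ground-truth bounding box, the standard metric across the detection literature. A minimal reference implementation for axis-aligned boxes:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as
    (x1, y1, x2, y2) with x1 < x2 and y1 < y2.  Returns a value in [0, 1];
    detections are typically counted correct above a threshold such as 0.5."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    iw, ih = max(0.0, ix2 - ix1), max(0.0, iy2 - iy1)
    inter = iw * ih
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter) if inter > 0.0 else 0.0
```

The same quantity also drives non-maximum suppression, which discards duplicate detections whose IoU with a higher-scoring box exceeds a threshold.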