1,356 research outputs found

    Doctor of Philosophy

    Interactive editing and manipulation of digital media is a fundamental component of digital content creation. One medium in particular, digital imagery, has seen a recent increase in the popularity of its large or even massive image formats. Unfortunately, current systems and techniques are rarely concerned with scalability or usability for these large images. Moreover, processing massive (or even large) imagery is assumed to be an off-line, automatic process, although many problems associated with these datasets require human intervention for high-quality results. This dissertation details how to design interactive image techniques that scale. In particular, massive imagery is typically constructed as a seamless mosaic of many smaller images. The focus of this work is the creation of new technologies to enable user interaction in the formation of these large mosaics. While an interactive system for all stages of the mosaic creation pipeline is a long-term research goal, this dissertation concentrates on the last phase of the pipeline: the composition of registered images into a seamless composite. The work detailed in this dissertation provides the technologies to fully realize interactive editing in mosaic composition on image collections ranging from the very small to the massive in scale.
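The composition step, blending registered images into a seamless result, can be illustrated with a minimal feathered-blending sketch. This is a generic technique, not the dissertation's interactive method; the function name and the distance-transform weighting are assumptions for illustration.

```python
# A minimal sketch of seam-free composition of two pre-registered images via
# distance-transform feathering (an illustrative technique, not this work's).
import numpy as np
from scipy.ndimage import distance_transform_edt

def feather_composite(img_a, img_b, mask_a, mask_b):
    """Blend two registered images; masks mark valid pixels (bool arrays)."""
    # Weight each pixel by its distance to the invalid region so that
    # contributions taper off smoothly toward the seams.
    w_a = distance_transform_edt(mask_a).astype(np.float64)
    w_b = distance_transform_edt(mask_b).astype(np.float64)
    total = w_a + w_b
    total[total == 0] = 1.0  # avoid division by zero outside both images
    w_a, w_b = w_a / total, w_b / total
    return w_a[..., None] * img_a + w_b[..., None] * img_b
```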

    An Integrated Approach to Benthic Habitat Classification of the North Eastern Qatar Marine Zone Using Remote Sensing, Geographic Information System, and in Situ Measurements

    A key aspect of conservation management in the coastal marine zone is mapping the benthic habitat, which is the focus of the work presented in this project. Multispectral WorldView-2 (WV2) satellite data acquired in April 2010 was used to classify and map the benthic habitat of the north-eastern part of the Qatar marine zone: a 35 km stretch of coastline, 7 km wide, with water depths ranging from 0 to 11 m. Baseline field surveys of the study area carried out in March-April 2010 identified four broad benthic types: seagrass, algae, live corals, and sand. The WV2 data was corrected for atmospheric and water-column effects. Depth-invariant bottom indices were calculated and formed the basis for classification. Field survey data was used to implement the supervised classification and the accuracy assessment. The classification achieved an overall accuracy of 81.8%. The gap in the available information on benthic cover in the Qatari coastal marine zone makes the study useful for detecting changes in benthic cover over time.
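As a rough illustration of the depth-invariant bottom index underlying the classification, the following sketch computes a Lyzenga-style index for one band pair. The input arrays, deep-water radiances, and sand samples are assumptions, not the study's exact processing chain; the resulting index images would then feed a supervised classifier trained on the field-survey points.

```python
# Hedged sketch of a Lyzenga-style depth-invariant bottom index for one band pair.
import numpy as np

def depth_invariant_index(band_i, band_j, deep_i, deep_j, sand_i, sand_j):
    """band_i, band_j: radiance rasters (2-D arrays) after atmospheric correction.
    deep_i, deep_j: mean deep-water radiance per band (scalars).
    sand_i, sand_j: 1-D radiance samples over uniform sand at varying depths."""
    # Log-transform after subtracting the deep-water signal.
    xi = np.log(np.clip(band_i - deep_i, 1e-6, None))
    xj = np.log(np.clip(band_j - deep_j, 1e-6, None))
    si = np.log(np.clip(sand_i - deep_i, 1e-6, None))
    sj = np.log(np.clip(sand_j - deep_j, 1e-6, None))
    # Ratio of attenuation coefficients k_i / k_j estimated from the sand samples.
    a = (np.var(si) - np.var(sj)) / (2.0 * np.cov(si, sj)[0, 1])
    k_ratio = a + np.sqrt(a * a + 1.0)
    # Depth-invariant index image for this band pair.
    return xi - k_ratio * xj
```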

    High Dynamic Range Image Construction and Noise Reduction Using Differently Exposed Images

    This thesis discusses how to use the mutual information in differently exposed images to improve noise reduction. It also investigates how one can create a high dynamic range (HDR) image from multiple differently exposed images taken with a handheld camera while simultaneously coping with the problems this approach introduces. The proposed method and workflow are based on the non-local means algorithm and use both image registration and an intensity transformation to take advantage of the mutual information in the different images. Alpha expansion is used to select contiguous areas for the HDR construction, and image blending creates seamless transitions between the images. The noise reduction algorithm shows better results with an intensity-based noise level than with a constant one. The methods used to take advantage of the mutual information prove inadequate, as noise reduction applied to a single image is shown to be just as good. The HDR image construction using both alpha expansion and image blending works well: for smaller movements the approach shows good results and even works as an anti-ghosting algorithm, but for larger movements ghosting artifacts are introduced. High dynamic range imaging (HDR) is a family of techniques used to produce images with more detail in both dark and bright areas than a traditional photograph. One way to create an HDR image is to combine photographs taken with different amounts of light. Problems arise, however, if the camera moves between the shots or if objects in the scene move.
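As a rough sketch of the HDR construction step, the following merges aligned, differently exposed frames into a radiance map with a hat-shaped weighting. It assumes the frames are already registered and linearized, and does not reproduce the thesis's registration, alpha-expansion, or blending stages.

```python
# Hedged sketch: weighted merge of aligned, linearized exposures into HDR radiance.
import numpy as np

def merge_hdr(frames, exposure_times):
    """frames: list of float arrays in [0, 1]; exposure_times: seconds per frame."""
    acc = np.zeros_like(frames[0], dtype=np.float64)
    weight_sum = np.zeros_like(frames[0], dtype=np.float64)
    for img, t in zip(frames, exposure_times):
        # Hat weights: trust mid-tones, distrust near-black and clipped pixels.
        w = 1.0 - np.abs(2.0 * img - 1.0)
        acc += w * (img / t)          # per-frame radiance estimate
        weight_sum += w
    return acc / np.maximum(weight_sum, 1e-8)
```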

    Universal density: a musical representation of astronomical objects and concepts

    Universal Density is a musical composition for digital electronic media. It takes the listener on a rapid tour of the universe that starts in the faraway Southern Local Supervoid and abruptly ends at the edge of our solar system. Various astronomical structures along the way are represented by shifting arrangements of sounds. Specific sets of sounds represent each structure's size, relative orientation to Earth, density, temperature, and electromagnetic content, and the character and prominence of these sets shift to reflect the varying content of the astronomical structures. Ultimately, Universal Density is a detailed system of musical references that seeks to facilitate a richer understanding of the universe by provoking both an intellectual and an emotional reaction to information about the astronomical environments that cradle the Earth. Universal Density also includes two visual components: a graphic score and a short film. The graphic score provides visual confirmation of interactions between musical symbols; it is a critical tool for understanding the flow of information through the entire musical composition. The film presents a collection of astronomical photography from NASA that has been animated by Mandi Hart. While it is not essential to the music, the film is a particularly inspirational supplement to the musical presentation and reinforces Universal Density's ultimate goal of inspiring the audience to enrich their understanding of the space outside our solar system. Musical examples mentioned in the text externally accompany the thesis as MP3 files. The complete graphic score and the collection of astronomical photography comprising the film are included as appendices in this thesis. The film integrating Universal Density's audio and the animated photography externally accompanies the thesis as a QuickTime movie file.

    Mosaicing Tool for Aerial Imagery from a Lidar Bathymetry Survey

    Aerial imagery collected during lidar bathymetry surveying provides an independent reference dataset for ground truth. Mosaicing of aerial imagery requires some manual involvement by the operator, which is time-consuming. This paper presents an automatic mosaicing procedure that creates a continuous and visually consistent photographic map of the imaged area. This study aimed to use only the frames from the aerial camera without additional information. A comparison between the features in the resultant mosaic and a reference chart shows that the mosaic is visually consistent and there is good spatial-geometric correlation of features.
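A generic feature-based pairwise mosaicing step in the spirit of the automatic procedure described above (detect features, estimate a homography, warp one frame onto the other) might look as follows; this OpenCV sketch is illustrative and is not the paper's implementation.

```python
# Hedged sketch: pairwise feature-based stitching of two overlapping aerial frames.
import cv2
import numpy as np

def stitch_pair(img_ref, img_new):
    orb = cv2.ORB_create(4000)
    k1, d1 = orb.detectAndCompute(img_ref, None)
    k2, d2 = orb.detectAndCompute(img_new, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d2, d1), key=lambda m: m.distance)[:500]
    src = np.float32([k2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    h, w = img_ref.shape[:2]
    # Warp the new frame into the reference frame; a full mosaicking tool would
    # grow the canvas and blend the overlap rather than overwrite it.
    warped = cv2.warpPerspective(img_new, H, (w, h))
    return np.where(warped > 0, warped, img_ref)
```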

    Real-Time Computational Gigapixel Multi-Camera Systems

    Standard cameras are designed to faithfully mimic the human eye and visual system. In recent years, commercially available cameras have become more complex and offer higher image resolutions than ever before. However, the quality of conventional imaging methods is limited by several parameters, such as the pixel size, the lens system, and the diffraction limit. Rapid technological advancements, the increase in available computing power, and the introduction of Graphics Processing Units (GPUs) and Field-Programmable Gate Arrays (FPGAs) open new possibilities in the computer vision and computer graphics communities. Researchers are now focusing on utilizing the immense computational power offered by modern processing platforms to create imaging systems with novel or significantly enhanced capabilities compared to standard ones. One popular type of computational imaging system offering new possibilities is the multi-camera system. This thesis focuses on FPGA-based multi-camera systems that operate in real time. The aim of the multi-camera systems presented in this thesis is to offer wide field-of-view (FOV) video coverage at high frame rates. The wide FOV is achieved by constructing a panoramic image from the images acquired by the multi-camera system. Two new real-time computational imaging systems that provide new functionalities and better performance than conventional cameras are presented in this thesis. Each camera system's design and implementation are analyzed in detail, built, and tested in real-time conditions. Panoptic is a miniaturized, low-cost multi-camera system that reconstructs a 360-degree view in real time. Since it is an easily portable system, it provides a means to capture the complete surrounding light field in dynamic environments, such as when mounted on a vehicle or a flying drone. The second presented system, GigaEye II, is a modular high-resolution imaging system that introduces the concept of distributed image processing to real-time camera systems. This thesis explains in detail how such a concept can be used efficiently in real-time computational imaging systems. The purpose of computational imaging systems in the form of multi-camera systems does not end with real-time panoramas. The application scope of these cameras is vast: they can be used in 3D cinematography, for broadcasting live events, or for immersive telepresence. The final chapter of this thesis presents three potential applications of these systems: object detection and tracking, high dynamic range (HDR) imaging, and observation of multiple regions of interest. Object detection and tracking and observation of multiple regions of interest are extremely useful and desired capabilities of surveillance systems, in the security and defense industries, and in the fast-growing autonomous vehicle industry. High dynamic range imaging, on the other hand, is becoming a common option in consumer cameras, and the presented method allows instantaneous capture of HDR video. Finally, the thesis concludes with a discussion of real-time multi-camera systems, their advantages, their limitations, and future predictions.
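As a rough illustration of how an omnidirectional multi-camera system can reconstruct a panorama, the sketch below blends the cameras that see a given viewing direction, weighting each by the angle to its optical axis. The camera representation and the `sample_camera` helper are hypothetical and do not correspond to the Panoptic or GigaEye II implementations.

```python
# Hedged sketch: per-direction blending across a multi-camera rig.
import numpy as np

def blend_direction(direction, cameras, sample_camera, fov_deg=60.0):
    """direction: unit 3-vector; cameras: list of dicts with a unit 'axis' vector.
    sample_camera(cam, direction) is a hypothetical helper that projects the
    direction into one camera's image and returns the sampled value."""
    direction = direction / np.linalg.norm(direction)
    value, weight_sum = 0.0, 0.0
    for cam in cameras:
        cos_angle = float(np.dot(direction, cam["axis"]))
        if cos_angle < np.cos(np.radians(fov_deg / 2.0)):
            continue  # direction falls outside this camera's field of view
        w = cos_angle  # simple cosine falloff toward the edge of the FOV
        value += w * sample_camera(cam, direction)
        weight_sum += w
    return value / weight_sum if weight_sum > 0 else 0.0
```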

    SOW: Digitization and longterm preservation of weather maps at ZAMG

    The targets of this concept are: delivering a catalogue of requirements; evaluating tools; and identifying possible file formats (e.g. FITS) necessary for the digitization and long-term preservation of the historical weather maps at ZAMG (Central Institute for Meteorology and Geodynamics, Austria's national weather and geophysical service).
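Since FITS is named as a candidate preservation format, a minimal sketch of wrapping a scanned map raster into a FITS file with basic provenance metadata (using astropy) might look as follows; the file layout and the non-standard header keywords are assumptions, not part of the ZAMG concept.

```python
# Hedged sketch: store a scanned weather map as a FITS file with metadata.
import numpy as np
from astropy.io import fits

def save_map_as_fits(scan: np.ndarray, out_path: str, scan_date: str, dpi: int):
    hdu = fits.PrimaryHDU(data=scan.astype(np.uint16))
    # Provenance keywords (FITS keyword names are limited to 8 characters).
    hdu.header["DATE-OBS"] = (scan_date, "date of the original weather map")
    hdu.header["SCANDPI"] = (dpi, "scan resolution in dots per inch")
    hdu.header["ORIGIN"] = ("ZAMG", "holding institution")
    hdu.writeto(out_path, overwrite=True)
```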

    Perception and Mitigation of Artifacts in a Flat Panel Tiled Display System

    Flat panel displays continue to dominate the display market. Larger, higher-resolution flat panel displays are now in demand for scientific, business, and entertainment purposes. Manufacturing such large displays is currently difficult and expensive. Alternatively, larger displays can be constructed by tiling smaller flat panel displays. While this approach may prove to be more cost-effective, appropriate measures must be taken to achieve visual seamlessness and uniformity. In this project we conducted a set of experiments to study the perception and mitigation of image artifacts in tiled display systems. In the first experiment we used a prototype tiled display to investigate its current viability and to understand what critical perceptible visual artifacts exist in such a system. Based on word frequencies in the survey responses, the most disruptive perceived artifacts were ranked. On the basis of these findings, we conducted a second experiment to test the effectiveness of image processing algorithms designed to mitigate some of the most distracting artifacts without changing the physical properties of the display system. Still images were processed using several algorithms and evaluated by observers using magnitude scaling. Participants in the experiment noticed a statistically significant improvement in image quality from one of the two algorithms. Similar testing should be conducted to evaluate the effectiveness of the algorithms on video content. While much work still needs to be done, the contributions of this project should enable the development of an image processing pipeline that mitigates perceived artifacts in flat panel tiled display systems and provide the groundwork for extending such a pipeline to real-time applications.
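One generic example of the kind of processing such a pipeline could apply is per-tile luminance equalization, sketched below under the assumption of a regular tile grid; it is not one of the specific algorithms evaluated in the experiments.

```python
# Hedged sketch: equalize mean intensity across the tiles of a tiled display frame.
import numpy as np

def equalize_tiles(frame, rows, cols):
    """frame: float image in [0, 1]; the display is a rows x cols tile grid."""
    h, w = frame.shape[:2]
    th, tw = h // rows, w // cols
    tiles = [frame[r*th:(r+1)*th, c*tw:(c+1)*tw]
             for r in range(rows) for c in range(cols)]
    target = np.median([t.mean() for t in tiles])  # common intensity target
    out = frame.copy()
    for idx, t in enumerate(tiles):
        r, c = divmod(idx, cols)
        gain = target / max(t.mean(), 1e-6)  # per-tile gain toward the target
        out[r*th:(r+1)*th, c*tw:(c+1)*tw] = np.clip(t * gain, 0.0, 1.0)
    return out
```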

    Digital Stack Photography and Its Applications

    This work centers on digital stack photography and its applications. A stack of images refers, in a broad sense, to an ensemble of associated images taken with variation in one or more parameters of the system configuration or setting. An image stack captures and contains potentially more information than any of its constituent images. Digital stack photography (DST) techniques exploit this rich information to render a synthesized image that oversteps the limitations of a digital camera's capabilities. This work considers in particular two basic DST problems, both previously challenging, and their applications. One is high-dynamic-range (HDR) imaging of non-stationary dynamic scenes, in which the stacked images vary in exposure conditions. The other is large-scale panorama composition from multiple images; in this case, the image components are related to each other by the spatial relation among the subdomains of the same scene that they jointly cover and capture. We consider the non-conventional, practical, and challenging situations where the spatial overlap among the sub-images is sparse (S), irregular in geometry and imprecise with respect to the designed geometry (I), and the captured data over the overlap zones are noisy (N) or lacking in features. We refer to these conditions simply as the S.I.N. conditions. The two problems share common challenges; for example, both face the dominant problem of image alignment for seamless and artifact-free image composition. Our solutions to these common problems are manifested differently in each particular problem, as a result of adaptation to the specific properties of each type of image ensemble. For the exposure stack, existing alignment approaches struggle to overcome three main challenges: inconsistency in brightness, large displacement in dynamic scenes, and pixel saturation. We exploit solutions in the following three aspects. First, we introduce a model that addresses and admits changes in both geometric configuration and optical conditions while following the traditional optical flow description; previous models treated these two types of changes one or the other, with mutual exclusion. Next, we extend the pixel-based optical flow model to a patch-based model. The advantages are two-fold: a patch has texture and local content that individual pixels fail to present, and it also offers opportunities for faster processing, such as two-scale or multi-scale processing. The extended model is then solved efficiently with an EM-like algorithm, which is reliable in the presence of large displacement. Third, we present a generative model for reducing or eliminating the typical artifacts that arise as a side effect of inadequate alignment of clipped pixels; a patch-based texture synthesis is combined with the patch-based alignment to achieve an artifact-free result. For large-scale panorama composition under the S.I.N. conditions, we have developed an effective solution scheme that significantly reduces both processing time and artifacts. Previously existing approaches can be roughly categorized as either geometry-based or feature-based composition. In the former approach, one relies on precise knowledge of the system geometry, by design and/or calibration. It works well with a far-away scene, in which case there is only limited variation in projective geometry among the sub-images; however, the system geometry is not invariant to physical conditions such as thermal and stress variation. Composition with this approach is typically done in the spatial domain. The other approach is more robust to geometric and optical conditions. It works surprisingly well with feature-rich and stationary scenes, but not in the absence of recognizable features. Composition based on feature matching is typically done in the spatial gradient domain. In short, both approaches are challenged by the S.I.N. conditions: with certain snapshot data sets obtained and contributed by Brady et al., these methods either fail in composition or render images with visually disturbing artifacts. To overcome the S.I.N. conditions, we have reconciled these two approaches, making successful and complementary use of both the a priori, approximate information about the geometric system configuration and the feature information from the image data. We also designed and developed a software architecture with careful extraction of primitive function modules that can be efficiently implemented and executed in parallel. In addition to a much faster processing speed, the resulting images are clearer and sharper at the overlapping zones, without typical ghosting artifacts.
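A minimal sketch of the reconciliation described above, assuming an approximate homography derived from the system geometry is available as a prior that feature matching then refines within the predicted overlap, might look as follows; names and thresholds are illustrative, not the dissertation's code.

```python
# Hedged sketch: refine a geometry-derived alignment prior with feature matches
# restricted to the predicted overlap zone, falling back to the prior if needed.
import cv2
import numpy as np

def refine_alignment(img_ref, img_new, H_prior, overlap_margin=50):
    # Predict where img_new lands in the reference frame using the prior.
    h, w = img_new.shape[:2]
    corners = cv2.perspectiveTransform(
        np.float32([[0, 0], [w, 0], [w, h], [0, h]]).reshape(-1, 1, 2), H_prior)
    x0, y0 = corners.reshape(-1, 2).min(axis=0) - overlap_margin
    x1, y1 = corners.reshape(-1, 2).max(axis=0) + overlap_margin
    # Detect features only inside the predicted overlap region of the reference.
    mask = np.zeros(img_ref.shape[:2], dtype=np.uint8)
    mask[max(int(y0), 0):int(y1), max(int(x0), 0):int(x1)] = 255
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(img_ref, mask)
    k2, d2 = orb.detectAndCompute(img_new, None)
    if d1 is None or d2 is None:
        return H_prior  # too few features: keep the geometric prior
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d2, d1)
    if len(matches) < 8:
        return H_prior
    src = np.float32([k2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H if H is not None else H_prior
```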