2,419 research outputs found

    Computer-aided display control Final report

    Human composition and modification of computer-driven cathode ray tube displays.

    Marine Isotope Stage (MIS) 5 on the Umnak Plateau, Bering Sea (IODP Site U1339): diatom taxonomy, grain size and isotopic composition of marine sediments as proxies for primary productivity and sea ice extent

    The current rapid reduction of sea ice in the Arctic has motivated numerous studies of how sea ice declines during times of climate warming and how this affects marine ecosystems. Marine Isotope Stage (MIS) 5, the last interglacial prior to the Holocene, is characterized as having higher summer air temperatures and sea level compared to today; however, there is a scarcity of data on how sea ice extent and ecosystems changed during MIS 5. The Umnak Plateau is not currently covered by sea ice due to the influence of the warm Alaskan Coastal Current entering through the eastern Aleutian Island passes; however, low-resolution studies from the Last Glacial Maximum (LGM) demonstrate that sea ice extended to the Umnak Plateau when sea level dropped and restricted flow of the Alaskan Coastal Current over the Umnak Plateau. This study uses a multi-proxy approach consisting of grain size, diatom assemblages, and isotopic analyses to determine how environmental conditions changed at the Umnak Plateau (IODP Site U1339) during MIS 5 as well as the end of MIS 6 and the beginning of MIS 4 (146–65 ka), both of which are glacial periods. The research presented in this thesis reveals that the glacials MIS 6 and MIS 4 are both characterized by decreased primary productivity combined with increased nutrient utilization and increased terrestrial organic matter deposition, suggesting there may have been an extensive sea ice cover at the Umnak Plateau and a limited influence of the Alaskan Coastal Current. In contrast, MIS 5 is characterized by higher primary productivity combined with decreased nutrient utilization and decreased sea ice extent.
MIS 5e, the warm substage of MIS 5 that has been correlated with the Eemian Interglacial from terrestrial records, shows that decreased productivity at the Umnak Plateau may be related to intensified stratification associated with increased warming of the surface waters that resulted from increased insolation and a prolonged summer season. Comparing the stable nitrogen isotope (δ15N) record with other sites in the North Pacific reveals noteworthy similarities between δ15N patterns during the warm substages of MIS 5 from the Umnak Plateau and from the Gulf of Alaska, the origin of the Alaskan Coastal Current. Thus, the δ15N record from the Umnak Plateau may reflect a change in the source of nitrate over the Umnak Plateau, from a western Bering Sea source during the cold substages of MIS 5 to the Alaskan Coastal Current during parts of the warm substages.
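As a rough illustration of the cross-site comparison described above, the sketch below resamples two downcore δ15N records onto a common age grid and computes their Pearson correlation. All ages, values, and the 1 ka resampling step are invented for illustration; they are not data from the thesis.

```python
import numpy as np

def correlate_records(age_a, vals_a, age_b, vals_b, step=1.0):
    """Resample two downcore records onto a common age grid (in ka)
    and return their Pearson correlation coefficient."""
    lo = max(age_a.min(), age_b.min())
    hi = min(age_a.max(), age_b.max())
    grid = np.arange(lo, hi + step, step)   # overlapping age axis
    a = np.interp(grid, age_a, vals_a)      # linear interpolation
    b = np.interp(grid, age_b, vals_b)
    return np.corrcoef(a, b)[0, 1]

# Hypothetical d15N records (ages in ka, values in permil);
# these numbers are invented, not measurements.
age_u  = np.array([70.0, 80.0, 90.0, 100.0, 110.0, 120.0])
d15n_u = np.array([6.1, 5.8, 5.2, 5.5, 6.0, 5.1])
age_g  = np.array([65.0, 75.0, 85.0, 95.0, 105.0, 115.0, 125.0])
d15n_g = np.array([6.0, 5.9, 5.3, 5.4, 6.1, 5.2, 5.0])

r = correlate_records(age_u, d15n_u, age_g, d15n_g)
```

In practice such comparisons also require aligning the age models of the two sites before any correlation is meaningful.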

    AirConstellations: In-Air Device Formations for Cross-Device Interaction via Multiple Spatially-Aware Armatures

    AirConstellations supports a unique semi-fixed style of cross-device interactions via multiple self-spatially-aware armatures to which users can easily attach (or detach) tablets and other devices. In particular, AirConstellations affords highly flexible and dynamic device formations where users can bring multiple devices together in-air - with 2-5 armatures poseable in 7DoF within the same workspace - to suit the demands of their current task, social situation, app scenario, or mobility needs. This affords an interaction metaphor where relative orientation, proximity, attaching (or detaching) devices, and continuous movement into and out of ad-hoc ensembles can drive context-sensitive interactions. Yet all devices remain self-stable in useful configurations even when released in mid-air. We explore flexible physical arrangement, feedforward of transition options, and layering of devices in-air across a variety of multi-device app scenarios. These include video conferencing with flexible arrangement of the person-space of multiple remote participants around a shared task-space, layered and tiled device formations with overview+detail and shared-to-personal transitions, and flexible composition of UI panels and tool palettes across devices for productivity applications. A preliminary interview study highlights user reactions to AirConstellations, such as its support for minimally disruptive device formations, easier physical transitions, and balancing "seeing and being seen" in remote work.
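As a rough sketch of how the proximity-driven ad-hoc ensembles described above might be detected, the code below clusters devices whose pairwise distance falls under a threshold. The positions, device names, and 0.35 m threshold are invented assumptions; the actual system tracks full 7DoF armature poses (position and orientation), not just positions.

```python
import math
from dataclasses import dataclass

@dataclass
class DevicePose:
    """Position of one armature-mounted device in the workspace (metres)."""
    name: str
    x: float
    y: float
    z: float

def distance(a, b):
    return math.sqrt((a.x - b.x) ** 2 + (a.y - b.y) ** 2 + (a.z - b.z) ** 2)

def group_ensembles(devices, threshold=0.35):
    """Cluster devices whose pairwise distance is under the threshold
    into ad-hoc ensembles, using a small union-find over device names."""
    parent = {d.name: d.name for d in devices}

    def find(n):
        while parent[n] != n:
            parent[n] = parent[parent[n]]  # path halving
            n = parent[n]
        return n

    for i, a in enumerate(devices):
        for b in devices[i + 1:]:
            if distance(a, b) < threshold:
                parent[find(a.name)] = find(b.name)

    groups = {}
    for d in devices:
        groups.setdefault(find(d.name), []).append(d.name)
    return list(groups.values())

# Two tablets brought close together form one ensemble; the far
# display stays on its own.
devices = [
    DevicePose("tabletA", 0.0, 0.0, 0.0),
    DevicePose("tabletB", 0.2, 0.1, 0.0),
    DevicePose("display", 1.5, 0.0, 0.3),
]
ensembles = group_ensembles(devices)
```

A real implementation would also consider relative orientation and hysteresis so formations do not flicker as devices move near the threshold.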

    Widening Viewing Angles of Automultiscopic Displays using Refractive Inserts


    Efficient image-based rendering

    Recent advancements in real-time ray tracing and deep learning have significantly enhanced the realism of computer-generated images. However, conventional 3D computer graphics (CG) can still be time-consuming and resource-intensive, particularly when creating photo-realistic simulations of complex or animated scenes. Image-based rendering (IBR) has emerged as an alternative approach that utilizes pre-captured images from the real world to generate realistic images in real time, eliminating the need for extensive modeling. Although IBR has its advantages, it faces challenges in providing the same level of control over scene attributes as traditional CG pipelines and in accurately reproducing complex scenes and objects with different materials, such as transparent objects. This thesis endeavors to address these issues by harnessing the power of deep learning and incorporating the fundamental principles of graphics and physically-based rendering. It offers an efficient solution that enables interactive manipulation of real-world dynamic scenes captured from sparse views, lighting positions, and times, as well as a physically-based approach that facilitates accurate reproduction of the view-dependent effects resulting from the interaction between transparent objects and their surrounding environment. Additionally, this thesis develops a visibility metric that can identify artifacts in reconstructed IBR images without observing the reference image, thereby contributing to the design of an effective IBR acquisition pipeline. Lastly, a perception-driven rendering technique is developed to provide high-fidelity visual content in virtual reality displays while retaining computational efficiency.
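A classical ingredient behind IBR that this line of work builds on is synthesizing a novel view by blending nearby captured views. The sketch below weights the k closest captured views by inverse angular distance, a simplified stand-in in the spirit of unstructured lumigraph rendering, not the thesis's actual learned method; all arrays are toy data.

```python
import numpy as np

def blend_views(view_dirs, images, query_dir, k=2):
    """Blend the k captured views whose viewing directions are closest
    to the query direction, weighted by inverse angular distance."""
    q = query_dir / np.linalg.norm(query_dir)
    dirs = view_dirs / np.linalg.norm(view_dirs, axis=1, keepdims=True)
    ang = np.arccos(np.clip(dirs @ q, -1.0, 1.0))   # angular distances
    idx = np.argsort(ang)[:k]                        # k nearest views
    w = 1.0 / (ang[idx] + 1e-6)                      # inverse-distance weights
    w /= w.sum()
    return np.tensordot(w, images[idx], axes=1)      # weighted image blend

# Toy capture: three 2x2 "images" taken from three directions.
view_dirs = np.array([[0.0, 0.0, 1.0],
                      [1.0, 0.0, 1.0],
                      [-1.0, 0.0, 1.0]])
images = np.stack([np.zeros((2, 2)),
                   np.ones((2, 2)),
                   np.full((2, 2), 2.0)])

# Querying straight along the first view's direction should return
# an image dominated by that view.
novel = blend_views(view_dirs, images, np.array([0.0, 0.0, 1.0]))
```

Real IBR blending additionally accounts for scene geometry (depth proxies) and occlusion, which this direction-only sketch ignores.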

    A Wearable communications device

    The purpose of this thesis is to develop a concept for a wearable communications device. Proposed for the market ten years in the future, this device will integrate today's multiple communications devices and be capable of wireless global communications.

    Light field image processing: an overview

    Light field imaging has emerged as a technology that captures richer visual information from our world. As opposed to traditional photography, which captures a 2D projection of the light in the scene by integrating over the angular domain, light fields collect radiance from rays in all directions, demultiplexing the angular information lost in conventional photography. On the one hand, this higher-dimensional representation of visual data offers powerful capabilities for scene understanding, and substantially improves the performance of traditional computer vision problems such as depth sensing, post-capture refocusing, segmentation, video stabilization, material classification, etc. On the other hand, the high dimensionality of light fields also brings up new challenges in terms of data capture, data compression, content editing, and display. Taking these two elements together, research in light field image processing has become increasingly popular in the computer vision, computer graphics, and signal processing communities. In this paper, we present a comprehensive overview and discussion of research in this field over the past 20 years. We focus on all aspects of light field image processing, including basic light field representation and theory, acquisition, super-resolution, depth estimation, compression, editing, processing algorithms for light field display, and computer vision applications of light field data.
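Among the processing algorithms surveyed above, post-capture refocusing has a particularly compact classical form: shift each sub-aperture view in proportion to its (u, v) offset from the central view, then average (shift-and-sum). The sketch below uses integer pixel shifts for simplicity; the array shapes and the slope parameter are illustrative assumptions.

```python
import numpy as np

def refocus(lf, slope):
    """Synthetic refocusing by shift-and-sum.  `lf` has shape
    (U, V, H, W): a grid of U x V sub-aperture views of size H x W.
    Each view is shifted in proportion to its offset from the central
    view; `slope` selects the focal plane (0.0 keeps the capture focus)."""
    U, V, H, W = lf.shape
    cu, cv = (U - 1) / 2, (V - 1) / 2
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            du = int(round(slope * (u - cu)))   # vertical shift in pixels
            dv = int(round(slope * (v - cv)))   # horizontal shift in pixels
            out += np.roll(lf[u, v], shift=(du, dv), axis=(0, 1))
    return out / (U * V)

# A flat, featureless light field refocuses to the same flat image.
lf = np.ones((3, 3, 4, 4))
flat = refocus(lf, 1.0)
```

Production implementations use sub-pixel (interpolated or Fourier-domain) shifts rather than `np.roll`, which wraps pixels around the image border.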

    Analyzing interfaces and workflows for light field editing

    With the increasing number of available consumer light field cameras, such as Lytro, Raytrix, or Pelican Imaging, this new form of photography is progressively becoming more common. However, there are still very few tools for light field editing, and the interfaces to create those edits remain largely unexplored. Given the extended dimensionality of light field data, it is not clear what the most intuitive interfaces and optimal workflows are, in contrast with well-studied two-dimensional (2-D) image manipulation software. In this work, we provide a detailed description of subjects' performance and preferences for a number of simple editing tasks, which form the basis for more complex operations. We perform a detailed state sequence analysis and hidden Markov chain analysis based on the sequence of tools and interaction paradigms users employ while editing light fields. These insights can aid researchers and designers in creating new light field editing tools and interfaces, thus helping to close the gap between 4-D and 2-D image editing.
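The kind of state-sequence analysis described above starts from per-session sequences of tool invocations; a typical first step is estimating a first-order transition matrix between tools. The tool names and sessions below are invented for illustration, and this is only a minimal sketch of the statistics involved, not the paper's full Markov analysis.

```python
from collections import Counter

def transition_matrix(sequences):
    """Estimate first-order transition probabilities between editing
    tools from observed tool-use sequences.  Returns a dict mapping
    (tool_a, tool_b) to P(next = tool_b | current = tool_a)."""
    counts = Counter()   # bigram counts
    totals = Counter()   # how often each tool is followed by anything
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[(a, b)] += 1
            totals[a] += 1
    return {(a, b): c / totals[a] for (a, b), c in counts.items()}

# Hypothetical editing sessions (sequences of tool invocations).
sessions = [
    ["select", "depth", "paint", "paint", "undo"],
    ["select", "paint", "depth", "paint"],
]
probs = transition_matrix(sessions)
```

From such a matrix one can inspect which tool pairings dominate a workflow; a hidden Markov model additionally infers latent editing "phases" behind the observed tool sequence.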