40 research outputs found

    Analysing observer preferences when presenting a product in a rendered scene: 2D vs. autostereoscopic 3D displays

    This research compares how the image of a product within a rendered scene is rated when shown on an autostereoscopic 3D display versus a 2D display. The purpose is to understand observers' preferences and to determine the features a composition should have to highlight the product and make its presentation more attractive, thereby helping designers and advertisers who use either display prepare more effective images for visually presenting a product. The results show that observers like the images slightly more on autostereoscopic 3D displays than on 2D displays. On both displays the product is perceived more quickly when it is larger than the other elements and shown with greater chromatic contrast, but a composition is judged more attractive when the chromatic relationship between all the elements is more harmonious.

    Stereoscopic 3D user interfaces: exploring the potentials and risks of 3D displays in cars

    During recent years, rapid advancements in stereoscopic digital display technology have led to the acceptance of high-quality 3D in the entertainment sector and even created enthusiasm for the technology. The advent of autostereoscopic (i.e., glasses-free) 3D displays allows the technology to be introduced into other application domains, including but not limited to mobile devices, public displays, and automotive user interfaces, the latter of which is the focus of this work. Prior research demonstrates that 3D improves the visualization of complex structures and augments virtual environments. We envision its use to enhance the in-car user interface by structuring the presented information via depth: content that requires attention can be shown close to the user, and distances, for example to other traffic participants, gain a direct mapping in 3D space.
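
    As a rough illustration of the depth-structuring idea described in this abstract, the sketch below maps a notional urgency value to a virtual depth and then to on-screen parallax using the standard parallel-camera relation, so that attention-critical content pops out towards the driver. This is a hypothetical sketch, not code from the work itself; the function names, eye separation, and viewing distance are illustrative assumptions.

```python
# Hypothetical sketch: map content urgency to stereoscopic depth for an
# in-car 3D display, so urgent content appears closer to the driver.
# Uses the simple parallel-camera parallax relation p = e * (1 - D / z).

EYE_SEPARATION_M = 0.065      # assumed average interocular distance
VIEWING_DISTANCE_M = 0.75     # assumed driver-to-display distance

def parallax_for_depth(virtual_depth_m: float) -> float:
    """Screen parallax (metres) for content placed virtual_depth_m behind
    (positive) or in front of (negative) the screen plane."""
    z = VIEWING_DISTANCE_M + virtual_depth_m   # distance from the viewer
    return EYE_SEPARATION_M * (1.0 - VIEWING_DISTANCE_M / z)

def depth_for_urgency(urgency: float, max_offset_m: float = 0.15) -> float:
    """Map urgency in [0, 1] to a virtual depth: urgent content is pushed
    in front of the screen (negative), calm content recedes behind it."""
    return (0.5 - urgency) * 2.0 * max_offset_m

if __name__ == "__main__":
    for urgency in (0.1, 0.5, 0.9):
        depth = depth_for_urgency(urgency)
        print(f"urgency={urgency:.1f} -> depth={depth:+.3f} m, "
              f"parallax={parallax_for_depth(depth) * 1000:+.2f} mm")
```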

    Quality of Experience in Immersive Video Technologies

    Over the last decades, several technological revolutions have impacted the television industry, such as the shifts from black & white to color and from standard to high definition. Nevertheless, considerable improvements can still be achieved to provide a better multimedia experience, for example with ultra-high definition, high dynamic range & wide color gamut, or 3D. These so-called immersive technologies aim at providing better, more realistic, and emotionally stronger experiences. To measure quality of experience (QoE), subjective evaluation is the ultimate means since it relies on a pool of human subjects. However, reliable and meaningful results can only be obtained if experiments are properly designed and conducted following a strict methodology. In this thesis, we build a rigorous framework for subjective evaluation of new types of image and video content. We propose different procedures and analysis tools for measuring QoE in immersive technologies. As immersive technologies capture more information than conventional technologies, they have the ability to provide more details, enhanced depth perception, as well as better color, contrast, and brightness. To measure the impact of immersive technologies on the viewers' QoE, we apply the proposed framework for designing experiments and analyzing collected subjects' ratings. We also analyze eye movements to study human visual attention during immersive content playback. Since immersive content carries more information than conventional content, efficient compression algorithms are needed for storage and transmission using existing infrastructures. To determine the bandwidth required for high-quality transmission of immersive content, we use the proposed framework to conduct meticulous evaluations of recent image and video codecs in the context of immersive technologies. Subjective evaluation is time consuming, expensive, and not always feasible. Consequently, researchers have developed objective metrics to automatically predict quality. To measure the performance of objective metrics in assessing immersive content quality, we perform several in-depth benchmarks of state-of-the-art and commonly used objective metrics. For this aim, we use ground-truth quality scores collected under our subjective evaluation framework. To improve QoE, we propose different systems for stereoscopic and autostereoscopic 3D displays in particular. The proposed systems can help reduce the artifacts generated at the visualization stage, which impact picture quality, depth quality, and visual comfort. To demonstrate the effectiveness of these systems, we use the proposed framework to measure viewers' preference between these systems and standard 2D & 3D modes. In summary, this thesis tackles the problems of measuring, predicting, and improving QoE in immersive technologies. To address these problems, we build a rigorous framework and apply it through several in-depth investigations. We put essential concepts of multimedia QoE under this framework. These concepts are not only of fundamental nature, but have also shown their impact in very practical applications. In particular, the JPEG, MPEG, and VCEG standardization bodies have adopted these concepts to select technologies proposed for standardization and to validate the resulting standards in terms of compression efficiency.
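
    One concrete step in the metric benchmarks mentioned above is correlating objective scores with the subjective mean opinion scores (MOS) that serve as ground truth. The snippet below is a minimal, hedged sketch of that step with made-up numbers; it is not taken from the thesis.

```python
# Illustrative benchmark step: correlate an objective metric's predictions
# with subjective mean opinion scores (MOS). All numbers are made up.
from scipy.stats import pearsonr, spearmanr

mos = [4.6, 4.1, 3.4, 2.8, 2.1, 1.5]                  # subjective ground truth (1-5)
metric_scores = [0.97, 0.93, 0.85, 0.74, 0.61, 0.42]  # objective predictions

plcc, _ = pearsonr(metric_scores, mos)    # prediction accuracy (linearity)
srocc, _ = spearmanr(metric_scores, mos)  # prediction monotonicity (rank order)

print(f"PLCC = {plcc:.3f}, SROCC = {srocc:.3f}")
```

    In common practice a nonlinear (e.g., logistic) regression between metric scores and MOS is fitted before computing the Pearson correlation; it is omitted here to keep the sketch short.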

    Extending mobile touchscreen interaction

    Touchscreens have become the de facto interface for mobile devices and are penetrating further beyond their core application domain of smartphones. This work presents a design space for extending touchscreen interaction, to which new solutions may be mapped. Specific touchscreen enhancements in the domains of manual input, visual output, and haptic feedback are explored, and quantitative and experiential findings are reported. Particular areas covered are unintentional interaction, screen locking, stereoscopic displays, and picoprojection. In addition, the novel interaction approaches of finger identification and onscreen physical guides are explored. The use of touchscreens in car dashboards and smart handbags is evaluated as domain-specific use cases. This work draws together solutions from the broad area of mobile touchscreen interaction, identifies fruitful directions for future research, and provides information for future researchers addressing those topics.

    Exploration of smart infrastructure for drivers of autonomous vehicles

    The connection between vehicles and infrastructure is an integral part of providing autonomous vehicles with information about their environment. Autonomous vehicles need to be safe, and users need to trust their driving decisions. When smart infrastructure information is integrated into the vehicle, the driver needs to be informed in an understandable manner about what the smart infrastructure has detected. Nevertheless, interactions that benefit from smart infrastructure have not been the focus of research, leading to knowledge gaps in the integration of smart infrastructure information in the vehicle. For example, it is unclear how information from two complex systems can be presented and, when decisions are made, how they can be explained. Enriching vehicle data with information from the infrastructure opens unexplored opportunities. Smart infrastructure provides vehicles with information to predict traffic flow and traffic events. Additionally, it has information about traffic events several kilometers away and thus enables a look ahead at traffic situations that are not in the driver's immediate view. We argue that this smart infrastructure information can be used to enhance the driving experience. To achieve this, we explore designing novel interactions, providing warnings and visualizations about information that is out of the driver's view, and offering explanations for the cause of changed driving behavior of the vehicle. This thesis explores the possibilities of smart infrastructure information, with a focus on the highway. The first part establishes a design space for 3D in-car augmented reality applications that profit from smart infrastructure information. Through the input of two focus groups and a literature review, use cases are investigated that can be introduced into the vehicle's interaction interface and that, among others, rely on environment information. From those, a design space that can be used to design novel in-car applications is derived. The second part explores out-of-view visualizations before and during takeover requests to increase situation awareness. In three studies, different visualizations for out-of-view information are implemented in 2D, stereoscopic 3D, and augmented reality. Our results show that these visualizations improve situation awareness of critical events at larger distances during takeover situations. In the third part, explanations are designed for situations in which the vehicle drives unexpectedly for unknown reasons. Since smart infrastructure could provide connected vehicles with out-of-view or cloud information, the vehicle's driving maneuver might remain unclear to the driver. Therefore, we explore the needs of drivers in those situations and derive design recommendations for an interface that displays the cause of the unexpected driving behavior. This thesis answers questions about the integration of environment information in vehicles. Three important aspects are explored that are essential to consider when implementing use cases with smart infrastructure in mind: it enables the design of novel interactions, provides insights into how out-of-view visualizations can improve drivers' situation awareness, and explores unexpected driving situations and the design of explanations for them.
    Overall, we have shown how infrastructure and connected-vehicle information can be introduced into the vehicle's user interface and how new technology such as augmented reality glasses can be used to improve the driver's perception of the environment.
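
    As a small illustration of the out-of-view condition this abstract builds on, the hypothetical sketch below checks whether an infrastructure-reported event lies outside the driver's field of view or beyond a visibility range, which is when such visualizations become relevant. Coordinates, the field-of-view angle, and the visibility threshold are illustrative assumptions, not values from the thesis.

```python
# Hypothetical sketch: decide whether an infrastructure-reported event is
# outside the driver's field of view and therefore worth visualizing.
# Coordinates, field-of-view angle, and thresholds are illustrative only.
import math

def is_out_of_view(ego_xy, ego_heading_deg, event_xy,
                   fov_deg=120.0, max_visible_m=250.0) -> bool:
    """True if the event is outside the horizontal field of view or farther
    away than the assumed visibility range."""
    dx = event_xy[0] - ego_xy[0]
    dy = event_xy[1] - ego_xy[1]
    distance = math.hypot(dx, dy)
    if distance > max_visible_m:
        return True
    bearing = math.degrees(math.atan2(dy, dx))
    # Smallest signed angle between driving direction and event bearing
    off_axis = (bearing - ego_heading_deg + 180.0) % 360.0 - 180.0
    return abs(off_axis) > fov_deg / 2.0

# Example: congestion reported 2 km ahead is beyond the visibility range,
# so it qualifies for an out-of-view visualization.
print(is_out_of_view(ego_xy=(0.0, 0.0), ego_heading_deg=90.0,
                     event_xy=(0.0, 2000.0)))   # True
```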

    Real-time GPU-accelerated Out-of-Core Rendering and Light-field Display Visualization for Improved Massive Volume Understanding

    Nowadays, huge digital models are becoming increasingly available for a number of different applications, ranging from CAD and industrial design to medicine and the natural sciences. In the field of medicine in particular, data acquisition devices such as MRI or CT scanners routinely produce huge volumetric datasets. These datasets can easily reach dimensions of 1024^3 voxels, and larger datasets are not uncommon. This thesis focuses on efficient methods for the interactive exploration of such large volumes using direct volume visualization techniques on commodity platforms. To reach this goal, specialized multi-resolution structures and algorithms that are able to directly render volumes of potentially unlimited size are introduced. The developed techniques are output sensitive: their rendering costs depend only on the complexity of the generated images and not on the complexity of the input datasets. The advanced characteristics of modern GPGPU architectures are exploited and combined with an out-of-core framework in order to provide a more flexible, scalable, and efficient implementation of these algorithms and data structures on single GPUs and GPU clusters. To improve visual perception and understanding, the use of novel 3D display technology based on a light-field approach is introduced. This kind of device allows multiple naked-eye users to perceive virtual objects floating inside the display workspace, exploiting stereo and horizontal parallax. A set of specialized, interactive illustrative techniques capable of providing different contextual information in different areas of the display is reported, as well as an out-of-core CUDA-based ray-casting engine with a number of improvements over current GPU volume ray-casters. The possibilities of the system are demonstrated by the multi-user interactive exploration of 64-GVoxel datasets on a 35-MPixel light-field display driven by a cluster of PCs.
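
    The output-sensitive behaviour described in this abstract typically comes from selecting, per frame, only those multi-resolution bricks whose voxels project small enough on screen. The sketch below is a minimal illustration of such a selection over an assumed octree of bricks; the data layout and error estimate are assumptions and not the thesis' actual rendering engine.

```python
# Illustrative sketch of output-sensitive level-of-detail selection for
# out-of-core volume rendering: descend an octree of bricks and keep a node
# only when its projected voxel footprint is small enough on screen.
# The octree layout and error model are assumptions, not the thesis' code.
from dataclasses import dataclass, field
from typing import List
import math

@dataclass
class BrickNode:
    center: tuple          # world-space centre of the brick (x, y, z)
    size: float            # world-space edge length of the brick
    voxel_spacing: float   # world-space size of one voxel at this level
    children: List["BrickNode"] = field(default_factory=list)

def projected_voxel_size_px(node, camera_pos, fov_y_rad, viewport_h_px):
    """Approximate size of one voxel of this brick in screen pixels."""
    distance = max(1e-6, math.dist(node.center, camera_pos) - node.size / 2)
    pixels_per_world_unit = viewport_h_px / (2 * distance * math.tan(fov_y_rad / 2))
    return node.voxel_spacing * pixels_per_world_unit

def select_bricks(node, camera_pos, fov_y_rad, viewport_h_px,
                  tolerance_px=1.0, selection=None):
    """Collect the coarsest bricks whose voxels project below tolerance_px."""
    if selection is None:
        selection = []
    small_enough = projected_voxel_size_px(
        node, camera_pos, fov_y_rad, viewport_h_px) <= tolerance_px
    if small_enough or not node.children:
        selection.append(node)      # render this brick (fetch it if not cached)
    else:
        for child in node.children:
            select_bricks(child, camera_pos, fov_y_rad, viewport_h_px,
                          tolerance_px, selection)
    return selection

# Minimal usage with a single-node "octree"
root = BrickNode(center=(0, 0, 0), size=100.0, voxel_spacing=100.0 / 32)
print(len(select_bricks(root, camera_pos=(0, 0, 300),
                        fov_y_rad=math.radians(60), viewport_h_px=1080)))
```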