
    Designing Hybrid Interactions through an Understanding of the Affordances of Physical and Digital Technologies

    Two recent technological advances have extended the diversity of domains and social contexts of Human-Computer Interaction: the embedding of computing capabilities into physical hand-held objects, and the emergence of large interactive surfaces, such as tabletops and wall boards. Both interactive surfaces and small computational devices usually allow for direct and space-multiplex input, i.e., for the spatial coincidence of physical action and digital output at multiple points simultaneously. Such a powerful combination opens novel opportunities for the design of what this work terms hybrid interactions. This thesis explores the affordances of physical interaction as resources for the interface design of such hybrid interactions. The hybrid systems elaborated in this work are envisioned to support specific social and physical contexts, such as collaborative cooking in a domestic kitchen or collaborative creativity in a design process. In particular, different aspects of physicality characteristic of those specific domains are explored, with the aim of promoting skill transfer across domains. First, different approaches to the design of space-multiplex, function-specific interfaces are considered and investigated. These design approaches build on related work on Graspable User Interfaces and extend the design space to direct-touch interfaces such as touch-sensitive surfaces in different sizes and orientations (i.e., tablets, interactive tabletops, and walls). The approaches are instantiated in several experience prototypes, which are evaluated in different settings to assess the contextual implications of integrating aspects of physicality into the design of the interface. Such implications are observed both at the pragmatic level of interaction (i.e., patterns of users' behaviors on first contact with the interface) and at the level of users' subjective responses. The results indicate that the context of interaction affects the perception of the affordances of the system, and that some qualities of physicality, such as the 3D space of manipulation and the related haptic feedback, can affect the feeling of engagement and control. Building on these findings, two controlled studies are conducted to observe more systematically the implications of integrating some of the qualities of physical interaction into the design of hybrid interactions. The results indicate that, despite the fact that several aspects of physical interaction are mimicked in the interface, the interaction with digital media is quite different and seems to reveal existing mental models and expectations resulting from previous experience with the WIMP paradigm on the desktop PC.

    State-of-the-art 3D technologies and MVV end-to-end system design

    The object of this thesis is the analysis and review of all 3D technologies, existing and under development, for domestic environments, taking multiview video (MVV) technologies as the point of reference. All sections of the chain, from capture to reproduction, are analyzed. The aim is to design a possible satellite architecture for a future MVV television system under two possible scenarios, broadcast or interactive. The analysis covers technical considerations as well as commercial limitations.

    Holographic reality: enhancing the artificial reality experience through interactive 3D holography

    Holography was made known by several science-fiction productions, yet the technology dates back to the 1940s. Despite the considerable age of this discovery, it remains inaccessible to the average consumer. The main goal of this manuscript is to advance the state of the art in interactive holography by providing an accessible, low-cost solution. The final product intends to nudge the HCI community to explore potential applications, in particular aquatic-centric and environmentally friendly ones. Two main user studies are performed to determine the impact of the proposed solution on a sample audience. The first study evaluates a prototype Tangible User Interface (TUI) for Holographic Reality (HR); the second evaluates a Holographic Mounted Display (HMD) for the proposed HR interface, further analyzing the interactive holographic experience without hand-held devices. Both studies were also compared with an Augmented Reality (AR) setting. The results show a significantly higher score for the HMD approach, suggesting it is the better solution, most likely due to its added simplicity and immersiveness. However, the TUI study did score higher on several key parameters and should be considered in future studies. Compared with the AR experience, the HMD study scores slightly lower but surpasses AR on several parameters. Several approaches were outlined and evaluated, depicting different methods for creating interactive Holographic Reality experiences. In spite of the low maturity of holographic technology, it can be concluded that it is comparable to, and can keep up with, other more developed and mature artificial reality settings, further supporting the case for the Holographic Reality concept.

    Toward General Purpose 3D User Interfaces: Extending Windowing Systems to Three Dimensions

    Recent growth in the commercial availability of consumer-grade 3D user interface devices like the Microsoft Kinect and the Oculus Rift, coupled with the broad availability of high-performance 3D graphics hardware, has put high-quality 3D user interfaces firmly within the reach of consumer markets for the first time. However, these devices require custom integration with every application that wishes to use them, seriously limiting application support, and there is no established mechanism for multiple applications to use the same 3D interface hardware simultaneously. This thesis proposes that these problems can be solved in the same way they were solved for 2D interfaces: by abstracting the input hardware behind input primitives provided by the windowing system and by compositing the output of applications within the windowing system before displaying it. To demonstrate the feasibility of this approach, this thesis also presents a novel Wayland compositor which allows clients to create 3D interface contexts within a 3D interface space in the same way that traditional windowing systems allow applications to create 2D interface contexts (windows) within a 2D interface space (the desktop), as well as allowing unmodified 2D Wayland clients to window into the same 3D interface space and receive standard 2D input events. This implementation demonstrates the ability of consumer 3D interface hardware to support a 3D windowing system, the ability of this 3D windowing system to support applications with compelling 3D interfaces, the ability of this style of windowing system to be built on top of existing hardware-accelerated graphics and windowing infrastructure, and the ability of such a windowing system to support unmodified 2D interface applications windowing into the same 3D windowing space as the 3D interface applications. This means that application developers could create compelling 3D interfaces with no knowledge of the hardware that supports them, that new hardware could be introduced without needing to integrate it with individual applications, and that users could mix whatever 2D and 3D applications they wish in an immersive 3D interface space, regardless of the details of the underlying hardware.
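
    The sketch below is not from the dissertation; it is a hypothetical, much-simplified Python illustration (all names invented) of the central idea above: the windowing system owns the 3D input hardware and hands every client, 2D or 3D, abstract input primitives and a surface placed in one shared 3D interface space, so applications never integrate with devices directly.

        # Hypothetical sketch only: class and method names are invented for
        # illustration and do not correspond to the thesis's Wayland compositor.
        from dataclasses import dataclass


        @dataclass
        class RayEvent:
            """A device-agnostic 3D input primitive (e.g. from a tracked controller)."""
            origin: tuple
            direction: tuple


        @dataclass
        class Surface:
            """A client-created interface context placed in the shared 3D space."""
            client: str
            position: tuple = (0.0, 0.0, 0.0)  # placement in the 3D interface space
            is_3d: bool = True                 # unmodified 2D clients get a flat quad


        class Compositor3D:
            """Owns the input hardware, places surfaces, and routes input primitives."""

            def __init__(self):
                self.surfaces = []

            def create_surface(self, client, is_3d=True):
                s = Surface(client=client, is_3d=is_3d)
                self.surfaces.append(s)
                return s

            def dispatch(self, event):
                # Real hit-testing would intersect the ray with surface geometry;
                # here every surface simply receives the abstracted event.
                for s in self.surfaces:
                    kind = "3D event" if s.is_3d else "2D pointer event"
                    print(f"{s.client}: delivered {kind} from ray at {event.origin}")


        comp = Compositor3D()
        comp.create_surface("cad-viewer", is_3d=True)   # native 3D client
        comp.create_surface("terminal", is_3d=False)    # unmodified 2D client
        comp.dispatch(RayEvent(origin=(0.0, 1.6, 0.0), direction=(0.0, 0.0, -1.0)))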

    Bi & tri dimensional scene description and composition in the MPEG-4 standard

    MPEG-4 is a new ISO/IEC standard being developed by MPEG (the Moving Picture Experts Group). The standard is to be released in November 1998, and version 1 will become an International Standard in January 1999. The MPEG-4 standard addresses the new demands that arise in a world in which more and more audio-visual material is exchanged in digital form. MPEG-4 addresses the coding of objects of various types: not only traditional video and audio frames, but also natural video and audio objects as well as textures, text, 2- and 3-dimensional graphic primitives, and synthetic music and sound effects. When MPEG-4 is used to reconstruct an audio-visual scene at a terminal, it is no longer sufficient to encode the raw audio-visual data and transmit it, as MPEG-2 does in order to synchronize video and audio. In MPEG-4, all objects are multiplexed together at the encoder and transported to the terminal. Once de-multiplexed, these objects are composed at the terminal to construct and present a meaningful audio-visual scene to the end user. The placement of these elementary audio-visual objects in space and time is described in the scene description, while the action of putting these objects together in the same representation space is the composition of audio-visual objects. My research was concerned with the scene description and composition of the audio-visual objects that are defined in an audio-visual scene. Scene descriptions are coded independently from the streams related to primitive audio-visual objects. The set of parameters belonging to the scene description is differentiated from the parameters that are used to improve the coding efficiency of an object. While the independent coding of different objects may achieve a higher compression rate, it also brings the ability to manipulate content at the terminal. This allows the modification of the scene description parameters without having to decode the primitive audio-visual objects themselves. This approach allows the development of a syntax that describes the spatio-temporal relationships of audio-visual scene objects. The behaviours of objects and their responses to user inputs can thus also be represented in the scene description, allowing richer audio-visual content to be delivered as an MPEG-4 stream.
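
    As a purely illustrative aside (this is not MPEG-4 BIFS syntax or a real MPEG-4 systems API), the Python sketch below shows the separation described above: the scene description carries only spatio-temporal placement, so the terminal can recompose or edit the scene without re-decoding the object streams.

        # Conceptual sketch only; names and structures are invented for illustration.
        from dataclasses import dataclass


        @dataclass
        class MediaObject:
            """Stands in for an independently coded elementary stream."""
            stream_id: int
            kind: str          # e.g. "video", "audio", "text"


        @dataclass
        class SceneNode:
            """Scene-description entry: where and when an object appears."""
            obj: MediaObject
            position: tuple    # spatial placement, e.g. (x, y)
            start: float       # seconds into the presentation
            duration: float


        def compose(scene, t):
            """Return a render list for presentation time t (the composition step)."""
            return [
                f"present stream {n.obj.stream_id} ({n.obj.kind}) at {n.position}"
                for n in scene
                if n.start <= t < n.start + n.duration
            ]


        scene = [
            SceneNode(MediaObject(1, "video"), position=(0, 0), start=0.0, duration=60.0),
            SceneNode(MediaObject(2, "text"), position=(10, 200), start=5.0, duration=10.0),
        ]
        # Moving the caption edits only the scene description, not the coded text stream.
        scene[1].position = (10, 20)
        print(compose(scene, t=6.0))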

    Beyond imaging with coherent anti-Stokes Raman scattering microscopy

    Optical-based microscopy techniques can sample biological specimens using many contrast mechanisms, providing good sensitivity and high spatial resolution while minimally interfering with the samples. Coherent anti-Stokes Raman scattering (CARS) microscopy is a nonlinear microscopy technique based on the Raman effect. It shares common characteristics with other optical microscopy modalities, with the added benefit of providing an endogenous contrast mechanism sensitive to molecular vibrations. CARS is now a recognized imaging modality, especially for in vivo experiments, since it eliminates the need for exogenous contrast agents and hence the problems related to the delivery, specificity, and invasiveness of those markers. However, several obstacles still prevent the wide-scale adoption of CARS in biology and medicine: the cost and complexity of current systems, the difficulty of operating and maintaining them, the lack of flexibility of the contrast mechanism, the low tuning speed, and the poor accessibility of adapted image analysis methods. This doctoral thesis strives to move beyond some of the current limitations of CARS imaging in the hope that it might encourage a wider adoption of CARS as a microscopy technique. First, we introduced a new CARS spectral imaging system with a vibrational tuning speed many orders of magnitude faster than other narrowband techniques. The system presented in this original contribution is based on a synchronized picosecond fibre laser that is both robust and portable. It can access Raman lines over a significant portion of the high-wavenumber region (2700–2950 cm⁻¹) at rates of up to 10,000 spectral points per second and is perfectly suitable for the acquisition of CARS spectral images in thick tissue. Secondly, we proposed a new image analysis method for the assessment of myelin health in images of longitudinal sections of spinal cord. We introduced a metric sensitive to the organization of the myelin structure and showed how it could be used to study pathologies such as multiple sclerosis. Finally, we developed a fully automated segmentation method specifically designed for CARS images of transverse cross sections of nerve tissue, and used it to extract nerve fibre morphology information from large-scale CARS images.

    A Multi-view and Multi-interaction System for a Digital Mock-up's collaborative environment

    Current industrial PLM tools generally rely on Concurrent Engineering (CE), which involves conducting product design and manufacturing stages in parallel and integrating the technical data shared among different experts. Various experts use domain-specific software to produce various data. This package of data is usually called a Digital Mock-Up (DMU), or a Building Information Model (BIM) in architectural engineering. For sharing DMU data, much work has been done to improve interoperability among engineering software and among models in the domains of mechanical design and eco-design. However, the computer-human interaction (CHI) currently used in the context of CE project reviews is not optimized to enhance interoperability among experts from different domains. Here, CHI concerns both complex DMU visualization and multi-user interaction. Since the DMU has multiple representations according to the domains involved, experts working together on it may each prefer their own point of view of the DMU and their own manner of interacting with it. With the development of 3D visualization and virtual reality CHI technology, it is possible to devise more intuitive tools and methods to enhance the interoperability of collaboration among experts, in both multi-view and multi-interaction, for co-located synchronous collaborative design activities. In this paper, we discuss different approaches to displaying multiple points of view of the DMU and to supporting multiple interactions with it in the context of 3D visualization, virtual reality, and augmented reality. A co-located collaborative CHI-supporting environment is proposed. This environment allows the experts to see their respective points of view of the DMU in front of a single display system and to interact with the DMU using different metaphors according to their specific needs. This could be used to assist collaborative design during project reviews where decisions on product design solutions must be made. (China Scholarship Council, CSC.)
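
    To make the multi-view idea concrete, here is a small, purely hypothetical Python sketch (not from the paper): one shared DMU object exposes per-domain point-of-view filters, so each expert sees only the attributes relevant to their discipline while everyone manipulates the same underlying model.

        # Illustrative sketch only; the part attributes and domain filters are invented.
        from dataclasses import dataclass


        @dataclass
        class DmuPart:
            name: str
            attributes: dict   # e.g. {"mass": 2.1, "material": "steel", "cost": 40}


        class DigitalMockUp:
            """A single shared model offering domain-specific points of view."""

            VIEWS = {
                "mechanical": ("mass", "material"),
                "eco-design": ("material", "recyclable"),
                "cost": ("cost",),
            }

            def __init__(self, parts):
                self.parts = list(parts)

            def view_for(self, domain):
                keep = self.VIEWS.get(domain, ())
                return {
                    p.name: {k: v for k, v in p.attributes.items() if k in keep}
                    for p in self.parts
                }


        dmu = DigitalMockUp([
            DmuPart("bracket", {"mass": 2.1, "material": "steel", "cost": 40, "recyclable": True}),
            DmuPart("cover", {"mass": 0.4, "material": "ABS", "cost": 12, "recyclable": False}),
        ])
        print(dmu.view_for("mechanical"))  # the mechanical designer's point of view
        print(dmu.view_for("eco-design"))  # the eco-design expert's point of view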

    Rendering and display for multi-viewer tele-immersion

    Video teleconferencing systems are widely deployed for business, education, and personal use to enable face-to-face communication between people at distant sites. Unfortunately, the two-dimensional video of conventional systems does not correctly convey several important non-verbal communication cues such as eye contact and gaze awareness. Tele-immersion refers to technologies aimed at providing distant users with a more compelling sense of remote presence than conventional video teleconferencing. This dissertation is concerned with the particular challenges of interaction between groups of users at remote sites. The problems of video teleconferencing are exacerbated when groups of people communicate. Ideally, a group tele-immersion system would display views of the remote site at the right size and location, from the correct viewpoint for each local user. However, it is not practical to put a camera in every possible eye location, and it is not clear how to provide each viewer with correct and unique imagery. I introduce rendering techniques and multi-view display designs to support eye contact and gaze awareness between groups of viewers at two distant sites. With a shared 2D display, virtual camera views can improve local spatial cues while preserving scene continuity, by rendering the scene from novel viewpoints that may not correspond to a physical camera. I describe several techniques, including a compact light field, a plane-sweeping algorithm, a depth-dependent camera model, and video-quality proxies, suitable for producing useful views of a remote scene for a group of local viewers. The first novel display provides simultaneous, unique monoscopic views to several users, with fewer user position restrictions than existing autostereoscopic displays. The second is a random-hole-barrier autostereoscopic display that eliminates the viewing zones and user position requirements of conventional autostereoscopic displays and provides unique 3D views for multiple users in arbitrary locations.
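
    The plane-sweeping idea mentioned above can be sketched very roughly as follows. This is not the dissertation's renderer: it assumes two rectified cameras, so each candidate depth plane reduces to a horizontal shift, and it stops at the photo-consistency cost volume rather than synthesizing a novel view.

        # Simplified plane-sweep sketch for a rectified stereo pair; illustrative only.
        import numpy as np


        def plane_sweep_disparity(left, right, max_disp):
            """Pick, per pixel, the disparity (depth plane) with the best photo-consistency."""
            h, w = left.shape
            cost = np.full((max_disp + 1, h, w), np.inf)
            for d in range(max_disp + 1):             # one "plane" per disparity hypothesis
                shifted = np.roll(right, d, axis=1)   # warp the right image onto this plane
                diff = np.abs(left.astype(float) - shifted.astype(float))
                cost[d, :, d:] = diff[:, d:]          # ignore columns that wrapped around
            return np.argmin(cost, axis=0)            # most photo-consistent plane per pixel


        # Tiny synthetic test: the right view is the left view shifted by 3 pixels.
        rng = np.random.default_rng(0)
        left = rng.integers(0, 256, size=(20, 30)).astype(float)
        right = np.roll(left, -3, axis=1)
        disparity = plane_sweep_disparity(left, right, max_disp=5)
        print(disparity[10, 5:25])   # expected to be mostly 3 where the shift is valid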