
    Interactive Visualization Lenses: Natural Magic Lens Interaction for Graph Visualization

    Information visualization is an important research field concerned with making sense of and inferring knowledge from data collections. Graph visualizations are specific techniques for data representation that are relevant in diverse application domains, among them biology, software engineering, and business finance. These data visualizations benefit from the display space provided by novel interactive large display environments. However, these environments also pose new challenges and create new requirements for interaction beyond the desktop and a corresponding redesign of analysis tools. This thesis focuses on interactive magic lenses, specialized locally applied tools that temporarily manipulate the visualization. These may include magnification of focus regions but also more graph-specific functions such as pulling in neighboring nodes or locally reducing edge clutter. Up to now, these lenses have mostly been used as single-user, single-purpose tools operated by mouse and keyboard. This dissertation presents the extension of magic lenses both in terms of function and interaction for large vertical displays. In particular, this thesis contributes several natural interaction designs with magic lenses for the exploration of graph data in node-link visualizations using diverse interaction modalities. This development incorporates flexible switching between lens functions, adjustment of individual lens properties and function parameters, and the combination of lenses. It proposes interaction techniques for fluent multi-touch manipulation of lenses, controlling lenses using mobile devices in front of large displays, and a novel concept of body-controlled magic lenses. In addition to these interaction techniques, functional extensions turn the lenses into user-configurable, personal territories that support alternative interaction styles. To create the foundation for this extension, the dissertation contributes a comprehensive design space of magic lenses, their functions, parameters, and interactions. Additionally, it discusses increased embodiment in tool and controller design, contributing insights into user position and movement in front of large vertical displays based on empirical investigations and evaluations.
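
    The design space described above treats a lens as an object with adjustable properties (position, size), a swappable function, adjustable function parameters, and the ability to be combined with other lenses. As a rough illustration of that idea only (the class and function names below are hypothetical and not taken from the thesis), a configurable, composable lens over a node-link view might be modeled like this:

    # Minimal sketch, assuming a node-link view given as 2D node positions.
    from dataclasses import dataclass, field
    from typing import Callable, Dict, List, Tuple

    Node = Tuple[float, float]                                  # node position
    LensFunction = Callable[[List[Node], "Lens"], List[Node]]   # local manipulation

    @dataclass
    class Lens:
        center: Tuple[float, float]
        radius: float
        functions: List[LensFunction] = field(default_factory=list)
        params: Dict[str, float] = field(default_factory=dict)  # e.g. {"magnification": 2.0}

        def contains(self, p: Node) -> bool:
            dx, dy = p[0] - self.center[0], p[1] - self.center[1]
            return dx * dx + dy * dy <= self.radius ** 2

        def apply(self, nodes: List[Node]) -> List[Node]:
            # Only nodes inside the lens region are manipulated; functions compose in order.
            inside = [p for p in nodes if self.contains(p)]
            outside = [p for p in nodes if not self.contains(p)]
            for f in self.functions:
                inside = f(inside, self)
            return outside + inside

    def magnify(nodes: List[Node], lens: Lens) -> List[Node]:
        # Hypothetical magnification function: scale node positions away from the lens center.
        m = lens.params.get("magnification", 2.0)
        cx, cy = lens.center
        return [(cx + (x - cx) * m, cy + (y - cy) * m) for x, y in nodes]

    In this reading, switching the lens function amounts to replacing the entries in functions, and combining lenses amounts to applying several such regions to the same node list.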

    Digital Alchemy: Matter and Metamorphosis in Contemporary Digital Animation and Interface Design

    The recent proliferation of special effects in Hollywood film has ushered in an era of digital transformation. Among scholars, digital technology is hailed as a revolutionary moment in the history of communication and representation. Nevertheless, media scholars and cultural historians have difficulty finding a language adequate to theorizing digital artifacts because they are not just texts to be deciphered. Rather, digital media artifacts also invite critiques about the status of reality because they resurrect ancient problems of embodiment and transcendence. In contrast to scholarly approaches to digital technology, computer engineers, interface designers, and special effects producers have invented a robust set of terms and phrases to describe the practice of digital animation. In order to address this disconnect between producers of new media and scholars of new media, I argue that the process of digital animation borrows extensively from a set of preexisting terms describing materiality that were prominent for centuries prior to the scientific revolution. Specifically, digital animators and interface designers make use of the ancient science, art, and technological craft of alchemy. Both alchemy and digital animation share several fundamental elements: both boast the power of being able to transform one material, substance, or thing into a different material, substance, or thing. Both seek to transcend the body and materiality but, in the process, find that this elusive goal (realism and gold) is forever receding toward the horizon. The introduction begins with a literature review of the field of digital media studies. It identifies a gap in the field concerning disparate arguments about new media technology. On the one hand, scholars argue that new technologies like cyberspace and digital technology enable radical new forms of engagement with media on individual, social, and economic levels. At the same time that media scholars assert that our current epoch is marked by a historical rupture, many other researchers claim that new media are increasingly characterized by ancient metaphysical problems like embodiment and transcendence. In subsequent chapters I investigate this disparity.

    The Metaverse: Survey, Trends, Novel Pipeline Ecosystem & Future Directions

    The Metaverse offers a second world beyond reality, where boundaries are non-existent and possibilities are endless, through engagement and immersive experiences using virtual reality (VR) technology. Many disciplines can benefit from the advancement of the Metaverse when accurately developed, including the fields of technology, gaming, education, art, and culture. Nevertheless, developing the Metaverse environment to its full potential is an ambiguous task that needs proper guidance and directions. Existing surveys on the Metaverse focus only on a specific aspect or discipline of the Metaverse and lack a holistic view of the entire process. To this end, a more holistic, multi-disciplinary, in-depth, and academic- and industry-oriented review is required to provide a thorough study of the Metaverse development pipeline. To address these issues, we present in this survey a novel multi-layered pipeline ecosystem composed of (1) the Metaverse computing, networking, communications, and hardware infrastructure, (2) environment digitization, and (3) user interactions. For every layer, we discuss the components that detail the steps of its development. Also, for each of these components, we examine the impact of a set of enabling technologies and empowering domains (e.g., Artificial Intelligence, Security & Privacy, Blockchain, Business, Ethics, and Social) on its advancement. In addition, we explain the importance of these technologies in supporting decentralization, interoperability, user experiences, interactions, and monetization. Our presented study highlights the existing challenges for each component, followed by research directions and potential solutions. To the best of our knowledge, this survey is the most comprehensive and allows users, scholars, and entrepreneurs to get an in-depth understanding of the Metaverse ecosystem and to find their opportunities and potential for contribution.
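
    The three layers named in this abstract can be read as a simple hierarchy of infrastructure, digitization, and user interaction, with the enabling technologies cutting across all of them. The sketch below is purely illustrative (the component names and the component-to-enabler pairings are placeholders, not the survey's actual lists):

    from enum import Enum

    class Layer(Enum):
        INFRASTRUCTURE = "computing, networking, communications and hardware"
        DIGITIZATION = "environment digitization"
        USER_INTERACTIONS = "user interactions"

    ENABLERS = ["Artificial Intelligence", "Security & Privacy", "Blockchain",
                "Business", "Ethics", "Social"]

    # Hypothetical example: one component per layer mapped to relevant enablers.
    pipeline = {
        Layer.INFRASTRUCTURE: {"edge rendering": ["Artificial Intelligence", "Security & Privacy"]},
        Layer.DIGITIZATION: {"3D scene capture": ["Artificial Intelligence"]},
        Layer.USER_INTERACTIONS: {"avatar marketplace": ["Blockchain", "Business", "Ethics"]},
    }

    for layer, components in pipeline.items():
        for component, enablers in components.items():
            print(f"{layer.name}: {component} <- {', '.join(enablers)}")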

    Light on horizontal interactive surfaces: Input space for tabletop computing

    In the last 25 years we have witnessed the rise and growth of interactive tabletop research, in both academic and industrial settings. The rising demand for the digital support of human activities motivated the need to bring computational power to table surfaces. In this article, we review the state of the art of tabletop computing, highlighting core aspects that frame the input space of interactive tabletops: (a) developments in hardware technologies that have caused the proliferation of interactive horizontal surfaces and (b) issues related to new classes of interaction modalities (multitouch, tangible, and touchless). A classification is presented that aims to give a detailed view of the current development of this research area and to define opportunities and challenges for novel touch- and gesture-based interactions between the human and the surrounding computational environment. © 2014 ACM. This work has been funded by the Integra (Amper Sistemas and CDTI, Spanish Ministry of Science and Innovation) and TIPEx (TIN2010-19859-C03-01) projects, and by the Programa de Becas y Ayudas para la Realización de Estudios Oficiales de Máster y Doctorado en la Universidad Carlos III de Madrid, 2010.
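
    The classification described here spans hardware sensing technologies on one axis and interaction modalities on the other. A minimal sketch of how such an input-space taxonomy could be encoded follows; the technology values and example systems are my own illustrations, not the article's actual classification:

    from dataclasses import dataclass
    from enum import Enum, auto

    class Modality(Enum):
        MULTITOUCH = auto()
        TANGIBLE = auto()
        TOUCHLESS = auto()

    class SensingTechnology(Enum):
        CAPACITIVE = auto()              # example values only
        OPTICAL_FTIR = auto()
        DIFFUSED_ILLUMINATION = auto()
        DEPTH_CAMERA = auto()

    @dataclass
    class TabletopSystem:
        name: str
        technology: SensingTechnology
        modalities: frozenset

    # Hypothetical example entries classified along both axes.
    systems = [
        TabletopSystem("FTIR prototype", SensingTechnology.OPTICAL_FTIR,
                       frozenset({Modality.MULTITOUCH})),
        TabletopSystem("Depth-sensed table", SensingTechnology.DEPTH_CAMERA,
                       frozenset({Modality.MULTITOUCH, Modality.TANGIBLE, Modality.TOUCHLESS})),
    ]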

    Real-Time Capture and Rendering of Physical Scene with an Efficiently Calibrated RGB-D Camera Network

    From object tracking to 3D reconstruction, RGB-Depth (RGB-D) camera networks play an increasingly important role in many vision and graphics applications. With the recent explosive growth of Augmented Reality (AR) and Virtual Reality (VR) platforms, utilizing RGB-D camera networks to capture and render dynamic physical space can enhance immersive experiences for users. To maximize coverage and minimize costs, practical applications often use a small number of RGB-D cameras and place them sparsely around the environment for data capturing. While sparse color camera networks have been studied for decades, the problems of extrinsic calibration of, and rendering with, sparse RGB-D camera networks are less well understood. Extrinsic calibration is difficult because of inappropriate RGB-D camera models and a lack of shared scene features. Due to significant camera noise and sparse coverage of the scene, the quality of rendered 3D point clouds is much lower than that of synthetic models. Adding virtual objects whose rendering depends on the physical environment, such as those with reflective surfaces, further complicates the rendering pipeline. In this dissertation, I propose novel solutions to tackle these challenges faced by RGB-D camera systems. First, I propose a novel extrinsic calibration algorithm that can accurately and rapidly calibrate the geometric relationships across an arbitrary number of RGB-D cameras on a network. Second, I propose a novel rendering pipeline that can capture and render, in real time, dynamic scenes in the presence of arbitrarily shaped reflective virtual objects. Third, I demonstrate a teleportation application that uses the proposed system to merge two geographically separated 3D captured scenes into the same reconstructed environment. To provide fast and robust calibration for a sparse RGB-D camera network, first, the correspondences between different camera views are established by using a spherical calibration object. We show that this approach outperforms other techniques based on planar calibration objects. Second, instead of modeling camera extrinsics using a rigid transformation that is optimal only for pinhole cameras, different view transformation functions including rigid transformation, polynomial transformation, and manifold regression are systematically tested to determine the most robust mapping that generalizes well to unseen data. Third, the celebrated bundle adjustment procedure is reformulated to minimize the global 3D projection error so as to fine-tune the initial estimates. To achieve realistic mirror rendering, a robust eye detector is used to identify the viewer's 3D location and render the reflective scene accordingly. The limited field of view obtained from a single camera is overcome by our calibrated RGB-D camera network system, which is scalable to capture an arbitrarily large environment. The rendering is accomplished by raytracing light rays from the viewpoint to the scene as reflected by the virtual curved surface. To the best of our knowledge, the proposed system is the first to render reflective dynamic scenes from real 3D data in large environments. Our scalable client-server architecture is computationally efficient: the calibration of a camera network system, including data capture, can be done in minutes using only commodity PCs.
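
    The calibration pipeline described above first establishes 3D correspondences (the sphere center observed by multiple cameras) and then fits a view transformation between cameras. Under the simplest of the transformation models mentioned, the rigid transformation, one standard building block is the Kabsch/Procrustes SVD alignment of corresponding points. The sketch below shows only that step and is not the dissertation's implementation (which also evaluates polynomial and manifold-regression mappings and refines estimates with a reformulated bundle adjustment):

    import numpy as np

    def rigid_extrinsics(points_a: np.ndarray, points_b: np.ndarray):
        """Return R (3x3) and t (3,) such that R @ a + t approximates b for matched points.

        points_a, points_b: (N, 3) arrays of matched sphere-center positions
        observed by camera A and camera B at the same time instants.
        """
        centroid_a = points_a.mean(axis=0)
        centroid_b = points_b.mean(axis=0)
        A = points_a - centroid_a
        B = points_b - centroid_b

        # Cross-covariance and SVD (Kabsch algorithm).
        H = A.T @ B
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:          # guard against a reflection solution
            Vt[-1, :] *= -1
            R = Vt.T @ U.T
        t = centroid_b - R @ centroid_a
        return R, t

    # Tiny synthetic check: recover a known rotation/translation from noisy points.
    rng = np.random.default_rng(0)
    pts_a = rng.uniform(-1, 1, size=(20, 3))
    angle = np.deg2rad(30)
    R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                       [np.sin(angle),  np.cos(angle), 0],
                       [0, 0, 1]])
    t_true = np.array([0.5, -0.2, 1.0])
    pts_b = (R_true @ pts_a.T).T + t_true + rng.normal(0, 0.005, size=pts_a.shape)
    R_est, t_est = rigid_extrinsics(pts_a, pts_b)

    A full network calibration would chain such pairwise estimates across all cameras and then jointly refine them, as the abstract describes.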

    Overcoming the limitations of commodity augmented reality head mounted displays for use in product assembly

    Numerous studies have shown the effectiveness of utilizing Augmented Reality (AR) to deliver work instructions for complex assemblies. Traditionally, this research has been performed using hand-held displays, such as smartphones and tablets, or custom-built Head Mounted Displays (HMDs). AR HMDs have been shown to be especially effective for assembly tasks as they allow the user to remain hands-free while receiving work instructions. Furthermore, in recent years a wave of commodity AR HMDs has come to market, including the Microsoft HoloLens, Magic Leap One, Meta 2, and DAQRI Smart Glasses. These devices present a unique opportunity for delivering assembly instructions due to their relatively low cost and accessibility compared to the custom-built AR HMD solutions of the past. Despite these benefits, the technology behind these HMDs still has many limitations, including input, user interface, spatial registration, navigation, and occlusion. To accurately deliver work instructions for complex assemblies, the hardware limitations of these commodity AR HMDs must be overcome. For this research, an AR assembly application was developed for the Microsoft HoloLens using methods specifically designed to address the aforementioned issues. Input and user interface methods were implemented and analyzed to maximize the usability of the application. An intuitive navigation system was developed to guide users through a large training environment, leading them to the current point of interest. The native tracking system of the HoloLens was augmented with image target tracking capabilities to stabilize virtual content, enhance accuracy, and account for spatial drift. This fusion of marker-based and marker-less tracking techniques provides a novel approach to displaying robust AR assembly instructions on a commodity AR HMD. Furthermore, utilizing this novel spatial registration approach, the positions of real-world objects were accurately registered to properly occlude virtual work instructions. To render the desired effect, specialized computer graphics methods and custom shaders were developed and implemented for the AR assembly application. After developing novel methods to display work instructions on a commodity AR HMD, it was necessary to validate that these work instructions were being accurately delivered. Utilizing the sensors on the HoloLens, data was collected during the assembly process regarding head position, orientation, assembly step times, and an estimation of spatial drift. Together with wearable physiological sensor data, this data was fused in a visualization application to validate that instructions were properly delivered and to provide an opportunity for an analyst to examine trends within an assembly session. Additionally, the spatial drift data was analyzed to gain a better understanding of how spatial drift accumulates over time and to ensure that the spatial registration mitigation techniques were effective. Academic research has shown that AR may substantially reduce costs for assembly operations through a reduction in errors, time, and cognitive workload. This research provides novel solutions to overcome the limitations of commodity AR HMDs and validates their use for product assembly. Furthermore, the research provided in this thesis demonstrates the potential of commodity AR HMDs and how their limitations can be mitigated for use in product assembly tasks.
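
    The marker/marker-less fusion described above can be understood as a pose correction: when an image target with a known, world-anchored pose is re-detected, the difference between that known pose and the drifted tracked pose yields a transform that re-anchors the virtual content. The sketch below illustrates only that general idea with 4x4 homogeneous matrices; it is not the thesis's HoloLens implementation, and all names are hypothetical:

    import numpy as np

    def pose_matrix(R: np.ndarray, t: np.ndarray) -> np.ndarray:
        # Build a 4x4 homogeneous pose from rotation R and translation t.
        T = np.eye(4)
        T[:3, :3] = R
        T[:3, 3] = t
        return T

    def drift_correction(marker_world: np.ndarray, marker_tracked: np.ndarray) -> np.ndarray:
        # Transform mapping drifted tracker coordinates back to world coordinates.
        return marker_world @ np.linalg.inv(marker_tracked)

    def correct_content(content_poses, correction):
        # Re-anchor virtual content poses using the latest correction transform.
        return [correction @ T for T in content_poses]

    # Hypothetical usage: known marker pose in world, slightly drifted tracked pose.
    marker_world = pose_matrix(np.eye(3), np.zeros(3))
    drift = pose_matrix(np.eye(3), np.array([0.03, 0.0, -0.02]))   # a few cm of drift
    marker_tracked = drift @ marker_world
    instruction_pose = pose_matrix(np.eye(3), np.array([1.0, 0.2, 0.5]))
    corrected = correct_content([drift @ instruction_pose],
                                drift_correction(marker_world, marker_tracked))
    # corrected[0] is numerically back at instruction_pose, i.e. the drift is removed.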

    Visualization and Human-Machine Interaction

    The digital age offers many challenges in the field of visualization. Visual imagery has been effectively used to communicate messages through the ages, to express both abstract and concrete ideas. Today, visualization has ever-expanding applications in science, engineering, education, medicine, entertainment, and many other areas. Different areas of research contribute to innovation in the field of interactive visualization, such as data science, visual technology, the Internet of Things, and many more. Among them, two areas of recognized importance are Augmented Reality and Visual Analytics. This thesis presents my research in the fields of visualization and human-machine interaction. The purpose of the proposed work is to investigate existing solutions in the area of Augmented Reality (AR) for maintenance. A smaller section of this thesis presents a minor research project on an equally important theme, Visual Analytics. Overall, the main goal is to identify the most important existing problems and then design and develop innovative solutions to address them. The maintenance application domain has been chosen since it is historically one of the first fields of application for Augmented Reality and it presents the most common and important challenges that AR can raise, as described in chapter 2. Since one of the main problems in AR application deployment is the reconfigurability of the application, a framework has been designed and developed that allows the user to create, deploy, and update AR applications in real time. Furthermore, the research focused on the problems related to hands-free interaction, thus investigating the area of speech-recognition interfaces and designing innovative solutions to address the problems of intuitiveness and robustness of the interface. On the other hand, the area of Visual Analytics has been investigated: among the different areas of research, multidimensional data visualization, similarly to AR, poses specific problems related to the interaction between the user and the machine. An analysis of the existing solutions has been carried out in order to identify their limitations and to point out possible improvements. Since this analysis identifies the scatterplot as a well-established visualization tool worthy of further research, different techniques for adapting its usage to multidimensional data are analyzed. A multidimensional scatterplot has been designed and developed in order to perform a comparison with another multidimensional visualization tool, ScatterDice. The first chapters of my thesis describe my investigations in the area of Augmented Reality for maintenance. Chapter 1 provides definitions for the most important terms and an introduction to AR. The second chapter focuses on maintenance, describing the motivations that led to choosing this application domain. Moreover, the analysis concerning open problems and related works is described, along with the methodology adopted to design and develop the proposed solutions. The third chapter illustrates how the adopted methodology has been applied in order to address the problems described in the previous one. Chapter 4 describes the methodology adopted to carry out the tests and outlines the experimental results, whereas the fifth chapter illustrates the conclusions and points out possible future developments. Chapter 6 describes the analysis and research work performed in the field of Visual Analytics, more specifically on multidimensional data visualizations.
Overall, this thesis illustrates how the proposed solutions address common problems of visualization and human-machine interaction, such as interface design, robustness of the interface, and acceptance of new technology, whereas other problems are related to the specific research domain, such as pose tracking and reconfigurability of the procedure for the AR domain.
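
    As a minimal illustration of adapting the scatterplot to multidimensional data (this is a generic scatterplot-matrix sketch, not the prototype developed in the thesis), every pairwise projection of the dimensions can be drawn side by side, similar in spirit to the 2D views that ScatterDice lets the user navigate between:

    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(1)
    data = rng.normal(size=(200, 4))            # 200 samples, 4 hypothetical dimensions
    labels = [f"dim {i}" for i in range(data.shape[1])]

    n = data.shape[1]
    fig, axes = plt.subplots(n, n, figsize=(8, 8))
    for i in range(n):
        for j in range(n):
            ax = axes[i, j]
            if i == j:
                ax.hist(data[:, i], bins=20)    # per-dimension distribution on the diagonal
            else:
                ax.scatter(data[:, j], data[:, i], s=4)
            if i == n - 1:
                ax.set_xlabel(labels[j])
            if j == 0:
                ax.set_ylabel(labels[i])
    plt.tight_layout()
    plt.show()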

    Extending the Metaverse: Exploring Generative Objects with Extended Reality Environments and Adaptive Context Awareness

    The metaverse, built on the internet, virtual and augmented reality, and other domains for immersive environments, has come to be considered mainstream in recent years. However, current metaverse platforms remain disconnected from the physical space, leading to reduced engagement in these applications. This thesis project explores an extended metaverse framework with generative content and the design of a seamless interface to strengthen the connection between the metaverse and the physical environment and to create coherence and efficiency between them. The extended metaverse agent helps prevent this by improving the interaction, embodiment, and agency that dynamically engage humans in mixed reality (MR) environments. This thesis project designs and prototypes MR objects and environments with the research through design (RTD) and speculative design methodologies, whereby future applications are imagined under the assumption that smart glasses are commonplace, helping users visualize the coherence of virtual and physical spaces simultaneously. To summarize, this thesis project provides an extended metaverse framework and an agent that generates content from physical contexts to establish coherence between virtual and physical environments.
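
    As a purely hypothetical sketch of the idea that an agent generates virtual content from sensed physical context (the data fields and rules below are illustrative and not taken from the thesis framework), such an agent might map context attributes to the parameters of a generated MR object:

    from dataclasses import dataclass
    import random

    @dataclass
    class PhysicalContext:
        location: str           # e.g. "kitchen", "street"
        ambient_light: float    # 0.0 (dark) .. 1.0 (bright)
        nearby_objects: list

    @dataclass
    class GeneratedObject:
        shape: str
        brightness: float
        anchored_to: str

    def generate(context: PhysicalContext) -> GeneratedObject:
        # Keep the virtual object's appearance coherent with the physical lighting
        # and anchor it to something the user can see, linking virtual and physical space.
        anchor = context.nearby_objects[0] if context.nearby_objects else context.location
        shape = random.choice(["plant", "lamp", "sculpture"])
        return GeneratedObject(shape=shape,
                               brightness=min(1.0, context.ambient_light + 0.2),
                               anchored_to=anchor)

    obj = generate(PhysicalContext("kitchen", 0.6, ["table", "window"]))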