
    Visualizing and Interacting with Concept Hierarchies

    Concept hierarchies and Formal Concept Analysis (FCA) are theoretically well-grounded and extensively tested methods. They rely on line diagrams called Galois lattices for visualizing and analysing object-attribute sets. Galois lattices are visually appealing and conceptually rich for experts. However, their concept-oriented overall structure causes important drawbacks: what they show is difficult for non-experts to analyse, navigation is cumbersome, interaction is poor, and scalability is a severe bottleneck for visual interpretation even for experts. In this paper we introduce semantic probes as a means to overcome many of these problems and to extend the usability and application possibilities of traditional FCA visualization methods. Semantic probes are visual, user-centred objects that extract and organize reduced Galois sub-hierarchies. They are simpler and clearer, and they provide better navigation support through a rich set of interaction possibilities. Since probe-driven sub-hierarchies are limited to the user's focus, scalability remains under control and interpretation is facilitated. After some successful experiments, several applications are being developed; the remaining problem is finding a compromise between simplicity and conceptual expressivity.
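
    For readers unfamiliar with FCA, a minimal sketch of the underlying formalism may help: given a binary object-attribute context, each formal concept pairs a set of objects (extent) with the attributes they all share (intent), and the concepts ordered by extent inclusion form the Galois lattice. The toy context, the naive enumeration, and the focus-attribute filter below are illustrative assumptions, not the authors' probe algorithm.

```python
from itertools import chain, combinations

# Toy object-attribute context (binary relation), assumed for illustration only.
context = {
    "duck": {"flies", "swims", "lays_eggs"},
    "swan": {"flies", "swims", "lays_eggs"},
    "frog": {"swims", "lays_eggs"},
    "bat":  {"flies"},
}

def common_attributes(objects):
    """Intent: attributes shared by every object in the set."""
    sets = [context[o] for o in objects]
    return set.intersection(*sets) if sets else set(chain.from_iterable(context.values()))

def objects_having(attrs):
    """Extent: objects possessing every attribute in the set."""
    return {o for o, a in context.items() if attrs <= a}

def formal_concepts():
    """Naive enumeration of all (extent, intent) pairs; fine for toy contexts."""
    concepts = set()
    objs = list(context)
    for r in range(len(objs) + 1):
        for subset in combinations(objs, r):
            intent = common_attributes(set(subset))
            extent = objects_having(intent)
            concepts.add((frozenset(extent), frozenset(intent)))
    return concepts

# A probe-like restriction: keep only concepts whose intent contains the focus attribute.
focus = "swims"
sub_hierarchy = [c for c in formal_concepts() if focus in c[1]]
for extent, intent in sorted(sub_hierarchy, key=lambda c: -len(c[0])):
    print(sorted(extent), "->", sorted(intent))
```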

    Shifting the Focus: The Role of Presence in Reconceptualising the Design Process

    In this paper the relationship between presence and imaging is examined with a view to establishing how our understanding of imaging, and subsequently the design process, may be reconceptualised to give greater focus to its experiential potential. First, the paper outlines the research project contributing to the discussion. Then it provides brief overviews of research on both imaging and presence in the design process, highlighting the narrow conceptions of imaging (and the recognition of the need for further research) compared to the more holistic and experiential understandings of presence. The paper concludes with an argument, and a proposed study, for exploring the role of digital technology and presence in extending the potential of imaging and its role in the design process. As indicated in the DRS Conference Theme, this paper focuses "on what people experience and the systems and actions that create those experiences." Interface designers, information architects and interactive media artists understand the powerful influence of experience in design. 'Experience design' is a community of practice driven by individuals within digitally based disciplines, where the belief is that understanding people is essential to any successful design in any medium and that "experience is the personal connection with the moment and every aspect of living is an experience, whether we are the creators or simply chance participants" (Shedroff, 2001, p. 5). Keywords: Design, Design Process, Presence, Imaging, Grounded Theory

    Breaking the Screen: Interaction Across Touchscreen Boundaries in Virtual Reality for Mobile Knowledge Workers.

    Virtual Reality (VR) has the potential to transform knowledge work. One advantage of VR knowledge work is that it allows extending 2D displays into the third dimension, enabling new operations such as selecting overlapping objects or displaying additional layers of information. On the other hand, mobile knowledge workers often work on established mobile devices, such as tablets, which limits interaction to a small input space. This challenge of a constrained input space is intensified when VR knowledge work takes place in cramped environments such as airplanes and touchdown spaces. In this paper, we investigate the feasibility of interacting jointly between an immersive VR head-mounted display and a tablet within the context of knowledge work. Specifically, we 1) design, implement and study how to interact with information that reaches beyond a single physical touchscreen in VR; 2) design and evaluate a set of interaction concepts; and 3) build example applications and gather user feedback on those applications.
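
    The core geometric idea of letting content "break" the tablet boundary can be sketched as follows: a touch point on the tracked tablet is mapped into VR world space, and coordinates outside the physical screen address the virtual extension around it. The coordinate conventions, tablet dimensions and function names below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Illustrative tablet pose in VR world space (assumed values).
tablet_origin = np.array([0.0, 1.0, 0.5])   # top-left corner of the screen, metres
tablet_right  = np.array([1.0, 0.0, 0.0])   # unit vector along the screen's width
tablet_down   = np.array([0.0, 0.0, 1.0])   # unit vector along the screen's height
screen_w, screen_h = 0.25, 0.17             # physical screen size in metres

def touch_to_world(u, v):
    """Map a normalized touch coordinate (u, v) to a 3D point in VR.

    (0, 0) is the screen's top-left and (1, 1) its bottom-right; values
    outside [0, 1] address virtual canvas regions beyond the physical
    screen, which is how interaction can cross the touchscreen boundary.
    """
    return tablet_origin + u * screen_w * tablet_right + v * screen_h * tablet_down

# A drag that leaves the physical screen keeps a consistent world-space target.
print(touch_to_world(0.5, 0.5))   # point on the physical display
print(touch_to_world(1.8, 0.5))   # point on the virtual extension to the right
```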

    An Empirical Evaluation of Visual Cues for 3D Flow Field Perception

    Three-dimensional vector fields are common datasets throughout the sciences. They often represent physical phenomena that are largely invisible to us in the real world, like wind patterns and ocean currents. Computer-aided visualization is a powerful tool that can represent data in any way we choose through digital graphics. Visualizing 3D vector fields is inherently difficult due to issues such as visual clutter, self-occlusion, and the difficulty of providing depth cues that adequately support the perception of flow direction in 3D space. Cutting planes are often used to overcome these issues by presenting slices of data that are more cognitively manageable. The existing literature provides many techniques for visualizing the flow through these cutting planes; however, there is a lack of empirical studies focused on the underlying perceptual cues that make popular techniques successful. The most valuable depth cue for the perception of other kinds of 3D data, notably 3D networks and 3D point clouds, is structure-from-motion (also called the Kinetic Depth Effect); another powerful depth cue is stereoscopic viewing, but neither of these cues has been fully examined in the context of flow visualization. This dissertation presents a series of quantitative human-factors studies that evaluate depth and direction cues in the context of cutting-plane glyph designs for exploring and analyzing 3D flow fields. The results of the studies are distilled into a set of design guidelines to improve the effectiveness of 3D flow field visualizations, and those guidelines are implemented in an immersive, interactive 3D flow visualization proof-of-concept application.
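
    As a concrete illustration of the cutting-plane glyph idea, the sketch below samples a synthetic 3D vector field on a plane and splits each sample into in-plane and out-of-plane components, the two quantities a glyph design typically has to convey. The field, the sampling grid and the rendering suggestion in the comment are assumptions for illustration, not the dissertation's stimuli.

```python
import numpy as np

def flow(p):
    """Synthetic 3D vector field (a simple helical flow), assumed for illustration."""
    x, y, z = p
    return np.array([-y, x, 0.3])

# Axis-aligned cutting plane z = 0 with a coarse sample grid.
plane_normal = np.array([0.0, 0.0, 1.0])
samples = [(x, y, 0.0) for x in np.linspace(-1, 1, 5) for y in np.linspace(-1, 1, 5)]

for p in samples:
    v = flow(p)
    out_of_plane = np.dot(v, plane_normal)          # signed magnitude through the plane
    in_plane = v - out_of_plane * plane_normal      # component a 2D arrow glyph would show
    # A glyph renderer could map `in_plane` to arrow direction/length and
    # `out_of_plane` to colour or a secondary cue such as stereo or motion parallax.
    print(f"p={p}  in_plane={in_plane[:2]}  out_of_plane={out_of_plane:+.2f}")
```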

    Use of extended realities in cardiology

    Recent miniaturization of electronic components and advances in image-processing software have facilitated the entry of extended reality technology into clinical practice. In the last several years, the number of applications in cardiology has multiplied, with many promising to become standard of care. We review many of these applications in the areas of patient and physician education, cardiac rehabilitation, pre-procedural planning and intraprocedural use. The rapid integration of these approaches into the many facets of cardiology suggests that they will one day become an everyday part of physician practice.

    Enhancing interaction in mixed reality

    With continuous technological innovation, we observe mixed reality emerging from research labs into the mainstream. The arrival of capable mixed reality devices transforms how we are entertained, consume information, and interact with computing systems; the most recent devices can present synthesized stimuli to any of the human senses and substantially blur the boundaries between the real and virtual worlds. In order to build expressive and practical mixed reality experiences, designers, developers, and stakeholders need to understand and meet the upcoming challenges. This research contributes a novel taxonomy for categorizing mixed reality experiences and guidelines for designing them. We present the results of seven studies examining the challenges and opportunities of mixed reality experiences, the impact of modalities and interaction techniques on the user experience, and how to enhance the experiences. We begin with a study determining user attitudes towards mixed reality in domestic and educational environments, followed by six research probes that each investigate an aspect of reality or virtuality. In the first, a levitating steerable projector enables us to investigate how the real world can be enhanced without instrumenting the user. We show that the presentation of in-situ instructions for navigational tasks leads to a significantly higher ability to observe and recall real-world landmarks. With the second probe, we enhance the perception of reality by superimposing information usually not visible to the human eye. By amplifying human vision, we enable users to perceive thermal radiation visually. Further, we examine the effect of substituting physical components with non-functional tangible proxies or entirely virtual representations. With the third research probe, we explore how to enhance virtuality to enable a user to input text on a physical keyboard while immersed in the virtual world. Our prototype tracked the user's hands and keyboard to enable generic text input, and our analysis of text entry performance showed the importance and effect of different hand representations. We then investigate how to touch virtuality by simulating generic haptic feedback for virtual reality and show how tactile feedback through quadcopters can significantly increase the sense of presence. Our final research probe investigates the usability and input space of smartphones within mixed reality environments, pairing the user's smartphone as an input device with a secondary physical screen. Based on our learnings from these individual research probes, we developed a novel taxonomy for categorizing mixed reality experiences and guidelines for designing them. The taxonomy is based on the human sensory system and human capabilities of articulation. We showcase its versatility and set our research probes into perspective by organizing them inside the taxonomic space. The design guidelines are divided into user-centered and technology-centered guidelines. It is our hope that these will contribute to the bright future of mixed reality systems while emphasizing the new underlying interaction paradigm.
Mit der EinfĂŒhrung von leistungsfĂ€higen Mixed-Reality-GerĂ€ten verĂ€ndert sich die Art und Weise, wie wir Unterhaltungsmedien und Informationen konsumieren und wie wir mit Computersystemen interagieren. Verschiedene existierende GerĂ€te sind in der Lage, jeden der menschlichen Sinne mit synthetischen Reizen zu stimulieren. Hierdurch verschwimmt zunehmend die Grenze zwischen der realen und der virtuellen Welt. Um eindrucksstarke und praktische Mixed-Reality-Erfahrungen zu kreieren, mĂŒssen Designer und Entwicklerinnen die kĂŒnftigen Herausforderungen und neuen Möglichkeiten verstehen. In dieser Dissertation prĂ€sentieren wir eine neue Taxonomie zur Kategorisierung von Mixed-Reality-Erfahrungen sowie Richtlinien fĂŒr die Gestaltung von solchen. Wir stellen die Ergebnisse von sieben Studien vor, in denen die Herausforderungen und Chancen von Mixed-Reality-Erfahrungen, die Auswirkungen von ModalitĂ€ten und Interaktionstechniken auf die Benutzererfahrung und die Möglichkeiten zur Verbesserung dieser Erfahrungen untersucht werden. Wir beginnen mit einer Studie, in der die Haltung der nutzenden Person gegenĂŒber Mixed Reality in hĂ€uslichen und Bildungsumgebungen analysiert wird. In sechs weiteren Fallstudien wird jeweils ein Aspekt der RealitĂ€t oder VirtualitĂ€t untersucht. In der ersten Fallstudie wird mithilfe eines schwebenden und steuerbaren Projektors untersucht, wie die Wahrnehmung der realen Welt erweitert werden kann, ohne dabei die Person mit Technologie auszustatten. Wir zeigen, dass die Darstellung von in-situ-Anweisungen fĂŒr Navigationsaufgaben zu einer deutlich höheren FĂ€higkeit fĂŒhrt, SehenswĂŒrdigkeiten der realen Welt zu beobachten und wiederzufinden. In der zweiten Fallstudie erweitern wir die Wahrnehmung der RealitĂ€t durch Überlagerung von Echtzeitinformationen, die fĂŒr das menschliche Auge normalerweise unsichtbar sind. Durch die Erweiterung des menschlichen Sehvermögens ermöglichen wir den Anwender:innen, WĂ€rmestrahlung visuell wahrzunehmen. DarĂŒber hinaus untersuchen wir, wie sich das Ersetzen von physischen Komponenten durch nicht funktionale, aber greifbare Replikate oder durch die vollstĂ€ndig virtuelle Darstellung auswirkt. In der dritten Fallstudie untersuchen wir, wie virtuelle RealitĂ€ten verbessert werden können, damit eine Person, die in der virtuellen Welt verweilt, Text auf einer physischen Tastatur eingeben kann. Unser Versuchsdemonstrator detektiert die HĂ€nde und die Tastatur, zeigt diese in der vermischen RealitĂ€t an und ermöglicht somit die verbesserte Texteingaben. Unsere Analyse der TexteingabequalitĂ€t zeigte die Wichtigkeit und Wirkung verschiedener Handdarstellungen. Anschließend untersuchen wir, wie man VirtualitĂ€t berĂŒhren kann, indem wir generisches haptisches Feedback fĂŒr virtuelle RealitĂ€ten simulieren. Wir zeigen, wie Quadrokopter taktiles Feedback ermöglichen und dadurch das PrĂ€senzgefĂŒhl deutlich steigern können. Unsere letzte Fallstudie untersucht die Benutzerfreundlichkeit und den Eingaberaum von Smartphones in Mixed-Reality-Umgebungen. Hierbei wird das Smartphone der Person als EingabegerĂ€t mit einem sekundĂ€ren physischen Bildschirm verbunden, um die Ein- und AusgabemodalitĂ€ten zu erweitern. Basierend auf unseren Erkenntnissen aus den einzelnen Fallstudien haben wir eine neuartige Taxonomie zur Kategorisierung von Mixed-Reality-Erfahrungen sowie Richtlinien fĂŒr die Gestaltung von solchen entwickelt. Die Taxonomie basiert auf dem menschlichen Sinnessystem und den ArtikulationsfĂ€higkeiten. 
Wir stellen die vielseitige Verwendbarkeit vor und setzen unsere Fallstudien in Kontext, indem wir sie innerhalb des taxonomischen Raums einordnen. Die Gestaltungsrichtlinien sind in nutzerzentrierte und technologiezentrierte Richtlinien unterteilt. Es ist unsere Anliegen, dass diese Gestaltungsrichtlinien zu einer erfolgreichen Zukunft von Mixed-Reality-Systemen beitragen und gleichzeitig die neuen Interaktionsparadigmen hervorheben
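
    Because the taxonomy is described only at a high level (human senses on the output side, articulation capabilities on the input side), a small data-structure sketch can make the idea concrete. The dimension names and the example classification below are assumptions for illustration, not the thesis' definitions.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class Sense(Enum):
    """Output side: human senses a mixed reality experience stimulates."""
    VISION = auto(); HEARING = auto(); TOUCH = auto(); SMELL = auto(); TASTE = auto()

class Articulation(Enum):
    """Input side: human articulation channels the experience captures."""
    HANDS = auto(); HEAD = auto(); GAZE = auto(); VOICE = auto(); BODY = auto()

@dataclass
class Experience:
    name: str
    outputs: set = field(default_factory=set)   # which senses receive synthetic stimuli
    inputs: set = field(default_factory=set)    # which articulation channels are tracked

# Example classification of one research probe (illustrative, not from the thesis).
keyboard_in_vr = Experience(
    name="Physical keyboard text entry in VR",
    outputs={Sense.VISION, Sense.TOUCH},        # rendered hands plus passive haptics of real keys
    inputs={Articulation.HANDS},
)
print(keyboard_in_vr)
```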

    Simulating bodily movement as an agent for the reactivation of forgotten open air spaces in the city

    This paper presents experimental work that uses immersive technologies for engaging users and local communities in the design process of architectural interventions on historic, fragmented environments, in an effort to reactivate the place under study. In addition to cutting-edge methods of capturing and analysing on-site information, this research framework, implemented in the ongoing study of the Paphos Gate area of historic Nicosia, which lies on the infamous Green Line that still divides the city, explores the potential of narrative-led visualization to enable personal interpretations of space and its history. The resulting virtual environment hosts reconstructions of the Paphos Gate neighbourhood produced from archival material and via 3D data acquisition (LiDAR, UAV and terrain Structure-from-Motion techniques), in order to explore the associations between the transformation of the monument through the years, from its construction to the present day, and the bodily experience of visitors sojourning in its surrounding part of the city. The vision of this research is to develop a digital platform which, through immersion, cinematic language and storytelling, will enable the evaluation of alternative scenarios and design interventions in the context of the management plan of forgotten open air spaces that used to be popular within their urban fabric. Funded by the Horizon 2020 Framework Programme of the European Union. Peer reviewed.
