4,128 research outputs found
Design Patterns for Situated Visualization in Augmented Reality
Situated visualization has become an increasingly popular research area in
the visualization community, fueled by advancements in augmented reality (AR)
technology and immersive analytics. Visualizing data in spatial proximity to
their physical referents affords new design opportunities and considerations
not present in traditional visualization, which researchers are now beginning
to explore. However, the AR research community has an extensive history of
designing graphics that are displayed in highly physical contexts. In this
work, we leverage the richness of AR research and apply it to situated
visualization. We derive design patterns which summarize common approaches of
visualizing data in situ. The design patterns are based on a survey of 293
papers published in the AR and visualization communities, as well as our own
expertise. We discuss design dimensions that help to describe both our patterns
and previous work in the literature. This discussion is accompanied by several
guidelines which explain how to apply the patterns given the constraints
imposed by the real world. We conclude by discussing future research directions
that will help establish a complete understanding of the design of situated
visualization, including the role of interactivity, tasks, and workflows.Comment: To appear in IEEE VIS 202
Digital twin-enabled human-robot collaborative teaming towards sustainable and healthy built environments
Development of sustainable and healthy built environments (SHBE) is highly advocated to achieve collective societal good. Part of the pathway to SHBE is the engagement of robots to manage ever more complex facilities for tasks such as inspection and disinfection. However, despite continuing advances in robot intelligence, it remains “mission impossible” for robots to independently undertake such open-ended problems as facility management, calling for a need to “team up” robots with humans. Leveraging the digital twin's ability to capture real-time data and inform decision-making via dynamic simulation, this study aims to develop a human-robot teaming framework for facility management to achieve sustainability and healthiness in built environments. A digital twin-enabled prototype system is developed based on the framework. Case studies showed that the framework can safely and efficiently incorporate robotics into facility management tasks (e.g., patrolling, inspection, and cleaning) by allowing humans to plan, oversee, manage, and cooperate with the robot via the digital twin's bi-directional mechanism. The study lays out a high-level framework, under which purposeful efforts can be made to unlock the digital twin's full potential in enabling humans and robots to collaborate in facility management towards SHBE.
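The bi-directional mechanism described above can be pictured as a loop in which human-planned tasks flow down to the robot through the twin, while the robot's state flows back up for oversight. The following is an illustrative sketch only, not the paper's implementation; all class and method names (`Robot`, `DigitalTwin`, `assign`, `execute`) are hypothetical.

```python
# Hypothetical sketch of a bi-directional digital-twin loop: the human
# plans a task via the twin (downlink), and the robot's execution report
# updates the twin's mirrored state for human oversight (uplink).

class Robot:
    """Stands in for a physical facility-management robot."""
    def __init__(self):
        self.log = []

    def execute(self, task):
        self.log.append(task)  # the robot carries out the task
        return {"task": task, "status": "done"}

class DigitalTwin:
    """Mirrors robot state and mediates human <-> robot communication."""
    def __init__(self, robot):
        self.robot = robot
        self.state = {}  # the twin's mirrored view of the robot

    def assign(self, task):
        # Downlink: forward the human-planned task to the physical robot.
        report = self.robot.execute(task)
        # Uplink: the robot's report updates the twin for oversight.
        self.state = report
        return self.state

twin = DigitalTwin(Robot())
result = twin.assign("inspect corridor A")
print(result["status"])  # prints "done"
```

In a real system the downlink and uplink would be asynchronous sensor and command streams rather than a single method call, but the same two-way flow applies.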
3D (embodied) projection mapping and sensing bodies : a study in interactive dance performance
This dissertation identifies the synergies between physical and virtual environments when designing for immersive experiences in interactive dance performances. The integration of virtual information in physical space is transforming our interactions and experiences with the world. By using the body and creative expression as the interface between real and virtual worlds, dance performance creates a privileged framework to research and design interactive mixed reality environments and immersive augmented architectures. The research is primarily situated in the fields of visual art and interaction design. It combines performance with transdisciplinary fields and intertwines practice with theory. The theoretical and conceptual implications involved in designing and experiencing immersive hybrid environments are analyzed using the reality–virtuality continuum. These theories helped frame the ways augmented reality architectures are achieved through the integration of dance performance with digital software and reception displays. They also helped identify the main artistic affordances and restrictions in the design of augmented reality and augmented virtuality environments for live performance. These pervasive media architectures were materialized in three field experiments, the live dance performances. Each performance was created in three different stages of conception, design and production. The first stage was to “digitize” the performer’s movement and brain activity to the virtual environment and our system. This was accomplished through the use of depth sensor cameras, 3D motion capture, and brain computer interfaces. The second stage was the creation of the computational architecture and software that aggregates the connections and mapping between the physical body and the spatial dynamics of the virtual environment. This process created real-time interactions between the performer’s behavior and motion and the real-time generative computer 3D graphics. 
Finally, the third stage consisted of the output modality: 3D projector-based augmentation techniques were adopted in order to overlay the virtual environment onto physical space. This thesis proposes and lays out theoretical, technical, and artistic frameworks between 3D digital environments and moving bodies in dance performance. By sensing the body and the brain with the 3D virtual environments, new layers of augmentation and interactions are established, and ultimately this generates mixed reality environments for embodied improvisational self-expression.
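The second stage described above, mapping between the physical body and the spatial dynamics of the virtual environment, amounts to remapping sensed body data onto parameters of the generative graphics. The following is a minimal illustrative sketch, not the dissertation's system; the mapping of hand height to a particle-emission rate and all names here are hypothetical.

```python
# Hypothetical sketch of a body-to-graphics mapping: a tracked joint
# coordinate (e.g. hand height from a depth camera, in metres) is
# linearly remapped onto a parameter of real-time generative visuals.

def map_range(value, in_lo, in_hi, out_lo, out_hi):
    """Linearly remap value from [in_lo, in_hi] to [out_lo, out_hi]."""
    value = max(in_lo, min(in_hi, value))  # clamp to the sensor's range
    t = (value - in_lo) / (in_hi - in_lo)
    return out_lo + t * (out_hi - out_lo)

def emission_rate(hand_y_m):
    # Map 0.5 m - 2.0 m of hand height onto 0-500 particles per second.
    return map_range(hand_y_m, 0.5, 2.0, 0.0, 500.0)

print(emission_rate(1.25))  # midpoint of the input range -> 250.0
```

A full pipeline would run such mappings every frame for many joints and brain-signal channels, feeding a rendering engine; the clamp keeps noisy sensor readings from driving parameters out of range.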
The Sunglasses of Ideology: Augmented Reality as Posthuman Cognitive Prosthesis
This project argues for a methodological approach to examining augmented reality (AR) that blends new media studies with the digital humanities to develop a hybrid methodology that accounts for AR as a digital medium and, in turn, a critical framework for digital humanities (DH) cultural criticism. As Steven Jones argues in The Emergence of the Digital Humanities, the digital has always been physical, and the network has become the water in which we swim (20). Our networked technology has begun to reflect this by showing closer interaction between physical and digital artifacts, the most notable example being AR, where digital information responds directly to physical space. This project takes a multidisciplinary approach to explore the rhetorical and ideological implications of AR as both a technology and a medium. By exploring AR as it relates to current digital humanities scholarship, comparative new media studies, and critical theory, as well as through a hands-on approach that involved the development of an AR smartphone application, this project aims to show that augmented reality is uniquely useful as a vessel for future research into digital materiality, while eventually arguing that this technology literalizes imaginative and cognitive processes, ultimately revealing a posthuman ontology where thinking and technology are indistinguishable from one another.
Haptic Media Scenes
The aim of this thesis is to apply new media phenomenological and enactive embodied cognition approaches to explain the role of haptic sensitivity and communication in personal computer environments for productivity. Prior theory has given little attention to the role of the haptic senses in influencing cognitive processes, and does not frame the richness of haptic communication in interaction design, as haptic interactivity in HCI has historically tended to be designed and analyzed from a perspective on communication as transmission, the sending and receiving of haptic signals. The haptic sense may mediate not only contact confirmation and affirmation but also rich semiotic and affective messages; yet there is a stark contrast between this inherent ability of haptic perception and current support for such haptic communication interfaces. I therefore ask: How do the haptic senses (touch and proprioception) impact our cognitive faculty when mediated through digital and sensor technologies? How may these insights be employed in interface design to facilitate rich haptic communication? To answer these questions, I use theoretical close readings that embrace two research fields, new media phenomenology and enactive embodied cognition. The theoretical discussion is supported by neuroscientific evidence, and tested empirically through case studies centered on digital art. I use these insights to develop the concept of the haptic figura, an analytical tool to frame the communicative qualities of haptic media. The concept gauges rich machine-mediated haptic interactivity and communication in systems with a material solution supporting active haptic perception, and the mediation of semiotic and affective messages that are understood and felt. As such, the concept may function as a design tool for developers, but also for media critics evaluating haptic media.
The tool is used to frame a discussion on the opportunities and shortcomings of haptic interfaces for productivity, differentiating between media systems for the hand and for the full body. The significance of this investigation lies in demonstrating that haptic communication is an underutilized element in personal computer environments for productivity, and in providing an analytical framework for a more nuanced understanding of haptic communication as enabling the mediation of a range of semiotic and affective messages, beyond notification and confirmation interactivity.
Proceedings of the International Workshop on EuroPLOT Persuasive Technology for Learning, Education and Teaching (IWEPLET 2013)
This book contains the proceedings of the International Workshop on EuroPLOT Persuasive Technology for Learning, Education and Teaching (IWEPLET) 2013, which was held on 16–17 September 2013 in Paphos (Cyprus) in conjunction with the EC-TEL conference. The workshop, and hence the proceedings, are divided into two parts: on Day 1 the EuroPLOT project and its results are introduced, with papers about the specific case studies and their evaluation. On Day 2, peer-reviewed papers are presented which address specific topics and issues going beyond the EuroPLOT scope. This workshop is one of the deliverables (D 2.6) of the EuroPLOT project, which was funded from November 2010 to October 2013 by the Education, Audiovisual and Culture Executive Agency (EACEA) of the European Commission through the Lifelong Learning Programme (LLL) by grant #511633. The purpose of this project was to develop and evaluate Persuasive Learning Objects and Technologies (PLOTS), based on ideas of BJ Fogg. The purpose of this workshop is to summarize the findings obtained during this project and disseminate them to an interested audience. Furthermore, it shall foster discussions about the future of persuasive technology and design in the context of learning, education and teaching. The international community working in this area of research is relatively small. Nevertheless, we have received a number of high-quality submissions which went through a peer-review process before being selected for presentation and publication. We hope that the information found in this book is useful to the reader and that more interest in this novel approach of persuasive design for teaching/education/learning is stimulated. We are very grateful to the organisers of EC-TEL 2013 for allowing us to host IWEPLET 2013 within their organisational facilities, which helped us a lot in preparing this event.
I am also very grateful to everyone in the EuroPLOT team for collaborating so effectively in these three years towards creating excellent outputs, and for being such a nice group with a very positive spirit also beyond work. And finally I would like to thank the EACEA for providing the financial resources for the EuroPLOT project and for being very helpful when needed. This funding made it possible to organise the IWEPLET workshop without charging a fee from the participants.
Exploring glass as a novel method for hands-free data entry in flexible cystoscopy
We present a way to annotate cystoscopy findings on Google Glass in a reproducible and hands-free manner for use by surgeons during operations in the sterile environment, inspired by the current practice of hand-drawn sketches. We developed three data entry variants based on speech and head movements. We assessed the feasibility, benefits, and drawbacks of the system in laboratory trials with eight surgeons and Foundation Doctors with up to 30 years' cystoscopy experience at a UK hospital. We report the data entry speed and error rate of the input modalities and contrast them with the participants' feedback on their perception of usability, acceptance, and suitability for deployment. The results are supportive of new data entry technologies and point out directions for future improvement of eyewear computers. The findings can be generalised to other endoscopic procedures (e.g. OGD/laryngoscopy) and could be incorporated into hospital IT in the future.