
    Scalable and Extensible Augmented Reality with Applications in Civil Infrastructure Systems

    In Civil Infrastructure System (CIS) applications, the requirement of blending synthetic and physical objects distinguishes Augmented Reality (AR) from other visualization technologies in three respects: 1) it reinforces the connections between people and objects and promotes engineers’ appreciation of their working context; 2) it allows engineers to perform field tasks with awareness of both the physical and synthetic environment; and 3) it offsets the significant cost of 3D model engineering by including the real-world background. The research has overcome several long-standing technical obstacles that have prevented AR from being usefully deployed in CIS applications: aligning virtual objects with the real environment continuously across time and space; blending virtual entities with their real background faithfully enough to create a sustained illusion of co-existence; and integrating these methods into a scalable and extensible AR computing framework that is openly accessible to the teaching and research community and can be readily reused and extended by other researchers and engineers. The research findings have been evaluated in several challenging CIS applications with high potential for significant economic and social impact. 
Examples of validation test beds implemented include an AR visual excavator-utility collision avoidance system that enables spotters to “see” buried utilities hidden under the ground surface, thus helping prevent accidental utility strikes; an AR post-disaster reconnaissance framework that enables building inspectors to rapidly evaluate and quantify structural damage sustained by buildings in seismic events such as earthquakes or blasts; and a tabletop collaborative AR visualization framework that allows multiple users to observe and interact with visual simulations of engineering processes. PhD dissertation, Civil Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/96145/1/dsuyang_1.pd

    The effects of changing projection geometry on perception of 3D objects on and around tabletops

    Funding: Natural Sciences and Engineering Research Council of Canada; Networks of Centres of Excellence of Canada. Displaying 3D objects on horizontal displays can cause problems in the way the virtual scene is presented on the 2D surface; inappropriate choices in how 3D is represented can lead to distorted images and incorrect object interpretations. We present four experiments that test 3D perception. We varied projection geometry in three ways: type of projection (perspective/parallel), separation between the observer’s point of view and the projection’s center (discrepancy), and the presence or absence of motion parallax. Projection geometry had strong effects that differed across tasks. Reducing discrepancy is desirable for orientation judgments, but not for object recognition or internal angle judgments. Using a fixed center of projection above the table reduces error and improves accuracy in most tasks. The results have far-reaching implications for the design of 3D views on tables, particularly for multi-user applications, where projections that appear correct to one person will not be perceived correctly by another. Postprint. Peer reviewed.
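The two projection types compared in these experiments can be illustrated with a minimal sketch (our own illustration, not the study's code; all names and values are assumed): a perspective projection scales a point according to its distance from the center of projection, while a parallel (orthographic) projection simply discards depth, which is one reason the same tabletop scene can look different under each.

```python
# Minimal sketch (not the study's code): project a 3D point above a table
# onto the z = 0 display plane under perspective vs. parallel projection.

def project_perspective(point, cop_height):
    """Perspective projection from a center of projection (CoP) at
    (0, 0, cop_height), casting a ray through the point onto z = 0."""
    x, y, z = point
    scale = cop_height / (cop_height - z)  # points nearer the CoP enlarge
    return (x * scale, y * scale)

def project_parallel(point):
    """Parallel (orthographic) projection: depth is simply discarded."""
    x, y, z = point
    return (x, y)

# The same cube corner lands in different 2D positions under each projection,
# which is one source of the distortions the experiments measured.
corner = (1.0, 1.0, 0.5)  # half a unit above the table
print(project_perspective(corner, cop_height=2.0))  # → (1.3333333333333333, 1.3333333333333333)
print(project_parallel(corner))                     # → (1.0, 1.0)
```

Moving the assumed CoP away from the viewer's actual eye position models the paper's "discrepancy" factor: the projected image stays fixed while the observer's viewpoint does not.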

    Supporting Collaborative Learning in Computer-Enhanced Environments

    As computers have expanded into almost every aspect of our lives, the ever-present graphical user interface (GUI) has begun facing its limitations. Demanding its own share of attention, GUIs move some of the users’ focus away from the task, particularly when the task is 3D in nature or requires collaboration. Researchers are therefore exploring other means of human-computer interaction. Individually, some of these new techniques show promise, but it is the combination of multiple approaches into larger systems that will allow us to more fully replicate our natural behavior within a computing environment. The more capable computers become of understanding our varied natural behavior (speech, gesture, etc.), the less we need to adjust our behavior to conform to their requirements. Such capabilities are particularly useful where children are involved, and make using computers in education all the more appealing. Herein are described two approaches to, and implementations of, educational computer systems that work not by user manipulation of virtual objects but by user manipulation of physical objects within the environment. These systems demonstrate how new technologies can promote collaborative learning among students, thereby enhancing both the students’ knowledge and their ability to work together to achieve even greater learning. With these systems, the horizon of computer-facilitated collaborative learning has been expanded. This expansion includes the identification of issues affecting general and special education students, as well as suggested applications in a variety of domains.

    Co-present photo sharing on mobile devices

    This dissertation investigates current approaches to photo sharing. We found that most current methods of photo sharing are not as compelling as traditional photo sharing: with the increase in popularity of digital photography, consumers do not print photos as often as before and thus typically require a group display (such as a PC) to view their photographs collectively. This dissertation describes a mobile application that attempts to support traditional photo sharing activities by allowing users to share photos with other co-present users by synchronizing the display on multiple mobile devices. Various floor control policies (software locks that determine when someone can control the displays) were implemented. The behaviour of groups of users was studied to determine how people would use this application for sharing photos and how various floor control policies affect this behaviour.
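As a rough illustration of what one such floor control policy might look like (a hypothetical sketch, not the dissertation's implementation; all names are assumed), the class below implements an explicit-release lock: whoever holds the floor drives every synchronized display until they give it up.

```python
# Hypothetical sketch of an explicit-release floor control policy for
# synchronized photo displays (not the dissertation's actual code).

class FloorControl:
    def __init__(self):
        self.holder = None  # user id currently controlling the displays

    def request(self, user):
        """Grant the floor only if nobody else holds it."""
        if self.holder is None:
            self.holder = user
            return True
        return self.holder == user

    def release(self, user):
        """Only the current holder may release the floor."""
        if self.holder == user:
            self.holder = None

    def can_control(self, user):
        return self.holder == user

floor = FloorControl()
assert floor.request("alice")    # alice takes the floor
assert not floor.request("bob")  # bob must wait his turn
floor.release("alice")
assert floor.request("bob")      # now bob can drive the shared display
```

Other policies in the same spirit, such as free-for-all (no lock at all) or timed turns, would vary the `request`/`release` rules while keeping the same interface.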

    Exploring the Multi-touch Interaction Design Space for 3D Virtual Objects to Support Procedural Training Tasks

    Multi-touch interaction has the potential to be an important input method for realistic training in 3D environments. However, multi-touch interaction has not been explored much for 3D tasks, especially when trying to leverage realistic, real-world interaction paradigms. A systematic inquiry into what realistic gestures look like for 3D environments is required to understand how users translate real-world motions into multi-touch motions. Once those gestures are defined, it is important to see how we can leverage them to enhance training tasks. To explore the interaction design space for 3D virtual objects, we began with a first study of user-defined gestures. From this work we identified a taxonomy and design guidelines for 3D multi-touch gestures and examined how the perspective view influences the chosen gesture. We also identified a desire to use pressure on capacitive touch screens. Since the best way to implement pressure still required investigation, our second study evaluated two different pressure estimation techniques in two different scenarios. Once we had a taxonomy of gestures, we wanted to examine whether implementing these realistic multi-touch interactions in a training environment provided training benefits. Our third study compared multi-touch interaction to standard 2D mouse interaction and to actual physical training, and found that multi-touch interaction performed better than the 2D mouse and as well as physical training. This study showed that multi-touch training using a realistic gesture set can perform as well as training on the actual apparatus. One limitation of the first training study was that the user's perspective was constrained to allow us to focus on isolating the gestures. Since users can change their perspective in a real-life training scenario and thereby gain spatial knowledge of components, we wanted to see whether allowing users to alter their perspective helped or hindered training. 
Our final study compared training with Unconstrained multi-touch interaction, Constrained multi-touch interaction, or training on the actual physical apparatus. Results show that the Unconstrained multi-touch and Physical groups had significantly better performance scores than the Constrained multi-touch group, with no significant difference between the Unconstrained multi-touch and Physical groups. Our results demonstrate that allowing users more freedom to manipulate objects as they would in the real world benefits training. In addition to the research already performed, we propose several avenues for future research into the interaction design space for 3D virtual objects that we believe will be of value to researchers and designers of 3D multi-touch training environments.

    St. George and the Dragon: Design and production of a cultural heritage museum installation using media archaeology

    Media archaeology is a field of media research investigating new media cultures through material manifestations. Although often recognized as an approach to art, its use as an approach to design has not been fully explored. Media archaeology can be valuable because it offers alternative qualities of mediation, as a design palette, to those of prescriptive common media devices. This thesis describes a media archaeological approach to the design of a cultural heritage media installation, exhibited at Häme Castle between April and December 2017, and produced as a collaboration between the National Museum of Finland (Kansallismuseo) and the Systems of Representation research group in the Department of Media at Aalto University in Finland. The installation displayed a multi-view stereoscopic (3D) digital reconstruction of a medieval sculptural scene of St. George and the Dragon, based on preserved, fragmented medieval sculptures from the museum’s archives. Four stereoscopic video viewers were synchronized to a rotating central physical display, affording visitors an effect of augmented reality without the need for a mainstream augmented reality implementation. Though the work was time-limited and project-driven, the design approach achieved a well-integrated installation that was sensitive to the aims of an exhibition of sculpture within a cultural heritage museum: artistry, materiality, interpretation. This thesis therefore argues that media archaeological approaches to design can identify historical ideas that can be remediated into relevance for new contexts and, in spite of their historical connotations, foster engaging technological experiences for contemporary audiences that are sensitive to the aims of a cultural heritage exhibition.

    How to Create Suitable Augmented Reality Application to Teach Social Skills for Children with ASD

    Autism spectrum disorders (ASDs) are characterized by a reduced ability to appropriately express social greetings. Studies have indicated that individuals with ASD might not recognize the crucial nonverbal cues that usually aid social interaction. This study applied augmented reality (AR) combined with a tabletop role-playing game (AR-RPG) to focus on standard nonverbal social cues and teach children with ASD how to appropriately reciprocate when they socially interact with others. The intervention system combines AR with physical manipulatives and presents the corresponding elements in 3D animations with dialogue; it can thus be used to help these children increase their social interaction skills and direct their attention toward the meaning and social value of greeting behavior in specific social situations. We conclude that the AR-RPG of social situations helped children with ASD recognize and better understand these situations, and was moderately effective in teaching the target greeting responses.

    On Inter-referential Awareness in Collaborative Augmented Reality

    For successful collaboration to occur, a workspace must support inter-referential awareness: the ability of one participant to refer to a set of artifacts in the environment, and of that reference to be correctly interpreted by others. While referring to objects in our everyday environment is a straightforward task, the non-tangible nature of digital artifacts presents us with new interaction challenges. Augmented reality (AR) is inextricably linked to the physical world, and it is natural to believe that the re-integration of physical artifacts into the workspace makes referencing tasks easier; however, we find that these environments combine the referencing challenges of several computing disciplines, which compound across scenarios. This dissertation presents our studies of this form of awareness in collaborative AR environments. It stems from our research in developing mixed reality environments for molecular modeling, where we explored spatial and multi-modal referencing techniques. To encapsulate the myriad factors found in collaborative AR, we present a generic theoretical framework and apply it to analyze this domain. Because referencing is a very human-centric activity, we present the results of an exploratory study that examines the behaviors of participants and how they generate references to physical and virtual content in co-located and remote scenarios; we found that participants refer to content using both physical and virtual techniques, and that shared video is highly effective in disambiguating references in remote environments. Building on user feedback from this study, a follow-up study explored how the environment can passively support referencing, where we discovered the role that virtual referencing plays during collaboration. 
A third study was conducted to better understand the effectiveness of giving and interpreting references using a virtual pointer; the results suggest that participants need to be positioned parallel to the arrow vector (strengthening the argument for shared viewpoints), and highlight the importance of shadows in non-stereoscopic environments. Our contributions include a framework for analyzing the domain of inter-referential awareness, the development of novel referencing techniques, the presentation and analysis of our findings from multiple user studies, and a set of guidelines to help designers support this form of awareness.

    The Universal Media Book

    We explore the integration of projected imagery with a physical book that acts as a tangible interface to multimedia data. Using a camera and projector pair, we present a tracking framework in which the 3D positions of planar pages are monitored as they are turned back and forth by a user, and data is correctly warped and projected onto each page at interactive rates to provide the user with an intuitive mixed-reality experience. The book pages are blank, so traditional camera-based approaches to tracking physical features on the display surface do not apply. Instead, in each frame, feature points are independently extracted from the camera and projector images and matched to recover the geometry of the pages in motion. The book can be loaded with multimedia content, including images and videos. In addition, volumetric datasets can be explored by removing a page from the book and using it as a tool to navigate through a virtual 3D volume.
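One standard way to recover the geometry of a planar page from matched feature points is a direct linear transform (DLT) homography fit. The sketch below is our own illustration of that general technique, not the paper's implementation, and all names and coordinates are assumed.

```python
import numpy as np

# Illustrative sketch (not the paper's code): estimate the 3x3 homography H
# mapping projector-image points to camera-image points via the DLT, a
# standard tool for recovering the pose of a planar page from point matches.

def fit_homography(src, dst):
    """src, dst: (N, 2) arrays of matched points, N >= 4."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    h = vt[-1]                      # null vector = flattened homography
    return (h / h[-1]).reshape(3, 3)

def apply_homography(H, pt):
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return (x / w, y / w)

# Four corners of a blank page as seen by the projector vs. the camera
# (made-up coordinates for illustration):
proj = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
cam  = np.array([[10, 12], [110, 15], [105, 118], [8, 112]], dtype=float)
H = fit_homography(proj, cam)
print(apply_homography(H, (0.5, 0.5)))  # page center in camera coordinates
```

With the page plane recovered this way each frame, the projector image can be pre-warped by the inverse mapping so content lands correctly on the moving page.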