Augmented reality and GIS: On the possibilities and limits of markerless AR
Papers, communications, and posters presented at the 17th AGILE Conference on Geographic Information Science, "Connecting a Digital Europe through Location and Place", held at Universitat Jaume I, 3–6 June 2014.

The application of Augmented Reality (AR) in the geo-spatial domain offers huge potential: AR can visualize invisible properties of spatial entities, display historic data about them, or help in finding places. Whatever the application, AR in the geo-spatial domain will often be purely sensor-based, i.e., without the help of visual or sensory markers. In this paper we analyse the achievable accuracy of AR projections under everyday conditions with consumer hardware. We show that AR can be applied in applications at smaller geographic scales, but is not sufficient for the precision required when inspecting infrastructural data of small scale.
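The error budget of purely sensor-based AR can be illustrated with a back-of-the-envelope model. The following sketch is not from the paper; it simply assumes that an overlay's lateral misplacement is roughly the device's position error plus the arc swept by the compass heading error at the object's distance, with illustrative consumer-grade sensor values.

```python
import math

def overlay_error_m(distance_m, pos_err_m, heading_err_deg):
    """Approximate lateral misplacement (in metres) of a sensor-based AR
    overlay for an object at distance_m: the device's position error plus
    the arc swept by the compass heading error."""
    return pos_err_m + distance_m * math.tan(math.radians(heading_err_deg))

# Illustrative consumer-grade values: ~5 m GPS error, ~3 degree compass error.
for d in (10, 50, 200):
    print(f"object at {d:3d} m -> overlay off by ~{overlay_error_m(d, 5.0, 3.0):.1f} m")
```

Even with modest sensor noise the misplacement grows with distance, which is consistent with the paper's conclusion that sensor-based AR suits coarse geographic visualization but not precise infrastructure inspection.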
On Inter-referential Awareness in Collaborative Augmented Reality
For successful collaboration to occur, a workspace must support inter-referential awareness: the ability for one participant to refer to a set of artifacts in the environment, and for that reference to be correctly interpreted by others. While referring to objects in our everyday environment is a straightforward task, the non-tangible nature of digital artifacts presents new interaction challenges. Augmented reality (AR) is inextricably linked to the physical world, and it is natural to believe that the re-integration of physical artifacts into the workspace makes referencing tasks easier; however, we find that these environments combine the referencing challenges of several computing disciplines, which compound across scenarios.

This dissertation presents our studies of this form of awareness in collaborative AR environments. It stems from our research in developing mixed reality environments for molecular modeling, where we explored spatial and multi-modal referencing techniques. To encapsulate the myriad factors found in collaborative AR, we present a generic, theoretical framework and apply it to analyze this domain. Because referencing is a very human-centric activity, we present the results of an exploratory study that examines how participants behave and how they generate references to physical and virtual content in co-located and remote scenarios; we found that participants refer to content using both physical and virtual techniques, and that shared video is highly effective in disambiguating references in remote environments. Building on user feedback from this study, a follow-up study explored how the environment can passively support referencing, and revealed the role that virtual referencing plays during collaboration.

A third study was conducted to better understand the effectiveness of giving and interpreting references using a virtual pointer; the results suggest that participants need to be parallel with the arrow vector (strengthening the argument for shared viewpoints), and highlight the importance of shadows in non-stereoscopic environments. Our contributions include a framework for analyzing the domain of inter-referential awareness, the development of novel referencing techniques, the presentation and analysis of our findings from multiple user studies, and a set of guidelines to help designers support this form of awareness.
Alternative realities: from augmented reality to mobile mixed reality
This thesis provides an overview of (mobile) augmented and mixed reality by clarifying the different concepts of reality, briefly covering the technology behind mobile augmented and mixed reality systems, and conducting a concise survey of existing and emerging mobile augmented and mixed reality applications and devices. Based on this analysis and the survey, the work then attempts to assess what mobile augmented and mixed reality could make possible, and what related applications and environments could offer to users, if their full potential were tapped. Additionally, this work briefly discusses why mobile augmented reality has not yet been widely adopted for everyday use, even though many such applications already exist for the smartphone platform and smartglass systems are slowly becoming more common. Other related topics that are briefly covered include information security and privacy issues related to mobile augmented and mixed reality systems, the link between mobile mixed reality and ubiquitous computing, previously conducted user studies, and user needs and user experience issues.

The overall purpose of this thesis is to demonstrate what is already possible to implement on the mobile platform (including both hand-held devices and head-mounted configurations) using augmented and mixed reality interfaces, and to consider how mobile mixed reality systems could be improved, based on existing products, studies, and lessons learned from the survey conducted in this thesis.
Handheld Guides in Inspection Tasks: Augmented Reality versus Picture
Inspection tasks focus on observation of the environment and are required in many industrial domains. Inspectors usually execute these tasks by using a guide, such as a paper manual, while directly observing the environment. The effort required to match the information in a guide with the information in the environment, and the constant gaze shifts required between the two, can severely lower an inspector's work efficiency. Augmented reality (AR) allows the information in a guide to be overlaid directly on the environment. This can decrease the effort required for information matching, thus increasing work efficiency. AR guides on head-mounted displays (HMDs) have been shown to increase efficiency. Handheld AR (HAR) is not as efficient as HMD-AR in terms of manipulability, but it is more practical and offers better information input and sharing capabilities. In this study, we compared two handheld guides: an AR interface that shows 3D-registered annotations, that is, annotations having a fixed 3D position in the AR environment, and a non-AR picture interface that displays non-registered annotations on static images. We focused on inspection tasks that involve high information density and require the user to move and to perform several viewpoint alignments. The results of our comparative evaluation showed that the AR interface resulted in lower task completion times, fewer errors, fewer gaze shifts, and a lower subjective workload. We are the first to present findings of a comparative study of an HAR and a picture interface in tasks that require the user to move and execute viewpoint alignments, focusing only on direct observation. Our findings can be useful for AR practitioners and psychology researchers.
Towards exploring future landscapes using augmented reality
With increasing pressure to better manage the environment, many government and private organisations are studying the relationships between social, economic, and environmental factors to determine how they can best be optimised for increased sustainability. The analysis of such relationships is undertaken using computer-based Integrated Catchment Models (ICM). These models are capable of generating multiple scenarios depicting alternative land uses at a variety of temporal and spatial scales, which present (potentially) better Triple-Bottom-Line (TBL) outcomes than the prevailing situation. Dissemination of this data is (for the most part) reliant on traditional, static map products; however, the ability of such products to display complexity and temporal aspects is limited, and ultimately undervalues both the knowledge incorporated in the models and the capacity of stakeholders to disseminate the complexities through other means. Geovisualization provides tools and methods for disseminating large volumes of spatial (and associated non-spatial) data. Virtual Environments (VE) have been utilised for various aspects of landscape planning for more than a decade. While such systems are capable of visualizing large volumes of data at ever-increasing levels of realism, they restrict the user's ability to accurately perceive the (virtual) space. Augmented Reality (AR) is a visualization technique that allows users the freedom to explore a physical space and have that space augmented with additional, spatially referenced information. A review of existing mobile AR systems forms the basis of this research. A theoretical mobile outdoor AR system using Commercial-Off-The-Shelf (COTS) hardware and open-source software is developed. The specific requirements for visualizing land use scenarios in a mobile AR system were derived using a usability engineering approach known as Scenario-Based Design (SBD).

This determined the elements required in the user interfaces, resulting in the development of a low-fidelity, computer-based prototype. The prototype user interfaces were evaluated by participants from two targeted stakeholder groups undertaking hypothetical use scenarios. Feedback from participants was collected using the cognitive walk-through technique and supplemented by evaluator observations of participants' physical actions. Results from this research suggest that the prototype user interfaces did provide the necessary functionality for interacting with land use scenarios. While there were some concerns about the potential implementation of "yet another" system, participants were able to envisage the benefits of visualizing land use scenario data in the physical environment.
ARVISCOPE: Georeferenced Visualization of Dynamic Construction Processes in Three-Dimensional Outdoor Augmented Reality.
Construction processes can be conceived as systems of discrete, interdependent activities. Discrete Event Simulation (DES) has thus evolved as an effective tool to model operations that compete over available resources (personnel, material, and equipment). A DES model has to be verified and validated to ensure that it reflects a modeler’s intentions, and faithfully represents a real operation. 3D visualization is an effective means of achieving this, and facilitating the process of communicating and accrediting simulation results. Visualization of simulated operations has traditionally been achieved in Virtual Reality (VR). In order to create convincing VR animations, detailed information about an operation and the environment has to be obtained. The data must describe the simulated processes, and provide 3D CAD models of project resources, the facility under construction, and the surrounding terrain (Model Engineering). As the size and complexity of an operation increase, such data collection becomes an arduous, impractical, and often impossible task. This directly translates into loss of financial and human resources that could otherwise be productively used. In an effort to remedy this situation, this dissertation proposes an alternate approach of visualizing simulated operations using Augmented Reality (AR) to create mixed views of real existing jobsite facilities and virtual CAD models of construction resources. The application of AR in animating simulated operations has significant potential in reducing the aforementioned Model Engineering and data collection tasks, and at the same time can help in creating visually convincing output that can be effectively communicated.
This dissertation presents the design, methodology, and development of ARVISCOPE, a general-purpose AR animation authoring language, and ROVER, a mobile computing hardware framework. When used together, ARVISCOPE and ROVER can create three-dimensional AR animations of any length and complexity from the results of running DES models of engineering operations. ARVISCOPE takes advantage of advanced Global Positioning System (GPS) and orientation tracking technologies to accurately track a user's spatial context, and georeferences superimposed 3D graphics in an augmented environment. In achieving the research objectives, major technical challenges such as accurate registration, automated occlusion handling, and dynamic scene construction and manipulation have been successfully identified and addressed.

Ph.D., Civil Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/60761/1/abehzada_1.pd
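Georeferencing superimposed graphics against a GPS fix ultimately reduces to expressing a geodetic coordinate as a local metric offset from the user. The sketch below is not ARVISCOPE's implementation; it is a minimal small-area (equirectangular) approximation of the geodetic-to-local-ENU step, assuming a WGS84 spherical radius, adequate at jobsite scale.

```python
import math

EARTH_RADIUS_M = 6_378_137.0  # WGS84 equatorial radius

def geodetic_to_enu_approx(lat_deg, lon_deg, ref_lat_deg, ref_lon_deg):
    """Return the (east, north) offset in metres of a georeferenced point
    relative to a reference GPS fix, using a small-area equirectangular
    approximation (a full pipeline would use proper ECEF/ENU transforms)."""
    d_lat = math.radians(lat_deg - ref_lat_deg)
    d_lon = math.radians(lon_deg - ref_lon_deg)
    north = d_lat * EARTH_RADIUS_M
    east = d_lon * EARTH_RADIUS_M * math.cos(math.radians(ref_lat_deg))
    return east, north

# A virtual CAD model ~0.001 degrees north of the user's fix:
east, north = geodetic_to_enu_approx(0.001, 0.0, 0.0, 0.0)
print(f"place model {north:.1f} m north, {east:.1f} m east of the camera")
```

Combined with an orientation estimate, such offsets let the renderer place virtual construction resources at fixed world positions as the user moves.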
Senseable Spaces: from a theoretical perspective to the application in augmented environments
Given the tremendous growth of ubiquitous services in our daily lives, during the last few decades we have witnessed a definitive change in the way users experience their surroundings.
At the current state of the art, devices are able to sense the environment and users' location, enabling users to experience improved digital services and creating a synergistic loop between the use of the technology and the use of the space itself.
We coined the term Senseable Space to define spaces able to provide users with contextual services, to measure and analyse their dynamics, and to react accordingly, in a seamless exchange of information.
Following the paradigm of Senseable Spaces as the main thread, we selected a set of experiences carried out in different fields; central to this investigation is, of course, the user, placed in the dual roles of end-user and manager.
The main contribution of this thesis lies in the definition of this new paradigm, realized in the following domains: Cultural Heritage, Public Open Spaces, Geosciences and Retail.
For the Cultural Heritage panorama, different pilot projects have been constructed, from museum-based installations to mobile applications for archaeological settings. Dealing with urban areas, app-based services are designed to facilitate route finding in an urban park and to provide contextual information at a city festival. We also outline a novel application that facilitates on-site inspection by risk managers through the use of Augmented Reality services. Finally, a robust indoor localization system has been developed, designed to ease customer profiling in the retail sector.
The thesis also demonstrates how Space Sensing and Geomatics are complementary to one another, given that the branches of Geomatics cover all the different scales of data collection, whilst Space Sensing makes it possible to provide services at the correct location, at the correct time.
Augmented Reality Interfaces for Enabling Fast and Accurate Task Localization
Changing viewpoints is a common technique to gain additional visual information about the spatial relations among the objects contained within an environment. In many cases, all of the necessary visual information is not available from a single vantage point, due to factors such as occlusion, level of detail, and limited field of view. In certain instances, strategic viewpoints may need to be visited multiple times (e.g., after each step of an iterative process), which makes being able to transition between viewpoints precisely and with minimum effort advantageous for improved task performance (e.g., faster completion time, fewer errors, less dependence on memory).
Many augmented reality (AR) applications are designed to make tasks easier to perform by supplementing a user's first-person view with virtual instructions. For those tasks that benefit from being seen from more than a single viewpoint, AR users typically have to physically relocalize (i.e., move a see-through display and typically themselves since those displays are often head-worn or hand-held) to those additional viewpoints. However, this physical motion may be costly or difficult, due to increased distances or obstacles in the environment.
We have developed a set of interaction techniques that enable fast and accurate task localization in AR. Our first technique, SnapAR, allows users to take snapshots of augmented scenes that can be virtually revisited at later times. The system stores still images of scenes along with camera poses, so that augmentations remain dynamic and interactive. Our prototype implementation features a set of interaction techniques specifically designed to enable quick viewpoint switching. A formal evaluation of the capability to manipulate virtual objects within snapshot mode showed significant savings in time spent and gain in accuracy when compared to physically traveling between viewpoints.
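The key idea behind SnapAR, as described above, is that a snapshot stores the camera pose alongside the still image, so augmentations can be re-rendered live rather than baked into the pixels. The following data-structure sketch is a hypothetical illustration of that idea, not the dissertation's code; the class and field names are our own.

```python
from dataclasses import dataclass, field

@dataclass
class Snapshot:
    """A still image of the real scene plus the camera pose at capture
    time. Augmentations are NOT baked into the image; they are re-rendered
    through the stored pose, so they remain dynamic and interactive."""
    image: bytes        # raw pixels of the real background
    camera_pose: tuple  # (position xyz, orientation quaternion), hypothetical layout

@dataclass
class SnapshotManager:
    snapshots: list = field(default_factory=list)

    def capture(self, image, pose):
        """Store a snapshot and return its index for later virtual revisits."""
        self.snapshots.append(Snapshot(image, pose))
        return len(self.snapshots) - 1

    def revisit(self, index):
        """Return what the renderer needs for a virtual revisit: the stored
        background image and the camera pose through which to project the
        current (live) virtual content."""
        snap = self.snapshots[index]
        return snap.image, snap.camera_pose
```

Because `revisit` hands back a pose rather than a finished composite, a user manipulating a virtual object while "inside" a snapshot sees the object update, which is what made manipulation from snapshot mode comparable to physical travel in the evaluation.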
For cases when a user has to physically travel to a strategic viewpoint (e.g., to perform maintenance and repair on a large piece of equipment), we present ParaFrustum, a geometric construct that represents a set of strategic viewpoints and viewing directions, establishing constraints on a range of acceptable locations for the user's eyes and a range of acceptable angles in which the user's head can be oriented. Providing tolerance in the allowable viewing positions and directions avoids burdening the user with the need to assume a tightly constrained 6DOF pose when it is not required by the task. We describe two visualization techniques, ParaFrustum-InSitu and ParaFrustum-HUD, that guide a user to assume one of the poses defined by a ParaFrustum. A formal user study corroborated that speed improvements increase with larger tolerances and revealed interesting differences in participant trajectories depending on the visualization technique.
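The acceptance test implied by such a construct can be sketched simply: a pose is acceptable when the eye position falls within a spatial tolerance and the viewing direction within an angular one. This is a deliberately reduced illustration (one tolerance sphere and one cone), not the actual ParaFrustum geometry, which models ranges of positions and directions; all names here are our own.

```python
import math

def within_tolerant_pose(eye, look_dir, target_eye, target_dir,
                         pos_tol_m, angle_tol_deg):
    """Accept the user's pose if the eye is within pos_tol_m of a target
    position AND the viewing direction is within angle_tol_deg of a target
    direction. A simplified stand-in for a ParaFrustum-style containment test."""
    dist = math.dist(eye, target_eye)
    dot = sum(a * b for a, b in zip(look_dir, target_dir))
    norm = math.hypot(*look_dir) * math.hypot(*target_dir)
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
    return dist <= pos_tol_m and angle <= angle_tol_deg

# Slightly off-position but looking the right way: accepted.
print(within_tolerant_pose((0, 0, 0), (0, 0, 1),
                           (0.1, 0, 0), (0, 0, 1), 0.5, 10.0))
```

Widening `pos_tol_m` and `angle_tol_deg` enlarges the accepted region, which mirrors the study's finding that larger tolerances yield larger speed improvements.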
When the object to be operated on is smaller and can be handheld, instead of being large and stationary, it can be manually rotated rather than requiring the user to move to a strategic viewpoint. Examples of such situations include tasks in which one object must be oriented relative to a second prior to assembly, and tasks in which objects must be held in specific ways to inspect them. Researchers have investigated guidance mechanisms for some 6DOF tasks using wide-field-of-view (FOV), stereoscopic virtual and augmented reality head-worn displays (HWDs). However, there has been relatively little work directed toward smaller-FOV, lightweight, monoscopic HWDs, such as Google Glass, which may remain more comfortable and less intrusive than stereoscopic HWDs in the near future. In our Orientation Assistance work, we designed and implemented a novel visualization approach and three additional visualizations representing different paradigms for guiding unconstrained manual 3DOF rotation, targeting these monoscopic HWDs. This work includes our exploration of these paradigms and the results of a user study evaluating the relative performance of the visualizations and showing the advantages of our new approach.
In summary, we investigated ways of enabling an AR user to obtain visual information from multiple viewpoints, both physically and virtually. In the virtual case, we showed how one can change viewpoints precisely and with less effort. In the physical case, we explored how we can interactively guide users to obtain strategic viewpoints, either by moving their heads or by re-orienting handheld objects. In both cases, we showed that our techniques help users accomplish certain types of tasks more quickly and with fewer errors, compared to changing viewpoints with alternative, previously suggested methods.