211 research outputs found

    Human factors in instructional augmented reality for intravehicular spaceflight activities and How gravity influences the setup of interfaces operated by direct object selection

    In human spaceflight, advanced user interfaces are becoming an attractive means of facilitating human-machine interaction, enhancing and safeguarding the sequences of intravehicular space operations. Efforts to ease such operations have shown strong interest in novel human-computer interaction techniques such as Augmented Reality (AR). The work presented in this thesis is directed towards a user-driven design for AR-assisted space operations, iteratively solving issues arising from the problem space, which also includes consideration of the effect of altered gravity on the handling of such interfaces

    LandMarkAR: An application to study virtual route instructions and the design of 3D landmarks for indoor pedestrian navigation with a mixed reality head-mounted display

    Mixed Reality (MR) interfaces on head-mounted displays (HMDs) have the potential to replace screen-based interfaces as the primary interface to the digital world. They potentially offer a more immersive and less distracting experience than mobile phones, allowing users to stay focused on their environment and main goals while accessing digital information. Due to their ability to gracefully embed virtual information in the environment, MR HMDs could alleviate some of the issues plaguing users of mobile pedestrian navigation systems, such as distraction, diminished route recall, and reduced spatial knowledge acquisition. However, the complexity of MR technology presents significant challenges, particularly for researchers with limited programming knowledge. This thesis presents “LandMarkAR” to address those challenges. “LandMarkAR” is a HoloLens application that allows researchers to create augmented territories for studying human navigation with MR interfaces, even with little programming knowledge. “LandMarkAR” was designed using methods from human-centered design (HCD), such as design thinking and think-aloud testing, and was developed with Unity and the Mixed Reality Toolkit (MRTK). With “LandMarkAR”, researchers can place and manipulate 3D objects as holograms in real time, facilitating indoor navigation experiments that use 3D objects as turn-by-turn instructions, highlights of physical landmarks, or other information researchers may devise. Researchers with varying technical expertise can use “LandMarkAR” for MR navigation studies: they can opt for the easy-to-use user interface (UI) on the HoloLens or add custom functionality to the application directly in Unity. “LandMarkAR” empowers researchers to explore the full potential of MR interfaces in human navigation and to create meaningful insights for their studies
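    As a concrete illustration of the turn-by-turn instruction idea described above, the following sketch (hypothetical, not taken from LandMarkAR itself, which runs on Unity/MRTK) derives left/right/straight instructions from an indoor route's waypoints using the 2D cross product of consecutive segments; this is the kind of decision a researcher would attach to a 3D landmark at each waypoint:

```python
def turn_direction(prev_pt, curr_pt, next_pt, straight_threshold=0.2):
    """Classify the turn at curr_pt using the 2D cross product of the
    incoming and outgoing route segments (floor-plan coordinates)."""
    ax, ay = curr_pt[0] - prev_pt[0], curr_pt[1] - prev_pt[1]
    bx, by = next_pt[0] - curr_pt[0], next_pt[1] - curr_pt[1]
    cross = ax * by - ay * bx  # > 0: left turn, < 0: right turn
    if abs(cross) < straight_threshold:
        return "straight"
    return "left" if cross > 0 else "right"

# A simple indoor route in floor-plan coordinates (metres).
route = [(0, 0), (4, 0), (4, 3), (8, 3)]
instructions = [
    turn_direction(route[i - 1], route[i], route[i + 1])
    for i in range(1, len(route) - 1)
]
```

    Each resulting instruction could then be rendered as a hologram (an arrow or other 3D object) anchored at the corresponding waypoint.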

    The BIM process for the architectural heritage: New communication tools based on AR/VR. Case study: Palazzo di CittĂ 

    The present study aims to present the application of the Building Information Modeling (BIM) methodology to the case study of Palazzo di CittĂ , the Turin City Hall, investigating the possibilities of integrating new technologies in Cultural Heritage (CH) preservation and valorization. From the survey phase to the communication of the CH to end-users, the BIM methodology, combined with the latest digital innovations (AR, VR, 3D laser scanning, and more), allows a fast and highly communicative representation of buildings for both professionals and ordinary visitors who interact with the building life-cycle. A further important objective of this work is to demonstrate the advantages of adopting and integrating these technologies in Real Estate Management at a national scale, fully testing the adaptability of parametric software and Virtual Reality modeling to complex and highly decorated buildings, and confirming the potential of BIM software in an uncommon field: historic buildings. The case study is in fact Palazzo di CittĂ , the baroque, seventeenth-century City Hall of Turin. The research fully meets the latest directives of the European Union and other international organizations in the field of digitization of archives and public property management, participating in the international community's effort to overcome the deep contemporary crisis in the construction sector. In particular, the methodology has been focused on and adapted to the protection and management of our vast heritage, grounding its objectives in the search for cost-saving processes and instruments applied to the management of a CH. Through BIM it is in fact possible to increase communication and cooperation among all the actors involved in the building life-cycle, with BIM acting as a common working platform. 
Drawings, the 3D model, and the database are shared by all the actors and integrated in the same digital structure, where control tools and cooperation can help designers avoid errors, saving time and money in the construction phase. The particularity of the case study, Palazzo di CittĂ , being at once a CH, a public asset, and a working space, allows a deep study of the possibilities of BIM applied to a complex building, touching on very important aspects of historic building management: digitization of the historic information, modeling techniques for complex architectural elements, reconstruction of transformations, energy consumption control, Facility Management, dissemination, virtual reconstruction of the lost appearance, and accessibility for people with sensory and motor impairments. Moreover, the last chapters of the study focus on public enjoyment of this paramount piece of Turin's CH, making interesting and little-known aspects of the history of the building and of the city itself available to all kinds of visitors. This part of the research proposes a methodology to translate static 2D images and written descriptions of a CH into a living and immersive VR environment, presenting in an interactive way the transformation of the Marble Hall, once called the Aula Maior: the room where the Mayor meets the citizens. Besides the aspects related to the valorization and preservation of the CH, the study devotes considerable space to technical aspects involving advanced parametric modeling techniques, the use of BIM software, and all the procedures necessary to generate an efficient informative management platform. The whole work is intended as a guide for future works, structuring a replicable protocol for the efficient digitization of paper resources into a 3D virtual model

    Modified Structured Domain Randomization in a Synthetic Environment for Learning Algorithms

    Deep Reinforcement Learning (DRL) has the capability to solve many complex tasks in robotics, self-driving cars, smart grids, finance, healthcare, and intelligent autonomous systems. During training, DRL agents interact freely with the environment to arrive at an inference model. Under real-world conditions, this training raises safety, cost, and time concerns. Training in synthetic environments helps overcome these difficulties; however, such environments only approximate real-world conditions, resulting in a ‘reality gap’. Synthetic training of agents has proven advantageous but requires methods to bridge this reality gap. This work addressed the problem through a methodology that supports agent learning. A framework incorporating a modifiable synthetic environment, integrated with an unmodified DRL algorithm, was used to train, test, and evaluate agents with a modified Structured Domain Randomization (SDR+) technique. It was hypothesized that applying environment domain randomizations (DR) during the learning process would allow the agent to learn variability and adapt accordingly. Experiments using the SDR+ technique included naturalistic and physics-based DR while applying the concept of context-aware elements (CAE) to guide and speed up agent training. Drone racing served as the use case. The experimental framework workflow generated the following results. First, a baseline was established by training and validating an agent in a generic synthetic environment devoid of DR and CAE. The agent was then tested in environments with DR, which showed a degradation of performance. This validated the reality-gap phenomenon under synthetic conditions and established a metric for comparison. Second, an SDR+ agent was successfully trained and validated under various applications of DR and CAE. Ablation studies determined that most of the applied DR and CAE effects had equivalent effects on agent performance. 
In comparison, the SDR+ agent's performance exceeded that of the baseline agent in every test where single or combined DR effects were applied. These tests indicated that the SDR+ agent's performance did improve in environments with applied DR of the same order as that received during training. The last result came from testing the SDR+ agent's inference model in a completely new synthetic environment with more extreme and additional DR effects applied. The SDR+ agent's performance degraded to a point where it was inconclusive whether generalization, in the form of learning to adapt to variations, had occurred. If the agent's navigational capabilities, the control/feedback from the DRL algorithm, and the use of visual sensing were improved, future work could be expected to exhibit indications of generalization using the SDR+ technique
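    The per-episode randomization described above can be sketched generically; the parameter names and ranges below are hypothetical (the actual SDR+ effects are not specified in the abstract), and the context override stands in for a CAE-style element that keeps task-relevant structure stable while everything else varies:

```python
import random

# Hypothetical randomization ranges for a drone-racing environment;
# the real SDR+ naturalistic and physics-based effects differ.
RANDOMIZATION_SPACE = {
    "light_intensity": (0.3, 1.5),
    "fog_density": (0.0, 0.4),
    "gate_texture_id": (0, 9),     # integer range: gate appearance
    "wind_speed_mps": (0.0, 6.0),
}

def sample_environment(rng, context=None):
    """Draw one randomized environment configuration per training
    episode. A context-aware element (CAE) can pin parameters via
    the `context` override."""
    cfg = {}
    for name, (lo, hi) in RANDOMIZATION_SPACE.items():
        if isinstance(lo, int) and isinstance(hi, int):
            cfg[name] = rng.randint(lo, hi)
        else:
            cfg[name] = rng.uniform(lo, hi)
    if context:
        cfg.update(context)  # CAE-style pinned parameters
    return cfg

rng = random.Random(0)
episode_cfg = sample_environment(rng, context={"gate_texture_id": 3})
```

    The training loop would call `sample_environment` once per episode, so the agent sees a different but structurally consistent world each time.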

    Three-dimensional reconstruction of real environments using laser and intensity data

    The objective of the work presented in this thesis is to generate complete, high-resolution three-dimensional models of real-world scenes (3D geometric and texture information) from passive intensity images and active range sensors. Most 3D reconstruction systems are based either on range finders or on digital cameras, but little work has tried to combine these two sensors. Depth extraction from intensity images is complex. On the other hand, digital photographs provide additional information about the scenes that can be used to help the modelling process, in particular to define accurate surface boundary conditions. This makes active and passive sensors complementary in many ways and is the base idea that motivates the work in this thesis. In the first part of the thesis, we concentrate on the registration between data coming from active range sensors and passive digital cameras, and on the development of tools to make this step easier, more user-independent, and more precise. In the end, with this technique, a texture map for the models is computed from several digital photographs. This leads to 3D models where the 3D geometry is extracted from range data, whereas texture information comes from digital photographs. With these models, photo-realistic quality is achieved: a kind of high-resolution 3D photograph of a real scene. In the second part of the thesis, we go further in combining the datasets. 
The digital photographs are used as an additional source of three-dimensional information that can be valuable to define accurate surface boundary conditions (where range data is less reliable), to fill holes in the data, or to increase 3D point density in areas of interest
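    The core of the registration-and-texturing step described above is projecting each 3D point from the range sensor into a registered photograph to look up its colour. A minimal sketch under a standard pinhole model (the intrinsics and the world-to-camera pose convention here are illustrative assumptions, not the thesis's calibration):

```python
def project_point(p_world, cam_pose, fx, fy, cx, cy):
    """Project a 3D point (from the range sensor) into a photograph
    with a pinhole camera model; the resulting pixel (u, v) supplies
    the texture colour for that point. cam_pose is (R, t), a rigid
    world-to-camera transform."""
    R, t = cam_pose
    # Apply the rigid transform row by row: p_cam = R * p_world + t.
    pc = [sum(R[i][j] * p_world[j] for j in range(3)) + t[i]
          for i in range(3)]
    if pc[2] <= 0:
        return None  # behind the camera: no texture available
    u = fx * pc[0] / pc[2] + cx
    v = fy * pc[1] / pc[2] + cy
    return (u, v)

# Identity pose: camera at the origin looking down +Z (assumed).
R = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
t = [0.0, 0.0, 0.0]
pixel = project_point([0.5, -0.25, 2.0], (R, t),
                      fx=800, fy=800, cx=320, cy=240)
```

    With several registered photographs, the same point may project into more than one image, and the texture-mapping stage must blend or select among the candidate pixels.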

    Development of an Augmented Reality Interface for Intuitive Robot Programming

    As the demand for advanced robotic systems continues to grow, there is an imperative need for new technologies and techniques that can improve the efficiency and effectiveness of robot programming. The latter relies heavily on the effective communication of tasks between the user and the robot. To address this issue, we developed an Augmented Reality (AR) interface that incorporates Head-Mounted Display (HMD) capabilities and integrated it with an active learning framework for intuitive programming of robots. This integration enables the execution of conditional tasks, bridging the gap between user and robot knowledge. The active learning model, with the user's guidance, incrementally programs a complex task and, after encoding the skills, generates a high-level task graph. The holographic robot then visualises the individual skills of the task, with sensory information retrieved from the physical robot in real time, to increase the user's intuition of the whole procedure. The interactive aspect of the interface can be exploited in this phase by giving the user the option of actively validating the learnt skills or changing them, thus generating a new skill sequence. The user can also teach the real robot through teleoperation using the HMD, increasing the directness and immersion of the teaching procedure while safely manipulating the physical robot from a distance. The proposed framework is evaluated through a series of experiments employing the developed interface on the real system. These experiments assess the degree of intuitiveness the interface features provide to the user and determine the extent of similarity between the virtual system's behavior during the robot programming procedure and that of its physical counterpart
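    A high-level task graph with conditional transitions, as generated by the framework above, can be illustrated with a minimal encoding; the skill names and outcomes below are invented for illustration and do not come from the paper:

```python
# Hypothetical encoding of a learnt conditional task: each skill maps
# observed outcomes to the next skill, forming a high-level task graph.
task_graph = {
    "pick": {"success": "inspect", "failure": "pick"},
    "inspect": {"part_ok": "place_bin_a", "part_defective": "place_bin_b"},
    "place_bin_a": {"success": "done"},
    "place_bin_b": {"success": "done"},
}

def execute(graph, start, outcomes):
    """Walk the task graph, choosing each transition from the
    observed outcome; returns the executed skill sequence."""
    trace, node = [], start
    for outcome in outcomes:
        trace.append(node)
        node = graph[node][outcome]
        if node == "done":
            break
    return trace

trace = execute(task_graph, "pick",
                ["success", "part_defective", "success"])
```

    In the AR interface, each node of such a graph would be visualised by the holographic robot, and user validation or edits would rewrite the graph's transitions.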

    Gaze-Based Human-Robot Interaction by the Brunswick Model

    We present a new paradigm for human-robot interaction based on social signal processing, and in particular on the Brunswick model. Originally, the Brunswick model addresses face-to-face dyadic interaction, assuming that the interactants communicate through a continuous exchange of non-verbal social signals in addition to spoken messages. Social signals have to be interpreted through a recognition phase that considers visual and audio information. The Brunswick model makes it possible to quantitatively evaluate the quality of the interaction using statistical tools that measure how effective the recognition phase is. In this paper we recast this theory for the case in which one of the interactants is a robot; the recognition phases performed by the robot and by the human then have to be revised with respect to the original model. The model is applied to Berrick, a recent open-source, low-cost robotic head platform, where gaze is the social signal considered
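    One of the simplest statistics for evaluating the recognition phase is the fraction of social signals the robot decodes correctly. The sketch below is a generic illustration of that idea, not the paper's actual evaluation protocol; the trial data is invented:

```python
def recognition_accuracy(true_signals, recognized_signals):
    """Fraction of social signals decoded correctly — one simple
    measure of how effective the recognition phase is."""
    assert len(true_signals) == len(recognized_signals)
    hits = sum(t == r for t, r in zip(true_signals, recognized_signals))
    return hits / len(true_signals)

# Hypothetical trial: where the human actually gazed vs. what the
# robot's recognition pipeline reported, per time step.
truth      = ["face", "object", "face", "away", "face"]
recognized = ["face", "face",   "face", "away", "object"]
acc = recognition_accuracy(truth, recognized)
```

    Richer statistics (e.g. confusion matrices or per-signal correlations) would follow the same pattern of comparing the decoded signal stream against ground truth.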

    Augmented Reality

    Augmented Reality (AR) is a natural development from Virtual Reality (VR), which was developed several decades earlier. AR complements VR in many ways. Because the user can see both real and virtual objects simultaneously, AR is far more intuitive, though it is not completely free of human-factors issues and other restrictions. AR applications also demand less time and effort, since the entire virtual scene and environment need not be constructed. In this book, several new and emerging application areas of AR are presented, divided into three sections. The first section contains applications in outdoor and mobile AR, such as construction, restoration, security, and surveillance. The second section deals with AR in medical and biological contexts and the human body. The third and final section contains a number of new and useful applications in daily living and learning

    Prediction of Early Vigor from Overhead Images of Carinata Plants

    Breeding more resilient, higher-yielding crops is an essential component of ensuring ongoing food security. Early-season vigor is significantly correlated with yield and is often used as an early indicator of fitness in breeding programs. Early vigor can be a useful indicator of the health and strength of plants, with benefits such as improved light interception, reduced surface evaporation, and increased biological yield. However, vigor is challenging to measure analytically and is often rated by subjective visual scoring. This traditional method of breeder scoring becomes cumbersome as the size of breeding programs increases. In this study, we used hand-held cameras fitted on gimbals to capture images which were then used as the source for automated vigor scoring. We employed a novel image metric, the extent of plant growth from the row centerline, as an indicator of vigor. Along with this feature, additional features were used to train a random forest model and a support vector machine, which predicted expert vigor ratings with 88.9% and 88% accuracy, respectively, offering the potential for more reliable, higher-throughput vigor estimates
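    The centerline-extent metric described above can be sketched in simplified form; the pixel coordinates and the assumption of a horizontal row centerline are illustrative, not the study's actual image-processing pipeline:

```python
def growth_extent(plant_pixels, row_y):
    """Simplified version of the image metric described above: the
    maximum perpendicular distance (in pixels) of segmented plant
    pixels from a horizontal row centerline at height row_y. A larger
    extent indicates more lateral growth, used as a proxy for vigor."""
    if not plant_pixels:
        return 0
    return max(abs(y - row_y) for _, y in plant_pixels)

# Hypothetical segmented plant pixels (x, y) from an overhead image,
# with the row centerline at y = 100.
pixels = [(10, 95), (11, 104), (12, 88), (13, 101)]
extent = growth_extent(pixels, row_y=100)
```

    In the study, such per-plot features were combined with others to train the random forest and support vector machine classifiers against expert vigor ratings.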
