10 research outputs found

    An MDE-based framework to support the development of Mixed Interactive Systems

    In the domain of Human Computer Interaction (HCI), recent advances in sensors, communication technologies, miniaturization and computing capabilities have led to new and advanced forms of interaction. Among them, Mixed Interactive Systems (MIS) form a class of interactive systems that comprises augmented reality, tangible interfaces and ambient computing; MIS aim to take advantage of the physical and digital worlds to promote a more transparent integration of interactive systems with the user's environment. Due to the constant change of technologies and the multiplicity of these interaction forms, specific development approaches have been developed. As a result, numerous taxonomies, frameworks, APIs and models have emerged, each one covering a specific and limited aspect of the development of MIS. To support a coherent use of these multiple development resources and contribute to the increasing popularity of MIS, we have developed a framework based on Model-Driven Engineering (MDE). The goal is to take advantage of MDE standards, methodology and tools to support the manipulation of complementary Domain Specific Languages (DSL), to organize and link the use of different design and implementation resources, and to ensure a rationalized implementation based on design choices. In this paper, we first summarize existing uses of MDE in HCI before focusing on five major benefits MDE can provide in a MIS development context. We then detail which MDE tools and resources support these benefits and thus form the pillars of the success of an MDE-based MIS development approach. Based on this analysis, we introduce our framework, called Guide-Me, and illustrate its use through a case study. The framework includes two design models, and model transformations link one model to the next; as a result, the framework's coverage extends from the earliest design step to a software component-based prototyping platform. A toolset based on the Eclipse Modeling Framework (EMF) that supports the use of the framework is also presented. We finally assess our MDE-based development process for MIS against the five major MDE benefits for MIS.
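    The Guide-Me metamodels themselves are not reproduced in this abstract, but the core MDE idea it relies on, transforming one design model into another whose elements can be turned into component skeletons, can be illustrated with a minimal, hypothetical sketch. The class names and the single transformation rule below are illustrative assumptions, not the actual Guide-Me DSLs.

```python
# Minimal sketch of a model-to-model transformation in the spirit of MDE:
# a small "design model" is transformed into a "prototyping model" whose
# elements could then be turned into component code skeletons.
# All metamodel names here are hypothetical, not the Guide-Me DSLs.
from dataclasses import dataclass
from typing import List


@dataclass
class InteractionLink:          # design-level concept
    physical_object: str        # e.g. a tracked token on a table
    digital_object: str         # e.g. a 3D map layer
    modality: str               # e.g. "move", "rotate"


@dataclass
class Component:                # prototyping-level concept
    name: str
    inputs: List[str]
    outputs: List[str]


def design_to_components(links: List[InteractionLink]) -> List[Component]:
    """One transformation rule: every interaction link becomes a component
    that consumes tracking events and produces updates of the digital object."""
    return [
        Component(
            name=f"{link.physical_object}_{link.modality}_adapter",
            inputs=[f"tracking/{link.physical_object}"],
            outputs=[f"update/{link.digital_object}"],
        )
        for link in links
    ]


if __name__ == "__main__":
    model = [InteractionLink("token", "map_layer", "move")]
    for component in design_to_components(model):
        print(component)
```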

    Desenvolvimento de um protótipo de realidade aumentada em ambientes externos para visualização de modelos virtuais georreferenciados

    Advisor: Prof. Dr. Daniel Rodrigues dos Santos. Dissertation (master's) - Universidade Federal do Paraná, Setor de Ciências da Terra, Programa de Pós-Graduação em Ciências Geodésicas. Defense: Curitiba, 229/09/2011. Includes references: f. 67-69. Abstract: This research implements and demonstrates a methodology for building an outdoor Augmented Reality (AR) prototype for the visualization of virtual models at 1:1 scale. AR is based on the registered overlay of the real-world scene, captured by imaging sensors, with a virtual scene generated by real-time computer graphics, so that the two appear as one environment; as the user moves around a real object, the virtual (i.e. computer-generated) one reacts as if it were fully integrated with the real world. Three important aspects of AR systems were explored in this research: the problem of sensor integration; the problem of determining the camera's interior orientation (IO) parameters and its exterior orientation (EO) parameters; and the viability of the method for outdoor environments. The developed system is equipped with a low-cost inertial sensor integrated with a dual-frequency GPS receiver, a webcam and a head-mounted display (HMD). The virtual scene was created from LiDAR data and CAD files, and the quality of the visualization is tied to the accuracy of the equipment used in the system. The interior orientation parameters were obtained from camera calibration, while the exterior orientation parameters are determined in real time: the navigation and positioning sensors listed above are used to directly determine the orientation and position of the user at the moment the scene is viewed. Two experiments were conducted and the results showed the viability of the system for studies in outdoor environments. Keywords: augmented reality, virtual reality, navigation and positioning sensors, LiDAR.
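    As a rough illustration of how direct exterior orientation (position from GPS, attitude from the inertial sensor) and interior orientation (intrinsics from camera calibration) combine to place a georeferenced virtual point in the image, consider the simplified pinhole sketch below. The frame conventions, angle order and numerical values are illustrative assumptions, not the thesis' actual photogrammetric formulation.

```python
# Simplified sketch: project a georeferenced point into the camera image
# using exterior orientation (position from GPS, attitude from the IMU)
# and interior orientation (focal length and principal point from calibration).
# Frame conventions and values are illustrative, not those of the thesis.
import numpy as np


def rotation_from_ypr(yaw, pitch, roll):
    """Rotation matrix from yaw/pitch/roll angles in radians (Z-Y-X order)."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx


def project(point_world, cam_position, cam_ypr, K):
    """Transform a world point into the camera frame and apply the intrinsics."""
    R = rotation_from_ypr(*cam_ypr)
    p_cam = R.T @ (point_world - cam_position)   # exterior orientation
    uvw = K @ p_cam                              # interior orientation
    return uvw[:2] / uvw[2]                      # pixel coordinates


# Interior orientation from a (hypothetical) calibration: fx, fy, cx, cy.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

# Exterior orientation measured directly: GPS position in a local metric
# frame and attitude from the inertial sensor.
cam_position = np.array([10.0, 5.0, 1.7])
cam_ypr = (np.radians(30), 0.0, 0.0)

print(project(np.array([25.0, 12.0, 3.0]), cam_position, cam_ypr, K))
```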

    Problems Analysis and Solutions for the Establishment of Augmented Reality Technology in Maintenance and Education

    Maintenance is one important area in the life-cycle management of scientific facilities. The design of a maintenance model requires time and effort, and the best solutions and technologies need to be considered for its implementation. Scientific infrastructures that emit ionizing radiation are complex facilities that require taking into account not only traditional maintenance approaches but also specific solutions to prevent operational and health risks. Radiation directly affects workers' health, and therefore new approaches for enhancing maintenance operations are sought. For instance, scientific facilities are integrating remote handling techniques to reduce the radiation dose received by workers. As a result of the need for risk reduction, a fast and accurate maintenance procedure is required to keep the equipment in working condition, increasing safety in the facilities and reducing costs and maintenance time. Augmented Reality (AR) is a technology that has previously shown benefits in the maintenance field. AR applications allow workers to follow virtual in-place instructions for maintenance tasks displayed on the real devices. This approach shortens maintenance time, as workers do not need to look up the required help in the appropriate manual, and reduces risk, as the right steps to follow are always displayed to the worker. This is especially important in scientific facilities, as less maintenance time implies less radiation affecting workers and equipment, while lower risk reduces the potential damage caused by a wrong procedure. This thesis proposes a new platform providing a flexible AR solution targeted at helping maintenance procedures at scientific facilities. The platform comprises the elements required for the creation, updating and execution of AR applications, including maintenance-specific features: a powerful AR engine capable of producing AR scenes in maintenance environments and an authoring tool for the creation of the AR applications. The platform has been tested in a prototype case in a real facility, where a guiding system has been designed to aid the collimator exchange at the Large Hadron Collider (LHC) at CERN. In order to demonstrate the flexibility of the platform in adapting to other environments, it has been used as the basis for a solution to a problem detected in a second field: education. AR has previously been used in education with promising results. However, the technology has not become established in the large majority of schools and universities. The difficulty of creating AR experiences for educators (related to the lack of time or the required programming expertise) has constituted a barrier to the expansion of the technology in the field. Therefore, new solutions that help to overcome this barrier are needed. For this reason, the main platform developed in this thesis has been utilised as a basis for a new educational platform that aims to promote collaboration between developers and educators in order to overcome the detected problems. Finally, during the development of the aforementioned solutions, a comprehensive review of the state of the art of AR technology was carried out, analysing the main drivers and bottlenecks of the technology in several fields. The results of this analysis have been published with the aim of helping other researchers find solutions that support the spread of AR technology.
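    To make the combination of authoring tool and AR engine more concrete, the sketch below shows one hypothetical way an authored, step-by-step AR maintenance procedure could be represented as data: the authoring tool would produce such a description and the AR engine would anchor each step's instruction to the referenced equipment. The field names and the duration budget are illustrative assumptions, not the thesis platform's actual format.

```python
# Hypothetical, minimal representation of an authored AR maintenance procedure.
# Field names are illustrative, not the thesis platform's actual data model.
from dataclasses import dataclass, field
from typing import List


@dataclass
class ARStep:
    anchor_id: str          # tracked object or marker the overlay attaches to
    instruction: str        # text shown in place to the worker
    max_duration_s: int     # time budget, relevant where exposure time matters


@dataclass
class Procedure:
    name: str
    steps: List[ARStep] = field(default_factory=list)

    def total_budget(self) -> int:
        """Sum of per-step time budgets, a proxy for worker exposure time."""
        return sum(step.max_duration_s for step in self.steps)


collimator_exchange = Procedure(
    name="collimator exchange (illustrative)",
    steps=[
        ARStep("collimator_flange", "Loosen the four retaining bolts", 120),
        ARStep("collimator_body", "Attach the lifting fixture", 90),
    ],
)
print(collimator_exchange.total_budget(), "seconds budgeted")
```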

    A NOVEL AUGMENTED REALITY BASED SYSTEM FOR PROVIDING MAINTENANCE ASSISTANCE

    Ph.D. (Doctor of Philosophy)

    Round-trip Engineering für Anwendungen der Virtuellen und Erweiterten Realität

    Traditional 3D application development for VR/AR proceeds in heterogeneous developer teams in an unstructured, ad hoc and error-prone way. The presented Roundtrip3D development process enables iterative, incremental 3D application development, alternating between the software-model and implementation levels. Models foster a shared understanding among project participants and, through generated interfaces, allow programming and 3D modelling to proceed concurrently. The Roundtrip3D tool detects inconsistencies between completed 3D content and source code, including across different platforms, and visualizes them at the abstract model level. The implementation as a whole is not synchronized simultaneously but is reconciled with the software models in a controlled manner after code-driven development. Increments from the updated software models then flow back into source code and 3D content that are once again mutually consistent. The Roundtrip3D development process permanently combines the advantages of code-driven and model-driven 3D application development and promotes a structured approach in an agile setting.
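    The central mechanism named here, detecting inconsistencies between completed 3D content and source code and reporting them at the model level, can be sketched in a deliberately simplified form: compare the identifiers declared in the software model with those actually present in the 3D scene and those referenced from code. The node names and data sources below are illustrative assumptions, not the Roundtrip3D implementation.

```python
# Simplified sketch of a round-trip consistency check: the software model
# declares scene nodes, the 3D content and the source code each reference
# some of them, and the tool reports what is missing on either side.
# Names and data sources are illustrative, not the Roundtrip3D tool itself.

model_nodes = {"Lamp", "Switch", "Room"}               # declared in the software model
nodes_in_3d_content = {"Lamp", "Room"}                 # found in the completed 3D scene
nodes_referenced_in_code = {"Lamp", "Switch", "Door"}  # accessed by the program


def check_consistency(model, content, code):
    """Return the discrepancies between model, 3D content and code."""
    return {
        "missing_in_3d_content": model - content,
        "missing_in_model": (content | code) - model,
        "unused_in_code": model - code,
    }


issues = check_consistency(model_nodes, nodes_in_3d_content, nodes_referenced_in_code)
for issue, nodes in issues.items():
    if nodes:
        print(f"{issue}: {sorted(nodes)}")
```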

    Human factors in instructional augmented reality for intravehicular spaceflight activities and How gravity influences the setup of interfaces operated by direct object selection

    In human spaceflight, advanced user interfaces are becoming an interesting means to facilitate human-machine interaction and to enhance and safeguard the sequences of intravehicular space operations. Efforts to ease such operations have shown strong interest in novel forms of human-computer interaction such as Augmented Reality (AR). The work presented in this thesis is directed towards a user-driven design of AR-assisted space operations, iteratively solving issues arising from the problem space, which also includes consideration of the effect of altered gravity on the handling of such interfaces.

    Aportaciones al proceso de creación de contenidos de realidad aumentada, orientados a formación, industria y construcción

    Augmented Reality (AR) consists of augmenting the user's perception of the real world by adding virtual information that complements it. This capability of augmenting the senses is desirable in many fields of application, so although AR is a relatively young field of research, it has experienced a boom in recent years. The evolution of each of the processes involved in AR systems (computer vision, 3D rendering, etc.) has made it possible for this technology to leave the laboratory and be used by anyone who owns, for example, a smartphone. However, a major constraint remains: the creation of content (authoring). Nowadays there are only a small number of authoring tools for creating AR content, and in most cases these are proprietary tools limited to their own technology. This thesis presents contributions to the process of authoring AR content oriented to training, industry and construction. These contributions fall into two main areas: the creation of content for AR visual aids, and the creation of information during the review of the construction process by means of AR (so-called As-Built information). For AR visual aids, a new authoring model is defined that extends digital slide presentations with the mechanisms needed to create augmented content. For the As-Built information, both a methodology and the system that supports it have been developed. All contributions presented in this work are intended to allow non-expert users to create AR content by abstracting them from the low-level workings of AR systems. The results presented throughout this work show how the developed systems achieve this objective, allowing non-expert users to create AR content both for visual aids and to document the construction process.
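    The authoring model described above builds on the familiar digital slide presentation. The sketch below shows one hypothetical way an "augmented slide deck" could look as data: a sequence of slides, each of which may attach virtual content to a tracked real-world target. The class and field names are illustrative assumptions, not the thesis' actual model.

```python
# Hypothetical sketch of the "augmented slide" idea: a presentation is still a
# sequence of slides, but each slide can attach virtual content to a tracked
# real-world target. Field names are illustrative, not the thesis' model.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Overlay:
    target: str        # marker or object the content is registered to
    asset: str         # 3D model, image or text shown over the target
    caption: str = ""


@dataclass
class AugmentedSlide:
    title: str
    notes: str = ""
    overlays: List[Overlay] = field(default_factory=list)


visual_aid = [
    AugmentedSlide(
        title="Step 1: locate the valve",
        overlays=[Overlay(target="valve_marker", asset="arrow.obj",
                          caption="Turn clockwise to close")],
    ),
    AugmentedSlide(title="Step 2: record As-Built state",
                   notes="Photograph and annotate deviations here."),
]
print(len(visual_aid), "augmented slides authored")
```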

    Entwicklungsunterstützung für interaktive 3D-Anwendungen

    This thesis addresses the development of interactive 3D applications. Interactive 3D graphics is used today in a wide variety of domains, e.g. e-commerce, entertainment and training. Nevertheless, developing a larger 3D application is still a challenge. Program code and 3D content are usually created by different developers who have different expertise and work with entirely different tools. This situation frequently leads to problems when integrating the created application components into a complex interactive 3D system as a whole. For example, inconsistencies can arise that prevent the program from correctly accessing the 3D content at runtime. Moreover, concepts and tools to support a structured, interdisciplinary 3D development process are lacking. This thesis presents a novel approach to these problems: a family of domain-specific languages designed for use in the design phase preceding implementation in the 3D development process. The base language of the family, which is complemented by further language components, is the Scene Structure and Integration Modelling Language (SSIML). Using visual models, links between program components and 3D content can be specified with tool support. By automatically generating code skeletons from a model, which serve as templates to be completed by both the 3D designer and the programmer, consistency between the individual application components can be ensured. Besides linking program code and 3D content, further important aspects covered by the members of the SSIML language family are the structuring and modularization of 3D content, task-oriented 3D visualization, 3D behaviour and animation, and Augmented Reality applications. In addition, this thesis outlines a 3D development process that allows the presented concepts and tools to be incorporated in a meaningful way.
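    The key mechanism in this abstract, generating code skeletons from a visual model so that programmer and 3D designer work against the same identifiers, can be sketched as follows. The tiny model format and the generated Python stub are illustrative assumptions only; SSIML itself is a visual language family whose generators target other languages and richer metamodels.

```python
# Illustrative sketch of skeleton generation from a scene-structure model:
# each node declared in the model becomes a stub the programmer completes,
# so program code and 3D content agree on the same identifiers.
# The model format and generated code are hypothetical, not SSIML output.

scene_model = {
    "application": "LampDemo",
    "nodes": ["Lamp", "Switch", "Room"],
}


def generate_skeleton(model):
    """Emit a class stub with one handler per scene node declared in the model."""
    lines = [f"class {model['application']}Scene:",
             '    """Generated skeleton - complete the TODOs, do not rename nodes."""']
    for node in model["nodes"]:
        lines.append(f"    def on_{node.lower()}_selected(self):")
        lines.append(f"        # TODO: programmer completes behaviour for node '{node}'")
        lines.append("        raise NotImplementedError")
    return "\n".join(lines)


print(generate_skeleton(scene_model))
```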

    Towards exploring future landscapes using augmented reality

    With increasing pressure to better manage the environment, many government and private organisations are studying the relationships between social, economic and environmental factors to determine how they can best be optimised for increased sustainability. The analysis of such relationships is undertaken using computer-based Integrated Catchment Models (ICM). These models are capable of generating multiple scenarios depicting alternative land uses at a variety of temporal and spatial scales, which present (potentially) better Triple-Bottom-Line (TBL) outcomes than the prevailing situation. Dissemination of this data is, for the most part, reliant on traditional, static map products; however, the ability of such products to display the complexity and temporal aspects of the scenarios is limited, which ultimately undervalues both the knowledge incorporated in the models and the capacity of stakeholders to disseminate the complexities through other means. Geovisualization provides tools and methods for disseminating large volumes of spatial (and associated non-spatial) data. Virtual Environments (VE) have been utilised for various aspects of landscape planning for more than a decade. While such systems are capable of visualizing large volumes of data at ever-increasing levels of realism, they restrict the user's ability to accurately perceive the (virtual) space. Augmented Reality (AR) is a visualization technique which allows users the freedom to explore a physical space and have that space augmented with additional, spatially referenced information. A review of existing mobile AR systems forms the basis of this research, and a theoretical mobile outdoor AR system using Commercial-Off-The-Shelf (COTS) hardware and open-source software is developed. The specific requirements for visualizing land use scenarios in a mobile AR system were derived using a usability engineering approach known as Scenario-Based Design (SBD). This determined the elements required in the user interfaces, resulting in the development of a low-fidelity, computer-based prototype. The prototype user interfaces were evaluated by participants from two targeted stakeholder groups undertaking hypothetical use scenarios. Feedback from participants was collected using the cognitive walk-through technique and supplemented by evaluator observations of participants' physical actions. Results from this research suggest that the prototype user interfaces did provide the necessary functionality for interacting with land use scenarios. While there were some concerns about the potential implementation of "yet another" system, participants were able to envisage the benefits of visualizing land use scenario data in the physical environment.
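    To ground the idea of augmenting a physical viewpoint with spatially referenced scenario data, the small sketch below computes where a georeferenced feature should appear relative to the user's GPS fix, the kind of geometry any mobile outdoor AR overlay needs. The flat-earth approximation, coordinates and feature names are illustrative assumptions, not the prototype described in the abstract.

```python
# Hypothetical sketch: given the user's GPS fix and a georeferenced land-use
# feature, compute the bearing and distance at which its overlay should be
# drawn in a mobile AR view. Uses a flat-earth approximation that is adequate
# over short ranges; the coordinates below are illustrative.
import math

EARTH_RADIUS_M = 6_371_000


def bearing_and_distance(user_lat, user_lon, feat_lat, feat_lon):
    """Bearing (degrees from north) and distance (metres) from user to feature."""
    lat0 = math.radians((user_lat + feat_lat) / 2)
    dx = math.radians(feat_lon - user_lon) * math.cos(lat0) * EARTH_RADIUS_M  # east
    dy = math.radians(feat_lat - user_lat) * EARTH_RADIUS_M                   # north
    bearing = math.degrees(math.atan2(dx, dy)) % 360
    return bearing, math.hypot(dx, dy)


# User standing in a paddock, looking towards the centroid of a proposed
# land-use scenario polygon (illustrative coordinates).
brg, dist = bearing_and_distance(-36.7500, 144.2800, -36.7485, 144.2832)
print(f"overlay at bearing {brg:.1f} deg, {dist:.0f} m away")
```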