
    The Analysis of design and manufacturing tasks using haptic and immersive VR - Some case studies

    The use of virtual reality in interactive design and manufacture has been researched extensively, but the practical application of this technology in industry is still very much in its infancy. This is surprising, as one would have expected that, after some 30 years of research, commercial applications of interactive design or manufacturing planning and analysis would be widespread throughout the product design domain. One of the major but less well known advantages of VR technology is that logging the user yields a great deal of rich data which can be used to automatically generate designs or manufacturing instructions, analyse design and manufacturing tasks, map engineering processes and, tentatively, acquire expert knowledge. The authors feel that the benefits of VR in these areas have not been fully disseminated to the wider industrial community and, with the advent of cheaper PC-based VR solutions, a wider appreciation of the capabilities of this type of technology may encourage companies to adopt VR solutions for some of their product design processes. With this in mind, this paper describes in detail applications of haptics in assembly, demonstrating how user task logging can lead to the analysis of design and manufacturing tasks at a level of detail not previously possible, as well as giving usable engineering outputs. The haptic 3D VR study involves the use of a Phantom and a 3D system to analyse and compare this technology against real-world user performance. This work demonstrates that the detailed logging of tasks in a virtual environment gives considerable potential for understanding how virtual tasks can be mapped onto their real-world equivalents, and shows how haptic process plans can be generated in a similar manner to the conduit design and assembly planning HMD VR tool reported in PART A. The paper concludes with a view as to how the authors feel the use of VR systems in product design and manufacturing should evolve in order to enable the industrial adoption of this technology in the future.
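    A minimal sketch of the kind of user task logging the abstract describes, where raw haptic interaction events are collapsed into an assembly process plan. All class, field, and method names here are assumptions for illustration; a real Phantom or haptic toolkit would supply its own event stream.

        from dataclasses import dataclass, field
        from typing import List
        import time

        @dataclass
        class HapticEvent:
            """One logged interaction sample from the virtual assembly task."""
            timestamp: float
            part_id: str
            action: str            # e.g. "grasp", "move", "insert", "release"
            position: tuple        # (x, y, z) of the haptic stylus tip
            force: float           # contact force reported by the device

        @dataclass
        class TaskLog:
            events: List[HapticEvent] = field(default_factory=list)

            def record(self, part_id, action, position, force):
                self.events.append(HapticEvent(time.time(), part_id, action, position, force))

            def to_process_plan(self):
                """Collapse the raw event stream into an ordered assembly plan:
                one step per grasp..release cycle on a part."""
                plan, current = [], None
                for ev in self.events:
                    if ev.action == "grasp":
                        current = {"part": ev.part_id, "start": ev.timestamp}
                    elif ev.action == "release" and current:
                        current["duration_s"] = ev.timestamp - current["start"]
                        plan.append(current)
                        current = None
                return plan

    The same log could equally be replayed against the real-world task timings to support the virtual-to-real mapping discussed above.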

    Spatial Programming for Industrial Robots through Task Demonstration

    We present an intuitive system for programming industrial robots by demonstration, using markerless gesture recognition and mobile augmented reality. The approach covers gesture-based task definition and adaptation by human demonstration, as well as task evaluation through augmented reality. A 3D motion tracking system and a handheld device form the basis of the presented spatial programming system. In this publication, we present a prototype for programming an assembly sequence consisting of several pick-and-place tasks. A scene reconstruction provides pose estimation of known objects with the help of the handheld's 2D camera. The programmer is therefore able to define the program through natural bare-hand manipulation of these objects, with the help of direct visual feedback in the augmented reality application. The program can be adapted by gestures and subsequently transmitted to an arbitrary industrial robot controller using a unified interface. Finally, we discuss an application of the presented spatial programming approach to robot-based welding tasks.
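    A hedged sketch of how a demonstrated pick-and-place sequence might be represented and handed to a generic controller through a unified interface, as the abstract outlines. The class and method names are illustrative, not the authors' API or any real robot driver.

        from dataclasses import dataclass
        from typing import List, Tuple

        Pose = Tuple[float, float, float, float, float, float]  # x, y, z, roll, pitch, yaw

        @dataclass
        class PickPlaceTask:
            """One pick-and-place step captured from a bare-hand demonstration."""
            object_id: str
            pick_pose: Pose    # estimated from the handheld's 2D camera and a known object model
            place_pose: Pose

        class RobotControllerInterface:
            """Unified interface: concrete controllers translate these calls
            into their own motion commands (illustrative only)."""
            def move_to(self, pose: Pose): ...
            def close_gripper(self): ...
            def open_gripper(self): ...

        def execute_program(tasks: List[PickPlaceTask], robot: RobotControllerInterface):
            for task in tasks:
                robot.move_to(task.pick_pose)
                robot.close_gripper()
                robot.move_to(task.place_pose)
                robot.open_gripper()

    Keeping the task list independent of any particular controller is what allows the demonstrated program to be transmitted to an arbitrary industrial robot, as described above.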

    Leveraging Multimodal Interaction and Adaptive Interfaces for Location-based Augmented Reality Islamic Tourism Application

    A Location-based Augmented Reality (LBAR) application leveraging multimodal interaction and an adaptive interface for Islamic tourism information is proposed to enhance the user experience while travelling. LBAR has the potential to improve the tourist experience and help tourists access relevant information, improving their knowledge of a touristic destination while increasing their entertainment throughout the process. In LBAR applications, the Points of Interest (POIs) displayed are exposed to the “occlusion problem”, where AR contents are visually redundant and overlap with one another, causing users to lose valuable information. Previous research has suggested AR POI designs that help the user see the augmented POIs clearly; the user can click on the desired POI, but a large number of POIs is still displayed. To the best of our knowledge, there is limited research on how to minimise the number of displayed POIs based on the user’s current needs. Therefore, in this paper we propose using an adaptive user interface and multimodal interaction to solve this problem. We discuss the process of analysing and designing the user interfaces of previous studies. The proposed mobile solution is presented by explaining the application contents, the combination of adaptive multimodal inputs, the system’s flow chart and the multimodal task definition. A user evaluation was then conducted to measure satisfaction with the usability of the application. A total of 24 Islamic tourists participated in this study. The findings revealed an average SUS score of 75.83, indicating that respondents were satisfied with the LBAR application for use while travelling. Finally, we conclude the paper with suggestions for future work.
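    A minimal sketch of the kind of adaptive POI filtering the paper argues for: only POIs matching the user's current needs and within range are shown, and their number is capped so that AR labels do not overlap. The categories, parameters, and function names are hypothetical, not taken from the proposed application.

        import math
        from dataclasses import dataclass
        from typing import List

        @dataclass
        class POI:
            name: str
            category: str           # e.g. "mosque", "halal_restaurant", "prayer_room"
            lat: float
            lon: float

        def distance_m(lat1, lon1, lat2, lon2):
            """Approximate great-circle distance in metres (haversine)."""
            r = 6371000.0
            p1, p2 = math.radians(lat1), math.radians(lat2)
            dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
            a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
            return 2 * r * math.asin(math.sqrt(a))

        def adaptive_poi_filter(pois: List[POI], user_lat, user_lon,
                                wanted_categories, radius_m=500, max_shown=5):
            """Keep only POIs the user currently cares about, nearest first,
            and cap how many are displayed to avoid on-screen clutter."""
            candidates = [p for p in pois if p.category in wanted_categories]
            candidates.sort(key=lambda p: distance_m(user_lat, user_lon, p.lat, p.lon))
            return [p for p in candidates
                    if distance_m(user_lat, user_lon, p.lat, p.lon) <= radius_m][:max_shown]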

    Virtual and Augmented Reality in Finance: State Visibility of Events and Risk

    The recent financial crisis and its aftermath motivate our re-thinking of the role of Information and Communication Technologies (ICT) as a driver for change in global finance and a critical factor for success and sustainability. We attribute the recent financial crisis that hit the global market, causing a drastic economic slowdown and recession, to a lack of state visibility of risk, inadequate response to events, and slow dynamic system adaptation to events. There is evidence that ICT is not yet appropriately developed to create business value and business intelligence capable of counteracting devastating events. The aim of this chapter is to assess the potential of Virtual Reality and Augmented Reality (VR/AR) technologies in supporting the dynamics of global financial systems and in addressing the grand challenges posed by unexpected events and crises. Firstly, we overview traditional AR/VR uses. Secondly, we describe early attempts to use 3D/VR/AR technologies in finance. Thirdly, we consider the case study of mediating the visibility of the financial state and explore the various dimensions of the problem. Fourthly, we assess the potential of AR/VR technologies in raising the perception of the financial state (including financial risk). We conclude the chapter with a summary and a research agenda to develop technologies capable of increasing the perception of the financial state and risk and counteracting devastating events.

    Designing user interaction using gesture and speech for mixed reality interface

    Mixed Reality (MR) is the next evolution of human-computer interaction, as MR can combine the physical and digital environments and make them coexist with each other [1]. Interaction is still an active research area in MR, and this paper focuses on interaction rather than other research areas such as tracking, calibration, and display [2], because current interaction techniques are still not intuitive enough to let the user interact with the computer. This paper explores user interaction using gesture and speech for 3D object manipulation in a mixed reality environment. The paper explains the design stage, which involves interaction using gesture and speech inputs to enhance the user experience in the MR workspace. After acquiring gesture input and speech commands, an MR prototype is proposed that integrates the gesture and speech interaction technique. The paper concludes with results and discussion.
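    A simplified sketch of pairing a speech command with a gesture-selected object for 3D manipulation, in the spirit of the design stage described above. The event structures, field names, and time window are assumptions, not the prototype's actual API.

        from dataclasses import dataclass

        @dataclass
        class GestureEvent:
            kind: str        # e.g. "point", "pinch", "open_hand"
            target_id: str   # 3D object the hand is pointing at or holding
            timestamp: float

        @dataclass
        class SpeechEvent:
            command: str     # e.g. "move", "rotate", "scale", "delete"
            timestamp: float

        def combine(gesture: GestureEvent, speech: SpeechEvent, window_s: float = 1.5):
            """Pair a speech command with the most recent gesture if they occur
            within a short time window; otherwise ignore the command."""
            if gesture.target_id and abs(speech.timestamp - gesture.timestamp) <= window_s:
                return {"action": speech.command, "object": gesture.target_id}
            return None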

    An Evaluation of an Augmented Reality Multimodal Interface Using Speech and Paddle Gestures

    This paper discusses an evaluation of an augmented reality (AR) multimodal interface that uses combined speech and paddle gestures for interaction with virtual objects in the real world. We briefly describe our AR multimodal interface architecture and multimodal fusion strategies, which are based on a combination of time-based and domain semantics. We then present the results of a user study comparing multimodal input with gesture input alone. The results show that a combination of speech and paddle gestures improves the efficiency of user interaction. Finally, we describe some design recommendations for developing other multimodal AR interfaces.
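    A hedged illustration of fusion that combines a time window with a simple domain-semantics check, in the spirit of the strategy summarized above; the object types, verb rules, and names are illustrative only and do not reproduce the paper's implementation.

        from dataclasses import dataclass
        from typing import Optional

        @dataclass
        class PaddleGesture:
            object_id: str        # virtual object currently under the paddle
            object_type: str      # e.g. "chair", "table", "lamp"
            timestamp: float

        @dataclass
        class SpeechCommand:
            verb: str             # e.g. "place", "rotate", "make it red"
            timestamp: float

        # Illustrative domain semantics: which verbs make sense for which object types.
        VALID_VERBS = {
            "chair": {"place", "rotate", "make it red"},
            "lamp": {"place", "rotate"},
        }

        def fuse(gesture: PaddleGesture, speech: SpeechCommand,
                 window_s: float = 2.0) -> Optional[dict]:
            """Accept the pair only if the inputs are close in time (time-based fusion)
            and the spoken verb is meaningful for the gestured object (domain semantics)."""
            in_time = abs(speech.timestamp - gesture.timestamp) <= window_s
            in_domain = speech.verb in VALID_VERBS.get(gesture.object_type, set())
            if in_time and in_domain:
                return {"object": gesture.object_id, "action": speech.verb}
            return None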