
    Interactive product browsing and configuration using remote augmented reality sales services

    Real-time remote sales assistance is an underdeveloped component of online sales services. Solutions based on web-page text chat, telephony and video support prove problematic when remotely guiding customers through their sales processes, especially the configuration of physically complex artefacts. Recently, there has been great interest in applying virtual worlds and augmented reality to create synthetic environments for the remote sale of physical artefacts. However, there has been little analysis and development of the software services needed to support these processes. We extend our previous work with the detailed design of configuration context services that support the management of an interactive sales session using augmented reality. We detail the required context and configuration services, presenting a novel data service that streams configuration information to the vendor for business analytics. We expect that a fully implemented configuration management service based on our design will improve the remote sales experience for customers and vendors alike through analysis of the streamed information.
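    As a rough illustration of what such a streamed configuration data service could look like, the TypeScript sketch below posts one configuration-change event per customer action to a vendor analytics endpoint. The event fields, function name and URL are illustrative assumptions, not the paper's actual design.

    ```typescript
    // Hypothetical shape of a single configuration-change event; the field
    // names and the endpoint URL are assumptions, not the paper's design.
    interface ConfigurationEvent {
      sessionId: string;      // identifies the AR sales session
      artefactId: string;     // the physical artefact being configured
      component: string;      // component the customer just changed
      selectedOption: string; // the chosen configuration value
      timestamp: string;      // ISO-8601 time of the change
    }

    // Stream each change to the vendor as it happens, so the session can be
    // analysed for business analytics in near real time.
    async function streamConfigurationEvent(event: ConfigurationEvent): Promise<void> {
      await fetch("https://vendor.example.com/analytics/config-events", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(event),
      });
    }
    ```

    Where configuration changes are frequent, batching events or keeping a persistent WebSocket open would be a natural alternative to one HTTP request per change.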

    The LAB@FUTURE Project - Moving Towards the Future of E-Learning

    This paper presents Lab@Future, an advanced e-learning platform that uses novel Information and Communication Technologies to support and expand laboratory teaching practices. To this end, Lab@Future uses real and computer-generated objects that are interfaced through mechatronic systems, augmented reality, mobile technologies and 3D multi-user environments. The main aim is to develop and demonstrate technological support for practical experiments in the following subjects: Fluid Dynamics (science, Germany), Geometry (mathematics, Austria), and History and Environmental Awareness (arts and humanities, Greece and Slovenia). To pedagogically enhance the design and functional aspects of this e-learning technology, we are investigating the dialogical operationalisation of learning theories so as to deepen our understanding of teaching and learning practices in the targeted contexts of deployment.

    MINDtouch: Embodied mobile media ephemeral transference

    Copyright © 2013 ISAST. This article reviews discoveries that emerged from the author's MINDtouch media research project, in which a mobile device was repurposed for visual and non-verbal communication through gestural and visual mobile expressivity. The work revealed new insights from emerging mobile media and participatory performance practices. The author contextualizes her research on mobile video and networked performance alongside relevant discourse on presence and the embodiment of technology. From the research, an intimate, phenomenological and visual form of mobile expression has emerged, reconfiguring the communication device from a voice- and text/SMS-only tool into a visual and synesthetic mode of deeper expression.

    U-DiVE: Design and evaluation of a distributed photorealistic virtual reality environment

    This dissertation presents a framework that allows low-cost devices to visualize and interact with photorealistic scenes. To accomplish this, the framework uses Unity's High Definition Render Pipeline, which provides a proprietary ray-tracing algorithm, and Unity's streaming package, which allows an application to be streamed from within the editor. The framework composes a realistic scene using the ray-tracing algorithm and renders it through a virtual-reality camera with barrel-distortion shaders, correcting the lens distortion required by an inexpensive cardboard viewer. It also includes a method for collecting the mobile device's spatial orientation through a web browser, delivered via WebRTC, to control the user's view. The proposed framework can produce low-latency, realistic and immersive environments accessible through low-cost HMDs and mobile devices. To evaluate the framework, this work verifies the frame rate achieved by the server and the mobile device, which should exceed 30 FPS for a smooth experience. It also discusses whether the overall quality of experience is acceptable by measuring the delay of image delivery from the server to the mobile device in the face of the user's movement. Our tests showed that, across the 8 scenes tested, the framework reaches a mean latency of around 177 ms with household Wi-Fi equipment, with a maximum latency variation of 77.9 ms.
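    To illustrate the orientation-collection step, the following browser-side TypeScript sketch captures deviceorientation events and forwards them over a WebRTC data channel. The channel label, message shape and peer-connection setup are assumptions, since the dissertation's code is not reproduced here, and the SDP/ICE signalling needed to connect to the rendering server is elided.

    ```typescript
    // Browser-side sketch: forward the phone's orientation to the rendering
    // server over a WebRTC data channel. Signalling (SDP/ICE exchange) is
    // elided; the channel label and message shape are assumptions.
    const peer = new RTCPeerConnection();
    const channel: RTCDataChannel = peer.createDataChannel("orientation");

    // Note: iOS Safari additionally requires DeviceOrientationEvent.requestPermission()
    // to be called from a user gesture before these events fire.
    window.addEventListener("deviceorientation", (event: DeviceOrientationEvent) => {
      if (channel.readyState !== "open") return;
      channel.send(JSON.stringify({
        alpha: event.alpha,   // rotation about the z-axis (compass heading), degrees
        beta: event.beta,     // front-to-back tilt, degrees
        gamma: event.gamma,   // left-to-right tilt, degrees
        t: performance.now(), // send time, usable for latency measurement server-side
      }));
    });
    ```

    Tagging each message with a send time is one way the end-to-end image-delivery delay reported above (a mean of around 177 ms) could be estimated against the user's movement.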