
    Analyzing the Impact of Spatio-Temporal Sensor Resolution on Player Experience in Augmented Reality Games

    Along with automating everyday tasks, smartphones have become one of the most popular devices for playing video games due to their interactivity. Smartphones are embedded with various sensors, such as motion and location sensors, which enable new interaction techniques that enhance usability. However, despite their mobility and embedded sensor capacity, smartphones are limited in processing power and display area compared to desktop computers and consoles. When evaluating Player Experience (PX), players might not have as compelling an experience because the rich graphical environments that a desktop computer can provide are absent on a smartphone. A plausible alternative is substituting the virtual game world with a real-world game board, perceived through the device camera by rendering digital artifacts over the camera view. This technology is widely known as Augmented Reality (AR). Smartphone sensors (e.g. GPS, accelerometer, gyroscope, compass) have enhanced the capability for deploying AR technology, and AR has been applied to a large number of smartphone games, including shooters, casual games, and puzzles. Because AR play environments are viewed through the camera, rendering the digital artifacts consistently and accurately is crucial: the digital characters need to move with respect to the sensed orientation, so the accelerometer and gyroscope need to provide sufficiently accurate and precise readings to make the game playable. In particular, determining the pose of the camera in space is vital, as the appropriate angle at which to view the rendered digital characters is determined by the camera pose. This defines how well players will be able to interact with the digital game characters.
Depending on the Quality of Service (QoS) of these sensors, the Player Experience (PX) may vary, as the rendering of digital characters is affected by noisy sensors causing a loss of registration. Confronting this problem while developing AR games is difficult in general, as it requires creating a wide variety of game types, narratives, and input modalities, as well as user testing. Moreover, AR game developers currently do not have any specific guidelines for developing AR games, and concrete guidelines outlining the tradeoffs between QoS and PX for different genres and interaction techniques are required. My dissertation provides a complete view (a taxonomy) of the spatio-temporal sensor resolution dependency of existing AR games. Four user experiments have been conducted and one experiment is proposed to validate the taxonomy and demonstrate the differential impact of sensor noise on gameplay across different genres of AR games and different aspects of PX. This analysis is performed in the context of a novel instrumentation technology, which allows the controlled manipulation of QoS on position and orientation sensors. The experimental outcomes demonstrate how the QoS of input sensor noise impacts PX differently across AR game genres, and that the key elements creating this differential impact are the input modality, narrative, and game mechanics. Finally, concrete guidelines are derived for regulating sensor QoS as a complete set of instructions for developing different genres of AR games.
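The controlled manipulation of sensor QoS described above can be sketched as a simple degradation model: Gaussian noise on orientation readings lowers spatial resolution, and random sample dropout lowers temporal resolution. This is a minimal illustration under assumed parameters, not the dissertation's actual instrumentation technology:

```python
import random

def degrade_orientation(samples, noise_std_deg=0.0, dropout_rate=0.0, seed=42):
    """Simulate reduced sensor QoS on (yaw, pitch, roll) readings in degrees.

    Gaussian noise models reduced spatial resolution; random dropout models
    reduced temporal resolution. Hypothetical model for illustration only.
    """
    rng = random.Random(seed)
    degraded = []
    for yaw, pitch, roll in samples:
        if rng.random() < dropout_rate:
            continue  # reading lost: lower effective sample rate
        degraded.append(tuple(a + rng.gauss(0.0, noise_std_deg)
                              for a in (yaw, pitch, roll)))
    return degraded

clean = [(10.0, 0.0, 0.0), (11.0, 0.5, 0.1), (12.0, 1.0, 0.2)]
noisy = degrade_orientation(clean, noise_std_deg=2.0, dropout_rate=0.2)
```

Sweeping `noise_std_deg` and `dropout_rate` while measuring PX is one way such a QoS/PX tradeoff study could be parameterized.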

    Mobile phone technology as an aid to contemporary transport questions in walkability, in the context of developing countries

    The emerging global middle class, which is expected to double by 2050, desires more walkable, liveable neighbourhoods, and as distances between work and other amenities increase, cities are becoming less monocentric and more polycentric. African cities could be described as walking cities, based on the number of people that walk to their destinations as opposed to using other means of mobility, but they are often not walkable. Walking is by far the most popular form of transportation in Africa’s rapidly urbanising cities, although it is often a necessity rather than a choice. Facilitating this primary mode, while curbing the growth of less sustainable mobility uses, requires special attention to the safety and convenience of walking in a Global South context. To further promote walking as a sustainable mobility option, there is a need to assess the current state of its supporting infrastructure and begin giving it higher priority, focus and emphasis. Mobile phones have emerged as a useful alternative tool to collect this data and audit the state of walkability in cities. They eliminate the inaccuracies and inefficiencies of human memory, because smartphone sensors such as GPS provide location information accurate to within 5 m, offering superior accuracy and precision compared to traditional methods. The data is also spatial in nature, allowing for a range of possible applications and use cases. Traditional inventory approaches in walkability often revealed the perceived walkability and accessibility for only a subset of journeys. Crowdsourcing the perceived walkability and accessibility of points of interest in African cities could address this, albeit aspects such as ease of use and road safety should also be considered.
A tool that crowdsources individual pedestrian experiences, including the availability and state of pedestrian infrastructure and amenities, using state-of-the-art smartphone technology would over time also result in complete surveys of the walking environment, provided such a tool is popular and safe. This research illustrates how mobile phone applications currently on the market can be improved to offer more functionality that factors in multiple sensory modalities for enhanced visual appeal, ease of use, and aesthetics. The overarching aim of this research is, therefore, to develop the framework for, and test, a pilot-version mobile phone-based data collection tool that incorporates emerging technologies for collecting data on walkability. This research project assesses the effectiveness of the mobile application and tests the technical capabilities of the system to see how it operates within an existing infrastructure. It also investigates the use of mobile phone technology in the collection of user perceptions of walkability, and the limitations of current transportation-based mobile applications, with the aim of developing an application that improves on current offerings in the market. The prototype application will be tested and later piloted in different locations around the globe. Past studies have primarily focused on the development of transport-based mobile phone applications with basic features and limited functionality. Although limited progress has been made in integrating emerging advanced technologies such as Augmented Reality (AR), Machine Learning (ML), and Big Data analytics into mobile phone applications, what is missing from these past examples is a comprehensive and structured application in the transportation sphere.
In turn, the full research offers a broader understanding of the information gathered from these smart devices, and how that large volume of varied data can be interpreted better and more quickly to discover trends and patterns and to aid decision making and planning. This research project attempts to fill this gap and bring new insights, thus promoting the research field of transportation data collection audits, with particular emphasis on walkability audits. In this regard, it seeks to provide insights into how such a tool could be applied in assessing and promoting walkability as a sustainable and equitable mobility option. In order to get policy-makers, analysts, and practitioners in urban transport planning and provision to pay closer attention to making better, more walkable places, appealing to them from an efficiency and business perspective is vital. This crowdsourced data is of great interest to industry practitioners, local governments and research communities as Big Data, and to urban communities and civil society as an input to their advocacy activities. The general findings of this research show clear evidence that transport-based mobile phone applications currently available on the market are increasingly outdated and are not keeping up with new and emerging technologies and innovations. It is also evident from the results that smartphones have revolutionised the collection of transport-related information, hence the need for new initiatives to take advantage of this emerging opportunity. The implication of these findings is that more attention needs to be paid to this niche going forward.
This research project recommends further studies, particularly on which technologies and functionalities can realistically be incorporated into mobile phone applications in the near future, as well as on improving the hardware specifications of mobile devices to facilitate and support these emerging technologies whilst keeping their cost as low as possible.
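A basic building block of the GPS-based walkability auditing described above is computing walked distance from successive location fixes. The haversine formula below is a standard, self-contained sketch of that step, not code from the tool itself:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS fixes (WGS-84
    coordinates in decimal degrees, spherical-Earth approximation)."""
    R = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = p2 - p1
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2.0) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2.0) ** 2)
    return 2.0 * R * math.asin(math.sqrt(a))
```

Summing this over consecutive fixes in a trace gives a walked-route length; with ~5 m GPS accuracy, short segments carry proportionally larger relative error, which is one reason sampling strategy matters for such audits.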

    Privacy-Aware Eye Tracking Using Differential Privacy

    With eye tracking being increasingly integrated into virtual and augmented reality (VR/AR) head-mounted displays, preserving users' privacy is an ever more important, yet under-explored, topic in the eye tracking community. We report a large-scale online survey (N=124) on privacy aspects of eye tracking that provides the first comprehensive account of with whom, for which services, and to what extent users are willing to share their gaze data. Using these insights, we design a privacy-aware VR interface that uses differential privacy, which we evaluate on a new 20-participant dataset for two privacy-sensitive tasks: we show that our method can prevent user re-identification and protect gender information while maintaining high performance for gaze-based document type classification. Our results highlight the privacy challenges particular to gaze data and demonstrate that differential privacy is a potential means to address them. Thus, this paper lays important foundations for future research on privacy-aware gaze interfaces. (Comment: 9 pages, 8 figures, supplementary material)
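A common way to realize differential privacy on a scalar aggregate is the Laplace mechanism: add noise scaled to sensitivity/epsilon before release. The sketch below applies it to a hypothetical aggregate gaze feature (mean fixation duration); the feature choice and parameters are illustrative assumptions, not the paper's exact mechanism:

```python
import math
import random

def laplace_noise(scale, rng):
    # Inverse-CDF sampling of a zero-mean Laplace distribution.
    u = rng.random() - 0.5  # uniform in [-0.5, 0.5)
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def privatize(value, sensitivity, epsilon, rng):
    """Release `value` under epsilon-differential privacy via the Laplace
    mechanism; `sensitivity` bounds how much one user can change the value."""
    return value + laplace_noise(sensitivity / epsilon, rng)

rng = random.Random(0)
# Hypothetical aggregate: mean fixation duration in ms, sensitivity 50 ms.
noisy_duration = privatize(250.0, sensitivity=50.0, epsilon=1.0, rng=rng)
```

Smaller epsilon means stronger privacy but larger noise, which is exactly the privacy/utility tradeoff the paper evaluates for re-identification versus document type classification.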

    Doppler Lidar Vector Retrievals and Atmospheric Data Visualization in Mixed/Augmented Reality

    Environmental remote sensing has seen rapid growth in recent years, and Doppler wind lidars have gained popularity primarily due to their non-intrusive, high spatial and temporal resolution measurement capabilities. While early lidar applications relied on radial velocity measurements alone, most practical applications in wind farm control and short-term wind prediction require knowledge of the vector wind field. Over the past couple of years, multiple works on lidars have explored three primary methods of retrieving wind vectors: the homogeneous wind-field assumption, computationally intensive variational methods, and the use of multiple Doppler lidars. Building on prior research, the current three-part study first demonstrates the capabilities of single- and dual-Doppler lidar retrievals in capturing downslope windstorm-type flows occurring at Arizona’s Barringer Meteor Crater as part of the METCRAX II field experiment. Next, to address the need for a reliable and computationally efficient vector retrieval for adaptive wind farm control applications, a novel 2D vector retrieval based on a variational formulation was developed, applied to lidar scans from an offshore wind farm, and validated with data from a cup-and-vane anemometer installed on a nearby research platform. Finally, a novel data visualization technique using Mixed Reality (MR)/Augmented Reality (AR) technology is presented to visualize data from atmospheric sensors. MR is an environment in which the user's visual perception of the real world is enhanced with live, interactive, computer-generated sensory input (in this case, data from atmospheric sensors like Doppler lidars). A methodology using modern game development platforms is presented and demonstrated with lidar-retrieved wind fields.
In the current study, the possibility of using this technology to visualize data from atmospheric sensors in mixed reality is explored and demonstrated with lidar-retrieved wind fields, as well as a few earth science datasets for education and outreach activities.
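As a minimal illustration of the dual-Doppler idea mentioned above: two radial velocity measurements of the same air volume from different azimuths determine the horizontal wind vector, assuming locally homogeneous flow and negligible vertical velocity. This is a textbook sketch, not the study's variational retrieval:

```python
import math

def dual_doppler_uv(vr1, az1_deg, vr2, az2_deg):
    """Retrieve horizontal wind (u, v) from two radial velocities measured
    at azimuths az1, az2 (degrees), assuming homogeneous horizontal flow
    and zero vertical velocity: vr_i = u*sin(a_i) + v*cos(a_i)."""
    a1, a2 = math.radians(az1_deg), math.radians(az2_deg)
    det = math.sin(a1) * math.cos(a2) - math.cos(a1) * math.sin(a2)
    # det = sin(a1 - a2): the retrieval is ill-conditioned when the two
    # beams are nearly parallel, a key constraint on lidar siting.
    u = (vr1 * math.cos(a2) - vr2 * math.cos(a1)) / det
    v = (math.sin(a1) * vr2 - math.sin(a2) * vr1) / det
    return u, v
```

The determinant term shows why beam-crossing geometry matters: retrievals degrade as the two look angles converge, which variational formulations mitigate by adding smoothness constraints.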

    Towards Natural Control of Artificial Limbs

    The use of implantable electrodes has long been thought of as the solution for more natural control of artificial limbs, as these offer access to long-term stable and physiologically appropriate sources of control, as well as the possibility to elicit appropriate sensory feedback via neurostimulation. Although these ideas have been explored since the 1960s, the lack of a long-term stable human-machine interface has prevented the utilization of even the simplest implanted electrodes in clinically viable limb prostheses. In this thesis, a novel human-machine interface for bidirectional communication between implanted electrodes and the artificial limb was developed and clinically implemented. The long-term stability was achieved via osseointegration, which has been shown to provide stable skeletal attachment. By extending this technology into a communication gateway, the longest clinical implementation of prosthetic control sourced by implanted electrodes has been achieved, as well as the first in modern times. The first recipient has used it uninterruptedly in daily and professional activities for over one year. Prosthetic control was found to improve in resolution while requiring less muscular effort, and to be resilient to motion artifacts, limb position, and environmental conditions. In order to support this work, the literature was reviewed in search of reliable and safe neuromuscular electrodes that could be immediately used in humans. Additional work was conducted to improve the signal-to-noise ratio and increase the amount of information retrievable from extraneural recordings. Different signal processing and pattern recognition algorithms were investigated and further developed towards real-time and simultaneous prediction of limb movements. These algorithms were used to demonstrate that higher functionality could be restored by intuitive control of distal joints, and that such control remains viable over time when using epimysial electrodes.
Lastly, the long-term viability of direct nerve stimulation to produce intuitive sensory feedback was also demonstrated. The possibility to permanently and reliably access implanted electrodes, thus making them viable for prosthetic control, is potentially the main contribution of this work. Furthermore, the opportunity to chronically record from and stimulate the neuromuscular system offers new avenues for the prediction of complex limb motions and an increased understanding of somatosensory perception. Therefore, the technology developed here, combining stable attachment with permanent and reliable human-machine communication, is considered by the author a critical step towards more functional artificial limbs.
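Pattern recognition pipelines for myoelectric control typically start by windowing the recorded signal and extracting time-domain features that a classifier then maps to movements. The mean-absolute-value (MAV) feature below is one such standard feature, shown purely as an illustrative sketch and not as the thesis's actual pipeline:

```python
def mav_features(emg, window=200, step=50):
    """Mean-absolute-value feature per sliding window of one EMG channel.

    `window` and `step` are in samples (e.g. 200 samples = 100 ms at 2 kHz,
    assumed values for illustration). MAV is a common time-domain feature
    fed to movement classifiers in myoelectric pattern recognition.
    """
    feats = []
    for start in range(0, len(emg) - window + 1, step):
        seg = emg[start:start + window]
        feats.append(sum(abs(x) for x in seg) / window)
    return feats
```

Overlapping windows (step smaller than window) keep the control latency low while still giving the classifier enough signal per decision, which matters for the real-time, simultaneous prediction discussed above.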

    Using brain-computer interaction and multimodal virtual-reality for augmenting stroke neurorehabilitation

    Every year millions of people suffer a stroke, resulting in initial paralysis, slow motor recovery and chronic conditions that require continuous rehabilitation and therapy. The increasing socio-economic and psychological impact of stroke makes it necessary to find new approaches to minimize its sequelae, as well as novel tools for effective, low-cost and personalized rehabilitation. The integration of current ICT approaches and Virtual Reality (VR) training (based on exercise therapies) has shown significant improvements. Moreover, recent studies have shown that task performance is improved through mental practice and neurofeedback. To date, detailed information on which neurofeedback strategies lead to successful functional recovery is not available, while very little is known about how to optimally utilize neurofeedback paradigms in stroke rehabilitation. Given these limitations, the target of this project is to investigate and develop a novel upper-limb rehabilitation system using novel ICT technologies, including Brain-Computer Interfaces (BCIs) and VR systems. Here, through a set of studies, we illustrate the design of the RehabNet framework and its focus on integrative motor and cognitive therapy based on VR scenarios. Moreover, we broadened the inclusion criteria to low-mobility patients through the development of neurofeedback tools utilizing Brain-Computer Interfaces, while investigating the effects of brain-to-VR interaction.

    Proceedings of the 2011 Joint Workshop of Fraunhofer IOSB and Institute for Anthropomatics, Vision and Fusion Laboratory

    This book is a collection of 15 reviewed technical reports summarizing the presentations at the 2011 Joint Workshop of Fraunhofer IOSB and Institute for Anthropomatics, Vision and Fusion Laboratory. The covered topics include image processing, optical signal processing, visual inspection, pattern recognition and classification, human-machine interaction, world and situation modeling, autonomous system localization and mapping, information fusion, and trust propagation in sensor networks.