
    The Perception/Action loop: A Study on the Bandwidth of Human Perception and on Natural Human Computer Interaction for Immersive Virtual Reality Applications

    Virtual Reality (VR) is an innovative technology which, in the last decade, has enjoyed widespread success, mainly thanks to the release of low-cost devices, which have contributed to the diversification of its domains of application. The current work focuses on the general mechanisms underlying the perception/action loop in VR, in order to improve the design and implementation of applications for training and simulation in immersive VR, especially in the context of Industry 4.0 and the medical field. On the one hand, we want to understand how humans gather and process the information presented in a virtual environment, through an evaluation of the visual system's bandwidth. On the other hand, since the interface has to be a sort of transparent layer allowing trainees to accomplish a task without directing any cognitive effort to the interaction itself, we compare two state-of-the-art solutions for selection and manipulation tasks: a touch-based one, the HTC Vive controllers, and a touchless vision-based one, the Leap Motion. To this aim we developed ad hoc frameworks and methodologies. The software frameworks consist of VR scenarios in which the experimenter can choose the modality of interaction and the headset to be used and can set experimental parameters, guaranteeing repeatable experiments under controlled conditions. The methodology includes the evaluation of performance, user experience and preferences, considering both quantitative and qualitative metrics derived from the collection and analysis of heterogeneous data, such as physiological and inertial sensor measurements, timings and self-assessment questionnaires. In general, VR proved to be a powerful tool able to simulate specific situations in a realistic and involving way, eliciting the user's sense of presence without causing severe cybersickness, at least when interaction is limited to the peripersonal and near-action space.
    Moreover, when designing a VR application, it is possible to manipulate its features in order to trigger, or avoid triggering, specific emotions, and to voluntarily create potentially stressful or relaxing situations. Considering the ability of trainees to perceive and process information presented in an immersive virtual environment, results show that, when people are given enough time to build a gist of the scene, they are able to recognize a change with 0.75 accuracy when up to 8 elements are in the scene. For interaction, when selection and manipulation tasks do not require fine movements, the controllers and the Leap Motion ensure comparable performance; when tasks are complex, the former turns out to be more stable and efficient, also because the visual and audio feedback, provided as a substitute for haptic feedback, does not substantially improve performance in the touchless case.

    Convex Interaction: Extending Spatial Interaction through VR-Based Action Compression


    When technology cares for people with dementia: A critical review using neuropsychological rehabilitation as a conceptual framework

    Clinicians and researchers have become increasingly interested in the potential of technology to assist persons with dementia (PwD). However, several issues have emerged in relation to how studies have conceptualized who the main technology user is (PwD/carer), how technology is used (as a compensatory, environment-modification, monitoring or retraining tool), why it is used (i.e., what impairments and/or disabilities are supported) and what variables have been considered relevant to supporting engagement with technology. In this review we adopted a Neuropsychological Rehabilitation perspective to analyse 253 studies reporting on technological solutions for PwD. We analysed purposes/uses, supported impairments and disabilities, and how engagement was considered. Findings showed that the most frequent purposes of technology use were compensation and monitoring, supporting orientation, the sequencing of complex actions and memory impairments in a wide range of activities. The few studies that addressed the issue of engagement with technology considered how ease of use, social appropriateness, level of personalization, dynamic adaptation and carers' mediation allowed technology to adapt to PwD's and carers' preferences and performance. Conceptual and methodological tools emerged as outcomes of the analytical process, representing an important contribution to understanding the role of technologies in increasing PwD's wellbeing and orienting future research. Funded by the University of Huddersfield, under grants URF301-01 and URF506-01.

    IntGUItive : Developing a Natural, Intuitive Graphical User Interface for Mobile Devices

    Daily life has seen a sudden increase in mobile device usage. One needs only look around to find several tiny devices packing power and function. This is all thanks to exponential advances in technology in recent years; each year, technology companies around the world introduce their products: smaller, lighter, faster, state of the art. However, as these devices multiply, so do their users and contexts of use. People use them more and more in different situations, on the move, with different styles, tastes and constraints, such as time. In these cases, using a device whose interface is complicated, cumbersome and non-intuitive ends up costing precious time and perhaps money, and most definitely causes frustration. There has been much research into this field of design, called by various names but perhaps best summed up by the term "user experience" or "UX". Several concepts exist within it, such as gamification, haptics and natural user interfaces (NUIs). The native system applications of today's mobile devices do not seem to be very intuitive or easy to use as they grow more and more complicated. This research attempts to provide a solution to that problem by focusing on depth perception and a novel way of designing a user interface that conveys depth as the user navigates the system and moves through applications. The metaphors of a camera zoom lens, a rifle scope and binoculars are loose inspirations which form the basis for the prototype application developed. Although the prototype application lacks many features due to time and technical factors, the user study revealed highly positive results, with users enjoying the intuitive and natural feel of the new design. Users also expressed great interest in using an application with such an interface, with improvements, in the future, and have thus prompted further research, showing that such a design opens up many possibilities for improvement in an otherwise stagnant field.

    A Usability Study of Virtual Reality Systems: On Best Practices for User-Centered Design in Virtual Reality Gaming Interfaces

    In an effort to gather a list of best practices for user-centered design in virtual reality gaming interfaces, this study combines evidence from industry anecdotal observations, heuristic evaluations, and usability testing with three of the leading virtual reality platforms on the market: HTC Vive, Oculus Rift, and Windows Mixed Reality. Quantitative and qualitative data were collected from a variety of usability scales and questionnaires, think-aloud tasks, observation, and semi-structured interviews. The results of the study suggest that immersion is an effective design feature across all interfaces; however, the lack of real-world awareness resulting from immersion can be a major usability concern. Pain points included controller design and button mapping, physiological comfort, and adapting to the new methods of movement and interaction required in 3D virtual environments. The findings emphasize the need to prioritize learnability in the design of VR systems. The paper concludes with fifteen guidelines for designing user-friendly virtual reality interfaces. Master of Science in Information Science.

    Toward multimodality: gesture and vibrotactile feedback in natural human computer interaction

    In the present work, users' interaction with advanced systems was investigated in different application domains and with respect to different interfaces. The methods employed were carefully devised to respond to the peculiarities of the interfaces under examination, and from them we extracted a set of recommendations for developers. The first application domain examined is the home. In particular, we addressed the design of a gestural interface for controlling a lighting system embedded in a piece of kitchen furniture. A sample of end users was observed while interacting with a virtual simulation of the interface; based on video analysis of the users' spontaneous behaviors, we derived a set of significant interaction trends. The second application domain involved the exploration of an urban environment while mobile. In a comparative study, a haptic-audio interface and an audio-visual interface were employed for guiding users towards landmarks and for providing them with information. We showed that the two systems were equally efficient in supporting the users and were both well received by them. In a navigational task we compared two tactile displays, each embedded in a different wearable device, i.e., a glove and a vest. Despite their differences in shape and size, both systems successfully directed users to the target; their strengths and flaws were pointed out and commented on by users. In a similar context, two devices supporting Augmented Reality technology, i.e., a pair of smartglasses and a smartphone, were compared. The experiment allowed us to identify the circumstances favoring the use of the smartglasses or the smartphone. Considered altogether, our findings suggest a set of recommendations for developers of advanced systems. First, we outline the importance of properly involving end users to unveil intuitive interaction modalities with gestural interfaces.
    We also highlight the importance of giving the user the chance to choose the interaction mode that best fits the contextual characteristics and to adjust the features of every interaction mode. Finally, we outline the potential of wearable devices to support interaction on the move and the importance of finding a proper balance between the amount of information conveyed to the user and the size of the device.

    Conception, design and evaluation of an ICT platform for independent living and remote health monitoring

    Society today is dealing with a progressive ageing of the population. Life expectancy is constantly increasing and, at the same time, families tend to have fewer children than in the past. For these reasons, the global proportion of people aged 60 or over is expected to outnumber the younger age groups. This trend will have a serious impact on society: health-related costs will rise, there will be a lack of professional caregivers trained to assist the elderly, and more and more people will suffer from chronic diseases that must be treated. To address this situation, many initiatives aiming at increasing the independence of the elderly have been launched in recent years. The problem in developing technological systems for the elderly is that they are reluctant to try out new systems and devices, so great emphasis must be put on the design of an acceptable and usable solution. In this thesis, an ICT platform for the independent living of older adults is presented. The platform is based on a standard TV and remote control, in order to lower the risk of technology refusal by older people, and aims at offering a rich set of services covering social networking, support, welfare and health. The health aspect is important but not the leading one, since such a platform should first be perceived as useful for different aspects of daily life, and not strictly tied to the notion that being old means having health problems. Another aim of the proposed platform is to expand the offered services by involving external service providers, who will exploit the basic functionalities offered natively by the platform. The initial studies that led to the definition of system requirements and technical specifications are presented, together with some preliminary usability results obtained from several user tests. Starting from mid-2016, the proposed platform will be tested in three field trials in Italy, Belgium and the Netherlands.

    Assessment of Audio Interfaces for use in Smartphone Based Spatial Learning Systems for the Blind

    Recent advances in indoor positioning and mobile computing promise the development of smartphone-based indoor navigation systems. Current preliminary implementations of such systems use only visual interfaces, meaning that they are inaccessible to blind and low-vision users. According to the World Health Organization, about 39 million people in the world are blind. This underscores the need for the development and evaluation of non-visual interfaces for indoor navigation systems that support safe and efficient spatial learning and navigation behavior. This thesis research empirically evaluated several different approaches through which spatial information about the environment can be conveyed through audio. In the first experiment, blindfolded participants standing at an origin in a lab learned the distance and azimuth of target objects specified by four audio modes. The first three modes were perceptual interfaces and did not require cognitive mediation on the part of the user. The fourth mode was a non-perceptual mode in which object descriptions were given via spatial language using clock-face angles. After learning the targets through the four modes, the participants spatially updated the positions of the targets and localized them by walking to each of them from two indirect waypoints. The results indicate that the hand-motion-triggered mode was better than the head-motion-triggered mode and comparable to the auditory snapshot mode. In the second experiment, blindfolded participants learned target object arrays with two spatial audio modes and a visual mode. In the first mode head tracking was enabled, whereas in the second mode hand tracking was enabled. In the third mode, serving as a control, the participants were allowed to learn the targets visually. We again compared spatial updating performance with these modes and found no significant performance differences between them.
    These results indicate that 3D audio interfaces can be developed on sensor-rich, off-the-shelf smartphones, without the need for expensive head-tracking hardware. Finally, a third study evaluated room-layout learning performance by blindfolded participants with an Android smartphone. Three perceptual modes and one non-perceptual mode were tested for cognitive map development. As expected, the perceptual interfaces performed significantly better than the non-perceptual, language-based mode in an allocentric pointing judgment and in overall subjective rating. In sum, the perceptual interfaces led to better spatial learning performance and higher user ratings, and there was no significant difference between cognitive maps developed through spatial audio based on tracking the user's head or hand. These results have important implications, as they support the development of accessible, perceptually driven interfaces for smartphones.
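    The non-perceptual mode above describes targets via clock-face angles. As a rough illustration of what that spatial language encodes (the function and its conventions are assumptions for this sketch, not taken from the thesis), a clock-face direction and distance map to egocentric coordinates like this:

```python
import math

def clockface_to_xy(hour: int, distance_m: float) -> tuple:
    """Map a clock-face direction (12 = straight ahead, 3 = right)
    and a distance to egocentric (x, y): x = rightward, y = forward."""
    if not 1 <= hour <= 12:
        raise ValueError("hour must be in 1..12")
    azimuth = math.radians((hour % 12) * 30.0)  # each hour step is 30 degrees
    return (distance_m * math.sin(azimuth),     # rightward offset
            distance_m * math.cos(azimuth))     # forward offset

# A target "at 3 o'clock, two meters away" lies two meters to the right.
x, y = clockface_to_xy(3, 2.0)
```

    Unlike the perceptual modes, decoding such a description requires the listener to perform this conversion mentally, which is one plausible reason the language-based mode fared worse.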