7,632 research outputs found

    Symmetric and asymmetric action integration during cooperative object manipulation in virtual environments

    Cooperation between multiple users in a virtual environment (VE) can take place at one of three levels. These are defined as where users can perceive each other (Level 1), individually change the scene (Level 2), or simultaneously act on and manipulate the same object (Level 3). Despite representing the highest level of cooperation, multi-user object manipulation has rarely been studied. This paper describes a behavioral experiment in which the piano movers' problem (maneuvering a large object through a restricted space) was used to investigate object manipulation by pairs of participants in a VE. Participants' interactions with the object were integrated either symmetrically or asymmetrically. The former allowed only the common component of participants' actions to take place, whereas the latter used the mean of their actions. Symmetric action integration was superior for sections of the task in which both participants had to perform similar actions, but if participants had to move in different ways (e.g., one maneuvering through a narrow opening while the other traveled down a wide corridor) then asymmetric integration was superior. With both forms of integration, the extent to which participants coordinated their actions was poor, and this led to a substantial cooperation overhead (the reduction in performance caused by having to cooperate with another person).
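
    A minimal sketch of how the two integration schemes might combine paired control inputs, assuming each participant's action is a displacement vector; the decomposition used for the "common component" is an illustrative interpretation, not the authors' implementation.

```python
import numpy as np

def asymmetric_integration(action_a, action_b):
    """Asymmetric scheme: apply the mean of the two users' displacement vectors."""
    return (np.asarray(action_a, float) + np.asarray(action_b, float)) / 2.0

def symmetric_integration(action_a, action_b):
    """Symmetric scheme: apply only the component of motion the two users share."""
    a = np.asarray(action_a, float)
    b = np.asarray(action_b, float)
    mean = (a + b) / 2.0
    norm = np.linalg.norm(mean)
    if norm == 0.0:
        return np.zeros_like(a)            # the users cancel each other out
    direction = mean / norm                # shared direction of intended motion
    common = min(np.dot(a, direction), np.dot(b, direction))
    return max(common, 0.0) * direction    # only motion both users contribute

# One user pushes straight ahead, the other pushes ahead and to the side.
print(asymmetric_integration([1.0, 0.0], [1.0, 1.0]))  # mean of the two actions
print(symmetric_integration([1.0, 0.0], [1.0, 1.0]))   # smaller shared component
```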

    Ambient Gestures

    We present Ambient Gestures, a novel gesture-based system designed to support ubiquitous ‘in the environment’ interactions with everyday computing technology. Hand gestures and audio feedback allow users to control computer applications without reliance on a graphical user interface, and without having to switch from the context of a non-computer task to the context of the computer. The Ambient Gestures system is composed of a vision recognition software application, a set of gestures to be processed by a scripting application, and a navigation and selection application that is controlled by the gestures. This system allows us to explore gestures as the primary means of interaction within a multimodal, multimedia environment. In this paper we describe the Ambient Gestures system, define the gestures and the interactions that can be achieved in this environment, and present a formative study of the system. We conclude with a discussion of our findings and future applications of Ambient Gestures in ubiquitous computing.
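
    The abstract describes a three-part pipeline (vision recogniser, scripting layer, gesture-controlled navigation and selection application). The sketch below illustrates one plausible shape of the dispatch step that maps recognised gestures to application commands; the gesture names, commands and audio-feedback stand-in are assumptions, not the actual Ambient Gestures code.

```python
from typing import Callable, Dict

class GestureDispatcher:
    """Illustrative gesture-to-command dispatch between a recogniser and an application."""

    def __init__(self) -> None:
        self._bindings: Dict[str, Callable[[], None]] = {}

    def bind(self, gesture: str, command: Callable[[], None]) -> None:
        """Associate a recognised gesture name with an application command."""
        self._bindings[gesture] = command

    def on_gesture(self, gesture: str) -> None:
        """Called by the vision layer whenever a gesture is recognised."""
        command = self._bindings.get(gesture)
        if command is None:
            return                                   # unknown gestures are ignored
        command()                                    # trigger the bound action
        print(f"audio cue: '{gesture}' executed")    # stand-in for audio feedback

# Hypothetical bindings for a navigation-and-selection application.
dispatcher = GestureDispatcher()
dispatcher.bind("swipe_right", lambda: print("next item"))
dispatcher.bind("hold_open_palm", lambda: print("select current item"))
dispatcher.on_gesture("swipe_right")
```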

    Vibrotactile pedals: provision of haptic feedback to support economical driving

    Haptic feedback is currently an underused modality in the driving environment, especially among vehicle manufacturers. This exploratory study evaluates the effects of a vibrotactile (or haptic) accelerator pedal on car driving performance and perceived workload using a driving simulator. A stimulus was triggered when the driver exceeded a 50% throttle threshold, beyond which throttle use is deemed excessive for economical driving. Results showed significant decreases in mean acceleration values, and in maximum and excess throttle use, when the haptic pedal was active compared with a baseline condition. As well as these positive changes to driver behaviour, subjective workload decreased when driving with the haptic pedal compared to when drivers were simply asked to drive economically. The literature suggests that the haptic processing channel offers a largely untapped resource in the driving environment, and could provide information without overloading the other attentional resource pools used in driving.
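
    A minimal sketch of the trigger logic described above (a vibrotactile stimulus activated when throttle exceeds the 50% economy threshold); the function and actuator messages are hypothetical stand-ins for the simulator hardware.

```python
THROTTLE_THRESHOLD = 0.5   # 50% throttle, the study's economy threshold

def update_haptic_pedal(throttle_position: float, vibrating: bool) -> bool:
    """Return the new vibration state for the current throttle position.

    `throttle_position` is expected in the range [0.0, 1.0]; the actuator
    calls are illustrative placeholders for the simulator's pedal hardware.
    """
    should_vibrate = throttle_position > THROTTLE_THRESHOLD
    if should_vibrate and not vibrating:
        print("actuator: start vibrotactile stimulus")
    elif not should_vibrate and vibrating:
        print("actuator: stop vibrotactile stimulus")
    return should_vibrate

# Simulated trace of throttle samples from the driving simulator.
state = False
for sample in (0.2, 0.45, 0.62, 0.71, 0.4):
    state = update_haptic_pedal(sample, state)
```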

    Virtual Reality Games for Motor Rehabilitation

    This paper presents a fuzzy logic-based method to track user satisfaction without the need for devices to monitor users' physiological conditions. User satisfaction is the key to any product's acceptance; computer applications and video games provide a unique opportunity to tailor the environment to each user's needs. We have implemented a non-adaptive fuzzy logic model of emotion, based on the emotional component of the Fuzzy Logic Adaptive Model of Emotion (FLAME) proposed by El-Nasr, to estimate player emotion in Unreal Tournament 2004. In this paper we describe the implementation of this system and present the results of one of several play tests. Our research contradicts the current literature, which suggests that physiological measurements are needed; we show that it is possible to use a software-only method to estimate user emotion.
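
    The abstract does not detail the fuzzy rules, so the sketch below only illustrates the general shape of a fuzzy appraisal of in-game observations; the membership functions, inputs and rules are invented for illustration and are not El-Nasr's FLAME model or the authors' implementation.

```python
def triangular(x: float, left: float, peak: float, right: float) -> float:
    """Triangular fuzzy membership function."""
    if x <= left or x >= right:
        return 0.0
    if x <= peak:
        return (x - left) / (peak - left)
    return (right - x) / (right - peak)

def estimate_satisfaction(damage_taken: float, kills_per_minute: float) -> float:
    """Toy fuzzy appraisal: map two in-game observations onto a satisfaction
    estimate in [0, 1] using hand-written rules (all values illustrative)."""
    # Fuzzify the inputs into membership grades.
    low_damage  = triangular(damage_taken, -1.0, 0.0, 0.5)
    high_damage = triangular(damage_taken, 0.5, 1.0, 2.0)
    doing_well  = triangular(kills_per_minute, 0.5, 2.0, 4.0)

    # Rules: doing well with little damage -> satisfied; heavy damage -> frustrated.
    satisfied  = min(doing_well, low_damage)
    frustrated = high_damage

    # Defuzzify with a weighted average of the rule outputs.
    total = satisfied + frustrated
    if total == 0.0:
        return 0.5                      # neutral when no rule fires
    return (satisfied * 1.0 + frustrated * 0.0) / total

print(estimate_satisfaction(damage_taken=0.2, kills_per_minute=2.0))
```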

    m-Reading: Fiction reading from mobile phones

    Mobile phones are reportedly the most rapidly expanding e-reading device worldwide. However, the embodied, cognitive and affective implications of smartphone-supported fiction reading for leisure (m-reading) have yet to be investigated empirically. Revisiting the theoretical work of digitization scholar Anne Mangen, we argue that the digital reading experience is not only contingent on patterns of embodied reader–device interaction (Mangen, 2008 and later) but also embedded in the immediate environment and broader situational context. We call this the situation constraint. Its application to Mangen’s general framework enables us to identify four novel research areas, wherein m-reading should be investigated with regard to its unique affordances. The areas are reader–device affectivity, situated embodiment, attention training and long-term immersion.

    The simultaneity of complementary conditions: re-integrating and balancing analogue and digital matter(s) in basic architectural education

    The currently prevailing, globally established digital procedures in basic architectural education, which produce well-behaved, seemingly attractive and up-to-date projects, spaces and first general research on all scale levels, apparently present a growing number of deficiencies. These limitations surface only gradually, as the overall state of things is generally deemed satisfactory. Some skills, such as “old-fashioned” analogue drawing, are gradually eased out of undergraduate curricula and the overall modus operandi, owing to their apparent slowness and inefficiency compared with the rapid readiness, malleability and unproblematic, quotidian availability of various digital media. While this state of things is understandable, it nevertheless presents a definite challenge: the challenge of questioning how the assessment of conditions, and especially their representation, is conducted prior to contextual architectural action(s) of any kind.

    Learning to Self-Manage by Intelligent Monitoring, Prediction and Intervention

    Despite the growing prevalence of multimorbidities, current digital self-management approaches still prioritise single conditions. The future of out-of-hospital care requires researchers to expand their horizons; integrated assistive technologies should enable people to live their life well regardless of their chronic conditions. Yet many of the current digital self-management technologies are not equipped to handle this problem. In this position paper, we suggest that the solution to these issues is a model-aware and data-agnostic platform formed on the basis of a tailored self-management plan and three integral concepts: Monitoring (M) multiple information sources to empower Predictions (P) and trigger intelligent Interventions (I). Here we present our ideas for the formation of such a platform, and its potential impact on quality of life for sufferers of chronic conditions.
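
    A minimal sketch of how the Monitoring, Prediction and Intervention loop described above might be wired together; the interfaces and the step-count example are illustrative assumptions, not the authors' platform.

```python
from typing import Dict, List, Protocol

class Monitor(Protocol):
    def read(self) -> Dict[str, float]: ...

class Predictor(Protocol):
    def predict(self, observations: Dict[str, float]) -> Dict[str, float]: ...

class Intervention(Protocol):
    def maybe_trigger(self, risks: Dict[str, float]) -> None: ...

class StepCountMonitor:
    """Hypothetical monitor: daily step count from a wearable."""
    def read(self) -> Dict[str, float]:
        return {"steps": 1800.0}

class InactivityPredictor:
    """Hypothetical predictor: flags risk when activity stays low."""
    def predict(self, observations: Dict[str, float]) -> Dict[str, float]:
        risk = 1.0 if observations.get("steps", 0.0) < 3000 else 0.0
        return {"inactivity_risk": risk}

class ReminderIntervention:
    """Hypothetical intervention: nudges the user when risk is high."""
    def maybe_trigger(self, risks: Dict[str, float]) -> None:
        if risks.get("inactivity_risk", 0.0) > 0.5:
            print("intervention: suggest a short walk today")

def run_cycle(monitors: List[Monitor], predictor: Predictor,
              interventions: List[Intervention]) -> None:
    """One Monitoring -> Prediction -> Intervention pass."""
    observations: Dict[str, float] = {}
    for monitor in monitors:
        observations.update(monitor.read())
    risks = predictor.predict(observations)
    for intervention in interventions:
        intervention.maybe_trigger(risks)

run_cycle([StepCountMonitor()], InactivityPredictor(), [ReminderIntervention()])
```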