10 research outputs found

    A help for assisting people based on a depth cameras system dedicated to elderly and dependent people

    In this paper, we propose a comfort-system development based on the Kinect sensor to assist people in commanding their own house using only gestures. The system uses multiple sensors to detect the person, recognize their gestures, and communicate through an IP/KNX gateway to act on actuators in the home. Thus, a simple gesture can turn the lights on or off, switch the TV on or off, move the shutters up or down, etc. We performed a test bed within the smart home of our University Institute of Technology in Blagnac.
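The gesture-to-actuator mapping described above amounts to a lookup from recognized gestures to telegrams sent through the IP/KNX gateway. A minimal sketch, assuming hypothetical gesture names, KNX group addresses, and a caller-supplied transport (the paper does not give its actual command set):

```python
# Sketch of mapping recognized gestures to home-automation commands,
# as in the Kinect + IP/KNX gateway setup described above.
# Gesture names, group addresses, and the send callable are
# illustrative assumptions, not the authors' implementation.

GESTURE_COMMANDS = {
    "swipe_up":   ("1/0/1", 1),  # shutters up (hypothetical KNX group address)
    "swipe_down": ("1/0/1", 0),  # shutters down
    "push":       ("0/0/1", 1),  # lights on
    "pull":       ("0/0/1", 0),  # lights off
}

def handle_gesture(gesture, send):
    """Look up the recognized gesture and forward a (group_address, value)
    pair to the IP/KNX gateway via the supplied send callable."""
    command = GESTURE_COMMANDS.get(gesture)
    if command is None:
        return False  # unrecognized gesture: do nothing
    address, value = command
    send(address, value)
    return True

# Example with a stub transport standing in for the gateway:
sent = []
handle_gesture("push", lambda addr, val: sent.append((addr, val)))
```

Keeping the transport as a callable keeps the gesture layer independent of whichever KNX/IP library actually talks to the gateway.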

    Towards the design of effective freehand gestural interaction for interactive TV

    As interactive devices become pervasive, people are beginning to look for more advanced interaction with televisions in the living room. Interactive television has the potential to offer a very engaging experience. But most common user tasks are still challenging with such systems, such as menu selection or text input, and little work has been done on understanding and supporting the effective design of freehand interaction with a TV in the domestic environment. In this paper, we report two studies investigating freehand gestural interaction with a consumer-level sensor that is suitable for TV use scenarios. In the first study, we investigate a range of design factors for tiled layout menu selection, including wearable feedback, push gesture depth, target size and position in motor space. The results show that tactile and audio feedback have no significant effect on user performance and preference, and these results inform potential designs for high selection performance. In the second study, we investigate using freehand gestures for the common TV user task of text input. We design and evaluate two virtual keyboard layouts and three freehand selection methods. Results show that ease of use and error tolerance can both be achieved using a text entry method utilizing a dual circle layout and an expanding target selection technique. Finally, we propose design guidelines for effective, usable and comfortable freehand gestural interaction for interactive TV based on the findings.
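The expanding target selection technique evaluated in the second study can be illustrated with a small sketch: the key nearest the cursor gets an enlarged effective activation radius, which tolerates small hand tremor during a push-to-select gesture. The geometry, radii, and expansion factor below are illustrative assumptions, not the paper's parameters:

```python
# Hedged sketch of expanding-target selection for a gestural keyboard.
# Only the key nearest the cursor is given the expanded radius; all
# numeric values here are made up for illustration.
import math

def select_key(cursor, keys, base_radius=1.0, expansion=1.5):
    """Return the label of the key whose (possibly expanded) circular
    region contains the cursor, or None if no key is hit.
    `keys` maps labels to (x, y) centres."""
    nearest = min(keys, key=lambda k: math.dist(cursor, keys[k]))
    for label, centre in keys.items():
        radius = base_radius * (expansion if label == nearest else 1.0)
        if math.dist(cursor, centre) <= radius:
            return label
    return None
```

A cursor slightly outside a key's nominal boundary still selects it when that key is the nearest one, while a cursor far from every key selects nothing.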

    Continuous recognition of one-handed and two-handed gestures using 3D full-body motion tracking sensors

    In this paper we present a new bimanual markerless gesture interface for 3D full-body motion tracking sensors, such as the Kinect. Our interface uses a probabilistic algorithm to incrementally predict users' intended one-handed and two-handed gestures while they are still being articulated. It supports scale- and translation-invariant recognition of arbitrarily defined gesture templates in real time. The interface supports two ways of gesturing commands in thin air to displays at a distance. First, users can use one-handed and two-handed gestures to directly issue commands. Second, users can use their non-dominant hand to modulate single-hand gestures. Our evaluation shows that the system recognizes one-handed and two-handed gestures with an accuracy of 92.7%-96.2%. Copyright © 2012 ACM.
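The scale and translation invariance described above is typically achieved by normalizing each trajectory before comparison. The authors use a probabilistic incremental recognizer; the simpler nearest-template sketch below only illustrates the normalization step, assuming 2D trajectories that have already been resampled to equal length:

```python
# Sketch of scale- and translation-invariant template matching.
# This is not the paper's probabilistic algorithm: it is a minimal
# nearest-template comparison showing the invariance idea.
# Trajectories are assumed to be resampled to the same point count.
import math

def normalize(points):
    """Translate the trajectory's centroid to the origin and scale its
    bounding box to unit size (translation/scale invariance)."""
    xs, ys = zip(*points)
    cx, cy = sum(xs) / len(xs), sum(ys) / len(ys)
    scale = max(max(xs) - min(xs), max(ys) - min(ys)) or 1.0
    return [((x - cx) / scale, (y - cy) / scale) for x, y in points]

def distance(a, b):
    """Mean point-wise distance between two equal-length trajectories."""
    return sum(math.dist(p, q) for p, q in zip(a, b)) / len(a)

def recognize(trajectory, templates):
    """Return the label of the closest normalized template."""
    norm = normalize(trajectory)
    return min(templates,
               key=lambda label: distance(norm, normalize(templates[label])))
```

Because both the input and every template are normalized the same way, a gesture drawn anywhere in the sensor's field of view, at any size, matches the same template.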

    Freehand Gestural Text Entry for Interactive TV


    Towards the design of effective freehand gestural interaction for interactive TV

    As interactive devices become pervasive, people are beginning to look for more advanced interaction with televisions in the living room. Interactive television has the potential to offer a very engaging experience. But most common user tasks, such as menu selection or text input, are still challenging with such systems, and little work has been done on understanding and supporting the effective design of freehand interaction with a TV in the living room. In this paper, we perform two studies investigating freehand gestural interaction with a consumer-level sensor that is suitable for TV scenarios. In the first study, we investigate a range of design factors for tiled layout menu selection, including wearable feedback, push gesture depth, target size and position in motor space. The results show that tactile and audio feedback have no significant effect on performance and preference, and these results inform potential designs for high selection performance. In the second study, we investigate the common TV user task of text input using freehand gestures. We design and evaluate two virtual keyboard layouts and three freehand selection methods. Results show that ease of use and error tolerance can both be achieved using a text entry method utilizing a dual circle layout and an expanding target selection technique. Finally, we propose design guidelines for effective, usable and comfortable freehand gestural interaction for interactive TV based on the findings. Comment: Preprint version of our paper accepted by the Journal of Intelligent and Fuzzy Systems

    Gesture-based interaction with modern interaction devices in digital manufacturing software

    Traditionally, equipment for human-computer interaction (HCI) has been a keyboard and a mouse, but in the last two decades, advances in technology have brought completely new HCI methods. Among others, the 3D world of digital manufacturing software has been controlled with the keyboard-and-mouse combination. Modern interaction devices enable more natural HCI in the form of gesture-based interaction. Touch screens are already a familiar method for interacting with computer environments, but HCI methods that utilize vision-based technologies are still quite unknown to many people. The possibility of using these new methods when interacting with the 3D world has not been studied before. The main research question of this MSc thesis was how modern interaction devices, namely a touch screen, the Microsoft Kinect, and the 3DConnexion SpacePilot PRO, can be used to interact with the 3D world. The other research question was how gesture-based control should be utilized with these devices. As part of this thesis work, interfaces between the 3D world and each of the devices were built. This thesis is divided into two main parts. The first, background section deals with the interaction devices and the 3D world, and also gives the information needed to fully utilize the possibilities of these interaction devices. The second part of the thesis is about building the interfaces for each of the above-mentioned devices. The study indicates that gesture-based control with these interaction devices cannot replace the functionality of a keyboard and a mouse, but each of the devices can be used for certain use cases in particular scenarios. Two-dimensional gesture-based control on a touch screen suits camera controls as well as basic manipulation tasks. Three-dimensional gesture-based control using the Kinect is applicable in a specially developed first-person mode. The Kinect interface requires a calm background and quite a large space around the user to work correctly. A suitable use scenario for this interface is giving a presentation in front of an audience in a conference room. The interface for the SpacePilot PRO suits well either controlling the camera or manipulating object positions and rotations in the 3D world.

    Characterization of multiphase flows integrating X-ray imaging and virtual reality

    Multiphase flows are used in a wide variety of industries, from energy production to pharmaceutical manufacturing. However, because of the complexity of the flows and difficulty measuring them, it is challenging to characterize the phenomena inside a multiphase flow. To help overcome this challenge, researchers have used numerous types of noninvasive measurement techniques to record the phenomena that occur inside the flow. One technique that has shown much success is X-ray imaging. While capable of high spatial resolutions, X-ray imaging generally has poor temporal resolution. This research improves the characterization of multiphase flows in three ways. First, an X-ray image intensifier is modified to use a high-speed camera to push the temporal limits of what is possible with current tube source X-ray imaging technology. Using this system, sample flows were imaged at 1000 frames per second without a reduction in spatial resolution. Next, the sensitivity of X-ray computed tomography (CT) measurements to changes in acquisition parameters is analyzed. While in theory CT measurements should be stable over a range of acquisition parameters, previous research has indicated otherwise. The analysis of this sensitivity shows that, while raw CT values are strongly affected by changes to acquisition parameters, if proper calibration techniques are used, acquisition parameters do not significantly influence the results for multiphase flow imaging. Finally, two algorithms are analyzed for their suitability to reconstruct an approximate tomographic slice from only two X-ray projections. These algorithms increase the spatial error in the measurement, as compared to traditional CT; however, they allow for very high temporal resolutions for 3D imaging. The only limit on the speed of this measurement technique is the image intensifier-camera setup, which was shown to be capable of imaging at a rate of at least 1000 FPS. 
    While advances in measurement techniques for multiphase flows are one part of improving multiphase flow characterization, the challenge extends beyond measurement techniques. For improved measurement techniques to be useful, the data must be accessible to scientists in a way that maximizes comprehension of the phenomena. To this end, this work also presents a system for using the Microsoft Kinect sensor to provide natural, non-contact interaction with multiphase flow data. Furthermore, this system is constructed so that it is trivial to add natural, non-contact interaction to immersive visualization applications. Therefore, multiple visualization applications can be built that are optimized for specific types of data, but all leverage the same natural interaction. Finally, the research concludes by proposing a system that integrates the improved X-ray measurements with the Kinect interaction system and a Cave Automatic Virtual Environment (CAVE) to present scientists with the multiphase flow measurements in an intuitive and inherently three-dimensional manner.

    User-centred design of a task-oriented upper-limb assessment system for stroke

    During rehabilitation from stroke, patients require assessment of their upper-limb motor control. Outcome measures can often be subjective, and objective data are required to supplement therapist and patient opinion on progress. This can be done through goniometry; however, goniometry can be time-consuming, can have inaccuracies of ±23°, and is therefore often not used. Motion tracking technology is a possible answer to this problem, but it can also be costly, time-consuming and unsuitable for the clinical environment. This thesis aims to provide an objective, digital intervention method for assessing range of motion (ROM) that supplements current outcome measures and is suitable for the clinical environment. This was achieved by creating a low-cost technology through a user-centred design approach. Requirements elicitation demonstrated that a motivational, portable, cost-effective, non-invasive, time-saving system for assessing functional activities was needed. Therefore, a system was created which utilised a Microsoft Kinect and an eZ430-Chronos wristwatch to track patients' movements during and/or outside of therapy sessions. Measurements can be taken in a matter of minutes and provide a large quantity of objective data regarding patient movement. The system was verified using healthy volunteers, by showing that the error rates produced by the system across 3 weeks in 10 able-bodied individuals were similar to the error rates produced by a physiotherapist using goniometry. The system was also validated in the clinical setting with 6 stroke patients, over 15 weeks, as selected by 6 occupational therapists and 3 physiotherapists in 2 NHS stroke wards. The approach created in this thesis is objective, repeatable, low-cost, portable, and non-invasive, allowing it to be the first tool for the objective assessment of upper-limb ROM which is efficiently designed and suitable for everyday use in stroke rehabilitation.
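A ROM measurement of the kind described above reduces, at its core, to computing joint angles from tracked skeleton positions. The thesis's actual pipeline (Kinect plus wristwatch sensor fusion) is more involved; the sketch below only shows the vector arithmetic for the angle at one joint (e.g. shoulder-elbow-wrist), with made-up coordinates:

```python
# Minimal sketch: joint angle from three tracked 3D skeleton points.
# Point names and coordinates are illustrative, not the thesis's data.
import math

def joint_angle(a, b, c):
    """Angle at point b, in degrees, formed by segments b->a and b->c."""
    v1 = [a[i] - b[i] for i in range(3)]
    v2 = [c[i] - b[i] for i in range(3)]
    dot = sum(x * y for x, y in zip(v1, v2))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(x * x for x in v2))
    # Clamp against floating-point drift before acos.
    cos_t = max(-1.0, min(1.0, dot / (n1 * n2)))
    return math.degrees(math.acos(cos_t))

# A fully extended arm (shoulder, elbow, wrist collinear) gives ~180 degrees:
angle = joint_angle((0.0, 1.0, 0.0), (0.0, 0.0, 0.0), (0.0, -1.0, 0.0))
```

Tracking this angle over repeated task-oriented movements is one way such a system could produce the objective, repeatable ROM data the thesis targets.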