297 research outputs found

    Human Arm Motion Tracking by Kinect Sensor Using Kalman Filter for Collaborative Robotics

    The rising interest in collaborative robotics has led to research on solutions that increase robot interaction with the environment. The development of methods that allow robots to recognize and track human motion is relevant for both safety and collaboration. A large quantity of data can be measured in real time by the Microsoft Kinect®, a well-known low-cost depth sensor able to recognize human presence and to provide postural information by extrapolating a skeleton. However, the Kinect sensor tracks motion with relatively low accuracy and jerky behavior, which can make it unsuitable for industrial applications in which the measurement of arm velocity is required. The present work proposes a filtering method that yields more accurate velocity measurements of the human arm, based on the raw data provided by the Kinect sensor. The arm motion is estimated by a Kalman filter built on a kinematic model, together with the imposition of fixed lengths for the skeleton links detected by the sensor. The development of the method is supported by experimental tests, and the achieved results suggest the practical applicability of the developed algorithms.
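The filtering idea can be illustrated with a minimal sketch: a 1-D constant-velocity Kalman filter that smooths one noisy joint coordinate and estimates its velocity. This is an illustrative toy, not the paper's full kinematic model with fixed link lengths; the frame time `dt` and the noise parameters `q` and `r` are assumptions.

```python
# Toy 1-D constant-velocity Kalman filter for a single joint coordinate.
# Illustrative sketch only; dt, q, and r are assumed values.

def kalman_track(measurements, dt=1.0 / 30, q=1.0, r=4e-4):
    """Return a list of (position, velocity) estimates."""
    x = [measurements[0], 0.0]            # state: [position, velocity]
    P = [[1.0, 0.0], [0.0, 1.0]]          # state covariance
    out = []
    for z in measurements[1:]:
        # Predict: constant-velocity model, x' = F x, P' = F P F^T + Q.
        x = [x[0] + dt * x[1], x[1]]
        P = [[P[0][0] + dt * (P[0][1] + P[1][0]) + dt * dt * P[1][1] + q * dt**4 / 4,
              P[0][1] + dt * P[1][1] + q * dt**3 / 2],
             [P[1][0] + dt * P[1][1] + q * dt**3 / 2,
              P[1][1] + q * dt * dt]]
        # Update: the sensor measures position only (H = [1, 0]).
        S = P[0][0] + r
        K = [P[0][0] / S, P[1][0] / S]
        y = z - x[0]
        x = [x[0] + K[0] * y, x[1] + K[1] * y]
        P = [[(1 - K[0]) * P[0][0], (1 - K[0]) * P[0][1]],
             [P[1][0] - K[1] * P[0][0], P[1][1] - K[1] * P[0][1]]]
        out.append((x[0], x[1]))
    return out
```

Fed a joint coordinate moving at constant speed, the velocity estimate settles to the true value after a short transient, which is the behavior that makes the filtered velocity usable where the raw differenced signal would be too jerky.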

    Continuous Human Activity Tracking over a Large Area with Multiple Kinect Sensors

    In recent years, researchers have been interested in the use of technology to enhance the healthcare and wellness of patients with dementia. Dementia symptoms are associated with a decline in thinking skills and memory severe enough to reduce a person’s ability to pay attention and perform daily activities. The progression of dementia can be assessed by monitoring the daily activities of the patients. This thesis encompasses continuous localization and behavioral analysis of a patient’s motion pattern over a wide indoor living space using multiple calibrated Kinect sensors connected over a network. The skeleton data from all the sensors are transferred to the host computer via TCP sockets into the Unity software, where they are integrated into a single world coordinate system using a calibration technique. The cameras are placed with some overlap in their fields of view to allow successful calibration and continuous tracking of the patients. Localization and behavioral data are stored in a CSV file for further analysis.
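The integration step can be sketched as follows: each sensor reports joints in its own frame, a per-sensor 4x4 homogeneous transform (obtained from calibration) maps them into the shared world frame, and duplicate observations from overlapping fields of view are averaged. The sensor IDs, joint names, and transform values below are hypothetical.

```python
import math

# Sketch: merge skeleton joints from multiple calibrated depth sensors
# into one world coordinate frame. Calibration values are hypothetical.

def apply_transform(T, p):
    """Apply a 4x4 homogeneous transform to a 3-D point."""
    x, y, z = p
    return tuple(T[i][0] * x + T[i][1] * y + T[i][2] * z + T[i][3] for i in range(3))

def make_transform(yaw, tx, ty, tz):
    """Rotation about the vertical (y) axis plus a translation."""
    c, s = math.cos(yaw), math.sin(yaw)
    return [[c, 0.0, s, tx],
            [0.0, 1.0, 0.0, ty],
            [-s, 0.0, c, tz],
            [0.0, 0.0, 0.0, 1.0]]

def to_world(per_sensor_joints, transforms):
    """per_sensor_joints: {sensor_id: {joint_name: (x, y, z)}} in sensor frames."""
    world = {}
    for sid, joints in per_sensor_joints.items():
        T = transforms[sid]
        for name, p in joints.items():
            world.setdefault(name, []).append(apply_transform(T, p))
    # Average duplicate observations from overlapping fields of view.
    return {n: tuple(sum(c[i] for c in cs) / len(cs) for i in range(3))
            for n, cs in world.items()}
```

With consistent calibration, two sensors observing the same joint produce the same world coordinate, so tracking continues seamlessly as the person moves between fields of view.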

    A General Framework for Motion Sensor Based Web Services

    With the development of motion-sensing technology, motion sensor based services have been put into a wide range of applications in recent years, and demand for consuming such services on mobile devices has emerged. However, as most motion sensors are designed for heavyweight clients such as PCs or game consoles, several technical challenges prevent motion sensors from being used by lightweight clients such as mobile devices: there is no direct way to connect the motion sensor to mobile devices, and most mobile devices do not have enough computational power to consume the motion sensor outputs. To address these problems, I designed and implemented a framework for publishing general motion sensor functionality as a RESTful web service that is accessible to mobile devices via HTTP connections. In the framework, a pure HTML5-based interface is delivered to the clients to ensure good accessibility, a WebSocket-based data transfer scheme is adopted to guarantee transfer efficiency, a server-side gesture pipeline is proposed to reduce the client-side computational burden, and a distributed architecture is designed to make the service scalable. Finally, I conducted three experiments to evaluate the framework's compatibility, scalability, and data transfer performance.
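The server-side gesture pipeline idea can be sketched with a toy detector: the client streams raw hand positions (e.g. over the WebSocket) and receives only high-level gesture events back, keeping the heavy processing off the mobile device. The swipe detector, its thresholds, and its event names are illustrative assumptions, not the framework's actual pipeline.

```python
# Toy server-side gesture pipeline: turns a stream of raw hand
# x-coordinates into coarse "swipe" events. Thresholds are assumptions.

class GesturePipeline:
    def __init__(self, min_distance=0.4, max_frames=15):
        self.min_distance = min_distance   # metres of horizontal travel
        self.max_frames = max_frames       # sliding-window length in frames
        self.history = []

    def push(self, hand_x):
        """Feed one hand x-coordinate; return a gesture event or None."""
        self.history.append(hand_x)
        if len(self.history) > self.max_frames:
            self.history.pop(0)
        travel = self.history[-1] - self.history[0]
        if abs(travel) >= self.min_distance:
            self.history.clear()           # avoid re-firing on the same motion
            return "swipe_right" if travel > 0 else "swipe_left"
        return None
```

In a deployment like the one described, only the small event strings would be pushed back to the phone, so the client never touches raw skeleton frames.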

    Characterization of multiphase flows integrating X-ray imaging and virtual reality

    Multiphase flows are used in a wide variety of industries, from energy production to pharmaceutical manufacturing. However, because of the complexity of the flows and difficulty measuring them, it is challenging to characterize the phenomena inside a multiphase flow. To help overcome this challenge, researchers have used numerous types of noninvasive measurement techniques to record the phenomena that occur inside the flow. One technique that has shown much success is X-ray imaging. While capable of high spatial resolutions, X-ray imaging generally has poor temporal resolution. This research improves the characterization of multiphase flows in three ways. First, an X-ray image intensifier is modified to use a high-speed camera to push the temporal limits of what is possible with current tube source X-ray imaging technology. Using this system, sample flows were imaged at 1000 frames per second without a reduction in spatial resolution. Next, the sensitivity of X-ray computed tomography (CT) measurements to changes in acquisition parameters is analyzed. While in theory CT measurements should be stable over a range of acquisition parameters, previous research has indicated otherwise. The analysis of this sensitivity shows that, while raw CT values are strongly affected by changes to acquisition parameters, if proper calibration techniques are used, acquisition parameters do not significantly influence the results for multiphase flow imaging. Finally, two algorithms are analyzed for their suitability to reconstruct an approximate tomographic slice from only two X-ray projections. These algorithms increase the spatial error in the measurement, as compared to traditional CT; however, they allow for very high temporal resolutions for 3D imaging. The only limit on the speed of this measurement technique is the image intensifier-camera setup, which was shown to be capable of imaging at a rate of at least 1000 FPS. 
While advances in measurement techniques for multiphase flows are one part of improving multiphase flow characterization, the challenge extends beyond measurement techniques. For improved measurement techniques to be useful, the data must be accessible to scientists in a way that maximizes comprehension of the phenomena. To this end, this work also presents a system for using the Microsoft Kinect sensor to provide natural, non-contact interaction with multiphase flow data. This system is constructed so that it is trivial to add natural, non-contact interaction to immersive visualization applications; multiple visualization applications can therefore be built that are optimized for specific types of data but all leverage the same natural interaction. Finally, the research concludes by proposing a system that integrates the improved X-ray measurements with the Kinect interaction system and a CAVE automatic virtual environment (CAVE) to present scientists with multiphase flow measurements in an intuitive and inherently three-dimensional manner.
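To see why only two projections under-determine a tomographic slice, consider a toy 2-D version: given only the row sums and column sums of a grid, one simple estimate that is consistent with both projections is the normalized outer product. This is not one of the algorithms analyzed in the work; it only illustrates that a two-view estimate can reproduce both projections exactly while still differing from the true slice, which is the spatial-error trade-off described above.

```python
# Toy two-projection reconstruction: estimate a 2-D slice from its row
# and column sums via a normalized outer product. Illustrative only.

def two_view_estimate(row_proj, col_proj):
    """Return a grid whose row/column sums match the given projections."""
    total = sum(row_proj)
    assert abs(total - sum(col_proj)) < 1e-9, "projections must be consistent"
    return [[r * c / total if total else 0.0 for c in col_proj]
            for r in row_proj]
```

Any slice with the same marginals maps to the same estimate, so the reconstruction is only approximate spatially, but it can be computed per frame pair, enabling the very high temporal resolution discussed above.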

    Low-Cost Sensors and Biological Signals

    Many sensors are currently available at prices below USD 100 and cover a wide range of biological signals: motion, muscle activity, heart rate, etc. Such low-cost sensors have metrological features that allow them to be used in everyday life and in clinical applications, where gold-standard equipment is often too expensive or too time-consuming to use. The selected papers present current applications of low-cost sensors in domains such as physiotherapy, rehabilitation, and affective technologies. The results cover various aspects of low-cost sensor technology, from hardware design to software optimization.

    OPTAR: Automatic Coordinate Frame Registration between OpenPTrack and Google ARCore using Ambient Visual Features

    This thesis presents a system for the estimation of the coordinate frame registration between OpenPTrack and Google ARCore. OpenPTrack is a multi-camera solution that integrates people tracking, skeleton tracking, and pose recognition. ARCore is a framework for the development of Augmented Reality applications on smartphones. The transformation between the two coordinate frames is obtained by exploiting visual features observed by both the phone and the OpenPTrack cameras.
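The registration problem can be sketched in a simplified 2-D (ground-plane) form: given feature points expressed in both coordinate frames, a least-squares rotation and translation aligns one frame to the other. The actual system works with full visual features and 3-D poses; the closed-form 2-D solution below is only an illustration of the underlying estimation step.

```python
import math

# Simplified 2-D rigid registration between two coordinate frames from
# point correspondences (least squares). Illustrative sketch only.

def register_2d(src, dst):
    """Estimate (theta, (tx, ty)) such that dst ~= R(theta) @ src + t."""
    n = len(src)
    csx = sum(p[0] for p in src) / n
    csy = sum(p[1] for p in src) / n
    cdx = sum(q[0] for q in dst) / n
    cdy = sum(q[1] for q in dst) / n
    # Accumulate dot and cross products of the centred correspondences.
    dot = cross = 0.0
    for (px, py), (qx, qy) in zip(src, dst):
        ax, ay = px - csx, py - csy
        bx, by = qx - cdx, qy - cdy
        dot += ax * bx + ay * by
        cross += ax * by - ay * bx
    theta = math.atan2(cross, dot)
    c, s = math.cos(theta), math.sin(theta)
    tx = cdx - (c * csx - s * csy)
    ty = cdy - (s * csx + c * csy)
    return theta, (tx, ty)
```

Once the transform is known, any point tracked in one frame (e.g. a person's position from the multi-camera tracker) can be mapped into the other frame (e.g. the phone's AR frame) and vice versa.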

    HeadOn: Real-time Reenactment of Human Portrait Videos

    We propose HeadOn, the first real-time source-to-target reenactment approach for complete human portrait videos that enables transfer of torso and head motion, facial expression, and eye gaze. Given a short RGB-D video of the target actor, we automatically construct a personalized geometry proxy that embeds a parametric head, eye, and kinematic torso model. A novel real-time reenactment algorithm employs this proxy to photo-realistically map the captured motion from the source actor to the target actor. On top of the coarse geometric proxy, we propose a video-based rendering technique that composites the modified target portrait video via view- and pose-dependent texturing, and creates photo-realistic imagery of the target actor under novel torso and head poses, facial expressions, and gaze directions. To this end, we propose robust tracking of the face and torso of the source actor. We extensively evaluate our approach and show that it enables much greater flexibility in creating realistic reenacted output videos. Comment: Video: https://www.youtube.com/watch?v=7Dg49wv2c_g Presented at Siggraph'1

    Multiple Action Recognition for Video Games (MARViG)

    Action recognition research has historically focused on increasing accuracy on datasets captured in highly controlled environments, and perfect or near-perfect offline recognition accuracy on scripted datasets has been achieved. The aim of this thesis is to address the more complex problem of online action recognition with low latency in real-world scenarios. To fulfil this aim, two new multi-modal gaming datasets were captured and three novel algorithms for online action recognition were proposed. The two datasets, G3D and G3Di, for real-time action recognition with multiple actions and multi-modal data, were captured and publicly released; G3Di was captured using a novel game-sourcing method so that the actions are realistic. The three proposed algorithms for online action recognition with low latency are as follows. Firstly, Dynamic Feature Selection combines the discriminative power of Random Forests for feature selection with an ensemble of AdaBoost classifiers for dynamic classification. Secondly, Clustered Spatio-Temporal Manifolds model the dynamics of human actions with style-invariant action templates, combined with Dynamic Time Warping for execution-rate invariance. Finally, a Hierarchical Transfer Learning framework comprises a novel transfer learning algorithm to detect compound actions, in addition to hierarchical interaction detection to recognise the actions and interactions of multiple subjects. The proposed algorithms run in real time with low latency, ensuring they are suitable for a wide range of natural user interface applications, including gaming. State-of-the-art results were achieved for online action recognition. Experimental results indicate the higher complexity of the G3Di dataset in comparison to existing gaming datasets, highlighting the importance of this dataset for designing algorithms suited to realistic interactive applications.
This thesis has advanced the study of realistic action recognition and is expected to serve as a basis for further study within the research community.
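The execution-rate invariance mentioned above relies on dynamic time warping, which aligns two sequences that perform the same action at different speeds. A minimal 1-D DTW distance, illustrative only (real action templates would be sequences of multi-dimensional pose features, not scalars):

```python
# Minimal dynamic time warping (DTW) distance between two 1-D sequences.
# Illustrative sketch of execution-rate-invariant matching.

def dtw(a, b):
    """Return the DTW distance between sequences a and b."""
    inf = float("inf")
    n, m = len(a), len(b)
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Best alignment so far: insertion, deletion, or match.
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]
```

A sequence compared against a time-stretched copy of itself yields distance zero, which is exactly the property that lets one style-invariant template match both fast and slow executions of an action.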