128 research outputs found

    Markerless Tracking Using Polar Correlation Of Camera Optical Flow

    We present a novel, real-time, markerless vision-based tracking system employing a rigid orthogonal configuration of two pairs of opposing cameras. Our system uses optical flow over sparse features to overcome the limitation of vision-based systems that require markers or a pre-loaded model of the physical environment. We show how opposing cameras enable cancellation of common components of optical flow, leading to an efficient tracking algorithm that captures five degrees of freedom, including direction of translation and angular velocity. Experiments comparing our device with an electromagnetic tracker show that its average tracking accuracy is 80% over 185 frames, and that it can track large-range motions even in outdoor settings. We also show how opposing cameras in vision-based inside-looking-out systems can be used for gesture recognition. To demonstrate our approach, we discuss three different algorithms that recover motion parameters of the multi-camera rig at different levels of completeness. Experimental results show gesture recognition accuracies of 88.0%, 90.7% and 86.7% for the three techniques, respectively, across a set of 15 gestures
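    The cancellation property described above can be illustrated with a toy model (our simplification, not the paper's exact algorithm): if rotation about the rig's shared axis contributes the same flow to both opposing cameras while translation contributes equal-and-opposite flow, the two effects separate under addition and subtraction. The function name and sign conventions below are illustrative assumptions.

```python
import numpy as np

def decompose_opposing_flow(flow_a, flow_b):
    """Split mean optical-flow vectors from two opposing cameras into a
    'common' component (shared by both views, e.g. rotation about the
    shared axis) and a 'differential' component (opposite in the two
    views, e.g. translation along the image plane).

    flow_a, flow_b: mean 2-D flow vectors (pixels/frame), each in its
    own camera's image coordinates.
    """
    flow_a = np.asarray(flow_a, dtype=float)
    flow_b = np.asarray(flow_b, dtype=float)
    common = 0.5 * (flow_a + flow_b)        # differential part cancels
    differential = 0.5 * (flow_a - flow_b)  # common part cancels
    return common, differential

# Synthetic example: rotation adds +2 px to both cameras,
# translation adds +3 px to camera A and -3 px to camera B.
rot, trans = 2.0, 3.0
fa = np.array([rot + trans, 0.0])
fb = np.array([rot - trans, 0.0])
common, diff = decompose_opposing_flow(fa, fb)
print(common)  # → [2. 0.]  (rotation-like component)
print(diff)    # → [3. 0.]  (translation-like component)
```

The same linear separation underlies why an opposing-camera rig can isolate motion components that a single camera confounds.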

    Object Tracking With Opposing Image Capture Devices (US)

    Systems and methods of compensating for tracking motion of an object are disclosed. One such method includes receiving a series of images captured by each of a plurality of image capture devices, arranged in an orthogonal configuration of two opposing pairs. The method further includes computing a series of positions and orientations of the object by processing the images captured by each of the image capture devices

    Spatiotemporal analysis of human actions using RGB-D cameras

    Markerless human motion analysis has strong potential to provide a cost-efficient solution for action recognition and body pose estimation. Many applications, including human-computer interaction, video surveillance, content-based video indexing and automatic annotation, will benefit from a robust solution to these problems. Depth-sensing technologies have in recent years positively changed the climate of automated vision-based human action recognition, a problem deemed very difficult due to the various ambiguities inherent to conventional video. In this work, first a large set of invariant spatiotemporal features is extracted from skeleton joints in motion (retrieved from a depth sensor) and evaluated as a baseline. Next we introduce a discriminative Random Decision Forest-based feature selection framework capable of reaching impressive action recognition performance when combined with a linear SVM classifier. This approach improves upon the baseline obtained with the whole feature set while using a significantly smaller number of features (one tenth of the original). The approach can also be used to provide insights into the spatiotemporal dynamics of human actions. A novel therapeutic action recognition dataset (WorkoutSU-10) is presented, which we use as a benchmark to evaluate the reliability of the proposed methods. The dataset has recently been published publicly as a contribution to the action recognition community. In addition, an interactive action evaluation application is developed using the proposed methods to help with real-life problems such as fall detection in elderly people or automated therapy programs for patients with motor disabilities
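    The forest-based feature selection described above can be sketched with scikit-learn (our choice of toolkit; the thesis's exact pipeline, dataset and hyperparameters are not reproduced here). Synthetic data stands in for the skeleton-joint features, and the top tenth of features ranked by forest importance feeds a linear SVM.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

# Synthetic stand-in for spatiotemporal skeleton features:
# only the first 10 of 100 dimensions carry the class signal.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 100))
y = (X[:, :10].sum(axis=1) > 0).astype(int)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

# Rank features by Random Decision Forest importance...
forest = RandomForestClassifier(n_estimators=100, random_state=0)
forest.fit(Xtr, ytr)
top = np.argsort(forest.feature_importances_)[::-1][:10]  # keep one tenth

# ...then classify with a linear SVM on the selected subset.
svm = LinearSVC(dual=False).fit(Xtr[:, top], ytr)
print("held-out accuracy:", round(svm.score(Xte[:, top], yte), 2))
```

In practice the selection threshold (here a fixed tenth) would be tuned on validation data, mirroring the paper's observation that a small selected subset can match or beat the full feature set.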

    Biomechanical Markerless Motion Classification Based On Stick Model Development For Shop Floor Operator

    Motion classification systems mark a new era of industrial technology, using automation to monitor task performance and validate the quality of manual processes. However, the current research trend points towards marker-based motion capture systems, which demand expensive and extensive equipment setups; markerless motion classification models remain underdeveloped in the manufacturing industry. Therefore, this research aims to develop a markerless motion classification model for shop-floor operators using stick-model augmentation of motion video, and to identify the best data mining strategy for industrial motion classification. Eight participants aged 23 to 24 performed four distinct motion sequences: moving a box, moving a pail, sweeping and mopping the floor, each recorded in a separate video. All videos were augmented with a stick model made up of keypoints and lines using a programming model that incorporated the COCO dataset and the OpenCV module to estimate the coordinates of body joints for the stick-model overlay. The data extracted from the stick model comprised the initial velocity, cumulative velocity and acceleration of each body joint. The motion data mining process included data normalization, random subsampling and data classification to discover the information that best separates the motion classes. The extracted motion vector data were normalized with three different techniques, decimal scaling normalization, min-max normalization and Z-score normalization, creating three datasets for further data mining. All datasets were tested with eight classifiers to determine the best machine learning classifier and normalization technique for the model data. The eight classifiers were ZeroR, OneR, J48, random forest, random tree, Naïve Bayes, K-nearest neighbours (K = 5) and multilayer perceptron.
The results showed that the random forest classifier performed best, with the highest classification accuracy on its min-max-normalized dataset: 81.75% before random subsampling and 92.37% on the resampled dataset. Min-max normalization gives only a slight advantage over the other normalization techniques on the same dataset. However, the random subsampling method dramatically improves classification accuracy by eliminating noisy data and replacing it with replicated instances to balance the classes. The best normalization method and data mining classifier were inserted into the motion classification model to complete the development process
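The three normalization techniques compared above have standard textbook forms; a minimal sketch follows (the function names and sample values are ours, not from the thesis):

```python
import numpy as np

def decimal_scaling(x):
    """Divide by 10**j, the smallest power of ten making max|x| <= 1."""
    j = np.ceil(np.log10(np.abs(x).max()))
    return x / (10.0 ** j)

def min_max(x):
    """Rescale linearly onto [0, 1]."""
    return (x - x.min()) / (x.max() - x.min())

def z_score(x):
    """Center to zero mean and scale to unit standard deviation."""
    return (x - x.mean()) / x.std()

# Hypothetical joint-speed feature values (pixels/frame).
v = np.array([3.0, 12.0, 45.0, 78.0, 120.0])
print(decimal_scaling(v))  # all magnitudes <= 1
print(min_max(v))          # spans [0, 1]
print(z_score(v))          # zero mean, unit variance
```

All three are monotone rescalings, which is why the thesis finds only slight accuracy differences between them; distance-based classifiers such as K-nearest neighbours are the most sensitive to the choice.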

    Human Motion Analysis for Efficient Action Recognition

    Automatic understanding of human actions is at the core of several application domains, such as content-based indexing, human-computer interaction, surveillance, and sports video analysis. Recent advances in digital platforms and the exponential growth of video and image data have brought an urgent quest for intelligent frameworks that automatically analyze human motion and predict the corresponding action from visual data and sensor signals. This thesis presents a collection of methods targeting human action recognition using different action modalities. The first method uses the appearance modality and classifies human actions based on heterogeneous global- and local-based features of scene and human-body appearance. The second method harnesses 2D and 3D articulated human poses and analyzes body motion using a discriminative combination of histograms of the parts' velocities, locations, and correlations for action recognition. The third method presents an optimal scheme for combining the probabilistic predictions from different action modalities by solving a constrained quadratic optimization problem. In addition to the action classification task, we present a study that compares the utility of different pose variants in motion analysis for human action recognition; in particular, we compare the recognition performance when 2D and 3D poses are used. Finally, we demonstrate the efficiency of our pose-based method in spotting and segmenting motion gestures in real time from a continuous input video stream for the recognition of Italian sign language gestures
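    The modality-fusion idea above, learning a convex combination of per-modality probabilities by constrained quadratic optimization, can be sketched in a one-weight, two-modality special case where the problem has a closed form (the function and data below are illustrative assumptions, not the thesis's formulation):

```python
import numpy as np

def fuse_two_modalities(p1, p2, y):
    """Find w in [0, 1] minimising ||w*p1 + (1-w)*p2 - y||^2.

    p1, p2: per-sample positive-class probabilities from two
    modalities; y: 0/1 ground truth. This is a 1-D constrained
    quadratic problem: solve unconstrained, then clip to [0, 1].
    """
    d = p1 - p2
    r = y - p2
    denom = np.dot(d, d)
    w = np.dot(d, r) / denom if denom > 0 else 0.5
    return float(np.clip(w, 0.0, 1.0))

# Toy example: modality 1 is accurate, modality 2 is uninformative.
y  = np.array([1, 0, 1, 1, 0], dtype=float)
p1 = np.array([0.9, 0.1, 0.8, 0.7, 0.2])
p2 = np.array([0.5, 0.5, 0.5, 0.5, 0.5])
w = fuse_two_modalities(p1, p2, y)
print(round(w, 2))  # → 1.0 (all weight on the accurate modality)
```

With more than two modalities the same objective becomes a quadratic program over the probability simplex, which would typically be handed to a QP solver.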

    Human Pose Estimation from Monocular Images : a Comprehensive Survey

    Human pose estimation refers to estimating the locations of body parts and how they are connected in an image. Human pose estimation from monocular images has wide applications (e.g., image indexing). Several surveys on human pose estimation can be found in the literature, but each focuses on a certain category, for example model-based approaches or human motion analysis. As far as we know, an overall review of this problem domain has yet to be provided. Furthermore, recent advancements based on deep learning have brought novel algorithms for this problem. In this paper, a comprehensive survey of human pose estimation from monocular images is carried out, including milestone works and recent advancements. Following a standard pipeline for solving computer vision problems, this survey splits the problem into several modules: feature extraction and description, human body models, and modeling methods. Problem modeling methods are categorized in two ways: top-down versus bottom-up methods, and generative versus discriminative methods. Considering that one direct application of human pose estimation is to provide initialization for automatic video surveillance, additional sections cover motion-related methods in all modules: motion features, motion models, and motion-based methods. Finally, the paper also collects 26 publicly available data sets for validation and describes frequently used error measurement methods

    Automatic Video-based Analysis of Human Motion


    Characterization of multiphase flows integrating X-ray imaging and virtual reality

    Multiphase flows are used in a wide variety of industries, from energy production to pharmaceutical manufacturing. However, because of the complexity of the flows and difficulty measuring them, it is challenging to characterize the phenomena inside a multiphase flow. To help overcome this challenge, researchers have used numerous types of noninvasive measurement techniques to record the phenomena that occur inside the flow. One technique that has shown much success is X-ray imaging. While capable of high spatial resolutions, X-ray imaging generally has poor temporal resolution. This research improves the characterization of multiphase flows in three ways. First, an X-ray image intensifier is modified to use a high-speed camera to push the temporal limits of what is possible with current tube source X-ray imaging technology. Using this system, sample flows were imaged at 1000 frames per second without a reduction in spatial resolution. Next, the sensitivity of X-ray computed tomography (CT) measurements to changes in acquisition parameters is analyzed. While in theory CT measurements should be stable over a range of acquisition parameters, previous research has indicated otherwise. The analysis of this sensitivity shows that, while raw CT values are strongly affected by changes to acquisition parameters, if proper calibration techniques are used, acquisition parameters do not significantly influence the results for multiphase flow imaging. Finally, two algorithms are analyzed for their suitability to reconstruct an approximate tomographic slice from only two X-ray projections. These algorithms increase the spatial error in the measurement, as compared to traditional CT; however, they allow for very high temporal resolutions for 3D imaging. The only limit on the speed of this measurement technique is the image intensifier-camera setup, which was shown to be capable of imaging at a rate of at least 1000 FPS. 
While advances in measurement techniques for multiphase flows are one part of improving multiphase flow characterization, the challenge extends beyond measurement techniques. For improved measurement techniques to be useful, the data must be accessible to scientists in a way that maximizes comprehension of the phenomena. To this end, this work also presents a system that uses the Microsoft Kinect sensor to provide natural, non-contact interaction with multiphase flow data. Furthermore, the system is constructed so that it is trivial to add natural, non-contact interaction to immersive visualization applications; multiple visualization applications can therefore be built, each optimized for a specific type of data, while all leveraging the same natural interaction. Finally, the research concludes by proposing a system that integrates the improved X-ray measurements with the Kinect interaction system and a cave automatic virtual environment (CAVE) to present scientists with multiphase flow measurements in an intuitive and inherently three-dimensional manner

    Characterization and modelling of complex motion patterns

    Movement analysis underlies any interaction with the world, and the survival of living beings depends entirely on the efficiency of such analysis. Visual systems have developed remarkably efficient mechanisms that analyze motion at different levels, allowing them to recognize objects in dynamic and cluttered environments. In artificial vision, there is a wide spectrum of applications for which the study of complex movements is crucial to recover salient information. Although each domain may differ in terms of scenarios, complexity and relationships, a common denominator is that all of them require a dynamic understanding that captures the relevant information. Overall, current strategies are highly dependent on appearance characterization and are usually restricted to controlled scenarios. This thesis proposes a computational framework that is inspired by known motion-perception mechanisms and structured as a set of modules. Each module is in turn composed of a set of computational strategies that provide qualitative and quantitative descriptions of the dynamics associated with a particular movement. Diverse applications were considered herein, and an extensive validation was performed for each of them. Each of the proposed strategies has proven reliable at capturing the dynamic patterns of different tasks, identifying, recognizing, tracking and even segmenting objects in video sequences