29 research outputs found

    Computer Vision Based Object Tracking as a Teaching Aid for High School Physics Experiments

    Experiments play a vital role in science education. In high school physics, especially in mechanics, many experiments require tracking one or more objects. In most situations students visually observe the motion of the objects and take the measurements. This manual method is time consuming, generates larger errors and is incapable of producing multiple readings rapidly. The research described in this work introduces a simple mechanism for integrating computer vision based tracking to enhance the quality of measurements and to enable new ways of looking at experiments. The case study consists of three standard experiments. In the first experiment the motion of a simple pendulum was tracked. Using computer vision, students were able to obtain a correlation of 0.99 between the calculated period and the theoretical period. In addition, it was possible to calculate the position and the velocity of the bob more than 30 times during a single oscillation. Students were able to plot the extra data points for a better understanding of simple harmonic motion, which was not possible with the manual method. The second experiment focused on measuring the terminal velocity of a ball moving through a viscous medium. The final case study was on tracking multiple particles in a moving fluid. In all three experiments the computer vision based system provided more accurate and more numerous data points than the manual method, which helps students understand the underlying theory better. The tracking system consisted of a digital camera, an image preprocessing subsystem, a feature extraction subsystem, an object identification subsystem and a data export subsystem. The system was successfully tested on an ordinary PC, making it cost effective for use in high schools. Based on the case studies it was concluded that such systems can be used in high schools to improve the quality of the experiments conducted.
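    The paper itself does not include code, but a minimal sketch of the kind of pipeline it describes (camera input, preprocessing, feature extraction, object identification and data export) might look like the following. The video file name, HSV colour range and fallback frame rate are illustrative assumptions, not values from the study.

```python
# Minimal sketch of a colour-based pendulum-bob tracker. The file name,
# HSV colour range and fallback frame rate are placeholders, not values
# taken from the paper.
import cv2
import numpy as np

cap = cv2.VideoCapture("pendulum.mp4")           # hypothetical input video
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0          # fall back to 30 fps if unknown

lower = np.array([0, 120, 80])                   # assumed bob colour (red-ish), HSV
upper = np.array([10, 255, 255])
times, xs = [], []

frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Preprocessing: blur and convert to HSV for robust colour thresholding.
    hsv = cv2.cvtColor(cv2.GaussianBlur(frame, (5, 5), 0), cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, lower, upper)
    # Feature extraction / object identification: largest contour = the bob.
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_SIMPLE)[-2]  # OpenCV 3/4 compatible
    if contours:
        c = max(contours, key=cv2.contourArea)
        M = cv2.moments(c)
        if M["m00"] > 0:
            xs.append(M["m10"] / M["m00"])       # horizontal bob position (pixels)
            times.append(frame_idx / fps)
    frame_idx += 1
cap.release()

# Data export: velocity by finite differences, period from positive-going
# zero crossings of the mean-subtracted displacement.
xs, times = np.array(xs), np.array(times)
vel = np.gradient(xs, times)
disp = xs - xs.mean()
crossings = times[np.where(np.diff(np.sign(disp)) > 0)[0]]
if len(crossings) > 1:
    print(f"Estimated period: {np.mean(np.diff(crossings)):.3f} s")
np.savetxt("trajectory.csv", np.column_stack([times, xs, vel]),
           delimiter=",", header="t,x,vx", comments="")
```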

    A review on intelligent monitoring and activity interpretation

    This survey paper provides a tour of the various monitoring and activity interpretation frameworks found in the literature. The needs of monitoring and interpretation systems are presented in relation to the areas where they have been developed or applied. Their evolution is studied to better understand the characteristics of current systems. After this, the main features of monitoring and activity interpretation systems are defined. This work was partially supported by the Spanish Ministerio de Economía y Competitividad / FEDER under grant DPI2016-80894-R.

    Deep Learning Techniques for Radar-Based Continuous Human Activity Recognition

    Human capability to perform routine tasks declines with age and age-related problems. Remote human activity recognition (HAR) is beneficial for regular monitoring of the elderly population. This paper addresses the problem of the continuous detection of daily human activities using a mm-wave Doppler radar. In this study, two strategies have been employed: the first method uses un-equalized series of activities, whereas the second method utilizes a gradient-based strategy for equalization of the series of activities. The dynamic time warping (DTW) algorithm and long short-term memory (LSTM) techniques have been implemented for the classification of un-equalized and equalized series of activities, respectively. The input for DTW was provided using three strategies. The first approach uses the pixel-level data of frames (UnSup-PLevel). In the other two strategies, a convolutional variational autoencoder (CVAE) is used to extract unsupervised encoded features (UnSup-EnLevel) and supervised encoded features (Sup-EnLevel) from the series of Doppler frames. The second approach, for equalized data series, involves the application of four distinct feature extraction methods: convolutional neural networks (CNN), supervised and unsupervised CVAE, and principal component analysis (PCA). The extracted features were used as input to the LSTM. This paper presents a comparative analysis of a novel supervised feature extraction pipeline, employing Sup-EnLevel-DTW and Sup-EnLevel-LSTM, against several state-of-the-art unsupervised methods, including UnSup-EnLevel-DTW, UnSup-EnLevel-LSTM, CNN-LSTM, and PCA-LSTM. The results demonstrate the superiority of the Sup-EnLevel-LSTM strategy. However, the UnSup-PLevel strategy worked surprisingly well without using annotations or frame equalization.
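    As an illustration of the DTW stage described above, a minimal sketch of dynamic time warping over unequal-length feature sequences, with nearest-template classification, is given below. The feature dimensionality, template dictionary and toy data are assumptions for illustration only, not the paper's Doppler features.

```python
# Minimal DTW sketch for comparing unequal-length feature sequences
# (e.g. per-frame encoded Doppler features); templates and shapes are
# illustrative assumptions, not the paper's data.
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """DTW distance between sequences a (n, d) and b (m, d) with Euclidean step cost."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return float(cost[n, m])

def classify(sequence: np.ndarray, templates: dict) -> str:
    """Assign the label of the template with the smallest DTW distance."""
    return min(templates, key=lambda label: dtw_distance(sequence, templates[label]))

# Toy usage: 8-dimensional encoded features, sequences of different lengths.
rng = np.random.default_rng(0)
templates = {"walking": rng.normal(size=(40, 8)), "sitting": rng.normal(size=(25, 8))}
query = templates["walking"][5:35] + 0.1 * rng.normal(size=(30, 8))
print(classify(query, templates))   # expected to print "walking"
```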

    Human activity recognition for the use in intelligent spaces

    The aim of this Graduation Project is to develop a generic, biologically inspired activity recognition system for use in intelligent spaces, which form the context for this project. The goal is to develop a working prototype that can learn and recognize human activities from a limited training set in all kinds of spaces and situations. For testing purposes, the office environment was chosen as the subject intelligent space. The purpose of the intelligent space, in this case the office, is left outside the scope of the project; the scope is limited to the perceptive system of the intelligent space. The notion is that the prototype should not be bound to a specific space, but should be a generic perceptive system able to cope with any given space within the built environment. Because no two spaces are the same, developing a prototype without any domain knowledge in which it can learn and recognize activities is the main challenge of this project. In all layers of the prototype, the data processing is kept as abstract and low level as possible to keep it as generic as possible. This is done by using local features, scale invariant descriptors and hidden Markov models for pattern recognition. The novel aspect of the prototype is that it combines structure as well as motion features in one system, making it able to train and recognize a variety of activities in a variety of situations: from rhythmic, expressive actions with a simple cyclic pattern to activities where the movement is subtle and complex, such as typing and reading. The prototype has been tested on two very different data sets: the first consists of videos shot in a controlled environment in which simple actions were performed; the second consists of videos shot in a normal office where daily office activities were captured and categorized afterwards. The prototype has given promising results, showing it can cope with very different spaces, actions and activities.
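    As a rough illustration of the hidden-Markov-model stage mentioned above, the sketch below trains one Gaussian HMM per activity on descriptor sequences and labels a new sequence by the highest log-likelihood, using the hmmlearn library. The feature dimension, activity names and random data are assumptions, not the project's actual features.

```python
# Minimal sketch of HMM-based activity recognition: train one Gaussian HMM per
# activity on its feature sequences, then label a new sequence by the model
# with the highest log-likelihood. Feature dimension and activities are assumed.
import numpy as np
from hmmlearn.hmm import GaussianHMM

def train_models(data: dict, n_states: int = 4) -> dict:
    models = {}
    for activity, sequences in data.items():
        X = np.vstack(sequences)                    # stack all training sequences
        lengths = [len(s) for s in sequences]       # hmmlearn needs per-sequence lengths
        model = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
        model.fit(X, lengths)
        models[activity] = model
    return models

def recognize(sequence: np.ndarray, models: dict) -> str:
    # score() returns the log-likelihood of the observation sequence under each model.
    return max(models, key=lambda activity: models[activity].score(sequence))

# Toy usage with random 16-dimensional descriptor sequences (placeholders for
# the local structure/motion features described above).
rng = np.random.default_rng(1)
data = {
    "typing":  [rng.normal(0.0, 1.0, size=(60, 16)) for _ in range(5)],
    "walking": [rng.normal(2.0, 1.0, size=(60, 16)) for _ in range(5)],
}
models = train_models(data)
print(recognize(rng.normal(2.0, 1.0, size=(40, 16)), models))  # expected "walking"
```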

    Object Tracking in Video

    This master's thesis describes the principles of the most widely used object tracking systems for video and then focuses on the description and implementation of an interactive offline tracking system for generic colored objects, whose main strength lies in the high accuracy of the computed trajectory. The system builds the output trajectory from input data specified by the user at application start, which may then be interactively modified or extended to improve accuracy. The algorithm is based on a detector that uses color-bin features and on the temporal coherence of object motion, from which multiple candidate object trajectories are generated. The optimal output trajectory is then computed by dynamic programming whose parameters are also interactively adjusted by the user. The system achieves 15-70 fps on 480x360 video. The thesis also describes an application whose purpose is to evaluate the tracker's accuracy, and discusses the results achieved.
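    As an illustration of the dynamic-programming step described in the abstract, the sketch below picks one candidate detection per frame so that the combined appearance cost and motion (temporal-coherence) cost is minimised. Candidate generation, the appearance costs and the smoothness weight are illustrative assumptions rather than the thesis's actual parameters.

```python
# Minimal sketch of selecting an optimal trajectory from per-frame candidate
# detections with dynamic programming (Viterbi-style). Candidates, appearance
# costs and the smoothness weight are illustrative assumptions.
import numpy as np

def optimal_trajectory(candidates, appearance_cost, smoothness=0.1):
    """
    candidates[t]      : (k_t, 2) array of candidate (x, y) positions in frame t
    appearance_cost[t] : (k_t,) array, low = looks like the tracked object
    Returns one position per frame minimising appearance + smoothness * motion cost.
    """
    T = len(candidates)
    dp = [np.asarray(appearance_cost[0], dtype=float)]   # best cost ending at each candidate
    back = [None]                                         # back-pointers for path recovery
    for t in range(1, T):
        # Transition cost: squared distance between consecutive candidates.
        dists = np.linalg.norm(candidates[t][:, None, :] -
                               candidates[t - 1][None, :, :], axis=2) ** 2
        total = appearance_cost[t][:, None] + smoothness * dists + dp[t - 1][None, :]
        back.append(np.argmin(total, axis=1))
        dp.append(np.min(total, axis=1))
    # Backtrack the cheapest path.
    idx = int(np.argmin(dp[-1]))
    path = [candidates[-1][idx]]
    for t in range(T - 1, 0, -1):
        idx = int(back[t][idx])
        path.append(candidates[t - 1][idx])
    return np.array(path[::-1])

# Toy usage: three frames, a few candidate detections each.
cands = [np.array([[10., 10.], [50., 50.]]),
         np.array([[12., 11.], [48., 52.], [100., 5.]]),
         np.array([[14., 12.], [47., 53.]])]
costs = [np.array([0.2, 1.0]), np.array([0.3, 0.9, 0.1]), np.array([0.2, 1.1])]
print(optimal_trajectory(cands, costs))   # follows the smooth track near (10, 10)
```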