2,596 research outputs found

    Improved Behavior Monitoring and Classification Using Cues Parameters Extraction from Camera Array Images

    Get PDF
    Behavior monitoring and classification is a mechanism for automatically identifying or verifying individuals through human detection, tracking and behavior recognition in video sequences captured by a depth camera. In this paper, we design a system that precisely classifies the nature of 3D body postures obtained by Kinect using an advanced recognizer. We propose novel features that are suitable for depth data. These features are robust to noise, invariant to translation and scaling, and capable of capturing fast movements of human body parts. Lastly, an advanced hidden Markov model is used to recognize different activities. In extensive experiments, our system consistently outperforms competing methods on three depth-based behavior datasets, i.e., IM-DailyDepthActivity, MSRDailyActivity3D and MSRAction3D, in both posture classification and behavior recognition. Moreover, our system handles body-part rotation, self-occlusion and missing body parts, which allows it to track complex activities and improves the recognition rate. Owing to the easy accessibility, low cost and simple deployment of depth cameras, the proposed system can be applied in various consumer applications including patient monitoring, automatic video surveillance, smart homes/offices and 3D games.
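
    As a concrete illustration of the recognition step, the following minimal sketch decodes an activity sequence with a discrete HMM via the Viterbi algorithm. All probabilities, state meanings and codebook sizes are hypothetical; the paper's advanced HMM and depth features are not reproduced here.

    # Minimal sketch: Viterbi decoding of an activity sequence with a discrete HMM.
    # Transition/emission values are hypothetical, for illustration only.
    import numpy as np

    def viterbi(obs, pi, A, B):
        """Most likely hidden state path for observation indices `obs`.
        pi: (S,) initial probs, A: (S,S) transitions, B: (S,O) emissions."""
        S, T = len(pi), len(obs)
        delta = np.zeros((T, S))           # best log-prob ending in state s at time t
        psi = np.zeros((T, S), dtype=int)  # backpointers
        delta[0] = np.log(pi) + np.log(B[:, obs[0]])
        for t in range(1, T):
            scores = delta[t - 1][:, None] + np.log(A)  # (S,S): prev -> current
            psi[t] = scores.argmax(axis=0)
            delta[t] = scores.max(axis=0) + np.log(B[:, obs[t]])
        path = [int(delta[-1].argmax())]
        for t in range(T - 1, 0, -1):
            path.append(int(psi[t][path[-1]]))
        return path[::-1]

    # Two hypothetical activities ("sit", "walk") observed through 3 posture codewords.
    pi = np.array([0.6, 0.4])
    A  = np.array([[0.9, 0.1], [0.2, 0.8]])
    B  = np.array([[0.7, 0.2, 0.1], [0.1, 0.3, 0.6]])
    print(viterbi([0, 0, 2, 2, 2], pi, A, B))  # e.g. [0, 0, 1, 1, 1]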

    Going Deeper into Action Recognition: A Survey

    Full text link
    Understanding human actions in visual data is tied to advances in complementary research areas including object recognition, human dynamics, domain adaptation and semantic segmentation. Over the last decade, human action analysis has evolved from early schemes, often limited to controlled environments, to advanced solutions that can learn from millions of videos and apply to almost all daily activities. Given the broad range of applications from video surveillance to human-computer interaction, scientific milestones in action recognition are achieved more rapidly, quickly rendering once-dominant techniques obsolete. This motivated us to provide a comprehensive review of the notable steps taken towards recognizing human actions. To this end, we start our discussion with the pioneering methods that use handcrafted representations, and then navigate into the realm of deep learning based approaches. We aim to remain objective throughout this survey, touching upon encouraging improvements as well as inevitable setbacks, in the hope of raising fresh questions and motivating new research directions for the reader.

    Reconstructing Human Motion

    Get PDF
    This thesis presents methods for reconstructing human motion in a variety of applications and begins with an introduction to the general motion capture hardware and processing pipeline. Then, a data-driven method for the completion of corrupted marker-based motion capture data is presented. The approach is especially suitable for challenging cases, e.g., if complete marker sets of multiple body parts are missing over a long period of time. Using a large motion capture database, and without the need for extensive preprocessing, the method is able to fix missing markers across different actors and motion styles. The approach can be used with incrementally growing prior databases, as the underlying search technique for similar motions scales well to huge databases. The resulting clean motion database could then be used in the next application: a generic data-driven method for recognizing human full-body actions from live motion capture data originating from various sources. The method queries an annotated motion capture database for similar motion segments and is able to handle temporal deviations from the original motion. The approach is online-capable, works in real time, requires virtually no preprocessing and is shown to work with a variety of feature sets extracted from input data, including positional data, sparse accelerometer signals, skeletons extracted from depth sensors and even video data. Evaluation is done by comparing against a frame-based Support Vector Machine approach on a freely available motion database as well as a database containing Judo referee signal motions. In the last part, a method to indirectly reconstruct the effects of the human heart's pumping motion from video data of the face is applied in the context of epileptic seizures. These episodes usually feature distinctive heart rate patterns, such as a significant increase at seizure start and seizure-type-dependent drop-offs near the end. The pulse detection method is evaluated for applicability to seizure detection in a multitude of scenarios, ranging from videos recorded in a controlled clinical environment to patient-supplied videos of seizures filmed with smartphones.
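
    The completion idea lends itself to a compact illustration: missing markers can be borrowed from the database poses that best match the markers that are observed. The sketch below uses hypothetical data shapes and a brute-force nearest-neighbour search; the thesis's scalable search structure and normalisation steps are not reproduced.

    # Minimal sketch of data-driven marker gap filling on toy data.
    import numpy as np

    def fill_missing(frame, mask, database, k=5):
        """frame: (M,3) pose with NaNs at missing markers; mask: (M,) bool, True=observed;
        database: (N,M,3) clean poses. Returns the frame with gaps filled."""
        obs = frame[mask].ravel()                          # observed coordinates
        cand = database[:, mask, :].reshape(len(database), -1)
        dist = np.linalg.norm(cand - obs, axis=1)          # match on observed markers
        nn = np.argsort(dist)[:k]                          # k most similar database poses
        filled = frame.copy()
        filled[~mask] = database[nn][:, ~mask, :].mean(axis=0)  # average their missing parts
        return filled

    rng = np.random.default_rng(0)
    db = rng.normal(size=(1000, 20, 3))                    # toy database of 1000 poses
    pose = db[42].copy()
    mask = np.ones(20, dtype=bool); mask[[3, 7]] = False   # markers 3 and 7 are missing
    pose[~mask] = np.nan
    print(fill_missing(pose, mask, db)[3])                 # reconstructed marker 3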

    Robust localization with wearable sensors

    Get PDF
    Measuring physical movements of humans and understanding human behaviour is useful in a variety of areas and disciplines. Human inertial tracking is a method that can be leveraged for monitoring complex actions that emerge from interactions between human actors and their environment. An accurate estimation of motion trajectories can support new approaches to pedestrian navigation, emergency rescue, athlete management, and medicine. However, tracking with wearable inertial sensors has several problems that need to be overcome, such as the low accuracy of consumer-grade inertial measurement units (IMUs), the error accumulation problem in long-term tracking, and the artefacts generated by movements that are less common. This thesis focusses on measuring human movements with wearable head-mounted sensors to accurately estimate the physical location of a person over time. The research consisted of (i) providing an overview of the current state of research for inertial tracking with wearable sensors, (ii) investigating the performance of new tracking algorithms that combine sensor fusion and data-driven machine learning, (iii) eliminating the effect of random head motion during tracking, (iv) creating robust long-term tracking systems with a Bayesian neural network and sequential Monte Carlo method, and (v) verifying that the system can be applied with changing modes of behaviour, defined as natural transitions from walking to running and vice versa. This research introduces a new system for inertial tracking with head-mounted sensors (which can be placed in, e.g., helmets, caps, or glasses). This technology can be used for long-term positional tracking to explore complex behaviours.
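
    The sequential Monte Carlo component can be illustrated with a minimal particle filter for 2D position: particles are propagated with a noisy step-length/heading model and reweighted against occasional position fixes. The motion and measurement models below are hypothetical Gaussians, standing in for the thesis's learned Bayesian-neural-network models.

    # Minimal sketch of a particle filter for pedestrian position tracking.
    import numpy as np

    rng = np.random.default_rng(1)
    N = 500
    particles = np.zeros((N, 2))           # 2D positions
    weights = np.full(N, 1.0 / N)

    def step(particles, step_len, heading, sigma=0.1):
        """Dead-reckoning proposal: move every particle by a noisy step vector."""
        noise = rng.normal(scale=sigma, size=particles.shape)
        delta = step_len * np.array([np.cos(heading), np.sin(heading)])
        return particles + delta + noise

    def update(particles, weights, z, meas_sigma=0.5):
        """Reweight by a Gaussian likelihood of observing position fix `z`."""
        d2 = ((particles - z) ** 2).sum(axis=1)
        w = weights * np.exp(-0.5 * d2 / meas_sigma**2)
        w /= w.sum()
        if 1.0 / (w ** 2).sum() < N / 2:   # resample when effective sample size drops
            idx = rng.choice(N, size=N, p=w)
            return particles[idx], np.full(N, 1.0 / N)
        return particles, w

    for t in range(10):                    # simulate ten strides heading east
        particles = step(particles, step_len=0.7, heading=0.0)
        particles, weights = update(particles, weights, z=np.array([0.7 * (t + 1), 0.0]))
    print((particles * weights[:, None]).sum(axis=0))  # posterior mean position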

    Multidimensional embedded MEMS motion detectors for wearable mechanocardiography and 4D medical imaging

    Get PDF
    Background: Cardiovascular diseases are the number one cause of death. Of these deaths, almost 80% are due to coronary artery disease (CAD) and cerebrovascular disease. Multidimensional microelectromechanical systems (MEMS) sensors allow measurement of the mechanical movement of the heart muscle, offering an entirely new and innovative solution for evaluating cardiac rhythm and function. Recent advances in miniaturized motion sensors present an exciting opportunity to study novel device-driven and functional motion detection systems in the areas of both cardiac monitoring and biomedical imaging, for example, in computed tomography (CT) and positron emission tomography (PET). Methods: This Ph.D. work describes a new cardiac motion detection paradigm and measurement technology based on multimodal measuring tools, tracking the heart's kinetic activity with micro-sized MEMS sensors, and on novel computational approaches, deploying signal processing and machine learning techniques to detect cardiac pathological disorders. In particular, this study focuses on the capability of joint gyrocardiography (GCG) and seismocardiography (SCG) techniques, which together constitute the mechanocardiography (MCG) concept and represent the mechanical characteristics of cardiac precordial surface vibrations. Results: Experimental analyses showed that integrating multisource sensory data resulted in precise estimation of heart rate with an accuracy of 99% (healthy, n=29), detection of heart arrhythmia (n=435) with an accuracy of 95-97%, and indication of ischemic disease with approximately 75% accuracy (n=22), as well as significantly improved quality of four-dimensional (4D) cardiac PET images, achieved by eliminating motion-related inaccuracies with a MEMS dual-gating approach. Tissue Doppler imaging (TDI) analysis of GCG (healthy, n=9) showed promising results for measuring cardiac timing intervals and myocardial deformation changes. Conclusion: The findings of this study demonstrate the clinical potential of MEMS motion sensors in cardiology, which may facilitate timely diagnosis of cardiac abnormalities. Multidimensional MCG can effectively contribute to detecting atrial fibrillation (AFib), myocardial infarction (MI), and CAD. Additionally, MEMS motion sensing improves the reliability and quality of cardiac PET imaging.
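
    A minimal sketch of the simplest task above, heart-rate estimation from a single chest-vibration channel: band-pass the signal around plausible beat frequencies and measure the spacing of the resulting peaks. The signal here is synthetic and the filter band is an assumption; the thesis's GCG/SCG processing and machine learning stages are far richer.

    # Minimal sketch of heart-rate estimation from a synthetic SCG-like signal.
    import numpy as np
    from scipy.signal import butter, filtfilt, find_peaks

    fs = 200.0                                   # sampling rate (Hz), assumed
    t = np.arange(0, 30, 1 / fs)                 # 30 s recording
    scg = np.sin(2 * np.pi * 1.2 * t) ** 15      # sharp ~1.2 Hz "beats" (72 bpm)
    scg += 0.3 * np.random.default_rng(2).normal(size=t.size)

    b, a = butter(4, [0.8, 3.0], btype="band", fs=fs)   # 48-180 bpm band
    filtered = filtfilt(b, a, scg)

    peaks, _ = find_peaks(filtered, distance=0.4 * fs)  # refractory gap >= 0.4 s
    bpm = 60.0 / np.median(np.diff(peaks) / fs)
    print(f"estimated heart rate: {bpm:.0f} bpm")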

    A Methodology for Extracting Human Bodies from Still Images

    Get PDF
    Monitoring and surveillance of humans is one of today's most prominent applications and is expected to become part of many aspects of our future lives, for safety, assisted living and many other purposes. Many efforts have been made towards automatic and robust solutions, but the general problem is very challenging and still remains open. In this PhD dissertation we examine the problem from many perspectives. First, we study the performance of a hardware architecture designed for large-scale surveillance systems. Then, we focus on the general problem of human activity recognition, present an extensive survey of methodologies that deal with this subject and propose a maturity metric to evaluate them. Image segmentation is one of the most popular image processing techniques in the field, and we propose a blind metric that evaluates segmentation results with respect to the activity at local regions. Finally, we propose a fully automatic system for segmenting and extracting human bodies from challenging single images, which is the main contribution of the dissertation. Our methodology is a novel bottom-up approach relying mostly on anthropometric constraints and is facilitated by our research in the fields of face, skin and hand detection. Experimental results and comparison with state-of-the-art methodologies demonstrate the success of our approach.
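
    One building block mentioned above, skin detection, admits a compact illustration: a coarse skin-colour mask via HSV thresholding. The threshold values below are common rule-of-thumb numbers, not the dissertation's; a real system would combine such a mask with face and hand detectors and anthropometric constraints.

    # Minimal sketch of a coarse skin-colour mask with OpenCV.
    import cv2
    import numpy as np

    def skin_mask(bgr):
        """Return a binary mask of likely skin pixels in a BGR image."""
        hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
        lower = np.array([0, 40, 60], dtype=np.uint8)     # hue/sat/val lower bound
        upper = np.array([25, 180, 255], dtype=np.uint8)  # upper bound (reddish hues)
        mask = cv2.inRange(hsv, lower, upper)
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
        return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)  # drop speckle noise

    image = cv2.imread("person.jpg")             # hypothetical input image
    if image is not None:
        cv2.imwrite("skin_mask.png", skin_mask(image))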

    Fused mechanomyography and inertial measurement for human-robot interface

    Get PDF
    Human-Machine Interfaces (HMI) are the technology through which we interact with the ever-increasing quantity of smart devices surrounding us. The fundamental goal of an HMI is to facilitate robot control by uniting a human operator as the supervisor with a machine as the task executor. Sensors, actuators, and onboard intelligence have not reached the point where robotic manipulators may function with complete autonomy, and therefore some form of HMI is still necessary in unstructured environments. These may include environments where direct human action is undesirable or infeasible, and situations where a robot must assist and/or interface with people. Contemporary literature has introduced concepts such as body-worn mechanical devices, instrumented gloves, inertial or electromagnetic motion tracking sensors on the arms, head, or legs, electroencephalographic (EEG) brain activity sensors, electromyographic (EMG) muscular activity sensors and camera-based (vision) interfaces to recognize hand gestures and/or track arm motions for assessment of operator intent and generation of robotic control signals. While these developments offer a wealth of future potential, their utility has been largely restricted to laboratory demonstrations in controlled environments due to issues such as lack of portability and robustness and an inability to extract operator intent for both arm and hand motion. Wearable physiological sensors hold particular promise for capturing human intent and commands. EMG-based gesture recognition systems in particular have received significant attention in recent literature. As wearable pervasive devices, they offer benefits over camera or physical input systems in that they neither inhibit the user physically nor constrain the user to a location where the sensors are deployed. Despite these benefits, EMG alone has yet to demonstrate the capacity to recognize both gross movement (e.g. arm motion) and finer grasping (e.g. hand movement). As such, many researchers have proposed fusing muscle activity (EMG) and motion tracking (e.g. inertial measurement) to combine arm motion and grasp intent as HMI input for manipulator control. However, such work has arguably reached a plateau, since EMG suffers from interference from environmental factors that causes signal degradation over time, demands an electrical connection with the skin, and has not demonstrated the capacity to function outside controlled environments for long periods of time. This thesis proposes a new form of gesture-based interface utilising a novel combination of inertial measurement units (IMUs) and mechanomyography sensors (MMGs). The modular system permits numerous IMU configurations for deriving body kinematics in real time and uses these to convert arm movements into control signals. Additionally, bands containing six mechanomyography sensors were used to observe muscular contractions in the forearm generated by specific hand motions. This combination of continuous and discrete control signals allows a large variety of smart devices to be controlled. Several pattern recognition methods were implemented to provide accurate decoding of the mechanomyographic information, including Linear Discriminant Analysis and Support Vector Machines. Based on these techniques, accuracies of 94.5% and 94.6% respectively were achieved for 12-gesture classification. In real-time tests, an accuracy of 95.6% was achieved in 5-gesture classification.
It has previously been noted that MMG sensors are susceptible to motion-induced interference. The thesis further establishes that arm pose also changes the measured signal. This thesis introduces a new method of fusing IMU and MMG data to provide a classification that is robust to both of these sources of interference. Additionally, an improvement to orientation estimation and a new orientation estimation algorithm are proposed. These improvements to the robustness of the system provide the first solution able to reliably track both motion and muscle activity for extended periods of time for HMI outside a clinical environment. Applications in robot teleoperation in both real-world and virtual environments were explored. With multiple degrees of freedom, robot teleoperation provides an ideal test platform for HMI devices, since it requires a combination of continuous and discrete control signals. The field of prosthetics also represents a unique challenge for HMI applications. In an ideal situation, the sensor suite should be capable of detecting the muscular activity in the residual limb that naturally indicates intent to perform a specific hand pose, and of triggering this pose in the prosthetic device. Dynamic environmental conditions within a socket, such as skin impedance, have delayed the translation of gesture control systems into prosthetic devices; mechanomyography sensors, however, are unaffected by such issues. There is huge potential for a system like this to be utilised as a controller as ubiquitous computing systems become more prevalent and the desire for a simple, universal interface increases. Such systems have the potential to significantly improve the quality of life of prosthetic users and others.
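
    The discrete-gesture half of such an interface can be sketched compactly: window the six MMG channels, compute simple energy features per channel, and train the two classifiers named above. The data below is synthetic and the features are generic surface-signal choices; the thesis's feature set and protocol are not reproduced.

    # Minimal sketch of MMG gesture classification with LDA and SVM on toy data.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(3)
    n_windows, n_channels, win_len, n_gestures = 600, 6, 200, 12
    X_raw = rng.normal(size=(n_windows, n_channels, win_len))
    y = rng.integers(0, n_gestures, size=n_windows)
    X_raw += y[:, None, None] * 0.05             # inject class-dependent offsets

    # Per-channel RMS and mean absolute value: classic surface-signal features.
    rms = np.sqrt((X_raw ** 2).mean(axis=2))
    mav = np.abs(X_raw).mean(axis=2)
    X = np.hstack([rms, mav])                    # (n_windows, 12) feature matrix

    Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
    for clf in (LinearDiscriminantAnalysis(), SVC(kernel="rbf")):
        print(type(clf).__name__, clf.fit(Xtr, ytr).score(Xte, yte))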

    Complex Human Action Recognition in Live Videos Using Hybrid FR-DL Method

    Full text link
    Automated human action recognition is one of the most attractive and practical research fields in computer vision, in spite of its high computational costs. In such systems, human actions are labelled based on the appearance and motion patterns in the video sequences; however, conventional methodologies and classic neural networks cannot use temporal information to predict actions in the upcoming frames of a video sequence. Moreover, the computational cost of the preprocessing stage is high. In this paper, we address the challenges of the preprocessing phase by automatically selecting representative frames from the input sequences. Furthermore, we extract only the key features of each representative frame rather than the entire feature set. We propose a hybrid technique using background subtraction and HOG, followed by application of a deep neural network and a skeletal modelling method. A combination of a CNN and an LSTM recurrent network is used for feature selection and for retaining information from previous frames, and finally a Softmax-KNN classifier is used for labelling human activities. We name our model the Feature Reduction and Deep Learning based action recognition method, or FR-DL for short. To evaluate the proposed method, we benchmark on the UCF101 dataset, which is widely used in action recognition research and includes 101 complicated activities in the wild. Experimental results show a significant improvement in terms of accuracy and speed in comparison with six state-of-the-art methods.
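
    The CNN+LSTM backbone can be sketched in a few lines: a small per-frame CNN produces features, an LSTM aggregates them over time, and a linear head yields class logits (softmax applied in the loss). Layer sizes and shapes below are hypothetical, and the sketch omits FR-DL's background subtraction, HOG, skeletal modelling and Softmax-KNN stages.

    # Minimal sketch of a CNN+LSTM video classifier in PyTorch.
    import torch
    import torch.nn as nn

    class CnnLstm(nn.Module):
        def __init__(self, n_classes=101, feat_dim=128, hidden=256):
            super().__init__()
            self.cnn = nn.Sequential(               # tiny per-frame encoder
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(32, feat_dim),
            )
            self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
            self.head = nn.Linear(hidden, n_classes)  # softmax applied in the loss

        def forward(self, clips):                   # clips: (B, T, 3, H, W)
            B, T = clips.shape[:2]
            feats = self.cnn(clips.flatten(0, 1))   # encode every frame
            feats = feats.view(B, T, -1)
            _, (h, _) = self.lstm(feats)            # keep last hidden state
            return self.head(h[-1])                 # (B, n_classes) logits

    model = CnnLstm()
    logits = model(torch.randn(2, 16, 3, 112, 112))  # 2 clips of 16 frames
    print(logits.shape)                              # torch.Size([2, 101])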