
    Action Recognition in Video by Covariance Matching of Silhouette Tunnels

    Abstract—Action recognition is a challenging problem in video analytics due to event complexity, variations in imaging conditions, and intra- and inter-individual action variability. Central to these challenges is the way one models actions in video, i.e., action representation. In this paper, an action is viewed as a temporal sequence of local shape deformations of centroid-centered object silhouettes, i.e., the shape of the centroid-centered object silhouette tunnel. Each action is represented by the empirical covariance matrix of a set of 13-dimensional normalized geometric feature vectors that capture the shape of the silhouette tunnel. The similarity of two actions is measured in terms of a Riemannian metric between their covariance matrices. The silhouette tunnel of a test video is broken into short overlapping segments and each segment is classified using a dictionary of labeled action covariance matrices and the nearest-neighbor rule. On a database of 90 short video sequences this attains a correct classification rate of 97%, which is very close to the state of the art, at almost 5-fold reduced computational cost. Majority-vote fusion of segment decisions achieves a 100% classification rate. Keywords: video analysis; action recognition; silhouette tunnel; covariance matching; generalized eigenvalues
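    As a rough illustration of the matching scheme described in the abstract (not the authors' code), the sketch below computes the empirical covariance of a segment's feature vectors, measures similarity with the generalized-eigenvalue Riemannian metric, and classifies with the nearest-neighbor rule plus majority-vote fusion; extraction of the 13-dimensional shape features is assumed to happen elsewhere.

```python
# A minimal sketch, assuming per-segment arrays of geometric feature vectors
# (the paper uses 13-dimensional normalized shape features) are already available.
import numpy as np
from scipy.linalg import eigh


def empirical_covariance(features):
    """features: (N, 13) array of feature vectors from one silhouette-tunnel segment."""
    centered = features - features.mean(axis=0)
    return centered.T @ centered / (features.shape[0] - 1)


def riemannian_distance(c1, c2):
    """Distance between SPD matrices from the logs of their generalized eigenvalues."""
    lam = eigh(c1, c2, eigvals_only=True)      # solves c1 v = lam * c2 v
    return np.sqrt(np.sum(np.log(lam) ** 2))


def classify_segment(test_cov, dictionary):
    """Nearest-neighbor rule over a dictionary of (label, covariance) pairs."""
    return min(dictionary, key=lambda entry: riemannian_distance(test_cov, entry[1]))[0]


def classify_video(segment_covs, dictionary):
    """Majority-vote fusion of per-segment decisions."""
    votes = [classify_segment(c, dictionary) for c in segment_covs]
    return max(set(votes), key=votes.count)
```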

    Adaptive bipedal locomotion from a single demonstration using movement primitives

    Doctoral thesis in Electrical Engineering. This work addresses the problem of learning to imitate human locomotion through low-level trajectories encoded with motion primitives and of generalizing them to new situations from a single demonstration. In this line of thought, the main objectives of this work are twofold. The first is to analyze, extract and encode human demonstrations taken from motion-capture data in order to model biped locomotion tasks. However, transferring motion skills from humans to robots is not limited to simple reproduction; it requires the ability to adapt to new situations as well as to deal with unexpected disturbances. Therefore, the second objective is to develop and evaluate a control framework for action shaping such that the single demonstration can be modulated to varying situations, taking into account the dynamics of the robot and its environment. The idea behind the approach is to address the problem of generalization from a single demonstration by combining two basic structures. The first structure is a pattern-generator system consisting of movement primitives learned and modelled by dynamical systems (DS). This encoding approach possesses desirable properties that make it well suited for trajectory generation, namely the possibility of changing parameters online, such as the amplitude and the frequency of the limit cycle, and intrinsic robustness against small perturbations. The second structure, which is embedded in the first, consists of coupled phase oscillators that organize actions into coordinated functional units. Changing contact conditions and the associated impacts with the ground lead to models with multiple phases. Instead of forcing the robot's motion into a predefined fixed timing, the proposed pattern generator exploits transitions between phases that emerge from the interaction of the robot with the environment, triggered by sensor-driven events. The proposed approach is tested in a dynamics simulation framework, and several experiments are conducted to validate the methods and to assess the performance of a humanoid robot.
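    A minimal sketch of the kind of pattern generator described above: two phase-coupled limit-cycle oscillators whose amplitude and frequency can be changed online. This is an illustrative simplification, not the thesis' controller; all parameter names and gains are assumptions.

```python
# Illustrative coupled-phase-oscillator pattern generator (assumed, simplified).
import numpy as np


def step_cpg(phases, amps, dt, omega, target_amp, phase_offsets, k_phase=2.0, k_amp=5.0):
    """Advance the coupled oscillators one time step.

    phases, amps   : current phase and amplitude of each oscillator
    omega          : common angular frequency (rad/s), adjustable online
    target_amp     : desired amplitude of each oscillator, adjustable online
    phase_offsets  : desired relative phases (e.g. pi between left and right legs)
    """
    n = len(phases)
    new_phases = phases.copy()
    for i in range(n):
        coupling = sum(np.sin(phases[j] - phases[i] - (phase_offsets[j] - phase_offsets[i]))
                       for j in range(n) if j != i)
        new_phases[i] = phases[i] + dt * (omega + k_phase * coupling)
    # first-order convergence of the amplitude towards its (possibly changing) target
    new_amps = amps + dt * k_amp * (target_amp - amps)
    outputs = new_amps * np.sin(new_phases)    # rhythmic joint set-points
    return new_phases, new_amps, outputs


# usage (hypothetical): anti-phase hip trajectories whose amplitude is doubled online
phases, amps = np.zeros(2), np.ones(2)
for t in range(2000):
    target = np.full(2, 1.0 if t < 1000 else 2.0)
    phases, amps, q = step_cpg(phases, amps, dt=0.005, omega=2 * np.pi,
                               target_amp=target, phase_offsets=np.array([0.0, np.pi]))
```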

    Bio-Inspired Robotics

    Modern robotic technologies have enabled robots to operate in a variety of unstructured and dynamically changing environments, in addition to traditional structured environments. Robots have thus become an important element in our everyday lives. One key approach to developing such intelligent and autonomous robots is to draw inspiration from biological systems. Biological structures, mechanisms, and underlying principles have the potential to provide new ideas to support the improvement of conventional robotic designs and control. Such biological principles usually originate from animal or even plant models for robots that can sense, think, walk, swim, crawl, jump or even fly. It is thus believed that these bio-inspired methods are becoming increasingly important in the face of complex applications. Bio-inspired robotics is leading to the study of innovative structures and computing with sensory–motor coordination and learning to achieve intelligence, flexibility, stability, and adaptation for emergent robotic applications, such as manipulation, learning, and control. This Special Issue invites original papers of innovative ideas and concepts, new discoveries and improvements, and novel applications and business models relevant to the selected topics of "Bio-Inspired Robotics". Bio-inspired robotics is a broad topic and an ongoing, expanding field. This Special Issue collates 30 papers that address some of the important challenges and opportunities in this broad and expanding field.

    Measuring skeletal kinematics with accelerometers on the skin surface

    The most common motion analysis method uses cameras to track the position of markers on body surfaces over time. Although each species has a common skeletal frame to which recorded motions can be referenced, the soft tissue covering the skeleton is not rigid. Markers therefore experience motion relative to the bone and do not accurately portray underlying bone activity. This limits the clinical use of motion studies and the understanding of joint motion. The use of MEMS accelerometers for removing soft tissue artifact (motion relative to the bone) from surface measurements and determining the position of the underlying bone was investigated. An animal limb was modeled experimentally as a double pendulum with soft tissue represented by sprung masses moving perpendicular to the pendulums. Horizontal motion was cycled at the top joint with a 25 cm stroke. Position data obtained from the mass with a Codamotion™ system and integrated accelerometer data were combined in a Kalman filter to determine global position. Acceleration data in the sensor coordinate system determined tissue artifact and were compared to measurements using CODA markers on the mass and pendulum. Removing the artifact from the mass position yielded an estimate of the pendulum position over time. In determining mass position, integrated accelerometer data experienced drift, deviating from reasonable values, and were determined to be impractical as Kalman filter input. This led to using only the CODA-determined position as the true position. Accelerometer artifacts resulted in mean differences from the CODA markers of less than 1 mm over 3 cm displacements, excluding a mass with mechanical difficulties. The largest mean difference across four tests was 0.66 mm, which is 96.17 percent accurate. Mean differences between base positions collected from accelerometers and CODA markers were found for the global x and y directions. Maximum deviations were 1.64 mm and 4.45 mm, respectively, which are 99.56 and 99.63 percent accurate. The results show the effectiveness of this procedure in calculating the location of the bases of sprung masses in two dimensions. This research contributes to the determination of bone position over time, which will increase the potential for understanding fundamental rigid-body and joint motions in a clinical setting using noninvasive methods.
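    As a hedged illustration of the fusion step described above (the study's actual filter formulation is not reproduced here), the following sketch runs a one-dimensional Kalman filter in which accelerometer samples drive the prediction and optical marker positions provide the measurement update; the noise parameters are assumptions.

```python
# Assumed 1-D position/velocity Kalman filter with acceleration as a control input.
import numpy as np


def kalman_fuse(positions, accels, dt, q=1e-3, r=1e-4):
    """positions: optical marker positions (m); accels: accelerometer samples (m/s^2)."""
    F = np.array([[1.0, dt], [0.0, 1.0]])      # state transition for [position, velocity]
    B = np.array([[0.5 * dt ** 2], [dt]])      # acceleration enters as a control input
    H = np.array([[1.0, 0.0]])                 # only position is measured optically
    Q, R = q * np.eye(2), np.array([[r]])
    x, P = np.array([[positions[0]], [0.0]]), np.eye(2)
    estimates = []
    for z, a in zip(positions, accels):
        x = F @ x + B * a                      # predict using the integrated acceleration
        P = F @ P @ F.T + Q
        y = np.array([[z]]) - H @ x            # innovation from the optical measurement
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
        estimates.append(x[0, 0])
    return np.array(estimates)
```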

    Virtual Reality

    At present, virtual reality has an impact on information organization and management and even changes the design principles of information systems so that they adapt to application requirements. The book aims to provide a broader perspective on the development and application of virtual reality. The first part of the book, named "virtual reality visualization and vision", includes new developments in virtual reality visualization of 3D scenarios, virtual reality and vision, and high-fidelity immersive virtual reality, including tracking, rendering and display subsystems. The second part, named "virtual reality in robot technology", presents applications of virtual reality in a remote rehabilitation robot-based rehabilitation evaluation method and in multi-legged robot adaptive walking on unstructured terrains. The third part, named "industrial and construction applications", covers product design, the space industry, building information modeling, and construction and maintenance supported by virtual reality. The last part, named "culture and life of human", describes applications in cultural life and multimedia technology.

    Biomechanics

    Biomechanics is a vast discipline within the field of Biomedical Engineering. It explores the underlying mechanics of how biological and physiological systems move. It encompasses important clinical applications to address questions related to medicine using engineering mechanics principles. Biomechanics includes interdisciplinary concepts from engineers, physicians, therapists, biologists, physicists, and mathematicians. Through their collaborative efforts, biomechanics research is ever changing and expanding, explaining new mechanisms and principles for dynamic human systems. Biomechanics is used to describe how the human body moves, walks, and breathes, in addition to how it responds to injury and rehabilitation. Advanced biomechanical modeling methods, such as inverse dynamics, finite element analysis, and musculoskeletal modeling are used to simulate and investigate human situations in regard to movement and injury. Biomechanical technologies are progressing to answer contemporary medical questions. The future of biomechanics is dependent on interdisciplinary research efforts and the education of tomorrow’s scientists

    Examination of the Effect of Psychophysical Factors on the Quality of Human Gait Recognition

    Abstract: The paper presents an analysis of the influence of selected psychophysical parameters on the quality of human gait recognition. The following factors were taken into account: body height (BH), body weight (BW), the emotional condition of the respondent, the physical condition of the respondent, and previous injuries or dysfunctions of the locomotive system. The study was based on data measuring the ground reaction forces (GRF) of 179 participants (3,315 gait cycles). Based on the classification results, a confusion matrix was established. On the basis of the data in the matrix, it was concluded that misclassification was most often caused by the similar body weight of the two confused people. It was also noted that people of the same gender and similar BH were confused most often. On the other hand, previous body injuries and dysfunctions of the motor system were factors facilitating the recognition of people. The results obtained will allow for the design of more accurate biometric systems in the future.
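    A hedged sketch of how such a recognition experiment and its confusion matrix might be set up (not the paper's actual pipeline; the GRF feature extraction shown is an assumed simplification):

```python
# Assumed nearest-neighbour person recognition on vertical-GRF gait cycles,
# plus the confusion matrix used to study which subjects are mistaken for one another.
import numpy as np


def grf_features(cycle, n_points=100):
    """Resample one vertical-GRF gait cycle to a fixed-length feature vector."""
    t = np.linspace(0, 1, len(cycle))
    return np.interp(np.linspace(0, 1, n_points), t, cycle)


def recognize(train_feats, train_ids, test_feats):
    """1-NN classification by Euclidean distance between GRF feature vectors."""
    dists = np.linalg.norm(train_feats[None, :, :] - test_feats[:, None, :], axis=2)
    return np.asarray(train_ids)[np.argmin(dists, axis=1)]


def confusion_matrix(true_ids, pred_ids, n_subjects):
    """Rows: true subject, columns: predicted subject."""
    cm = np.zeros((n_subjects, n_subjects), dtype=int)
    for t, p in zip(true_ids, pred_ids):
        cm[t, p] += 1
    return cm   # off-diagonal peaks reveal which subjects are confused with each other
```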

    Gait analysis, modelling, and comparison from unconstrained walks and viewpoints: view-rectification of body-part trajectories from monocular video sequences

    Gait analysis, modelling and comparison using computer vision algorithms has recently attracted much attention for medical and surveillance applications. Analyzing and modelling a person's gait with computer vision algorithms has some interesting advantages over more traditional biometrics. For instance, gait can be analyzed and modelled at a distance by observing the person with a camera, which means that no markers or sensors have to be worn by the person. Moreover, gait analysis and modelling using computer vision algorithms does not require the cooperation of the observed people, which allows gait to be used as a biometric in surveillance applications. Current gait analysis and modelling approaches have, however, severe limitations. Several approaches require a side view of the walks, since this viewpoint is optimal for gait analysis and modelling. Most approaches also require the walks to be observed far enough from the camera to avoid the perspective distortion that would badly affect the resulting gait analyses and models. Moreover, current approaches do not allow for changes in walk direction and walking speed, which greatly constrains the walks that can be analyzed and modelled in medical and surveillance applications. The approach proposed in this thesis aims at performing gait analysis, modelling and comparison from unconstrained walks and viewpoints in medical and surveillance applications. The proposed approach mainly consists of a novel view-rectification method that generates a fronto-parallel viewpoint (side view) of the imaged trajectories of body parts. The view-rectification method is based on a novel walk model that uses projective geometry to provide the spatio-temporal links between the body-part positions in the scene and their corresponding positions in the images. The head and the feet are the only body parts that are relevant for the proposed approach. They are automatically localized and tracked in monocular video sequences using a novel body-part tracking algorithm. Gait analysis is performed by a novel method that extracts standard gait measurements from the view-rectified body-part trajectories. A novel gait model based on body-part trajectories is also proposed in order to perform gait modelling and comparison using the dynamics of the gait. The proposed approach is first validated using synthetic walks comprising different viewpoints and changes in walk direction. The validation results show that the proposed view-rectification method works well, that is, valid gait measurements can be extracted from the view-rectified body-part trajectories. Next, gait analysis, modelling, and comparison are performed on real walks acquired as part of this thesis. These walks are challenging since they were performed close to the camera and contain changes in walk direction and walking speed. The results first show that the obtained gait measurements are realistic and correspond to the gait measurements reported in references on clinical gait analysis. The gait comparison results then show that the proposed approach can be used to perform gait modelling and comparison in the context of surveillance applications by recognizing people by their gait. The computed recognition rates are quite good considering the challenging walks used in this thesis.
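    A minimal sketch of the view-rectification idea (not the thesis' walk model): a planar homography, estimated from assumed point correspondences on the walking plane, maps imaged body-part trajectories to fronto-parallel (side-view) coordinates.

```python
# Assumed homography-based rectification of imaged body-part trajectories.
import numpy as np
import cv2


def rectify_trajectory(image_points, src_quad, dst_quad):
    """image_points: (N, 2) imaged body-part positions; quads: 4 corresponding points
    in the image and in the desired fronto-parallel (side) view."""
    H, _ = cv2.findHomography(np.float32(src_quad), np.float32(dst_quad))
    pts = np.float32(image_points).reshape(-1, 1, 2)
    return cv2.perspectiveTransform(pts, H).reshape(-1, 2)   # side-view coordinates


# usage (hypothetical): head_side = rectify_trajectory(head_px, walk_plane_px, side_view_rect)
```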