
    Gait recognition in the wild using shadow silhouettes

    Gait recognition systems allow identification of users relying on features acquired from their body movement while walking. This paper discusses the main factors affecting the gait features that can be acquired from a 2D video sequence, proposing a taxonomy to classify them across four dimensions. It also explores the possibility of obtaining users’ gait features from shadow silhouettes by proposing a novel gait recognition system. The system includes novel methods for: (i) shadow segmentation, (ii) walking direction identification, and (iii) shadow silhouette rectification. Shadow segmentation is performed by fitting a line through the feet positions of the user, obtained from the gait texture image (GTI). The direction of the fitted line is then used to identify the walking direction of the user. Finally, the shadow silhouettes thus obtained are rectified to compensate for the distortions and deformations resulting from the acquisition setup, using the proposed four-point correspondence method. The paper additionally presents a new database, consisting of 21 users moving along two walking directions, to test the proposed gait recognition system. Results show that the proposed system matches state-of-the-art performance in a constrained setting, while continuing to perform well in the wild, where most state-of-the-art methods fail. The results also highlight the advantages of using rectified shadow silhouettes over body silhouettes under certain conditions.
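The four-point correspondence rectification described above suggests a planar perspective correction, which is conventionally expressed as a homography fitted from four point pairs. A minimal sketch of that standard construction follows; the function names and the point values are illustrative, not taken from the paper:

```python
import numpy as np

def homography_from_points(src, dst):
    """Estimate the 3x3 homography mapping src -> dst from four point
    correspondences via the direct linear transform (DLT)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null-space direction of A (smallest singular vector).
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def warp_point(H, p):
    """Apply homography H to a 2D point (homogeneous coordinates)."""
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]

# Map a skewed quadrilateral (e.g. a distorted shadow region) onto an
# upright rectangle.
src = [(0, 0), (4, 1), (5, 6), (-1, 5)]
dst = [(0, 0), (4, 0), (4, 6), (0, 6)]
H = homography_from_points(src, dst)
```

Four correspondences in general position determine the homography exactly, so each source corner maps onto its target corner.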

    Human motion estimation and controller learning

    Humans are capable of complex manipulation and locomotion tasks. They are able to achieve energy-efficient gait, reject disturbances, handle changing loads, and adapt to environmental constraints. Using inspiration from the human body, robotics researchers aim to develop systems with similar capabilities. Research suggests that humans minimize a task-specific cost function when performing movements. In order to learn this cost function from demonstrations and incorporate it into a controller, it is first imperative to accurately estimate the expert motion. The captured motions can then be analyzed to extract the objective function the expert was minimizing. We propose a framework for human motion estimation from wearable sensors. Human body joints are modeled by matrix Lie groups, using the special orthogonal groups SO(2) and SO(3) for joint pose and the special Euclidean group SE(3) for base-link pose representation. To estimate the human joint pose, velocity, and acceleration, we provide the equations for employing the extended Kalman filter on matrix Lie groups, thus explicitly accounting for the non-Euclidean geometry of the state space. Incorporating interaction constraints, with respect to the environment or within the participant, allows us to track global body position without an absolute reference and to ensure a viable pose estimate. The algorithms are extensively validated in both simulation and real-world experiments. Next, to learn the underlying expert control strategies from demonstrations, we present a novel fast approximate multivariate Gaussian process regression. The method estimates the underlying cost function without making assumptions on its structure. The computational efficiency of the approach allows for real-time forward horizon prediction. Using a linear model predictive control framework, we then reproduce the demonstrated movements on a robot.
The learned cost function captures the variability in expert motion as well as the correlations between states, leading to a controller that both produces motions and reacts to disturbances in a human-like manner. The model predictive control formulation allows the controller to satisfy task- and joint-space constraints, avoiding obstacles and self-collisions, as well as torque constraints, ensuring operational feasibility. The approach is validated on the Franka Emika robot using real human motion exemplars.
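The fast approximate Gaussian process regression itself is not specified in the abstract, but the underlying idea of regressing a demonstrated trajectory can be sketched with plain exact GP regression. The kernel, length scale, noise level, and the toy "expert" trajectory below are all assumptions chosen for illustration:

```python
import numpy as np

def rbf(a, b, length=0.2, var=1.0):
    """Squared-exponential kernel matrix between 1-D input arrays."""
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

def gp_posterior_mean(x_train, y_train, x_test, noise=1e-2):
    """Exact GP regression posterior mean: k(x*, X) (K + sigma^2 I)^-1 y."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    return rbf(x_test, x_train) @ np.linalg.solve(K, y_train)

x = np.linspace(0.0, 1.0, 20)
y = np.sin(2 * np.pi * x)          # a toy demonstrated "expert" trajectory
mu = gp_posterior_mean(x, y, x)    # smoothed reconstruction at the inputs
```

Exact GP regression costs O(n^3) in the number of demonstrations; the paper's contribution is precisely an approximation that avoids this cost for real-time prediction, which this sketch does not attempt to reproduce.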

    Gaze-Based Human-Robot Interaction by the Brunswick Model

    We present a new paradigm for human-robot interaction based on social signal processing, and in particular on the Brunswick model. Originally, the Brunswick model describes face-to-face dyadic interaction, assuming that the interactants communicate through a continuous exchange of non-verbal social signals, in addition to the spoken messages. Social signals have to be interpreted through a proper recognition phase that considers visual and audio information. The Brunswick model makes it possible to quantitatively evaluate the quality of the interaction using statistical tools that measure how effective the recognition phase is. In this paper we cast this theory to the case where one of the interactants is a robot; in this case, the recognition phases performed by the robot and by the human have to be revised with respect to the original model. The model is applied to Berrick, a recent open-source low-cost robotic head platform, where gaze is the social signal considered.
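The statistical evaluation the Brunswick model calls for can be illustrated, under a strong simplification, as the correlation between the signal a human expresses and the estimate the robot recognizes; high correlation indicates an effective recognition phase. The data values below are invented for illustration:

```python
import numpy as np

def pearson(a, b):
    """Pearson correlation between an expressed signal and a
    recognized estimate of it (a lens-model-style achievement score)."""
    a = np.asarray(a, dtype=float) - np.mean(a)
    b = np.asarray(b, dtype=float) - np.mean(b)
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

expressed  = [0.1, 0.4, 0.9, 0.3, 0.7]   # human gaze intensity (hypothetical units)
recognized = [0.2, 0.5, 0.8, 0.3, 0.6]   # robot's recognized estimates
r = pearson(expressed, recognized)        # close to 1 => effective recognition
```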

    Putting reaction-diffusion systems into port-Hamiltonian framework

    Reaction-diffusion systems model the evolution of constituents distributed in space under the influence of chemical reactions and diffusion [6], [10]. These systems arise naturally in chemistry [5], but can also be used to model dynamical processes beyond the realm of chemistry, such as in biology, ecology, geology, and physics. In this paper, by adopting the viewpoint of port-controlled Hamiltonian systems [7], we cast reaction-diffusion systems into the port-Hamiltonian framework. Aside from offering a conceptually clear geometric interpretation formalized by a Stokes-Dirac structure [8], a port-Hamiltonian perspective allows us to treat these dissipative systems as interconnected, and thus makes their analysis, both quantitative and qualitative, more accessible from a modern dynamical systems and control theory point of view. This modeling approach permits us to draw immediately some conclusions regarding passivity and stability of reaction-diffusion systems. It is well known that adding diffusion to the reaction system can generate behaviors absent in the ODE case. This primarily pertains to the problem of diffusion-driven instability, which constitutes the basis of Turing’s mechanism for pattern formation [11], [5]. Here the treatment of reaction-diffusion systems as dissipative distributed port-Hamiltonian systems could prove to be instrumental in supplying results on absorbing sets, the existence of the maximal attractor, and stability analysis. Furthermore, by adopting a discrete differential geometry-based approach [9] and discretizing the reaction-diffusion system in port-Hamiltonian form, apart from preserving a geometric structure, a compartmental model analogous to the standard one [1], [2] is obtained.
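The port-Hamiltonian structure itself is not reproduced here, but the class of systems under discussion can be illustrated by a minimal finite-difference simulation of a scalar reaction-diffusion equation. The Fisher-KPP reaction term and all parameters below are illustrative choices, not taken from the paper:

```python
import numpy as np

def step(u, D=0.1, dt=0.01, dx=0.1):
    """One explicit Euler step of u_t = D u_xx + u(1 - u)
    (Fisher-KPP) on a 1-D grid with zero-flux (Neumann) boundaries."""
    lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2
    lap[0] = (u[1] - u[0]) / dx**2       # zero-flux left boundary
    lap[-1] = (u[-2] - u[-1]) / dx**2    # zero-flux right boundary
    return u + dt * (D * lap + u * (1 - u))

u = np.zeros(100)
u[:10] = 1.0                 # initial concentration pulse on the left
for _ in range(2000):
    u = step(u)              # a travelling front propagates rightward
```

The explicit scheme is stable here because D*dt/dx^2 = 0.1 is well below the usual 0.5 threshold; diffusion alone would only flatten the pulse, while the reaction term drives the invasion front, a simple instance of reaction and diffusion producing behavior absent in either mechanism alone.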

    Multi-view gait recognition using a doubly-kernel approach on the Grassmann manifold

    View variation is one of the greatest challenges faced by the gait recognition research community. Recently, there are studies that model sets of gait features from multiple views as linear subspaces, which are known to form a special manifold called the Grassmann manifold. Conjecturing that modeling via linear subspace representation is not completely sufficient for gait recognition across view change, we take a step forward to consider non-linear subspace representation. A collection of multi-view gait features encapsulated in the form of a linear subspace is projected to the non-linear subspace through the expansion coefficients induced by kernel principal component analysis. Since subspace representation is inherently non-Euclidean, naïve vectorization as input to vector-based pattern analysis machines is expected to yield suboptimal accuracy. We deal with this difficulty by embedding the manifold in a Reproducing Kernel Hilbert Space (RKHS) through a positive definite kernel function defined on the Grassmann manifold. A closer examination reveals that the proposed approach can actually be interpreted as a doubly-kernel method. To be specific, the first kernel maps the linear subspace representation non-linearly to a feature space, while the second kernel permits the application of kernelization-enabled machines established for vector-based data on the manifold-valued multi-view gait features. Experiments on the CASIA gait database show that the proposed doubly-kernel method is effective against view change in gait recognition.
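A commonly used positive definite kernel on the Grassmann manifold, of the kind the abstract relies on for the RKHS embedding, is the projection kernel k(A, B) = ||A^T B||_F^2 between orthonormal subspace bases. A minimal sketch follows; the subspace dimensions are arbitrary, not the paper's settings:

```python
import numpy as np

def orthonormal_basis(X):
    """Orthonormal basis of the column span of X (a point on the
    Grassmann manifold) via QR decomposition."""
    Q, _ = np.linalg.qr(X)
    return Q

def projection_kernel(A, B):
    """Projection kernel k(A, B) = ||A^T B||_F^2 between subspaces
    with orthonormal bases A and B; positive definite on the
    Grassmann manifold."""
    return float(np.linalg.norm(A.T @ B, "fro") ** 2)

rng = np.random.default_rng(0)
A = orthonormal_basis(rng.standard_normal((50, 5)))   # gallery subspace
B = orthonormal_basis(rng.standard_normal((50, 5)))   # probe subspace
k_ab = projection_kernel(A, B)
k_aa = projection_kernel(A, A)   # equals the subspace dimension, here 5
```

Any kernel machine that accepts a precomputed Gram matrix (an SVM, kernel PCA, and so on) can then operate directly on such subspace-valued data.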