5,453 research outputs found

    Entropic measures of individual mobility patterns

    Understanding human mobility from a microscopic point of view may represent a fundamental breakthrough for the development of a statistical physics of cognitive systems, and it can shed light on the applicability of macroscopic statistical laws to social systems. Even if the complexity of individual behaviors prevents a true microscopic approach, the introduction of mesoscopic models allows the study of the dynamical properties of the non-stationary states of the considered system. We propose to compute various entropy measures of the individual mobility patterns obtained from GPS data that record the movements of private vehicles in the Florence district, in order to point out new features of human mobility related to the use of time and space and to define the dynamical properties of a stochastic model that could generate similar patterns. Moreover, we relate the predictability properties of human mobility to the distribution of the time elapsed between two successive trips. Our analysis suggests the existence of a hierarchical structure in the mobility patterns, which divides the performed activities into three different categories according to their time cost, with different information contents. We show that a Markov process defined using the individual mobility network is not able to reproduce this hierarchy, which seems to be the consequence of different strategies in the choice of activities. Our results could contribute to the development of governance policies for sustainable mobility in modern cities.
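As a rough sketch of the kind of entropy measures the abstract refers to, the snippet below contrasts the entropy of a trip sequence's location-frequency distribution with the entropy rate of a first-order Markov fit; the gap between the two is one way to quantify predictability. The data and function names are illustrative, not taken from the paper.

```python
import math
from collections import Counter

def shannon_entropy(sequence):
    """Entropy (bits) of the visit-frequency distribution, ignoring order."""
    counts = Counter(sequence)
    n = len(sequence)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def markov_entropy_rate(sequence):
    """Entropy rate (bits/step) of a first-order Markov chain fitted
    to the observed transitions."""
    transitions = Counter(zip(sequence, sequence[1:]))
    origins = Counter(sequence[:-1])
    n = len(sequence) - 1
    h = 0.0
    for (a, b), c in transitions.items():
        p_pair = c / n           # joint probability of the transition
        p_cond = c / origins[a]  # conditional probability given the origin
        h -= p_pair * math.log2(p_cond)
    return h

# toy trip log for one driver; real input would come from GPS records
trips = ["home", "work", "home", "shop", "home", "work", "home", "work"]
print(shannon_entropy(trips))      # uncorrelated entropy
print(markov_entropy_rate(trips))  # lower when order is partly predictable
```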

    A Vector-Integration-to-Endpoint Model for Performance of Viapoint Movements

    Viapoint (VP) movements are movements to a desired point that are constrained to pass through an intermediate point. Studies have shown that VP movements possess properties, such as smooth curvature around the VP, that are not explicable by treating VP movements as strict concatenations of simpler point-to-point (PTP) movements. Such properties have led some theorists to propose whole-trajectory optimization models, which imply that the entire trajectory is pre-computed before movement initiation. This paper reports new experiments conducted to systematically compare VP with PTP trajectories. Analyses revealed a statistically significant early directional deviation in VP movements but no associated curvature change. An explanation of this effect is offered by extending the Vector-Integration-To-Endpoint (VITE) model (Bullock and Grossberg, 1988), which postulates that voluntary movement trajectories emerge as internal gating signals control the integration of continuously computed vector commands based on the evolving, perceptible difference between desired and actual position variables. The model explains the observed trajectories of VP and PTP movements as emergent properties of a dynamical system that does not precompute entire trajectories before movement initiation. The new model includes a working memory and a stage sensitive to time-to-contact information. These cooperate to control serial performance. The structural and functional relationships proposed in the model are consistent with available data on forebrain physiology and anatomy. Office of Naval Research (N00014-92-J-1309, N00014-93-1-1364, N0014-95-1-0409).
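The cited VITE dynamics can be sketched in a few lines: a difference vector V tracks the target-minus-position error, and a GO signal g gates its integration into the position command P. The Euler discretization, gain, and ramp profile below are assumptions for illustration, not the authors' parameters.

```python
def vite_step(P, V, T, g, alpha=5.0, dt=0.01):
    """One Euler step of VITE-style dynamics (illustrative discretization):
    V relaxes toward the difference T - P, and the GO signal g gates
    the integration of the (rectified) difference vector into P."""
    dV = alpha * (-V + (T - P))
    dP = g * max(V, 0.0)  # outflow only for a positive difference vector
    return P + dt * dP, V + dt * dV

# drive a 1-D movement from 0 toward target 1 with a slowly opening gate
P, V, T = 0.0, 0.0, 1.0
for k in range(2000):
    g = min(1.0, k * 0.01)  # ramping GO signal
    P, V = vite_step(P, V, T, g)
print(P)  # position approaches the target without a precomputed trajectory
```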

    An intelligent real time 3D vision system for robotic welding tasks

    MARWIN is a top-level robot control system that has been designed for automatic robot welding tasks. It extracts welding parameters and calculates robot trajectories directly from CAD models, which are then verified by real-time 3D scanning and registration. MARWIN's 3D computer vision provides a user-centred robot environment in which a task is specified by the user by simply confirming and/or adjusting suggested parameters and welding sequences. The focus of this paper is on describing a mathematical formulation for fast 3D reconstruction using structured light, together with the mechanical design and testing of the 3D vision system, and on showing how such technologies can be exploited in robot welding tasks.
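At the core of structured-light reconstruction of the kind described here is ray-plane triangulation: each camera pixel defines a viewing ray, each projected stripe corresponds to a known light plane, and the 3-D point is their intersection. A minimal sketch with hypothetical inputs (camera at the origin, plane in the form n·x = d), not MARWIN's actual formulation:

```python
def triangulate(ray_dir, plane_normal, plane_d):
    """Intersect a camera ray through the origin with a projector light
    plane n . x = d, assuming the ray is not parallel to the plane."""
    denom = sum(n * r for n, r in zip(plane_normal, ray_dir))
    t = plane_d / denom              # ray parameter at the intersection
    return tuple(t * r for r in ray_dir)

# a ray through pixel direction (1, 0, 1) hits the plane x = 2 at (2, 0, 2)
point = triangulate((1.0, 0.0, 1.0), (1.0, 0.0, 0.0), 2.0)
print(point)
```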

    Gesteme-free context-aware adaptation of robot behavior in human–robot cooperation

    Background: Cooperative robotics is gaining acceptance because the typical advantages provided by manipulators are combined with intuitive usage. In particular, hands-on robotics may benefit from adapting the assistant's behavior to the activity currently performed by the user. A fast and reliable classification of human activities is required, as well as strategies to smoothly modify the control of the manipulator. In this scenario, gesteme-based motion classification is inadequate because it needs the observation of a wide percentage of the signal and the definition of a rich vocabulary. Objective: In this work, we present a system able to recognize the user's current activity without a vocabulary of gestemes and to adapt the manipulator's dynamic behavior accordingly. Methods and material: An underlying stochastic model fits variations in the user's guidance forces and the resulting trajectories of the manipulator's end-effector with a set of Gaussian distributions. The high-level switching between these distributions is captured with hidden Markov models. The dynamics of the KUKA light-weight robot, a torque-controlled manipulator, are modified with respect to the classified activity using sigmoid-shaped functions. The presented system is validated on a pool of 12 naive users in a scenario that addresses surgical targeting tasks on soft tissue. The robot's assistance is adapted to obtain a stiff behavior during activities that require critical accuracy constraints, and higher compliance during wide movements. Both the ability to provide the correct classification at each moment (sample accuracy) and the ability to identify the correct sequence of activities (sequence accuracy) were evaluated. Results: The proposed classifier is fast and accurate in all the experiments conducted (80% sample accuracy after observing approximately 450 ms of signal). Moreover, the correct sequence of activities is recognized without unwanted transitions (sequence accuracy of approximately 90% when computed away from user-desired transitions). Finally, the proposed activity-based adaptation of the robot's dynamics does not compromise smoothness (normalized jerk score < 0.01). Conclusion: The proposed system is able to dynamically assist the operator during cooperation in the presented scenario.
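The sigmoid-shaped adaptation of the robot's stiffness can be sketched as below. The stiffness range, steepness, and the idea of driving the blend with the classifier's activity probability are illustrative assumptions, not the paper's actual controller parameters.

```python
import math

def stiffness(p_precise, k_low=200.0, k_high=2000.0, steepness=10.0):
    """Sigmoid blend between a compliant and a stiff Cartesian stiffness
    (N/m), driven by the classifier's probability that the user is in a
    precision activity. All values here are illustrative."""
    s = 1.0 / (1.0 + math.exp(-steepness * (p_precise - 0.5)))
    return k_low + (k_high - k_low) * s

# compliant during wide movements, stiff during precision targeting
print(stiffness(0.1))  # near k_low
print(stiffness(0.9))  # near k_high
```

The sigmoid avoids abrupt stiffness jumps at the classifier's decision boundary, which is what keeps the resulting robot behavior smooth.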

    Understanding of Object Manipulation Actions Using Human Multi-Modal Sensory Data

    Object manipulation actions represent an important share of the Activities of Daily Living (ADLs). In this work, we study how to enable service robots to use human multi-modal data to understand object manipulation actions, and how they can recognize such actions when humans perform them during human-robot collaboration tasks. The multi-modal data in this study consist of videos, hand motion data, applied forces as represented by the pressure patterns on the hand, and measurements of the bending of the fingers, collected as human subjects performed manipulation actions. We investigate two different approaches. In the first, we show that the multi-modal signal (motion, finger bending and hand pressure) generated by the action can be decomposed into a set of primitives that can be seen as its building blocks. These primitives are used to define 24 multi-modal primitive features. The primitive features can in turn be used as an abstract representation of the multi-modal signal and employed for action recognition. In the second approach, visual features are extracted from the data using a pre-trained image classification deep convolutional neural network. The visual features are subsequently used to train the classifier. We also investigate whether adding data from other modalities produces a statistically significant improvement in the classifier performance. We show that both approaches produce comparable performance. This implies that image-based methods can successfully recognize human actions during human-robot collaboration. On the other hand, in order to provide training data for the robot so it can learn how to perform object manipulation actions, multi-modal data provide a better alternative.
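A toy stand-in for the primitive-feature idea: segment a single 1-D modality signal into rise/hold/fall primitives and count each type. The real system defines 24 multi-modal primitive features; the threshold and labels here are assumptions made for illustration.

```python
def primitive_features(signal, eps=0.05):
    """Decompose a 1-D modality signal (e.g. one pressure channel) into
    rise/hold/fall primitives and count each type."""
    labels = []
    for a, b in zip(signal, signal[1:]):
        d = b - a
        labels.append("rise" if d > eps else "fall" if d < -eps else "hold")
    # collapse consecutive repeats so each primitive is counted once
    collapsed = [labels[0]] + [l for p, l in zip(labels, labels[1:]) if l != p]
    return {k: collapsed.count(k) for k in ("rise", "hold", "fall")}

# grasp-like profile: pressure rises, holds, then releases
print(primitive_features([0, 0.5, 1, 1, 1, 0.5, 0]))
```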

    Structure Learning in Coupled Dynamical Systems and Dynamic Causal Modelling

    Identifying a coupled dynamical system out of many plausible candidates, each of which could serve as the underlying generator of some observed measurements, is a profoundly ill-posed problem that commonly arises when modelling real-world phenomena. In this review, we detail a set of statistical procedures for inferring the structure of nonlinear coupled dynamical systems (structure learning), which has proved useful in neuroscience research. A key focus here is the comparison of competing models of (i.e., hypotheses about) network architectures and implicit coupling functions in terms of their Bayesian model evidence. These methods are collectively referred to as dynamic causal modelling (DCM). We focus on a relatively new approach that is proving remarkably useful; namely, Bayesian model reduction (BMR), which enables rapid evaluation and comparison of models that differ in their network architecture. We illustrate the usefulness of these techniques through modelling neurovascular coupling (cellular pathways linking neuronal and vascular systems), whose function is an active focus of research in neurobiology and the imaging of coupled neuronal systems.
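The Bayesian model evidence comparison that DCM relies on can be illustrated with the simplest possible case: scoring a "reduced" model (coin bias fixed at 0.5, no free parameters) against a "full" model (uniform prior on the bias) on the same data. This is plain evidence comparison, not BMR itself, and the example is not from the review.

```python
import math

def log_ev_fixed(h, n, theta=0.5):
    """Log evidence of a model that fixes the bias at theta
    (a 'reduced' model with no free parameters)."""
    return h * math.log(theta) + (n - h) * math.log(1 - theta)

def log_ev_free(h, n):
    """Log evidence of the 'full' model with a uniform prior on the bias:
    integral of theta^h (1-theta)^(n-h) dtheta = B(h+1, n-h+1)."""
    return math.lgamma(h + 1) + math.lgamma(n - h + 1) - math.lgamma(n + 2)

# 8 heads in 10 tosses: the extra parameter starts to pay for itself,
# so the log Bayes factor (free minus fixed) is positive
print(log_ev_free(8, 10) - log_ev_fixed(8, 10))
```

Note that the evidence automatically penalizes the extra parameter: with 5 heads in 10 tosses the fixed (simpler) model wins.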

    Human Motion Trajectory Prediction: A Survey

    With growing numbers of intelligent autonomous systems in human environments, the ability of such systems to perceive, understand and anticipate human behavior becomes increasingly important. Specifically, predicting the future positions of dynamic agents, and planning that accounts for such predictions, are key tasks for self-driving vehicles, service robots and advanced surveillance systems. This paper provides a survey of human motion trajectory prediction. We review, analyze and structure a large selection of work from different communities and propose a taxonomy that categorizes existing methods based on the motion modeling approach and the level of contextual information used. We provide an overview of the existing datasets and performance metrics. We discuss the limitations of the state of the art and outline directions for further research. Comment: Submitted to the International Journal of Robotics Research (IJRR), 37 pages.
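One of the simplest motion models covered by such surveys is the constant-velocity baseline, which extrapolates the last observed displacement. A minimal sketch, assuming 2-D positions and a unit time step:

```python
def constant_velocity_predict(track, horizon, dt=1.0):
    """Constant-velocity baseline: estimate velocity from the last two
    observed positions and extrapolate it over `horizon` future steps."""
    (x0, y0), (x1, y1) = track[-2], track[-1]
    vx, vy = (x1 - x0) / dt, (y1 - y0) / dt
    return [(x1 + vx * dt * k, y1 + vy * dt * k) for k in range(1, horizon + 1)]

# an agent moving along the diagonal keeps moving along the diagonal
print(constant_velocity_predict([(0.0, 0.0), (1.0, 1.0)], 3))
```

Despite its simplicity, this baseline is a standard point of comparison for learned predictors in the literature.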

    Recursive Bayesian identification of nonlinear autonomous systems

    This paper concerns the recursive identification of nonlinear discrete-time systems for which the original equations of motion are not known. Since the true model structure is not available, we replace it with a generic nonlinear model. This generic model discretizes the state space into a finite grid and associates a set of velocity vectors with the nodes of the grid. The velocity vectors are then interpolated to define a vector field on the complete state space. The proposed method follows a Bayesian framework in which the identified velocity vectors are selected by the maximum a posteriori (MAP) criterion. The resulting algorithms allow a recursive update of the velocity vectors as new data are obtained. Simulation examples using the recursive algorithm are presented.
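The recursive MAP update of grid velocity vectors can be sketched, under simplifying assumptions (1-D grid, Gaussian observation noise with known variance, independent nodes, nearest-node assignment instead of interpolation), as a per-node conjugate Gaussian update whose posterior mean is the MAP estimate:

```python
class GridVectorField:
    """Velocity vectors on a 1-D grid: each node keeps a Gaussian
    posterior over its velocity, and the MAP estimate is the posterior
    mean (conjugate Gaussian update with known observation noise)."""
    def __init__(self, nodes, prior_var=10.0, noise_var=0.1):
        self.nodes = nodes
        self.mean = {x: 0.0 for x in nodes}   # MAP velocity per node
        self.var = {x: prior_var for x in nodes}
        self.noise_var = noise_var

    def update(self, x, v_obs):
        """Fold one observed (state, velocity) pair into the posterior
        of the nearest grid node, recursively."""
        node = min(self.nodes, key=lambda n: abs(n - x))
        m, s2, r2 = self.mean[node], self.var[node], self.noise_var
        k = s2 / (s2 + r2)                    # Kalman-style gain
        self.mean[node] = m + k * (v_obs - m)
        self.var[node] = (1 - k) * s2
        return node

# stream noisy velocity observations near node 0.0
field = GridVectorField([0.0, 1.0, 2.0])
for v in (-0.48, -0.52, -0.50):
    field.update(0.1, v)
print(field.mean[0.0])  # posterior mean drifts toward the observed velocity
```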

    Dynamical principles in neuroscience

    Dynamical modeling of neural systems and brain functions has a history of success over the last half century. This includes, for example, the explanation and prediction of some features of neural rhythmic behaviors. Many interesting dynamical models of learning and memory based on physiological experiments have been suggested over the last two decades. Dynamical models even of consciousness now exist. Usually these models and results are based on traditional approaches and paradigms of nonlinear dynamics, including dynamical chaos. Neural systems are, however, an unusual subject for nonlinear dynamics for several reasons: (i) Even the simplest neural network, with only a few neurons and synaptic connections, has an enormous number of variables and control parameters. These make neural systems adaptive and flexible, and are critical to their biological function. (ii) In contrast to traditional physical systems described by well-known basic principles, first principles governing the dynamics of neural systems are unknown. (iii) Many different neural systems exhibit similar dynamics despite having different architectures and different levels of complexity. (iv) The network architecture and connection strengths are usually not known in detail and therefore the dynamical analysis must, in some sense, be probabilistic. (v) Since nervous systems are able to organize behavior based on sensory inputs, the dynamical modeling of these systems has to explain the transformation of temporal information into combinatorial or combinatorial-temporal codes, and vice versa, for memory and recognition. In this review these problems are discussed in the context of addressing two stimulating questions: What can neuroscience learn from nonlinear dynamics, and what can nonlinear dynamics learn from neuroscience? This work was supported by NSF Grant No. NSF/EIA-0130708 and Grant No. PHY 0414174; NIH Grant No. 1 R01 NS50945 and Grant No. NS40110; MEC BFI2003-07276; and Fundación BBVA.