Articular human joint modelling
Copyright © Cambridge University Press 2009. The work reported in this paper encapsulates the theories and algorithms developed to drive the core analysis modules of software built to model the musculoskeletal structure of anatomic joints. Its newly developed algorithms, which derive joint kinematics from local bone surface and contact geometry, distinguish the proposed modeller from currently available ones. Many modellers can simulate gross human body motion; however, none offers a complete set of joint-modelling elements. In every case, joint modelling appears to be an extension of their core analysis capability, which is musculoskeletal motion dynamics. An analysis framework focused on human joints would therefore offer significant benefit and has the potential to be used in many orthopaedic applications. The local mobility of joints has a significant influence on human motion analysis and on the understanding of joint loading, tissue behaviour, and contact forces. However, developing a bone-surface-based joint modeller raises a number of major problems, from tissue idealization to surface geometry discretization and non-linear motion analysis. This paper presents the following: (a) the physical deformation of biological tissues as linear or non-linear viscoelastic deformation, based on spring-dashpot elements. (b) Linear dynamic multibody modelling, where the linear formulation is established for small motions and is particularly useful for calculating the equilibrium position of the joint. This model can also be used to find small-motion behaviour or loading under static conditions, and it has the potential to quantify joint laxity. (c) Non-linear dynamic multibody modelling, where a non-matrix, algorithmic formulation is presented. The approach handles complex material and geometrical non-linearity easily.
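The spring-dashpot idealization of soft tissue described in (a) can be sketched as a tension-only Kelvin–Voigt element; the function name, parameters, and tension-only assumption below are illustrative, not taken from the paper.

```python
def kelvin_voigt_force(stiffness, damping, length, length_rate, rest_length):
    """Force of a single spring-dashpot (Kelvin-Voigt) element.

    Modelled as tension-only: soft tissue is assumed unable to push,
    so the element is inactive when slack (illustrative assumption).
    """
    elongation = length - rest_length
    if elongation <= 0.0:
        return 0.0  # slack tissue carries no load
    # elastic (spring) term plus viscous (dashpot) term
    return stiffness * elongation + damping * length_rate
```

For example, a ligament-like element with stiffness 100 N/m and damping 5 N·s/m, stretched 0.2 m beyond rest length while lengthening at 0.1 m/s, carries 20.5 N.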
(d) Shortest-path algorithms for calculating soft-tissue line-of-action geometries. The developed algorithms are based on calculating minimum ‘surface mass’ and ‘surface covariance’; an improved version of the ‘surface covariance’ algorithm is described as ‘residual covariance’. The resulting path is used to establish the direction of the forces and moments acting on joints, information needed for the linear or non-linear treatment of joint motion. (e) The final contribution of the paper is the treatment of collision. In the virtual world, the difficulty in analysing bodies in motion arises from body interpenetrations. The collision algorithm proposed in the paper involves finding the shortest projected ray from one body to the other; the projection of the body is determined by the resultant forces acting on it due to soft-tissue connections under tension. This enables the collision condition of non-convex objects to be calculated accurately. After the initial collision detection, the analysis attaches special springs (with stiffness only normal to the surfaces) at the ‘potentially colliding points’, and the motion of the bodies is recalculated. The collision algorithm incorporates rotation as well as translation and continues until joint equilibrium is achieved. Finally, the results obtained with the software are compared with experimental results obtained using cadaveric joints.
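The contact springs of (e), with stiffness only normal to the surface, amount to a penalty force that activates on interpenetration. A minimal sketch, assuming a locally planar surface patch given by a point and outward normal (the function and its signature are hypothetical, not the paper's API):

```python
import numpy as np

def contact_force(point, surface_point, surface_normal, k_contact):
    """Penalty force from a contact spring acting only along the surface normal.

    The signed gap is the distance from `point` to the surface patch along
    the outward normal; the spring is active only while the gap is negative
    (interpenetration), mimicking stiffness purely normal to the surface.
    """
    n = np.asarray(surface_normal, dtype=float)
    n = n / np.linalg.norm(n)
    gap = np.dot(np.asarray(point, dtype=float) - np.asarray(surface_point, dtype=float), n)
    if gap >= 0.0:
        return np.zeros(3)  # no penetration, no contact force
    return -k_contact * gap * n  # pushes the penetrating point back out
```

In the iterative scheme the abstract describes, such forces would be recomputed after each update of the bodies' translation and rotation until equilibrium.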
Gait recognition and understanding based on hierarchical temporal memory using 3D gait semantic folding
Gait recognition and understanding systems have shown wide-ranging application prospects. However, their reliance on unstructured data from images and video has limited their performance; for example, they are easily influenced by multiple views, occlusion, clothing, and object-carrying conditions. This paper addresses these problems using realistic 3-dimensional (3D) human structural data and a sequential pattern-learning framework with a top-down attention-modulating mechanism based on Hierarchical Temporal Memory (HTM). First, an accurate 2-dimensional (2D) to 3D human body pose and shape semantic parameter estimation method is proposed, which exploits the advantages of an instance-level body-parsing model and a virtual dressing method. Second, using gait semantic folding, the estimated body parameters are encoded in a sparse 2D matrix to construct a structural gait semantic image. To achieve time-based gait recognition, an HTM network is constructed to obtain sequence-level gait sparse distribution representations (SL-GSDRs). A top-down attention mechanism is introduced to handle various conditions, including multiple views, by refining the SL-GSDRs according to prior knowledge. The proposed gait learning model not only helps gait recognition tasks overcome the difficulties of real application scenarios but also provides structured gait semantic images for visual cognition. Experimental analyses on the CMU MoBo, CASIA B, TUM-IITKGP, and KY4D datasets show a significant performance gain in terms of accuracy and robustness.
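The idea of encoding estimated body parameters into a sparse 2D binary matrix can be illustrated with a toy encoder; the grid size, one-row-per-parameter layout, and bit width below are assumptions for the sketch, not the paper's actual gait-semantic-folding scheme.

```python
import numpy as np

def fold_parameters(params, grid=(32, 32), bits_per_param=4):
    """Encode normalized body parameters (each in [0, 1]) as a sparse 2D
    binary matrix, loosely in the spirit of semantic folding.

    Toy layout: each parameter owns one row; its value selects where a
    short run of active bits is placed along that row, so similar values
    yield overlapping (semantically close) activations.
    """
    rows, cols = grid
    sdr = np.zeros(grid, dtype=np.uint8)
    for i, p in enumerate(params[:rows]):
        start = int(round(p * (cols - bits_per_param)))
        sdr[i, start:start + bits_per_param] = 1
    return sdr
```

The resulting matrix is sparse (a few active bits per row), which is the property HTM-style sequence learners rely on.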
Real-time edge tracking using a tactile sensor
Object recognition through the use of input from multiple sensors is an important aspect of an autonomous manipulation system. In tactile object recognition, it is necessary to determine the location and orientation of object edges and surfaces. A controller is proposed that utilizes a tactile sensor in the feedback loop of a manipulator to track along edges. In the control system, the data from the tactile sensor are first processed to find edges; the parameters of these edges are then used to generate a control signal for a hybrid controller. Theory is presented for tactile edge detection and for an edge-tracking controller. In addition, experimental verification of the edge-tracking controller is presented.
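One simple way to extract an edge's position and orientation from a tactile pressure array, in the spirit of the processing step above, is a principal-axis fit over the activated taxels; this PCA-based sketch is an illustrative method, not the paper's algorithm.

```python
import numpy as np

def detect_edge(tactile_image, threshold=0.5):
    """Estimate edge position and orientation from a tactile pressure array.

    Taxels with pressure above `threshold` are treated as lying on the edge;
    the centroid gives the edge position and the principal axis of the
    activated coordinates gives the edge direction (illustrative approach).
    """
    rows, cols = np.nonzero(tactile_image > threshold)
    pts = np.stack([rows, cols], axis=1).astype(float)
    centroid = pts.mean(axis=0)
    # first right singular vector = direction of maximum spread = edge direction
    _, _, vt = np.linalg.svd(pts - centroid)
    direction = vt[0]
    angle = np.arctan2(direction[1], direction[0])
    return centroid, angle
```

The centroid and angle would then feed the hybrid controller as the edge parameters used to generate the tracking command.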
Robust Scene Estimation for Goal-directed Robotic Manipulation in Unstructured Environments
To make autonomous robots "taskable" so that they function properly and interact fluently with human partners, they must be able to perceive and understand the semantic aspects of their environments. More specifically, they must know what objects exist and where they are in the unstructured human world. Progress in robot perception, especially in deep learning, has greatly improved the detection and localization of objects. However, it remains a challenge for robots to perform highly reliable scene estimation in unstructured environments, as determined by robustness, adaptability, and scale. In this dissertation, we address the scene estimation problem under uncertainty, especially in unstructured environments. We enable robots to build a reliable object-oriented representation that describes the objects present in the environment as well as inter-object spatial relations. Specifically, we focus on addressing the following challenges for reliable scene estimation: 1) robust perception under uncertainty arising from noisy sensors, objects in clutter, and perceptual aliasing; 2) adaptable perception in adverse conditions by combining deep learning and probabilistic generative methods; 3) scalable perception as the number of objects grows and the structure of objects becomes more complex (e.g., objects in dense clutter).
Towards realizing robust perception, our objective is to ground raw sensor observations into scene states while dealing with uncertainty from sensor measurements and actuator control. Scene states are represented as scene graphs, where scene graphs denote parameterized axiomatic statements that assert relationships between objects and their poses. To deal with the uncertainty, we present a pure generative approach, Axiomatic Scene Estimation (AxScEs). AxScEs estimates a probabilistic distribution across plausible scene graph hypotheses describing the configuration of objects. By maintaining a diverse set of possible states, the proposed approach demonstrates robustness to local minima in the scene graph state space and effectiveness for manipulation-quality perception, measured by edit distance on scene graphs.
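A scene graph of the kind described above can be represented as a set of parameterized axioms, with edit distance counting the operations needed to turn one graph into another. The triple representation and symmetric-difference count below are a simplified stand-in for the dissertation's formulation, purely for illustration.

```python
def scene_graph_edits(graph_a, graph_b):
    """Edit distance between two scene graphs represented as sets of
    (parent, relation, child) axioms (simplified illustrative metric).

    Each axiom present in one graph but not the other costs one edit
    (a deletion or an insertion), so the distance is the size of the
    symmetric difference of the two axiom sets.
    """
    a, b = set(graph_a), set(graph_b)
    return len(a - b) + len(b - a)
```

Under this metric, a hypothesis is scored by how few edits separate it from the estimated (or ground-truth) scene graph; zero edits means the graphs assert exactly the same relations.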
To scale up to more unstructured scenarios and adapt to adversarial scenarios, we present Sequential Scene Understanding and Manipulation (SUM), which estimates the scene as a collection of objects in cluttered environments. SUM is a two-stage method that combines the accuracy and efficiency of convolutional neural networks (CNNs) with probabilistic inference methods. Despite their strengths, CNNs are opaque about how their decisions are made and fragile when generalizing beyond overfit training samples in adverse conditions (e.g., changes in illumination). The probabilistic generative method complements these weaknesses and provides an avenue for adaptable perception.
To scale up to densely cluttered environments where objects are physically touching with severe occlusions, we present GeoFusion, which fuses noisy observations from multiple frames by exploring geometric consistency at the object level. Geometric consistency characterizes geometric compatibility between objects and geometric similarity between observations and objects. It reasons about geometry at the object level, offering a fast and reliable way to be robust to semantic perceptual aliasing. The proposed approach demonstrates greater robustness and accuracy than the state-of-the-art pose estimation approach.

PhD dissertation, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/163060/1/zsui_1.pd
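The geometric-compatibility side of the consistency reasoning above can be sketched as a non-interpenetration test between object hypotheses; modelling objects as axis-aligned boxes with a tolerance parameter is a toy assumption here, not GeoFusion's actual geometry model.

```python
import numpy as np

def boxes_consistent(center_a, half_a, center_b, half_b, tolerance=0.0):
    """Geometric compatibility between two object hypotheses modelled as
    axis-aligned boxes (center and half-extents per axis).

    The hypotheses are consistent if the boxes do not interpenetrate by
    more than `tolerance` along at least one axis -- a toy proxy for
    object-level geometric reasoning.
    """
    gap = (np.abs(np.asarray(center_a, dtype=float) - np.asarray(center_b, dtype=float))
           - (np.asarray(half_a, dtype=float) + np.asarray(half_b, dtype=float)))
    # separated (or barely touching) along any axis => no deep interpenetration
    return bool(np.any(gap >= -tolerance))
```

Pairwise checks of this kind let a fusion system discard hypothesis sets in which two objects would occupy the same space, pruning semantically aliased but geometrically impossible interpretations.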