137 research outputs found

    Survey on assembly sequencing: a combinatorial and geometrical perspective

    A systematic overview of assembly sequencing is presented. Sequencing lies at the core of assembly planning; variants include finding a feasible sequence that respects the precedence constraints between the assembly operations, or determining an optimal one according to one or several operational criteria. The different ways of representing the space of feasible assembly sequences are described, as well as the search and optimization algorithms that can be used. Geometry plays a fundamental role in deriving the precedence constraints between assembly operations, and it is the subject of the second part of the survey, which also treats motion in contact in the context of the actual performance of assembly operations.
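    Finding one feasible sequence under precedence constraints reduces, in the simplest combinatorial setting, to a topological sort of the operation graph. A minimal Python sketch (the operation names are invented for illustration, not taken from the survey):

    ```python
    from collections import defaultdict, deque

    def feasible_sequence(operations, precedence):
        """Return one feasible assembly sequence respecting the precedence
        constraints, or None if the constraints are cyclic (infeasible).

        precedence: iterable of (a, b) pairs, meaning operation a must
        precede operation b.
        """
        indegree = {op: 0 for op in operations}
        successors = defaultdict(list)
        for a, b in precedence:
            successors[a].append(b)
            indegree[b] += 1

        ready = deque(op for op in operations if indegree[op] == 0)
        order = []
        while ready:
            op = ready.popleft()
            order.append(op)
            for nxt in successors[op]:
                indegree[nxt] -= 1
                if indegree[nxt] == 0:
                    ready.append(nxt)
        # If a cycle remains, some operations never became ready.
        return order if len(order) == len(operations) else None

    # Hypothetical four-operation assembly.
    ops = ["base", "axle", "wheel", "cover"]
    prec = [("base", "axle"), ("axle", "wheel"), ("base", "cover")]
    print(feasible_sequence(ops, prec))
    ```

    Optimizing over all feasible sequences (rather than finding one) requires searching the space this sort only samples, which is where the representations and algorithms the survey covers come in.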

    Pump it up: computer animation of a biomechanically based model of muscle using the finite element method

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Architecture, 1992. Includes bibliographical references (leaves 175-179). By David Tzu-Wei Chen.

    AMP-CAD: Automatic Assembly Motion Planning Using CAD Models of Parts

    Assembly with robots involves two kinds of motions: point-to-point motions and force/torque-guided motions. The former are faster and more amenable to automatic planning; the latter are necessary for dealing with tight clearances. In this paper, we describe an assembly motion planning system that uses descriptions of assemblies and CAD models of parts to automatically determine which motions should be point-to-point and which should be force/torque guided. Our planner uses graph search over a potential field representation of parts to calculate candidate assembly paths. Given the tolerances of the parts and other uncertainties, these paths are then analyzed for the likelihood of collisions, and those path segments that are prone to collisions are marked for execution under force/torque control. The calculation of the various motions is facilitated by an object-oriented and feature-based assembly representation. A highlight of this representation is the manner in which tolerance information is taken into account: the representation of, say, a part contains a pointer to the boundary representation of the part in its most material condition form. As first defined by Requicha, the most material condition form of a geometric entity is obtained by expanding all the convexities and shrinking all the concavities by the relevant tolerances. An integral part of the assembly motion planner is the execution unit, which holds knowledge of the different types of automatic EDR (error detection and recovery) strategies. During the execution of a force/torque guided motion, this unit invokes the EDR strategies appropriate to the geometric constraints relevant to the motion. This system, called AMP-CAD, has been experimentally verified using a Cincinnati Milacron T3-726 robot and a Puma 762 robot on a variety of assemblies.
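    The candidate-path computation (graph search over a potential field) can be illustrated with a small grid sketch. The grid, field values, and cell coordinates below are invented for illustration and are not AMP-CAD's actual representation; the idea is only that the step cost absorbs the local potential, so the search prefers clearance-rich regions:

    ```python
    import heapq

    def plan_path(potential, start, goal):
        """Dijkstra search over a 2D grid potential field. Each cell's
        potential is added to the unit step cost, so paths avoid
        high-potential (tight-clearance) cells. Returns a list of
        (row, col) cells from start to goal, or None."""
        rows, cols = len(potential), len(potential[0])
        dist = {start: 0.0}
        prev = {}
        heap = [(0.0, start)]
        while heap:
            d, cell = heapq.heappop(heap)
            if cell == goal:
                path = [cell]
                while cell in prev:
                    cell = prev[cell]
                    path.append(cell)
                return path[::-1]
            if d > dist.get(cell, float("inf")):
                continue  # stale queue entry
            r, c = cell
            for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if 0 <= nr < rows and 0 <= nc < cols:
                    nd = d + 1.0 + potential[nr][nc]
                    if nd < dist.get((nr, nc), float("inf")):
                        dist[(nr, nc)] = nd
                        prev[(nr, nc)] = cell
                        heapq.heappush(heap, (nd, (nr, nc)))
        return None

    # High potential in the top and bottom of the middle column models
    # a tight-clearance region; the planner detours through the gap.
    field = [
        [0.0, 9.0, 0.0],
        [0.0, 0.5, 0.0],
        [0.0, 9.0, 0.0],
    ]
    print(plan_path(field, (0, 0), (0, 2)))
    ```

    Segments of a path like this that still pass close to obstacles would then, in AMP-CAD's scheme, be flagged for force/torque-guided execution rather than point-to-point motion.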

    Task level strategies for robots

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1994. Includes bibliographical references (p. 211-225). By Sundar Narasimhan.

    Simulating Humans: Computer Graphics, Animation, and Control

    People are all around us. They inhabit our home, workplace, entertainment, and environment. Their presence and actions are noted or ignored, enjoyed or disdained, analyzed or prescribed. The very ubiquitousness of other people in our lives poses a tantalizing challenge to the computational modeler: people are at once the most common object of interest and yet the most structurally complex. Their everyday movements are amazingly fluid yet demanding to reproduce, with actions driven not just mechanically by muscles and bones but also cognitively by beliefs and intentions. Our motor systems manage to learn how to make us move without leaving us the burden or pleasure of knowing how we did it. Likewise we learn how to describe the actions and behaviors of others without consciously struggling with the processes of perception, recognition, and language.

    A biomechanics-based articulation model for medical applications

    Computer graphics entered the medical world especially after the arrival of 3D medical imaging. Computer graphics techniques are already integrated in the diagnosis procedure through the visual three-dimensional analysis of computed tomography, magnetic resonance, and even ultrasound data. The representations they provide, nevertheless, are static pictures of the patient's body, lacking functional information. We believe that the next step in computer-assisted diagnosis and surgery planning depends on the development of functional 3D models of the human body. It is in this context that we propose a model of articulations based on biomechanics. Such a model is able to simulate joint functionality in order to allow for a number of medical applications. It was developed with the following requirements in mind: it must be simple enough to implement on a computer yet realistic enough to allow for medical applications, and it must be visual so that applications can explore the joint in a 3D simulation environment. We therefore propose to combine kinematic motion for the parts that can be considered rigid, such as bones, with physical simulation of the soft tissues. We also deal with the interaction between the different elements of the joint, for which we propose a specific contact management model. Our kinematic skeleton is based on anatomy. Special considerations have been taken to include anatomical features like axis displacements, range-of-motion control, and joint coupling. Once a 3D model of the skeleton is built, it can be driven by data coming from motion capture or specified by a specialist, a clinician for instance. Our deformation model is an extension of the classical mass-spring system. A spherical volume is considered around mass points, and mechanical properties of real materials can be used to parameterize the model. Viscoelasticity, anisotropy and non-linearity of the tissues are simulated.
    In particular, we proposed a method to configure the mass-spring matrix such that the objects behave according to a predefined Young's modulus. A contact management model is also proposed to deal with the geometric interactions between the elements inside the joint. After testing several approaches, we proposed a new method for collision detection that measures, in constant time, the signed distance to the closest point for each point of two meshes subject to collide. We also proposed a method for collision response that acts directly on the surface geometry, such that the physical behavior relies on the propagation of reaction forces produced inside the tissue. Finally, we proposed a 3D model of a joint combining the three elements: anatomical skeleton motion, biomechanical soft-tissue deformation, and contact management. On top of that we built a virtual hip joint and implemented a set of medical application prototypes. Such applications allow for assessment of stress distribution on the articular surfaces, range-of-motion estimation based on ligament constraints, ligament elasticity estimation from clinically measured range of motion, and pre- and post-operative evaluation of stress distribution. Although our model provides physicians with a number of useful variables for diagnosis and surgery planning, it should be improved for effective clinical use. Validation has been done partially; a global clinical validation is still necessary. Patient-specific data are still difficult to obtain, especially individualized mechanical properties of tissues. The characterization of material properties in our soft-tissue model could also be improved by including control over the shear modulus.
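    The link between a spring's stiffness and a target Young's modulus can be illustrated with the standard rod-elasticity relation k = EA/L. This is a generic sketch with made-up parameter values, not the thesis's actual matrix-configuration method:

    ```python
    def spring_stiffness(youngs_modulus, cross_section_area, rest_length):
        """Hooke-law stiffness of a spring standing in for a tissue
        element: k = E * A / L (rod elasticity approximation)."""
        return youngs_modulus * cross_section_area / rest_length

    def simulate_stretch(k, mass, rest_length, force,
                         dt=1e-3, steps=5000, damping=5.0):
        """Explicit Euler integration of one damped mass-spring element
        pulled by a constant force; it settles near the static
        extension force / k."""
        x, v = rest_length, 0.0
        for _ in range(steps):
            f = force - k * (x - rest_length) - damping * v
            v += (f / mass) * dt
            x += v * dt
        return x - rest_length

    # Illustrative soft-tissue-like parameters (SI units, invented).
    k = spring_stiffness(youngs_modulus=1.0e4,   # 10 kPa
                         cross_section_area=1.0e-4,
                         rest_length=0.05)       # -> k = 20 N/m
    extension = simulate_stretch(k, mass=0.01, rest_length=0.05, force=0.4)
    print(extension)  # settles near force / k = 0.02 m
    ```

    Configuring a whole mass-spring matrix so the aggregate mesh matches a measured Young's modulus is the harder problem the thesis addresses; the sketch only shows the single-element relation it builds on.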

    Integrated Robot Task and Motion Planning in the Now

    This paper provides an approach to integrating geometric motion planning with logical task planning for long-horizon tasks in domains with many objects. We propose a tight integration between the logical and geometric aspects of planning. We use a logical representation which includes entities that refer to poses, grasps, paths and regions, without the need for a priori discretization. Given this representation and some simple mechanisms for geometric inference, we characterize the pre-conditions and effects of robot actions in terms of these logical entities. We then reason about the interaction of the geometric and non-geometric aspects of our domains using the general-purpose mechanism of goal regression (also known as pre-image backchaining). We propose an aggressive mechanism for temporal hierarchical decomposition, which postpones the pre-conditions of actions to create an abstraction hierarchy that both limits the lengths of plans that need to be generated and limits the set of objects relevant to each plan. We describe an implementation of this planning method and demonstrate it in a simulated kitchen environment in which it solves problems that require approximately 100 individual pick or place operations for moving multiple objects in a complex domain. This work was supported in part by the NSF under Grant No. 1117325. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. We also gratefully acknowledge support from ONR MURI grant N00014-09-1-1051, from AFOSR grant AOARD-104135, and from the Singapore Ministry of Education under a grant to the Singapore-MIT International Design Center. We thank Willow Garage for the use of the PR2 robot as part of the PR2 Beta Program.
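    Goal regression (pre-image backchaining) over symbolic actions can be sketched in a few lines. The pick-and-place domain below is a hypothetical STRIPS-style toy, not the paper's representation (which notably avoids a priori discretization and includes geometric entities):

    ```python
    def regress(goal, action):
        """Pre-image of `goal` under `action`: the action must not
        delete any goal fact, and the regressed subgoal is
        (goal - add effects) + preconditions."""
        pre, add, delete = action
        if delete & goal:
            return None  # action would undo part of the goal
        return (goal - add) | pre

    def plan(goal, state, actions, depth=5):
        """Depth-bounded backward chaining from the goal toward the
        known initial state; returns an action-name list or None."""
        if goal <= state:
            return []
        if depth == 0:
            return None
        for name, action in actions.items():
            sub = regress(goal, action)
            if sub is not None and sub != goal:
                rest = plan(sub, state, actions, depth - 1)
                if rest is not None:
                    return rest + [name]
        return None

    # Hypothetical domain: each action is (preconditions, add, delete).
    actions = {
        "pick(A)": (frozenset({"handempty", "at(A,table)"}),
                    frozenset({"holding(A)"}),
                    frozenset({"handempty", "at(A,table)"})),
        "place(A,bin)": (frozenset({"holding(A)"}),
                         frozenset({"at(A,bin)", "handempty"}),
                         frozenset({"holding(A)"})),
    }
    start = frozenset({"handempty", "at(A,table)"})
    print(plan(frozenset({"at(A,bin)"}), start, actions))
    ```

    The paper's contribution lies in making the regressed subgoals refer to continuous entities (poses, grasps, paths, regions) via geometric inference, rather than to a fixed set of discrete facts as in this sketch.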

    From surfaces to objects: recognizing objects using surface information and object models

    This thesis describes research on recognizing partially obscured objects using surface information, like Marr's 2½D sketch ([MAR82]), and surface-based geometrical object models. The goal of the recognition process is to produce fully instantiated object hypotheses, with either image evidence for each feature or explanations for its absence in terms of self or external occlusion. The central point of the thesis is that using surface information should be an important part of the image understanding process, because surfaces are the features that directly link perception to the objects perceived (for normal "camera-like" sensing) and because surfaces make explicit the information needed to understand and cope with some visual problems (e.g. obscured features). Further, because surfaces are both the data and the model primitive, detailed recognition can be made both simpler and more complete. Recognition input is a surface image, which represents surface orientation and absolute depth. Segmentation criteria are proposed for forming surface patches of constant curvature character, based on surface shape discontinuities, which become labeled segmentation boundaries. Partially obscured object surfaces are reconstructed using stronger surface-based constraints. Surfaces are grouped to form surface clusters, which are 3D identity-independent solids that often correspond to model primitives. These are used here as a context within which to select models and find all object features. True three-dimensional properties of image boundaries, surfaces and surface clusters are estimated directly from the surface data. Models are invoked using a network formulation, in which individual nodes represent potential identities for image structures. The links between nodes are defined by generic and structural relationships, and define indirect evidence relationships for an identity; direct evidence for the identities comes from the data properties.
    A plausibility computation is defined according to the constraints inherent in the evidence types. When a node acquires sufficient plausibility, the model is invoked for the corresponding image structure. Objects are primarily represented using a surface-based geometrical model. Assemblies are formed from subassemblies and surface primitives, which are defined using surface shape and boundaries. Variable affixments between assemblies allow flexibly connected objects. The initial object reference frame is estimated from model-data surface relationships, using correspondences suggested by invocation. With the reference frame, back-facing, tangential, partially self-obscured, totally self-obscured and fully visible image features are deduced. From these, the oriented model is used for finding evidence for missing visible model features. If no evidence is found, the program attempts to find evidence that the features are obscured by an unrelated object. Structured objects are constructed using a hierarchical synthesis process. Fully completed hypotheses are verified using both existence and identity constraints based on surface evidence. Each of these processes is defined by its computational constraints and is demonstrated on two test images. These test scenes are interesting because they contain partially and fully obscured object features, a variety of surface and solid types, and flexibly connected objects. All modeled objects were fully identified, analyzed to the level represented in their models, and acceptably spatially located. Portions of this work have been reported elsewhere ([FIS83], [FIS85a], [FIS85b], [FIS86]) by the author.
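    The invocation idea (nodes accumulate plausibility from direct evidence and from linked nodes until a threshold triggers model invocation) can be sketched as a simple relaxation. The node names, link structure, weight, and update rule below are illustrative assumptions, not the thesis's actual plausibility computation:

    ```python
    def invoke(direct_evidence, links, threshold=0.5, iterations=10, weight=0.3):
        """Relaxation over a toy invocation network: each node's
        plausibility combines its direct evidence with a weighted
        share of its best-supported neighbour; nodes that cross
        `threshold` have their models invoked."""
        plaus = dict(direct_evidence)
        for _ in range(iterations):
            updated = {}
            for node, base in direct_evidence.items():
                support = [plaus[n] for n in links.get(node, []) if n in plaus]
                indirect = max(support, default=0.0)
                updated[node] = min(1.0, base + weight * indirect)
            plaus = updated
        return {n for n, p in plaus.items() if p >= threshold}

    # Hypothetical image-structure identities with direct evidence
    # scores; the two related identities reinforce each other.
    evidence = {"cylinder-1": 0.45, "robot-upper-arm": 0.30, "clutter": 0.10}
    links = {"robot-upper-arm": ["cylinder-1"], "cylinder-1": ["robot-upper-arm"]}
    print(invoke(evidence, links))
    ```

    In this toy run, mutual support lifts the marginal "cylinder-1" node over the threshold while unsupported clutter stays below it, mirroring how indirect evidence relationships let weak direct evidence still trigger invocation.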