Path planning and collision avoidance for robots
An optimal control problem is presented for finding the fastest collision-free trajectory of a robot surrounded by obstacles. Collision avoidance is based on linear programming arguments and expressed as state constraints. The optimal control problem is solved with a sequential programming method. To decrease the number of unknowns and constraints, a backface-culling active-set strategy is added to the resolution technique.
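The linear-programming view of collision avoidance rests on the fact that two convex sets are disjoint exactly when a separating hyperplane exists, and that this condition is linear in the vertices. A minimal sketch of checking such a linear separation constraint (the function name and 2D setup are illustrative, not taken from the paper):

```python
def separates(w, b, P, Q):
    """True if the hyperplane w.x + b = 0 strictly separates the point
    sets P and Q: all of P on the positive side, all of Q on the negative."""
    side = lambda x: sum(wi * xi for wi, xi in zip(w, x)) + b
    return all(side(p) > 0 for p in P) and all(side(q) < 0 for q in Q)

# Two unit squares, one shifted right by 2: the plane x = 1.5 separates them.
P = [(2, 0), (3, 0), (3, 1), (2, 1)]
Q = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(separates((1, 0), -1.5, P, Q))
```

In the paper's setting, such inequalities become state constraints on the robot's configuration along the trajectory.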
Distance Computation between Convex Objects using Axis-Aligned Bounding-Box in Virtual Environment Application
Performing collision detection between convex objects in virtual environment simulation is one of the vital problems in computer visualization research. Given a set of two or more colliding objects, determining the exact point of contact between objects requires computationally expensive algorithms. In this paper, we describe our current work on determining the precise contact by measuring the distance between nearly colliding objects, in order to maintain the accuracy and improve the speed of the collision detection algorithm. Common methods determine the distance by checking vertices and edges between objects in a brute-force manner. In our method, given a set of objects in a virtual environment, we find the closest points between nearly colliding objects and bound the potential colliding area with an axis-aligned bounding box. We then approximate the distance by measuring the distance between the boxes themselves, and hence recognize potential colliding areas faster than the common method. Our method proves effective and efficient for narrow-phase collision detection by removing unnecessary tests and reducing computational cost.
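The key idea, distance between axis-aligned bounding boxes as a cheap conservative bound on object distance, can be sketched in a few lines (function names are illustrative, not from the paper):

```python
import math

def aabb(points):
    """Axis-aligned bounding box of a 3D point set, as (mins, maxs)."""
    mins = tuple(min(p[i] for p in points) for i in range(3))
    maxs = tuple(max(p[i] for p in points) for i in range(3))
    return mins, maxs

def aabb_distance(box_a, box_b):
    """Euclidean distance between two AABBs: a lower bound on the distance
    between the objects they enclose (0 if the boxes overlap)."""
    d2 = 0.0
    for i in range(3):
        # Per-axis gap between the boxes; clamped to 0 when they overlap.
        gap = max(box_a[0][i] - box_b[1][i], box_b[0][i] - box_a[1][i], 0.0)
        d2 += gap * gap
    return math.sqrt(d2)

a = aabb([(0, 0, 0), (1, 1, 1)])
b = aabb([(4, 0, 0), (5, 1, 1)])
print(aabb_distance(a, b))  # gap of 3 along x only
```

Because the box distance never exceeds the true object distance, a large box distance lets the narrow phase skip the exact (and expensive) vertex/edge tests entirely.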
Efficient computation of discrete Voronoi diagram and homotopy-preserving simplified medial axis of a 3d polyhedron
The Voronoi diagram is a fundamental geometric data structure and has been well studied in computational geometry and related areas. A Voronoi diagram defined using the Euclidean distance metric is also closely related to the Blum medial axis, a well known skeletal representation. Voronoi diagrams and medial axes have been shown useful for many 3D computations and operations, including proximity queries, motion planning, mesh generation, finite element analysis, and shape analysis. However, their application to complex 3D polyhedral and deformable models has been limited. This is due to the difficulty of computing exact Voronoi diagrams in an efficient and reliable manner. In this dissertation, we bridge this gap by presenting efficient algorithms to compute discrete Voronoi diagrams and simplified medial axes of 3D polyhedral models with geometric and topological guarantees. We apply these algorithms to complex 3D models and use them to perform interactive proximity queries, motion planning and skeletal computations. We present three new results. First, we describe an algorithm to compute 3D distance fields of geometric models by using a linear factorization of Euclidean distance vectors. This formulation maps directly to the linearly interpolating graphics rasterization hardware and enables us to compute distance fields of complex 3D models at interactive rates. We also use clamping and culling algorithms based on properties of Voronoi diagrams to accelerate this computation. We introduce surface distance maps, which are a compact distance vector field representation based on a mesh parameterization of triangulated two-manifolds, and use them to perform proximity computations. Our second main result is an adaptive sampling algorithm to compute an approximate Voronoi diagram that is homotopy equivalent to the exact Voronoi diagram and preserves topological features. We use this algorithm to compute a homotopy-preserving simplified medial axis of complex 3D models. 
Our third result is a unified approach to performing different proximity queries among multiple deformable models using second-order discrete Voronoi diagrams. We introduce a new query called the N-body distance query and show that different proximity queries, including collision detection, separation distance, and penetration depth, can be performed based on the N-body distance query. We compute the second-order discrete Voronoi diagram using graphics hardware and use distance bounds to overcome sampling errors and perform conservative computations. We have applied these queries to various deformable simulations and observed up to an order of magnitude improvement over prior algorithms.
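A discrete Voronoi diagram simply records, for every grid cell, the nearest site and its distance. The dissertation accelerates this with rasterization hardware; a brute-force CPU sketch of the same object (names and 2D setup are illustrative) is:

```python
import math

def discrete_voronoi(sites, nx, ny):
    """Brute-force discrete Voronoi diagram on an nx-by-ny grid:
    for each cell, store the index of the nearest site and the distance
    to it (the distance field)."""
    owner = [[0] * ny for _ in range(nx)]
    dist = [[0.0] * ny for _ in range(nx)]
    for i in range(nx):
        for j in range(ny):
            k, d = min(
                ((idx, math.hypot(i - sx, j - sy))
                 for idx, (sx, sy) in enumerate(sites)),
                key=lambda t: t[1])
            owner[i][j], dist[i][j] = k, d
    return owner, dist

owner, dist = discrete_voronoi([(0, 0), (9, 9)], 10, 10)
print(owner[0][0], owner[9][9])  # cells claimed by their nearest site
```

This costs O(cells x sites); the GPU formulation in the dissertation amortizes it by rasterizing one distance function per site.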
Efficient motion planning using generalized penetration depth computation
Motion planning is a fundamental problem in robotics and also arises in other applications, including virtual prototyping, navigation, animation and computational structural biology. It has been extensively studied for more than three decades, though most practical algorithms are based on randomized sampling. In this dissertation, we address two main issues that arise with respect to these algorithms: (1) there are no good practical approaches to check for path non-existence, even for low degree-of-freedom (DOF) robots; (2) the performance of sampling-based planners can degrade if the free space of a robot has narrow passages. In order to develop effective algorithms to deal with these problems, we use the concept of penetration depth (PD) computation. By quantifying the extent of the intersection between overlapping models (e.g. a robot and an obstacle), PD can provide a distance measure for the configuration space obstacle (C-obstacle). We extend the prior notion of translational PD to generalized PD, which takes into account translational as well as rotational motion to separate two overlapping models. Moreover, we formulate generalized PD computation based on appropriate model-dependent metrics and present two algorithms based on convex decomposition and local optimization. We highlight the efficiency and robustness of our PD algorithms on many complex 3D models. Based on generalized PD computation, we present the first set of practical algorithms for low-DOF complete motion planning. Moreover, we use generalized PD computation to develop a retraction-based planner that effectively generates samples in narrow passages for rigid robots. The effectiveness of the resulting planner is shown on the alpha-puzzle benchmark and on part-disassembly benchmarks in virtual prototyping.
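The classical translational penetration depth that the dissertation generalizes is easiest to see for axis-aligned boxes: the minimal translation separating two overlapping boxes is the smallest per-axis overlap. A minimal sketch (boxes only; the thesis handles general models and adds rotation):

```python
def penetration_depth(box_a, box_b):
    """Translational penetration depth of two overlapping AABBs,
    each given as (mins, maxs): the smallest per-axis overlap, i.e.
    the minimal translation distance that separates the boxes.
    Returns 0.0 if the boxes are already disjoint."""
    overlaps = []
    for i in range(len(box_a[0])):
        o = min(box_a[1][i], box_b[1][i]) - max(box_a[0][i], box_b[0][i])
        if o <= 0:
            return 0.0  # disjoint along this axis: no penetration at all
        overlaps.append(o)
    return min(overlaps)

print(penetration_depth(((0, 0), (2, 2)), ((1, 1), (3, 3))))  # overlap of 1
```

For non-convex models no such closed form exists, which is why the dissertation resorts to convex decomposition and local optimization.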
Articular human joint modelling
Copyright © Cambridge University Press 2009. The work reported in this paper encapsulates the theories and algorithms developed to drive the core analysis modules of software developed to model the musculoskeletal structure of anatomic joints. Newly developed algorithms, based on joint kinematics driven by local bone-surface and contact geometry, set the proposed modeller apart from currently available modellers. There are many modellers capable of modelling gross human body motion; nevertheless, none of the available modellers offers complete elements of joint modelling. Joint modelling appears to be only an extension of their core analysis capability, which in every case is musculoskeletal motion dynamics. An analysis framework focused on human joints would therefore have significant benefit and potential for use in many orthopaedic applications. The local mobility of joints has a significant influence in human motion analysis, in the understanding of joint loading, tissue behaviour and contact forces. However, developing a bone-surface-based joint modeller raises a number of major problems, from tissue idealization to surface-geometry discretization and non-linear motion analysis. This paper presents the following: (a) The physical deformation of biological tissues as linear or non-linear viscoelastic deformation, based on spring-dashpot elements. (b) Linear dynamic multibody modelling, where the linear formulation is established for small motions and is particularly useful for calculating the equilibrium position of the joint. This model can also be used for finding small-motion behaviour or loading under static conditions, and has the potential to quantify joint laxity. (c) Non-linear dynamic multibody modelling, where a non-matrix, algorithmic formulation is presented. The approach easily handles complex material and geometrical nonlinearity.
(d) Shortest-path algorithms for calculating soft-tissue line-of-action geometries. The developed algorithms are based on calculating minimum ‘surface mass’ and ‘surface covariance’. An improved version of the ‘surface covariance’ algorithm is described as ‘residual covariance’. The resulting path is used to establish the direction of forces and moments acting on joints; this information is needed for the linear or non-linear treatment of the joint motion. (e) The final contribution of the paper is the treatment of collisions. In the virtual world, the difficulty in analysing bodies in motion arises from body interpenetrations. The collision algorithm proposed in the paper involves finding the shortest projected ray from one body to the other. The projection of the body is determined by the resultant forces acting on it due to soft-tissue connections under tension. This enables the collision condition of non-convex objects to be calculated accurately. After the initial collision detection, the analysis involves attaching special springs (with stiffness only normal to the surfaces) at the ‘potentially colliding points’, and the motion of the bodies is recalculated. The collision algorithm incorporates rotation as well as translation and continues until joint equilibrium is achieved. Finally, the results obtained with the software are compared with experimental results obtained using cadaveric joints.
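A spring with stiffness only normal to the surface, as in step (e), produces a penalty force proportional to the penetration depth along the contact normal and vanishes as soon as the bodies separate. A minimal sketch under those assumptions (the function name and the constants are illustrative, not from the paper):

```python
def penalty_force(stiffness, depth, normal):
    """Force of a penalty spring acting only along the unit surface
    normal; zero when the bodies do not interpenetrate (depth <= 0),
    so the spring cannot pull the bodies together."""
    if depth <= 0.0:
        return tuple(0.0 for _ in normal)
    return tuple(stiffness * depth * n for n in normal)

# 1 cm penetration against a spring of stiffness 100 N/m, normal +z.
print(penalty_force(100.0, 0.01, (0.0, 0.0, 1.0)))
```

Recomputing body motion under these forces and repeating, as the paper describes, drives the joint toward an equilibrium in which the springs carry the contact load.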
Point based graphics rendering with unified scalability solutions.
Standard real-time 3D graphics rendering algorithms use brute force polygon rendering, with complexity linear in the number of polygons and little regard for limiting processing to data that contributes to the image. Modern hardware can now render smaller scenes to pixel levels of detail, relaxing surface connectivity requirements. Sub-linear scalability optimizations are typically self-contained, requiring specific data structures, without shared functions and data. A new point based rendering algorithm 'Canopy' is investigated that combines multiple typically sub-linear scalability solutions, using a small core of data structures. Specifically, locale management, hierarchical view volume culling, backface culling, occlusion culling, level of detail and depth ordering are addressed. To demonstrate versatility further, shadows and collision detection are examined. Polygon models are voxelized with interpolated attributes to provide points. A scene tree is constructed, based on a BSP tree of points, with compressed attributes. The scene tree is embedded in a compressed, partitioned, procedurally based scene graph architecture that mimics conventional systems with groups, instancing, inlines and basic read on demand rendering from backing store. Hierarchical scene tree refinement constructs an image tree image space equivalent, with object space scene node points projected, forming image node equivalents. An image graph of image nodes is maintained, describing image and object space occlusion relationships, hierarchically refined with front to back ordering to a specified threshold whilst occlusion culling with occluder fusion. Visible nodes at medium levels of detail are refined further to rasterization scales. Occlusion culling defines a set of visible nodes that can support caching for temporal coherence. Occlusion culling is approximate, possibly not suiting critical applications. Qualities and performance are tested against standard rendering. 
Although the algorithm has an O(f) upper bound in the scene size f, it is shown in practice to scale sub-linearly. Scenes that would conventionally contain several hundred billion polygons are rendered at interactive frame rates with minimal graphics-hardware support.
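Of the culling stages listed above, backface culling is the simplest to sketch for point primitives: a point is discarded when its normal faces away from the eye. A minimal illustration (names are hypothetical, not from the thesis):

```python
def backface_cull(points, eye):
    """Keep only points whose normal faces the eye position: a point is
    culled when its normal and the point-to-eye direction make an angle
    of 90 degrees or more (dot product <= 0)."""
    visible = []
    for pos, normal in points:
        to_eye = tuple(e - p for e, p in zip(eye, pos))
        if sum(n * v for n, v in zip(normal, to_eye)) > 0.0:
            visible.append((pos, normal))
    return visible

pts = [((0, 0, 0), (0, 0, 1)),   # faces an eye on +z
       ((0, 0, 0), (0, 0, -1))]  # faces away
print(len(backface_cull(pts, (0, 0, 5))))
```

In a hierarchical scheme like the one described, the same test is applied conservatively to normal cones of whole tree nodes rather than per point.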
A Broad Phase Collision Detection Algorithm Adapted to Multi-cores Architectures
Recent years have seen the impressive evolution of graphics hardware and processor architectures from single-core to multi- and many-core architectures. Confronted with this evolution, new trends in collision detection optimisation consist in proposing solutions that map onto the runtime architecture. We present, in this paper, two contributions in the field of collision detection in large-scale environments. We first present a way to parallelise, on a multi-core architecture, the initial step of the collision detection pipeline: the broad phase. We then describe a new formalism of the collision detection pipeline that takes the runtime architecture into account. The well-known broad-phase algorithm used is "Sweep and Prune", adapted here to multi-threaded use. To handle one or more threads per core, critical writing sections and thread idling must be minimised. Our model is able to work on an n-core architecture, reducing the computation time to detect collisions between 3D objects in a large-scale environment.
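The sequential core of Sweep and Prune is short: sort the projected interval endpoints along one axis, then sweep while maintaining an active set, reporting a candidate pair whenever an interval opens while others are still active. A single-axis, single-threaded sketch (the paper's contribution is the multi-threaded adaptation, not shown here):

```python
def sweep_and_prune(boxes):
    """boxes: list of (min_x, max_x) interval projections of AABBs on
    one axis. Returns candidate pairs whose intervals overlap on x."""
    events = []
    for idx, (lo, hi) in enumerate(boxes):
        events.append((lo, 0, idx))  # 0 = interval begins
        events.append((hi, 1, idx))  # 1 = interval ends
    events.sort()  # begins sort before ends at equal coordinates
    active, pairs = set(), []
    for _, kind, idx in events:
        if kind == 0:
            # Every currently open interval overlaps the new one.
            for other in active:
                pairs.append(tuple(sorted((idx, other))))
            active.add(idx)
        else:
            active.remove(idx)
    return pairs

print(sweep_and_prune([(0, 2), (1, 3), (5, 6)]))  # only boxes 0 and 1 overlap
```

Running the test on all three axes and intersecting the results yields the broad-phase candidate set that the narrow phase then examines exactly.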
New geometric algorithms and data structures for collision detection of dynamically deforming objects
Any virtual environment that supports interactions between virtual objects, and/or between a user and objects, needs a collision detection system to handle all interactions in a physically correct or plausible way. A collision detection system determines whether objects are in contact or interpenetrate; these interpenetrations are then resolved by a collision handling system. Because objects can interact with each other in nearly all simulations, collision detection is a fundamental technology needed across all of them: physically based simulation, robotic path and motion planning, virtual prototyping, and many more. Most virtual environments aim to represent the real world as realistically as possible, and therefore virtual environments are becoming more and more complex. Furthermore, all models in a virtual environment should interact like real objects do when forces are applied to them. Nearly all real-world objects deform, or break into their individual parts, when forces act upon them. Thus deformable objects are becoming more and more common in virtual environments that aim to be as realistic as possible, and they present new challenges to the collision detection system. The necessary collision detection computations can be very complex, with the effect that collision detection is the performance bottleneck in most simulations.
Most rigid-body collision detection approaches use a bounding volume hierarchy (BVH) as an acceleration data structure. This technique is perfectly suitable as long as the object does not change its shape. For a soft body, an update step is necessary to ensure that the underlying acceleration data structure is still valid after a simulation step. This update step can be very time consuming, is often hard to implement, and in most cases will produce a degenerate BVH after some simulation steps if the objects deform strongly. Therefore, the collision detection approach presented here works entirely without an acceleration data structure and supports both rigid and soft bodies. Furthermore, we can compute inter-object and intra-object collisions of rigid and deformable objects consisting of many tens of thousands of triangles in a few milliseconds. To realize this, the scene is subdivided into parts using a fuzzy clustering approach. Based on that, all further steps for each cluster can be performed in parallel and, if desired, distributed to different GPUs. Tests have been performed to judge the performance of our approach against other state-of-the-art collision detection algorithms. Additionally, we integrated our approach into Bullet, a commonly used physics engine, to evaluate our algorithm.
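The divide-and-parallelise idea, partition the scene, then test candidate pairs only within each partition, can be sketched with a uniform grid as a crude stand-in for the fuzzy clustering used in the thesis (names, the grid scheme, and the cell size are illustrative assumptions):

```python
from collections import defaultdict

def clustered_pairs(points, cell=1.0):
    """Assign points to uniform grid cells (a simple stand-in for the
    thesis's fuzzy clustering) and emit candidate pairs only within each
    cell. Cells are independent, so each cluster's pair tests could run
    in parallel or be distributed to different GPUs."""
    cells = defaultdict(list)
    for idx, (x, y, z) in enumerate(points):
        cells[(int(x // cell), int(y // cell), int(z // cell))].append(idx)
    pairs = []
    for members in cells.values():
        for i in range(len(members)):
            for j in range(i + 1, len(members)):
                pairs.append((members[i], members[j]))
    return pairs

# Two nearby points share a cell; the distant one generates no pair.
print(clustered_pairs([(0.1, 0.1, 0.1), (0.2, 0.2, 0.2), (5.0, 5.0, 5.0)]))
```

Unlike a BVH, this partition is rebuilt from scratch each step, so deformation never invalidates it, which is the property the thesis exploits.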
In order to make a fair comparison of different rigid-body collision detection algorithms, we propose a new collision detection Benchmarking Suite. Our Benchmarking Suite can evaluate both the performance and the quality of the collision response; it is therefore subdivided into a Performance Benchmark and a Quality Benchmark. This approach needs to be extended to
support soft-body collision detection algorithms in the future.