
    New Geometric Data Structures for Collision Detection

    We present new geometric data structures for collision detection and more, including: Inner Sphere Trees, the first data structure to compute the penetration volume efficiently; Protosphere, a new algorithm to compute space-filling sphere packings for arbitrary objects; Kinetic AABBs, a bounding volume hierarchy that is optimal in the number of updates when the objects deform; and the Kinetic Separation-List, an algorithm that performs continuous collision detection for complex deformable objects in real time. Moreover, we present applications of these new approaches to hand animation, real-time collision avoidance for robots in dynamic environments, and haptic rendering, including a user study that investigates the influence of the degrees of freedom in complex haptic interactions. Last but not least, we present a new benchmarking suite for both performance and quality benchmarks, and a theoretical analysis of the running time of bounding-volume-based collision detection algorithms.
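    As context for the bounding-volume results above, the following is a minimal sketch of the traversal pattern that structures such as (kinetic) AABB trees accelerate: an axis-aligned-box overlap test and a recursive dual-tree traversal that collects candidate leaf pairs. The class and field names are illustrative; this is not the Inner Sphere Tree or the kinetic update scheme described in the abstract.

```python
# Minimal AABB BVH overlap traversal (illustrative sketch only).

class AABB:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi          # (x, y, z) min and max corners

    def overlaps(self, other):
        # Boxes overlap iff they overlap on every axis.
        return all(a_lo <= b_hi and b_lo <= a_hi
                   for a_lo, a_hi, b_lo, b_hi in zip(self.lo, self.hi, other.lo, other.hi))

class Node:
    def __init__(self, box, left=None, right=None, tri=None):
        self.box, self.left, self.right, self.tri = box, left, right, tri

    @property
    def is_leaf(self):
        return self.tri is not None

def overlapping_pairs(a, b, out):
    """Recursively collect leaf pairs whose boxes overlap (candidate triangle pairs)."""
    if not a.box.overlaps(b.box):
        return
    if a.is_leaf and b.is_leaf:
        out.append((a.tri, b.tri))         # exact triangle test would follow here
    elif a.is_leaf:
        overlapping_pairs(a, b.left, out)
        overlapping_pairs(a, b.right, out)
    else:
        overlapping_pairs(a.left, b, out)
        overlapping_pairs(a.right, b, out)
```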

    A Conceptual Framework to Support Natural Interaction for Virtual Assembly Tasks

    Over the years, various approaches have been investigated to support natural human interaction with CAD models in an immersive virtual environment. The motivation for this avenue of research stems from the desire to provide a method where users can manipulate and assemble digital product models as if they were manipulating actual models. The ultimate goal is to produce an immersive environment where design and manufacturing decisions which involve human interaction can be made using only digital CAD models, thus avoiding the need to create costly pre-production physical prototypes. This paper presents a framework for approaching the development of virtual assembly applications. The framework is based on a Two Phase model in which the assembly task is divided into a free movement phase and a fine positioning phase. Each phase can be implemented using independent techniques; however, the algorithms needed to interface between the two techniques are critical to the success of the method. The paper presents a summary of three virtual assembly techniques and places them within the framework of the Two Phase model. Finally, the conclusions call for the continued development of a testbed to compare virtual assembly methods.
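    A minimal sketch of the Two Phase idea described above: the part moves freely (following the user's hand or a collision-constrained simulation) until it comes within a snap tolerance of its mating pose, at which point a fine-positioning step takes over and locks it to the constraint. The thresholds, pose representation, and function names below are illustrative assumptions, not the paper's algorithms.

```python
import math

SNAP_DISTANCE = 0.01               # metres; illustrative tolerance for switching phases
SNAP_ANGLE = math.radians(5.0)     # illustrative angular tolerance

def assembly_step(part_pose, target_pose, hand_pose):
    """One update of a two-phase assembly loop.

    Poses are simplified to (position_xyz, angle) tuples; hand_pose drives free motion.
    Returns the new part pose and the name of the active phase.
    """
    (p, a), (tp, ta) = part_pose, target_pose
    dist = math.dist(p, tp)
    ang = abs(a - ta)

    if dist < SNAP_DISTANCE and ang < SNAP_ANGLE:
        # Fine-positioning phase: constrain the part to its mating pose.
        return (tp, ta), "fine_positioning"

    # Free-movement phase: the part simply follows the tracked hand.
    hp, ha = hand_pose
    return (hp, ha), "free_movement"
```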

    Simulating molecular docking with haptics

    Intermolecular binding underlies various metabolic and regulatory processes of the cell, and the therapeutic and pharmacological properties of drugs. Molecular docking systems model and simulate these interactions in silico and allow the study of the binding process. In molecular docking, haptics enables the user to sense the interaction forces and intervene cognitively in the docking process. Haptics-assisted docking systems provide an immersive virtual docking environment where the user can interact with the molecules, feel the interaction forces using their sense of touch, identify the binding site visually, and guide the molecules to their binding pose. Despite a forty-year research effort, however, the docking community has been slow to adopt this technology. Proprietary, unreleased software, expensive haptic hardware, and limits on processing power are the main reasons for this. Another significant factor is the size of the molecules simulated, which has been limited to small molecules. The focus of the research described in this thesis is the development of an interactive haptics-assisted docking application that addresses the above issues and enables the rigid docking of very large biomolecules and the study of the underlying interactions. Novel methods for computing the interaction forces of binding on the CPU and GPU, in real time, have been developed. The force calculation methods proposed here overcome several computational limitations of previous approaches, such as precomputed force grids, and could potentially be used to model molecular flexibility at haptic refresh rates. Methods for force scaling, multipoint collision response, and haptic navigation are also reported that address newfound issues particular to the interactive docking of large systems, e.g. force stability at molecular collision. The result is a haptics-assisted docking application, Haptimol RD, that runs on relatively inexpensive consumer-level hardware (i.e. there is no need for specialized/proprietary hardware).
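    To make the force-feedback loop concrete, here is a heavily simplified sketch of the per-step force computation for rigid docking: a brute-force pairwise Lennard-Jones plus Coulomb sum over ligand and receptor atoms, scaled into a device-safe range. It is a CPU illustration, not Haptimol RD's grid-free GPU method, and all parameter values are placeholders.

```python
import numpy as np

def interaction_force(ligand_xyz, ligand_q, receptor_xyz, receptor_q,
                      epsilon=0.2, sigma=3.4, coulomb_k=332.0):
    """Net force on the ligand from all receptor atoms.

    Brute-force O(N*M) Lennard-Jones + Coulomb sum; parameters are placeholders.
    ligand_xyz, receptor_xyz: (N,3) and (M,3) coordinate arrays; *_q: per-atom charges.
    """
    force = np.zeros(3)
    for p, q in zip(ligand_xyz, ligand_q):
        d = p - receptor_xyz                     # vectors from receptor atoms to this atom
        r = np.linalg.norm(d, axis=1) + 1e-9     # distances; avoid division by zero
        lj = 24 * epsilon * (2 * (sigma / r) ** 12 - (sigma / r) ** 6) / r
        coul = coulomb_k * q * receptor_q / r ** 2
        force += ((lj + coul)[:, None] * d / r[:, None]).sum(axis=0)
    return force

def to_device_force(force, max_device_newtons=3.0):
    """Scale/clamp the molecular force into the haptic device's output range."""
    norm = np.linalg.norm(force)
    if norm > max_device_newtons:
        force = force * (max_device_newtons / norm)
    return force
```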

    Risk-aware Path and Motion Planning for a Tethered Aerial Visual Assistant in Unstructured or Confined Environments

    This research aims at developing path and motion planning algorithms for a tethered Unmanned Aerial Vehicle (UAV) to visually assist a teleoperated primary robot in unstructured or confined environments. The emerging state of the practice for nuclear operations, bomb squads, disaster robots, and other domains with novel tasks or highly occluded environments is to use two robots: a primary and a secondary that acts as a visual assistant to overcome the perceptual limitations of the sensors by providing an external viewpoint. However, the benefits of using an assistant have been limited for at least three reasons: (1) users tend to choose suboptimal viewpoints, (2) only ground robot assistants have been considered, ignoring the rapid evolution of small unmanned aerial systems for indoor flying, and (3) introducing a whole crew for the second teleoperated robot is not cost-effective, may introduce further teamwork demands, and therefore could lead to miscommunication. This dissertation proposes to use an autonomous tethered aerial visual assistant to replace the secondary robot and its operating crew. Along with a pre-established theory of viewpoint quality based on affordances, this dissertation aims at defining and representing robot motion risk in unstructured or confined environments. Based on those theories, a novel high-level path planning algorithm is developed to enable risk-aware planning, which balances the tradeoff between viewpoint quality and motion risk in order to provide safe and trustworthy visual assistance flight. The planned flight trajectory is then realized on a tethered UAV platform. The perception and actuation are tailored to fit the tethered agent in the form of a low-level motion suite, including a novel tether-based localization model with negligible computational overhead, motion primitives for the tethered airframe based on position and velocity control, and two different approaches to negotiating the tether in complex, obstacle-occupied environments. The proposed research provides a formal reasoning of motion risk in unstructured or confined spaces, contributes to the field of risk-aware planning with a versatile planner, and opens up a new regime of indoor UAV navigation: tethered indoor flight to ensure battery duration and failsafe behavior in case of vehicle malfunction. It is expected to increase teleoperation productivity and reduce costly errors in scenarios such as safe decommissioning and nuclear operations in the Fukushima Daiichi facility.
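    The core trade-off described above, viewpoint quality versus motion risk, can be illustrated with a small graph search whose edge cost blends the two terms. The weighting scheme, the graph representation, and the per-node quality/risk scores below are illustrative assumptions; the dissertation's actual risk representation is richer than a single scalar per edge.

```python
import heapq

def plan_risk_aware(graph, start, goal, quality, risk, alpha=0.5):
    """Dijkstra search minimising alpha*(1 - viewpoint quality) + (1 - alpha)*risk.

    graph: dict node -> iterable of (neighbour, edge_length)
    quality, risk: dict node -> value in [0, 1]
    Returns (path, cost) or (None, inf) if the goal is unreachable.
    """
    frontier = [(0.0, start, [start])]
    best = {start: 0.0}
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, cost
        if cost > best.get(node, float("inf")):
            continue                       # stale queue entry
        for nbr, length in graph.get(node, ()):
            # Blend (lack of) viewpoint quality with motion risk along this edge.
            step = length * (alpha * (1.0 - quality[nbr]) + (1.0 - alpha) * risk[nbr])
            new_cost = cost + step
            if new_cost < best.get(nbr, float("inf")):
                best[nbr] = new_cost
                heapq.heappush(frontier, (new_cost, nbr, path + [nbr]))
    return None, float("inf")
```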

    New geometric algorithms and data structures for collision detection of dynamically deforming objects

    Any virtual environment that supports interactions between virtual objects and/or between a user and objects needs a collision detection system to handle all interactions in a physically correct or plausible way. The collision detection system determines whether objects are in contact or interpenetrate; these interpenetrations are then resolved by a collision handling system. Because objects can interact with each other in nearly all simulations, collision detection is a fundamental technology needed in physically based simulation, robotic path and motion planning, virtual prototyping, and many more areas. Most virtual environments aim to represent the real world as realistically as possible and are therefore becoming more and more complex. Furthermore, all models in a virtual environment should react to applied forces as real objects do. Nearly all real-world objects deform or break into their individual parts when forces act upon them. Thus, deformable objects are becoming more and more common in virtual environments that aim to be as realistic as possible, which presents new challenges to the collision detection system. The necessary collision detection computations can be very complex, with the effect that collision detection is the performance bottleneck in most simulations. Most rigid body collision detection approaches use a bounding volume hierarchy (BVH) as acceleration data structure. This technique is perfectly suitable as long as the object does not change its shape. For a soft body, an update step is necessary to ensure that the underlying acceleration data structure is still valid after each simulation step. This update step can be very time-consuming, is often hard to implement, and in most cases will produce a degenerate BVH after some simulation steps if the objects deform strongly. Therefore, the collision detection approach presented here works entirely without an acceleration data structure and supports rigid and soft bodies. Furthermore, we can compute inter-object and intra-object collisions of rigid and deformable objects consisting of many tens of thousands of triangles in a few milliseconds. To realize this, the scene is subdivided into parts using a fuzzy clustering approach. Based on this subdivision, all further steps for each cluster can be performed in parallel and, if desired, distributed to different GPUs. Tests have been performed to judge the performance of our approach against other state-of-the-art collision detection algorithms. Additionally, we integrated our approach into Bullet, a commonly used physics engine, to evaluate our algorithm. In order to make a fair comparison of different rigid body collision detection algorithms, we propose a new collision detection Benchmarking Suite. Our Benchmarking Suite can evaluate both the performance and the quality of the collision response, and is therefore subdivided into a Performance Benchmark and a Quality Benchmark. In the future, this approach needs to be extended to support soft body collision detection algorithms.
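    As a rough illustration of the clustering step mentioned above, the sketch below assigns triangle centroids to clusters with standard fuzzy c-means; each cluster's triangles could then be tested for collisions independently (and thus in parallel or on separate GPUs). This is textbook fuzzy c-means, not necessarily the exact variant used in the thesis, and the parameter values are illustrative.

```python
import numpy as np

def fuzzy_c_means(points, n_clusters=4, m=2.0, iters=50, seed=0):
    """Standard fuzzy c-means on triangle centroids.

    Returns (centers, memberships); memberships[i, j] is how strongly point i
    belongs to cluster j. Hard-assigning by argmax yields per-cluster work units.
    """
    rng = np.random.default_rng(seed)
    u = rng.random((len(points), n_clusters))
    u /= u.sum(axis=1, keepdims=True)                      # memberships sum to 1 per point
    for _ in range(iters):
        w = u ** m
        centers = (w.T @ points) / w.sum(axis=0)[:, None]  # weighted cluster centers
        dist = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2) + 1e-9
        u = 1.0 / dist ** (2.0 / (m - 1.0))                # closer centers get higher membership
        u /= u.sum(axis=1, keepdims=True)
    return centers, u

# Usage sketch: cluster_id = fuzzy_c_means(tri_centroids)[1].argmax(axis=1)
```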

    ALTERNATIVE AND FLEXIBLE CONTROL METHODS FOR ROBOTIC MANIPULATORS: On the challenge of developing a flexible control architecture that allows for controlling different manipulators

    Robotic arms and cranes show some similarities in the way they operate and in the way they are designed. Both have a number of links serially attached to each other by means of joints that can be moved by some type of actuator. In both systems, the end-effector of the manipulator can be moved in space and be placed in any desired location within the system's workspace, and can carry a certain amount of load. However, traditional cranes are usually relatively big, stiff and heavy because they normally need to move heavy loads at low speeds, while industrial robots are ordinarily smaller and usually move small masses at relatively higher velocities. This is the reason why cranes are commonly actuated by hydraulic valves, while robotic arms are driven by servo motors, pneumatic or servo-pneumatic actuators. Most importantly, the fundamental difference between the two kinds of systems is that cranes are usually controlled by a human operator, joint by joint, using simple joysticks where each axis operates only one specific actuator, while robotic arms are commonly controlled by a central controller that controls and coordinates the actuators according to some specific algorithm. In other words, the controller of a crane is usually a human while the controller of a robotic arm is normally a computer program that is able to determine the joint values that provide a desired position or velocity for the end-effector. Maritime cranes in particular, compared with robotic arms, rely on a much more complex model of the environment with which they interact. These kinds of cranes are widely used to handle and transfer objects from large container ships to smaller lighters or to the quays of the harbours. Therefore, their control is always a challenging task, which involves many problems such as load sway, positioning accuracy, wave motion compensation and collision avoidance. Some of the similarities between robotic arms and cranes can also be extended to robotic hands. Indeed, from a kinematic point of view, a robotic hand consists of one or more kinematic chains fixed on a base. However, robotic hands usually present a higher number of degrees of freedom (DOFs) and consequently higher dexterity compared to robotic arms. Nevertheless, several commonalities can be found from a design and control point of view. In particular, modular robotic hands are studied in this thesis from a design and control point of view. Emphasising these similarities, the general term robotic manipulator is used throughout to refer to robotic arms, cranes and hands. In this work, efficient design methods for robotic manipulators are initially investigated. Subsequently, the possibility of developing a flexible control architecture that allows for controlling different manipulators by using a universal input device is outlined. The main challenge of doing this consists of finding a flexible way to map the normally fixed DOFs of the input controller to the variable DOFs of the specific manipulator to be controlled. This process has to be realised regardless of the differences in size, kinematic structure, body morphology, constraints and affordances. Different alternative control algorithms are investigated, including effective approaches that do not assume a priori knowledge of the inverse kinematics (IK) models. These algorithms derive the kinematic properties from biologically-inspired approaches, machine learning procedures or optimisation methods. In this way, the system is able to automatically learn the kinematic properties of different manipulators. Finally, a methodology for performing experimental activities in the area of maritime cranes and robotic arm control is outlined. By combining the rapid-prototyping approach with the concept of interchangeable interfaces, a simulation and benchmarking framework for advanced control methods of maritime cranes and robotic arms is presented. From a control point of view, the advantages of releasing such a flexible control system lie in the possibility of controlling different manipulators by using the same framework and in the opportunity of testing different control approaches. Moreover, from a design point of view, rapid-prototyping methods can be applied to quickly develop new manipulators and to analyse different properties before building a physical prototype.
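    One way to read the "no a priori IK model" idea above is that only a forward-kinematics function (or a learned approximation of it) needs to be available for each manipulator; joint values can then be obtained by numerical optimisation. The sketch below does this with a finite-difference Jacobian and damped least-squares updates; fk(), the gains, and the convergence tolerance are illustrative assumptions, not the thesis's learning-based methods.

```python
import numpy as np

def solve_joints(fk, q0, target_xyz, iters=100, step=1e-4, damping=1e-2):
    """Drive the end-effector position to a target using only a black-box fk(q) -> xyz.

    fk: forward kinematics of any manipulator; q0: initial joint vector.
    Uses a finite-difference Jacobian and damped least-squares (Levenberg-Marquardt style) updates.
    """
    q = np.asarray(q0, dtype=float).copy()
    for _ in range(iters):
        err = np.asarray(target_xyz, dtype=float) - fk(q)
        if np.linalg.norm(err) < 1e-4:
            break                                   # close enough to the target
        # Estimate the 3 x n Jacobian column by column via finite differences.
        J = np.empty((3, len(q)))
        for i in range(len(q)):
            dq = np.zeros_like(q)
            dq[i] = step
            J[:, i] = (fk(q + dq) - fk(q)) / step
        # Damped least-squares joint update toward the target.
        q += J.T @ np.linalg.solve(J @ J.T + damping * np.eye(3), err)
    return q
```

    Because nothing in the solver depends on the specific kinematic structure, the same loop could in principle be reused across arms, cranes and hands by swapping in the appropriate fk().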

    Architecture and Information Requirements to Assess and Predict Flight Safety Risks During Highly Autonomous Urban Flight Operations

    As aviation adopts new and increasingly complex operational paradigms, vehicle types, and technologies to broaden airspace capability and efficiency, maintaining a safe system will require recognition and timely mitigation of new safety issues as they emerge and before significant consequences occur. A shift toward a more predictive risk mitigation capability becomes critical to meet this challenge. In-time safety assurance comprises monitoring, assessment, and mitigation functions that proactively reduce risk in complex operational environments where the interplay of hazards may not be known (and therefore not accounted for) during design. These functions can also help to understand and predict emergent effects caused by the increased use of automation or autonomous functions that may exhibit unexpected non-deterministic behaviors. The envisioned monitoring and assessment functions can look for precursors, anomalies, and trends (PATs) by applying model-based and data-driven methods. Outputs would then drive downstream mitigation(s) if needed to reduce risk. These mitigations may be accomplished using traditional design revision processes or via operational (and sometimes automated) mechanisms. The latter refers to the in-time aspect of the system concept. This report comprises architecture and information requirements and considerations toward enabling such a capability within the domain of low-altitude, highly autonomous urban flight operations. This domain may span, for example, public-use surveillance missions flown by small unmanned aircraft (e.g., infrastructure inspection, facility management, emergency response, law enforcement, and/or security) to transportation missions flown by larger aircraft that may carry passengers or deliver products. Caveat: Any stated requirements in this report should be considered initial requirements that are intended to drive research and development (R&D). These initial requirements are likely to evolve based on R&D findings, refinement of operational concepts, industry advances, and new industry or regulatory policies or standards related to safety assurance.
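    As a toy illustration of the monitoring-assessment-mitigation loop described above, the sketch below flags precursors/anomalies in a single telemetry stream with a rolling z-score and calls a mitigation hook when the assessed risk crosses a threshold. The signal, thresholds, risk score, and mitigation action are illustrative assumptions, not requirements from the report.

```python
from collections import deque
from statistics import mean, stdev

class InTimeSafetyMonitor:
    """Toy in-time monitor: detect anomalies, assess a risk score, trigger mitigation."""

    def __init__(self, window=50, z_threshold=3.0, risk_threshold=0.8, mitigate=None):
        self.history = deque(maxlen=window)        # rolling window of recent samples
        self.z_threshold = z_threshold
        self.risk_threshold = risk_threshold
        self.mitigate = mitigate or (lambda risk: print(f"MITIGATE: risk={risk:.2f}"))

    def update(self, value):
        """Ingest one telemetry sample; return (is_anomaly, assessed_risk)."""
        anomaly, risk = False, 0.0
        if len(self.history) >= 10 and stdev(self.history) > 0:
            z = abs(value - mean(self.history)) / stdev(self.history)
            anomaly = z > self.z_threshold
            risk = min(1.0, z / (2 * self.z_threshold))   # crude risk score in [0, 1]
            if risk >= self.risk_threshold:
                self.mitigate(risk)                        # operational (in-time) mitigation hook
        self.history.append(value)
        return anomaly, risk
```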

    Autonomous RPOD Technology Challenges for the Coming Decade

    Rendezvous Proximity Operations and Docking (RPOD) technologies are important to a wide range of future space endeavors. This paper will review some of the recent and ongoing activities related to autonomous RPOD capabilities and summarize the current state of the art. Gaps are identified where future investments are necessary to successfully execute some of the missions likely to be conducted within the next ten years. A proposed RPOD technology roadmap that meets the broad needs of NASA's future missions will be outlined, and ongoing activities at GSFC in support of a future satellite servicing mission are presented. The case presented shows that an evolutionary, stair-step technology development program, including a robust campaign of coordinated ground tests and space-based system-level technology demonstration missions, will ultimately yield a multi-use mainstream autonomous RPOD capability suite with cross-cutting benefits across a wide range of future applications.