
    Haptic interaction with deformable objects using real-time dynamic simulation

    Thesis (M.S.), Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 1995. Includes bibliographical references (p. 81-83). By Nitish Swarup.

    Annals of Scientific Society for Assembly, Handling and Industrial Robotics

    These Open Access proceedings present an overview of the current research landscape of industrial robotics. The objective of the MHI Colloquium is successful networking at the academic and management levels. The colloquium focuses on high-level academic exchange: distributing the obtained research results, identifying synergies and trends, connecting the actors personally, and thereby strengthening both the research field and the MHI community. It also offers the opportunity to become acquainted with the organizing institute. The primary audience is the membership of the scientific association for assembly, handling and industrial robotics (WG MHI).

    PHYSICS-AWARE MODEL SIMPLIFICATION FOR INTERACTIVE VIRTUAL ENVIRONMENTS

    Rigid body simulation is an integral part of Virtual Environments (VE) for autonomous planning, training, and design tasks. The underlying physics-based simulation of a VE must be both accurate and computationally fast enough for the intended application, which are unfortunately conflicting requirements. Two ways to perform fast, high-fidelity physics-based simulation are (1) model simplification and (2) parallel computation. Model simplification allows simulation at an interactive rate while introducing an acceptable level of error. Currently, manual model simplification is the most common way of speeding up simulation, but it is time consuming; automated model simplification is therefore needed to reduce the development time of VEs. This dissertation presents an automated model simplification approach based on geometric reasoning, spatial decomposition, and temporal coherence. Geometric reasoning is used to develop an accessibility-based algorithm for removing portions of geometric models that play no role in rigid body to rigid body interaction. Removing such inaccessible portions of the interacting models has no influence on simulation accuracy but reduces computation time significantly. Spatial decomposition is used to develop a clustering algorithm that reduces the number of fluid pressure computations, yielding significant speedup of rigid body and fluid interaction simulation. A temporal coherence algorithm reuses computed rigid body to fluid interaction forces based on the coherence of the fluid surrounding the rigid body. The simulations are sped up further by computing on the graphics processing unit (GPU). The dissertation also discusses the issues involved in developing parallel algorithms for rigid body simulation on both multi-core processors and GPUs. The developed algorithms have enabled real-time, high-fidelity, six-degrees-of-freedom, time-domain simulation of unmanned sea surface vehicles (USSV) and can be used for autonomous motion planning, tele-operation, and learning-from-demonstration applications.
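
The temporal-coherence idea lends itself to a small illustration. The sketch below is a minimal interpretation, not the dissertation's implementation: all names are hypothetical, and the expensive fluid-force solve is reduced to a stub. The point is the caching policy, under the assumption that a force can be reused while the fluid state sampled around the body changes by less than a tolerance.

```python
import numpy as np

def fluid_force(body_pos, fluid_field):
    """Stand-in for an expensive rigid-body/fluid force computation."""
    # A real solver would integrate fluid pressure over the wetted surface.
    return fluid_field(body_pos)

class TemporalCoherenceCache:
    """Reuse the last force while the surrounding fluid stays coherent."""

    def __init__(self, tolerance):
        self.tolerance = tolerance
        self.last_state = None   # fluid state sampled around the body
        self.last_force = None   # cached force from the last full solve

    def force(self, body_pos, fluid_field, local_state):
        # Recompute only when the fluid around the body has changed
        # noticeably since the last full computation.
        changed = (self.last_state is None or
                   np.linalg.norm(local_state - self.last_state) > self.tolerance)
        if changed:
            self.last_force = fluid_force(body_pos, fluid_field)
            self.last_state = np.array(local_state, dtype=float)
        return self.last_force

# Toy usage: the force is reused on step 2 and recomputed on step 3.
cache = TemporalCoherenceCache(tolerance=0.05)
field = lambda p: np.array([0.0, 9.81, 0.0])
for state in (np.zeros(3), np.zeros(3), np.array([1.0, 0.0, 0.0])):
    print(cache.force(np.zeros(3), field, state))
```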

    Realistic Visualization of Accessories within Interactive Simulation Systems for Garment Prototyping

    In virtual garment prototyping, designers create a garment design using Computer Aided Design (CAD). In contrast to traditional CAD, the word "aided" here refers to the computer replicating the real-world behavior of garments, which allows the designer to interact naturally with the design. Designers have a wide range of expression in their work: they define details on a garment that go beyond the type of cloth used. The way cloth patterns are sewn together and the style and placement of surface details, such as appliqués, strongly shape the visual appearance of a garment. Therefore, virtual and real garments usually carry many such surface details. Interactive virtual garment prototyping is an interdisciplinary field, and several problems have to be solved to create an efficiently usable real-time virtual prototyping system for garment manufacturers. Such a system can be roughly separated into three components. The first deals with the acquisition of material and other data needed to let a simulation mimic plausible real-world garment behavior. The second is the garment simulation process itself. The third is centered on the visualization of the simulation results. The overall process thus spans several scientific areas, each of which must take the needs of the others into account to yield an interactive overall system. My work targets the third component, visualization. Developments in recent years have greatly improved both the speed and reliability of simulation and rendering approaches suitable for the virtual prototyping of garments. However, existing approaches still leave many problems unsolved, especially when interactive simulation and visualization must work together and many object and surface details come into play, as is the case when virtual prototyping is used in a production environment. Current approaches handle most surface details as part of the simulation. This generates a large amount of data early in the pipeline that must be transferred and processed, requiring substantial processing time and easily stalling the pipeline formed by the simulation and visualization system. Additionally, real-world garments are already complex in their cloth arrangement alone, which demands further computational power. As a result, interactive garment simulation tends to lose its capability for interactive handling of the garment. In my work I present a solution that moves the handling of design details entirely from the simulation stage to a fully GPU-based rendering stage. In this way, the behavior of the garment and its visual appearance are separated: the simulation can concentrate fully on the fabric behavior, while the visualization handles the placement of surface details, lighting, materials, and self-shadowing. A much higher degree of surface complexity can thus be achieved within an interactive virtual prototyping system than with currently existing approaches.
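
The described separation of concerns can be sketched as two decoupled stages. The names below are hypothetical and the GPU draw call is reduced to a print; this is an illustration of the architecture, not the thesis's actual system: the simulation integrates only the coarse fabric mesh, while surface details live in a render-only layer that never enters the solver.

```python
from dataclasses import dataclass

@dataclass
class SurfaceDetail:
    """A render-only decoration (appliqué, seam) placed in texture space."""
    name: str
    uv: tuple  # placement on the garment's UV atlas (hypothetical layout)

class ClothSim:
    """Simulation stage: advances only the coarse fabric mesh."""
    def __init__(self, vertices):
        self.vertices = vertices

    def step(self, dt):
        # Time integration of the fabric alone; surface details never
        # enter the solver, which keeps this stage interactive.
        pass

def render(sim, details):
    # Render stage: the deformed coarse mesh is uploaded once and each
    # detail is resolved on the GPU; a print stands in for the draw call.
    for d in details:
        print(f"draw {d.name} at uv={d.uv} over {len(sim.vertices)} vertices")

sim = ClothSim(vertices=[(0, 0, 0), (1, 0, 0), (0, 1, 0)])
sim.step(1 / 60)
render(sim, [SurfaceDetail("applique", (0.4, 0.7))])
```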

    The Application of Mixed Reality Within Civil Nuclear Manufacturing and Operational Environments

    This thesis documents the design and application of Mixed Reality (MR) within a nuclear manufacturing cell through the creation of a Digitally Assisted Assembly Cell (DAAC). The DAAC is a proof-of-concept system combining full-body tracking within a room-sized environment with a bi-directional feedback mechanism that allows communication between users within the Virtual Environment (VE) and a manufacturing cell. This enables training, remote assistance, delivery of work instructions, and data capture within a manufacturing cell. The research underpinning the DAAC encompasses four main areas: the nuclear industry; Virtual Reality (VR) and MR technology; MR within manufacturing; and the 4th Industrial Revolution (IR4.0). Using an array of Kinect sensors, the DAAC was designed to capture user movements within a real manufacturing cell and transfer them in real time to a VE, creating a digital twin of the real cell. Users can interact with each other via digital assets and laser pointers projected into the cell, accompanied by a built-in Voice over Internet Protocol (VoIP) system. This allows implicit knowledge to be captured from operators within the real manufacturing cell and transferred to future operators. Additionally, users can connect to the VE from anywhere in the world, so experts can communicate with users in the real manufacturing cell and assist with their training. The human tracking data fills an identified gap in the IR4.0 network of Cyber Physical Systems (CPS) and could allow future optimisations within manufacturing systems, Material Resource Planning (MRP), and Enterprise Resource Planning (ERP). This project demonstrates how MR could prove valuable within nuclear manufacture. The DAAC is designed to be low cost, in the hope that this will allow its use by groups who have traditionally been priced out of MR technology, and could help Small to Medium Enterprises (SMEs) close the double digital divide between themselves and larger global corporations. For larger corporations it offers the benefit of being low cost and, consequently, easier to roll out across the value chain. Skills developed in one area can also be transferred to others across the internet, as users in one manufacturing cell can watch and communicate with those in another. However, as a proof of concept, the DAAC is at Technology Readiness Level (TRL) five or six, and further testing is required to assess and improve the technology prior to wider application. The work was patented in the UK (S. Reddish et al., 2017a), the US (S. Reddish et al., 2017b), and China (S. Reddish et al., 2017c). The patents are owned by Rolls-Royce and cover the methods of bi-directional feedback through which users can interact from the digital to the real and vice versa. Key words: Mixed Mode Reality, Virtual Reality, Augmented Reality, Nuclear, Manufacture, Digital Twin, Cyber Physical System
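
The bi-directional link can be illustrated with a minimal sketch. The message format and names below are assumptions for illustration only, not the patented DAAC implementation: tracked skeleton frames flow from the cell to the VE as newline-delimited JSON, and annotations such as a remote expert's laser pointer flow back over the same kind of channel.

```python
import json
import socket

def send_skeleton_frame(sock, user_id, joints):
    """Serialize one tracked-skeleton frame and ship it to the VE."""
    frame = {"user": user_id,
             "joints": {name: list(pos) for name, pos in joints.items()}}
    sock.sendall(json.dumps(frame).encode() + b"\n")

def recv_message(sock):
    """Read one newline-delimited JSON message (pose frame or annotation)."""
    buf = b""
    while not buf.endswith(b"\n"):
        buf += sock.recv(4096)
    return json.loads(buf)

# Demo over an in-process socket pair standing in for the network link.
cell, ve = socket.socketpair()
send_skeleton_frame(cell, "operator-1", {"head": (0.0, 1.7, 0.2)})
print(recv_message(ve))           # the VE receives the operator's pose
ve.sendall(b'{"laser": [0.5, 1.2, 0.0]}\n')
print(recv_message(cell))         # the cell receives a pointer annotation
```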

    New geometric algorithms and data structures for collision detection of dynamically deforming objects

    Any virtual environment that supports interactions between virtual objects, or between a user and objects, needs a collision detection system to handle all interactions in a physically correct or plausible way. Collision detection determines whether objects are in contact or interpenetrate; interpenetrations are then resolved by a collision handling system. Because objects can interact with each other in nearly all simulations, collision detection is a fundamental technology needed across physically based simulation, robotic path and motion planning, virtual prototyping, and many other fields. Most virtual environments aim to represent the real world as realistically as possible, and are therefore becoming more and more complex. Furthermore, all models in a virtual environment should react to applied forces the way real objects do. Nearly all real-world objects deform, or break into their individual parts, when forces act upon them. Deformable objects are thus becoming more common in virtual environments that aim for realism, presenting new challenges to the collision detection system. The necessary collision detection computations can be very complex, with the effect that collision detection is the performance bottleneck in most simulations. Most rigid body collision detection approaches use a bounding volume hierarchy (BVH) as an acceleration data structure. This technique is perfectly suitable as long as the object does not change its shape. For a soft body, an update step is necessary to ensure that the underlying acceleration data structure remains valid after each simulation step. This update step can be very time consuming, is often hard to implement, and in most cases will produce a degenerate BVH after some simulation steps if the objects undergo general deformation. The collision detection approach presented here therefore works entirely without an acceleration data structure and supports both rigid and soft bodies. It can compute inter-object and intra-object collisions of rigid and deformable objects consisting of many tens of thousands of triangles in a few milliseconds. To realize this, the scene is subdivided into parts using a fuzzy clustering approach; all further steps can then be performed for each cluster in parallel and, if desired, distributed to different GPUs. Tests were performed to judge the performance of our approach against other state-of-the-art collision detection algorithms, and we integrated our approach into Bullet, a commonly used physics engine, for evaluation. To compare different rigid body collision detection algorithms fairly, we also propose a new collision detection Benchmarking Suite, which evaluates both performance and the quality of the collision response and is accordingly subdivided into a Performance Benchmark and a Quality Benchmark. In future work, this approach is to be extended to support soft body collision detection algorithms.
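
A minimal sketch of the clustering idea, under stated assumptions: fuzzy c-means memberships are computed for triangle centroids, a triangle joins every cluster where its membership exceeds a threshold, and candidate pairs are emitted per cluster. The names and parameters are illustrative, not the dissertation's algorithm; the point is that each cluster's pair test is independent and could be dispatched to its own GPU.

```python
import numpy as np

def fuzzy_memberships(points, centers, m=2.0):
    """Fuzzy c-means membership of each point in each cluster center."""
    d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2) + 1e-9
    inv = d ** (-2.0 / (m - 1.0))
    return inv / inv.sum(axis=1, keepdims=True)

def cluster_candidate_pairs(centroids, centers, threshold=0.3):
    """Assign triangles (by centroid) to every cluster where their
    membership is high enough, then emit intra-cluster candidate pairs.
    Each cluster's loop is independent and parallelizable."""
    u = fuzzy_memberships(centroids, centers)
    pairs = set()
    for k in range(centers.shape[0]):
        members = np.nonzero(u[:, k] > threshold)[0]
        for i in range(len(members)):
            for j in range(i + 1, len(members)):
                pairs.add((members[i], members[j]))  # narrow-phase candidates
    return pairs

# Toy scene: 6 triangle centroids, 2 cluster centers. The centroid at
# (2.5, 2.5, 2.5) has roughly equal membership in both clusters, so the
# fuzzy assignment places it in both, avoiding missed boundary collisions.
cents = np.array([[0, 0, 0], [0.1, 0, 0], [0.2, 0.1, 0],
                  [5, 5, 5], [5.1, 5, 5], [2.5, 2.5, 2.5]], dtype=float)
centers = np.array([[0, 0, 0], [5, 5, 5]], dtype=float)
print(sorted(cluster_candidate_pairs(cents, centers)))
```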

    3D interaction with scientific data : an experimental and perceptual approach


    From motion capture to interactive virtual worlds : towards unconstrained motion-capture algorithms for real-time performance-driven character animation

    This dissertation takes performance-driven character animation as a representative application and advances motion capture algorithms and animation methods to meet its high demands. Existing approaches either have coarse resolution and a restricted capture volume, require expensive and complex multi-camera systems, or rely on intrusive suits and controllers. For motion capture, set-up time is reduced by using fewer cameras, accuracy is increased despite occlusions and general environments, initialization is automated, and free roaming is enabled by egocentric cameras. For animation, increased robustness enables the use of low-cost sensor input, custom control gesture definition is guided to support novice users, and animation expressiveness is increased. The main contributions are: 1) an analytic and differentiable visibility model for pose optimization under strong occlusions, 2) a volumetric contour model for automatic actor initialization in general scenes, 3) a method to annotate and augment image-pose databases automatically, 4) the utilization of unlabeled examples for character control, and 5) the generalization and disambiguation of cyclical gestures for faithful character animation. In summary, the whole process of human motion capture, processing, and application to animation is advanced. These advances on the state of the art have the potential to improve many interactive applications, within and outside virtual reality.

    Towards procedural music-driven animation: exploring audio-visual complementarity

    Master's dissertation (Master's Degree in Computer Science). This thesis describes our approach to developing a framework for the interactive creation of music-driven animations. We aim to create an integrated environment where real-time musical information is easily accessible and can be flexibly used to manipulate different aspects of a reactive simulation. Such modifications are specified through a scripting language and include, for instance, geometrical transformations and geometry synthesis, gradual colour changes, and the application of arbitrary forces. Our framework thus represents a proof of concept for converting musical information into arbitrary modifications of a dynamic simulation, producing a variety of animations. This is made possible by a trade-off between control and automation: control is present in that the user programs these modifications with a scripting language, and automation is present in that physics and interpolation estimate the visual effects resulting from those modifications. The particular test case for our system was the animation/simulation of a growing tree reacting to wind. In order to control or influence the tree growth and the wind field, as well as other visual parameters, the system accepts two different but complementary representations of music: a MIDI event stream and raw audio data. Different musical features are obtainable from each of these representations. On one hand, using MIDI, we can discretely synchronise visual effects with the basic elements of music, such as the sounding of notes or chords. On the other, using audio, we can produce continuous changes by obtaining numerical data from basic spectral analysis. Our framework provides a common interface for the combined application of these different sources of musical information to the generation of visual imagery in the form of procedural animations. We describe algorithms presented in multiple research papers, namely for tree generation, wind field generation, and tree reaction to wind, briefly detailing our implementation and architecture. We also describe why each of these methods was chosen, how they are organised in our platform, and how their parameters may be modified from our scripting environment, leading to what we regard as the procedural generation of animations. By allowing users to access musical information and giving them control of what we have come to refer to as animation primitives, such as wind and tree growth, we believe we have taken a first step towards exploring a novel concept with seemingly endless expressive potential.
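
The two complementary input paths can be illustrated with a minimal sketch (hypothetical scene parameters and event format, not the thesis's scripting API): a MIDI note-on triggers a discrete change such as a growth step, while basic spectral analysis of an audio block continuously modulates the wind strength.

```python
import numpy as np

def on_midi_event(event, scene):
    """Discrete path: synchronize visual events with notes or chords."""
    if event["type"] == "note_on":
        scene["branches"] += 1      # e.g. a sounding note spawns a growth step

def on_audio_block(samples, scene, rate=44100):
    """Continuous path: basic spectral analysis drives the wind field."""
    spectrum = np.abs(np.fft.rfft(samples))
    low_band = spectrum[: len(spectrum) // 8].mean()   # rough bass energy
    scene["wind_strength"] = float(low_band)

# Toy usage: one MIDI note and one 1024-sample audio block update the scene.
scene = {"branches": 0, "wind_strength": 0.0}
on_midi_event({"type": "note_on", "note": 60}, scene)
t = np.linspace(0, 1024 / 44100, 1024, endpoint=False)
on_audio_block(np.sin(2 * np.pi * 110 * t), scene)
print(scene)
```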

    Interactive Virtual Cinematography
