
    Combining physical constraints with geometric constraint-based modeling for virtual assembly

    The research presented in this dissertation aims to create a virtual assembly environment capable of simulating the constant and subtle interactions (hand-part, part-part) that occur during manual assembly, and of providing appropriate feedback to the user in real time. A virtual assembly system called SHARP (System for Haptic Assembly and Realistic Prototyping) is created, which utilizes simulated physical constraints for part placement during assembly. The first approach taken in this research utilized Voxmap PointShell (VPS) software to implement collision detection and physics-based modeling in SHARP. A volumetric approach, in which complex CAD models were represented by numerous small cubic voxel elements, was used to obtain fast physics update rates (500-1000 Hz). A novel dual-handed haptic interface was developed and integrated into the system, allowing the user to manipulate parts with both hands simultaneously. However, the coarse model approximations used for collision detection and physics-based modeling only allowed assembly when the minimum clearance between parts was ∼8-10% or greater. To address this low-clearance assembly problem, the second effort focused on importing accurate parametric CAD (B-rep) models into SHARP. These accurate B-rep representations are used for collision detection as well as for simulating physical contacts more accurately. A new hybrid approach is presented, which combines simulated physical constraints with geometric constraints that can be defined at runtime. Different case studies are used to identify the combination of methods (collision detection, physical constraints, geometric constraints) best suited to simulating the intricate interactions and environment behavior of manual assembly. An innovative automatic constraint recognition algorithm is created and integrated into SHARP. The feature-based approach used in the algorithm design facilitates faster identification of the potential geometric constraints that need to be defined. This approach results in optimized system performance while providing a more natural user experience for assembly.
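    The hybrid idea of letting penalty-style collision physics drive gross part motion while runtime-defined geometric constraints take over the final, low-clearance placement can be illustrated with a minimal sketch. All class names, the penalty-force model, and the axis constraint below are hypothetical simplifications for illustration, not SHARP's actual data structures or algorithms.

```cpp
// Minimal sketch of a hybrid physics / geometric-constraint update step.
// Types, the penalty force, and the constraint are illustrative assumptions.
#include <cmath>
#include <cstdio>

struct Vec3 { double x, y, z; };
Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 operator*(double s, Vec3 v) { return {s * v.x, s * v.y, s * v.z}; }

struct Part {
    Vec3 position{};
    Vec3 velocity{};
    double mass = 1.0;
};

// Stand-in for voxel-based collision response (VPS-style): a repulsive
// penalty force on `a` derived from interpenetration with `b`.
Vec3 voxelContactForce(const Part& a, const Part& b) {
    Vec3 d{a.position.x - b.position.x, a.position.y - b.position.y,
           a.position.z - b.position.z};
    double dist = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
    const double contactRadius = 0.05;          // coarse voxel resolution
    if (dist >= contactRadius || dist == 0.0) return {0, 0, 0};
    double penetration = contactRadius - dist;
    return (1000.0 * penetration / dist) * d;   // simple penalty force
}

// A runtime-defined geometric constraint, e.g. "keep the part on an
// insertion axis once the mating features have been recognized".
struct AxisConstraint {
    Vec3 origin, axis;                          // unit axis assumed
    bool active = false;
    void apply(Part& p) const {
        if (!active) return;
        Vec3 r{p.position.x - origin.x, p.position.y - origin.y,
               p.position.z - origin.z};
        double along = r.x * axis.x + r.y * axis.y + r.z * axis.z;
        p.position = origin + along * axis;     // project onto the axis
    }
};

// One hybrid update: collision physics first, then any recognized
// geometric constraints correct the final placement.
void hybridStep(Part& moving, const Part& fixed, AxisConstraint& c, double dt) {
    Vec3 f = voxelContactForce(moving, fixed);
    moving.velocity = moving.velocity + (dt / moving.mass) * f;
    moving.position = moving.position + dt * moving.velocity;
    c.apply(moving);                            // geometric refinement
}

int main() {
    Part peg{{0.02, 0.1, 0.0}}, hole{{0.0, 0.0, 0.0}};
    AxisConstraint insertion{{0, 0, 0}, {0, 1, 0}, /*active=*/true};
    const double dt = 1.0 / 1000.0;             // ~1 kHz physics rate
    for (int i = 0; i < 1000; ++i) hybridStep(peg, hole, insertion, dt);
    std::printf("peg x after constraint projection: %.4f\n", peg.position.x);
}
```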

    A hybrid method for haptic feedback to support manual virtual product assembly

    The purpose of this research is to develop methods to support manual virtual assembly using haptic (force) feedback in a virtual environment. The results of this research will be used in an engineering framework for assembly simulation, training, and maintenance. The key research challenge is to advance the ability of users to assemble complex, low-clearance CAD parts as they exist digitally, without the need to create expensive physical prototypes. The proposed method consists of a Virtual Reality (VR) system that combines voxel collision detection and boundary representation methods into a hybrid algorithm containing the necessary information for both force feedback and constraint recognition. The key to this approach is successfully developing the data structure and logic needed to switch between collision detection and constraint recognition while maintaining a haptic refresh rate of 1000 Hz. VR is a set of unique technologies that support human-centered computer interaction. Experience with current VR systems that simulate low-clearance assembly operations with haptic feedback indicates that such systems are highly desirable tools for the evaluation of preliminary designs, as well as for virtual training and maintenance processes. This work will result in a novel interface for assembly methods prototyping, one that allows intuitive interaction with parts based on a powerful combination of analytical, visual, and haptic tools.
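    The 1000 Hz requirement amounts to a per-cycle budget of roughly 1 ms in which the system must choose between voxel collision response and boundary-representation constraint recognition. The scheduling sketch below is a hypothetical illustration of such a budgeted loop; the switching predicate, timings, and function names are assumptions for the example, not the actual hybrid algorithm.

```cpp
// Hypothetical 1 kHz haptic loop that switches between voxel collision
// response and B-rep constraint recognition within a 1 ms cycle budget.
#include <chrono>
#include <cstdio>
#include <thread>

enum class Mode { VoxelCollision, ConstraintRecognition };

// Stand-ins for the two per-cycle workloads (assumed, not the real ones).
void voxelCollisionResponse() { /* penalty forces from voxel overlap */ }
void constraintRecognition()  { /* test B-rep features for mating */ }

int main() {
    using clock = std::chrono::steady_clock;
    constexpr auto kCycle = std::chrono::microseconds(1000);  // 1000 Hz
    double clearanceEstimate = 0.2;   // fraction of nominal size (assumed)

    for (int cycle = 0; cycle < 2000; ++cycle) {
        auto start = clock::now();

        // Switching policy (assumption): once parts are nearly aligned and
        // clearance is small, precise constraint recognition takes over.
        Mode mode = (clearanceEstimate < 0.05) ? Mode::ConstraintRecognition
                                               : Mode::VoxelCollision;
        if (mode == Mode::VoxelCollision) voxelCollisionResponse();
        else                              constraintRecognition();

        clearanceEstimate *= 0.999;      // pretend the parts converge

        // Sleep off the remainder of the 1 ms budget; an overrun would mean
        // the haptic device receives stale forces.
        auto elapsed = clock::now() - start;
        if (elapsed < kCycle) std::this_thread::sleep_for(kCycle - elapsed);
        else std::fprintf(stderr, "cycle %d overran its 1 ms budget\n", cycle);
    }
}
```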

    A continuum robotic platform for endoscopic non-contact laser surgery: design, control, and preclinical evaluation

    The application of laser technologies in surgical interventions has gained clinical acceptance due to their atraumatic properties. In addition to the manual application of fibre-guided lasers with tissue contact, non-contact transoral laser microsurgery (TLM) of laryngeal tumours has become established in ENT surgery. However, TLM requires many years of surgical training so that tumours can be resected while preserving the function of adjacent organs and thus the patient's quality of life. The positioning of the microscopic laser applicator outside the patient can also impede a direct line of sight to the target area due to anatomical variability and limit the working space. Further clinical challenges include positioning the laser focus on the tissue surface, imaging, planning and performing laser ablation, and motion of the target area during surgery. This dissertation aims to address the limitations of TLM through robotic approaches and intraoperative assistance. Although a trend towards minimally invasive surgery is apparent, no highly integrated platform for endoscopic delivery of focused laser radiation is available to date. Likewise, there are no known devices that incorporate scene information from endoscopic imaging into ablation planning and execution. For focusing of the laser beam close to the target tissue, this work first presents miniaturised focusing optics that can be integrated into endoscopic systems. Experimental trials characterise the optical properties and the ablation performance. A robotic platform is realised for manipulation of the focusing optics. It is based on a variable-length continuum manipulator which, in combination with a mechatronic actuation unit, enables movements of the endoscopic end effector in five degrees of freedom. The kinematic modelling and control of the robot are integrated into a modular framework that is evaluated experimentally. The manipulation of focused laser radiation also requires precise adjustment of the focal position on the tissue. For this purpose, visual, haptic, and visual-haptic assistance functions are presented. These support the operator during teleoperation in setting an optimal working distance. Advantages of visual-haptic assistance are demonstrated in a user study. The system performance and usability of the overall robotic system are assessed in an additional user study. Analogous to a clinical scenario, the subjects follow predefined target patterns with a laser spot; the mean positioning accuracy of the spot is 0.5 mm. Finally, methods of image-guided robot control are introduced to automate laser ablation. Experiments confirm a positive effect of the proposed automation concepts on non-contact laser surgery.
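    For readers unfamiliar with continuum-robot kinematics, a common way to relate a segment's configuration to its tip position is the constant-curvature arc model. The sketch below implements only that textbook model as an illustrative assumption; it is not the dissertation's specific kinematic formulation for the variable-length manipulator.

```cpp
// Constant-curvature forward kinematics, a standard illustrative model for
// continuum manipulators (not the dissertation's specific formulation).
#include <cmath>
#include <cstdio>

struct TipPosition { double x, y, z; };

// kappa: arc curvature [1/m], phi: bending-plane angle [rad], len: arc length [m]
TipPosition constantCurvatureTip(double kappa, double phi, double len) {
    if (std::fabs(kappa) < 1e-9) {
        // Straight configuration: the arc degenerates to a line segment.
        return {0.0, 0.0, len};
    }
    double r = 1.0 / kappa;      // bending radius
    double bend = kappa * len;   // total bending angle
    return {std::cos(phi) * r * (1.0 - std::cos(bend)),
            std::sin(phi) * r * (1.0 - std::cos(bend)),
            r * std::sin(bend)};
}

int main() {
    const double kPi = 3.14159265358979323846;
    // Example: a 60 mm segment bent through a quarter circle in the x-z plane.
    TipPosition p = constantCurvatureTip(kPi / (2.0 * 0.06), 0.0, 0.06);
    std::printf("tip at (%.3f, %.3f, %.3f) m\n", p.x, p.y, p.z);
}
```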

    An Integrated Augmented Reality Method to Assembly Simulation and Guidance

    Ph.D. (Doctor of Philosophy)

    Impact of Ear Occlusion on In-Ear Sounds Generated by Intra-oral Behaviors

    We conducted a case study with one volunteer and a recording setup to detect sounds induced by the following actions: jaw clenching, tooth grinding, reading, eating, and drinking. The setup consisted of two in-ear microphones, where the left ear was semi-occluded with a commercially available earpiece and the right ear was occluded with a mouldable silicone earpiece. Investigations in the time and frequency domains demonstrated that for behaviors such as eating, tooth grinding, and reading, sounds could be recorded with both sensors. For jaw clenching, however, occluding the ear with a mouldable earpiece was necessary to enable its detection. This can be attributed to the fact that the mouldable earpiece sealed the ear canal and isolated it from the environment, resulting in a detectable change in pressure. In conclusion, our work suggests that detecting behaviors such as eating, grinding, and reading with a semi-occluded ear is possible, whereas behaviors such as clenching require complete occlusion of the ear if the activity is to be detected easily. Nevertheless, the latter approach may limit real-world applicability because it hinders hearing.
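    As a rough illustration of the kind of time-domain analysis described above, the sketch below computes short-time RMS energy over framed in-ear audio, the sort of feature in which a pressure change from a sealed ear canal would show up. The frame size, sample rate, and synthetic signal are assumptions for the example, not the study's actual processing pipeline.

```cpp
// Hypothetical short-time RMS energy analysis of an in-ear audio channel.
// Frame length, sample rate, and the synthetic input are illustrative only.
#include <cmath>
#include <cstdio>
#include <vector>

std::vector<double> shortTimeRms(const std::vector<double>& samples, size_t frame) {
    std::vector<double> rms;
    for (size_t start = 0; start + frame <= samples.size(); start += frame) {
        double energy = 0.0;
        for (size_t i = start; i < start + frame; ++i)
            energy += samples[i] * samples[i];
        rms.push_back(std::sqrt(energy / static_cast<double>(frame)));
    }
    return rms;
}

int main() {
    const double fs = 8000.0;                 // assumed sample rate [Hz]
    const size_t frame = 400;                 // 50 ms analysis frames
    std::vector<double> occluded(8000, 0.0);  // 1 s of synthetic signal

    // Simulate a low-frequency pressure swell (e.g. a clench) mid-recording.
    for (size_t n = 3200; n < 4800; ++n)
        occluded[n] = 0.2 * std::sin(2.0 * 3.14159265358979 * 30.0 * n / fs);

    std::vector<double> rms = shortTimeRms(occluded, frame);
    for (size_t k = 0; k < rms.size(); ++k)
        std::printf("frame %2zu: RMS %.4f\n", k, rms[k]);
}
```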

    Virtual Assembly and Disassembly Analysis: An Exploration into Virtual Object Interactions and Haptic Feedback

    In recent years, researchers have developed virtual environments that allow more realistic human-computer interaction and have become increasingly popular for engineering applications such as computer-aided design and process evaluation. For instance, the demand for product service, remanufacture, and recycling has forced companies to consider ease of assembly and disassembly during the design phase of their products. Evaluating these processes in a virtual environment during the early stages of design not only increases the impact of design modifications on the final product, but also eliminates the time, cost, and material associated with the construction of physical prototypes. Although numerous virtual environments for assembly analysis exist or are under development, many provide only visual feedback. A real-time haptic simulation test bed for the analysis of assembly and disassembly operations has been developed, providing the designer with force and tactile feedback in addition to traditional visual feedback. The development of such a simulation requires the modeling of collisions between virtual objects, which is a computationally expensive process. In addition, the demands of a real-time simulation incorporating haptic feedback introduce further complications for reliable collision detection. Therefore, the first objective of this work was to discover ways in which current collision detection libraries can be improved or supplemented to create more robust interaction between virtual objects. Using the simulation as a test bed, studies were then conducted to determine the potential usefulness of haptic feedback for the analysis of assembly and disassembly operations. The following significant contributions were accomplished: (1) a simulation combining the strengths of an impulse-based simulation with a supplemental constraint maintenance scheme for modeling object interactions, (2) a toolkit of supplemental techniques to support object interactions in situations where collision detection algorithms commonly fail, (3) a haptic assembly and disassembly simulation useful for experimentation, and (4) results from a series of five experimental user studies focused on determining the effectiveness of haptic feedback in such a simulation. Additional contributions include knowledge of the usability and functionality of current collision detection libraries, the limitations of haptic feedback devices, and feedback from experimental subjects regarding their comfort and overall satisfaction with the simulation. Ph.D. Committee Chair: Bras, Bert; Committee Members: Baker, Nelson; Griffin, Paul; Paredis, Chris; Rosen, David.
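    Contribution (1) refers to impulse-based rigid-body simulation. The sketch below shows the standard frictionless collision impulse for the linear (point-mass) case, with angular terms omitted for brevity; it illustrates the general technique only, not the test bed's actual implementation.

```cpp
// Standard frictionless collision impulse between two rigid bodies
// (linear terms only), as a general illustration of impulse-based simulation.
#include <cstdio>

struct Body {
    double invMass;      // 1 / mass (0 for an immovable object)
    double vx, vy, vz;   // linear velocity
};

struct Contact {
    double nx, ny, nz;   // unit contact normal, pointing from b toward a
};

// Apply an impulse j = -(1 + e) * v_rel_n / (1/m_a + 1/m_b) along the normal.
void resolveCollision(Body& a, Body& b, const Contact& c, double restitution) {
    double relVx = a.vx - b.vx, relVy = a.vy - b.vy, relVz = a.vz - b.vz;
    double relAlongNormal = relVx * c.nx + relVy * c.ny + relVz * c.nz;
    if (relAlongNormal > 0.0) return;            // bodies already separating

    double invMassSum = a.invMass + b.invMass;
    if (invMassSum == 0.0) return;               // two immovable bodies

    double j = -(1.0 + restitution) * relAlongNormal / invMassSum;
    a.vx += j * a.invMass * c.nx;  a.vy += j * a.invMass * c.ny;  a.vz += j * a.invMass * c.nz;
    b.vx -= j * b.invMass * c.nx;  b.vy -= j * b.invMass * c.ny;  b.vz -= j * b.invMass * c.nz;
}

int main() {
    Body part{1.0, 0.0, -1.0, 0.0};              // 1 kg part falling at 1 m/s
    Body fixture{0.0, 0.0, 0.0, 0.0};            // immovable fixture
    Contact contact{0.0, 1.0, 0.0};              // normal points up toward the part
    resolveCollision(part, fixture, contact, 0.0);   // inelastic contact
    std::printf("part velocity after impulse: (%.2f, %.2f, %.2f)\n",
                part.vx, part.vy, part.vz);
}
```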