579 research outputs found

    Experience-based virtual training system for knee arthroscopic inspection


    Development and evaluation of mixed reality-enhanced robotic systems for intuitive tele-manipulation and telemanufacturing tasks in hazardous conditions

    In recent years, with the rapid development of space exploration, deep-sea discovery, nuclear rehabilitation and management, and robot-assisted medical devices, there is an urgent need for humans to interactively control robotic systems to perform increasingly precise remote operations. The value of medical telerobotic applications during the recent coronavirus pandemic has also been demonstrated and will grow in the future. This thesis investigates novel approaches to the development and evaluation of a mixed reality-enhanced telerobotic platform for intuitive remote teleoperation in dangerous and difficult working conditions, such as contaminated sites and undersea or extreme welding scenarios. This research aims to remove human workers from harmful working environments by equipping complex robotic systems with human intelligence and command/control via intuitive and natural human-robot interaction, including the implementation of MR techniques to improve the user's situational awareness, depth perception, and spatial cognition, which are fundamental to effective and efficient teleoperation. The proposed robotic mobile manipulation platform consists of a UR5 industrial manipulator, a 3D-printed parallel gripper, and a customized mobile base, and is envisaged to be controlled by non-skilled operators who are physically separated from the robot workspace through an MR-based vision/motion mapping approach. The platform development process involved CAD/CAE/CAM and rapid prototyping techniques, such as 3D printing and laser cutting. Robot Operating System (ROS) and Unity 3D are employed in the development process to enable the embedded system to intuitively control the robotic system and to ensure immersive and natural human-robot interactive teleoperation. This research presents an integrated motion/vision retargeting scheme based on a mixed reality subspace approach for intuitive and immersive telemanipulation.
An imitation-based, velocity-centric motion mapping is implemented via the MR subspace to accurately track operator hand movements for robot motion control, enabling spatial velocity-based control of the robot tool center point (TCP). The proposed system allows precise manipulation of end-effector position and orientation while readily adjusting the corresponding maneuvering velocity. A mixed reality-based multi-view merging framework for immersive and intuitive telemanipulation of a complex mobile manipulator with integrated 3D/2D vision is presented. The proposed 3D immersive telerobotic schemes provide users with depth perception through the merging of multiple 3D/2D views of the remote environment via the MR subspace. The mobile manipulator platform can be effectively controlled by non-skilled operators who are physically separated from the robot workspace through a velocity-based imitative motion mapping approach. Finally, this thesis presents an integrated mixed reality and haptic feedback scheme for intuitive and immersive teleoperation of robotic welding systems. By incorporating MR technology, the user is fully immersed in a virtual operating space augmented by real-time visual feedback from the robot workspace. The proposed mixed reality virtual fixture integration approach implements hybrid haptic constraints that guide the operator's hand movements along a conical guidance path to effectively align the welding torch and constrain the welding operation within a collision-free area. Overall, this thesis presents a complete telerobotic application that uses mixed reality and immersive elements to effectively translate the operator into the robot's space in an intuitive and natural manner. The results are thus a step forward in cost-effective and computationally efficient human-robot interaction research and technologies.
The system presented is readily extensible to a range of potential applications beyond the robotic tele-welding and tele-manipulation tasks used to demonstrate, optimise, and prove the concepts.
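The abstract does not specify the exact form of the velocity-centric motion mapping. A minimal sketch of one plausible version, which scales the operator's hand displacement per MR frame into a clamped TCP velocity command, might look like the following (the function name, the scale gain, and the 0.25 m/s safety limit are all assumptions for illustration, not values from the thesis):

```python
import numpy as np

def velocity_mapping(hand_pos_prev, hand_pos_curr, dt, scale=1.0, v_max=0.25):
    """Map operator hand displacement in the MR subspace to a TCP velocity
    command. Positions are 3-vectors in metres; the result is clamped to
    v_max (m/s) so a sudden hand jerk cannot command an unsafe speed."""
    v = scale * (np.asarray(hand_pos_curr, float) - np.asarray(hand_pos_prev, float)) / dt
    speed = np.linalg.norm(v)
    if speed > v_max:
        v = v * (v_max / speed)   # preserve direction, limit magnitude
    return v

# Hand moves 2 cm along +x in one 20 ms frame: raw velocity is 1.0 m/s,
# which gets clamped to 0.25 m/s along +x.
v = velocity_mapping([0.0, 0.0, 0.0], [0.02, 0.0, 0.0], dt=0.02)
```

Velocity-based (rather than position-based) mapping has the practical advantage that the operator can re-centre their hand without moving the robot, much like lifting a mouse.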

    Robotic learning of force-based industrial manipulation tasks

    Even with rapid technological advancements, robots are still not the most comfortable machines to work with. Firstly, the separation of the robot and human workspaces imposes an additional financial burden. Secondly, re-programming costs are significant when products change, especially in Small and Medium-sized Enterprises (SMEs). Therefore, there is a significant need to reduce the programming effort required to enable robots to perform various tasks while sharing the same space with a human operator. Hence, the robot must be equipped with cognitive and perceptual capabilities that facilitate human-robot interaction. Humans use their various senses, such as vision, smell, and taste, to perform tasks. One sense that plays a significant role in human activity is 'touch' or 'force'. For example, holding a cup of tea or making fine adjustments while inserting a key requires haptic information to achieve the task successfully. In these examples, force and torque data are crucial for the successful completion of the activity. This information also implicitly conveys data about contact force, object stiffness, and more. Hence, a deep understanding of the execution of such events can bridge the gap between humans and robots. This thesis is directed at equipping an industrial robot with the ability to handle force perception and then learn force-based tasks using Learning from Demonstration (LfD). To learn force-based tasks using LfD, it is essential to extract task-relevant features from the force information; knowledge must then be extracted and encoded from those features so that the captured skills can be reproduced in new scenarios. In this thesis, these elements of LfD were achieved using different approaches depending on the demonstrated task, and four robotics problems were addressed within the LfD framework.
The first challenge was to filter out the robot's internal forces (irrelevant signals) using a data-driven approach. The second challenge was the recognition of the Contact State (CS) during assembly tasks; to tackle this, a symbol-based approach was proposed in which the force/torque signals recorded during a demonstrated assembly task were encoded as a sequence of symbols. The third challenge was to learn a human-robot co-manipulation task based on LfD; here, an ensemble machine learning approach was proposed to capture the skill. The last challenge was to learn an assembly task by demonstration in the presence of geometrical variation between parts. For this, a new learning approach based on the Artificial Potential Field (APF) was developed to learn a Peg-in-Hole (PiH) assembly task comprising both non-contact and contact phases. To sum up, this thesis focuses on the use of data-driven approaches to learning force-based tasks in an industrial context. Different machine learning approaches were implemented, developed, and evaluated in different scenarios, and their performance was compared with mathematical modelling-based approaches.
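The APF-based PiH approach is only named in the abstract. As an illustration of the underlying idea, a classic potential-field step combines quadratic attraction toward the goal (the hole) with short-range repulsion from obstacles; all gains and geometry below are invented for the example, not taken from the thesis:

```python
import numpy as np

def apf_step(pos, goal, obstacles, k_att=1.0, k_rep=0.05, rho0=0.03, step=0.01):
    """One gradient-descent step on a classic artificial potential field:
    quadratic attraction toward the goal plus short-range repulsion from
    obstacle points (active only within distance rho0)."""
    pos, goal = np.asarray(pos, float), np.asarray(goal, float)
    force = k_att * (goal - pos)                      # attractive term
    for ob in obstacles:
        d_vec = pos - np.asarray(ob, float)
        d = np.linalg.norm(d_vec)
        if 0 < d < rho0:                              # repulsion near obstacles only
            force += k_rep * (1.0 / d - 1.0 / rho0) / d**3 * d_vec
    return pos + step * force

# Peg tip starts above and to the side of the hole (the goal at the origin).
p = np.array([0.10, 0.0, 0.05])
for _ in range(500):
    p = apf_step(p, goal=[0.0, 0.0, 0.0], obstacles=[[0.05, 0.05, 0.0]])
# p has converged to within a millimetre of the hole
```

In an LfD setting such as the one described, the field parameters would be fitted from the demonstrated trajectories rather than hand-tuned as here.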

    Advances on Mechanics, Design Engineering and Manufacturing III

    This open access book gathers contributions presented at the International Joint Conference on Mechanics, Design Engineering and Advanced Manufacturing (JCM 2020), held as a web conference on June 2–4, 2020. It reports on cutting-edge topics in product design and manufacturing, such as industrial methods for integrated product and process design; innovative design; and computer-aided design. Further topics covered include virtual simulation and reverse engineering; additive manufacturing; product manufacturing; engineering methods in medicine and education; representation techniques; and nautical, aeronautics and aerospace design and modeling. The book is organized into four main parts, reflecting the focus and primary themes of the conference. The contributions presented here not only provide researchers, engineers and experts in a range of industrial engineering subfields with extensive information to support their daily work; they are also intended to stimulate new research directions, advanced applications of the methods discussed and future interdisciplinary collaborations

    Varying Feedback Strategy and Scheduling in Simulator Training: Effects on Learner Perceptions, Initial Learning, and Transfer

    This experimental study investigated the effects of visual feedback on initial learning, perceived self-efficacy, workload, near transfer, far transfer, and perceived realism during a simulator-based training task. Prior studies indicate that providing feedback is critical for schema development (Salmoni, Schmidt, & Walter, 1984; Sterman, 1994). However, its influence has been shown to dissipate and is not directly proportionate to the frequency at which it is given (Wulf, Shea, & Matschiner, 1998). A total of 54 participants completed the study, forming six treatment groups. The independent treatment, visual feedback, was manipulated by scheduling (absolute—every practice trial—or relative—every third trial) and strategy (gradual decrease of visual cues within the interface, gradual increase of visual cues within the interface, or a single consistent cue for each trial). Participants completed twelve practice trials of welding under one of the six feedback manipulations, then twelve practice trials of welding without it. Lastly, participants performed the weld task on actual equipment in a shop area. No treatment showed significant differences among groups with regard to initial learning, retention, near transfer, and far transfer measures. However, statistically significant differences were found during initial learning and retention within each treatment group. Findings support empirical evidence that a variability-of-practice paradigm promotes learning (Lee & Carnahan, 1990; Shea & Morgan, 1979). Learner perceptions of realism suggest that novice learners perceive simulator fidelity as high; however, these perceptions may dissipate as the learner practices. Groups that involved the greatest number of cues at the onset of practice, or that had cues available at every other trial, reported the greatest workload.
All groups reported increases in perceptions of self-efficacy during practice on the simulator, but those perceptions decreased when participants performed the weld task on actual equipment. Findings suggest that the contextual interference of increasing, decreasing, or changing feedback counteracts the guidance effect of feedback found in previous studies.

    An Asynchronous Simulation Framework for Multi-User Interactive Collaboration: Application to Robot-Assisted Surgery

    The field of surgery is continually evolving, as there is always room for improvement in the post-operative health of the patient as well as the comfort of the Operating Room (OR) team. While the success of surgery is contingent upon the skills of the surgeon and the OR team, the use of specialized robots has been shown to improve surgery-related outcomes in some cases. These outcomes are currently measured using a wide variety of metrics that include patient pain and recovery, the surgeon's comfort, the duration of the operation, and the cost of the procedure. Additional research is needed to better understand the optimal criteria for benchmarking surgical performance. Presently, surgeons are trained to perform robot-assisted surgeries using interactive simulators. However, in the absence of well-defined performance standards, these simulators focus primarily on the simulation of the operative scene and not on the complexities associated with the multiple inputs to a real-world surgical procedure. Because interactive simulators are typically designed for specific robots that perform a small number of tasks controlled by a single user, they are inflexible in terms of their portability to different robots and the inclusion of multiple operators (e.g., nurses, medical assistants). Additionally, while most simulators provide high-quality visuals, simplification techniques are often employed to avoid stability issues in physics computation, contact dynamics, and multi-manual interaction. This study addresses the limitations of existing simulators by outlining the specifications required to develop techniques that mimic real-world interactions and collaboration. Moreover, it focuses on the inclusion of distributed control, shared task allocation, and assistive feedback, through machine learning and secondary and tertiary operators, alongside the primary human operator.

    The evaluation of a novel haptic machining VR-based process planning system using an original process planning usability method

    This thesis provides an original piece of work and contribution to knowledge by creating a new process planning system: Haptic Aided Process Planning (HAPP). The system is based on the combination of haptics and virtual reality (VR). HAPP creates a simulative machining environment in which process plans are automatically generated from real-time logging of a user's interaction. Further, through the application of a novel usability test methodology, a deeper study of how this approach compares to conventional process planning was undertaken. An abductive research approach was selected, and an iterative and incremental development methodology was chosen. Three development cycles were undertaken, with evaluation studies carried out at the end of each. Each study (the pre-pilot, pilot, and industrial) identified progressive refinements to both the usability of HAPP and the usability evaluation method itself. HAPP provided process planners with an environment similar to that with which they are already familiar. Visual images were used to represent tools and material, whilst a haptic interface enabled their movement and positioning by an operator in a manner comparable to their native setting. In this way, an intuitive interface was developed that allows users to plan the machining of parts consisting of features that can be machined on a pillar drill, 2½D milling machine, or centre lathe. The planning activities included single or multiple set-ups, fixturing, and sequencing of cutting operations. The logged information was parsed and output to a process plan, including route sheets, operation sheets, tool lists, and costing information, in a human-readable format.
The system evaluation revealed that HAPP, from an expert planner's perspective, is perceived to be 70% more satisfying to use and 66% more efficient in completing process plans, primarily due to reduced cognitive load; it is more effective, producing higher-quality output of information, and is 20% more learnable than a traditional process planning approach.

    The Application of Mixed Reality Within Civil Nuclear Manufacturing and Operational Environments

    This thesis documents the design and application of Mixed Reality (MR) within a nuclear manufacturing cell through the creation of a Digitally Assisted Assembly Cell (DAAC). The DAAC is a proof-of-concept system combining full-body tracking within a room-sized environment and a bi-directional feedback mechanism to allow communication between users within the Virtual Environment (VE) and a manufacturing cell. This allows for training, remote assistance, delivery of work instructions, and data capture within a manufacturing cell. The research underpinning the DAAC encompasses four main areas: the nuclear industry; Virtual Reality (VR) and MR technology; MR within manufacturing; and the 4th Industrial Revolution (IR4.0). Using an array of Kinect sensors, the DAAC was designed to capture user movements within a real manufacturing cell, which can be transferred in real time to a VE, creating a digital twin of the real cell. Users can interact with each other via digital assets and laser pointers projected into the cell, accompanied by a built-in Voice over Internet Protocol (VoIP) system. This allows implicit knowledge to be captured from operators within the real manufacturing cell and transferred to future operators. Additionally, users can connect to the VE from anywhere in the world, so experts are able to communicate with users in the real manufacturing cell and assist with their training. The human tracking data fills an identified gap in the IR4.0 network of Cyber Physical Systems (CPS) and could allow for future optimisations within manufacturing systems, Material Requirements Planning (MRP), and Enterprise Resource Planning (ERP). This project is a demonstration of how MR could prove valuable within nuclear manufacture. The DAAC is designed to be low cost, in the hope that this will allow its use by groups who have traditionally been priced out of MR technology.
This could help Small to Medium Enterprises (SMEs) close the double digital divide between themselves and larger global corporations. For larger corporations, it offers the benefit of being low cost and is consequently easier to roll out across the value chain. Skills developed in one area can also be transferred to others across the internet, as users in one manufacturing cell can watch and communicate with those in another. However, as a proof of concept, the DAAC is at Technology Readiness Level (TRL) five or six, and, prior to its wider application, further testing is required to assess and improve the technology. The work was patented in the UK (Reddish et al., 2017a), the US (Reddish et al., 2017b), and China (Reddish et al., 2017c). The patents are owned by Rolls-Royce and cover the methods of bi-directional feedback by which users can interact from the digital to the real and vice versa. Stephen Reddish, Mixed Mode Realities in Nuclear Manufacturing. Keywords: Mixed Mode Reality, Virtual Reality, Augmented Reality, Nuclear, Manufacture, Digital Twin, Cyber Physical System
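The abstract does not detail how joint data from the Kinect array are combined into the digital twin. One simple possibility is transforming each sensor's joint estimate into a shared cell frame via per-sensor extrinsic calibration and averaging the results; everything below (function names, calibration values, the averaging itself) is an illustrative assumption, not the DAAC's actual pipeline:

```python
import numpy as np

def to_cell_frame(joint_cam, R, t):
    """Transform a tracked joint position from one Kinect's camera frame
    into the shared manufacturing-cell frame, using that sensor's
    extrinsic calibration (rotation matrix R, translation vector t)."""
    return R @ np.asarray(joint_cam, float) + np.asarray(t, float)

def fuse(joints_per_sensor, calibrations):
    """Average the same joint as seen by several sensors: a simple
    stand-in for whatever fusion the real system performs."""
    pts = [to_cell_frame(j, *cal) for j, cal in zip(joints_per_sensor, calibrations)]
    return np.mean(pts, axis=0)

# Two sensors observing the same hand joint from different positions.
I = np.eye(3)
cal_a = (I, [0.0, 0.0, 0.0])    # sensor A at the cell origin
cal_b = (I, [-2.0, 0.0, 0.0])   # sensor B two metres along +x
hand = fuse([[1.0, 0.5, 1.2], [3.0, 0.5, 1.2]], [cal_a, cal_b])
# both views agree: the joint is at (1.0, 0.5, 1.2) in the cell frame
```

Averaging in a common frame also degrades gracefully when one sensor loses line of sight, which matters in a cluttered manufacturing cell.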

    Modelling the weld seam form with artificial neural network and voxelization methods (Kaynak dikiş formunun yapay sinir ağı ve vokselleme yöntemleriyle modellenmesi)

    In this study, a real-time, three-dimensional weld seam form was modeled for a low-cost virtual welding simulator developed for training welder candidates. Through this simulator, candidates can learn welding techniques in a safe environment without risk of work accidents and can improve their skills by practising more than would normally be possible in a short time. The developed simulator uses dedicated virtual reality devices, such as a Flock of Birds position and orientation sensor and a head-mounted display. The simulation determines the weld bead shape and the amount of penetration from data provided by the Flock of Birds sensor monitoring the position of the torch. When forming the weld bead shape, a parabola was used as the basic bead unit because of the similarity of the weld bead cross-section to a parabola. The values of the height, width, and penetration parameters of the basic weld bead shape were obtained from weld seam experiments in the literature. During the virtual welding process, the weld bead shape parameters are calculated at fixed time intervals using a feed-forward back-propagation artificial neural network. TrainLM (Levenberg-Marquardt) was used as the training function for the network design, and the LogSig() transfer function was found to give the best results.
The number of hidden layers and the number of processing elements (neurons) in each hidden layer were determined by trial and error. In the same time interval, the voxel map and the corresponding hash-based octree data structure are generated in real time. Using the voxelized data, the triangular isosurfaces of the weld bead are reconstructed with the marching cubes algorithm, giving a more realistic weld seam appearance. This image and the virtual scene are continuously sent to the head-mounted display to maintain the sense of reality in the virtual environment. A multi-threaded programming technique is also used to shorten the processing time of the voxelization and isosurface extraction steps in high-resolution virtual scenes. Isosurface extraction times for different numbers of threads are also reported.
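The network described (LogSig hidden units trained with Levenberg-Marquardt) was presumably built with MATLAB's neural network tooling. A language-neutral sketch of the forward pass alone is shown below; the layer sizes, input meanings, and random placeholder weights are assumptions for illustration, since the trained weights are not given in the abstract:

```python
import numpy as np

def logsig(x):
    """Logistic sigmoid, the LogSig transfer function named in the abstract."""
    return 1.0 / (1.0 + np.exp(-x))

def bead_forward(x, W1, b1, W2, b2):
    """Forward pass of a one-hidden-layer feed-forward network mapping torch
    inputs (e.g. speed, distance, angle) to bead parameters (height, width,
    penetration). In the real system the weights would come from
    Levenberg-Marquardt training; here they are random placeholders."""
    h = logsig(W1 @ x + b1)     # LogSig hidden layer
    return W2 @ h + b2          # linear output layer

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 3)), rng.normal(size=8)   # 3 inputs -> 8 hidden units
W2, b2 = rng.normal(size=(3, 8)), rng.normal(size=3)   # 8 hidden -> 3 outputs
y = bead_forward(np.array([0.5, 0.2, 0.1]), W1, b1, W2, b2)
# y holds the predicted (height, width, penetration) triple
```

Evaluating such a small network at each simulation tick is cheap enough to feed the voxel map and marching cubes reconstruction in real time.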
