
    Teleoperation Methods for High-Risk, High-Latency Environments

    In-Space Servicing, Assembly, and Manufacturing (ISAM) can enable larger-scale and longer-lived infrastructure projects in space, with interest ranging from commercial entities to the US government. Servicing, in particular, has the potential to vastly increase the usable lifetimes of satellites. However, the vast majority of spacecraft in low Earth orbit today were not designed to be serviced on-orbit. As such, several of the manipulations required during servicing cannot easily be automated and instead require ground-based teleoperation. Ground-based teleoperation of on-orbit robots brings its own challenges: high-latency communications, with telemetry delays of several seconds, and difficulty visualizing the remote environment due to limited camera views. We explore teleoperation methods to alleviate these difficulties, increase task success, and reduce operator load. First, we investigate a model-based teleoperation interface intended to provide the benefits of direct teleoperation even in the presence of time delay. We evaluate the model-based teleoperation method with professional robot operators, then use feedback from that study to inform the design of a visual planning tool for this task, Interactive Planning and Supervised Execution (IPSE). We describe and evaluate the IPSE system and two interfaces: a 2D interface using a traditional mouse and keyboard, and a 3D interface using an Intuitive Surgical da Vinci master console. We then describe and evaluate an alternative 3D interface using a Meta Quest head-mounted display. Finally, we describe an extension of IPSE to allow human-in-the-loop planning for a redundant robot. Overall, we find that IPSE improves task success rate and decreases operator workload compared to a conventional teleoperation interface.
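
    As a purely illustrative aid (not part of the thesis), the sketch below shows the general plan-then-supervise pattern that motivates an interface like IPSE: commands are composed and checked against a local model, sent as a batch, and then confirmed through telemetry that arrives only after a multi-second round-trip delay. All names, data structures, and timing values here are hypothetical.

        import collections
        import dataclasses

        @dataclasses.dataclass
        class Waypoint:
            """One Cartesian waypoint in an operator-reviewed plan (hypothetical type)."""
            x: float
            y: float
            z: float

        def plan_on_local_model(goal):
            """Stand-in for composing and checking a plan against a local model of the
            remote workspace; here we simply interpolate straight toward the goal."""
            return [Waypoint(goal[0] * s, goal[1] * s, goal[2] * s)
                    for s in (0.25, 0.5, 0.75, 1.0)]

        def supervised_execution(plan, round_trip_delay_s=4.0, tick_s=1.0):
            """Send the reviewed plan once, then monitor telemetry that lags by several
            seconds; the operator supervises rather than closing the loop through the delay."""
            pending = collections.deque()
            for i, wp in enumerate(plan):
                executed_at = i * tick_s                    # remote robot runs one step per tick
                pending.append((executed_at + round_trip_delay_s, wp))
            clock = 0.0
            while pending:
                arrival, wp = pending[0]
                if clock >= arrival:
                    pending.popleft()
                    print(f"t={clock:4.1f}s  telemetry confirms ({wp.x:.2f}, {wp.y:.2f}, {wp.z:.2f})")
                clock += tick_s

        if __name__ == "__main__":
            supervised_execution(plan_on_local_model((0.4, 0.1, 0.3)))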

    Shared-Control Teleoperation Paradigms on a Soft Growing Robot Manipulator

    Semi-autonomous telerobotic systems allow both humans and robots to exploit their strengths, while enabling personalized execution of a task. However, for new soft robots with degrees of freedom dissimilar to those of human operators, it is unknown how control of a task should be divided between the human and the robot. This work presents a set of interaction paradigms between a human and a soft growing robot manipulator, and demonstrates them in both real and simulated scenarios. The robot can grow and retract by eversion and inversion of its tubular body, a property we exploit to implement the interaction paradigms. We implemented and tested six different paradigms of human-robot interaction, beginning with full teleoperation and gradually adding automation to various aspects of the task execution. All paradigms were demonstrated by two expert and two naive operators. Results show that humans and the soft robot manipulator can split control along degrees of freedom while acting simultaneously. In the simple pick-and-place task studied in this work, performance improves as control is gradually handed to the robot, because the robot can correct certain human errors. However, human engagement and enjoyment may be maximized when the task is at least partially shared. Finally, when the human operator is assisted by haptic feedback based on soft robot position errors, we observed that the improvement in performance is highly dependent on the expertise of the human operator.
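
    To make the idea of splitting control along degrees of freedom concrete, here is a minimal, hypothetical sketch (not the paper's implementation): the operator's command and the autonomous controller's command are merged with a per-axis mask, and a haptic cue is derived from the soft robot's position error. Gains, axis assignments, and saturation limits are placeholders.

        import numpy as np

        def shared_control_step(u_human, u_robot, human_axes):
            """Split control along degrees of freedom: axes flagged in `human_axes`
            take the human command, the remaining axes take the autonomous command."""
            return np.where(human_axes, u_human, u_robot)

        def haptic_feedback(position_error, k=5.0, max_force=3.0):
            """Feedback force proportional to the manipulator tip's position error,
            saturated so the cue stays gentle (placeholder gain and limit)."""
            return np.clip(-k * np.asarray(position_error), -max_force, max_force)

        if __name__ == "__main__":
            u_human = np.array([0.2, 0.0, 0.1])    # e.g. operator steering commands
            u_robot = np.array([0.0, 0.5, 0.0])    # e.g. automated growth/retraction rate
            mask = np.array([True, False, True])   # human steers, robot manages body length
            print("blended command:", shared_control_step(u_human, u_robot, mask))
            print("haptic cue [N]: ", haptic_feedback([0.03, -0.01, 0.0]))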

    The dynamics and control of large space structures with distributed actuation

    Future large space structures are likely to be constructed at much greater length-scales, and lower areal mass densities, than have been achieved to date. This could be enabled by ongoing developments in on-orbit manufacturing, whereby large structures are 3D-printed in space from raw feedstock materials. This thesis proposes and analyses a number of attitude control strategies which could be adopted for this next generation of ultra-lightweight, large space structures. Each of the strategies proposed makes use of distributed actuation, which is demonstrated early in the thesis to reduce structural deformations during attitude manoeuvres. All of the proposed strategies are considered to be particularly suitable for structures which are 3D-printed on-orbit, due to the relative simplicity of the actuators and the ease with which the actuator placement or construction could be integrated with the on-orbit fabrication of the structure itself. The first strategy proposed is the use of distributed arrays of magnetorquer rods. First, distributed torques are shown to effectively rotate highly flexible structures. This is compared with torques applied to the centre-of-mass of the structure, which cause large surface deformations and can fail to enact a rotation. This is demonstrated using a spring-mass model of a planar structure with embedded actuators. A torque distribution algorithm is then developed to control an individually addressable array of actuators. Attitude control simulations are performed, using the array to control a large space structure, again modelled as a spring-mass system. The attitude control system is demonstrated to effectively detumble a representative 75×75 m flexible structure, and to perform slew manoeuvres, in the presence of both gravity-gradient torques and a realistic magnetic field model. The development of a Distributed Magnetorquer Demonstration Platform is then presented: a laboratory-scale implementation of the distributed magnetorquer array concept. The platform consists of 48 addressable magnetorquers, arranged with two perpendicular torquers at the nodes of a 5×5 grid. The control algorithms proposed previously in the thesis are implemented and tested on this hardware, demonstrating the practical feasibility of the concept. Results of experiments using a spherical air bearing and Helmholtz cage are presented, demonstrating rest-to-rest slew manoeuvres and detumbling around a single axis using the developed algorithms. The next attitude control strategy presented is the use of embedded current loops: conductive pathways which can be integrated with a spacecraft support structure and used to generate control torques through interaction with the Earth's magnetic field. Length-scaling laws are derived by determining what fraction of a planar spacecraft's mass would need to be allocated to the conductive current loops in order to produce a torque at least as large as the gravity-gradient torque. Simulations are then performed of a flexible truss structure, modelled as a spring-mass system, for a range of structural flexibilities and a variety of current loop geometries. These simulations demonstrate rotation of the structure via the electromagnetic force on the current-carrying elements, and are also used to characterise the structural deformations caused by the various current loop geometries. An attitude control simulation is then performed, demonstrating a 90° slew manoeuvre of a 250×250 m flexible structure through the use of three orthogonal sets of current loops embedded within the spacecraft. The final concept investigated in this thesis is a self-reconfiguring OrigamiSat, where reconfiguration is triggered by changes in the local surface optical properties of the origami structure, harnessing the solar radiation pressure-induced acceleration. OrigamiSats are origami spacecraft with reflective panels which, when flat, operate as a conventional solar sail. Shape reconfiguration, i.e. "folding" of the origami design, allows the OrigamiSat to change operational modes, performing different functions as per mission requirements. For example, a flat OrigamiSat could be reconfigured into the shape of a parabolic reflector, before returning to the flat configuration when required to again operate as a solar sail, providing propellant-free propulsion. Shape reconfiguration, or folding, of OrigamiSats through the use of surface reflectivity modulation is investigated in this thesis. First, a simplified folding-facet model is used to perform a length-scaling analysis, and then a 2D multibody dynamics simulation is used to demonstrate the principle of solar radiation pressure-induced folding. A 3D multibody dynamics simulation is then developed and used to demonstrate shape reconfiguration for different origami folding patterns. Here, the attitude dynamics and shape reconfiguration of OrigamiSats are found to be highly coupled, and thus present a challenge from a control perspective. The problem of integrating attitude and shape control of a Miura-fold pattern OrigamiSat through the use of variable reflectivity is then investigated, and a control algorithm is developed which uses surface reflectivity modulation of the OrigamiSat facets to enact shape reconfiguration and attitude manoeuvres simultaneously.
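
    For readers unfamiliar with magnetic attitude control, the sketch below illustrates the standard B-dot detumbling law and a deliberately naive equal split of the commanded dipole across an addressable torquer array; it is not the torque-distribution algorithm developed in the thesis, and all gains, saturation limits, and field values are placeholders.

        import numpy as np

        def bdot_dipole(b_now, b_prev, dt, k=1e4):
            """Classic B-dot law: command a magnetic dipole opposing the measured rate
            of change of the body-frame field (placeholder gain)."""
            return -k * (b_now - b_prev) / dt

        def distribute_dipole(m_total, n_torquers, m_max=0.2):
            """Naive allocation: split the commanded dipole equally across the array
            and saturate each rod (a stand-in for the thesis's distribution algorithm)."""
            return np.clip(m_total / n_torquers, -m_max, m_max)

        if __name__ == "__main__":
            dt = 1.0                                          # s between magnetometer samples
            b_prev = np.array([1.2e-5, -2.0e-5, 3.5e-5])      # body-frame field, tesla
            b_now = np.array([1.4e-5, -1.9e-5, 3.3e-5])
            m_per_rod = distribute_dipole(bdot_dipole(b_now, b_prev, dt), n_torquers=48)
            torque_per_rod = np.cross(m_per_rod, b_now)       # tau = m x B
            print("per-rod dipole [A m^2]:", m_per_rod)
            print("per-rod torque [N m]:  ", torque_per_rod)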

    Evaluating EEG–EMG Fusion-Based Classification as a Method for Improving Control of Wearable Robotic Devices for Upper-Limb Rehabilitation

    Musculoskeletal disorders are the biggest cause of disability worldwide, and wearable mechatronic rehabilitation devices have been proposed for treatment. However, before widespread adoption, improvements in user control and system adaptability are required. User intention should be detected intuitively, and user-induced changes in system dynamics should be unobtrusively identified and corrected. Developments often focus on model-dependent nonlinear control theory, which is challenging to implement for wearable devices. One alternative is to incorporate bioelectrical signal-based machine learning into the system, allowing simpler controller designs to be augmented by supplemental brain (electroencephalography/EEG) and muscle (electromyography/EMG) information. To better extract user intention, sensor fusion techniques have been proposed to combine EEG and EMG; however, further development is required to extend the capabilities of EEG–EMG fusion beyond basic motion classification. To this end, the goals of this thesis were to investigate expanded methods of EEG–EMG fusion and to develop a novel control system based on the incorporation of EEG–EMG fusion classifiers. A dataset of EEG and EMG signals was collected during dynamic elbow flexion–extension motions and used to develop EEG–EMG fusion models to classify task weight as well as motion intention. A variety of fusion methods were investigated, such as Weighted Average decision-level fusion (83.01 ± 6.04% accuracy) and Convolutional Neural Network-based input-level fusion (81.57 ± 7.11% accuracy), demonstrating that EEG–EMG fusion can classify more indirect tasks. A novel control system, referred to as a Task Weight Selective Controller (TWSC), was implemented using a Gain Scheduling-based approach, dictated by external load estimations from an EEG–EMG fusion classifier. To improve system stability, classifier prediction debouncing was also proposed to reduce misclassifications through filtering. Performance of the TWSC was evaluated using a developed upper-limb brace simulator. Due to simulator limitations, no significant difference in error was observed between the TWSC and PID control. However, the results did demonstrate the feasibility of prediction debouncing, showing that it provided smoother device motion. Continued development of the TWSC and EEG–EMG fusion techniques will ultimately result in wearable devices that are able to adapt to changing loads more effectively, serving to improve the user experience during operation.
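
    The sketch below illustrates the two ideas most central to the control system described above: decision-level fusion as a weighted average of class probabilities, and debouncing of the classifier output before it reaches the gain scheduler. It is a hypothetical illustration; the weights, hold length, and class labels are placeholders rather than values from the thesis.

        import collections
        import numpy as np

        def weighted_average_fusion(p_eeg, p_emg, w_eeg=0.4, w_emg=0.6):
            """Decision-level fusion: combine the EEG and EMG classifiers' class
            probabilities with fixed weights and return the winning class index."""
            fused = w_eeg * np.asarray(p_eeg) + w_emg * np.asarray(p_emg)
            return int(np.argmax(fused))

        class DebouncedClassifier:
            """Accept a new prediction only after it has been observed for `hold`
            consecutive frames, suppressing isolated misclassifications."""
            def __init__(self, hold=3):
                self.window = collections.deque(maxlen=hold)
                self.current = None

            def update(self, prediction):
                self.window.append(prediction)
                if len(self.window) == self.window.maxlen and len(set(self.window)) == 1:
                    self.current = prediction
                return self.current

        if __name__ == "__main__":
            # Fuse one frame of class probabilities (e.g. light vs. heavy external load).
            print("fused class:", weighted_average_fusion([0.7, 0.3], [0.2, 0.8]))

            # Debounce a noisy prediction stream before it drives the gain-scheduled controller.
            debouncer = DebouncedClassifier(hold=3)
            for raw in [0, 0, 1, 0, 1, 1, 1, 1]:
                print(debouncer.update(raw), end=" ")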

    Undergraduate and Graduate Course Descriptions, 2023 Spring

    Wright State University undergraduate and graduate course descriptions from Spring 2023.

    Surgical Subtask Automation for Intraluminal Procedures using Deep Reinforcement Learning

    Intraluminal procedures have opened up a new sub-field of minimally invasive surgery that uses flexible instruments to navigate through the complex luminal structures of the body, resulting in reduced invasiveness and improved patient benefits. One of the major challenges in this field is the accurate and precise control of the instrument inside the human body. Robotics has emerged as a promising solution to this problem. However, to achieve successful robotic intraluminal interventions, control of the instrument needs to be automated to a large extent. The thesis first examines the state of the art in intraluminal surgical robotics and identifies the key challenges in this field, which include the need for safe and effective tool manipulation and the ability to adapt to unexpected changes in the luminal environment. To address these challenges, the thesis proposes several levels of autonomy that enable the robotic system to perform individual subtasks autonomously, while still allowing the surgeon to retain overall control of the procedure. The approach facilitates the development of specialized algorithms, such as Deep Reinforcement Learning (DRL), for subtasks like navigation and tissue manipulation to produce robust surgical gestures. Additionally, the thesis proposes a safety framework that provides formal guarantees to prevent risky actions. The presented approaches are evaluated through a series of experiments using simulation and robotic platforms. The experiments demonstrate that subtask automation can improve the accuracy and efficiency of tool positioning and tissue manipulation, while also reducing the cognitive load on the surgeon. The results of this research have the potential to improve the reliability and safety of intraluminal surgical interventions, ultimately leading to better outcomes for patients and surgeons.
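
    As a hypothetical illustration of the autonomy-with-safety pattern described above (and not the formal safety framework or trained DRL agents from the thesis), the sketch below wraps a placeholder policy's action in a simple filter that limits commanded velocity and blocks further insertion when the estimated clearance to the lumen wall is too small.

        import numpy as np

        def policy(observation):
            """Placeholder for a trained DRL policy (e.g. the output of an actor network);
            here it just proposes a small random tip-velocity command."""
            rng = np.random.default_rng(0)
            return rng.uniform(-1.0, 1.0, size=3)

        def safety_filter(action, wall_clearance_mm, v_max=0.5, min_clearance_mm=2.0):
            """Illustrative safety layer: clip the commanded velocity and forbid further
            insertion when the estimated clearance to the lumen wall is below a threshold."""
            action = np.clip(action, -v_max, v_max)
            if wall_clearance_mm < min_clearance_mm:
                action[0] = min(action[0], 0.0)   # axis 0 assumed to be insertion depth
            return action

        if __name__ == "__main__":
            raw = policy(observation=None)
            safe = safety_filter(raw, wall_clearance_mm=1.5)
            print("raw action: ", raw)
            print("safe action:", safe)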

    Path and Motion Planning for Autonomous Mobile 3D Printing

    Autonomous robotic construction was envisioned as early as the '90s, and yet construction sites today look much like those of half a century ago. Meanwhile, highly automated and efficient fabrication methods like Additive Manufacturing, or 3D Printing, have seen great success in conventional production. However, existing efforts to transfer printing technology to construction applications mainly rely on manufacturing-like machines and fail to utilise the capabilities of modern robotics. This thesis considers using Mobile Manipulator robots to perform large-scale Additive Manufacturing tasks. Comprised of an articulated arm and a mobile base, Mobile Manipulators are unique in their simultaneous mobility and agility, which enables printing-in-motion, or Mobile 3D Printing. This is a 3D printing modality in which a robot deposits material along larger-than-self trajectories while in motion. Despite profound potential advantages over existing static, manufacturing-like large-scale printers, Mobile 3D Printing is underexplored. Therefore, this thesis tackles Mobile 3D Printing-specific challenges and proposes path and motion planning methodologies that allow this printing modality to be realised. The work details the development of Task-Consistent Path Planning, which solves the problem of finding a valid robot-base path needed to print larger-than-self trajectories. A motion planning and control strategy is then proposed, utilising the robot-base paths found to inform an optimisation-based whole-body motion controller. Several Mobile 3D Printing robot prototypes are built throughout this work, and the overall path and motion planning strategy proposed is holistically evaluated in a series of large-scale 3D printing experiments.
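
    The toy sketch below conveys what "task-consistent" means in this context: the robot-base path must keep every waypoint of the larger-than-self print trajectory within the arm's reach. It is a deliberately simplified, hypothetical test (a fixed lateral offset and a planar reach check), not the planner developed in the thesis.

        import numpy as np

        ARM_REACH = 0.9   # metres; placeholder planar reach of the manipulator

        def reachable(base_xy, waypoint_xy, reach=ARM_REACH):
            """A print waypoint is compatible with a base position if it lies within
            the arm's planar reach (ignoring orientation and kinematic limits)."""
            return np.linalg.norm(np.asarray(waypoint_xy) - np.asarray(base_xy)) <= reach

        def base_path_for_print_path(print_path, lateral_offset=0.6):
            """Toy stand-in for task-consistent path planning: follow the print path at a
            fixed lateral offset, then verify every waypoint remains reachable."""
            base_path = [(x, y - lateral_offset) for x, y in print_path]
            feasible = all(reachable(b, w) for b, w in zip(base_path, print_path))
            return base_path, feasible

        if __name__ == "__main__":
            print_path = [(0.1 * i, 1.0) for i in range(31)]    # a 3 m straight bead
            base_path, feasible = base_path_for_print_path(print_path)
            print("base path feasible:", feasible)              # True: 0.6 m offset < 0.9 m reach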

    Development and Validation of Mechatronic Systems for Image-Guided Needle Interventions and Point-of-Care Breast Cancer Screening with Ultrasound (2D and 3D) and Positron Emission Mammography

    Successful intervention in breast cancer relies on effective early detection and definitive diagnosis. While conventional screening mammography has substantially reduced breast cancer-related mortality, substantial challenges persist for women with dense breasts. Additionally, complex interrelated risk factors and healthcare disparities contribute to breast cancer-related inequities, which restrict accessibility, impose cost constraints, and limit inclusive access to high-quality healthcare. These limitations predominantly stem from the inadequate sensitivity and clinical utility of currently available approaches in increased-risk populations, including those with dense breasts and underserved and vulnerable populations. This PhD dissertation describes the development and validation of alternative, cost-effective, robust, and high-resolution systems for point-of-care (POC) breast cancer screening and image-guided needle interventions. Specifically, 2D and 3D ultrasound (US) and positron emission mammography (PEM) were employed to improve detection, independent of breast density, in conjunction with mechatronic and automated approaches for accurate image acquisition and a precise interventional workflow. First, a mechatronic guidance system for US-guided biopsy under high-resolution PEM localization was developed to improve spatial sampling of early-stage breast cancers. Validation and phantom studies showed accurate needle positioning and 3D spatial sampling under simulated PEM localization. Subsequently, a whole-breast spatially-tracked 3DUS system for point-of-care screening was developed, optimized, and validated within a clinically relevant workspace and in healthy volunteer studies. To improve robust image acquisition and adaptability to diverse patient populations, an alternative, cost-effective, portable, and patient-dedicated 3D automated breast (AB) US system for point-of-care screening was developed. Validation showed accurate geometric reconstruction, a feasible clinical workflow, and proof-of-concept utility across healthy volunteers and acquisition conditions. Lastly, an orthogonal acquisition and 3D complementary breast (CB) US generation approach was described and experimentally validated to improve spatial resolution uniformity by recovering poor out-of-plane resolution. The systems developed and described throughout this dissertation show promise as alternative, cost-effective, robust, and high-resolution approaches for improving early detection and definitive diagnosis. Consequently, these contributions may advance equity in breast cancer care and improve outcomes for increased-risk populations and in limited-resource settings.

    From visuomotor control to latent space planning for robot manipulation

    Deep visuomotor control is emerging as an active research area for robot manipulation. Recent advances in learning sensory and motor systems in an end-to-end manner have achieved remarkable performance across a range of complex tasks. Nevertheless, a few limitations restrict visuomotor control from being more widely adopted as the de facto choice when facing a manipulation task on a real robotic platform. First, imitation learning-based visuomotor control approaches tend to suffer from an inability to recover from out-of-distribution states caused by compounding errors. Second, the lack of versatility in task definition limits skill generalisability. Finally, the training data acquisition process and domain transfer are often impractical. In this thesis, individual solutions are proposed to address each of these issues. In the first part, we find policy uncertainty to be an effective indicator of potential failure cases, in which the robot is stuck in out-of-distribution states. On this basis, we introduce a novel uncertainty-based approach to detect potential failure cases and a recovery strategy based on action-conditioned uncertainty predictions. We then propose to incorporate visual dynamics approximation into our model architecture to capture the motion of the robot arm instead of the static scene background, making it possible to learn versatile skill primitives. In the second part, taking inspiration from recent progress in latent space planning, we propose a gradient-based optimisation method operating within the latent space of a deep generative model for motion planning. Our approach bypasses the traditional computational challenges encountered by established planning algorithms, and has the capability to specify novel constraints easily and handle multiple constraints simultaneously. Moreover, the training data comes from simple random motor-babbling of kinematically feasible robot states. Our real-world experiments further illustrate that our latent space planning approach can handle both open- and closed-loop planning in challenging environments such as heavily cluttered or dynamic scenes. This leads to the first, to our knowledge, closed-loop motion planning algorithm that can incorporate novel custom constraints, and lays the foundation for more complex manipulation tasks.
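
    The sketch below is a toy, hypothetical illustration of gradient-based planning in a latent space: a low-dimensional latent vector is decoded into a trajectory, a cost combining goal-reaching and a constraint penalty is evaluated on that trajectory, and the latent vector is optimised by gradient descent. A real implementation would backpropagate through a learned deep generative decoder; here the decoder, cost terms, and finite-difference gradients are stand-ins.

        import numpy as np

        def decode(z, horizon=20):
            """Toy stand-in for a generative model's decoder: maps a latent vector
            to a 2D trajectory (a straight segment plus a sinusoidal bend)."""
            t = np.linspace(0.0, 1.0, horizon)[:, None]
            return z[None, :2] * t + z[None, 2:4] * np.sin(np.pi * t)

        def cost(z, start, goal, obstacle, clearance=0.3):
            """Goal-reaching cost plus an obstacle-clearance penalty, both expressed
            on the decoded trajectory rather than directly on robot states."""
            traj = decode(z)
            c = np.sum((traj[0] - start) ** 2) + np.sum((traj[-1] - goal) ** 2)
            dists = np.linalg.norm(traj - obstacle, axis=1)
            c += np.sum(np.maximum(0.0, clearance - dists) ** 2)
            return c

        def plan_in_latent_space(start, goal, obstacle, steps=300, lr=0.1, eps=1e-4):
            """Gradient descent on the latent vector, using central finite differences
            in place of backpropagation through a learned decoder."""
            z = np.zeros(4)
            for _ in range(steps):
                grad = np.zeros_like(z)
                for i in range(z.size):
                    dz = np.zeros_like(z); dz[i] = eps
                    grad[i] = (cost(z + dz, start, goal, obstacle) -
                               cost(z - dz, start, goal, obstacle)) / (2 * eps)
                z -= lr * grad
            return decode(z)

        if __name__ == "__main__":
            traj = plan_in_latent_space(start=np.array([0.0, 0.0]),
                                        goal=np.array([1.0, 1.0]),
                                        obstacle=np.array([0.5, 0.5]))
            print("start:", traj[0], "end:", traj[-1])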

    Advances in flexible manipulation through the application of AI-based techniques

    Picking up and placing objects are two basic operations in almost any robotic application. Today, the industrial robots used for pick-and-place applications are characterised by their efficiency at simple, repetitive tasks. However, these systems are very rigid, operate in fully controlled environments, and are very costly to reprogram for other tasks. There are currently tasks in a range of industrial settings (for example, order preparation in a logistics environment) that require flexible manipulation of objects and that, by their nature, have not yet been automated. The main bottlenecks hindering automation are the diversity of the objects to be manipulated, the limited dexterity of robots, and the uncertainty of uncontrolled, dynamic environments. Artificial intelligence (AI) plays an increasingly important role in robotics, as it provides robots with the intelligence needed to carry out complex tasks. Moreover, AI makes it possible to learn complex behaviours from real experience, significantly reducing the cost of programming. Given the limitations of current robotic manipulation systems, the main goal of this work is to increase the flexibility of manipulation systems using AI-based algorithms, providing them with the capabilities needed to adapt to dynamic environments without the need for reprogramming.