
    FlightGoggles: A Modular Framework for Photorealistic Camera, Exteroceptive Sensor, and Dynamics Simulation

    FlightGoggles is a photorealistic sensor simulator for perception-driven robotic vehicles. The key contributions of FlightGoggles are twofold. First, FlightGoggles provides photorealistic exteroceptive sensor simulation using graphics assets generated with photogrammetry. Second, it provides the ability to combine (i) synthetic exteroceptive measurements generated in silico in real time and (ii) vehicle dynamics and proprioceptive measurements generated in motio by vehicle(s) in a motion-capture facility. FlightGoggles is capable of simulating a virtual-reality environment around autonomous vehicle(s). While a vehicle is in flight in the FlightGoggles virtual-reality environment, exteroceptive sensors are rendered synthetically in real time, while all complex extrinsic dynamics are generated organically through the natural interactions of the vehicle. The FlightGoggles framework allows researchers to accelerate development by circumventing the need to estimate complex and hard-to-model interactions such as aerodynamics, motor mechanics, battery electrochemistry, and the behavior of other agents. The ability to perform vehicle-in-the-loop experiments with photorealistic exteroceptive sensor simulation facilitates novel research directions involving, e.g., fast and agile autonomous flight in obstacle-rich environments, safe human interaction, and flexible sensor selection. FlightGoggles has been utilized as the main test for selecting the nine teams that will advance in the AlphaPilot autonomous drone racing challenge. We survey approaches and results from the top AlphaPilot teams, which may be of independent interest.
    Comment: Initial version appeared at IROS 2019. Supplementary material can be found at https://flightgoggles.mit.edu. The revision includes a description of new FlightGoggles features, such as a photogrammetric model of the MIT Stata Center, new rendering settings, and a Python API.
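
    The vehicle-in-the-loop idea above boils down to a simple real-time loop: the vehicle's pose and dynamics come from the real platform via motion capture, while camera imagery is rendered synthetically at that pose and fed to the autonomy stack. The sketch below illustrates that loop in Python; all function names are hypothetical placeholders and are not the actual FlightGoggles Python API.

```python
import time

# Hypothetical stand-ins for the real interfaces; the actual FlightGoggles
# Python API differs. A pose is an (x, y, z, qx, qy, qz, qw) tuple.

def get_mocap_pose():
    """Placeholder: latest vehicle pose from the motion-capture system."""
    return (0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0)

def render_camera(pose):
    """Placeholder: photorealistic RGB frame rendered at the given pose."""
    return b"\x00" * (640 * 480 * 3)  # dummy image buffer

def run_perception(frame):
    """Placeholder: perception stack consuming the synthetic image."""
    return {"obstacles": []}

def vehicle_in_the_loop(rate_hz=60.0, steps=10):
    """Dynamics come from the real vehicle (via motion capture) while
    exteroception is rendered synthetically, in one real-time loop."""
    period_s = 1.0 / rate_hz
    for _ in range(steps):
        pose = get_mocap_pose()      # proprioception and dynamics: real vehicle
        frame = render_camera(pose)  # exteroception: rendered in silico
        run_perception(frame)        # downstream autonomy stack
        time.sleep(period_s)

if __name__ == "__main__":
    vehicle_in_the_loop()
```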

    Doctor of Philosophy

    In this dissertation, we present methods for intuitive telemanipulation of manipulators that use piezoelectric stick-slip actuators (PSSAs). Commercial micro/nano-manipulators, which utilize PSSAs to achieve high precision over a large workspace, are typically controlled by a human operator at the joint level, leading to unintuitive and time-consuming telemanipulation. Prior work has considered the use of computer-vision feedback to close a control loop for improved performance, but computer-vision feedback is not a viable option for many end users. We discuss how open-loop models of the micro/nano-manipulator can be used to achieve desired end-effector movements, and we explain the process of obtaining such open-loop models. We propose a rate-control telemanipulation method that utilizes the obtained model, and we experimentally quantify the effectiveness of the method using a common commercial manipulator (the Kleindiek MM3A).

    The utility of open-loop control methods for PSSAs with a human in the loop depends directly on the accuracy of the open-loop models of the manipulator. Prior research has shown that modeling of piezoelectric actuators is not a trivial task, as they are known to suffer from nonlinearities that degrade their performance. We study the effect of static (non-inertial) loads on a prismatic and a rotary PSSA, and obtain a model relating the step size of the actuator to the load. The actuator-specific parameters of the model are calibrated by taking measurements in specific configurations of the manipulator. Results comparing the obtained model to experimental data are presented.

    PSSAs have properties that make them desirable over traditional DC-motor actuators for use in retinal surgery. We present a telemanipulation system for retinal surgery that uses a full range of existing disposable instruments. The system uses a PSSA-based manipulator that is compact and light enough that it could reasonably be made head-mounted to passively compensate for head movements. Two mechanisms are presented that enable the system to use existing disposable actuated instruments, and an instrument adapter enables quick exchange of instruments during surgery. A custom stylus for a haptic interface enables intuitive and ergonomic telemanipulation of actuated instruments. Experimental results with a force-sensitive phantom eye show that telemanipulated surgery results in reduced forces on the retina compared to manual surgery, and that training with the system results in improved performance.

    Finally, we evaluate operator efficiency with different haptic-interface kinematics for telemanipulated retinal surgery. Surgical procedures of the retina require precise manipulation of instruments inserted through trocars in the sclera. Telemanipulated robotic systems have been developed to improve retinal surgery, but there is no unique mapping of the motions of the surgeon's hand to the lower-dimensional motions of the instrument through the trocar. We study operator performance during a precision positioning task on a force-sensing phantom retina, reminiscent of telemanipulated retinal surgery, with three common haptic-interface kinematics implemented in software on a PHANTOM Premium 6DOF haptic interface. Results from a study with 12 human subjects show that overall performance is best with the kinematics that represent a compact and inexpensive option, and that subjects' subjective preference agrees with the objective performance results.
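
    To illustrate the model-based rate control described above, the following sketch converts an operator's commanded joint velocity into a stepping frequency for a PSSA using an assumed open-loop step-size-versus-load model. The model form, calibration constants, and limits are illustrative placeholders rather than the dissertation's calibrated values.

```python
import math

# Minimal sketch of model-based rate control for a piezoelectric stick-slip
# actuator (PSSA) joint. The step-size model and the constants below are
# illustrative placeholders, not the dissertation's calibrated parameters.

def step_size(load_n, free_step_m=2e-6, stall_load_n=0.5):
    """Assumed open-loop model: per-step displacement shrinks roughly
    linearly with the static load and vanishes at the stall load."""
    return max(free_step_m * (1.0 - abs(load_n) / stall_load_n), 0.0)

def rate_command(desired_velocity_mps, load_n, max_freq_hz=1000.0):
    """Map the operator's desired joint velocity into a signed stepping
    frequency, saturated at the driver's maximum rate."""
    size = step_size(load_n)
    if size == 0.0:
        return 0.0  # actuator stalled: no achievable motion
    freq = min(abs(desired_velocity_mps) / size, max_freq_hz)
    return math.copysign(freq, desired_velocity_mps)

if __name__ == "__main__":
    # Operator commands 0.5 mm/s against a 0.2 N static load.
    print(f"stepping frequency: {rate_command(0.5e-3, 0.2):+.1f} Hz")
```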

    Development of Platform Independent Remote Experiments

    A remote laboratory, or online laboratory, uses the Internet to conduct real experiments remotely when the client is geographically separated from the physical experiments. Current remote laboratories, such as the remote laboratory in Mechanical Engineering at the University of Houston, require the client to install plug-ins before conducting remote experiments. This thesis presents a technology based on JavaScript and Socket.IO for developing plug-in free remote experiments without firewall issues. A scalable plug-in free remote laboratory integrating two remote experiments has been set up in the Mechanical Engineering Department at Texas A&M University at Qatar (TAMUQ) in collaboration with the University of Houston and Texas Southern University in Houston, Texas. The plug-in free remote laboratory has been successfully tested on Windows PCs, Mac OS, and iOS devices (iPhone and iPad).
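
    The thesis implements the client side in JavaScript with Socket.IO; purely to illustrate the same event-driven, plug-in free pattern, the sketch below shows a minimal experiment server using the python-socketio package (pip install python-socketio eventlet). The event names, payload fields, and hardware hook are hypothetical.

```python
import eventlet
import eventlet.wsgi
import socketio  # pip install python-socketio eventlet

# Plug-in free remote-experiment server sketch: a browser client connects
# over Socket.IO (WebSocket with long-polling fallback), sends commands as
# events, and receives sensor readings as events.

sio = socketio.Server(cors_allowed_origins="*")
app = socketio.WSGIApp(sio)

@sio.event
def connect(sid, environ):
    print("client connected:", sid)

@sio.on("set_motor_speed")
def set_motor_speed(sid, data):
    rpm = float(data.get("rpm", 0.0))
    # drive_hardware(rpm)  # placeholder for the real equipment interface
    sio.emit("sensor_reading", {"rpm": rpm, "temperature_c": 25.0}, to=sid)

@sio.event
def disconnect(sid):
    print("client disconnected:", sid)

if __name__ == "__main__":
    # Serve on port 5000; the browser connects with the Socket.IO JS client.
    eventlet.wsgi.server(eventlet.listen(("", 5000)), app)
```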

    High Fidelity Dynamic Modeling and Nonlinear Control of Fluidic Artificial Muscles

    A fluidic artificial muscle is a type of soft actuator. Soft actuators transmit power with elastic or hyper-elastic bladders that are deformed by a pressurized fluid. In a fluidic artificial muscle, a rubber tube is encompassed by a helical fiber braid with caps on both ends. One of the end caps has an orifice, allowing the control of fluid flow into and out of the device. As the actuator is pressurized, the rubber tube expands radially and is constrained by the helical fiber braid. This constraint results in a contractile motion similar to that of biological muscles. Although artificial muscles have been extensively studied, physics-based models that predict their motion have been lacking.

    This dissertation presents a new comprehensive lumped-parameter dynamic model for both pneumatic and hydraulic artificial muscles. It includes a tube stiffness model derived from the theory of large deformations, thin-wall pressure vessel theory, and a classical artificial muscle force model. Furthermore, it incorporates models for kinetic friction and braid deformation. The new comprehensive dynamic model is able to accurately predict the displacement of artificial muscles as a function of pressure. On average, the model can predict the quasi-static position of the artificial muscles within 5% error and the dynamic displacement within 10% error with respect to the maximum stroke. Results show the potential utility of the model in mechanical system design and control design. Applications include wearable robots, mobile robots, and systems requiring compact, powerful actuation.

    The new model was used to derive sliding mode position and impedance control laws. The accuracy of the controllers ranged from ±6 µm to ±50 µm with respect to 32 mm and 24 mm stroke artificial muscles, respectively. Tracking errors were reduced by 59% or more when using the high-fidelity-model sliding mode controller compared to classical methods. The new model redefines the state of the art in controller performance for fluidic artificial muscles.
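
    As a rough illustration of the sliding mode position control mentioned above, the sketch below implements a boundary-layer sliding mode law on a toy plant that stands in for the dissertation's high-fidelity lumped-parameter model. All gains and plant constants are illustrative only.

```python
import numpy as np

# Sketch of a boundary-layer sliding mode position controller for a single
# artificial muscle. The toy plant and all constants are illustrative and
# stand in for the dissertation's high-fidelity lumped-parameter model.

def smc_pressure(x, x_dot, x_des, x_des_dot, lam=50.0, k=2.0e4, phi=0.1):
    """Sliding surface s = e_dot + lam*e; saturation over a boundary layer
    of width phi replaces the sign function to reduce chattering."""
    e = x - x_des
    e_dot = x_dot - x_des_dot
    s = e_dot + lam * e
    return -k * np.clip(s / phi, -1.0, 1.0)

def simulate(x_des=0.02, duration_s=0.5, dt=1e-3):
    """Euler-integrate a toy damped plant driven by the controller."""
    x, x_dot = 0.0, 0.0
    for _ in range(int(duration_s / dt)):
        u = smc_pressure(x, x_dot, x_des, 0.0)
        x_ddot = 1e-3 * u - 20.0 * x_dot  # toy pressure-to-motion dynamics
        x_dot += x_ddot * dt
        x += x_dot * dt
    return x

if __name__ == "__main__":
    # Command a 20 mm contraction and report the settled position.
    print(f"final position: {simulate() * 1000.0:.2f} mm")
```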

    Co-exploring Actuator Antagonism and Bio-inspired Control in a Printable Robot Arm

    The human arm is capable of performing fast targeted movements with high precision, for example when pointing with a mouse cursor, but is inherently ‘soft’ due to the muscles, tendons and other tissues of which it is composed. Robot arms are also becoming softer, to enable robustness when operating in real-world environments and to make them safer to use around people. But softness comes at a price, typically an increase in the complexity of the control required for a given task speed/accuracy requirement. Here we explore how fast and precise joint movements can be simply and effectively performed in a soft robot arm, by taking inspiration from the human arm. First, viscoelastic actuator-tendon systems in an agonist-antagonist setup provide joints with inherent damping, and with stiffness that can be varied in real time through co-contraction. Second, a light-weight and learnable inverse model for each joint enables a fast ballistic phase that drives the arm close to a desired equilibrium-point and co-contraction tuple, while the final adjustment is done by a feedback controller. The approach is embodied in the GummiArm, a robot which can almost entirely be printed on hobby-grade 3D printers. This enables rapid and iterative co-exploration of ‘brain’ and ‘body’, and provides a great platform for developing adaptive and bio-inspired behaviours.
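
    The two-phase joint control described above can be sketched as an inverse model that maps a desired equilibrium point and co-contraction level to agonist-antagonist commands, followed by a small feedback trim. The linear inverse model and gains below are placeholders for illustration, not the GummiArm's learned model.

```python
# Sketch of the two-phase joint control: an approximate inverse model gives
# agonist-antagonist commands for a ballistic move to a desired equilibrium
# point and co-contraction level, and a small feedback term trims the
# remaining error. The linear model and gains are illustrative placeholders.

def inverse_model(theta_des, cocontraction):
    """Map desired joint angle (rad) and co-contraction level [0, 1] to
    (agonist, antagonist) commands; assumed linear for this sketch."""
    agonist = cocontraction + 0.5 * theta_des
    antagonist = cocontraction - 0.5 * theta_des
    return agonist, antagonist

def joint_command(theta_des, theta_meas, cocontraction=0.3, kp=0.2):
    """Ballistic term from the inverse model plus a feedback correction."""
    agonist, antagonist = inverse_model(theta_des, cocontraction)
    correction = kp * (theta_des - theta_meas)
    return agonist + correction, antagonist - correction

if __name__ == "__main__":
    # Drive the joint toward 0.6 rad with moderate stiffness (co-contraction).
    print(joint_command(theta_des=0.6, theta_meas=0.55))
```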

    An under-ice hyperspectral and RGB imaging system to capture fine-scale biophysical properties of sea ice

    Sea-ice biophysical properties are characterized by high spatio-temporal variability ranging from the meso- to the millimeter scale. Ice coring is a common yet coarse point-sampling technique that struggles to capture such variability in a non-invasive manner. This hinders quantification and understanding of ice algae biomass patchiness and its complex interaction with some of its sea-ice physical drivers. In response to these limitations, a novel under-ice sled system was designed to capture proxies of biomass together with 3D models of the bottom topography of land-fast sea ice. This system couples a pushbroom hyperspectral imaging (HI) sensor with a standard digital RGB camera and was trialed at Cape Evans, Antarctica. HI aims to quantify per-pixel chlorophyll-a content and other ice algae biological properties at the ice-water interface based on light transmitted through the ice. RGB imagery processed with digital photogrammetry aims to capture under-ice structure and topography. Results from a 20 m transect capturing a 0.61 m wide swath at sub-mm spatial resolution are presented. We outline the technical and logistical approach taken and provide recommendations for future deployments and developments of similar systems. A preliminary transect subsample was processed using both established and novel under-ice bio-optical indices (e.g., normalized difference indices and the area normalized by the maximal band depth) and exploratory analyses (e.g., principal component analysis) to establish proxies of algal biomass. This first under-ice deployment of HI and digital photogrammetry provides a proof of concept for a novel methodology capable of delivering non-invasive and highly resolved estimates of ice algal biomass in situ, together with some of its environmental drivers. Nonetheless, various challenges and limitations remain before our method can be adopted across a range of sea-ice conditions. Our work concludes with suggested solutions to these challenges and proposes further method and system developments for future research.
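
    As an example of the kind of bio-optical index mentioned above, the sketch below computes a per-pixel normalized difference index from two bands of a transmitted-light hyperspectral cube as a rough proxy for ice-algal chlorophyll-a. The band wavelengths and synthetic data are illustrative, not the indices or data used in the study.

```python
import numpy as np

# Per-pixel normalized difference index (NDI) from a transmitted-light
# hyperspectral cube, as a rough chlorophyll-a proxy. The band wavelengths
# and synthetic test data are illustrative only.

def ndi(cube, wavelengths_nm, band_a_nm=668.0, band_b_nm=710.0):
    """cube: (rows, cols, bands) transmittance; wavelengths_nm: (bands,).
    Returns a (rows, cols) NDI image using the bands nearest the targets."""
    ia = int(np.argmin(np.abs(wavelengths_nm - band_a_nm)))
    ib = int(np.argmin(np.abs(wavelengths_nm - band_b_nm)))
    a, b = cube[..., ia], cube[..., ib]
    return (b - a) / (b + a + 1e-12)  # epsilon guards against divide-by-zero

if __name__ == "__main__":
    # Synthetic 4 x 4 pixel cube with 100 bands spanning 400-800 nm.
    wl = np.linspace(400.0, 800.0, 100)
    cube = np.random.default_rng(0).uniform(0.1, 0.9, size=(4, 4, 100))
    print(ndi(cube, wl).shape)  # -> (4, 4)
```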