
    Improved GelSight Tactile Sensor for Measuring Geometry and Slip

    A GelSight sensor uses an elastomeric slab covered with a reflective membrane to measure tactile signals. It measures 3D geometry and contact force information with high spatial resolution, and has successfully supported many challenging robot tasks. A previous sensor, based on a semi-specular membrane, produced high resolution but limited geometric accuracy. In this paper, we describe a new GelSight design for a robot gripper, using a Lambertian membrane and a new illumination system, which gives greatly improved geometric accuracy while retaining the compact size. We demonstrate its use in measuring surface normals and reconstructing height maps using photometric stereo. We also use it for slip detection, combining information about relative motion on the membrane surface with shear distortions. Using a robotic arm and a set of 37 everyday objects with varied properties, we find that the sensor can detect translational and rotational slip in general cases and can be used to improve grasp stability.
    Comment: IEEE/RSJ International Conference on Intelligent Robots and Systems
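    The photometric-stereo step mentioned in the abstract can be sketched generically as follows. This is an illustrative least-squares formulation assuming a Lambertian surface and calibrated light directions, not the authors' actual implementation; the function name, array shapes, and inputs are assumptions.

```python
import numpy as np

def photometric_stereo(images, light_dirs):
    """Recover unit surface normals from K images under K known lights.

    images:     (K, H, W) array of per-pixel intensities.
    light_dirs: (K, 3) array of unit light-direction vectors.
    Returns:    (H, W, 3) array of unit surface normals.

    Under the Lambertian model, intensity I = albedo * (L @ n), so the
    scaled normal (albedo * n) is the per-pixel least-squares solution.
    """
    K, H, W = images.shape
    I = images.reshape(K, -1)                            # (K, H*W)
    G, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)   # (3, H*W) = albedo*n
    albedo = np.linalg.norm(G, axis=0)                   # per-pixel albedo
    n = G / np.maximum(albedo, 1e-12)                    # normalize to unit length
    return n.T.reshape(H, W, 3)
```

    With at least three non-coplanar light directions the per-pixel system is overdetermined or exactly determined, which is why GelSight-style sensors use multiple illumination directions.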

    Nonlinear modeling of FES-supported standing-up in paraplegia for selection of feedback sensors

    This paper presents an analysis of the standing-up manoeuvre in paraplegia, considering body supportive forces as a potential feedback source in functional electrical stimulation (FES)-assisted standing-up. The analysis investigates the significance of arm, feet, and seat reaction signals for reconstructing the trajectory of the human body's center of mass (COM). The standing-up behavior of eight paraplegic subjects was analyzed, measuring motion kinematics and reaction forces to provide data for modeling. Two nonlinear empirical modeling methods were implemented, Gaussian process (GP) priors and multilayer perceptron artificial neural networks (ANN), and their performance in reconstructing the vertical and horizontal COM components is compared. As input, ten sensory configurations incorporating different numbers of sensors were evaluated, trading off modeling performance against the variables chosen and ease of use in everyday application. For evaluation, the root-mean-square difference was calculated between the model output and the kinematics-based COM trajectory. Results show that force feedback for COM assessment in FES-assisted standing-up is a comparable alternative to kinematics measurement systems. The GP provided better modeling performance, at higher computational cost. Moreover, on the basis of averaged results, a sensory system incorporating a six-dimensional handle force sensor and an instrumented foot insole is recommended. This configuration is practical to realize and, with the GP model, achieves an average COM estimation accuracy of 16 ± 1.8 mm in the horizontal and 39 ± 3.7 mm in the vertical direction. Some other configurations analyzed in the study exhibit better modeling accuracy, but are less practical for everyday use
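    The evaluation criterion described above, the root-mean-square difference between a model's COM estimate and the kinematics-based reference, can be written in a few lines; the function name and units below are illustrative.

```python
import numpy as np

def rms_difference(com_model, com_reference):
    """Root-mean-square difference between two COM component trajectories.

    com_model, com_reference: length-T sequences of one COM coordinate
    (e.g. the horizontal component, in mm) sampled at matching times.
    """
    model = np.asarray(com_model, dtype=float)
    ref = np.asarray(com_reference, dtype=float)
    return float(np.sqrt(np.mean((model - ref) ** 2)))
```

    Applying this per component, as the paper does, yields separate horizontal and vertical error figures (e.g. the reported 16 mm and 39 mm averages for the recommended configuration).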

    A Comparison of Video and Accelerometer Based Approaches Applied to Performance Monitoring in Swimming.

    The aim of this paper is to present a comparison of video- and sensor-based studies of swimming performance. The video-based approach is reviewed and contrasted with the newer sensor-based technology, specifically accelerometers based upon Micro-Electro-Mechanical Systems (MEMS) technology. Results from previously published swim-performance studies using both video and sensor technologies are summarised and evaluated against the conventional theory that upper-arm movements are of primary interest when quantifying freestyle technique. The authors conclude that multiple sensor-based measurements of swimmers' acceleration profiles have the potential to offer significant advances in coaching technique over the traditional video-based approach

    FlightGoggles: A Modular Framework for Photorealistic Camera, Exteroceptive Sensor, and Dynamics Simulation

    FlightGoggles is a photorealistic sensor simulator for perception-driven robotic vehicles. The key contributions of FlightGoggles are twofold. First, FlightGoggles provides photorealistic exteroceptive sensor simulation using graphics assets generated with photogrammetry. Second, it provides the ability to combine (i) synthetic exteroceptive measurements generated in silico in real time and (ii) vehicle dynamics and proprioceptive measurements generated in motio by vehicle(s) in a motion-capture facility. FlightGoggles is capable of simulating a virtual-reality environment around autonomous vehicle(s). While a vehicle is in flight in the FlightGoggles virtual-reality environment, exteroceptive sensors are rendered synthetically in real time, while all complex extrinsic dynamics are generated organically through the natural interactions of the vehicle. The FlightGoggles framework allows researchers to accelerate development by circumventing the need to estimate complex and hard-to-model interactions such as aerodynamics, motor mechanics, battery electrochemistry, and the behavior of other agents. The ability to perform vehicle-in-the-loop experiments with photorealistic exteroceptive sensor simulation facilitates novel research directions involving, e.g., fast and agile autonomous flight in obstacle-rich environments, safe human interaction, and flexible sensor selection. FlightGoggles has been utilized as the main test for selecting the nine teams that will advance in the AlphaPilot autonomous drone racing challenge. We survey approaches and results from the top AlphaPilot teams, which may be of independent interest.
    Comment: Initial version appeared at IROS 2019. Supplementary material can be found at https://flightgoggles.mit.edu. The revision includes a description of new FlightGoggles features, such as a photogrammetric model of the MIT Stata Center, new rendering settings, and a Python API
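    The vehicle-in-the-loop pattern described above can be sketched as a single simulation step; this is a generic illustration of the data flow, not FlightGoggles' actual API, and all object and method names here are hypothetical.

```python
def vehicle_in_the_loop_step(mocap, renderer, perception, controller):
    """One cycle of the vehicle-in-the-loop pattern:
    the real vehicle's pose is tracked "in motio" by motion capture,
    a sensor frame is rendered "in silico" from that pose, and the
    perception/control stack consumes the synthetic frame as if it
    came from a real onboard sensor."""
    pose = mocap.get_pose()            # ground-truth state from motion capture
    frame = renderer.render(pose)      # synthetic exteroceptive measurement
    estimate = perception(frame)       # perception runs on rendered imagery
    return controller(estimate, pose)  # command sent back to the real vehicle
```

    The point of this structure is the one made in the abstract: the hard-to-model physics stays real, while only the exteroceptive sensing is simulated.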

    Primary stability of cementless threaded acetabular cups at first implantation and in the case of revision regarding micromotions as indicators

    The primary stability of cementless total hip endoprostheses is of vital importance for subsequent long-term osseointegration. The extent of micromotion between implant and acetabulum is an indicator of primary stability. Based on this hypothesis, different cementless hip joint endoprostheses were studied with regard to their micromotions. The primary stability of nine different cementless threaded acetabular cups was studied in an experimental setup using blocks of rigid foam. The micromotions between implant and implant bearing were evaluated under cyclic, sinusoidal loading. The blocks of polymer foam were prepared according to the Paprosky defect classification. The micromotions increased with the degree of the defect for all cups tested; occasionally, values of over 200 µm were measured. From a defect degree of 3b according to Paprosky, the implants could no longer be appropriately placed. Implants with a spherical exterior form tended to exhibit better values than the conical/parabolic implants

    Human-activity-centered measurement system: challenges from laboratory to the real environment in assistive gait wearable robotics

    Assistive gait wearable robots (AGWR) have shown great advancement in developing intelligent devices that assist humans in their activities of daily living (ADLs). Rapid technological advancement in sensing, actuators, materials, and computational intelligence has sped up this development towards more practical and smart AGWR. However, most assistive gait wearable robots are still confined to being controlled and assessed indoors, within laboratory environments, limiting any potential to provide the real assistance and rehabilitation that humans require in real environments. Gait assessment parameters play an important role not only in evaluating patient progress and assistive-device performance but also in controlling smart, self-adaptable AGWR in real time. Self-adaptable wearable robots must interactively conform to changing environments and to different users to provide optimal functionality and comfort. This paper discusses the performance parameters, such as comfort, safety, adaptability, and energy consumption, that are required for the development of an intelligent AGWR for outdoor environments. The challenges of measuring these parameters with current systems for data collection and analysis, using vision capture and wearable sensors, are presented and discussed

    Magnetic-Visual Sensor Fusion-based Dense 3D Reconstruction and Localization for Endoscopic Capsule Robots

    Reliable, real-time 3D reconstruction and localization is a crucial prerequisite for the navigation of actively controlled capsule endoscopic robots, an emerging, minimally invasive diagnostic and therapeutic technology for use in the gastrointestinal (GI) tract. In this study, we propose a fully dense, non-rigidly deformable, strictly real-time, intraoperative map fusion approach for actively controlled endoscopic capsule robot applications, which combines magnetic and vision-based localization with non-rigid-deformation-based frame-to-model map fusion. The performance of the proposed method is demonstrated using four different ex-vivo porcine stomach models. Across different trajectories of varying speed and complexity, and four different endoscopic cameras, the root-mean-square surface reconstruction errors range from 1.58 to 2.17 cm.
    Comment: submitted to IROS 201
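    The reported surface reconstruction error can be computed as a root-mean-square distance between reconstructed and ground-truth surface points. A minimal sketch, assuming point correspondences are already established (the abstract does not specify how correspondence is obtained):

```python
import numpy as np

def surface_rmse(reconstructed, ground_truth):
    """RMS Euclidean distance between corresponding 3D points.

    reconstructed, ground_truth: (N, 3) arrays of matched surface points,
    in the same units (e.g. cm, as in the reported 1.58-2.17 cm range).
    """
    rec = np.asarray(reconstructed, dtype=float)
    gt = np.asarray(ground_truth, dtype=float)
    distances = np.linalg.norm(rec - gt, axis=1)  # per-point error
    return float(np.sqrt(np.mean(distances ** 2)))
```

    When correspondences are not known, a nearest-neighbor assignment (as in ICP-style evaluation) is the usual substitute before applying the same formula.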