2,939 research outputs found

    Experimental Validation of Contact Dynamics for In-Hand Manipulation

    This paper evaluates how well state-of-the-art contact models predict the motions and forces involved in simple in-hand robotic manipulations. In particular, it focuses on three primitive actions (linear sliding, pivoting, and rolling) that involve contacts between a gripper, a rigid object, and their environment. The evaluation is done through thousands of controlled experiments designed to capture the motion of the object and gripper, and all contact forces and torques, at 250 Hz. We demonstrate that a contact modeling approach based on Coulomb's friction law and the maximum energy principle is effective at reasoning about interaction to first order, but limited for making accurate predictions. We attribute the major limitations to 1) the non-uniqueness of force resolution inherent to grasps with multiple hard contacts of complex geometries, 2) unmodeled dynamics due to contact compliance, and 3) unmodeled geometries due to manufacturing defects.
    Comment: International Symposium on Experimental Robotics, ISER 2016, Tokyo, Japan
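
    The Coulomb-plus-maximum-energy reasoning referenced in the abstract can be made concrete in a few lines. The sketch below only illustrates that friction rule at a sliding point contact; it is not the paper's implementation, and the function name and interface are assumptions.

```python
import numpy as np

def coulomb_friction_force(normal_force, slip_velocity, mu, eps=1e-9):
    """Tangential friction at a point contact under Coulomb's law combined
    with the maximum-dissipation principle: when the contact slides, the
    friction force has magnitude mu * N and opposes the slip velocity."""
    slip_velocity = np.asarray(slip_velocity, dtype=float)
    speed = np.linalg.norm(slip_velocity)
    if speed < eps:
        # Sticking contact: friction is only bounded (|f| <= mu * N) and is
        # determined by the remaining equations of motion, not by this rule.
        return np.zeros_like(slip_velocity)
    return -mu * normal_force * slip_velocity / speed

# Example: a 5 N normal load and a 2 cm/s slip along +x with mu = 0.3
# yields a 1.5 N force along -x.
print(coulomb_friction_force(5.0, [0.02, 0.0], mu=0.3))
```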

    Real-time 3D reconstruction of non-rigid shapes with a single moving camera

    This manuscript version is made available under the CC-BY-NC-ND 4.0 license (http://creativecommons.org/licenses/by-nc-nd/4.0/). This paper describes a real-time sequential method to simultaneously recover the camera motion and the 3D shape of deformable objects from a calibrated monocular video. For this purpose, we consider the Navier-Cauchy equations used in 3D linear elasticity, solved by finite elements, to model the time-varying shape per frame. These equations are embedded in an extended Kalman filter, resulting in a sequential Bayesian estimation approach. We represent the shape, with unknown material properties, as a combination of elastic elements whose nodal points correspond to salient points in the image. The global rigidity of the shape is encoded by a stiffness matrix, computed after assembling each of these elements. With this piecewise model, we can linearly relate the 3D displacements to the 3D acting forces that cause the object deformation, assumed to be normally distributed. While standard finite-element-method techniques require imposing boundary conditions to solve the resulting linear system, in this work we eliminate this requirement by modeling the compliance matrix with a generalized pseudoinverse that enforces a pre-fixed rank. Our framework also ensures surface continuity without the need for a post-processing step to stitch all the piecewise reconstructions into a global smooth shape. We present experimental results using both synthetic and real videos for different scenarios ranging from isometric to elastic deformations. We also show the consistency of the estimation with respect to 3D ground truth data, include several experiments assessing robustness against artifacts, and finally provide an experimental validation of real-time performance at frame rate for small maps.
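
    The rank-constrained compliance described above, which relates nodal displacements linearly to the acting forces without explicit boundary conditions, can be sketched as follows. This is a minimal illustration assuming a precomputed stiffness matrix; it is not the authors' code, and the helper names in the usage comments are hypothetical.

```python
import numpy as np

def rank_constrained_compliance(K, rank):
    """Pseudoinverse of a stiffness matrix K truncated to a prescribed rank,
    so that displacements relate linearly to forces (u = C f) without
    imposing explicit boundary conditions. Illustrative sketch only."""
    U, s, Vt = np.linalg.svd(K)
    s_inv = np.zeros_like(s)
    s_inv[:rank] = 1.0 / s[:rank]        # invert only the retained modes
    return (Vt.T * s_inv) @ U.T          # C = V diag(1/s) U^T

# Hypothetical usage: K = assemble_stiffness(elements)   (assembly not shown)
# C = rank_constrained_compliance(K, rank=K.shape[0] - 6) # drop rigid-body modes
# u = C @ f   # displacements caused by normally distributed nodal forces f
```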

    Shape basis interpretation for monocular deformable 3D reconstruction

    © 2019 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. In this paper, we propose a novel interpretable shape model to encode object non-rigidity. We first use the initial frames of a monocular video to recover a rest shape, used later to compute a dissimilarity measure based on a distance matrix. Spectral analysis is then applied to this matrix to obtain a reduced shape basis that, in contrast to existing approaches, can be physically interpreted. In turn, these pre-computed shape bases are used to linearly span the deformation of a wide variety of objects. We introduce the low-rank basis into a sequential approach to recover both camera motion and non-rigid shape from the monocular video, simply by optimizing the weights of the linear combination using bundle adjustment. Since the number of parameters to optimize per frame is relatively small, especially when physical priors are considered, our approach is fast and can potentially run in real time. Validation is done on a wide variety of real-world objects undergoing both inextensible and extensible deformations. Our approach achieves remarkable robustness to artifacts such as noisy and missing measurements and shows improved performance over competing methods.
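
    As a rough illustration of the pre-computed spectral basis described above, the following sketch builds a pairwise-distance matrix from a rest shape, applies an eigendecomposition, and keeps the dominant modes. It is a simplified assumption-based sketch, not the paper's exact formulation.

```python
import numpy as np

def spectral_shape_basis(rest_shape, num_modes):
    """Dominant eigenvectors of the pairwise-distance matrix of the rest
    shape (an (N, 3) array), used as a low-rank deformation basis."""
    diff = rest_shape[:, None, :] - rest_shape[None, :, :]
    D = np.linalg.norm(diff, axis=-1)            # (N, N) distance matrix
    eigvals, eigvecs = np.linalg.eigh(D)         # D is symmetric
    order = np.argsort(-np.abs(eigvals))         # strongest modes first
    return eigvecs[:, order[:num_modes]]         # (N, num_modes) basis

# Per-frame shapes are then spanned linearly, e.g. per coordinate:
# x = rest_shape[:, 0] + basis @ wx, with the weights w refined jointly
# with camera motion by bundle adjustment.
```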

    TWO-DIMENSIONAL FLUIDIZATION OF NANOMATERIALS VIA BIOMIMETIC MEMBRANES TOWARDS ASSISTED SELF-ASSEMBLY

    Materials that take advantage of the exceptional properties of nanometer-sized aggregates of atoms are poised to play an important role in future technologies. Prime examples of such nanomaterials, which have an extremely large surface-to-volume ratio and are thus physically determined by surface-related effects, are quantum dots (Q-dots) and carbon nanotubes (CNTs). The production of such man-made nano-objects has by now become routine and even commercialized. However, the controlled assembly of individual nano-sized building blocks into larger structures of higher geometric and functional complexity has proven to be much more challenging. Yet this is exactly what is required for many applications that have transformative potential for new technologies. If the tedious procedure of sequentially positioning individual nano-objects is to be forgone, the assembly of such objects into larger structures needs to be implicitly encoded, and many ways to bestow such self-assembly abilities onto nano-objects are being developed. Yet, as the overall size and complexity of such self-assembled structures increase, kinetic and geometric frustration begin to prevent the system from achieving the desired configuration. In nature, this problem is solved by relying on guided or forced variants of the self-assembly approach. To translate such concepts into the realm of man-made nanotechnology, ways to dynamically manipulate nanomaterials need to be devised. Thus, in the first part of this work, I provide a proof of concept that supported lipid bilayers (SLBs), which exhibit free lateral diffusion of their constituents, can be utilized as a two-dimensional platform for active nanomaterial manipulation. We used streptavidin-coated quantum dots (Q-dots) as a model nano-building-block. Q-dots are zero-dimensional nanomaterials engineered to be fluorescent, with an emission determined solely by their diameter, which makes visualization convenient. Biotinylated lipids were used to tether Q-dots to an SLB, and we observed that the two-dimensional fluidity of the bilayer was translated to the quantum dots as they freely diffused. The quantum dots were visualized using wide-field fluorescence microscopy, and single-particle tracking techniques were employed to analyze their dynamic behavior. Next, an electric field was applied to the system to induce electroosmotic flow (EOF), which creates a bulk flow of the buffer solution. The quantum dots were again tracked, and ballistic motion was observed in the particle tracks due to the electroosmosis in the system. This proved that SLBs can be used as a two-dimensional fluid platform for nanomaterials and that electroosmosis can be used to manipulate the motion of the Q-dots once they are tethered to the membrane. Next, we set out to employ the same technique with carbon nanotubes (CNTs), which are known for their highly versatile mechanical and electrical properties. However, carbon nanotubes are extremely hydrophobic and tend to aggregate in aqueous solutions, which negatively impacts the viability of tethering the CNTs to the bilayer, fluorescently staining them, and then imaging them. First, we had to solubilize the CNTs such that they were monodisperse and characterize the CNT-detergent solutions. We were able to create monodisperse solutions of CNTs with detergent levels low enough that the integrity of the bilayer remained intact. We were also able to fluorescently label the CNTs in order to visualize them, and to tether them to an SLB using a peptide sequence.
    Future directions of this project include employing EOF to mobilize the CNTs and using more sophisticated single-particle tracking software to track individual CNTs and analyze their motion.
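
    The diffusive-versus-ballistic behaviour of the tracked Q-dots is typically quantified through the mean squared displacement of the particle tracks. The sketch below shows that standard calculation; it is a generic illustration, not code from this thesis.

```python
import numpy as np

def mean_squared_displacement(track, max_lag):
    """MSD of a single 2D track (an (N, 2) array of positions) as a function
    of time lag. Free diffusion gives MSD ~ 4*D*t; EOF-driven drift adds a
    ballistic term ~ (v*t)**2."""
    msd = []
    for lag in range(1, max_lag + 1):
        disp = track[lag:] - track[:-lag]
        msd.append(np.mean(np.sum(disp**2, axis=1)))
    return np.array(msd)

# Fitting msd(t) = 4*D*t + (v*t)**2 separates the diffusion coefficient D
# from the drift speed v imposed by the electroosmotic flow.
```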

    Improving FRAP and SPT for mobility and interaction measurements of molecules and nanoparticles in biomaterials

    An increasing number of pharmaceutical technologies are being developed in which nanoparticles play a crucial role. The rational development of these technologies requires detailed knowledge of the mobility and interaction of the nanoparticles inside complex biomaterials. The aim of this PhD thesis is to improve fluorescence-microscopy-based methods that allow this information to be extracted from time sequences of images. In particular, the fluorescence microscopy techniques Fluorescence Recovery After Photobleaching (FRAP) and Single Particle Tracking (SPT) are considered. FRAP modelling is revisited in order to incorporate the effect of the microscope's scanning laser beam on the shape of the photobleached region. The new model should lead to more straightforward and accurate FRAP measurements. SPT is the main focus of the PhD thesis, starting with an investigation of how motion during image acquisition affects the experimental uncertainty with which the nanoparticle positions are determined. This knowledge is used to develop a method that is able to identify interactions between nanoparticles in high detail by scanning their trajectories for correlated positions. The method is proven useful in the context of drug delivery, where it was used to study the intracellular trafficking of polymeric gene complexes. Besides SPT data analysis, it is also explored how light sheet illumination, which strongly reduces the out-of-focus fluorescence that degrades the contrast in SPT experiments, can be generated by a planar waveguide incorporated on a disposable chip. The potential as a platform for diagnostic measurements was demonstrated by using the chip to perform SPT size and concentration measurements of cell-derived membrane vesicles. The results of this PhD thesis are expected to contribute to the effort of making accurate SPT and FRAP measurements of nanoparticle properties in biomaterials more accessible to the pharmaceutical research community.
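
    The idea of scanning trajectories for correlated positions can be illustrated with a simple displacement-correlation measure between two tracks. This is only a sketch of the concept; the thesis' actual method and any threshold values may differ.

```python
import numpy as np

def displacement_correlation(track_a, track_b):
    """Normalized correlation of frame-to-frame displacements of two SPT
    tracks (each an (N, 2) array): particles that move together, e.g. two
    labels on the same complex, give values close to 1."""
    steps_a = np.diff(track_a, axis=0)
    steps_b = np.diff(track_b, axis=0)
    num = np.sum(steps_a * steps_b)
    den = np.sqrt(np.sum(steps_a**2) * np.sum(steps_b**2))
    return num / den

# Pairs whose correlation exceeds a chosen threshold (e.g. 0.8, an assumed
# value) would be flagged as candidate interactions.
```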

    Automatic camera tracking


    Multiple View Geometry For Video Analysis And Post-production

    Multiple view geometry is the foundation of an important class of computer vision techniques for the simultaneous recovery of camera motion and scene structure from a set of images. There are numerous important applications in this area, including video post-production, scene reconstruction, registration, surveillance, tracking, and segmentation. In video post-production, which is the topic addressed in this dissertation, computer analysis of the motion of the camera can replace the manual methods currently used to correctly align an artificially inserted object in a scene. However, existing single-view methods typically require multiple vanishing points and therefore fail when only one vanishing point is available. In addition, current multiple-view techniques, which make use of either epipolar geometry or the trifocal tensor, do not fully exploit the properties of constant or known camera motion. Finally, there is no general solution to the problem of synchronizing N video sequences of distinct general scenes captured by cameras undergoing similar ego-motions, which is a necessary step for video post-production across different input videos. This dissertation proposes several advancements that overcome these limitations. These advancements are used to develop an efficient framework for video analysis and post-production with multiple cameras. In the first part of the dissertation, novel inter-image constraints are introduced that are particularly useful for scenes where minimal information is available. This result extends the current state of the art in single-view geometry to situations where only one vanishing point is available. The property of constant or known camera motion is also exploited in this dissertation for applications such as the calibration of a network of cameras in video surveillance systems, and Euclidean reconstruction from turn-table image sequences in the presence of zoom and focus. We then propose a new framework for the estimation and alignment of camera motions, including both simple (panning, tracking, and zooming) and complex (e.g. hand-held) camera motions. The accuracy of these results is demonstrated by applying our approach to video post-production applications such as video cut-and-paste and shadow synthesis. As realistic image-based rendering problems, these applications require extreme accuracy in the estimation of camera geometry, the position and orientation of the light source, and the photometric properties of the resulting cast shadows. In each case, the theoretical results are fully supported and illustrated by both numerical simulations and thorough experimentation on real data.
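
    For context on the epipolar geometry the dissertation builds on, the sketch below is the textbook normalized 8-point algorithm for estimating a fundamental matrix from point correspondences. It is included only as background and is not the dissertation's method.

```python
import numpy as np

def fundamental_matrix_8pt(x1, x2):
    """Normalized 8-point estimate of F from corresponding image points
    x1, x2 (each an (N, 2) array with N >= 8), so that x2^T F x1 = 0."""
    def normalize(pts):
        c = pts.mean(axis=0)
        s = np.sqrt(2) / np.mean(np.linalg.norm(pts - c, axis=1))
        T = np.array([[s, 0, -s * c[0]], [0, s, -s * c[1]], [0, 0, 1]])
        ph = np.column_stack([pts, np.ones(len(pts))])
        return (T @ ph.T).T, T

    p1, T1 = normalize(x1)
    p2, T2 = normalize(x2)
    # Each correspondence contributes one row of the linear system A f = 0,
    # obtained by expanding the epipolar constraint x2^T F x1 = 0.
    A = np.column_stack([
        p2[:, 0] * p1[:, 0], p2[:, 0] * p1[:, 1], p2[:, 0],
        p2[:, 1] * p1[:, 0], p2[:, 1] * p1[:, 1], p2[:, 1],
        p1[:, 0], p1[:, 1], np.ones(len(p1)),
    ])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    U, s, Vt = np.linalg.svd(F)          # enforce the rank-2 constraint
    F = U @ np.diag([s[0], s[1], 0]) @ Vt
    return T2.T @ F @ T1                 # undo the normalization
```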

    Structure from Recurrent Motion: From Rigidity to Recurrency

    This paper proposes a new method for Non-Rigid Structure-from-Motion (NRSfM) from a long monocular video sequence observing a non-rigid object performing a recurrent and possibly repetitive dynamic action. Departing from the traditional idea of using a linear low-order or low-rank shape model for the task of NRSfM, our method exploits the property of shape recurrency (i.e., many deforming shapes tend to repeat themselves in time). We show that recurrency is in fact a generalized rigidity. Based on this, we reduce NRSfM problems to rigid ones provided that a certain recurrency condition is satisfied. Given such a reduction, standard rigid-SfM techniques are directly applicable (without any change) to the reconstruction of non-rigid dynamic shapes. To implement this idea as a practical approach, this paper develops efficient algorithms for automatic recurrency detection, as well as camera view clustering via a rigidity check. Experiments on both simulated sequences and real data demonstrate the effectiveness of the method. Since this paper offers a novel perspective on rethinking structure-from-motion, we hope it will inspire other new problems in the field.
    Comment: To appear in CVPR 201
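
    A minimal version of a rigidity check, assuming an affine camera and using the classical rank-3 property of rigid measurement matrices, is sketched below; the paper's actual test and threshold may differ.

```python
import numpy as np

def looks_rigid(W, tol=0.05):
    """W is a (2F, P) measurement matrix stacking the x/y tracks of P points
    over F frames. For a rigid scene under an affine camera, W has rank <= 3
    after removing per-frame translations, so a small residual beyond the
    third singular value suggests the frames can be clustered together and
    handed to standard rigid SfM. tol is an assumed threshold."""
    Wc = W - W.mean(axis=1, keepdims=True)     # remove per-row translation
    s = np.linalg.svd(Wc, compute_uv=False)
    residual = np.sqrt(np.sum(s[3:]**2)) / np.sqrt(np.sum(s[:3]**2))
    return residual < tol
```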

    A neural tracking and motor control approach to improve rehabilitation of upper limb movements

    Background: Restoration of upper limb movements in subjects recovering from stroke is an essential keystone in rehabilitative practice. Rehabilitation of arm movements is, in fact, usually far more difficult than that of the lower extremities. For these reasons, researchers are developing new methods and technologies so that the rehabilitative process can be more accurate, rapid, and easily accepted by the patient. This paper introduces the proof of concept for a new non-invasive FES-assisted rehabilitation system for the upper limb, called smartFES (sFES), in which the electrical stimulation is controlled by a biologically inspired neural inverse dynamics model fed by the kinematic information associated with the execution of a planar goal-oriented movement. More specifically, this work details two steps of the proposed system: an ad hoc markerless motion analysis algorithm for the estimation of kinematics, and a neural controller that drives a synthetic arm. The vision of the entire system is to acquire kinematics from the analysis of video sequences during planar arm movements and to use it together with a neural inverse dynamics model that provides the patient with the electrical stimulation patterns needed to perform the movement with the assisted limb.
    Methods: The markerless motion tracking system aims at localizing and monitoring the arm movement by tracking its silhouette. It uses a specifically designed motion estimation method, which we named Neural Snakes, that predicts the arm contour deformation as a first step of a silhouette extraction algorithm. The starting and ending points of the arm movement feed an Artificial Neural Controller, incorporating Hill's muscle model, which solves the inverse dynamics to obtain the FES patterns needed to move a simulated arm from the starting point to the desired point. Both the position error with respect to the requested arm trajectory and a comparison between curvature factors were calculated in order to determine the accuracy of the system.
    Results: The proposed method has been tested on real data acquired during the execution of planar goal-oriented arm movements. The main results concern the capability of the system to accurately recreate the movement task by providing a synthetic arm model with the stimulation patterns estimated by the inverse dynamics model. In the simulation of movements with a length of ± 20 cm, the model showed an unbiased angular error and a mean (absolute) position error of about 1.5 cm, confirming the ability of the system to reliably drive the model to the desired targets. Moreover, the curvature factors of the actual human movements and of the reconstructed ones are similar, encouraging future development of the system in terms of reproducibility of the desired movements.
    Conclusion: A novel FES-assisted rehabilitation system for the upper limb is presented, and two parts of it have been designed and tested. The system includes a markerless motion estimation algorithm and a biologically inspired neural controller that drives a biomechanical arm model and provides the stimulation patterns that, in a future development, could be used to drive a smart Functional Electrical Stimulation (sFES) system. The system is envisioned to help in the rehabilitation of post-stroke hemiparetic patients by assisting the movement of the paretic upper limb, once trained with a set of movements performed by the therapist or in virtual reality. Future work will include the application and testing of the stimulation patterns in real conditions.
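
    To make the inverse dynamics step concrete, the sketch below computes joint torques for a generic planar two-link arm (horizontal plane, so gravity is ignored). The parameter values are assumptions chosen for illustration; the paper's controller additionally involves Hill-type muscle dynamics, which are not shown here.

```python
import numpy as np

# Illustrative, assumed parameters: masses [kg], link length and
# centre-of-mass distances [m], link inertias [kg m^2].
M1, M2 = 2.0, 1.5
L1, LC1, LC2 = 0.30, 0.15, 0.17
I1, I2 = 0.025, 0.015

def planar_arm_inverse_dynamics(q, dq, ddq):
    """Joint torques for a planar two-link arm, tau = M(q) ddq + c(q, dq),
    using the standard textbook model (no gravity in the horizontal plane)."""
    a = I1 + I2 + M1 * LC1**2 + M2 * (L1**2 + LC2**2)
    b = M2 * L1 * LC2
    d = I2 + M2 * LC2**2
    c2, s2 = np.cos(q[1]), np.sin(q[1])
    M = np.array([[a + 2 * b * c2, d + b * c2],
                  [d + b * c2,     d]])
    c = np.array([-b * s2 * (2 * dq[0] * dq[1] + dq[1]**2),
                   b * s2 * dq[0]**2])
    return M @ ddq + c

# Example: torques required for a small acceleration at a mid-range posture.
tau = planar_arm_inverse_dynamics(q=np.array([0.5, 1.0]),
                                  dq=np.zeros(2),
                                  ddq=np.array([1.0, -0.5]))
print(tau)
```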

    Improved Multistage Learning for Multibody Motion Segmentation

    We present an improved version of the MSL method of Sugaya and Kanatani for multibody motion segmentation. We replace their initial segmentation, based on heuristic clustering, with an analytical computation based on GPCA, fitting two 2-D affine spaces in 3-D by the Taubin method. This initial segmentation alone can segment most of the motions in natural scenes fairly correctly, and the result is successively optimized by the EM algorithm in 3-D, 5-D, and 7-D. Using simulated and real videos, we demonstrate that our method outperforms the previous MSL and other existing methods. We also illustrate its mechanism by our visualization technique.
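
    The initial-segmentation idea of fitting low-dimensional affine spaces to trajectory data can be illustrated as follows; this sketch uses a plain PCA fit rather than the Taubin method named in the abstract, and the assignment logic in the comments is only suggestive.

```python
import numpy as np

def affine_subspace_residuals(points, dim=2):
    """Fit a 'dim'-dimensional affine subspace to 3-D points by PCA and
    return each point's distance to it. Assigning points to whichever of
    two fitted subspaces gives the smaller residual yields an initial
    two-motion segmentation that EM can then refine."""
    mean = points.mean(axis=0)
    _, _, Vt = np.linalg.svd(points - mean)
    basis = Vt[:dim]                                    # spanning directions
    proj = (points - mean) @ basis.T @ basis + mean     # projection onto the space
    return np.linalg.norm(points - proj, axis=1)
```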