
    Measuring cellular traction forces on non-planar substrates

    Animal cells use traction forces to sense the mechanics and geometry of their environment. Measuring these traction forces requires a workflow combining cell experiments, image processing and force reconstruction based on elasticity theory. Such procedures have been established before mainly for planar substrates, in which case one can use the Green's function formalism. Here we introduce a workflow to measure traction forces of cardiac myofibroblasts on non-planar elastic substrates. Soft elastic substrates with a wave-like topology were micromolded from polydimethylsiloxane (PDMS) and fluorescent marker beads were distributed homogeneously in the substrate. Using feature vector based tracking of these marker beads, we first constructed a hexahedral mesh for the substrate. We then solved the direct elastic boundary volume problem on this mesh using the finite element method (FEM). Using data simulations, we show that the traction forces can be reconstructed from the substrate deformations by solving the corresponding inverse problem with an L1-norm for the residue and an L2-norm for 0th-order Tikhonov regularization. Applying this procedure to the experimental data, we find that cardiac myofibroblast cells tend to align both their shapes and their forces with the long axis of the deformable wavy substrate. (Comment: 34 pages, 9 figures)
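
    The inverse step described above can be sketched as a small optimization problem: minimize an L1 data residue plus an L2 (0th-order Tikhonov) penalty on the tractions. A minimal Python sketch follows; the forward matrix M (which the FEM solve would supply), the synthetic measurements u and the regularization weight lam are placeholder assumptions, not quantities from the paper, and the absolute value is smoothed so a gradient-based solver can be used.

```python
import numpy as np
from scipy.optimize import minimize

def reconstruct_tractions(M, u, lam=1e-2, eps=1e-6):
    """Recover nodal tractions f from displacements u by minimizing
    a smoothed L1 residue plus an L2 (0th-order Tikhonov) penalty."""
    def objective(f):
        r = M @ f - u
        return np.sum(np.sqrt(r**2 + eps)) + lam * np.sum(f**2)  # smooth |r| for L-BFGS-B
    f0 = np.zeros(M.shape[1])
    return minimize(objective, f0, method="L-BFGS-B").x

# Synthetic check, in the spirit of the paper's data simulations.
rng = np.random.default_rng(0)
M = rng.normal(size=(200, 50))                   # stand-in forward operator
f_true = rng.normal(size=50)
u = M @ f_true + 0.05 * rng.normal(size=200)     # noisy "measured" displacements
f_rec = reconstruct_tractions(M, u)
```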

    A topological sampling theorem for robust boundary reconstruction and image segmentation

    Existing theories on shape digitization impose strong constraints on admissible shapes, and require error-free data. Consequently, these theories are not applicable to most real-world situations. In this paper, we propose a new approach that overcomes many of these limitations. It assumes that segmentation algorithms represent the detected boundary by a set of points whose deviation from the true contours is bounded. Given these error bounds, we reconstruct boundary connectivity by means of Delaunay triangulation and α-shapes. We prove that this procedure is guaranteed to result in topologically correct image segmentations under certain realistic conditions. Experiments on real and synthetic images demonstrate the good performance of the new method and confirm the predictions of our theory.
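
    As a rough illustration of the Delaunay/α-shape connectivity step, the 2D sketch below keeps only Delaunay triangles whose circumradius is below a threshold and reports edges used by exactly one kept triangle as the reconstructed boundary. The threshold alpha is a free parameter here; the paper instead derives admissible bounds from the assumed error model.

```python
import numpy as np
from collections import Counter
from scipy.spatial import Delaunay

def alpha_shape_edges(points, alpha):
    """points: Nx2 array of boundary samples; returns boundary edges of the alpha-shape."""
    tri = Delaunay(points)
    edge_count = Counter()
    for ia, ib, ic in tri.simplices:
        a, b, c = points[ia], points[ib], points[ic]
        la, lb, lc = np.linalg.norm(b - c), np.linalg.norm(a - c), np.linalg.norm(a - b)
        s = (la + lb + lc) / 2.0
        area = max(np.sqrt(max(s * (s - la) * (s - lb) * (s - lc), 0.0)), 1e-12)
        if la * lb * lc / (4.0 * area) < alpha:          # circumradius test
            for e in ((ia, ib), (ib, ic), (ia, ic)):
                edge_count[tuple(sorted(e))] += 1
    # edges belonging to exactly one kept triangle lie on the reconstructed boundary
    return [e for e, n in edge_count.items() if n == 1]
```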

    Reliable fusion of ToF and stereo depth driven by confidence measures

    In this paper we propose a framework for the fusion of depth data produced by a Time-of-Flight (ToF) camera and a stereo vision system. Initially, depth data acquired by the ToF camera are upsampled by an ad-hoc algorithm based on image segmentation and bilateral filtering. In parallel, a dense disparity map is obtained using the Semi-Global Matching stereo algorithm. Reliable confidence measures are extracted for both the ToF and stereo depth data. In particular, the ToF confidence also accounts for the mixed-pixel effect, and the stereo confidence accounts for the relationship between the pointwise matching costs and the cost obtained by the semi-global optimization. Finally, the two depth maps are synergistically fused by enforcing the local consistency of depth data, accounting for the confidence of the two data sources at each location. Experimental results clearly show that the proposed method produces accurate high-resolution depth maps and outperforms the compared fusion algorithms.
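
    The final fusion step can be caricatured as a per-pixel confidence-weighted average of the two depth maps, as in the sketch below. This omits the local-consistency framework that the paper actually enforces, so it is only an illustration of how the confidence maps enter; array names are assumptions.

```python
import numpy as np

def fuse_depth(d_tof, d_stereo, c_tof, c_stereo, eps=1e-6):
    """Blend two depth maps (same shape) using confidence maps in [0, 1]."""
    w = c_tof + c_stereo
    fused = (c_tof * d_tof + c_stereo * d_stereo) / np.maximum(w, eps)
    return np.where(w > eps, fused, 0.0)   # pixels with no confident source are zeroed
```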

    From Multiview Image Curves to 3D Drawings

    Reconstructing 3D scenes from multiple views has made impressive strides in recent years, chiefly by correlating isolated feature points, intensity patterns, or curvilinear structures. In the general setting - without controlled acquisition, abundant texture, curves and surfaces following specific models, or limiting scene complexity - most methods produce unorganized point clouds, meshes, or voxel representations, with some exceptions producing unorganized clouds of 3D curve fragments. Ideally, many applications require structured representations of curves, surfaces and their spatial relationships. This paper presents a step in this direction by formulating an approach that combines 2D image curves into a collection of 3D curves, with topological connectivity between them represented as a 3D graph. This results in a 3D drawing, which is complementary to surface representations in the same sense as a 3D scaffold complements a tent taut over it. We evaluate our results against ground truth on synthetic and real datasets. (Comment: Expanded ECCV 2016 version with tweaked figures and including an overview of the supplementary material available at multiview-3d-drawing.sourceforge.net)
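
    One low-level building block of such a pipeline is triangulating corresponding 2D curve samples from two calibrated views into 3D points, which OpenCV already provides; a hedged sketch is shown below. The paper's actual contribution - linking curve fragments across views into a connected 3D curve graph - is not reproduced here, and the projection matrices and matched samples are assumed inputs.

```python
import numpy as np
import cv2

def triangulate_curve_samples(P1, P2, pts1, pts2):
    """P1, P2: 3x4 camera projection matrices; pts1, pts2: Nx2 matched curve samples."""
    X_h = cv2.triangulatePoints(P1, P2,
                                pts1.T.astype(np.float64),
                                pts2.T.astype(np.float64))
    return (X_h[:3] / X_h[3]).T        # Nx3 Euclidean points along the 3D curve
```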

    A fruit recognition method for automatic harvesting

    Automation of harvesting is one of the most active topics in greenhouse operation, but it first requires a reliable method of identifying mature fruit clusters on plants. This thesis presents a method to detect and recognize mature tomato fruit clusters on a complex-structured tomato plant containing clutter and occlusion in a tomato greenhouse. A color stereo vision camera is used as the vision sensor. The proposed method performs a 3D reconstruction from the data collected by the stereo camera to create a 3D environment for further processing. The Color Layer Growing (CLG) method is introduced to segment the mature fruits from the leaves, stalks, background and noise. Target fruit clusters can then be located by depth segmentation. The experimental data were collected from a tomato greenhouse and the method is validated by the experimental results.
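
    The following sketch is not the thesis's Color Layer Growing algorithm, only a generic stand-in for the two-stage idea it describes: isolate ripe-red pixels in HSV space, then split the resulting pixels into coarse depth layers as candidate fruit clusters. The HSV thresholds and depth bin size are illustrative assumptions.

```python
import numpy as np
import cv2

def segment_ripe_fruit(bgr_image, depth_map, depth_bin=0.05):
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    # red hue wraps around 0/180 in OpenCV's HSV encoding
    mask = cv2.inRange(hsv, (0, 120, 70), (10, 255, 255)) | \
           cv2.inRange(hsv, (170, 120, 70), (180, 255, 255))
    fruit_depths = depth_map[mask > 0]
    layers = np.unique(np.round(fruit_depths / depth_bin) * depth_bin)  # coarse depth clusters
    return mask, layers
```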

    Model-Based Environmental Visual Perception for Humanoid Robots

    The visual perception of a robot should answer two fundamental questions: What? and Where? In order to properly and efficiently answer these questions, it is essential to establish a bidirectional coupling between the external stimuli and the internal representations. This coupling links the physical world with the inner abstraction models by sensor transformation, recognition, matching and optimization algorithms. The objective of this PhD thesis is to establish this sensor-model coupling.

    Structured Light-Based 3D Reconstruction System for Plants

    Camera-based 3D reconstruction of physical objects is one of the most popular computer vision trends in recent years. Many systems have been built to model different real-world subjects, but there is a lack of a completely robust system for plants. This paper presents a full 3D reconstruction system that incorporates both hardware structures (including the proposed structured light system to enhance textures on object surfaces) and software algorithms (including the proposed 3D point cloud registration and plant feature measurement). This paper demonstrates the ability to produce 3D models of whole plants created from multiple pairs of stereo images taken at different viewing angles, without the need to destructively cut away any parts of a plant. The ability to accurately predict phenotyping features, such as the number of leaves, plant height, leaf size and internode distances, is also demonstrated. Experimental results show that, for plants having a range of leaf sizes and a distance between leaves appropriate for the hardware design, the algorithms successfully predict phenotyping features in the target crops, with a recall of 0.97 and a precision of 0.89 for leaf detection and less than a 13-mm error for plant size, leaf size and internode distance.
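
    Pairwise registration of partial scans is one of the software steps mentioned above. The sketch below uses Open3D's stock point-to-point ICP rather than the paper's own registration method, so it should be read only as an illustration of aligning two partial plant point clouds; the voxel size and correspondence distance are assumed values.

```python
import numpy as np
import open3d as o3d

def register_scans(source_pcd, target_pcd, voxel=0.005, max_dist=0.02):
    """Align source_pcd onto target_pcd and return the 4x4 rigid transform."""
    src = source_pcd.voxel_down_sample(voxel)          # downsample for speed
    tgt = target_pcd.voxel_down_sample(voxel)
    result = o3d.pipelines.registration.registration_icp(
        src, tgt, max_dist, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation
```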

    A Vision-Based Sensor for Noncontact Structural Displacement Measurement

    Conventional displacement sensors have limitations in practical applications. This paper develops a vision sensor system for remote measurement of structural displacements. An advanced template matching algorithm, referred to as the upsampled cross correlation, is adopted and further developed into a software package for real-time displacement extraction from video images. By simply adjusting the upsampling factor, better subpixel resolution can be easily achieved to improve the measurement accuracy. The performance of the vision sensor is first evaluated through a laboratory shaking table test of a frame structure, in which the displacements at all the floors are measured by using one camera to track either high-contrast artificial targets or low-contrast natural targets on the structural surface such as bolts and nuts. Satisfactory agreements are observed between the displacements measured by the single camera and those measured by high-performance laser displacement sensors. Then field tests are carried out on a railway bridge and a pedestrian bridge, through which the accuracy of the vision sensor in both time and frequency domains is further confirmed in realistic field environments. Significant advantages of the noncontact vision sensor include its low cost, ease of operation, and flexibility to extract structural displacement at any point from a single measurement.
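
    Upsampled cross correlation is available off the shelf in scikit-image, so the principle of the tracking step can be sketched as below: register a target template against a patch of each video frame to obtain a subpixel shift, with larger upsample factors giving finer resolution. This illustrates the idea rather than the paper's own software package; the scale factor converting pixels to physical displacement would come from calibration.

```python
from skimage.registration import phase_cross_correlation

def track_displacement(template, frame_patch, upsample_factor=100):
    """Return the (row, col) subpixel shift of frame_patch relative to template."""
    shift, error, _ = phase_cross_correlation(template, frame_patch,
                                              upsample_factor=upsample_factor)
    return shift   # shift in pixels; multiply by a calibrated scale to get displacement
```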