
    Part Description and Segmentation Using Contour, Surface and Volumetric Primitives

    The problem of part definition, description, and decomposition is central to shape recognition systems. The ultimate goal of segmenting range images into meaningful parts and objects has proved very difficult to realize, mainly because the segmentation problem has been isolated from the issue of representation. We propose a paradigm for part description and segmentation that integrates contour, surface, and volumetric primitives. Unlike previous approaches, we use geometric properties derived from both boundary-based (surface contours and occluding contours) and primitive-based (quadric patches and superquadric models) representations to define and recover part-whole relationships, without a priori knowledge about the objects or the object domain. Object shape is described at three levels of complexity, each contributing to the overall shape. Our approach can be summarized as answering the following question: given three different modules for extracting volume, surface, and boundary properties, how should they be invoked, evaluated, and integrated? Volume and boundary fitting and surface description are performed in parallel to combine the best of the coarse-to-fine and fine-to-coarse segmentation strategies. The process involves feedback between the segmentor (the control module) and the individual shape description modules. The control module evaluates the intermediate descriptions and formulates hypotheses about parts, which are then tested by the segmentor and the descriptors. The resulting descriptions are independent of position, orientation, scale, domain, and domain properties, and are based purely on geometric considerations. They are extremely useful for high-level, domain-dependent symbolic reasoning processes, which need not deal with a tremendous amount of data, but only with a rich description of the data in terms of primitives recovered at various levels of complexity.
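    As a concrete illustration of what the volumetric-primitive fitting involves, below is a minimal Python sketch of the standard superquadric inside-outside function and a least-squares residual of the kind commonly used to recover superquadric parameters from range points. This is not the paper's implementation; parameter names are illustrative, and pose parameters are omitted for brevity.

```python
import numpy as np

def superquadric_inside_outside(points, a, eps):
    """Inside-outside function F of a superquadric in canonical pose.

    F < 1: point inside; F == 1: on the surface; F > 1: outside.
    points: (N, 3) array; a = (a1, a2, a3) size parameters;
    eps = (eps1, eps2) shape exponents.
    """
    x, y, z = (np.abs(points) / np.asarray(a, dtype=float)).T
    e1, e2 = eps
    return (x ** (2.0 / e2) + y ** (2.0 / e2)) ** (e2 / e1) + z ** (2.0 / e1)

def fit_residual(params, points):
    """Residual for least-squares recovery: drives F toward 1 on the data."""
    a, eps = params[:3], params[3:5]
    F = superquadric_inside_outside(points, a, eps)
    # The sqrt-volume weight discourages degenerate shrinking solutions.
    return np.sqrt(a[0] * a[1] * a[2]) * (F ** eps[0] - 1.0)
```

    In practice a nonlinear least-squares solver (e.g. Levenberg-Marquardt) minimizes this residual over the size, shape, and pose parameters.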

    Perception of 3-D Surfaces from 2-D Contours

    Inference of 3-D shape from 2-D contours in a single image is an important problem in machine vision. We survey classes of techniques proposed in the past and provide a critical analysis. We propose that two kinds of symmetry in figures, known as parallel and skew symmetry, give significant information about surface shape for a variety of objects. We derive the constraints imposed by these symmetries and show how to use them to infer 3-D shape. We discuss zero Gaussian curvature (ZGC) surfaces in depth and show results on the recovery of surface orientation for various ZGC surfaces. © 1993 IEEE.
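    As a small, self-contained illustration of the parallel-symmetry notion (tangents at corresponding contour points are parallel), here is a Python sketch. It assumes point-wise correspondence by index along equally sampled contours, which is a simplification of how correspondences are actually established.

```python
import numpy as np

def tangent_angles(contour):
    """Tangent angles along a polyline contour of shape (N, 2)."""
    d = np.diff(contour, axis=0)
    return np.arctan2(d[:, 1], d[:, 0])

def is_parallel_symmetric(c1, c2, tol=1e-3):
    """Parallel-symmetry condition: tangents at corresponding points
    are parallel (equal modulo pi)."""
    diff = np.mod(tangent_angles(c1) - tangent_angles(c2), np.pi)
    diff = np.minimum(diff, np.pi - diff)
    return bool(np.all(diff < tol))

# A curve and a translated, uniformly scaled copy are parallel-symmetric.
t = np.linspace(0.0, np.pi, 50)
c1 = np.column_stack([t, np.sin(t)])
c2 = 1.5 * c1 + np.array([2.0, 0.5])
assert is_parallel_symmetric(c1, c2)
```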

    Flaw reconstruction in NDE using a limited number of x-ray radiographic projections

    One of the major problems in nondestructive evaluation (NDE) is the evaluation of flaw sizes and locations in a limited-inspectability environment. In NDE x-ray radiography, this frequently occurs when the geometry of the part under test does not allow x-ray penetration in certain directions. At other times, the inspection setup in the field does not allow inspection at all angles around the object. This dissertation presents a model-based reconstruction technique that requires a small number of x-ray projections from one side of the object under test. Estimating and reconstructing model parameters, rather than the flaw distribution itself, requires much less information, thereby reducing the number of required projections. Crack-like flaws are modeled as piecewise linear curves (connected points) and are reconstructed stereographically from at least two projections by matching corresponding endpoints of the linear segments. Volumetric flaws are modeled as ellipsoids and elliptical slices through ellipsoids. The elliptical principal axis lengths, orientation angles, and locations are estimated by fitting a forward model to the projection data. The fitting procedure is highly nonlinear and requires stereographic projections to obtain initial estimates of the model parameters. The methods are tested on both simulated and experimental data. Comparisons are made with models from the field of stereology. Finally, an analysis of reconstruction errors is presented for both models.
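    The stereographic step for crack-like flaws amounts to triangulating matched segment endpoints from two or more views. Below is a minimal Python sketch assuming an orthographic (parallel-beam) projection model and already-matched endpoints; the matrices and geometry are illustrative, not taken from the dissertation.

```python
import numpy as np

def triangulate_point(P_list, u_list):
    """Least-squares 3-D point from two or more parallel-beam projections.

    P_list: list of (2, 3) projection matrices mapping a 3-D point to
    2-D detector coordinates; u_list: matching (2,) measurements.
    """
    A = np.vstack(P_list)          # (2k, 3) stacked projection rows
    b = np.concatenate(u_list)     # (2k,) stacked measurements
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

# Example: two views of a crack endpoint at (1.0, 2.0, 3.0).
P0 = np.array([[1.0, 0.0, 0.0],
               [0.0, 1.0, 0.0]])            # view along +z
c, s = np.cos(np.pi / 6), np.sin(np.pi / 6)
P1 = np.array([[c, 0.0, s],
               [0.0, 1.0, 0.0]])            # view rotated 30 deg about y
x_true = np.array([1.0, 2.0, 3.0])
x_hat = triangulate_point([P0, P1], [P0 @ x_true, P1 @ x_true])
```

    With a cone-beam geometry the projection is no longer linear in the 3-D coordinates, but the same least-squares idea applies after an appropriate reformulation.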

    Statistical image reconstruction for quantitative computed tomography

    Statistical iterative reconstruction (SIR) algorithms for x-ray computed tomography (CT) have the potential to reconstruct images with less noise and systematic error than the conventional filtered backprojection (FBP) algorithm. More accurate reconstruction algorithms are important for reducing imaging dose and for a wide range of quantitative CT applications. The work presented herein investigates some potential advantages of one such statistically motivated algorithm, called Alternating Minimization (AM). A simulation study is used to compare the tradeoff between noise and resolution in images reconstructed with the AM and FBP algorithms. The AM algorithm is employed with an edge-preserving penalty function, which is shown to result in images with contrast-dependent resolution. The AM algorithm always reconstructed images with less image noise than the FBP algorithm. Compared to previous studies in the literature, this is the first work to clearly illustrate that the reported noise advantage when using edge-preserving penalty functions can be highly dependent on the contrast of the object used for quantifying resolution. A polyenergetic version of the AM algorithm, which incorporates knowledge of the scanner’s x-ray spectrum, is then commissioned from data acquired on a commercially available CT scanner. Homogeneous cylinders are used to assess the absolute accuracy of the polyenergetic AM algorithm and to compare systematic errors to conventional FBP reconstruction. Methods to estimate the x-ray spectrum, model the bowtie filter, and measure scattered radiation are outlined, which support AM reconstruction to within 0.5% of the expected ground truth. The polyenergetic AM algorithm reconstructs the cylinders with less systematic error than FBP, in terms of better image uniformity and less object-size dependence. Finally, the accuracy of a post-processing dual-energy CT (pDECT) method to non-invasively measure a material’s photon cross-section information is investigated. Data is acquired on a commercial scanner for materials of known composition. Since the pDECT method has been shown to be highly sensitive to reconstructed image errors, both FBP and polyenergetic AM reconstruction are employed. Linear attenuation coefficients are estimated with residual errors of around 1% for energies of 30 keV to 1 MeV, with errors rising to 3%-6% at lower energies down to 10 keV. In the ideal phantom geometry used here, the main advantage of AM reconstruction is less random cross-section uncertainty due to the improved noise performance.
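    The AM algorithm itself alternately minimizes an information-divergence objective, and its update equations are beyond a short sketch. The role of an edge-preserving penalty in statistical reconstruction can, however, be illustrated with a generic penalized weighted-least-squares gradient scheme. The Python sketch below is a stand-in, not the AM algorithm; all names, weights, and step sizes are illustrative.

```python
import numpy as np

def huber_grad(t, delta):
    """Derivative of the Huber edge-preserving penalty."""
    return np.clip(t, -delta, delta)

def sir_pwls(A, y, w, beta, delta, n_iter=200, step=1e-3):
    """Generic penalized weighted-least-squares reconstruction:
    minimize 0.5 * sum(w * (A x - y)^2) + beta * Huber(D x),
    where D takes differences of neighboring pixels (1-D here)."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad_data = A.T @ (w * (A @ x - y))   # weighted data-fit gradient
        g = huber_grad(np.diff(x), delta)     # penalty on neighbor jumps
        grad_pen = np.zeros_like(x)
        grad_pen[:-1] -= g
        grad_pen[1:] += g
        x -= step * (grad_data + beta * grad_pen)
    return x

# Tiny demo: with A = I this reduces to Huber-regularized edge-preserving
# denoising of a noisy step profile.
rng = np.random.default_rng(1)
truth = np.repeat([0.0, 1.0], 50)
y = truth + 0.1 * rng.standard_normal(100)
x_hat = sir_pwls(np.eye(100), y, np.ones(100), beta=2.0, delta=0.05,
                 n_iter=500, step=0.05)
```

    The key behavior the dissertation probes is visible even here: the penalty smooths small (sub-delta) differences strongly but penalizes large jumps only linearly, so resolution becomes dependent on local contrast.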

    3D Object Modeling and Recognition Using Local Affine-Invariant Image Descriptors and Multi-View Spatial Constraints

    This article introduces a novel representation for three-dimensional (3D) objects in terms of local affine-invariant descriptors of their images and the spatial relationships between the corresponding surface patches. Geometric constraints associated with different views of the same patches under affine projection are combined with a normalized representation of their appearance to guide matching and reconstruction, allowing the acquisition of true 3D affine and Euclidean models from multiple unregistered images, as well as their recognition in photographs taken from arbitrary viewpoints. The proposed approach does not require a separate segmentation stage, and it is applicable to highly cluttered scenes. Modeling and recognition results are presented.
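    The appearance side of the matching reduces to nearest-neighbor search over descriptor vectors. Below is a minimal Python sketch of a generic descriptor-matching step with a distance-ratio test, a common heuristic for rejecting ambiguous matches; the article additionally enforces multi-view geometric constraints, which are not shown here.

```python
import numpy as np

def match_descriptors(d1, d2, ratio=0.8):
    """Nearest-neighbor descriptor matching with a distance-ratio test.

    d1: (N, D) and d2: (M, D) descriptor arrays. Keeps pairs (i, j)
    whose best match is clearly better than the second best."""
    dists = np.linalg.norm(d1[:, None, :] - d2[None, :, :], axis=2)
    order = np.argsort(dists, axis=1)
    best, second = order[:, 0], order[:, 1]
    idx = np.arange(len(d1))
    keep = dists[idx, best] < ratio * dists[idx, second]
    return [(i, best[i]) for i in np.flatnonzero(keep)]

# Demo: noisy copies of ten descriptors match back to their originals.
rng = np.random.default_rng(0)
d2 = rng.standard_normal((100, 128))
d1 = d2[:10] + 0.05 * rng.standard_normal((10, 128))
matches = match_descriptors(d1, d2)   # ~[(0, 0), (1, 1), ...]
```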

    Metamodel-based uncertainty quantification for the mechanical behavior of braided composites

    The main design requirement for any high-performance structure is minimal dead weight. Producing lighter structures for the aerospace and automotive industries directly improves fuel efficiency and, hence, reduces cost. For wind energy, lighter blades allow larger rotors and, consequently, better performance. Prosthetic implants for missing body parts and athletic equipment such as rackets and sticks should also be lightweight for augmented functionality. Depending on the application, additional demands can include improved fatigue strength and damage tolerance, crashworthiness, and temperature and corrosion resistance. Fiber-reinforced composite materials lie at the intersection of all the above requirements, since they offer competitive stiffness and ultimate strength at much lower weight than metals, as well as high optimization and design potential due to their versatility. Braided composites are a special category in which continuous fiber bundles are interlaced around a preform. The automated braiding manufacturing process allows simultaneous material-structure assembly and, therefore, high-rate production with minimal material waste. The multi-step material processing and the intrinsic heterogeneity are the basic origins of the variability observed during mechanical characterization and operation of composite end-products. Conservative safety factors are applied during the design process to account for these uncertainties, even though stochastic modeling approaches lead to more rational estimates of structural safety and reliability. Such approaches require statistical modeling of the uncertain parameters, which is quite expensive to perform experimentally. A robust virtual uncertainty quantification framework is presented, able to integrate material and geometric uncertainties of different natures and to statistically assess the response variability of braided composites in terms of effective properties. Information-passing multiscale algorithms are employed for high-fidelity predictions of stiffness and strength. To bypass the numerical cost of the repeated multiscale model evaluations required by the probabilistic approach, smart and efficient solutions are needed. Surrogate models are therefore trained to map manifolds at different scales and eventually substitute for the finite element models. The use of machine learning is viable for uncertainty quantification, optimization, and reliability applications of textile materials, but it is not straightforward for failure responses with complex response surfaces. Novel techniques based on variable-fidelity data and hybrid surrogate models are also integrated. Uncertain parameters are classified according to their significance for the corresponding response via variance-based global sensitivity analysis. The random properties can be quantified in terms of mean and variance by inverse approaches based on Bayesian inference. All stochastic and machine learning methods in the framework are non-intrusive and data-driven, ensuring direct extension to more load cases and different materials. Moreover, experimental validation of the adopted multiscale models is presented, and an application of stochastic recreation of random textile yarn distortions based on computed-tomography data is demonstrated.
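    As a minimal sketch of the surrogate-based uncertainty propagation idea, the Python example below trains a Gaussian process surrogate (via scikit-learn) on a handful of "expensive" model runs and then propagates input uncertainty by Monte Carlo on the cheap surrogate. The toy function stands in for a multiscale finite element evaluation; all names, bounds, and distributions are illustrative, not the thesis's models.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

def expensive_model(x):
    """Placeholder for a multiscale FE evaluation (e.g. an effective
    stiffness as a function of two uncertain inputs)."""
    return np.sin(3.0 * x[:, 0]) + 0.5 * x[:, 1] ** 2

# 1. Train the surrogate on a small design of experiments.
X_train = rng.uniform(0.0, 1.0, size=(30, 2))
y_train = expensive_model(X_train)
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), normalize_y=True)
gp.fit(X_train, y_train)

# 2. Propagate input uncertainty by Monte Carlo on the cheap surrogate.
X_mc = rng.uniform(0.0, 1.0, size=(100_000, 2))
y_mc = gp.predict(X_mc)
print("mean:", y_mc.mean(), "std:", y_mc.std())
```

    The same surrogate can feed variance-based sensitivity indices or Bayesian inverse procedures, since it makes large Monte Carlo sample sizes affordable.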

    Model-Based Environmental Visual Perception for Humanoid Robots

    The visual perception of a robot should answer two fundamental questions: What? and Where? To reply to these questions properly and efficiently, it is essential to establish a bidirectional coupling between external stimuli and internal representations. This coupling links the physical world with the inner abstraction models through sensor transformation, recognition, matching, and optimization algorithms. The objective of this PhD thesis is to establish this sensor-model coupling.

    Fast Determination of Soil Behavior in the Capillary Zone Using Simple Laboratory Tests

    INE/AUTC 13.1