11 research outputs found

    3D reconstruction of point clouds using multi-view orthographic projections

    A method to reconstruct 3D point clouds from multi-view orthographic projections is examined. The point clouds are generated by a stochastic process, implemented with a Gibbs sampler, designed to mimic microcalcification formation in breast tissue. Orthographic projections of the point clouds can be generated from any desired orientation, and the volumetric intersection method is employed to perform the reconstruction from these projections. The reconstruction may yield erroneous points; the types of these errors are analyzed along with their causes, and a performance measure based on a linear combination is devised. Experiments were designed to investigate the effect of the number of projections and the number of points on reconstruction performance. Increasing the number of projections and decreasing the number of points yielded reconstructions more similar to the original point clouds; however, beyond a certain number of projections the reconstructions do not improve considerably. The method serves well for finding the locations of the original points.
    Topçu, Osman (M.S. thesis)
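    The volumetric intersection step described above can be sketched briefly: a candidate cell survives only if its projection is occupied in every orthographic silhouette. The grid resolution, axis-aligned views, and all names below are illustrative assumptions, not the thesis's implementation.

```python
import numpy as np

def orthographic_projection(occupancy, axis):
    """Binary silhouette obtained by collapsing a 3D occupancy grid along one axis."""
    return occupancy.any(axis=axis)

def volumetric_intersection(projections):
    """Keep a voxel only if every silhouette contains its projection.

    projections: dict mapping projection axis (0, 1, 2) -> 2D boolean silhouette.
    """
    n = next(iter(projections.values())).shape[0]
    recon = np.ones((n, n, n), dtype=bool)
    for axis, silhouette in projections.items():
        # Broadcast each silhouette back along its projection axis and intersect.
        recon &= np.expand_dims(silhouette, axis)
    return recon

# Toy example: a few random points on an 8^3 grid.
rng = np.random.default_rng(0)
true = np.zeros((8, 8, 8), dtype=bool)
idx = rng.integers(0, 8, size=(5, 3))
true[idx[:, 0], idx[:, 1], idx[:, 2]] = True

projs = {ax: orthographic_projection(true, ax) for ax in (0, 1, 2)}
recon = volumetric_intersection(projs)
```

    Every original point survives, but the reconstruction may contain extra points, exactly the kind of erroneous reconstructed points the abstract analyzes; adding more (non-axis-aligned) views prunes them further.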

    3D non-invasive inspection of the skin lesions by close-range and low-cost photogrammetric techniques

    (Research groups: CCI in collaboration with HLS, School of Pharmacy. Open Access article.) In dermatology, one of the most common causes of skin abnormality is an unusual change in skin lesion structure, which may exhibit very subtle physical deformation of its 3D shape. However, the geometric sensitivity of current cost-effective inspection and measurement methods may not be sufficient to detect such small progressive changes in lesion structure at micro-scale. The proposed method offers a low-cost, non-invasive, compact solution that uses close-range photogrammetric imaging techniques to build a 3D surface model for continuous observation of subtle changes in skin lesions and other features.
    https://www.ias-iss.org/ojs/IAS/article/view/1730/105
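    At the heart of any close-range photogrammetric surface model is multi-view triangulation. A minimal sketch of linear (DLT) two-view triangulation is below; the camera intrinsics, poses, and all names are invented for illustration and are not the paper's calibration or pipeline.

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Linear (DLT) two-view triangulation: solve the homogeneous system A X = 0 via SVD."""
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]                       # null-space vector = homogeneous 3D point
    return X[:3] / X[3]

def project(P, X):
    h = P @ np.append(X, 1.0)
    return h[:2] / h[2]

# Two toy cameras: identity pose and a 1-unit baseline along x (illustrative values).
K = np.array([[800., 0., 320.], [0., 800., 240.], [0., 0., 1.]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.], [0.], [0.]])])

X_true = np.array([0.2, -0.1, 5.0])
X_hat = triangulate_dlt(P1, P2, project(P1, X_true), project(P2, X_true))
print(X_hat)  # ≈ [0.2, -0.1, 5.0]
```

    With noise-free correspondences the point is recovered exactly; in practice many views and a bundle-adjustment refinement are used to reach the micro-scale sensitivity the paper targets.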

    A System for 3D Shape Estimation and Texture Extraction via Structured Light

    Shape estimation is a crucial problem in computer vision, robotics, and engineering. This thesis explores a shape-from-structured-light (SFSL) approach using a pyramidal laser projector, together with an application to texture extraction. This particular SFSL system is chosen for its hardware simplicity and efficient software. The system can estimate the 3D shape of both static and dynamic objects by relying on a fixed pattern. Novel calibration schemes were developed to eliminate the need for precision hardware alignment and to remove human error. In addition, an appropriate choice of system geometry reduces the usual correspondence problem to a labeling problem. Simulations and experiments verify the effectiveness of the built system. Finally, texture extraction is performed by interpolating and resampling sparse range estimates, and then flattening the 3D triangulated graph into a 2D triangulated graph via graph and manifold methods.
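    The basic depth recovery in such a structured-light system is active triangulation: the known projector geometry turns the image coordinate of a laser spot into a depth. The 2D sketch below assumes a camera at the origin looking along +z and a projector offset by a baseline; the names and geometry are illustrative, not the thesis's pyramidal-projector calibration.

```python
import math

def triangulate_depth(x_pix, f, baseline, theta):
    """Depth of a laser spot seen at image coordinate x_pix (2D active triangulation).

    The camera at the origin looks along +z; the projector sits at (baseline, 0)
    and emits a ray tilted by theta toward the optical axis. Intersecting the
    camera ray with the laser ray gives z = baseline / (x_pix/f + tan(theta)).
    """
    return baseline / (x_pix / f + math.tan(theta))

# Synthetic check: place a surface point at depth 2 m and re-derive its depth.
f, baseline, theta = 1000.0, 0.5, math.atan(0.1)   # illustrative parameters
z_true = 2.0
x_world = baseline - z_true * math.tan(theta)       # where the laser ray lands
x_pix = f * x_world / z_true                        # its image coordinate
z = triangulate_depth(x_pix, f, baseline, theta)
print(z)  # ≈ 2.0
```

    With a multi-beam (pyramidal) pattern, the same relation is applied per beam once the labeling problem, which spot belongs to which beam, has been solved.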

    Orientation computation of an inclined textured plane: accuracy and performances

    This paper presents a method for computing the orientation of an inclined textured plane from a single view. The computation proceeds in two steps. First, a map of local scales is built by a wavelet decomposition of the image of the plane. This map is then interpolated using the theoretical equation of local scale variation, yielding feature values from which the tilt and slant angles are computed. A theoretical study of the method's precision shows that the tilt angle is recovered to about one degree, while the slant angle is recovered only to about five degrees, and only when the slant exceeds forty degrees. Computing the slant angle also requires knowledge of the camera parameters: erroneous parameter values produce a maximum slant error in the 40°–50° range, precisely where the method is otherwise at its best. The theoretical study was validated on synthetic images and on a database of one hundred images of real textures with different tilt and slant angles, acquired with a calibrated camera; the results agree with the theoretical predictions.
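    The role of the local-scale map can be illustrated without the wavelet machinery: for an inclined plane, local scale varies systematically across the image, and the direction of that variation gives the tilt. The sketch below fits a plane to a synthetic scale map and reads off the tilt; it is a toy stand-in for the paper's interpolation of the theoretical scale-variation equation, and all names are invented.

```python
import numpy as np

def tilt_from_scale_map(scale_map):
    """Least-squares plane fit s ≈ a*x + b*y + c; the tilt is atan2(b, a)."""
    h, w = scale_map.shape
    yy, xx = np.mgrid[0:h, 0:w]
    A = np.stack([xx.ravel(), yy.ravel(), np.ones(h * w)], axis=1)
    a, b, _ = np.linalg.lstsq(A, scale_map.ravel(), rcond=None)[0]
    return np.arctan2(b, a)

# Synthetic scale map whose scales increase along a 30-degree direction.
h, w = 64, 64
yy, xx = np.mgrid[0:h, 0:w]
tilt_true = np.deg2rad(30.0)
s = np.cos(tilt_true) * xx + np.sin(tilt_true) * yy
est = tilt_from_scale_map(s)
print(np.rad2deg(est))  # ≈ 30
```

    Recovering the slant additionally requires the rate of scale change together with the camera parameters, which is why slant precision degrades when those parameters are in error.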

    The visual representation of texture

    This research is concerned with texture: a source of visual information that has motivated a huge amount of psychophysical and computational research. This thesis questions how useful the accepted view of texture perception is. From a theoretical point of view, work to date has largely avoided two critical aspects of a computational theory of texture perception. Firstly, what is texture? Secondly, what is an appropriate representation for texture? This thesis argues that a task-dependent definition of texture is necessary, and proposes a multi-local, statistical scheme for representing texture orientation. Human performance on a series of psychophysical orientation discrimination tasks is compared to specific predictions of the scheme. The first set of experiments investigates observers' ability to directly derive statistical estimates from texture. An analogy is reported between the way texture statistics are derived and the visual processing of spatio-luminance features. The second set of experiments concerns the way texture elements are extracted from images (an example of the generic grouping problem in vision). The use of highly constrained experimental tasks, typically texture orientation discriminations, allows simple statistical criteria to be formulated for setting critical parameters of the model (such as the spatial scale of analysis). It is shown that schemes based on isotropic filtering and symbolic matching do not suffice for performing this grouping, but that the proposed scheme, based on oriented mechanisms, does. Taken together these results suggest a view of visual texture processing not as a disparate collection of processes, but as a general strategy for deriving statistical representations of images common to a range of visual tasks.
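    The general idea of a statistical orientation representation built from oriented mechanisms can be sketched with a standard technique: pooling local oriented responses (here, image gradients) into a structure tensor whose dominant eigenvector gives the prevailing orientation. This is a generic computer-vision stand-in, not the thesis's specific model.

```python
import numpy as np

def dominant_orientation(img):
    """Dominant local orientation pooled over the image via the structure tensor.

    Returns the angle (radians, mod pi) of the dominant gradient direction.
    """
    gy, gx = np.gradient(img.astype(float))     # oriented local responses
    jxx = (gx * gx).sum()
    jyy = (gy * gy).sum()
    jxy = (gx * gy).sum()
    # Closed-form eigenvector angle of the 2x2 structure tensor.
    return 0.5 * np.arctan2(2.0 * jxy, jxx - jyy)

# Synthetic grating with vertical stripes: all gradient energy is horizontal.
h = w = 64
yy, xx = np.mgrid[0:h, 0:w]
grating = np.sin(2.0 * np.pi * xx / 8.0)
theta = dominant_orientation(grating)
print(np.rad2deg(theta))  # ≈ 0 (horizontal gradient direction)
```

    An isotropic (orientation-blind) filter would pool the same energy without the jxy and jxx - jyy terms and could not distinguish these orientations, which parallels the thesis's argument for oriented mechanisms.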

    Structure from Motion on Textures: Theory and Application to Calibration

    This dissertation introduces new mathematical constraints that enable us, for the first time, to investigate the correspondence problem using texture rather than points and lines. These three multilinear constraints are formulated on parallel equidistant lines embedded in a plane. We choose these sets of parallel lines as proxies for Fourier harmonics embedded on a plane, a sort of "texture atom". From these texture atoms we can build up arbitrarily textured surfaces in the world. If we decompose these textures in a Fourier sense rather than as points and lines, we can use these new constraints in place of the standard multifocal constraints such as the epipolar or trifocal constraints. We propose some mechanisms for a possible feedback solution to the correspondence problem. As the major application of these constraints, we describe a multicamera calibration system written in C and MATLAB which will be made available to the public. We describe the operation of the program and give some preliminary results.
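    The "texture atom" intuition, that a set of parallel equidistant lines behaves like a single Fourier harmonic, can be seen in one dimension: the line spacing appears as a single spectral peak. The toy sketch below recovers the spacing from an FFT peak; it only illustrates the Fourier view of the atoms, not the dissertation's multilinear constraints.

```python
import numpy as np

# A 1D slice across parallel equidistant lines is (ideally) a pure harmonic.
n, spacing = 256, 16
x = np.arange(n)
signal = np.cos(2.0 * np.pi * x / spacing)

# The line spacing shows up as a single peak in the magnitude spectrum.
spectrum = np.abs(np.fft.rfft(signal))
k = spectrum[1:].argmax() + 1      # skip the DC bin
print(n / k)  # recovered spacing = 16.0
```

    Under projection, the frequency and phase of this harmonic change view to view, which is the quantity the texture-based constraints relate across cameras instead of point or line correspondences.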

    Shape, motion, and inertial parameter estimation of space objects using teams of cooperative vision sensors

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Mechanical Engineering, February 2005. Includes bibliographical references (leaves 133-140).
    Future space missions are expected to use autonomous robotic systems to carry out a growing number of tasks. These tasks may include the assembly, inspection, and maintenance of large space structures; the capture and servicing of satellites; and the redirection of space debris that threatens valuable spacecraft. Autonomous robotic systems will require substantial information about the targets with which they interact, including their motions, dynamic model parameters, and shape. However, this information is often not available a priori and must therefore be estimated in orbit. This thesis develops a method for simultaneously estimating the dynamic state, model parameters, and geometric shape of arbitrary space targets, using information gathered from range imaging sensors. The method exploits two key features of this application: (1) the dynamics of targets in space are highly deterministic and can be accurately modeled; and (2) several sensors will be available to provide information from multiple viewpoints. These features enable an estimator design that does not rely on feature detection, model matching, optical flow, or other computation-intensive pixel-level calculations, and is therefore robust to the harsh lighting and sensing conditions found in space. They also enable an estimator design that can be implemented in real-time on space-qualified hardware. The general solution approach consists of three parts that effectively decouple spatial- and time-domain estimation. The first part, referred to as kinematic data fusion, condenses detailed range images into coarse estimates of the target's high-level kinematics (position, attitude, etc.). A Kalman filter then uses the high-fidelity dynamic model to refine these estimates and extract the full dynamic state and model parameters of the target. With an accurate understanding of the target's motions, shape estimation reduces to the stochastic mapping of a static scene. The thesis develops the estimation architecture in the context of both rigid and flexible space targets. Simulations and experiments demonstrate the potential of the approach and its feasibility in practical systems.
    by Matthew D. Lichter (Ph.D.)
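    The predict/update structure of the refinement stage can be sketched with a minimal constant-velocity Kalman filter smoothing noisy 1D attitude readings. The thesis's filter is far richer (full rigid-body dynamics, inertial parameters, multiple sensors); the model, noise levels, and names below are illustrative only.

```python
import numpy as np

def kalman_1d(zs, dt=0.1, q=1e-4, r=0.25):
    """Constant-velocity Kalman filter over scalar position measurements zs."""
    F = np.array([[1.0, dt], [0.0, 1.0]])    # dynamic model: x_{k+1} = F x_k
    H = np.array([[1.0, 0.0]])               # we observe position only
    Q = q * np.eye(2)                        # process noise covariance
    x = np.zeros(2)                          # state: [position, velocity]
    P = np.eye(2)
    out = []
    for z in zs:
        x = F @ x                            # predict with the dynamic model
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + r                  # innovation covariance
        K = (P @ H.T) / S                    # Kalman gain, shape (2, 1)
        x = x + (K * (z - H @ x)).ravel()    # update with the measurement
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0])
    return np.array(out)

# Steadily rotating target observed through heavy sensor noise.
rng = np.random.default_rng(1)
t = np.arange(200) * 0.1
truth = 0.5 * t
zs = truth + rng.normal(0.0, 0.5, t.size)
est = kalman_1d(zs)
```

    Because the dynamic model matches the true motion, the filtered error is well below the raw sensor noise, which is the sense in which a deterministic dynamics model lets coarse kinematic estimates be refined without pixel-level computation.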