
    Computerized Analysis of Magnetic Resonance Images to Study Cerebral Anatomy in Developing Neonates

    The study of cerebral anatomy in developing neonates is of great importance for the understanding of brain development during the early period of life. This dissertation therefore focuses on three challenges in the modelling of cerebral anatomy in neonates during brain development. The methods that have been developed all use Magnetic Resonance Images (MRI) as source data. To facilitate study of vascular development in the neonatal period, a set of image analysis algorithms is developed to automatically extract and model cerebral vessel trees. The whole process consists of cerebral vessel tracking from automatically placed seed points, vessel tree generation, and vasculature registration and matching. These algorithms have been tested on clinical Time-of-Flight (TOF) MR angiographic datasets. To facilitate study of the neonatal cortex, a complete cerebral cortex segmentation and reconstruction pipeline has been developed. Segmentation of the neonatal cortex is not effectively done by existing algorithms designed for the adult brain because the contrast between grey and white matter is reversed, which causes voxels containing tissue mixtures to be incorrectly labelled by conventional methods. The neonatal cortical segmentation method that has been developed is based on a novel expectation-maximization (EM) method with explicit correction for mislabelled partial volume voxels. Based on the resulting cortical segmentation, an implicit surface evolution technique is adopted for the reconstruction of the cortex in neonates. The performance of the method is investigated through a detailed landmark study. To facilitate study of cortical development, a cortical surface registration algorithm is developed. The method first inflates extracted cortical surfaces and then performs a non-rigid surface registration using free-form deformations (FFDs) to remove residual misalignment. Validation experiments using data labelled by an expert observer demonstrate that the method can capture local changes and follow the growth of specific sulci.
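The core of such a tissue-classification step can be sketched as a plain two-class Gaussian-mixture EM on voxel intensities. This is a minimal sketch only: the dissertation's method additionally detects and corrects mislabelled partial-volume voxels, which this toy version omits, and all intensity values and class parameters below are synthetic assumptions.

```python
import numpy as np

def em_two_class(x, iters=50):
    """Two-class Gaussian mixture EM (e.g. grey vs white matter intensities).
    Illustrative only: no partial-volume correction is performed."""
    mu = np.array([x.min(), x.max()], dtype=float)   # crude initialisation
    var = np.array([x.var(), x.var()])
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: posterior responsibility of each class for each voxel
        p = pi * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
        r = p / p.sum(axis=1, keepdims=True)
        # M-step: re-estimate class parameters from the responsibilities
        n = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / n
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n
        pi = n / len(x)
    return mu, var, r

# Synthetic "voxel" intensities: two tissue classes with overlapping Gaussians
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(40, 5, 2000),    # assumed "grey matter" class
                    rng.normal(80, 5, 2000)])   # assumed "white matter" class
mu, var, r = em_two_class(x)
```

The responsibilities `r` give the soft segmentation; a partial-volume-aware variant would inspect voxels whose responsibilities are close to 0.5 and relabel them explicitly.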

    Quantifying error introduced by iterative closest point image registration

    Objectives: The aim of this paper was to quantify the analysis error introduced by iterative closest point (ICP) image registration. We also investigated whether a subsequent subtraction process can reduce process error. Methods: We tested metrology software and two 3D inspection software packages using calibration standards at 0.39 μm and 2.64 μm and mathematically perfect defects (softgauges) at 2 and 20 μm, on free-form surfaces of increasing complexity and area, both with and without registration. Errors were calculated as a percentage of the size of the defect being measured. Data were analysed in GraphPad Prism 9; normality testing and two-way ANOVA with post-hoc Tukey's tests were applied. Significance was inferred at p < 0.05. Results: ICP registration introduced errors from 0 % to 15.63 % of the defect size, depending on the surface complexity and the size of the defect. Significant differences were observed in analysis measurements between metrology and 3D inspection software and between different 3D inspection software packages; however, none showed clear superiority over another. Even in the absence of registration, defects at 0.39 μm and 2.64 μm produced substantial measurement error (13.39–77.50 % of defect size) when using 3D inspection software. Adding a data subtraction process reduced registration error to negligible levels (<1 %, independent of surface complexity or area). Conclusions: Commercial 3D inspection software introduces error during direct measurements below 3 μm. When using ICP registration, errors over 15 % of the defect size can be introduced regardless of the accuracy of adjacent registration surfaces. Analysis outputs between software packages are not consistently repeatable or comparable and do not utilise ISO standards. Subtracting the datasets and analysing the residual difference reduced error to negligible levels.
    Clinical significance: This paper quantifies the significant errors and inconsistencies introduced during the registration process even when 3D datasets are true and precise. This may impact research, diagnostics, and clinical performance. An additional data-processing step of scan subtraction can reduce this error but increases computational complexity.
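The register-then-subtract workflow can be illustrated with a rigid alignment followed by a residual subtraction. This is a sketch under stated assumptions: point correspondences are taken as known, so the closed-form Kabsch solution below is the inner step of ICP rather than a full iterative ICP, and the surface, defect size, and rigid offset are all synthetic.

```python
import numpy as np

def kabsch_align(P, Q):
    """Least-squares rigid alignment of point set P onto Q with known
    correspondences (the inner step of ICP)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return (R @ P.T).T + t

# Synthetic free-form reference surface sampled on a 20x20 grid
x, y = np.meshgrid(np.linspace(0, 1, 20), np.linspace(0, 1, 20))
z = 0.1 * np.sin(3 * x) * np.cos(2 * y)
ref = np.column_stack([x.ravel(), y.ravel(), z.ravel()])

# "Measured" copy: one 20 um deep defect plus a small rigid misplacement
meas = ref.copy()
meas[210, 2] -= 20e-6
theta = 0.01
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
meas = (Rz @ meas.T).T + np.array([5e-4, -3e-4, 2e-4])

aligned = kabsch_align(meas, ref)
residual = aligned[:, 2] - ref[:, 2]     # the subtraction step
defect_estimate = -residual.min()        # recovered defect depth
```

Because the defect enters the least-squares fit, it biases the recovered pose slightly; with many registration points the bias is tiny here, but the paper's point is that on real scans this registration error can reach a substantial fraction of the defect size, whereas analysing the subtracted residual largely removes it.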

    Experimental philosophy leading to a small scale digital data base of the conterminous United States for designing experiments with remotely sensed data

    Research using satellite remotely sensed data, even within any single scientific discipline, has often lacked a unifying principle or strategy with which to plan or integrate studies conducted over an area so large that exhaustive examination is infeasible, e.g., the U.S.A. However, such a series of studies would seem to be at the heart of what makes satellite remote sensing unique: the ability to select for study from among remotely sensed data sets distributed widely over the U.S. and over time, where the resources do not exist to examine all of them. Using this philosophical underpinning and the concept of a unifying principle, an operational procedure for developing a sampling strategy and formal testable hypotheses was constructed. The procedure is applicable across disciplines whenever the investigator restates the research question in symbolic form, i.e., quantifies it. The procedure is set within the statistical framework of general linear models. The dependent variable is any arbitrary function of remotely sensed data, and the independent variables are values or levels of factors which represent regional climatic conditions and/or properties of the Earth's surface. These factors are operationally defined as maps from the U.S. National Atlas (U.S.G.S., 1970). Eighty-five maps from the National Atlas, representing climatic and surface attributes, were automated by point counting at an effective resolution of one observation every 17.6 km (11 miles), yielding 22,505 observations per map. The maps were registered to one another in a two-step procedure producing a coarse, then fine-scale registration. After registration, the maps were iteratively checked for errors using manual and automated procedures. The error-free maps were annotated with identification and legend information and then stored as card images, one map to a file. A sampling design will be accomplished through a regionalization analysis of the National Atlas data base (presently being conducted). From this analysis, a map of homogeneous regions of the U.S.A. will be created and samples (LANDSAT scenes) assigned by region.
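In the general-linear-model framing described above, the analysis reduces to regressing a scene-level response on dummy-coded map factors. A minimal synthetic sketch follows; the factor names, class counts, and coefficient values are invented for illustration and stand in for the Atlas-derived factors and a remote-sensing-derived dependent variable.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200                                  # number of sampled scenes (assumed)
climate = rng.integers(0, 3, n)          # hypothetical 3-class climate factor
relief = rng.integers(0, 2, n)           # hypothetical 2-class relief factor

# Design matrix: intercept plus dummy-coded factor levels
X = np.column_stack([np.ones(n),
                     climate == 1, climate == 2,
                     relief == 1]).astype(float)

# Simulated dependent variable (e.g. a vegetation index summarised per scene)
beta_true = np.array([0.2, 0.1, 0.3, -0.05])
y = X @ beta_true + rng.normal(0, 0.01, n)

# Ordinary least squares fit of the general linear model
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
```

The fitted coefficients estimate how each factor level shifts the response, which is the hypothesis-testing machinery the procedure builds its sampling design around.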

    Region-based saliency estimation for 3D shape analysis and understanding

    The detection of salient regions is an important pre-processing step for many 3D shape analysis and understanding tasks. This paper proposes a novel method for saliency detection in 3D free-form shapes. Firstly, we smooth the surface normals with a bilateral filter, which smooths the surface while retaining local details. Secondly, a novel method is proposed for estimating the saliency value of each vertex. To this end, two new features are defined: the Retinex-based Importance Feature (RIF) and the Relative Normal Distance (RND), based on human visual perception characteristics and surface geometry respectively. Since the vertex-based method cannot guarantee that the detected salient regions are semantically continuous and complete, we propose to refine the saliency values over surface patches. The detected saliency is finally used to guide existing techniques for mesh simplification, interest point detection, and overlapping point cloud registration. Comparative studies based on real data from three publicly accessible databases show that the proposed method usually outperforms five selected state-of-the-art methods, both qualitatively and quantitatively, for saliency detection and 3D shape analysis and understanding.
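The first step, bilateral filtering of surface normals, can be sketched as a joint spatial/range Gaussian weighting over each vertex's neighbourhood. This is a minimal sketch: the kernel widths, the toy line geometry, and the fixed-radius neighbourhood definition are assumptions, and a real implementation would use the mesh's one-ring connectivity.

```python
import numpy as np

def bilateral_smooth_normals(verts, normals, neighbors, sigma_s=0.1, sigma_r=0.3):
    """Bilateral filter on vertex normals: the spatial weight keeps the
    neighbourhood local, the range weight (normal similarity) preserves
    sharp features while noise is averaged away."""
    out = np.empty_like(normals)
    for i, nbrs in enumerate(neighbors):
        d = np.linalg.norm(verts[nbrs] - verts[i], axis=1)
        w_s = np.exp(-d ** 2 / (2 * sigma_s ** 2))           # spatial kernel
        dn = np.linalg.norm(normals[nbrs] - normals[i], axis=1)
        w_r = np.exp(-dn ** 2 / (2 * sigma_r ** 2))          # range kernel
        n = ((w_s * w_r)[:, None] * normals[nbrs]).sum(axis=0)
        out[i] = n / np.linalg.norm(n)                       # re-normalise
    return out

# Toy data: vertices along a line whose true normals all point +z, plus noise
rng = np.random.default_rng(1)
verts = np.column_stack([np.linspace(0, 1, 30), np.zeros(30), np.zeros(30)])
normals = np.tile([0.0, 0.0, 1.0], (30, 1)) + rng.normal(0, 0.1, (30, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
neighbors = [list(range(max(0, i - 2), min(30, i + 3))) for i in range(30)]
smoothed = bilateral_smooth_normals(verts, normals, neighbors)
```

On this toy input the smoothed normals cluster noticeably closer to the true +z direction, while the range kernel would damp averaging across a genuine crease.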

    T-spline based unifying registration procedure for free-form surface workpieces in intelligent CMM

    With the development of the modern manufacturing industry, free-form surfaces are widely used in various fields, and the automatic detection of a free-form surface is an important function of future intelligent coordinate measuring machines (CMMs). To improve the intelligence of CMMs, a new visual system is designed based on the characteristics of CMMs. A unified model of the free-form surface is proposed based on T-splines, together with a discretization method for the T-spline surface model. Under this discretization, the position and orientation of the workpiece are recognized by point cloud registration. A high-accuracy evaluation method is proposed for comparing the measured point cloud with the T-spline surface model. The experimental results demonstrate that the proposed method has the potential to realize the automatic detection of different free-form surfaces and improve the intelligence of CMMs.
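The evaluation idea, the distance from each measured point to the parametric surface, can be sketched with a brute-force sampled projection. All numbers are illustrative assumptions: the analytic surface below stands in for a T-spline evaluator, and the paper's high-accuracy method would refine the nearest parameter by local optimisation rather than dense sampling.

```python
import numpy as np

def surface(u, v):
    """Stand-in analytic free-form surface; a T-spline evaluator would
    take its place in an actual CMM pipeline."""
    return np.stack([u, v, 0.05 * np.sin(2 * np.pi * u) * np.cos(np.pi * v)],
                    axis=-1)

def point_to_surface_distance(p, samples=1000):
    """Nearest distance from point p to a dense sampling of the surface.
    Brute force: fine for a sketch, too slow for production use."""
    u, v = np.meshgrid(np.linspace(0, 1, samples), np.linspace(0, 1, samples))
    S = surface(u, v).reshape(-1, 3)
    return np.linalg.norm(S - p, axis=1).min()

# A "measured" point lying 1e-3 above the surface at (u, v) = (0.3, 0.6)
p = surface(np.array(0.3), np.array(0.6)) + np.array([0.0, 0.0, 1e-3])
d = point_to_surface_distance(p)
```

The recovered distance approximates the point's true 1e-3 offset, up to the sampling resolution; tightening it is exactly where a parametric (T-spline) closest-point refinement pays off.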

    Review of the mathematical foundations of data fusion techniques in surface metrology

    The recent proliferation of engineered surfaces, including freeform and structured surfaces, is challenging current metrology techniques. Measurement using multiple sensors has been proposed to achieve enhanced benefits, mainly in terms of spatial frequency bandwidth, which a single sensor cannot provide. When using data from different sensors, a process of data fusion is required, and there is much active research in this area. In this paper, current data fusion methods and applications are reviewed, with a focus on the mathematical foundations of the subject. Common research questions in the fusion of surface metrology data are raised and potential fusion algorithms are discussed.
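One of the simplest fusion building blocks in this mathematical literature is inverse-variance (maximum-likelihood) weighting of two co-located measurements; a minimal sketch with invented numbers:

```python
def fuse(z1, var1, z2, var2):
    """Inverse-variance fusion of two sensor readings of the same quantity
    (e.g. a surface height): the ML estimate under independent Gaussian
    noise, weighting each reading by its precision."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    z = (w1 * z1 + w2 * z2) / (w1 + w2)   # fused estimate
    var = 1.0 / (w1 + w2)                 # fused variance
    return z, var

# Hypothetical readings: a noisy wide-bandwidth sensor and a precise one
z, var = fuse(10.2, 0.04, 9.8, 0.01)
```

The fused variance is always below either input variance, which is the formal sense in which combining sensors improves on what any single sensor provides.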

    Laser Deposition Cladding On-Line Inspection Using 3-D Scanner

    Laser deposition directly deposits metal cladding to fabricate and repair components. In order to finish the fabrication or repair, the 3-D shape of the deposition needs to be inspected so that it can be determined whether there is sufficient cladding to fabricate the part after the deposition process. In the present hybrid system in the Laser Aided Manufacturing Lab (LAMP) at the University of Missouri - Rolla, a CMM system is used to do the inspection. A CMM requires point-by-point contact, which is time consuming and difficult to plan for an irregular deposition geometry. The CMM is also a separate device, which requires removal of the part from the hybrid system and can thereby induce fixture errors. A 3-D scanner, by contrast, is a fast and accurate non-contact tool for measuring the 3-D shape of laser deposition cladding. In this paper, a prototype non-contact 3-D scanner approach has been implemented to inspect the free-form and complex parts built by laser deposition. Registration of the measured model and the 3-D CAD model allows comparison between the two models, enabling us to determine whether the deposition is sufficient before machining.