2,471 research outputs found

    Visualizing Cells in Three Dimensions Using Confocal Microscopy, Image Reconstruction and Isosurface Rendering: Application to Glial Cells in Mouse Central Nervous System

    This paper describes a general method for visualizing individual cells in intact tissue in three dimensions. The method involves immunostaining intact tissue to label specific cells, optically sectioning the stained tissue by laser scanning confocal microscopy, computationally reconstructing a three-dimensional image data set from the digitized confocal optical sections, delineating isosurfaces of specific intensity within the reconstructed image by a marching cubes algorithm to generate polygon meshes defining the boundaries of cells, and displaying individual cells, identified as three-dimensional objects enclosed by contiguous polygon meshes, using computer graphics techniques. Each of the components of this method has been described previously in conjunction with other applications. However, the combination of these techniques to visualize a variety of different individual cell types in three dimensions in intact tissue represents a new approach. To illustrate the application of this method, we have visualized three different glial cell types in mouse CNS tissue. Oligodendrocytes, specifically stained with antibody to myelin basic protein, were used as an example of cells labelled with an internal membrane antigen. Astrocytes, specifically stained with antibody to glial fibrillary acidic protein, were used as an example of cells labelled with a cytoplasmic antigen. Microglia, specifically stained with Mac.1 antibody, were used as an example of cells labelled with an external membrane antigen. The images that are generated contain remarkably detailed volumetric and textural information that is not obtainable by conventional imaging techniques.
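    As an illustration of the isosurface step described above, a marching cubes pass over a reconstructed intensity volume can be sketched with an off-the-shelf library; the snippet below uses scikit-image on a synthetic volume, and the volume shape and isovalue are assumptions for illustration, not values from the paper.

```python
import numpy as np
from skimage import measure

# Synthetic stand-in for a reconstructed confocal image stack:
# a smooth blob of "staining intensity" in a 64^3 volume.
z, y, x = np.mgrid[0:64, 0:64, 0:64]
volume = np.exp(-((x - 32)**2 + (y - 32)**2 + (z - 32)**2) / (2 * 10.0**2))

# Marching cubes: extract the polygon mesh of the isosurface at a chosen
# intensity level; verts and faces define the cell boundary as triangles.
verts, faces, normals, values = measure.marching_cubes(volume, level=0.5)

print(f"{len(verts)} vertices, {len(faces)} triangles on the isosurface")
```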

    Polylidar3D -- Fast Polygon Extraction from 3D Data

    Flat surfaces captured by 3D point clouds are often used for localization, mapping, and modeling. Dense point cloud processing has high computation and memory costs, making low-dimensional representations of flat surfaces such as polygons desirable. We present Polylidar3D, a non-convex polygon extraction algorithm which takes as input unorganized 3D point clouds (e.g., LiDAR data), organized point clouds (e.g., range images), or user-provided meshes. Non-convex polygons represent flat surfaces in an environment, with interior cutouts representing obstacles or holes. The Polylidar3D front-end transforms input data into a half-edge triangular mesh. This representation provides a common level of input data abstraction for subsequent back-end processing. The Polylidar3D back-end is composed of four core algorithms: mesh smoothing, dominant plane normal estimation, planar segment extraction, and finally polygon extraction. Polylidar3D is shown to be quite fast, making use of CPU multi-threading and GPU acceleration when available. We demonstrate Polylidar3D's versatility and speed with real-world datasets including aerial LiDAR point clouds for rooftop mapping, autonomous driving LiDAR point clouds for road surface detection, and RGBD cameras for indoor floor/wall detection. We also evaluate Polylidar3D on a challenging planar segmentation benchmark dataset. Results consistently show excellent speed and accuracy. Comment: 40 pages
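    The back-end's planar segment extraction rests on grouping mesh triangles whose normals agree with a dominant plane normal; the following is a minimal numpy sketch of that idea only, not the Polylidar3D API (the function name and tolerance are invented for illustration).

```python
import numpy as np

def planar_triangles(vertices, triangles, plane_normal, angle_tol_deg=10.0):
    """Toy version of one back-end step: keep mesh triangles whose normals
    agree with a dominant plane normal within an angular tolerance."""
    v0, v1, v2 = (vertices[triangles[:, i]] for i in range(3))
    n = np.cross(v1 - v0, v2 - v0)
    n /= np.linalg.norm(n, axis=1, keepdims=True)
    cos_tol = np.cos(np.radians(angle_tol_deg))
    return triangles[np.abs(n @ plane_normal) >= cos_tol]

# Two triangles: one lying in the z = 0 plane, one standing upright.
verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
tris = np.array([[0, 1, 2], [0, 1, 3]])
flat = planar_triangles(verts, tris, plane_normal=np.array([0.0, 0.0, 1.0]))
print(flat)  # only the z = 0 triangle survives
```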

    Software for full-color 3D reconstruction of the biological tissues internal structure

    Software for processing sets of full-color images of histological sections of biological tissue has been developed. We used histological sections obtained by the method of high-precision layer-by-layer grinding of frozen biological tissues. The software allows the image of the tissue to be restored for an arbitrary cross-section of the tissue sample. Thus, our method is designed to create a full-color 3D reconstruction of the biological tissue structure. The resolution of the 3D reconstruction is determined by the quality of the initial histological sections. The newly developed technology available to us provides a resolution of up to 5-10 μm in three dimensions. Comment: 11 pages, 8 figures
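    The core operation described above, resampling an arbitrary cross-section from a stack of registered full-color section images, can be sketched as linear interpolation over an RGB volume; the snippet below is an illustration with a random stand-in volume and assumed plane parameters, not the authors' software.

```python
import numpy as np
from scipy.ndimage import map_coordinates

# Stand-in for a full-color volume assembled from registered section images:
# axes are (z, y, x, rgb), values in [0, 1].
volume = np.random.rand(50, 200, 200, 3)

def oblique_slice(vol, origin, u, v, size=(200, 200)):
    """Resample an arbitrary planar cross-section through an RGB volume.
    origin is a (z, y, x) point on the plane; u and v are in-plane step vectors."""
    rows, cols = np.mgrid[0:size[0], 0:size[1]]
    coords = (origin[:, None, None]
              + rows[None] * u[:, None, None]
              + cols[None] * v[:, None, None])          # shape (3, H, W)
    channels = [map_coordinates(vol[..., c], coords, order=1) for c in range(3)]
    return np.stack(channels, axis=-1)

# A slice tilted slightly out of the native sectioning plane.
img = oblique_slice(volume,
                    origin=np.array([10.0, 0.0, 0.0]),
                    u=np.array([0.05, 1.0, 0.0]),   # mostly along y, drifting in z
                    v=np.array([0.0, 0.0, 1.0]))
print(img.shape)  # (200, 200, 3)
```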

    A unified formulation for circle and polygon concrete-filled steel tube columns under axial compression

    Current design practice for concrete-filled steel tube (CFST) columns uses different formulas for different section profiles to predict the axial load bearing capacity. Finding a unified formula that can be used in the design of columns with various sections, including solid, hollow, circular and polygonal sections, has long been a challenge and a practically important issue for researchers and design engineers. This has been driven by modern design requirements for continuous optimization of structures in terms of not only the use of materials, but also the topology of structural components. This paper extends the authors’ previous work [1] on a unified formulation of the axial load bearing capacity of circular hollow and solid CFST columns to now include hollow and solid CFST columns with regular polygonal sections. This is done by taking a circular section as a special case of a polygonal one. Finally, a unified formula is proposed for calculating the axial load bearing capacity of solid and hollow CFST columns with either circular or polygonal sections. In addition, laboratory tests on hollow circular and square CFST long columns are reported. These results are a useful addition to the very limited open literature on testing such columns, and also form part of the validation process for the proposed analytical formulas.
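    The abstract does not state the unified formula itself, but the underlying idea of treating a circular section as the limiting case of a regular polygon can be illustrated with an elementary section property; the snippet below only shows a regular n-gon's area converging to the circle's area and is not the proposed design equation.

```python
import math

def regular_polygon_area(n, R):
    """Area of a regular n-gon with circumradius R: 0.5 * n * R^2 * sin(2*pi/n)."""
    return 0.5 * n * R**2 * math.sin(2 * math.pi / n)

R = 0.2  # circumradius in metres (arbitrary illustrative value)
for n in (4, 8, 16, 64, 1024):
    print(n, regular_polygon_area(n, R))
print("circle", math.pi * R**2)  # the n -> infinity limit
```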

    Geometry Processing of Conventionally Produced Mouse Brain Slice Images

    Brain mapping research in most neuroanatomical laboratories relies on conventional processing techniques, which often introduce histological artifacts such as tissue tears and tissue loss. In this paper we present techniques and algorithms for automatic registration and 3D reconstruction of conventionally produced mouse brain slices in a standardized atlas space. This is achieved first by constructing a virtual 3D mouse brain model from annotated slices of the Allen Reference Atlas (ARA). Virtual re-slicing of the reconstructed model generates ARA-based slice images corresponding to the microscopic images of histological brain sections. These image pairs are aligned using a geometric approach through contour images. Histological artifacts in the microscopic images are detected and removed using Constrained Delaunay Triangulation before performing global alignment. Finally, non-linear registration is performed by solving Laplace's equation with Dirichlet boundary conditions. Our methods provide significant improvements over previously reported registration techniques for the tested slices in 3D space, especially on slices with significant histological artifacts. Further, as an application, we count the number of neurons in various anatomical regions using a dataset of 51 microscopic slices from a single mouse brain. This work represents a significant contribution to this subfield of neuroscience as it provides neuroanatomists with tools for analyzing and processing histological data. Comment: 14 pages, 11 figures
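    The non-linear registration step solves Laplace's equation with Dirichlet boundary conditions; a minimal finite-difference sketch of such a solve (for one scalar component of a deformation field on a small grid, with made-up boundary values, not the authors' implementation) is:

```python
import numpy as np

def solve_laplace_dirichlet(u0, interior, iters=5000):
    """Jacobi iteration for Laplace's equation on a 2D grid.
    u0 carries the prescribed Dirichlet boundary values; `interior` marks
    the pixels whose values are solved for."""
    u = u0.copy()
    for _ in range(iters):
        avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                      + np.roll(u, 1, 1) + np.roll(u, -1, 1))
        u[interior] = avg[interior]
    return u

# Toy example: one displacement component fixed to 0 on the left edge, 1 on the
# right edge, and a linear ramp on the top and bottom edges; the harmonic
# interior solution is then the same linear ramp.
n = 64
ramp = np.linspace(0.0, 1.0, n)
u0 = np.zeros((n, n))
u0[0, :] = ramp
u0[-1, :] = ramp
u0[:, -1] = 1.0
interior = np.zeros((n, n), dtype=bool)
interior[1:-1, 1:-1] = True
u = solve_laplace_dirichlet(u0, interior)
print(u[n // 2, ::16])  # approaches the linear ramp 0, 0.25, 0.51, 0.76
```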

    Curve Skeleton and Moments of Area Supported Beam Parametrization in Multi-Objective Compliance Structural Optimization

    This work addresses the end-to-end virtual automation of structural optimization up to the derivation of a parametric geometry model that can be used for application areas such as additive manufacturing or the verification of the structural optimization result with the finite element method. It is investigated in general whether a holistic design in structural optimization, achieved with the weighted sum method, can be automatically parameterized with curve skeletonization and cross-section regression to virtually verify the result and control the local size for additive manufacturing. In this paper, a holistic design is understood as a design that considers various compliances as an objective function. This parameterization uses the automated determination of beam parameters by so-called curve skeletonization with subsequent cross-section shape parameter estimation based on moments of area, especially for multi-objective optimized shapes. An essential contribution is the linking of the parameterization with the results of the structural optimization, e.g., to include properties such as boundary conditions, load conditions, sensitivities or even density variables in the curve skeleton parameterization. The parameterization focuses on guiding the skeletonization based on the information provided by the optimization and the finite element model. In addition, the cross-section detection considers circular, elliptical, and tensor product spline cross-sections that can be applied to various shape descriptors such as convolutional surfaces, subdivision surfaces, or constructive solid geometry. The shape parameters of these cross-sections are estimated using stiffness distributions, moments of area of 2D images, and convolutional neural networks with a loss function tailored to moments of area. Each final geometry is designed by extruding the cross-section along the appropriate curve segment of the beam and joining it to other beams using only unification operations. The focus of multi-objective structural optimization considering 1D, 2D and 3D elements is on cases that can be modeled by the Poisson equation and linear elasticity. This enables the development of designs in application areas such as thermal conduction, electrostatics, magnetostatics, potential flow, linear elasticity and diffusion, which can be optimized in combination or individually. Due to the simplicity of the cases defined by the Poisson equation, no experts are required, so that many conceptual designs can be generated and reconstructed by ordinary users with little effort. Specifically for 1D elements, element stiffness matrices for tensor product spline cross-sections are derived, which can be used to optimize a variety of lattice structures and automatically convert them into free-form surfaces. For 2D elements, non-local trigonometric interpolation functions are used, which should significantly increase the interpretability of the density distribution. To further improve the optimization, a parameter-free mesh deformation is embedded so that the compliances can be further reduced by locally shifting the node positions. Finally, the proposed end-to-end optimization and parameterization is applied to verify a linear elasto-static optimization result and to satisfy the local size constraints for manufacturing, by selective laser melting, a heat-transfer-optimized heat sink for a CPU.
For the elasto-static case, the parameterization is adjusted until a certain criterion (displacement) is satisfied, while for the heat transfer case, the manufacturing constraints are satisfied by automatically changing the local size with the proposed parameterization. This heat sink is then manufactured without manual adjustment and experimentally validated to limit the temperature of a CPU to a certain level.
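    One building block named above, the moments of area of a 2D cross-section image, is simple to compute directly; the sketch below does so for a binary pixel mask with unit pixel size, purely to illustrate the quantities the cross-section shape estimation is fitted to.

```python
import numpy as np

def area_moments(mask):
    """Zeroth, first and central second moments of area of a binary 2D image,
    treating each pixel as a unit square (dA = 1)."""
    ys, xs = np.nonzero(mask)
    A = len(xs)                        # area
    cx, cy = xs.mean(), ys.mean()      # centroid
    Ixx = ((ys - cy) ** 2).sum()       # second moment about the centroidal x-axis
    Iyy = ((xs - cx) ** 2).sum()       # second moment about the centroidal y-axis
    Ixy = ((xs - cx) * (ys - cy)).sum()
    return A, (cx, cy), (Ixx, Iyy, Ixy)

# Toy cross-section: a filled 20 x 40 pixel rectangle.
img = np.zeros((64, 64), dtype=bool)
img[22:42, 12:52] = True
print(area_moments(img))  # A = 800, centroid at (31.5, 31.5)
```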

    Linking Spatial Video and GIS

    Spatial Video is any form of geographically referenced videographic data. The forms in which it is acquired, stored and used vary enormously, as does the standard of accuracy in the spatial data and the quality of the video footage. This research deals with a specific form of Spatial Video where these data have been captured from a moving road-network survey vehicle. The spatial data are GPS sentences while the video orientation is approximately orthogonal and coincident with the direction of travel. GIS that use these data are usually bespoke standalone systems or third-party extensions to existing platforms. They specialise in using the video as a visual enhancement with limited spatial functionality and interoperability. While enormous amounts of these data exist, they do not have a generalised, cross-platform spatial data structure that is suitable for use within a GIS. The objectives of this research have been to define, develop and implement a novel Spatial Video data structure and demonstrate how this can achieve a spatial approach to the study of video. This data structure is called a Viewpoint and represents the capture location and geographical extent of each video frame. It is generalised to represent any form or format of Spatial Video. It is shown how a Viewpoint improves on existing data structure methodologies and how it can be theoretically defined in 3D space. A 2D implementation is then developed where Viewpoints are constructed from the spatial and camera parameters of each survey in the study area. A number of problems are defined and solutions provided towards the implementation of a post-processing system to calculate, index and store each video frame Viewpoint in a centralised spatial database. From this spatial database a number of geospatial analysis approaches are demonstrated that represent novel ways of using and studying Spatial Video based on the Viewpoint data structure. Also, a unique application is developed where the Viewpoints are used as a spatial control to dynamically access and play video in a location-aware system. While video has to date been largely ignored as a GIS spatial data source, it is shown through this novel Viewpoint implementation and the geospatial analysis demonstrations that this need not be the case any more.
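    As a rough illustration of what a 2D Viewpoint record might hold, the sketch below pairs a capture point with a wedge-shaped viewed extent built from heading, field of view and an assumed viewing depth; the field names and geometry are illustrative assumptions, not the thesis' exact schema.

```python
import math
from dataclasses import dataclass

@dataclass
class Viewpoint:
    """2D footprint of one video frame: capture point plus viewed extent."""
    x: float          # easting of the capture point (map units)
    y: float          # northing of the capture point
    heading: float    # viewing direction, degrees clockwise from north
    fov: float        # horizontal field of view, degrees
    depth: float      # assumed maximum viewing distance, map units

    def extent_polygon(self, n_arc=8):
        """Approximate the viewed extent as a wedge: apex at the camera,
        an arc of n_arc + 1 points at the viewing depth."""
        pts = [(self.x, self.y)]
        start = self.heading - self.fov / 2
        for i in range(n_arc + 1):
            a = math.radians(start + i * self.fov / n_arc)
            pts.append((self.x + self.depth * math.sin(a),
                        self.y + self.depth * math.cos(a)))
        return pts

vp = Viewpoint(x=0.0, y=0.0, heading=90.0, fov=60.0, depth=50.0)
print(vp.extent_polygon()[:3])
```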

    Diagnostic checking and intra-daily effects in time series models

    A variety of topics on the statistical analysis of time series are addressed in this thesis. The main emphasis is on the state space methodology and, in particular, on structural time series (STS) models. There are now many applications of STS models in the literature and they have proved to be very successful. The keywords of this thesis range from Kalman filtering, smoothing and diagnostic checking to time-varying cubic splines and intra-daily effects. Five separate studies are carried out for this research project and they are reflected in chapters 2 to 6. All studies concern time series models which are placed in the state space form (SSF) so that the Kalman filter (KF) can be applied for estimation. The SSF and the KF play a central role in time series analysis that can be compared with the important role of the regression model and the method of least squares estimation in econometrics. Chapter 2 gives an overview of the latest developments in the state space methodology, including diffuse likelihood evaluation and stable calculations. Smoothing algorithms evaluate the full sample estimates of unobserved components in time series models. New smoothing algorithms are developed for the state and the disturbance vector of the SSF which are computationally efficient and outperform existing methods. Chapter 3 discusses the existing and the new smoothing algorithms with an emphasis on theory, algorithms and practical implications. The new smoothing results pave the way to use auxiliary residuals, that is, full sample estimates of the disturbances, for diagnostic checking of unobserved components time series models. Chapter 4 develops test statistics for auxiliary residuals and presents applications showing how they can be used to detect and distinguish between outliers and structural change. A cubic spline is a polynomial function of order three which is regularly used for interpolation and curve-fitting. It has also been applied to piecewise regressions, density approximations, etc. Chapter 5 develops the cubic spline further by allowing it to vary over time and by introducing it into time series models. These time-varying cubic splines are an efficient way of handling slowly changing periodic movements in time series. This method for modelling a changing periodic pattern is applied in a structural time series model used to forecast hourly electricity load demand, with the periodic movements being intra-daily or intra-weekly. The full model contains other components, including a temperature response which is also modelled using cubic splines. A statistical computer package (SHELF) is developed to produce, at any time, hourly load forecasts three days ahead.
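    As a concrete instance of the state space and Kalman filter machinery the thesis builds on, the local level model (a random walk observed with noise) can be filtered in a few lines; the snippet below is a textbook illustration with made-up variances, not the SHELF package.

```python
import numpy as np

def kalman_local_level(y, sigma_eps2, sigma_eta2, a0=0.0, p0=1e7):
    """Kalman filter for the local level model
        y_t = alpha_t + eps_t,   alpha_{t+1} = alpha_t + eta_t,
    returning filtered state means and variances."""
    a, p = a0, p0                       # diffuse-ish prior for the initial level
    means, variances = [], []
    for yt in y:
        v = yt - a                      # prediction error
        f = p + sigma_eps2              # prediction error variance
        a = a + p / f * v               # measurement update (filtered mean)
        p = p - p * p / f               # filtered variance
        means.append(a)
        variances.append(p)
        p = p + sigma_eta2              # time update (random walk transition)
    return np.array(means), np.array(variances)

rng = np.random.default_rng(0)
level = np.cumsum(rng.normal(0, 0.1, 200))   # latent random walk
y = level + rng.normal(0, 1.0, 200)          # noisy observations
m, v = kalman_local_level(y, sigma_eps2=1.0, sigma_eta2=0.01)
print(m[-5:])
```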