
    Robust Temporally Coherent Laplacian Protrusion Segmentation of 3D Articulated Bodies

    In motion analysis and understanding, it is important to be able to fit a suitable model or structure to the temporal series of observed data, in order to describe motion patterns in a compact way and to discriminate between them. In an unsupervised context, i.e., when no prior model of the moving object(s) is available, such a structure has to be learned from the data in a bottom-up fashion. In recent times, volumetric approaches, in which the motion is captured from a number of cameras and a voxel-set representation of the body is built from the camera views, have gained ground due to attractive features such as inherent view-invariance and robustness to occlusions. Automatic, unsupervised segmentation of moving bodies along entire sequences, in a temporally coherent and robust way, has the potential to provide a means of constructing a bottom-up model of the moving body and of tracking motion cues that may later be exploited for motion classification. Spectral methods such as locally linear embedding (LLE) can be useful in this context, as they preserve "protrusions", i.e., high-curvature regions of the 3D volume, of articulated shapes while improving their separation in a lower-dimensional space, making them easier to cluster. In this paper we therefore propose a spectral approach to unsupervised and temporally coherent body-protrusion segmentation along time sequences. Volumetric shapes are clustered in an embedding space, clusters are propagated in time to ensure coherence, and clusters are merged or split to accommodate changes in the body's topology. Experiments on both synthetic and real sequences of dense voxel-set data support the ability of the proposed method to cluster body parts consistently over time in a totally unsupervised fashion, its robustness to sampling density and shape quality, and its potential for bottom-up model construction. Comment: 31 pages, 26 figures.
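
    As a rough illustration of the LLE-plus-clustering idea described above, the sketch below embeds a voxel set with scikit-learn's LocallyLinearEmbedding and clusters it in the embedding space. The function name, parameter values and random stand-in data are assumptions made for illustration; the actual method additionally propagates, merges and splits clusters over time.

    # Minimal sketch (assumed parameters): embed occupied voxel centres with
    # locally linear embedding so that protrusions separate, then cluster.
    import numpy as np
    from sklearn.manifold import LocallyLinearEmbedding
    from sklearn.cluster import KMeans

    def segment_protrusions(voxels, n_parts=5, n_neighbors=12, n_components=3):
        """voxels: (N, 3) array of occupied voxel centres for one frame."""
        lle = LocallyLinearEmbedding(n_neighbors=n_neighbors,
                                     n_components=n_components)
        embedded = lle.fit_transform(voxels)           # (N, n_components)
        return KMeans(n_clusters=n_parts, n_init=10).fit_predict(embedded)

    # Random stand-in volume; a real sequence would reuse the previous frame's
    # cluster centres to keep part labels temporally coherent.
    labels = segment_protrusions(np.random.default_rng(0).random((500, 3)))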

    Automated segmentation, detection and fitting of piping elements from terrestrial LIDAR data

    Since the invention of light detection and ranging (LIDAR) in the early 1960s, it has been adopted for use in numerous applications, from topographical mapping with airborne LIDAR platforms to surveying of urban sites with terrestrial LIDAR systems. Static terrestrial LIDAR has become an especially effective tool for surveying, in some cases replacing traditional techniques such as electronic total stations and GPS methods. Current state-of-the-art LIDAR scanners have very fine spatial resolution, generating precise 3D point cloud data with millimeter accuracy, so LIDAR can capture a scene in 3D at an unprecedented level of detail. However, automated exploitation of LIDAR data is challenging, due to the non-uniform spatial sampling of the point clouds as well as to the massive volumes of data, which may range from a few million points to hundreds of millions of points depending on the size and complexity of the scene being scanned.
    This dissertation focuses on addressing these challenges to automatically exploit large LIDAR point clouds of piping systems in industrial sites, such as chemical plants, oil refineries, and steel mills. A complete processing chain is proposed in this work, using raw LIDAR point clouds as input and generating cylinder parameter estimates for pipe segments as the output, which can then be used to produce computer aided design (CAD) models of pipes. The processing chain consists of three stages: (1) segmentation of LIDAR point clouds, (2) detection and identification of piping elements, and (3) cylinder fitting and parameter estimation. The final output of the cylinder fitting stage gives the estimated orientation, position, and radius of each detected pipe element.
    A robust octree-based split-and-merge segmentation algorithm is proposed in this dissertation that can efficiently process LIDAR data. Following octree decomposition of the point cloud, graph theory analysis is used during the splitting process to separate the points within each octant into components based on spatial connectivity. A series of connectivity criteria (proximity, orientation, and curvature) are developed for the merging process, which exploits contextual information to effectively merge cylindrical segments into complete pipes and planar segments into complete walls. Furthermore, by fitting surfaces to the segments and analyzing their principal curvatures, the proposed segmentation approach is capable of detecting and identifying the piping segments.
    A novel cylinder fitting technique is proposed to accurately estimate the cylinder parameters for each detected piping segment from the terrestrial LIDAR point cloud. Specifically, the orientation, radius, and position of each piping element must be robustly estimated in the presence of noise. An original formulation has been developed to estimate the cylinder axis orientation using gradient descent optimization of an angular distance cost function. The cost function is based on the fact that the surface normals of points on a cylinder are perpendicular to the cylinder axis. The key contribution of this algorithm is its capability to accurately estimate the cylinder orientation in the presence of noise without requiring a good initial starting point. After estimation of the cylinder's axis orientation, the radius and position are estimated in the 2D space formed by projecting the 3D cylinder point cloud onto the plane perpendicular to the cylinder's axis. With these high-quality approximations, a least squares estimation in 3D is made for the final cylinder parameters.
    Following cylinder fitting, the estimated parameters of each detected piping segment are used to generate a CAD model of the piping system. The algorithms and techniques in this dissertation form a complete processing chain that can automatically exploit large LIDAR point clouds of piping systems and generate CAD models.
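
    As a hedged illustration of the axis-orientation step described above (surface normals of a cylinder are perpendicular to its axis), the sketch below minimises the squared dot products between unit normals and a unit axis vector by projected gradient descent. It is a simplified reconstruction with an assumed step size and iteration count, not the dissertation's exact angular-distance formulation.

    import numpy as np

    def estimate_cylinder_axis(normals, iters=200, lr=0.1):
        """normals: (N, 3) unit surface normals of one detected pipe segment."""
        a = np.array([0.0, 0.0, 1.0])          # arbitrary starting axis
        for _ in range(iters):
            d = normals @ a                    # cosine of angle to the axis
            grad = 2.0 * normals.T @ d / len(normals)
            a -= lr * grad                     # descend on the mean squared cosine
            a /= np.linalg.norm(a)             # project back to the unit sphere
        return a

    # Radius and centre can then be fitted in the 2D plane perpendicular to the
    # estimated axis (e.g. a least-squares circle fit), as the abstract outlines.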

    Joint segmentation of color and depth data based on splitting and merging driven by surface fitting

    This paper proposes a segmentation scheme based on the joint usage of color and depth data together with a 3D surface estimation scheme. First, a set of multi-dimensional vectors is built from color, geometry and surface orientation information. Normalized-cuts spectral clustering is then applied in order to recursively segment the scene into two parts, thus obtaining an over-segmentation. This procedure is followed by a recursive merging stage where close segments belonging to the same object are joined together. At each step of both procedures a NURBS model is fitted to the computed segments, and the accuracy of the fitting is used as a measure of the plausibility that a segment represents a single surface or object. By comparing this accuracy with that of the previous step, it is possible to determine whether each splitting or merging operation leads to a better scene representation and consequently whether to perform it or not. Experimental results show that the proposed method provides an accurate and reliable segmentation.
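
    The split test described above can be illustrated roughly as follows: cut a segment in two with spectral clustering on the joint color/geometry/orientation vectors and keep the split only if it lowers the surface-fitting error. In this sketch a least-squares plane fit stands in for the paper's NURBS fitting, and the acceptance rule and parameters are assumptions made for brevity.

    import numpy as np
    from sklearn.cluster import SpectralClustering

    def fit_error(points):
        """Stand-in for NURBS fitting: RMS distance to a least-squares plane."""
        centred = points - points.mean(axis=0)
        normal = np.linalg.svd(centred, full_matrices=False)[2][-1]
        return np.sqrt(np.mean((centred @ normal) ** 2))

    def try_split(features, points):
        """features: (N, D) color+geometry+normal vectors; points: (N, 3)."""
        labels = SpectralClustering(n_clusters=2, affinity="nearest_neighbors",
                                    n_neighbors=10).fit_predict(features)
        before = fit_error(points)
        after = max(fit_error(points[labels == k]) for k in (0, 1))
        return labels if after < before else None  # accept the split only if it helps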

    Man-made Surface Structures from Triangulated Point Clouds

    Photogrammetry aims at reconstructing the shape and dimensions of objects captured with cameras, 3D laser scanners or other spatial acquisition systems. While many acquisition techniques deliver triangulated point clouds with millions of vertices within seconds, the interpretation is usually left to the user. Especially when reconstructing man-made objects, one is interested in the underlying surface structure, which is not inherently present in the data. This includes the geometric shape of the object, e.g. cubical or cylindrical, as well as corresponding surface parameters, e.g. width, height and radius. Applications are manifold and range from industrial production control to architectural on-site measurements to large-scale city models. The goal of this thesis is to automatically derive such surface structures from triangulated 3D point clouds of man-made objects. They are defined as a compound of planar or curved geometric primitives. Model knowledge about typical primitives and relations between adjacent pairs of them should affect the reconstruction positively. After formulating a parametrized model for man-made surface structures, we develop a reconstruction framework with three processing steps: During a fast pre-segmentation exploiting local surface properties we divide the given surface mesh into planar regions. Making use of a model selection scheme based on minimizing the description length, this surface segmentation is free of control parameters and automatically yields an optimal number of segments. A subsequent refinement introduces a set of planar or curved geometric primitives and hierarchically merges adjacent regions based on their joint description length. A global classification and constrained parameter estimation combines the data-driven segmentation with high-level model knowledge. To this end, we represent the surface structure with a graphical model and formulate factors based on the likelihood as well as prior knowledge about parameter distributions and class probabilities. We infer the most probable setting of surface and relation classes with belief propagation and estimate an optimal surface parametrization with constraints induced by inter-regional relations. The process is specifically designed to work on noisy data with outliers and a few exceptional freeform regions not describable with geometric primitives. It yields full 3D surface structures with watertightly connected surface primitives of different types. The performance of the proposed framework is experimentally evaluated on various data sets. On small synthetically generated meshes we analyze the accuracy of the estimated surface parameters, the sensitivity w.r.t. various properties of the input data and w.r.t. model assumptions, as well as the computational complexity. Additionally, we demonstrate the flexibility w.r.t. different acquisition techniques on real data sets. The proposed method turns out to be accurate, reasonably fast and relatively insensitive to defects in the data or imprecise model assumptions.
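
    The description-length criterion used in the refinement step above can be sketched generically: two adjacent regions are merged when a single primitive encodes their union more compactly than two separate primitives encode them individually. The plane fit, noise level and coding terms below are simplified placeholders, not the thesis' exact formulation.

    import numpy as np

    def plane_residuals(points):
        """Stand-in primitive fit: signed distances to a least-squares plane."""
        centred = points - points.mean(axis=0)
        normal = np.linalg.svd(centred, full_matrices=False)[2][-1]
        return centred @ normal

    def description_length(points, n_params=4, sigma=1e-3):
        """Simplified MDL score: residual coding cost plus parameter cost (bits)."""
        r = plane_residuals(points)
        data_cost = np.sum((r / sigma) ** 2) / (2.0 * np.log(2))
        param_cost = 0.5 * n_params * np.log2(len(points))
        return data_cost + param_cost

    def should_merge(points_a, points_b):
        """Merge two adjacent regions if one primitive describes both more cheaply."""
        separate = description_length(points_a) + description_length(points_b)
        joint = description_length(np.vstack([points_a, points_b]))
        return joint < separate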

    Computerized Analysis of Magnetic Resonance Images to Study Cerebral Anatomy in Developing Neonates

    The study of cerebral anatomy in developing neonates is of great importance for the understanding of brain development during the early period of life. This dissertation therefore focuses on three challenges in the modelling of cerebral anatomy in neonates during brain development. The methods that have been developed all use Magnetic Resonance Images (MRI) as source data. To facilitate the study of vascular development in the neonatal period, a set of image analysis algorithms is developed to automatically extract and model cerebral vessel trees. The whole process consists of cerebral vessel tracking from automatically placed seed points, vessel tree generation, and vasculature registration and matching. These algorithms have been tested on clinical Time-of-Flight (TOF) MR angiographic datasets. To facilitate the study of the neonatal cortex, a complete cerebral cortex segmentation and reconstruction pipeline has been developed. Segmentation of the neonatal cortex is not effectively done by existing algorithms designed for the adult brain, because the contrast between grey and white matter is reversed; this causes pixels containing tissue mixtures to be incorrectly labelled by conventional methods. The neonatal cortical segmentation method that has been developed is based on a novel expectation-maximization (EM) method with explicit correction for mislabelled partial volume voxels. Based on the resulting cortical segmentation, an implicit surface evolution technique is adopted for the reconstruction of the cortex in neonates. The performance of the method is evaluated through a detailed landmark study. To facilitate the study of cortical development, a cortical surface registration algorithm for aligning cortical surfaces is developed. The method first inflates the extracted cortical surfaces and then performs a non-rigid surface registration using free-form deformations (FFDs) to remove residual misalignment. Validation experiments using data labelled by an expert observer demonstrate that the method can capture local changes and follow the growth of specific sulci.
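
    The EM step mentioned above can be illustrated in its plainest form: a two-class Gaussian mixture over voxel intensities with alternating E and M updates. The partial-volume correction and any spatial priors of the actual method are deliberately omitted here; the initialisation and iteration count are assumptions.

    import numpy as np

    def em_two_class(intensities, iters=50):
        """Label each voxel intensity as one of two Gaussian classes (e.g. GM/WM)."""
        x = np.asarray(intensities, dtype=float)
        mu = np.percentile(x, [25, 75])            # rough initial class means
        sigma = np.array([x.std(), x.std()])
        pi = np.array([0.5, 0.5])
        for _ in range(iters):
            # E-step: posterior responsibility of each class for each voxel
            lik = np.stack([pi[k] / (sigma[k] * np.sqrt(2 * np.pi))
                            * np.exp(-0.5 * ((x - mu[k]) / sigma[k]) ** 2)
                            for k in range(2)])
            resp = lik / lik.sum(axis=0, keepdims=True)
            # M-step: re-estimate mixture weights, means and standard deviations
            nk = resp.sum(axis=1)
            pi = nk / len(x)
            mu = (resp @ x) / nk
            sigma = np.sqrt((resp * (x - mu[:, None]) ** 2).sum(axis=1) / nk)
        return resp.argmax(axis=0)                 # hard label per voxel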

    Cumulative object categorization in clutter

    In this paper we present an approach based on scene- or part-graphs for geometrically categorizing touching and occluded objects. We use additive RGBD feature descriptors and hashing of graph configuration parameters to describe the spatial arrangement of constituent parts. The presented experiments quantitatively show that this method outperforms our earlier part-voting and sliding-window classification. We evaluated our approach on cluttered scenes, using a 3D dataset containing over 15,000 Kinect scans of over 100 objects grouped into general geometric categories. Additionally, color, geometric, and combined features were compared for the categorization tasks.
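
    Two ingredients named above can be sketched loosely: additive descriptors, where the descriptor of a group of parts is the sum of the part descriptors, and a hash key built from quantised graph-configuration parameters. The descriptor contents, bin sizes and hashing scheme are illustrative assumptions, not the authors' implementation.

    import numpy as np

    def group_descriptor(part_descriptors):
        """Additivity: descriptor of a part group is the sum of its parts' descriptors."""
        return np.sum(part_descriptors, axis=0)

    def config_hash(pairwise_distances, pairwise_angles, d_bin=0.02, a_bin=15.0):
        """Hash quantised spatial-arrangement parameters of a part graph."""
        d_q = tuple(np.round(np.asarray(pairwise_distances) / d_bin).astype(int))
        a_q = tuple(np.round(np.asarray(pairwise_angles) / a_bin).astype(int))
        return hash((d_q, a_q))                    # key into a table of known categories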

    Solar Magnetic Tracking. I. Software Comparison and Recommended Practices

    Feature tracking and recognition are increasingly common tools for data analysis, but are typically implemented on an ad hoc basis by individual research groups, limiting the usefulness of derived results when selection effects and algorithmic differences are not controlled. Specific results that are affected include the solar magnetic turnover time, the distributions of sizes, strengths, and lifetimes of magnetic features, and the physics of both small-scale flux emergence and the small-scale dynamo. In this paper, we present the results of a detailed comparison between four tracking codes applied to a single set of data from SOHO/MDI, describe the interplay between desired tracking behavior and the parameterization of tracking algorithms, and make recommendations for feature selection and tracking practice in future work. Comment: In press for Astrophys. J. 200
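
    As a generic illustration of the kind of tracking being compared (not any of the four codes in the paper), the sketch below thresholds each frame, labels connected features, and associates features across consecutive frames by spatial overlap. The threshold value and the overlap rule are assumptions; the paper's point is precisely that such choices differ between codes.

    import numpy as np
    from scipy import ndimage

    def track_features(frames, threshold=50.0):
        """frames: sequence of 2D arrays (e.g. magnetograms); returns (id, match) pairs."""
        prev_labels, tracks = None, []
        for frame in frames:
            labels, n = ndimage.label(np.abs(frame) > threshold)
            if prev_labels is not None:
                for i in range(1, n + 1):
                    overlap = prev_labels[labels == i]
                    overlap = overlap[overlap > 0]
                    match = np.bincount(overlap).argmax() if overlap.size else 0
                    tracks.append((i, match))      # match == 0 means a new feature
            prev_labels = labels
        return tracks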

    Song Exposure affects HVC Ultrastructure in Juvenile Zebra Finches

