3,993 research outputs found

    Automatic Structural Scene Digitalization

    Get PDF
    In this paper, we present an automatic system for the analysis and labeling of structural scenes, i.e., floor plan drawings in Computer-Aided Design (CAD) format. The proposed system applies a fusion strategy to detect and recognize the various components of CAD floor plans, such as walls, doors, windows, and other ambiguous assets. Technically, a general rule-based filter parsing method is first adopted to extract effective information from the original floor plan. Then, an image-processing-based recovery method is employed to correct the information extracted in the first step. Our proposed method is fully automatic and real-time. The analysis system provides high accuracy and has also been evaluated on a public website that, on average, receives more than ten thousand effective uses per day and reaches a relatively high satisfaction rate. Comment: paper submitted to PLoS ONE
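    As a rough illustration of the fusion strategy described above, the hypothetical sketch below applies a rule-based filter to CAD entities and defers ambiguous ones to a second, image-processing-based recovery stage. The entity fields, layer names, and rules are illustrative assumptions, not the system's actual implementation.

```python
# Hypothetical sketch of a two-stage fusion strategy: a rule-based filter first
# classifies raw CAD entities, and whatever it cannot explain would be handed to
# an image-processing recovery pass. All names and thresholds are assumptions.
from dataclasses import dataclass

@dataclass
class CadEntity:
    layer: str        # layer name as exported from the CAD file
    kind: str         # primitive type, e.g. "LINE", "ARC", "POLYLINE"
    length: float     # geometric extent in drawing units

RULES = [
    # (predicate, label) pairs applied in order; first match wins.
    (lambda e: "WALL" in e.layer.upper() and e.kind == "LINE", "wall"),
    (lambda e: "DOOR" in e.layer.upper() or e.kind == "ARC",   "door"),
    (lambda e: "WIN"  in e.layer.upper(),                      "window"),
]

def rule_based_parse(entities):
    """Stage 1: label entities the rules can explain; defer the rest."""
    labeled, ambiguous = [], []
    for e in entities:
        for predicate, label in RULES:
            if predicate(e):
                labeled.append((e, label))
                break
        else:
            ambiguous.append(e)   # left for the image-based recovery stage
    return labeled, ambiguous

if __name__ == "__main__":
    sample = [CadEntity("A-WALL", "LINE", 3.2),
              CadEntity("A-DOOR", "ARC", 0.9),
              CadEntity("MISC",   "POLYLINE", 1.5)]
    labeled, ambiguous = rule_based_parse(sample)
    print(labeled)      # rule-labeled walls/doors/windows
    print(ambiguous)    # candidates for the recovery step
```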

    Deformable meshes for shape recovery: models and applications

    Get PDF
    With the advance of scanning and imaging technology, more and more 3D objects have become available. Among them, deformable objects have gained increasing interest. They include medical instances such as organs, sequences of objects in motion, and objects of similar shapes for which a meaningful correspondence can be established. This creates a need for tools to store, compare, and retrieve such objects. Many of these operations depend on successful shape recovery, the task of recovering an object from an environment in which its geometry is hidden or only implicitly known. As a simple and versatile representation, the mesh is widely used in computer graphics for modelling and visualization. In particular, deformable meshes are meshes that can capture the deformation of deformable objects, extending the modelling ability of meshes. This dissertation focuses on using deformable meshes to approach the 3D shape recovery problem. Several models are presented to address the challenges of shape recovery under different circumstances. When the object is hidden in an image, a PDE deformable model is designed to extract its surface shape. The algorithm uses a mesh representation so that, unlike a parametric model, it can represent any non-smooth surface with arbitrary precision, and it is more computationally efficient than a level-set approach. When the explicit geometry of the object is known but hidden in a bank of shapes, we reduce the deformation of the model to a graph matching procedure through a hierarchical surface abstraction approach. This framework is used for shape matching and retrieval, and the idea is further extended to retain the explicit geometry during the abstraction. Finally, a novel motion abstraction framework for deformable meshes is devised based on clustering of local transformations and is successfully applied to 3D motion compression.
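    The clustering-based motion abstraction can be illustrated with a minimal sketch: per-vertex affine transformations between two frames are flattened and grouped with k-means, and each cluster's representative transform stands in for the motion of its vertices. The 3x4 transform layout and the cluster count below are assumptions for illustration, not the dissertation's actual pipeline.

```python
# Minimal sketch: abstracting mesh motion by clustering local transformations.
import numpy as np
from sklearn.cluster import KMeans

def cluster_local_transforms(transforms, n_clusters=8):
    """transforms: (V, 3, 4) per-vertex affine motions between two frames.
    Returns one cluster label per vertex plus a representative transform per
    cluster, i.e. a compressed description of the motion."""
    V = transforms.shape[0]
    X = transforms.reshape(V, -1)                  # flatten each 3x4 transform
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X)
    representatives = km.cluster_centers_.reshape(n_clusters, 3, 4)
    return km.labels_, representatives

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fake_transforms = rng.normal(size=(500, 3, 4))  # stand-in for real data
    labels, reps = cluster_local_transforms(fake_transforms)
    print(labels.shape, reps.shape)                 # (500,), (8, 3, 4)
```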

    Composing quadrilateral meshes for animation

    Get PDF
    The modeling-by-composition paradigm can be a powerful tool in modern animation pipelines. We propose two novel interactive techniques to compose 3D assets that enable artists to freely remove, detach, and combine components of organic models. The idea behind our methods is to preserve most of the original information in the input characters and to blend only where necessary. The first method, QuadMixer, provides a robust tool to compose the quad layouts of watertight pure-quadrilateral meshes, exploiting Boolean operations defined on triangle meshes. The quad layout is a crucial property for many applications, since it conveys important information that would otherwise be destroyed by techniques that aim only at preserving the shape. Our technique leaves untouched all quads in the patches that are not involved in the blending, and the resulting meshes preserve the originally designed edge flows, which, by construction, are captured and incorporated into the new quads. SkinMixer extends this approach to compose skinned models, taking into account not only the surface but also the data structures used to animate the character. We propose a new operation-based technique that preserves and smoothly merges meshes, skeletons, and skinning weights. The retopology approach of QuadMixer is extended to work on quad-dominant and arbitrarily complex surfaces. Instead of relying on Boolean operations on triangle meshes, we manipulate signed distance fields to generate an implicit surface. The results preserve most of the information in the input assets, blending them in the intersection regions, and the resulting characters are ready to be used in animation pipelines. Given the quality of the results, we believe that our methods could have a significant impact on the entertainment industry: integrated into current 3D modeling software, they would provide a powerful tool for artists, and allowing artists to automatically reuse parts of their well-designed characters could lead to a new approach for creating models that significantly reduces the cost of the process.
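    A minimal sketch of the signed-distance-field composition idea: two SDFs are combined with a polynomial smooth minimum so the shapes blend in their intersection region, and the zero level set of the result is the composed surface. The sphere SDFs and the blend radius k are illustrative assumptions, not SkinMixer's exact formulation.

```python
# Sketch: blending two shapes via signed distance fields instead of mesh Booleans.
import numpy as np

def sphere_sdf(points, center, radius):
    return np.linalg.norm(points - center, axis=-1) - radius

def smooth_union(d1, d2, k=0.1):
    """Polynomial smooth minimum: behaves like min(d1, d2) away from the
    intersection and blends smoothly where the two surfaces meet."""
    h = np.clip(0.5 + 0.5 * (d2 - d1) / k, 0.0, 1.0)
    return d2 * (1 - h) + d1 * h - k * h * (1 - h)

# Evaluate the blended field on a grid; the zero level set is the new surface
# (a mesh could then be extracted with marching cubes).
grid = np.stack(np.meshgrid(*[np.linspace(-1, 1, 64)] * 3, indexing="ij"), axis=-1)
d = smooth_union(sphere_sdf(grid, np.array([-0.3, 0.0, 0.0]), 0.5),
                 sphere_sdf(grid, np.array([0.3, 0.0, 0.0]), 0.5))
print((d < 0).sum(), "grid samples inside the blended shape")
```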

    Learning cellular morphology with neural networks

    Get PDF
    Reconstruction and annotation of volume electron microscopy data sets of brain tissue is challenging but can reveal invaluable information about neuronal circuits. Significant progress has recently been made in automated neuron reconstruction as well as in automated detection of synapses. However, methods for automating the morphological analysis of nanometer-resolution reconstructions are less established, despite the diversity of possible applications. Here, we introduce cellular morphology neural networks (CMNs), based on multi-view projections sampled from automatically reconstructed cellular fragments of arbitrary size and shape. Using unsupervised training, we infer morphology embeddings (Neuron2vec) of neuron reconstructions, and we train CMNs to identify glia cells in a supervised classification paradigm; these predictions are then used to resolve neuron reconstruction errors. Finally, we demonstrate that CMNs can be used to identify subcellular compartments and the cell types of neuron reconstructions.
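    A hedged sketch of the multi-view idea: a shared 2D encoder processes several projections of a reconstructed fragment, and the per-view features are pooled into a single morphology embedding that feeds a classifier (here a hypothetical two-class glia-vs-neuron head). Layer sizes, view count, and pooling choice are assumptions, not the published CMN architecture.

```python
# Sketch of a multi-view encoder producing a morphology embedding per fragment.
import torch
import torch.nn as nn

class MultiViewEncoder(nn.Module):
    def __init__(self, embed_dim=64, n_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(              # shared across all views
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, embed_dim),
        )
        self.classifier = nn.Linear(embed_dim, n_classes)

    def forward(self, views):                      # views: (B, N_views, 1, H, W)
        b, n, c, h, w = views.shape
        feats = self.encoder(views.reshape(b * n, c, h, w)).reshape(b, n, -1)
        embedding = feats.max(dim=1).values        # pool over views -> embedding
        return embedding, self.classifier(embedding)

if __name__ == "__main__":
    model = MultiViewEncoder()
    fake_views = torch.randn(4, 6, 1, 128, 128)    # 4 fragments, 6 projections each
    emb, logits = model(fake_views)
    print(emb.shape, logits.shape)                 # (4, 64), (4, 2)
```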

    Extraction of protein profiles from primary neurons using active contour models and wavelets

    Get PDF
    The function of complex networks in the nervous system relies on the proper formation of neuronal contacts and their remodeling. To decipher the molecular mechanisms underlying these processes, it is essential to establish unbiased automated tools that allow the correlation of neurite morphology with the subcellular distribution of molecules by quantitative means. We developed NeuronAnalyzer2D, a plugin for ImageJ, which allows the extraction of neuronal cell morphologies from two-dimensional high-resolution images and, in particular, their correlation with protein profiles determined by indirect immunostaining of primary neurons. The prominent feature of our approach is the ability to extract subcellular distributions of distinct biomolecules along neurites. To extract the complete areas of neurons, as required for this analysis, we employ active contours with a new distance-based energy. To locate the structural parts of neurons and various morphological parameters, we adopt a wavelet-based approach. The presented approach is able to extract distinctive profiles of several proteins and reports detailed morphology measurements on neurites. We compare the neurons detected by NeuronAnalyzer2D with those obtained by NeuriteTracer and Vaa3D-Neuron, two popular tools for automatic neurite tracing. The distinctive profiles extracted for several proteins, for example the mRNA-binding protein ZBP1, and a comparative evaluation of the neuron segmentation results demonstrate the high quality of the quantitative data and its practical utility for biomedical analyses.
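    As a generic illustration of the wavelet component, the sketch below decomposes a synthetic intensity profile sampled along a neurite and reconstructs a smoothed version from the coarse coefficients. The wavelet, decomposition level, and synthetic data are assumptions and do not reproduce NeuronAnalyzer2D's actual structural-part detection.

```python
# Sketch: multi-level wavelet decomposition of an intensity profile along a neurite.
import numpy as np
import pywt

# Stand-in for a fluorescence intensity profile sampled along a traced neurite.
x = np.linspace(0, 1, 256)
profile = np.exp(-((x - 0.4) ** 2) / 0.01) + 0.1 * np.random.default_rng(0).normal(size=x.size)

# coeffs[0] holds the coarse approximation; the rest are per-scale details.
coeffs = pywt.wavedec(profile, "db4", level=4)

# Suppress the fine detail scales to read off the underlying distribution.
coeffs_smooth = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
smooth_profile = pywt.waverec(coeffs_smooth, "db4")[: profile.size]
print(smooth_profile.shape)
```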

    Robust 3D Action Recognition through Sampling Local Appearances and Global Distributions

    Full text link
    3D action recognition has broad applications in human-computer interaction and intelligent surveillance. However, recognizing similar actions remains challenging, since previous work fails to capture motion and shape cues effectively from noisy depth data. In this paper, we propose a novel two-layer Bag-of-Visual-Words (BoVW) model, which suppresses noise disturbances and jointly encodes both motion and shape cues. First, background clutter is removed by a background modeling method designed for depth data. Then, motion and shape cues are jointly used to generate robust and distinctive spatial-temporal interest points (STIPs): motion-based STIPs and shape-based STIPs. In the first layer of our model, a multi-scale 3D local steering kernel (M3DLSK) descriptor is proposed to describe the local appearance of cuboids around motion-based STIPs. In the second layer, a spatial-temporal vector (STV) descriptor is proposed to describe the spatial-temporal distribution of shape-based STIPs. Using the BoVW model, motion and shape cues are combined into a fused action representation. Our model performs favorably compared with common STIP detection and description methods, and thorough experiments verify that it is effective in distinguishing similar actions and robust to background clutter, partial occlusions, and pepper noise.
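    The BoVW encoding step can be sketched in a few lines: local descriptors (random stand-ins here for M3DLSK or STV features) are quantized against a k-means codebook and pooled into a normalized histogram; motion and shape histograms would then be concatenated into the fused representation. Descriptor dimensionality and codebook size below are illustrative assumptions.

```python
# Sketch of Bag-of-Visual-Words encoding: codebook learning + histogram pooling.
import numpy as np
from sklearn.cluster import KMeans

def build_codebook(train_descriptors, n_words=64):
    return KMeans(n_clusters=n_words, n_init=10, random_state=0).fit(train_descriptors)

def bovw_histogram(codebook, descriptors):
    words = codebook.predict(descriptors)          # assign each descriptor to a visual word
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)             # L1-normalized representation

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    codebook = build_codebook(rng.normal(size=(2000, 128)))               # pooled training descriptors
    motion_hist = bovw_histogram(codebook, rng.normal(size=(300, 128)))   # one action clip
    print(motion_hist.shape)                        # (64,), to be concatenated with the shape histogram
```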