19 research outputs found

    Automatic Annotation of Spatial Expression Patterns via Sparse Bayesian Factor Models

    Get PDF
    Advances in reporters for gene expression have made it possible to document and quantify expression patterns in 2D–4D. In contrast to microarrays, which provide data for many genes but averaged and/or at low resolution, images reveal the high spatial dynamics of gene expression. Developing computational methods to compare, annotate, and model gene expression based on images is imperative, considering that available data are rapidly increasing. We have developed a sparse Bayesian factor analysis model in which the observed expression diversity among a large set of high-dimensional images is modeled by a small number of hidden common factors. We apply this approach to embryonic expression patterns from a Drosophila RNA in situ image database, and show that the automatically inferred factors provide a meaningful decomposition and represent common co-regulation or biological functions. The low-dimensional set of factor mixing weights is further used as features by a classifier to annotate expression patterns with functional categories. On human-curated annotations, our sparse approach reaches similar or better classification of expression patterns at different developmental stages when compared to other automatic image annotation methods that use thousands of hard-to-interpret features. Our study therefore outlines a general framework for large microscopy data sets, in which both the generative model itself and its application to analysis tasks such as automated annotation can provide insight into biological questions.
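    As a rough illustration of the decompose-then-annotate idea, the sketch below uses a non-Bayesian stand-in: sparse dictionary learning on flattened image vectors followed by a one-vs-rest classifier trained on the resulting mixing weights. The scikit-learn components, array sizes, and random data are illustrative assumptions, not the paper's sparse Bayesian factor model.

```python
# Minimal sketch: sparse factorization of expression images, then
# annotation from the low-dimensional mixing weights.
# Non-Bayesian stand-in (sklearn dictionary learning), not the paper's model.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier

rng = np.random.default_rng(0)
n_images, n_pixels, n_terms = 200, 32 * 32, 5   # illustrative sizes
X = rng.random((n_images, n_pixels))            # flattened expression images
Y = rng.integers(0, 2, (n_images, n_terms))     # multi-label annotations (0/1)

# Learn a small number of sparse "factors" (dictionary atoms) and the
# per-image mixing weights (sparse codes).
factoriser = MiniBatchDictionaryLearning(n_components=20, alpha=1.0,
                                         random_state=0)
weights = factoriser.fit_transform(X)           # (n_images, 20) mixing weights

# Annotate images using only the low-dimensional weights as features.
annotator = OneVsRestClassifier(LogisticRegression(max_iter=1000))
annotator.fit(weights, Y)
print(annotator.predict(weights[:3]))
```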

    A Reaction-Diffusion Model to Capture Disparity Selectivity in Primary Visual Cortex

    Get PDF
    Decades of experimental studies are available on disparity-selective cells in the visual cortex of macaque and cat. Recently, a local disparity map for iso-orientation sites with near-vertical edge preference was reported in area 18 of cat visual cortex. No experiment has yet reported a complete disparity map in V1. A disparity map for layer IV in V1 can provide insight into how the disparity-selective complex cell receptive field is organized from simple cell subunits. Although substantial experimental data on disparity-selective cells are available, no model of the receptive field development of such cells, or of disparity map development, exists in the literature. We model disparity selectivity in layer IV of cat V1 using a reaction-diffusion two-eye paradigm. In this model, the wiring between the LGN and cortical layer IV is determined by the resource an LGN cell has for supporting connections to cortical cells and by competition for target space in layer IV. While competing for target space, LGN cells of the same type, irrespective of whether they belong to the left-eye-specific or right-eye-specific LGN layer, cooperate with each other while trying to push off the other type. Our model captures realistic 2D disparity-selective simple cell receptive fields, their response properties, and the disparity map along with orientation and ocular dominance maps. There is a lack of correlation between ocular dominance and disparity selectivity at the cell population level. At the map level, the disparity selectivity topography is not random but weakly clustered for similar preferred disparities, which is similar to the experimental result reported for macaque. The details of the weakly clustered disparity selectivity map in V1 indicate two types of complex cell receptive field organization.
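    To illustrate the pattern-forming mechanism only, the sketch below runs a generic two-species Gray-Scott reaction-diffusion simulation on a 2D grid; it is a textbook example with standard parameter values, not the paper's two-eye LGN-to-layer-IV wiring model.

```python
# Generic 2D reaction-diffusion (Gray-Scott) sketch: two interacting,
# diffusing species self-organize into spatial patterns. Illustrative
# only; not the paper's two-eye resource-and-competition model.
import numpy as np

n, steps = 128, 5000
Du, Dv, F, k = 0.16, 0.08, 0.035, 0.060        # standard Gray-Scott parameters
u = np.ones((n, n))
v = np.zeros((n, n))
u[54:74, 54:74], v[54:74, 54:74] = 0.50, 0.25  # small perturbation seed

def laplacian(a):
    """5-point Laplacian with periodic boundaries."""
    return (np.roll(a, 1, 0) + np.roll(a, -1, 0) +
            np.roll(a, 1, 1) + np.roll(a, -1, 1) - 4 * a)

for _ in range(steps):
    uvv = u * v * v
    u += Du * laplacian(u) - uvv + F * (1 - u)
    v += Dv * laplacian(v) + uvv - (F + k) * v

print("pattern contrast:", float(v.max() - v.min()))
```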

    Object Registration in Semi-cluttered and Partial-occluded Scenes for Augmented Reality

    Get PDF
    This paper proposes a stable and accurate object registration pipeline for markerless augmented reality applications. We present two novel algorithms for object recognition and matching to improve the registration accuracy of the model-to-scene transformation via point cloud fusion. Whilst the first algorithm effectively deals with simple scenes with few object occlusions, the second algorithm handles cluttered scenes with partial occlusions for robust real-time object recognition and matching. The computational framework includes a locally supported Gaussian weight function to enable repeatable detection of 3D descriptors. We apply bilateral filtering and outlier removal to preserve the edges of the point cloud and remove interfering points in order to increase matching accuracy. Extensive experiments have been carried out to compare the proposed algorithms with the four most widely used methods. Results show improved performance of the algorithms in terms of computational speed, camera tracking, and object matching errors in semi-cluttered and partially occluded scenes.
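    A heavily simplified version of the model-to-scene registration step can be sketched with Open3D: statistical outlier removal followed by ICP refinement. The file paths and thresholds are placeholders, no 3D descriptor matching or occlusion handling is included, and the API shown may differ slightly between Open3D versions.

```python
# Minimal model-to-scene registration sketch with Open3D: statistical
# outlier removal followed by point-to-point ICP refinement. This is a
# simplified stand-in for the paper's pipeline; paths and thresholds
# are placeholders.
import numpy as np
import open3d as o3d

model = o3d.io.read_point_cloud("model.ply")   # placeholder paths
scene = o3d.io.read_point_cloud("scene.ply")

# Remove isolated interference points before matching.
model, _ = model.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)
scene, _ = scene.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

# Refine an initial guess (identity here) with ICP to obtain the
# model-to-scene rigid transformation.
result = o3d.pipelines.registration.registration_icp(
    model, scene,
    max_correspondence_distance=0.02,
    init=np.eye(4),
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())
print("model-to-scene transform:\n", result.transformation)
```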

    Epitomized priors for multi-labeling problems

    No full text
    Image parsing remains difficult due to the need to combine local and contextual information when labeling a scene. We approach this problem by using the epitome as a prior over label configurations. Several properties make it suited to this task. First, it allows a condensed patch-based representation. Second, efficient EM-based learning and inference algorithms can be used. Third, non-stationarity is easily incorporated. We consider three existing priors and show how each can be extended using the epitome. The simplest prior assumes patches of labels are drawn independently from either a mixture model or an epitome. Next we investigate a 'conditional epitome' model, which substitutes an epitome for a conditional mixture model. Finally, we develop an 'epitome tree' model, which combines the epitome with a tree-structured belief network prior. Each model is combined with a per-pixel classifier to perform segmentation. In each case, the epitomized form of the prior provides superior segmentation performance, with the epitome tree performing best overall. We also apply the same models to denoising binary images, with similar results.
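    The simplest of the three priors, label patches drawn independently from a mixture model, can be sketched with a small EM loop fitting a Bernoulli mixture to binary label patches; this toy is not the epitome, conditional epitome, or epitome-tree models, and the patch size and component count are arbitrary.

```python
# Toy version of the simplest prior discussed above: binary label
# patches modeled as draws from a Bernoulli mixture, fit with EM.
# Illustrative only; not the epitome or epitome-tree priors.
import numpy as np

rng = np.random.default_rng(0)
patches = rng.integers(0, 2, size=(500, 8 * 8)).astype(float)  # 8x8 label patches
K, eps = 4, 1e-6

pi = np.full(K, 1.0 / K)                            # mixing proportions
mu = rng.uniform(0.3, 0.7, (K, patches.shape[1]))   # per-component Bernoulli means

for _ in range(50):                                 # EM iterations
    # E-step: responsibilities of each component for each patch.
    log_lik = (patches @ np.log(mu + eps).T
               + (1 - patches) @ np.log(1 - mu + eps).T
               + np.log(pi + eps))
    log_lik -= log_lik.max(axis=1, keepdims=True)
    resp = np.exp(log_lik)
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: update proportions and Bernoulli means.
    Nk = resp.sum(axis=0)
    pi = Nk / len(patches)
    mu = (resp.T @ patches) / (Nk[:, None] + eps)

# Log-prior of a label patch under the learned mixture.
new_patch = patches[0]
log_prior = np.log((pi * np.exp(new_patch @ np.log(mu + eps).T
                                + (1 - new_patch) @ np.log(1 - mu + eps).T)).sum())
print("patch log-prior:", log_prior)
```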

    Harmonic Suppression in Nonlinear Systems

    Get PDF
    Markerless augmented reality registration using the standard homography matrix is unstable and has low registration accuracy. In this paper, we present a new method to improve augmented reality registration based on Visual Simultaneous Localization and Mapping (VSLAM). We improve the method implemented in ORB-SLAM in order to increase the stability and accuracy of AR registration. The VSLAM algorithm generates a 3D map of the scene during the dynamic camera tracking process; hence, AR based on VSLAM uses this reconstructed 3D scene map to compute the location for virtual object augmentation. In this paper, a Maximum Consistency with Minimum Distance and Robust Z-score (MCMD Z) algorithm is used to perform planar detection in the 3D map, and then Singular Value Decomposition (SVD) and the Lie group are used to calculate the rotation matrix that solves the problem of virtual object orientation. Finally, the method integrates camera poses into the virtual object registration. We show experimental results to demonstrate the robustness and registration accuracy of the method for augmented reality applications.
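    The orientation step can be illustrated in isolation: the sketch below builds, via SVD, a rotation that aligns a virtual object's up axis with a detected plane normal. It is a generic NumPy construction, not the paper's MCMD Z plane detection or its Lie-group formulation.

```python
# Sketch of the SVD step: build a rotation that aligns a virtual
# object's up axis with the normal of a detected plane, so the object
# sits flat on the plane. Generic Kabsch-style construction in NumPy.
import numpy as np

def rotation_aligning(src, dst):
    """Rotation matrix R (det +1) with R @ src ~= dst for unit vectors."""
    src = src / np.linalg.norm(src)
    dst = dst / np.linalg.norm(dst)
    H = np.outer(src, dst)                               # single-pair cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])   # avoid reflections
    return Vt.T @ D @ U.T

object_up = np.array([0.0, 0.0, 1.0])        # object's local up axis
plane_normal = np.array([0.1, 0.9, 0.3])     # e.g. from plane detection
R = rotation_aligning(object_up, plane_normal)
print(R @ object_up, plane_normal / np.linalg.norm(plane_normal))  # should match
```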

    Modelling airway geometry as stock market data using Bayesian changepoint detection

    Get PDF
    Numerous lung diseases, such as idiopathic pulmonary fibrosis (IPF), exhibit dilation of the airways. Accurate measurement of dilatation enables assessment of the progression of the disease. Unfortunately, the combination of image noise and airway bifurcations causes high variability in the profiles of cross-sectional areas, rendering the identification of affected regions very difficult. Here we introduce a noise-robust method for automatically detecting the location of progressive airway dilatation given two profiles of the same airway acquired at different time points. We propose a probabilistic model of abrupt relative variations between profiles and perform inference via Reversible Jump Markov Chain Monte Carlo sampling. We demonstrate the efficacy of the proposed method on two datasets: (i) images of healthy airways with simulated dilatation; and (ii) pairs of real images of IPF-affected airways acquired at one-year intervals. Our model is able to detect the starting location of airway dilatation with an accuracy of 2.5 mm on simulated data. The experiments on the IPF dataset display reasonable agreement with radiologists. We can compute a relative change in airway volume that may be useful for quantifying IPF disease progression.
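    A drastically simplified, non-Bayesian stand-in for the detection step is sketched below: a single changepoint is fitted to the log-ratio of two synthetic cross-sectional area profiles by exhaustive search, whereas the paper's Reversible Jump MCMC infers an unknown number of changepoints with full posterior uncertainty.

```python
# Simplified stand-in for the paper's RJMCMC inference: detect a single
# abrupt change in the relative variation (log-ratio) between two
# cross-sectional area profiles of the same airway. Synthetic data.
import numpy as np

rng = np.random.default_rng(1)
n, true_cp = 200, 120
baseline = 10.0 + 0.5 * np.sin(np.linspace(0, 4, n))   # healthy profile (mm^2)
follow_up = baseline.copy()
follow_up[true_cp:] *= 1.4                              # dilatation after true_cp
baseline += rng.normal(0, 0.4, n)                       # measurement noise
follow_up += rng.normal(0, 0.4, n)

ratio = np.log(follow_up) - np.log(baseline)            # relative variation

def sse(x):
    """Squared error of a constant fit to the segment."""
    return ((x - x.mean()) ** 2).sum() if len(x) else 0.0

# Scan all candidate changepoints; pick the piecewise-constant fit
# with the smallest total squared error.
costs = [sse(ratio[:k]) + sse(ratio[k:]) for k in range(5, n - 5)]
estimated_cp = int(np.argmin(costs)) + 5
print("true changepoint:", true_cp, "estimated:", estimated_cp)
```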

    Full explicit consistency constraints in uncalibrated multiple homography estimation

    No full text
    We reveal a complete set of constraints that need to be imposed on a set of 3×3 matrices to ensure that the matrices represent genuine homographies associated with multiple planes between two views. We also show how to exploit these constraints to obtain more accurate estimates of the homography matrices between two views. Our study resolves a long-standing research question and provides a fresh perspective and a more in-depth understanding of the multiple homography estimation task.
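    For context, the per-plane baseline that such joint constraints improve on can be sketched with OpenCV: each plane's homography is estimated independently from its own correspondences, with no consistency enforced across planes. The synthetic correspondences and ground-truth matrices below are placeholders.

```python
# Baseline that joint consistency constraints improve on: estimate each
# plane's homography independently from its own correspondences.
# Correspondences here are synthetic placeholders.
import numpy as np
import cv2

rng = np.random.default_rng(0)

def synthetic_plane(H_true, n=40, noise=0.5):
    """Generate noisy (src, dst) correspondences consistent with H_true."""
    src = rng.random((n, 2)).astype(np.float32) * 640
    src_h = np.hstack([src, np.ones((n, 1), np.float32)])
    dst_h = src_h @ H_true.T
    dst = dst_h[:, :2] / dst_h[:, 2:]
    return src, (dst + rng.normal(0, noise, dst.shape)).astype(np.float32)

def estimate_homographies(planes):
    """Independent RANSAC homography per plane -- no joint constraints."""
    results = []
    for src, dst in planes:
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
        results.append(H / H[2, 2])        # remove the scale ambiguity
    return results

H1 = np.array([[1.02, 0.01, 5.0], [0.00, 0.98, -3.0], [1e-4, 0.0, 1.0]])
H2 = np.array([[0.95, -0.02, 12.0], [0.03, 1.05, 4.0], [0.0, 1e-4, 1.0]])
for i, H in enumerate(estimate_homographies([synthetic_plane(H1),
                                              synthetic_plane(H2)])):
    print(f"plane {i} estimated homography:\n", np.round(H, 3))
```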