A Hierarchical Framework for Phylogenetic and Ancestral Genome Reconstruction on Whole Genome Data
Gene order evolves under events such as rearrangements, duplications, and losses, which can change both the order and content of genes along the genome over the long history of genome evolution. Recently, the accumulation of genomic sequences has given researchers the chance to address long-standing problems about the phylogenies, or evolutionary histories, of sets of species, and about ancestral genomic content and gene orders. Over the past few years, these problems have attracted so much interest that a large number of algorithms, following different approaches, have been proposed to solve them. The work presented in this dissertation focuses on algorithms and models for whole-genome evolution and their applications to phylogeny and ancestor inference from gene order. We developed a flexible ancestor reconstruction method (FARM) within the framework of maximum likelihood and weighted maximum matching. We designed a binary-encoding-based framework to reconstruct the evolutionary history of whole-genome gene orders. We developed algorithms to estimate and predict missing adjacencies in the ancestral reconstruction procedure, so that gene orders can be restored even when the leaf genomes are far from each other. We developed a pipeline combining maximum likelihood, weighted maximum matching, and variable-length binary encoding for the estimation of ancestral gene content, to reconstruct ancestral genomes under various evolutionary models, including genome rearrangements, additions, losses, and duplications, with high accuracy and low running time. Phylogenetic analyses of whole-genome data have been limited to small collections of genomes and low-resolution data, or to data without massive duplications. We designed a maximum-likelihood approach to phylogeny analysis (VLWD) based on variable-length binary encoding to reconstruct phylogenies from whole-genome data, improving accuracy and making it capable of handling whole-genome data such as triploids and tetraploids. Maximum-likelihood approaches have been applied to ancestral reconstruction but remain primitive for whole-genome data. We developed a hierarchical framework for ancestral reconstruction that uses variable-length binary encoding for content estimation, then fixes known adjacencies and predicts missing ones during adjacency collection, and finally applies weighted maximum matching for gene order assembly, extensively improving the performance of ancestral gene order reconstruction. We designed a series of experiments to validate these methods and compared the results with the most recent comparable methods; the results show that these methods are fast and accurate.
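As a concrete illustration of the binary encoding idea used throughout this work, the sketch below turns signed circular gene orders into 0/1 presence/absence rows over gene adjacencies, the kind of matrix a maximum-likelihood tool can consume. This is a minimal Python sketch under assumed conventions; the function names and the exact encoding are illustrative, not the dissertation's actual software.

    # Minimal sketch: binary encoding of gene adjacencies for ML phylogeny input.
    def adjacencies(genome):
        """Return the adjacency set of a circular signed gene order."""
        adj = set()
        n = len(genome)
        for i in range(n):
            a, b = genome[i], genome[(i + 1) % n]
            # The extremity of a facing b is its head if a is positive, else its
            # tail; symmetrically for b. A frozenset makes the pair orientation-free.
            left = (abs(a), 'h' if a > 0 else 't')
            right = (abs(b), 't' if b > 0 else 'h')
            adj.add(frozenset((left, right)))
        return adj

    def binary_encoding(genomes):
        """Encode each genome as a 0/1 row over all adjacencies seen in any genome."""
        adj_sets = {name: adjacencies(g) for name, g in genomes.items()}
        universe = sorted({a for s in adj_sets.values() for a in s},
                          key=lambda a: tuple(sorted(a)))
        return {name: ''.join('1' if a in s else '0' for a in universe)
                for name, s in adj_sets.items()}

    # Three small circular genomes; the rows form a binary alignment for ML software.
    rows = binary_encoding({'A': [1, 2, 3, 4],
                            'B': [1, -3, -2, 4],
                            'C': [1, 2, -4, -3]})
    for name, row in rows.items():
        print(name, row)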
A Fixed-Point Model for Pancreas Segmentation in Abdominal CT Scans
Deep neural networks have been widely adopted for automatic organ segmentation from abdominal CT scans. However, the segmentation accuracy of some small organs (e.g., the pancreas) is sometimes unsatisfactory, arguably because deep networks are easily disrupted by the complex and variable background regions, which occupy a large fraction of the input volume. In this paper, we formulate this problem as a fixed-point model that uses a predicted segmentation mask to shrink the input region. This is motivated by the fact that a smaller input region often leads to more accurate segmentation. In the training process, we use the ground-truth annotation to generate accurate input regions and optimize network weights. At the testing stage, we fix the network parameters and update the segmentation results in an iterative manner. We evaluate our approach on the NIH pancreas segmentation dataset and outperform the state-of-the-art by more than 4%, measured by the average Dice-Sørensen Coefficient (DSC). In addition, we report 62.43% DSC in the worst case, which supports the reliability of our approach in clinical applications. (Accepted to MICCAI 2017; 8 pages, 3 figures.)
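The fixed-point testing loop lends itself to a short sketch. Below is a minimal Python version, assuming a trained network is available as a segment(volume) callable returning a probability mask; the crop margin, iteration cap, and DSC-based stopping rule are illustrative choices, not the paper's exact settings.

    import numpy as np

    def crop_to_mask(volume, mask, margin=20):
        """Crop volume to the bounding box of the binary mask, plus a margin."""
        idx = np.argwhere(mask > 0.5)      # assumes the coarse pass found the organ
        lo = np.maximum(idx.min(axis=0) - margin, 0)
        hi = np.minimum(idx.max(axis=0) + margin + 1, volume.shape)
        region = tuple(slice(l, h) for l, h in zip(lo, hi))
        return volume[region], region

    def fixed_point_segment(volume, segment, max_iters=10, tol=0.95):
        """Iterate: segment, crop to the prediction, re-segment, until stable."""
        mask = segment(volume)             # coarse pass on the whole volume
        for _ in range(max_iters):
            cropped, region = crop_to_mask(volume, mask)
            refined = np.zeros_like(mask)
            refined[region] = segment(cropped)  # fine pass on the shrunken input
            dsc = 2 * (mask * refined).sum() / (mask.sum() + refined.sum() + 1e-8)
            mask = refined
            if dsc > tol:                  # consecutive masks agree: fixed point
                break
        return mask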
Recurrent Saliency Transformation Network: Incorporating Multi-Stage Visual Cues for Small Organ Segmentation
We aim at segmenting small organs (e.g., the pancreas) from abdominal CT scans. As the target often occupies a relatively small region of the input image, deep neural networks can be easily confused by the complex and variable background. To alleviate this, researchers proposed a coarse-to-fine approach, which uses the prediction from the first (coarse) stage to indicate a smaller input region for the second (fine) stage. Despite its effectiveness, this algorithm deals with the two stages individually: it lacks a global energy function to optimize, which limits its ability to incorporate multi-stage visual cues. The missing contextual information leads to unsatisfying convergence across iterations, so that the fine stage sometimes produces even lower segmentation accuracy than the coarse stage.
This paper presents a Recurrent Saliency Transformation Network. The key innovation is a saliency transformation module, which repeatedly converts the segmentation probability map from the previous iteration into spatial weights and applies these weights to the current iteration. This brings two-fold benefits. In training, it allows joint optimization over the deep networks dealing with different input scales. In testing, it propagates multi-stage visual information throughout the iterations to improve segmentation accuracy. Experiments on the NIH pancreas segmentation dataset demonstrate state-of-the-art accuracy, outperforming the previous best by an average of over 2%. Much higher accuracies are also reported on several small organs in a larger dataset that we collected. In addition, our approach enjoys better convergence properties, making it more efficient and reliable in practice. (Accepted to CVPR 2018; 10 pages, 6 figures.)
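A minimal sketch of the saliency transformation idea follows, in PyTorch: the previous iteration's probability map passes through a small learnable layer to produce spatial weights that re-weight the input for the next pass. The specific layer and the stand-in network are assumptions for illustration, not the paper's architecture.

    import torch
    import torch.nn as nn

    class SaliencyTransform(nn.Module):
        """Turn the previous iteration's probability map into spatial weights."""
        def __init__(self, kernel_size=3):
            super().__init__()
            # a small learnable convolution; the exact layer choice is illustrative
            self.transform = nn.Conv2d(1, 1, kernel_size, padding=kernel_size // 2)

        def forward(self, image, prev_prob):
            weights = torch.sigmoid(self.transform(prev_prob))
            return image * weights   # saliency-weighted input for the next pass

    # Usage sketch: alternate the network and the transformation for a few
    # iterations; in training, back-propagate through all iterations jointly.
    net = lambda x: torch.sigmoid(x.mean(dim=1, keepdim=True))  # stand-in network
    st = SaliencyTransform()
    image = torch.randn(1, 1, 64, 64)
    prob = net(image)
    for _ in range(3):
        prob = net(st(image, prob))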
Visual Concepts and Compositional Voting
It is very attractive to formulate vision in terms of pattern theory
\cite{Mumford2010pattern}, where patterns are defined hierarchically by
compositions of elementary building blocks. But applying pattern theory to real-world images is currently less successful than discriminative methods such as
deep networks. Deep networks, however, are black boxes that are hard to interpret and can easily be fooled by adding occluding objects. It is natural
to wonder whether by better understanding deep networks we can extract building
blocks which can be used to develop pattern theoretic models. This motivates us
to study the internal representations of a deep network using vehicle images
from the PASCAL3D+ dataset. We use clustering algorithms to study the
population activities of the features and extract a set of visual concepts
which we show are visually tight and correspond to semantic parts of vehicles.
To analyze this, we annotate these vehicles with their semantic parts to create a
new dataset, VehicleSemanticParts, and evaluate visual concepts as unsupervised
part detectors. We show that visual concepts perform fairly well but are
outperformed by supervised discriminative methods such as Support Vector
Machines (SVM). We next give a more detailed analysis of visual concepts and
how they relate to semantic parts. Following this, we use the visual concepts
as building blocks for a simple pattern theoretical model, which we call
compositional voting. In this model several visual concepts combine to detect
semantic parts. We show that this approach is significantly better than
discriminative methods like SVM and deep networks trained specifically for
semantic part detection. Finally, we return to studying occlusion by creating
an annotated dataset with occlusion, called VehicleOcclusion, and show that
compositional voting outperforms even deep networks when the amount of occlusion becomes large. (Accepted by Annals of Mathematical Sciences and Applications.)
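To make the voting mechanism concrete, here is one heavily simplified reading in Python: each visual concept's response map is shifted by its expected offset to the part and accumulated with a reliability weight, so peaks in the summed evidence mark candidate part locations. The offsets and weights are placeholders; the paper's actual model learns its evidence terms and treats occlusion explicitly.

    import numpy as np
    from scipy.ndimage import shift

    def vote_for_part(concept_maps, offsets, weights):
        """Sum spatially shifted concept responses into one part evidence map.

        concept_maps: dict name -> 2D response map of a visual concept
        offsets:      dict name -> (dy, dx) expected displacement to the part
        weights:      dict name -> reliability weight of that concept's vote
        """
        evidence = np.zeros_like(next(iter(concept_maps.values())))
        for name, response in concept_maps.items():
            # shift each concept's response by its expected offset and accumulate
            evidence += weights[name] * shift(response, offsets[name],
                                              order=1, cval=0.0)
        return evidence  # peaks mark likely semantic part locations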
Phylogeny Analysis from Gene-Order Data with Massive Duplications
Background: Gene order changes, including rearrangements, insertions, deletions, and duplications, have been used as a new type of data source for phylogenetic reconstruction. Because these changes are rare compared to sequence mutations, they allow the inference of phylogeny further back in evolutionary time. Many computational methods exist for the reconstruction of gene-order phylogenies, including widely used maximum parsimony methods and maximum likelihood methods. However, both face challenges in handling large genomes with many duplicated genes, especially in the presence of whole genome duplication.
Methods: In this paper, we present three simple yet powerful methods based on maximum-likelihood (ML) approaches that encode multiplicities of both gene adjacency and gene content information for phylogenetic reconstruction.
Results: Extensive experiments on simulated data sets show that our new methods achieve more accurate phylogenies than existing approaches. We also evaluate our methods on real whole-genome data from eleven mammals. The package is publicly accessible at http://www.geneorder.org.
Conclusions: Our new encoding schemes successfully incorporate the multiplicity information of gene adjacencies and gene content into an ML framework, and show promising results in reconstructing phylogenies from whole-genome data in the presence of massive duplications.
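One simple way to realize the multiplicity encoding described above, sketched in Python under assumed conventions: each adjacency (or gene) contributes one binary character per threshold "appears at least k times", so duplicated adjacencies become distinguishable from single copies. This is an illustrative scheme in the spirit of the paper, not its exact encoding.

    from collections import Counter

    def multiplicity_encoding(adjacency_lists):
        """adjacency_lists: dict genome name -> list of adjacencies, with repeats."""
        counts = {g: Counter(adjs) for g, adjs in adjacency_lists.items()}
        universe = sorted({a for c in counts.values() for a in c})
        max_mult = {a: max(c[a] for c in counts.values()) for a in universe}
        rows = {}
        for g, c in counts.items():
            bits = []
            for a in universe:
                # one binary character per threshold: "appears at least k times"
                bits.extend('1' if c[a] >= k else '0'
                            for k in range(1, max_mult[a] + 1))
            rows[g] = ''.join(bits)
        return rows

    # 'bc' is duplicated in genome A, so it contributes two characters per row.
    print(multiplicity_encoding({'A': ['ab', 'bc', 'bc'],
                                 'B': ['ab', 'bc']}))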
Alternate Interchange Signing Study for Indiana Highways
The main objectives of this research were to (1) understand signing issues from the perspective of drivers and (2) develop recommendations for improving interchange signing in Indiana to aid driver understanding and increase the safety and efficiency of highway traffic operations. An online survey was designed and distributed through email, social media, online newspapers, and a survey company, with the goal of better understanding driver thinking when approaching decision-making areas on the interstate. The analysis of the survey results revealed the following. Drivers usually do not know the interchange type as they approach an interchange on the freeway. Drivers are most interested in which lanes they should be in when approaching an interchange, even in advance of typical signing locations. Drivers do not like signs that require cognitive work, since such signs delay driving decisions by creating uncertainty. Different drivers need different types of information from signs, such as cardinal direction, destination name, road name, and lane assignments; therefore, a perfect sign for one driver may be confusing or an information overload for another. In some instances, a driver who is familiar with the area is confused by the signs because the sign information contradicts the driver’s knowledge.
Segment Anything in 3D with NeRFs
The Segment Anything Model (SAM) has demonstrated its effectiveness in segmenting any object or part in various 2D images, yet its ability in 3D has not been fully explored. The real world is composed of numerous 3D scenes and objects. Due to the scarcity of accessible 3D data and the high cost of its acquisition and annotation, lifting SAM to 3D is a challenging but valuable research avenue. With this in mind, we propose a novel framework to Segment Anything in 3D, named SA3D. Given a neural radiance field (NeRF) model, SA3D allows users to obtain the 3D segmentation result of any target object via only one-shot manual prompting in a single rendered view. With the input prompts, SAM cuts out the target object from the corresponding view. The obtained 2D segmentation mask is projected onto 3D mask grids via density-guided inverse rendering. 2D masks from other views are then rendered; these are mostly incomplete but serve as cross-view self-prompts that are fed into SAM again. The completed masks are projected onto the mask grids. This procedure is executed iteratively until accurate 3D masks are learned. SA3D can adapt to various radiance fields effectively without any additional redesign. The entire segmentation process can be completed in approximately two minutes without any engineering optimization. Our experiments demonstrate the effectiveness of SA3D in different scenes, highlighting the potential of SAM in 3D scene perception. The project page is at https://jumpat.github.io/SA3D/. (Work in progress.)
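The cross-view self-prompting loop can be summarized structurally as below. All callables (sam_predict, render_mask, inverse_render) and the mask_grids argument are hypothetical placeholders standing in for SAM inference, NeRF mask rendering, density-guided inverse rendering, and the 3D mask grids; they are not real SA3D or SAM API names.

    def segment_in_3d(views, prompt, sam_predict, render_mask, inverse_render,
                      mask_grids):
        """views: ordered camera views; prompt: one-shot user prompt in views[0]."""
        # 1) SAM cuts out the target object in the prompted view
        mask2d = sam_predict(views[0], prompts=[prompt])
        # 2) density-guided inverse rendering projects the 2D mask onto 3D grids
        inverse_render(mask2d, views[0], mask_grids)
        # 3) remaining views: render an (incomplete) mask, use it as a self-prompt
        for view in views[1:]:
            partial = render_mask(mask_grids, view)           # usually incomplete
            completed = sam_predict(view, prompts=[partial])  # self-prompted SAM
            inverse_render(completed, view, mask_grids)       # accumulate into 3D
        return mask_grids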