
    SLIQ: Simple Linear Inequalities for Efficient Contig Scaffolding

    Scaffolding is an important subproblem in "de novo" genome assembly in which mate pair data are used to construct a linear sequence of contigs separated by gaps. Here we present SLIQ, a set of simple linear inequalities, derived from the geometry of contigs on the line, that can be used to predict the relative positions and orientations of contigs from individual mate pair reads and thus produce a contig digraph. The SLIQ inequalities can also filter out unreliable mate pairs and can serve as a preprocessing step for any scaffolding algorithm. We tested the SLIQ inequalities on five real data sets ranging in complexity from simple bacterial genomes to complex mammalian genomes and compared the results to the majority voting procedure used by many other scaffolding algorithms. SLIQ predicted the relative positions and orientations of the contigs with high accuracy in all cases and gave more accurate position predictions than majority voting for complex genomes, in particular the human genome. Finally, we present a simple scaffolding algorithm that produces linear scaffolds given a contig digraph. We show that our algorithm is very efficient compared to other scaffolding algorithms while maintaining high accuracy in predicting both contig positions and orientations for real data sets.
    Comment: 16 pages, 6 figures, 7 tables
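    The geometric idea behind such inequalities can be illustrated with a toy sketch. The function, variable names, and the specific inequality below are illustrative assumptions, not the paper's actual inequality set; it assumes both reads map in forward orientation and that the mean insert size is known:

```python
def predict_order(len_a, a, len_b, b, insert_size):
    """Toy check of whether contig A plausibly precedes contig B.

    A mate pair maps read 1 to contig A at offset a and read 2 to
    contig B at offset b. If A precedes B on the line, the pair spans
    (len_a - a) + gap + b = insert_size, so the implied gap is
    gap = insert_size - (len_a - a) - b.
    A non-negative implied gap is consistent with the order A -> B;
    a strongly negative one suggests the order (or the pair) is wrong.
    """
    gap = insert_size - (len_a - a) - b
    return gap >= 0

# Pair near the right end of A and left end of B: consistent with A -> B.
print(predict_order(1000, 800, 1000, 100, 500))
# Pair deep inside both contigs: inconsistent, would be filtered out.
print(predict_order(1000, 100, 1000, 800, 500))
```

    A real filter would also handle reverse-strand mappings and tolerate insert-size variance with a slack term rather than a hard zero threshold.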

    Write-limited sorts and joins for persistent memory

    To mitigate the impact of the widening gap between the memory needs of CPUs and what standard memory technology can deliver, system architects have introduced a new class of memory technology termed persistent memory. Persistent memory is byte-addressable, but exhibits asymmetric I/O: writes are typically one order of magnitude more expensive than reads. Byte addressability combined with I/O asymmetry renders the performance profile of persistent memory unique. Thus, it becomes imperative to find new ways to seamlessly incorporate it into database systems. We do so in the context of query processing. We focus on the fundamental operations of sort and join processing. We introduce the notion of write-limited algorithms that effectively minimize the I/O cost. We give a high-level API that enables the system to dynamically optimize the workflow of the algorithms; or, alternatively, allows the developer to tune the write profile of the algorithms. We present four different techniques to incorporate persistent memory into the database processing stack in light of this API. We have implemented and extensively evaluated all our proposals. Our results show that the algorithms deliver on their promise of I/O-minimality and tunable performance. We showcase the merits and deficiencies of each implementation technique, thus taking a solid first step towards incorporating persistent memory into query processing.
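    The write-limited idea can be sketched with a classic example (illustrative only, not one of the paper's algorithms): selection sort performs at most n swaps, trading O(n^2) reads for O(n) element writes, which is exactly the trade that pays off when writes cost an order of magnitude more than reads. The `WriteCountingList` wrapper below is a hypothetical cost-model instrument:

```python
class WriteCountingList:
    """Wraps a list and counts element writes, mimicking the asymmetric
    cost model of persistent memory (writes ~10x more expensive than reads)."""
    def __init__(self, data):
        self.data = list(data)
        self.writes = 0

    def __getitem__(self, i):
        return self.data[i]          # reads are cheap, not counted here

    def __setitem__(self, i, v):
        self.writes += 1             # every element write is charged
        self.data[i] = v

    def __len__(self):
        return len(self.data)


def selection_sort(a):
    """Write-limited sort: at most n-1 swaps (2 element writes each),
    at the price of O(n^2) comparisons/reads."""
    n = len(a)
    for i in range(n):
        m = min(range(i, n), key=lambda j: a[j])
        if m != i:
            a[i], a[m] = a[m], a[i]


wl = WriteCountingList([5, 3, 1, 4, 2])
selection_sort(wl)
print(wl.data, wl.writes)  # sorted, with at most 2*(n-1) = 8 writes
```

    The paper's algorithms are more sophisticated (and cover joins as well), but the same accounting, counting writes separately from reads, is what "write-limited" optimizes.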

    Automatic Liver Segmentation Using an Adversarial Image-to-Image Network

    Automatic liver segmentation in 3D medical images is essential in many clinical applications, such as pathological diagnosis of hepatic diseases, surgical planning, and postoperative assessment. However, it is still a very challenging task due to the complex background, fuzzy boundaries, and varied appearance of the liver. In this paper, we propose an automatic and efficient algorithm to segment the liver from 3D CT volumes. A deep image-to-image network (DI2IN) is first deployed to generate the liver segmentation, employing a convolutional encoder-decoder architecture combined with multi-level feature concatenation and deep supervision. An adversarial network is then used during training to discriminate the output of DI2IN from the ground truth, which further boosts the performance of DI2IN. The proposed method is trained on an annotated dataset of 1000 CT volumes with different scanning protocols (e.g., contrast and non-contrast, various resolutions and positions) and large variations in populations (e.g., ages and pathology). Our approach outperforms the state-of-the-art solutions in terms of segmentation accuracy and computing efficiency.
    Comment: Accepted by MICCAI 201
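    The adversarial training objective described above can be sketched abstractly (an illustrative stand-in, not the authors' DI2IN implementation): the segmenter minimizes a voxel-wise cross-entropy plus a weighted adversarial term that rewards fooling the discriminator into scoring its output as ground truth. The function names and the weight `lam` are assumptions:

```python
import math

def bce(pred, target, eps=1e-7):
    """Mean binary cross-entropy between predicted probabilities and labels."""
    total = 0.0
    for p, t in zip(pred, target):
        p = min(max(p, eps), 1.0 - eps)   # clip to avoid log(0)
        total += -(t * math.log(p) + (1 - t) * math.log(1 - p))
    return total / len(pred)

def generator_loss(pred_mask, true_mask, disc_score_on_pred, lam=0.1):
    """Segmentation loss + lam * adversarial loss.

    disc_score_on_pred stands for D(G(x)) in (0, 1); the adversarial
    term -log D(G(x)) shrinks as the discriminator is fooled, pushing
    the segmenter toward outputs the discriminator cannot tell apart
    from ground-truth masks.
    """
    seg = bce(pred_mask, true_mask)
    adv = -math.log(min(max(disc_score_on_pred, 1e-7), 1.0))
    return seg + lam * adv
```

    In the full method both networks are deep CNNs trained jointly, with the discriminator simultaneously trained to separate predictions from ground truth; the sketch only shows the shape of the combined objective.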

    Olfactory learning alters navigation strategies and behavioral variability in C. elegans

    Animals adaptively adjust their behavioral responses to sensory input depending on past experience. This flexible brain computation is crucial for survival and is of great interest in neuroscience. The nematode C. elegans modulates its navigation behavior depending on whether the odor butanone was associated with food (appetitive training) or starvation (aversive training), and will then climb up the butanone gradient or ignore it, respectively. However, the exact change in navigation strategy in response to learning is still unknown. Here we study learned odor navigation in worms by combining precise experimental measurement with a novel descriptive model of navigation. Our model consists of two known navigation strategies in worms: biased random walk and weathervaning. We infer weights on these strategies by applying the model to worm navigation trajectories and the exact odor concentration each worm experiences. Compared to naive worms, appetitive-trained worms up-regulate the biased random walk strategy, and aversive-trained worms down-regulate the weathervaning strategy. The statistical model predicts the past training condition from navigation data with >90% accuracy, outperforming the classical chemotaxis metric. We find that behavioral variability is altered by learning, such that worms are less variable after training than naive ones. The model further predicts the learning-dependent response and variability under optogenetic perturbation of the olfactory neuron AWC^ON. Lastly, we investigate neural circuits downstream of AWC^ON that are differentially recruited for learned odor-guided navigation. Together, we provide a new paradigm to quantify flexible navigation algorithms and pinpoint the underlying neural substrates.
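    A minimal simulation can illustrate the two-strategy mixture (illustrative only, not the authors' fitted model): a biased random walk, where the probability of a sharp reorientation falls when concentration is rising, plus weathervaning, a gradual steering bias toward the gradient. The weights `w_brw` and `w_wv` play the role of the inferred strategy weights; all rates and the concentration field are hypothetical:

```python
import math
import random

def simulate(w_brw=1.0, w_wv=1.0, steps=2000, seed=0):
    """Simulate a worm-like walker in a 2D odor field peaked at (50, 0).

    Returns the final concentration (negative distance to the peak),
    so values closer to 0 mean better chemotaxis.
    """
    rng = random.Random(seed)
    x, y = 0.0, 0.0
    heading = 0.0
    conc = lambda px, py: -math.hypot(px - 50.0, py)
    prev_c = conc(x, y)
    for _ in range(steps):
        c = conc(x, y)
        dc, prev_c = c - prev_c, c
        # Biased random walk: fewer sharp turns ("pirouettes") up-gradient.
        p_turn = 0.05 * (1.0 - w_brw * math.tanh(5.0 * dc))
        if rng.random() < min(max(p_turn, 0.0), 1.0):
            heading = rng.uniform(-math.pi, math.pi)
        # Weathervaning: gradual steering toward the gradient direction.
        grad_dir = math.atan2(-y, 50.0 - x)
        heading += 0.1 * w_wv * math.sin(grad_dir - heading)
        heading += rng.gauss(0.0, 0.1)  # motor noise
        x += math.cos(heading)
        y += math.sin(heading)
    return conc(x, y)
```

    Down-weighting `w_wv` (as in aversive-trained worms) or up-weighting `w_brw` (appetitive) changes the trajectory statistics, which is the kind of signature the descriptive model exploits to decode the training condition.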