
    The Footprint Sorting Problem

    Phylogenetic footprints are short pieces of noncoding DNA sequence in the vicinity of a gene that are conserved between evolutionarily distant species. A seemingly simple problem is to sort footprints in their order along the genomes. It is complicated by the fact that not all footprints are collinear: they may cross each other. The problem thus becomes the identification of the crossing footprints, the sorting of the remaining collinear cliques, and finally the insertion of the noncollinear ones at “reasonable” positions. We show that solving the footprint sorting problem requires the solution of the “Minimum Weight Feedback Vertex Set Problem”, which is known to be NP-complete and APX-hard. Nevertheless, good approximations can be obtained for data sets of interest. The remaining steps of the sorting process are straightforward: computation of the transitive closure of an acyclic graph, linear extension of the resulting partial order, and finally sorting w.r.t. the linear extension. Alternatively, the footprint sorting problem can be rephrased as a combinatorial optimization problem for which approximate solutions can be obtained by means of general-purpose heuristics. Footprint sortings obtained with different methods can be compared using a version of multiple sequence alignment that allows the identification of unambiguously ordered sublists. As an application we show that the rat genome has a slightly increased insertion/deletion rate in comparison to the mouse genome.
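
    The pipeline above (break the cycles created by crossing footprints, then take a linear extension of the acyclic remainder) can be sketched in a few lines. The snippet below is a hypothetical illustration, not the paper's implementation: greedy deletion of the lightest vertex on each reported cycle merely stands in for a proper approximation of the minimum weight feedback vertex set.

        from graphlib import TopologicalSorter, CycleError

        def sort_footprints(predecessors, weight):
            # predecessors: footprint -> set of footprints that must precede it
            # weight:       footprint -> conservation weight (heavier = keep longer)
            # Crossing footprints show up as cycles. We greedily delete the
            # lightest vertex on each reported cycle, then return a linear
            # extension (topological sort) of the remaining DAG plus the
            # set of footprints identified as crossing.
            removed = set()
            while True:
                pruned = {v: {u for u in pre if u not in removed}
                          for v, pre in predecessors.items() if v not in removed}
                try:
                    return list(TopologicalSorter(pruned).static_order()), removed
                except CycleError as err:
                    cycle = err.args[1]
                    removed.add(min(cycle, key=lambda v: weight[v]))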

    Computerized Analysis of Magnetic Resonance Images to Study Cerebral Anatomy in Developing Neonates

    The study of cerebral anatomy in developing neonates is of great importance for the understanding of brain development during the early period of life. This dissertation therefore focuses on three challenges in the modelling of cerebral anatomy in neonates during brain development. The methods that have been developed all use Magnetic Resonance Images (MRI) as source data. To facilitate study of vascular development in the neonatal period, a set of image analysis algorithms is developed to automatically extract and model cerebral vessel trees. The whole process consists of cerebral vessel tracking from automatically placed seed points, vessel tree generation, and vasculature registration and matching. These algorithms have been tested on clinical Time-of-Flight (TOF) MR angiographic datasets. To facilitate study of the neonatal cortex, a complete cerebral cortex segmentation and reconstruction pipeline has been developed. Segmentation of the neonatal cortex is not handled effectively by existing algorithms designed for the adult brain because the contrast between grey and white matter is reversed, which causes pixels containing tissue mixtures to be incorrectly labelled by conventional methods. The neonatal cortical segmentation method that has been developed is based on a novel expectation-maximization (EM) method with explicit correction for mislabelled partial volume voxels. Based on the resulting cortical segmentation, an implicit surface evolution technique is adopted for the reconstruction of the cortex in neonates. The performance of the method is assessed through a detailed landmark study. To facilitate study of cortical development, a cortical surface registration algorithm for aligning the cortical surface is developed. The method first inflates extracted cortical surfaces and then performs a non-rigid surface registration using free-form deformations (FFDs) to remove residual misalignment. Validation experiments using data labelled by an expert observer demonstrate that the method can capture local changes and follow the growth of specific sulci.
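
    At the heart of the segmentation step is a standard EM loop over a Gaussian mixture of tissue classes. The sketch below shows that loop for a two-class, single-channel case with illustrative names; the dissertation's actual method adds the explicit partial volume correction, which is omitted here.

        import numpy as np

        def em_two_class(intensities, n_iter=50):
            # Two-class Gaussian mixture EM over voxel intensities (e.g. grey
            # vs. white matter). Names are illustrative only.
            x = np.asarray(intensities, dtype=float)
            mu = np.percentile(x, [25.0, 75.0])       # crude initialisation
            sigma = np.array([x.std(), x.std()])
            prior = np.array([0.5, 0.5])
            for _ in range(n_iter):
                # E-step: per-voxel class responsibilities
                lik = np.stack([prior[k] * np.exp(-0.5 * ((x - mu[k]) / sigma[k]) ** 2)
                                / (sigma[k] * np.sqrt(2.0 * np.pi))
                                for k in range(2)])
                resp = lik / lik.sum(axis=0, keepdims=True)
                # M-step: re-estimate means, variances and mixing weights
                for k in range(2):
                    w = resp[k]
                    mu[k] = (w * x).sum() / w.sum()
                    sigma[k] = np.sqrt((w * (x - mu[k]) ** 2).sum() / w.sum())
                    prior[k] = w.mean()
            return resp.argmax(axis=0), mu, sigma     # hard labels + parameters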

    MIDAS, prototype Multivariate Interactive Digital Analysis System for large area earth resources surveys. Volume 1: System description

    A third-generation, fast, low-cost, multispectral recognition system (MIDAS) able to keep pace with the large quantity and high rates of data acquisition from large regions with present and projected sensors is described. The program can process a complete ERTS frame in forty seconds and provide a color map of sixteen constituent categories in a few minutes. A principal objective of the MIDAS program is to provide a system well interfaced with the human operator and thus to obtain large overall reductions in turn-around time and significant gains in throughput. The hardware and software generated in the overall program are described. The system contains a midi-computer to control the various high-speed processing elements in the data path, a preprocessor to condition data, and a classifier that implements an all-digital prototype of a multivariate Gaussian maximum likelihood or Bayesian decision algorithm. Sufficient software was developed to perform signature extraction, control the preprocessor, compute classifier coefficients, control the classifier operation, operate the color display and printer, and diagnose operation.
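
    The decision rule at the core of the classifier is per-pixel multivariate Gaussian maximum likelihood, with log-priors added in the Bayesian variant. A minimal sketch, with illustrative array layouts rather than the MIDAS hardware interface:

        import numpy as np

        def gaussian_ml_classify(pixels, means, covs, priors=None):
            # pixels: (N, B) B-band spectra; means[k]: (B,); covs[k]: (B, B).
            # Picks, per pixel, the class maximising the Gaussian
            # log-likelihood; adding log-priors gives the Bayesian rule.
            scores = np.empty((len(pixels), len(means)))
            for k, (m, c) in enumerate(zip(means, covs)):
                inv = np.linalg.inv(c)
                _, logdet = np.linalg.slogdet(c)
                d = pixels - m
                maha = np.einsum('ni,ij,nj->n', d, inv, d)   # Mahalanobis term
                scores[:, k] = -0.5 * (logdet + maha)
                if priors is not None:
                    scores[:, k] += np.log(priors[k])
            return scores.argmax(axis=1)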

    Hierarchical N-Body problem on graphics processor unit

    Galactic simulation is an important cosmological computation, and represents a classical N-body problem suitable for implementation on vector processors. The Barnes-Hut algorithm is a hierarchical N-body method used to simulate such galactic evolution systems. Stream processing architectures expose the data locality and concurrency available in multimedia applications. On the other hand, there are numerous compute-intensive scientific and engineering applications that can potentially benefit from such computational and communication models. These applications are traditionally implemented on vector processors. Stream-architecture-based graphics processor units (GPUs) present a novel computational alternative for efficiently implementing such high-performance applications. Rendering on a stream architecture sustains high performance, while user-programmable modules allow complex algorithms to be implemented efficiently. GPUs have evolved over the years from fixed-function pipelines to user-programmable processors. In this thesis, we focus on the implementation of the Barnes-Hut algorithm on typical current-generation programmable GPUs. We analyse the computation and communication requirements of the Barnes-Hut algorithm to show its suitability for user-programmable GPUs. Our implementation of the Barnes-Hut algorithm is formulated as a fragment shader targeting the selected GPU. We discuss implementation details, design issues, results, and challenges encountered in programming the fragment shader.
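
    The core of the Barnes-Hut method is its acceptance test: a distant cell is treated as a single pseudo-particle when its angular size falls below a threshold theta, reducing the cost from O(N^2) to roughly O(N log N). A CPU-side sketch of that traversal follows; the thesis maps it to a fragment shader, and the node record and names here are hypothetical.

        from dataclasses import dataclass, field
        import numpy as np

        @dataclass
        class Cell:                      # hypothetical tree-node record
            mass: float
            com: np.ndarray              # centre of mass of the cell
            size: float                  # side length of the cell
            children: list = field(default_factory=list)

        def accel(pos, cell, theta=0.5, G=1.0, eps=1e-2):
            # Acceleration on a body at `pos` due to `cell`. A cell whose
            # angular size size/dist is below theta is treated as one
            # pseudo-particle; otherwise we recurse into its children.
            d = cell.com - pos
            dist = np.sqrt(d @ d + eps * eps)        # softened distance
            if not cell.children or cell.size / dist < theta:
                return G * cell.mass * d / dist ** 3
            return sum(accel(pos, c, theta, G, eps) for c in cell.children)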

    Gunrock: A High-Performance Graph Processing Library on the GPU

    For large-scale graph analytics on the GPU, the irregularity of data access and control flow and the complexity of programming GPUs have been two significant challenges for developing a programmable high-performance graph library. "Gunrock", our graph-processing system designed specifically for the GPU, uses a high-level, bulk-synchronous, data-centric abstraction focused on operations on a vertex or edge frontier. Gunrock achieves a balance between performance and expressiveness by coupling high-performance GPU computing primitives and optimization strategies with a high-level programming model that allows programmers to quickly develop new graph primitives with small code size and minimal GPU programming knowledge. We evaluate Gunrock on five key graph primitives and show that Gunrock has on average at least an order of magnitude speedup over Boost and PowerGraph, comparable performance to the fastest GPU hardwired primitives, and better performance than any other GPU high-level graph library.
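
    The frontier abstraction is easiest to see on breadth-first search, one of the evaluated primitives: each iteration advances along edges out of the current frontier and then filters out already-visited vertices. A sequential sketch of that pattern (plain Python, not Gunrock's CUDA API):

        def bfs(adj, source):
            # adj: vertex -> list of neighbours. Each round is an "advance"
            # (expand every edge leaving the frontier) followed by a "filter"
            # (keep only first-time visits as the next frontier).
            depth, frontier, level = {source: 0}, [source], 0
            while frontier:
                level += 1
                candidates = [w for v in frontier for w in adj[v]]   # advance
                frontier = []
                for w in candidates:                                 # filter
                    if w not in depth:
                        depth[w] = level
                        frontier.append(w)
            return depth

        # e.g. bfs({0: [1, 2], 1: [3], 2: [3], 3: []}, 0) -> {0: 0, 1: 1, 2: 1, 3: 2}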

    Virtual prototyping with surface reconstruction and freeform geometric modeling using level-set method

    More and more products with complex geometries are being designed and manufactured using computer-aided design (CAD) and rapid prototyping (RP) technologies. The freeform surface is a geometric feature widely used in modern products such as car bodies, airfoils, and turbine blades, as well as in aesthetic artifacts. How to efficiently design and generate digital prototypes with freeform surfaces is an important issue in CAD. This work presents the development of a Virtual Sculpting system and addresses the issues of surface reconstruction from dexel data structures and freeform geometric modeling from a distance field structure using the level-set method. Our virtual sculpting method is based on the metaphor of carving a solid block into a 3D freeform object using a 3D haptic input device integrated with computer visualization. This dissertation presents the results of the study and consists primarily of four papers.
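
    The carving metaphor maps naturally onto distance fields: subtracting the tool volume from the block is a constructive solid geometry (CSG) difference, which on signed distance grids reduces to a pointwise operation. A minimal sketch under that assumption; the grid layout and names are illustrative and simplify away the system's actual level-set machinery.

        import numpy as np

        def carve(block_sdf, tool_sdf):
            # One sculpting step on signed distance grids (negative = inside).
            # Removing the tool volume from the block is the CSG difference
            # block \ tool, i.e. a pointwise max(d_block, -d_tool).
            return np.maximum(block_sdf, -tool_sdf)

        # Example: carve a spherical dent out of a solid cube.
        ax = np.linspace(-1.0, 1.0, 64)
        x, y, z = np.meshgrid(ax, ax, ax, indexing='ij')
        cube = np.maximum.reduce([abs(x), abs(y), abs(z)]) - 0.8   # block
        ball = np.sqrt((x - 0.8) ** 2 + y ** 2 + z ** 2) - 0.4     # tool
        dented = carve(cube, ball)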

    Tele-Autonomous control involving contact

    Object localization and its application in tele-autonomous systems are studied. Two object localization algorithms are presented, together with methods for extracting several important types of object features. The first algorithm is based on line-segment to line-segment matching. Line range sensors are used to extract line-segment features from an object, and the extracted features are matched to corresponding model features to compute the location of the object. The inputs of the second algorithm are not limited to line features: feature points (point-to-point matching) and feature unit direction vectors (vector-to-vector matching) can also be used, and there is no upper limit on the number of features supplied. The algorithm allows the use of redundant features to find a better solution. It uses dual number quaternions to represent the position and orientation of an object and uses the least-squares optimization method to find an optimal solution for the object's location. The advantage of this representation is that the method solves the location estimation by minimizing a single cost function associated with the sum of the orientation and position errors, and thus performs better, in both accuracy and speed, than other similar algorithms. The difficulties that arise when an operator controls a remote robot to perform manipulation tasks are also discussed. The main problems facing the operator are time delays in signal transmission and the uncertainties of the remote environment. How object localization techniques can be used together with other techniques, such as predictor displays and time desynchronization, to help overcome these difficulties is then discussed.
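
    For the point-to-point case, the underlying least-squares problem is classical rigid registration. The sketch below uses the SVD (Kabsch) solution as a stand-in; the thesis itself parameterises the pose with dual number quaternions so that rotation and translation are estimated by minimising a single combined cost.

        import numpy as np

        def fit_rigid(model_pts, sensed_pts):
            # Corresponding (N, 3) point sets; returns R, t minimising
            # sum ||R @ model_i + t - sensed_i||^2 (Kabsch/SVD solution).
            mc, sc = model_pts.mean(axis=0), sensed_pts.mean(axis=0)
            H = (model_pts - mc).T @ (sensed_pts - sc)       # cross-covariance
            U, _, Vt = np.linalg.svd(H)
            sign = np.sign(np.linalg.det(Vt.T @ U.T))
            R = Vt.T @ np.diag([1.0, 1.0, sign]) @ U.T       # proper rotation
            t = sc - R @ mc
            return R, t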

    Surface reconstruction using variational interpolation

    Surface reconstruction of anatomical structures is an integral part of medical modeling. Contour information is extracted from serial cross-sections of tissue data and is stored as "slice" files. Although there are several reasonably efficient triangulation algorithms that reconstruct surfaces from slice data, the models they generate have a jagged or faceted appearance due to the large inter-slice distance created by the sectioning process. Moreover, inconsistencies in user input aggravate the problem. We therefore created a method that reduces the effective inter-slice distance and tolerates inconsistencies in the user input. Our method, called piecewise weighted implicit functions, is based on the approach of weighting smaller implicit functions, taking only a few slices at a time to construct each implicit function. It builds on a technique called variational interpolation. Other approaches based on variational interpolation have the disadvantage of becoming unstable when the model is large, with more than a few thousand constraint points. Furthermore, tracing the intermediate contours becomes expensive for large models. Even though some fast fitting methods handle such instability problems, they show no apparent improvement in contour tracing time, because the value of each data point on the contour boundary is evaluated using a single large implicit function that essentially uses all constraint points. Our method handles both problems using a sliding-window approach. Because it uses only a local domain to construct each implicit function, it achieves a considerable run-time saving over the other methods. The resulting software produces interpolated models from large data sets in a few minutes on an ordinary desktop computer.
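
    Variational interpolation fits an implicit function f as a sum of radial basis functions centred at the constraint points plus a low-degree polynomial; in 3D the biharmonic kernel is phi(r) = r. The dense linear system in the sketch below is exactly what grows quadratically with the number of constraints, motivating the sliding-window splitting. Names are illustrative.

        import numpy as np

        def fit_implicit(points, values):
            # points: (n, 3) constraint locations; values: target f(points)
            # (0 on the surface, +/-1 at off-surface normal constraints).
            n = len(points)
            r = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
            P = np.hstack([np.ones((n, 1)), points])         # affine part
            A = np.block([[r, P], [P.T, np.zeros((4, 4))]])  # dense (n+4) system
            sol = np.linalg.solve(A, np.concatenate([values, np.zeros(4)]))
            w, c = sol[:n], sol[n:]

            def f(x):                                        # evaluate anywhere
                return w @ np.linalg.norm(points - x, axis=-1) + c[0] + c[1:] @ x
            return f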