
    Phylogenomic analysis reveals extensive phylogenetic mosaicism in the Human GPCR Superfamily

    A novel high throughput phylogenomic (HTP) analysis was applied to the rhodopsin G-protein coupled receptor (GPCR) family. Instances of phylogenetic mosaicism between receptors were found to be frequent, often as correlated mosaicism and repeated mosaicism. A null data set was constructed with the same phylogenetic topology as the rhodopsin GPCRs. Comparison of the two data sets revealed that mosaicism occurs in GPCRs at a higher frequency than would be expected from homoplasy or the effects of topology alone. Various evolutionary models of differential conservation, recombination and homoplasy that could produce the observed patterns are explored; the results are most consistent with frequent recombination events. A complex evolutionary history is illustrated in which frequent recombination has likely endowed GPCRs with new functions. The pattern of mosaicism is shown to be informative for functional prediction of orphan receptors. HTP analysis is complementary to conventional phylogenomic analyses, revealing mosaicism that would not otherwise be detectable through conventional phylogenetics.
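    As a rough illustration of the kind of signal such an analysis looks for, the toy sketch below flags a sequence as mosaic when its N- and C-terminal segments have different nearest neighbours, and the same count on a topology-matched null data set gives the baseline expectation. The half-sequence split, the distance inputs and all names are illustrative assumptions, not the HTP pipeline described in the abstract.

```python
# Toy stand-in for detecting mosaicism: ask whether different segments of a
# protein have different closest relatives, then compare the observed count
# against a null data set with the same topology. Illustrative only.
from typing import Dict

def nearest_neighbour(seq_id: str, distances: Dict[str, Dict[str, float]]) -> str:
    """Return the identifier of the closest other sequence for one segment."""
    return min((d, other) for other, d in distances[seq_id].items() if other != seq_id)[1]

def count_mosaic(n_term: Dict[str, Dict[str, float]],
                 c_term: Dict[str, Dict[str, float]]) -> int:
    """Count sequences whose N- and C-terminal segments have different nearest neighbours."""
    return sum(1 for s in n_term
               if nearest_neighbour(s, n_term) != nearest_neighbour(s, c_term))

# If the observed count greatly exceeds the count obtained from the null data
# set, homoplasy or topology alone is an unlikely explanation:
# observed = count_mosaic(real_n_term_distances, real_c_term_distances)
# expected = count_mosaic(null_n_term_distances, null_c_term_distances)
```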

    Working Harder, Working Smarter, or Doing Both? How the Interpretation of Combined Learning and Performance Goals Affects Complex Task Performance

    Goal setting research has shown that on novel, complex tasks people perform better with learning goals than with performance goals. In practice, people must often learn and perform at the same time. Does setting both types of goals simultaneously enhance performance compared to singular goals? This dissertation consists of two studies, using a complex business simulation, that examine setting simultaneous learning and performance goals (“combined goals”) for highly complex tasks. The first study is a cognitive interview study in which I examine how people interpret assigned singular goals (learning or performance) and combined goals at various difficulty levels. The second study is a laboratory experiment that examines how combined goals affect performance under dynamic conditions. The results of both studies suggest that, regardless of the goals people are assigned, they focus on the performance goal more than on the learning goal or on both goals equally. Combined goals appear to form a strong goal hierarchy in which the performance goal is dominant and the learning goal recedes to the background. In terms of task performance, as predicted, people who consistently focus on both goals equally, particularly early in the task, perform better than those who focus on only one. Also as predicted, assigned combined goals that emphasize learning over performance result in the highest performance. Overall, the results suggest that how people interpret combined goals within a goal hierarchy influences which goals they focus on, which in turn influences task performance. This dissertation highlights the role of goal hierarchy in understanding how combined goals influence performance.

    Intelligent Systems for Molecular Biology 2002 (ISMB02)

    This report profiles the keynote talks given at ISMB02 in Edmonton, Canada, by Michael Ashburner, Barry Honig, Isidore Rigoutsos, Ford Doolittle, Stephen Altschul, Terry Gaasterland, John Reinitz, and the Overton Prize winner, David Baker.

    Generative feature-based design-by-constraints as a means of integration within the manufacturing industry

    The article examines the development of computer aids within the manufacturing industry and proposes an alternative approach to the way we design and to the designer's role within manufacturing. A feature-based, generative design-by-constraints approach is applied, which requires the designer to specify solutions in terms of manufacturing data. This data is captured by means of an interactive simulation of machining processes in which the constraints of equipment, materials and tools are displayed to the designer. The effect of this approach on the integration of all areas within a manufacturing environment is explored, as is its simultaneous-design nature.
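    As a rough sketch of the design-by-constraints idea, a feature might be accepted only when it satisfies the captured machining constraints. The drilled-hole feature, the machine limits and the rule of thumb below are hypothetical assumptions for illustration, not details taken from the article.

```python
# Minimal sketch: a design feature is only accepted if it can be produced
# within the captured constraints of the available equipment and tooling.
from dataclasses import dataclass

@dataclass
class Machine:
    min_hole_diameter_mm: float   # smallest hole the tooling can produce
    max_hole_depth_mm: float      # deepest hole the machine can reach

@dataclass
class HoleFeature:
    diameter_mm: float
    depth_mm: float

def feature_is_manufacturable(feature: HoleFeature, machine: Machine) -> bool:
    """Accept a feature only if it respects the captured machining constraints."""
    if feature.diameter_mm < machine.min_hole_diameter_mm:
        return False
    if feature.depth_mm > machine.max_hole_depth_mm:
        return False
    # Rule-of-thumb constraint: very deep, narrow holes are hard to drill.
    return feature.depth_mm <= 10 * feature.diameter_mm

if __name__ == "__main__":
    mill = Machine(min_hole_diameter_mm=1.0, max_hole_depth_mm=50.0)
    print(feature_is_manufacturable(HoleFeature(diameter_mm=2.0, depth_mm=8.0), mill))  # True
    print(feature_is_manufacturable(HoleFeature(diameter_mm=0.5, depth_mm=3.0), mill))  # False
```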

    Sequence Search Algorithms for Single Pass Sequence Identification: Does One Size Fit All?

    Bioinformatic tools have become essential to biologists in their quest to understand the vast quantities of sequence data, and now whole genomes, which are being produced at an ever increasing rate. Many of these sequence data are single-pass sequences, such as sample sequences from organisms closely related to other organisms of interest that have already been sequenced, or cDNAs and expressed sequence tags (ESTs). These single-pass sequences often contain errors, including frameshifts, which complicate the identification of homologues, especially at the protein level; sequence searches with this type of data are therefore often performed at the nucleotide level. The most commonly used sequence search algorithms for the identification of homologues are Washington University's and the National Center for Biotechnology Information's (NCBI) versions of the BLAST suites of tools, which are found on websites all over the world. The work reported here examines the use of these tools for comparing sample sequence datasets to a known genome. It shows that care must be taken when choosing the parameters to use with the BLAST algorithms. NCBI's version of gapped BLASTn gives much shorter, and sometimes different, top alignments than those found using Washington University's version of BLASTn (which also allows for gaps) when both are used with their default parameters. Most of the differences in performance were found to be due to the choice of default parameters rather than to underlying differences between the two algorithms. Washington University's version, used with its defaults, compares very favourably with the results obtained using the accurate but computationally intensive Smith–Waterman algorithm.
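    Since the comparison point in this work is the exact but computationally intensive Smith–Waterman algorithm, a minimal local-alignment sketch is shown below. The scoring values are illustrative assumptions, not the defaults of any BLAST or Smith–Waterman implementation.

```python
# Minimal Smith-Waterman local alignment (linear gap penalty), to contrast the
# exact dynamic-programming approach with the BLAST heuristics discussed above.
def smith_waterman(a: str, b: str, match: int = 1, mismatch: int = -3, gap: int = -2) -> int:
    """Return the best local alignment score between nucleotide strings a and b."""
    rows, cols = len(a) + 1, len(b) + 1
    h = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = h[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            h[i][j] = max(0, diag, h[i - 1][j] + gap, h[i][j - 1] + gap)
            best = max(best, h[i][j])
    return best

if __name__ == "__main__":
    # Exact local alignment of two short reads; O(len(a) * len(b)) time and memory,
    # which is why it is accurate but computationally intensive at genome scale.
    print(smith_waterman("ACACACTA", "AGCACACA"))
```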

    Progressive refinement rendering of implicit surfaces

    The visualisation of implicit surfaces can be an inefficient task when such surfaces are complex and highly detailed. Visualising a surface by first converting it to a polygon mesh may lead to an excessive polygon count. Visualising a surface by direct ray casting is often a slow procedure. In this paper we present a progressive refinement renderer for implicit surfaces that are Lipschitz continuous. The renderer first displays a low resolution estimate of what the final image is going to be and, as the computation progresses, increases the quality of this estimate at an interactive frame rate. This renderer provides a quick previewing facility that significantly reduces the design cycle of a new and complex implicit surface. The renderer is also capable of completing an image faster than a conventional implicit surface rendering algorithm based on ray casting.
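    The Lipschitz condition is what makes safe ray stepping possible: if the implicit function f has Lipschitz constant L, a ray can advance by |f(p)|/L without skipping over the surface (sphere tracing). The sketch below is a generic example of that idea, not the paper's progressive refinement renderer; the unit-sphere field and the constants are assumptions.

```python
# Sphere tracing along a ray: the Lipschitz bound guarantees each step cannot
# cross the zero set of f, so the first |f| < eps point is a surface hit.
import math
from typing import Callable, Optional, Tuple

Vec3 = Tuple[float, float, float]

def sphere_field(p: Vec3) -> float:
    """Signed field of a unit sphere at the origin (Lipschitz constant 1)."""
    return math.sqrt(p[0]**2 + p[1]**2 + p[2]**2) - 1.0

def sphere_trace(origin: Vec3, direction: Vec3, f: Callable[[Vec3], float],
                 lipschitz: float = 1.0, max_steps: int = 128,
                 eps: float = 1e-4, t_max: float = 100.0) -> Optional[float]:
    """March along origin + t*direction, returning the hit distance t or None."""
    t = 0.0
    for _ in range(max_steps):
        p = (origin[0] + t * direction[0],
             origin[1] + t * direction[1],
             origin[2] + t * direction[2])
        d = f(p)
        if abs(d) < eps:
            return t             # close enough to the surface to count as a hit
        t += abs(d) / lipschitz  # safe step: cannot overshoot the zero set
        if t > t_max:
            return None
    return None

if __name__ == "__main__":
    # A ray from z = -3 towards the origin hits the unit sphere at t = 2.
    print(sphere_trace((0.0, 0.0, -3.0), (0.0, 0.0, 1.0), sphere_field))
```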

    A progressive refinement approach for the visualisation of implicit surfaces

    Visualising implicit surfaces with the ray casting method is a slow procedure. The design cycle of a new implicit surface is, therefore, fraught with long latency times, as a user must wait for the surface to be rendered before being able to decide what changes should be introduced in the next iteration. In this paper, we present an attempt at reducing the design cycle of an implicit surface modeler by introducing a progressive refinement rendering approach to the visualisation of implicit surfaces. This progressive refinement renderer provides a quick previewing facility. It first displays a low quality estimate of what the final rendering is going to be and, as the computation progresses, increases the quality of this estimate at a steady rate. The progressive refinement algorithm is based on the adaptive subdivision of the viewing frustum into smaller cells. An estimate for the variation of the implicit function inside each cell is obtained with an affine arithmetic range estimation technique. Overall, we show that our progressive refinement approach not only provides the user with visual feedback as the rendering advances but is also capable of completing the image faster than a conventional implicit surface rendering algorithm based on ray casting.
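    To illustrate the cell-classification step, the sketch below bounds the range of an implicit function over a box and keeps the cell for further subdivision only if the bound straddles zero. For brevity it uses plain interval arithmetic rather than the affine arithmetic used in the paper (a coarser but simpler conservative bound), and the function names and example field are assumptions.

```python
# Conservative range estimation over a cell: if the bound on f excludes zero,
# the cell cannot contain the surface and is discarded; otherwise it is refined.
from typing import Tuple

Interval = Tuple[float, float]

def i_add(a: Interval, b: Interval) -> Interval:
    return (a[0] + b[0], a[1] + b[1])

def i_sqr(a: Interval) -> Interval:
    """Interval square: result is non-negative and contains x*x for all x in a."""
    lo, hi = a
    if lo <= 0.0 <= hi:
        return (0.0, max(lo * lo, hi * hi))
    return (min(lo * lo, hi * hi), max(lo * lo, hi * hi))

def field_range(x: Interval, y: Interval, z: Interval) -> Interval:
    """Bound f(x, y, z) = x^2 + y^2 + z^2 - 1 over the box x * y * z."""
    s = i_add(i_add(i_sqr(x), i_sqr(y)), i_sqr(z))
    return (s[0] - 1.0, s[1] - 1.0)

def cell_may_contain_surface(x: Interval, y: Interval, z: Interval) -> bool:
    """True if the conservative bound straddles zero, i.e. the cell must be subdivided."""
    lo, hi = field_range(x, y, z)
    return lo <= 0.0 <= hi

if __name__ == "__main__":
    print(cell_may_contain_surface((0.9, 1.1), (-0.1, 0.1), (-0.1, 0.1)))  # True: spans the surface
    print(cell_may_contain_surface((2.0, 3.0), (2.0, 3.0), (2.0, 3.0)))    # False: far outside
```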