
    jMOTU and Taxonerator: Turning DNA Barcode Sequences into Annotated Operational Taxonomic Units

    BACKGROUND: DNA barcoding and other DNA sequence-based techniques for investigating and estimating biodiversity require explicit methods for associating individual sequences with taxa, as it is at the taxon level that biodiversity is assessed. For many projects, the bioinformatic analyses required pose problems for laboratories whose prime expertise is not in bioinformatics. User-friendly tools are required for both clustering sequences into molecular operational taxonomic units (MOTU) and for associating these MOTU with known organismal taxonomies. RESULTS: Here we present jMOTU, a Java program for the analysis of DNA barcode datasets that uses an explicit, determinate algorithm to define MOTU. We demonstrate its usefulness for both individual specimen-based Sanger sequencing surveys and bulk-environment metagenetic surveys using long-read next-generation sequencing data. jMOTU is driven through a graphical user interface, and can analyse tens of thousands of sequences in a short time on a desktop computer. A companion program, Taxonerator, that adds traditional taxonomic annotation to MOTU, is also presented. Clustering and taxonomic annotation data are stored in a relational database, and are thus amenable to subsequent data mining and web presentation. CONCLUSIONS: jMOTU efficiently and robustly identifies the molecular taxa present in survey datasets, and Taxonerator decorates the MOTU with putative identifications. jMOTU and Taxonerator are freely available from http://www.nematodes.org/
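    To illustrate the core idea of grouping barcode sequences into MOTU, the sketch below clusters sequences by single-linkage on pairwise edit distance under a fixed cutoff. This is only an illustration of threshold-based clustering, not the jMOTU algorithm itself; the example sequences and the 2-base cutoff are invented for the demonstration.

    # Minimal sketch: single-linkage clustering of barcode sequences into MOTU
    # by pairwise edit distance. Not the jMOTU algorithm; the cutoff and the
    # sequences are placeholders for illustration only.
    def edit_distance(a: str, b: str) -> int:
        """Classic dynamic-programming Levenshtein distance."""
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            curr = [i]
            for j, cb in enumerate(b, 1):
                curr.append(min(prev[j] + 1,                # deletion
                                curr[j - 1] + 1,            # insertion
                                prev[j - 1] + (ca != cb)))  # substitution
            prev = curr
        return prev[-1]

    def cluster_motu(seqs, cutoff):
        """Sequences within `cutoff` edits of each other share a MOTU."""
        parent = list(range(len(seqs)))            # union-find forest

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x

        for i in range(len(seqs)):
            for j in range(i + 1, len(seqs)):
                if edit_distance(seqs[i], seqs[j]) <= cutoff:
                    parent[find(i)] = find(j)

        clusters = {}
        for i in range(len(seqs)):
            clusters.setdefault(find(i), []).append(i)
        return list(clusters.values())

    if __name__ == "__main__":
        barcodes = ["ACGTACGTAA", "ACGTACGTAT", "TTTTGGGGCC", "TTTTGGGGCA"]
        print(cluster_motu(barcodes, cutoff=2))    # -> [[0, 1], [2, 3]]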

    Network Capital and Social Trust: Pre-Conditions for ‘Good’ Diversity?

    This paper unpicks the assumption that because social networks underpin social capital, they directly create it – more of one inevitably making more of the other. If it were that simple, the sheer quantity of networks criss-crossing a defined urban space would be a proxy measure for the local stock of social capital. Of course the interrelationships are more complex. Two kinds of complication stand out. The first is specific: networks have both quantitative and qualitative dimensions, but the two elements have no necessary bearing on each other. The shape and extent of a network says nothing about the content of the links between its nodes. Certainly the line we draw between any two of them indicates contact and potential connection, but what kind of contact, how often, how trusting, in what circumstances, to what end? Reliable answers to these questions need more than surface maps or bird’s eye accounts of who goes where, who speaks to whom. The second complication is a general, not to say universal, difficulty. We are stuck with the fact that sociological concepts - networks, social capital and trust included - are ‘only’ abstractions. They are ways of thinking about the apparent chaos of people behaving all over the place – here, to make it worse, in multi-cultural urban environments - but none of them is visible to be measured, weighed or quantified. This does not make the concepts ‘untrue’, and it should not stop them being useful. My hope is that we can find a nuanced perspective which will at least make the complications intelligible. At best, a multi-layered model will account for diversity in the nature of trust, and for variations in the way social capital is hoarded or distributed within and across ethnic boundaries. It would be contribution enough if we were able to specify the conditions which cause social capital, as Putnam formulates it, to be exclusionary or inclusionary in its effect. Keywords: Network capital, Social trust, ‘Good’ diversity

    Widening the context for interdisciplinary social research : SFL as a method for sociology, anthropology, and communication research

    In this paper, I show clear links between the theoretical underpinnings of SFL and those of specific sociological, anthropological, and communication research traditions. My purpose in doing so is to argue that SFL is an excellent interdisciplinary research method for the social sciences, especially considering the emergent form of political economy being touted by new media enthusiasts: the so-called knowledge (or information) economy. To demonstrate the flexibility and salience of SFL in diverse traditions of social research, and as evidence of its ability to be deployed as a flexible research method across formerly impermeable disciplinary and social boundaries, I use analyses from my doctoral research, relating these - theoretically speaking - to specific research traditions in sociology, communication, and anthropology.

    Object reconstruction using close-range all-round digital photogrammetry for applications in industry

    Photogrammetry has many inherent advantages in engineering and industrial applications, which include the ability to obtain accurate, non-contact measurements from data rapidly acquired with the object in situ. Along with these advantages, digital photogrammetry offers the potential for the automation or semi-automation of many of the conventional photogrammetric procedures, leading to real-time or near real-time measurement capabilities. However, all-round surface measurement of an object usually benefits less from the above advantages of photogrammetry. To obtain the necessary imagery from all sides of the measurement object, real-time processing is nearly impossible, and it becomes difficult to avoid moving the object, thus precluding in situ measurement. However, all-round digital photogrammetry and, in particular, the procedure presented here, still offer advantages over other methods of full surface measurement, including rapid, non-contact data acquisition along with the ability to store and reprocess data at a later date. Conventional or topographic photogrammetry is well-established as a tool for mapping simple terrain surfaces and for acquiring accurate 3-D point data. The complexities of all-round photogrammetry make many of the standard photogrammetric methods all but redundant. The work presented in this thesis was aimed at the development of a reliable method of obtaining complete surface data of an object with non-topographic, all-round, close-range digital photogrammetry. A method was developed to improve the integrity of the data, and possibilities for the presentation and visualisation of the data were explored. The potential for automation was considered important, as was the need to keep the overall time required to a minimum. A measurement system was developed to take an object as input and produce as output an accurate, representative point cloud, allowing for the reconstruction of the surface. This system included the following procedures:
    ■ a novel technique of achieving high-accuracy system pre-calibration using a cubic control frame and fixed camera stations,
    ■ separate image capture for the control frame and the object,
    ■ surface sub-division and all-round step-wise image matching to produce a comprehensive 3-D data set,
    ■ point cloud refinement, and
    ■ surface reconstruction by separate surface generation.
    The development and reliability of these new approaches are discussed and investigated, and the results of various test procedures are presented. The technique of system pre-calibration involved the use of a mechanical device - a rotary table - to impart precisely repeatable rotations to the control frame and, separately, the object. The actual repeatability precision was tested and excellent results achieved, with standard deviations for the resected camera station coordinates of between 0.05 and 0.5 mm. In a detailed test case, actual rotations differed from the desired rotations by an average of 0.7" with a standard deviation of less than 2'. The image matching for the test case, from a set of forty-eight images, achieved a satisfactory final accuracy, comparable to that achieved in other similar work. The meaningful reconstruction of surfaces presented problems, although an acceptable rendering was achieved, and a thorough survey of current commercially available software failed to produce a package capable of all-round modelling from random 3-D data.
    The final analysis of the results indicated that digital photogrammetry, and this method in particular, are highly suited to accurate all-round surface measurement. The potential for automating the method - and, therefore, for near real-time results - in the stages of image acquisition and processing, calibration, image matching and data visualisation is great. The method thus lends itself to industrial applications. However, a robust and rapid method of surface reconstruction remains to be developed.
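    As a small illustration of the kind of point cloud refinement step listed above, the sketch below removes statistical outliers from a 3-D point set before surface reconstruction. It is a generic technique shown under assumed parameters (k nearest neighbours and a standard-deviation ratio), not the refinement procedure developed in the thesis.

    # Generic statistical outlier removal for a 3-D point cloud; an
    # illustration only, not the thesis's refinement procedure.
    import numpy as np
    from scipy.spatial import cKDTree

    def remove_outliers(points: np.ndarray, k: int = 8, std_ratio: float = 2.0):
        """Drop points whose mean distance to their k nearest neighbours is
        more than std_ratio standard deviations above the global mean."""
        tree = cKDTree(points)
        dists, _ = tree.query(points, k=k + 1)   # first neighbour is the point itself
        mean_d = dists[:, 1:].mean(axis=1)
        keep = mean_d <= mean_d.mean() + std_ratio * mean_d.std()
        return points[keep]

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        cloud = rng.normal(size=(1000, 3))                 # dense cluster of surface points
        cloud = np.vstack([cloud, [[25.0, 25.0, 25.0]]])   # one gross outlier
        print(remove_outliers(cloud).shape)                # the outlier is discarded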

    Evidencing a place for the hippocampus within the core scene processing network

    Functional neuroimaging studies have identified several “core” brain regions that are preferentially activated by scene stimuli, namely posterior parahippocampal gyrus (PHG), retrosplenial cortex (RSC), and transverse occipital sulcus (TOS). The hippocampus (HC), too, is thought to play a key role in scene processing, although no study has yet investigated scene-sensitivity in the HC relative to these other “core” regions. Here, we characterised the frequency and consistency of individual scene-preferential responses within these regions by analysing a large dataset (n = 51) in which participants performed a one-back working memory task for scenes, objects, and scrambled objects. An unbiased approach was adopted by applying independently-defined anatomical ROIs to individual-level functional data across different voxel-wise thresholds and spatial filters. It was found that the majority of subjects had preferential scene clusters in PHG (max = 100% of participants), RSC (max = 76%), and TOS (max = 94%). A comparable number of individuals also possessed significant scene-related clusters within their individually defined HC ROIs (max = 88%), evidencing an HC contribution to scene processing. While probabilistic overlap maps of individual clusters showed that overlap “peaks” were close to those identified in group-level analyses (particularly for TOS and HC), inter-individual consistency varied across regions and statistical thresholds. The inter-regional and inter-individual variability revealed by these analyses has implications for how scene-sensitive cortex is localised and interrogated in functional neuroimaging studies, particularly in medial temporal lobe regions such as the HC.
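    The kind of summary reported above can be sketched as follows: count, within an independently defined anatomical ROI, how many participants show any suprathreshold voxel for the scenes-versus-objects contrast, and average the binarised subject masks into a probabilistic overlap map. The data, threshold, and ROI below are synthetic placeholders, not the study's actual pipeline.

    # Toy sketch: proportion of subjects with suprathreshold scene-preferential
    # voxels inside an ROI, plus a probabilistic overlap map. Synthetic data;
    # a single suprathreshold voxel counts as a "cluster" here for simplicity.
    import numpy as np

    def scene_preference_summary(t_maps, roi_mask, t_thresh=3.1):
        """t_maps: (n_subjects, x, y, z) scene>object t-statistics;
        roi_mask: boolean (x, y, z) anatomical ROI."""
        binary = (t_maps > t_thresh) & roi_mask               # suprathreshold voxels in ROI
        has_cluster = binary.reshape(len(t_maps), -1).any(axis=1)
        overlap_map = binary.mean(axis=0)                     # voxel-wise proportion of subjects
        return has_cluster.mean(), overlap_map

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        t_maps = rng.normal(size=(51, 20, 20, 20))            # 51 subjects, toy volume
        roi = np.zeros((20, 20, 20), dtype=bool)
        roi[8:12, 8:12, 8:12] = True                          # placeholder anatomical ROI
        prop, overlap = scene_preference_summary(t_maps, roi)
        print(f"{prop:.0%} of subjects show a suprathreshold voxel in the ROI")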

    Program Analysis in A Combined Abstract Domain

    Automated verification of heap-manipulating programs is a challenging task due to the complexity of aliasing and the mutability of the data structures these programs use. The properties of many important data structures, such as sorted lists, priority queues, and height-balanced trees, relate not to a single domain but to a combination of domains, and the safety, and sometimes the efficiency, of programs relies on those properties. This thesis focuses on developing a verification system for both the functional correctness and the memory safety of programs that manipulate such heap-based data structures. Two automated inference mechanisms for heap-manipulating programs are presented. First, an abstract-interpretation-based approach is proposed to synthesise program invariants in a combined pure and shape domain, with newly designed abstraction, join, and widening operators for the combined domain. Second, a compositional analysis is described that discovers both pre- and post-conditions of programs using a bi-abduction technique in the combined domain. Both inference approaches have been implemented, and the results obtained validate the feasibility and precision of the proposed approaches. The outcomes of the thesis confirm that it is possible and practical to analyse heap-manipulating programs automatically and precisely using abstract interpretation in a sophisticated combined domain.
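    The role of join and widening operators in abstract interpretation can be illustrated with the classic interval domain, a far simpler domain than the combined pure and shape domain developed in the thesis. The sketch below widens an unstable loop invariant to a fixed point; it is a textbook-style illustration, not the thesis's analysis.

    # Interval-domain illustration of join and widening in abstract
    # interpretation; not the combined pure/shape domain of the thesis.
    NEG_INF, POS_INF = float("-inf"), float("inf")

    def join(a, b):
        """Least upper bound of two intervals (lo, hi)."""
        return (min(a[0], b[0]), max(a[1], b[1]))

    def widen(a, b):
        """Standard interval widening: unstable bounds jump to infinity."""
        lo = a[0] if b[0] >= a[0] else NEG_INF
        hi = a[1] if b[1] <= a[1] else POS_INF
        return (lo, hi)

    def analyse_counter_loop():
        """Abstractly execute: i = 0; while (...): i = i + 1."""
        inv = (0, 0)                          # abstract value of i at the loop head
        while True:
            body = (inv[0] + 1, inv[1] + 1)   # abstract effect of i = i + 1
            new = widen(inv, join(inv, body))
            if new == inv:                    # fixed point reached
                return inv
            inv = new

    print(analyse_counter_loop())   # -> (0, inf): i is non-negative, unbounded above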

    Separation Logic

    Separation logic is a key development in formal reasoning about programs, opening up new lines of attack on longstanding problems.

    Doctor of Philosophy

    The medial axis of an object is a shape descriptor that intuitively presents the morphology or structure of the object as well as intrinsic geometric properties of the object’s shape. These properties have made the medial axis a vital ingredient for shape analysis applications, and its computation is therefore a fundamental problem in computational geometry. This dissertation presents new methods for accurately computing the 2D medial axis of planar objects bounded by B-spline curves, and the 3D medial axis of objects bounded by B-spline surfaces. The proposed methods for the 3D case are the first techniques that automatically compute the complete medial axis along with its topological structure directly from smooth boundary representations. Our approach is based on the eikonal (grassfire) flow, where the boundary is offset along the inward normal direction. As the boundary deforms, different regions start intersecting with each other to create the medial axis. In the generic situation, the (self-)intersection set is born at certain creation-type transition points, then grows and undergoes intermediate transitions at special isolated points, and finally ends at annihilation-type transition points. The intersection set evolves smoothly in between transition points. Our approach first computes and classifies all types of transition points. The medial axis is then computed as a time trace of the evolving intersection set of the boundary using theoretically derived evolution vector fields. This dynamic approach enables accurate tracking of elements of the medial axis as they evolve, and thus also enables computation of the topological structure of the solution. Accurate computation of the geometry and topology of 3D medial axes enables a new graph-theoretic method for shape analysis of objects represented with B-spline surfaces. Structural components are computed via the cycle basis of the graph representing the 1-complex of a 3D medial axis. This enables medial axis based surface segmentation, and structure based surface region selection and modification. We also present a new approach for structural analysis of 3D objects based on scalar functions defined on their surfaces. This approach is enabled by accurate computation of the geometry and structure of 2D medial axes of level sets of the scalar functions. Edge curves of the 3D medial axis correspond to a subset of ridges on the bounding surfaces. Ridges are extremal curves of principal curvatures on a surface, indicating salient intrinsic features of its shape, and hence are of particular interest as tools for shape analysis. This dissertation presents a new algorithm for accurately extracting all ridges directly from B-spline surfaces. The proposed technique is also extended to accurately extract ridges from isosurfaces of volumetric data using smooth implicit B-spline representations. Accurate ridge curves enable new higher-order methods for surface analysis. We present a new definition of salient regions in order to capture geometrically significant surface regions in the neighborhood of ridges as well as to identify salient segments of ridges.
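    For contrast with the exact, B-spline-based approach described above, a discrete approximation of the 2D medial axis can be obtained on a rasterised shape with a distance-transform skeletonisation. The sketch below uses scikit-image's medial_axis on an invented binary shape; it only illustrates the medial axis concept and is not the dissertation's method.

    # Discrete, pixel-grid approximation of a 2D medial axis; not the
    # dissertation's eikonal-flow method on smooth B-spline boundaries.
    import numpy as np
    from skimage.morphology import medial_axis

    # Rasterise a simple shape: a filled rectangle with a notch cut out.
    shape = np.zeros((100, 200), dtype=bool)
    shape[20:80, 20:180] = True
    shape[20:50, 90:110] = False      # the notch perturbs the axis locally

    skeleton, dist = medial_axis(shape, return_distance=True)
    # Skeleton pixels paired with dist (the radius of the maximal inscribed
    # disc) give the medial axis transform of the rasterised object.
    print(skeleton.sum(), "skeleton pixels; largest inscribed radius:", dist.max())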