3,900 research outputs found

    Statistical Image Reconstruction for High-Throughput Thermal Neutron Computed Tomography

    Neutron Computed Tomography (CT) is an increasingly utilised non-destructive analysis tool in material science, palaeontology, and cultural heritage. With the development of new neutron imaging facilities (such as DINGO, ANSTO, Australia), new opportunities arise to maximise their performance through statistically driven image reconstruction methods, which have yet to see wide-scale application in neutron transmission tomography. This work outlines the implementation of a convex-algorithm statistical image reconstruction framework applicable to the geometry of most neutron tomography instruments, with the aim of matching the imaging quality of conventional ramp-filtered back-projection via the inverse Radon transform while using fewer measured projections to increase object throughput. By comparing the output of these two frameworks on a tomographic scan of a known three-material cylindrical phantom obtained with the DINGO neutron radiography instrument (ANSTO, Australia), this work illustrates the advantages of statistical image reconstruction techniques over conventional filtered back-projection. The statistical image reconstruction framework was capable of obtaining image estimates of similar quality to filtered back-projection using only 12.5% of the number of projections, potentially increasing object throughput at neutron imaging facilities such as DINGO eight-fold.
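    The statistical reconstruction idea above can be sketched generically. The snippet below is a minimal MLEM (maximum-likelihood expectation maximization) iteration, a classic convex statistical reconstruction method; it is an illustration only, not the specific convex algorithm the paper implements, and the tiny system matrix and data are invented for demonstration.

```python
import numpy as np

# Toy statistical tomographic reconstruction via MLEM. The "image" is a
# flattened 10-pixel vector; A is a hypothetical projection (system) matrix.
rng = np.random.default_rng(0)
A = rng.random((40, 10)) + 0.1      # made-up system matrix (40 ray sums)
x_true = rng.random(10) + 0.5       # unknown attenuation values
y = A @ x_true                      # noiseless projection data

x = np.ones(10)                     # MLEM requires a positive initial estimate
sens = A.T @ np.ones(40)            # sensitivity image A^T 1
for _ in range(2000):
    # Multiplicative MLEM update: x <- x * A^T(y / Ax) / A^T 1
    x *= (A.T @ (y / (A @ x))) / sens
```

    After the loop, `A @ x` closely matches the measured data `y`; the multiplicative form keeps the estimate nonnegative at every step, which is one reason such methods tolerate fewer projections than filtered back-projection.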

    Predictive approaches to assessing the fit of evolutionary models


    PuMA: Bayesian analysis of partitioned (and unpartitioned) model adequacy

    The accuracy of Bayesian phylogenetic inference using molecular data depends on the use of proper models of sequence evolution. Although choosing the best model available from a pool of alternatives has become standard practice in statistical phylogenetics, assessment of the chosen model's adequacy is rare. Programs for Bayesian phylogenetic inference have recently begun to implement models of sequence evolution that account for heterogeneity across sites beyond variation in rates of evolution, yet no program exists to assess the adequacy of these models. PuMA implements a posterior predictive simulation approach to assessing the adequacy of partitioned, unpartitioned, and mixture models of DNA sequence evolution in a Bayesian context. Assessment of model adequacy allows empirical phylogeneticists to have appropriate confidence in their results and guides efforts to improve models of sequence evolution. © The Author 2008. Published by Oxford University Press. All rights reserved.
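    Posterior predictive simulation, the approach PuMA applies to sequence-evolution models, can be illustrated on a toy model: draw parameters from the posterior, simulate replicate datasets, and ask whether a test statistic of the observed data looks typical among the replicates. The normal model and standard-deviation statistic below are placeholders, not PuMA's multinomial-likelihood statistic on DNA alignments.

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(0.0, 1.0, size=50)          # "observed" data

# Conjugate posterior for the mean (known sigma = 1, flat prior):
# mean | data ~ Normal(xbar, 1/n)
post_means = rng.normal(data.mean(), 1 / np.sqrt(len(data)), size=2000)

# Simulate one replicate dataset per posterior draw and record the
# test statistic (here: sample standard deviation).
rep_stats = np.array([rng.normal(m, 1.0, size=50).std() for m in post_means])
obs_stat = data.std()

# Posterior predictive p-value: fraction of replicates at least as extreme.
# Values near 0 or 1 flag model inadequacy; moderate values do not.
p = np.mean(rep_stats >= obs_stat)
```

    Because the observed data here were generated from the fitted model, the p-value should be unremarkable; an inadequate model pushes the observed statistic into a tail of the replicate distribution.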

    Prologue to Perfectly Parsing Proxy Patterns

    As libraries spend an increasing percentage of precious collection funds on electronic resources, important questions arise to drive collection management decisions: What is being used? How much? And finally, who is using our resources? Vendor-supplied statistics can help answer the first two questions, but we have encountered specific questions about our users at Mercer University. To help answer these questions, we turned to our proxy server logs and began a pilot study in the spring semester of 2017. This presentation will explain the methodology we used in mining data from our proxy server logs in combination with our existing user database. It will describe the demographic information we were able to glean from this combination of information resources. We uncovered valuable insights into our database usage, including usage pattern over time, database popularity by program, database usage by enrollment status, usage by faculty/employee group, and usage by campus group.
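    The log-mining step described above amounts to parsing each proxy log line for a username and joining it to a user database to aggregate demographics. The sketch below assumes an EZproxy-like log layout and invented field names; it is not the actual Mercer University pipeline.

```python
import re
from collections import Counter

# Assumed log shape: "<user> <ip> [<timestamp>] \"GET <url> ..." (hypothetical).
LOG_LINE = re.compile(r'^(?P<user>\S+) \S+ \[(?P<when>[^\]]+)\] "GET (?P<url>\S+)')

user_db = {  # placeholder user database keyed by proxy username
    "jdoe": {"status": "undergraduate", "campus": "Macon"},
    "asmith": {"status": "faculty", "campus": "Atlanta"},
}

log = [
    'jdoe 10.0.0.1 [01/Mar/2017:09:15:02 -0500] "GET https://db-one.example.com/search HTTP/1.1" 200',
    'asmith 10.0.0.2 [01/Mar/2017:10:02:11 -0500] "GET https://db-two.example.com/article HTTP/1.1" 200',
    'jdoe 10.0.0.1 [02/Mar/2017:14:30:45 -0500] "GET https://db-one.example.com/view HTTP/1.1" 200',
]

# Join each matched log line against the user database and count requests
# per enrollment/employment status.
usage_by_status = Counter()
for line in log:
    m = LOG_LINE.match(line)
    if m and m.group("user") in user_db:
        usage_by_status[user_db[m.group("user")]["status"]] += 1

print(usage_by_status)  # Counter({'undergraduate': 2, 'faculty': 1})
```

    Grouping on other `user_db` fields (e.g. `campus`) yields the other breakdowns mentioned, such as usage by campus group.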

    On the Diversity of Friendship and Network Ties: A Comparison of Religious Versus Nonreligious Group Membership in the Rural American South

    Social science has long been interested in the effects and predictors of community participation, especially regarding voluntary membership or civic participation. Likewise, the role of social institutions has been given much attention in understanding their possible effect as an outlet for individual desires to become civically engaged, as well as the institution’s ability to shelter an individual and surround them with others like themselves. We use data from the 2000 Social Capital Benchmark Survey to examine the effect of group membership on the overall diversity of friendships. The diversity of friendships gives us a good proxy for the degree of closure created by existing in-group dynamics. Furthermore, the effect of membership is comparatively examined between religious group membership and the degree of nonreligious group membership. Our findings indicate differing effects based on the type of membership on the diversity of friendships at the individual level.

    Striving for Uniqueness: Data-Driven Database Deselection

    As libraries endure an ongoing crisis of available funds to meet inflating electronic content costs, the idea of cutting the e-resources perceived as least important to help balance the budget is a tempting solution. Mercer University Libraries recognizes the challenge of finding areas in which to cut back on its resources. They closely examine their subscriptions to prioritize their patrons’ needs, maintain budgetary equilibrium, and remain true to their goals. The Library Systems Department has worked to develop their own tool to assist decision makers with pertinent information about the uniqueness of both their full text and index databases and packages, both to save money and to improve their programming skills. This paper highlights their approach to this tool and the effects it has had so far.
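    The core of a uniqueness analysis like the one described is a set comparison: for each database, what fraction of its titles appears nowhere else in the collection? The holdings below are invented examples, not Mercer's actual databases, and this sketch is a guess at the kind of computation such a tool performs.

```python
# Hypothetical title holdings per database (placeholders).
holdings = {
    "DatabaseA": {"t1", "t2", "t3", "t4"},
    "DatabaseB": {"t3", "t4", "t5"},
    "DatabaseC": {"t5", "t6"},
}

# For each database, the share of its titles held by no other database.
# A low score marks a deselection candidate; a high score marks a
# uniquely valuable subscription.
uniqueness = {}
for name, titles in holdings.items():
    others = set().union(*(t for n, t in holdings.items() if n != name))
    uniqueness[name] = len(titles - others) / len(titles)
```

    Here `DatabaseB` scores 0.0 (every title is duplicated elsewhere), making it the obvious candidate to examine first.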

    Collection Data Visualization: Seeing the Forest Through the Treemap

    Collection management is one of the more complicated responsibilities in librarianship. In this task, the librarian must simultaneously synthesize the needs, desires, and aspirations of the institution, departments, and individuals. While much of this is elusive qualitative data that may not yield a definitive answer, we also have increasingly accessible hard data from our integrated library systems (ILSs) that we can synthesize to complement it. In the latest generations of ILSs, this information is readily available for statistical analysis and visualization. When it comes to our increasingly limited materials budgets, it is important to make the best decisions possible, so it is advantageous to analyze all the data at our disposal. We introduce a web application that produces live statistics from the ILS. The system uses data points, including collection use and metrics that describe a collection (e.g., age, quantity). This system goes beyond traditional charts and graphs by employing several visualization techniques that lend a unique perspective to these data points. These techniques allow collection managers to visualize multiple data points simultaneously and reveal data correlations that might not otherwise be obvious.
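    The treemap in the title is the kind of visualization meant: rectangles whose areas are proportional to a collection metric such as title count. The minimal slice-and-dice layout below shows the geometry only (no plotting library); the segment sizes are made up, and real treemap tools typically use the "squarified" variant for better aspect ratios.

```python
def treemap(values, x, y, w, h, vertical=True):
    """Slice-and-dice treemap: return (x, y, w, h) rectangles whose areas
    within the (x, y, w, h) canvas are proportional to the input values."""
    total = sum(values)
    rects, offset = [], 0.0
    for v in values:
        frac = v / total
        if vertical:  # stack slices along the y-axis
            rects.append((x, y + offset, w, h * frac))
            offset += h * frac
        else:         # stack slices along the x-axis
            rects.append((x + offset, y, w * frac, h))
            offset += w * frac
    return rects

# Hypothetical collection segments (e.g. title counts per subject area).
sizes = [500, 300, 150, 50]
rects = treemap(sizes, 0, 0, 100, 100)
```

    Nesting a second `treemap` call inside each rectangle (splitting along the other axis) gives the hierarchical view of a collection broken down by, say, subject area and then age band.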