
    Low-background gamma spectroscopy at the Boulby Underground Laboratory

    The Boulby Underground Germanium Suite (BUGS) comprises three low-background, high-purity germanium detectors operating in the Boulby Underground Laboratory, located 1.1 km underground in the north-east of England, UK. BUGS utilises three types of detector to facilitate a high-sensitivity, high-throughput radio-assay programme to support the development of rare-event search experiments. A Broad Energy Germanium (BEGe) detector delivers sensitivity to low-energy gamma-rays such as those emitted by ²¹⁰Pb and ²³⁴Th. A Small Anode Germanium (SAGe) well-type detector is employed for efficient screening of small samples. Finally, a standard p-type coaxial detector provides fast screening of standard samples. This paper presents the steps used to characterise the performance of these detectors for a variety of sample geometries, including the corrections applied to account for cascade summing effects. For low-density materials, BUGS is able to radio-assay to specific activities down to 3.6 mBq kg⁻¹ for ²³⁴Th and 6.6 mBq kg⁻¹ for ²¹⁰Pb, both of which have uncovered some significant equilibrium breaks in the ²³⁸U chain. In denser materials, where gamma-ray self-absorption increases, sensitivity is demonstrated to specific activities of 0.9 mBq kg⁻¹ for ²²⁶Ra, 1.1 mBq kg⁻¹ for ²²⁸Ra, 0.3 mBq kg⁻¹ for ²²⁴Ra, and 8.6 mBq kg⁻¹ for ⁴⁰K, with all upper limits at a 90% confidence level. These meet the requirements of most screening campaigns presently under way for rare-event search experiments, such as the LUX-ZEPLIN (LZ) dark matter experiment. We also highlight the ability of the BEGe detector to probe the X-ray fluorescence region, which can be important for identifying the presence of radioisotopes associated with neutron production; this is of particular relevance in experiments sensitive to nuclear recoils.
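
    As a minimal sketch of the kind of arithmetic behind the quoted sensitivities (not taken from the paper; all detector parameters and counts below are invented assumptions), the following converts a gamma-line counting result into a specific activity and a one-sided 90% confidence-level upper limit using a simple Gaussian approximation.

    # Illustrative sketch: converting a gamma-line count into a specific activity
    # and a 90% C.L. upper limit. All numbers below are made-up assumptions, not
    # values from the BUGS paper.
    import math

    def specific_activity(net_counts, efficiency, branching_ratio, live_time_s, mass_kg):
        """Specific activity in Bq/kg inferred from net counts in a gamma line."""
        return net_counts / (efficiency * branching_ratio * live_time_s * mass_kg)

    # Hypothetical screening run
    gross_counts = 54.0          # counts in the peak region of interest
    background_counts = 41.0     # expected background counts in the same region
    efficiency = 0.035           # full-energy peak detection efficiency
    branching_ratio = 0.356      # gamma emission probability for the line
    live_time_s = 14 * 86400.0   # two-week assay
    mass_kg = 2.5                # sample mass

    net = gross_counts - background_counts
    sigma = math.sqrt(gross_counts + background_counts)   # simple counting error

    activity = specific_activity(net, efficiency, branching_ratio, live_time_s, mass_kg)
    # One-sided 90% C.L. upper limit in the Gaussian approximation (z = 1.28)
    upper_limit = specific_activity(max(net, 0.0) + 1.28 * sigma,
                                    efficiency, branching_ratio, live_time_s, mass_kg)

    print(f"activity = {activity * 1e3:.2f} mBq/kg")
    print(f"90% UL   = {upper_limit * 1e3:.2f} mBq/kg")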

    Assessing the Role of Formal Specifications in Verification and Validation of Knowledge‑Based Systems

    This paper examines how formal specification techniques can support the verification and validation (V&V) of knowledge-based systems. Formal specification techniques provide levels of description that support both verification and validation, and V&V techniques feed back to assist the development of the specifications. Developing a formal specification for a system requires the prior construction of a conceptual model for the intended system. Many elements of this conceptual model can be used effectively to support V&V. Using these elements, the V&V process becomes deeper and more elaborate, and it produces better-quality results than the V&V activities that can be performed on systems developed without conceptual models. However, we note that there are concerns about using formal specification techniques for V&V, not least the effort involved in creating the specifications.
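
    As a hedged illustration of the kind of check a conceptual model enables (not an example from the paper; the rule format, the concept and relation names, and the check_rule helper are all hypothetical), the sketch below verifies that knowledge-base rules only use terms declared in a conceptual model.

    # Illustrative sketch (not from the paper): one simple verification check made
    # possible by a conceptual model. The "conceptual model" declares the allowed
    # concepts and relations, and we verify that every knowledge-base rule refers
    # only to declared terms. All names and rules are invented.
    conceptual_model = {
        "concepts": {"Patient", "Symptom", "Diagnosis"},
        "relations": {"has_symptom", "indicates"},
    }

    knowledge_base_rules = [
        {"if": [("Patient", "has_symptom", "Symptom")],
         "then": ("Symptom", "indicates", "Diagnosis")},
        {"if": [("Patient", "has_allergy", "Substance")],   # uses undeclared terms
         "then": ("Substance", "indicates", "Diagnosis")},
    ]

    def check_rule(rule, model):
        """Return a list of undeclared concepts/relations used by a rule."""
        problems = []
        for subj, rel, obj in rule["if"] + [rule["then"]]:
            if rel not in model["relations"]:
                problems.append(f"undeclared relation: {rel}")
            for term in (subj, obj):
                if term not in model["concepts"]:
                    problems.append(f"undeclared concept: {term}")
        return problems

    for i, rule in enumerate(knowledge_base_rules):
        issues = check_rule(rule, conceptual_model)
        print(f"rule {i}: {'OK' if not issues else issues}")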

    Predicting the location of the hip joint centres, impact of age group and sex

    Clinical gait analysis incorporating three-dimensional motion analysis plays a key role in planning surgical treatments for people with gait disability. The position of the Hip Joint Centre (HJC) within the pelvis is therefore critical for accurate data interpretation. The position of the HJC is determined from regression equations based on anthropometric measurements derived from relatively small datasets. Current equations do not take sex or age into account, even though pelvis shape is known to differ between sexes, and gait analysis is performed in populations spanning a wide range of ages. Three-dimensional computed tomography images of 157 deceased individuals (37 children, 120 skeletally mature) were collected. The location of the HJC within the pelvis was determined, and regression equations to locate the HJC were developed using various anthropometric predictors. We assessed whether accuracy improved when age and sex were introduced as variables. Statistical analysis did not support differentiating the equations by sex, and we found that age only modestly improved accuracy. We propose a range of new regression equations, derived from the largest dataset collected for this purpose to date.
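
    A minimal sketch of the modelling question the abstract describes, using synthetic data and hypothetical predictors (pelvis_width, pelvis_depth) rather than the paper's dataset or equations: fit one HJC coordinate by least squares with and without age and sex, and compare held-out error.

    # Illustrative sketch only: regression equations for one HJC coordinate from
    # anthropometric predictors, with and without age/sex. Data are synthetic.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 157
    pelvis_width = rng.normal(260, 25, n)        # mm, hypothetical
    pelvis_depth = rng.normal(150, 15, n)        # mm, hypothetical
    age = rng.uniform(5, 85, n)                  # years
    sex = rng.integers(0, 2, n)                  # 0/1
    # Hypothetical "true" HJC x-offset with measurement noise
    hjc_x = 0.36 * pelvis_width + 0.1 * pelvis_depth + rng.normal(0, 4, n)

    def fit_and_rmse(X_train, y_train, X_test, y_test):
        """Least-squares fit with an intercept, evaluated on held-out data."""
        X_train = np.column_stack([np.ones(len(X_train)), X_train])
        X_test = np.column_stack([np.ones(len(X_test)), X_test])
        coef, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)
        resid = y_test - X_test @ coef
        return np.sqrt(np.mean(resid ** 2))

    train, test = slice(0, 120), slice(120, n)
    base = np.column_stack([pelvis_width, pelvis_depth])
    full = np.column_stack([pelvis_width, pelvis_depth, age, sex])

    rmse_base = fit_and_rmse(base[train], hjc_x[train], base[test], hjc_x[test])
    rmse_full = fit_and_rmse(full[train], hjc_x[train], full[test], hjc_x[test])
    print(f"RMSE without age/sex: {rmse_base:.2f} mm, with age/sex: {rmse_full:.2f} mm")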

    A Cross-Study Transcriptional Analysis of Parkinson's Disease

    The study of Parkinson's disease (PD), like that of other complex neurodegenerative disorders, is limited by access to brain tissue from patients with a confirmed diagnosis. Alternatively, the study of peripheral tissues may offer some insight into the molecular basis of disease susceptibility and progression, but this approach still relies on brain tissue to benchmark relevant molecular changes against. Several studies have reported whole-genome expression profiling in post-mortem brain, but reported concordance between these analyses is lacking. Here we apply a standardised pathway analysis to seven independent case-control studies and demonstrate increased concordance between data sets. Moreover, data convergence increased when the analysis was limited to the five substantia nigra (SN) data sets; this highlighted the down-regulation of the dopamine receptor signaling and insulin-like growth factor 1 (IGF1) signaling pathways. We also show that case-control comparisons of affected post-mortem brain tissue are more likely to reflect terminal cytoarchitectural differences than primary pathogenic mechanisms. The implementation of a correction factor for dopaminergic neuronal loss predictably resulted in the loss of significance of the dopamine signaling pathway, while axon guidance pathways increased in significance. Interestingly, the IGF1 signaling pathway was also over-represented when data from non-SN areas, unaffected or only terminally affected in PD, were considered. Our findings suggest that there is greater concordance in PD whole-genome expression profiling when standardised pathway membership, rather than ranked gene lists, is used for comparison.
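
    As an illustrative sketch of a standardised pathway-level comparison (not the authors' pipeline; the pathway definitions, gene sets, and background size are placeholders), the code below applies the same hypergeometric over-representation test to gene lists from two hypothetical studies, so that results are compared by pathway membership rather than by ranked gene lists.

    # Illustrative sketch: a hypergeometric over-representation test applied with
    # identical pathway definitions to differential-expression gene lists from
    # several studies. Gene and pathway names are placeholders, not study data.
    from scipy.stats import hypergeom

    def pathway_enrichment(diff_genes, pathway_genes, background_size):
        """One-sided hypergeometric p-value for pathway over-representation."""
        overlap = len(diff_genes & pathway_genes)
        return hypergeom.sf(overlap - 1, background_size,
                            len(pathway_genes), len(diff_genes))

    pathways = {
        "dopamine_receptor_signaling": {"DRD1", "DRD2", "GNAS", "ADCY5"},
        "igf1_signaling": {"IGF1", "IGF1R", "IRS1", "PIK3CA"},
    }
    # Hypothetical differentially expressed gene sets from two case-control studies
    study_hits = {
        "study_A": {"DRD2", "GNAS", "IGF1", "SNCA", "TH"},
        "study_B": {"DRD1", "DRD2", "ADCY5", "IRS1", "PARK7"},
    }

    for study, hits in study_hits.items():
        for name, members in pathways.items():
            p = pathway_enrichment(hits, members, background_size=20000)
            print(f"{study} {name}: p = {p:.3g}")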

    Interpretability of deep learning models: A survey of results

    Deep neural networks have achieved near-human accuracy in various classification and prediction tasks involving image, text, speech, and video data. However, the networks continue to be treated mostly as black-box function approximators, mapping a given input to a classification output. The next step in this human-machine evolutionary process, incorporating these networks into mission-critical processes such as medical diagnosis, planning, and control, requires a level of trust to be associated with the machine output. Typically, statistical metrics are used to quantify the uncertainty of an output. However, the notion of trust also depends on the visibility that a human has into the workings of the machine; in other words, the neural network should provide human-understandable justifications for its output, leading to insights about its inner workings. We call such models interpretable deep networks. Interpretability is not a monolithic notion. In fact, the subjectivity of an interpretation, due to different levels of human understanding, implies that there must be a multitude of dimensions that together constitute interpretability. In addition, the interpretation itself can be provided either in terms of the low-level network parameters or in terms of the input features used by the model. In this paper, we outline some of the dimensions that are useful for model interpretability and categorize prior work along those dimensions. In the process, we perform a gap analysis of what needs to be done to improve model interpretability.
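
    As one concrete, hedged example of interpretation in terms of input features (a generic gradient-saliency sketch on a toy model, not a method attributed to the survey), the snippet below attributes a prediction to its input features via the gradient of the predicted-class score.

    # Illustrative sketch of one interpretability approach: attributing a
    # prediction to input features via input gradients (saliency). The model and
    # input are toy placeholders, not any network from the survey.
    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 3))
    model.eval()

    x = torch.randn(1, 8, requires_grad=True)   # one example with 8 input features
    logits = model(x)
    predicted_class = logits.argmax(dim=1).item()

    # Gradient of the predicted-class score with respect to the input features
    logits[0, predicted_class].backward()
    saliency = x.grad.abs().squeeze()

    for i, s in enumerate(saliency.tolist()):
        print(f"feature {i}: |d score / d x_i| = {s:.3f}")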

    Span of control in supervision of rail track work

    The supervision of engineering work on the railways has received relatively little examination, despite being both safety-critical in its own right and having wider implications for the successful running of the railways. The present paper is concerned with understanding the factors that lead different engineering works to be perceived as easier or harder to manage. We describe an approach building on notions of ‘span of control’, through which we developed the TOECAP inventory (Team, Organisation, Environment, Communication, Activity and Personal). This tool was validated through both interviews and questionnaires. As well as identifying the physical factors involved, the work also emphasised the importance of collaborative and attitudinal factors. We conclude by discussing the limitations of the present work and future directions for development.

    Keep them alive! Design and Evaluation of the “Community Fostering Reference Model”

    Firms host online communities for commercial purposes, for example to integrate customers into ideation for new product development. The success of these firm-hosted online communities depends entirely on the cooperation of a large number of customers who constantly produce valuable knowledge for firms. In practice, however, the majority of successfully implemented communities suffer from stagnation and even a decline in member activity over time. The literature provides numerous guidelines on how to build and launch these online communities. While these models describe the initial steps of acquiring and activating a community base from scratch very well and explicitly, they neglect continuous member activation and acquisition after a successful launch. Against this background, the authors propose the Community Fostering Reference Model (CoFoRM), a set of general procedures and instruments for continuously fostering member activity. In this paper, the authors present the theory-driven design as well as the evaluation of the CoFoRM in a practical use setting. The evaluation results reveal that the CoFoRM is a valuable instrument in the daily working routine of community managers, since it efficiently helps to activate community members, especially in the late phases of a community's life cycle.

    Tephrochronology

    Tephrochronology is the use of primary, characterized tephras or cryptotephras as chronostratigraphic marker beds to connect and synchronize geological, paleoenvironmental, or archaeological sequences or events, or soils/paleosols, and, uniquely, to transfer relative or numerical ages or dates to them. This transfer draws on stratigraphic and age information together with mineralogical and geochemical compositional data, especially from individual glass-shard analyses, obtained for the tephra or cryptotephra deposits. To function as an age-equivalent correlation and chronostratigraphic dating tool, tephrochronology may be undertaken in three steps: (i) mapping and describing tephras and determining their stratigraphic relationships, (ii) characterizing tephras or cryptotephras in the laboratory, and (iii) dating them using a wide range of geochronological methods. Tephrochronology is also an important tool in volcanology, informing studies on volcanic petrology, volcano eruption histories and hazards, and volcano-climate forcing. Although limitations and challenges remain, multidisciplinary applications of tephrochronology continue to grow markedly.
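
    A minimal sketch of the laboratory-characterisation and correlation step, assuming invented glass-shard major-element compositions rather than real analyses: compare an unknown cryptotephra against reference tephras using a simple similarity coefficient (the mean of min/max ratios over shared oxides, with values closer to 1 indicating more similar compositions).

    # Illustrative sketch: comparing mean glass-shard major-element compositions
    # of an unknown cryptotephra against reference tephras with a simple
    # similarity coefficient. All compositions below are invented.
    OXIDES = ["SiO2", "TiO2", "Al2O3", "FeO", "CaO", "Na2O", "K2O"]

    def similarity_coefficient(a, b):
        """Mean of min/max ratios over the shared oxides (closer to 1 = more similar)."""
        ratios = [min(a[o], b[o]) / max(a[o], b[o]) for o in OXIDES]
        return sum(ratios) / len(ratios)

    unknown = {"SiO2": 74.8, "TiO2": 0.28, "Al2O3": 13.1, "FeO": 1.9,
               "CaO": 1.4, "Na2O": 4.2, "K2O": 3.1}
    reference_tephras = {
        "tephra_A": {"SiO2": 75.0, "TiO2": 0.30, "Al2O3": 13.0, "FeO": 2.0,
                     "CaO": 1.5, "Na2O": 4.1, "K2O": 3.0},
        "tephra_B": {"SiO2": 70.2, "TiO2": 0.80, "Al2O3": 14.6, "FeO": 3.5,
                     "CaO": 2.8, "Na2O": 4.8, "K2O": 2.4},
    }

    for name, ref in reference_tephras.items():
        print(f"{name}: SC = {similarity_coefficient(unknown, ref):.3f}")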

    User-centered virtual environment design for virtual rehabilitation

    Background: As physical and cognitive rehabilitation protocols utilizing virtual environments transition from single applications to comprehensive rehabilitation programs, there is a need for a new design cycle methodology. Current human-computer interaction designs focus on usability without benchmarking technology within a user-in-the-loop design cycle. The field of virtual rehabilitation is unique in that determining the efficacy of this genre of computer-aided therapies requires prior knowledge of technology issues that may confound patient outcome measures. Benchmarking the technology (e.g., displays or data gloves) using healthy controls may provide a means of characterizing the "normal" performance range of the virtual rehabilitation system. This standard not only allows therapists to select appropriate technology for use with their patient populations, it also allows them to account for technology limitations when assessing treatment efficacy.
    Methods: An overview of the proposed user-centered design cycle is given. Comparisons of two optical see-through head-worn displays provide an example of benchmarking techniques. Benchmarks were obtained using a novel vision test capable of measuring a user's stereoacuity while wearing different types of head-worn displays. Results from healthy participants who performed both virtual and real-world versions of the stereoacuity test are discussed with respect to virtual rehabilitation design.
    Results: The user-centered design cycle argues for benchmarking to precede virtual environment construction, especially for therapeutic applications. Results from real-world testing illustrate the general limitations in stereoacuity attained when viewing content using a head-worn display. Further, the stereoacuity vision benchmark test highlights differences in user performance when utilizing a similar style of head-worn display. These results support the need for including benchmarks as a means of better understanding user outcomes, especially for patient populations.
    Conclusions: The stereoacuity testing confirms that, without benchmarking in the design cycle, poor user performance could be misconstrued as resulting from the participant's injury state. Thus, a user-centered design cycle that includes benchmarking for the different sensory modalities is recommended for accurate interpretation of the efficacy of virtual environment-based rehabilitation programs.
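
    As a hedged sketch of the benchmarking idea rather than the study's actual analysis (the stereoacuity values and the choice of a paired t-test are illustrative assumptions), the code below compares stereoacuity measured on healthy participants in a real-world test against two head-worn displays.

    # Illustrative sketch: comparing stereoacuity scores (arc seconds, lower is
    # better) from healthy participants in a real-world test and on two head-worn
    # displays. The numbers are invented; the paired t-test is just one plausible
    # way to summarise such a benchmark.
    import numpy as np
    from scipy.stats import ttest_rel

    real_world = np.array([40, 45, 38, 50, 42, 47, 44, 41], dtype=float)
    display_a  = np.array([120, 140, 115, 160, 130, 150, 138, 125], dtype=float)
    display_b  = np.array([200, 230, 190, 260, 215, 240, 225, 205], dtype=float)

    for name, hmd in [("display A", display_a), ("display B", display_b)]:
        t, p = ttest_rel(hmd, real_world)
        print(f"{name}: mean {hmd.mean():.0f} arcsec vs real-world "
              f"{real_world.mean():.0f} arcsec (paired t = {t:.2f}, p = {p:.3g})")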