
    The internal reliability of some City & Guilds tests


    A literature review of the anthropometric studies of school students for ergonomics purposes: are accuracy, precision and reliability being considered?

    BACKGROUND: Despite offering many benefits, direct manual anthropometric measurement methods can be problematic due to their vulnerability to measurement errors. OBJECTIVE: The purpose of this literature review was to determine whether or not the currently published anthropometric studies of school children, related to ergonomics, mentioned or evaluated the variables precision, reliability and/or accuracy in the direct manual measurement method. METHODS: Two bibliographic databases, and the bibliographic references of all the selected papers, were used to find relevant published papers in the fields considered in this study. RESULTS: Forty-six (46) studies met the criteria previously defined for this literature review. However, only ten (10) studies mentioned at least one of the analyzed variables, and none evaluated all of them. Only reliability was assessed, and by just three papers. Moreover, with regard to the factors that affect precision, reliability and accuracy, the reviewed papers presented large differences. This was particularly clear in the instruments used for the measurements, which were not consistent across the studies. Additionally, there was a clear lack of information regarding the evaluators’ training and the procedures for anthropometric data collection, which are assumed to be the most important issues affecting precision, reliability and accuracy. CONCLUSIONS: Based on the results, it was possible to conclude that the considered anthropometric studies had not focused their attention on the analysis of the precision, reliability and accuracy of the manual measurement methods. Hence, with the aim of avoiding measurement errors and misleading data, anthropometric studies should put more effort and care into testing measurement error and defining the procedures used to collect anthropometric data.

    Grid infrastructures for secure access to and use of bioinformatics data: experiences from the BRIDGES project

    The BRIDGES project was funded by the UK Department of Trade and Industry (DTI) to address the needs of cardiovascular research scientists investigating the genetic causes of hypertension as part of the Wellcome Trust funded (£4.34M) cardiovascular functional genomics (CFG) project. Security was at the heart of the BRIDGES project, and an advanced data and compute grid infrastructure incorporating the latest grid authorisation technologies was developed and delivered to the scientists. We outline these grid infrastructures and describe the security requirements perceived at the project start, including data classifications, and how these evolved throughout the lifetime of the project. The uptake and adoption of the project results are also presented, along with the challenges that must be overcome to support the secure exchange of life science data sets. We also present how we will use the BRIDGES experiences in future projects at the National e-Science Centre.

    Big Data and Due Process

    Today, electronic footprints may follow us wherever we go. Electronic traces, left through a smartphone or other device, can be tracked to the scene of a crime, or they can place a person far from a crime scene. By the same token, individuals may be falsely implicated due to errors in large government or commercial databases, and evidence of innocence may linger in such archives without ever coming to light. Professors Joshua Fairfield and Erik Luna have done an important service by carefully introducing the problem of “digital innocence” and marking out areas in need of clear thinking and policy. In this online response to their wonderful piece, I discuss four additional problems at the intersection of big data and due process rights: (1) the need for developed electronic discovery rules in criminal cases; (2) the need to reconsider the meaning of Brady v. Maryland and the due process obligations of prosecutors and government agencies in the context of government data; (3) the parallel need to reconsider standards for effective assistance of defense counsel; and (4) the need for broader and better-adapted postconviction electronic discovery and remedies.

    A scalable reliable instant messenger using the SD Erlang libraries

    Erlang has world-leading reliability capabilities, but while it scales extremely well within a single node, distributed Erlang has some scalability issues. The Scalable Distributed (SD) Erlang libraries have been designed to address these scalability limitations while preserving the reliability model, and have been shown to deliver significant performance benefits above 40 hosts using some relatively simple benchmarks. This paper compares the reliability and scalability of SD Erlang and distributed Erlang using an Instant Messaging (IM) server benchmark that is a far more typical Erlang application: it is a relatively large and sophisticated benchmark, has throughput as the key performance metric, and uses non-trivial reliability mechanisms. We provide a careful reliability evaluation using Chaos Monkey. The key performance results consider scenarios with and without failures on up to 17 server hosts (272 cores). We show that SD Erlang adds no performance overhead when all nodes are grouped in a single s_group. However, either adding redundant router nodes in distributed Erlang applications or dividing a set of nodes into small s_groups in SD Erlang applications has a small negative impact. Both the distributed Erlang and SD Erlang IM implementations tolerate failures and, up to the failure rates measured, the failures have no impact on throughput. The IM implementations show that SD Erlang preserves the distributed Erlang reliability properties and mechanisms.

    Results of a UK industrial tribological survey

    During the summer of 2012, the National Centre for Advanced Tribology at Southampton (nCATS) undertook a UK-wide industrial tribological survey in order to assess the explicit need for tribological testing within the UK. The survey was designed and implemented by a summer intern student, Mr Simon King, under the supervision of Drs John Walker and Terry Harvey and supported by the director of nCATS, Professor Robert Wood. The survey built upon two previous tribological surveys conducted through the National Physical Laboratory (NPL) during the 1990s. The aim was to capture a snapshot of the current use of tribological testing within UK industry and its perceived reliability in terms of the test data generated. The survey also invited participants to speculate about how UK tribology could improve its approach to testing. The survey was distributed through the nCATS industrial contact list, which comprises over 400 contacts from a broad spectrum of commercial industries. The Institute of Physics (IOP) tribology group also assisted by distributing the survey to its membership list. A total of 60 responses were received, of which 39 fully completed the questionnaire. Participants came from a broad spread of industrial backgrounds, with the energy sector having the highest representation. Only 40% of respondents were dedicated tribologists/surface engineers, again reflecting the multi-disciplinary nature of the field. It was found that the companies with the highest annual turnover also appeared to spend the most on tribology. The majority of respondents indicated that tribology accounted for less than 1% of turnover; however, the lack of figures specific to tribology makes this a conservative estimate. The greatest concern of respondents in relation to tribology was cost; however, the influence of legislation and product reliability were also driving factors. Abrasive wear was still considered the number one tribological wear mechanism, with sliding contacts ranking as the most common type of wear interface. Metallic and hard-coated surfaces were the most commonly encountered materials suffering from tribological wear phenomena. Laboratory-scale testing was a significant part of introducing a new tribological component; however, component-specific testing was considered a more reliable way of testing a new component than standardised test geometries. Overall there appeared to be much potential for improving the reliability of tribological test data, with most respondents indicating that the best approach was not simply more testing, but rather more reliable, representative tests with improved knowledge capture. Most companies possessed an internal database to assist them with tribological information; however, many also expressed a strong desire for a commercial or national database, although the format this might take was less clear. Opinions appeared split as to whether there would be a collective willingness to contribute to a centralised database, presumably on the grounds of the sensitivity of the data.

    Impliance: A Next Generation Information Management Appliance

    "While database technology has been remarkably successful in building a large market and adapting to the changes of the last three decades, its impact on the broader market of information management is surprisingly limited. If we were to design an information management system from scratch, based upon today's requirements and hardware capabilities, would it look anything like today's database systems?" In this paper, we introduce Impliance, a next-generation information management system consisting of hardware and software components integrated to form an easy-to-administer appliance that can store, retrieve, and analyze all types of structured, semi-structured, and unstructured information. We first summarize the trends that will shape information management for the foreseeable future. Those trends imply three major requirements for Impliance: (1) to be able to store, manage, and uniformly query all data, not just structured records; (2) to be able to scale out as the volume of this data grows; and (3) to be simple and robust in operation. We then describe four key ideas that are uniquely combined in Impliance to address these requirements, namely: (a) integrating software and off-the-shelf hardware into a generic information appliance; (b) automatically discovering, organizing, and managing all data - unstructured as well as structured - in a uniform way; (c) achieving scale-out by exploiting simple, massively parallel processing; and (d) virtualizing compute and storage resources to unify, simplify, and streamline the management of Impliance. Impliance is an ambitious, long-term effort to define simpler, more robust, and more scalable information systems for tomorrow's enterprises.
    Comment: This article is published under a Creative Commons License Agreement (http://creativecommons.org/licenses/by/2.5/). You may copy, distribute, display, and perform the work, make derivative works and make commercial use of the work, but you must attribute the work to the author and CIDR 2007. 3rd Biennial Conference on Innovative Data Systems Research (CIDR), January 7-10, 2007, Asilomar, California, USA