
    Evidence of preference construction in a comparison of variants of the standard gamble method

    An increasingly important debate has emerged around the extent to which techniques such as the standard gamble, which is used, amongst other things, to value health states, actually serve to construct respondents' preferences rather than simply elicit them. According to standard theory, the variant used should have no bearing on the numbers elicited from respondents, i.e., procedural invariance should hold. This study addresses this debate by comparing two variants of the standard gamble in the valuation of health states. It is a mixed-methods study that combines a quantitative comparison with probing of respondents in order to ascertain possible reasons for the differences that emerged. Significant differences were found between variants and, furthermore, there was evidence of an ordering effect. Responses to probing suggested that respondents were influenced by the method of elicitation.
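    For context, the textbook standard gamble values a health state by the probability p at which a respondent is indifferent between living in that state for certain and a gamble giving full health with probability p and death with probability 1 - p, so that U(state) = p. The Python sketch below illustrates this indifference logic with a simple titration procedure; the paper's two actual variants are not detailed in this abstract, so the search procedure shown is an assumption, not the study's protocol.

```python
def standard_gamble_utility(indifference_p: float) -> float:
    """Utility of a health state under the standard gamble.

    At indifference between the sure state and a gamble giving full
    health (utility 1) with probability p and death (utility 0)
    otherwise: U(state) = p * 1 + (1 - p) * 0 = p.
    """
    if not 0.0 <= indifference_p <= 1.0:
        raise ValueError("indifference probability must lie in [0, 1]")
    return indifference_p


def titrate(prefers_gamble, probabilities):
    """Illustrative titration: step through a fixed list of gamble
    probabilities and return the midpoint of the interval where the
    respondent's choice flips from the gamble to the sure thing."""
    for i in range(1, len(probabilities)):
        if prefers_gamble[i - 1] != prefers_gamble[i]:
            return (probabilities[i - 1] + probabilities[i]) / 2
    # No flip: utility lies beyond the probed range on one side.
    return probabilities[0] if not prefers_gamble[0] else probabilities[-1]


# Example: the choice flips between p = 0.8 and p = 0.7, so U is about 0.75.
probs = [0.9, 0.8, 0.7, 0.6]
choices = [True, True, False, False]
print(standard_gamble_utility(titrate(choices, probs)))
```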

    Identification of Design Principles

    This report identifies those design principles for a (possibly new) query and transformation language for the Web supporting inference that are considered essential. Based upon these design principles, an initial strawman is selected. Scenarios for querying the Semantic Web illustrate the design principles and their reflection in the initial strawman, i.e., a first draft of the query language to be designed and implemented by the REWERSE working group I4.
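    As a purely illustrative stand-in (the report's own strawman syntax is not reproduced in this abstract), the sketch below uses Python's rdflib with a SPARQL query to show the kind of query-the-Semantic-Web scenario such a language must support; the data, names, and namespace are invented.

```python
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, FOAF

g = Graph()
ex = Namespace("http://example.org/")  # invented namespace for the example
g.add((ex.alice, RDF.type, FOAF.Person))
g.add((ex.alice, FOAF.name, Literal("Alice")))
g.add((ex.alice, FOAF.knows, ex.bob))
g.add((ex.bob, RDF.type, FOAF.Person))
g.add((ex.bob, FOAF.name, Literal("Bob")))

# Query and transform: list the names of everyone Alice knows.
results = g.query(
    """
    SELECT ?name WHERE {
        ex:alice foaf:knows ?friend .
        ?friend foaf:name ?name .
    }
    """,
    initNs={"foaf": FOAF, "ex": ex},
)
for row in results:
    print(row.name)  # -> Bob
```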

    Analysis of equivalence mapping for terminology services

    This paper assesses the range of equivalence or mapping types required to facilitate interoperability in the context of a distributed terminology server. A detailed set of mapping types was examined, with a view to determining their validity for characterizing relationships between mappings from selected terminologies (AAT, LCSH, MeSH, and UNESCO) to the Dewey Decimal Classification (DDC) scheme. It was hypothesized that the detailed set of 19 match types proposed by Chaplan in 1995 is unnecessary in this context and that they could be reduced to a less detailed, conceptually-based set. Results from an extensive mapping exercise support the main hypothesis and a generic suite of match types is proposed, although doubt remains over the current adequacy of the developing Simple Knowledge Organization System (SKOS) Core Mapping Vocabulary Specification (MVS) for inter-terminology mapping.
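    To make the mapping types concrete, here is a minimal sketch using Python's rdflib and the SKOS vocabulary; the concept URIs are invented placeholders, not real LCSH or DDC identifiers, and the two relations shown (exact and broader match) are just examples of the coarser, conceptually-based match types discussed.

```python
from rdflib import Graph, Namespace
from rdflib.namespace import SKOS

g = Graph()
lcsh = Namespace("http://example.org/lcsh/")  # placeholder namespaces
ddc = Namespace("http://example.org/ddc/")

# A one-to-one equivalence between two terminologies...
g.add((lcsh.Birds, SKOS.exactMatch, ddc["598"]))
# ...and a hierarchical mapping, where the source concept is narrower
# than the target class.
g.add((lcsh.Hummingbirds, SKOS.broadMatch, ddc["598"]))

print(g.serialize(format="turtle"))
```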

    Science of Digital Libraries (SciDL)

    Our purpose is to ensure that people and institutions better manage information through digital libraries (DLs). Thus we address a fundamental human and social need, which is particularly urgent in the modern Information (and Knowledge) Age. Our goal is to significantly advance both the theory and state-of-the-art of DLs (and other advanced information systems), thoroughly validating our approach using highly visible testbeds. Our research objective is to leverage our formal, theory-based approach to the problems of defining, understanding, modeling, building, personalizing, and evaluating DLs. We will construct models and tools based on that theory so organizations and individuals can easily create and maintain fully functional DLs, whose components can interoperate with corresponding components of related DLs. This research should be highly meritorious intellectually. We bring together a team of senior researchers with expertise in information retrieval, human-computer interaction, scenario-based design, personalization, and componentized system development, and expect to make important contributions in each of those areas. Of crucial import, however, is that we will integrate our prior research and experience to achieve breakthrough advances in the field of DLs, regarding theory, methodology, systems, and evaluation. We will extend the 5S theory, which has identified five key dimensions or constructs underlying effective DLs: Streams, Structures, Spaces, Scenarios, and Societies. We will use that theory to describe and develop metamodels, models, and systems, which can be tailored to disciplines and/or groups, as well as personalized. We will disseminate our findings as well as provide toolkits as open source software, encouraging wide use. We will validate our work using testbeds, ensuring broad impact. We will put powerful tools into the hands of digital librarians so they may easily plan and configure tailored systems, to support an extensible set of services, including publishing, discovery, searching, browsing, recommending, and access control, handling diverse types of collections, and varied genres and classes of digital objects. With these tools, end-users will be able to design personal DLs. Testbeds are crucial to validate scientific theories and will be thoroughly integrated into SciDL research and evaluation. We will focus on two application domains, which together should allow comprehensive validation and increase the significance of SciDL's impact on scholarly communities. One is education (through CITIDEL); the other is libraries (through DLA and OCKHAM). CITIDEL deals with content from publishers (e.g., ACM Digital Library), corporate research efforts (e.g., CiteSeer), volunteer initiatives (e.g., DBLP, based on the database and logic programming literature), CS departments (e.g., NCSTRL, mostly technical reports), educational initiatives (e.g., Computer Science Teaching Center), and universities (e.g., theses and dissertations). DLA is a unit of the Virginia Tech library that virtually publishes scholarly communication such as faculty-edited journals and rare and unique resources, including image collections and finding aids from Special Collections. The OCKHAM initiative, calling for simplicity in the library world, emphasizes a three-part solution: lightweight protocols, component-based development, and open reference models. It provides a framework to research the deployment of the SciDL approach in libraries. Thus our choice of testbeds also will ensure that our research will have additional benefit to and impact on the fields of computing and library and information science, supporting transformations in how we learn and deal with information.
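    A minimal sketch of how the five 5S constructs might be modeled as Python types; the attributes chosen here are illustrative assumptions, not the formal 5S definitions.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Stream:        # a sequence of information items (text, audio, video)
    items: List[bytes] = field(default_factory=list)

@dataclass
class Structure:     # how a stream's parts are organized, e.g. metadata
    labels: Dict[str, str] = field(default_factory=dict)

@dataclass
class Space:         # e.g. a vector space used for indexing and retrieval
    dimensions: int = 0

@dataclass
class Scenario:      # a sequence of steps serving a user goal
    steps: List[str] = field(default_factory=list)

@dataclass
class Society:       # the users, roles, and services around the library
    actors: List[str] = field(default_factory=list)

@dataclass
class DigitalLibrary:
    streams: List[Stream] = field(default_factory=list)
    structures: List[Structure] = field(default_factory=list)
    spaces: List[Space] = field(default_factory=list)
    scenarios: List[Scenario] = field(default_factory=list)
    societies: List[Society] = field(default_factory=list)
```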

    Applying Hierarchical Task Analysis Method to Discovery Layer Evaluation

    While usability tests have been helpful in evaluating the success or failure of implementing discovery layers in the library context, the focus of usability tests has remained on the search interface rather than the discovery process for users. The informal site- and context-specific usability tests have offered little to test the rigor of the discovery layers against the user goals, motivations, and workflow they have been designed to support. This study proposes hierarchical task analysis (HTA) as an important complementary evaluation method to usability testing of discovery layers. Relevant literature is reviewed for the discovery layers and the HTA method. As no previous application of HTA to the evaluation of discovery layers was found, this paper presents the application of HTA as an expert-based and workflow-centered (e.g., retrieving a relevant book or a journal article) method for evaluating discovery layers. Purdue University’s Primo by Ex Libris was used to map eleven use cases as HTA charts. Nielsen’s Goal Composition theory was used as an analytical framework to evaluate the goal charts from two perspectives: (a) users’ physical interactions (i.e., clicks), and (b) users’ cognitive steps (i.e., decision points for what to do next). A brief comparison of HTA and usability test findings is offered by way of conclusion.
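    To illustrate what an HTA chart evaluation might look like in code, the sketch below models a task tree whose leaves are either physical interactions (clicks) or cognitive decision points, and counts each, mirroring the two analytical perspectives above. The task and step names are invented, not taken from the paper's eleven Primo use cases.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Task:
    name: str
    kind: str = "composite"    # "physical", "cognitive", or "composite"
    subtasks: List["Task"] = field(default_factory=list)

    def count(self, kind: str) -> int:
        """Count leaf steps of the given kind in this (sub)hierarchy."""
        if not self.subtasks:
            return 1 if self.kind == kind else 0
        return sum(t.count(kind) for t in self.subtasks)

retrieve_article = Task("retrieve a known journal article", subtasks=[
    Task("type the title into the search box", "physical"),
    Task("decide whether to limit results to articles", "cognitive"),
    Task("click search", "physical"),
    Task("judge which result matches the citation", "cognitive"),
    Task("click the full-text link", "physical"),
])

print("clicks:", retrieve_article.count("physical"))            # 3
print("decision points:", retrieve_article.count("cognitive"))  # 2
```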

    Research on computer aided testing of pilot response to critical in-flight events

    Experiments on pilot decision making are described. The development of models of pilot decision making in critical in-flight events (CIFE) is emphasized. Tests are reported on the development of: (1) a frame-system representation describing how pilots use their knowledge in a fault-diagnosis task; (2) assessment of script norms, distance measures, and Markov models developed from computer-aided testing (CAT) data; and (3) performance ranking of subject data. It is demonstrated that interactive computer-aided testing, whether by touch CRTs or personal computers, is a useful research and training device for measuring pilot information management in diagnosing system failures in simulated flight situations. Performance is dictated by knowledge of aircraft subsystems, initial pilot structuring of the failure symptoms, and efficient testing of plausible causal hypotheses.
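    As one concrete piece of the modeling described above, a Markov model of pilot information management can be estimated from logged CAT action sequences. The sketch below builds a first-order transition matrix; the action codes and sequences are invented for illustration, not drawn from the study's data.

```python
import numpy as np

actions = ["check_gauge", "test_hypothesis", "consult_checklist"]
index = {a: i for i, a in enumerate(actions)}

# Invented example logs of pilot actions during simulated failures.
sequences = [
    ["check_gauge", "test_hypothesis", "check_gauge", "consult_checklist"],
    ["check_gauge", "consult_checklist", "test_hypothesis"],
]

# Count observed transitions between consecutive actions.
counts = np.zeros((len(actions), len(actions)))
for seq in sequences:
    for a, b in zip(seq, seq[1:]):
        counts[index[a], index[b]] += 1

# Row-normalize to transition probabilities, leaving empty rows at zero.
row_sums = counts.sum(axis=1, keepdims=True)
P = np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)
print(P)
```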

    Testing the Consistency of Performance Scores Reported for Binary Classification Problems

    Binary classification is a fundamental task in machine learning, with applications spanning various scientific domains. Whether scientists are conducting fundamental research or refining practical applications, they typically assess and rank classification techniques based on performance metrics such as accuracy, sensitivity, and specificity. However, reported performance scores may not always serve as a reliable basis for research ranking. This can be attributed to undisclosed or unconventional practices related to cross-validation, typographical errors, and other factors. In a given experimental setup, with a specific number of positive and negative test items, most performance scores can assume only specific, interrelated values. In this paper, we introduce numerical techniques to assess the consistency of reported performance scores and the assumed experimental setup. Importantly, the proposed approach does not rely on statistical inference but uses numerical methods to identify inconsistencies with certainty. Through three different applications related to medicine, we demonstrate how the proposed techniques can effectively detect inconsistencies, thereby safeguarding the integrity of research fields. To benefit the scientific community, we have made the consistency tests available in an open-source Python package.
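    The core idea can be illustrated in a few lines: with known numbers of positive and negative test items, accuracy, sensitivity, and specificity must be simultaneously realizable by some integer confusion matrix. Below is a brute-force sketch of such a certainty-based check; the numbers are made up, and the paper's actual package uses more refined numerical techniques than this plain enumeration.

```python
def consistent(n_pos, n_neg, acc, sens, spec, eps=5e-4):
    """Return True iff some integer (tp, tn) with 0 <= tp <= n_pos and
    0 <= tn <= n_neg reproduces all three reported scores within the
    rounding tolerance eps (5e-4 for scores reported to 3 decimals)."""
    n = n_pos + n_neg
    for tp in range(n_pos + 1):
        for tn in range(n_neg + 1):
            if (abs(tp / n_pos - sens) <= eps
                    and abs(tn / n_neg - spec) <= eps
                    and abs((tp + tn) / n - acc) <= eps):
                return True
    return False

# 100 positives, 150 negatives, scores reported to 3 decimals:
print(consistent(100, 150, acc=0.744, sens=0.760, spec=0.733))  # True (tp=76, tn=110)
# The same sensitivity and specificity pin accuracy down, so a reported
# accuracy of 0.900 is inconsistent with certainty, not merely unlikely:
print(consistent(100, 150, acc=0.900, sens=0.760, spec=0.733))  # False
```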

    Measuring adaptation to non-permanent employment contracts using a conjoint analysis approach

    This study attempts to uncover the ‘real’ impact of temporary contracts on workers’ perceived job quality, prior to the psychological phenomena of adaptation, coping and cognitive dissonance coming into play. This is done by using a novel conjoint analysis approach that examines the ex ante preferences over different contract statuses of a newly generated sample of low-skilled employees from seven European countries. Other things equal, it is shown that the anticipated psychological ‘costs’ of moving from a riskless permanent contract to the insecurity of a temporary job or no work at all appear to be quite significant. In contrast, temporary employees, who have presumably already adapted to the circumstances surrounding a non-permanent contract, are found to be statistically indifferent between permanent and temporary employment, and request much smaller wage premiums in order to switch from one status to the other. The well-documented distress associated with joblessness is also confirmed in our data. The methodology developed here can provide policymakers with an alternative and relatively inexpensive method of quantifying the immediate impact of any shift in their employment policies.
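    To show how such wage premiums might be computed, here is a minimal sketch on entirely synthetic choice data with assumed coefficients (not the paper's sample, model, or estimates): fit a logit on accept/reject decisions over job profiles and take the compensating differential -b_temp / b_wage.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic conjoint profiles: monthly wage (in thousands) and a
# temporary-contract dummy; acceptance follows an assumed utility.
n = 2000
wage = rng.uniform(1.0, 3.0, n)
temporary = rng.integers(0, 2, n).astype(float)
utility = 2.0 * wage - 1.5 * temporary + rng.logistic(0.0, 1.0, n)
accept = (utility > 4.0).astype(int)

X = np.column_stack([wage, temporary])
b_wage, b_temp = LogisticRegression().fit(X, accept).coef_[0]

# The wage increase that offsets the disutility of a temporary contract:
print(f"estimated premium ~ {-b_temp / b_wage:.2f} thousand per month")
```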