
    Are we here for the same reason? Exploring the motivational values that shape the professional decision making of signed language interpreters.

    The goal of this research is to begin a discussion in the ASL/English interpreting field about how personally held motivations and values impact the decision-making process. From the decision to enter this field to the decisions an interpreter makes on a daily basis, values are central to understanding that process. The first step in this analysis was to collect data from current interpreters and interpreting students to see what motivational values are prioritized within professional communities. These data were collected through an online questionnaire made available through multiple social media websites that support various ASL/English interpreting communities. Through statistical analysis of the questionnaire results and the coding of one short-answer question, the following questions are addressed: What motivational values do ASL/English interpreters prioritize? How are these values expressed when interpreters are asked to articulate their reasons for pursuing a career in this field? Do participants' demographic characteristics (e.g., native language(s), educational background, ethnic identity, and specialized work settings) relate to the prioritization of motivational value types? The results showed that the sample prioritized the motivational types of self-direction, benevolence, and universalism most highly. Some possible reasons for this value prioritization are explored, as are sub-populations within the sample that diverged from this motivational value system. The hope is that by examining these findings, practicing interpreters and interpreting students can begin to explore their own individually held values and how conflicting and congruent values are expressed and assessed within their practice.

    MLPAinter for MLPA interpretation: An integrated approach for the analysis, visualisation and data management of Multiplex Ligation-dependent Probe Amplification

    Background: Multiplex Ligation-dependent Probe Amplification (MLPA) is a technique that can be used for the detection of multiple chromosomal aberrations in a single experiment. In one reaction, up to 50 different genomic sequences can be analysed. For a reliable workflow, tools are needed for administrative support, data management, normalisation, visualisation, reporting and interpretation. Results: We developed a data management system, MLPAinter for MLPA interpretation, that is Windows-executable and has a stand-alone database for monitoring and interpreting the MLPA data stream that is generated from the experimental setup through analysis, quality control and visualisation. A statistical approach is applied for the normalisation and analysis of large series of MLPA traces, making use of multiple control samples and internal controls. Conclusions: MLPAinter visualises MLPA data in plots with information about sample replicates, normalisation settings, and sample characteristics. This integrated approach helps automate the handling of large series of MLPA data and guarantees a quick and streamlined dataflow from the beginning of an experiment to an authorised report.
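    The abstract above describes normalising MLPA probe signals against multiple control samples and flagging aberrations. The sketch below illustrates that general idea only; the function names, data layout, and the 0.7/1.3 ratio cutoffs are illustrative assumptions, not MLPAinter's actual implementation.

```python
from statistics import median

# Illustrative cutoffs: probe ratios below/above these are often treated
# as candidate deletions/duplications in MLPA-style analyses (assumed here).
DELETION_CUTOFF = 0.7
DUPLICATION_CUTOFF = 1.3

def normalize_probe(sample_peaks, control_peaks_per_sample, probe):
    """Ratio of a sample's probe signal to the median signal across controls."""
    controls = [peaks[probe] for peaks in control_peaks_per_sample]
    return sample_peaks[probe] / median(controls)

def call_probes(sample_peaks, control_peaks_per_sample):
    """Classify each probe as normal or a candidate deletion/duplication."""
    calls = {}
    for probe in sample_peaks:
        ratio = normalize_probe(sample_peaks, control_peaks_per_sample, probe)
        if ratio < DELETION_CUTOFF:
            calls[probe] = ("deletion?", ratio)
        elif ratio > DUPLICATION_CUTOFF:
            calls[probe] = ("duplication?", ratio)
        else:
            calls[probe] = ("normal", ratio)
    return calls
```

    Using the median of several control samples, as the abstract suggests, makes the reference robust to a single outlying control run.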

    The International Land Model Benchmarking (ILAMB) System: Design, Theory, and Implementation

    The increasing complexity of Earth system models has inspired efforts to quantitatively assess model fidelity through rigorous comparison with best available measurements and observational data products. Earth system models exhibit a high degree of spread in predictions of land biogeochemistry, biogeophysics, and hydrology, which are sensitive to forcing from other model components. Based on insights from prior land model evaluation studies and community workshops, the authors developed an open source model benchmarking software package that generates graphical diagnostics and scores model performance in support of the International Land Model Benchmarking (ILAMB) project. Employing a suite of in situ, remote sensing, and reanalysis data sets, the ILAMB package performs comprehensive model assessment across a wide range of land variables and generates a hierarchical set of web pages containing statistical analyses and figures designed to provide the user with insights into strengths and weaknesses of multiple models or model versions. Described here is the benchmarking philosophy and mathematical methodology embodied in the most recent implementation of the ILAMB package. Comparison methods unique to a few specific data sets are presented, and guidelines for configuring an ILAMB analysis and interpreting resulting model performance scores are discussed. ILAMB is being adopted by modeling teams and centers during model development and for model intercomparison projects, and community engagement is sought for extending evaluation metrics and adding new observational data sets to the benchmarking framework. Key Point: The ILAMB benchmarking system broadly compares models to observational data sets and provides a synthesis of overall performance.
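    The abstract describes scoring model output against observational data sets. As a toy illustration of that kind of normalized scoring, the sketch below maps a relative bias and an RMSE to scores in (0, 1] via an exponential; the formulas and names here are assumptions for illustration, not ILAMB's actual methodology.

```python
import math

def bias_score(model, obs):
    """Map relative bias between model and observed means to (0, 1].

    1.0 means perfect agreement of means; the exponential mapping is
    an illustrative choice, not ILAMB's exact formula.
    """
    mean_model = sum(model) / len(model)
    mean_obs = sum(obs) / len(obs)
    rel_bias = abs(mean_model - mean_obs) / abs(mean_obs)
    return math.exp(-rel_bias)

def rmse_score(model, obs):
    """Score RMSE relative to the centralized RMS of the observations."""
    n = len(obs)
    mean_obs = sum(obs) / n
    rmse = math.sqrt(sum((m - o) ** 2 for m, o in zip(model, obs)) / n)
    crms = math.sqrt(sum((o - mean_obs) ** 2 for o in obs) / n)
    return math.exp(-rmse / crms) if crms else 0.0
```

    Normalizing each error by a scale derived from the observations is what lets scores for very different land variables be combined into one synthesis, as the abstract's "hierarchical set of web pages" implies.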

    Measuring Risk Literacy: The Berlin Numeracy Test

    We introduce the Berlin Numeracy Test, a new psychometrically sound instrument that quickly assesses statistical numeracy and risk literacy. We present 21 studies (n=5336) showing robust psychometric discriminability across 15 countries (e.g., Germany, Pakistan, Japan, USA) and diverse samples (e.g., medical professionals, general populations, Mechanical Turk web panels). Analyses demonstrate desirable patterns of convergent validity (e.g., numeracy, general cognitive abilities), discriminant validity (e.g., personality, motivation), and criterion validity (e.g., numerical and nonnumerical questions about risk). The Berlin Numeracy Test was found to be the strongest predictor of comprehension of everyday risks (e.g., evaluating claims about products and treatments; interpreting forecasts), doubling the predictive power of other numeracy instruments and accounting for unique variance beyond other cognitive tests (e.g., cognitive reflection, working memory, intelligence). The Berlin Numeracy Test typically takes about three minutes to complete and is available in multiple languages and formats, including a computer adaptive test that automatically scores and reports data to researchers (www.riskliteracy.org). The online forum also provides interactive content for public outreach and education, and offers a recommendation system for test format selection. Discussion centers on the construct validity of numeracy for risk literacy, underlying cognitive mechanisms, and applications in adaptive decision support.

    Secondary mathematics guidance papers: summer 2008


    Using school performance feedback: perceptions of primary school principals

    The present study focuses on primary school principals' perceptions of school performance feedback (SPF) and their actual use of this information. This study is part of a larger project that aims to develop a new school performance feedback system (SPFS). The study builds on an eclectic framework that integrates the literature on SPFSs. Through in-depth interviews with 16 school principals, 4 clusters of factors influencing school feedback use were identified: context, school and user, SPFS, and support. This study refines the description of feedback use in terms of phases and types of use and effects on school improvement. Although school performance feedback can be seen as an important instrument for school improvement, no systematic use of feedback by school principals was observed. This was partly explained by a lack of skills, time, and support.

    GeoZui3D: Data Fusion for Interpreting Oceanographic Data

    GeoZui3D stands for Geographic Zooming User Interface. It is a new visualization software system designed for interpreting multiple sources of 3D data. The system supports gridded terrain models, triangular meshes, curtain plots, and a number of other display objects. A novel center-of-workspace interaction method unifies a number of aspects of the interface: it creates a simple viewpoint control method, helps link multiple views, and is ideal for stereoscopic viewing. GeoZui3D has a number of features to support real-time input. Through a CORBA interface, external entities can influence the position and state of objects in the display. Extra windows can be attached to moving objects, allowing their position and data to be monitored. We describe the application of this system to heterogeneous data fusion, multibeam QC, and ROV/AUV monitoring.

    Technical and vocational skills (TVS): a means of preventing violence among youth in Nigeria

    Technical and vocational skills are an important tool for reducing violence among youth, especially in Nigeria, where young people face security challenges arising from different kinds of violence. This paper focusses on the policies and programmes intended to provide youth with skills that can help them improve their lives instead of engaging in violence. The paper also examines youth participation in violence. The study shows that youth in Nigeria participate in violence because of unemployment and economic pressure. These youth are mostly from poor families and are often used by others to achieve their own unlawful ambitions. The data were collected from various secondary sources, such as textbooks, journals and conference papers, that were carefully reviewed. The results obtained from the literature revealed that youth are not committed, sensitised and mobilised to take advantage of the opportunities available to them. The results also revealed that almost all the programmes meant to provide youths with skills have failed. Poverty alleviation programmes established to create jobs, self-employment and self-reliance have been unsuccessful. Therefore, alternatives must be provided to help the younger generations. Based on the literature reviewed, the paper discusses related issues and outcomes and ends with recommendations to improve the situation.

    Using Qualitative Hypotheses to Identify Inaccurate Data

    Identifying inaccurate data has long been regarded as a significant and difficult problem in AI. In this paper, we present a new method for identifying inaccurate data on the basis of qualitative correlations among related data. First, we introduce the definitions of related data and of qualitative correlations among related data. Then we put forward a new concept called the support coefficient function (SCF). SCF can be used to extract, represent, and calculate qualitative correlations among related data within a dataset. We propose an approach to determining dynamic shift intervals of inaccurate data, and an approach to calculating the possibility of identifying inaccurate data, respectively. Both approaches are based on SCF. Finally, we present an algorithm for identifying inaccurate data by using qualitative correlations among related data as confirmatory or disconfirmatory evidence. We have developed a practical system for interpreting infrared spectra by applying the method, and have fully tested the system against several hundred real spectra. The experimental results show that the method is significantly better than the conventional methods used in many similar systems. Comment: See http://www.jair.org/ for any accompanying file.
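    The paper's SCF is its own formal construct; as a loose illustration of the underlying idea (using agreement with a qualitative correlation as confirmatory evidence and disagreement as disconfirmatory evidence), the sketch below flags points in one series that move against an otherwise well-supported positive correlation with a related series. All names and the threshold are assumptions, not the paper's definitions.

```python
def support_coefficient(x, y):
    """Crude stand-in for a support coefficient: the fraction of adjacent
    steps in which x and y move in the same direction (a qualitative
    positive correlation). Not the SCF defined in the paper."""
    pairs = list(zip(x, y))
    agree = 0
    for (x0, y0), (x1, y1) in zip(pairs, pairs[1:]):
        if (x1 - x0) * (y1 - y0) >= 0:
            agree += 1
    return agree / (len(pairs) - 1)

def flag_inaccurate(x, y, threshold=0.8):
    """Flag indices where y moves against x, but only when the series as a
    whole supports the correlation strongly enough to serve as evidence."""
    if support_coefficient(x, y) < threshold:
        return []  # not enough confirmatory evidence of a correlation
    return [i for i in range(1, len(x))
            if (x[i] - x[i - 1]) * (y[i] - y[i - 1]) < 0]
```

    Requiring overall support before flagging any single point mirrors the abstract's use of correlations as evidence rather than hard rules: an isolated disagreement is suspect only when the correlation itself is well confirmed.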

    Alternative model for the administration and analysis of research-based assessments

    Research-based assessments represent a valuable tool for both instructors and researchers interested in improving undergraduate physics education. However, the historical model for disseminating and propagating conceptual and attitudinal assessments developed by the physics education research (PER) community has not resulted in widespread adoption of these assessments within the broader community of physics instructors. Within this historical model, assessment developers create high-quality, validated assessments, make them available for a wide range of instructors to use, and provide minimal (if any) support to assist with administration or analysis of the results. Here, we present and discuss an alternative model for assessment dissemination, characterized by centralized data collection and analysis. This model provides a greater degree of support for both researchers and instructors in order to more explicitly support adoption of research-based assessments. Specifically, we describe our experiences developing a centralized, automated system for an attitudinal assessment we previously created to examine students' epistemologies and expectations about experimental physics. This system provides a proof of concept that we use to discuss the advantages associated with centralized administration and data collection for research-based assessments in PER. We also discuss the challenges that we encountered while developing, maintaining, and automating this system. Ultimately, we argue that centralized administration and data collection for standardized assessments is a viable and potentially advantageous alternative to the default model characterized by decentralized administration and analysis. Moreover, with the help of online administration and automation, this model can support the long-term sustainability of centralized assessment systems. Comment: 7 pages, 1 figure, accepted in Phys. Rev. PE