    A General Method for Estimating the Classification Reliability of Complex Decisions Based on Configural Combinations of Multiple Assessment Scores

    This study presents a general method for estimating the classification reliability of complex decisions based on multiple scores from a single test administration. The proposed method consists of four steps that can be applied to a variety of measurement models and configural rules for combining test scores:

    Step 1: Fit a measurement model to the observed data.
    Step 2: Simulate replicate distributions of plausible observed scores based on the measurement model.
    Step 3: Construct a contingency table showing the congruence between true and replicate scores (for decision accuracy) and between two replicate scores (for decision consistency).
    Step 4: Calculate measures to characterize agreement in the contingency tables.

    Using a classical test theory model, a simulation study explores the effects of the number of tests, the strength of the relationship among tests, and the number of opportunities to pass on classification accuracy and consistency. The method is then applied to actual data from the GED Testing Service to illustrate its utility for informing practical decisions. Simulation results support the validity of the method for estimating classification reliability, and the method provides credible estimates of classification reliability for the GED Tests. Applying configural rules yields complex findings that sometimes differ between classification accuracy and consistency, and these unexpected findings support the value of using the method to explore classification reliability as a means of improving decision rules. Highlighted findings:

    1) The compensatory rule (in which test scores are added) performs consistently well across almost all conditions.
    2) Conjunctive and complementary rules frequently show opposite results.
    3) Including more tests in the decision rule influences classification reliability differently depending on the rule.
    4) Combining scores from highly related tests increases classification reliability.
    5) Providing multiple opportunities to pass yields mixed results.

    Future studies are suggested to explore other measurement models, varying levels of test reliability, modeling of multiple attempts in which learning occurs between test administrations, and in-depth study of incorrectly classified examinees.
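
    The four-step procedure lends itself to a short simulation mock-up. The Python sketch below assumes a classical test theory model with a common, arbitrary reliability, a standardized cut score, and a conjunctive pass rule; every name and value in it is an illustrative assumption, not a quantity from the study.

        # Minimal sketch of the four-step method under a classical test theory
        # model. Reliability, cut score, and the decision rule are assumed.
        import numpy as np

        rng = np.random.default_rng(0)
        n_examinees, n_tests = 100_000, 3
        reliability = 0.85        # assumed, identical across tests
        cut_score = 0.0           # assumed cut on the standardized scale

        # Step 1 (stand-in): instead of fitting a model to observed data,
        # draw true scores directly from the assumed model.
        true_scores = rng.standard_normal((n_examinees, n_tests))

        def replicate(true):
            """Step 2: simulate plausible observed scores, observed = true + error,
            with the error variance implied by the assumed reliability."""
            error_sd = np.sqrt((1 - reliability) / reliability)
            return true + error_sd * rng.standard_normal(true.shape)

        def conjunctive_pass(scores):
            """Illustrative configural rule: pass only if every test clears the cut."""
            return (scores >= cut_score).all(axis=1)

        true_dec = conjunctive_pass(true_scores)
        dec1 = conjunctive_pass(replicate(true_scores))
        dec2 = conjunctive_pass(replicate(true_scores))

        # Steps 3-4: agreement between decisions, summarized here by raw
        # agreement rates (cross-tabulations or kappa could be substituted).
        print(f"decision accuracy    ~ {np.mean(true_dec == dec1):.3f}")  # true vs. replicate
        print(f"decision consistency ~ {np.mean(dec1 == dec2):.3f}")      # replicate vs. replicate

    Replacing conjunctive_pass with a sum-over-tests rule would model the compensatory case that the study found to perform well across conditions.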

    Development and Validation of a Rule-based Time Series Complexity Scoring Technique to Support Design of Adaptive Forecasting DSS

    Evidence from forecasting research gives reason to believe that understanding time series complexity can enable the design of adaptive forecasting decision support systems (FDSSs) that positively support forecasting behaviors and the accuracy of outcomes. Yet such FDSS design capabilities have not been formally explored because no systematic approach exists for identifying series complexity. This study describes the development and validation of a rule-based complexity scoring technique (CST) that generates a complexity score for a time series using 12 rules that rely on 14 features of the series. The rule-based schema was developed on 74 series and validated on 52 holdback series using well-accepted forecasting methods as benchmarks. A supporting experimental validation was conducted with 14 participants who generated 336 structured judgmental forecasts for sets of series classified as simple or complex by the CST. Benchmark comparisons validated the CST by confirming, as hypothesized, that forecasting accuracy was lower for series the technique scored as complex than for those it scored as simple. The study concludes with a comprehensive framework for the design of FDSSs that can integrate the CST to adaptively support forecasters under varied conditions of series complexity. The framework is founded on the concepts of restrictiveness and guidance and offers specific recommendations on how these elements can be built into an FDSS to support forecasting under complexity.
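
    Because the abstract does not enumerate the 12 rules or 14 features, the Python sketch below only illustrates the rule-based scoring idea: extract a few series features, award points per triggered rule, and classify by a score threshold. All features, thresholds, weights, and the simple/complex cutoff are invented placeholders, not the CST's actual rules.

        # Illustrative mock-up of a rule-based complexity scoring technique.
        import numpy as np

        def features(series):
            s = np.asarray(series, dtype=float)
            d = np.diff(s)
            return {
                "cv": s.std() / abs(s.mean()),                        # relative volatility
                "trend_r2": np.corrcoef(np.arange(len(s)), s)[0, 1] ** 2,
                "outlier_share": np.mean(np.abs(s - s.mean()) > 2 * s.std()),
                "turns": np.mean(np.sign(d[1:]) != np.sign(d[:-1])),  # direction changes
            }

        RULES = [  # (predicate on features, points added to the score)
            (lambda f: f["cv"] > 0.3, 2),
            (lambda f: f["trend_r2"] < 0.2, 1),
            (lambda f: f["outlier_share"] > 0.05, 2),
            (lambda f: f["turns"] > 0.5, 1),
        ]

        def complexity_score(series):
            f = features(series)
            return sum(points for rule, points in RULES if rule(f))

        def classify(series, threshold=3):
            return "complex" if complexity_score(series) >= threshold else "simple"

        rng = np.random.default_rng(1)
        trended = np.linspace(10, 20, 48) + rng.normal(0, 0.2, 48)  # smooth trend
        erratic = rng.normal(10, 4, 48)                             # high noise, no trend
        print(classify(trended), classify(erratic))                 # simple complex

    In the study's framing, series scored as complex would then trigger stronger restrictiveness and guidance in the FDSS.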

    A General Setting for Flexibly Combining and Augmenting Decision Procedures


    Strict General Setting for Building Decision Procedures into Theorem Provers

    Efficient and flexible incorporation of decision procedures into theorem provers is very important for their successful use. There are several approaches for combining and augmenting decision procedures; some of them support handling of uninterpreted functions, congruence closure, lemma invocation, etc. In this paper we present a variant of a general setting for building decision procedures into theorem provers (the gs framework [18]). That setting is based on macro inference rules motivated by techniques used in different approaches, and it enables simple description of different combination/augmentation schemes. Here we further develop and extend the setting with an imposed ordering on the macro inference rules. That ordering leads to a "strict setting", which makes implementing and using variants of well-known or new schemes within this framework easy even for a non-expert user. The strict setting also enables easy comparison of different combination/augmentation schemes and combination of their ideas.
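
    The abstract does not spell out the macro inference rules themselves, so the Python sketch below only illustrates the "strict setting" idea: macro rules tried in a fixed priority order, with the first applicable rule rewriting the proof state. The State type and both rules are invented for illustration and stand in for the gs framework's actual machinery.

        # Toy model of ordered macro inference rules: a decision-procedure step
        # takes priority over an augmentation (lemma-invoking) step.
        from dataclasses import dataclass, field

        @dataclass
        class State:
            goals: list = field(default_factory=list)   # open proof obligations
            lemmas: list = field(default_factory=list)  # facts added by augmentation

        def decide(state):
            """Macro rule: discharge goals the decision procedure can settle,
            modeled here as goals that are trivial or backed by a lemma."""
            left = [g for g in state.goals if g != "trivial" and g not in state.lemmas]
            return State(left, state.lemmas) if len(left) < len(state.goals) else None

        def invoke_lemma(state):
            """Macro rule: augment the state with a lemma for an open goal."""
            for g in state.goals:
                if g not in state.lemmas:
                    return State(state.goals, state.lemmas + [g])
            return None  # nothing left to add

        ORDERED_RULES = [decide, invoke_lemma]  # the imposed (strict) ordering

        def run(state, max_steps=20):
            for _ in range(max_steps):
                for rule in ORDERED_RULES:      # always try higher-priority rules first
                    nxt = rule(state)
                    if nxt is not None:
                        state = nxt
                        break
                else:
                    break                       # no rule applicable
                if not state.goals:
                    break                       # all goals discharged
            return state

        print(run(State(goals=["trivial", "x = x"])))  # ends with goals == []

    Reordering ORDERED_RULES changes the derivation, which is exactly the kind of comparison between combination/augmentation schemes the strict setting is meant to make easy.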