Incentive Payment Programs for Environmental Protection: A Framework for Eliciting and Estimating Landowners' Willingness to Participate
This paper considers the role of incentive payment programs in eliciting, estimating, and predicting landowners' conservation enrollments. Using both program participation and the amount of land enrolled, we develop two econometric approaches for predicting enrollments. The first is a multivariate censored regression model that handles zero enrollments and heterogeneity in the opportunity cost of enrollments by combining an inverse hyperbolic sine transformation of enrollments with alternative-specific correlation and random parameters. The second is a beta-binomial model, which recognizes that in practice elicited enrollments are essentially integer valued. We apply these approaches to Finland, where the protection of private nonindustrial forests is an important environmental policy problem. We compare both econometric approaches via cross-validation and find that the beta-binomial model predicts as well as the multivariate censored model yet has fewer parameters. The beta-binomial model also facilitates policy predictions and simulations, which we use to illustrate the framework.
Keywords: protection, endangered, voluntary, incentive, tobit, beta-binomial, stated preferences
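The inverse hyperbolic sine transformation mentioned in the abstract is what lets the censored model accommodate exact-zero enrollments, since unlike the log it is defined at zero. A minimal sketch (the scale parameter `theta` and the example values are illustrative assumptions, not from the paper):

```python
import numpy as np

def ihs(y, theta=1.0):
    # Inverse hyperbolic sine: log(theta*y + sqrt((theta*y)**2 + 1)) / theta.
    # Unlike log(y), it is defined at y = 0, so zero enrollments need no
    # ad hoc adjustment; for large y it behaves like log(2*theta*y) / theta.
    return np.arcsinh(theta * y) / theta

enrollments = np.array([0.0, 1.0, 10.0, 100.0])  # e.g. hectares enrolled
transformed = ihs(enrollments)  # zero maps to exactly zero
```

For large arguments the transform tracks log(2y), which is why it can stand in for a log transform while still handling the censoring point.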
A Censored Random Coefficients Model for Pooled Survey Data with Application to the Estimation of Power Outage Costs
In many surveys multiple observations on the dependent variable are collected from a given respondent. The resulting pooled data set is likely to be censored and to exhibit cross-sectional heterogeneity. We propose a model that addresses both issues by allowing regression coefficients to vary randomly across respondents and by using the Geweke-Hajivassiliou-Keane simulator and Halton sequences to estimate high-order probabilities. We show how this framework can be usefully applied to the estimation of power outage costs to firms using data from a recent survey conducted by a U.S. utility. Our results strongly reject the hypotheses of parameter constancy and cross-sectional homogeneity.
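The Halton sequences used alongside the GHK simulator are deterministic low-discrepancy draws on (0, 1). A minimal one-dimensional generator, assuming the standard radical-inverse construction (function name and example are ours):

```python
def halton(n, base):
    # First n points of the one-dimensional Halton sequence in a prime base:
    # the radical inverse of 1..n, giving low-discrepancy points on (0, 1).
    seq = []
    for i in range(1, n + 1):
        f, x = 1.0, 0.0
        while i > 0:
            f /= base           # next fractional digit weight
            x += f * (i % base) # reflect the base-`base` digits about the point
            i //= base
        seq.append(x)
    return seq

draws = halton(4, 2)  # [0.5, 0.25, 0.75, 0.125]
```

In simulation-based estimation, such uniform draws are typically mapped through the inverse normal CDF to produce the quasi-random normal variates that feed the GHK simulator, usually with a distinct prime base per dimension.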
LOGIT MODELS FOR POOLED CONTINGENT VALUATION AND CONTINGENT RATING AND RANKING DATA: VALUING BENEFITS FROM FOREST BIODIVERSITY CONSERVATION
Contingent valuation (CV) and contingent rating and ranking (CR) methods for measuring willingness to pay for non-market goods are compared using random coefficient models and data pooling methods. Pooled models of CV data and of CR data on the preferred choice accept pooling when scale differences between the CV and CR model estimates are allowed for. More detailed response models, such as the pooled CV model and rank-ordered models for two or three ranks, reject pooling of the data.
Discrete Choice Survey Experiments: A Comparison Using Flexible Models
This study investigates the convergent validity of discrete choice contingent valuation (CV) and contingent rating/ranking (CR) methods using flexible econometric methods. Our results suggest that CV and CR can produce consistent data (achieve convergent validity) when respondents' preferred choices and the same changes in environmental quality are considered. We also find that CR models that go beyond modeling the preferred choice and include additional ranks cannot be pooled with the CV models. Accounting for preference heterogeneity via random coefficient models and their flexible structure does not make rejection of the hypothesis of convergent validity less likely; instead, these models reject the hypothesis to about the same degree as, or perhaps more frequently than, the fixed parameter models commonly used in the literature.
Keywords: valuation, stated preferences, data pooling, random coefficients, Rayleigh, habitat conservation
Web-based diagnosis of misconceptions in rational numbers
A thesis submitted to the Wits School of Education, Faculty of Humanities, University of the Witwatersrand in fulfilment of the requirements for the degree of Doctor of Philosophy.
Johannesburg, 2016.
This study explores the potential for Web-based diagnostic assessments in the classroom, with specific focus on certain common challenges experienced by learners in the development of their rational number knowledge. Two schools participated in the study, both with adequate facilities: a well-equipped computer room with one computer per learner and a fast, reliable broadband connection.
Prior research on misconceptions in rational numbers was surveyed to identify a small set of problem types with proven effectiveness in eliciting evidence of misconceptions in learners. In addition to the problem types found in prior studies, other problem types were included to examine how the approach can be extended. For each problem type a small item bank was created, and these items were presented to the learners in test batteries of between four and ten questions. A multiple-choice format was used, with distractor choices included to elicit misconceptions, including those reported in prior research. The test batteries were presented in dedicated lessons over four consecutive weeks to Grade 7 (school one) and Grade 8 (school two) classes from the participating schools. A number of test batteries were presented in each weekly session and, following the learners' completion of each battery, feedback was provided to the learner with notes to help them reflect on their performance.
The focus of this study has been on diagnosis alone, rather than remediation, with the intention of building a base for producing valid evidence of the fine-grained thinking of learners. This evidence can serve a variety of purposes, most significantly to inform the teacher of each learner's stage of development in the specific micro-domains. Each micro-domain is a fine-grained area of knowledge that is the basis for lesson-sized teaching and learning, and which is highly suited to diagnostic assessment.
A fine-grained theory of constructivist learning is introduced for positioning learners at a development stage in each micro-domain. This theory of development stages is the foundation I have used to explore the role of diagnostic assessment as it may be used in future classroom activity. Successful implementation in time-constrained mathematics classrooms requires that diagnostic assessments be conducted as effectively and efficiently as possible. To meet this requirement, the following elements of diagnostic assessments were investigated: (1) Why are some questions better than others for diagnostic purposes? (2) How many questions need to be asked to produce valid conclusions? (3) To what extent is learner self-knowledge of item difficulty useful for identifying learner thinking?
A Rasch modeling approach was used for analyzing the data, and this was applied in a novel way by measuring the construct of the learners' propensity to select a distractor for a misconception, as distinct from the common application of Rasch to measure learner ability. To accommodate multiple possible misconceptions used by a learner, parallel Rasch analyses were performed to determine the likely causes of learner mistakes. These analyses were then used to identify which questions appeared to be better for diagnosis.
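The parallel Rasch analyses rest on the standard one-parameter logistic model. A minimal sketch, with the reading of the person parameter as misconception propensity following the abstract (the function name is ours):

```python
import math

def rasch_p(theta, b):
    # Rasch (1PL) model: probability of a "positive" response from a person
    # with latent trait theta on an item with difficulty b. Here theta is
    # read as a learner's propensity to select a misconception distractor,
    # and b as how hard the item makes it to elicit that misconception.
    return 1.0 / (1.0 + math.exp(-(theta - b)))
```

Running one such analysis per candidate misconception, with "selected that misconception's distractor" as the positive response, is what allows item fit statistics to flag the questions that discriminate well for each specific misconception.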
The results produced clear evidence that some questions are far better diagnostic discriminators than others for specific misconceptions, but they failed to identify the detailed rules that govern this behavior; determining these would require a far larger research population. The results also showed that the number of such good diagnostic questions needed is often surprisingly low, and in some cases a single question and response is sufficient to infer learner thinking. The results show promise for a future in which Web-based diagnostic assessments are a daily part of classroom practice. However, there appears to be no additional benefit in gathering subjective self-knowledge from the learners over using the objective test item results alone.
Keywords: diagnostic assessment; rational numbers; common fractions; decimal numbers; decimal fractions; misconceptions; Rasch models; World-Wide Web; Web-based assessment; computer-based assessments; formative assessment; development stages; learning trajectories
Aggregation of common-metric attributes in preference revelation in choice experiments and implications for willingness to pay
There is a growing literature that promotes the notion of process heterogeneity in the way that individuals evaluate packages of attributes in real or hypothetical markets and make choices. Empirical evidence suggests that individuals use a number of processing strategies such as cancellation, referencing, and attribute aggregation, the latter occurring where there is a common metric. In this paper we consider the threshold relationship between attributes that are defined on a common metric (e.g., minutes or dollars), in order to gain evidence on the extent to which such attributes might be added up or not in preference revelation. The model specification does not require supplementary information on whether specific individuals claimed to have added up attributes; rather, we structure a nonlinear utility function that permits a probabilistic preservation or aggregation of each attribute. We translate this new evidence into a willingness to pay (WTP) for travel time savings and contrast it with the results from the traditional linear additive model, as well as establishing the extent to which self-stated attribute addition systematically varies with WTP and component inputs into WTP. The implications for environmental assessment are highlighted.
The Effect of Layer Orientation on the Tensile Properties of Net Shape Parts Fabricated in Stereolithography
Stereolithographic technologies create parts in thermoset plastic polymeric mixtures of acrylates and epoxies. To predict the mechanical behavior of these parts, it is critical to understand the effects that build parameters have on the final properties of the polymer. Using a statistics-based approach, the build parameters of layer orientation, layer thickness, and resin class are used as inputs. The response variables, peak stress, elongation at break, and Young's modulus (modulus of elasticity), are examined using the methodology specified in ASTM D638-01, with modifications as noted. An initial test in Somos 8120 showed the surprising (and statistically significant) result that load-bearing capability in the build direction was greater than in the in-layer direction. Additional tensile tests in Somos 8120 and Vantico SL-5510 were undertaken to verify this result and to determine whether this effect is present across different classes of resin. This report details the rationale behind this experiment, presents the results to date, and outlines future efforts.