12,073 research outputs found

    A Bayes interpretation of stacking for M-complete and M-open settings

    In M-open problems where no true model can be conceptualized, it is common to back off from modeling and merely seek good prediction. Even in M-complete problems, taking a predictive approach can be very useful. Stacking is a model averaging procedure that gives a composite predictor by combining individual predictors from a list of models using weights that optimize a cross-validation criterion. We show that the stacking weights also asymptotically minimize a posterior expected loss. Hence we formally provide a Bayesian justification for cross-validation. Often the weights are constrained to be positive and sum to one. For greater generality, we omit the positivity constraint and relax the `sum to one' constraint. A key question is `What predictors should be in the average?' We first verify that the stacking error depends only on the span of the models. Then we propose using bootstrap samples from the data to generate empirical basis elements that can be used to form models. We use this in two computed examples to give stacking predictors that are (i) data driven, (ii) optimal with respect to the number of component predictors, and (iii) optimal with respect to the weight each predictor gets.
    Comment: 37 pages, 2 figures
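    The stacking recipe this abstract describes (collect out-of-sample predictions from each model in the list, then choose combination weights that minimize a cross-validation error, here with both the positivity and sum-to-one constraints dropped) can be sketched in a few lines. The two toy models, the leave-one-out scheme, and the data below are illustrative assumptions for exposition, not the authors' actual procedure:

```python
# Illustrative sketch of stacking with unconstrained weights: each model's
# leave-one-out predictions are collected, then the weights minimizing
# squared prediction error are found by ordinary least squares, with no
# positivity or sum-to-one constraint (the relaxation the abstract describes).

def mean_model(train_x, train_y, x):
    # Model 1: predict the training-set mean, ignoring x.
    return sum(train_y) / len(train_y)

def linear_model(train_x, train_y, x):
    # Model 2: simple least-squares line fit to the training data.
    n = len(train_x)
    mx = sum(train_x) / n
    my = sum(train_y) / n
    sxx = sum((u - mx) ** 2 for u in train_x)
    sxy = sum((u - mx) * (v - my) for u, v in zip(train_x, train_y))
    slope = sxy / sxx if sxx else 0.0
    return my + slope * (x - mx)

def stacking_weights(xs, ys, models):
    # Leave-one-out (cross-validated) predictions for each model.
    preds = [[m([x for j, x in enumerate(xs) if j != i],
                [y for j, y in enumerate(ys) if j != i],
                xs[i]) for i in range(len(xs))] for m in models]
    # Unconstrained least squares for two models: solve the 2x2
    # normal equations for the weights by hand.
    a11 = sum(p * p for p in preds[0])
    a22 = sum(q * q for q in preds[1])
    a12 = sum(p * q for p, q in zip(preds[0], preds[1]))
    b1 = sum(p * y for p, y in zip(preds[0], ys))
    b2 = sum(q * y for q, y in zip(preds[1], ys))
    det = a11 * a22 - a12 * a12
    return (b1 * a22 - b2 * a12) / det, (a11 * b2 - a12 * b1) / det

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [0.1, 1.1, 1.9, 3.2, 3.9]  # roughly linear toy data
w = stacking_weights(xs, ys, [mean_model, linear_model])
```

    On data this close to linear, the cross-validation criterion puts nearly all the weight on the linear model; with more models, the 2x2 solve would be replaced by a general least-squares solver.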

    Choice and information in the public sector: a Higher Education case study

    Successive governments have encouraged the view of users of public services as consumers, choosing between different providers on the basis of information about the quality of service. As part of this approach, prospective students are expected to make their decisions about which universities to apply to with reference to the consumer evaluations provided by the National Student Survey. However, a case study of a post-1992 university showed that not all students made genuine choices, and those who did tended to be in stronger social and economic positions. Where choices were made, they were infrequently based on external evaluations of quality.

    Using the Bayesian Shtarkov solution for predictions

    The Bayes Shtarkov predictor can be defined and used for a variety of data sets that are exceedingly hard if not impossible to model in any detailed fashion. Indeed, this is the setting in which the derivation of the Shtarkov solution is most compelling. The computations show that anytime the numerical approximation to the Shtarkov solution is ‘reasonable’, it is better in terms of predictive error than a variety of other general predictive procedures. These include two forms of additive model as well as bagging or stacking with support vector machines, Nadaraya–Watson estimators, or draws from a Gaussian Process Prior.

    Processes and priorities in planning mathematics teaching

    Insights into teachers' planning of mathematics reported here were gathered as part of a broader project examining aspects of the implementation of the Australian curriculum in mathematics (and English). In particular, the responses of primary and secondary teachers to a survey of various aspects of decisions that inform their use of curriculum documents and assessment processes to plan their teaching are discussed. Teachers appear to have a clear idea of the overall topic as the focus of their planning, but they are less clear when asked to articulate the important ideas in that topic. While there is considerable diversity in the processes that teachers use for planning and in the ways that assessment information informs that planning, a consistent theme was that teachers make active decisions at all stages in the planning process. Teachers use a variety of assessment data in various ways, but these are not typically data extracted from external assessments. This research has important implications for those responsible for supporting teachers in the transition to the Australian Curriculum: Mathematics.

    Minimax Estimation of Nonregular Parameters and Discontinuity in Minimax Risk

    When a parameter of interest is nondifferentiable in the probability, the existing theory of semiparametric efficient estimation is not applicable, as the parameter does not have an influence function. Song (2014) recently developed a local asymptotic minimax estimation theory for a parameter that is a nondifferentiable transform of a regular parameter, where the nondifferentiable transform is a composite map of a continuous piecewise linear map with a single kink point and a translation-scale equivariant map. The contribution of this paper is twofold. First, this paper extends the local asymptotic minimax theory to nondifferentiable transforms that are a composite map of a Lipschitz continuous map having a finite set of nondifferentiability points and a translation-scale equivariant map. Second, this paper investigates the discontinuity of the local asymptotic minimax risk in the true probability and shows that the proposed estimator remains optimal even when the risk is locally robustified not only over the scores at the true probability, but also over the true probability itself. However, the local robustification does not resolve the issue of discontinuity in the local asymptotic minimax risk.

    Development of, and signalling to, oligodendrocytes and their precursors

    Oligodendrocytes myelinate axons in the CNS to increase the speed of action potential conduction. Myelinating oligodendrocytes develop from oligodendrocyte precursor cells (OPCs). OPCs can express voltage-gated sodium and potassium channels, and receive excitatory and inhibitory synaptic input from axons. The functional relevance of voltage-gated and synaptic currents in OPCs is unknown, but electrical signalling from axons might regulate OPC development and myelination. In this thesis I investigate the electrical properties of OPCs, and how signalling to these cells regulates their proliferation, differentiation and myelination. I studied the electrical properties of OPCs in different brain areas, to investigate whether their electrical properties differ between embryonic sites of origin (in dorsal versus ventral parts of the CNS), and between brain regions (white matter versus grey matter). Firstly, using a dual reporter mouse line to colour code ventrally- and dorsally-derived oligodendrocyte lineage cells, I demonstrated that oligodendrocyte lineage cells derived from different embryonic sites are electrically similar. However, despite having indistinguishable electrical properties, dorsally-derived oligodendrocytes myelinated specific tracts in the spinal cord. Secondly, I have shown that OPCs in different brain regions have a similar expression of ion channels and precursor proteins, are all mitotically active, and generate differentiated oligodendrocytes but not neurons. Finally, having determined that all OPCs apparently have similar membrane properties, I investigated whether the inhibitory neurotransmitter GABA can regulate OPC proliferation, differentiation and myelination. I found that both oligodendrocytes and their precursors respond to GABA via the activation of GABAA receptors. 
In addition, endogenously released GABA was found to reduce the number of oligodendrocyte lineage cells formed, reduce the amount of myelin per axon, and increase internode length. These results demonstrate that GABA, presumably released from inhibitory interneurons, can regulate myelination, and raise the possibility that GABA could also modulate CNS remyelination.
