Intelligence of school children: Los Angeles as a case study 1922-1932
In an effort to construct the most advanced school system in
the nation, Los Angeles school administrators and educators initiated a
new scientific method of group intelligence testing. Almost immediately
educators discovered serious limitations with the process and resisted
its exclusive use.
This study examines the reception of this new technology in Los
Angeles between 1922 and 1932. Many historians have seen those
associated with I.Q. measurement as bulwarks supporting the hegemony of
Anglo-Saxon upper-middle class society. While their criticism has
brought some inequitable aspects of twentieth-century public
education to the surface, it has not advanced our understanding of how
educators interpreted the tests. An analysis of the sources, including
reports published in the Department of Psychology and Education
Research Bulletin of the Los Angeles City Schools, the Teachers' and
Principals' School Journal, and the Minutes of the Board of Education,
provides insight into how Los Angeles educators viewed standardized
testing.
Use and Communication of Probabilistic Forecasts
Probabilistic forecasts are becoming increasingly available. How should they
be used and communicated? What are the obstacles to their use in practice? I
review experience with five problems where probabilistic forecasting played an
important role. This leads me to identify five types of potential users: Low
Stakes Users, who don't need probabilistic forecasts; General Assessors, who
need an overall idea of the uncertainty in the forecast; Change Assessors, who
need to know if a change is out of line with expectations; Risk Avoiders, who
wish to limit the risk of an adverse outcome; and Decision Theorists, who
quantify their loss function and perform the decision-theoretic calculations.
This suggests that it is important to interact with users and to consider their
goals. Cognitive research tells us that calibration is important for trust
in probability forecasts, and that it is important to match the verbal
expression with the task. The cognitive load should be minimized, reducing the
probabilistic forecast to a single percentile if appropriate. Probabilities of
adverse events and percentiles of the predictive distribution of quantities of
interest seem often to be the best way to summarize probabilistic forecasts.
Formal decision theory has an important role, but in a limited range of
applications.
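The user types and summaries described above can be illustrated with a short sketch. The ensemble values, the frost threshold, and the variable names are hypothetical assumptions for illustration; the abstract itself names the summaries (percentiles, probabilities of adverse events) but not this example.

```python
# Sketch: summarizing an ensemble (probabilistic) forecast for different
# user types. All data and thresholds here are illustrative assumptions.
import random
import statistics

random.seed(42)

# Hypothetical ensemble of 1000 temperature forecasts (degrees C).
ensemble = [random.gauss(15.0, 3.0) for _ in range(1000)]

# General Assessor: an overall idea of uncertainty via a central interval.
deciles = statistics.quantiles(ensemble, n=10)  # returns 9 cut points
p10, p90 = deciles[0], deciles[-1]

# Risk Avoider: probability of an adverse event (here, hypothetically T < 10 C).
p_adverse = sum(t < 10.0 for t in ensemble) / len(ensemble)

# Low Stakes User: the forecast reduced to a single number (the median).
median = statistics.median(ensemble)

print(f"80% interval: [{p10:.1f}, {p90:.1f}] C")
print(f"P(T < 10 C) = {p_adverse:.2f}")
print(f"Point forecast (median): {median:.1f} C")
```

Each summary discards information the corresponding user does not need, which is exactly the cognitive-load reduction the abstract recommends.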
Model-based Methods of Classification: Using the mclust Software in Chemometrics
Due to recent advances in methods and software for model-based clustering, and to the interpretability of the results, clustering procedures based on probability models are increasingly preferred over heuristic methods. The clustering process estimates a model for the data that allows for overlapping clusters, producing a probabilistic clustering that quantifies the uncertainty of observations belonging to components of the mixture. The resulting clustering model can also be used for some other important problems in multivariate analysis, including density estimation and discriminant analysis. Examples of the use of model-based clustering and classification techniques in chemometric studies include multivariate image analysis, magnetic resonance imaging, microarray image segmentation, statistical process control, and food authenticity. We review model-based clustering and related methods for density estimation and discriminant analysis, and show how the R package mclust can be applied in each instance.
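mclust itself is an R package; as a loose, language-neutral analogue of the idea it implements, the sketch below fits a two-component one-dimensional Gaussian mixture by EM and reports each observation's posterior membership probabilities, whose complement is the clustering uncertainty the abstract refers to. The data, initialization, and iteration count are toy assumptions, not mclust's actual algorithmic choices.

```python
# Minimal EM for a two-component 1-D Gaussian mixture, illustrating how a
# probabilistic clustering quantifies each point's membership uncertainty.
# This is an illustrative analogue of model-based clustering, not mclust.
import math
import random

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def em_gmm_1d(data, iters=100):
    """Fit a 2-component 1-D Gaussian mixture; return params and responsibilities."""
    xs = sorted(data)
    n = len(xs)
    # Crude initialization from the two halves of the sorted data.
    mu = [sum(xs[: n // 2]) / (n // 2), sum(xs[n // 2:]) / (n - n // 2)]
    sigma = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: posterior probability that each point came from each component.
        resp = []
        for x in data:
            p = [w[k] * normal_pdf(x, mu[k], sigma[k]) for k in range(2)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: update mixture weights, means, and standard deviations.
        for k in range(2):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, data)) / nk
            sigma[k] = max(math.sqrt(var), 1e-6)
    return (w, mu, sigma), resp

random.seed(1)
data = [random.gauss(0, 1) for _ in range(200)] + [random.gauss(6, 1) for _ in range(200)]
(w, mu, sigma), resp = em_gmm_1d(data)
# Clustering uncertainty of each observation: 1 - max posterior probability.
uncertainty = [1 - max(r) for r in resp]
print("estimated means:", sorted(round(m, 1) for m in mu))
```

Points near the midpoint between the two component means receive responsibilities near 0.5 and hence high uncertainty; well-separated points are classified almost deterministically.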
Variable selection and updating in model-based discriminant analysis for high dimensional data with food authenticity applications
Food authenticity studies are concerned with determining if food samples have been correctly labelled or not. Discriminant analysis methods are an integral part of the methodology for food authentication. Motivated by food authenticity applications, a model-based discriminant analysis method that includes variable selection is presented. The discriminant analysis model is fitted in a semi-supervised manner using both labelled and unlabelled data. The method is shown to give excellent classification
performance on several high-dimensional multiclass food authenticity datasets with more variables than observations. The variables selected by the proposed method provide information about which variables are meaningful for classification purposes. A headlong search strategy for variable selection is shown to be computationally efficient and achieves excellent classification performance. In applications to several food authenticity datasets, our proposed method outperformed default implementations of Random Forests, AdaBoost, transductive SVMs and Bayesian Multinomial Regression by substantial margins.
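The distinguishing feature of a headlong search is that it accepts the first candidate variable that improves the criterion, rather than scanning all candidates for the best one, which is why it is cheap. The sketch below illustrates that control flow on toy data with a simple nearest-centroid training-accuracy criterion; the data generator, the criterion, and the stopping rule are all illustrative assumptions and not the paper's model-based method.

```python
# Toy sketch of headlong-style greedy variable selection: accept the FIRST
# variable that improves the classification criterion, then restart the scan.
# Data and the nearest-centroid criterion are illustrative assumptions.
import random

random.seed(0)

def make_data(n=100):
    """Two classes; variables 0 and 1 are informative, 2 and 3 are pure noise."""
    X, y = [], []
    for i in range(n):
        cls = i % 2
        X.append([random.gauss(3 * cls, 1), random.gauss(-3 * cls, 1),
                  random.gauss(0, 1), random.gauss(0, 1)])
        y.append(cls)
    return X, y

def accuracy(X, y, vars_):
    """Training accuracy of a nearest-centroid classifier on selected variables."""
    cent = {}
    for cls in set(y):
        rows = [x for x, c in zip(X, y) if c == cls]
        cent[cls] = [sum(r[v] for r in rows) / len(rows) for v in vars_]
    correct = 0
    for x, c in zip(X, y):
        pred = min(cent, key=lambda k: sum((x[v] - m) ** 2
                                           for v, m in zip(vars_, cent[k])))
        correct += pred == c
    return correct / len(y)

def headlong_select(X, y, n_vars):
    selected, best = [], 0.0
    improved = True
    while improved:
        improved = False
        for v in range(n_vars):
            if v in selected:
                continue
            score = accuracy(X, y, selected + [v])
            if score > best:  # accept the first improving variable immediately
                selected.append(v)
                best = score
                improved = True
                break
    return selected, best

X, y = make_data()
sel, acc = headlong_select(X, y, 4)
print("selected variables:", sel, "criterion:", acc)
```

Because the scan stops at the first improvement, each step costs at most one pass over the candidate variables, instead of always evaluating all of them as a best-first greedy search would.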