35,643 research outputs found
An Empirical Test of Reder Competition and Specific Human Capital Against Standard Wage Competition
A firm that faces an insufficient supply of labor can either increase the wage offer to attract more applicants, reduce the hiring standard to enlarge the pool of potential employees, or do both. This simultaneous adjustment of wages and hiring standards has been emphasized in a classical contribution by Reder (1955) and implies that wage reactions to employment changes can be expected to be more pronounced for low wage workers than for high wage workers.
We test this hypothesis (together with a related hypothesis on firm-specific human capital) by applying a bootstrap-based quantile regression approach to censored panel data from the German employment register. Our findings suggest that market clearing is achieved by a combination of wage and hiring standards adjustment.
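The core idea of estimating different wage reactions at different points of the wage distribution can be sketched in a few lines. The following is a minimal illustration, not the paper's estimator (it ignores censoring and uses invented heteroskedastic toy data): it fits linear quantile regressions by minimising the pinball loss and bootstraps the slope at a low quantile.

```python
import numpy as np
from scipy.optimize import minimize

def pinball_loss(beta, X, y, tau):
    """Asymmetric check (pinball) loss minimised by quantile regression."""
    u = y - X @ beta
    return np.sum(u * (tau - (u < 0)))

def quantile_fit(X, y, tau):
    """Fit a linear tau-quantile regression by direct loss minimisation."""
    beta0 = np.linalg.lstsq(X, y, rcond=None)[0]   # OLS starting values
    return minimize(pinball_loss, beta0, args=(X, y, tau),
                    method="Nelder-Mead").x

def bootstrap_slopes(X, y, tau, n_boot=100, seed=0):
    """Bootstrap distribution of the slope coefficient at quantile tau."""
    rng = np.random.default_rng(seed)
    n = len(y)
    return np.array([quantile_fit(X[idx], y[idx], tau)[1]
                     for idx in (rng.integers(0, n, n)
                                 for _ in range(n_boot))])

# Invented toy data: the spread of the outcome grows with the regressor,
# so low and high conditional quantiles have different slopes.
rng = np.random.default_rng(1)
x = rng.uniform(0, 1, 400)
y = 1.0 + 2.0 * x + (0.1 + x) * rng.normal(size=400)
X = np.column_stack([np.ones_like(x), x])

slope_low = quantile_fit(X, y, 0.1)[1]    # slope in the lower tail
slope_high = quantile_fit(X, y, 0.9)[1]   # slope in the upper tail
ci_low = np.percentile(bootstrap_slopes(X, y, 0.1), [2.5, 97.5])
```

In this construction the lower-tail slope is smaller than the upper-tail slope by design; a Reder-type pattern would show up as quantile-specific wage reactions of exactly this kind.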
On Sharp Identification Regions for Regression Under Interval Data
The reliable analysis of interval data (coarsened data) is one of the
most promising applications of imprecise probabilities in statistics. If one
refrains from making untestable, and often materially unjustified, strong
assumptions on the coarsening process, then the empirical distribution
of the data is imprecise, and statistical models are, in Manski's terms,
partially identified. We first elaborate some subtle differences between
two natural ways of handling interval data in the dependent variable of
regression models, distinguishing between two different types of identification
regions, called Sharp Marrow Region (SMR) and Sharp Collection
Region (SCR) here. Focusing on the case of linear regression analysis, we
then derive some fundamental geometrical properties of SMR and SCR,
allowing a comparison of the regions and providing some guidelines for
their canonical construction.
Relying on the algebraic framework of adjunctions of two mappings between
partially ordered sets, we characterize SMR as a right adjoint and
as the monotone kernel of a criterion function based mapping, while SCR
is indeed interpretable as the corresponding monotone hull. Finally we
sketch some ideas on a compromise between SMR and SCR based on a
set-domained loss function.
This paper is an extended version of a shorter paper with the same title
that was conditionally accepted for publication in the Proceedings of
the Eighth International Symposium on Imprecise Probability: Theories
and Applications. In the present paper we have added proofs and a seventh
chapter with a small Monte Carlo illustration, which would have made the
original paper too long.
In a Different Light
This module develops the understanding that visible light is composed of a spectrum of colors of light from red to violet, extends the concept of a spectrum to include non-visible light through discovery, and develops tools and strategies for student inquiry. Educational levels: Middle school, High school
Robust Inference of Trees
This paper is concerned with the reliable inference of optimal
tree-approximations to the dependency structure of an unknown distribution
generating data. The traditional approach to the problem measures the
dependency strength between random variables by the index called mutual
information. In this paper reliability is achieved by Walley's imprecise
Dirichlet model, which generalizes Bayesian learning with Dirichlet priors.
Adopting the imprecise Dirichlet model results in posterior interval
expectation for mutual information, and in a set of plausible trees consistent
with the data. Reliable inference about the actual tree is achieved by focusing
on the substructure common to all the plausible trees. We develop an exact
algorithm that infers the substructure in time O(m^4), m being the number of
random variables. The new algorithm is applied to a set of data sampled from a
known distribution. The method is shown to reliably infer edges of the actual
tree even when the data are very scarce, unlike the traditional approach.
Finally, we provide lower and upper credibility limits for mutual information
under the imprecise Dirichlet model. These enable the previous developments to
be extended to a full inferential method for trees.
Comment: 26 pages, 7 figures
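The interval character of mutual information under the imprecise Dirichlet model can be made concrete with a crude sketch (invented toy counts; this is an inner approximation obtained by sweeping the s pseudo-counts over pairs of cells, not the paper's exact credibility limits, which require proper optimisation):

```python
import numpy as np

def mutual_information(p):
    """Mutual information (in nats) of a joint probability table p."""
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / (px @ py)[mask])))

def idm_mi_range(counts, s=1.0, grid=11):
    """Crude inner approximation of the IDM interval for MI: the IDM set
    of posteriors is {(n + s*t) / (N + s) : t in the simplex}; here we
    sweep t over all allocations to pairs of cells and record extremes."""
    counts = np.asarray(counts, dtype=float)
    N, cells = counts.sum(), counts.size
    lo, hi = np.inf, -np.inf
    for i in range(cells):
        for j in range(cells):
            for w in np.linspace(0.0, 1.0, grid):
                t = np.zeros(cells)
                t[i] += w * s
                t[j] += (1.0 - w) * s
                p = ((counts.ravel() + t) / (N + s)).reshape(counts.shape)
                mi = mutual_information(p)
                lo, hi = min(lo, mi), max(hi, mi)
    return lo, hi

lo, hi = idm_mi_range([[10, 2], [2, 10]], s=1.0)   # an interval, not a point
```

An edge would then be retained only if it is supported by every distribution in the set, i.e. judged on the lower end of such intervals.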
Statistical modelling under epistemic data imprecision : some results on estimating multinomial distributions and logistic regression for coarse categorical data
Paper presented at 9th International Symposium on Imprecise Probability: Theories and Applications, Pescara, Italy, 2015. Abstract: The paper deals with parameter estimation for categorical data under epistemic data imprecision, where for a part of the data only coarse(ned) versions of the true values are observable. For different observation models formalizing the information available on the coarsening process, we derive the (typically set-valued) maximum likelihood estimators of the underlying distributions. We discuss the homogeneous case of independent and identically distributed variables as well as logistic regression under a categorical covariate. We start with the imprecise point estimator under an observation model describing the coarsening process without any further assumptions. Then we determine several sensitivity parameters that allow the refinement of the estimators in the presence of auxiliary information.
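For the simplest homogeneous binary case the set-valued estimator has a closed form: with no assumption on the coarsening process, every allocation of the coarsely recorded cases is a valid completion, so the maximum likelihood estimates trace out an interval. A minimal sketch (toy data; not the paper's general algorithm):

```python
from collections import Counter

def coarse_mle_interval(observations):
    """Set-valued MLE for p = P(X = 'a') when some binary observations
    are only recorded as the coarse category 'ab'.  Without assumptions
    on the coarsening, assigning all coarse cases to 'b' gives the lower
    endpoint and assigning them all to 'a' gives the upper endpoint."""
    c = Counter(observations)
    n = sum(c.values())
    return c['a'] / n, (c['a'] + c['ab']) / n

obs = ['a'] * 5 + ['b'] * 3 + ['ab'] * 2     # invented toy sample
lo, hi = coarse_mle_interval(obs)            # (0.5, 0.7)
```

Auxiliary information about the coarsening process, of the kind the sensitivity parameters in the paper encode, would shrink this interval towards a point.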
Uncertainty management in the IPCC: agreeing to disagree
Looking back over three and a half Assessment Reports, we see that the Intergovernmental Panel on Climate Change (IPCC) has given increasing attention to the management and reporting of uncertainties, but coordination across working groups (WGs) has remained an issue. We argue that there are good reasons for working groups to use different methods to assess uncertainty, so it is better that working groups agree to disagree than seek to bring everybody into one party line.
Keywords: IPCC; uncertainty
Updating beliefs with incomplete observations
Currently, there is renewed interest in the problem, raised by Shafer in
1985, of updating probabilities when observations are incomplete. This is a
fundamental problem in general, and of particular interest for Bayesian
networks. Recently, Grunwald and Halpern have shown that commonly used updating
strategies fail in this case, except under very special assumptions. In this
paper we propose a new method for updating probabilities with incomplete
observations. Our approach is deliberately conservative: we make no assumptions
about the so-called incompleteness mechanism that associates complete with
incomplete observations. We model our ignorance about this mechanism by a
vacuous lower prevision, a tool from the theory of imprecise probabilities, and
we use only coherence arguments to turn prior into posterior probabilities. In
general, this new approach to updating produces lower and upper posterior
probabilities and expectations, as well as partially determinate decisions.
This is a logical consequence of the existing ignorance about the
incompleteness mechanism. We apply the new approach to the problem of
classification of new evidence in probabilistic expert systems, where it leads
to a new, so-called conservative updating rule. In the special case of Bayesian
networks constructed using expert knowledge, we provide an exact algorithm for
classification based on our updating rule, which has linear-time complexity for
a class of networks wider than polytrees. This result is then extended to the
more general framework of credal networks, where computations are often much
harder than with Bayesian nets. Using an example, we show that our rule appears
to provide a solid basis for reliable updating with incomplete observations,
when no strong assumptions about the incompleteness mechanism are justified.
Comment: Replaced with extended version
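The effect of staying vacuous about the incompleteness mechanism is easy to see on the classic Monty Hall puzzle (a standard illustration of this style of updating, not an example taken from the paper): the host's tie-breaking protocol is an incompleteness mechanism, and updating under every protocol compatible with the observation yields lower and upper posterior probabilities rather than a single number.

```python
import numpy as np

def switching_wins_bounds(grid=101):
    """Monty Hall with an unspecified host protocol.  You pick door 1,
    the host opens door 3 showing a goat; q is the (unknown) chance the
    host opens door 3 when the car is behind door 1.  Updating for every
    q in [0, 1] gives bounds on P(switching to door 2 wins)."""
    posteriors = []
    for q in np.linspace(0.0, 1.0, grid):
        p_open3 = (1/3) * q + 1/3            # tie-break case + forced case
        posteriors.append((1/3) / p_open3)   # car behind door 2 forces door 3
    return min(posteriors), max(posteriors)

lo, hi = switching_wins_bounds()   # (0.5, 1.0)
```

Since the whole interval [0.5, 1.0] lies at or above the probability 1/3 of winning by staying, switching remains the determinate decision here; in less favourable examples the decision itself can become partially indeterminate, as the abstract describes.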
Treatment of imprecision in data repositories with the aid of KNOLAP
Traditional data repositories, introduced for the needs of business
processing, typically focus on the storage and querying of crisp
domains of data. As a result, current commercial data repositories
have no facilities for either storing or querying imprecise/
approximate data.
No significant attempt has been made for a generic and application-independent
representation of value imprecision mainly as a
property of axes of analysis and also as part of a dynamic
environment, where potential users may wish to define their "own"
axes of analysis for querying either precise or imprecise facts. In
such cases, measured values and facts are characterised by
descriptive values drawn from a number of dimensions, whereas
values of a dimension are organised as hierarchical levels.
A solution named H-IFS is presented that allows the representation
of flexible hierarchies as part of the dimension structures. An
extended multidimensional model named IF-Cube is put forward,
which allows the representation of imprecision in facts and
dimensions and answering of queries based on imprecise
hierarchical preferences. Based on the H-IFS and IF-Cube
concepts, a post relational OLAP environment is delivered, the
implementation of which is DBMS-independent, its performance
depending solely on the underlying DBMS engine.
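A hypothetical sketch of the kind of imprecision-aware aggregation described here (the names rollup_intervals, facts, etc. are invented for illustration and are not the H-IFS or IF-Cube API): interval-valued measures roll up endpoint-wise, so the imprecision of base facts propagates to every aggregate of the cube.

```python
from collections import defaultdict

def rollup_intervals(facts, level):
    """Roll interval-valued measures up one dimension level: interval
    sums are computed endpoint-wise, so each aggregate carries the
    combined imprecision of the facts it summarises."""
    totals = defaultdict(lambda: [0.0, 0.0])
    for row in facts:
        lo, hi = row['sales']
        totals[row[level]][0] += lo
        totals[row[level]][1] += hi
    return {k: tuple(v) for k, v in totals.items()}

facts = [
    {'region': 'North', 'sales': (90, 110)},   # imprecisely measured fact
    {'region': 'North', 'sales': (50, 50)},    # precise fact
    {'region': 'South', 'sales': (70, 80)},
]
agg = rollup_intervals(facts, 'region')
```

A precise fact is simply a degenerate interval, so crisp and imprecise data coexist in the same cube without special cases.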
- …