Population Synthesis via k-Nearest Neighbor Crossover Kernel
The recent development of multi-agent simulations brings about a need for
population synthesis: the task of reconstructing an entire population from a
sampling survey of limited size (around 1%), which supplies the initial
conditions from which simulations begin. This paper presents a new kernel density
estimator for this task. Our method is an analogue of the classical
Breiman-Meisel-Purcell estimator, but employs novel techniques that harness the
huge degree of freedom required to model high-dimensional, nonlinearly
correlated datasets: the crossover kernel, the k-nearest neighbor restriction
of the kernel construction set, and the bagging of kernels. The performance as a
statistical estimator is examined through real and synthetic datasets. We
provide an "optimization-free" parameter selection rule for our method, a
theory of how our method works, and a computational cost analysis. To
demonstrate its usefulness as a population synthesizer, our method is applied
to a household synthesis task for an urban micro-simulator.
Comment: 10 pages, 4 figures, IEEE International Conference on Data Mining (ICDM) 201
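The classical estimator the method builds on admits a compact sketch. Below is a minimal, illustrative Breiman-Meisel-Purcell-style adaptive kernel density estimate, in which each sample point carries a Gaussian kernel whose bandwidth is the distance to its k-th nearest neighbor within the sample. The paper's crossover kernel, construction-set restriction, and bagging are not reproduced here; the function name and the Gaussian kernel choice are assumptions made for illustration.

```python
import numpy as np

def knn_adaptive_kde(sample, query, k=10):
    """Breiman-Meisel-Purcell-style estimator: each sample point gets a
    Gaussian kernel whose bandwidth is the distance to its k-th nearest
    neighbour within the sample itself (illustrative sketch)."""
    n, d = sample.shape
    # Pairwise distances within the sample; index 0 after sorting is the
    # zero self-distance, so column k is the k-th nearest neighbour.
    dists = np.linalg.norm(sample[:, None, :] - sample[None, :, :], axis=-1)
    h = np.sort(dists, axis=1)[:, k]
    # Evaluate the mixture of per-point Gaussians at the query points.
    q = np.linalg.norm(query[:, None, :] - sample[None, :, :], axis=-1)
    kern = np.exp(-0.5 * (q / h) ** 2) / ((2 * np.pi) ** (d / 2) * h ** d)
    return kern.mean(axis=1)
```

This brute-force version holds all pairwise distances in memory (O(n^2)); the paper's contribution is precisely in making such adaptive estimation workable for high-dimensional survey data.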
Microlocal analysis of generalized functions: pseudodifferential techniques and propagation of singularities
We characterize microlocal regularity of Colombeau generalized functions by
an appropriate extension of the classical notion of micro-ellipticity to
pseudodifferential operators with slow scale generalized symbols. Thus we
obtain an alternative, yet equivalent, way to determine generalized wave front
sets, which is analogous to the original definition of the wave front set of
distributions via intersections over characteristic sets. The new methods are
then applied to regularity theory of generalized solutions of
(pseudo-)differential equations, where we extend the general noncharacteristic
regularity result for distributional solutions and consider propagation of
generalized singularities for homogeneous first-order hyperbolic equations.
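The "original definition via intersections over characteristic sets" referred to above is, in the distributional setting, commonly stated as follows (a sketch of the standard formulation, with $p_0$ denoting the principal symbol; the generalized version replaces ellipticity by micro-ellipticity for slow scale generalized symbols):

```latex
\mathrm{WF}(u) \;=\; \bigcap \Big\{ \mathrm{Char}(P) \;:\;
  P \in \Psi^{0}(\Omega) \text{ properly supported},\; Pu \in C^{\infty}(\Omega) \Big\},
\qquad
\mathrm{Char}(P) \;=\; \{ (x,\xi) \in \Omega \times (\mathbb{R}^{n}\setminus 0)
  : p_0(x,\xi) = 0 \}.
```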
Downward Nominal Rigidity in US Wage Data from the PSID - An Application of the Kernel-Location Approach
Earlier studies of US wage data from the PSID using a variety of methods have led to mixed results with respect to the existence and extent of downward nominal wage rigidity. Here the kernel-location approach to the analysis of downward nominal wage rigidity in micro data is applied to that data for the first time, in order to non-parametrically estimate counterfactual and factual distributions of annual nominal wage changes, the rigidity function, and the average degree of downward nominal wage rigidity. Because the kernel-location approach avoids several problems of earlier studies, a substantial degree of downward nominal wage rigidity is found; earlier evidence in favor of the hypothesis of downwardly rigid nominal wages is corroborated, weakening
the institutionalist view of downward nominal wage rigidity.
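The kernel-location approach itself (locating kernels to estimate the counterfactual, rigidity-free distribution) is not reproduced here. As a toy illustration of the quantity being estimated, one can measure the share of counterfactual negative wage changes that fail to appear in the factual (observed) distribution; the function name and the simulated sweep-to-zero mechanism are hypothetical.

```python
import numpy as np

def downward_rigidity(factual, counterfactual):
    """Toy measure of downward nominal wage rigidity: the fraction of
    counterfactual (rigidity-free) negative wage changes that is missing
    from the factual (observed) distribution of wage changes."""
    f_neg = np.mean(np.asarray(factual) < 0)
    c_neg = np.mean(np.asarray(counterfactual) < 0)
    return 1.0 - f_neg / c_neg
```

Under full rigidity every intended wage cut is swept up to zero and the measure equals 1; with no rigidity the factual and counterfactual distributions coincide and it equals 0.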
Automatic Throughput and Critical Path Analysis of x86 and ARM Assembly Kernels
Useful models of loop kernel runtimes on out-of-order architectures require
an analysis of the in-core performance behavior of instructions and their
dependencies. While an instruction throughput prediction sets a lower bound on
the kernel runtime, the critical path defines an upper bound. Such predictions
are an essential part of analytic (i.e., white-box) performance models like the
Roofline and Execution-Cache-Memory (ECM) models. They enable a better
understanding of the performance-relevant interactions between hardware
architecture and loop code. The Open Source Architecture Code Analyzer (OSACA)
is a static analysis tool for predicting the execution time of sequential
loops. It previously supported only x86 (Intel and AMD) architectures and
simple, optimistic full-throughput execution. We have heavily extended OSACA to
support ARM instructions and critical path prediction, including the detection
of loop-carried dependencies, which turns it into a versatile
cross-architecture modeling tool. We show runtime predictions for code on Intel
Cascade Lake, AMD Zen, and Marvell ThunderX2 micro-architectures based on
machine models from available documentation and semi-automatic benchmarking.
The predictions are compared with actual measurements.
Comment: 6 pages, 3 figures
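The two bounds described above can be sketched in a few lines: a throughput lower bound from port pressure (assuming each instruction's issue is spread uniformly over its eligible ports, in the spirit of the optimistic full-throughput model) and a critical path upper bound as the longest latency chain through the dependency DAG. This is a simplified illustration, not OSACA's actual implementation; the instruction names, latencies, and port sets are hypothetical.

```python
from collections import defaultdict

def throughput_and_critical_path(instrs):
    """instrs: list of (name, latency, ports, deps), where ports is the set
    of execution ports the instruction can issue to and deps lists indices
    of earlier instructions it depends on (simplified sketch)."""
    # Throughput lower bound: the busiest port when each instruction's one
    # cycle of issue pressure is split evenly across its eligible ports.
    pressure = defaultdict(float)
    for _, _, ports, _ in instrs:
        for p in ports:
            pressure[p] += 1.0 / len(ports)
    tp_bound = max(pressure.values())
    # Critical path upper bound: longest latency chain through the deps DAG.
    finish = []
    for _, lat, _, deps in instrs:
        start = max((finish[d] for d in deps), default=0.0)
        finish.append(start + lat)
    cp_bound = max(finish)
    return tp_bound, cp_bound
```

Measured runtimes of a loop kernel typically fall between these two bounds, which is what makes the pair useful for white-box models like Roofline and ECM.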
Automated Instruction Stream Throughput Prediction for Intel and AMD Microarchitectures
An accurate prediction of scheduling and execution of instruction streams is
a necessary prerequisite for predicting the in-core performance behavior of
throughput-bound loop kernels on out-of-order processor architectures. Such
predictions are an indispensable component of analytical performance models,
such as the Roofline and the Execution-Cache-Memory (ECM) model, and allow a
deep understanding of the performance-relevant interactions between hardware
architecture and loop code. We present the Open Source Architecture Code
Analyzer (OSACA), a static analysis tool for predicting the execution time of
sequential loops comprising x86 instructions under the assumption of an
infinite first-level cache and perfect out-of-order scheduling. We show the
process of building a machine model from available documentation and
semi-automatic benchmarking, and carry it out for the latest Intel Skylake and
AMD Zen micro-architectures. To validate the constructed models, we apply them
to several assembly kernels and compare runtime predictions with actual
measurements. Finally, we give an outlook on how the method may be generalized
to new architectures.
Comment: 11 pages, 4 figures, 7 tables
Microlocal analysis in the dual of a Colombeau algebra: generalized wave front sets and noncharacteristic regularity
We introduce different notions of wave front set for the functionals in the
dual of the Colombeau algebra $\mathcal{G}_c(\Omega)$, providing a way to measure the $\mathcal{G}$-
and the $\mathcal{G}^\infty$-regularity in $\mathcal{L}(\mathcal{G}_c(\Omega), \widetilde{\mathbb{C}})$. For the smaller family
of functionals having a "basic structure" we obtain a Fourier
transform characterization for these generalized wave front sets and
results of noncharacteristic $\mathcal{G}$- and $\mathcal{G}^\infty$-regularity.