Ising Spins on Thin Graphs
The Ising model on ``thin'' graphs (standard Feynman diagrams) displays
several interesting properties. For ferromagnetic couplings there is a mean
field phase transition at the corresponding Bethe lattice transition point. For
antiferromagnetic couplings the replica trick gives some evidence for a spin
glass phase. In this paper we investigate both the ferromagnetic and
antiferromagnetic models with the aid of simulations. We confirm the Bethe
lattice values of the critical points for the ferromagnetic model on
and graphs and examine the putative spin glass phase in the
antiferromagnetic model by looking at the overlap between replicas in a
quenched ensemble of graphs. We also compare the Ising results with those for
higher state Potts models and Ising models on ``fat'' graphs, such as those
used in 2D gravity simulations.
Comment: LaTeX, 13 pages + 9 postscript figures, COLO-HEP-340, LPTHE-Orsay-94-6
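The basic ingredient of such simulations, Metropolis updates of Ising spins on a sparse random regular graph (a "thin" graph), can be sketched as follows. This is a minimal illustration, not the paper's actual code: graph size, degree, and temperature are arbitrary choices. For degree d the Bethe-lattice ferromagnetic critical point lies at beta_c = atanh(1/(d-1)), so beta = 2.0 below is deep in the ordered phase.

```python
import math
import random

def random_regular_graph(n, d, rng):
    # Pairing ("configuration") model: give each vertex d stubs, pair them
    # at random, and retry until there are no self-loops or duplicate edges
    # (fast enough for small d and n).
    while True:
        stubs = [v for v in range(n) for _ in range(d)]
        rng.shuffle(stubs)
        edges = set()
        for a, b in zip(stubs[::2], stubs[1::2]):
            if a == b or (min(a, b), max(a, b)) in edges:
                break
            edges.add((min(a, b), max(a, b)))
        else:
            return edges

def metropolis_sweep(spins, neighbours, beta, J, rng):
    # One sweep of single-spin-flip Metropolis updates for H = -J sum_<ij> s_i s_j.
    n = len(spins)
    for _ in range(n):
        i = rng.randrange(n)
        # Energy change of flipping spin i
        dE = 2.0 * J * spins[i] * sum(spins[j] for j in neighbours[i])
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            spins[i] = -spins[i]

rng = random.Random(0)
n, d = 100, 3                        # 100 spins on a random 3-regular ("thin") graph
neighbours = [[] for _ in range(n)]
for a, b in random_regular_graph(n, d, rng):
    neighbours[a].append(b)
    neighbours[b].append(a)
spins = [1] * n
for _ in range(200):                 # ferromagnetic coupling, well below the transition
    metropolis_sweep(spins, neighbours, beta=2.0, J=1.0, rng=rng)
m = abs(sum(spins)) / n              # magnetisation stays large in the ordered phase
```

The pairing construction is the standard way to draw (approximately uniform) random regular graphs; for degree 3 the retry loop succeeds after a handful of attempts.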
Critical behavior of O(2)xO(N) symmetric models
We investigate the controversial issue of the existence of universality
classes describing critical phenomena in three-dimensional statistical systems
characterized by a matrix order parameter with symmetry O(2)xO(N) and
symmetry-breaking pattern O(2)xO(N) -> O(2)xO(N-2). Physical realizations of
these systems are, for example, frustrated spin models with noncollinear order.
Starting from the field-theoretical Landau-Ginzburg-Wilson Hamiltonian, we
consider the massless critical theory and the minimal-subtraction scheme
without epsilon expansion. The three-dimensional analysis of the corresponding
five-loop expansions shows the existence of a stable fixed point for N=2 and
N=3, confirming recent field-theoretical results based on a six-loop expansion
in the alternative zero-momentum renormalization scheme defined in the massive
disordered phase.
In addition, we report numerical Monte Carlo simulations of a class of
three-dimensional O(2)xO(2)-symmetric lattice models. The results provide
further support to the existence of the O(2)xO(2) universality class predicted
by the field-theoretical analyses.
Comment: 45 pages, 20 figures, some additions; Phys. Rev. B, in press
Multicritical behavior in the fully frustrated XY model and related systems
We study the phase diagram and critical behavior of the two-dimensional
square-lattice fully frustrated XY model (FFXY) and of two related models, a
lattice discretization of the Landau-Ginzburg-Wilson Hamiltonian for the
critical modes of the FFXY model, and a coupled Ising-XY model. We present a
finite-size-scaling analysis of the results of high-precision Monte Carlo
simulations on square lattices L x L, up to L=O(10^3).
In the FFXY model and in the other models, when the transitions are
continuous, there are two very close but separate transitions. There is an
Ising chiral transition characterized by the onset of chiral long-range order
while spins remain paramagnetic. Then, as temperature decreases, the systems
undergo a Kosterlitz-Thouless spin transition to a phase with quasi-long-range
order.
The FFXY model and the other models in a rather large parameter region show a
crossover behavior at the chiral and spin transitions that is universal to some
extent. We conjecture that this universal behavior is due to a multicritical
point. The numerical data suggest that the relevant multicritical point is a
zero-temperature transition. A possible candidate is the O(4) point that
controls the low-temperature behavior of the 4-vector model.
Comment: 62 pages
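A standard ingredient of finite-size-scaling analyses of this kind is the Binder cumulant of the order parameter, whose crossings for different lattice sizes L locate a critical point; a minimal sketch (whether the authors used this particular quantity is not stated in the abstract):

```python
def binder_cumulant(m_samples):
    # U = 1 - <m^4> / (3 <m^2>^2), estimated from Monte Carlo samples of the
    # order parameter m. U -> 2/3 in the ordered phase (m concentrated at +-|m|)
    # and U -> 0 for Gaussian-distributed m in the disordered phase, so curves
    # U(T; L) for different L cross near the critical temperature.
    n = len(m_samples)
    m2 = sum(m * m for m in m_samples) / n
    m4 = sum(m ** 4 for m in m_samples) / n
    return 1.0 - m4 / (3.0 * m2 * m2)

# Perfectly ordered samples give the ordered-phase limit 2/3:
u_ordered = binder_cumulant([1.0, -1.0] * 50)
```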
Reliable statistical modeling of weakly structured information
The statistical analysis of "real-world" data is often confronted with the fact that most standard statistical methods were developed under some kind of idealization of the data that is often not adequate in practical situations. This concerns, among others, i) the potentially deficient quality of the data, which can arise for example from measurement error, non-response in surveys, or data-processing errors, and ii) the scale quality of the data, which is idealized as "the data have some clear scale of measurement that can be uniquely located within the scale hierarchy of Stevens (or that of Narens and Luce, or Orth)".
Modern statistical methods, e.g. correction techniques for measurement error or robust methods, cope with issue i). In the context of missing or coarsened data, imputation techniques and methods that explicitly model the missing/coarsening process are nowadays well-established tools of refined data analysis. Concerning ii), the typical statistical viewpoint is a more pragmatic one: in case of doubt, one simply presumes the strongest scale of measurement that is clearly "justified". In more complex situations, for example in the analysis of ranking data, statisticians often do not worry too much about purely measurement-theoretic reservations, but instead embed the data structure in an appropriate, easy-to-handle space, such as a metric space, and then use all statistical tools available for this space.
Against this background, the present cumulative dissertation tries to contribute, from different perspectives, to the appropriate handling of data that challenge the above-mentioned idealizations. The focus is, on the one hand, on the analysis of interval-valued and set-valued data within the methodology of partial identification and, on the other hand, on the analysis of data with values in a partially ordered set (poset-valued data). Further tools of statistical modeling treated in the dissertation are necessity measures in the context of possibility theory and concepts of stochastic dominance for poset-valued data. The present dissertation consists of 8 contributions, which are discussed in detail in the following sections:
Contribution 1 analyzes different identification regions for partially identified linear models under interval-valued responses and develops a further kind of identification region (as well as a corresponding estimator). Estimates for the identification regions are compared to each other and also to classical statistical approaches for a data set on wine quality.
Contribution 2 deals with logistic regression under coarsened responses, analyzes point-identifying assumptions and develops likelihood-based estimators for the identified set. The methods are illustrated with data of a wave of the panel study "Labor Market and Social Security" (PASS).
Contribution 3 analyzes the combinatorial structure of the extreme points and the edges of a polytope (called credal set or core in the literature) that plays a crucial role in imprecise probability theory. Furthermore, an efficient algorithm for enumerating all extreme points is given and compared to existing standard methods.
Contribution 4 develops a quantile concept for data or random variables with values in a complete lattice, which is applied in Contribution 5 to the case of ranking data in the context of a data set on the wisdom of the crowd phenomenon.
In Contribution 6, a framework for evaluating the quality of different aggregation functions from Social Choice Theory is developed, which enables an analysis of quality as a function of group-specific homogeneity. In a simulation study, selected aggregation functions, including an aggregation function based on the concepts of Contributions 4 and 5, are analyzed.
Contribution 7 supplies a linear program that allows for detecting stochastic dominance for poset-valued random variables, gives proposals for inference and regularization, and generalizes the approach to the general task of optimizing a linear function on a closure system. The generality of the developed methods is illustrated with data examples in the context of multivariate inequality analysis, item impact and differential item functioning in the context of item response theory, analyzing distributional differences in spatial statistics and guided regularization in the context of cognitive diagnosis models.
Contribution 8 uses concepts of stochastic dominance to establish a descriptive approach for a relational analysis of person ability and item difficulty in the context of multidimensional item response theory. All developed methods have been implemented in the language R ([R Development Core Team, 2014]) and are available from the author upon request.
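The logic of partial identification used in Contributions 1 and 2 can be illustrated in its simplest form with the mean of an interval-valued sample: without additional assumptions, every selection of points from the observed intervals is admissible, so the identified set for the mean is itself an interval, bounded by the means of the endpoints. A minimal sketch with invented data (this is a textbook illustration, not the dissertation's own estimator):

```python
def mean_identification_region(intervals):
    # Each observation is only known to lie in [lo, hi]; the sharp
    # identification region for the mean is the interval spanned by the
    # mean of the lower endpoints and the mean of the upper endpoints.
    n = len(intervals)
    lo = sum(a for a, b in intervals) / n
    hi = sum(b for a, b in intervals) / n
    return lo, hi

# Three interval-valued observations (invented for illustration):
region = mean_identification_region([(1.0, 2.0), (0.0, 4.0), (3.0, 3.0)])
```

Every value inside the returned interval is attainable by some selection of points from the observed intervals, and no value outside it is, which is what makes the region "sharp".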
The application examples corroborate the usefulness of the weak types of statistical modeling examined in this thesis: beyond their flexibility in dealing with many kinds of data deficiency, they can still lead to informative substantive conclusions, which are then all the more reliable due to the weak modeling.
The statistical analysis of real-world data is often confronted with the fact that common standard statistical methods were developed under a strong idealization of the data situation that is frequently not appropriate in practice. This concerns i) the potentially deficient quality of the data, caused for example by measurement errors, by systematic non-response in social-science surveys, or by errors during data processing, and ii) the scale quality of the data itself: many data situations cannot be placed within the simple scale hierarchies of Stevens (or those of Narens and Luce, or Orth).
More modern statistical methods, such as measurement-error correction techniques or robust methods, try to account for the idealization of data quality after the fact. In the context of missing or interval-censored data, imputation methods for completing missing values, and methods that explicitly model the process generating the coarsened data, have become established. With regard to scale quality, statistics usually proceeds rather pragmatically: in case of doubt, the lowest scale level that is clearly justified is chosen. In more complex multivariate situations, such as the analysis of ranking data, which can hardly be forced into Stevens's "corset", one often resorts to the simple idea of embedding the data in a suitable metric space, in order to then be able to use all the tools of metric modeling.
Against this background, the cumulative dissertation presented here aims to contribute, from different perspectives, to the appropriate handling of data that challenge those idealizations. On the side of deficient data quality, the focus is above all on the analysis of interval-valued and set-valued data by means of partial identification, while with respect to scale quality the case of lattice-valued data is treated. As further tools of statistical modeling, necessity measures in the framework of imprecise probabilities and concepts of stochastic dominance for random variables with values in a partially ordered set are considered in particular.
The present dissertation comprises 8 contributions, which are discussed in more detail in the following chapters:
Contribution 1 analyzes different identification regions for partially identified linear models with an interval-valued response variable and proposes a new identification region (including an estimator). Estimates of the identification regions are analyzed for a data set describing the quality of various red wines, as judged by experts, as a function of several physicochemical properties. The results are also compared with those of classical methods for interval data.
Contribution 2 treats logistic regression with a coarsened response variable, analyzes point-identifying assumptions, and develops likelihood-based estimators for the corresponding identification regions. The method is illustrated with data from one wave of the panel study "Labor Market and Social Security" (PASS).
Contribution 3 analyzes the combinatorial structure of the extreme points and edges of a polytope (the so-called structure or core of an interval probability or of a non-additive set function) that is of fundamental importance in many areas of imprecise probability theory. An efficient algorithm for enumerating all extreme points is also given and compared with existing standard enumeration methods.
Contribution 4 presents a quantile concept for lattice-valued data and random variables. In Contribution 5 this quantile concept is applied to ranking data in connection with a data set investigating the "wisdom of the crowd" phenomenon.
Contribution 6 develops a method for the probabilistic analysis of the "quality" of different aggregation functions from Social Choice Theory. The analysis is carried out as a function of the homogeneity of the groups under consideration. In a simulation-based study, several classical aggregation functions, as well as a new aggregation function based on Contributions 4 and 5, are compared.
Contribution 7 presents an approach, based on linear-programming techniques, for testing whether stochastic dominance holds between two random variables. Furthermore, proposals for statistical inference and regularization are made. The method is then extended to the more general case of optimizing a linear function over a closure system. Its flexible applicability is illustrated by various application examples.
Contribution 8 uses ideas of stochastic dominance to analyze data sets from multidimensional Item Response Theory relationally, by deriving pairs of mutually empirically supporting ability relations among the persons and difficulty relations among the items. All developed methods have been implemented in R ([R Development Core Team, 2014]).
The application examples show the flexibility of the methods of relational or "weak" modeling considered here, in particular for handling deficient data, and underline the fact that even with methods of weak modeling, non-trivial substantive conclusions often remain possible, conclusions that, owing to the more cautious modeling, are then considerably more reliable.
Triple, quadruple, and higher-order helices: historical phenomena and (neo-)evolutionary models
Carayannis and Campbell (2009; 2010) have argued for using quadruple and quintuple helices as models encompassing and generalizing triple-helix dynamics. In the meantime, quadruple and quintuple helices have been adopted by the European Committee of the Regions and the European Commission as metaphors for further strategy development, for example in EU programs on Smart Specialization, Plan S, Open Innovation 2.0, etc. Here we argue that the transition from a double helix to a triple helix can change the dynamic from a trajectory to a regime. Next-order transitions (e.g., to quadruple, quintuple, or n-tuple helices), however, can be decomposed and recombined into interacting triple helices. For example, in the case of four helices A, B, C, and D, one can distinguish ABC, ABD, ACD, and BCD; each triplet can generate synergy. The triple-helix synergy indicator can thus be elaborated for more than three dimensions. Whether innovation systems are national, regional, sectorial, triple-helix, quadruple-helix, etc., is a question that can inform policies with evidence once one proceeds to measurement. A variety of perspectives can be used to interpret the data. Software for testing such perspectives will be introduced.
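The decomposition described above, n helices recombined into interacting triple helices, is simply the enumeration of all 3-element subsets; a few lines of Python (illustrative, not the software announced in the abstract) reproduce the four triplets named for A, B, C, and D:

```python
from itertools import combinations

def triple_helices(helices):
    # All 3-subsets of the set of helices; each triplet can carry its own
    # (triple-helix) synergy contribution.
    return ["".join(t) for t in combinations(helices, 3)]

print(triple_helices("ABCD"))  # ['ABC', 'ABD', 'ACD', 'BCD']
```

For five helices the same call yields the C(5,3) = 10 interacting triplets, so the bookkeeping scales mechanically with the number of helices.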
Effective Field Theories in Nuclear, Particle and Atomic Physics
These are the proceedings of the workshop on ``Effective Field Theories in
Nuclear, Particle and Atomic Physics'' held at the Physikzentrum Bad Honnef of
the Deutsche Physikalische Gesellschaft, Bad Honnef, Germany from December 13
to 17, 2005. The workshop concentrated on Effective Field Theory in many
contexts. A first part was concerned with Chiral Perturbation Theory in its
various settings and explored in depth its use in relation to lattice QCD.
The second part covered progress in effective field theories for systems
with one, two or more nucleons, as well as in atomic physics. Included is a short
contribution per talk.
Comment: 56 pages, mini proceedings of the 337th WE-Heraeus-Seminar "Effective Field Theories in Nuclear, Particle and Atomic Physics," Physikzentrum Bad Honnef, Bad Honnef, Germany, December 13 -- 17, 2005
A flexible multivariate conditional autoregression with application to road safety performance indicators.
There is a dearth of models for multivariate spatially correlated data recorded on
a lattice. Existing models incorporate some combination of three correlation terms:
(i) the correlation between the multiple variables within each site, (ii) the spatial
autocorrelation for each variable across the lattice, and (iii) the correlation between
each variable at one site and a different variable at a neighbouring site. These may
be thought of as correlation, spatial autocorrelation and spatial cross-correlation
parameters respectively.
This thesis develops a flexible multivariate conditional autoregression (FMCAR) model where
the spatial cross-correlation is asymmetric. A comparison of the performance of the
FMCAR with existing MCARs is performed through a simulation exercise. The
FMCAR compares well with the other models, in terms of model fit and shrinkage,
when applied to a range of simulated data. However, the FMCAR outperforms all
of the existing MCAR models when applied to data with asymmetric spatial cross-correlations.
To demonstrate the model, the FMCAR is applied to road safety
performance indicators, namely casualty counts by mode and severity for vulnerable
road users in London, taken from the STATS19 dataset for 2006. By exploiting
the correlation between multiple performance indicators within local
authorities, and the spatial auto- and cross-correlation of the variables across local
authorities, the FMCAR achieves considerable shrinkage of the estimates of
local authority performance. Whilst this does not enable local authorities to be
differentiated based upon their road safety performance, it produces a considerable
reduction in the uncertainty surrounding their rankings. This is consistent with
previous attempts to improve performance rankings. Further, although the findings
of this thesis indicate that there is only mild evidence of asymmetry in the spatial
cross-correlations for road casualty counts, the thesis demonstrates the
applicability of this model to real-world social and economic problems.
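The univariate conditional autoregression underlying multivariate extensions such as the MCAR and FMCAR is specified through a lattice precision matrix. A minimal numpy sketch of the proper-CAR precision on a toy 2x2 grid follows; the function name and parameter values are illustrative, and this is not the FMCAR itself, whose asymmetric cross-correlation structure is the thesis's contribution:

```python
import numpy as np

def car_precision(W, rho, tau):
    # Proper CAR precision: Q = tau * (D - rho * W), where W is a symmetric
    # 0/1 adjacency matrix of the lattice and D is the diagonal matrix of
    # neighbour counts. Q is symmetric positive definite for |rho| < 1,
    # so it defines a valid Gaussian Markov random field.
    D = np.diag(W.sum(axis=1))
    return tau * (D - rho * W)

# Rook-neighbour adjacency of a 2x2 grid: sites 0-1, 0-2, 1-3, 2-3
W = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)
Q = car_precision(W, rho=0.9, tau=1.0)
```

Multivariate CAR models replace the scalars rho and tau with matrices acting across the variables at each site, which is where the within-site correlation and spatial cross-correlation terms (i)-(iii) above enter.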