Comment on "Critique of q-entropy for thermal statistics" by M. Nauenberg
M. Nauenberg recently published [1] a rather long list of objections
concerning the physical validity for thermal statistics of the theory
sometimes referred to in the literature as {\it nonextensive statistical
mechanics}. This
generalization of Boltzmann-Gibbs (BG) statistical mechanics is based on the
following expression for the entropy:
S_q = k\,\frac{1-\sum_{i=1}^W p_i^q}{q-1} \qquad (q \in {\cal R};\ S_1 = S_{BG} \equiv -k\sum_{i=1}^W p_i \ln p_i)\,.
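The q → 1 limit recovers the Boltzmann-Gibbs entropy, which is easy to check numerically; a minimal sketch (the function name and test distribution are illustrative):

```python
import math

def tsallis_entropy(p, q, k=1.0):
    """S_q = k * (1 - sum_i p_i^q) / (q - 1); reduces to the
    Boltzmann-Gibbs entropy -k * sum_i p_i ln p_i as q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return -k * sum(pi * math.log(pi) for pi in p if pi > 0.0)
    return k * (1.0 - sum(pi ** q for pi in p)) / (q - 1.0)

p = [0.5, 0.25, 0.25]
s_bg = tsallis_entropy(p, 1.0)       # Boltzmann-Gibbs value
s_near = tsallis_entropy(p, 1.0001)  # approaches s_bg as q -> 1
```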
The author of [1] already presented orally the essence of his arguments in
1993 during a scientific meeting in Buenos Aires. I now reply
simultaneously to the just-cited paper and to the 1993 objections
(essentially, the violation of "fundamental thermodynamic concepts", as stated
in the Abstract of [1]).
Comment: 7 pages including 2 figures. This is a reply to M. Nauenberg, Phys. Rev. E 67, 036114 (2003).
Gibbsian representation for point processes via hyperedge potentials
We consider marked point processes on the d-dimensional Euclidean space,
defined in terms of a quasilocal specification based on marked Poisson point
processes. We investigate the possibility of constructing absolutely-summable
Hamiltonians in terms of hyperedge potentials in the sense of Georgii et al.
These potentials are a natural generalization of physical multi-body potentials
which are useful in models of stochastic geometry. We prove that such
representations can be achieved, under appropriate locality conditions of the
specification. As an illustration we also provide such potential
representations for the Widom-Rowlinson model under independent spin-flip
time-evolution. Our paper draws a link between the abstract theory of point
processes in infinite volume, the study of measures under transformations, and
statistical mechanics of systems of point particles.
Comment: 21 pages, 2 figures, 1 table.
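In the spirit of Georgii et al., a hyperedge potential Φ assigns an energy to pairs (η, ω) of a hyperedge η and a point configuration ω, and the finite-volume Hamiltonian collects all hyperedges touching the volume Λ; a generic sketch of the absolute-summability requirement discussed above (symbols are illustrative, not the paper's exact conventions):

```latex
H_\Lambda(\omega) \;=\; \sum_{\eta:\,\eta\cap\Lambda\neq\emptyset} \Phi(\eta,\omega),
\qquad
\sum_{\eta:\,\eta\cap\Lambda\neq\emptyset} \bigl|\Phi(\eta,\omega)\bigr| \;<\; \infty .
```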
Towards a Learning-Based Account of Underlying Forms: A Case Study in Turkish
A traditional concept in phonological theory is that of the underlying form. However, the history of phonology has witnessed a debate about how abstract underlying representations ought to be allowed to be, and a number of arguments have been given that phonology should abandon such representations altogether. In this paper, we consider a learning-based approach to the question. We propose a model that, by default, constructs concrete representations of morphemes. When and only when such concrete representations make it challenging to generalize in the face of the sparse statistical profile of language, our proposed model constructs abstract underlying forms that allow for effective generalization. As a case study, we consider the highly agglutinative language, Turkish. We demonstrate that the underlying forms that our model constructs account for the complexities of Turkish phonology resulting from its multifaceted vowel harmony. Moreover, these underlying forms enable the highly accurate prediction of novel surface forms, demonstrating the importance of some underlying forms to generalization.
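The kind of surface-form regularity at stake can be illustrated with a deliberately toy rule for two-way backness harmony in the Turkish plural suffix (-lar after back vowels, -ler after front vowels); this sketch is illustrative and is not the paper's learning model:

```python
BACK = set("aıou")   # Turkish back vowels
FRONT = set("eiöü")  # Turkish front vowels

def plural(stem: str) -> str:
    """Pick -lar/-ler from the stem's last vowel (two-way backness
    harmony only). A toy rule: real Turkish also has rounding harmony,
    consonant alternations, and the abstractness issues discussed above."""
    for ch in reversed(stem):
        if ch in BACK:
            return stem + "lar"
        if ch in FRONT:
            return stem + "ler"
    raise ValueError("no vowel found in stem")

kitap_pl = plural("kitap")  # 'kitaplar' (last vowel 'a' is back)
ev_pl = plural("ev")        # 'evler' (last vowel 'e' is front)
```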
Optimal measures and Markov transition kernels
We study optimal solutions to an abstract optimization problem for measures, which is a generalization of classical variational problems in information theory and statistical physics. In the classical problems, information and relative entropy are defined using the Kullback-Leibler divergence, and for this reason optimal measures belong to a one-parameter exponential family. Measures within such a family have the property of mutual absolute continuity. Here we show that this property characterizes other families of optimal positive measures if a functional representing information has a strictly convex dual. Mutual absolute continuity of optimal probability measures allows us to strictly separate deterministic and non-deterministic Markov transition kernels, which play an important role in theories of decisions, estimation, control, communication and computation. We show that deterministic transitions are strictly sub-optimal, unless the information resource with a strictly convex dual is unconstrained. For illustration, we construct an example where, unlike non-deterministic kernels, any deterministic kernel either has negatively infinite expected utility (unbounded expected error) or communicates infinite information.
Nesting statistics in the loop model on random planar maps
In the loop model on random planar maps, we study the depth -- in
terms of the number of levels of nesting -- of the loop configuration, by means
of analytic combinatorics. We focus on the `refined' generating series of
pointed disks or cylinders, which keep track of the number of loops separating
the marked point from the boundary (for disks), or the two boundaries (for
cylinders). For the general loop model, we show that these generating
series satisfy functional relations obtained by a modification of those
satisfied by the unrefined generating series. In a more specific model
where loops cross only triangles and have a bending energy, we explicitly
compute the refined generating series. We analyze their non-generic critical
behavior in the dense and dilute phases, and obtain the large deviations
function of the nesting distribution, which is expected to be universal. Using
the framework of Liouville quantum gravity (LQG), we show that a rigorous
functional KPZ relation can be applied to the multifractal spectrum of extreme
nesting in the conformal loop ensemble (CLE) in the Euclidean
unit disk, as obtained by Miller, Watson and Wilson, or to its natural
generalization to the Riemann sphere. It allows us to recover the large
deviations results obtained for the critical random planar map models.
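For orientation, the KPZ relation invoked here, in the Duplantier-Sheffield normalization for γ-LQG with 0 < γ < 2, relates a Euclidean scaling exponent x to its quantum counterpart Δ; this is the standard quadratic form, not a quotation of the paper's exact statement:

```latex
x \;=\; \frac{\gamma^2}{4}\,\Delta^2 \;+\; \Bigl(1-\frac{\gamma^2}{4}\Bigr)\Delta ,
\qquad 0 < \gamma < 2 .
```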
This offers, at the refined level of large deviations theory, a rigorous check
of the fundamental fact that the universal scaling limits of random planar map
models as weighted by partition functions of critical statistical models are
given by LQG random surfaces decorated by independent CLEs.
Comment: 71 pages, 11 figures. v2: minor text and abstract edits, references added.
Usage-based and emergentist approaches to language acquisition
It was long considered to be impossible to learn grammar based on linguistic experience alone. In the past decade, however, advances in usage-based linguistic theory, computational linguistics, and developmental psychology have changed the view on this matter. So-called usage-based and emergentist approaches to language acquisition state that language can be learned from language use itself, by means of social skills like joint attention, and by means of powerful generalization mechanisms. This paper first summarizes the assumptions regarding the nature of linguistic representations and processing. Usage-based theories are nonmodular and nonreductionist, i.e., they emphasize form-function relationships and deal with all of language, not just selected levels of representation. Furthermore, storage and processing are considered to be analytic as well as holistic, such that there is a continuum between children's unanalyzed chunks and the abstract units found in adult language. In the second part, the empirical evidence is reviewed. Children's linguistic competence is shown to be limited initially, and it is demonstrated how children can generalize knowledge based on direct and indirect positive evidence. It is argued that with these general learning mechanisms, the usage-based paradigm can be extended to multilingual language situations and to language acquisition under special circumstances.
Generalization from correlated sets of patterns in the perceptron
Generalization is a central aspect of learning theory. Here, we propose a
framework that explores an auxiliary task-dependent notion of generalization,
and attempts to quantitatively answer the following question: given two sets of
patterns with a given degree of dissimilarity, how easily will a network be
able to "unify" their interpretation? This is quantified by the volume of the
configurations of synaptic weights that classify the two sets in a similar
manner. To show the applicability of our idea in a concrete setting, we compute
this quantity for the perceptron, a simple binary classifier, using the
classical statistical physics approach in the replica-symmetric ansatz. In this
case, we show how an analytical expression measures the "distance-based
capacity", the maximum load of patterns sustainable by the network, at fixed
dissimilarity between patterns and fixed allowed number of errors. This curve
indicates that generalization is possible at any distance, but with decreasing
capacity. We propose that a distance-based definition of generalization may be
useful in numerical experiments with real-world neural networks, and to explore
computationally sub-dominant sets of synaptic solutions.
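The "volume of synaptic configurations" can be probed directly by naive Monte Carlo for a toy zero-error perceptron; everything below (names, patterns, sample count) is an illustrative sketch, not the replica computation of the paper:

```python
import random

def agreement_volume(set_a, set_b, labels, n_samples=20000, seed=0):
    """Monte Carlo estimate of the fraction of Gaussian-random weight
    vectors (directionally uniform on the sphere) that assign the given
    binary labels to BOTH pattern sets: a crude numerical proxy for the
    volume of 'unifying' synaptic configurations discussed above."""
    rng = random.Random(seed)
    dim = len(set_a[0])
    hits = 0
    for _ in range(n_samples):
        w = [rng.gauss(0.0, 1.0) for _ in range(dim)]
        ok = all(
            (sum(wi * xi for wi, xi in zip(w, x)) > 0) == y
            for patterns in (set_a, set_b)
            for x, y in zip(patterns, labels)
        )
        hits += ok
    return hits / n_samples

# Identical one-pattern sets: about half of all weight directions agree;
# orthogonal (maximally dissimilar) sets leave a smaller volume.
v_same = agreement_volume([[1.0, 0.0]], [[1.0, 0.0]], [True])
v_orth = agreement_volume([[1.0, 0.0]], [[0.0, 1.0]], [True])
```

Consistent with the curve described in the abstract, the volume stays positive at any dissimilarity but shrinks as the two sets move apart.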
A comparative survey of integrated learning systems
This paper presents the duction framework for unifying the three basic forms of inference - deduction, abduction, and induction - by specifying the possible relationships and influences among them in the context of integrated learning. Special assumptive forms of inference are defined that extend the use of these inference methods, and the properties of these forms are explored. A comparison to a related inference-based learning framework is made. Finally, several existing integrated learning programs are examined in the perspective of the duction framework.