The Little-Hopfield model on a Random Graph
We study the Hopfield model on a random graph in scaling regimes where the
average number of connections per neuron is a finite number and where the spin
dynamics is governed by a synchronous execution of the microscopic update rule
(Little-Hopfield model). We solve this model within replica symmetry and, using
bifurcation analysis, we prove that the spin-glass/paramagnetic and the
retrieval/paramagnetic transition lines of our phase diagram are identical to
those of sequential dynamics. The first-order retrieval/spin-glass transition
line follows by direct evaluation of our observables using population dynamics.
Within the accuracy of numerical precision, and for sufficiently small values of
the connectivity parameter, we find that this line coincides with the
corresponding sequential one. Comparison with simulation experiments shows
excellent agreement.
Comment: 14 pages, 4 figures
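The distinction between the Little-Hopfield (synchronous) and sequential update rules that the abstract compares can be made concrete with a small sketch. This is a minimal illustration on a fully connected network with Hebbian couplings, not the paper's finite-connectivity random graph; all parameter values are invented for the example.

```python
import random

def hebb_couplings(patterns):
    # Hebbian couplings J_ij = (1/N) * sum over patterns of xi_i * xi_j,
    # with zero self-coupling
    N = len(patterns[0])
    J = [[0.0] * N for _ in range(N)]
    for xi in patterns:
        for i in range(N):
            for j in range(N):
                if i != j:
                    J[i][j] += xi[i] * xi[j] / N
    return J

def little_step(J, s):
    # Little-Hopfield dynamics: synchronous update -- every spin aligns with
    # the local field computed from the *previous* configuration
    h = [sum(J[i][j] * s[j] for j in range(len(s))) for i in range(len(s))]
    return [1 if hi >= 0 else -1 for hi in h]

def sequential_step(J, s):
    # sequential (zero-temperature Glauber-like) dynamics: spins updated one
    # at a time, each seeing the already-updated spins before it
    s = list(s)
    for i in range(len(s)):
        h = sum(J[i][j] * s[j] for j in range(len(s)))
        s[i] = 1 if h >= 0 else -1
    return s

random.seed(0)
N = 50
patterns = [[random.choice([-1, 1]) for _ in range(N)] for _ in range(3)]
J = hebb_couplings(patterns)
# start from a noisy copy of pattern 0 and let synchronous dynamics run
s = [x if random.random() > 0.1 else -x for x in patterns[0]]
for _ in range(10):
    s = little_step(J, s)
overlap = sum(a * b for a, b in zip(s, patterns[0])) / N
```

At this low storage load the noisy initial state is pulled back onto the stored pattern, so the retrieval overlap approaches 1 under either update rule.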
Effects of Water Stress on Seed Production in Ruzi Grass (Brachiaria ruziziensis Germain and Everard)
Water stress at different stages of reproductive development influenced seed yield in Ruzi grass differently. Under mild water stress, the earlier in the reproductive developmental stage the stress was applied (before ear emergence), the faster the plants recovered and the less the ultimate damage to inflorescence structure and seed set, compared with the situation where water stress occurred during the later stages after inflorescences had emerged. Conversely, severe water stress before ear emergence severely damaged both inflorescence numbers and seed quality. Permanent damage to the reproductive structures resulted in deformed inflorescences. Moreover, basal vegetative tillers were stunted and were capable of only limited regrowth after re-watering.
Thermodynamic properties of extremely diluted symmetric Q-Ising neural networks
Using the replica-symmetric mean-field theory approach, the thermodynamic and
retrieval properties of extremely diluted symmetric Q-Ising neural networks
are studied. In particular, capacity-gain-parameter and capacity-temperature
phase diagrams are derived. The zero-temperature results are compared with
those obtained from a study of the dynamics of the model. Furthermore, the de
Almeida-Thouless line is determined. Where appropriate, the difference with
other Q-Ising architectures is outlined.
Comment: 16 pages LaTeX including 6 eps-figures. Corrections, also in most of
the figures, have been made
Phase transitions in optimal unsupervised learning
We determine the optimal performance of learning the orientation of the
symmetry axis of a set of P = alpha N points that are uniformly distributed in
all the directions but one on the N-dimensional sphere. The components along
the symmetry-breaking direction, given by the unit vector B, are sampled from a
mixture of two Gaussians of variable separation and width. The typical optimal
performance is measured through the overlap Ropt = B.J*, where J* is the
optimal guess of the symmetry-breaking direction. Within this general scenario,
the learning curves Ropt(alpha) may present first-order transitions if the
clusters are narrow enough. Close to these transitions, high-performance states
can be obtained through the minimization of the corresponding optimal
potential, although these solutions are metastable, and therefore not
learnable, within the usual Bayesian scenario.
Comment: 9 pages, 8 figures, submitted to PRE. This new version of the paper
contains one new section, Bayesian versus optimal solutions, where we explain
in detail the results supporting our claim that Bayesian learning may not be
optimal. Figure 4 of the first submission was difficult to understand; we
replaced it by two new figures (Figs. 4 and 5 in this new version) containing
more detail.
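The setting of the abstract, P = alpha N points isotropic except along a symmetry-breaking unit vector B whose component is drawn from a two-Gaussian mixture, can be sketched numerically. The estimator below is a plain second-moment (principal-component) guess found by power iteration, not the paper's optimal Bayesian procedure; the dimensions and mixture parameters are arbitrary illustrative choices.

```python
import random, math

def sample_point(N, B, sep, width):
    # components orthogonal to the symmetry axis B: standard normal;
    # component along B: symmetric two-Gaussian mixture at +-sep, given width
    x = [random.gauss(0, 1) for _ in range(N)]
    xb = sum(xi * bi for xi, bi in zip(x, B))   # strip the B-component
    mix = random.choice([-1, 1]) * sep + random.gauss(0, width)
    return [xi - xb * bi + mix * bi for xi, bi in zip(x, B)]

def power_iteration(points, iters=200):
    # leading eigenvector of the sample second-moment matrix, obtained by
    # repeatedly applying v -> C v with C v = sum_x x (x . v), then normalizing
    N = len(points[0])
    v = [random.gauss(0, 1) for _ in range(N)]
    for _ in range(iters):
        w = [0.0] * N
        for x in points:
            c = sum(xi * vi for xi, vi in zip(x, v))
            for i in range(N):
                w[i] += c * x[i]
        norm = math.sqrt(sum(wi * wi for wi in w))
        v = [wi / norm for wi in w]
    return v

random.seed(1)
N, alpha = 30, 10.0                 # P = alpha * N examples
B = [1.0 / math.sqrt(N)] * N        # unit symmetry-breaking direction
pts = [sample_point(N, B, sep=2.0, width=0.5) for _ in range(int(alpha * N))]
J = power_iteration(pts)
R = abs(sum(ji * bi for ji, bi in zip(J, B)))   # overlap R = |B . J|
```

With this wide cluster separation the variance along B dominates the isotropic background, so even the naive spectral guess reaches a large overlap; the paper's first-order transitions appear in the opposite regime of narrow clusters.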
Red cell distribution width (RDW) as a predictor of survival outcomes with palliative and adjuvant chemotherapy for metastatic penile cancer.
PURPOSE: Red cell distribution width (RDW) measures the variability in red cell size. Metastatic penile cancer displays a poor chemotherapy response. As no validated prognostic predictor exists, we investigated whether RDW correlates independently with survival outcomes in metastatic penile cancer treated by chemotherapy. METHODS: Electronic chemotherapy files of patients with metastatic penile cancer (M1 or N3) from a large academic supra-regional centre were retrospectively analysed between 2005 and 2018. Patients were stratified into RDW > 13.9% and < 13.9%, as per published data on RDW in renal cell carcinoma. Survival time was calculated from the date of chemotherapy initiation until the date of death. RESULTS: 58 patients were analysed. The RDW-high group (n = 31) had poorer survival than the RDW-low group (n = 27). Median overall survival (mOS) in all patients was 19.0 months (95% CI 13.1-24.9). mOS was 15.0 months (95% CI 10.1-19.9) for RDW-high and 37.0 months (95% CI 32.3-43.1) for RDW-low. Kaplan-Meier curves showed a clear disparity in survival (log-rank p = 0.025). The Cox proportional hazard ratio for death, corrected for T-stage, grade, age and deprivation score, was 0.43 (p = 0.04). Sub-analysis of the M1 patients showed an mOS in RDW-high of 17 months (95% CI 11.6-22.4) vs. NR; the HR for death was 0.42. In the N3 patients, mOS in the RDW-high cohort was 30 months (95% CI 4.5-55.9) vs. 13 months (95% CI 1.8-24.2) in RDW-low; the HR for death was 0.30. CONCLUSION: RDW correlates independently with survival outcomes in metastatic penile cancer and may act as a potential predictor of survival outcomes for patients with metastatic penile cancer receiving chemotherapy.
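The stratified survival comparison reported above rests on Kaplan-Meier estimates. A hand-rolled Kaplan-Meier estimator is easy to sketch; the follow-up times below are purely synthetic illustration data, not the study's patient records.

```python
def kaplan_meier(times, events):
    # times: follow-up in months; events: 1 = death observed, 0 = censored.
    # The survival estimate is updated at each distinct event time t as
    # S(t) <- S(t-) * (1 - deaths_at_t / number_at_risk)
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk = len(times)
    surv, curve = 1.0, []
    i = 0
    while i < len(order):
        t = times[order[i]]
        deaths = total = 0
        while i < len(order) and times[order[i]] == t:
            total += 1
            deaths += events[order[i]]
            i += 1
        if deaths:
            surv *= 1.0 - deaths / at_risk
            curve.append((t, surv))
        at_risk -= total
    return curve

def median_survival(curve):
    # first time at which the survival estimate drops to 0.5 or below
    for t, s in curve:
        if s <= 0.5:
            return t
    return None  # median not reached

# purely synthetic follow-up data (months, event flags) for two strata;
# the real cohort sizes were n = 31 (RDW-high) and n = 27 (RDW-low)
high_t = [3, 5, 8, 10, 12, 15, 15, 18, 20, 24, 30]
high_e = [1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1]
low_t = [10, 14, 20, 25, 30, 37, 37, 40, 48, 52, 60]
low_e = [1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1]
m_high = median_survival(kaplan_meier(high_t, high_e))
m_low = median_survival(kaplan_meier(low_t, low_e))
```

Censored patients (event flag 0) leave the risk set without forcing a step in the curve, which is what distinguishes the Kaplan-Meier estimate from a naive fraction-surviving calculation.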
Slowly evolving geometry in recurrent neural networks I: extreme dilution regime
We study extremely diluted spin models of neural networks in which the
connectivity evolves in time, although adiabatically slowly compared to the
neurons, according to stochastic equations which on average aim to reduce
frustration. The (fast) neurons and (slow) connectivity variables equilibrate
separately, but at different temperatures. Our model is exactly solvable in
equilibrium. We obtain phase diagrams upon making the condensed ansatz (i.e.
recall of one pattern). These show that, as the connectivity temperature is
lowered, the volume of the retrieval phase diverges and the fraction of
mis-aligned spins is reduced. Still, one always retains a region in the
retrieval phase where recall states other than the one corresponding to the
`condensed' pattern are locally stable, so the associative memory character of
our model is preserved.
Comment: 18 pages, 6 figures
Replicated Transfer Matrix Analysis of Ising Spin Models on `Small World' Lattices
We calculate equilibrium solutions for Ising spin models on `small world'
lattices, which are constructed by super-imposing random and sparse Poissonian
graphs with finite average connectivity c onto a one-dimensional ring. The
nearest neighbour bonds along the ring are ferromagnetic, whereas those
corresponding to the Poissonian graph are allowed to be random. Our models thus
generally contain quenched connectivity and bond disorder. Within the replica
formalism, calculating the disorder-averaged free energy requires the
diagonalization of replicated transfer matrices. In addition to developing the
general replica symmetric theory, we derive phase diagrams and calculate
effective field distributions for two specific cases: that of uniform sparse
long-range bonds (i.e. `small world' magnets), and that of (+J/-J) random
sparse long-range bonds (i.e. `small world' spin-glasses).
Comment: 22 pages, LaTeX, IOP macros, eps figures
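The replicated transfer matrices of the abstract generalize the ordinary 2x2 transfer matrix of the one-dimensional Ising ring that underlies the model. As a non-replicated baseline, the sketch below computes the exact ring partition function Z = Tr T^N and checks it against the thermodynamic-limit free energy obtained from the largest eigenvalue of T; the parameter values are arbitrary.

```python
import math

def transfer_matrix(beta, J, h):
    # T[s, s'] = exp(beta * (J*s*s' + h*(s + s')/2)) for spins s = +-1
    spins = [1, -1]
    return [[math.exp(beta * (J * si * sj + h * (si + sj) / 2.0))
             for sj in spins] for si in spins]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def ring_partition_function(beta, J, h, N):
    # periodic chain of N spins: Z = Tr T^N
    T = transfer_matrix(beta, J, h)
    P = [[1.0, 0.0], [0.0, 1.0]]
    for _ in range(N):
        P = mat_mul(P, T)
    return P[0][0] + P[1][1]

def free_energy_per_spin(beta, J, h):
    # thermodynamic limit: f = -(1/beta) * ln(largest eigenvalue of T);
    # for h = 0 the eigenvalues reduce to 2*cosh(beta*J) and 2*sinh(beta*J)
    a = math.exp(beta * (J + h))
    d = math.exp(beta * (J - h))
    b = math.exp(-beta * J)
    lam = 0.5 * (a + d + math.sqrt((a - d) ** 2 + 4 * b * b))
    return -math.log(lam) / beta

beta, J, N = 1.0, 1.0, 200
f_exact = -math.log(ring_partition_function(beta, J, 0.0, N)) / (beta * N)
f_limit = free_energy_per_spin(beta, J, 0.0)
```

The subleading eigenvalue contributes only a correction of order (lambda2/lambda1)^N, so already at N = 200 the finite ring is indistinguishable from the thermodynamic limit; the replicated analysis in the paper diagonalizes the analogous matrices in replica space.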
Generalizing with perceptrons in case of structured phase- and pattern-spaces
We investigate the influence of different kinds of structure on the learning
behaviour of a perceptron performing a classification task defined by a teacher
rule. The underlying pattern distribution is permitted to have spatial
correlations. The prior distribution for the teacher coupling vectors itself is
assumed to be nonuniform. Thus classification tasks of quite different
difficulty are included. As learning algorithms we discuss Hebbian learning,
Gibbs learning, and Bayesian learning with different priors, using methods from
statistics and the replica formalism. We find that the Hebb rule is quite
sensitive to the structure of the actual learning problem, failing
asymptotically in most cases. In contrast, the behaviour of the more
sophisticated methods of Gibbs and Bayes learning is influenced by the spatial
correlations only in an intermediate regime of the parameter specifying the
size of the training set. Concerning the Bayesian case we show how enhanced
prior knowledge improves the performance.
Comment: LaTeX, 32 pages with eps-figs, accepted by J Phys
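The Hebb rule discussed in the abstract admits a compact numerical sketch. The version below uses unstructured i.i.d. Gaussian patterns and a fixed Gaussian teacher, i.e. exactly the baseline without the spatial correlations and nonuniform priors the paper studies; sizes and seeds are arbitrary.

```python
import random, math

def hebb_student(patterns, labels):
    # Hebb rule: J_i = (1/P) * sum over examples mu of label^mu * xi_i^mu
    N, P = len(patterns[0]), len(patterns)
    return [sum(labels[m] * patterns[m][i] for m in range(P)) / P
            for i in range(N)]

def overlap(J, B):
    # normalized teacher-student overlap R = (J . B) / (|J| |B|)
    dot = sum(j * b for j, b in zip(J, B))
    nJ = math.sqrt(sum(j * j for j in J))
    nB = math.sqrt(sum(b * b for b in B))
    return dot / (nJ * nB)

random.seed(2)
N, alpha = 100, 5.0
P = int(alpha * N)
B = [random.gauss(0, 1) for _ in range(N)]          # teacher coupling vector
patterns = [[random.gauss(0, 1) for _ in range(N)] for _ in range(P)]
labels = [1 if sum(b * x for b, x in zip(B, p)) >= 0 else -1
          for p in patterns]
J = hebb_student(patterns, labels)
R = overlap(J, B)
# for isotropic patterns the generalization error is eps = arccos(R) / pi
eps = math.acos(R) / math.pi
```

On this unstructured task the Hebb rule performs respectably; the paper's point is that spatial correlations in the patterns can make this same rule fail asymptotically, while Gibbs and Bayes learning remain robust.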
Statistical Mechanics of Soft Margin Classifiers
We study the typical learning properties of the recently introduced Soft
Margin Classifiers (SMCs), learning realizable and unrealizable tasks, with the
tools of Statistical Mechanics. We derive analytically the behaviour of the
learning curves in the regime of very large training sets. We obtain
exponential and power laws for the decay of the generalization error towards
the asymptotic value, depending on the task and on general characteristics of
the distribution of stabilities of the patterns to be learned. The optimal
learning curves of the SMCs, which give the minimal generalization error, are
obtained by tuning the coefficient controlling the trade-off between the error
and the regularization terms in the cost function. If the task is realizable by
the SMC, the optimal performance is better than that of a hard margin Support
Vector Machine and is very close to that of a Bayesian classifier.
Comment: 26 pages, 12 figures, submitted to Physical Review
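The cost function described above, a regularization term plus an error term weighted by a trade-off coefficient, can be sketched as hinge-loss minimization by plain subgradient descent. This is an illustrative reimplementation on synthetic realizable data, not the paper's statistical-mechanics calculation; the value of C, the learning rate, and the sizes are arbitrary choices.

```python
import random

def train_soft_margin(patterns, labels, C, epochs=200, lr=0.01):
    # minimize  0.5*|J|^2 + C * sum_mu max(0, 1 - label^mu * (J . x^mu))
    # by subgradient descent; C controls the error/regularization trade-off
    N = len(patterns[0])
    J = [0.0] * N
    for _ in range(epochs):
        grad = list(J)  # gradient of the 0.5*|J|^2 regularization term
        for x, y in zip(patterns, labels):
            margin = y * sum(j * xi for j, xi in zip(J, x))
            if margin < 1.0:  # pattern inside the margin: hinge is active
                for i in range(N):
                    grad[i] -= C * y * x[i]
        J = [j - lr * g for j, g in zip(J, grad)]
    return J

random.seed(3)
N, P = 20, 100
teacher = [random.gauss(0, 1) for _ in range(N)]
X = [[random.gauss(0, 1) for _ in range(N)] for _ in range(P)]
y = [1 if sum(t * xi for t, xi in zip(teacher, x)) >= 0 else -1 for x in X]
J = train_soft_margin(X, y, C=1.0)
train_err = sum(1 for x, yy in zip(X, y)
                if yy * sum(j * xi for j, xi in zip(J, x)) <= 0) / P
dot = sum(j * t for j, t in zip(J, teacher))
R = dot / (sum(j * j for j in J) ** 0.5 * sum(t * t for t in teacher) ** 0.5)
```

Because the task here is realizable (labels come from a linear teacher), the soft-margin solution aligns well with the teacher direction, the regime in which the paper finds SMC performance close to the Bayesian classifier.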
Retarded Learning: Rigorous Results from Statistical Mechanics
We study learning of probability distributions characterized by an unknown
symmetry direction. Based on an entropic performance measure and the
variational method of statistical mechanics we develop exact upper and lower
bounds on the scaled critical number of examples below which learning of the
direction is impossible. The asymptotic tightness of the bounds suggests an
asymptotically optimal method for learning nonsmooth distributions.
Comment: 8 pages, 1 figure