148 research outputs found
Private costs of water conservation: a case study in the Cantareira-Mantiqueira Corridor Region.
This study evaluates the private opportunity cost of an extensive forest recovery program in the Cantareira-Mantiqueira Corridor Region and discusses its results around three central questions: i. What is the private opportunity cost of forest restoration for the main land-use activities in the Cantareira-Mantiqueira Corridor Region? ii. How does the private opportunity cost vary throughout the region? iii. What are the most cost-effective PES strategies available for the Cantareira-Mantiqueira Corridor Region?
Thouless-Anderson-Palmer equation for analog neural network with temporally fluctuating white synaptic noise
Effects of synaptic noise on the retrieval process of associative memory
neural networks are studied from the viewpoint of neurobiological and
biophysical understanding of information processing in the brain. We
investigate the statistical mechanical properties of stochastic analog neural
networks with temporally fluctuating synaptic noise, which is assumed to be
white noise. Such networks, in general, defy the use of the replica method,
since they have no energy concept. The self-consistent signal-to-noise analysis
(SCSNA), which is an alternative to the replica method for deriving a set of
order parameter equations, requires no energy concept and thus becomes
available in studying networks without energy functions. Applying the SCSNA to
stochastic networks requires knowledge of the Thouless-Anderson-Palmer (TAP)
equation, which defines the deterministic networks equivalent to the original
stochastic ones. Studies of the TAP equation in the case without an energy
concept, which is of particular interest here, are scarce, although the
equation is closely related to the SCSNA in the case with an energy concept.
This paper aims to derive the TAP equation for networks with synaptic noise,
together with a set of order parameter equations, by a hybrid use of the
cavity method and the SCSNA.
Comment: 13 pages, 3 figures
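As a toy illustration of the deterministic mean-field picture behind TAP-type equations, the sketch below iterates the naive mean-field fixed-point equations m_i = tanh(beta * sum_j J_ij m_j) for a small Hopfield network with Hebbian couplings. It omits the Onsager reaction term that distinguishes the full TAP equation, and all parameter values are illustrative assumptions, not taken from the paper.

```python
import math
import random

random.seed(0)
N, P, beta = 120, 2, 2.0

# Random binary patterns and Hebbian couplings J_ij = (1/N) sum_mu xi_i^mu xi_j^mu
xi = [[random.choice([-1, 1]) for _ in range(N)] for _ in range(P)]
J = [[0.0 if i == j else sum(xi[mu][i] * xi[mu][j] for mu in range(P)) / N
      for j in range(N)] for i in range(N)]

# Naive mean-field iteration m_i <- tanh(beta * sum_j J_ij m_j),
# started from a noisy version of pattern 0
m = [0.6 * xi[0][i] for i in range(N)]
for _ in range(30):
    m = [math.tanh(beta * sum(J[i][j] * m[j] for j in range(N)))
         for i in range(N)]

# Overlap with the stored pattern approaches the fixed point of m = tanh(beta*m)
overlap = sum(xi[0][i] * m[i] for i in range(N)) / N
print(round(overlap, 3))
```

At this low loading (P/N small) the crosstalk noise is negligible, so the overlap settles close to the single-pattern fixed point.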
Diagonalization of replicated transfer matrices for disordered Ising spin systems
We present an alternative procedure for solving the eigenvalue problem of
replicated transfer matrices describing disordered spin systems with (random)
1D nearest neighbor bonds and/or random fields, possibly in combination with
(random) long range bonds. Our method is based on transforming the original
eigenvalue problem for a $2^n \times 2^n$ matrix (where $n \to 0$) into an
eigenvalue problem for integral operators. We first develop our formalism for
the Ising chain with random bonds and fields, where we recover known results.
We then apply our methods to models of spins which interact simultaneously via
a one-dimensional ring and via more complex long-range connectivity structures,
e.g. $(1+\infty)$-dimensional neural networks and `small world' magnets.
Numerical simulations confirm our predictions satisfactorily.
Comment: 24 pages, LaTeX, IOP macros
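For the pure one-dimensional ingredient of this construction, an ordinary (non-replicated) transfer-matrix computation can be checked against the closed form $Z = 2\prod_k 2\cosh(\beta J_k)$ for an open random-bond Ising chain in zero field. The minimal sketch below (parameter values are arbitrary assumptions) does exactly that.

```python
import math
import random

random.seed(0)
beta, N = 0.4, 20
J = [random.gauss(0.0, 1.0) for _ in range(N - 1)]  # random nearest-neighbour bonds

# Partition function via a product of 2x2 transfer matrices
# T_k = [[e^{beta J_k}, e^{-beta J_k}], [e^{-beta J_k}, e^{beta J_k}]]
v = [1.0, 1.0]
for Jk in J:
    a, b = math.exp(beta * Jk), math.exp(-beta * Jk)
    v = [a * v[0] + b * v[1], b * v[0] + a * v[1]]
Z_tm = v[0] + v[1]

# Closed form for the open chain in zero field: Z = 2 * prod_k 2 cosh(beta J_k)
Z_exact = 2.0
for Jk in J:
    Z_exact *= 2.0 * math.cosh(beta * Jk)

print(Z_tm, Z_exact)
```

The two numbers agree to machine precision; the replicated case in the paper generalizes this 2x2 product to the $n \to 0$ limit of a $2^n \times 2^n$ product.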
Distance to range edge determines sensitivity to deforestation
It is generally assumed that deforestation affects a species consistently across space; however, populations near their geographic range edge may exist at their niche limits and therefore be more sensitive to disturbance. We found that, both within and across Atlantic Forest bird species, populations are more sensitive to deforestation when near their range edge. In fact, the negative effects of deforestation on bird occurrence switched to positive in the range core (>829 km), in line with Ellenberg's rule. We show that the proportion of populations at their range core and edge varies across Brazil, suggesting that deforestation effects on communities, and hence the most appropriate conservation action, also vary geographically.
Hierarchical Self-Programming in Recurrent Neural Networks
We study self-programming in recurrent neural networks where both neurons
(the `processors') and synaptic interactions (`the programme') evolve in time
simultaneously, according to specific coupled stochastic equations. The
interactions are divided into a hierarchy of groups with adiabatically
separated and monotonically increasing time-scales, representing sub-routines
of the system programme of decreasing volatility. We solve this model in
equilibrium, assuming ergodicity at every level, and find as our
replica-symmetric solution a formalism with a structure similar but not
identical to Parisi's $K$-step replica symmetry breaking scheme. Apart from
differences in details of the equations (due to the fact that here
interactions, rather than spins, are grouped into clusters with different
time-scales), in the present model the block sizes of the emerging
ultrametric solution are not restricted to the interval $[0,1]$, but are
independent control parameters, defined in terms of the noise strengths of the
various levels in the hierarchy, which can take any value in $[0,\infty)$.
This is shown to lead to extremely rich phase diagrams, with an abundance of
first-order transitions especially when the level of stochasticity in the
interaction dynamics is chosen to be low.
Comment: 53 pages, 19 figures. Submitted to J. Phys.
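A minimal caricature of such coupled dynamics, with one fast level (Glauber spins, the `processors') and one slow level (Hebbian coupling reinforcement, the `programme'), can be simulated directly. Everything below (sizes, rates, temperatures, the single slow level instead of the paper's full hierarchy) is an illustrative assumption; the point is only that slow coupling updates driven by the spin states reduce frustration, i.e. lower the energy.

```python
import math
import random

random.seed(1)
N, beta, eta = 40, 2.0, 0.05

# Random symmetric couplings and a random initial spin configuration
J = [[0.0] * N for _ in range(N)]
for i in range(N):
    for j in range(i + 1, N):
        J[i][j] = J[j][i] = random.gauss(0.0, 1.0) / math.sqrt(N)
s = [random.choice([-1, 1]) for _ in range(N)]

def energy():
    """Energy per spin, E = -(1/2N) sum_ij J_ij s_i s_j."""
    return -0.5 * sum(J[i][j] * s[i] * s[j]
                      for i in range(N) for j in range(N)) / N

def glauber_sweep():
    """One sweep of fast spin dynamics at inverse temperature beta."""
    for i in range(N):
        h = sum(J[i][j] * s[j] for j in range(N))
        x = 2.0 * beta * h
        p = 1.0 if x > 30 else 0.0 if x < -30 else 1.0 / (1.0 + math.exp(-x))
        s[i] = 1 if random.random() < p else -1

E_start = energy()
for _ in range(25):                      # slow 'programme' updates
    for _ in range(5):
        glauber_sweep()                  # fast 'processor' equilibration
    for i in range(N):                   # Hebbian reinforcement of current state
        for j in range(i + 1, N):
            J[i][j] = J[j][i] = J[i][j] + eta * s[i] * s[j]
E_end = energy()
print(E_start, E_end)
```

Each slow update lowers the energy of the currently visited configuration, so the energy per spin drifts steadily downward, the elementary mechanism behind the frustration-reducing adiabatic dynamics studied in the paper.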
Identification of an elaborate complex mediating postsynaptic inhibition
Inhibitory synapses dampen neuronal activity through postsynaptic hyperpolarization. The composition of the inhibitory postsynapse and the mechanistic basis of its regulation, however, remain poorly understood. We used an in vivo chemico-genetic proximity-labeling approach to discover inhibitory postsynaptic proteins. Quantitative mass spectrometry not only recapitulated known inhibitory postsynaptic proteins, but also revealed a large network of new proteins, many of which are either implicated in neurodevelopmental disorders or are of unknown function. CRISPR-depletion of one of these previously uncharacterized proteins, InSyn1, led to decreased postsynaptic inhibitory sites, reduced frequency of miniature inhibitory currents, and increased excitability in the hippocampus. Our findings uncover a rich and functionally diverse assemblage of previously unknown proteins that regulate postsynaptic inhibition and might contribute to developmental brain disorders.
Statistical Mechanics of Soft Margin Classifiers
We study the typical learning properties of the recently introduced Soft
Margin Classifiers (SMCs), learning realizable and unrealizable tasks, with the
tools of Statistical Mechanics. We derive analytically the behaviour of the
learning curves in the regime of very large training sets. We obtain
exponential and power laws for the decay of the generalization error towards
the asymptotic value, depending on the task and on general characteristics of
the distribution of stabilities of the patterns to be learned. The optimal
learning curves of the SMCs, which give the minimal generalization error, are
obtained by tuning the coefficient controlling the trade-off between the error
and the regularization terms in the cost function. If the task is realizable by
the SMC, the optimal performance is better than that of a hard margin Support
Vector Machine and is very close to that of a Bayesian classifier.
Comment: 26 pages, 12 figures, submitted to Physical Review
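The error/regularization trade-off that is tuned analytically in the paper can be mimicked numerically with a plain hinge-loss-plus-L2 classifier trained by stochastic subgradient descent on teacher-generated (realizable) data. The data model, learning rate, and regularization strength below are illustrative assumptions, not the paper's setup.

```python
import random

random.seed(0)
d, n, lam, lr = 5, 400, 0.01, 0.05

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

# Realizable task: labels come from a random teacher vector w_star
w_star = [random.gauss(0.0, 1.0) for _ in range(d)]
data = []
for _ in range(n):
    x = [random.gauss(0.0, 1.0) for _ in range(d)]
    data.append((x, 1 if dot(w_star, x) >= 0 else -1))

# Soft-margin training: subgradient steps on hinge loss + (lam/2)*|w|^2,
# where lam controls the trade-off between error and regularization terms
w = [0.0] * d
for _ in range(30):
    random.shuffle(data)
    for x, y in data:
        if y * dot(w, x) < 1.0:  # margin violated: hinge term is active
            w = [wi + lr * (y * xi - lam * wi) for wi, xi in zip(w, x)]
        else:                    # only the regularizer pulls on w
            w = [wi * (1.0 - lr * lam) for wi in w]

accuracy = sum(1 for x, y in data if y * dot(w, x) > 0) / n
print(accuracy)
```

Since the task is realizable, the trained student separates almost all of the training set; sweeping lam would trace out the trade-off the paper optimizes analytically.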
Slowly evolving geometry in recurrent neural networks I: extreme dilution regime
We study extremely diluted spin models of neural networks in which the
connectivity evolves in time, although adiabatically slowly compared to the
neurons, according to stochastic equations which on average aim to reduce
frustration. The (fast) neurons and (slow) connectivity variables equilibrate
separately, but at different temperatures. Our model is exactly solvable in
equilibrium. We obtain phase diagrams upon making the condensed ansatz (i.e.
recall of one pattern). These show that, as the connectivity temperature is
lowered, the volume of the retrieval phase diverges and the fraction of
mis-aligned spins is reduced. Still, one always retains a region in the
retrieval phase where recall states other than the one corresponding to the
`condensed' pattern are locally stable, so the associative memory character of
our model is preserved.
Comment: 18 pages, 6 figures
Slowly evolving random graphs II: Adaptive geometry in finite-connectivity Hopfield models
We present an analytically solvable random graph model in which the
connections between the nodes can evolve in time, adiabatically slowly compared
to the dynamics of the nodes. We apply the formalism to finite connectivity
attractor neural network (Hopfield) models and we show that due to the
minimisation of the frustration effects the retrieval region of the phase
diagram can be significantly enlarged. Moreover, the fraction of misaligned
spins is reduced by this effect, and is smaller than in the infinite
connectivity regime. The main cause of this difference is found to be the
non-zero fraction of sites with vanishing local field when the connectivity is
finite.
Comment: 17 pages, 8 figures
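The role of vanishing local fields at finite connectivity can be seen in a direct zero-temperature simulation of a (statically) diluted Hopfield network. Connectivity, pattern load, and corruption level below are illustrative assumptions, and the random graph here is fixed rather than slowly adapting as in the paper.

```python
import random

random.seed(2)
N, c, P = 300, 10, 2

xi = [[random.choice([-1, 1]) for _ in range(N)] for _ in range(P)]
# Each site i listens to c randomly chosen other sites (finite connectivity)
nbr = [random.sample([j for j in range(N) if j != i], c) for i in range(N)]

def field(i, s):
    """Integer-valued c*h_i = sum_{j in nbr(i)} sum_mu xi_i^mu xi_j^mu s_j."""
    return sum(xi[mu][i] * xi[mu][j] * s[j]
               for j in nbr[i] for mu in range(P))

# Start from pattern 0 with 10% of spins flipped, then do zero-T sweeps;
# spins with exactly vanishing local field keep their current state
s = [xi[0][i] * (-1 if random.random() < 0.1 else 1) for i in range(N)]
zero_field_fraction = sum(1 for i in range(N) if field(i, s) == 0) / N
for _ in range(5):
    for i in range(N):
        h = field(i, s)
        if h != 0:
            s[i] = 1 if h > 0 else -1

overlap = sum(xi[0][i] * s[i] for i in range(N)) / N
print(zero_field_fraction, overlap)
```

With only c neighbours, exact cancellations in the Hebbian sum can leave some sites with zero local field; such frozen spins are the mechanism the abstract identifies as the main difference from the infinite-connectivity regime.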
- …