A Lagrangian Piunikhin-Salamon-Schwarz morphism and two comparison homomorphisms in Floer homology
In this article we address two issues. First, we explore to what extent the
techniques of Piunikhin, Salamon and Schwarz in [PSS96] can be carried over to
Lagrangian Floer homology. In [PSS96] an isomorphism between Hamiltonian Floer
homology and the singular homology is established. In contrast, Lagrangian
Floer homology is not isomorphic to the singular homology of the Lagrangian
submanifold, in general. Depending on the minimal Maslov number, we construct
for certain degrees two homomorphisms between Lagrangian Floer homology and
singular homology. In degrees where both maps are defined we prove them to be
isomorphisms. Examples show that this statement is sharp.
Second, we construct two comparison homomorphisms between Lagrangian and
Hamiltonian Floer homology. They are subject to no degree restrictions and are proven
to be the natural analogs of the homomorphisms in singular homology induced by
the inclusion map of the Lagrangian submanifold into the ambient symplectic
manifold.
Comment: 41 pages, 14 figures. v2: major revision; v3: included detailed transversality proofs. Accepted by IMR
Theta rhythmicity enhances learning in adaptive STDP
The classical STDP window captures changes of a synaptic weight in response to the relative timing of a pre- and a postsynaptic spike (see e.g. Bi and Poo, 1998). Due to its static nature, however, it cannot account for nonlinear interactions between spikes. Several theoretical studies offer dynamic formulations of STDP, for example by modulating the synaptic weight change by variables like the synaptic calcium concentration (Shouval et al., 2002) or somatic depolarisation (Clopath et al., 2010), or by introducing spike triplet interactions (Pfister and Gerstner, 2006). Here, we propose a new model which is formulated as a set of differential equations (Schmiedt et al., 2010). The weight change is given by a differential Hebbian learning rule, which reproduces the STDP window for spike pairs. To account for the effects of repeated neuronal firing on the synaptic weight, we introduce modulations of the spike impact, which act on exponential traces of the spiking activity. We found that this model captures a series of experiments on STDP with complex spike patterns in cortex (Froemke et al., 2006) and hippocampus (Wang et al., 2005). When applied to continuous firing rates, our approach allows us to analyze the effects of given time courses of firing rates on the synaptic weight change, i.e. the filter properties of STDP. For sinusoidal modulations of baseline firing rates we find the strongest weight changes for modulation frequencies in the theta band, which plays a key role in learning. Furthermore, weight modifications in the hippocampus are predicted to be most prominent for baseline rates of around 5 Hz, in striking agreement with experimental findings.
This suggests that STDP-dependent learning is mediated by theta oscillations and modulated by the background firing rate, both of which are testable predictions of our theory.
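The abstract describes the model only qualitatively. As a rough illustration of the ingredients it names, an STDP pair rule acting on exponential traces of the spiking activity, here is a minimal Python sketch; the time constants, amplitudes, function names and Euler integration scheme are illustrative assumptions, not the equations of Schmiedt et al. (2010).

```python
import numpy as np

# Minimal pair-based STDP sketch driven by exponential traces of the pre-
# and postsynaptic spiking activity.  All parameter values below are
# illustrative assumptions, not taken from the model cited above.

dt = 1e-4                              # simulation step (s)
tau_plus, tau_minus = 17e-3, 34e-3     # trace time constants (assumed)
a_plus, a_minus = 0.010, 0.005         # potentiation / depression amplitudes (assumed)

def simulate_weight(pre_spikes, post_spikes, t_max, w0=0.5):
    """Evolve a single synaptic weight from pre/post spike times (in seconds)."""
    n_steps = int(round(t_max / dt))
    pre = np.zeros(n_steps, dtype=bool)
    post = np.zeros(n_steps, dtype=bool)
    pre[np.rint(np.asarray(pre_spikes) / dt).astype(int)] = True
    post[np.rint(np.asarray(post_spikes) / dt).astype(int)] = True
    x_pre = x_post = 0.0
    w = w0
    for i in range(n_steps):
        # exponential traces left behind by each spike
        x_pre += -dt / tau_plus * x_pre + (1.0 if pre[i] else 0.0)
        x_post += -dt / tau_minus * x_post + (1.0 if post[i] else 0.0)
        # pair rule: potentiate on a post spike, depress on a pre spike,
        # each weighted by the opposite trace
        if post[i]:
            w += a_plus * x_pre
        if pre[i]:
            w -= a_minus * x_post
    return w

# pre spike 10 ms before the post spike -> net potentiation
print(simulate_weight([0.010], [0.020], t_max=0.1))
```

Modulating the spike impact (the increments added to the traces) as a function of recent firing, as the abstract proposes, would be the natural extension of this skeleton.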
Comparing Survival Curves Using Rank Tests
Survival times of patients can be compared using rank tests in various experimental setups, including the two-sample case and the case of paired data. Attention is focussed on two frequently occurring complications in medical applications: censoring and tail alternatives. A review is given of the author's recent work on a new and simple class of censored rank tests. Various models for tail alternatives are discussed and the relation to censoring is demonstrated.
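To fix the setting, the classical two-sample log-rank test below illustrates a rank comparison of survival curves under right censoring; it is the standard test, not the new class of censored rank tests the abstract refers to, and the toy data are invented.

```python
import numpy as np
from scipy.stats import chi2

def logrank(time1, event1, time2, event2):
    """Two-sample log-rank test; event = 1 marks an observed death, 0 censoring."""
    t1, e1 = np.asarray(time1, float), np.asarray(event1, int)
    t2, e2 = np.asarray(time2, float), np.asarray(event2, int)
    event_times = np.unique(np.concatenate([t1[e1 == 1], t2[e2 == 1]]))
    obs1 = exp1 = var = 0.0
    for t in event_times:
        n1, n2 = np.sum(t1 >= t), np.sum(t2 >= t)              # numbers at risk at t
        d1 = np.sum((t1 == t) & (e1 == 1))                      # deaths at t, group 1
        d2 = np.sum((t2 == t) & (e2 == 1))
        n, d = n1 + n2, d1 + d2
        obs1 += d1
        exp1 += d * n1 / n
        if n > 1:
            var += d * (n1 / n) * (n2 / n) * (n - d) / (n - 1)  # hypergeometric variance
    stat = (obs1 - exp1) ** 2 / var
    return stat, chi2.sf(stat, df=1)                            # chi-square statistic, p-value

# toy data: the second group tends to survive longer; zeros mark censored times
print(logrank([3, 5, 7, 8, 9], [1, 1, 1, 0, 1],
              [6, 10, 12, 15, 18], [1, 1, 0, 1, 1]))
```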
Foundations of a Theory of Prominence in the Decimal System - Part II: Exactness Selection Rule, and Confirming Results
The information that is used to create a numerical response is typically diffuse and cannot be described by a distribution. A criterion to describe the information is its range of reasonable alternatives, corresponding to the worst-case/best-case analysis of practitioners in decision situations where distributions are missing. Empirical data show that numerical responses in such situations follow a rule that gives conditions for the exactness of the response. The rule says that the exactness is selected such that there are between 3 and 5 alternatives on this or a cruder level of exactness in the range of reasonable alternatives. This rule makes it possible to predict the exactness of responses, but also to draw conclusions about the exactness of the underlying information. Once known, it is a powerful tool for inferring the information and motives of subjects from their numerical responses. The paper introduces the rule and gives some empirical examples that support the theory. These examples concern retail price setting of firms, subjects' estimates of numbers of inhabitants of towns, and a bearing experiment in which different degrees of diffuseness are simulated.
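A toy sketch of one possible reading of this selection rule follows. It treats the candidate responses at a decimal level of exactness d simply as the multiples of d, which is a simplifying assumption of the sketch rather than part of the theory, and then keeps the crudest level that still leaves 3 to 5 candidates inside the range of reasonable alternatives.

```python
import math

def exactness_levels(lo, hi):
    """Decimal full-step levels 1*10^k, 2*10^k, 5*10^k spanning the range."""
    k_min = int(math.floor(math.log10(max(hi - lo, 1e-12)))) - 3
    k_max = int(math.ceil(math.log10(max(abs(lo), abs(hi), 1.0)))) + 1
    return sorted(m * 10.0 ** k for k in range(k_min, k_max + 1) for m in (1, 2, 5))

def select_exactness(lo, hi, min_alt=3, max_alt=5):
    """Crudest level d whose multiples inside [lo, hi] number between 3 and 5."""
    best = None
    for d in exactness_levels(lo, hi):
        count = math.floor(hi / d) - math.ceil(lo / d) + 1
        if min_alt <= count <= max_alt:
            best = d            # levels are sorted, so this keeps the crudest match
    return best

# e.g. a town-size estimate judged to lie between 120,000 and 180,000
print(select_exactness(120_000, 180_000))   # -> 20000.0 (120k, 140k, 160k, 180k)
```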
Control charts for health care monitoring under intermittent out-of-control behavior
Health care monitoring typically concerns attribute data with very low failure rates. Efficient control charts then signal if the waiting time till r (e.g. r≤5) failures is too small. An interesting alternative is the MAX-chart, which signals if all the associated r waiting times for a single failure are sufficiently small. In comparing these choices, the usual change point set-up has been used, in which going Out-of-Control (OoC) means that the failure rate suddenly jumps up and then stays at this higher level. However, another situation of interest is intermittent OoC behavior. In industrial settings, an OoC process can be adjusted to return to In-Control (IC), but in health care monitoring this is usually not an option, and stretches of OoC and IC behavior may alternate. Comparison of such intermittent alternatives to the change point situation shows that the former can be characterized as tail alternatives, in the sense that the difference w.r.t. the IC-distribution becomes more concentrated in the lower tail. This suggests generalizing the MAX-chart as follows: now signal if all but 1 (or 2) out of r individual waiting times are too small. A numerical study shows that this approach indeed works well.
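A minimal sketch of the signalling rule just described, in Python. The threshold ua that counts a single-failure waiting time as "small" is taken as given; choosing it so that the in-control false-alarm behavior is aligned across charts is the calibration step discussed in the abstract and is not reproduced here.

```python
import numpy as np

def max_chart_signal(waiting_times, ua, allow=0):
    """Signal if all but `allow` of the r single-failure waiting times are <= ua.

    waiting_times : the r most recent numbers of cases between failures
    ua            : upper limit below which a waiting time counts as 'small' (assumed given)
    allow         : 0 gives the original MAX-chart; 1 or 2 gives the
                    generalization aimed at intermittent out-of-control behavior
    """
    w = np.asarray(waiting_times)
    return np.sum(w <= ua) >= len(w) - allow

# r = 5 waiting times between failures; one long in-control stretch need not mask the alarm
print(max_chart_signal([12, 8, 15, 240, 10], ua=30, allow=1))   # True
print(max_chart_signal([12, 8, 15, 240, 10], ua=30, allow=0))   # False
```

Allowing one or two large waiting times is exactly what makes the generalized chart robust to alternating IC and OoC stretches.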
Improved binomial charts for monitoring high-quality processes
For processes concerning attribute data with (very) small failure rate p, negative binomial control charts are often used. The decision whether to stop or continue is made each time r failures have occurred, for some r≥1. Finding the optimal r for detecting a given increase of p first requires alignment of the charts in terms of in-control behavior. In the present paper, binomial charts are subjected to this same requirement. Subsequent study reveals that the resulting charts are quite attractive in several respects, such as detection power. For the case of unknown p, an estimated version of the chart is derived and studied.
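As a sketch of the negative binomial side of this comparison, the snippet below fixes a lower control limit for the number of cases until the r-th failure from the in-control rate p0 and an assumed per-decision false-alarm probability alpha; the function names and the choice of alpha are illustrative, not the paper's alignment procedure.

```python
from scipy.stats import nbinom

def lower_limit(r, p0, alpha=0.001):
    """Largest n such that P(cases until r-th failure <= n | p0) <= alpha."""
    # scipy's nbinom counts the non-failing cases before the r-th failure
    k = int(nbinom.ppf(alpha, r, p0))
    while k >= 0 and nbinom.cdf(k, r, p0) > alpha:
        k -= 1
    return k + r                      # convert back to the total number of cases

def negbin_chart_signal(cases_until_rth_failure, r, p0, alpha=0.001):
    """Signal when the r-th failure arrives after suspiciously few cases."""
    return cases_until_rth_failure <= lower_limit(r, p0, alpha)

# e.g. in-control failure rate 1%, a decision after every r = 3 failures
print(lower_limit(r=3, p0=0.01))
print(negbin_chart_signal(25, r=3, p0=0.01))
```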
