Fourier-Space Crystallography as Group Cohomology
We reformulate Fourier-space crystallography in the language of cohomology of
groups. Once the problem is understood as a classification of linear functions
on the lattice, restricted by a particular group relation, and identified by
gauge transformation, the cohomological description becomes natural. We review
Fourier-space crystallography and group cohomology, quote the fact that
cohomology is dual to homology, and exhibit several results, previously
established for special cases or by intricate calculation, that fall
immediately out of the formalism. In particular, we prove that {\it two phase
functions are gauge equivalent if and only if they agree on all their
gauge-invariant integral linear combinations} and show how to find all these
linear combinations systematically.
Comment: plain TeX, 14 pages (replaced 5/8/01 to include archive preprint number for reference 22).
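The "particular group relation" and gauge identification mentioned in the abstract can be stated concretely. In the standard Fourier-space formulation (our summary of the setup; notation may differ from the paper's), a phase function \Phi_g assigns to each point-group element g a linear function on the lattice L of wave vectors, subject to the group compatibility relation and identified under gauge transformations:

$$\Phi_{gh}(k) \equiv \Phi_g(hk) + \Phi_h(k) \pmod{1}, \qquad \Phi_g(k) \;\sim\; \Phi_g(k) + \chi(gk - k),$$

where \chi is any linear function on L. The first relation is a cocycle condition and the second adds a coboundary, which is how gauge-equivalence classes of phase functions acquire a description as a group cohomology.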
Symmetry of Magnetically Ordered Quasicrystals
The notion of magnetic symmetry is reexamined in light of the recent
observation of long range magnetic order in icosahedral quasicrystals [Charrier
et al., Phys. Rev. Lett. 78, 4637 (1997)]. The relation between the symmetry of
a magnetically-ordered (periodic or quasiperiodic) crystal, given in terms of a
``spin space group,'' and its neutron diffraction diagram is established. In
doing so, an outline of a symmetry classification scheme for magnetically
ordered quasiperiodic crystals is provided. Predictions are given for the
expected diffraction patterns of magnetically ordered icosahedral crystals,
provided their symmetry is well described by icosahedral spin space groups.
Comment: 5 pages. Accepted for publication in Phys. Rev. Lett.
Standard Anatomical and Visual Space for the Mouse Retina: Computational Reconstruction and Transformation of Flattened Retinae with the Retistruct Package
The concept of topographic mapping is central to the understanding of the visual system at many levels, from the developmental to the computational. It is important to be able to relate different coordinate systems, e.g. maps of the visual field and maps of the retina. Retinal maps are frequently based on flat-mount preparations. These use dissection and relaxing cuts to render the quasi-spherical retina into a 2D preparation. The variable nature of relaxing cuts and associated tears limits quantitative cross-animal comparisons. We present an algorithm, "Retistruct," that reconstructs retinal flat-mounts by mapping them into a standard, spherical retinal space. This is achieved by: stitching the marked-up cuts of the flat-mount outline; dividing the stitched outline into a mesh whose vertices then are mapped onto a curtailed sphere; and finally moving the vertices so as to minimise a physically-inspired deformation energy function. Our validation studies indicate that the algorithm can estimate the position of a point on the intact adult retina to within 8° of arc (3.6% of nasotemporal axis). The coordinates in reconstructed retinae can be transformed to visuotopic coordinates. Retistruct is used to investigate the organisation of the adult mouse visual system. We orient the retina relative to the nictitating membrane and compare this to eye muscle insertions. To align the retinotopic and visuotopic coordinate systems in the mouse, we utilised the geometry of binocular vision. In standard retinal space, the composite decussation line for the uncrossed retinal projection is located 64° away from the retinal pole. Projecting anatomically defined uncrossed retinal projections into visual space gives binocular congruence if the optical axis of the mouse eye is oriented at 64° azimuth and 22° elevation, in concordance with previous results. Moreover, using these coordinates, the dorsoventral boundary for S-opsin expressing cones closely matches the horizontal meridian.
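The reconstruction error above is quoted in degrees of arc on the retinal sphere. A minimal helper for that metric (our own sketch, not part of the Retistruct package, which is written in R): the great-circle angle between two points given as (latitude, longitude) pairs.

```python
import math

def arc_degrees(lat1, lon1, lat2, lon2):
    """Great-circle angle in degrees between two points on a unit sphere."""
    p1, l1, p2, l2 = map(math.radians, (lat1, lon1, lat2, lon2))
    cos_angle = (math.sin(p1) * math.sin(p2)
                 + math.cos(p1) * math.cos(p2) * math.cos(l1 - l2))
    # Clamp to [-1, 1] to guard against floating-point drift before acos.
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_angle))))

print(arc_degrees(0, 0, 0, 8))  # two points 8 degrees apart on the equator
```

A reconstructed point within "8° of arc" of its true position means this angle between the estimated and true locations is below 8.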
Evaluation of rate law approximations in bottom-up kinetic models of metabolism.
Background: The mechanistic description of enzyme kinetics in a dynamic model of metabolism requires specifying the numerical values of a large number of kinetic parameters. The parameterization challenge is often addressed through the use of simplifying approximations to form reaction rate laws with reduced numbers of parameters. Whether such simplified models can reproduce dynamic characteristics of the full system is an important question.
Results: In this work, we compared the local transient response properties of dynamic models constructed using rate laws with varying levels of approximation. These approximate rate laws were: 1) a Michaelis-Menten rate law with measured enzyme parameters, 2) a Michaelis-Menten rate law with approximated parameters, using the convenience kinetics convention, 3) a thermodynamic rate law resulting from a metabolite saturation assumption, and 4) a pure chemical reaction mass action rate law that removes the role of the enzyme from the reaction kinetics. We utilized in vivo data for the human red blood cell to compare the effect of rate law choices against the backdrop of physiological flux and concentration differences. We found that the Michaelis-Menten rate law with measured enzyme parameters yields an excellent approximation of the full system dynamics, while the other assumptions cause greater discrepancies in system dynamic behavior. However, iteratively replacing mechanistic rate laws with approximations resulted in a model that retains a high correlation with the true model behavior. Investigating this consistency, we determined that the order-of-magnitude differences among fluxes and concentrations in the network strongly influence the network dynamics. We further identified reaction features, such as thermodynamic reversibility, high substrate concentration, and lack of allosteric regulation, that make certain reactions more suitable for rate law approximations.
Conclusions: Overall, our work generally supports the use of approximate rate laws when building large-scale kinetic models, due to the key role that physiologically meaningful flux and concentration ranges play in determining network dynamics. However, we also showed that detailed mechanistic models show a clear benefit in prediction accuracy when data are available. The work here should help to provide guidance to future kinetic modeling efforts on the choice of rate law and parameterization approaches.
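Two of the rate-law forms compared above can be sketched directly for a single irreversible enzymatic reaction S -> P. The parameter values (vmax, km, k) are illustrative placeholders, not values from the red-blood-cell model:

```python
def michaelis_menten(s, vmax=1.0, km=0.5):
    """Michaelis-Menten rate law: saturates at vmax when s >> km."""
    return vmax * s / (km + s)

def mass_action(s, k=2.0):
    """Pure mass-action rate law: linear in substrate, no enzyme saturation."""
    return k * s

# The two laws diverge most strongly at high substrate concentration,
# where the enzymatic form saturates and the mass-action form does not.
for s in (0.1, 0.5, 2.0, 10.0):
    print(f"s={s:5.1f}  MM={michaelis_menten(s):.3f}  MA={mass_action(s):.3f}")
```

The divergence at saturating substrate levels is one reason the choice of rate law interacts with physiological concentration ranges, as the abstract emphasizes.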
On the joint residence time of N independent two-dimensional Brownian motions
We study the behavior of several joint residence times of N independent
Brownian particles in a disc in two dimensions. We consider: (i)
the time T_N(t) spent by all N particles simultaneously in the disc within the
time interval [0,t]; (ii) the time T_N^{(m)}(t) which at least m out of N
particles spend together in the disc within the time interval [0,t]; and (iii)
the time {\tilde T}_N^{(m)}(t) which exactly m out of N particles spend
together in the disc within the time interval [0,t]. We obtain very simple
exact expressions for the expectations of these three residence times in the
limit t\to\infty.
Comment: 8 pages.
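The first of these quantities can be estimated by direct Monte Carlo simulation: evolve N independent 2D Brownian motions and accumulate the time during which all of them are simultaneously inside the disc. All parameter values below (disc radius, step size, horizon) are our illustrative choices, not the paper's:

```python
import math
import random

def joint_residence_time(N=3, R=1.0, steps=20_000, dt=1e-3, seed=0):
    """Estimate T_N(t): time all N particles spend together in the disc
    of radius R within [0, t], t = steps * dt. Particles start at the origin."""
    rng = random.Random(seed)
    pos = [[0.0, 0.0] for _ in range(N)]
    sigma = math.sqrt(dt)  # per-coordinate step std for unit diffusion
    t_all = 0.0
    for _ in range(steps):
        inside = 0
        for p in pos:
            p[0] += rng.gauss(0.0, sigma)
            p[1] += rng.gauss(0.0, sigma)
            if p[0] * p[0] + p[1] * p[1] <= R * R:
                inside += 1
        if inside == N:  # all N particles together in the disc
            t_all += dt
    return t_all

print(joint_residence_time())
```

Counting timesteps where at least m (or exactly m) of the N particles are inside, instead of all N, gives the analogous estimates of T_N^{(m)}(t) and {\tilde T}_N^{(m)}(t).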
Order statistics of the trapping problem
When a large number N of independent diffusing particles are placed upon a
site of a d-dimensional Euclidean lattice randomly occupied by a concentration
c of traps, what is the m-th moment of the time t_{j,N} elapsed
until the first j are trapped? An exact answer is given in terms of the
probability Phi_M(t) that no particle of an initial set of M=N, N-1,..., N-j
particles is trapped by time t. The Rosenstock approximation is used to
evaluate Phi_M(t), and it is found that for a large range of trap
concentrations the m-th moment of t_{j,N} goes as x^{-m} and its variance as
x^{-2}, x being ln^{2/d}(1-c) ln N. A rigorous asymptotic expression (dominant
and two corrective terms) is given for the one-dimensional lattice.
Comment: 11 pages, 7 figures, to be published in Phys. Rev.
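The trapping setup described above is easy to simulate in one dimension: N independent random walkers start from a common trap-free site on a lattice whose sites are traps with probability c, and we record the time at which the j-th distinct walker is first absorbed. Parameters are illustrative, not the paper's:

```python
import random

def first_j_trapped(N=50, j=1, c=0.05, max_t=100_000, seed=1):
    """Time t_{j,N} until the first j of N walkers are trapped (one run)."""
    rng = random.Random(seed)
    traps = {}  # lazily decide whether each visited site is a trap

    def is_trap(site):
        if site not in traps:
            traps[site] = (rng.random() < c)
        return traps[site]

    walkers = [0] * N   # all start at the origin...
    traps[0] = False    # ...which is declared trap-free
    trapped = 0
    for t in range(1, max_t + 1):
        for i, x in enumerate(walkers):
            if x is None:           # already trapped
                continue
            x += rng.choice((-1, 1))
            if is_trap(x):
                walkers[i] = None
                trapped += 1
                if trapped >= j:
                    return t
            else:
                walkers[i] = x
    return None  # not yet trapped within max_t

print(first_j_trapped())
```

Averaging the m-th power of this first-passage time over many trap realizations and walker histories gives the moments discussed in the abstract; the key point is that all N walkers share one realization of the trap disorder.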
Sublocalization, superlocalization, and violation of standard single parameter scaling in the Anderson model
We discuss the localization behavior of localized electronic wave functions
in the one- and two-dimensional tight-binding Anderson model with diagonal
disorder. We find that the distributions of the local wave function amplitudes
at fixed distances from the localization center are well approximated by
log-normal fits which become exact at large distances. These fits are
consistent with the standard single parameter scaling theory for the Anderson
model in 1d, but they suggest that a second parameter is required to describe
the scaling behavior of the amplitude fluctuations in 2d. From the log-normal
distributions we calculate analytically the decay of the mean wave functions.
For short distances from the localization center we find stretched exponential
localization ("sublocalization") in both 1d and 2d. In 1d, for large
distances, the mean wave functions depend on the number of configurations N
used in the averaging procedure and decay faster than exponentially
("superlocalization") converging to simple exponential behavior only in the
asymptotic limit. In 2d, in contrast, the localization length increases
logarithmically with the distance from the localization center and
sublocalization occurs also in the second regime. The N-dependence of the mean
wave functions is weak. The analytical result agrees remarkably well with the
numerical calculations.
Comment: 12 pages with 9 figures and 1 table.
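The 1d tight-binding Anderson model with diagonal disorder can be probed with the standard transfer-matrix recursion psi_{n+1} = (E - eps_n) psi_n - psi_{n-1}, with eps_n uniform in [-W/2, W/2]. The growth rate of ln|psi_n| estimates the inverse localization length, and its spread over disorder configurations is the kind of amplitude fluctuation the abstract analyzes. This is our sketch; energy, disorder strength, and chain length are illustrative:

```python
import math
import random

def log_amplitude(L=2000, W=1.0, E=0.0, seed=0):
    """Return ln|psi_L| for one disorder realization of the 1d Anderson chain."""
    rng = random.Random(seed)
    psi_prev, psi = 1.0, 1.0
    scale = 0.0  # accumulated log-scale, to avoid floating-point overflow
    for _ in range(L):
        psi_prev, psi = psi, (E - rng.uniform(-W / 2, W / 2)) * psi - psi_prev
        m = max(abs(psi), abs(psi_prev))
        if m > 1e100:  # renormalize and remember the factor
            psi /= m
            psi_prev /= m
            scale += math.log(m)
    return scale + math.log(abs(psi))

# Per-site growth rate of ln|psi| over several disorder configurations:
samples = [log_amplitude(seed=s) / 2000 for s in range(20)]
mean = sum(samples) / len(samples)
print(f"inverse localization length ~ {mean:.4f}")
```

Collecting ln|psi_n| at fixed n over many configurations, rather than just the mean, is what exposes the (approximately log-normal) amplitude distributions discussed above.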
Subsumer-First: Steering Symbolic Reachability Analysis
Symbolic reachability analysis provides a basis for the verification of software systems by offering algorithmic support for the exploration of the program state space when searching for proofs or counterexamples. The choice of exploration strategy employed by the analysis has a direct impact on its success: the ability to find short counterexamples quickly and, as a complementary task, to efficiently perform the exhaustive state space traversal are of utmost importance for the majority of verification efforts. Existing exploration strategies can optimize only one of these objectives, which leads to a sub-optimal reachability analysis; e.g., breadth-first search may sacrifice exploration efficiency, and chaotic iteration can miss minimal counterexamples. In this paper we present subsumer-first, a new approach for steering symbolic reachability analysis that targets both minimal counterexample discovery and efficiency of exhaustive exploration. Our approach leverages the results of fixpoint checks performed during symbolic reachability analysis to bias the exploration strategy towards its objectives, and does not require any additional computation. We demonstrate how the subsumer-first approach can be applied to improve the efficiency of software verification tools based on predicate abstraction. Our experimental evaluation indicates the practical usefulness of the approach: we observe significant efficiency improvements (median value 40%) on difficult verification benchmarks from the transportation domain.
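One schematic way to read the subsumer-first idea (our illustration on a toy domain, not the paper's implementation): during worklist-based reachability over abstract states, the fixpoint check asks whether a new state is subsumed by one already explored; states that themselves subsume existing ones carry more information, so they are dequeued first.

```python
import heapq

def reachable(init, succ, subsumes):
    """Worklist reachability that prioritizes states subsuming explored ones."""
    explored = []        # maximal states seen so far
    counter = 0          # unique tie-breaker so the heap never compares states
    frontier = [(0, 0, init)]
    while frontier:
        _, _, s = heapq.heappop(frontier)
        if any(subsumes(e, s) for e in explored):
            continue     # fixpoint check: s adds no new information
        # Keep only maximal states; s replaces anything it subsumes.
        explored = [e for e in explored if not subsumes(s, e)] + [s]
        for t in succ(s):
            counter += 1
            # Lower rank = higher priority = subsumes more explored states.
            rank = -sum(subsumes(t, e) for e in explored)
            heapq.heappush(frontier, (rank, counter, t))
    return explored

# Toy domain: states are frozensets, subsumption is superset inclusion.
succ = lambda s: [s | {len(s)}] if len(s) < 4 else []
result = reachable(frozenset(), succ, lambda big, small: big >= small)
print(result)
```

In a real predicate-abstraction tool the states would be symbolic formulas and subsumption an entailment check; the point of the sketch is only that the subsumption results already computed for the fixpoint test can double as the exploration priority at no extra cost.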