The Gysin map is compatible with mixed Hodge structures
We prove that the Gysin map is compatible with mixed Hodge structures. Comment: Published in CRM Proceedings and Lecture Series, vol. 38, 200
The projectors of the decomposition theorem are motivic
We prove that the projectors arising from the decomposition theorem applied
to a projective map of quasi-projective varieties are absolute Hodge, André
motivated, Tate and Ogus classes. As a by-product, we introduce, in
characteristic zero, the notions of algebraic de Rham intersection cohomology
groups of a quasi-projective variety and of the intersection cohomology motive of a
projective variety.
Weak violation of universality for Polyelectrolyte Chains: Variational Theory and Simulations
A variational approach is used to calculate the free energy and the
conformational properties of a polyelectrolyte chain in $d$ dimensions. We
consider in detail the case of pure Coulombic interactions between the
monomers, when screening is not present, in order to compute the end-to-end
distance and the asymptotic properties of the chain as a function of the
polymer chain length $N$. We find that the end-to-end distance grows as a power
of $N$, with an exponent $\nu$ governed by the exponent $\lambda$ that
characterizes the long-range interaction $U(r) \sim r^{-\lambda}$. The exponent
$\nu$ is shown to be non-universal, depending on the strength of the Coulomb
interaction. We check our findings by a direct numerical minimization of the
variational energy for chains of increasing size $N$. The electrostatic blob
picture, expected for small enough values of the interaction strength, is
quantitatively described by the variational approach. We also perform Monte
Carlo simulations for chains of finite length; the non-universal behavior of
the exponent $\nu$, previously derived within the variational method, is
confirmed by the simulation results, establishing non-universal behavior for
the polyelectrolyte chain. Particular attention is devoted to the homopolymer
chain problem, when short-range contact interactions are present. Comment: to
appear in European Phys. Journal E (Soft Matter)
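For orientation, a standard Flory-type estimate (a textbook argument, not taken from the paper) for a chain of $N$ monomers with long-range repulsion $U(r) \sim u\, r^{-\lambda}$ balances the entropic elasticity of a coil of size $R$ against the pair interaction energy:
\[
F(R) \;\sim\; \frac{R^{2}}{N b^{2}} \;+\; u\,\frac{N^{2}}{R^{\lambda}},
\qquad
\frac{\partial F}{\partial R}=0
\;\Longrightarrow\;
R \;\sim\; \left(u\,b^{2}\right)^{\frac{1}{\lambda+2}} N^{\nu},
\quad
\nu=\frac{3}{\lambda+2}.
\]
For unscreened Coulomb repulsion in three dimensions ($\lambda = 1$) this crude estimate gives $\nu = 1$, an almost fully stretched chain; the variational treatment summarized above probes the corrections to, and the weak violation of, this universal Flory value.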
Polymer chain in a quenched random medium: slow dynamics and ergodicity breaking
The Langevin dynamics of a self-interacting chain embedded in a quenched
random medium is investigated by making use of the generating functional method
and the one-loop (Hartree) approximation. We show how this intrinsic disorder
gives rise to different dynamical regimes. Namely, within the Rouse
characteristic time interval anomalous diffusion shows up. The corresponding
subdiffusive dynamical exponents are explicitly calculated and thoroughly
discussed. On larger time scales the disorder drives the center of mass of the
chain into a trapped, or frozen, state provided that the Harris parameter,
which combines the disorder strength $\Delta$, the Kuhn segment length $b$, the
chain length $N$ and the Flory exponent $\nu$, exceeds a threshold. We derive
the general equation for the non-ergodicity function $f_p$ which characterizes
the amplitude of the frozen Rouse modes with index $p$. The numerical solution
of this equation shows that the different Rouse modes freeze up at the same
critical disorder strength, characterized by an exponent that does not depend
on the solvent quality. Comment: 17 pages, 6 figures, submitted to EPJB
(condensed matter)
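For reference, the Rouse modes of a chain of $N$ segments with positions $\mathbf{r}(n,t)$, and the non-ergodicity parameter measuring the frozen amplitude of mode $p$, can be written in the standard form (textbook definitions, not quoted from the paper):
\[
\mathbf{X}_p(t)=\frac{1}{N}\int_0^N dn\; \cos\!\left(\frac{p\pi n}{N}\right)\mathbf{r}(n,t),
\qquad
f_p=\lim_{t\to\infty}\frac{\langle \mathbf{X}_p(t)\cdot\mathbf{X}_p(0)\rangle}{\langle \mathbf{X}_p(0)^{2}\rangle},
\]
so that $f_p=0$ in the ergodic phase, while $f_p>0$ signals a frozen (trapped) mode; the general equation mentioned in the abstract determines this quantity as a function of the disorder strength.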
Polyelectrolyte chains in poor solvent. A variational description of necklace formation
We study the properties of polyelectrolyte chains under different solvent
conditions using a variational technique. The free energy and the
conformational properties of a polyelectrolyte chain are obtained by minimizing
a variational free energy that depends on trial probabilities characterizing
the conformation of the chain. The Gaussian approximation is considered for a
ring and for an open chain of length $N$, in poor and theta solvent conditions,
including a Coulomb repulsion between the monomers. In theta solvent conditions
the blob size is measured and found to agree with scaling theory, including the
charge depletion effects expected for the case of an open chain. In poor
solvent conditions, a globule instability, driven by the electrostatic
repulsion, is observed. We also notice an inhomogeneous behavior of the
monomer-monomer correlation function, reminiscent of necklace formation in
poor-solvent polyelectrolyte solutions. A global phase diagram in terms of
solvent quality and inverse Bjerrum length is presented. Comment: submitted to
EPJE (Soft Matter)
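The Gaussian variational scheme referred to here is, in its generic form, an instance of the Gibbs-Bogoliubov bound (a sketch of the standard construction; the particular trial Hamiltonian written below is an illustrative assumption, not the paper's notation):
\[
F \;\le\; F_{\mathrm{var}}[H_0] \;=\; F_0 \;+\; \langle H - H_0\rangle_0,
\qquad
H_0 \;=\; \frac{3 k_B T}{2}\sum_{p} k_p\, \mathbf{X}_p^{2},
\]
where the bound is minimized over the effective spring constants $k_p$ of a Gaussian trial Hamiltonian for the chain modes; the optimal $k_p$ then encode the conformational properties (blob size, globule instability, necklace-like correlations) discussed above.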
Ocular hypertension in myopia: analysis of contrast sensitivity
Purpose: We evaluated the evolution of contrast sensitivity reduction in patients affected by ocular hypertension and glaucoma, with low to moderate myopia. We also evaluated the relationship between contrast sensitivity and the mean deviation of the visual field.
Materials and methods: 158 patients (316 eyes), aged between 38 and 57 years, were enrolled and divided into 4 groups: emmetropes, myopes, myopes with ocular hypertension (IOP ≥ 21 ± 2 mmHg) and myopes with glaucoma. All patients underwent anamnesis and a complete eye examination, tonometric curves with Goldmann's applanation tonometer, cup/disc ratio evaluation, gonioscopy with Goldmann's three-mirror lens, automated perimetry (Humphrey 30-2 full-threshold test) and contrast sensitivity evaluation with Pelli-Robson charts. A contrast sensitivity below 1.8 Logarithm of the Minimum Angle of Resolution (LogMAR) was considered abnormal.
Results: Contrast sensitivity was reduced in the group of myopes with ocular hypertension (1.788 LogMAR) and in the group of myopes with glaucoma (1.743 LogMAR), while it was preserved in the group of myopes (2.069 LogMAR) and in the group of emmetropes (1.990 LogMAR). We also found a strong correlation between contrast sensitivity reduction and the mean deviation of the visual field in myopes with glaucoma (correlation coefficient = 0.86) and in myopes with ocular hypertension (correlation coefficient = 0.78).
Conclusions: Contrast sensitivity assessment with the Pelli-Robson test should be performed in all patients with moderate myopia, ocular hypertension and an optic disc suspicious for glaucoma, as it may be useful in the early diagnosis of the disease.
Introduction
Contrast can be defined as the ability of the eye to discriminate differences in luminance between a stimulus and its background. Contrast sensitivity is the inverse of the minimal contrast necessary to make an object visible: the lower the contrast that can be detected, the greater the sensitivity, and vice versa.
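In formulas (a standard way of writing these definitions, not quoted from the paper), using the Weber contrast between a stimulus of luminance $L_s$ on a background of luminance $L_b$:
\[
C=\frac{|L_s-L_b|}{L_b},
\qquad
CS=\frac{1}{C_{\mathrm{threshold}}},
\qquad
\log CS=\log_{10}\frac{1}{C_{\mathrm{threshold}}},
\]
so that, for example, a patient who can just detect letters at 1% contrast ($C_{\mathrm{threshold}}=0.01$) has $CS=100$ and $\log CS=2.0$, the typical order of magnitude of Pelli-Robson scores in healthy eyes.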
Contrast sensitivity is a fundamental aspect of vision, together with visual acuity: the latter defines the smallest spatial detail that the subject can discriminate under optimal conditions, but it only provides information about the size of the stimulus that the eye is able to perceive. The evaluation of contrast sensitivity, instead, provides information that cannot be obtained from the measurement of visual acuity alone, as it establishes the minimum difference in luminance that must exist between the stimulus and its background for the retina to be adequately stimulated and perceive the stimulus. The clinical methods for examining contrast sensitivity (gratings, luminance gradients, and variable-contrast or low-contrast optotype charts) relate the two parameters on which the ability to perceive an object distinctly depends, namely the difference in luminance between two adjacent areas and the spatial frequency, which is linked to the size of the object.
The measurement of contrast sensitivity is therefore valuable in the diagnosis and follow-up of some important eye conditions such as glaucoma. Studies show that contrast sensitivity can be related to the data obtained with visual perimetry, especially to the perimetric damage of the central area and of the optic nerve head.
Cost estimation of spatial join in SpatialHadoop
Spatial join is an important operation in geo-spatial applications, since it is frequently used for performing data analysis involving geographical information. Much effort has been devoted in the past decades to providing efficient algorithms for spatial join, and this becomes particularly important as the amount of spatial data to be processed increases. In recent years, the MapReduce approach has become a de facto standard for processing large amounts of data (big data), and several attempts have been made to extend existing frameworks to the processing of spatial data. In this context, several different MapReduce implementations of spatial join have been defined, which mainly differ in the use of a spatial index and in the way this index is built and used. In general, none of these algorithms can be considered better than the others; rather, the best choice depends on the characteristics of the involved datasets. The aim of this work is to analyse these algorithms in depth and to define a cost model for ranking them based on the characteristics of the datasets at hand (e.g., selectivity or spatial properties). This cost model has been extensively tested against a set of synthetic datasets in order to prove its effectiveness.
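As a concrete illustration of the kind of algorithm whose cost is being modelled, here is a minimal single-machine sketch of a partition-based spatial join (the grid size, function names and the reference-point duplicate-avoidance rule are illustrative choices, not SpatialHadoop's actual API):

# Minimal partition-based spatial join sketch (PBSM-style):
# rectangles are assigned to every grid cell they overlap, candidate
# pairs are generated per cell, and duplicate reports are avoided by
# keeping a pair only in the cell containing the lower-left corner of
# the intersection of the two rectangles (reference-point technique).

from collections import defaultdict
from itertools import product

Rect = tuple  # (xmin, ymin, xmax, ymax)

def intersects(a: Rect, b: Rect) -> bool:
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

def cells_for(r: Rect, cell: float):
    """All grid cell indices overlapped by rectangle r."""
    x0, y0 = int(r[0] // cell), int(r[1] // cell)
    x1, y1 = int(r[2] // cell), int(r[3] // cell)
    return product(range(x0, x1 + 1), range(y0, y1 + 1))

def grid_spatial_join(A, B, cell=10.0):
    """Return index pairs (i, j) with A[i] intersecting B[j]."""
    grid = defaultdict(lambda: ([], []))
    for i, r in enumerate(A):
        for c in cells_for(r, cell):
            grid[c][0].append(i)
    for j, r in enumerate(B):
        for c in cells_for(r, cell):
            grid[c][1].append(j)

    result = []
    for (cx, cy), (ia, ib) in grid.items():
        for i in ia:
            for j in ib:
                if not intersects(A[i], B[j]):
                    continue
                # reference point of the intersection rectangle
                rx = max(A[i][0], B[j][0])
                ry = max(A[i][1], B[j][1])
                if int(rx // cell) == cx and int(ry // cell) == cy:
                    result.append((i, j))
    return result

if __name__ == "__main__":
    A = [(0, 0, 5, 5), (12, 12, 20, 18)]
    B = [(3, 3, 15, 15), (100, 100, 101, 101)]
    print(grid_spatial_join(A, B))  # -> [(0, 0), (1, 0)]

In a MapReduce setting the per-cell work corresponds to what a single reducer would do, which is why the cost of such joins depends strongly on dataset selectivity and on how the objects are spread over the partitions, exactly the dataset characteristics the cost model above takes into account.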