759 research outputs found

    Evolution of cosmological constant in effective gravity

    In contrast to the phenomenon of nullification of the cosmological constant in the equilibrium vacuum, which is a general property of any quantum vacuum, there are many options for modifying the Einstein equation so that the cosmological constant can evolve in a non-equilibrium vacuum. An attempt is made to extend the Einstein equation in the direction suggested by the condensed-matter analogy of the quantum vacuum. Different scenarios are found depending on the behavior of, and the relation between, the relaxation parameters involved; some of these scenarios have been discussed in the literature. The first reproduces the scenario in which the effective cosmological constant emerges as a constant of integration. The second describes the situation in which, after a cosmological phase transition, the cosmological constant drops from zero to a large negative value and then relaxes from this large negative value back to zero and on to a small positive value. In the third example the relaxation time is not a constant but depends on matter; this scenario demonstrates that the vacuum energy (or a fraction of it) can play the role of cold dark matter.
    Comment: LaTeX file, 5 pages, no figures, version submitted to JETP Letters
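    As an illustrative aside (not taken from the abstract), the relaxation scenarios sketched above can be summarized by a schematic equation in which the effective cosmological constant decays toward its equilibrium value on a time scale \tau; in the third scenario this time scale would itself depend on the matter content:

        % Schematic relaxation law; purely illustrative, not the paper's field equations.
        \[
          \frac{d\Lambda}{dt} = -\,\frac{\Lambda - \Lambda_{\mathrm{eq}}}{\tau},
          \qquad \tau = \tau(\rho_{\mathrm{matter}}) \ \text{(third scenario)} .
        \]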

    The Revival of the Unified Dark Energy-Dark Matter Model?

    We consider the generalized Chaplygin gas (GCG) proposal for the unification of dark energy and dark matter and show that it admits a unique decomposition into dark energy and dark matter components once phantom-like dark energy is excluded. Within this framework, we study structure formation and show that the difficulties associated with unphysical oscillations or blow-up in the matter power spectrum can be circumvented. Furthermore, we show that the dominance of dark energy is related to the time when energy density fluctuations start deviating from the linear $\delta \sim a$ behaviour.
    Comment: 6 pages, 4 eps figures, RevTeX4 style. New references are added. Some typos are corrected. Conclusions remain the same.
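    For orientation, the standard GCG background relations (well-known results quoted here for reference, not derived in the abstract) show how a single fluid interpolates between dust-like and cosmological-constant behaviour:

        % Generalized Chaplygin gas: equation of state and background energy density.
        \[
          p = -\frac{A}{\rho^{\alpha}}, \qquad
          \rho(a) = \rho_{0}\left[A_{s} + (1 - A_{s})\,a^{-3(1+\alpha)}\right]^{\frac{1}{1+\alpha}},
          \qquad A_{s} \equiv \frac{A}{\rho_{0}^{\,1+\alpha}},
        \]

    so the fluid behaves as pressureless matter ($\rho \propto a^{-3}$) at early times and approaches a constant energy density at late times.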

    Both Ca2+ and Zn2+ are essential for S100A12 protein oligomerization and function

    Background: Human S100A12 is a member of the S100 family of EF-hand calcium-modulated proteins that are associated with many diseases, including cancer, chronic inflammation and neurological disorders. S100A12 is an important factor in host/parasite defenses and in the inflammatory response. Like several other S100 proteins, it binds zinc and copper in addition to calcium. Mechanisms of zinc regulation have been proposed for a number of S100 proteins, e.g. S100B, S100A2, S100A7 and S100A8/9. The interaction of S100 proteins with their targets is strongly dependent on the cellular microenvironment. Results: The aim of the study was to explore the factors that influence S100A12 oligomerization and target interaction. A comprehensive series of biochemical and biophysical experiments indicated that changes in the concentration of calcium and zinc led to changes in the oligomeric state of S100A12. Surface plasmon resonance confirmed that the presence of both calcium and zinc is essential for the interaction of S100A12 with one of its extracellular targets, RAGE, the Receptor for Advanced Glycation End products. Using a single-molecule approach, we have shown that the presence of zinc in tissue culture medium favors both the oligomerization of exogenous S100A12 protein and its interaction with targets on the cell surface. Conclusion: We have shown that oligomerization and target recognition by S100A12 are regulated by both zinc and calcium. Our present work highlights the potential role of calcium-binding S100 proteins in zinc metabolism and, in particular, the role of S100A12 in the crosstalk between zinc and calcium in cell signaling.

    SHREC 2011: robust feature detection and description benchmark

    Feature-based approaches have recently become very popular in computer vision and image analysis applications, and are becoming a promising direction in shape retrieval. The SHREC'11 robust feature detection and description benchmark simulates the feature detection and description stages of feature-based shape retrieval algorithms. The benchmark tests the performance of shape feature detectors and descriptors under a wide variety of transformations, and allows evaluating both how well algorithms cope with particular classes of transformations and the transformation strengths they can handle. The present paper reports the results of the SHREC'11 robust feature detection and description benchmark.
    Comment: This is a full version of the SHREC'11 report published in 3DOR.
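    As a concrete illustration of how such a benchmark can score a feature detector, the sketch below computes a repeatability-style measure: the fraction of detections on the reference shape that have a nearby detection on the transformed shape under a known ground-truth correspondence. The function and parameter names are illustrative assumptions, not the SHREC'11 evaluation code.

        import numpy as np

        def detector_repeatability(ref_points, transformed_points, correspondence, radius):
            """Fraction of reference detections with a matching detection on the
            transformed shape within `radius` of their ground-truth location."""
            if len(ref_points) == 0:
                return 0.0
            hits = 0
            for p in ref_points:
                target = correspondence(p)  # ground-truth location of p on the transformed shape
                dists = np.linalg.norm(transformed_points - target, axis=1)
                if dists.size and dists.min() <= radius:
                    hits += 1
            return hits / len(ref_points)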

    Particle Counting Statistics of Time and Space Dependent Fields

    Counting statistics give insight into the properties of quantum states of light and other quantum states of matter, such as ultracold atoms or electrons. The theoretical description of photon counting was derived in the 1960s and was extended to massive particles more recently. Typically, the interaction between each particle and the detector is assumed to be limited to short time intervals, and the probability of counting particles in one interval is taken to be independent of the measurements in previous intervals. There has been some effort to describe particle counting as a continuous measurement, in which the detector and the field to be counted interact continuously. However, no general formula applicable to an arbitrary time- and space-dependent field has been derived so far. In our work, we derive a fully time- and space-dependent description of the counting process for linear quantum many-body systems, taking into account the back-action of the detector on the field. We apply our formalism to an expanding Bose-Einstein condensate of ultracold atoms and show that it describes the process correctly, whereas the standard approach gives unphysical results in some limits. The example illustrates that in certain situations the back-action of the detector cannot be neglected and has to be included in the description.
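    For context, the standard 1960s photodetection result alluded to above is the normally ordered counting formula (a well-known textbook expression, quoted here for orientation; it is not the space- and time-dependent result derived in the paper):

        % Photocounting distribution for a detector of efficiency \eta over a counting window T.
        \[
          P(m, T) = \left\langle : \frac{\bigl[\eta\,\hat{I}(T)\bigr]^{m}}{m!}\;
          e^{-\eta\,\hat{I}(T)} : \right\rangle,
          \qquad
          \hat{I}(T) = \int_{0}^{T} \hat{a}^{\dagger}(t)\,\hat{a}(t)\,dt,
        \]

    where the colons denote normal ordering. This expression neglects the back-action of the detector on the field, which is precisely the effect the continuous-measurement treatment above takes into account.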

    Exploring the origin of high optical absorption in conjugated polymers

    The specific optical absorption of an organic semiconductor is critical to the performance of organic optoelectronic devices. For example, higher light-harvesting efficiency can lead to higher photocurrent in solar cells that are limited by sub-optimal electrical transport. Here, we compare over 40 conjugated polymers and find that many different chemical structures share an apparent maximum in their extinction coefficients. However, a diketopyrrolopyrrole-thienothiophene copolymer shows remarkably high optical absorption at relatively low photon energies. By investigating its backbone structure and conformation with measurements and quantum chemical calculations, we find that the high optical absorption can be explained by the high persistence length of the polymer. Accordingly, we demonstrate high absorption in other polymers with high theoretical persistence length. Visible light harvesting may be enhanced in other conjugated polymers through judicious design of the structure.

    Deep Modeling of Growth Trajectories for Longitudinal Prediction of Missing Infant Cortical Surfaces

    Charting cortical growth trajectories is of paramount importance for understanding brain development. However, such analysis necessitates the collection of longitudinal data, which can be challenging due to subject dropouts and failed scans. In this paper, we introduce a method for longitudinal prediction of cortical surfaces using a spatial graph convolutional neural network (GCNN), which extends conventional CNNs from Euclidean domains to curved manifolds. The proposed method models cortical growth trajectories and jointly predicts inner and outer cortical surfaces at multiple time points. By adopting a binary flag in the loss calculation to deal with missing data, we make full use of all available cortical surfaces for training our deep learning model, without requiring a complete collection of longitudinal data. Predicting the surfaces directly allows cortical attributes such as cortical thickness, curvature, and convexity to be computed for subsequent analysis. We demonstrate with experimental results that our method is capable of capturing the nonlinearity of spatiotemporal cortical growth patterns and can predict cortical surfaces with improved accuracy.
    Comment: Accepted as oral presentation at IPMI 2019.
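    The binary-flag handling of missing scans can be pictured as a masked loss: error terms are zeroed out at time points where no ground-truth surface exists, and the average is taken over observed entries only. The sketch below is a minimal PyTorch-style illustration under that assumption; the tensor layout, the mean-squared-error choice, and all names are hypothetical rather than taken from the paper.

        import torch

        def masked_mse_loss(pred, target, available):
            """MSE over scanned time points only.

            pred, target : (batch, timepoints, vertices, 3) predicted / true vertex coordinates
            available    : (batch, timepoints) binary flags, 1 where a scan exists, 0 where missing
            """
            mask = available[:, :, None, None].float().expand_as(pred)
            sq_err = (pred - target) ** 2 * mask          # missing time points contribute nothing
            return sq_err.sum() / mask.sum().clamp(min=1.0)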

    A Graph Theoretic Approach for Object Shape Representation in Compositional Hierarchies Using a Hybrid Generative-Descriptive Model

    A graph-theoretic approach is proposed for object shape representation in a hierarchical compositional architecture called the Compositional Hierarchy of Parts (CHOP). In the proposed approach, vocabulary learning is performed using a hybrid generative-descriptive model. First, statistical relationships between parts are learned using a Minimum Conditional Entropy Clustering algorithm. Then, the selection of descriptive parts is cast as a frequent subgraph discovery problem and solved using the Minimum Description Length (MDL) principle. Finally, part compositions are constructed by compressing the internal data representation with the discovered substructures. The shape representation and computational complexity properties of the proposed approach and algorithms are examined using six benchmark two-dimensional shape image datasets. Experiments show that CHOP can employ part shareability and indexing mechanisms for fast inference of part compositions using learned shape vocabularies. Additionally, CHOP provides better shape retrieval performance than state-of-the-art shape retrieval methods.
    Comment: Paper: 17 pages. 13th European Conference on Computer Vision (ECCV 2014), Zurich, Switzerland, September 6-12, 2014, Proceedings, Part III, pp 566-581. Supplementary material can be downloaded from http://link.springer.com/content/esm/chp:10.1007/978-3-319-10578-9_37/file/MediaObjects/978-3-319-10578-9_37_MOESM1_ESM.pd
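    The MDL-based selection of descriptive parts can be illustrated with a SUBDUE-style compression score: a candidate substructure is good if encoding it once, plus the data with its occurrences collapsed to single nodes, is much shorter than encoding the data directly. The description-length measure and the dictionary-based graph summaries below are deliberately crude illustrations, not the encoding actually used by CHOP.

        def description_length(graph):
            """Toy stand-in for DL(G): a weighted count of nodes and edges."""
            return 1.0 * graph["num_nodes"] + 2.0 * graph["num_edges"]

        def compression_value(substructure, graphs, occurrences):
            """MDL-style score DL(data) / (DL(substructure) + DL(data | substructure));
            larger values mean the substructure compresses the data better."""
            dl_data = sum(description_length(g) for g in graphs)
            dl_sub = description_length(substructure)
            compressed = []
            for g, occs in zip(graphs, occurrences):
                nodes_removed = sum(o["num_nodes"] - 1 for o in occs)  # each occurrence collapses to one node
                edges_removed = sum(o["num_edges"] for o in occs)
                compressed.append({"num_nodes": g["num_nodes"] - nodes_removed,
                                   "num_edges": g["num_edges"] - edges_removed})
            dl_compressed = sum(description_length(g) for g in compressed)
            return dl_data / (dl_sub + dl_compressed)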

    Instrumentation-related uncertainty of reflectance and transmittance measurements with a two-channel spectrophotometer

    Spectrophotometers are operated in numerous fields of science and industry for a variety of applications. In order to provide confidence in the measured data, analyzing the associated uncertainty is valuable. However, the uncertainty of the measurement results is often unknown or reduced to sample-related contributions. In this paper, we describe our approach for the systematic determination of the measurement uncertainty of the commercially available two-channel spectrophotometer Agilent Cary 5000, in accordance with the Guide to the Expression of Uncertainty in Measurement (GUM). We focus on the instrumentation-related uncertainty contributions rather than a specific application, and thus outline a general procedure that can be adapted to other instruments. Moreover, we discover a systematic signal deviation due to the inertia of the measurement amplifier and develop and apply a correction procedure, thereby increasing the usable dynamic range of the instrument by more than one order of magnitude. We present methods for the quantification of the uncertainty contributions and combine them into an uncertainty budget for the device. © 2017 Author(s)
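    The core arithmetic of a GUM-style uncertainty budget is simple: each standard uncertainty is weighted by its sensitivity coefficient and the contributions are combined in quadrature, assuming they are uncorrelated. The budget entries below are hypothetical placeholders, not values from the paper.

        import math

        def combined_standard_uncertainty(contributions):
            """u_c = sqrt(sum_i (c_i * u_i)^2) for uncorrelated contributions,
            where u_i is a standard uncertainty and c_i its sensitivity coefficient."""
            return math.sqrt(sum((c * u) ** 2 for c, u in contributions))

        # Hypothetical budget entries: (sensitivity coefficient, standard uncertainty)
        budget = [
            (1.0, 0.0004),   # e.g. detector noise
            (1.0, 0.0010),   # e.g. baseline drift of the 100 % line
            (0.5, 0.0008),   # e.g. wavelength-setting error propagated to transmittance
        ]
        u_c = combined_standard_uncertainty(budget)
        U = 2.0 * u_c        # expanded uncertainty with coverage factor k = 2
        print(f"u_c = {u_c:.5f}, U (k=2) = {U:.5f}")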

    Fully Automatic Expression-Invariant Face Correspondence

    We consider the problem of computing accurate point-to-point correspondences among a set of human face scans with varying expressions. Our fully automatic approach does not require any manually placed markers on the scans. Instead, the approach learns the locations of a set of landmarks from a database and uses this knowledge to automatically predict the locations of these landmarks on a newly available scan. The predicted landmarks are then used to compute point-to-point correspondences between a template model and the new scan. To accurately fit the expression of the template to the expression of the scan, we use a blendshape model as the template. Our algorithm was tested on a database of human faces from different ethnic groups with strongly varying expressions. Experimental results show that the obtained point-to-point correspondence is both highly accurate and consistent for most of the tested 3D face models.
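    The blendshape template mentioned above can be summarized in a few lines: an expression is the neutral face plus a weighted sum of expression offsets, and fitting the template to a scan amounts to estimating those weights. The least-squares fit and all names below are illustrative assumptions (they presume the scan is already in vertex correspondence with the template), not the paper's algorithm.

        import numpy as np

        def blendshape_face(neutral, blendshapes, weights):
            """Reconstruct a face as neutral + sum_k w_k * (blendshape_k - neutral).

            neutral     : (V, 3) neutral-expression vertices
            blendshapes : (K, V, 3) expression targets
            weights     : (K,) blendshape weights
            """
            offsets = blendshapes - neutral[None, :, :]
            return neutral + np.tensordot(weights, offsets, axes=1)

        def fit_weights(neutral, blendshapes, scan):
            """Least-squares weights that best reproduce `scan` with the model above."""
            K = blendshapes.shape[0]
            offsets = (blendshapes - neutral[None, :, :]).reshape(K, -1).T  # (3V, K)
            target = (scan - neutral).reshape(-1)
            weights, *_ = np.linalg.lstsq(offsets, target, rcond=None)
            return weights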