
    ASTEC -- the Aarhus STellar Evolution Code

    The Aarhus code is the result of a long development, starting in 1974 and still ongoing. A novel feature is the integration of the computation of adiabatic oscillations for specified models as part of the code. It offers substantial flexibility in terms of microphysics and has been carefully tested for the computation of solar models. However, considerable development is still required in the treatment of nuclear reactions, diffusion, and convective mixing. Comment: Astrophys. Space Sci., in the press.

    Co-evolution of RDF Datasets

    Linking Data initiatives have fostered the publication of a large number of RDF datasets in the Linked Open Data (LOD) cloud, as well as the development of query processing infrastructures to access these data in a federated fashion. However, different experimental studies have shown that the availability of LOD datasets cannot always be ensured, making RDF data replication necessary for reliable federated query frameworks. Albeit enhancing data availability, RDF data replication requires synchronization and conflict resolution when replicas and source datasets are allowed to change data over time; i.e., co-evolution management needs to be provided to ensure consistency. In this paper, we tackle the problem of RDF data co-evolution and devise an approach for conflict resolution during co-evolution of RDF datasets. Our proposed approach is property-oriented and allows for exploiting semantics about RDF properties during co-evolution management. The quality of our approach is empirically evaluated in different scenarios on the DBpedia-live dataset. Experimental results suggest that the proposed techniques have a positive impact on the quality of data in source datasets and replicas. Comment: 18 pages, 4 figures, Accepted in ICWE, 201
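
    As a rough illustration of what a property-oriented co-evolution step can look like (a hypothetical sketch, not the algorithm evaluated in the paper), the snippet below merges a source dataset and a replica at the level of (subject, property) pairs: properties declared functional keep a single value chosen by a configurable policy, while all other properties are merged by union.

        # Hypothetical sketch of property-oriented conflict resolution between a
        # source RDF dataset and a replica; not the authors' algorithm.
        FUNCTIONAL = {"http://dbpedia.org/ontology/populationTotal"}  # illustrative choice

        def co_evolve(source_triples, replica_triples, prefer="source"):
            """Triples are (s, p, o) tuples; returns the merged, conflict-resolved set."""
            by_sp = {}
            for origin, triples in (("source", source_triples), ("replica", replica_triples)):
                for s, p, o in triples:
                    by_sp.setdefault((s, p), {"source": set(), "replica": set()})[origin].add(o)
            merged = set()
            for (s, p), values in by_sp.items():
                if p in FUNCTIONAL and values["source"] and values["replica"] \
                        and values["source"] != values["replica"]:
                    chosen = values[prefer]                        # conflict: apply the policy
                else:
                    chosen = values["source"] | values["replica"]  # multi-valued: keep the union
                merged.update((s, p, o) for o in chosen)
            return merged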

    Simulating 0+1 Dimensional Quantum Gravity on Quantum Computers: Mini-Superspace Quantum Cosmology and the World Line Approach in Quantum Field Theory

    Quantum computers are a promising candidate to radically expand computational science through increased computing power and more effective algorithms. In particular, quantum computing could have a tremendous impact in the field of quantum cosmology. The goal of quantum cosmology is to describe the evolution of the Universe through the Wheeler-DeWitt equation or path integral methods without having to first formulate a full theory of quantum gravity. The quantum computer provides an advantage in this endeavor because it can perform path integrals in Lorentzian space and does not require constructing contour integrations in Euclidean gravity. Quantum computers can also provide advantages in systems with fermions, which are difficult to analyze on classical computers. In this study, we first employed classical computational methods to analyze a Friedmann-Robertson-Walker mini-superspace with a scalar field and visualize the calculated wave function of the Universe for a variety of different values of the spatial curvature and cosmological constant. We then used IBM's Quantum Information Science Kit (Qiskit) Python library and the variational quantum eigensolver to study the same systems on a quantum computer. The framework can also be extended to the world line approach to quantum field theory. Comment: 5 pages, 4 figures
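
    For orientation, a commonly used schematic form of the mini-superspace Wheeler-DeWitt equation for a Friedmann-Robertson-Walker universe with a scalar field is shown below; the normalisation, operator ordering and sign conventions vary between treatments and are not taken from the paper itself.

        \left[-\frac{\partial^{2}}{\partial a^{2}}
              +\frac{1}{a^{2}}\frac{\partial^{2}}{\partial\phi^{2}}
              +k\,a^{2}-\frac{\Lambda}{3}\,a^{4}-a^{4}\,V(\phi)\right]\Psi(a,\phi)=0

    Here a is the scale factor, k the spatial curvature, Λ the cosmological constant and V(φ) the scalar-field potential; a variational quantum eigensolver could then be applied to a discretised version of the operator in brackets.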

    Model atmospheres of chemically peculiar stars: Self-consistent empirical stratified model of HD24712

    High-resolution spectra of some chemically peculiar stars clearly demonstrate the presence of strong abundance gradients in their atmospheres. However, these inhomogeneities are usually ignored in the standard scheme of model atmosphere calculations, breaking the consistency between the model structure and the spectroscopically derived abundance pattern. In this paper we present the first empirical self-consistent stellar atmosphere model of the roAp star HD24712, with stratification of chemical elements included, derived directly from the observed profiles of spectral lines without time-consuming simulations of the physical mechanisms responsible for these anomalies. We used the LLmodels stellar model atmosphere code and the DDAFIT minimization tool for the analysis of chemical element stratification and the construction of a self-consistent atmospheric model. The empirical determination of Pr and Nd stratification in the atmosphere of HD24712 is based on NLTE line formation for Pr II/III and Nd II/III with the use of the DETAIL code. Based on an iterative procedure of stratification analysis and subsequent re-calculation of the model atmosphere structure, we constructed a self-consistent model of HD24712, i.e. a model whose temperature-pressure structure is consistent with the results of the stratification analysis. It is shown that stratification of chemical elements leads to considerable changes in the model structure compared with the non-stratified, homogeneous case. We find that the accumulation of rare-earth elements (REE) allows an inverse temperature gradient to be present in the upper atmosphere of the star, with a maximum temperature increase of about 600 K. Comment: Accepted by A&A, 16 pages, 10 figures, 3 tables
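
    The loop sketched below (with hypothetical function names and a temperature array standing in for the full model; this is not the LLmodels or DDAFIT interface) illustrates the kind of iteration described above: fit the stratification for the current structure, recompute the structure with the stratified abundances, and repeat until the structure stops changing.

        import numpy as np

        # Schematic of the stratification / structure iteration; the callables and the
        # model representation are hypothetical placeholders.
        def self_consistent_model(temperature, fit_stratification, recompute_atmosphere,
                                  max_iter=20, tol=1e-3):
            for _ in range(max_iter):
                strat = fit_stratification(temperature)        # abundance vs. depth from line profiles
                new_temperature = recompute_atmosphere(strat)  # structure with stratified opacities
                change = np.max(np.abs(new_temperature - temperature) / temperature)
                temperature = new_temperature
                if change < tol:
                    return temperature, strat                  # self-consistent model reached
            raise RuntimeError("stratification/structure iteration did not converge")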

    Two decades of Martini: Better beads, broader scope

    The Martini model, a coarse-grained force field for molecular dynamics simulations, has been around for nearly two decades. Originally developed for lipid-based systems by the groups of Marrink and Tieleman, the Martini model has over the years been extended as a community effort to the current level of a general-purpose force field. Apart from the obvious benefit of a reduction in computational cost, the popularity of the model is largely due to the systematic yet intuitive building-block approach that underlies the model, as well as the open nature of the development and its continuous validation. The easy implementation in the widely used Gromacs software suite has also been instrumental. Since its conception in 2002, the Martini model underwent a gradual refinement of the bead interactions and a widening scope of applications. In this review, we look back at this development, culminating with the release of the Martini 3 version in 2021. The power of the model is illustrated with key examples of recent important findings in biological and material sciences enabled with Martini, as well as examples from areas where coarse-grained resolution is essential, namely high-throughput applications, systems with large complexity, and simulations approaching the scale of whole cells. This article is categorized under: Software > Molecular Modeling; Molecular and Statistical Mechanics > Molecular Dynamics and Monte-Carlo Methods; Structure and Mechanism > Computational Materials Science; Structure and Mechanism > Computational Biochemistry and Biophysics.
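
    The building-block idea can be made concrete with a minimal mapping sketch (illustrative only; the atom grouping, masses and bead definition below are not taken from the Martini parameter files): several atoms are represented by a single bead placed at their centre of mass.

        import numpy as np

        # Minimal coarse-graining sketch: map groups of atoms onto beads at their
        # centres of mass. Groups and masses are illustrative, not Martini definitions.
        def map_to_beads(positions, masses, groups):
            """positions: (N, 3) array; masses: (N,); groups: list of atom-index lists."""
            return np.array([np.average(positions[idx], axis=0, weights=masses[idx])
                             for idx in groups])

        # Example: collapse twelve atoms (e.g. four water molecules) into one bead.
        pos = np.random.rand(12, 3)
        mass = np.array([15.999, 1.008, 1.008] * 4)
        print(map_to_beads(pos, mass, [list(range(12))]))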

    Speed data collection methods: a review

    Various studies have focused on a wide range of techniques to detect traffic flow characteristics, such as speed and travel times. A key aspect of obtaining a statistically significant set of data is to observe and record driver behaviour in the real world. To collect traffic data, traditional methods of traffic measurement - such as detection stations, radar guns or video cameras - have been used over the years. Other, more innovative methods rely on probe vehicles equipped with GPS devices and/or cameras, which allow continuous surveys along the entire road route. While point-based devices provide information on the entire flow, but only at the section in which they are installed and only in the time domain, probe vehicle data cover both the temporal and spatial domains but do not describe the overall traffic conditions. In any case, it is necessary that the collected data refer to samples that are representative, in number and composition, of the user population. The paper proposes a review of the most widely used methods for speed data collection, highlighting the advantages and disadvantages of each experimental approach. Accordingly, the comparison indicates the most suitable survey method to adopt depending on the research question and the investigation to be performed.
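
    The difference between the two viewpoints can be illustrated with the standard traffic-flow definitions (a generic example, not the paper's own procedure): spot speeds from a point detector are summarised by the time-mean speed, while probe or segment travel times lead to the space-mean speed, which is the harmonic mean and is never larger than the time-mean speed.

        import numpy as np

        # Time-mean vs. space-mean speed from the same set of spot speeds (km/h);
        # values are made up for illustration.
        spot_speeds = np.array([48.0, 55.0, 62.0, 40.0, 70.0])

        time_mean = spot_speeds.mean()                              # point-detector view
        space_mean = len(spot_speeds) / np.sum(1.0 / spot_speeds)   # travel-time / probe view

        print(f"time-mean speed:  {time_mean:.1f} km/h")
        print(f"space-mean speed: {space_mean:.1f} km/h")           # always <= time-mean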

    A Probabilistic Data Fusion Modeling Approach for Extracting True Values from Uncertain and Conflicting Attributes

    Real-world data obtained from integrating heterogeneous data sources are often multi-valued, uncertain, imprecise, error-prone, outdated, and have different degrees of accuracy and correctness. It is critical to resolve data uncertainty and conflicts to present quality data that reflect actual world values. This task is called data fusion. In this paper, we deal with the problem of data fusion based on probabilistic entity linkage and uncertainty management in conflicting data. Data fusion has been widely explored in the research community. However, concerns such as explicit uncertainty management and on-demand data fusion, which can cope with dynamic data sources, have not been well studied. This paper proposes a new probabilistic data fusion modeling approach that attempts to find true data values under conditions of uncertain or conflicting multi-valued attributes. These attributes are generated from the probabilistic linkage and merging alternatives of multiple corresponding entities. Consequently, the paper identifies and formulates several data fusion cases and sample spaces that require further conditional computation using our computational fusion method. The identification is established to fit a real-world data fusion problem. In the real world, there is always the possibility of heterogeneous data sources, the integration of probabilistic entities, single or multiple truth values for certain attributes, and different combinations of attribute values as alternatives for each generated entity. We validate our probabilistic data fusion approach through a mathematical representation based on three data sources with different reliability scores. The validity of the approach was assessed via implementation into our probabilistic integration system to show how it can manage and resolve different cases of data conflicts and inconsistencies. The outcome showed improved accuracy in identifying true values due to the association of constructive evidence.
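
    A toy example of how source reliability scores can be combined to rank conflicting candidate values (an illustrative sketch only, not the probabilistic model proposed in the paper): each source votes for the value it reports, weighted by its reliability, and the weights are normalised into a confidence per candidate.

        from collections import defaultdict

        # Toy reliability-weighted voting over a conflicting single-valued attribute;
        # source names, scores and observations are made up.
        reliability = {"source_A": 0.9, "source_B": 0.6, "source_C": 0.4}
        observations = {"source_A": "Berlin", "source_B": "Berlin", "source_C": "Bonn"}

        scores = defaultdict(float)
        for source, value in observations.items():
            scores[value] += reliability[source]

        total = sum(scores.values())
        for value, score in sorted(scores.items(), key=lambda kv: -kv[1]):
            print(f"{value}: confidence {score / total:.2f}")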