
    Use of ontologies for metadata records analysis in big data

    Big Data refers to sets of information (structured, semi-structured, or unstructured) so large that traditional approaches, based on business-intelligence decisions and database management systems, cannot be applied to them. Big Data is characterized by a phenomenal acceleration of data accumulation and by its growing complexity. In different contexts, Big Data denotes both data of large volume and the set of tools and methods for processing them. Big Data sets are accompanied by metadata that contains a large amount of information about the data, including significant descriptive text whose understanding by machines leads to better results in Big Data processing. Methods of artificial intelligence and intelligent Web technologies improve the efficiency of all stages of Big Data processing. Most often this integration concerns the use of machine learning, which provides knowledge acquisition from Big Data, and ontological analysis, which formalizes domain knowledge for Big Data analysis. In this paper, the authors present a method for analyzing Big Data metadata that selects, from heterogeneous sources and data repositories, the blocks of information pertinent to the customer's task. Much attention is paid to matching the textual part of the metadata (metadata annotations) with the text describing the task. For this purpose, we suggest using methods and instruments of natural-language analysis together with a Big Data ontology that contains knowledge about the specifics of the domain.
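
    The matching step lends itself to a small illustration. The sketch below is not the paper's method: it ranks metadata annotations against a task description by plain TF-IDF cosine similarity using scikit-learn, a purely lexical stand-in for the combination of natural-language analysis and the Big Data ontology described above; the example task, annotations, and the top_k parameter are all illustrative assumptions.

```python
# Minimal sketch: rank metadata annotations by lexical similarity to a task
# description. A stand-in for the paper's NLP + ontology matching step.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def rank_annotations(task_text, annotations, top_k=3):
    """Return the top_k annotations most similar to the task description."""
    # Fit TF-IDF on the task text plus all annotations so they share a vocabulary.
    vectorizer = TfidfVectorizer(stop_words="english")
    matrix = vectorizer.fit_transform([task_text] + annotations)
    # Row 0 is the task; compare it against every annotation row.
    scores = cosine_similarity(matrix[0:1], matrix[1:]).ravel()
    ranked = sorted(zip(scores, annotations), key=lambda p: p[0], reverse=True)
    return ranked[:top_k]

if __name__ == "__main__":
    # Hypothetical task and annotations, for illustration only.
    task = "forecast electricity demand from smart-meter time series"
    annotations = [
        "hourly smart-meter readings for residential customers",
        "satellite imagery of coastal erosion",
        "weather observations aligned with energy consumption records",
    ]
    for score, text in rank_annotations(task, annotations):
        print(f"{score:.3f}  {text}")
```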

    Resource Letter: Dark Energy and the Accelerating Universe

    This Resource Letter provides a guide to the literature on dark energy and the accelerating universe. It is intended to be of use to researchers, teachers, and students at several levels. Journal articles, books, and websites are cited for the following topics: Einstein's cosmological constant, quintessence or dynamical scalar fields, modified cosmic gravity, relations to high energy physics, cosmological probes and observations, terrestrial probes, calculational tools and parameter estimation, teaching strategies and educational resources, and the fate of the universe.

    Gravitationally induced adiabatic particle production: from big bang to de Sitter

    In the background of a flat, homogeneous, and isotropic space–time, we consider a scenario in which the Universe is driven by gravitationally induced 'adiabatic' particle production with a constant creation rate. We show that this Universe emerges from a big bang singularity in the past and asymptotically approaches a de Sitter phase at late times. To characterize this model Universe further, we perform a dynamical analysis and find that the Universe attains thermodynamic equilibrium in the late de Sitter phase. Finally, for the first time, we discuss the possible effects of 'adiabatic' particle creation in the context of loop quantum cosmology.
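
    For orientation, a minimal worked sketch of the stated behaviour (standard in the adiabatic particle-production literature, though not necessarily in the paper's notation): assume a spatially flat FLRW background filled with pressureless matter created at a constant rate Γ, so that the creation pressure is p_c = -Γρ/(3H). The Raychaudhuri equation and its solution then read

```latex
\dot H = -\frac{3}{2} H^2 \left( 1 - \frac{\Gamma}{3H} \right),
\qquad
H(t) = \frac{\Gamma/3}{1 - e^{-\Gamma (t - t_s)/2}} .
```

    Near t = t_s the denominator vanishes and H ≈ 2/[3(t - t_s)], a matter-like big bang singularity; as t → ∞, H → Γ/3, the asymptotic de Sitter phase described in the abstract.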

    The holographic induced gravity model with a Ricci dark energy: smoothing the little rip and big rip through Gauss-Bonnet effects?

    We present a holographic brane-world model of the Dvali-Gabadadze-Porrati (DGP) scenario with and without a Gauss-Bonnet (GB) term in the bulk. We show that a holographic dark energy component with the Ricci scale as the infra-red cutoff can describe the late-time acceleration of the universe. In addition, we show that the dimensionless holographic parameter is crucial in characterising the DGP branches and in determining the behaviour of the Ricci dark energy as well as the asymptotic behaviour of the brane. On the one hand, in the DGP scenario the Ricci dark energy exhibits a phantom-like behaviour with no big rip if the holographic parameter is strictly larger than 1/2; for smaller values, the brane hits a big rip or a little rip. On the other hand, the introduction of the GB term avoids the big rip and little rip singularities on both branches, but it cannot avoid the appearance of a big freeze singularity for some values of the holographic parameter on the normal branch. These values are, however, very unlikely, because they lead to a very negative equation of state at present, so in practice we can speak of singularity avoidance. In this regard, the equation-of-state parameter of the Ricci dark energy plays a crucial role, even more important than that of the GB parameter, in excluding the region of parameter space where future singularities appear.
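
    For reference, the Ricci-scale infra-red cutoff mentioned above is conventionally defined as follows (a standard form from the Ricci dark energy literature; the paper's conventions, in particular the symbol β used here for the dimensionless holographic parameter, may differ). Writing the holographic density as ρ_x = 3β M_p²/L² with the cutoff L⁻² = \dot H + 2H², one has, for a spatially flat FLRW geometry,

```latex
\rho_x = 3\beta M_p^2 \left( \dot H + 2H^2 \right) = \frac{\beta}{2}\, M_p^2\, R ,
\qquad
R = 6\left( \dot H + 2H^2 \right),
```

    so the cutoff is set by the Ricci scalar R, and the dimensionless parameter β is the quantity whose value (the condition "strictly larger than 1/2" in the abstract) controls whether the brane avoids the big rip.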

    Breadth-first search for social network graphs on heterogeneous platforms

    Breadth-First Search (BFS) is the core of many graph-analysis algorithms and is useful in many problems, including social-network analysis, computer-network analysis, and data organization; but, due to its irregular behavior, its parallel implementation is very challenging. Several approaches implement efficient BFS algorithms on multicore architectures and on graphics processors, but an efficient BFS implementation for heterogeneous systems is even more complicated, as distributing the work among the main cores and the accelerators becomes a big challenge. As part of this work, we have assessed different heterogeneous shared-memory architectures (from high-end processors to embedded mobile processors, each composed of a multi-core CPU and an integrated GPU) and implemented different approaches to performing BFS. This work introduces three heterogeneous approaches for BFS: Selective, Concurrent, and Async. The contributions of this work include both an analysis of BFS performance on heterogeneous platforms and an in-depth analysis of social-network graphs and their implications for the BFS algorithm. The results show that BFS is very input-dependent and that the structure of the graph is one of the prime factors to analyze in order to develop good and scalable algorithms. The results also show that heterogeneous platforms can accelerate even irregular algorithms, reaching speed-ups of 2.2x in the best case. It is also shown how different system configurations and capabilities affect performance, and how the shared-memory system can hit bandwidth limitations that prevent performance improvements despite higher utilization of the resources.
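
    The level-synchronous frontier structure underlying these approaches is easy to illustrate. The sketch below is not the authors' implementation: it is a plain-Python, single-threaded BFS over an adjacency list, shown only to make concrete the per-level loop that schemes such as Selective, Concurrent, and Async would partition between the CPU cores and the integrated GPU; the graph, function name, and variables are illustrative.

```python
# Minimal sketch of level-synchronous BFS: the frontier-based structure that
# heterogeneous CPU+GPU schemes partition, one frontier at a time.
def bfs_levels(adj, source):
    """Return a dict mapping each reachable vertex to its BFS level (depth)."""
    level = {source: 0}
    frontier = [source]        # all vertices discovered in the previous level
    depth = 0
    while frontier:
        depth += 1
        next_frontier = []
        # This per-frontier loop is the unit of work a heterogeneous scheme
        # would split among CPU cores and the integrated GPU.
        for u in frontier:
            for v in adj[u]:
                if v not in level:          # first discovery wins
                    level[v] = depth
                    next_frontier.append(v)
        frontier = next_frontier
    return level

if __name__ == "__main__":
    # Tiny illustrative graph (adjacency list); real inputs are social networks.
    adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
    print(bfs_levels(adj, source=0))  # {0: 0, 1: 1, 2: 1, 3: 2, 4: 3}
```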