
    Uniform management of heterogeneous semi-structured information sources

    Nowadays, data can be represented and stored in different formats, ranging from unstructured data, typical of file systems, through semi-structured data, typical of Web sources, to highly structured data, typical of relational database systems. The need therefore arises for new tools and models that handle all these heterogeneous information sources uniformly. In this paper we propose both a framework and a conceptual model that aim to manage information sources of differing nature and structure uniformly, in order to obtain a global, integrated and uniform representation. We also show how the proposed framework and conceptual model can be useful in many application contexts.
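    The paper's own conceptual model is not reproduced here, but the basic idea of mapping sources of different structure into one common representation can be sketched as follows; the field names and the dictionary-based target representation are illustrative assumptions, not the paper's formalism.

```python
# Illustrative sketch: normalising records from three kinds of source
# (structured, semi-structured, unstructured) into one common
# attribute/value representation. All names here are invented.
import json
import xml.etree.ElementTree as ET

def from_relational(row, columns):
    # Structured source: a tuple plus its column names.
    return dict(zip(columns, row))

def from_xml(fragment):
    # Semi-structured source: child elements become attributes.
    root = ET.fromstring(fragment)
    return {child.tag: child.text for child in root}

def from_text(path, body):
    # Unstructured source: only minimal metadata is recoverable.
    return {"path": path, "content": body}

records = [
    from_relational((1, "Rossi"), ["id", "name"]),
    from_xml("<person><id>2</id><name>Bianchi</name></person>"),
    from_text("/home/u/note.txt", "Meeting at 10am"),
]
print(json.dumps(records, indent=2))
```

    Once every source exposes the same representation, a query layer can operate on the integrated view without caring about the original format.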

    Multi-Paradigm Reasoning for Access to Heterogeneous GIS

    Accessing and querying geographical data in a uniform way has become easier in recent years. Emerging standards like WFS turn the web into a place enabled for geospatial web services. Mediation architectures like VirGIS overcome syntactic and semantic heterogeneity between several distributed sources. On mobile devices, however, this kind of solution is not suitable, owing to limitations mostly regarding bandwidth, computation power, and available storage space. The aim of this paper is to present a solution for providing powerful reasoning mechanisms accessible from mobile applications and involving data from several heterogeneous sources. By adapting content to time and location, mobile web information systems can not only increase the value and suitability of the service itself, but can also substantially reduce the amount of data delivered to users. Because many problems pertain to infrastructures and transportation in general, and to wayfinding in particular, one cornerstone of the architecture is higher-level reasoning on graph networks with the Multi-Paradigm Location Language MPLL. A mediation architecture is used as a “graph provider” in order to transfer the computational load to the best-suited component, graph construction and transformation, for example, being heavy on resources. Reasoning in general can be conducted either near the “source” or near the end user, depending on the specific use case. The concepts underlying the proposal described in this paper are illustrated by a typical, concrete scenario for web applications.
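    MPLL itself is a higher-level language and is not reproduced here, but the kind of graph-network reasoning the architecture delegates to a "graph provider" can be illustrated with a plain shortest-path computation; the road graph and its weights below are invented for illustration.

```python
# Minimal wayfinding sketch: Dijkstra's shortest path on a toy road graph.
import heapq

def dijkstra(graph, start, goal):
    # graph: {node: [(neighbour, edge_cost), ...]}
    queue = [(0.0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(queue, (cost + w, nxt, path + [nxt]))
    return float("inf"), []

roads = {
    "station": [("square", 2.0), ("bridge", 5.0)],
    "square":  [("bridge", 1.0), ("museum", 4.0)],
    "bridge":  [("museum", 1.5)],
}
print(dijkstra(roads, "station", "museum"))
# → (4.5, ['station', 'square', 'bridge', 'museum'])
```

    Running such computations server-side and shipping only the resulting route, rather than the whole graph, is one way the architecture reduces the data delivered to the mobile client.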

    Hijacker: Efficient static software instrumentation with applications in high performance computing: Poster paper

    Static binary instrumentation is a technique that allows compile-time program manipulation. In particular, by relying on ad-hoc tools, the end user is able to alter the program's execution flow without affecting its overall semantics. This technique has been effectively used, e.g., to support code profiling, performance analysis, error detection, attack detection, and behavior monitoring. Nevertheless, relying on static instrumentation to produce executables that can be deployed without affecting the application's overall performance still presents technical and methodological issues. In this paper, we present Hijacker, an open-source, customizable static binary instrumentation tool which is able to alter a program's execution flow according to user-specified rules while limiting the execution overhead due to the code snippets inserted in the original program, thus enabling its exploitation in high performance computing. The tool is highly modular and works on an internal representation of the program which allows complex instrumentation tasks to be performed efficiently; it can additionally be extended to support different instruction sets and executable formats without any need to modify the instrumentation engine. We additionally present an experimental assessment of the overhead induced by the injected code in real HPC applications. © 2013 IEEE.
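    Hijacker operates on binaries, which cannot be reproduced in a short sketch; the rule-driven idea, however, can be illustrated at source level, where a transformation pass statically inserts a probe at every function entry before the program runs. The rule and probe below are invented for illustration and are not Hijacker's rule syntax.

```python
# Source-level analogue of rule-driven static instrumentation:
# an AST pass inserts a call-count probe at every function entry.
import ast

COUNTS = {}

class EntryProbe(ast.NodeTransformer):
    # Rule: at every function entry, insert a call-count increment.
    def visit_FunctionDef(self, node):
        probe = ast.parse(
            f"COUNTS['{node.name}'] = COUNTS.get('{node.name}', 0) + 1"
        ).body[0]
        node.body.insert(0, probe)
        return node

source = """
def work(x):
    return x * 2
"""
tree = ast.fix_missing_locations(EntryProbe().visit(ast.parse(source)))
namespace = {"COUNTS": COUNTS}
exec(compile(tree, "<instrumented>", "exec"), namespace)
namespace["work"](3)
namespace["work"](4)
print(COUNTS)  # → {'work': 2}
```

    The instrumentation happens entirely before execution, so the only run-time cost is the injected snippet itself, which is the overhead the paper seeks to minimise.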

    Big data and the SP theory of intelligence

    This article is about how the "SP theory of intelligence" and its realisation in the "SP machine" may, with advantage, be applied to the management and analysis of big data. The SP system -- introduced in the article and fully described elsewhere -- may help to overcome the problem of variety in big data: it has potential as "a universal framework for the representation and processing of diverse kinds of knowledge" (UFK), helping to reduce the diversity of formalisms and formats for knowledge and the different ways in which they are processed. It has strengths in the unsupervised learning or discovery of structure in data, in pattern recognition, in the parsing and production of natural language, in several kinds of reasoning, and more. It lends itself to the analysis of streaming data, helping to overcome the problem of velocity in big data. Central in the workings of the system is lossless compression of information: making big data smaller and reducing problems of storage and management. There is potential for substantial economies in the transmission of data, for big cuts in the use of energy in computing, for faster processing, and for smaller and lighter computers. The system provides a handle on the problem of veracity in big data, with potential to assist in the management of errors and uncertainties in data. It lends itself to the visualisation of knowledge structures and inferential processes. A highly parallel, open-source version of the SP machine would provide a means for researchers everywhere to explore what can be done with the system and to create new versions of it. Comment: Accepted for publication in IEEE Access.
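    The SP system's own compression mechanism (multiple alignment of patterns) is not shown here; the sketch below only demonstrates the generic point the abstract relies on, that lossless compression makes redundant data substantially smaller while remaining fully recoverable, using zlib rather than the SP machine.

```python
# Lossless compression of a highly redundant data stream with zlib.
import zlib

data = b"sensor=42;" * 10_000          # redundant "big data" stream, 100 kB
packed = zlib.compress(data, level=9)

print(len(data), len(packed))          # the compressed form is far smaller
assert zlib.decompress(packed) == data # lossless: the original is recovered
```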

    Encoding models for scholarly literature

    We examine the issue of digital formats for document encoding, archiving and publishing, through the specific example of "born-digital" scholarly journal articles. We will begin by looking at the traditional workflow of journal editing and publication, and at how these practices have made the transition into the online domain. We will examine the range of different file formats in which electronic articles are currently stored and published. We will argue strongly that, despite the prevalence of binary and proprietary formats such as PDF and MS Word, XML is a far superior encoding choice for journal articles. Next, we look at the range of XML document structures (DTDs, Schemas) which are in common use for encoding journal articles, and consider some of their strengths and weaknesses. We will suggest that, despite the existence of specialized schemas intended specifically for journal articles (such as NLM), and more broadly used publication-oriented schemas such as DocBook, there are strong arguments in favour of developing a subset or customization of the Text Encoding Initiative (TEI) schema for the purpose of journal-article encoding; TEI is already in use in a number of journal publication projects, and the scale and precision of the TEI tagset make it particularly appropriate for encoding scholarly articles. We will outline the document structure of a TEI-encoded journal article, and look in detail at suggested markup patterns for specific features of journal articles.
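    As a rough sketch of the top-level shape such a TEI-encoded article takes, the standard TEI elements (teiHeader, fileDesc, titleStmt, text, body, div) can be assembled as follows; the sample title and section content are placeholders, and a real project would validate against the project's TEI schema customisation rather than build documents by hand.

```python
# Skeleton of a TEI-encoded article, built with the standard library.
import xml.etree.ElementTree as ET

tei = ET.Element("TEI", xmlns="http://www.tei-c.org/ns/1.0")

# Metadata lives in the teiHeader (here: just a title statement).
header = ET.SubElement(tei, "teiHeader")
title_stmt = ET.SubElement(ET.SubElement(header, "fileDesc"), "titleStmt")
ET.SubElement(title_stmt, "title").text = "A placeholder article title"

# The article content itself lives in text/body, divided into sections.
body = ET.SubElement(ET.SubElement(tei, "text"), "body")
section = ET.SubElement(body, "div", type="section")
ET.SubElement(section, "head").text = "Introduction"
ET.SubElement(section, "p").text = "Placeholder paragraph text."

print(ET.tostring(tei, encoding="unicode"))
```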

    Optimising performance in network-based information systems: Virtual organisations and customised views

    ©2006 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.
    Network-based information systems use well-defined standards to ensure interoperability, and also have a tightly coupled relationship between their internal data representation and the external network representation. Virtual organisations (VOs), whose members share a problem-solving purpose rather than a location-based or formal organisation, constitute an environment where user requirements may not be met by these standards. Because a virtual organisation has no formal body to manage change requests for these standards, such user requirements cannot otherwise be met. We show how decoupling the internal and external representations, through the use of ontologies, can enhance the operation of these systems by enabling flexibility and extensibility. We illustrate this by demonstrating a system that implements and enhances the Domain Name System, a global network-based information system. Migrating an existing system to a decoupled, knowledge-driven system is neither simple nor effortless, but can provide significant benefits.
    Nickolas J. G. Falkner, Paul D. Coddington, Andrew L. Wendelbor
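    A toy illustration of the decoupling idea: the external, standards-compliant view stays fixed so interoperability is preserved, while the internal model can carry extra VO-specific knowledge the standard has no slot for. The class, fields, and annotations below are invented for illustration and are not the paper's actual system or the DNS wire format.

```python
# Decoupling internal and external representations of a DNS-like record.

EXTERNAL_FIELDS = {"name", "type", "ttl", "rdata"}  # standards-defined view

class ResourceRecord:
    def __init__(self, name, rtype, ttl, rdata, **vo_annotations):
        self.name, self.rtype, self.ttl, self.rdata = name, rtype, ttl, rdata
        # Internal knowledge the external standard cannot express:
        self.vo_annotations = vo_annotations

    def to_external(self):
        # Only the standards-compliant subset crosses the network.
        return {"name": self.name, "type": self.rtype,
                "ttl": self.ttl, "rdata": self.rdata}

rec = ResourceRecord("host.vo.example", "A", 3600, "192.0.2.7",
                     project="climate-sim", steward="ops-team")
print(rec.to_external())
print(rec.vo_annotations)
```

    Extending the internal model (here, simply adding keyword annotations) never changes what is emitted externally, which is the flexibility-with-interoperability trade-off the paper targets.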

    Model uncertainty in non-linear numerical analyses of slender reinforced concrete members

    The present study aims to characterize the epistemic uncertainty in the use of global non-linear numerical analyses (NLNAs) for the design and assessment of slender reinforced concrete (RC) members. The epistemic uncertainty associated with NLNAs arises from the approximations and choices made in defining a structural numerical model. In order to quantify the epistemic uncertainty associated with a non-linear numerical simulation, the resistance model uncertainty random variable has to be characterized by comparing experimental and numerical results. With this aim, a set of experimental tests on slender RC columns known from the literature is considered, and the experimental results in terms of maximum axial load are compared to the outcomes of NLNAs. Nine different modelling hypotheses are considered to characterize the resistance model uncertainty random variable. The probabilistic analysis of the results is performed according to a Bayesian approach, accounting for both prior knowledge from the scientific literature and the influence of experimental uncertainty on the estimation of the statistics of the resistance model uncertainty random variable. Finally, the resistance model uncertainty partial safety factor is evaluated in line with the global resistance format of the fib Model Code for Concrete Structures 2010, with reference to both new and existing RC structures.
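    The comparison step can be sketched numerically: the model uncertainty ratio for each test is the experimental resistance over the numerical one, and its lognormal statistics feed a partial safety factor. All resistance values below are invented, the Bayesian updating is omitted, and the gamma_Rd expression shown is one common lognormal form with assumed sensitivity factor and reliability index, not necessarily the paper's exact formulation.

```python
# Illustrative model-uncertainty statistics: theta = R_exp / R_num.
import math

R_exp = [1020.0, 955.0, 1100.0, 980.0, 1040.0]   # invented test results [kN]
R_num = [1000.0, 930.0, 1030.0, 1010.0, 1000.0]  # invented NLNA results [kN]

theta = [e / n for e, n in zip(R_exp, R_num)]
ln_theta = [math.log(t) for t in theta]
mu_ln = sum(ln_theta) / len(ln_theta)
var_ln = sum((x - mu_ln) ** 2 for x in ln_theta) / (len(ln_theta) - 1)

mu_theta = math.exp(mu_ln + var_ln / 2)          # lognormal mean of theta
V_theta = math.sqrt(math.exp(var_ln) - 1)        # lognormal CoV of theta

alpha_R, beta = 0.32, 3.8                        # assumed sensitivity factor, target index
gamma_Rd = math.exp(alpha_R * beta * V_theta) / mu_theta
print(f"mean={mu_theta:.3f}  CoV={V_theta:.3f}  gamma_Rd={gamma_Rd:.3f}")
```

    A mean ratio near 1 with a small CoV indicates a numerical model that is, on average, unbiased; the scatter, not the bias, is what drives the partial factor up.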