
    Explaining Gabriel-Zisman localization to the computer

    This paper explains a computer formulation of Gabriel-Zisman localization of categories in the proof assistant Coq. It includes both the general localization construction, with a proof of GZ's Lemma 1.2, and the construction by a calculus of fractions. The proof files are bundled with the companion preprint "Files for GZ localization", posted simultaneously.
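
    A minimal Lean sketch of the notion being formalized may help fix ideas (this is my own illustration with hypothetical names, not the paper's Coq development): the data of a localization is a functor L from C to C[W⁻¹] that sends every morphism in W to an isomorphism. Only this "inverting" half is encoded below; GZ's Lemma 1.2 concerns the further requirement that any functor inverting W factor uniquely through L.

        universe u v

        structure Cat where
          Obj  : Type u
          Hom  : Obj → Obj → Type v
          id   : (X : Obj) → Hom X X
          comp : {X Y Z : Obj} → Hom X Y → Hom Y Z → Hom X Z

        structure CatFunctor (C D : Cat) where
          obj : C.Obj → D.Obj
          map : {X Y : C.Obj} → C.Hom X Y → D.Hom (obj X) (obj Y)

        -- f : X → Y is invertible when it has a two-sided inverse.
        def IsIso (D : Cat) {X Y : D.Obj} (f : D.Hom X Y) : Prop :=
          ∃ g : D.Hom Y X, D.comp f g = D.id X ∧ D.comp g f = D.id Y

        -- The data of a localization of C at a class W of morphisms:
        -- a target category and a functor sending every w ∈ W to an iso.
        structure Localization (C : Cat)
            (W : {X Y : C.Obj} → C.Hom X Y → Prop) where
          D : Cat
          L : CatFunctor C D
          inverts : ∀ {X Y : C.Obj} (w : C.Hom X Y), W w → IsIso D (L.map w)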

    A Universal Machine for Biform Theory Graphs

    Broadly speaking, there are two kinds of semantics-aware assistant systems for mathematics: proof assistants express the semantics in logic and emphasize deduction, while computer algebra systems express the semantics in programming languages and emphasize computation. Combining the complementary strengths of both approaches while mending their complementary weaknesses has been an important goal of the mechanized mathematics community for some time. We pick up on the idea of biform theories and interpret it in the MMT/OMDoc framework, which introduced the foundations-as-theories approach and can thus represent both logics and programming languages as theories. This yields a formal, modular framework of biform theory graphs in which specifications and implementations share the module system and typing information. We present automated knowledge-management workflows that interface to existing specification/programming tools and enable an OpenMath Machine that operationalizes biform theories, evaluating expressions by exhaustively applying the implementations of the respective operators. We evaluate the new biform framework by adding implementations to the OpenMath standard content dictionaries.
    Comment: Conferences on Intelligent Computer Mathematics, CICM 2013. The final publication is available at http://link.springer.com
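
    To make "evaluating expressions by exhaustively applying the implementations of the respective operators" concrete, here is a small Python sketch of such an evaluator (my illustration; the operator names and the IMPLS table are assumptions, not MMT's actual API). It rewrites an OpenMath-style term bottom-up and leaves subterms symbolic when no implementation applies, which is exactly the mixed specification/implementation behavior biform theories allow.

        from typing import Callable, Union

        Expr = Union[int, tuple]   # a literal leaf, or (operator_name, arg, ...)

        # Implementations attached to content-dictionary operators (assumed names).
        IMPLS: dict[str, Callable[..., Expr]] = {
            "arith1.plus":  lambda a, b: a + b,
            "arith1.times": lambda a, b: a * b,
        }

        def evaluate(e: Expr) -> Expr:
            """Rewrite bottom-up, applying an implementation wherever one fires."""
            if not isinstance(e, tuple):
                return e                            # literals are values
            op, *args = e
            args = [evaluate(a) for a in args]      # evaluate subterms first
            impl = IMPLS.get(op)
            if impl and all(isinstance(a, int) for a in args):
                return impl(*args)                  # operator step
            return (op, *args)                      # stuck: stays symbolic

        # (times 2 (plus 3 4)) evaluates to 14
        print(evaluate(("arith1.times", 2, ("arith1.plus", 3, 4))))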

    A Universal Parallel Two-Pass MDL Context Tree Compression Algorithm

    Computing problems that handle large amounts of data necessitate the use of lossless data compression for efficient storage and transmission. We present a novel lossless universal data compression algorithm that uses parallel computational units to increase throughput. The length-$N$ input sequence is partitioned into $B$ blocks. Processing each block independently of the others can accelerate the computation by a factor of $B$, but degrades the compression quality. Instead, our approach is to first estimate the minimum description length (MDL) context tree source underlying the entire input, and then encode each of the $B$ blocks in parallel based on the MDL source. With this two-pass approach, the compression loss incurred by using more parallel units is insignificant. Our algorithm is work-efficient, i.e., its computational complexity is $O(N/B)$. Its redundancy is approximately $B\log(N/B)$ bits above Rissanen's lower bound on universal compression performance, with respect to any context tree source whose maximal depth is at most $\log(N/B)$. We improve the compression by using different quantizers for states of the context tree, based on the number of symbols corresponding to those states. Numerical results from a prototype implementation suggest that our algorithm offers a better trade-off between compression and throughput than competing universal data compression algorithms.
    Comment: Accepted to Journal of Selected Topics in Signal Processing special issue on Signal Processing for Big Data (expected publication date June 2015). 10 pages double column, 6 figures, and 2 tables. arXiv admin note: substantial text overlap with arXiv:1405.6322. Version: Mar 2015: Corrected a typo.
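
    The two-pass structure is easy to see in miniature. The Python sketch below is my illustration, not the paper's implementation: a fixed-depth context stands in for the MDL-pruned context tree, and ideal code lengths stand in for an arithmetic coder. Pass 1 fits one shared model on the entire input; pass 2 scores each of the $B$ blocks against that model independently, which is why the blocks can be encoded in parallel without each one paying the cost of learning its own model.

        import math
        from collections import Counter

        DEPTH = 2   # fixed context depth; the paper instead prunes a tree by MDL

        def fit_model(x: str):
            """Pass 1: next-symbol counts per context, over the entire input."""
            sym, tot = Counter(), Counter()
            for i in range(DEPTH, len(x)):
                sym[x[i - DEPTH:i], x[i]] += 1
                tot[x[i - DEPTH:i]] += 1
            return sym, tot

        def block_bits(block: str, model, alphabet: int = 2) -> float:
            """Pass 2: ideal code length of one block under the shared,
            Laplace-smoothed model; depends only on the global model."""
            sym, tot = model
            return sum(-math.log2((sym[block[i - DEPTH:i], block[i]] + 1)
                                  / (tot[block[i - DEPTH:i]] + alphabet))
                       for i in range(DEPTH, len(block)))

        x = "0110100110010110" * 64                  # the length-N input
        B = 4
        n = len(x)
        blocks = [x[j * n // B:(j + 1) * n // B] for j in range(B)]
        model = fit_model(x)                         # pass 1: over all of x
        total = sum(block_bits(b, model) for b in blocks)  # pass 2: parallelizable
        print(f"{total:.1f} bits for {n} symbols")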

    On the Algorithmic Nature of the World

    We propose a test based on the theory of algorithmic complexity and an experimental evaluation of Levin's universal distribution to identify evidence in support of or in contravention of the claim that the world is algorithmic in nature. To this end we have undertaken a statistical comparison of the frequency distributions of data from physical sources on the one hand--repositories of information such as images, data stored in a hard drive, computer programs and DNA sequences--and the frequency distributions generated by purely algorithmic means on the other--by running abstract computing devices such as Turing machines, cellular automata and Post tag systems. Statistical correlations were found and their significance measured.
    Comment: Book chapter in Gordana Dodig-Crnkovic and Mark Burgin (eds.), Information and Computation, World Scientific, 2010. (http://www.idt.mdh.se/ECAP-2005/INFOCOMPBOOK/). Paper website: http://www.mathrix.org/experimentalAIT
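
    A toy version of this comparison is short enough to write down. The Python sketch below is my illustration, with assumed choices on both sides: elementary cellular automata serve as the algorithmic source, and the bytes of the script's own file serve as the "physical" source. It rank-correlates the frequency distributions of length-4 binary strings from the two sources, a crude stand-in for the paper's statistical tests.

        from collections import Counter
        from itertools import product

        K = 4   # length of the binary strings whose frequencies we compare

        def eca_step(rule: int, row: tuple) -> tuple:
            """One step of an elementary cellular automaton with wraparound."""
            return tuple((rule >> (4 * row[i - 1] + 2 * row[i]
                                   + row[(i + 1) % len(row)])) & 1
                         for i in range(len(row)))

        def algorithmic_counts() -> Counter:
            """Run all 256 ECA rules from a fixed seed; count K-bit windows."""
            counts = Counter()
            seed = (0,) * 7 + (1,) + (0,) * 8
            for rule in range(256):
                row = seed
                for _ in range(16):
                    row = eca_step(rule, row)
                    bits = "".join(map(str, row))
                    for i in range(len(bits) - K + 1):
                        counts[bits[i:i + K]] += 1
            return counts

        def physical_counts(data: bytes) -> Counter:
            """Count K-bit windows in the bit expansion of a data source."""
            bits = "".join(f"{b:08b}" for b in data)
            return Counter(bits[i:i + K] for i in range(len(bits) - K + 1))

        def spearman(c1: Counter, c2: Counter) -> float:
            """Spearman rank correlation over all K-bit strings (ties broken
            arbitrarily; a real test would handle ties properly)."""
            keys = ["".join(p) for p in product("01", repeat=K)]
            rank = lambda c: {k: r for r, k in
                              enumerate(sorted(keys, key=lambda k: -c[k]))}
            r1, r2 = rank(c1), rank(c2)
            n = len(keys)
            d2 = sum((r1[k] - r2[k]) ** 2 for k in keys)
            return 1 - 6 * d2 / (n * (n * n - 1))

        print(spearman(algorithmic_counts(),
                       physical_counts(open(__file__, "rb").read())))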

    Towards MKM in the Large: Modular Representation and Scalable Software Architecture

    MKM has been defined as the quest for technologies to manage mathematical knowledge. MKM "in the small" is well-studied, so the real problem is to scale up to large, highly interconnected corpora: "MKM in the large". We contend that advances in two areas are needed to reach this goal. We need representation languages that support incremental processing of all primitive MKM operations, and we need software architectures and implementations that implement these operations scalably on large knowledge bases. We present instances of both in this paper: the MMT framework for modular theory-graphs that integrates meta-logical foundations, which forms the base of the next OMDoc version; and TNTBase, a versioned storage system for XML-based document formats. TNTBase becomes an MMT database by instantiating it with special MKM operations for MMT.
    Comment: To appear in The 9th International Conference on Mathematical Knowledge Management: MKM 2010
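
    As a concrete example of a "primitive MKM operation" that must be made incremental, consider flattening: computing the symbols visible in a theory by traversing its import graph. The Python sketch below is my illustration (MMT's actual data model and API are much richer); a scalable backend such as TNTBase would cache and incrementally update exactly this kind of computation instead of recomputing it per query.

        from dataclasses import dataclass, field

        @dataclass
        class Theory:
            name: str
            imports: list["Theory"] = field(default_factory=list)
            symbols: dict[str, str] = field(default_factory=dict)  # name -> type

        def flatten(t: Theory, seen=None) -> dict:
            """All symbols visible in t, traversing the import DAG once."""
            seen = seen if seen is not None else set()
            if t.name in seen:
                return {}
            seen.add(t.name)
            out = {}
            for imp in t.imports:
                out.update(flatten(imp, seen))
            out.update(t.symbols)     # local declarations shadow imported ones
            return out

        magma  = Theory("Magma",  symbols={"op": "U -> U -> U"})
        monoid = Theory("Monoid", imports=[magma],  symbols={"e": "U"})
        group  = Theory("Group",  imports=[monoid], symbols={"inv": "U -> U"})
        print(flatten(group))   # {'op': ..., 'e': ..., 'inv': ...}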

    Examples of finite element mesh generation using SDRC IDEAS

    IDEAS (Integrated Design Engineering Analysis Software) offers a comprehensive package for mechanical design engineers. Owing to its multifaceted capabilities, however, it can also be adapted to serve the needs of electrical engineers. IDEAS can be used to perform the following tasks: system modeling, system assembly, kinematics, finite element pre/post-processing, finite element solution, system dynamics, drafting, test data analysis, and project relational database.