    Julia: A Fresh Approach to Numerical Computing

    Bridging cultures that have often been distant, Julia combines expertise from the diverse fields of computer science and computational science to create a new approach to numerical computing. Julia is designed to be easy and fast. Julia questions notions generally held as "laws of nature" by practitioners of numerical computing: 1. high-level dynamic programs have to be slow; 2. one must prototype in one language and then rewrite in another language for speed or deployment; and 3. there are parts of a system for the programmer, and other parts best left untouched as they are built by the experts. We introduce the Julia programming language and its design, a dance between specialization and abstraction. Specialization allows for custom treatment. Multiple dispatch, a technique from computer science, picks the right algorithm for the right circumstance. Abstraction, what good computation is really about, recognizes what remains the same after differences are stripped away. Abstractions in mathematics are captured as code through another technique from computer science, generic programming. Julia shows that one can have machine performance without sacrificing human convenience.
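    The key mechanism the abstract names, multiple dispatch, means the method body chosen depends on the runtime types of all arguments, not just the first. Julia expresses this natively; the sketch below only approximates the idea in C++17 using std::variant and std::visit (the Dense/Sparse types and the Multiply visitor are invented for illustration, not taken from the paper):

        #include <iostream>
        #include <variant>

        // Two placeholder "matrix" representations to dispatch over.
        struct Dense  { };
        struct Sparse { };

        using Matrix = std::variant<Dense, Sparse>;

        // One overload per combination of argument types: the analogue of
        // defining several Julia methods for one generic function.
        struct Multiply {
            void operator()(Dense,  Dense)  const { std::cout << "dense * dense\n";   }
            void operator()(Dense,  Sparse) const { std::cout << "dense * sparse\n";  }
            void operator()(Sparse, Dense)  const { std::cout << "sparse * dense\n";  }
            void operator()(Sparse, Sparse) const { std::cout << "sparse * sparse\n"; }
        };

        int main() {
            Matrix a = Dense{};
            Matrix b = Sparse{};
            // std::visit selects the overload from the runtime types of BOTH
            // arguments -- the "right algorithm for the right circumstance".
            std::visit(Multiply{}, a, b);   // prints "dense * sparse"
        }

    Julia makes this selection the default for every generic function call and, unlike the variant sketch, compiles a specialized body per concrete type combination; that pairing of dispatch with specialization is what the abstract credits for machine performance.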

    XQuery for Archivists: Understanding EAD Finding Aids as Data

    [Excerpt] XQuery is a simple, yet powerful, scripting language designed to enable users without formal programming training to extract, transform, and manipulate XML data. Moreover, the language is an accepted standard and a W3C recommendation, much like its sister standards XML and XSLT. In other words, XQuery’s raison d’être coincides perfectly with the needs of today’s archivists. What follows is a brief, pragmatic overview of XQuery for archivists that will enable archivists with a keen understanding of XML, XPath, and EAD to begin experimenting with manipulating EAD data using XQuery.
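    The extraction task the excerpt has in mind, pulling fields out of an EAD finding aid, rests on XPath expressions, which XQuery builds on. Below is a sketch of that step written in C++ with the third-party pugixml library rather than in XQuery itself; the file name and the choice of the EAD <unittitle> element are assumptions for illustration:

        #include <iostream>
        #include "pugixml.hpp"   // third-party XML/XPath library: https://pugixml.org

        int main() {
            pugi::xml_document doc;
            // Hypothetical EAD finding aid; substitute a real file path.
            if (!doc.load_file("finding_aid.xml")) {
                std::cerr << "could not parse finding_aid.xml\n";
                return 1;
            }
            // XPath: every <unittitle> element, wherever it appears.
            for (const pugi::xpath_node& xn : doc.select_nodes("//unittitle"))
                std::cout << xn.node().text().get() << "\n";
        }

    In XQuery itself the same extraction is simply the path expression //unittitle, which is exactly the accessibility for non-programmers that the excerpt is pointing at.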

    Peirce, meaning and the semantic web

    The so-called ‘Semantic Web’ is phase II of Tim Berners-Lee’s original vision for the WWW, whereby resources would no longer be indexed merely ‘syntactically’, via opaque character-strings, but via their meanings. We argue that one roadblock to Semantic Web development has been researchers’ adherence to a Cartesian, ‘private’ account of meaning, which has been dominant for the last 400 years, and which understands the meanings of signs as what their producers intend them to mean. It thus strives to build ‘silos of meaning’ which explicitly and antecedently determine what signs on the Web will mean in all possible situations. By contrast, the field is moving forward insofar as it embraces Peirce’s ‘public’, evolutionary account of meaning, according to which the meaning of signs just is the way they are interpreted and used to produce further signs. Given the extreme interconnectivity of the Web, we argue that silos of meaning are unnecessary: plentiful machine-understandable data about the meaning of Web resources already exists in the form of those resources themselves, for applications able to leverage it. It is Peirce’s account of meaning that best makes sense of the recent explosion in ‘user-defined content’ on the Web and of its relevance to achieving Semantic Web goals.

    Method and Instruments for Modeling Integrated Knowledge

    MIMIK (Method and Instruments for Modeling Integrated Knowledge) is a set of tools used to formalize and represent knowledge within organizations. It furthermore supports knowledge creation and sharing within communities of interest or communities of practice. In this paper we show that MIMIK is based on a model-theory approach and builds on other existing methods and techniques. We also explain how to use the method and its instruments in order to model strategic objectives, processes, knowledge, and roles found within an organization, as well as relations existing between these elements. Indeed, MIMIK provides eight types of models in order to describe what is commonly called know-how, know-why, and know-what; it uses matrices in order to formally and semantically link strategic objectives, knowledge, and actors. We close this paper with a presentation of a prototype we built in order to demonstrate a technical architecture allowing for knowledge creation, formalization, and sharing.
    Keywords: knowledge modelling; process modelling; public administration; methodology; knowledge sharing; RSS

    The C++0x "Concepts" Effort

    C++0x is the working title for the revision of the ISO standard of the C++ programming language that was originally planned for release in 2009 but was delayed to 2011. The largest language extension in C++0x was "concepts", that is, a collection of features for constraining template parameters. In September of 2008, the C++ standards committee voted the concepts extension into C++0x, but then in July of 2009 the committee voted it back out. This article is my account of the technical challenges and debates within the "concepts" effort in the years 2003 to 2009. To provide some background, the article also describes the design space for constrained parametric polymorphism, or what is colloquially known as constrained generics. While this article is meant to be generally accessible, the writing is aimed toward readers with background in functional programming and programming language theory. This article grew out of a lecture at the Spring School on Generic and Indexed Programming at the University of Oxford, March 2010.
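    For readers who have not seen constrained generics in C++, the sketch below shows the shape of the feature in the C++20 syntax that eventually shipped; the C++0x design debated in this article differed in important ways (notably its concept-map machinery), so take this only as an illustration of constraining a template parameter:

        #include <concepts>
        #include <iostream>

        // A concept: T must support operator< yielding something
        // convertible to bool. (Illustrative; not the C++0x syntax.)
        template <typename T>
        concept LessThanComparable = requires(T a, T b) {
            { a < b } -> std::convertible_to<bool>;
        };

        // The template parameter is constrained, so a bad call is rejected
        // at the call site with a diagnostic naming the unmet concept,
        // instead of an error deep inside the instantiated body.
        template <LessThanComparable T>
        const T& min_of(const T& a, const T& b) {
            return (b < a) ? b : a;
        }

        int main() {
            std::cout << min_of(3, 7) << "\n";  // fine: int satisfies the concept
            // min_of(nullptr, nullptr);        // error: constraint not satisfied
        }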