8,300 research outputs found

    Middleware’s message : the financial technics of codata

    In this paper, I will argue for the relevance of certain distinctive features of messaging systems, namely those in which data (a) can be sent and received asynchronously, (b) can be sent to multiple simultaneous recipients and (c) is received as a “potentially infinite” flow of unpredictable events. I will describe the social technology of the stock ticker, a telegraphic device introduced at the New York Stock Exchange in the 1860s, with reference to early twentieth-century philosophers of synchronous experience (Bergson), simultaneous sign interpretations (Mead and Peirce), and flows of discrete events (Bachelard). Then, I will show how the ticker’s data flows developed into the 1990s-era technologies of message queues and message brokers, which distinguished themselves through their asynchronous implementation of ticker-like message feeds sent between otherwise incompatible computers and terminals. These latter systems’ characteristic “publish/subscribe” communication pattern was one in which conceptually centralized (if logically distributed) flows of messages would be “published,” and for which “subscribers” would be spontaneously notified when events of interest occurred. This paradigm—common to the so-called “message-oriented middleware” systems of the late 1990s—would re-emerge in different asynchronous distributed system contexts over the following decades, from “push media” to Twitter to the Internet of Things.
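    The publish/subscribe pattern described in this abstract can be illustrated with a minimal in-memory sketch; the Broker class, the “ticker” topic name, and the callbacks below are hypothetical illustrations, not the paper’s examples or any particular middleware product’s API.

        # Minimal publish/subscribe broker (illustrative sketch only).
        # Real message-oriented middleware adds queues, persistence, and
        # asynchronous network transport; this only shows the pattern.
        from collections import defaultdict

        class Broker:
            def __init__(self):
                self._subscribers = defaultdict(list)  # topic -> list of callbacks

            def subscribe(self, topic, callback):
                # Register a callback to be notified of events on a topic.
                self._subscribers[topic].append(callback)

            def publish(self, topic, message):
                # Deliver the message to every current subscriber of the topic.
                for callback in self._subscribers[topic]:
                    callback(message)

        broker = Broker()
        broker.subscribe("ticker", lambda msg: print("subscriber A saw:", msg))
        broker.subscribe("ticker", lambda msg: print("subscriber B saw:", msg))
        broker.publish("ticker", {"symbol": "XYZ", "price": 101.25})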

    Status and projections of the NAS program

    NASA's Numerical Aerodynamic Simulation (NAS) Program has completed development of the initial operating configuration of the NAS Processing System Network (NPSN). This is the first milestone in the continuing and pathfinding effort to provide state-of-the-art supercomputing for aeronautics research and development. The NPSN, available to a nationwide community of remote users, provides a uniform UNIX environment over a network of host computers ranging from the Cray-2 supercomputer to advanced scientific workstations. This system, coupled with a vendor-independent base of common user interface and network software, presents a new paradigm for supercomputing environments. Background leading to the NAS program, its programmatic goals and strategies, technical goals and objectives, and the development activities leading to the current NPSN configuration are presented. Program status, near-term plans, and plans for the next major milestone, the extended operating configuration, are also discussed.

    How open is open enough?: Melding proprietary and open source platform strategies

    Computer platforms provide an integrated architecture of hardware and software standards as a basis for developing complementary assets. The most successful platforms were owned by proprietary sponsors that controlled platform evolution and appropriated the associated rewards. Responding to the Internet and open source systems, three traditional vendors of proprietary platforms experimented with hybrid strategies that attempted to combine the advantages of open source software while retaining control and differentiation. Such hybrid standards strategies reflect the competing imperatives of adoption and appropriability, and suggest the conditions under which such strategies may be preferable to either the purely open or purely proprietary alternatives.

    Extensible Component Based Architecture for FLASH, A Massively Parallel, Multiphysics Simulation Code

    FLASH is a publicly available high-performance application code which has evolved into a modular, extensible software system from a collection of unconnected legacy codes. FLASH has been successful because its capabilities have been driven by the needs of scientific applications, without compromising maintainability, performance, and usability. In its newest incarnation, FLASH3 consists of interoperable modules that can be combined to generate different applications. The FLASH architecture allows arbitrarily many alternative implementations of its components to co-exist and be interchanged with each other, resulting in greater flexibility. Further, a simple and elegant mechanism exists for customizing code functionality without the need to modify the core implementation of the source. A built-in unit test framework providing verifiability, combined with a rigorous software maintenance process, allows the code to operate simultaneously in the dual modes of production and development. In this paper we describe the FLASH3 architecture, with emphasis on solutions to the more challenging conflicts arising from solver complexity, portable performance requirements, and legacy codes. We also include results from user surveys conducted in 2005 and 2007, which highlight the success of the code. Comment: 33 pages, 7 figures; revised paper submitted to Parallel Computing.
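    A minimal sketch of the interchangeable-component pattern this abstract describes is given below; the Solver interface and the two implementations are hypothetical examples, not FLASH3's actual units or API.

        # Sketch of a component architecture with swappable implementations.
        # All names here are invented for illustration; FLASH3's real solver
        # units, interfaces, and build machinery differ.
        from abc import ABC, abstractmethod

        class Solver(ABC):
            # Common interface that every alternative implementation must satisfy.
            @abstractmethod
            def advance(self, state, dt):
                ...

        class ExplicitSolver(Solver):
            def advance(self, state, dt):
                return [x + dt for x in state]        # placeholder update rule

        class ImplicitSolver(Solver):
            def advance(self, state, dt):
                return [x + 0.5 * dt for x in state]  # placeholder update rule

        def run_application(solver, state, dt, steps):
            # The driver depends only on the interface, so implementations
            # can be exchanged without modifying the core code.
            for _ in range(steps):
                state = solver.advance(state, dt)
            return state

        print(run_application(ExplicitSolver(), [0.0, 1.0], 0.1, 10))
        print(run_application(ImplicitSolver(), [0.0, 1.0], 0.1, 10))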

    NASA's supercomputing experience

    A brief overview of NASA's recent experience in supercomputing is presented from two perspectives: early systems development and advanced supercomputing applications. NASA's role in supercomputing systems development is illustrated by a discussion of activities carried out by the Numerical Aerodynamic Simulation Program. Current capabilities in advanced technology applications are illustrated with examples in turbulence physics, aerodynamics, aerothermodynamics, chemistry, and structural mechanics. Capabilities in science applications are illustrated by examples in astrophysics and atmospheric modeling. Future directions and NASA's new High Performance Computing Program are briefly discussed.

    Open Source Software: From Open Science to New Marketing Models

    Keywords: Open Source Software; Intellectual Property; Licensing; Business Model.

    A photometricity and extinction monitor at the Apache Point Observatory

    An unsupervised software “robot” that automatically and robustly reduces and analyzes CCD observations of photometric standard stars is described. The robot measures extinction coefficients and other photometric parameters in real time and, more carefully, on the next day. It also reduces and analyzes data from an all-sky 10 μm camera to detect clouds; photometric data taken during cloudy periods are automatically rejected. The robot reports its findings back to observers and data analysts via the World-Wide Web. It can be used to assess photometricity and to build a record of site conditions. The robot's automated and uniform site monitoring represents a minimum standard for any observing site with queue scheduling, a public data archive, or likely participation in any future National Virtual Observatory. Comment: accepted for publication in A
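    As a rough illustration of the extinction measurement the robot automates, the standard relation m_inst - m_true = k * X + zp (instrumental minus catalog magnitude versus airmass X) can be fit to standard-star observations; the code and numbers below are a hypothetical sketch, not the robot's actual pipeline or data.

        # Hedged sketch: least-squares fit of an atmospheric extinction
        # coefficient k from standard-star observations, assuming
        # m_inst - m_true = k * X + zp.  Data values are invented.
        import numpy as np

        airmass = np.array([1.05, 1.20, 1.45, 1.80, 2.10])    # X per observation
        delta_mag = np.array([0.23, 0.26, 0.31, 0.38, 0.44])  # m_inst - m_true (mag)

        k, zp = np.polyfit(airmass, delta_mag, 1)              # slope, intercept
        print(f"extinction k = {k:.3f} mag/airmass, zero point = {zp:.3f} mag")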