
    KSC's work flow assistant

    The work flow assistant (WFA) is an advanced technology project under the shuttle processing data management system (SPDMS) at Kennedy Space Center (KSC). It will be used for short-range scheduling, controlling work flow on the floor, and providing near real-time status for all major space transportation system (STS) work centers at KSC. It will increase personnel and STS safety and improve productivity through deeper active scheduling that includes tracking and correlating STS and ground support equipment (GSE) configuration and work, and it will make this data more accessible. WFA defines a standards concept for scheduling data that permits both commercial off-the-shelf (COTS) scheduling tools and WFA-developed applications to be reused. WFA will use industry-standard languages and workstations to achieve a scalable, adaptable, and portable architecture that may be used at other sites.
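
    The abstract does not specify WFA's scheduling-data format, but the standards concept it describes, a common schema that both COTS scheduling tools and WFA-built applications can consume, can be illustrated with a minimal sketch. All names below (WorkOrder, work_center, gse_config) are hypothetical, not the actual WFA schema:

```python
# Hypothetical sketch of a tool-neutral scheduling-data record; the real
# WFA standard is not described in the abstract.
from dataclasses import dataclass, field
from enum import Enum

class Status(Enum):
    PLANNED = "planned"
    IN_WORK = "in_work"
    COMPLETE = "complete"

@dataclass
class WorkOrder:
    order_id: str
    work_center: str                # e.g. an STS work center at KSC
    task: str
    status: Status = Status.PLANNED
    gse_config: list = field(default_factory=list)  # GSE items tied to the task

    def to_record(self) -> dict:
        """Export in a neutral form a COTS scheduler could import."""
        return {
            "order_id": self.order_id,
            "work_center": self.work_center,
            "task": self.task,
            "status": self.status.value,
            "gse_config": self.gse_config,
        }
```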

    Effect of Data Flow Architecture on Programming Language Design

    This study is concerned with aspects of data flow architecture. A survey of data flow processors is presented. Two broad classes of languages, procedural and applicative, are considered for language design targeting data flow architecture. Starting from the basic data flow program representation, the study extends to high-level languages. A method for translating a conventional language to a data flow representation is presented, with consideration given to conventional structured languages. A general discussion of the usage of applicative language classes is presented without considering specific syntax. The material presented can be extended to specific syntax design, and its practical use can be studied from the given general discussions.
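
    As a concrete illustration of the kind of translation the study discusses, here is a minimal hypothetical sketch: a conventional expression is turned into data-flow nodes that fire as soon as all their operands have arrived, rather than executing in textual statement order:

```python
# Minimal data-flow interpreter sketch: a node fires when all operands arrive.
import operator

class Node:
    def __init__(self, op, arity):
        self.op = op
        self.args = [None] * arity
        self.received = 0
        self.consumers = []   # (node, operand slot) pairs fed by this node

    def send(self, slot, value, ready):
        self.args[slot] = value
        self.received += 1
        if self.received == len(self.args):  # firing rule: all operands present
            ready.append(self)

def run(inputs, sink):
    """inputs: list of (node, slot, value) initial tokens."""
    ready = []
    for node, slot, value in inputs:
        node.send(slot, value, ready)
    while ready:
        node = ready.pop()
        result = node.op(*node.args)
        for consumer, slot in node.consumers:
            consumer.send(slot, result, ready)
    return result  # value produced by the last node to fire (the sink)

# y = (a + b) * (a - c), translated into three data-flow nodes:
add, sub, mul = Node(operator.add, 2), Node(operator.sub, 2), Node(operator.mul, 2)
add.consumers = [(mul, 0)]
sub.consumers = [(mul, 1)]
print(run([(add, 0, 1), (add, 1, 2), (sub, 0, 1), (sub, 1, 4)], mul))  # (1+2)*(1-4) = -9
```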

    Computer architectures for functional and logic languages

    In recent years interest in functional and logic languages has grown considerably. Both classes of language offer advantages for programming and have an influential group of people promoting them. As yet no consensus has formed as to which class is best, and such a consensus may never form. Future general-purpose computer architectures may well be required to support both classes of language efficiently. Novel architectures designed to support both classes of language could even add impetus to the area of hybrid functional/logic languages. Treleaven et al. [68] have proposed a classification of the computational mechanisms which they believe underlie several types of novel computer architecture (i.e. control flow, data flow and reduction). The classification partitions novel general-purpose architectures into the following classes: control driven, where a statement is executed when it is selected by flow(s) of control; data driven, where a statement is executed when some combination of its arguments is available; and demand driven, where a statement is executed when the result it produces is needed by another, already active instruction. This thesis investigates the efficient support of both functional and logic languages using an architecture that attempts to be general purpose by embodying all the mechanisms that underlie the above classification. A novel packet communication architecture is presented which integrates the control driven, data driven and demand driven computational mechanisms. A software emulator for the machine was used as the basis for separate implementations of functional and logic languages, which were in turn used to evaluate the effectiveness of the computational mechanisms described in the classification. These mechanisms allowed functional languages to be implemented with ease, but caused severe problems when used to support logic languages. The difficulties with these mechanisms are taken as signifying that they do not provide adequate support for logic languages. The problems encountered led to the development of a novel implementation technique for logic languages, which also proved to be a good basis for a combined functional and logic model. This model is believed to provide a sound foundation for a parallel computer system that would support functional and logic languages with equal elegance and efficiency, and would therefore also support hybrid languages. The design for such a computer is described at the end of this thesis.
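
    The classification's firing rules lend themselves to a compact sketch. The following hypothetical Python fragment contrasts data-driven firing, where a statement executes once its arguments have arrived, with demand-driven firing, where it executes only when its result is requested; it illustrates the classification only, not the thesis's packet-communication machine:

```python
# Two firing rules from the classification (illustrative analogy only).
import operator

# Data driven: a node executes once all of its arguments have arrived.
class DataDrivenNode:
    def __init__(self, op, arity):
        self.op, self.args, self.pending = op, {}, arity

    def arrive(self, slot, value):
        self.args[slot] = value
        self.pending -= 1
        if self.pending == 0:            # all arguments available: fire now
            return self.op(*(self.args[i] for i in range(len(self.args))))

# Demand driven: a node executes only when its result is needed (lazy).
class DemandDrivenNode:
    def __init__(self, op, *suppliers):
        self.op, self.suppliers = op, suppliers
        self.value = None

    def demand(self):
        if self.value is None:           # evaluate on first demand only
            self.value = self.op(*(s.demand() for s in self.suppliers))
        return self.value

class Literal:
    def __init__(self, value): self.value = value
    def demand(self): return self.value

tree = DemandDrivenNode(operator.mul,
                        DemandDrivenNode(operator.add, Literal(1), Literal(2)),
                        Literal(4))
print(tree.demand())  # 12: nothing is computed until the root result is demanded
```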

    HardScope: Thwarting DOP with Hardware-assisted Run-time Scope Enforcement

    Widespread use of memory-unsafe programming languages (e.g., C and C++) leaves many systems vulnerable to memory corruption attacks. A variety of defenses have been proposed to mitigate attacks that exploit memory errors to hijack the control flow of the code at run-time, e.g., (fine-grained) randomization or Control Flow Integrity. However, recent work on data-oriented programming (DOP) demonstrated highly expressive (Turing-complete) attacks, even in the presence of these state-of-the-art defenses. Although multiple real-world DOP attacks have been demonstrated, no efficient defenses are yet available. We propose run-time scope enforcement (RSE), a novel approach designed to efficiently mitigate all currently known DOP attacks by enforcing compile-time memory safety constraints (e.g., variable visibility rules) at run-time. We present HardScope, a proof-of-concept implementation of hardware-assisted RSE for the new RISC-V open instruction set architecture. We discuss our systematic empirical evaluation of HardScope, which demonstrates that it can mitigate all currently known DOP attacks and has a real-world performance overhead of 3.2% in embedded benchmarks.
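
    HardScope itself enforces these rules in hardware via extensions to RISC-V, and the abstract does not give the ISA details. The core RSE idea, that every memory access at run-time must fall within storage the currently executing scope is entitled to, can still be sketched in software. Everything below (the instrumentation calls and region names) is a hypothetical analogy, not the HardScope design:

```python
# Software analogy of run-time scope enforcement (RSE); the real HardScope
# performs these checks in hardware on each memory access.
class ScopeViolation(Exception):
    pass

class RSEMonitor:
    def __init__(self):
        self.scopes = []                 # stack: one set of permitted regions per frame

    def enter(self, *regions):           # instrumented at function entry;
        self.scopes.append(set(regions)) # explicitly delegated storage is passed in here

    def leave(self):                     # instrumented at function return
        self.scopes.pop()

    def check(self, region):             # instrumented at every load/store
        if region not in self.scopes[-1]:
            raise ScopeViolation(f"access to {region!r} outside the active scope")

rse = RSEMonitor()
rse.enter("main.locals")
rse.check("main.locals")                 # in scope: allowed
rse.enter("callee.locals")               # note: main.locals was not delegated
try:
    rse.check("main.locals")             # a DOP-style out-of-scope access
except ScopeViolation as e:
    print("blocked:", e)
rse.leave()
```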

    On Extracting Coarse-Grained Function Parallelism from C Programs

    To efficiently utilize emerging heterogeneous multi-core architectures, it is essential to exploit the inherent coarse-grained parallelism in applications. In addition to data parallelism, applications like telecommunication, multimedia, and gaming can also benefit from the exploitation of coarse-grained function parallelism. To exploit coarse-grained function parallelism, the common wisdom is to rely on programmers to explicitly express the coarse-grained data-flow between coarse-grained functions using data-flow or streaming languages. This research explores another approach to exploiting coarse-grained function parallelism: relying on the compiler to extract coarse-grained data-flow from imperative programs. We believe imperative languages and the von Neumann programming model will remain the dominant programming languages and programming model in the future. This dissertation discusses the design and implementation of a memory data-flow analysis system which extracts coarse-grained data-flow from C programs. The memory data-flow analysis system partitions a C program into a hierarchy of program regions. It then traverses the program region hierarchy from the bottom up, summarizing the exposed memory access patterns of each program region while deriving conservative producer-consumer relations between program regions. An ensuing top-down traversal of the program region hierarchy refines the producer-consumer relations by pruning spurious relations. We built an in-lining-based prototype of the memory data-flow analysis system on top of the IMPACT compiler infrastructure and applied it to analyze the memory data-flow of several MediaBench programs. The experimental results showed that while the prototype performed reasonably well for the tested programs, the in-lining-based implementation may not be efficient for larger programs. There is also still room for improving the effectiveness of the memory data-flow analysis system. We performed root-cause analysis of the inaccuracies in the memory data-flow analysis results, which gave us insights into how to improve the system in the future.
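
    The bottom-up summarization the dissertation describes can be sketched abstractly: each region folds in the memory reads and writes exposed by its children, and a conservative producer-consumer edge is drawn from region A to region B whenever something A may write is something B may read. The region representation below is hypothetical, and the kill analysis and top-down pruning pass are omitted for brevity:

```python
# Hypothetical sketch of bottom-up memory data-flow summarization.
class Region:
    def __init__(self, name, reads=(), writes=(), children=()):
        self.name = name
        self.reads, self.writes = set(reads), set(writes)
        self.children = list(children)

    def summarize(self):
        """Bottom-up pass: fold children's exposed accesses into this region."""
        for child in self.children:
            child.summarize()
            self.reads |= child.reads
            self.writes |= child.writes

def producer_consumer(regions):
    """Conservative edges: A produces for B if A may write what B may read."""
    return [(a.name, b.name)
            for a in regions for b in regions
            if a is not b and a.writes & b.reads]

decode = Region("decode", reads={"bitstream"}, writes={"frame"})
filt   = Region("filter", reads={"frame"}, writes={"frame_out"})
emit   = Region("emit",   reads={"frame_out"}, writes={"file"})
top = Region("pipeline", children=[decode, filt, emit])
top.summarize()
print(producer_consumer([decode, filt, emit]))
# [('decode', 'filter'), ('filter', 'emit')] — a coarse-grained pipeline
```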

    Database integrated analytics using R: initial experiences with SQL-Server + R

    Most data scientists nowadays use functional or semi-functional languages like SQL, Scala or R to treat data obtained directly from databases. Such a process requires fetching the data, processing it, and storing it again, and it tends to be done outside the DB, in often complex data-flows. Recently, database service providers have decided to integrate “R-as-a-Service” in their DB solutions. The analytics engine is called directly from the SQL query tree, and results are returned as part of the same query. Here we show a first taste of such technology by testing the portability of our ALOJA-ML analytics framework, coded in R, to Microsoft SQL-Server 2016, one of the recently released SQL+R solutions. In this work we discuss some data-flow schemes for porting a local DB + analytics engine architecture towards Big Data, focusing especially on the new DB Integrated Analytics approach, and commenting on the first experiences in usability and performance obtained from these new services and capabilities.
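
    The mechanism the paper tests, running R inside the SQL Server 2016 engine so results come back through the same query, is exposed through the sp_execute_external_script stored procedure. A minimal sketch is below; the table, column names and connection string are hypothetical, and this shows the general service rather than the ALOJA-ML code:

```python
# Calling SQL Server 2016's in-database R service from Python via pyodbc.
# Table name, column names and connection string are hypothetical.
import pyodbc

TSQL = """
EXEC sp_execute_external_script
    @language = N'R',
    @script = N'OutputDataSet <- data.frame(mean_time = mean(InputDataSet$exec_time))',
    @input_data_1 = N'SELECT exec_time FROM aloja_runs'
WITH RESULT SETS ((mean_time FLOAT));
"""

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;"
    "DATABASE=aloja;Trusted_Connection=yes;"
)
row = conn.cursor().execute(TSQL).fetchone()
print("mean execution time:", row.mean_time)  # R result returned by the same query
```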

    Prototyping the recursive internet architecture: the IRATI project approach

    In recent years, many new Internet architectures have been proposed to solve shortcomings in the current Internet. Many of these new architectures merely extend the current TCP/IP architecture and hence do not address the fundamental cause of these problems. The Recursive Internet Architecture (RINA) is a truly new network architecture, developed from scratch and building on lessons learned in the past. RINA prototyping efforts have been ongoing since 2010, but a prototype on which a commercial RINA implementation can be built has not yet been developed. The goal of the IRATI research project is to develop and evaluate such a prototype in the Linux OS. This article focuses on the software design required to implement a network stack in Linux. We motivate the placement of, and communication between, the different software components in either the kernel or user space. The first open source prototype of the IRATI implementation of RINA will be available in June 2014 for researchers, developers, and early adopters.