
    DART-MPI: An MPI-based Implementation of a PGAS Runtime System

    A Partitioned Global Address Space (PGAS) approach treats a distributed system as if its memory were shared at a global level. Given such a global view of memory, the user can program applications much as on a shared-memory system. This greatly simplifies the task of developing parallel applications, because no explicit communication has to be specified in the program for data exchange between computing nodes. In this paper we present DART, a runtime environment that implements the PGAS paradigm on large-scale high-performance computing clusters. A specific feature of our implementation is the use of the one-sided communication of the Message Passing Interface (MPI) version 3 (i.e., MPI-3) as the underlying communication substrate. We evaluated the performance of the implementation with several low-level kernels in order to determine overheads and limitations in comparison to the underlying MPI-3.
    Comment: 11 pages, International Conference on Partitioned Global Address Space Programming Models (PGAS14)
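
    To make the substrate concrete, the sketch below shows the kind of MPI-3 one-sided communication DART builds on: each rank exposes a window of memory that a remote rank can write with a put, with no matching receive posted. It uses the mpi4py bindings, and the variable names are illustrative only; it is not DART's actual API.

    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    # Each rank exposes one double in a globally addressable window.
    local = np.zeros(1, dtype='d')
    win = MPI.Win.Create(local, comm=comm)

    # Rank 0 writes directly into rank 1's memory; rank 1 posts no receive.
    win.Fence()
    if rank == 0 and comm.Get_size() > 1:
        win.Put(np.array([42.0]), 1)   # origin buffer, target rank
    win.Fence()

    if rank == 1:
        print("rank 1 sees", local[0])  # prints 42.0
    win.Free()

    A fence-delimited epoch is the simplest MPI-3 synchronization mode; PGAS-style runtimes typically also rely on passive-target locking (MPI_Win_lock/MPI_Win_unlock) so that a put or get needs no participation at all from the target rank.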

    IRAF in the nineties

    The Image Reduction and Analysis Facility (IRAF) data reduction and analysis system has been around since 1981. Today it is a mature system with hundreds of applications, and it is supported on all the major platforms. Many institutions, projects, and individuals around the US and around the world have developed software for IRAF; some of these packages are comparable in size to the IRAF core system itself. IRAF is both a data analysis system and a programming environment. As a data analysis system it can be easily installed by a user at a remote site and immediately used to view and process data. As a programming environment it contains a wealth of high- and low-level facilities for developing new applications for interactive and automated processing of astronomical or other data. As important as the applications programs and user interfaces are to the scientist using IRAF, the heart of the system is the programming environment, which determines to a large extent the types of applications that can be built within IRAF, what they will look like, and how they will interact with one another and with the user. While applications can easily be added to or removed from a software system, the programming environment must remain fairly stable, with carefully planned evolution and growth, over the lifetime of the system. The IRAF programming environment is the framework on which the rest of the IRAF system is built. The IRAF programming environment as it exists in 1992 and the work currently underway to enhance it are discussed. The structure of the programming environment as a class hierarchy is described, with emphasis on the work being done on the image data structures, the graphics and image display interfaces, and the user interfaces. The new technologies which we feel IRAF must deal with successfully over the coming years are discussed. Finally, a preview of what IRAF might look like to the user by the end of the decade is presented.

    Architecture independent environment for developing engineering software on MIMD computers

    Engineers are constantly faced with solving problems of increasing complexity and detail. Multiple Instruction stream, Multiple Data stream (MIMD) computers have been developed to overcome the performance limitations of serial computers. The hardware architectures of MIMD computers vary considerably and are much more sophisticated than those of serial computers. Developing large-scale software for a variety of MIMD computers is difficult and expensive, so there is a need for tools that facilitate programming these machines. First, the issues that must be considered in developing such tools are examined. The two main areas of concern are architecture independence and data management. Architecture-independent software facilitates software portability and improves the longevity and utility of the software product; it provides some insurance for the investment of time and effort that goes into developing the software. The management of data is a crucial aspect of solving large engineering problems, and it must be considered in light of the new hardware organizations that are available. Second, the functional design and implementation of a software environment that facilitates developing architecture-independent software for large engineering applications are described. The topics of discussion include: a description of the model that supports the development of architecture-independent software; identifying and exploiting concurrency within the application program; data coherence; and engineering database and memory management.

    Applications of artificial intelligence to mission planning

    The scheduling problem facing NASA-Marshall mission planning is extremely difficult for several reasons. The most critical factor is the computational complexity involved in developing a schedule: the search space is large along some dimensions and infinite along others. Because of this and other difficulties, many conventional operations research techniques are infeasible or inadequate for solving these problems by themselves. The purpose, therefore, is to examine various artificial intelligence (AI) techniques that can assist or replace the conventional techniques. The specific tasks performed were as follows: (1) to identify mission planning applications for object-oriented and rule-based programming; (2) to investigate interfacing AI-dedicated hardware (Lisp machines) to VAX hardware; (3) to demonstrate how Lisp may be called from within FORTRAN programs; (4) to investigate and report on programming techniques used in some commercial AI shells, such as the Knowledge Engineering Environment (KEE); and (5) to study and report on algorithmic methods for reducing complexity as related to AI techniques.

    Taxonomy of an application model: Toward building large scale, connected vehicle applications

    With the advent of advanced computing systems beyond personal computing, such as mobile computing, cloud computing, and, recently, vehicular ad-hoc networks, it is crucial that we understand the application development process for each of these systems. A better understanding of how applications are built in different environments allows us to design better application models and system support for developers. This paper studies the taxonomy of application models and defines its constituent aspects, namely application scope, application abstraction level, application structure, communication model, and programming model. With this better understanding of application models in general, we lay out the requirements for developing a class of large-scale, connected vehicle applications.
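
    As a rough illustration of how the five taxonomy aspects might be captured in practice, the sketch below renders them as a record type. The field names follow the abstract; the type itself and all example values are invented here, not taken from the paper.

    from dataclasses import dataclass

    @dataclass
    class ApplicationModel:
        scope: str                # e.g. "single vehicle", "regional", "global"
        abstraction_level: str    # e.g. "raw sensor access" vs. "high-level service"
        structure: str            # e.g. "monolithic", "client-server", "peer-to-peer"
        communication_model: str  # e.g. "V2V broadcast", "V2I via cellular"
        programming_model: str    # e.g. "event-driven", "dataflow", "publish-subscribe"

    # Classifying a hypothetical platooning application along the five aspects:
    platooning = ApplicationModel(
        scope="regional",
        abstraction_level="high-level service",
        structure="peer-to-peer",
        communication_model="V2V broadcast",
        programming_model="event-driven",
    )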

    Shaking Up Traditional Training With Lynda.com

    Supporting the diverse technology training needs on campus while resources continue to dwindle is a challenge many of us continue to tackle. Institutions from small liberal arts campuses to large research universities are providing individualized training and application support 24/7 by subscribing to the lynda.com Online Training Library® and marketing the service to various combinations of faculty, staff, and students. As a supplemental service on most of our campuses, lynda.com has allowed us to extend support to those unable to attend live lab-based training, those who want advanced-level training, those who want training on specialized applications, and those who want to learn applications that are not in high demand. The service also provides cost-effective professional development opportunities for everyone on campus: for our own trainers and technology staff, who are developing new workshops, learning new software versions, or picking up new areas of expertise from project management to programming, and for administrative and support staff who are trying to improve their skills in an ever-tighter economic environment. In this panel discussion, you will hear about different licensing approaches, ways of raising awareness about lynda.com on our campuses, lessons learned through implementation, reporting capabilities, and advice we would give to other campuses looking to offer this service.

    Simplifying Context-Aware Agent Coordination Using Context-Sensitive Data Structures

    Context-aware computing, an emerging paradigm in which applications sense and adapt their behavior to changes in their operational environment, is key to developing dependable agent-based software systems for use in the often unpredictable settings of ad hoc networks. However, designing an application agent that interacts with other agents to gather, maintain, and adapt to context can be a difficult undertaking in an open and continuously changing environment, even for a seasoned programmer. Our goal is to simplify the programming task by hiding the details of agent coordination from the programmer, allowing one to quickly and reliably produce a context-aware application agent for use in large-scale ad hoc networks. With this goal in mind, we introduce a novel abstraction called context-sensitive data structures (CSDS). The programmer interacts with the CSDS through a familiar programming interface, without direct knowledge of the context gathering and maintenance tasks that occur behind the scenes. In this paper, we define a model of context-sensitive data structures, and we identify key requirements and issues associated with building an infrastructure to support their development.
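
    The paper defines CSDS abstractly, so the following sketch only illustrates the general idea: a collection that presents an ordinary read-only interface while a background task silently refreshes its contents from a pluggable context source. Every name here (ContextSensitiveList, sense_fn) is hypothetical, not the paper's API.

    import threading
    import time

    class ContextSensitiveList:
        """Reads like an ordinary list; its contents track external context."""

        def __init__(self, sense_fn, refresh_s=1.0):
            # sense_fn: callable returning the current context, e.g. the set of
            # agents currently reachable on the ad hoc network.
            self._sense_fn = sense_fn
            self._items = []
            self._lock = threading.Lock()
            worker = threading.Thread(target=self._refresh, args=(refresh_s,),
                                      daemon=True)
            worker.start()

        def _refresh(self, period):
            # Context gathering and maintenance happen behind the scenes.
            while True:
                snapshot = list(self._sense_fn())
                with self._lock:
                    self._items = snapshot
                time.sleep(period)

        def __iter__(self):
            with self._lock:  # iterate over a snapshot, safe under refresh
                return iter(list(self._items))

        def __len__(self):
            with self._lock:
                return len(self._items)

    An agent would then write "for peer in nearby_agents: ..." exactly as over a plain list, never touching the discovery protocol itself, which is the coordination-hiding the abstract describes.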

    Iterative MapReduce for Large Scale Machine Learning

    Large datasets ("Big Data") are becoming ubiquitous because the potential value in deriving insights from data, across a wide range of business and scientific applications, is increasingly recognized. In particular, machine learning - one of the foundational disciplines for data analysis, summarization and inference - on Big Data has become routine at most organizations that operate large clouds, usually based on systems such as Hadoop that support the MapReduce programming paradigm. It is now widely recognized that while MapReduce is highly scalable, it suffers from a critical weakness for machine learning: it does not support iteration. Consequently, one has to program around this limitation, leading to fragile, inefficient code. Further, reliance on the programmer is inherently flawed in a multi-tenanted cloud environment, since the programmer does not have visibility into the state of the system when his or her program executes. Prior work has sought to address this problem by either developing specialized systems aimed at stylized applications, or by augmenting MapReduce with ad hoc support for saving state across iterations (driven by an external loop). In this paper, we advocate support for looping as a first-class construct, and propose an extension of the MapReduce programming paradigm called {\em Iterative MapReduce}. We then develop an optimizer for a class of Iterative MapReduce programs that cover most machine learning techniques, provide theoretical justifications for the key optimization steps, and empirically demonstrate that system-optimized programs for significant machine learning tasks are competitive with state-of-the-art specialized solutions

    NYMPH: A multiprocessor for manipulation applications

    The robotics group of the Stanford Artificial Intelligence Laboratory is currently developing a new computational system for robotics applications. Stanford's NYMPH system uses multiple NSC 32016 processors and one MC68010-based processor sharing a common Intel Multibus. The 32016 processors provide the raw computational power needed for advanced robotics applications, and the 68010 provides a pleasant interface to the rest of the world. Software has been developed to provide useful communication and synchronization primitives without consuming excessive processor resources or bus bandwidth. NYMPH provides both a large amount of computing power and a good programming environment, making it an effective research tool.