
    A Taxonomy of Workflow Management Systems for Grid Computing

    With the advent of Grid and application technologies, scientists and engineers are building more and more complex applications to manage and process large data sets and to execute scientific experiments on distributed resources. Such application scenarios require means for composing and executing complex workflows. Therefore, many efforts have been made towards the development of workflow management systems for Grid computing. In this paper, we propose a taxonomy that characterizes and classifies the various approaches for building and executing workflows on Grids. We also survey several representative Grid workflow systems developed by projects world-wide to demonstrate the comprehensiveness of the taxonomy. The taxonomy not only highlights the design and engineering similarities and differences of state-of-the-art Grid workflow systems, but also identifies areas that need further research. (Comment: 29 pages, 15 figures)
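
    The workflow composition and execution described in this abstract can be pictured as a small dependency-driven scheduler. The Python sketch below is illustrative only, with hypothetical task names, and does not reproduce any of the surveyed Grid systems: it runs a workflow expressed as a directed acyclic graph of tasks, dispatching each task once its input dependencies have completed.

```python
# Minimal sketch (not any particular Grid workflow system): a workflow is
# modelled as a directed acyclic graph of tasks, and each task is dispatched
# once all of its input dependencies have completed.
from collections import defaultdict, deque

def run_workflow(tasks, deps):
    """tasks: {name: callable}; deps: {name: [names it depends on]}."""
    indegree = {t: len(deps.get(t, [])) for t in tasks}
    children = defaultdict(list)
    for t, parents in deps.items():
        for p in parents:
            children[p].append(t)
    ready = deque(t for t, d in indegree.items() if d == 0)
    while ready:
        t = ready.popleft()
        tasks[t]()                      # in a real system: submit to a Grid resource
        for c in children[t]:
            indegree[c] -= 1
            if indegree[c] == 0:
                ready.append(c)

# Hypothetical three-stage experiment: stage -> analyse -> publish.
run_workflow(
    {"stage": lambda: print("stage data"),
     "analyse": lambda: print("analyse"),
     "publish": lambda: print("publish")},
    {"analyse": ["stage"], "publish": ["analyse"]},
)
```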

    Geostatistical analysis of an experimental stratigraphy

    A high-resolution stratigraphic image of a flume-generated deposit was scaled up to sedimentary basin dimensions, and a natural log hydraulic conductivity (ln(K)) was assigned to each pixel on the basis of gray scale and conductivity end-members. The synthetic ln(K) map has a mean, variance, and frequency distribution comparable to those of a natural alluvial fan deposit. A geostatistical analysis was conducted on selected regions of this map containing fluvial, fluvial/floodplain, shoreline, turbidite, and deepwater sedimentary facies. Experimental ln(K) variograms were computed along the major and minor statistical axes and along the horizontal and vertical coordinate axes. Exponential and power-law variogram models were fitted to obtain an integral scale and a Hausdorff measure, respectively. We conclude that the shape of the experimental variogram depends on the problem size in relation to the size of the local-scale heterogeneity. At a given problem scale, a multilevel correlation structure results from constructing variograms with data pairs of mixed facies types. In multiscale sedimentary systems, stationary correlation structure may occur at separate scales, each corresponding to a particular hierarchy; the fitted integral scale thus becomes dependent on the problem size. The Hausdorff measure obtained has a range comparable to that of natural geological deposits. It increases from nonstratified to stratified deposits, with an approximate cutoff of 0.15, and it also increases as the number of facies incorporated in a problem increases. This implies that the fractal characteristics of sedimentary rocks are both depositional-process-dependent and problem-scale-dependent.
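
    As a rough illustration of the variogram analysis described above, the following Python sketch computes an experimental semivariogram along one axis of a synthetic correlated ln(K) field and fits the exponential model gamma(h) = sill * (1 - exp(-h/lambda)); the fitted lambda plays the role of the integral scale, while a power-law fit gamma(h) ~ h**(2H) would instead yield the Hausdorff measure H. The synthetic field, lag range, and starting values are assumptions for the example, not the paper's flume data.

```python
# Illustrative sketch only (synthetic field, not the paper's data): compute an
# experimental variogram along the horizontal axis of a 2-D ln(K) field and
# fit an exponential model to recover a correlation length (integral scale).
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
# Smoothed white noise as a stand-in for the scaled-up ln(K) image.
lnK = gaussian_filter(rng.normal(size=(200, 200)), sigma=5.0)

def experimental_variogram(field, max_lag):
    """Semivariance gamma(h) for horizontal lags 1..max_lag."""
    lags = np.arange(1, max_lag + 1)
    gamma = np.array([0.5 * np.mean((field[:, h:] - field[:, :-h]) ** 2)
                      for h in lags])
    return lags, gamma

def exponential_model(h, sill, lam):
    return sill * (1.0 - np.exp(-h / lam))

lags, gamma = experimental_variogram(lnK, max_lag=40)
(sill, lam), _ = curve_fit(exponential_model, lags, gamma, p0=[gamma.max(), 10.0])
print(f"sill ~ {sill:.4f}, integral scale ~ {lam:.1f} pixels")
# A power-law fit gamma(h) ~ h**(2H) would give the Hausdorff measure H instead.
```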

    Many-Task Computing and Blue Waters

    This report discusses many-task computing (MTC) generically and in the context of the proposed Blue Waters system, which is planned to be the largest NSF-funded supercomputer when it begins production use in 2012. The aim of this report is to inform the BW project about MTC, including understanding aspects of MTC applications that can be used to characterize the domain and understanding the implications of these aspects for middleware and policies. Many MTC applications do not neatly fit the stereotypes of high-performance computing (HPC) or high-throughput computing (HTC) applications. Like HTC applications, MTC applications are by definition structured as graphs of discrete tasks, with explicit input and output dependencies forming the graph edges. However, MTC applications have significant features that distinguish them from typical HTC applications; in particular, different engineering constraints for hardware and software must be met in order to support them. HTC applications have traditionally run on platforms such as grids and clusters, through either workflow systems or parallel programming systems. MTC applications, in contrast, will often demand a short time to solution, may be communication intensive or data intensive, and may comprise very short tasks. Therefore, hardware and software for MTC must be engineered to support the additional communication and I/O and must minimize task dispatch overheads. The hardware of large-scale HPC systems, with its high degree of parallelism and support for intensive communication, is well suited to MTC applications. However, HPC systems often lack dynamic resource provisioning, are not ideal for task communication via the file system, and have I/O systems that are not optimized for MTC-style applications. Hence, additional software support is likely to be required to gain full benefit from the HPC hardware.
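
    The dispatch-overhead point can be made concrete with a toy experiment: the Python sketch below pushes many very short tasks through a local thread pool and compares the measured wall time with the ideal, showing how per-task dispatch cost dominates when tasks are short. The pool size, task duration, and task count are arbitrary assumptions, and this is a local stand-in, not Blue Waters middleware.

```python
# Toy illustration of MTC-style dispatch overhead: many very short tasks
# submitted to a local pool; the gap between measured and ideal time is
# the per-task overhead that MTC middleware must minimize.
import time
from concurrent.futures import ThreadPoolExecutor

def short_task(task_ms=1):
    time.sleep(task_ms / 1000.0)        # stand-in for a very short MTC task
    return task_ms

n_tasks, workers = 2000, 8
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=workers) as pool:
    list(pool.map(short_task, [1] * n_tasks))
elapsed = time.perf_counter() - start

ideal = n_tasks * 0.001 / workers
print(f"elapsed {elapsed:.2f}s vs ideal {ideal:.2f}s; "
      f"overhead per task ~ {(elapsed - ideal) / n_tasks * 1e3:.3f} ms")
```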

    An autonomous satellite architecture integrating deliberative reasoning and behavioural intelligence

    This paper describes a method for the design of autonomous spacecraft based upon behavioural approaches to intelligent robotics. First, a number of previous spacecraft automation projects are reviewed. A methodology for the design of autonomous spacecraft is then presented, drawing upon both the European Space Research and Technology Centre (ESTEC) automation and robotics methodology and the subsumption architecture for autonomous robots. A layered competency model for autonomous orbital spacecraft is proposed. A simple example of low-level competencies and their interaction is presented in order to illustrate the methodology. Finally, the general principles adopted for the control hardware design of the AUSTRALIS-1 spacecraft are described. This system will provide an orbital experimental platform for spacecraft autonomy studies, supporting the exploration of different logical control models, different computational metaphors within the behavioural control framework, and different mappings from the logical control model to its physical implementation.
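
    The layered competency idea can be sketched as a simple priority arbitration between behaviours, where a low-level survival competency takes precedence over a higher-level mission competency. The competencies, state fields, and fixed-priority arbitration in the Python sketch below are hypothetical simplifications of subsumption-style control, not the AUSTRALIS-1 design.

```python
# Minimal sketch of a layered (subsumption-style) competency model with
# hypothetical spacecraft behaviours; lower layers are listed first and
# take precedence, a simplification of subsumption's suppress/inhibit wiring.
class Competency:
    def propose(self, state):
        """Return a command dict, or None if this layer has nothing to say."""
        raise NotImplementedError

class MaintainPower(Competency):          # low-level survival competency
    def propose(self, state):
        if state["battery"] < 0.2:
            return {"mode": "sun_point"}
        return None

class DownlinkData(Competency):           # higher-level mission competency
    def propose(self, state):
        if state["ground_station_visible"]:
            return {"mode": "antenna_point"}
        return None

def arbitrate(layers, state):
    """First (lowest) layer with a proposal wins; otherwise stay idle."""
    for layer in layers:
        cmd = layer.propose(state)
        if cmd is not None:
            return cmd
    return {"mode": "idle"}

layers = [MaintainPower(), DownlinkData()]
print(arbitrate(layers, {"battery": 0.1, "ground_station_visible": True}))  # sun_point
print(arbitrate(layers, {"battery": 0.9, "ground_station_visible": True}))  # antenna_point
```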