52 research outputs found

    Web-Based Spatial Decision Support System and Watershed Management with a Case Study

    In order to maintain a proper balance between development pressure and water resources protection, and to improve public participation, efficient tools and techniques for soil and water conservation projects are needed. This paper describes the development and application of a web-based watershed management spatial decision support system, WebWMPI. The WebWMPI uses the Watershed Management Priority Indices (WMPI) approach, a prioritization method for watershed management planning that integrates land use/cover, hydrological data, soils, slope, roads, and other spatial data. The land is divided into three categories: Conservation Priority Index (CPI) land, Restoration Priority Index (RPI) land, and Stormwater Management Priority Index (SMPI) land. Within each category, spatial factors are rated according to their influence on water resources, and critical areas can be identified for soil conservation and for water quality protection and improvement. The WebWMPI has user-friendly client-side graphical interfaces that enable the public to interactively run the server-side Geographic Information System and evaluate different scenarios for watershed planning and management. The system was applied to the Dry Run Creek watershed (Cedar Falls, Iowa, US) as a demonstration, and it can easily be used in other watersheds to prioritize critical areas and to increase public participation in soil and water conservation projects.
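    A minimal sketch of the kind of weighted-factor calculation a WMPI-style index implies, assuming each land parcel is scored as a weighted sum of its factor ratings; the factor names, rating scale, and weights below are invented for illustration and are not taken from WebWMPI:

    // Toy sketch of a WMPI-style priority calculation: a parcel is rated on
    // several spatial factors (land cover, slope, soils, stream proximity) and
    // a weighted sum yields a priority score for one category (CPI, RPI, or SMPI).
    // Factor names and weights are illustrative, not values used by WebWMPI.
    import java.util.Map;

    public class PriorityIndexSketch {

        // Weighted sum of factor ratings; higher scores mark more critical areas.
        static double priorityIndex(Map<String, Double> ratings, Map<String, Double> weights) {
            double score = 0.0;
            for (Map.Entry<String, Double> e : ratings.entrySet()) {
                score += e.getValue() * weights.getOrDefault(e.getKey(), 0.0);
            }
            return score;
        }

        public static void main(String[] args) {
            // Hypothetical ratings (0-10) for one parcel evaluated for conservation priority.
            Map<String, Double> ratings = Map.of("landCover", 8.0, "slope", 6.0,
                    "soilErodibility", 7.0, "streamProximity", 9.0);
            Map<String, Double> weights = Map.of("landCover", 0.3, "slope", 0.2,
                    "soilErodibility", 0.2, "streamProximity", 0.3);
            System.out.printf("CPI score: %.2f%n", priorityIndex(ratings, weights));
        }
    }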

    An evaluation of Java implementations of message-passing


    Web-based Spatial Decision Support Systems (WebSDSS): Evolution, Architecture, Examples and Challenges

    Spatial Decision Support Systems (SDSS), which support spatial analysis and decision making, are currently receiving much attention. Research on SDSS originated from two distinct sources, namely the GIS community and the DSS community. The synergy between these two research groups has led to the adoption of state-of-the-art technical solutions and the development of sophisticated SDSS that satisfy the needs of geographers and top-level decision makers. Recently, the Web has added a new dimension to SDSS, and Web-based SDSS (WebSDSS) are being developed in a number of application domains. This article provides an overview of the emergence of SDSS, its architecture and applications, and discusses some of the enabling technologies and research challenges for future SDSS development and deployment.

    MPJ: MPI-like message passing for Java

    Recently, there has been a lot of interest in using Java for parallel programming. Efforts have been hindered by the lack of standard Java parallel programming APIs. To alleviate this problem, various groups started projects to develop Java message-passing systems modelled on the successful Message Passing Interface (MPI). Official MPI bindings are currently defined only for C, Fortran, and C++, so early MPI-like environments for Java have been divergent. This paper relates an effort undertaken by a working group of the Java Grande Forum to seek consensus on an MPI-like API and thereby enhance the viability of parallel programming in Java.
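    To make the programming model concrete, here is a self-contained toy in plain Java threads that mimics blocking point-to-point send/receive between two "ranks". It deliberately avoids naming any actual MPJ or mpiJava classes, since the abstract does not spell out their signatures; only the style of the model is illustrated:

    // Toy illustration of the point-to-point send/receive model that MPI-like
    // Java APIs standardize. This is NOT the MPJ API: it emulates two ranks
    // with threads and a blocking channel, purely to show the programming style.
    import java.util.concurrent.SynchronousQueue;

    public class MessagePassingSketch {
        // One channel for this (source -> destination) pair; a real MPI binding
        // would route by communicator, rank, and tag.
        static final SynchronousQueue<int[]> channel = new SynchronousQueue<>();

        public static void main(String[] args) throws InterruptedException {
            Thread rank0 = new Thread(() -> {
                try {
                    channel.put(new int[] {1, 2, 3, 4}); // blocking send, in the spirit of MPI_Send
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });

            Thread rank1 = new Thread(() -> {
                try {
                    int[] received = channel.take();     // blocking receive, in the spirit of MPI_Recv
                    System.out.println("rank 1 received " + received.length + " ints");
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });

            rank0.start();
            rank1.start();
            rank0.join();
            rank1.join();
        }
    }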

    Using Contaminated Garbage Collection and Reference Counting Garbage Collection to Provide Automatic Storage Reclamation for Real-Time Systems

    Language support for dynamic storage management simplifies the application programming task immensely. As a result, dynamic storage allocation and garbage collection have become common in general-purpose computing. Garbage collection research has led to the development of algorithms for locating program memory that is no longer in use and returning the unused memory to the run-time system for later use by the program. While many programming languages have adopted automatic memory reclamation features, this has not been the trend in Real-Time systems. Many garbage collection methods involve some form of marking the objects in memory. This marking requires time proportional to the size of the heap to complete, so the predictability constraints of Real-Time systems are often not satisfied by such approaches. In this thesis, we present an analysis of several approaches to program garbage collection. We examine two approximate collection strategies (Reference Counting and Contamination Garbage Collection) and one complete collection approach (Mark and Sweep Garbage Collection), and we analyze the relative success of each approach in meeting the demands of Real-Time computing. In addition, we present an algorithm that attempts to classify object types as good candidates for reference counting; our approach is conservative and uses static analysis of an application's type system. Our analysis of these three collection strategies leads to the observation that there could be benefits to using multiple garbage collectors in parallel, and we consequently address the challenges associated with using multiple garbage collectors in one application.
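    As a purely didactic sketch of the reference-counting strategy the thesis examines (real Java objects are reclaimed by the JVM's tracing collector, not by counts), the toy class below "reclaims" its cell the moment the count reaches zero, which is the bounded, constant-time behaviour that makes the approach attractive for Real-Time predictability:

    // Didactic sketch of reference counting, one of the approximate strategies
    // the thesis analyzes. The named cell and the println stand in for a heap
    // object and its reclamation; the JVM itself does not work this way.
    public class RefCountSketch {

        static class Counted {
            private int refCount = 0;
            private final String name;

            Counted(String name) { this.name = name; }

            void retain() { refCount++; }

            void release() {
                // Work per release is constant, which is why reference counting
                // suits Real-Time predictability (cycles are the known weakness).
                if (--refCount == 0) {
                    System.out.println(name + " reclaimed immediately");
                }
            }
        }

        public static void main(String[] args) {
            Counted cell = new Counted("cell A");
            cell.retain();   // first reference created
            cell.retain();   // second reference created
            cell.release();  // one reference dropped; object survives
            cell.release();  // last reference dropped; reclaimed right away
        }
    }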

    Memory-Accessing Optimization Via Gestures

    We identify common storage-referencing gestures in Java bytecode and machine-level code, so that a gesture comprising a sequence of storage dereferences can be condensed into a single instruction. Because these gestures access memory in a recognizable pattern, the pattern can be preloaded into and executed by a "smart" memory. This approach can improve program execution time by making memory accesses more efficient, saving CPU cycles, bus cycles, and power. We introduce a language of valid gesture types and conduct a series of experiments to analyze the characteristics of gestures defined by this language within a set of benchmarks written in Java and C. We gather statistics on the frequency, length, and number of types of gestures found within these benchmarks, using both static and dynamic analysis methods. We propose an optimization of the number of gestures required for a program, showing the optimization problem to be NP-Complete.
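    A toy illustration of what a storage-referencing gesture looks like at the source level, assuming a chain of field accesses stands in for the bytecode-level dereference sequence; the classes and fields are invented for the example:

    // Toy illustration of a storage-referencing "gesture": the chain
    // order.customer.address.zip touches memory in a fixed pattern, so the whole
    // chain is a candidate for being condensed into one "smart memory" operation.
    public class GestureSketch {

        static class Address  { String zip = "50614"; }
        static class Customer { Address address = new Address(); }
        static class Order    { Customer customer = new Customer(); }

        public static void main(String[] args) {
            Order order = new Order();

            // Each '.' below compiles to a separate getfield bytecode, i.e. a
            // separate storage dereference; the sequence of three is one gesture.
            String zip = order.customer.address.zip;

            System.out.println("gesture of length 3 resolved to " + zip);
        }
    }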

    Staged reads: mitigating the impact of DRAM writes on DRAM reads

    Main memory latencies have always been a concern for system performance. Given that reads are on the critical path for CPU progress, reads must be prioritized over writes. However, writes must eventually be processed, and they often delay pending reads. In fact, a single channel in the main memory system offers almost no parallelism between reads and writes. This is because a single off-chip memory bus is shared by reads and writes, and the direction of the bus has to be explicitly turned around when switching from writes to reads. This is an expensive operation, and its cost is amortized by carrying out a burst of writes or reads every time the bus direction is switched. As a result, no reads can be processed while a memory channel is busy servicing writes. This paper proposes a novel mechanism to boost read-write parallelism and perform useful components of read operations even when the memory system is busy performing writes. If some of the banks are busy servicing writes, we start issuing reads to the other idle banks. The results of these reads are stored in a few registers near the memory chip's I/O pads and are quickly returned immediately following the bus turnaround. The process is referred to as a Staged Read because it decouples a single read operation into two stages, with the first stage being performed in parallel with writes. This innovation can also be viewed as a form of prefetch that is internal to a memory chip. The proposed technique works best when there is bank imbalance in the write stream. We also introduce a write scheduling algorithm that artificially creates bank imbalance and allows useful read operations to be performed during the write drain. Across a suite of memory-intensive workloads, we show that Staged Reads can boost throughput by up to 33% (average 7%) with an average DRAM access latency improvement of 17%, while incurring a very small cost (0.25%) in terms of memory chip area. The throughput improvements are even greater when considering write-intensive workloads (average 11%) or future systems (average 12%).
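    A back-of-the-envelope sketch of the latency argument, assuming invented round-number timings rather than real DRAM parameters: the bank-access stage of a read overlaps the write drain, so only the bus transfer waits for the turnaround:

    // Rough timing model of the Staged Read idea. While the bus drains writes,
    // a read to an idle bank already performs its bank access (stage 1) and
    // parks the data in registers near the I/O pads; only the bus transfer
    // (stage 2) waits for the turnaround. All cycle counts are made up.
    public class StagedReadSketch {

        public static void main(String[] args) {
            int writeDrain  = 200; // cycles the bus is busy with the write burst
            int turnaround  = 10;  // bus direction switch
            int bankAccess  = 40;  // activate + column read inside the bank
            int busTransfer = 10;  // moving the data over the channel

            // Baseline: the read starts only after the drain and turnaround finish.
            int baseline = writeDrain + turnaround + bankAccess + busTransfer;

            // Staged: the bank access overlaps the write drain (the bank is idle),
            // so after turnaround only the buffered data must cross the bus.
            int staged = Math.max(writeDrain, bankAccess) + turnaround + busTransfer;

            System.out.println("baseline read latency: " + baseline + " cycles");
            System.out.println("staged read latency:   " + staged + " cycles");
        }
    }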