    Scalability Analysis of Parallel GMRES Implementations

    Applications involving large sparse nonsymmetric linear systems encourage parallel implementations of robust iterative solution methods, such as GMRES(k). Two parallel versions of GMRES(k), based on different data distributions and using Householder reflections in the orthogonalization phase, together with variations of these that adapt the restart value k, are analyzed with respect to scalability (their ability to maintain fixed efficiency as problem size and number of processors increase). A theoretical algorithm-machine model for scalability is derived and validated by experiments on three parallel computers, each with different machine characteristics.
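
    To make the algorithm under analysis concrete, the sketch below implements the basic restarted GMRES(k) loop in NumPy. It is illustrative only: it uses modified Gram-Schmidt in the orthogonalization phase rather than the Householder reflections studied in the paper, and the function name, defaults, and tolerances are assumptions, not the authors' code.

        import numpy as np

        def gmres_restarted(A, b, k=30, tol=1e-8, max_restarts=100):
            """Restarted GMRES(k): build a k-step Arnoldi basis, solve the
            small least-squares problem, update the iterate, and restart."""
            n = b.size
            x = np.zeros(n)
            for _ in range(max_restarts):
                r = b - A @ x
                beta = np.linalg.norm(r)
                if beta < tol:
                    break
                V = np.zeros((n, k + 1))        # orthonormal Krylov basis
                H = np.zeros((k + 1, k))        # upper Hessenberg matrix
                V[:, 0] = r / beta
                m = k
                for j in range(k):
                    w = A @ V[:, j]
                    for i in range(j + 1):      # modified Gram-Schmidt step
                        H[i, j] = V[:, i] @ w
                        w -= H[i, j] * V[:, i]
                    H[j + 1, j] = np.linalg.norm(w)
                    if H[j + 1, j] < 1e-14:     # happy breakdown: subspace is exact
                        m = j + 1
                        break
                    V[:, j + 1] = w / H[j + 1, j]
                e1 = np.zeros(m + 1)
                e1[0] = beta                    # minimize ||beta*e1 - H y|| over y
                y, *_ = np.linalg.lstsq(H[:m + 1, :m], e1, rcond=None)
                x = x + V[:, :m] @ y
            return x

    Swapping the Gram-Schmidt inner loop for Householder reflections, as the paper's versions do, buys numerical robustness at extra arithmetic and communication cost, which is precisely the kind of trade-off a scalability model has to capture.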

    Noncontextuality with Marginal Selectivity in Reconstructing Mental Architectures

    We present a general theory of series-parallel mental architectures with selectively influenced stochastically non-independent components. A mental architecture is a hypothetical network of processes aimed at performing a task, of which we only observe the overall time it takes under variable parameters of the task. It is usually assumed that the network contains several processes selectively influenced by different experimental factors, and then the question is asked as to how these processes are arranged within the network, e.g., whether they are concurrent or sequential. One way of answering it is to consider the distribution functions for the overall processing time and compute certain linear combinations thereof (interaction contrasts). The theory of selective influences in psychology can be viewed as a special application of the interdisciplinary theory of (non)contextuality, which has its origins and main applications in quantum theory. In particular, lack of contextuality is equivalent to the existence of a "hidden" random entity of which all the random variables in play are functions. Consequently, for any given value of this common random entity, the processing times and their compositions (minima, maxima, or sums) become deterministic quantities. These quantities, in turn, can be treated as random variables with (shifted) Heaviside distribution functions, for which one can easily compute various linear combinations across different treatments, including interaction contrasts. This mathematical fact leads to a simple method, more general than the previously used ones, to investigate and characterize the interaction contrast for different types of series-parallel architectures. Comment: published in Frontiers in Psychology: Cognition 1:12, doi: 10.3389/fpsyg.2015.00735 (special issue "Quantum Structures in Cognitive and Social Science").
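
    As a concrete illustration of the setup described, the hedged sketch below simulates a 2x2 factorial experiment in which two process durations share a hidden common random entity R (making them stochastically dependent) while each factor selectively influences only its own process, then estimates the distribution-function interaction contrast C(t) = F11(t) - F12(t) - F21(t) + F22(t) for a chosen composition (min for parallel-OR, sum for serial). The distributions and effect sizes are invented for illustration and are not taken from the paper.

        import numpy as np

        rng = np.random.default_rng(0)
        t_grid = np.linspace(0.0, 6.0, 200)

        def interaction_contrast(comp, n=100_000):
            """Estimate C(t) = F11 - F12 - F21 + F22 for overall time comp(TA, TB)."""
            F = {}
            for i, a in enumerate((1.0, 1.5)):      # factor 1 influences only process A
                for j, b in enumerate((1.0, 1.5)):  # factor 2 influences only process B
                    R = rng.exponential(1.0, n)     # hidden common random entity
                    TA = a * (R + rng.exponential(0.5, n))  # duration of process A
                    TB = b * (R + rng.exponential(0.5, n))  # duration of process B
                    T = comp(TA, TB)                # overall time: min, max, or sum
                    F[i, j] = (T[:, None] <= t_grid).mean(axis=0)  # empirical CDF
            return F[0, 0] - F[0, 1] - F[1, 0] + F[1, 1]

        c_parallel_or = interaction_contrast(np.minimum)  # concurrent, first-terminating
        c_serial = interaction_contrast(np.add)           # sequential

    The sign pattern of C(t) across t is what diagnoses the architecture; the theory in the paper derives these patterns analytically via the Heaviside-function argument rather than by simulation.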

    A Case for Cooperative and Incentive-Based Coupling of Distributed Clusters

    Research interest in Grid computing has grown significantly over the past five years. Management of distributed resources is one of the key issues in Grid computing. Central to the management of resources is the effectiveness of resource allocation, as it determines the overall utility of the system. The current approaches to superscheduling in a grid environment are non-coordinated, since application-level schedulers or brokers make scheduling decisions independently of the others in the system. Clearly, this can exacerbate the load-sharing and utilization problems of distributed resources due to the suboptimal schedules that are likely to occur. To overcome these limitations, we propose a mechanism for coordinated sharing of distributed clusters based on computational economy. The resulting environment, called \emph{Grid-Federation}, allows a cluster to transparently use resources from the federation when its local resources are insufficient to meet its users' requirements. The use of computational economy methodology in coordinating resource allocation not only facilitates QoS-based scheduling but also enhances the utility delivered by resources. Comment: 22 pages; extended version of the conference paper published at IEEE Cluster'05, Boston, MA.
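
    As a rough sketch of the economy-based coordination idea (not the paper's actual Grid-Federation protocol), the snippet below has each cluster quote a price and an expected wait, and a broker send a job to a remote cluster only when some quote satisfies both of the user's QoS constraints, budget and deadline. All class and field names are invented for illustration.

        from dataclasses import dataclass
        from typing import List, Optional

        @dataclass
        class Cluster:
            name: str
            price_per_cpu_hour: float  # owner-advertised price (computational economy)
            queue_wait_hours: float    # expected wait in the local queue

        @dataclass
        class Job:
            cpu_hours: float
            budget: float              # maximum the user will pay (QoS)
            deadline_hours: float      # latest acceptable completion time (QoS)

        def select_cluster(job: Job, federation: List[Cluster]) -> Optional[Cluster]:
            """Return the cheapest federated cluster whose quote meets both QoS
            constraints, or None if the job must stay in its local queue."""
            quotes = []
            for c in federation:
                cost = job.cpu_hours * c.price_per_cpu_hour
                finish = c.queue_wait_hours + job.cpu_hours  # crude single-node estimate
                if cost <= job.budget and finish <= job.deadline_hours:
                    quotes.append((cost, finish, c))
            if not quotes:
                return None
            return min(quotes, key=lambda q: (q[0], q[1]))[2]

        federation = [Cluster("alpha", 0.10, 4.0), Cluster("beta", 0.25, 0.5)]
        job = Job(cpu_hours=8, budget=3.0, deadline_hours=10)
        print(select_cluster(job, federation).name)  # "beta": alpha is cheaper but too slow

    The point of pricing, rather than raw load balancing, is that a single scalar (cost) lets independent brokers coordinate without a central scheduler, which is the non-coordination problem the abstract identifies.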

    Submarine depositional terraces at Salina Island (Southern Tyrrhenian Sea) and implications on the Late-Quaternary evolution of the insular shelf

    The integrated analysis of high-resolution multibeam bathymetry and single-channel seismic profiles around Salina Island allowed us to characterize the stratigraphic architecture of the insular shelf. The shelf is formed by a gently-sloping erosive surface carved on the volcanic bedrock, mostly covered by sediments organized in a suite of terraced bodies, i.e. submarine depositional terraces. Based on their position on the shelf, the depth range of their edge, and their inner geometry, different orders of terraces can be distinguished. The shallowest terrace (near-shore terrace) is a sedimentary prograding wedge, whose formation can be associated with the downward transport of sediments from the surf zone and shoreface during stormy conditions. Given the depth range of the terrace edge (i.e., 10–25 m, compatible with the estimated present-day, local storm-wave base level in the central and western Mediterranean), the formation of this wedge can be attributed to the present-day highstand. By assuming a similar genesis for the deeper terraces, mid-shelf terraces having edges at depths of 40–50 m and 70–80 m can be attributed to the late and early stages of the post-LGM transgression, respectively. Finally, the deepest terrace (shelf-edge terrace) has its edge at depths of 130–160 m, and is thus referable to the lowstand that occurred at ca. 20 ka. Based on the variability of edge depth in the different sectors, we also show how lowstand terraces can be used to provide insights into the recent vertical movements that affected the Salina edifice over the last 20 ka, highlighting more generally their possible use for neo-tectonic studies elsewhere. Moreover, since these terraces are associated with different paleo-sea levels, they can be used to constrain the relative age of the different erosive stages affecting shallow-water sectors.
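
    The dating logic lends itself to a back-of-envelope check: if a shelf-edge terrace formed at the LGM lowstand, the difference between its present edge depth and the paleo-sea level, divided by ~20 kyr, gives an apparent vertical-movement rate. The reference lowstand of -120 m used below is a generic eustatic value assumed for illustration, not the paper's calibration.

        # Back-of-envelope estimate of post-LGM vertical movement from terrace depth.
        # Assumes a generic eustatic lowstand of -120 m at ca. 20 ka (illustrative only).
        LGM_SEA_LEVEL_M = -120.0
        AGE_KYR = 20.0

        for edge_depth_m in (130.0, 160.0):   # observed shelf-edge terrace edge depths
            # current elevation minus formation elevation: negative means subsidence
            displacement_m = -edge_depth_m - LGM_SEA_LEVEL_M
            rate_mm_per_yr = displacement_m / AGE_KYR  # m/kyr is numerically mm/yr
            print(f"edge at {edge_depth_m:.0f} m: {displacement_m:+.0f} m since the LGM "
                  f"(~{rate_mm_per_yr:+.1f} mm/yr)")

    Under these assumed numbers, edges at 130 m and 160 m would imply roughly 10 m and 40 m of subsidence (~0.5 and ~2 mm/yr), which is the kind of sector-to-sector contrast the abstract proposes exploiting for neo-tectonic inference.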

    S-Store: Streaming Meets Transaction Processing

    Stream processing addresses the needs of real-time applications. Transaction processing addresses the coordination and safety of short atomic computations. Heretofore, these two modes of operation existed in separate, stove-piped systems. In this work, we attempt to fuse the two computational paradigms in a single system called S-Store. In this way, S-Store can simultaneously accommodate OLTP and streaming applications. We present a simple transaction model for streams that integrates seamlessly with a traditional OLTP system. We chose to build S-Store as an extension of H-Store, an open-source, in-memory, distributed OLTP database system. By implementing S-Store in this way, we can make use of the transaction processing facilities that H-Store already supports, and we can concentrate on the additional implementation features that are needed to support streaming. Similar implementations could be done using other main-memory OLTP platforms. We show that we can actually achieve higher throughput for streaming workloads in S-Store than an equivalent deployment in H-Store alone. We also show how this can be achieved within H-Store with the addition of a modest amount of new functionality. Furthermore, we compare S-Store to two state-of-the-art streaming systems, Spark Streaming and Storm, and show how S-Store matches and sometimes exceeds their performance while providing stronger transactional guarantees.
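
    The core guarantee described, a stream processed as a sequence of atomic transactions, can be illustrated in a few lines of ordinary Python and SQLite. This is a conceptual sketch of the semantics only, not S-Store's actual interface, which extends H-Store's stored procedures.

        import sqlite3

        # Each incoming window (batch) of events is applied as one ACID transaction,
        # so downstream state never reflects a half-processed window.
        db = sqlite3.connect(":memory:")
        db.execute("CREATE TABLE totals (key TEXT PRIMARY KEY, value REAL)")

        def process_window(events):
            """Apply one window atomically: all of its updates commit, or none do."""
            with db:  # opens a transaction; commits on success, rolls back on error
                for key, delta in events:
                    db.execute(
                        "INSERT INTO totals VALUES (?, ?) "
                        "ON CONFLICT(key) DO UPDATE SET value = value + excluded.value",
                        (key, delta),
                    )

        process_window([("sensor-a", 1.5), ("sensor-b", 0.25)])
        process_window([("sensor-a", 2.0)])
        print(db.execute("SELECT * FROM totals ORDER BY key").fetchall())

    A pure streaming engine typically offers at-least-once or exactly-once delivery per operator, but not this kind of multi-record atomicity over shared mutable state, which is the gap the paper's transaction model for streams is meant to close.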

    Technical Debt Prioritization: State of the Art. A Systematic Literature Review

    Get PDF
    Background. Software companies need to manage and refactor Technical Debt issues. Therefore, it is necessary to understand if and when refactoring Technical Debt should be prioritized with respect to developing features or fixing bugs. Objective. The goal of this study is to investigate the existing body of knowledge in software engineering to understand what Technical Debt prioritization approaches have been proposed in research and industry. Method. We conducted a Systematic Literature Review of 384 unique papers published until 2018, following a consolidated methodology applied in Software Engineering. We included 38 primary studies. Results. Different approaches have been proposed for Technical Debt prioritization, all having different goals and optimizing on different criteria. The proposed measures capture only a small part of the plethora of factors used to prioritize Technical Debt qualitatively in practice. We report an impact map of such factors. However, there is a lack of empirically validated tools. Conclusion. We observed that Technical Debt prioritization research is preliminary, and there is no consensus on which factors are important or how to measure them. Consequently, we cannot consider current research conclusive; in this paper, we outline different directions for necessary future investigations.