    The role of the host in a cooperating mainframe and workstation environment, volumes 1 and 2

    In recent years, advances in computer systems have prompted a move from centralized computing, based on timesharing a large mainframe computer, to distributed computing based on a connected set of engineering workstations. A major factor in this shift is the increased performance and lower cost of engineering workstations. The move from centralized to distributed computing raises the question of where application programs should reside within the system. In a combined system of multiple engineering workstations attached to a mainframe host, the system designer must decide how to assign applications between the larger mainframe host and the smaller, yet powerful, workstations. The concepts related to real-time data processing are analyzed, and systems are presented that use a host mainframe and a number of engineering workstations interconnected by a local area network. In most cases, distributed systems can be classified as having a single function or multiple functions and as executing programs in real time or non-real time. In a system of multiple computers, the degree of autonomy of the computers is important; a system with one master control computer generally differs in reliability, performance, and complexity from a system in which all computers share control. This research is concerned with generating general criteria for software residency decisions (host or workstation) for a diverse yet coupled group of users (the clustered workstations) that may need a shared resource (the mainframe) to perform their functions.
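
    A minimal, hypothetical sketch of the kind of residency criterion the study aims to derive: a rule assigning an application either to the mainframe host or to a workstation. The attributes and thresholds below are illustrative assumptions, not results from the research.

        # Hypothetical residency rule; attributes and thresholds are
        # invented for illustration only.
        from dataclasses import dataclass

        @dataclass
        class Application:
            needs_shared_resource: bool  # e.g., data only the host holds
            real_time: bool              # hard response-time requirement
            cpu_demand_mips: float       # estimated processing demand

        def residency(app: Application, workstation_capacity_mips: float = 10.0) -> str:
            """Return 'host' or 'workstation' for a given application."""
            if app.needs_shared_resource:
                return "host"            # shared data favors central residency
            if app.real_time:
                return "workstation"     # local execution avoids LAN latency
            if app.cpu_demand_mips > workstation_capacity_mips:
                return "host"            # demand exceeds workstation capacity
            return "workstation"

        print(residency(Application(False, True, 2.0)))  # -> workstation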

    Many-Task Computing and Blue Waters

    This report discusses many-task computing (MTC) generically and in the context of the proposed Blue Waters system, which is planned to be the largest NSF-funded supercomputer when it begins production use in 2012. The aim of this report is to inform the BW project about MTC, including understanding aspects of MTC applications that can be used to characterize the domain and understanding the implications of these aspects for middleware and policies. Many MTC applications do not neatly fit the stereotypes of high-performance computing (HPC) or high-throughput computing (HTC) applications. Like HTC applications, MTC applications are by definition structured as graphs of discrete tasks, with explicit input and output dependencies forming the graph edges. However, MTC applications have significant features that distinguish them from typical HTC applications. In particular, different engineering constraints for hardware and software must be met in order to support these applications. HTC applications have traditionally run on platforms such as grids and clusters, through either workflow systems or parallel programming systems. MTC applications, in contrast, will often demand a short time to solution, may be communication intensive or data intensive, and may comprise very short tasks. Therefore, hardware and software for MTC must be engineered to support the additional communication and I/O and must minimize task dispatch overheads. The hardware of large-scale HPC systems, with its high degree of parallelism and support for intensive communication, is well suited for MTC applications. However, HPC systems often lack dynamic resource provisioning, are not ideal for task communication via the file system, and have I/O systems that are not optimized for MTC-style applications. Hence, additional software support is likely to be required to gain full benefit from the HPC hardware.
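
    The structure described above can be made concrete with a small sketch: discrete tasks whose explicit input/output dependencies form the edges of a graph, dispatched as soon as their inputs are complete. The task names and dependency structure are invented for illustration; this is not code from the report.

        # Illustrative MTC-style task graph: edges are explicit input/output
        # dependencies; a task is dispatched once its inputs are complete.
        from concurrent.futures import ThreadPoolExecutor

        # Each task maps to the set of tasks whose outputs it consumes.
        DEPENDENCIES: dict[str, set[str]] = {
            "preprocess": set(),
            "simulate_a": {"preprocess"},
            "simulate_b": {"preprocess"},
            "aggregate": {"simulate_a", "simulate_b"},
        }

        def run_task(name: str) -> str:
            # Stand-in for a short-lived MTC task; real tasks do I/O or compute.
            return f"{name} done"

        def run_graph(deps: dict[str, set[str]]) -> None:
            done: set[str] = set()
            with ThreadPoolExecutor(max_workers=4) as pool:
                while len(done) < len(deps):
                    # Dispatch every task whose dependencies are satisfied.
                    ready = [t for t, d in deps.items() if t not in done and d <= done]
                    if not ready:
                        raise ValueError("dependency cycle detected")
                    for result in pool.map(run_task, ready):
                        print(result)
                    done.update(ready)

        run_graph(DEPENDENCIES)

    A real MTC middleware would additionally have to minimize per-task dispatch overhead and avoid communicating through the file system, as the report notes.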

    Patterns-based Evaluation of Open Source BPM Systems: The Cases of jBPM, OpenWFE, and Enhydra Shark

    In keeping with the proliferation of free software development initiatives and the increased interest in the business process management domain, many open source workflow and business process management systems have appeared during the last few years and are now under active development. This upsurge gives rise to two important questions: what are the capabilities of these systems, and how do they compare to each other and to their closed source counterparts? In other words, what is the state of the art in the area? To gain insight into the area, we have conducted an in-depth analysis of three of the major open source workflow management systems - jBPM, OpenWFE, and Enhydra Shark - the results of which are reported here. This analysis is based on the workflow patterns framework and continues the series of evaluations performed using the same framework on closed source systems, business process modeling languages, and web-service composition standards. The results from the evaluations of the three open source systems are compared with each other and also with the results from evaluations of three representative closed source systems - Staffware, WebSphere MQ, and Oracle BPEL PM - documented in earlier work. The overall conclusion is that open source systems are targeted more toward developers than toward business analysts. They generally provide less support for the patterns than closed source systems do, particularly with respect to the resource perspective, which describes the various ways in which work is distributed among business users and managed through to completion.
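
    For readers unfamiliar with the workflow patterns framework, the sketch below illustrates two of its most basic control-flow patterns, Sequence and Parallel Split followed by Synchronization. The task names are invented, and the example is not drawn from any of the evaluated systems.

        # Illustrative control-flow patterns: Sequence, then Parallel Split
        # and Synchronization. Task names are invented for illustration.
        from concurrent.futures import ThreadPoolExecutor

        def approve(order: str) -> None: print(f"approve {order}")
        def ship(order: str) -> None:    print(f"ship {order}")
        def invoice(order: str) -> None: print(f"invoice {order}")
        def archive(order: str) -> None: print(f"archive {order}")

        def process(order: str) -> None:
            approve(order)                      # Sequence: one task after another
            with ThreadPoolExecutor() as pool:  # Parallel Split: branches run concurrently
                branches = [pool.submit(task, order) for task in (ship, invoice)]
                for branch in branches:
                    branch.result()             # Synchronization: wait for all branches
            archive(order)

        process("order-42")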

    Energy Efficiency in Cache Enabled Small Cell Networks With Adaptive User Clustering

    Using a network of cache-enabled small cells, traffic during peak hours can be reduced considerably by proactively fetching the content that is most likely to be requested. In this paper, we explore the impact of proactive caching on an important metric for future-generation networks, namely energy efficiency (EE). We argue that exploiting the correlation in user content popularity profiles, in addition to the spatial repartition of users with comparable request patterns, can considerably improve the achievable energy efficiency of the network. The problem of optimizing EE is decoupled into two related subproblems. The first addresses content popularity modeling. While most existing works assume similar popularity profiles for all users in the network, we consider an alternative caching framework in which users are clustered according to their content popularity profiles. To showcase the utility of the proposed clustering scheme, we use a statistical model selection criterion, namely the Akaike information criterion (AIC). Using stochastic geometry, we derive a closed-form expression for the achievable EE and find the optimal active small cell density vector that maximizes it. The second subproblem investigates the impact of exploiting the spatial repartition of users with comparable request patterns. Considering a snapshot of the network, we formulate a combinatorial optimization problem that optimizes content placement so that the transmission power used is minimized. Numerical results show that the clustering scheme considerably improves the cache hit probability, and consequently the EE, compared with an unclustered approach. Simulations also show that the small base station allocation algorithm improves the energy efficiency and hit probability. Comment: 30 pages, 5 figures, submitted to Transactions on Wireless Communications (15-Dec-2016).
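
    A hedged sketch of the clustering step described above: users are grouped by their content popularity profiles, with the number of clusters selected by AIC. The synthetic data and the Gaussian mixture model are illustrative assumptions, not the paper's exact formulation.

        # Illustrative AIC-driven clustering of user content popularity
        # profiles; the data and model choice are assumptions of this sketch.
        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(0)
        # Each row: one user's request-frequency profile over 20 content items.
        profiles = rng.dirichlet(np.ones(20), size=200)

        best_k, best_aic, best_model = 1, float("inf"), None
        for k in range(1, 8):
            gmm = GaussianMixture(n_components=k, random_state=0).fit(profiles)
            aic = gmm.aic(profiles)  # Akaike information criterion
            if aic < best_aic:
                best_k, best_aic, best_model = k, aic, gmm

        clusters = best_model.predict(profiles)
        print(f"AIC selects {best_k} clusters; sizes: {np.bincount(clusters)}")

    Per-cluster content placement would then be the combinatorial optimization step the paper formulates; that step is outside this sketch.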

    A general framework for positioning, evaluating and selecting the new generation of development tools.

    This paper focuses on the evaluation and positioning of a new generation of development tools that contain subtools (report generators, browsers, debuggers, GUI builders, ...) and programming languages designed to work together and share a common graphical user interface; such tools are therefore called environments. Several trends in IT have led to a pluriform range of development tools that can be classified in numerous categories. Examples are object-oriented tools, GUI tools, upper- and lower-CASE tools, client/server tools, and 4GL environments. This classification does not sufficiently cover the tools discussed in this paper, for the simple reason that only one criterion is used to distinguish them. Modern visual development environments often fit in several categories because, to a certain extent, several criteria can be applied to evaluate them. In this study, we offer a broad classification scheme with which tools can be positioned and which can be refined through further research.