10 research outputs found

    A Review of Commercial and Research Cluster Management Software

    In the past decade there has been a dramatic shift from mainframe or ‘host-centric’ computing to a distributed ‘client-server’ approach. In the next few years this trend is likely to continue, with further shifts towards ‘network-centric’ computing becoming apparent. All these trends were set in motion by the invention of the mass-reproducible microprocessor by Ted Hoff of Intel some twenty-odd years ago. The present generation of RISC microprocessors is now more than a match for mainframes in terms of cost and performance. The long-foreseen day when collections of RISC microprocessors assembled together as a parallel computer could outperform the vector supercomputers has finally arrived. Such high-performance parallel computers incorporate proprietary interconnection networks allowing low-latency, high-bandwidth inter-processor communication. However, for certain types of applications such interconnect optimization is unnecessary and conventional LAN technology is sufficient. This has led to the realization that clusters of high-performance workstations can realistically be used for a variety of applications, either to replace mainframes, vector supercomputers and parallel computers or to better manage already installed collections of workstations. Whilst it is clear that ‘cluster computers’ have limitations, many institutions and companies are exploring this option. Software to manage such clusters is at an early stage of development, and this report reviews the current state of the art. Cluster computing is a rapidly maturing technology that seems certain to play an important part in the ‘network-centric’ computing future.

    Cluster Computing Review

    In the past decade there has been a dramatic shift from mainframe or ‘host-centric’ computing to a distributed ‘client-server’ approach. In the next few years this trend is likely to continue, with further shifts towards ‘network-centric’ computing becoming apparent. All these trends were set in motion by the invention of the mass-reproducible microprocessor by Ted Hoff of Intel some twenty-odd years ago. The present generation of RISC microprocessors is now more than a match for mainframes in terms of cost and performance. The long-foreseen day when collections of RISC microprocessors assembled together as a parallel computer could outperform the vector supercomputers has finally arrived. Such high-performance parallel computers incorporate proprietary interconnection networks allowing low-latency, high-bandwidth inter-processor communication. However, for certain types of applications such interconnect optimization is unnecessary and conventional LAN technology is sufficient. This has led to the realization that clusters of high-performance workstations can realistically be used for a variety of applications, either to replace mainframes, vector supercomputers and parallel computers or to better manage already installed collections of workstations. Whilst it is clear that ‘cluster computers’ have limitations, many institutions and companies are exploring this option. Software to manage such clusters is at an early stage of development, and this report reviews the current state of the art. Cluster computing is a rapidly maturing technology that seems certain to play an important part in the ‘network-centric’ computing future.

    Dagstuhl News January - December 2001

    "Dagstuhl News" is a publication edited especially for the members of the Foundation "Informatikzentrum Schloss Dagstuhl" to thank them for their support. The News give a summary of the scientific work being done in Dagstuhl. Each Dagstuhl Seminar is presented by a small abstract describing the contents and scientific highlights of the seminar as well as the perspectives or challenges of the research topic

    A Policy-Based Resource Brokering Environment for Computational Grids

    With the advances in networking infrastructure in general, and the Internet in particular, we can build grid environments that allow users to utilize a diverse set of distributed and heterogeneous resources. Since the focus of such environments is the efficient usage of the underlying resources, a critical component is the resource brokering environment that mediates the discovery, access and usage of these resources. With the consumer's constraints, the provider's rules, distributed heterogeneous resources and the large number of scheduling choices, the resource brokering environment needs to decide where to place the user's jobs and when to start their execution in a way that yields the best performance for the user and the best utilization for the resource provider. As brokering and scheduling are very complicated tasks, most current resource brokering environments are either specific to a particular grid environment or have limited features. This makes them unsuitable for large applications with heterogeneous requirements. In addition, most of these resource brokering environments lack flexibility. Policies at the resource, application, and system levels cannot be specified and enforced to provide commitment to the guaranteed level of allocation that can help in attracting grid users and contribute to establishing credibility for existing grid environments. In this thesis, we propose and prototype a flexible and extensible Policy-based Resource Brokering Environment (PROBE) that can be utilized by various grid systems. In designing PROBE, we follow a policy-based approach that provides PROBE with the intelligence not only to match the user's request with the right set of resources, but also to assure the guaranteed level of allocation. PROBE treats task allocation as a Service Level Agreement (SLA) that needs to be enforced between the resource provider and the resource consumer. The policy-based framework is useful in a typical grid environment where resources, most of the time, are not dedicated. In implementing PROBE, we have utilized a layered architecture and façade design patterns. These, along with a well-defined API, make the framework independent of any architecture and allow for the incorporation of different types of scheduling algorithms, applications and platform adaptors as the underlying environment requires. We have utilized XML as the basis for all specification needs. This provides a flexible mechanism to specify the heterogeneous resources and users' requests along with their allocation constraints. We have developed XML-based specifications by which high-level internal structures of resources, jobs and policies can be specified. This provides interoperability, allowing one grid system to utilize PROBE to discover and use resources controlled by other grid systems. We have implemented a prototype of PROBE to demonstrate its feasibility. We also describe a test bed environment and the evaluation experiments that we have conducted to demonstrate the usefulness and effectiveness of our approach.
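
    The abstract describes checking consumer constraints against provider rules before an allocation is committed as an SLA. As a rough, hypothetical illustration of that idea only (PROBE's real XML schemas, API, and scheduling algorithms are not given here, and every name below is invented for the sketch), a single brokering step might look like this in Python:

    from dataclasses import dataclass, field

    @dataclass
    class Resource:
        name: str
        cpus: int
        max_job_hours: int                 # provider rule: longest job accepted
        allowed_groups: set = field(default_factory=set)

    @dataclass
    class JobRequest:
        user_group: str
        cpus_needed: int
        est_hours: int
        deadline_hours: int                # consumer constraint: completion window

    def satisfies_policies(job, res):
        # Provider rules: who may run here and for how long; then a capacity check.
        if job.user_group not in res.allowed_groups or job.est_hours > res.max_job_hours:
            return False
        if res.cpus < job.cpus_needed:
            return False
        # Consumer constraint: the job must fit inside the requested deadline.
        return job.est_hours <= job.deadline_hours

    def broker(job, resources):
        # Return the first resource on which the requested allocation can be guaranteed,
        # i.e. the point at which an SLA between provider and consumer would be recorded.
        for res in resources:
            if satisfies_policies(job, res):
                return res
        return None

    In PROBE itself these structures are expressed in XML and enforced by the policy framework; the sketch only conveys the order of checks implied by the abstract: provider rules, capacity, then consumer constraints.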

    Elastic computation placement in edge-based environments

    Today, technologies such as machine learning, virtual reality, and the Internet of Things are integrated into end-user applications with increasing frequency. These technologies demand high computational capabilities. Mobile devices in particular have limited resources in terms of execution performance and battery life. The offloading paradigm provides a solution to this problem and transfers computationally intensive parts of applications to more powerful resources, such as servers or cloud infrastructure. Recently, a new computing paradigm called edge computing has emerged which exploits the huge number of end-user devices in the modern computing landscape. These devices encompass smartphones, tablets, microcontrollers, and PCs. In edge computing, devices cooperate with each other while avoiding cloud infrastructure. Due to the proximity among the participating devices, the communication latencies for offloading are reduced. However, edge computing brings new challenges in the form of device fluctuation, unreliability, and heterogeneity, which negatively affect resource elasticity. As a solution, this thesis proposes a computation placement framework that provides an abstraction for computation and resource elasticity in edge-based environments. The design is middleware-based, encompasses heterogeneous platforms, and supports easy integration of existing applications. It is composed of two parts: the Tasklet system and the edge support layer. The Tasklet system is a flexible framework for computation placement on heterogeneous resources. It introduces closed units of computation that can be tailored to generic applications. The edge support layer handles the characteristics of edge resources. It copes with fluctuation and unreliability by applying reactive and proactive task migration. Furthermore, performance heterogeneity and the consequent bottlenecks are handled by two edge-specific task partitioning approaches. As a proof of concept, the thesis presents a fully fledged prototype of the design, which is evaluated comprehensively in a real-world testbed. The evaluation shows that the design substantially improves resource elasticity in edge-based environments.
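
    To make the reactive/proactive distinction mentioned above concrete, here is a minimal, hypothetical Python sketch (not the Tasklet system's actual interfaces; the names, threshold, and reliability model are assumptions): a task is moved reactively once its host has already disappeared, and proactively as soon as the host's estimated reliability drops below a threshold.

    import random

    class EdgeDevice:
        def __init__(self, name, reliability):
            self.name = name
            self.reliability = reliability     # estimated probability of staying online

        def is_online(self):
            # Stand-in for a heartbeat check against the real device.
            return random.random() < self.reliability

    def place_task(task, devices):
        # Place the computation on the device currently believed to be most reliable.
        if not devices:
            return None
        return max(devices, key=lambda d: d.reliability)

    def supervise(task, host, devices, threshold=0.6):
        others = [d for d in devices if d is not host]
        if not host.is_online():               # reactive migration: failure already happened
            return place_task(task, others)
        if host.reliability < threshold:       # proactive migration: failure looks likely
            return place_task(task, others)
        return host                            # keep the task where it is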

    Evidence-as-a-service: state recordkeeping in the cloud

    The White House has engaged in recent years in efforts to ensure greater citizen access to government information and greater efficiency and effectiveness in managing that information. The Open Data policy and recent directives requiring that federal agencies create capacity to share scientific data have followed on the heels of the Federal Government's Cloud First policy, an initiative requiring Federal agencies to consider using cloud computing before making IT investments. Still, much of the information accessed by the public resides in the hands of state and local records creators. Thus, this exploratory study sought to examine how cloud computing actually affects public information recordkeeping stewards. Specifically, it investigated whether recordkeeping stewards' concerns about cloud computing risks are similar to published risks in newly implemented cloud computing environments; it examined their perceptions of how cross-occupational relationships affect their ability to perform recordkeeping responsibilities in the Cloud; and it compared how recordkeeping roles and responsibilities are distributed within their organizations. The distribution was compared to published reports of recordkeeping roles and responsibilities in archives and records management journals published over the past 42 years. The study used an interpretive, constant comparative approach to data collection and an analytical framework from Structuration Theory. Findings were drawn from 29 interviews and their associated transcripts and from 682 published articles from six archives and records management journals dating from 1970 onwards. It was found that the actual work environments reported by interview participants most resembled the recordkeeping environments published by archival continuum theorists. In addition, records managers reported greater worry about status and a lack of clearly demarcated lines of responsibility in their work than did the archivists. Records managers also reported less impact from the new technology as physical artifact than from political and inter-occupational power adjustments that altered their status after the cloud implementations. It was also found that current cloud computing environments exhibit a variety of disincentives for accurate and complete recordkeeping, some stemming primarily from political changes and others from the distributed nature of information storage in the Cloud.

    Understanding Language Evolution in Overlapping Generations of Reinforcement Learning Agents


    "Shit Happens":The Spontaneous Self-Organisation of Communal Boundary Latrines via Stigmergy in a Null Model of the European Badger, Meles meles


    Task Allocation in Foraging Robot Swarms: The Role of Information Sharing

    Autonomous task allocation is a desirable feature of robot swarms that collect and deliver items in scenarios where congestion, caused by accumulated items or robots, can temporarily interfere with swarm behaviour. In such settings, self-regulation of the workforce can prevent unnecessary energy consumption. We explore two types of self-regulation: non-social, where robots become idle upon experiencing congestion, and social, where robots broadcast information about congestion to their teammates in order to socially inhibit foraging. We show that while both types of self-regulation can lead to improved energy efficiency and increase the amount of resource collected, the speed with which information about congestion flows through a swarm affects the scalability of these algorithms.
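
    As a loose illustration of the two self-regulation rules summarised above (the counters, threshold, and message handling are assumptions made for this sketch, not details taken from the paper), the non-social rule reacts only to a robot's own experience of congestion, while the social rule additionally broadcasts that experience so teammates are inhibited as well:

    from dataclasses import dataclass

    @dataclass
    class Robot:
        state: str = "foraging"
        congestion_count: int = 0      # own recent experiences of congestion
        inhibition: int = 0            # congestion messages received from teammates

    def non_social_update(robot, congestion_seen, idle_threshold=3):
        # Non-social rule: a robot that keeps running into congestion goes idle on its own.
        robot.congestion_count = robot.congestion_count + 1 if congestion_seen else 0
        robot.state = "idle" if robot.congestion_count >= idle_threshold else "foraging"

    def social_update(robot, congestion_seen, swarm, idle_threshold=3):
        # Social rule: the same local behaviour, plus a broadcast that inhibits teammates.
        non_social_update(robot, congestion_seen, idle_threshold)
        if congestion_seen:
            for mate in swarm:
                if mate is not robot:
                    mate.inhibition += 1
        if robot.inhibition > 0:
            robot.state = "idle"       # socially inhibited from foraging this step
            robot.inhibition -= 1

    How quickly those broadcasts propagate through the swarm is the information-flow speed that the abstract identifies as the factor limiting scalability.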