6,333 research outputs found

    Remote MIB-item look-up service

    Despite some deficiencies, the Internet management framework is widely deployed, and thousands of management information base (MIB) modules have been defined thus far. These modules are used by implementers of agent software, as well as by managers and management applications, to understand the syntax and semantics of the management information that may be exchanged. On the manager's side, MIB modules are usually stored in separate files, which are maintained by the human manager and read by the management application. Since maintenance of this file repository can be cumbersome, management applications are often confronted with incomplete and outdated information. To solve this "meta-management" problem, this paper discusses the design of a remote look-up service for MIB-item definitions. Such a service facilitates the retrieval of missing MIB module definitions, as well as definitions of individual MIB-items. Initially the service may be provided by a single server, but other servers can be added at later stages to improve performance and prevent copyright problems. It is envisaged that vendors of network equipment will also install servers to distribute their vendor-specific MIBs. The paper describes how the service, which is provided on a best-effort basis, can be accessed by managers and management applications, and how servers inform each other about the MIB modules they support.
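
    The abstract does not fix a concrete service interface, so the following is a minimal Python sketch assuming a hypothetical HTTP look-up server; the server URL, endpoint paths, and JSON response format are all illustrative, not part of the paper's design. It shows how a management application might fall back to a remote server when a MIB module or an individual MIB-item definition is missing from its local file repository.

        # Hypothetical client for a remote MIB-item look-up service.
        # Server address, paths, and response format are assumptions.
        import json
        import urllib.parse
        import urllib.request

        LOOKUP_SERVER = "http://mib-lookup.example.org"  # assumed server address


        def fetch_module(module_name: str) -> str:
            """Retrieve the full text of a MIB module, e.g. 'IF-MIB'."""
            url = f"{LOOKUP_SERVER}/module/{urllib.parse.quote(module_name)}"
            with urllib.request.urlopen(url, timeout=5) as resp:  # best effort
                return resp.read().decode("utf-8")


        def fetch_item(module_name: str, item_name: str) -> dict:
            """Retrieve the definition of a single MIB item, e.g. 'ifInOctets'."""
            url = (f"{LOOKUP_SERVER}/item/"
                   f"{urllib.parse.quote(module_name)}/{urllib.parse.quote(item_name)}")
            with urllib.request.urlopen(url, timeout=5) as resp:
                return json.loads(resp.read().decode("utf-8"))


        if __name__ == "__main__":
            # A manager that encounters an unknown OID could query the
            # look-up service instead of relying only on local MIB files.
            print(fetch_item("IF-MIB", "ifInOctets"))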

    A parallel interaction potential approach coupled with the immersed boundary method for fully resolved simulations of deformable interfaces and membranes

    In this paper we show and discuss the use of a versatile interaction potential approach coupled with an immersed boundary method to simulate a variety of flows involving deformable bodies. In particular, we focus on two kinds of problems, namely (i) deformation of liquid-liquid interfaces and (ii) flow in the left ventricle of the heart with either a mechanical or a natural valve. Both examples have in common the two-way interaction of the flow with a deformable interface or a membrane. The interaction potential approach (de Tullio & Pascazio, Jou. Comp. Phys., 2016; Tanaka, Wada and Nakamura, Computational Biomechanics, 2016) with minor modifications can be used to capture the deformation dynamics in both classes of problems. We show that the approach can be used to replicate the deformation dynamics of liquid-liquid interfaces through the use of ad-hoc elastic constants. The results from our simulations agree very well with previous studies on the deformation of drops in standard flow configurations such as a deforming drop in a shear flow or a cross flow. We show that the same potential approach can also be used to study the flow in the left ventricle of the heart. The flow imposed into the ventricle interacts dynamically with the mitral valve (mechanical or natural) and the ventricle, which are simulated using the same model. Results from these simulations are compared with ad-hoc in-house experimental measurements. Finally, a parallelisation scheme is presented, as parallelisation is unavoidable when studying large-scale problems involving several thousands of simultaneously deforming bodies on hundreds of distributed-memory computing processors.
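
    As a rough illustration of the interaction potential idea, and not of the authors' full model (which also includes bending, area and volume constraints and the coupling to the fluid through the immersed boundary method), the following Python sketch computes the elastic spring contribution on a discretized interface; the mesh and the elastic constant are illustrative assumptions.

        # Elastic (spring) part of an interaction potential on a triangulated
        # interface: edges resist stretching with a Hooke-type force.
        import numpy as np

        K_ELASTIC = 1.0  # assumed elastic constant (tuned ad-hoc in practice)


        def spring_forces(nodes, edges, rest_lengths):
            """Per-node forces from linear springs along mesh edges."""
            forces = np.zeros_like(nodes)
            for (i, j), l0 in zip(edges, rest_lengths):
                d = nodes[j] - nodes[i]
                length = np.linalg.norm(d)
                f = K_ELASTIC * (length - l0) * d / length  # restoring force
                forces[i] += f
                forces[j] -= f
            return forces


        # Tiny example: one triangle stretched along x.
        nodes = np.array([[0.0, 0.0, 0.0], [1.2, 0.0, 0.0], [0.5, 1.0, 0.0]])
        edges = [(0, 1), (1, 2), (2, 0)]
        rest_lengths = [1.0, 1.0, 1.0]
        print(spring_forces(nodes, edges, rest_lengths))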

    Distributed Programming with Shared Data

    Until recently, at least one thing was clear about parallel programming: tightly coupled (shared memory) machines were programmed in a language based on shared variables and loosely coupled (distributed) systems were programmed using message passing. The explosive growth of research on distributed systems and their languages, however, has led to several new methodologies that blur this simple distinction. Operating system primitives (e.g., problem-oriented shared memory, Shared Virtual Memory, the Agora shared memory) and languages (e.g., Concurrent Prolog, Linda, Emerald) for programming distributed systems have been proposed that support the shared variable paradigm without the presence of physical shared memory. In this paper we will look at the reasons for this evolution, the resemblances and differences among these new proposals, and the key issues in their design and implementation. It turns out that many implementations are based on replication of data. We take this idea one step further, and discuss how automatic replication (initiated by the run-time system) can be used as a basis for a new model, called the shared data-object model, whose semantics are similar to the shared variable model. Finally, we discuss the design of a new language for distributed programming, Orca, based on the shared data-object model.
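
    To make the shared data-object idea concrete, here is a toy Python sketch; it is not Orca and not any of the systems named above. Reads are served from a local replica, while write operations are applied by a stand-in run-time system to every replica, which is the essence of the automatic replication discussed in the paper.

        # Toy model of replicated shared data-objects: local reads,
        # run-time-replicated writes.
        class SharedObject:
            def __init__(self, value=0):
                self._value = value

            def read(self):
                return self._value  # reads go to the local replica

            def apply(self, op, arg):
                self._value = op(self._value, arg)


        class RunTime:
            """Stand-in for a run-time system that replicates writes."""

            def __init__(self, n_replicas, initial=0):
                self.replicas = [SharedObject(initial) for _ in range(n_replicas)]

            def write(self, op, arg):
                # A real system would order and broadcast the operation;
                # here we simply apply it to every replica in turn.
                for replica in self.replicas:
                    replica.apply(op, arg)


        rt = RunTime(n_replicas=3, initial=10)
        rt.write(lambda v, a: v + a, 5)         # the update reaches all replicas
        print([r.read() for r in rt.replicas])  # [15, 15, 15]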

    Survey and Analysis of Production Distributed Computing Infrastructures

    This report has two objectives. First, we describe a set of the production distributed infrastructures currently available, so that the reader has a basic understanding of them. This includes explaining why each infrastructure was created and made available and how it has succeeded and failed. The set is not complete, but we believe it is representative. Second, we describe the infrastructures in terms of their use, which is a combination of how they were designed to be used and how users have found ways to use them. Applications are often designed and created with specific infrastructures in mind, with both an appreciation of the existing capabilities provided by those infrastructures and an anticipation of their future capabilities. Here, the infrastructures we discuss were often designed and created with specific applications in mind, or at least specific types of applications. The reader should understand how the interplay between the infrastructure providers and the users leads to such usages, which we call usage modalities. These usage modalities are really abstractions that exist between the infrastructures and the applications; they influence the infrastructures by representing the applications, and they influence the applications by representing the infrastructures.

    MOLNs: A cloud platform for interactive, reproducible and scalable spatial stochastic computational experiments in systems biology using PyURDME

    Computational experiments using spatial stochastic simulations have led to important new biological insights, but they require specialized tools, a complex software stack, as well as large and scalable compute and data analysis resources due to the large computational cost associated with Monte Carlo computational workflows. The complexity of setting up and managing a large-scale distributed computation environment to support productive and reproducible modeling can be prohibitive for practitioners in systems biology. This results in a barrier to the adoption of spatial stochastic simulation tools, effectively limiting the type of biological questions addressed by quantitative modeling. In this paper, we present PyURDME, a new, user-friendly spatial modeling and simulation package, and MOLNs, a cloud computing appliance for distributed simulation of stochastic reaction-diffusion models. MOLNs is based on IPython and provides an interactive programming platform for development of sharable and reproducible distributed parallel computational experiments.
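
    To illustrate the kind of Monte Carlo workflow such a platform distributes, here is a generic Python sketch that runs many independent stochastic realizations in parallel and aggregates them. This is not the PyURDME or MOLNs API; the toy birth-death model below stands in for a real spatial reaction-diffusion simulation.

        # Generic ensemble of independent stochastic realizations run in parallel.
        import random
        from concurrent.futures import ProcessPoolExecutor


        def run_realization(seed: int) -> int:
            """Placeholder for one stochastic realization of a model."""
            rng = random.Random(seed)
            count = 10
            for _ in range(1000):
                count = max(count + (1 if rng.random() < 0.5 else -1), 0)
            return count


        if __name__ == "__main__":
            seeds = range(100)  # 100 independent realizations
            with ProcessPoolExecutor() as pool:
                results = list(pool.map(run_realization, seeds))
            print("mean copy number:", sum(results) / len(results))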

    Issues in providing a reliable multicast facility

    Issues involved in point-to-multipoint communication are presented, and the literature on proposed solutions and approaches is surveyed. Particular attention is focused on the ideas and implementations that align with the requirements of the environment of interest. The attributes of multicast receiver groups that might lead to useful classifications, what the functionality of a management scheme should be, and how the group management module can be implemented are examined. The services that multicasting facilities can offer are presented, followed by mechanisms within the communications protocol that implement these services. The metrics of interest when evaluating a reliable multicast facility are identified and applied to four transport layer protocols that incorporate reliable multicast.
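
    As a deliberately simplified example of one reliability mechanism such facilities can use, the following Python sketch shows sequence-numbered multicast with negative-acknowledgement-driven retransmission; the class names and the in-memory delivery are illustrative assumptions rather than a description of any protocol surveyed in the paper.

        # Sequence numbers let a receiver detect gaps and request retransmission.
        class Sender:
            def __init__(self):
                self.seq = 0
                self.history = {}  # retransmission buffer

            def multicast(self, payload, receivers):
                self.seq += 1
                self.history[self.seq] = payload
                for r in receivers:
                    r.deliver(self.seq, payload)

            def retransmit(self, seq, receiver):
                receiver.deliver(seq, self.history[seq])


        class Receiver:
            def __init__(self):
                self.messages = {}

            def deliver(self, seq, payload):
                self.messages[seq] = payload

            def missing(self):
                # Any gap below the highest sequence number seen is a loss.
                highest = max(self.messages, default=0)
                return [s for s in range(1, highest + 1) if s not in self.messages]


        sender, a, b = Sender(), Receiver(), Receiver()
        sender.multicast("m1", [a, b])
        sender.multicast("m2", [a])      # simulate loss of m2 at receiver b
        sender.multicast("m3", [a, b])
        for seq in b.missing():          # b detects the gap and asks again
            sender.retransmit(seq, b)
        print(sorted(b.messages))        # [1, 2, 3]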

    Deep Space Network information system architecture study

    The purpose of this article is to describe an architecture for the Deep Space Network (DSN) information system in the years 2000-2010 and to provide guidelines for its evolution during the 1990s. The study scope is defined to be from the front-end areas at the antennas to the end users (spacecraft teams, principal investigators, archival storage systems, and non-NASA partners). The architectural vision provides guidance for major DSN implementation efforts during the next decade. A strong motivation for the study is an expected dramatic improvement in information-systems technologies, such as the following: computer processing, automation technology (including knowledge-based systems), networking and data transport, software and hardware engineering, and human-interface technology. The proposed Ground Information System has the following major features: unified architecture from the front-end area to the end user; open-systems standards to achieve interoperability; DSN production of level 0 data; delivery of level 0 data from the Deep Space Communications Complex, if desired; dedicated telemetry processors for each receiver; security against unauthorized access and errors; and highly automated monitor and control.

    High Performance Fault-Tolerant Hadoop Distributed File System

    The Hadoop Distributed File System (HDFS) is designed to store very large data sets reliably and to stream those data sets at high bandwidth to user applications. Huge amounts of data are generated from many sources daily, and maintaining such data is a challenging task. One proposed solution is Hadoop: based on the solution published by Google, Doug Cutting and his team developed an open-source project called Hadoop. Hadoop is a framework written in Java for running applications on large clusters of commodity hardware, and HDFS is its scalable, fault-tolerant, distributed storage system, designed, like Hadoop in general, to be deployed on low-cost hardware. HDFS stores filesystem metadata and application data separately: metadata is kept on a dedicated server called the NameNode, while application data is stored on separate servers called DataNodes. Filesystem data is accessed via HDFS clients, which first contact the NameNode for data locations and then transfer data to (write) or from (read) the specified DataNodes. A file download request uses only one of the replica servers; the other replicated servers are not used, so download time grows with file size. In this paper we study three policies for selecting the block replica to read: "first", "random", and "loadbased". Our results show that the "first" policy downloads files more slowly than "random", and "random" more slowly than "loadbased".
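
    The three block-selection policies compared in the paper can be sketched as follows in Python; the DataNode representation and the load metric are illustrative assumptions, since the paper evaluates the policies inside HDFS itself.

        # "first", "random", and "loadbased" replica selection for a block read.
        import random
        from dataclasses import dataclass


        @dataclass
        class DataNode:
            name: str
            active_transfers: int  # stand-in for the node's current load


        def select_first(replicas):
            return replicas[0]  # always the first replica the NameNode returns


        def select_random(replicas):
            return random.choice(replicas)  # spread requests uniformly


        def select_loadbased(replicas):
            # Prefer the replica holder with the fewest in-flight transfers.
            return min(replicas, key=lambda dn: dn.active_transfers)


        replicas = [DataNode("dn1", 7), DataNode("dn2", 2), DataNode("dn3", 4)]
        print(select_first(replicas).name)      # dn1
        print(select_random(replicas).name)     # any of the three
        print(select_loadbased(replicas).name)  # dn2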