21 research outputs found

    A real-time diagnostic and performance monitor for UNIX

    There are now over one million UNIX sites, and the pace at which new installations are added is steadily increasing. Along with this increase comes a need to develop simple, efficient, effective and adaptable ways of simultaneously collecting real-time diagnostic and performance data. This need exists because distributed systems can give rise to complex failure situations that are often unidentifiable with single-machine diagnostic software. The simultaneous collection of error and performance data is also important for research in failure prediction and error/performance studies. This paper introduces a portable method to concurrently collect real-time diagnostic and performance data on a distributed UNIX system. The combined diagnostic/performance data collection is implemented on a distributed multi-computer system using SUN4s as servers. The approach uses existing UNIX system facilities to gather system dependability information such as error and crash reports. In addition, performance data such as CPU utilization, disk usage, I/O transfer rate and network contention are also collected. In the future, the collected data will be used to identify dependability bottlenecks and to analyze the impact of failures on system performance.
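
    The abstract names the concrete metrics collected; as a hedged illustration of what such a sampler can look like (not the paper's SunOS-based implementation), the Python sketch below derives CPU utilization from a Linux-style /proc/stat at a fixed interval and appends timestamped records to a log file. The interval and log path are invented for the example.

        import time

        def read_cpu_times():
            # First line of /proc/stat: "cpu user nice system idle iowait ..."
            fields = open("/proc/stat").readline().split()[1:]
            values = [int(v) for v in fields]
            idle = values[3] + values[4]      # idle + iowait count as not busy
            return sum(values), idle

        def sample(interval=5.0, logfile="perf.log"):
            total0, idle0 = read_cpu_times()
            while True:
                time.sleep(interval)
                total1, idle1 = read_cpu_times()
                # Fraction of non-idle ticks over the last interval.
                busy = 1.0 - (idle1 - idle0) / max(total1 - total0, 1)
                with open(logfile, "a") as log:
                    log.write("%d cpu_util=%.3f\n" % (time.time(), busy))
                total0, idle0 = total1, idle1

    A real deployment in the paper's spirit would run one such sampler per machine alongside the error-report collection, so both data streams share a common time base.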

    Naming issues in the design of transparently distributed operating systems

    PhD Thesis. Naming is of fundamental importance in the design of transparently distributed operating systems. A transparently distributed operating system should be functionally equivalent to the systems of which it is composed. In particular, the names of remote objects should be indistinguishable from the names of local objects. In this thesis we explore the implications that this recursive notion of transparency has for the naming mechanisms provided by an operating system. In particular, we show that a recursive naming system is more readily extensible than a flat naming system by demonstrating that it is in precisely those areas in which a system is not recursive that transparency is hardest to achieve. However, this is not so much a problem of distribution as a problem of scale: a system which does not scale well internally will not extend well to a distributed system. Building a distributed system out of existing systems involves joining the name spaces of the individual systems together. When combining name spaces it is important to preserve the identity of individual objects. Although unique identifiers may be used to distinguish objects within a single name space, we argue that it is difficult, if not impossible, in practice to guarantee the uniqueness of such identifiers between name spaces. Instead, we explore the possibility of using hierarchical identifiers, unique only within a localised context. However, we show that such identifiers cannot be used in an arbitrary naming graph without compromising the notion of identity and hence violating the semantics of the underlying system. The only alternative is to sacrifice a deterministic notion of identity by using random identifiers to approximate global uniqueness with a known probability of failure (which can be made arbitrarily small if the overall size of the system is known in advance).
    UK Science and Engineering Research Council
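
    The closing claim, that random identifiers give a known and tunable failure probability, follows from the birthday bound; the sketch below (illustrative only, with made-up figures) computes the identifier width needed to keep the collision probability below a target once the system size is known in advance.

        def collision_probability(n_objects, bits):
            # Union (birthday) bound: P(collision) <= n(n-1)/2 * 2^-bits.
            return n_objects * (n_objects - 1) / 2.0 / 2.0 ** bits

        def bits_needed(n_objects, max_failure_prob):
            # Smallest identifier width whose bound meets the target.
            bits = 1
            while collision_probability(n_objects, bits) > max_failure_prob:
                bits += 1
            return bits

        # A billion objects with failure probability below 10^-12 needs
        # identifiers of roughly 100 bits under this bound.
        print(bits_needed(10 ** 9, 1e-12))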

    The Alumni Network, Summer 1999 (Vol. XV No. 1)


    Do Too Many Chefs Really Spoil the Broth? The European Commission, Bureaucratic Politics and European Integration, CES Germany & Europe Working Papers, No. 09.2, 5 August 1999

    There is a puzzling, little-remarked contradiction in scholarly views of the European Commission. On the one hand, the Commission is seen as the maestro of European integration, gently but persistently guiding both governments and firms toward Brussels. On the other hand, the Commission is portrayed as a headless bunch of bickering fiefdoms who can hardly be bothered by anything but their own internecine turf wars. The reason these very different views of the same institution have so seldom come into conflict is quite apparent: EU studies has a set of relatively autonomous and poorly integrated subfields that work at different levels of analysis. Those scholars holding the "heroic" view of the Commission are generally focused on the contest between national and supranational levels that characterized the 1992 program and subsequent major steps toward European integration. By contrast, those scholars with the "bureaucratic politics" view are generally authors of case studies or legislative histories of individual EU directives or decisions. However, the fact that these two images of the Commission are often two ships passing in the night hardly implies that there is no dispute. Clearly both views cannot be right; but then, how can we explain the significant support each enjoys from the empirical record? The Commission, perhaps the single most important supranational body in the world, certainly deserves better than the schizophrenic interpretation the EU studies community has given it. In this paper, I aim to make a contribution toward the unraveling of this paradox. In brief, the argument I make is as follows: the European Commission can be effective in pursuit of its broad integration goals in spite of, and even because of, its internal divisions. The folk wisdom that too many chefs spoil the broth may often be true, but it need not always be so. The paper is organized as follows. I begin with an elaboration of the theoretical position briefly outlined above. I then turn to a case study from the major Commission efforts to restructure the computer industry in the context of its 1992 program. The computer sector does not merely provide interesting, random illustrations of the hypothesis I have advanced. Rather, as Wayne Sandholtz and John Zysman have stressed, the Commission's efforts on informatics formed one of the most crucial parts of the entire 1992 program, and so the Commission's success in "Europeanizing" these issues had significant ripple effects across the entire European political economy. I conclude with some thoughts on the following question: now that the Commission has succeeded in bringing the world to its doorstep, does its bureaucratic division still serve a useful purpose?

    Distributed file systems for Unix

    With the advent of distributed systems, mechanisms that support efficient resource sharing are necessary to exploit a distributed architecture. One of the key resources UNIX provides is a hierarchical file system. Early efforts supported distributed UNIX systems by copying files and sending mail between individual machines. The desire to provide transparent mechanisms by which distributed systems access resources has propelled the development of distributed file systems. This thesis presents a brief history of the development of distributed systems based on UNIX, and surveys recent implementations of distributed file systems based on UNIX. The IBIS distributed file system is an example of the latter. The original capabilities of IBIS are discussed, and modifications that enhance these capabilities are described.

    Distributed Operating Systems

    Distributed operating systems have many aspects in common with centralized ones, but they also differ in certain ways. This paper is intended as an introduction to distributed operating systems, and especially to current university research about them. After a discussion of what constitutes a distributed operating system and how it is distinguished from a computer network, various key design issues are discussed. Then several examples of current research projects are examined in some detail, namely, the Cambridge Distributed Computing System, Amoeba, V, and Eden. © 1985, ACM. All rights reserved.

    Methodology for modeling high performance distributed and parallel systems

    Performance modeling of distributed and parallel systems is of considerable importance to the high performance computing community. To achieve high performance, proper task or process assignment and data or file allocation among processing sites is essential. This dissertation describes an approach to modeling distributed and parallel systems that combines optimal static solutions for data allocation with dynamic policies for task assignment. A performance-efficient system model is developed using analytical tools and techniques. The system model is developed in three steps. First, the basic client-server model, which allows only data transfer, is evaluated. A prediction and evaluation method is developed to examine the system behavior and estimate performance measures. The method is based on known product-form queueing networks. The next step extends the model so that each site of the system behaves as both client and server. A data-allocation strategy is designed at this stage which optimally assigns the data to the processing sites. The strategy is based on the flow-deviation technique in queueing models. The third stage considers process-migration policies. A novel on-line adaptive load-balancing algorithm is proposed which dynamically migrates processes and transfers data among different sites to minimize the job execution cost. The gradient-descent rule is used to optimize the cost function, which expresses the cost of process execution at different processing sites. The accuracy of the prediction method and the effectiveness of the analytical techniques are established by simulation. The modeling procedure described here is general and applicable to any message-passing distributed and parallel system. The proposed techniques and tools can be easily utilized in other related areas such as networking and operating systems. This work contributes significantly towards the design of distributed and parallel systems where performance is critical.
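
    As a hedged sketch of the third stage (a gradient-descent rule applied to an execution-cost function; the service rates and workload below are invented, not taken from the dissertation), one can model each site as an M/M/1 queue whose cost for offered load x and service rate mu is x/(mu - x), and descend the gradient while projecting each step so the total offered load stays fixed.

        def balance(mu, total, steps=5000, eta=0.01):
            # Start from a feasible split proportional to service rates.
            x = [total * m / sum(mu) for m in mu]
            for _ in range(steps):
                # Marginal cost at site i: d/dx [x/(mu-x)] = mu/(mu-x)^2.
                g = [m / (m - xi) ** 2 for m, xi in zip(mu, x)]
                mean_g = sum(g) / len(mu)
                # Projected gradient step: keeps sum(x) equal to `total`.
                x = [xi - eta * (gi - mean_g) for xi, gi in zip(x, g)]
            return x

        # Three sites with service rates 10, 5 and 2 jobs/s sharing 8 jobs/s;
        # the descent shifts most of the load onto the fastest site.
        print([round(xi, 2) for xi in balance([10.0, 5.0, 2.0], 8.0)])

    At the fixed point the marginal costs are equalized across sites, which is the optimality condition the dissertation's dynamic policy is driving toward; an on-line version would recompute the gradient from measured rather than assumed service rates.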

    Recursive structure in computer systems

    PhD Thesis. Structure plays an important part in the design of large systems. Unstructured programs are difficult to design or test, and good structure has been recognized as essential to all but the smallest programs. Similarly, concurrently executing computers must co-operate in a structured way if an uncontrolled growth in complexity is to be avoided. The thesis presented here is that recursive structure can be used to organize and simplify large programs and highly parallel computers. In programming, naming concerns the way names are used to identify objects. Various naming schemes are examined, including 'block structured' and 'pathname' naming. A new scheme is presented as a synthesis of these two, combining most of their advantages. Recursively structured naming is shown to be an advantage when programs are to be decomposed or combined to an arbitrary degree. Also, a contribution to the UNIX United/Newcastle Connection distributed operating system design is described; this shows how recursive naming was used in a practical system. Computation concerns the progress of execution in a computer. A distinction is made between control-driven computation, where the programmer has explicit control over sequencing, and data-driven or demand-driven computation, where sequencing is implicit. It is shown that recursively structured computation has attractive locality properties. The definition of a recursive structure may itself be cyclic (self-referencing). A new resource-management ('garbage collection') algorithm is presented which can manage cyclic structures without costs proportional to the system size. The scheme is an extension of 'reference counting'. Finally, the need for structure in program and computer design and the advantages of recursive structure are discussed.
    The Science and Engineering Research Council of Great Britain
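
    Since the recursive-naming contribution (UNIX United/Newcastle Connection) is the concrete system here, a minimal Python sketch may help; the machine and file names are invented. Each system is a naming context, a remote system is grafted in as an ordinary subtree, and pathname resolution recurses without noticing where the boundary between systems lies.

        class Context:
            def __init__(self):
                self.entries = {}          # name -> Context or leaf object

            def bind(self, name, obj):
                self.entries[name] = obj
                return obj

            def resolve(self, path):
                # Resolve one component, then recurse into the sub-context;
                # a grafted remote context is traversed like a local one.
                head, _, rest = path.partition("/")
                obj = self.entries[head]
                return obj.resolve(rest) if rest else obj

        root = Context()
        unix1 = root.bind("unix1", Context())
        unix1.bind("etc", Context()).bind("passwd", "unix1 passwd file")
        # Graft a second machine's entire name space in as a subtree.
        unix2 = root.bind("unix2", Context())
        unix2.bind("etc", Context()).bind("passwd", "unix2 passwd file")

        print(root.resolve("unix2/etc/passwd"))   # a remote object, named locally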