
    Local Coordination for Interpersonal Communication Systems

    The decomposition of complex applications into modular units is an acknowledged design principle for creating robust systems and for enabling the flexible re-use of modules in new application contexts. Typically, component frameworks provide mechanisms and rules for developing software modules in the scope of a certain programming paradigm or programming language and a certain computing platform. For example, the JavaBeans framework supports the development of component-based systems in the Java environment. In this thesis, we present a light-weight, platform-independent approach that views a component-based application as a set of rather loosely coupled parallel processes that can be distributed on multiple hosts and are coordinated through a protocol. The core of our framework is the Message Bus (Mbus): an asynchronous, message-oriented coordination protocol that is based on Internet technologies and provides group communication between application components. Based on this framework, we have developed a local coordination architecture for decomposed multimedia conferencing applications that is designed for endpoint and gateway applications. One element of this architecture is an Mbus-based protocol for the coordination of call control components in conferencing applications.
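    As a rough illustration of the group-communication idea behind the Mbus, the sketch below lets loosely coupled components exchange coordination messages over local IP multicast. The multicast address, port, and the plain-text message are illustrative assumptions and do not follow the actual Mbus message syntax.

    # Minimal sketch: asynchronous, message-oriented group coordination over
    # local IP multicast, in the spirit of the Mbus approach described above.
    # GROUP, PORT, and the message text are assumptions, not the Mbus wire format.
    import socket
    import struct

    GROUP = "239.192.0.1"   # assumed organisation-local multicast group
    PORT = 47000            # assumed port

    def open_receiver():
        """Join the multicast group so this component hears every bus message."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        sock.bind(("", PORT))
        membership = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, membership)
        return sock

    def send(message: str) -> None:
        """Send one coordination message to all components in the group."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)  # stay on the local link
        sock.sendto(message.encode("utf-8"), (GROUP, PORT))

    if __name__ == "__main__":
        receiver = open_receiver()
        send("call-control: setup requested")   # hypothetical coordination message
        data, sender = receiver.recvfrom(4096)
        print(f"received from {sender}: {data.decode('utf-8')}")

    Because every component joins the same group, a message sent by one component is delivered to all of them, which is the asynchronous group-communication pattern the abstract describes.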

    Research reports: 1987 NASA/ASEE Summer Faculty Fellowship Program

    For the 23rd consecutive year, a NASA/ASEE Summer Faculty Fellowship Program was conducted at the Marshall Space Flight Center (MSFC). The program was conducted by the University of Alabama in Huntsville and MSFC during the period 1 June to 7 August 1987. Operated under the auspices of the American Society for Engineering Education, the MSFC program, as well as those at other NASA Centers, was sponsored by the Office of University Affairs, NASA Headquarters, Washington, D.C. The basic objectives of the program are: (1) to further the professional knowledge of qualified engineering and science faculty members; (2) to stimulate an exchange of ideas between participants and NASA; (3) to enrich and refresh the research and teaching activities of the participants' institutions; and (4) to contribute to the research objectives of the NASA Centers. This document is a compilation of the Fellows' reports on their research during the summer of 1987.

    Large-Scale Client/Server Migration Methodology

    The purpose of this dissertation is to explain how to migrate a medium-sized or large company to client/server computing. It draws heavily on the recent IBM Boca Raton migration experience. The client/server computing model is introduced and related, by a Business Reengineering Model, to the major trends that are affecting most businesses today, including business process reengineering, empowered teams, and quality management. A recommended information technology strategy is presented. A business case development approach, necessary to justify the large expenditures required for a client/server migration, is discussed. A five-phase migration management methodology is presented to explain how a business can be transformed from mid-range or mainframe-centric computing to client/server computing. Requirements definition, selection methodology, and development alternatives for client/server applications are presented. Applications are broadly categorized for use by individuals (personal applications) or by teams. Client systems, server systems, and network infrastructures are described along with discussions of requirements definition, selection, installation, and support. The issues of user communication, education, and support with respect to a large client/server infrastructure are explored. Measurements for evaluation of a client/server computing environment are discussed with actual results achieved at the IBM Boca Raton site during the 1994 migration. The dissertation concludes with critical success factors for client/server computing investments and perspectives regarding future technology in each major area.

    A shared-disk parallel cluster file system

    Dissertation presented to obtain the degree of Doctor in Informatics at the Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia. Today, clusters are the de facto cost-effective platform both for high performance computing (HPC) and for IT environments. HPC and IT are quite different environments, and the differences include, among others, their choices of file systems and storage: HPC favours parallel file systems geared towards maximum I/O bandwidth, which are not fully POSIX-compliant and were devised to run on top of (fault-prone) partitioned storage; conversely, IT data centres favour both external disk arrays (to provide highly available storage) and POSIX-compliant file systems (either general purpose or shared-disk cluster file systems, CFSs). These specialised file systems perform very well in their target environments, provided that applications do not require certain lateral features, e.g., file locking on parallel file systems, or high performance writes over cluster-wide shared files on CFSs. In brief, none of the above approaches provides high levels of reliability and performance to both worlds. Our pCFS proposal contributes to changing this situation: the rationale is to combine the best of both, namely the reliability of cluster file systems and the high performance of parallel file systems. We don't claim to provide the absolute best of each, but we aim at full POSIX compliance, a rich feature set, and levels of reliability and performance good enough for broad usage, e.g., traditional as well as HPC applications, support for clustered DBMS engines that may run over regular files, and video streaming. pCFS' main ideas include:
    · Cooperative caching, a technique that has been used in file systems for distributed disks but, as far as we know, had never been used either in SAN-based cluster file systems or in parallel file systems. As a result, pCFS may use all infrastructures (LAN and SAN) to move data.
    · Fine-grain locking, whereby processes running on distinct nodes may define non-overlapping byte-range regions in a file (instead of locking the whole file) and access them in parallel, reading and writing over those regions at the infrastructure's full speed (provided that no major metadata changes are required).
    A prototype was built on top of GFS (a Red Hat shared-disk CFS): GFS' kernel code was slightly modified, and two kernel modules and a user-level daemon were added. In the prototype, fine-grain locking is fully implemented and a cluster-wide coherent cache is maintained through the movement of data (page fragments) over the LAN. Our benchmarks for non-overlapping writers over a single file shared among processes running on different nodes show that pCFS' bandwidth is 2 times greater than NFS' while being comparable to that of the Parallel Virtual File System (PVFS), both of which require about 10 times more CPU. pCFS' bandwidth also surpasses GFS' (600 times for small record sizes, e.g., 4 KB, decreasing to 2 times for large record sizes, e.g., 4 MB), at about the same CPU usage.
    Lusitania, Companhia de Seguros S.A.; IBM Shared University Research (SUR) Programme.
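    The fine-grain locking idea, in which writers on distinct nodes lock only disjoint byte ranges of a shared file, can be illustrated with ordinary POSIX advisory locks. The sketch below is only an analogy under that assumption; pCFS itself implements cluster-wide range locks in kernel modules on top of GFS, not via fcntl, and the file name and record size here are made up.

    # Minimal sketch of non-overlapping byte-range writers using POSIX advisory
    # locks (fcntl.lockf). This only illustrates the fine-grain locking idea
    # described above; it is not pCFS' actual kernel-level mechanism.
    import fcntl
    import os

    RECORD_SIZE = 4 * 1024      # assumed record size (4 KB, as in the benchmarks)
    SHARED_FILE = "shared.dat"  # assumed file on the cluster file system

    def write_region(rank: int, payload: bytes) -> None:
        """Lock and write only this writer's byte range; disjoint ranges never block each other."""
        fd = os.open(SHARED_FILE, os.O_RDWR | os.O_CREAT, 0o644)
        start = rank * RECORD_SIZE
        try:
            # Exclusive lock restricted to [start, start + RECORD_SIZE).
            fcntl.lockf(fd, fcntl.LOCK_EX, RECORD_SIZE, start)
            os.pwrite(fd, payload.ljust(RECORD_SIZE, b"\0"), start)
            fcntl.lockf(fd, fcntl.LOCK_UN, RECORD_SIZE, start)
        finally:
            os.close(fd)

    if __name__ == "__main__":
        # Each process (e.g., one per cluster node) writes its own disjoint region.
        write_region(rank=0, payload=b"data from node 0")

    Because the exclusive lock covers only one record-sized region, concurrent writers on other regions are never serialised, which is the property the dissertation exploits to reach parallel-file-system write bandwidth on a shared file.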