35 research outputs found

    Repository Replication Using NNTP and SMTP

    We present the results of a feasibility study using shared, existing, network-accessible infrastructure for repository replication. We investigate how dissemination of repository contents can be "piggybacked" on top of existing email and Usenet traffic. Long-term persistence of the replicated repository may be achieved thanks to current policies and procedures which ensure that mail messages and news posts are retrievable for evidentiary and other legal purposes for many years after the creation date. While the preservation issues of migration and emulation are not addressed with this approach, it does provide a simple method of refreshing content with unknown partners. Comment: This revised version has 24 figures and a more detailed discussion of the experiments conducted by us.
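
    The piggybacking idea lends itself to a small illustration. The sketch below is not the authors' implementation; it simply wraps a repository record in an ordinary e-mail message and hands it to an existing SMTP server, so the record travels with normal mail traffic. The host names, addresses, and record payload are placeholders.

    import smtplib
    from email.message import EmailMessage

    def replicate_record(record_id: str, payload: bytes,
                         smtp_host: str = "mail.example.org") -> None:
        """Wrap one repository record in an e-mail message and hand it to an
        existing SMTP server, which carries it like any other mail
        (sketch; host and addresses are hypothetical)."""
        msg = EmailMessage()
        msg["From"] = "repository@example.org"
        msg["To"] = "mirror@example.net"
        msg["Subject"] = "repo-replication " + record_id
        msg.set_content("Replicated repository record attached.")
        msg.add_attachment(payload, maintype="application",
                           subtype="octet-stream",
                           filename=record_id + ".xml")
        with smtplib.SMTP(smtp_host) as smtp:
            smtp.send_message(msg)

    # Example with a dummy record identifier and payload:
    # replicate_record("record-0001", b"<record>...</record>")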

    Multidimensional catalogs for systematic exploration of component-based design spaces

    Most component-based approaches to elaborating software require complete and consistent descriptions of components, but in practical settings component information is incomplete, imprecise and changing, and requirements may be likewise. More realistically deployable are approaches that combine exploration of candidate architectures with their evaluation against requirements, and that deal with the fuzziness of the available component information. This article presents an approach to the systematic generation, evaluation and re-generation of component assemblies, using potentially incomplete, imprecise, unreliable and changing descriptions of requirements and components. The key ideas are the representation of NFRs using architectural policies, the systematic reification of policies into mechanisms and components that implement them, multi-dimensional characterizations of these three levels, and catalogs of them. The Azimut framework embodies these ideas and enables traceability of the architecture by supporting architecture-level reasoning, allowing architects to engage in systematic exploration of design spaces. A detailed example illustrates the approach. 1st International Workshop on Advanced Software Engineering: Expanding the Frontiers of Software Technology, Session 1: Software Architecture. Red de Universidades con Carreras en Informática (RedUNCI).
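
    The policy/mechanism/component catalogs described above can be pictured with a small sketch. The structures and names below are hypothetical and do not come from the Azimut framework itself; they only illustrate how a policy derived from an NFR could be reified into mechanisms and then into candidate components, tolerating incomplete catalog entries.

    from dataclasses import dataclass, field
    from typing import Dict, List

    @dataclass
    class Component:
        """A concrete component; its description may be incomplete."""
        name: str
        properties: Dict[str, str] = field(default_factory=dict)

    # Catalogs relating the three levels: policy -> mechanisms -> components.
    POLICY_TO_MECHANISMS: Dict[str, List[str]] = {
        "confidential-communication": ["transport-encryption"],
    }
    MECHANISM_TO_COMPONENTS: Dict[str, List[Component]] = {
        "transport-encryption": [Component("OpenSSL"), Component("GnuTLS")],
    }

    def candidate_components(policy: str) -> List[Component]:
        """Reify a policy into mechanisms and collect the components that
        implement them; missing catalog entries simply yield fewer candidates."""
        candidates: List[Component] = []
        for mechanism in POLICY_TO_MECHANISMS.get(policy, []):
            candidates.extend(MECHANISM_TO_COMPONENTS.get(mechanism, []))
        return candidates

    # candidate_components("confidential-communication")
    # -> [Component(name='OpenSSL', properties={}), Component(name='GnuTLS', properties={})]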

    Derivation of the required elements for a definition of the term middleware

    Thirteen contemporary definitions of Middleware were analyzed. The definitions agree that any software that does the following should be classified as Middleware: (1) provides services that enable transparent application-to-application interaction across the network, (2) acts as a service provider for distributed applications, and (3) provides services that are primarily used by distributed applications (e.g., RPCs, ORBs, directories, name-resolution services). Most definitions agree that Middleware is the level of software required to achieve platform, location, and network transparency. There is some discrepancy about the OSI layers at which Middleware operates; the majority of definitions limit it to layers 5, 6, and 7. Additionally, almost half of the definitions do not include database transparency as something achieved by Middleware, perhaps due to the ambiguous classification of ODBC and JDBC as software. Assuming that the number of times a service is mentioned reflects its importance, the majority of the definitions rank services associated with legal access to an application, along with valid, standardized APIs for application development, as core to the definition of Middleware.

    Mobile computing with the Rover Toolkit

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1998. Includes bibliographical references (leaves 138-147). By Anthony Douglas Joseph.

    The Architecture of a Worldwide Distributed System

    The Internet is a Semicommons

    The Internet is a semicommons. Private property in servers and network links coexists with a shared communications platform. This distinctive combination both explains the Internet's enormous success and illustrates some of its recurring problems. Building on Henry Smith's theory of the semicommons in the medieval open-field system, this essay explains how the dynamic interplay between private and common uses on the Internet enables it to facilitate worldwide sharing and collaboration without collapsing under the strain of misuse. It shows that key technical features of the Internet, such as its layering of protocols and the Web's division into distinct sites, respond to the characteristic threats of strategic behavior in a semicommons. An extended case study of the Usenet distributed messaging system shows that not all semicommons on the Internet succeed; the continued success of the Internet depends on our ability to create strong online communities that can manage and defend the infrastructure on which they rely. Private and common both have essential roles to play in that task, a lesson recognized in David Post's and Jonathan Zittrain's recent books on the Internet.

    Analysis and performance optimization of e-mail server

    Nowadays the use of electronic services and Internet communications is increasingly common among citizens, and the demand for better services and better solutions is constantly growing. In recent years we have seen the emergence of new infrastructures and computing platforms as well as the improvement of existing ones. The need to improve services and electronic communications is compelling, and it requires constant monitoring and the study of new solutions built on new infrastructures and platforms. To cope with the increase in traffic as well as the size of organizations, several architectures have evolved, such as cluster and cloud computing, promising new paradigms of service delivery that may solve many current problems, including scalability, increased storage and processing capacity, greater rationalization of resources, cost reduction, and increased performance. However, it is not clear whether they are suitable for hosting e-mail servers. In this dissertation we evaluate the performance of e-mail servers on different hosting architectures. Beyond computing platforms, different server applications were also analyzed. Tests were run to determine which combinations of computing platforms and applications obtained the best performance for the SMTP service and for the POP3/IMAP services. The tests measure the number of sessions per amount of time in several test scenarios. We used Qmail and Postfix as SMTP servers, and Qmail, Courier and Dovecot for the POP and IMAP services.
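
    The measurement described above, sessions per amount of time, can be illustrated with a minimal sketch. This is not the dissertation's benchmark harness; the host, port, and time window are placeholders, and the loop simply counts how many complete SMTP sessions the server accepts within the window.

    import smtplib
    import time

    def smtp_sessions_in_window(host: str = "mail.example.org",
                                port: int = 25,
                                window_s: float = 60.0) -> int:
        """Open and close SMTP sessions in a loop and report how many
        completed inside the measurement window (sketch only; the host,
        port, and window are hypothetical)."""
        sessions = 0
        deadline = time.monotonic() + window_s
        while time.monotonic() < deadline:
            try:
                with smtplib.SMTP(host, port, timeout=10) as smtp:
                    smtp.noop()  # minimal protocol exchange per session
                sessions += 1
            except (OSError, smtplib.SMTPException):
                break  # server unreachable or saturated; stop measuring
        return sessions

    # print(smtp_sessions_in_window())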

    Informatics architecture. Implementation. June 1997
