
    Extensibility and limitations of FDDI

    Recently, two standards for Metropolitan Area Networks (MANs), the Fiber Distributed Data Interface (FDDI) and the Distributed Queue Dual Bus (DQDB), have emerged as the primary competitors in the MAN arena. Great interest exists in building higher-speed networks that support large numbers of nodes over greater distances, and it is not clear what types of protocols are needed for this kind of environment, or whether these MAN standards can be extended to it. The extensibility of FDDI to the Gbps range and to long-distance environments is investigated. Specification parameters that affect performance are identified, and a measure is provided for predicting the utilization of FDDI. A comparison of FDDI at 100 Mbps and 1 Gbps is presented. Specific problems with FDDI are addressed, and modifications that improve the viability of FDDI in such high-speed networks are investigated.
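
    The abstract does not reproduce the utilization measure itself. Purely as a sketch, the commonly cited timed-token approximation for the maximum asynchronous utilization of an FDDI ring, U <= n(TTRT - D) / (n*TTRT + D) with n active stations, target token rotation time TTRT, and ring latency D, illustrates the kind of prediction involved; every parameter value below is an assumption chosen for illustration, not a figure from the paper.

    # Illustrative sketch, not the paper's model: the standard timed-token
    # bound on asynchronous utilization, U <= n*(TTRT - D) / (n*TTRT + D).
    # All numbers here are assumptions chosen only to exercise the formula.

    def ring_latency(length_km, stations, station_delay_s=1e-6,
                     prop_delay_s_per_km=5e-6):
        """Total ring latency D: fiber propagation plus per-station delay."""
        return length_km * prop_delay_s_per_km + stations * station_delay_s

    def max_utilization(ttrt_s, latency_s, stations):
        """Timed-token upper bound on achievable asynchronous utilization."""
        return stations * (ttrt_s - latency_s) / (stations * ttrt_s + latency_s)

    if __name__ == "__main__":
        n, ttrt = 100, 8e-3                  # 100 stations, 8 ms TTRT (assumed)
        for length_km in (100, 1000):        # metropolitan vs. long-distance ring
            d = ring_latency(length_km, n)
            print(length_km, "km:", round(max_utilization(ttrt, d, n), 3))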

    Performance of gigabit FDDI

    Great interest exists in developing high-speed protocols able to support data rates at gigabit speeds. Hardware already exists that can experimentally transmit at rates exceeding a gigabit per second, but it is not clear which types of protocols will provide the best performance. One possibility is to examine current protocols and their extensibility to these speeds. The scaling of the Fiber Distributed Data Interface (FDDI) to gigabit speeds is studied. More specifically, delay statistics are included to provide insight into which parameters (network length, packet length, or number of nodes) have the greatest effect on performance.
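
    As a back-of-the-envelope illustration of how the balance among these parameters shifts with data rate (not a result from the paper), one can compare the time to transmit a packet against the time to propagate around the ring; the packet size and ring length below are assumed values.

    # Rough illustration (assumed numbers): at 1 Gbps the packet transmission
    # time shrinks tenfold while propagation delay over the fiber does not,
    # so network length weighs more heavily in the delay at gigabit speed.

    PROP_S_PER_KM = 5e-6                     # approximate propagation delay in fiber

    def transmit_time_s(packet_bits, rate_bps):
        return packet_bits / rate_bps

    ring_km, packet_bits = 100, 4500 * 8     # assumed ring length and packet size
    prop = ring_km * PROP_S_PER_KM
    for rate in (100e6, 1e9):
        tx = transmit_time_s(packet_bits, rate)
        print(f"{rate/1e6:.0f} Mbps: tx={tx*1e6:.1f} us, propagation={prop*1e6:.1f} us")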

    A software development and evolution model based on decision-making

    Design is a complex activity whose purpose is to construct an artifact that satisfies a set of constraints and requirements, yet the design process itself is not well understood. The software design and evolution process is the focus of interest, and a three-dimensional software development space organized around a decision-making paradigm is presented. An initial instantiation of this model, called 3DPM_p, which has been partly implemented, is presented, and the use of this model in software reuse and process management is discussed.

    Documenting the decision structure in software development

    Current software development paradigms focus on the products of the development process; much of the decision making that produces these products lies outside their scope. The Decision-Based Software Development (DBSD) paradigm views the design process as a series of interrelated decisions that involve the identification and articulation of problems, alternatives, solutions, and justifications. Decisions made by programmers and analysts are recorded in a project database. Unresolved problems are also recorded, and resources for their resolution are allocated by management according to the overall development strategy. This decision structure is linked to the products affected by the relevant decisions and provides a process-oriented view of the resulting system. Software maintenance uses this decision view of the system to understand the rationale behind the decisions affecting the part of the system to be modified. D-HyperCase, a prototype Decision-Based Hypermedia System, is described, and results of applying the DBSD approach during its development are presented.
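
    The abstract describes the decision structure only in prose; the sketch below is a hypothetical rendering (not the D-HyperCase project database schema) of the kind of record DBSD associates with each decision: the problem, the alternatives considered, the chosen solution and its justification, and links to the affected products.

    # Hypothetical illustration of a DBSD-style decision record; field names
    # are assumptions, not the schema used by D-HyperCase.
    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class Decision:
        problem: str                      # the issue that prompted the decision
        alternatives: List[str]           # options that were articulated
        solution: Optional[str] = None    # chosen alternative, None if unresolved
        justification: str = ""           # rationale recorded for maintainers
        affected_products: List[str] = field(default_factory=list)  # linked artifacts

        @property
        def resolved(self) -> bool:
            return self.solution is not None

    d = Decision(problem="How should session state be stored?",
                 alternatives=["in-memory cache", "relational table"],
                 solution="relational table",
                 justification="state must survive restarts",
                 affected_products=["design/storage.md", "src/session.c"])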

    Smart Objects and Open Archives

    Within the context of digital libraries (DLs), we are making information objects first-class citizens. We decouple information objects from the systems used for their storage and retrieval, allowing the technology for both DLs and information content to progress independently. We believe dismantling the stovepipe of DL-archive-content is the first step in building richer DL experiences for users and ensuring the long-term survivability of digital information. To demonstrate this partitioning between DLs, archives, and information content, we introduce buckets: aggregative, intelligent, object-oriented constructs for publishing in digital libraries. Buckets exist within the Smart Object, Dumb Archive (SODA) DL model, which promotes the importance and responsibility of individual information objects and reduces the role of traditional archives and database systems. The goal is for smart objects to be independent of, and more resilient to, the transient nature of information systems. The SODA model fits well with the emerging Open Archives Initiative (OAI), which promotes DL interoperability through the use of simple archives. This paper examines the motivation for buckets, SODA, and the OAI, and reports initial experiences using them in various DL testbeds.
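
    The paper describes buckets only at the conceptual level; as a loose illustration (not the actual bucket implementation or its API), an aggregative, self-contained object might bundle its own content elements and metadata and answer requests itself, independently of the archive that stores it.

    # Loose illustration of the "smart object" idea; class and method names
    # are assumptions and do not reflect the real bucket API.
    class Bucket:
        """A self-contained information object: content plus behavior."""

        def __init__(self, identifier, metadata):
            self.identifier = identifier
            self.metadata = dict(metadata)      # e.g., Dublin Core-style fields
            self.elements = {}                  # named content items it aggregates

        def add_element(self, name, data):
            self.elements[name] = data

        def get_metadata(self):
            # The object answers for itself; the archive only stores it.
            return dict(self.metadata)

        def display(self, name):
            return self.elements[name]

    b = Bucket("report-2000-01", {"title": "Smart Objects and Open Archives"})
    b.add_element("report.pdf", b"...binary content...")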

    A Scalable Backward Chaining-Based Reasoner for a Semantic Web

    In this paper we consider knowledge bases that organize information using ontologies. Specifically, we investigate reasoning over a semantic web whose underlying knowledge base covers linked data about science research, harvested from the Web and supplemented and edited by community members. In the semantic web over which we want to reason, frequent changes occur in the underlying knowledge base, while changes to the underlying ontology or to the rule set that governs the reasoning are less frequent. Interposing a backward-chaining reasoner between the knowledge base and a query manager yields an architecture that can support reasoning in the face of frequent changes. However, such an interposition of the reasoner introduces uncertainty in the size and effort measurements typically exploited during query optimization. We present an algorithm for dynamic query optimization in such an architecture. We also introduce new optimization techniques into the backward-chaining algorithm. We show that these techniques, together with the query optimization reported on earlier, allow us to outperform forward-chaining reasoners in scenarios where the knowledge base is subject to frequent change. Finally, we analyze the impact of these techniques on a large knowledge base that requires external storage.
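
    For readers unfamiliar with the strategy, the sketch below is a minimal propositional backward chainer; the reasoner described here works over ontology-backed triple data with unification, dynamic query optimization, and external storage, all of which this toy omits.

    # Minimal propositional backward chainer (illustration only; the paper's
    # reasoner handles variables, ontologies, and query optimization).
    def prove(goal, facts, rules, seen=frozenset()):
        """Return True if `goal` follows from `facts` under Horn `rules`.
        `rules` is a list of (head, [body goals]) pairs."""
        if goal in facts:
            return True
        if goal in seen:                     # guard against cyclic rules
            return False
        for head, body in rules:
            if head == goal and all(
                    prove(sub, facts, rules, seen | {goal}) for sub in body):
                return True
        return False

    facts = {"author(doe, paper1)", "in(paper1, cs)"}
    rules = [("expert(doe, cs)", ["author(doe, paper1)", "in(paper1, cs)"])]
    print(prove("expert(doe, cs)", facts, rules))   # True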

    Generalization Per Category: Theory And Application

    The concept of Generalization Per Category (GPC) is formalized. It is shown that GPC imposes lattice structures on entity types and their subtypes. A high-level, application-oriented data definition language based on GPC is outlined, which allows the system to derive general entity types and organize their instances. Users are freed from undue effort in designing databases about entity types with rich varieties and large populations. Effective browsing of these databases and efficient execution of frequent queries against them are achieved by using the lattice structures among the entity types and their subtypes.
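
    The formal development is in the paper; purely as an illustration of why category-based subtypes form a lattice, one can model a subtype by the set of category constraints it carries: removing constraints generalizes, adding them specializes, and the two operations give the lattice join and meet. The representation below is an assumption made for illustration, not the paper's formalism.

    # Toy illustration (not the paper's formalism): a subtype is a frozenset
    # of (category, value) constraints; fewer constraints = more general.
    def generalize(a, b):
        """Least common generalization: keep only the shared constraints."""
        return a & b

    def specialize(a, b):
        """Greatest common specialization: combine the constraints."""
        return a | b

    emp_cs   = frozenset({("dept", "cs")})
    emp_cs_f = frozenset({("dept", "cs"), ("status", "full-time")})
    emp_ee_f = frozenset({("dept", "ee"), ("status", "full-time")})

    print(generalize(emp_cs_f, emp_ee_f))   # all full-time employees
    print(specialize(emp_cs, emp_cs_f))     # full-time CS employees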

    Lessons Learned with Arc, an OAI-PMH Service Provider

    Web-based digital libraries have historically been built in isolation, utilizing different technologies, protocols, and metadata. These differences hindered the development of digital library services that enable users to discover information from multiple libraries through a single unified interface. The Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH) is a major international effort to address technical interoperability among distributed repositories. Arc debuted in 2000 as the first end-user OAI-PMH service provider. Since that time, Arc has grown to include nearly 7,000,000 metadata records. Arc has been deployed in a number of environments and has served as the basis for many other OAI-PMH projects, including Archon, Kepler, NCSTRL, and DP9. In this article we review the history of OAI-PMH and Arc, as well as some of the lessons learned while developing Arc and related OAI-PMH services. Reprinted by permission of the publisher.
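
    Arc's harvester is considerably more elaborate, but the core OAI-PMH flow it builds on is simple: issue ListRecords requests and follow resumptionTokens until the repository has no more records. The sketch below assumes a repository base URL and the standard oai_dc metadata format; it is not Arc's code.

    # Minimal OAI-PMH ListRecords harvester (illustration; not Arc's code).
    import urllib.parse
    import urllib.request
    import xml.etree.ElementTree as ET

    OAI = "{http://www.openarchives.org/OAI/2.0/}"

    def harvest(base_url, metadata_prefix="oai_dc"):
        """Yield OAI-PMH <record> elements, following resumptionTokens."""
        params = {"verb": "ListRecords", "metadataPrefix": metadata_prefix}
        while True:
            url = base_url + "?" + urllib.parse.urlencode(params)
            with urllib.request.urlopen(url) as resp:
                root = ET.fromstring(resp.read())
            for record in root.iter(OAI + "record"):
                yield record
            token = root.find(f"{OAI}ListRecords/{OAI}resumptionToken")
            if token is None or not (token.text or "").strip():
                break                        # no resumptionToken: harvest complete
            params = {"verb": "ListRecords", "resumptionToken": token.text.strip()}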

    CSMA/RN: A universal protocol for gigabit networks

    Networks must provide intelligent access so that nodes can share the communication resources. In the 100 Mbps to 1 Gbps range, the demand-access class of protocols has been studied extensively; many use some form of slot or reservation system, and many use the concept of attempt-and-defer to detect the presence or absence of incoming information. The random-access class of protocols, such as shared-channel systems (Ethernet), also uses attempt-and-defer in the form of carrier sensing to alleviate the damaging effects of collisions; in CSMA/CD, the sensing of interference is global. All of these systems have one aspect in common: they examine activity on the network, either locally or globally, and react with some attempt-and-X mechanism. Of the attempt-and-X mechanisms, one is conspicuously missing: attempt-and-truncate. Attempt-and-truncate is studied in a ring configuration called the Carrier Sensed Multiple Access Ring Network (CSMA/RN). The system features of CSMA/RN are described, including the node operations for inserting and removing messages and for handling integrated traffic. Performance and operational results from analytical and simulation studies, which indicate that CSMA/RN is a useful and adaptable protocol over a wide range of network conditions, are discussed. Finally, the research and development activities necessary to demonstrate and realize the potential of CSMA/RN as a universal gigabit network protocol are outlined.
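
    The node operations are described here only qualitatively; the fragment below is a simplified illustration (not the analytical or simulation model from the study) of the attempt-and-truncate rule: a node begins sending only when it senses no incoming carrier, and if upstream traffic arrives mid-message it truncates immediately and requeues the remainder.

    # Simplified illustration of attempt-and-truncate at a CSMA/RN node;
    # names and structure are assumptions, not the study's model.
    class RingNode:
        def __init__(self):
            self.queue = []                  # outgoing messages awaiting the ring
            self.sending = None

        def try_transmit(self, incoming_carrier_sensed):
            # Attempt: start only when the incoming link is sensed idle.
            if not incoming_carrier_sensed and self.sending is None and self.queue:
                self.sending = self.queue.pop(0)

        def on_incoming_carrier(self):
            # Truncate: ring traffic has priority; stop at once and requeue
            # the unsent remainder for a later attempt.
            if self.sending is not None:
                self.queue.insert(0, self.sending)
                self.sending = None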

    Traffic placement policies for a multi-band network

    Recently, protocols have been introduced that enable the integration of synchronous traffic (voice or video) and asynchronous traffic (data) and extend the size of local area networks without loss of speed or capacity. One of these is DRAMA, a multiband protocol based on broadband technology that provides dynamic allocation of bandwidth among clusters of nodes in the total network. A number of traffic placement policies for such networks are proposed and evaluated. Metrics used for performance evaluation include average network access delay, degree of fairness of access among the nodes, and network throughput. The feasibility of the DRAMA protocol is established through simulation studies. DRAMA provides effective integration of synchronous and asynchronous traffic owing to its ability to separate traffic types. Under the suggested traffic placement policies, the DRAMA protocol is shown to handle diverse loads, mixes of traffic types, and numbers of nodes, as well as modifications to the network structure and momentary traffic overloads.
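
    DRAMA's allocation mechanism is not spelled out in the abstract; as a stand-in illustration only, a dynamic multiband allocator might redistribute a fixed pool of frequency bands among clusters in proportion to their offered load while reserving a minimum share per cluster. The function name and the policy itself are assumptions, not the DRAMA algorithm.

    # Stand-in illustration of dynamic band allocation among clusters;
    # this is not the DRAMA protocol's actual mechanism.
    def allocate_bands(total_bands, cluster_load, min_per_cluster=1):
        """Give every cluster a minimum share, then split the remaining
        bands in proportion to each cluster's offered load."""
        alloc = {c: min_per_cluster for c in cluster_load}
        spare = total_bands - min_per_cluster * len(cluster_load)
        total_load = sum(cluster_load.values()) or 1
        for c, load in cluster_load.items():
            alloc[c] += int(spare * load / total_load)   # rounding leftovers stay unassigned
        return alloc

    print(allocate_bands(24, {"clusterA": 10.0, "clusterB": 2.0, "clusterC": 0.5}))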