
    An Overview of the AURORA Gigabit Testbed

    AURORA is one of five U.S. testbeds charged with exploring applications of, and technologies necessary for, networks operating at gigabit per second or higher bandwidths. AURORA is also an experiment in collaboration, where government support (through the Corporation for National Research Initiatives, which is in turn funded by DARPA and the NSF) has spurred interaction among centers of excellence in industry, academia, and government. The emphasis of the AURORA testbed, distinct from the other four testbeds, is research into the supporting technologies for gigabit networking. Our targets include new software architectures, network abstractions, hardware technologies, and applications. This paper provides an overview of the goals and methodologies employed in AURORA, and reports preliminary results from our first year of research.

    The AURORA Gigabit Testbed

    AURORA is one of five U.S. networking testbeds charged with exploring applications of, and technologies necessary for, networks operating at gigabit per second or higher bandwidths. The emphasis of the AURORA testbed, distinct from the other four testbeds, BLANCA, CASA, NECTAR, and VISTANET, is research into the supporting technologies for gigabit networking. Like the other testbeds, AURORA itself is an experiment in collaboration, where government initiative (in the form of the Corporation for National Research Initiatives, which is funded by DARPA and the National Science Foundation) has spurred interaction among pre-existing centers of excellence in industry, academia, and government. AURORA has been charged with research into networking technologies that will underpin future high-speed networks. This paper provides an overview of the goals and methodologies employed in AURORA, and points to some preliminary results from our first year of research, ranging from analytic results to experimental prototype hardware. This paper enunciates our targets, which include new software architectures, network abstractions, and hardware technologies, as well as applications for our work.

    Enterprise network convergence: path to cost optimization

    During the past two decades, telecommunications has evolved a great deal. In the eighties, people used television, radio, and the telephone as their communication systems. The introduction of the Internet and the WWW then immensely transformed the telecommunications industry. This Internet revolution brought about a huge change in the way businesses communicated and operated. Enterprise networks faced increasing demand for bandwidth as they embraced newer technologies, and their requirements grew as the applications and services used in the network expanded. This demand for fast, high-performance communication systems has led to the emergence of converged network solutions. Enterprises across the globe are investigating new ways to implement voice, video, and data over a single network for various reasons: to optimize network costs, to restructure their communication systems, to extend next-generation networking abilities, or to bridge the gap between their corporate network and current technological progress. Until now, organizations have maintained multiple network services to support a range of communication needs. Investing in multiple communication infrastructures limits the network's ability to provide resourceful bandwidth-optimization services throughout the system. Thus, as the requirements for corporate networks to handle dynamic traffic grow day by day, the need for a more effective and efficient network arises. A converged network is the solution for enterprises aspiring to employ advanced applications and innovative services. This thesis will emphasize the importance of converging network infrastructure and prove that it leads to cost savings. It discusses the characteristics, architecture, and relevant protocols of voice, data, and video traffic over both traditional infrastructure and converged architecture. While IP-based networks provide excellent quality for non-real-time data networking, the network by itself cannot provide reliable, high-quality, secure service for real-time traffic. For IP networks to perform reliable and timely transmission of real-time data, additional mechanisms to reduce delay, jitter, and packet loss are required. Therefore, this thesis will also discuss the important mechanisms for running real-time traffic such as voice and video over an IP network. Lastly, it will provide an example of enterprise network specifications (voice, video, and data) and present an in-depth cost analysis of a typical network versus a converged network to prove that converged infrastructures provide significant savings.
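
    As a concrete illustration of one such mechanism, the sketch below shows the RFC 3550 interarrival-jitter estimator that RTP voice and video endpoints commonly use to quantify jitter; the sample transit times are invented for the example, and the thesis itself does not prescribe this code.

```python
# Minimal sketch of the RFC 3550 interarrival-jitter estimator
# (J += (|D| - J) / 16); transit-time samples below are illustrative.

def update_jitter(jitter, transit_prev, transit_curr):
    """One smoothing step over the interarrival difference D(i-1, i)."""
    d = abs(transit_curr - transit_prev)
    return jitter + (d - jitter) / 16.0

# transit time = arrival timestamp - RTP timestamp, in the same units
transits = [100, 104, 99, 107, 101]   # hypothetical transit times (ms)
jitter = 0.0
for prev, curr in zip(transits, transits[1:]):
    jitter = update_jitter(jitter, prev, curr)
print(f"estimated jitter: {jitter:.2f} ms")
```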

    Proceedings of the NSSDC Conference on Mass Storage Systems and Technologies for Space and Earth Science Applications

    The proceedings of the National Space Science Data Center Conference on Mass Storage Systems and Technologies for Space and Earth Science Applications, held July 23 through 25, 1991, at the NASA/Goddard Space Flight Center are presented. The program includes a keynote address, invited technical papers, and selected technical presentations to provide a broad forum for the discussion of a number of important issues in the field of mass storage systems. Topics include magnetic disk and tape technologies, optical disk and tape, software storage and file management systems, and experiences with the use of a large, distributed storage system. The technical presentations describe integrated mass storage systems that are expected to be available commercially. Also included is a series of presentations from Federal Government organizations and research institutions covering their mass storage requirements for the 1990s.

    Issues in designing transport layer multicast facilities

    Multicasting denotes a facility in a communications system for providing efficient delivery from a message's source to some well-defined set of locations using a single logical address. While modern network hardware supports multidestination delivery, first-generation Transport Layer protocols (e.g., the DoD Transmission Control Protocol (TCP) [15] and ISO TP-4 [41]) did not anticipate the changes over the past decade in underlying network hardware, transmission speeds, and communication patterns that have enabled and driven the interest in reliable multicast. Much recent research has focused on integrating the underlying hardware multicast capability with the reliable services of Transport Layer protocols. Here, we explore the communication issues surrounding the design of such a reliable multicast mechanism. Approaches and solutions from the literature are discussed, and four experimental Transport Layer protocols that incorporate reliable multicast are examined.
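
    To make one recurring design point concrete, here is a minimal sketch of receiver-driven (NACK-based) loss recovery, one of the approaches the reliable-multicast literature discusses; the class and its behaviour are illustrative assumptions, not taken from any of the four protocols examined.

```python
# Illustrative receiver-side bookkeeping for NACK-based reliable multicast:
# buffer out-of-order packets, deliver in-order runs, request repairs.

class NackReceiver:
    def __init__(self):
        self.expected = 0    # next in-order sequence number
        self.buffer = {}     # out-of-order packets held for reordering

    def on_packet(self, seq, payload):
        """Return (payloads delivered in order, sequence numbers to NACK)."""
        nacks = [s for s in range(self.expected, seq) if s not in self.buffer]
        self.buffer[seq] = payload
        delivered = []
        while self.expected in self.buffer:   # release any in-order run
            delivered.append(self.buffer.pop(self.expected))
            self.expected += 1
        return delivered, nacks

r = NackReceiver()
print(r.on_packet(0, "a"))   # (['a'], [])
print(r.on_packet(2, "c"))   # ([], [1])  packet 1 is missing, so NACK it
print(r.on_packet(1, "b"))   # (['b', 'c'], [])  repair fills the gap
```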

    ANALYSIS OF DATA & COMPUTER NETWORKS IN STUDENTS' RESIDENTIAL AREA IN UNIVERSITI TEKNOLOGI PETRONAS

    In Universiti Teknologi Petronas (UTP), most of the students depend on the Internet and computer network connection to gain academic information and share educational resources. Even though Internet connections and computer networks are provided, the service frequently experiences interruptions, such as slow Internet access, virus and worm distribution, and network abuse by irresponsible students. As the UTP organization keeps expanding, the need for better service in UTP increases. Several approaches were put into practice to address the problems. Research on data and computer networks was performed to understand the network technology applied in UTP. Questionnaire forms were distributed among the students to obtain feedback and statistical data about UTP's network in the Students' Residential Area. The study concentrates only on the Students' Residential Area, as it is where most of the users reside. From the survey, it can be observed that 99% of the students access the network almost 24 hours a day. In 2005, the 2 Mbps of allocated bandwidth was utilized at 100% almost continuously, but in 2006 the Internet-access bottleneck was reduced significantly after the allocated bandwidth was increased to 8 Mbps. Degradation due to irresponsible acts by users also adds burden to the main server. In general, if the proposal to the ITMS (Information Technology & Media Services) Department to improve its Quality of Service (QoS) and establish a UTP Computer Emergency Response Team (UCert) is adopted, most of the issues addressed in this report can be solved.
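
    The bandwidth figures above lend themselves to a simple utilization calculation; the sketch below uses the survey's 2 Mbps and 8 Mbps capacities, while the assumed offered load is an invented round number, not a measured value.

```python
# Illustrative link-utilization arithmetic around the 2 -> 8 Mbps upgrade.
# The offered load below is an assumption, not the survey's measurement.

def utilization(offered_mbps, capacity_mbps):
    """Fraction of the link consumed by the offered load."""
    return offered_mbps / capacity_mbps

offered = 2.0                                   # assumed demand, Mbps
print(utilization(offered, capacity_mbps=2.0))  # 1.0  -> 2005 link saturated
print(utilization(offered, capacity_mbps=8.0))  # 0.25 -> headroom after upgrade
```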

    IP and ATM integration: A new paradigm in multi-service internetworking

    ATM is a widespread technology adopted by many to support advanced data communication, in particular efficient Internet service provision. The expected challenges of multimedia communication, together with the increasingly massive utilization of IP-based applications, urgently require networking solutions to be redesigned in terms of both new functionality and enhanced performance. However, the networking context is affected by so many changes, and to some extent chaotic growth, that any approach based on a structured and complex top-down architecture is unlikely to be applicable. Instead, an approach based on finding the best match between realistic service requirements and the pragmatic, intelligent use of technical opportunities made available by the product market seems more appropriate. By following this approach, innovations and improvements can be introduced at different times, not necessarily complying with one another according to a coherent overall design. With the aim of pursuing feasible innovations in the different networking aspects, we look at IP and ATM internetworking in order to investigate a few of the most crucial issues related to the IP and ATM integration perspective. This research also addresses various means of internetworking the Internet Protocol (IP) and Asynchronous Transfer Mode (ATM), with the objective of identifying the best possible means of delivering Quality of Service (QoS) requirements for multi-service applications while exploiting the meritorious features that IP and ATM have to offer. Although IP and ATM have often been viewed as competitors, their complementary strengths and limitations form a natural alliance that combines the best aspects of both technologies. For instance, one limitation of ATM networks has been the relatively large gap between the speed of the network paths and the control operations needed to configure those data paths to meet changing user needs. IP's greatest strength, on the other hand, is its inherent flexibility and its capacity to adapt rapidly to changing conditions. These complementary strengths and limitations make it natural to combine IP with ATM to obtain the best that each has to offer. Over time, many models and architectures have evolved for IP/ATM internetworking, and they have shaped the fundamental thinking on internetworking IP and ATM. These technologies, architectures, models, and implementations are reviewed in greater detail in addressing possible issues in integrating these architectures in a multi-service enterprise network. The objective is to make recommendations as to the best means of interworking the two, exploiting the salient features of each to provide a faster, more reliable, scalable, robust, QoS-aware network in the most economical manner. How IP will be carried over ATM when a commercial worldwide ATM network is deployed is not addressed; the details of such a network remain in too great a state of flux to specify anything concrete. Our research findings culminated in a strong recommendation that the best model to adopt, in light of the impending integrated-service requirements of future multi-service environments, is an ATM core with IP at the edges, realizing the best of both technologies in delivering QoS guarantees in a seamless manner to any node in the enterprise.
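
    One well-known cost of carrying IP over ATM, implicit in the discussion above, is cell padding and encapsulation overhead; the sketch below works through the classical AAL5/RFC 2684 LLC/SNAP arithmetic for a few common packet sizes, as an illustration rather than a result from this research.

```python
import math

# Cell count and efficiency for IP over ATM via AAL5 with RFC 2684
# LLC/SNAP encapsulation: pad to 48-byte cell payloads, 5-byte cell headers.

LLC_SNAP = 8       # LLC/SNAP encapsulation header, bytes
AAL5_TRAILER = 8   # AAL5 CPCS trailer, bytes
CELL_PAYLOAD = 48
CELL_SIZE = 53     # 48 payload + 5 header

def cells_for_ip_packet(ip_bytes):
    pdu = ip_bytes + LLC_SNAP + AAL5_TRAILER
    return math.ceil(pdu / CELL_PAYLOAD)

for size in (40, 576, 1500):                # common IP packet sizes, bytes
    n = cells_for_ip_packet(size)
    efficiency = size / (n * CELL_SIZE)
    print(f"{size:5d}-byte packet -> {n:3d} cells, {efficiency:.1%} efficient")
```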

    The Design of a System Architecture for Mobile Multimedia Computers

    This chapter discusses the system architecture of a portable computer, called Mobile Digital Companion, which provides support for handling multimedia applications energy efficiently. Because battery life is limited and battery weight is an important factor for the size and the weight of the Mobile Digital Companion, energy management plays a crucial role in the architecture. As the Companion must remain usable in a variety of environments, it has to be flexible and adaptable to various operating conditions. The Mobile Digital Companion has an unconventional architecture that saves energy by using system decomposition at different levels of the architecture and exploits locality of reference with dedicated, optimised modules. The approach is based on dedicated functionality and the extensive use of energy reduction techniques at all levels of system design. The system has an architecture with a general-purpose processor accompanied by a set of heterogeneous autonomous programmable modules, each providing an energy efficient implementation of dedicated tasks. A reconfigurable internal communication network switch exploits locality of reference and eliminates wasteful data copies.
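
    As a toy illustration of the per-module energy management such a decomposed architecture implies, the sketch below models modules with independent power states; the module names and milliwatt budgets are invented, and this is not the Companion's actual design.

```python
# Hypothetical per-module power-state accounting for a decomposed,
# energy-managed architecture; figures and names are illustrative only.

from enum import Enum

class PowerState(Enum):
    OFF = 0
    SLEEP = 1
    ACTIVE = 2

class Module:
    def __init__(self, name, active_mw, sleep_mw):
        self.name, self.active_mw, self.sleep_mw = name, active_mw, sleep_mw
        self.state = PowerState.SLEEP

    def power_mw(self):
        return {PowerState.OFF: 0.0,
                PowerState.SLEEP: self.sleep_mw,
                PowerState.ACTIVE: self.active_mw}[self.state]

modules = [Module("cpu", 400, 10), Module("radio", 300, 5),
           Module("display", 250, 1)]
modules[0].state = PowerState.ACTIVE   # wake only what the task needs
print(sum(m.power_mw() for m in modules), "mW total")   # 406.0 mW total
```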

    Guifi.net: characterization, data collection and self-management of community networks

    In this project, we present an E2E (end-to-end) solution to the principal problems that normally affect community networks, and especially Guifi.net. To introduce our solution, we investigated how Guifi.net works internally (its network hierarchy, the equipment used, its IP configuration, and its financial system), as well as how wireless technology works and what its limitations are. Once we had analysed and detected all the potential issues, we performed a routing-performance and QoS (quality of service) simulation to test two experimental protocols, BATMAN and OLSR, in order to find the most suitable routing protocol for our approach. Finally, we present our new Guifi.net network concept based on MPLS over OLSR.
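
    To illustrate the kind of trade-off such a routing simulation exposes, the sketch below contrasts a pure hop-count choice (in the spirit of baseline OLSR) with a link-quality choice (in the spirit of BATMAN's delivery-ratio metric); the topology and numbers are invented, not results from the project.

```python
# Toy route selection on an invented topology: fewest hops versus best
# end-to-end delivery ratio, the trade-off wireless mesh metrics navigate.

# candidate paths to a gateway: (hop count, end-to-end delivery probability)
paths = {
    "via_node_A": (2, 0.70),   # short but over lossy wireless links
    "via_node_B": (4, 0.95),   # longer but reliable
}

hop_choice = min(paths, key=lambda p: paths[p][0])
quality_choice = max(paths, key=lambda p: paths[p][1])
print("hop-count metric picks:", hop_choice)            # via_node_A
print("delivery-ratio metric picks:", quality_choice)   # via_node_B
```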

    Characterizing, managing and monitoring the networks for the ATLAS data acquisition system

    Particle physics studies the constituents of matter and the interactions between them. Many of the elementary particles do not exist under normal circumstances in nature. However, they can be created and detected during energetic collisions of other particles, as is done in particle accelerators. The Large Hadron Collider (LHC) being built at CERN will be the world's largest circular particle accelerator, colliding protons at energies of 14 TeV. Only a very small fraction of the interactions will give rise to interesting phenomena. The collisions produced inside the accelerator are studied using particle detectors. ATLAS is one of the detectors built around the LHC accelerator ring. During its operation, it will generate a data stream of 64 Terabytes/s. A Trigger and Data Acquisition System (TDAQ) is connected to ATLAS -- its function is to acquire digitized data from the detector and apply trigger algorithms to identify the interesting events. Achieving this requires the power of over 2000 computers plus an interconnecting network capable of sustaining a throughput of over 150 Gbit/s with minimal loss and delay. The implementation of this network required a detailed study of the available switching technologies to a high degree of precision in order to choose the appropriate components. We developed an FPGA-based platform (the GETB) for testing network devices. The GETB system proved flexible enough to be used as the basis of three different network-related projects. An analysis of the traffic pattern generated by the ATLAS data-taking applications was also possible thanks to the GETB. Then, while the network was being assembled, parts of the ATLAS detector started commissioning -- a task that relied on a functional network. It was thus imperative to be able to continuously identify existing and usable infrastructure and manage its operation. In addition, monitoring was required to detect any overload conditions, with an indication of where the excess demand was being generated. We developed tools to ease the maintenance of the network and to automatically produce inventory reports. We created a system that discovers the network topology, which allowed us to verify the installation and track its progress. A real-time traffic visualization system has been built, allowing us to see at a glance which network segments are heavily utilized. Later, as the network achieves production status, it will be necessary to extend the monitoring to identify individual applications' use of the available bandwidth. We studied a traffic-monitoring technology that will allow us to better understand how the network is used. This technology, based on packet sampling, makes it possible to obtain a complete view of the network: not only its total capacity utilization, but also how this capacity is divided among users and software applications. This thesis describes the establishment of a set of tools designed to characterize, monitor and manage complex, large-scale, high-performance networks. We describe in detail how these tools were designed, calibrated, deployed and exploited. The work that led to this thesis spans more than four years and closely follows the development phases of the ATLAS network: its design, its installation and finally, its current and future operation.
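
    The scaling step behind sampling-based monitoring of this kind is simple enough to sketch: with 1-in-N packet sampling, observed counts are multiplied back by N to estimate the full traffic. The sampling rate and counter values below are invented for illustration and are not ATLAS figures.

```python
# Hypothetical 1-in-N packet-sampling estimate (sFlow-style monitoring):
# scale sampled counters by the sampling rate to approximate link load.

SAMPLING_RATE = 2048   # assumed: one packet sampled out of every 2048

def estimate_rates(sampled_pkts, sampled_bytes, interval_s):
    """Return (packets/s, bits/s) estimated from sampled counters."""
    est_pkts = sampled_pkts * SAMPLING_RATE
    est_bits = sampled_bytes * SAMPLING_RATE * 8
    return est_pkts / interval_s, est_bits / interval_s

pps, bps = estimate_rates(sampled_pkts=1200, sampled_bytes=900_000,
                          interval_s=60)
print(f"~{pps:,.0f} packets/s, ~{bps / 1e9:.2f} Gbit/s")
```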