
    Uncertain Bandwidth Calculation in Networks with Non-Linear Services

    ABSTRACT: Bandwidth is the maximum data transfer rate of a network; it indicates how much data can be sent over a connection. Most networks fail to estimate bandwidth availability accurately in wireless settings, so advance bandwidth reservation becomes a critical task for improving network resource utilization. The high variability of wireless channel conditions makes this bandwidth calculation difficult. To overcome these problems we introduce a "Bandwidth Recycling" scheme, i.e., we recycle unused bandwidth without changing the existing bandwidth reservation. The bandwidth of each node in the network is calculated from queries collected over long time periods. For small-scale networks we use an optimal algorithm with exponential time complexity; for large-scale networks we develop heuristics with polynomial time complexity; and we use a token bucket algorithm to avoid packet loss while traffic travels through the network. With this bandwidth calculation, good accuracy can be achieved for each node in the network.
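
    The abstract mentions a token bucket algorithm for avoiding packet loss; the sketch below is a minimal, generic token bucket rate limiter in Python, not the paper's implementation. The rate and capacity parameters are illustrative assumptions.

        import time

        class TokenBucket:
            """Generic token-bucket rate limiter (illustrative, not the paper's scheme)."""

            def __init__(self, rate_tokens_per_sec, capacity_tokens):
                self.rate = rate_tokens_per_sec      # refill rate (tokens/second)
                self.capacity = capacity_tokens      # burst size (maximum stored tokens)
                self.tokens = capacity_tokens        # start with a full bucket
                self.last_refill = time.monotonic()

            def _refill(self):
                now = time.monotonic()
                self.tokens = min(self.capacity,
                                  self.tokens + (now - self.last_refill) * self.rate)
                self.last_refill = now

            def allow(self, packet_cost=1):
                """Return True if a packet may be sent now, consuming tokens."""
                self._refill()
                if self.tokens >= packet_cost:
                    self.tokens -= packet_cost
                    return True
                return False

        # Example: shape traffic to roughly 1,000 packets/second with bursts of 100.
        bucket = TokenBucket(rate_tokens_per_sec=1000, capacity_tokens=100)
        if bucket.allow():
            pass  # transmit the packet; otherwise queue or delay it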

    PROVIDING THE BOUNDARY LINE CONTROLLED REQUEST WITH ADAPTABLE TRANSMISSION RATES IN WDM MESH NETWORKS

    As the mix of applications supported over optical networks increases, new service guarantees must be offered to network customers. In a deadline-driven request (DDR), the data is partitioned into multiple segments that can be processed independently, and the useful data must be transferred before a predefined deadline. DDRs therefore provide scheduling flexibility for service providers: to provision a request, the transmission bandwidth is chosen so as to (1) satisfy the guaranteed deadline and (2) reduce network resource utilization. Bandwidth allocation policies are used to improve network performance, and a mixed integer linear program allows flexible transmission rates to be chosen.
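
    As a rough illustration of how a mixed integer linear program can choose a transmission rate that meets a deadline while limiting resource usage, the sketch below uses the PuLP library in Python. The candidate rates, data volume, and deadline are made-up values, and the one-rate-per-request formulation is a simplification, not the paper's actual model.

        from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, value

        # Illustrative inputs (assumptions, not taken from the paper)
        rates_gbps = [1, 2.5, 10, 40]    # candidate transmission rates (Gb/s)
        data_gbit = 600                  # data volume to deliver (gigabits)
        deadline_s = 120                 # predefined deadline (seconds)

        prob = LpProblem("ddr_rate_selection", LpMinimize)

        # y[i] = 1 if candidate rate i is selected for this request
        y = [LpVariable(f"use_rate_{i}", cat=LpBinary) for i in range(len(rates_gbps))]

        # Chosen rate as a linear expression over the binary selectors
        chosen_rate = lpSum(r * yi for r, yi in zip(rates_gbps, y))

        # Objective: minimize the reserved rate (a proxy for network resource usage)
        prob += chosen_rate

        # Exactly one rate must be chosen, and it must beat the deadline
        prob += lpSum(y) == 1
        prob += chosen_rate * deadline_s >= data_gbit

        prob.solve()
        print("selected rate (Gb/s):", value(chosen_rate))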

    Data transfer scheduling with advance reservation and provisioning

    Over the years, scientific applications have become more complex and more data intensive. Although institutions and organizations gain access to the resources needed for their large-scale applications through the use of distributed resources, complex middleware is required to orchestrate the use of these storage and network resources between collaborating parties, and to manage the end-to-end processing of data. We present a new data scheduling paradigm with advance reservation and provisioning. Our methodology provides a basis for provisioning end-to-end high-performance data transfers, which require integration between system, storage and network resources, and coordination between reservation managers and data transfer nodes. This allows researchers/users and higher-level meta-schedulers to use data placement as a service where they can plan ahead and reserve time and resources for their data movement operations. We present a novel approach for evaluating time-dependent network structures with bandwidth-guaranteed paths. We present a practical online scheduling model using advance reservation in dynamic networks with time constraints. In addition, we report a new polynomial-time algorithm that presents possible reservation options and alternatives for earliest completion and shortest transfer duration. We enhance the advance network reservation system by extending the underlying mechanism to provide a new service in which users submit their constraints and the system suggests possible reservation requests satisfying users' requirements. We have studied scheduling of data transfer operations with resource and time conflicts. We have developed a new scheduling methodology considering resource allocation at client sites and bandwidth allocation on the network links connecting those resources. Other major contributions of our study include enhanced reliability, adaptability, and performance optimization of distributed data placement tasks. While designing this new data scheduling architecture, we also developed other important methodologies such as early error detection, failure awareness, job aggregation, and dynamic adaptation of distributed data placement tasks. The adaptive tuning includes dynamically setting data transfer parameters and controlling utilization of available network capacity. Our research aims to provide a middleware that alleviates the data bottleneck in high performance computing systems.
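
    To make the advance-reservation idea concrete, the sketch below computes the earliest completion time of a transfer over a time-slotted link whose residual bandwidth varies per slot. It is a simplified illustration under assumed slot durations and capacities, not the dissertation's polynomial-time algorithm.

        def earliest_completion(slots, data_volume):
            """slots: list of (duration_seconds, available_bandwidth) per time slot,
            in chronological order starting at t=0.
            Returns the earliest time at which data_volume can be fully transferred,
            assuming the transfer may use whatever residual bandwidth each slot offers."""
            t = 0.0
            remaining = float(data_volume)
            for duration, bandwidth in slots:
                capacity = duration * bandwidth          # volume deliverable in this slot
                if capacity >= remaining:
                    # finishes partway through this slot
                    return t + (remaining / bandwidth if bandwidth > 0 else 0.0)
                remaining -= capacity
                t += duration
            return None  # cannot finish within the known reservation horizon

        # Example: 10 Gb to move; residual bandwidth is 0.5 Gb/s for 10 s, then 2 Gb/s.
        print(earliest_completion([(10, 0.5), (30, 2.0)], data_volume=10))  # -> 12.5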

    Methods and design issues for next generation network-aware applications

    Networks are becoming an essential component of modern cyberinfrastructure, and this work describes methods of designing distributed applications for high-speed networks to improve application scalability, performance and capabilities. As the amount of data generated by scientific applications continues to grow, applications should be designed to use parallel, distributed resources and high-speed networks in order to handle and process it. For scalable application design, developers should move away from the current component-based approach and instead implement an integrated, non-layered architecture where applications can use specialized low-level interfaces. The main focus of this research is on interactive, collaborative visualization of large datasets. This work describes how a visualization application can be improved through using distributed resources and high-speed network links to interactively visualize tens of gigabytes of data and handle terabyte datasets while maintaining high quality. The application supports interactive frame rates, high resolution, collaborative visualization and sustains remote I/O bandwidths of several Gbps (up to 30 times faster than local I/O). Motivated by the distributed visualization application, this work also researches remote data access systems. Because wide-area networks may have a high latency, the remote I/O system uses an architecture that effectively hides latency. Five remote data access architectures are analyzed, and the results show that an architecture combining bulk and pipeline processing is the best solution for high-throughput remote data access. The resulting system, also supporting high-speed transport protocols and configurable remote operations, is up to 400 times faster than a comparable existing remote data access system. Transport protocols are compared to understand which protocol can best utilize high-speed network connections, concluding that a rate-based protocol is the best solution, being 8 times faster than standard TCP. An HD-based remote teaching application experiment is conducted, illustrating the potential of network-aware applications in a production environment. Future research areas are presented, with emphasis on network-aware optimization, execution and deployment scenarios.
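
    The finding that combining bulk and pipeline processing gives the best remote-data-access throughput can be illustrated with a small pipelining sketch: several block requests are kept in flight at once so that network latency is overlapped with consumption of the data. The fetch_block function and block layout below are hypothetical placeholders, not the system described in this work.

        from concurrent.futures import ThreadPoolExecutor

        def fetch_block(block_id):
            """Hypothetical remote read of one data block (e.g., over HTTP or a custom
            protocol); stands in for the real remote I/O call."""
            return b""  # placeholder payload

        def pipelined_read(block_ids, depth=8):
            """Keep up to `depth` block requests in flight to hide wide-area latency.
            Results are yielded in order so the consumer sees a sequential stream."""
            with ThreadPoolExecutor(max_workers=depth) as pool:
                futures = [pool.submit(fetch_block, b) for b in block_ids]
                for fut in futures:
                    yield fut.result()   # blocks only if this block is not yet ready

        # Example: stream 1,000 blocks while up to 8 requests are outstanding at a time.
        for block in pipelined_read(range(1000), depth=8):
            pass  # process/visualize the block while later blocks are still arriving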

    A routing architecture for scheduled dynamic circuit services

    Taking Saratoga from Space-Based Ground Sensors to Ground-Based Space Sensors

    The Saratoga transfer protocol was developed by Surrey Satellite Technology Ltd (SSTL) for its Disaster Monitoring Constellation (DMC) satellites. In over seven years of operation, Saratoga has provided efficient delivery of remote-sensing Earth observation imagery, across private wireless links, from these seven low-orbit satellites to ground stations, using the Internet Protocol (IP). Saratoga is designed to cope with high bandwidth-delay products, constrained acknowledgement channels, and high loss while streaming or delivering extremely large files. An implementation of this protocol has now been developed at the Australian Commonwealth Scientific and Industrial Research Organisation (CSIRO) for wider use and testing. This is intended to prototype delivery of data across dedicated astronomy radio telescope networks on the ground, where networked sensors in Very Long Baseline Interferometer (VLBI) instruments generate large amounts of data for processing and can send that data across private IP- and Ethernet-based links at very high rates. We describe this new Saratoga implementation, its features and focus on high throughput and link utilization, and lessons learned in developing this protocol for sensor-network applications. (Preprint of a peer-reviewed conference paper accepted for and to appear at the IEEE Aerospace 2011 conference, Big Sky, Montana, March 2011.)
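
    To give a sense of why high bandwidth-delay products matter for a protocol like Saratoga, the arithmetic below computes how much data must be in flight to keep a link fully utilized. The link speeds and round-trip times are illustrative numbers, not measurements from the DMC satellites or the VLBI networks.

        def bytes_in_flight(link_rate_bps, rtt_seconds):
            """Bandwidth-delay product: data that must be unacknowledged ('in flight')
            to keep the link fully utilized."""
            return link_rate_bps * rtt_seconds / 8  # bits -> bytes

        # Illustrative low-Earth-orbit downlink: 80 Mb/s with a 40 ms round trip.
        print(bytes_in_flight(80e6, 0.040))      # 400,000 bytes (~400 kB)

        # Illustrative ground-based VLBI link: 10 Gb/s across a continent, 60 ms RTT.
        print(bytes_in_flight(10e9, 0.060))      # 75,000,000 bytes (~75 MB)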

    Performance Optimization and Dynamics Control for Large-scale Data Transfer in Wide-area Networks

    Transport control plays an important role in the performance of large-scale scientific and media streaming applications involving transfer of large data sets, media streaming, online computational steering, interactive visualization, and remote instrument control. In general, these applications have two distinctive classes of transport requirements: large-scale scientific applications require high bandwidths to move bulk data across wide-area networks, while media streaming applications require stable bandwidths to ensure smooth media playback. Unfortunately, the widely deployed Transmission Control Protocol is inadequate for such tasks due to its performance limitations. The purpose of this dissertation is to conduct a rigorous analytical study of the design and performance of transport solutions, and to systematically develop an integrated transport solution that overcomes the limitations of current transport methods. One of the primary challenges is to explore and compose a set of feasible route options with multiple constraints. Another challenge arises from the randomness inherent in wide-area networks, particularly the Internet. This randomness must be explicitly accounted for to achieve both goodput maximization and stabilization over the constructed routes, by suitably adjusting the source rate in response to both network and host dynamics. The superior and robust performance of the proposed transport solution is extensively evaluated in a simulated environment and further verified through real-life implementations and deployments over both Internet and dedicated connections, under disparate network conditions, in comparison with existing transport methods.
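
    Since the abstract centers on adjusting the source rate in response to network and host dynamics, the sketch below shows one generic way to do this: a stochastic-approximation style update that nudges the sending rate toward a target goodput using noisy feedback. The update rule, gain schedule, and measurement function are illustrative assumptions, not the dissertation's controller.

        import random

        def measure_goodput(rate):
            """Hypothetical noisy goodput measurement at a given sending rate (Mb/s).
            Stands in for feedback from the receiver over a real connection."""
            bottleneck = 800.0                     # assumed effective capacity (Mb/s)
            return min(rate, bottleneck) * random.uniform(0.9, 1.0)

        def stabilize_rate(initial_rate, target_goodput, steps=200):
            """Adjust the source rate so measured goodput tracks the target.
            Diminishing gains damp the reaction to random fluctuations over time."""
            rate = initial_rate
            for k in range(1, steps + 1):
                gain = 1.0 / k                     # decreasing step size
                error = target_goodput - measure_goodput(rate)
                rate = max(1.0, rate + gain * error)
            return rate

        print(stabilize_rate(initial_rate=100.0, target_goodput=600.0))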

    The future of networking is the future of Big Data

    Scientific domains such as Climate Science, High Energy Particle Physics (HEP), Genomics, Biology, and many others are increasingly moving towards data-oriented workflows, where each of these communities generates, stores and uses massive datasets that reach into terabytes and petabytes and are projected soon to reach exabytes. These communities are also increasingly moving towards a global collaborative model where scientists routinely exchange a significant amount of data. The sheer volume of data, and the complexities associated with maintaining, transferring, and using it, continue to push the limits of current technologies in multiple dimensions: storage, analysis, networking, and security. This thesis tackles the networking aspect of big-data science. Networking is the glue that binds all the components of modern scientific workflows, and these communities are becoming increasingly dependent on high-speed, highly reliable networks. The network, as the common layer across big-science communities, provides an ideal place for implementing common services. Big-science applications also need to work closely with the network to ensure optimal usage of resources and intelligent routing of requests and data. Finally, as more communities move towards data-intensive, connected workflows, adopting a service model in which the network provides some of the common services reduces not only application complexity but also the need for duplicate implementations. Named Data Networking (NDN) is a new network architecture whose service model aligns better with the needs of these data-oriented applications. NDN's name-based paradigm makes it easier to provide intelligent features at the network layer rather than at the application layer. This thesis shows that NDN can push several standard features to the network. This work is the first attempt to apply NDN in the context of large scientific data; in the process, the thesis touches upon scientific data naming, name discovery, real-world deployment of NDN for scientific data, feasibility studies, and the design of in-network protocols for big-data science.
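
    As an illustration of the name-based paradigm the thesis builds on, the sketch below maps scientific-dataset metadata to a hierarchical, NDN-style name. The name components and the /ndn/climate prefix are hypothetical examples of a naming convention, not the scheme adopted in the thesis.

        def build_ndn_name(prefix, metadata, component_order):
            """Build a hierarchical NDN-style name such as
            /ndn/climate/<model>/<experiment>/<variable>/<time-range>
            from a metadata dictionary. Components are path-like segments."""
            components = [str(metadata[key]) for key in component_order]
            return "/".join([prefix.rstrip("/")] + components)

        # Hypothetical climate-model output record
        record = {
            "model": "CESM1",
            "experiment": "rcp85",
            "variable": "tasmax",
            "time_range": "2020-2030",
        }
        name = build_ndn_name("/ndn/climate", record,
                              ["model", "experiment", "variable", "time_range"])
        print(name)   # /ndn/climate/CESM1/rcp85/tasmax/2020-2030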