    Development of a completely decentralized control system for modular continuous conveyors

    To increase the application flexibility of continuous conveyor systems, this paper introduces a completely decentralized control system for a modular conveyor system that can convey transport units without any centralized infrastructure. Based on existing methods of decentralized data transfer in IT networks, the individual modules operate autonomously and, after being positioned into the required topology, independently connect together to form a functioning conveyor system.
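
    The connect-on-contact behavior described above is essentially decentralized neighbor discovery. As a hedged illustration only, and not the paper's actual protocol, the Go sketch below shows how autonomous modules might announce themselves over UDP broadcast and record the neighbors they hear; the module IDs, port number, and message format are all assumptions.

```go
// Hypothetical sketch of decentralized neighbor discovery for conveyor
// modules: each module periodically broadcasts its identity over UDP and
// records the peers it hears, so a chain of modules can link up without
// any central controller. IDs, port, and message format are assumptions.
package main

import (
	"fmt"
	"net"
	"time"
)

const discoveryPort = 9999 // assumed well-known port shared by all modules

// announce broadcasts this module's ID once per second.
func announce(id string) {
	addr := &net.UDPAddr{IP: net.IPv4bcast, Port: discoveryPort}
	conn, err := net.DialUDP("udp4", nil, addr)
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	for {
		conn.Write([]byte(id))
		time.Sleep(time.Second)
	}
}

// listen collects the IDs of neighboring modules as they announce themselves.
func listen(self string, neighbors chan<- string) {
	conn, err := net.ListenUDP("udp4", &net.UDPAddr{Port: discoveryPort})
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	buf := make([]byte, 64)
	seen := map[string]bool{}
	for {
		n, _, err := conn.ReadFromUDP(buf)
		if err != nil {
			continue
		}
		id := string(buf[:n])
		if id != self && !seen[id] {
			seen[id] = true
			neighbors <- id
		}
	}
}

func main() {
	self := "module-A" // hypothetical module identity
	neighbors := make(chan string)
	go announce(self)
	go listen(self, neighbors)
	for id := range neighbors {
		fmt.Printf("%s linked with neighbor %s\n", self, id)
	}
}
```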

    High performance deep packet inspection on multi-core platform

    Deep packet inspection (DPI) provides the ability to perform quality of service (QoS) and intrusion detection on network packets. But since the explosive growth of the Internet, performance and scalability issues have been raised due to the gap between network and end-system speeds. This article describes what a desirable DPI system with multi-gigabit throughput and good scalability should look like, exploiting parallelism on the network interface card, in the network stack, and in user applications. Connection-based parallelism, affinity-based scheduling, and lock-free data structures are the main techniques introduced to alleviate the performance and scalability issues. A common DPI application, L7-Filter, is used as an example to illustrate application-level parallelism.
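
    The key technique here, connection-based parallelism, can be made concrete with a short sketch. The Go program below hashes each packet's flow tuple to pick a worker, so all packets of one connection reach the same worker and per-flow state needs no locks; the Packet type and the pattern check are illustrative assumptions, not L7-Filter's actual matching engine.

```go
// Connection-based parallelism sketch: dispatch packets to workers by
// hashing the flow tuple, so every packet of a given connection lands on
// the same worker and per-flow DPI state needs no locks.
package main

import (
	"fmt"
	"hash/fnv"
	"strings"
	"sync"
)

// Packet is a simplified stand-in for a captured network packet.
type Packet struct {
	SrcIP, DstIP     string
	SrcPort, DstPort uint16
	Payload          string
}

// flowHash maps the flow tuple (protocol omitted for brevity) to a worker index.
func flowHash(p Packet, workers int) int {
	h := fnv.New32a()
	fmt.Fprintf(h, "%s:%d-%s:%d", p.SrcIP, p.SrcPort, p.DstIP, p.DstPort)
	return int(h.Sum32()) % workers
}

// inspect is a stand-in for real DPI pattern matching on a worker's queue.
func inspect(id int, in <-chan Packet, wg *sync.WaitGroup) {
	defer wg.Done()
	for p := range in {
		if strings.Contains(p.Payload, "HTTP/1.1") {
			fmt.Printf("worker %d: flow %s:%d looks like HTTP\n", id, p.SrcIP, p.SrcPort)
		}
	}
}

func main() {
	const workers = 4
	queues := make([]chan Packet, workers)
	var wg sync.WaitGroup
	for i := range queues {
		queues[i] = make(chan Packet, 128)
		wg.Add(1)
		go inspect(i, queues[i], &wg)
	}
	packets := []Packet{
		{"10.0.0.1", "10.0.0.2", 34567, 80, "GET / HTTP/1.1"},
		{"10.0.0.3", "10.0.0.2", 40001, 80, "GET /x HTTP/1.1"},
	}
	for _, p := range packets {
		queues[flowHash(p, workers)] <- p // same flow always -> same worker
	}
	for _, q := range queues {
		close(q)
	}
	wg.Wait()
}
```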

    Achieving High Throughput for Data Transfer over ATM Networks

    File-transfer rates for ftp are often reported to be relatively slow compared to the raw bandwidth available in emerging gigabit networks. While a major bottleneck is disk I/O, protocol issues impact performance as well. Ftp was developed and optimized for use over the TCP/IP protocol stack of the Internet. However, TCP has been shown to run inefficiently over ATM. In an effort to maximize network throughput, data-transfer protocols can be developed to run over UDP or directly over IP, rather than over TCP. If error-free transmission is required, techniques for achieving reliable transmission can be included as part of the transfer protocol. However, selected image-processing applications can tolerate a low level of errors in images that are transmitted over a network. In this paper we report on experimental work to develop a high-throughput protocol for unreliable data transfer over ATM networks. We attempt to maximize throughput by keeping the communications pipe full while still keeping packet loss under five percent. We use the Bay Area Gigabit Network Testbed as our experimental platform.
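
    To make the "keep the pipe full" idea concrete, the following Go sketch streams sequence-numbered UDP datagrams without per-packet acknowledgments and lets the receiver estimate loss; the packet size, count, and loopback addresses are assumptions, and the paper's actual ATM-based protocol is not reproduced here.

```go
// Unreliable high-throughput transfer sketch: the sender streams
// sequence-numbered UDP datagrams without per-packet acknowledgments
// (keeping the pipe full), and the receiver counts arrivals to estimate
// the loss rate. Sizes and addresses are illustrative assumptions.
package main

import (
	"encoding/binary"
	"fmt"
	"net"
	"time"
)

const (
	payloadSize = 1400 // below typical MTU to avoid fragmentation
	numPackets  = 1000
)

func sender(addr string) {
	conn, err := net.Dial("udp", addr)
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	buf := make([]byte, payloadSize)
	for seq := uint32(0); seq < numPackets; seq++ {
		binary.BigEndian.PutUint32(buf, seq) // sequence number in header
		conn.Write(buf)
		// A real sender would pace packets to a target rate rather than
		// blasting; pacing is what keeps loss under the tolerated threshold.
	}
}

func receiver(conn *net.UDPConn, done chan<- struct{}) {
	buf := make([]byte, payloadSize)
	received := 0
	conn.SetReadDeadline(time.Now().Add(2 * time.Second))
	for {
		n, _, err := conn.ReadFromUDP(buf)
		if err != nil || n < 4 {
			break // timeout ends the measurement window
		}
		received++
	}
	loss := 100 * float64(numPackets-received) / numPackets
	fmt.Printf("received %d/%d packets (%.1f%% loss)\n", received, numPackets, loss)
	done <- struct{}{}
}

func main() {
	conn, err := net.ListenUDP("udp", &net.UDPAddr{IP: net.IPv4(127, 0, 0, 1)})
	if err != nil {
		panic(err)
	}
	done := make(chan struct{})
	go receiver(conn, done)
	sender(conn.LocalAddr().String())
	<-done
}
```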

    Best effort measurement based congestion control


    Parallel network protocol stacks using replication

    Computing applications demand good performance from networking systems. This includes high-bandwidth communication using protocols with sophisticated features such as ordering, reliability, and congestion control. Much of this protocol processing occurs in software, both on desktop systems and servers. Multiprocessing is a requirement on today's computer architectures because their designs no longer allow for increased processor frequencies. At the same time, network bandwidths continue to increase. In order to meet application demand for throughput, protocol processing must be parallelized to leverage the full capabilities of multiprocessor or multi-core systems. Existing parallelization strategies have performance difficulties that limit their scalability and their applicability to single, high-speed data streams. This dissertation introduces a new approach to parallelizing network protocol processing without the need for locks or global state. Rather than maintaining global state, each processor maintains its own copy of protocol state. Updates are therefore local and do not require fine-grained locks or explicit synchronization. State-management work is replicated, but logically independent work is parallelized. Along with the approach, this dissertation describes Dominoes, a new framework for implementing replicated processing systems. Dominoes organizes state information into Domains and communication into Channels. These two abstractions provide a powerful but flexible model for testing the replication approach. This dissertation uses Dominoes to build a replicated network protocol system. The performance of common protocols, such as TCP/IP, is increased by multiprocessing single connections. On commodity hardware, throughput increases by 15-300% depending on the type of communication. Most gains are possible when communicating with unmodified peer implementations, such as Linux. In addition to the quantitative results, protocol behavior is studied as it relates to the replication approach.
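
    The core replication idea, each processor owning a private copy of protocol state so updates need no locks, can be sketched briefly. The Go program below fans one event stream out to several replicas that each update their own state map; the event and state types are illustrative assumptions and not the Dominoes API.

```go
// Replicated-state sketch: instead of sharing one locked protocol state,
// each worker keeps its own replica of per-connection state and consumes
// the same event stream on a private channel, so updates are local and
// need no fine-grained locks or explicit synchronization.
package main

import (
	"fmt"
	"sync"
)

// Event is a simplified protocol event (e.g., "N bytes acked on conn C").
type Event struct {
	Conn  int
	Bytes int
}

// replica applies every event to its private copy of connection state.
// State-management work is duplicated across replicas, but because each
// replica owns its map outright, no shared-memory locking is needed.
func replica(id int, events <-chan Event, wg *sync.WaitGroup) {
	defer wg.Done()
	state := map[int]int{} // conn -> total bytes, private to this replica
	for ev := range events {
		state[ev.Conn] += ev.Bytes
	}
	fmt.Printf("replica %d: state %v\n", id, state)
}

func main() {
	const replicas = 3
	chans := make([]chan Event, replicas)
	var wg sync.WaitGroup
	for i := range chans {
		chans[i] = make(chan Event, 16)
		wg.Add(1)
		go replica(i, chans[i], &wg)
	}
	// The event source fans every state-changing event out to all replicas
	// (replicated state work); logically independent work such as checksums
	// or payload processing could instead be split among them.
	for _, ev := range []Event{{1, 1460}, {2, 512}, {1, 1460}} {
		for _, c := range chans {
			c <- ev
		}
	}
	for _, c := range chans {
		close(c)
	}
	wg.Wait()
}
```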