9,058 research outputs found

    Performance evaluation of an efficient counter-based scheme for mobile ad hoc networks based on realistic mobility model

    Get PDF
    Flooding is the simplest and most commonly used mechanism for broadcasting in mobile ad hoc networks (MANETs). Despite its simplicity, it can result in a high level of redundant retransmission, contention and collision in the network, a phenomenon referred to as the broadcast storm problem. Several probabilistic broadcast schemes have been proposed to mitigate this problem inherent in flooding. Recently, we proposed a hybrid-based scheme, one such probabilistic scheme, which combines the advantages of pure probabilistic and counter-based schemes to yield a significant performance improvement. Despite the considerable number of proposed broadcast schemes, the majority have been evaluated using the random waypoint model. In this paper, we evaluate the performance of our broadcast scheme using a community-based mobility model grounded in social network theory and compare it against the widely used random waypoint mobility model. Simulation results show that using an unrealistic movement pattern does not truly reflect the actual performance of the scheme in terms of saved rebroadcast, reachability and end-to-end delay.
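    The rebroadcast decision at the heart of counter-based schemes is simple to sketch: count copies of a packet overheard during a random assessment delay (RAD) and suppress the rebroadcast if enough copies were heard. The sketch below is a minimal, generic illustration of that idea combined with a probabilistic component; the threshold, RAD bound and probability are placeholder values, not the parameters of the paper's hybrid scheme.

```python
import random

# Illustrative parameters -- not the values used in the paper's hybrid scheme.
COUNTER_THRESHOLD = 4          # suppress rebroadcast if this many copies are heard
RAD_MAX = 0.01                 # random assessment delay upper bound (seconds)
REBROADCAST_PROBABILITY = 0.7  # probabilistic component of a hybrid scheme

class CounterBasedNode:
    """Per-packet rebroadcast decision for a counter-based broadcast scheme."""

    def __init__(self):
        self.counters = {}  # packet_id -> number of copies heard so far

    def on_receive(self, packet_id):
        """Called for every copy of a broadcast packet heard on the channel."""
        if packet_id not in self.counters:
            # First copy: start counting and schedule a decision after the RAD.
            self.counters[packet_id] = 1
            return ("schedule_decision", random.uniform(0, RAD_MAX))
        # Duplicate copy heard during the RAD: just increment the counter.
        self.counters[packet_id] += 1
        return ("wait", None)

    def on_rad_expiry(self, packet_id):
        """Decision point: rebroadcast only if few copies were overheard."""
        copies_heard = self.counters.pop(packet_id, 0)
        if copies_heard >= COUNTER_THRESHOLD:
            return False  # enough neighbours already covered this packet
        # Hybrid flavour: even below the threshold, rebroadcast probabilistically.
        return random.random() < REBROADCAST_PROBABILITY
```

    A simulator would call on_receive for each copy heard and on_rad_expiry when the scheduled delay fires; the saved-rebroadcast metric is then the fraction of receiving nodes for which the decision comes back False.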

    Design of an integrated airframe/propulsion control system architecture

    Get PDF
    The design of an integrated airframe/propulsion control system architecture is described. The design is based on a prevalidation methodology that considers both reliability and performance. A detailed account is given of the testing of a subset of the architecture, and the report concludes with general observations on applying the methodology to the architecture.

    On the Performance of Copying Large Files Across a Contention-Based Network

    Full text link
    Analytical and simulation models of interconnected local area networks, because of the large scale involved, are often constrained to represent only the most ideal conditions for tractability's sake. Consequently, many of the important causes of network delay are not accounted for. In this study, experimental evidence is presented to show how delay time in local area networks is significantly affected by hardware limitations in the connected workstations, software overhead, and network contention. The approach is a controlled experiment with two VAX workstations over an Ethernet. We investigate the network delays for large file transfers, taking into account the VAX workstation disk transfer limitations; generalized file transfer software such as NFS, FTP, and rcp; and the effect of contention on this simple network introduced by substantial workload from competing workstations. A comparison is made between the experimental data and a network modeling tool, and the limitations of the tool are explained. Insights from these experiments have increased our understanding of how more complex networks are likely to perform under heavy workloads.
    http://deepblue.lib.umich.edu/bitstream/2027.42/107873/1/citi-tr-89-3.pd
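    The core measurement in such an experiment reduces to timing bulk copies and converting elapsed time to effective throughput, with the spread across repetitions exposing contention. The sketch below is a minimal stand-in: it copies a local file rather than invoking NFS, FTP or rcp, and the function and repetition count are illustrative assumptions.

```python
import os
import shutil
import statistics
import time

def measure_copy(src_path, dst_path, repetitions=5):
    """Time repeated bulk copies and report effective throughput in Mbit/s."""
    size_bits = os.path.getsize(src_path) * 8
    throughputs = []
    for _ in range(repetitions):
        start = time.perf_counter()
        shutil.copyfile(src_path, dst_path)  # stand-in for an NFS/FTP/rcp transfer
        elapsed = time.perf_counter() - start
        throughputs.append(size_bits / elapsed / 1e6)
    # The mean hides contention effects; competing workload shows up in the spread.
    return statistics.mean(throughputs), statistics.stdev(throughputs)
```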

    Modeling and measurement of fault-tolerant multiprocessors

    Get PDF
    The workload effects on computer performance are addressed first for a highly reliable unibus multiprocessor used in real-time control. As an approach to studying these effects, a modified Stochastic Petri Net (SPN) is used to describe the synchronous operation of the multiprocessor system. From this model the vital components affecting performance can be determined. However, because of the complexity of solving the modified SPN, a simpler model, a closed priority queuing network, is constructed that represents the same critical aspects. Using this model for a specific application requires partitioning the workload into job classes. It is shown that the steady-state solution of the queuing model directly produces useful results. The use of this model in evaluating an existing system, the Fault Tolerant Multiprocessor (FTMP) at the NASA AIRLAB, is outlined with some experimental results. Also addressed is the technique of measuring fault latency, an important microscopic system parameter. Most related work has assumed zero or negligible fault latency and then performed approximate analyses. To eliminate this deficiency, a new methodology for indirectly measuring fault latency is presented.
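    For a closed queueing network of this kind, the steady-state throughput and queue lengths can be obtained directly with exact Mean Value Analysis. The sketch below is a generic single-class MVA, not the multi-class priority model of the paper, and the service demands are placeholders chosen only to make the example runnable.

```python
def mva(service_demands, num_jobs):
    """Exact single-class Mean Value Analysis for a closed queueing network.

    service_demands: per-station mean service demand of one job (seconds).
    num_jobs: multiprogramming level (number of circulating jobs).
    Returns (throughput, per-station mean queue lengths).
    """
    queue_lengths = [0.0] * len(service_demands)
    throughput = 0.0
    for n in range(1, num_jobs + 1):
        # Residence time per station: own service plus queueing behind jobs already there.
        residence = [d * (1.0 + q) for d, q in zip(service_demands, queue_lengths)]
        throughput = n / sum(residence)                      # Little's law on the whole network
        queue_lengths = [throughput * r for r in residence]  # Little's law per station
    return throughput, queue_lengths

# Placeholder demands (CPU, bus, I/O) in seconds per job -- purely illustrative.
print(mva([0.010, 0.004, 0.020], num_jobs=8))
```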

    Scalable parallel communications

    Get PDF
    Coarse-grain parallelism in networking (that is, the use of multiple protocol processors running replicated software sending over several physical channels) can be used to provide gigabit communications for a single application. Since parallel network performance is highly dependent on real issues such as hardware properties (e.g., memory speeds and cache hit rates), operating system overhead (e.g., interrupt handling), and protocol performance (e.g., the effect of timeouts), we have performed detailed simulation studies of both a bus-based multiprocessor workstation node (based on the Sun Galaxy MP multiprocessor) and a distributed-memory parallel computer node (based on the Touchstone DELTA) to evaluate the behavior of coarse-grain parallelism. Our results indicate: (1) coarse-grain parallelism can deliver multiple 100 Mbps with currently available hardware platforms and existing networking protocols (such as Transmission Control Protocol/Internet Protocol (TCP/IP) and parallel Fiber Distributed Data Interface (FDDI) rings); (2) scale-up is near linear in n, the number of protocol processors and channels (for small n and up to a few hundred Mbps); and (3) since these results are based on existing hardware without specialized devices (except perhaps for some simple modifications of the FDDI boards), this is a low-cost solution to providing multiple 100 Mbps on current machines. In addition, from both the performance analysis and the properties of these architectures, we conclude: (1) multiple processors providing identical services and the use of space-division multiplexing for the physical channels can provide better reliability than monolithic approaches (it also provides graceful degradation and low-cost load balancing); (2) coarse-grain parallelism supports running several transport protocols in parallel to provide different types of service (for example, one TCP handles small messages for many users, while other TCPs running in parallel provide high-bandwidth service to a single application); and (3) coarse-grain parallelism will be able to incorporate many future improvements from related work (e.g., reduced data movement, fast TCP, fine-grain parallelism), also with near-linear speed-ups.
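    Operationally, the scale-up claim amounts to striping one application's data across n protocol processors and channels. The sketch below shows only that striping step, with placeholder per-channel send callables standing in for the replicated TCP/IP stacks; it is a minimal illustration, not the paper's implementation.

```python
from concurrent.futures import ThreadPoolExecutor

def stripe(data, num_channels):
    """Split one application message into contiguous stripes, one per channel."""
    stripe_size = -(-len(data) // num_channels)  # ceiling division
    return [data[i:i + stripe_size] for i in range(0, len(data), stripe_size)]

def send_parallel(data, channel_senders):
    """Hand each stripe to its own protocol processor / channel in parallel.

    channel_senders: one callable per channel, e.g. a wrapper around a TCP
    connection bound to one FDDI ring (placeholders, not the paper's code).
    """
    stripes = stripe(data, len(channel_senders))
    with ThreadPoolExecutor(max_workers=len(channel_senders)) as pool:
        futures = [pool.submit(send, chunk)
                   for send, chunk in zip(channel_senders, stripes)]
        return [f.result() for f in futures]
```

    Near-linear scale-up then depends on each channel's protocol processing remaining independent of the others, which is what the replicated-software, space-division-multiplexed design aims at.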

    Performance studies of file system design choices for two concurrent processing paradigms

    Get PDF

    NetMod: A Design Tool for Large-Scale Heterogeneous Campus Networks

    Full text link
    The Network Modeling Tool (NetMod) uses simple analytical models to provide the designers of large interconnected local area networks with an in-depth analysis of the potential performance of these systems. The tool can be used in a university, industrial, or governmental campus networking environment consisting of thousands of computer sites. NetMod is implemented with a combination of the easy-to-use Macintosh software packages HyperCard and Excel. The objectives of NetMod, the analytical models, and the user interface are described in detail, along with the tool's application to an actual campus-wide network.
    http://deepblue.lib.umich.edu/bitstream/2027.42/107971/1/citi-tr-90-1.pd
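    Analytical models of this kind typically approximate each network component with a simple queueing formula. The sketch below computes utilization and mean delay for a single LAN segment using an M/M/1 approximation; treating a segment this way is our own assumption for illustration, not NetMod's documented internals.

```python
def segment_delay(arrival_rate_pps, mean_frame_bits, capacity_bps):
    """Utilization and mean delay of one LAN segment under an M/M/1 approximation."""
    service_rate = capacity_bps / mean_frame_bits  # frames/second the segment can carry
    utilization = arrival_rate_pps / service_rate
    if utilization >= 1.0:
        raise ValueError("offered load exceeds segment capacity")
    mean_delay = 1.0 / (service_rate - arrival_rate_pps)  # seconds per frame
    return utilization, mean_delay

# Example: 400 frames/s of 8,000-bit frames on a 10 Mbit/s Ethernet segment.
print(segment_delay(400, 8_000, 10_000_000))
```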

    Spacelab system analysis: A study of the Marshall Avionics System Testbed (MAST)

    Get PDF
    An analysis of the Marshall Avionics System Testbed (MAST) communications requirements is presented. The average offered load for typical nodes is estimated, and suitable local area networks are determined.
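    Estimating the average offered load per node comes down to aggregating message rates and sizes across the node's traffic flows and comparing the total against candidate LAN capacities. The sketch below is a minimal illustration; the traffic mix is invented for the example, not taken from the MAST study.

```python
def offered_load_bps(flows):
    """Sum per-flow load: each flow is (messages_per_second, bits_per_message)."""
    return sum(rate * size for rate, size in flows)

# Hypothetical per-node traffic mix: telemetry, commands, bulk file transfer.
node_flows = [(200, 1_024), (50, 512), (2, 64_000)]
load = offered_load_bps(node_flows)
print(f"average offered load: {load / 1e6:.3f} Mbit/s")
# Comparing the summed load (plus headroom) against candidate LAN capacities
# is what drives the choice of a suitable local area network.
```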