
    Direct sequence spread spectrum techniques in local area networks

    This thesis describes the application of a direct sequence spread spectrum modulation scheme to the physical layer of a local area network, subsequently named the SS-LAN. Most present-day LANs employ one form or another of time division multiplexing, which performs well in many systems but which is limited by its very nature in real-time, time-critical and time-demanding applications. The use of spread spectrum multiplexing removes these limitations by providing a simultaneous multiple-user access capability to the channel, which permits all nodes to utilise the channel independently of the activity currently supported by that channel. The theory of spectral spreading is a consequence of the Shannon channel capacity, whereby the channel capacity may be maintained by trading signal-to-noise ratio for bandwidth. The increased bandwidth provides an increased signal dimensionality which can be utilised to provide noise immunity and/or a simultaneous multiple-user environment: the effects of the simultaneous users can be considered as noise from the point of view of any particular constituent signal. The use of code sequences at the physical layer of a LAN permits a wide range of mapping alternatives which can be selected according to the particular application. Each of the mapping techniques possesses the general spread spectrum properties, but certain properties can be emphasised at the expense of others. The work has involved the description of the properties of the SS-LAN coupled with the development of the mapping techniques for use in the distribution of the code sequences. This has been followed by an appraisal of a set of code sequences, which has resulted in the definition of the ideal code properties and the selection of code families for particular types of applications. The top-level design specification for the hardware required in the construction of the SS-LAN has also been presented, and this has provided the basis for a simplified and idealised theoretical analysis of the performance parameters of the SS-LAN. A positive set of conclusions for the range of these parameters has been obtained, and these have been further analysed by the use of an SS-LAN computer simulation program. This program can simulate any configuration of the SS-LAN, and the results it has produced have been compared with those of the analysis and found to be in agreement. A tool for the further analysis of complex SS-LAN configurations has therefore been developed, and this will form the basis for further work.
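
    As a minimal worked form of the capacity trade described above (the notation and the equal-power multi-user approximation are illustrative additions, not taken from the thesis):

        % Shannon capacity: the same capacity C can be kept at a lower S/N,
        % provided the bandwidth B is increased (spectral spreading).
        C = B \log_2\!\left(1 + \frac{S}{N}\right)
        % Processing gain of a direct-sequence system
        % (spread bandwidth over data bandwidth, i.e. chip rate over bit rate):
        G_p = \frac{B_{ss}}{B_{data}} = \frac{R_c}{R_b}
        % With K equal-power simultaneous users, each signal sees the other K-1
        % as noise, so the effective post-despreading signal-to-noise ratio is
        \mathrm{SNR}_{\mathrm{eff}} \approx \frac{G_p}{K-1}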

    Deep Space Network information system architecture study

    The purpose of this article is to describe an architecture for the Deep Space Network (DSN) information system in the years 2000-2010 and to provide guidelines for its evolution during the 1990s. The study scope is defined to be from the front-end areas at the antennas to the end users (spacecraft teams, principal investigators, archival storage systems, and non-NASA partners). The architectural vision provides guidance for major DSN implementation efforts during the next decade. A strong motivation for the study is an expected dramatic improvement in information-systems technologies, such as the following: computer processing, automation technology (including knowledge-based systems), networking and data transport, software and hardware engineering, and human-interface technology. The proposed Ground Information System has the following major features: unified architecture from the front-end area to the end user; open-systems standards to achieve interoperability; DSN production of level 0 data; delivery of level 0 data from the Deep Space Communications Complex, if desired; dedicated telemetry processors for each receiver; security against unauthorized access and errors; and highly automated monitor and control.

    Design and Implementation of a Distributed Middleware for Parallel Execution of Legacy Enterprise Applications

    A typical enterprise uses a local area network of computers to perform its business. During off-working hours, the computational capacity of these networked computers is underused or unused. In order to utilize this capacity, an application has to be recoded to exploit the concurrency inherent in its computation, which is clearly not possible for legacy applications for which no source code is available. This thesis presents the design and implementation of a distributed middleware which can automatically execute a legacy application on multiple networked computers by parallelizing it. The middleware runs multiple copies of the binary executable code in parallel on different hosts in the network. It wraps the binary executable code of the legacy application in order to capture the kernel-level data access system calls and perform them over multiple computers in a safe and conflict-free manner. The middleware also incorporates a dynamic scheduling technique to execute the target application in minimum time by scavenging the available CPU cycles of the hosts in the network. The scheduler accommodates changes in host CPU availability over time and reschedules the replicas performing the computation so as to minimize the execution time. A prototype implementation of this middleware has been developed as a proof of concept of the design. The implementation has been evaluated with a few typical case studies, and the test results confirm that the middleware works as expected.
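
    A minimal Python sketch of the kind of availability-driven rescheduling described above; the Host and Replica classes, the host names, and the greedy one-replica-per-host rule are illustrative assumptions rather than the thesis's actual scheduler:

        from dataclasses import dataclass
        from typing import Optional

        @dataclass
        class Host:
            name: str
            available_cpu: float          # fraction of the CPU currently idle, 0.0 - 1.0

        @dataclass
        class Replica:
            replica_id: int
            host: Optional[Host] = None   # where this copy of the binary runs

        def reschedule(replicas: list[Replica], hosts: list[Host]) -> None:
            """Greedily place each replica on the host with the most spare CPU,
            one replica per host, so the slowest replica finishes as early as possible."""
            ranked = sorted(hosts, key=lambda h: h.available_cpu, reverse=True)
            for replica, host in zip(replicas, ranked):
                replica.host = host

        hosts = [Host("pc-01", 0.9), Host("pc-02", 0.2), Host("pc-03", 0.6)]
        replicas = [Replica(0), Replica(1)]
        reschedule(replicas, hosts)       # replica 0 -> pc-01, replica 1 -> pc-03
        for r in replicas:
            print(r.replica_id, "->", r.host.name)

    In the middleware itself this decision would be repeated periodically as the measured CPU availability of the hosts changes.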

    Communications software performance prediction

    Software development can be costly, and it is important that confidence in a software system be established as early as possible in the design process. Where the software supports communication services, it is essential that the resultant system operates within certain performance constraints (e.g. response time). This paper gives an overview of work in progress on a collaborative project sponsored by BT which aims to offer performance predictions at an early stage in the software design process. The Permabase architecture enables object-oriented software designs to be combined with descriptions of the network configuration and workload as the input to a simulation model which can predict aspects of the performance of the system. The prototype implementation of the architecture uses a combination of linked design and simulation tools.
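
    Illustrative sketch only: a toy end-to-end response-time estimate in Python, in the spirit of combining a software design with a network configuration and workload (the operation counts, message size and network parameters are invented for the example, not taken from the Permabase project):

        def predict_response_time(cpu_ops: float, ops_per_sec: float,
                                  message_bytes: float, bandwidth_bps: float,
                                  latency_s: float, hops: int) -> float:
            """Combine a software-design cost (CPU operations) with a network
            configuration (bandwidth, latency, hops) to estimate response time."""
            processing = cpu_ops / ops_per_sec
            transfer = hops * (latency_s + (message_bytes * 8) / bandwidth_bps)
            return processing + transfer

        # A 2e6-operation request sent as a 4 kB message over two 10 Mbit/s hops:
        print(predict_response_time(2e6, 1e8, 4096, 10e6, 0.002, 2))

    A full simulation model refines this kind of static estimate by also modelling contention for shared resources under the specified workload.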

    Adaptive Control: Actual Status and Trends

    Important progress in research and application of Adaptive Control Systems has been achieved in the last ten years. The techniques which are currently used in applications will be reviewed. Theoretical aspects currently under investigation and which are related to the application of adaptive control techniques in various fields will be briefly discussed. Applications in various areas will be briefly reviewed. The use of adaptive techniques for vibration monitoring and active vibration control will be emphasized.

    Intelligent Resource Management for Local Area Networks: Approach and Evolution

    The Data Management System network is a complex and important part of manned space platforms. Its efficient operation is vital to the crew, subsystems and experiments. AI is being considered to aid in the initial design of the network and to augment the management of its operation. The Intelligent Resource Management for Local Area Networks (IRMA-LAN) project is concerned with the application of AI techniques to network configuration and management. A network simulation was constructed employing real-time process scheduling for realistic loads and utilizing the IEEE 802.4 token-passing scheme. This simulation is an integral part of the construction of the IRMA-LAN system. From it, a causal model is being constructed for use in prediction and deep reasoning about the system configuration. An AI network design advisor is being added to help in the design of an efficient network. The AI portion of the system is planned to evolve into a dynamic network management aid. The approach, the integrated simulation, project evolution, and some initial results are described.
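
    A minimal Python sketch of a token-bus rotation in the spirit of the IEEE 802.4 scheme used by the simulation described above; the station count, frame times and token-hold limit are illustrative assumptions:

        from collections import deque

        def token_bus_round(queues: list[deque], token_hold: int,
                            frame_time: float, token_pass_time: float) -> float:
            """One full rotation of the token around the logical ring: each station
            may transmit at most `token_hold` queued frames before passing the token.
            Returns the total time consumed by the rotation."""
            elapsed = 0.0
            for q in queues:                   # stations in logical-ring order
                sent = 0
                while q and sent < token_hold:
                    q.popleft()                # transmit one frame
                    elapsed += frame_time
                    sent += 1
                elapsed += token_pass_time     # hand the token to the next station
            return elapsed

        # Three stations with 3, 0 and 5 queued frames, 1 ms per frame, 0.1 ms token pass:
        queues = [deque(range(3)), deque(), deque(range(5))]
        print(token_bus_round(queues, token_hold=2, frame_time=1e-3, token_pass_time=1e-4))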

    Using the Pattern-of-Life in Networks to Improve the Effectiveness of Intrusion Detection Systems

    As the complexity of cyber-attacks keeps increasing, new and more robust detection mechanisms need to be developed. The next generation of Intrusion Detection Systems (IDSs) should be able to adapt their detection characteristics based not only on the measurable network traffic, but also on the available high-level information related to the protected network, in order to improve their detection results. We make use of the Pattern-of-Life (PoL) of a network as the main source of high-level information, which is correlated with the time of day and the usage of the network resources. We propose the use of a Fuzzy Cognitive Map (FCM) to incorporate the PoL into the detection process. The main aim of this work is to demonstrate the improved detection performance of an IDS that uses an FCM to leverage network-related contextual information. The results we present verify that the proposed method improves the effectiveness of our IDS by reducing the total number of false alarms, providing an improvement of 9.68% when all the considered metrics are combined, and a peak improvement of up to 35.64%, depending on the particular metric combination.
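
    A minimal Python sketch of a Fuzzy Cognitive Map update of the kind described above; the concept names, weight matrix and activation-function parameters are illustrative assumptions, not the values used in the paper:

        import math

        def sigmoid(x: float, lam: float = 1.0) -> float:
            return 1.0 / (1.0 + math.exp(-lam * x))

        def fcm_step(activations: list[float], weights: list[list[float]]) -> list[float]:
            """One synchronous FCM iteration: A_i(t+1) = f(A_i(t) + sum_{j!=i} w_ji * A_j(t))."""
            n = len(activations)
            return [sigmoid(activations[i] + sum(weights[j][i] * activations[j]
                                                 for j in range(n) if j != i))
                    for i in range(n)]

        # Concepts: [expected activity from the PoL, observed traffic volume, alert confidence].
        weights = [[0.0, 0.0, -0.6],   # high expected activity lowers alert confidence
                   [0.0, 0.0,  0.7],   # high observed traffic raises alert confidence
                   [0.0, 0.0,  0.0]]
        state = [0.9, 0.8, 0.5]        # busy working hours, busy network, neutral alert
        print(fcm_step(state, weights))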

    Parallel and Distributed Simulation from Many Cores to the Public Cloud (Extended Version)

    In this tutorial paper, we will first review some basic simulation concepts and then introduce parallel and distributed simulation techniques in view of some new challenges of today and tomorrow. In particular, in recent years many-core architectures have seen wide diffusion, and we can expect this trend to continue. On the other hand, the success of cloud computing is strongly promoting the "everything as a service" paradigm. Is parallel and distributed simulation ready for these new challenges? The current approaches present many limitations in terms of usability and adaptivity: there is a strong need for new evaluation metrics and for revising the currently implemented mechanisms. In the last part of the paper, we propose a new approach based on multi-agent systems for the simulation of complex systems. It is possible to implement advanced techniques, such as the migration of simulated entities, in order to build mechanisms that are both adaptive and very easy to use. Adaptive mechanisms are able to significantly reduce the communication cost in parallel/distributed architectures, to implement load-balancing techniques, and to cope with execution environments that are both variable and dynamic. Finally, such mechanisms will be used to build simulations on top of unreliable cloud services.
    Comment: Tutorial paper published in the Proceedings of the International Conference on High Performance Computing and Simulation (HPCS 2011), Istanbul (Turkey), IEEE, July 2011. ISBN 978-1-61284-382-
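
    Illustrative sketch only: a greedy entity-migration heuristic in Python, in the spirit of the adaptive mechanisms described above (the interaction counts and the migration-cost threshold are invented for the example):

        from typing import Optional

        def should_migrate(interactions: dict[str, int], current_lp: str,
                           migration_cost: int) -> Optional[str]:
            """Migrate a simulated entity to the logical process (LP) it interacts with
            most, if the saving in remote interactions outweighs the migration cost."""
            best_lp = max(interactions, key=interactions.get)
            remote_now = sum(c for lp, c in interactions.items() if lp != current_lp)
            remote_after = sum(c for lp, c in interactions.items() if lp != best_lp)
            return best_lp if remote_now - remote_after > migration_cost else None

        # An entity hosted on LP "A" that mostly interacts with entities on LP "B":
        print(should_migrate({"A": 10, "B": 120, "C": 5}, current_lp="A", migration_cost=50))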