
    Time Based Traffic Policing and Shaping Algorithms on Campus Network Internet Traffic

    This paper presents the development of traffic policing and shaping algorithms for bandwidth management, which provide Quality of Service (QoS) in a campus network. The campus network is connected to the Internet Wide Area Network through a 16 Mbps Virtual Private Network line. Both inbound and outbound real Internet traffic were captured and analyzed. An Anderson-Darling (AD) Goodness-of-Fit (GoF) test was applied to the real traffic to identify the best-fitting distribution. The best-fitted Cumulative Distribution Function (CDF) model was used to analyze and characterize the data and its parameters. Based on the identified parameters, a new time-based policing and shaping algorithm was developed and simulated. The policing process drops burst traffic, while the shaping process delays traffic to the next transmission times. A mathematical model formulating the algorithm's control of burst traffic over selected time intervals has been derived. In the algorithms, the inbound traffic burst threshold was policed at 1200 MByte (MB), while the outbound threshold was policed at 680 MB. The algorithms were varied in relation to the identified Weibull parameters to reduce burstiness. The analysis shows that a higher shape parameter value corresponds to lower burstiness, so network throughput can be controlled. This research presents a new method for time-based bandwidth management and enhances network performance by identifying new traffic parameters for traffic modeling in a campus network.
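The policing and shaping behaviors described in the abstract above can be sketched as simple per-interval bookkeeping: policing drops the bytes above a threshold in each interval, while shaping carries the excess over into later intervals. The 1200 MB and 680 MB thresholds come from the abstract; the interval-level accounting and the sample traffic values are illustrative assumptions, not the thesis's algorithm.

```python
def police(traffic_mb, threshold_mb):
    """Policing: bytes above the threshold in each interval are dropped."""
    passed = [min(t, threshold_mb) for t in traffic_mb]
    dropped = [max(t - threshold_mb, 0) for t in traffic_mb]
    return passed, dropped

def shape(traffic_mb, threshold_mb):
    """Shaping: excess bytes are delayed into the following intervals."""
    out, backlog = [], 0.0
    for t in traffic_mb:
        offered = t + backlog          # new traffic plus deferred traffic
        sent = min(offered, threshold_mb)
        backlog = offered - sent       # excess waits for the next interval
        out.append(sent)
    return out, backlog                # backlog = traffic still queued at the end

inbound = [900, 1500, 1100, 1800, 600]  # MB per interval (illustrative values)
passed, dropped = police(inbound, 1200)
shaped, left = shape(inbound, 1200)
print(passed)   # [900, 1200, 1100, 1200, 600]
print(dropped)  # [0, 300, 0, 600, 0]
print(shaped)   # [900.0, 1200.0, 1200.0, 1200.0, 1200.0]
```

Note the difference: policing loses the 900 MB of burst outright, while shaping preserves it at the cost of delay (200 MB is still queued after the last interval here).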

    Social Insect-Inspired Adaptive Hardware

    Modern VLSI transistor densities allow large systems to be implemented within a single chip. As technologies get smaller, fundamental limits of silicon devices are reached, resulting in lower design yields and post-deployment failures. Many-core systems provide a platform for leveraging the computing resources offered by deep sub-micron technologies and also offer high-level capabilities for mitigating the issues of small feature sizes. However, designing many-core systems that can adapt to in-field failures and operational variability requires searching an extremely large multi-objective optimisation space. When a many-core system reaches the size supported by the densities of modern technologies (thousands of processing cores), finding design solutions in this problem space becomes extremely difficult. Many biological systems show properties that are adaptive and scalable. This thesis proposes a self-optimising and adaptive, yet scalable, design approach for many-core systems based on the emergent behaviours of social-insect colonies. In these colonies, many thousands of individuals of low intelligence contribute, without any centralised control, to a wide range of tasks that build and maintain the colony. The experiments presented translate biological models of social-insect intelligence into simple embedded intelligence circuits. These circuits sense low-level system events and use them to manage the parameters of the many-core's Network-on-Chip (NoC) at runtime. Centurion, a 128-node many-core, was created to investigate these models at large scale in hardware. The results show that, by monitoring a small number of signals within each NoC router, task allocation emerges from the social-insect intelligence models and can self-configure to support representative applications. It is demonstrated that emergent task allocation supports fault tolerance with no extra hardware overhead. The response-threshold decision-making circuitry uses a negligible amount of hardware resources relative to the size of the many-core and is an ideal technology for implementing embedded intelligence for runtime management of large-complexity single-chip systems.
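The response-threshold decision making mentioned above follows a well-known social-insect model (the fixed-threshold rule popularized by Bonabeau et al.), which can be sketched in a few lines. The stimulus and threshold values here are illustrative, not taken from the Centurion hardware.

```python
def engagement_probability(stimulus, threshold):
    """Fixed response-threshold rule: P = s^2 / (s^2 + theta^2)."""
    return stimulus**2 / (stimulus**2 + threshold**2)

# A low-threshold node engages readily; a high-threshold node only responds
# once the stimulus (e.g. accumulated NoC congestion events) grows large.
print(engagement_probability(1.0, 1.0))            # 0.5
print(round(engagement_probability(4.0, 1.0), 3))  # 0.941
print(round(engagement_probability(1.0, 4.0), 3))  # 0.059
```

Because nodes with different thresholds respond to the same stimulus with different probabilities, the division of labour emerges from many local decisions rather than from a centralised allocator, which is what makes the rule attractive for cheap per-router hardware.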

    A Framework to Quantify Network Resilience and Survivability

    The significance of resilient communication networks in modern society is well established. Resilience and survivability mechanisms in current networks are limited and domain-specific. Consequently, the evaluation methods are either qualitative assessments or context-specific metrics. There is a need for rigorous quantitative evaluation of network resilience. We propose a service-oriented framework to characterize the resilience of networks to a number of faults and challenges at any abstraction level. This dissertation presents methods to quantify the operational state and the expected service of the network using functional metrics. We formalize resilience as transitions of the network state in a two-dimensional state space quantifying network characteristics, from which network service performance parameters can be derived. One dimension represents the network as normally operating, partially degraded, or severely degraded. The other dimension represents network service as acceptable, impaired, or unacceptable. Our goal is to initially understand how to characterize network resilience, and ultimately how to guide network design and engineering toward increased resilience. We apply the proposed framework to evaluate the resilience of various topologies and routing protocols. Furthermore, we present several mechanisms to improve the resilience of networks to various challenges.
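The two-dimensional state space described above can be sketched as a pair of banded classifications, one for the operational dimension and one for the service dimension. The metric names (connectivity, delivery ratio) and the band boundaries below are illustrative assumptions; the framework derives the actual boundaries from functional metrics of the network under study.

```python
OPERATIONAL = ["severely degraded", "partially degraded", "normal"]
SERVICE = ["unacceptable", "impaired", "acceptable"]

def band(value, low, high):
    """Map a normalized metric in [0, 1] to one of three bands."""
    if value >= high:
        return 2
    if value >= low:
        return 1
    return 0

def network_state(connectivity, delivery_ratio):
    """State = (operational state, service state); boundaries are illustrative."""
    op = OPERATIONAL[band(connectivity, 0.6, 0.9)]
    svc = SERVICE[band(delivery_ratio, 0.6, 0.9)]
    return op, svc

# A challenge (e.g. progressive link failures) traces a trajectory through
# the state space; resilience can then be reasoned about as these transitions.
trajectory = [network_state(c, d) for c, d in [(0.98, 0.99), (0.7, 0.8), (0.4, 0.5)]]
print(trajectory)
# [('normal', 'acceptable'), ('partially degraded', 'impaired'),
#  ('severely degraded', 'unacceptable')]
```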

    Biologically Inspired Intrusion Prevention and Self-Healing System for Critical Services Network

    With the explosive development of critical services network systems and the Internet, the need for network security systems has become ever more critical as information technology pervades everyday life. An Intrusion Prevention System (IPS) provides an in-line mechanism focused on identifying and blocking malicious network activity in real time. This thesis presents a new intrusion prevention and self-healing (SH) system for critical services network security. The design features of the proposed system are inspired by the human immune system, integrated with a nonlinear pattern recognition classification algorithm and machine learning. Firstly, current intrusion prevention systems, biological innate and adaptive immune systems, autonomic computing, and self-healing mechanisms are studied and analyzed. This analysis suggests that artificial immune systems (AIS) should incorporate abstraction models from the innate and adaptive immune systems, pattern recognition, machine learning, and self-healing mechanisms to provide an autonomous IPS with fast, highly accurate detection and prevention performance and survivability for critical services network systems. Secondly, a specification language, system design, and mathematical and computational models for the IPS and SH system are established, based upon nonlinear classification, prevention predictability trust, analysis, self-adaptation, and self-healing algorithms. Finally, the system is validated through simulation tests, measurement, benchmarking, and comparative studies. New benchmarking metrics for detection capability, prevention predictability trust, and self-healing reliability are introduced as contributions to the measurement and validation of the IPS and SH system. The software system, design theories, AIS features, new nonlinear classification algorithm, and self-healing system show how the presented approach can ensure safety for critical services networks and heal the damage caused by intrusions. This autonomous system improves on the performance of current intrusion prevention systems and maintains system continuity through its self-healing mechanism.
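As a concrete illustration of the immune-system inspiration, the sketch below implements negative selection with the r-contiguous-bits match rule, a standard construction in the AIS literature: detectors are generated at random and kept only if they fail to match any "self" (normal) pattern, so anything they later match is flagged as non-self. This is not the thesis's classifier; all patterns and parameters are invented for illustration.

```python
import random

def matches(detector, sample, r):
    """r-contiguous-bits rule: match if r consecutive positions agree."""
    run = 0
    for d, s in zip(detector, sample):
        run = run + 1 if d == s else 0
        if run >= r:
            return True
    return False

def generate_detectors(self_set, n, length, r, seed=0):
    """Negative selection: keep random candidates that match no self pattern."""
    rng = random.Random(seed)
    detectors = []
    while len(detectors) < n:
        cand = tuple(rng.randint(0, 1) for _ in range(length))
        if not any(matches(cand, s, r) for s in self_set):
            detectors.append(cand)
    return detectors

def is_intrusion(sample, detectors, r):
    """A sample matched by any detector is classified as non-self."""
    return any(matches(d, sample, r) for d in detectors)

self_patterns = [(0, 0, 0, 0, 1, 1, 1, 1)]   # "normal" traffic signatures
detectors = generate_detectors(self_patterns, 5, 8, 4)
# By construction no detector matches self:
print(all(not matches(d, s, 4) for d in detectors for s in self_patterns))  # True
```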

    Secure Cloud-Edge Deployments, with Trust

    Assessing the security level of IoT applications to be deployed to heterogeneous Cloud-Edge infrastructures operated by different providers is a non-trivial task. In this article, we present a methodology that makes it possible to express security requirements for IoT applications, as well as infrastructure security capabilities, in a simple and declarative manner, and to automatically obtain an explainable assessment of the security level of the possible application deployments. The methodology also considers the impact of trust relations among the different stakeholders using or managing Cloud-Edge infrastructures. A lifelike example is used to showcase the prototyped implementation of the methodology.
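The declarative, explainable matching described above can be sketched as a capability-coverage check with trust filtering: a node is eligible only if its security capabilities cover the application's requirements and its operator is trusted, and every rejection carries a reason. The node names, capability labels, and data layout below are invented for illustration and are not the article's actual model.

```python
def eligible_nodes(requirements, trusted_ops, infrastructure):
    """Return, per node, 'ok' or an explanation of why deployment fails."""
    result = {}
    for node, info in infrastructure.items():
        missing = requirements - info["capabilities"]
        if not missing and info["operator"] in trusted_ops:
            result[node] = "ok"
        elif missing:
            result[node] = "missing: " + ", ".join(sorted(missing))
        else:
            result[node] = "untrusted operator: " + info["operator"]
    return result

infra = {
    "edge1": {"operator": "opA", "capabilities": {"encrypted_storage"}},
    "cloud1": {"operator": "opB",
               "capabilities": {"encrypted_storage", "access_control"}},
}
print(eligible_nodes({"encrypted_storage", "access_control"}, {"opB"}, infra))
# {'edge1': 'missing: access_control', 'cloud1': 'ok'}
```

The point of the per-node explanation string is the "explainable assessment" aspect: a rejected deployment tells the designer which capability or trust relation to fix.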

    Helena

    Ensemble-based systems are software-intensive systems consisting of large numbers of components which can dynamically form goal-oriented communication groups. The goal of an ensemble is usually achieved through interaction of some components, but the contributing components may simultaneously participate in several collaborations. With standard component-based techniques, such systems can only be described by a complex model specifying all ensembles and participants at the same time. Thus, ensemble-based systems lack a development methodology which particularly addresses the dynamic formation and concurrency of ensembles as well as transparency of participants. This thesis proposes the Helena development methodology. It slices an ensemble-based system in two dimensions: Each kind of ensemble is considered separately. This allows the developer to focus on the relevant parts of the system only and abstract away those parts which are non-essential to the current ensemble. Furthermore, an ensemble itself is not defined solely in terms of participating components, but in terms of roles which components adopt in that ensemble. A role is the logical entity needed to contribute to the ensemble while a component provides the technical functionalities to actually execute a role. By simultaneously adopting several roles, a component can concurrently participate in several ensembles. Helena addresses the particular challenges of ensemble-based systems in the main development phases: The domain of an ensemble-based system is described as an ensemble structure of roles built on top of a component-based platform. Based on the ensemble structure, the goals of ensembles are specified as linear temporal logic formulae. With these goals in mind, the dynamic behavior of the system is designed as a set of role behaviors. 
To show that the ensemble participants actually achieve the global goals of the ensemble by collaboratively executing the specified behaviors, the Helena model is verified against its goals with the model checker Spin. For that, we provide a translation of Helena models to Promela, the input language of Spin, which is proven semantically correct for a kernel part of Helena. Finally, we provide the Java framework jHelena, which realizes all Helena concepts in Java. By implementing a Helena model with this framework, Helena models can be executed according to the formal Helena semantics. To support all activities of the Helena development methodology, we provide the Helena workbench as a tool for specification and automated verification and code generation. The general applicability of Helena is backed by a case study of a larger software system, the Science Cloud Platform. Helena is able to capture, verify, and implement the main characteristics of the system. Looking at Helena from a different angle shows that the Helena idea of roles is also well suited to realizing adaptive systems that change their behavioral modes based on perceptions. We extend the Helena development methodology to adaptive systems and illustrate its applicability with an adaptive robotic search-and-rescue example.
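The role concept at the heart of the methodology can be sketched in a few lines: a component provides the technical substrate and adopts one role per ensemble, so concurrent participation in several ensembles is simply the set of roles currently adopted. The class, ensemble, and role names below are invented for illustration (Helena itself is realized in Java via jHelena).

```python
class Component:
    """Technical substrate; may adopt one role per ensemble at a time."""

    def __init__(self, name):
        self.name = name
        self.roles = {}  # ensemble name -> currently adopted role

    def adopt(self, ensemble, role):
        self.roles[ensemble] = role

    def quit(self, ensemble):
        self.roles.pop(ensemble, None)

node = Component("node1")
node.adopt("file-transfer", "Requester")  # participates in one ensemble ...
node.adopt("search-rescue", "Scout")      # ... and concurrently in another
print(sorted(node.roles.values()))  # ['Requester', 'Scout']
node.quit("file-transfer")
print(node.roles)  # {'search-rescue': 'Scout'}
```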

    Thermal-Aware Networked Many-Core Systems

    Advancements in IC processing technology have led to innovation and growth in the consumer electronics sector and to the evolution of the IT infrastructure supporting this exponential growth. One of the most difficult obstacles to this growth is the removal of the large amount of heat generated by the processing and communicating nodes of the system. The scaling down of technology and the increase in power density have a direct effect on the rise in temperature. This has increased cooling budgets and affects both the lifetime reliability and the performance of the system. Hence, reducing on-chip temperatures has become a major design concern for modern microprocessors. This dissertation addresses the thermal challenges at different levels for both 2D planar and 3D stacked systems. It proposes a self-timed thermal monitoring strategy based on the liberal use of on-chip thermal sensors, which makes use of noise-variation-tolerant and leakage-current-based thermal sensing for monitoring purposes. In order to study thermal management issues from the early design stages, accurate thermal modeling and analysis at design time is essential. In this regard, the spatial temperature profile of the global Cu nanowire for on-chip interconnects has been analyzed. The dissertation presents a 3D thermal model of a multicore system in order to investigate the effects of hotspots and of the placement of silicon die layers on the thermal performance of a modern flip-chip package. For a 3D stacked system, the primary design goal is to maximise performance within the given power and thermal envelopes. Hence, a thermally efficient routing strategy for 3D NoC-Bus hybrid architectures has been proposed to mitigate on-chip temperatures by herding most of the switching activity to the die closest to the heat sink. Finally, an exploration of various thermal-aware placement approaches for both 2D and 3D stacked systems is presented. Various thermal models have been developed and thermal control metrics have been extracted. An efficient thermal-aware application mapping algorithm for a 2D NoC has been presented. It has been shown that the proposed mapping algorithm reduces the effective area subject to high temperatures when compared to the state of the art.
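The general shape of a thermal-aware mapping heuristic for a 2D NoC can be sketched as follows: power-hungry tasks are pushed toward the mesh periphery, which typically runs cooler than the centre because of lateral heat spreading. The greedy rule, the distance-from-centre coolness proxy, and all numbers below are illustrative assumptions, not the dissertation's algorithm.

```python
def thermal_aware_map(task_powers, mesh_w, mesh_h):
    """Greedily assign tasks (indexed by power list) to mesh coordinates."""
    cx, cy = (mesh_w - 1) / 2, (mesh_h - 1) / 2
    cores = [(x, y) for x in range(mesh_w) for y in range(mesh_h)]
    # Peripheral cores first: larger Manhattan distance from the mesh
    # centre is used as a proxy for a cooler placement.
    cores.sort(key=lambda c: -(abs(c[0] - cx) + abs(c[1] - cy)))
    # Hottest (highest-power) tasks are placed first.
    ranked = sorted(range(len(task_powers)), key=lambda i: -task_powers[i])
    return {task: core for task, core in zip(ranked, cores)}

# Four tasks on a 3x3 mesh; powers in watts (illustrative).
mapping = thermal_aware_map([0.3, 1.2, 0.7, 2.0], 3, 3)
print(mapping[3])  # (0, 0) -- the 2.0 W task lands on a corner core
```

A real mapper would also weigh communication distance between tasks against the thermal objective; this sketch isolates only the thermal side of that trade-off.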