
    Multiple metrics-OLSR in NAN for Advanced Metering Infrastructures

    Routing in the Neighbourhood Area Network (NAN) for the Smart Grid's Advanced Metering Infrastructure (AMI) raises the need for Quality of Service (QoS)-aware routing, owing to the expanded list of applications that will result in the transmission of different types of traffic between NAN devices (i.e., smart meters). In wireless mesh network (WMN) routing, a combination of multiple link metrics, though complex, has been identified as a possible solution for QoS routing. The resulting complexity (an NP-complete problem) can be resolved through the use of the Analytical Hierarchy Process (AHP) algorithm and pruning techniques. With the assumption that smart meters transmit IP packets of different sizes at different intervals to represent AMI traffic, a case study of the performance of three Optimised Link State Routing (OLSR) link metrics is carried out on a grid-topology NAN-based WMN in the ns-2 network simulator. The two best-performing metrics were used to show the possibility of combining multiple metrics with OLSR through the AHP algorithm to fulfil the QoS routing requirements of targeted AMI application traffic in NANs.

    Improving the reliability of optimised link state routing in a smart grid neighbour area network based wireless mesh network using multiple metrics

    © 2017 by the authors; licensee MDPI. Reliable communication is the backbone of advanced metering infrastructure (AMI). Within the AMI, the neighbourhood area network (NAN) transports a multitude of traffic, each with unique requirements. In order to deliver an acceptable level of reliability and latency, the underlying network, such as the wireless mesh network (WMN), must provide or guarantee the quality-of-service (QoS) level required by the respective application traffic. Existing WMN routing protocols, such as optimised link state routing (OLSR), typically utilise a single metric and do not consider the requirements of individual traffic; hence, packets are delivered on a best-effort basis. This paper presents a QoS-aware WMN routing technique that employs multiple metrics in OLSR optimal path selection for AMI applications. The problems arising from this approach are non-deterministic polynomial-time (NP)-complete in nature; they were solved through the combined use of the analytical hierarchy process (AHP) algorithm and pruning techniques. For smart meters transmitting Internet Protocol (IP) packets of varying sizes at different intervals, the proposed technique considers the constraints of the NAN and the applications' traffic characteristics. The technique was developed by combining multiple OLSR path selection metrics with the AHP algorithm in ns-2. Compared with the conventional link metric in OLSR, the results show improvements of about 23% and 45% in latency and Packet Delivery Ratio (PDR), respectively, in a 25-node grid NAN.
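The AHP step that combines multiple link metrics can be sketched in a few lines. The pairwise-comparison matrix below encodes illustrative judgements over three link metrics; the judgement values and metric ordering are assumptions for the sketch, not figures from the paper. Priority weights are approximated by normalised column means and checked with Saaty's consistency ratio.

```python
import numpy as np

# Hypothetical pairwise-comparison matrix over three link metrics
# (e.g. delay, loss, hop count); entry [i][j] says how much more
# important metric i is than metric j. Values are illustrative.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 3.0],
    [1/5, 1/3, 1.0],
])

def ahp_weights(matrix):
    """Approximate AHP priority weights by normalised column means."""
    normalised = matrix / matrix.sum(axis=0)
    return normalised.mean(axis=1)

def consistency_ratio(matrix, weights):
    """Saaty consistency ratio; CR < 0.1 is conventionally acceptable."""
    n = len(weights)
    lambda_max = (matrix @ weights / weights).mean()
    ci = (lambda_max - n) / (n - 1)
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]  # Saaty's random consistency indices
    return ci / ri

w = ahp_weights(A)
cr = consistency_ratio(A, w)
```

A candidate route's score is then a weighted sum of its normalised metric values, and the route with the best score is selected.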

    Analysis of manufacturing operations using knowledge- Enriched aggregate process planning

    Knowledge-Enriched Aggregate Process Planning is concerned with the problem of supporting agile design and manufacture by making process planning feedback integral to the design function. A novel Digital Enterprise Technology framework (Maropoulos 2003) provides the technical context and is the basis for the integration of the methods with existing technologies for enterprise-wide product development. The work is based upon the assertion that, to assure success when developing new products, the technical and qualitative evaluation of process plans must be carried out as early as possible. An intelligent exploration methodology is presented for the technical evaluation of the many alternative manufacturing options which are feasible during the conceptual and embodiment design phases. 'Data resistant' aggregate product, process and resource models are the foundation of these planning methods. From the low-level attributes of these models, aggregate methods to generate suitable alternative process plans and estimate Quality, Cost and Delivery (QCD) have been created. The reliance on QCD metrics in process planning neglects the importance of tacit knowledge that people use to make everyday decisions and express their professional judgement in design. Hence, the research also advances the core aggregate planning theories by developing knowledge-enrichment methods for measuring and analysing qualitative factors as an additional indicator of manufacturing performance, which can be used to compute the potential of a process plan. The application of these methods allows the designer to make a comparative estimation of manufacturability for design alternatives. Ultimately, this research should translate into significant reductions in both design costs and product development time and create synergy between the product design and the manufacturing system that will be used to make it. 
The efficacy of the methodology was demonstrated through the development of an experimental computer system (called CAPABLE Space), which used real industrial data from a leading UK satellite manufacturer to validate the industrial benefits and promote the commercial exploitation of the research.

    Design for Support in the Initial Design of Naval Combatants

    The decline of defence budgets, coupled with the escalation of warship procurement costs, has significantly contributed to fleet downsizing in most major western navies despite little reduction in overall commitments, so that greater capability and reliability are required per ship. Moreover, the tendency of governments to focus on short-term strategies and expenditure has meant that those aspects of naval ship design that may be difficult to quantify, such as supportability, are often treated as secondary issues and allocated insufficient attention in Early Stage Design. To tackle this, innovation in both the design process and the development of individual ship designs is necessary, especially at the crucial early design stages. Novelty can be achieved thanks to major developments in computer technology and by adopting an architecturally-orientated approach to early stage ship design. The existing technical solutions aimed at addressing supportability largely depend on highly detailed ship design information and thus fail to enable rational supportability assessments in the Concept Phase. This research therefore aimed to address the lack of a quantitative supportability evaluation approach applicable to early stage naval ship design. Utilising Decision Analysis, Effectiveness Analysis, and the Analytic Hierarchy Process, the proposed approach tackled the difficulty of quantifying certain aspects of supportability in initial ship design and provided a framework to address the issue of inconsistent and often conflicting preferences of decision makers. Since the ship's supportability is considered to be significantly affected by its configuration, the proposed approach utilised the advantages of an architecturally-orientated early stage ship design approach and a new concept design tool developed at University College London.
The new tool was used to develop concept-level designs of a frigate-sized combatant and a number of variations of it, namely a configurational rearrangement with enhancement of certain supportability features, and an alternative ship design style. The design cases were then used to demonstrate the proposed evaluation approach. The overall aim of proposing a quantitative supportability evaluation approach applicable to concept naval ship design was achieved, although several issues and limitations emerged during both the development and the implementation of the approach. Through identification of the research limitations, areas for future work aimed at improving the proposal have been identified.

    Incident Prioritisation for Intrusion Response Systems

    The landscape of security threats continues to evolve, with attacks becoming more serious and the number of vulnerabilities rising. To manage these threats, many security studies have been undertaken in recent years, mainly focusing on improving detection, prevention and response efficiency. Although security tools such as antivirus software and firewalls are available to counter these threats, Intrusion Detection Systems and similar tools such as Intrusion Prevention Systems remain among the most popular approaches. There are hundreds of published works related to intrusion detection that aim to increase the efficiency and reliability of detection, prevention and response systems. Whilst intrusion detection system technologies have advanced, there are still areas to explore, particularly with respect to the process of selecting appropriate responses. Supporting a variety of response options, such as proactive, reactive and passive responses, enables security analysts to select the most appropriate response in different contexts. In view of that, a methodical approach that identifies important incidents as opposed to trivial ones is first needed. However, with thousands of incidents identified every day, relying upon manual processes to identify their importance and urgency is complicated, difficult, error-prone and time-consuming, so prioritising them automatically would help security analysts to focus only on the most critical ones. The existing approaches to incident prioritisation provide various ways to prioritise incidents, but less attention has been given to adopting them into an automated response system. Although some studies have recognised the advantages of prioritisation, few have followed up with further work investigating the effectiveness of the process.
This study concerns enhancing the incident prioritisation scheme to identify critical incidents based upon their criticality and urgency, in order to facilitate an autonomous mode for the response selection process in Intrusion Response Systems. To achieve this aim, the study proposed a novel framework which combines models and strategies identified from a comprehensive literature review. A model to estimate the level of risk of incidents is established, named the Risk Index Model (RIM). With different levels of risk, the Response Strategy Model (RSM) dynamically maps incidents onto different types of response, with serious incidents being mapped to active responses in order to minimise their impact, while incidents with less impact receive passive responses. The combination of these models provides a seamless way to map incidents automatically; however, it needs to be evaluated in terms of its effectiveness and performance. To demonstrate the results, an evaluation study with four stages was undertaken: a feasibility study of the RIM; comparison studies with industrial standards such as the Common Vulnerability Scoring System (CVSS) and Snort; an examination of the effect of different strategies in the rating and ranking process; and a test of the effectiveness and performance of the RSM. With promising results gathered, a proof-of-concept study was conducted to demonstrate the framework using a live traffic network simulation with online assessment mode via the Security Incident Prioritisation Module (SIPM); this study was used to investigate its effectiveness and practicality. Through the results gathered, this study has demonstrated that the prioritisation process can feasibly be used to facilitate the response selection process in Intrusion Response Systems.
The main contribution of this study is to have proposed, designed, evaluated and simulated a framework to support the incident prioritisation process for Intrusion Response Systems. Ministry of Higher Education in Malaysia and University of Malay
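As a concrete illustration of the prioritisation idea, the sketch below computes a weighted risk score and maps it to a response class in the spirit of the RIM and RSM; the weightings, thresholds, incident names and ratings are invented for the example and do not come from the study.

```python
def risk_index(criticality, urgency, w_crit=0.6, w_urg=0.4):
    """Weighted risk score from criticality and urgency ratings (0-10 scale)."""
    return w_crit * criticality + w_urg * urgency

def select_response(score, active_threshold=7.0, passive_threshold=3.0):
    """Map a risk score to a response class, as the RSM maps incidents."""
    if score >= active_threshold:
        return "active"      # e.g. block source address, reset connection
    if score >= passive_threshold:
        return "reactive"    # e.g. raise an alert for analyst review
    return "passive"         # e.g. log only

# Hypothetical incidents: (name, criticality, urgency)
incidents = [
    ("port scan", 2, 3),
    ("worm outbreak", 9, 9),
    ("failed login burst", 5, 6),
]
triaged = {name: select_response(risk_index(c, u)) for name, c, u in incidents}
```

The point of the mapping is that serious incidents trigger active containment automatically, while low-impact ones are merely recorded, freeing analysts to focus on the middle band.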

    Improving the Reliability of Optimised Link State Routing Protocol in Smart Grid’s Neighbour Area Network

    A reliable and resilient communication infrastructure that can cope with variable application traffic types and delay objectives is one of the prerequisites that differentiates a Smart Grid from the conventional electrical grid. However, the legacy communication infrastructure in the existing electrical grid is insufficient for, if not incapable of, satisfying the diverse communication requirements of the Smart Grid. The IEEE 802.11 ad hoc Wireless Mesh Network (WMN) is re-emerging as one of the communication networks that can significantly extend the reach of the Smart Grid to backend devices through the Advanced Metering Infrastructure (AMI). However, the unique characteristics of AMI application traffic in the Smart Grid pose some interesting challenges to conventional communication networks, including the ad hoc WMN. Hence, there is a need to modify the conventional ad hoc WMN to address the uncertainties that may exist in its applicability in a Smart Grid environment. This research carries out an in-depth study of the communication of Smart Grid application traffic types over an ad hoc WMN deployed in the Neighbour Area Network (NAN). It begins by conducting a critical review of the application characteristics and traffic requirements of several Smart Grid applications and highlighting some key challenges. Based on the reviews, and assuming that the application traffic types use the Internet Protocol (IP) as a transport protocol, a number of Smart Grid application traffic profiles were developed. Through experimental and simulation studies, a performance evaluation of an ad hoc WMN using the Optimised Link State Routing (OLSR) routing protocol was carried out. This highlighted some capacity and reliability issues that routing AMI application traffic may face within a conventional ad hoc WMN in a Smart Grid NAN.
Given that conventional routing solutions do not consider traffic requirements when making routing decisions, another key observation is the inability of link metrics in routing protocols to select good-quality links across multiple hops to a destination and to provide Quality of Service (QoS) support for target application traffic. Like most routing protocols, OLSR uses a single routing metric acquired at the network layer, which may not be able to accommodate the different QoS requirements of application traffic in the Smart Grid. To address these problems, a novel multiple-link-metrics approach to improve the reliability of routing in an ad hoc WMN deployed for the Smart Grid is presented. It is based on the OLSR protocol and explores the possibility of applying QoS routing for application traffic types in a NAN-based ad hoc WMN. Though routing with multiple metrics has been identified as a complex problem, Multi-Criteria Decision Making (MCDM) techniques such as the Analytical Hierarchy Process (AHP) and pruning have been used to perform such routing for wired and wireless multimedia applications. The proposed multiple-metrics OLSR with AHP is used to offer the best available route, based on a number of considered metric parameters. To accommodate the variable application traffic requirements, a study that allows application traffic to use the most appropriate routing metric is presented. The multiple-metrics development is then evaluated in Network Simulator 2.34; the simulation results demonstrate that it outperforms existing routing methods based on single metrics in OLSR. They also show that it can be used to improve the reliability of application traffic types, thereby overcoming some weaknesses of existing single-metric routing across multiple hops in the NAN.
IEEE 802.11g was used to compare and analyse the performance of OLSR, and IEEE 802.11b was used to implement the multiple-metrics framework, which demonstrated better performance than the single metric. However, the multiple-metrics approach can also be applied for routing on different IEEE wireless standards, as well as other communication technologies such as Power Line Communication (PLC), when deployed in a Smart Grid NAN.
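The pruning step used alongside AHP can be illustrated with a Pareto filter over candidate routes: dominated paths are discarded before any ranking takes place, which is what keeps the multi-metric problem tractable. The route names and metric tuples below are invented for the sketch; the thesis's actual metric set may differ.

```python
def dominates(a, b):
    """Path a dominates path b if it is no worse on every metric
    (lower is better) and strictly better on at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def prune(paths):
    """Keep only non-dominated (Pareto-optimal) candidate paths."""
    return {
        name: metrics for name, metrics in paths.items()
        if not any(dominates(other, metrics)
                   for other_name, other in paths.items() if other_name != name)
    }

# Hypothetical (delay ms, loss %, hop count) for three candidate NAN routes.
candidates = {
    "via_m1": (40.0, 1.0, 3),
    "via_m2": (55.0, 2.5, 4),   # worse than via_m1 on every metric
    "via_m3": (30.0, 3.0, 5),   # trades loss and hops for lower delay
}
survivors = prune(candidates)
```

Only the surviving non-dominated routes then need AHP-weighted scoring, so the per-traffic-type metric weighting operates on a much smaller candidate set.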

    A Specification For A Next Generation Cad Toolkit For Electronics Product Design

    Electronic engineering product design is a complex process which has enjoyed an increasing provision of computer-based tools since the early 1980s. Over this period, computer-aided design tool development has progressed at such a pace that new features and functions have tended to be market driven. As such, CAD tools have not been developed through the recommended practice of defining a functional specification prior to any software code generation. This thesis defines a new functional specification for next-generation CAD tools to support the electronics product design process. It is synthesised from a review of the use of computers in the electronics product design process, from a case study of Best Practices prevalent in a wide range of electronics companies, and from a new model of the design process. The model and the best practices have given rise to a new concept for company engineering documentation, the Product Book, which provides a logical framework for constraining CAD tools and their users (designers) as a means of controlling costs in the design process. This specification differs from current perceptions of computer functionality in the CAD tool industry by addressing human needs together with company needs of computer-supported design, rather than just providing more technological support for the designer in isolation. Racal Reda

    Managing Distributed Cloud Applications and Infrastructure

    The emergence of the Internet of Things (IoT), combined with greater heterogeneity not only within cloud computing architectures but across the cloud-to-edge continuum, is introducing new challenges for managing applications and infrastructure across this continuum. The scale and complexity are now so great that it is no longer realistic for IT teams to manually foresee potential issues and manage the dynamism and dependencies across an increasingly interdependent chain of service provision. This Open Access Pivot explores these challenges and offers a solution for the intelligent and reliable management of physical infrastructure and the optimal placement of applications for the provision of services on distributed clouds. The book provides a conceptual reference model for reliable capacity provisioning for distributed clouds and discusses how data analytics and machine learning, application and infrastructure optimization, and simulation can deliver quality-of-service requirements cost-efficiently in this complex feature space. These are illustrated through a series of case studies in cloud computing, telecommunications, big data analytics, and smart cities.