
    Why (and How) Networks Should Run Themselves

    The proliferation of networked devices, systems, and applications that we depend on every day makes managing networks more important than ever. The increasing security, availability, and performance demands of these applications require that increasingly difficult network management problems be solved in real time, across a complex web of interacting protocols and systems. Alas, just as the importance of network management has increased, the network has grown so complex that it is seemingly unmanageable. In this new era, network management requires a fundamentally new approach. Instead of optimizations based on closed-form analysis of individual protocols, network operators need data-driven, machine-learning-based models of end-to-end and application performance, grounded in high-level policy goals and a holistic view of the underlying components. Instead of anomaly detection algorithms that operate on offline analysis of network traces, operators need classification and detection algorithms that can make real-time, closed-loop decisions. Networks should learn to drive themselves. This paper explores this concept, discussing how we might attain this ambitious goal by more closely coupling measurement with real-time control and by relying on learning for inference and prediction about a networked application or system, as opposed to closed-form analysis of individual protocols.
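
    The closed-loop idea in this abstract can be illustrated with a small sketch: a model learned offline from measurements predicts whether a high-level policy goal (acceptable application performance) is being violated, and a control action is taken in real time rather than after offline trace analysis. The features, labels, synthetic data, and the reroute_traffic stub below are illustrative assumptions, not anything taken from the paper.

```python
# Hedged sketch of the measurement -> learned model -> closed-loop control
# pattern the paper argues for. All data and names here are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Offline: learn a mapping from low-level measurements (latency ms, loss %)
# to a high-level policy outcome (application degraded or not).
X = np.column_stack([rng.uniform(5, 200, 500), rng.uniform(0, 5, 500)])
y = ((X[:, 0] > 120) | (X[:, 1] > 2.0)).astype(int)   # toy ground truth
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def reroute_traffic():
    # Stand-in for a real-time control action (e.g., shifting a path).
    print("control action: rerouting traffic")

# Online: closed loop over fresh measurements instead of offline analysis.
for latency_ms, loss_pct in [(35.0, 0.1), (150.0, 0.4), (60.0, 3.2)]:
    degraded = bool(model.predict([[latency_ms, loss_pct]])[0])
    print(f"latency={latency_ms} ms, loss={loss_pct}% -> degraded={degraded}")
    if degraded:
        reroute_traffic()
```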

    Exact two-terminal reliability of some directed networks

    The calculation of network reliability in a probabilistic context has long been an issue of practical and academic importance. Conventional approaches (determination of bounds, sums of disjoint products algorithms, Monte Carlo evaluations, studies of the reliability polynomials, etc.) only provide approximations when the network's size increases, even when nodes do not fail and all edges have the same reliability p. We consider here a directed, generic graph of arbitrary size mimicking real-life long-haul communication networks, and give the exact, analytical solution for the two-terminal reliability. This solution involves a product of transfer matrices, in which the individual reliabilities of edges and nodes are taken into account. The special case of identical edge and node reliabilities (p and rho, respectively) is addressed. We consider a case study based on a commonly used configuration, and assess the influence of the edges being directed (or not) on various measures of network performance. While the two-terminal reliability, the failure frequency and the failure rate of the connection are quite similar, the locations of the complex zeros of the two-terminal reliability polynomials exhibit strong differences and undergo various structural transitions at specific values of rho. The present work could be extended to provide a catalog of exactly solvable networks in terms of reliability, which could be useful as building blocks for new and improved bounds, as well as benchmarks, in the general case.
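
    As a toy illustration of what an exact two-terminal reliability is, the sketch below enumerates all edge states of a five-edge directed graph with perfect nodes and identical edge reliability p, summing the probabilities of the states in which the terminal is reachable from the source. This brute-force enumeration is only feasible for tiny graphs; the paper's contribution is a transfer-matrix product that yields the exact result for much larger recursive topologies. The graph and function names are assumptions made for illustration only.

```python
# Exact two-terminal reliability of a tiny directed network by state
# enumeration (2^m states), with perfect nodes and identical edge
# reliability p. Illustrative only; not the paper's transfer-matrix method.
from itertools import product

# Directed edges of a toy network with source 's' and terminal 't'.
EDGES = [("s", "a"), ("s", "b"), ("a", "b"), ("a", "t"), ("b", "t")]

def connected(up_edges, src="s", dst="t"):
    """Directed reachability from src to dst using only working edges."""
    frontier, seen = [src], {src}
    while frontier:
        node = frontier.pop()
        if node == dst:
            return True
        for u, v in up_edges:
            if u == node and v not in seen:
                seen.add(v)
                frontier.append(v)
    return False

def two_terminal_reliability(p):
    """Sum P(state) over all edge states in which s can reach t."""
    rel = 0.0
    for state in product([True, False], repeat=len(EDGES)):
        up = [e for e, ok in zip(EDGES, state) if ok]
        prob = 1.0
        for ok in state:
            prob *= p if ok else (1.0 - p)
        if connected(up):
            rel += prob
    return rel

if __name__ == "__main__":
    for p in (0.5, 0.9, 0.99):
        print(f"p = {p}: R(s,t) = {two_terminal_reliability(p):.6f}")
```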

    Governance for sustainability: learning from VSM practice

    Purpose – While there is some agreement on the usefulness of systems and complexity approaches for tackling the sustainability challenges facing organisations and governments in the twenty-first century, it is less clear how such approaches can inspire new ways of governance for sustainability. The purpose of this paper is to progress ongoing research using the Viable System Model (VSM) as a meta-language to facilitate long-term sustainability in businesses, communities and societies, using the “Methodology to support self-transformation”, by focusing on ways of learning about governance for sustainability. Design/methodology/approach – The paper summarises core self-governance challenges for long-term sustainability, and the organisational capabilities required to face them, in the “Framework for Assessing Sustainable Governance”. This tool is then used to analyse capabilities for governance for sustainability in three real situations where the mentioned Methodology inspired bottom-up processes of self-organisation. The transformations decided on by each organisation are analysed in terms of capabilities for sustainable governance, using the suggested Framework. Findings – Core technical lessons learned from using the framework are discussed, including the usefulness of a unified language and tool when studying governance for sustainability across differing types and scales of case study organisations. Research limitations/implications – As with other exploratory research, further development and testing of the proposed tools is needed to improve their reliability and robustness. Practical implications – A final conclusion suggests that the proposed tools offer a useful heuristic path for learning about governance for sustainability from a VSM perspective; the learning from each organisational self-transformation regarding governance for sustainability is insightful for policy and strategy design and evaluation, in particular because it allows situations from different scales and types of organisations to be compared. Originality/value – There is very little coherence in the governance literature, and governance for sustainability is an emerging field. This piece of exploratory research is valuable as it presents an effective tool for learning about governance for sustainability, based on the “Methodology for Self-Transformation”, and offers reflections on applications of the methodology and the tool that help clarify the meaning of governance for sustainability in practice, in organisations of different scales and types.

    Fleet Prognosis with Physics-informed Recurrent Neural Networks

    Services and warranties for large fleets of engineering assets are a very profitable business. The success of companies in that area is often tied to predictive maintenance driven by advanced analytics. Therefore, accurate modeling, as a way to understand how the complex interactions between operating conditions and component capability define useful life, is key to services profitability. Unfortunately, building prognosis models for large fleets is a daunting task, as factors such as duty cycle variation, harsh environments, inadequate maintenance, and problems with mass production can lead to large discrepancies between designed and observed useful lives. This paper introduces a novel physics-informed neural network approach to prognosis that extends recurrent neural networks to cumulative damage models. We propose a new recurrent neural network cell designed to merge physics-informed and data-driven layers. With that, engineers and scientists can use physics-informed layers to model parts that are well understood (e.g., fatigue crack growth) and data-driven layers to model parts that are poorly characterized (e.g., internal loads). A simple numerical experiment is used to present the main features of the proposed physics-informed recurrent neural network for damage accumulation. The test problem consists of predicting fatigue crack length for a synthetic fleet of airplanes subject to different mission mixes. The model is trained using full observation of inputs (far-field loads) and very limited observation of outputs (crack length at inspection for only a portion of the fleet). The results demonstrate that our proposed hybrid physics-informed recurrent neural network is able to accurately model fatigue crack growth even when the observed distribution of crack length does not match the (unobservable) fleet distribution. Comment: Data and code (including our implementation of the multi-layer perceptron, the stress intensity and Paris law layers, and the cumulative damage cell, as well as Python driver scripts) used in this manuscript are publicly available on GitHub at https://github.com/PML-UCF/pinn. The data and code are released under the MIT License.
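
    The core idea of the cumulative damage cell can be sketched in a few lines: a recurrent state (crack length) is updated each load cycle by a physics-informed increment, here a Paris-law crack-growth step. The actual implementation at https://github.com/PML-UCF/pinn is built on TensorFlow/Keras and combines this physics layer with trainable data-driven layers for the poorly characterized parts; the numpy sketch below, including the Paris-law constants and the toy load history, is an illustrative assumption rather than the authors' code.

```python
# Minimal numpy sketch of a cumulative-damage recurrent cell in the spirit
# of the paper: a physics-informed layer (Paris-law crack-growth increment)
# unrolled over load cycles. Constants and loads are assumed toy values.
import numpy as np

C, M = 1.5e-11, 3.0   # assumed Paris-law coefficients (m/cycle, dimensionless)

def paris_law_increment(a, delta_sigma):
    """Physics layer: da/dN = C * (dK)^M, with dK = dS * sqrt(pi * a)."""
    delta_k = delta_sigma * np.sqrt(np.pi * a)
    return C * delta_k ** M

def cumulative_damage_cell(a, delta_sigma):
    """One recurrent step: new state = old state + physics-informed increment."""
    return a + paris_law_increment(a, delta_sigma)

def rollout(a0, load_history):
    """Unroll the cell over a sequence of far-field load ranges (MPa)."""
    a, crack = a0, [a0]
    for delta_sigma in load_history:
        a = cumulative_damage_cell(a, delta_sigma)
        crack.append(a)
    return np.array(crack)

if __name__ == "__main__":
    cycles = 50_000
    loads = np.full(cycles, 80.0)                      # toy constant mission
    history = rollout(a0=0.005, load_history=loads)    # 5 mm initial crack
    print(f"crack length after {cycles} cycles: {history[-1] * 1e3:.2f} mm")
```

    In the hybrid cell of the paper, a data-driven layer would sit in front of this physics step, estimating the poorly characterized inputs (such as internal loads) from the observable far-field loads; here a nominal load range is passed directly for brevity.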

    Investigating the impact of networking capability on firm innovation performance: using the resource-action-performance framework

    Purpose – The experience of successful firms has shown that one of the most important ways to promote co-learning and create successful networked innovations is the proper application of inter-organizational knowledge mechanisms. This study uses a resource-action-performance framework to open the black box of the relationship between networking capability and innovation performance. The research population comprises companies in the Iranian automotive industry. Design/methodology/approach – Due to the latent nature of the variables studied, the required data are collected through a web-based cross-sectional survey. First, the content validity of the measurement tool is evaluated by experts. Then, a pre-test is conducted to assess the reliability of the measurement tool. All data are gathered from samples drawn from the Iranian Vehicle Manufacturers Association (IVMA) and the Iranian Auto Parts Manufacturers Association (IAPMA). The power analysis method and G*Power software are used to determine the sample size. Moreover, SmartPLS 3 and IBM SPSS 25 software are used to analyse the conceptual model and the related hypotheses. Findings – The results of this study indicate that the relationships between networking capability, inter-organizational knowledge mechanisms and inter-organizational learning form a self-reinforcing loop, with a marked impact on firm innovation performance. Originality/value – Since there is little understanding of the interdependencies of networking capability, inter-organizational knowledge mechanisms, co-learning and their effect on firm innovation performance, most previous studies have focused on only one or two of these variables, and their cumulative effect has not yet been examined. Looking at inter-organizational relationships from a network perspective and a knowledge-based view (KBV), considering the simultaneous effect of knowledge mechanisms and learning as intermediary actions, and considering the performance effect of the capability-building process are the main contributions of this research.

    Interface refactoring in performance-constrained web services

    This paper presents the development of REF-WS, an approach to enable a Web Service provider to reliably evolve their service through the application of refactoring transformations. REF-WS is intended to aid service providers, particularly in reliability- and performance-constrained domains, as it permits upgraded ’non-backwards compatible’ services to be deployed into a performance-constrained network where existing consumers depend on an older version of the service interface. In order for this to be successful, the refactoring and message mediation need to occur without affecting functional compatibility with the service’s consumers, and must operate within the performance overhead expected of the original service, introducing as little latency as possible. Furthermore, compared to a manually programmed solution, the presented approach enables the service developer to apply and parameterize refactorings with a level of confidence that they will not produce an invalid or ’corrupt’ transformation of messages. This is achieved through the use of preconditions for the defined refactorings.
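
    To make the precondition idea concrete, the sketch below shows one hypothetical refactoring of the kind described here: a request field is renamed in the upgraded service, a precondition checks that the transformation cannot corrupt messages, and a mediator rewrites old-interface messages on the fly. The function names and message format are illustrative assumptions, not the REF-WS API.

```python
# Hypothetical sketch of a precondition-guarded refactoring with message
# mediation between an old and an upgraded service interface.
from copy import deepcopy

def rename_field_refactoring(old_name, new_name):
    """Return (precondition, mediator) for a 'rename field' refactoring."""
    def precondition(sample_message):
        # Valid only if the old field exists and the new name is unused;
        # otherwise mediation could produce a corrupt message.
        return old_name in sample_message and new_name not in sample_message

    def mediate(message):
        # Rewrite an old-interface message into the upgraded interface.
        out = deepcopy(message)
        out[new_name] = out.pop(old_name)
        return out

    return precondition, mediate

if __name__ == "__main__":
    precondition, mediate = rename_field_refactoring("custName", "customerName")
    legacy_request = {"custName": "ACME Ltd", "orderId": 42}
    if precondition(legacy_request):
        print(mediate(legacy_request))  # {'customerName': 'ACME Ltd', 'orderId': 42}
    else:
        print("refactoring rejected: precondition failed")
```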

    Review and Comparison of Intelligent Optimization Modelling Techniques for Energy Forecasting and Condition-Based Maintenance in PV Plants

    Within the field of soft computing, intelligent optimization modelling techniques include various major techniques in artificial intelligence. These techniques aim to generate new business knowledge, transforming sets of "raw data" into business value. One of the principal applications of these techniques is the design of predictive analytics for the improvement of advanced CBM (condition-based maintenance) strategies and energy production forecasting. These advanced techniques can be used to transform control system data, operational data and maintenance event data into failure diagnostic and prognostic knowledge and, ultimately, to derive expected energy generation. One area where these techniques can be applied with massive potential impact is the legacy monitoring systems existing in solar PV energy generation plants. These systems produce a great amount of data over time, while at the same time they demand a considerable effort to increase their performance through the use of more accurate predictive analytics to reduce production losses, which have a direct impact on ROI. How to choose the most suitable techniques to apply is one of the problems to address. This paper presents a review and a comparative analysis of six intelligent optimization modelling techniques, which have been applied to a PV plant case study, using the energy production forecast as the decision variable. The proposed methodology not only aims to identify the most accurate solution but also validates the results by comparing the outputs of the different techniques.
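
    The compare-and-validate pattern described in this abstract can be sketched as follows: several candidate forecasting models are fitted to historical plant data and then ranked on a held-out period. The synthetic irradiance/temperature data and the two scikit-learn models below are placeholders; the paper itself compares six intelligent optimization modelling techniques on real PV plant data.

```python
# Illustrative sketch: fit candidate models for PV energy forecasting on
# (synthetic) historical data, then rank them on a held-out period.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
irradiance = rng.uniform(0, 1000, n)    # W/m^2
temperature = rng.uniform(5, 40, n)     # degC
# Toy generation model: output rises with irradiance, drops slightly with heat.
energy = 0.2 * irradiance - 0.5 * (temperature - 25) + rng.normal(0, 10, n)

X = np.column_stack([irradiance, temperature])
X_train, X_test, y_train, y_test = train_test_split(
    X, energy, test_size=0.25, random_state=0)

models = {
    "linear regression": LinearRegression(),
    "random forest": RandomForestRegressor(n_estimators=200, random_state=0),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    mae = mean_absolute_error(y_test, model.predict(X_test))
    print(f"{name}: MAE = {mae:.2f} (toy units)")
```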

    Optimized Surface Code Communication in Superconducting Quantum Computers

    Quantum computing (QC) is at the cusp of a revolution. Machines with 100 quantum bits (qubits) are anticipated to be operational by 2020 [googlemachine, gambetta2015building], and several-hundred-qubit machines are around the corner. Machines of this scale have the capacity to demonstrate quantum supremacy, the tipping point where QC is faster than the fastest classical alternative for a particular problem. Because error correction techniques will be central to QC and will be the most expensive component of quantum computation, choosing the lowest-overhead error correction scheme is critical to overall QC success. This paper evaluates two established quantum error correction codes---planar and double-defect surface codes---using a set of compilation, scheduling and network simulation tools. In considering scalable methods for optimizing both codes, we do so in the context of a full microarchitectural and compiler analysis. Contrary to previous predictions, we find that the simpler planar codes are sometimes more favorable for implementation on superconducting quantum computers, especially under conditions of high communication congestion. Comment: 14 pages, 9 figures, The 50th Annual IEEE/ACM International Symposium on Microarchitecture

    Enabling stream processing for people-centric IoT based on the fog computing paradigm

    The world of machine-to-machine (M2M) communication is gradually moving from vertical, single-purpose solutions to multi-purpose and collaborative applications interacting across industry verticals, organizations and people - a world of the Internet of Things (IoT). The dominant approach for delivering IoT applications relies on the development of cloud-based IoT platforms that collect all the data generated by the sensing elements and centrally process the information to create real business value. In this paper, we present a system that follows the Fog Computing paradigm, where the sensor resources, as well as the intermediate layers between embedded devices and cloud computing datacenters, participate by providing computational, storage, and control capabilities. We discuss the design aspects of our system and present a pilot deployment for evaluating its performance in a real-world environment. Our findings indicate that Fog Computing can address the ever-increasing amount of data that is inherent in an IoT world through effective communication among all elements of the architecture.
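
    A minimal sketch of the fog-computing pattern described here: an intermediate fog node aggregates raw sensor readings locally and forwards only windowed summaries upstream, so the cloud platform no longer has to ingest every reading. The window size, the sensor stream, and the forward_to_cloud stub are illustrative assumptions, not part of the paper's system.

```python
# Sketch of edge-side stream processing on a fog node: aggregate locally,
# forward only compact summaries to the cloud.
from statistics import mean

WINDOW = 10  # readings aggregated per summary (assumed value)

def forward_to_cloud(summary):
    # Stand-in for an uplink to a cloud IoT platform.
    print("to cloud:", summary)

def fog_node(readings):
    """Aggregate raw readings locally and forward only windowed summaries."""
    window = []
    for value in readings:
        window.append(value)
        if len(window) == WINDOW:
            forward_to_cloud({"mean": round(mean(window), 2),
                              "min": round(min(window), 2),
                              "max": round(max(window), 2)})
            window.clear()

if __name__ == "__main__":
    import random
    random.seed(1)
    raw_stream = [random.gauss(21.0, 0.5) for _ in range(50)]  # e.g. temperature
    fog_node(raw_stream)   # 50 raw readings -> 5 summaries sent upstream
```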