
    Progressive damage assessment and network recovery after massive failures

    After a massive-scale failure, the assessment of damage to communication networks requires both local interventions and remote monitoring. While previous works on network recovery require complete knowledge of the damage extent, we address the problems of damage assessment and critical service restoration jointly. We propose a polynomial-time algorithm called Centrality-based Damage Assessment and Recovery (CeDAR), which performs failure monitoring and restoration of network components as a joint activity. CeDAR works under limited availability of recovery resources and optimizes service recovery over time. We modified two existing approaches to network recovery so that they, too, can exploit incremental knowledge of the failure extent. Through simulations we show that CeDAR outperforms the previous approaches in terms of recovery resource utilization and accumulative flow of the critical service over time.
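    CeDAR's exact procedure is not given in the abstract; a minimal sketch of the general idea it names (repairing failed components in order of centrality, under a per-round resource budget) might look like the following, where the topology, failure set, and budget are all illustrative assumptions:

    ```python
    # Hypothetical sketch of centrality-driven repair scheduling; this is
    # not the published CeDAR algorithm, only an illustration of the idea.
    import networkx as nx

    def repair_schedule(graph, failed_nodes, budget_per_round):
        """Group failed nodes into repair rounds, most central first."""
        centrality = nx.betweenness_centrality(graph)
        ranked = sorted(failed_nodes, key=lambda n: centrality[n], reverse=True)
        # Spend at most `budget_per_round` repairs in each round.
        return [ranked[i:i + budget_per_round]
                for i in range(0, len(ranked), budget_per_round)]

    G = nx.erdos_renyi_graph(30, 0.15, seed=1)        # toy topology
    print(repair_schedule(G, [0, 3, 7, 12, 21], budget_per_round=2))
    ```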

    A Model of Total Factor Productivity Built on Hayek’s View of Knowledge: What Really Went Wrong with Socialist Planned Economies?

    Because Hayek’s view goes beyond the Walrasian framework, his descriptive arguments on socialist planned economies are prone to being misunderstood. This paper clarifies Hayek’s arguments by using them as the basis for a model of total factor productivity. The model shows that productivity depends substantially on the intelligence of ordinary workers. It indicates that the essential source of the productivity gap is this: even though human beings are imperfect and do not know everything about the universe, they are able to utilize their intelligence to innovate. Decentralized market economies are far more productive than socialist economies because they can intrinsically make full use of this intelligence, whereas socialist planned economies cannot, in large part because they rely on an imagined perfect central planning bureau that does not exist.

    Keywords: Hayek; Market economy; Socialist planned economy; Total factor productivity; Innovation; Experience curve effect; China
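    The abstract does not reproduce the model's specification; for orientation only, the standard growth-accounting definition of total factor productivity that such a model builds on is shown below, the paper's contribution being (roughly) to explain $A_t$ in terms of how fully workers' intelligence can be utilized:

    ```latex
    % Standard Cobb--Douglas growth accounting; A_t (TFP) is the residual.
    % This is textbook background, not the paper's own specification.
    \[
      Y_t = A_t\, K_t^{\alpha} L_t^{1-\alpha},
      \qquad
      A_t = \frac{Y_t}{K_t^{\alpha}\, L_t^{1-\alpha}},
    \]
    % where Y_t is output, K_t capital, L_t labour, and 0 < \alpha < 1.
    ```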

    Managing the boundary of an 'open' project

    In the past ten years, the boundaries between public and open science and commercial research efforts have become more porous, and scholars have examined more critically the ways in which these two institutional regimes intersect. Large open source software projects have also attracted commercial collaborators and now struggle to develop code in an open public environment while still protecting their communal boundaries. This research applies a dynamic social network approach to understand how one community-managed software project, Debian, developed a membership process. We examine the project's face-to-face social network over a five-year period (1997-2001) to see how changes in the social structure affected the evolution of membership mechanisms and the determination of gatekeepers. While the amount and importance of a contributor's work increased the probability that the contributor would become a gatekeeper, those more central in the social network were more likely to become gatekeepers and to influence the membership process. A greater understanding of the mechanisms open projects use to manage their boundaries has critical implications for research and for knowledge-producing communities operating in pluralistic, open and distributed environments.

    Keywords: open source software; social networks; organizational design; institutional design
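    The study's statistical models are not reproduced in the abstract; a minimal sketch of the kind of centrality comparison it describes, with an entirely invented toy network and outcome labels, might look like:

    ```python
    # Illustrative sketch, not the paper's analysis: check whether members
    # who became gatekeepers were more central in the collaboration network.
    import networkx as nx

    edges = [("a", "b"), ("b", "c"), ("b", "d"), ("c", "e"),
             ("d", "e"), ("e", "f")]          # toy face-to-face ties
    gatekeepers = {"b", "e"}                  # hypothetical outcome labels

    G = nx.Graph(edges)
    centrality = nx.degree_centrality(G)
    mean = lambda xs: sum(xs) / len(xs)

    gk = mean([centrality[n] for n in G if n in gatekeepers])
    rest = mean([centrality[n] for n in G if n not in gatekeepers])
    print(f"gatekeepers: {gk:.2f}, others: {rest:.2f}")   # expect gk > rest
    ```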

    Entanglement consumption of instantaneous nonlocal quantum measurements

    Relativistic causality has dramatic consequences for the measurability of nonlocal variables and poses the fundamental question of whether it is physically meaningful to speak about the value of a nonlocal variable at a particular time. Recent work has shown that, by weakening the role of the measurement in preparing eigenstates of the variable, it is in fact possible to measure all nonlocal observables instantaneously by exploiting entanglement. However, for these measurement schemes to succeed with certainty, an infinite amount of entanglement must be distributed initially, and all of this entanglement is necessarily consumed. In this work we sharpen the characterisation of instantaneous nonlocal measurements by explicitly devising schemes in which only a finite amount of the initially distributed entanglement is ever utilised. This enables us to determine an upper bound on the average consumption for the most general cases of nonlocal measurements. These include the task of state verification, where the measurement verifies whether the system is in a given state, and verification measurements of a general set of eigenstates of an observable. Despite its finiteness, the entanglement consumption is found to grow with an extremely unfavourable exponential-of-an-exponential scaling in either the number of qubits needed to contain the Schmidt rank of the target state or the total number of qubits in the system for an operator measurement. This scaling is a consequence of the generic exponential scaling of unitary decompositions combined with the highly recursive structure of our scheme, which is required to overcome the no-signalling constraint of relativistic causality.

    Comment: 32 pages and 14 figures. Updated to published version.
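    The abstract states the scaling only qualitatively; in symbols, a bound of the doubly exponential form it describes (constants suppressed; all notation below is assumed, not taken from the paper) would read:

    ```latex
    % Schematic form of the reported scaling only; the paper's precise
    % bound and constants are not given in the abstract.
    \[
      \langle E \rangle \;=\; O\!\bigl(2^{\,c\,2^{n}}\bigr),
    \]
    % where n is the number of qubits needed to contain the Schmidt rank
    % of the target state (state verification) or the total number of
    % qubits (operator measurement), and c > 0 is a constant.
    ```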

    Re-reengineering the dream: agility as competitive adaptability

    Organizational adaptation and transformative change management in technology-based organizations are explored in the context of collaborative alliances. A re-reengineering approach is outlined in which a new Competitive Adaptability Five-Influences Analysis, applied under conditions of collaborative alliance, is described as an alternative to Porter's Five-Forces competitive rivalry analysis model. Whilst continuous change in technology and the associated effects of technology shock (Dedola & Neri, 2006; Christiano, Eichenbaum & Vigfusson, 2003) are not new constructs, the reality of the industrial age was, and is, a continuing reduction in the window of relevance and the lifetime of a specific technology, and of the skills and expertise base required for its effective implementation. This, combined with increasing pressures for innovation (Tidd & Bessant, 2013) and at times severe impacts from both local and global economic environments (Hitt, Ireland & Hoskisson, 2011), raises serious challenges for contemporary management teams seeking to position a company and its technology base advantageously relative to its suppliers, competitors and customers, as well as in predictive readiness for future technological change and opportunistic adaptation. In effect, the life-cycle of a technology has typically become one of disruptive change and rapid adjustment, followed by a plateau as a particular technology or process captures and holds its position against minor challenges, eventually to be displaced by yet another alternative (Bower & Christensen, 1995).

    Are cyber-blackouts in service networks likely?: Implications for Aggregate Cyber Risk Management

    @TechReport{UCAM-CL-TR-926,
        author      = {Pal, Ranjan and Psounis, Konstantinos and Kumar, Abhishek and Crowcroft, Jon and Hui, Pan and Golubchik, Leana and Kelly, John and Chatterjee, Aritra and Tarkoma, Sasu},
        title       = {{Are cyber-blackouts in service networks likely?: implications for cyber risk management}},
        year        = 2018,
        month       = oct,
        url         = {https://www.cl.cam.ac.uk/techreports/UCAM-CL-TR-926.pdf},
        institution = {University of Cambridge, Computer Laboratory},
        number      = {UCAM-CL-TR-926}
    }

    Service liability interconnections among networked IT and IoT-driven service organizations create potential channels for cascading service disruptions caused by modern cybercrimes such as DDoS, APT, and ransomware attacks. The recent Mirai DDoS and WannaCry ransomware attacks are famous examples of cyber-incidents that caused catastrophic service disruptions worth billions of dollars across organizations around the globe. A natural question that arises in this context is "what is the likelihood of a cyber-blackout?", where the latter term is defined as the probability that all (or a major subset of) organizations in a service chain become dysfunctional in a certain manner due to a cyber-attack at some or all points in the chain. The answer to this question has major implications for risk management businesses such as cyber-insurance when it comes to designing policies by which risk-averse insurers provide coverage to clients in the aftermath of such catastrophic network events. In this paper, we investigate this question as a function of service chain network structure and loss distribution type. We show, somewhat surprisingly, that following a cyber-attack the probability of a cyber-blackout, and the increase in total service-related monetary losses across all organizations due to (a) network interconnections and (b) a wide range of loss distributions, are mostly very small regardless of the network structure. We attribute the results primarily to the heterogeneity of wealth bases among organizations and to the Increasing Failure Rate (IFR) property of the loss distributions, and we discuss the potential practical implications.
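    The paper's model is not specified in the abstract; a toy Monte Carlo sketch of the question it poses (the chance that every organization in a service chain exceeds its loss-bearing capacity after an attack), with the wealth values, Gamma loss distribution, and all parameters invented for illustration, could look like:

    ```python
    # Toy Monte Carlo estimate of a "cyber-blackout" probability; this is
    # not the paper's model, only an illustration of the question it asks.
    import random

    def blackout_probability(wealth, trials=100_000, seed=0):
        """Fraction of trials in which every organization's sampled loss
        exceeds its wealth. Gamma losses with shape >= 1 are IFR."""
        rng = random.Random(seed)
        hits = 0
        for _ in range(trials):
            losses = [rng.gammavariate(2.0, 1.0) for _ in wealth]
            if all(loss > w for loss, w in zip(losses, wealth)):
                hits += 1
        return hits / trials

    # Heterogeneous wealth across a five-organization chain makes the
    # joint-failure event extremely rare, echoing the abstract's finding.
    print(blackout_probability([1.0, 2.0, 4.0, 8.0, 16.0]))
    ```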

    The Budget-Constrained Functional Dependency

    Armstrong's axioms of functional dependency form a well-known logical system that captures properties of functional dependencies between sets of database attributes. This article assumes that there are costs associated with attributes and proposes an extension of Armstrong's system for reasoning about budget-constrained functional dependencies in such a setting. The main technical result of this article is the completeness theorem for the proposed logical system. Although the proposed axioms are obtained by just adding cost subscripts to the original Armstrong axioms, the proof of completeness for the proposed system is significantly more complicated than that for Armstrong's system.
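    The article's exact rules are not reproduced in the abstract; writing $A \vdash_{c} B$ for "attributes $A$ functionally determine $B$ within budget $c$", a hypothetical cost-annotated rendering of Armstrong's three axioms, for illustration only, might look like:

    ```latex
    % Hypothetical cost-annotated reading of Armstrong's axioms; the
    % article's actual rules and side conditions may differ.
    \begin{align*}
      &\text{Reflexivity:}  && B \subseteq A \;\Rightarrow\; A \vdash_{0} B \\
      &\text{Augmentation:} && A \vdash_{c} B \;\Rightarrow\; A \cup C \vdash_{c} B \cup C \\
      &\text{Transitivity:} && A \vdash_{c_{1}} B,\; B \vdash_{c_{2}} C \;\Rightarrow\; A \vdash_{c_{1}+c_{2}} C
    \end{align*}
    ```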

    Effect of Cooperation on Economic Growth of Both China and Japan

    This paper tries to measure the effect of cooperation on the economic growth of both China and Japan by setting up an econometric model. The measurement is based on the basic framework of cooperation economics (Huang Shao-an, 2000). The structure of the paper is as follows: the first part introduces the basic ideas and analytical methods of cooperation economics; the second part establishes an econometric model for measuring the effect of cooperation on the economic growth of both countries; the third part measures the degree of cooperation between China and Japan along two dimensions, political factors and bilateral trade, and lists all the macroeconomic data that the econometric model needs; the fourth part employs the econometric model and the macroeconomic data to calculate the effect of cooperation on the economic growth of both countries; the final part is a brief conclusion.
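    The abstract does not give the model's functional form; a sketch of the kind of specification it describes (growth regressed on a cooperation index built from political and trade factors; every variable name below is an assumption) might be:

    ```latex
    % Illustrative specification only; the paper's actual model, variables,
    % and estimation method are not stated in the abstract.
    \[
      g_{i,t} = \alpha + \beta\,\mathit{COOP}_{t} + \gamma^{\prime}\mathbf{x}_{i,t} + \varepsilon_{i,t},
    \]
    % where g_{i,t} is GDP growth of country i (China or Japan) in year t,
    % COOP_t is a cooperation index combining political relations and
    % bilateral trade, and x_{i,t} collects standard growth controls.
    ```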

    Public Transport Timetables and Vehicle Scheduling with Balanced Passenger Loads

    This work attempts to combine the creation of public transport timetables with vehicle scheduling so as to improve the correspondence of vehicle departure times with passenger demand while minimising the resources required (the fleet size). The methods presented for handling the two components simultaneously can be applied to both single and interlining transit routes, and can be carried out in an automated manner. With the growing problems of transit reliability, and advances in the technology of passenger information systems, the importance of even and clock headways is reduced. This opens the possibility of creating more efficient schedules from both the passenger and operator perspectives. The methodological framework contains a developed algorithm for the derivation of vehicle departure times (a timetable) with even average loads and with smoothing in the transition between time periods, while ensuring that the derived timetables can be carried out by the minimum number of vehicles. The procedures presented are accompanied by examples and clear graphical explanations. It is emphasised that the public timetable is one of the predominant bridges between the operator (and community) and the passengers.
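    The paper's derivation is not given in the abstract; a minimal sketch of the even-load idea (placing departures so that each trip carries roughly the same expected load, rather than at even headways), with an invented demand profile, might look like:

    ```python
    # Illustrative even-load timetabling sketch; this is not the paper's
    # algorithm. Departures are placed whenever a fixed "desired load" of
    # passengers has accumulated, instead of at fixed headways.
    def even_load_departures(arrival_rate, horizon, desired_load, step=1.0):
        """arrival_rate(t): passengers/min at minute t; returns departure
        times (minutes) at which roughly `desired_load` pax have gathered."""
        departures, accumulated, t = [], 0.0, 0.0
        while t < horizon:
            accumulated += arrival_rate(t) * step
            if accumulated >= desired_load:
                departures.append(t)
                accumulated -= desired_load
            t += step
        return departures

    # Demand peaks in minutes 60-120, so departures bunch where load
    # builds fastest and spread out where demand is low.
    rate = lambda t: 2.0 + 8.0 * (60 <= t < 120)     # pax per minute
    print(even_load_departures(rate, horizon=180, desired_load=100))
    ```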