
    Beyond the hegemonic narrative – a study of managers

    Purpose – The aim is to analyse managerial behaviour using narrative analysis to identify stories that are often ignored, silenced or missed by the hegemonic managerialist narrative. Design/methodology/approach – An ethnographic narrative based on an 18-month period of participant observation during which the author was a manager in a business unit acquired by another company for $1 billion. Findings – Strategy can be diverted or altered by managers lower down the organization in a counter-strategy process. This is consistent with Dalton's findings, in which managers lower down the organization adapt and change strategy to make it work in practice. Research limitations/implications – Participant observation and ethnomethodological narrative analysis have the potential to go beyond the hegemonic managerialist literature and identify a much more complex picture. However, such research is always open to criticism as being from the author's "own perspective" and appearing to claim "omnipresence." Other stories have been given voice, but it is never possible to say that all stories have been recovered from the silencing processes of the organization. Practical implications – A clearer understanding of how managers operate counter strategies within an organization in practice. This enables organizations to reconsider how they engage managers beyond the hegemonic narrative. Originality/value – This paper provides an insight into management behaviour beyond the usual treatment of managers as an amorphous mass, as is common in most of the hegemonic managerialist narrative. When managers are told the narratives in this paper, they can recount their own similar stories, yet these are rarely told.

    Redundancy Scheduling with Locally Stable Compatibility Graphs

    Redundancy scheduling is a popular concept for improving performance in parallel-server systems. In the baseline scenario, any job can be handled equally well by any server and is replicated to a fixed number of servers selected uniformly at random. Quite often, however, there may be heterogeneity in job characteristics or server capabilities, and jobs can only be replicated to specific servers because of affinity relations or compatibility constraints. In order to capture such situations, we consider a scenario where jobs of various types are replicated to different subsets of servers as prescribed by a general compatibility graph. We exploit a product-form stationary distribution and weak local stability conditions to establish a state space collapse in heavy traffic. In this limiting regime, the parallel-server system with graph-based redundancy scheduling operates as a multi-class single-server system, achieving full resource pooling and exhibiting strong insensitivity to the underlying compatibility constraints.
    Comment: 28 pages, 4 figures
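    To make the graph-based replication mechanism concrete, the sketch below simulates redundancy scheduling with cancel-on-start on a small, assumed compatibility graph (the COMPATIBILITY table, the simulate function and all parameters are illustrative and not taken from the paper). With FCFS queues, replicating a job to every compatible server and serving the replica that reaches a server first is equivalent to dispatching the job to the compatible server with the least unfinished work, which is what the code tracks.

```python
import random

# Minimal sketch of redundancy scheduling with a compatibility graph under
# cancel-on-start: each job is replicated to all compatible servers and the
# replica that enters service first "wins"; the others are cancelled.
# The graph and parameters below are assumptions for illustration only.

COMPATIBILITY = {        # job type -> compatible servers (assumed example graph)
    "A": ("s1", "s2"),
    "B": ("s2", "s3"),
    "C": ("s1", "s3"),
}

def simulate(num_jobs=100_000, arrival_rate=2.0, service_rate=1.0, seed=0):
    rng = random.Random(seed)
    servers = {s for group in COMPATIBILITY.values() for s in group}
    free_at = {s: 0.0 for s in servers}   # time each server clears its backlog
    t = 0.0
    total_response = 0.0
    for _ in range(num_jobs):
        t += rng.expovariate(arrival_rate)            # Poisson arrivals
        jtype = rng.choice(list(COMPATIBILITY))       # uniformly random job type
        # the winning replica is the one whose server can start it earliest
        winner = min(COMPATIBILITY[jtype], key=lambda s: max(t, free_at[s]))
        start = max(t, free_at[winner])
        finish = start + rng.expovariate(service_rate)
        free_at[winner] = finish                      # losing replicas are cancelled
        total_response += finish - t
    return total_response / num_jobs

if __name__ == "__main__":
    print(f"mean response time: {simulate():.3f}")
```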

    Lifted Multiplicity Codes and the Disjoint Repair Group Property

    Lifted Reed-Solomon codes (Guo, Kopparty, Sudan 2013) were introduced in the context of locally correctable and testable codes. They are multivariate polynomials whose restriction to any line is a codeword of a Reed-Solomon code. We consider a generalization of their construction, which we call lifted multiplicity codes. These are multivariate polynomial codes whose restriction to any line is a codeword of a multiplicity code (Kopparty, Saraf, Yekhanin 2014). We show that lifted multiplicity codes have a better trade-off between redundancy and a notion of locality called the t-disjoint-repair-group property than previously known constructions. More precisely, we show that, for t <= sqrt(N), lifted multiplicity codes with length N and redundancy O(t^{0.585} sqrt(N)) have the property that any symbol of a codeword can be reconstructed in t different ways, each using a disjoint subset of the other coordinates. This gives the best known trade-off for this problem for any super-constant t < sqrt(N). We also give an alternative analysis of lifted Reed-Solomon codes using dual codes, which may be of independent interest.
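    The disjoint-repair-group property can be illustrated with a much simpler relative of the paper's construction: a bivariate polynomial code over a small prime field, where distinct lines through an erased point meet only at that point and therefore yield disjoint repair groups. The sketch below is a toy example under assumed parameters (Q, DEGREE and the function names are not from the paper, and this is not a lifted multiplicity code); it recovers one erased symbol independently along three such lines.

```python
import itertools
import random

# Toy illustration of disjoint repair groups: encode a bivariate polynomial of
# low total degree over F_Q, erase one symbol, and recover it along several
# lines through the erased point.  Each line shares only that point with the
# others, so the repair groups are disjoint.  Parameters are assumptions.

Q = 7            # prime field size F_Q
DEGREE = 3       # total degree bound; any line restriction has degree <= 3

def random_poly(rng):
    """Random bivariate polynomial of total degree <= DEGREE, as a coefficient dict."""
    return {(i, j): rng.randrange(Q)
            for i in range(DEGREE + 1) for j in range(DEGREE + 1 - i)}

def evaluate(poly, x, y):
    return sum(c * pow(x, i, Q) * pow(y, j, Q) for (i, j), c in poly.items()) % Q

def lagrange_at(points, t0):
    """Interpolate (t, value) pairs over F_Q and evaluate the result at t0."""
    total = 0
    for i, (ti, vi) in enumerate(points):
        num, den = 1, 1
        for j, (tj, _) in enumerate(points):
            if i != j:
                num = num * (t0 - tj) % Q
                den = den * (ti - tj) % Q
        total = (total + vi * num * pow(den, Q - 2, Q)) % Q   # modular inverse via Fermat
    return total

def main():
    rng = random.Random(1)
    poly = random_poly(rng)
    codeword = {(x, y): evaluate(poly, x, y)
                for x, y in itertools.product(range(Q), repeat=2)}
    ax, ay = 2, 5                        # erased coordinate
    true_value = codeword[(ax, ay)]
    # lines through (ax, ay) with distinct slopes share no other point,
    # so each one is a disjoint repair group for the erased symbol
    for slope in range(3):               # t = 3 disjoint repair groups
        group = []
        for t in range(1, DEGREE + 2):   # DEGREE + 1 other points suffice
            x, y = (ax + t) % Q, (ay + slope * t) % Q
            group.append((t, codeword[(x, y)]))
        recovered = lagrange_at(group, 0)    # the erased point sits at t = 0
        print(f"slope {slope}: recovered {recovered}, true {true_value}")

if __name__ == "__main__":
    main()
```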

    Self-repairing Homomorphic Codes for Distributed Storage Systems

    Erasure codes provide a storage-efficient alternative to replication-based redundancy in (networked) storage systems. They however entail high communication overhead for maintenance when some of the encoded fragments are lost and need to be replenished. Such overheads arise from the fundamental need to first recreate (or keep separately) a copy of the whole object before any individual encoded fragment can be generated and replenished. There has recently been intense interest in exploring alternatives, the most prominent ones being regenerating codes (RGC) and hierarchical codes (HC). We propose as an alternative a new family of codes to improve the maintenance process, which we call self-repairing codes (SRC), with the following salient features: (a) encoded fragments can be repaired directly from other subsets of encoded fragments without having to reconstruct the original data first, ensuring that (b) a fragment is repaired from a fixed number of encoded fragments, a number that depends only on how many encoded blocks are missing and is independent of which specific blocks are missing. These properties allow not only low communication overhead to recreate a missing fragment, but also independent reconstruction of different missing fragments in parallel, possibly in different parts of the network. We analyze the static resilience of SRCs with respect to traditional erasure codes, and observe that SRCs incur marginally larger storage overhead in order to achieve the aforementioned properties. The salient SRC properties naturally translate to low communication overheads for reconstruction of lost fragments, and allow reconstruction with lower latency by facilitating repairs in parallel. These desirable properties make self-repairing codes a good and practical candidate for networked distributed storage systems.
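    The idea of repairing a fragment directly from a few other fragments can be sketched with a toy XOR-based code, which is not the paper's homomorphic construction: the object is split into three blocks and every non-zero XOR combination of blocks is stored as a fragment, so any lost fragment equals the XOR of two surviving ones and can be rebuilt without first reconstructing the whole object.

```python
from itertools import combinations

# Toy sketch in the spirit of self-repairing codes (not the paper's homomorphic
# construction): fragments are all non-zero XOR combinations of three data
# blocks; a lost fragment is repaired by XOR-ing two surviving fragments whose
# index sets have the lost fragment's index set as their symmetric difference.

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encode(blocks):
    """Return a dict: frozenset of block indices -> XOR of those blocks."""
    n = len(blocks)
    fragments = {}
    for r in range(1, n + 1):
        for subset in combinations(range(n), r):
            value = blocks[subset[0]]
            for i in subset[1:]:
                value = xor(value, blocks[i])
            fragments[frozenset(subset)] = value
    return fragments

def repair(lost_key, surviving):
    """Rebuild the lost fragment from two surviving ones, without decoding the object."""
    for k1, k2 in combinations(surviving, 2):
        if k1 ^ k2 == lost_key:          # symmetric difference of index sets
            return xor(surviving[k1], surviving[k2])
    raise ValueError("not enough surviving fragments for a two-fragment repair")

if __name__ == "__main__":
    blocks = [b"ABCD", b"EFGH", b"IJKL"]              # the object, split into three blocks
    frags = encode(blocks)                             # 7 stored fragments
    lost = frozenset({0, 1})                           # lose the fragment for blocks 0 XOR 1
    expected = frags.pop(lost)
    print(repair(lost, frags) == expected)             # True
```

    Because several disjoint pairs of surviving fragments XOR to the same lost fragment, different missing fragments can be repaired in parallel from different subsets, which is the behaviour the abstract highlights.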

    Detecting and Refactoring Operational Smells within the Domain Name System

    The Domain Name System (DNS) is one of the most important components of the Internet infrastructure. DNS relies on a delegation-based architecture, where resolution of names to their IP addresses requires resolving the names of the servers responsible for those names. The recursive structures of the interdependencies that exist between name servers associated with each zone are called dependency graphs. System administrators' operational decisions have far-reaching effects on the DNS's qualities. They need to be soundly made to create a balance between the availability, security and resilience of the system. We utilize dependency graphs to identify, detect and catalogue operational bad smells. Our method deals with smells at a high level of abstraction, using a consistent taxonomy and reusable vocabulary defined by a DNS Operational Model. The method will be used to build a diagnostic advisory tool that will detect configuration changes that might decrease the robustness or security posture of domain names before they go into production.
    Comment: In Proceedings GaM 2015, arXiv:1504.0244
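    As a hypothetical illustration of how a dependency graph can expose an operational smell (the paper's DNS Operational Model and smell catalogue are not reproduced here), the sketch below builds a zone-level dependency graph from an assumed name-server table and flags cyclic dependencies between zones.

```python
from collections import defaultdict

# Hypothetical sketch: an edge A -> B means resolving zone A requires a name
# server whose host name falls under zone B.  Cyclic dependencies between
# zones are flagged as one example of an operational bad smell.
# The zone data below is invented for illustration.

ZONE_NAMESERVERS = {
    "example.com.": ["ns1.example.net.", "ns2.example.net."],
    "example.net.": ["ns1.example.com."],        # depends back on example.com.
    "example.org.": ["ns1.example.org."],        # in-bailiwick, no external dependency
}

def parent_zone(hostname, zones):
    """Return the known zone the host name falls under, if any."""
    return next((z for z in zones if hostname == z or hostname.endswith("." + z)), None)

def dependency_graph(zone_ns):
    graph = defaultdict(set)
    for zone, servers in zone_ns.items():
        for ns in servers:
            target = parent_zone(ns, zone_ns)
            if target and target != zone:
                graph[zone].add(target)
    return graph

def find_cycles(graph):
    """Simple DFS cycle detection; returns the zones involved in a cycle."""
    smelly, state = set(), {}
    def dfs(node, path):
        state[node] = "visiting"
        for nxt in graph.get(node, ()):
            if state.get(nxt) == "visiting":
                smelly.update(path[path.index(nxt):] + [node])
            elif nxt not in state:
                dfs(nxt, path + [node])
        state[node] = "done"
    for zone in graph:
        if zone not in state:
            dfs(zone, [])
    return smelly

if __name__ == "__main__":
    graph = dependency_graph(ZONE_NAMESERVERS)
    print("cyclic dependency smell in:", sorted(find_cycles(graph)))
```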