
    What is Strategic Competence and Does it Matter? Exposition of the Concept and a Research Agenda

    Drawing on a range of theoretical and empirical insights from strategic management and the cognitive and organizational sciences, we argue that strategic competence constitutes the ability of organizations and the individuals who operate within them to work within their cognitive limitations in such a way that they are able to maintain an appropriate level of responsiveness to the contingencies confronting them. Using the language of the resource-based view of the firm, we argue that this meta-level competence represents a confluence of individual and organizational characteristics, suitably configured to enable the detection of those weak signals indicative of the need for change and to act accordingly, thereby minimising the dangers of cognitive bias and cognitive inertia. In an era of unprecedented informational burdens and instability, we argue that this competence is central to the longer-term survival and well-being of the organization. We conclude with a consideration of the major scientific challenges that lie ahead, if the ideas contained within this paper are to be validated.

    Compositional competitiveness for distributed algorithms

    We define a measure of competitive performance for distributed algorithms based on throughput, the number of tasks that an algorithm can carry out in a fixed amount of work. This new measure complements the latency measure of Ajtai et al., which measures how quickly an algorithm can finish tasks that start at specified times. The novel feature of the throughput measure, which distinguishes it from the latency measure, is that it is compositional: it supports a notion of algorithms that are competitive relative to a class of subroutines, with the property that an algorithm that is k-competitive relative to a class of subroutines, combined with an l-competitive member of that class, gives a combined algorithm that is kl-competitive. In particular, we prove the throughput-competitiveness of a class of algorithms for collect operations, in which each of a group of n processes obtains all values stored in an array of n registers. Collects are a fundamental building block of a wide variety of shared-memory distributed algorithms, and we show that several such algorithms are competitive relative to collects. Inserting a competitive collect in these algorithms gives the first examples of competitive distributed algorithms obtained by composition using a general construction. Comment: 33 pages, 2 figures; full version of STOC 96 paper titled "Modular competitiveness for distributed algorithms".
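The collect primitive the abstract builds on can be sketched in a few lines. The following is a minimal, naive illustration (not the paper's competitive algorithm): n processes share an array of n single-writer registers; each process writes its own value, then performs a collect by reading every register. The payload values and register layout here are illustrative assumptions.

```python
import threading

def run_collect(n):
    """Naive collect sketch: n processes, n single-writer registers.

    Process i writes its own value into registers[i], then collects by
    reading all n registers into a local copy. A collect need not be an
    atomic snapshot; reading every slot once is the simplest valid form.
    """
    registers = [None] * n            # one single-writer register per process
    results = [None] * n              # each process's collected view

    def process(i):
        registers[i] = i * i          # write own value (arbitrary payload)
        results[i] = list(registers)  # collect: read all n registers

    threads = [threading.Thread(target=process, args=(i,)) for i in range(n)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return registers, results
```

Each process is guaranteed to see at least its own write in its collected view; which other writes it sees depends on the interleaving, which is exactly the nondeterminism competitive analysis has to account for.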

    The Transactional Conflict Problem

    The transactional conflict problem arises in transactional systems whenever two or more concurrent transactions clash on a data item. While the standard solution to such conflicts is to immediately abort one of the transactions, some practical systems consider the alternative of delaying conflict resolution for a short interval, which may allow one of the transactions to commit. The challenge in the transactional conflict problem is to choose the optimal length of this delay interval so as to minimize the overall running time penalty for the conflicting transactions. In this paper, we propose a family of optimal online algorithms for the transactional conflict problem. Specifically, we consider variants of this problem which arise in different implementations of transactional systems, namely "requestor wins" and "requestor aborts" implementations: in the former, the recipient of a coherence request is aborted, whereas in the latter, it is the requestor which has to abort. Both strategies are implemented by real systems. We show that the requestor aborts case can be reduced to a classic instance of the ski rental problem, while the requestor wins case leads to a new version of this classical problem, for which we derive optimal deterministic and randomized algorithms. Moreover, we prove that, under a simplified adversarial model, our algorithms are constant-competitive with the offline optimum in terms of throughput. We validate our algorithmic results empirically through simulation of hardware transactional memory (HTM), showing that our algorithms can lead to non-trivial performance improvements for classic concurrent data structures.
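The ski-rental reduction mentioned above can be illustrated concretely: waiting one more time step is "renting", aborting is "buying". A minimal sketch of the classical deterministic break-even strategy, assuming unit-cost time steps and a hypothetical abort_cost parameter; this shows the textbook 2-competitive rule, not the paper's exact algorithm.

```python
def resolve_conflict(abort_cost, commit_time):
    """Deterministic break-even (ski rental) strategy for one conflict.

    Wait one unit-cost step at a time; if the conflicting transaction
    commits first, we pay only the time waited. Once accumulated waiting
    cost reaches abort_cost, give up and pay the abort penalty on top.
    commit_time is the step at which waiting would have succeeded; the
    online algorithm does not know it in advance. Returns total cost.
    """
    waited = 0
    while waited < abort_cost:          # break-even threshold
        if waited >= commit_time:       # conflict resolved itself
            return waited               # paid only the waiting time
        waited += 1                     # pay one more unit of "rent"
    return waited + abort_cost          # abort: waiting + abort penalty

def optimal_cost(abort_cost, commit_time):
    """Offline optimum, which knows commit_time: wait iff it is cheaper."""
    return min(commit_time, abort_cost)
```

For any commit_time, the break-even strategy pays at most twice the offline optimum: if the conflict resolves early it matches the optimum exactly; if not, it pays abort_cost in waiting plus abort_cost for the abort, while the optimum pays abort_cost once.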

    Parallel Sort-Based Matching for Data Distribution Management on Shared-Memory Multiprocessors

    In this paper we consider the problem of identifying intersections between two sets of d-dimensional axis-parallel rectangles. This is a common problem that arises in many agent-based simulation studies, and is of central importance in the context of High Level Architecture (HLA), where it is at the core of the Data Distribution Management (DDM) service. Several realizations of the DDM service have been proposed; however, many of them are either inefficient or inherently sequential. These are serious limitations since multicore processors are now ubiquitous, and DDM algorithms -- being CPU-intensive -- could benefit from additional computing power. We propose a parallel version of the Sort-Based Matching algorithm for shared-memory multiprocessors. Sort-Based Matching is one of the most efficient serial algorithms for the DDM problem, but is quite difficult to parallelize due to data dependencies. We describe the algorithm and compute its asymptotic running time; we complete the analysis by assessing its performance and scalability through extensive experiments on two commodity multicore systems based on a dual socket Intel Xeon processor, and a single socket Intel Core i7 processor. Comment: Proceedings of the 21st ACM/IEEE International Symposium on Distributed Simulation and Real Time Applications (DS-RT 2017). Best Paper Award @DS-RT 2017.
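The one-dimensional core of Sort-Based Matching can be sketched briefly: axis-parallel rectangles intersect iff their projections overlap on every dimension, so the problem reduces to interval overlap per axis. A minimal serial sketch, assuming closed intervals given as (lo, hi) pairs; this is the classic sequential idea, not the parallel variant proposed in the paper.

```python
def sort_based_matching(subscriptions, updates):
    """1-D Sort-Based Matching sketch.

    subscriptions, updates: lists of (lo, hi) closed intervals.
    Returns the set of (subscription_index, update_index) pairs whose
    intervals overlap. All endpoints are sorted once; a single sweep
    then maintains the sets of currently "open" intervals on each side.
    """
    events = []                       # (coordinate, order, kind, index)
    for i, (lo, hi) in enumerate(subscriptions):
        events.append((lo, 0, 'S_open', i))
        events.append((hi, 1, 'S_close', i))
    for j, (lo, hi) in enumerate(updates):
        events.append((lo, 0, 'U_open', j))
        events.append((hi, 1, 'U_close', j))
    events.sort()                     # opens sort before closes at equal coords

    open_subs, open_upds, matches = set(), set(), set()
    for _, _, kind, idx in events:
        if kind == 'S_open':
            open_subs.add(idx)
            matches.update((idx, j) for j in open_upds)
        elif kind == 'U_open':
            open_upds.add(idx)
            matches.update((i, idx) for i in open_subs)
        elif kind == 'S_close':
            open_subs.discard(idx)
        else:                         # 'U_close'
            open_upds.discard(idx)
    return matches
```

The dominant cost is the single sort of the 2(n+m) endpoints; the sweep itself is linear in the number of events plus the number of reported matches, which is what makes the sort phase the natural target for parallelization.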

    Internet of things security implementation using blockchain for wireless technology

    Blockchain is a security mechanism that groups data into blocks. A block can take many forms, and each contains data and a security code. Through a decentralized mechanism, one security code protects all of the data; this can be carried out at the server. In this research, a wireless sensor network is proposed in which sensor data is transmitted using Internet of Things (IoT) technology. As data is transmitted, it is classified and grouped into blocks. All blocks are then sent to a central processing unit, such as a microcontroller, where the data is processed, identified, and encrypted before being sent over the internet. At the receiver, a GUI or app is developed to open and view the data. The app or GUI holds an encrypted security code, and the user must enter a password before viewing the data. The password entered by the end user must match the one encrypted at the sensor nodes, satisfying the decentralized concept used in blockchain. To demonstrate blockchain technology applied to the wireless sensor network, a MATLAB Simulink model is used. The expected results show blocks of data in cryptographic form, chained together. Two sets of data are shown, both encrypted using a hash: black dots indicate data that has been encrypted, white dots indicate data that has not, and half-black, half-white dots indicate data whose encryption is in progress. All of this data is arranged in cryptographic order and chained together in a vertical line. A block-and-chain protocol groups the data into blocks and then chains them; the data appears in the blocks and is sent over the network. In the simulation results, the yellow colour represents the user data, which has a default amplitude of 1 or 5. The data is chained and blocked to produce the blockchain waveform. Keywords: Blockchain, Internet of Things, Wireless Sensor Network, MATLAB Simulink
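The block-and-chain structure the abstract describes can be illustrated with a minimal hash chain. This is a generic sketch, assuming SHA-256 (the abstract only says "hash") and hypothetical sensor readings; each block stores its payload together with the previous block's hash, so tampering with any block breaks the chain.

```python
import hashlib
import json

GENESIS = "0" * 64                    # placeholder hash for the first block

def make_block(data, prev_hash):
    """Create a block whose security code (hash) covers both its
    payload and the previous block's hash, chaining the blocks."""
    body = {"data": data, "prev": prev_hash}
    digest = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": digest}

def build_chain(readings):
    """Group sensor readings into blocks and chain them together."""
    chain, prev = [], GENESIS
    for r in readings:
        block = make_block(r, prev)
        chain.append(block)
        prev = block["hash"]
    return chain

def verify(chain):
    """Re-hash every block; any tampering breaks verification."""
    prev = GENESIS
    for b in chain:
        body = {"data": b["data"], "prev": b["prev"]}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if b["prev"] != prev or b["hash"] != digest:
            return False
        prev = b["hash"]
    return True
```

Modifying a reading in any block invalidates that block's hash, and through the prev links the inconsistency propagates to every later block, which is the tamper-evidence property the abstract relies on.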