
    Link communities reveal multiscale complexity in networks

    Networks have become a key approach to understanding systems of interacting objects, unifying the study of diverse phenomena including biological organisms and human society. One crucial step when studying the structure and dynamics of networks is to identify communities: groups of related nodes that correspond to functional subunits such as protein complexes or social spheres. Communities in networks often overlap such that nodes simultaneously belong to several groups. Meanwhile, many networks are known to possess hierarchical organization, where communities are recursively grouped into a hierarchical structure. However, the fact that many real networks have communities with pervasive overlap, where each and every node belongs to more than one group, has the consequence that a global hierarchy of nodes cannot capture the relationships between overlapping groups. Here we reinvent communities as groups of links rather than nodes and show that this unorthodox approach successfully reconciles the antagonistic organizing principles of overlapping communities and hierarchy. In contrast to the existing literature, which has entirely focused on grouping nodes, link communities naturally incorporate overlap while revealing hierarchical organization. We find relevant link communities in many networks, including major biological networks such as protein-protein interaction and metabolic networks, and show that a large social network contains hierarchically organized community structures spanning inner-city to regional scales while maintaining pervasive overlap. Our results imply that link communities are fundamental building blocks that reveal overlap and hierarchical organization in networks to be two aspects of the same phenomenon.
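The core idea of the link-community approach can be sketched in a few lines: score pairs of links that share a node by the Jaccard similarity of the other endpoints' inclusive neighborhoods, then group similar links together. The graph, the fixed similarity threshold, and the single-linkage grouping below are illustrative assumptions; the published method instead builds a full link dendrogram and cuts it at the maximum partition density.

```python
from itertools import combinations

# Toy undirected graph as an adjacency dict (hypothetical example data).
adj = {
    "a": {"b", "c"},
    "b": {"a", "c", "d"},
    "c": {"a", "b", "d"},
    "d": {"b", "c", "e"},
    "e": {"d"},
}

def edge_similarity(e1, e2):
    """Jaccard similarity of the inclusive neighborhoods of the two
    endpoints NOT shared by the edges (only valid for edges sharing a node)."""
    (shared,) = set(e1) & set(e2)   # the common "keystone" node
    i = (set(e1) - {shared}).pop()
    j = (set(e2) - {shared}).pop()
    ni = adj[i] | {i}               # inclusive neighborhood of i
    nj = adj[j] | {j}
    return len(ni & nj) / len(ni | nj)

edges = sorted({tuple(sorted((u, v))) for u in adj for v in adj[u]})

# Group links whose similarity exceeds a cut threshold (single linkage,
# via a simple union-find over edges).
THRESHOLD = 0.5
parent = {e: e for e in edges}

def find(e):
    while parent[e] != e:
        e = parent[e]
    return e

def union(e1, e2):
    parent[find(e1)] = find(e2)

for e1, e2 in combinations(edges, 2):
    if set(e1) & set(e2) and edge_similarity(e1, e2) >= THRESHOLD:
        union(e1, e2)

communities = {}
for e in edges:
    communities.setdefault(find(e), set()).add(e)
```

Note that because communities are sets of links, a node inherits membership in every community that contains one of its links, so overlap falls out of the construction for free: in the toy graph above, node "d" belongs to both resulting link communities.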

    Platform Competition as Network Contestability

    Recent research in industrial organisation has investigated the essential place that middlemen have in the networks that make up our global economy. In this paper we attempt to understand how such middlemen compete with each other through a game theoretic analysis using novel techniques from decision-making under ambiguity. We study a purposely abstract and reduced model of one middleman who provides a two-sided platform, mediating surplus-creating interactions between two users. The middleman evaluates uncertain outcomes under positional ambiguity, taking into account the possibility of the emergence of an alternative middleman offering intermediary services to the two users. Surprisingly, we find many situations in which the middleman will purposely extract maximal gains from her position. Only if there is a relatively low probability of a devastating loss of business under competition will the middleman adopt a more competitive attitude and extract less from her position.

    Consistency in Organization (updated)

    Internal organization relies heavily on psychological consistency requirements. This thought has been emphasized in modern compensation theory, but has not been extended to organization theory. The perspective sheds new light on several topics in the theory of the firm, such as the boundaries of the firm, the importance of fairness concerns within firms, the attenuation of incentives, and the role of routines and incentives. It implies a perceptional theory of the firm that is realistic in the sense advocated by Ronald Coase (1937).
    Keywords: disruptive technologies, skunkworks, ownership effect, fairness, employment relationship, Simon, theory of the firm, hierarchy, evolutionary theory of the firm, perceptional theory of the firm, consistency, small numbers, Williamson's puzzle, centralization paradox, compensation, boundaries of the firm, idiosyncratic exchange, entitlements, obligations, routines, framing, Tayloristic organization, holistic organization

    Simple Forecasts and Paradigm Shifts

    We postulate that agents make forecasts using overly simplified models of the world, i.e., models that only embody a subset of available information. We then go on to study the implications of learning in this environment. Our key premise is that learning is based on a model-selection criterion. Thus if a particular simple model does a poor job of forecasting over a period of time, it is eventually discarded in favor of an alternative, yet equally simple, model that would have done better over the same period. This theory makes several distinctive predictions, which, for concreteness, we develop in a stock-market setting. For example, starting with symmetric and homoskedastic fundamentals, the theory yields forecastable variation in the size of the value/glamour differential, in volatility, and in the skewness of returns. Some of these features mirror familiar accounts of stock-price bubbles.
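The model-selection mechanic described above can be sketched as follows: two equally simple forecasting rules each use only one of two signals driving the truth, and the agent periodically switches to whichever rule had the lower recent mean squared error. The data-generating process, window length, and coefficients are hypothetical illustrations, not the paper's calibration; each switch corresponds to a "paradigm shift."

```python
import random

random.seed(0)

# Two observable signals; the truth depends on both, but each simple
# model forecasts from only one of them (hypothetical illustration).
T, WINDOW = 400, 50
x = [random.gauss(0, 1) for _ in range(T)]
z = [random.gauss(0, 1) for _ in range(T)]
y = [0.5 * x[t] + 0.5 * z[t] + random.gauss(0, 0.2) for t in range(T)]

def model_x(t):
    return 0.5 * x[t]   # simple model: uses signal x only

def model_z(t):
    return 0.5 * z[t]   # simple model: uses signal z only

current = model_x
switches = 0
for t in range(WINDOW, T):
    # Model-selection criterion: mean squared forecast error over the
    # trailing window decides which simple model survives.
    err_x = sum((y[s] - model_x(s)) ** 2 for s in range(t - WINDOW, t)) / WINDOW
    err_z = sum((y[s] - model_z(s)) ** 2 for s in range(t - WINDOW, t)) / WINDOW
    best = model_x if err_x <= err_z else model_z
    if best is not current:       # discard the incumbent simple model
        switches += 1
        current = best
```

Because the two models are symmetric by construction, neither dominates forever, so the incumbent is repeatedly overturned even though fundamentals are stationary, which is the source of the excess volatility the abstract points to.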

    Tensor Contraction Layers for Parsimonious Deep Nets

    Tensors offer a natural representation for many kinds of data frequently encountered in machine learning. Images, for example, are naturally represented as third-order tensors, where the modes correspond to height, width, and channels. Tensor methods are noted for their ability to discover multi-dimensional dependencies, and tensor decompositions in particular have been used to produce compact low-rank approximations of data. In this paper, we explore the use of tensor contractions as neural network layers and investigate several ways to apply them to activation tensors. Specifically, we propose the Tensor Contraction Layer (TCL), the first attempt to incorporate tensor contractions as end-to-end trainable neural network layers. Applied to existing networks, TCLs reduce the dimensionality of the activation tensors and thus the number of model parameters. We evaluate the TCL on the task of image recognition, augmenting two popular networks (AlexNet, VGG); the resulting models remain trainable end-to-end. Using the CIFAR100 and ImageNet datasets, we evaluate the effect of parameter reduction via tensor contraction on performance, and we demonstrate significant model compression without significant loss of accuracy and, in some cases, improved performance.
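The operation underlying a TCL is the mode-n tensor-matrix product: each mode of the activation tensor is contracted with a (trainable) factor matrix that shrinks that mode's dimension. A minimal pure-Python sketch of a single mode-1 contraction follows; the tensor values and the factor matrix are illustrative, and a real TCL would contract every mode with learned factors inside a deep-learning framework.

```python
# Toy activation tensor X with shape (I, J, K) as nested lists,
# where X[i][j][k] = i + j + k (illustrative values only).
I, J, K = 4, 3, 2
X = [[[float(i + j + k) for k in range(K)] for j in range(J)]
     for i in range(I)]

def mode1_product(X, U):
    """Contract mode 1 of X (I x J x K) with U (R x I) -> (R x J x K):
    Y[r][j][k] = sum_i U[r][i] * X[i][j][k]."""
    I, J, K = len(X), len(X[0]), len(X[0][0])
    R = len(U)
    return [[[sum(U[r][i] * X[i][j][k] for i in range(I))
              for k in range(K)]
             for j in range(J)]
            for r in range(R)]

# Reduce mode 1 from I=4 down to R=2 with a fixed illustrative factor
# matrix; in a TCL this matrix would be a trainable parameter.
U = [[1.0, 0.0, 1.0, 0.0],
     [0.0, 1.0, 0.0, 1.0]]
Y = mode1_product(X, U)   # shape (2, 3, 2)
```

The parameter saving comes from what follows the contraction: a fully connected layer now acts on a flattened tensor of size R*J*K instead of I*J*K, so its weight matrix shrinks proportionally while the factor matrices add only a comparatively small number of parameters.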