
    Evaluation of Uncertain International Markets The Advantage of Flexible Organization Structures

    The present article is concerned with organizational flexibility in transnational corporations (TNCs), i.e., larger firms that operate in multiple national markets. In contrast to prior research on entry modes (e.g., joint ventures, greenfield investments, or acquisitions), the present article examines how the organization of evaluation teams can influence the entry and exit decisions of business units. Empirical studies broadly support the claim that TNCs experiment with flexible organizational structures in response to increased turbulence and uncertainty in international markets. However, these advances in the description of TNCs, and more generally in the literature on new organizational forms, have been largely ignored in theories about the evaluation of market opportunities in TNCs and multinational corporations (MNCs). To address this gap, the present article examines the effects of flexible evaluation teams when TNCs assess the viability of international markets characterized by high levels of uncertainty. Remarkably, we show that TNCs employing flexible teams of (very) fallible evaluators can obtain profits that asymptotically approach the optimum. Our main result supports the claim advanced in recent empirical studies: structural flexibility can help TNCs employing (very) fallible evaluators achieve high levels of performance under turbulence and uncertainty.
    Keywords: multinational corporations, entry modes
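    As a rough illustration of how fallible evaluators can approach an omniscient benchmark, the following Monte Carlo sketch pits a lone evaluator against a majority-voting team on simulated entry decisions. All parameter values (accuracy p, team size, payoffs) are illustrative assumptions, not values from the article.

```python
import random

def simulate(n_markets=10_000, p=0.65, team_size=9, seed=0):
    """Monte Carlo sketch: fallible evaluators deciding on market entry.

    Each market is 'good' or 'bad'; entering a good market pays +1 and a
    bad one costs -1. Every evaluator judges a market correctly with
    probability p, and a team enters only on a majority 'good' verdict.
    All parameter values are illustrative assumptions.
    """
    rng = random.Random(seed)
    solo = team = oracle = 0.0
    for _ in range(n_markets):
        good = rng.random() < 0.5                 # latent market quality
        payoff = 1.0 if good else -1.0            # payoff of entering

        def verdict():                            # one fallible evaluator's call
            return good if rng.random() < p else not good

        if verdict():                             # lone evaluator enters
            solo += payoff
        if sum(verdict() for _ in range(team_size)) > team_size // 2:
            team += payoff                        # team enters on majority vote
        if good:                                  # omniscient benchmark
            oracle += payoff
    return solo, team, oracle

solo, team, oracle = simulate()
print(f"solo: {solo:+.0f}  team: {team:+.0f}  oracle: {oracle:+.0f}")
```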

    Organizational Design and Resource Evaluation

    The problem of evaluating, discovering, and creating the value of resources remains central to business strategy. The present article draws on reliability theory to advance an analytical platform that addresses one part of this problem: the evaluation of resource value. Reliability theory offers a way to model managerial ability and to derive the evaluation properties of organizations, boards, teams, and committees. It is shown how the problem of resource evaluation can be remedied by proper evaluation structures; an evaluation structure built from even a few agents can achieve significant improvements. A simulation of the classical n-armed bandit problem shows how evaluation structures can help managers select innovations of better economic value.
    Keywords: reliability theory, resource value
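    The abstract does not specify its bandit setup; the sketch below shows one plausible version, in which each "arm" is a project with a latent economic value and a committee of fallible evaluators averages independent noisy estimates before selecting. All names and parameters are hypothetical.

```python
import random
import statistics

def pick_project(n_arms=10, evaluators=1, noise=1.0, seed=None):
    """Sketch of project selection by a committee of fallible evaluators.

    Each arm has a latent value; every evaluator sees it through
    independent Gaussian noise, and the committee picks the arm with the
    highest mean estimate. Returns (value obtained, best possible value).
    """
    rng = random.Random(seed)
    values = [rng.gauss(0, 1) for _ in range(n_arms)]
    estimates = [
        statistics.fmean(value + rng.gauss(0, noise) for _ in range(evaluators))
        for value in values
    ]
    chosen = max(range(n_arms), key=estimates.__getitem__)
    return values[chosen], max(values)

# Mean regret of a lone evaluator vs. a five-member evaluation structure.
for k in (1, 5):
    regret = statistics.fmean(
        best - got for got, best in (pick_project(evaluators=k) for _ in range(2000))
    )
    print(f"{k} evaluator(s): mean regret {regret:.3f}")
```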

    The Human version of Moore-Shannon's Theorem: The Design of Reliable Economic Systems

    The Moore-Shannon theorem is a cornerstone of reliability theory, but it cannot be applied to human systems in its original form. A generalization to human systems would therefore be of considerable interest, because the choice of organizational structure can remedy reliability problems that notoriously plague business operations, financial institutions, military intelligence, and other human activities. Our main result is a proof that answers the following three questions. Is it possible to design a reliable social organization from fallible human individuals? How many fallible human agents are required to build an economic system of a given level of reliability? What is the best way to design an organization of two or more agents so as to minimize error? On the basis of constructive proofs, this paper answers these questions and thus offers a method for analyzing any decision-making structure with respect to its reliability.
    Keywords: organizational design; reliability theory; decision making; project selection
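    The first two questions have a classical quantitative core: for independent agents, each correct with probability p > 1/2, majority voting drives system reliability toward 1 (a Condorcet-style amplification). The sketch below computes that binomial reliability and searches for the smallest committee meeting a target; it illustrates the underlying arithmetic, not the paper's own construction.

```python
from math import comb

def majority_reliability(k: int, p: float) -> float:
    """Probability that a simple majority of k agents, each correct
    independently with probability p, reaches the correct decision."""
    return sum(comb(k, i) * p**i * (1 - p)**(k - i)
               for i in range(k // 2 + 1, k + 1))

def agents_needed(p: float, target: float, k_max: int = 501) -> int:
    """Smallest odd committee size whose majority reliability meets target."""
    for k in range(1, k_max + 1, 2):
        if majority_reliability(k, p) >= target:
            return k
    raise ValueError("target not reached within k_max agents")

print(majority_reliability(1, 0.6))    # 0.6: a lone fallible agent
print(majority_reliability(21, 0.6))   # ~0.83: redundancy amplifies accuracy
print(agents_needed(0.6, 0.99))        # committee size for 99% reliability
```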

    Statistical methods for tissue array images - algorithmic scoring and co-training

    Recent advances in tissue microarray technology have allowed immunohistochemistry to become a powerful medium-to-high-throughput analysis tool, particularly for the validation of diagnostic and prognostic biomarkers. However, as study size grows, manual evaluation of these assays becomes a prohibitive limitation; it vastly reduces throughput and greatly increases variability and expense. We propose an algorithm, Tissue Array Co-Occurrence Matrix Analysis (TACOMA), for quantifying cellular phenotypes based on textural regularity summarized by local inter-pixel relationships. The algorithm can easily be trained for any staining pattern, has no sensitive tuning parameters, and can report the salient pixels in an image that contribute to its score. Pathologists' input via informative training patches is an important aspect of the algorithm, allowing training for any specific marker or cell type. With co-training, the error rate of TACOMA can be reduced substantially for a very small training sample (e.g., of size 30). We give theoretical insights into the success of co-training via thinning of the feature set in a high-dimensional setting when there is "sufficient" redundancy among the features. TACOMA is flexible, transparent, and provides a scoring process that can be evaluated with clarity and confidence. In a study based on an estrogen receptor (ER) marker, we show that TACOMA is comparable to, or outperforms, pathologists' performance in terms of accuracy and repeatability.
    Comment: Published at http://dx.doi.org/10.1214/12-AOAS543 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org)
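    The co-occurrence matrices underlying TACOMA are relatives of the classical gray-level co-occurrence matrix (GLCM), which tallies how often intensity level i appears next to level j at a fixed pixel offset. A minimal sketch of that building block follows; the quantization depth and the (0, 1) offset are illustrative choices, not the paper's settings.

```python
import numpy as np

def cooccurrence_matrix(patch: np.ndarray, levels: int = 8,
                        offset: tuple = (0, 1)) -> np.ndarray:
    """Gray-level co-occurrence matrix for one image patch.

    Pixel intensities are quantized into `levels` bins, then pairs of
    pixels separated by `offset` are counted into a levels-x-levels
    matrix whose texture features can feed a TACOMA-style score.
    """
    q = np.minimum((patch.astype(float) / patch.max() * levels).astype(int),
                   levels - 1)                    # quantize to [0, levels)
    dr, dc = offset
    glcm = np.zeros((levels, levels), dtype=int)
    rows, cols = q.shape
    for r in range(rows - dr):
        for c in range(cols - dc):
            glcm[q[r, c], q[r + dr, c + dc]] += 1
    return glcm

patch = np.random.randint(0, 256, size=(32, 32))  # stand-in for a stained tissue patch
print(cooccurrence_matrix(patch))
```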

    A Markov Chain Approach to Randomly Grown Graphs

    A Markov chain approach to the study of randomly grown graphs is proposed and applied to some popular models that have found use in biology and elsewhere. For most randomly grown graphs used in biology, it is not known whether the graph, or properties of the graph, converge (in some sense) as the number of vertices becomes large. In particular, we study the behaviour of the degree sequence, that is, the number of vertices with degree 0, 1, …, in large graphs, and apply our results to the partial duplication model. We further illustrate the results by application to real data.
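    The partial duplication model itself is simple to simulate: each new vertex copies a uniformly chosen existing vertex and retains each of its edges independently with probability p. A minimal sketch, with an assumed single-edge seed graph and an illustrative p:

```python
import random
from collections import Counter

def partial_duplication(n: int, p: float, seed: int = 0) -> list:
    """Grow a graph by partial duplication: each new vertex duplicates a
    uniformly chosen existing vertex, keeping each of its edges
    independently with probability p. Returns adjacency sets."""
    rng = random.Random(seed)
    adj = [{1}, {0}]                      # seed graph: a single edge
    while len(adj) < n:
        target = rng.randrange(len(adj))  # vertex to duplicate
        new = len(adj)
        kept = {u for u in adj[target] if rng.random() < p}
        adj.append(kept)
        for u in kept:
            adj[u].add(new)
    return adj

graph = partial_duplication(n=5000, p=0.4)
degrees = Counter(len(nbrs) for nbrs in graph)
print(sorted(degrees.items())[:10])       # counts of vertices with degree 0, 1, ...
```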

    Using equilibrium frequencies in models of sequence evolution

    BACKGROUND: The f factor is a new parameter for accommodating the influence of both the starting and ending states in the rate matrices of "generalized weighted frequencies" (+gwF) models of sequence evolution. In this study, we derive an expected value for f, starting from a nearly neutral model of weak selection, and then assess the biological interpretation of this factor with evolutionary simulations. RESULTS: An expected value of f = 0.5 (i.e., equal dependency on the starting and ending states) is derived for sequences evolving under the nearly neutral model of this study. However, this expectation is sensitive to violations of its underlying assumptions, as illustrated by the evolutionary simulations. CONCLUSION: This study illustrates how selection, drift, and mutation at the population level can be linked to the rate matrices of models of sequence evolution to derive an expected value of f. However, as f is affected by a number of factors that limit its biological interpretation, it should normally be estimated as a free parameter rather than fixed a priori in a +gwF analysis.
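    One common statement of the gwF rate matrix sets q_ij = s_ij · π_j^f · π_i^(f−1), so that f = 1 recovers the usual frequency weighting q_ij ∝ s_ij π_j while f = 0.5 weights the starting and ending states equally. Assuming that parameterization (an assumption on my part, hedged accordingly), a minimal sketch with made-up exchangeabilities:

```python
import numpy as np

def gwf_rate_matrix(s: np.ndarray, pi: np.ndarray, f: float) -> np.ndarray:
    """Assumed +gwF-style rate matrix: q_ij = s_ij * pi_j**f * pi_i**(f-1).

    With symmetric exchangeabilities s, detailed balance holds
    (pi_i * q_ij is symmetric), so pi stays the stationary distribution
    for any f. Values of s and pi here are illustrative, not fitted.
    """
    q = s * pi[np.newaxis, :]**f * pi[:, np.newaxis]**(f - 1.0)
    np.fill_diagonal(q, 0.0)
    np.fill_diagonal(q, -q.sum(axis=1))   # rows sum to zero
    return q

pi = np.array([0.1, 0.2, 0.3, 0.4])       # equilibrium base frequencies (A, C, G, T)
s = np.ones((4, 4))                       # toy symmetric exchangeabilities
q = gwf_rate_matrix(s, pi, f=0.5)         # equal starting/ending dependency
print(q)
print(pi @ q)                             # ~0: pi is stationary under Q
```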