
    Network Cournot Competition

    Cournot competition is a fundamental economic model in which firms compete in a single market for a homogeneous good. Each firm tries to maximize its utility, a function of its production cost and the market price of the product, by choosing how much to produce. In today's dynamic and diverse economy, firms often compete in more than one market simultaneously, so each market may be shared among a subset of the firms. A bipartite graph models this access restriction: firms are on one side, markets are on the other, and an edge indicates that a firm has access to a market. We call this game \emph{Network Cournot Competition} (NCC). In this paper, we propose algorithms for finding pure Nash equilibria of NCC games in different settings. First, we carefully design a potential function for NCC for the case where each market's price is a linear function of the total production in that market. For nonlinear price functions this approach is not feasible; instead, we model the problem as a nonlinear complementarity problem and design a polynomial-time algorithm that finds an equilibrium of the game for strongly convex cost functions and strongly monotone revenue functions. We also explore the class of price functions that ensures strong monotonicity of the revenue function, and show that it is broad. Moreover, we show that the equilibrium is unique in both of these cases, so our algorithms in fact find the unique equilibria of the games. Last but not least, when each firm's cost of production in one market is independent of its cost of production in the other markets, the problem separates into several independent classical \emph{Cournot Oligopoly} problems, and we give the first combinatorial algorithm for this widely studied problem.
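    The reduction in the last sentence is easy to illustrate: with separable costs, each market can be solved as a textbook Cournot oligopoly. The sketch below is not the paper's combinatorial algorithm; it solves only the special case with linear inverse demand p(Q) = a - b*Q and constant marginal costs c_i, where the equilibrium has a closed form (all parameter values are illustrative).

        def cournot_equilibrium(a, b, costs):
            """Closed-form Cournot-Nash quantities for linear inverse demand
            p(Q) = a - b*Q and constant marginal costs. Firms whose candidate
            quantity is negative are priced out; drop them and re-solve."""
            active = list(range(len(costs)))
            while active:
                n = len(active)
                # Interior equilibrium: total output Q = (n*a - sum(c_i)) / (b*(n+1)),
                # and each active firm produces q_i = (a - c_i)/b - Q.
                total = (n * a - sum(costs[i] for i in active)) / (b * (n + 1))
                q = {i: (a - costs[i]) / b - total for i in active}
                if all(qi >= 0.0 for qi in q.values()):
                    out = [0.0] * len(costs)
                    for i, qi in q.items():
                        out[i] = qi
                    return out
                active = [i for i in active if q[i] >= 0.0]
            return [0.0] * len(costs)

        # Two low-cost firms and one firm too costly to produce in equilibrium.
        print(cournot_equilibrium(a=10.0, b=1.0, costs=[1.0, 2.0, 9.5]))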

    A Microsoft-Excel-based tool for running and critically appraising network meta-analyses – an overview and application of NetMetaXL.

    BACKGROUND: The use of network meta-analysis has increased dramatically in recent years. WinBUGS, a freely available Bayesian software package, has been the most widely used software for conducting network meta-analyses. However, the learning curve for WinBUGS can be daunting, especially for new users. Furthermore, critical appraisal of network meta-analyses conducted in WinBUGS can be challenging given its limited data manipulation capabilities and the fact that graphical output from network meta-analyses often relies on software packages other than the one used for the analyses themselves.

    METHODS: We developed a freely available Microsoft-Excel-based tool called NetMetaXL, programmed in Visual Basic for Applications, which provides an interface for conducting a Bayesian network meta-analysis using WinBUGS from within Microsoft Excel. The tool allows the user to easily prepare and enter data, set model assumptions, and run the network meta-analysis, with results automatically displayed in an Excel spreadsheet. It also contains macros that use NetMetaXL's interface to generate evidence network diagrams, forest plots, league tables of pairwise comparisons, probability plots (rankograms), and inconsistency plots within Microsoft Excel. All figures generated are publication quality, increasing the efficiency of knowledge transfer and manuscript preparation.

    RESULTS: We demonstrate the application of NetMetaXL using data from a previously published network meta-analysis comparing combined resynchronization and implantable defibrillator therapy in left ventricular dysfunction. We replicate the results of the previous publication while demonstrating the result summaries generated by the software.

    CONCLUSIONS: Use of the freely available NetMetaXL makes running network meta-analyses more accessible to novice WinBUGS users by allowing analyses to be conducted entirely within Microsoft Excel. NetMetaXL also allows for more efficient and transparent critical appraisal of network meta-analyses, enhanced standardization of reporting, and integration with health economic evaluations, which are frequently Excel-based.

    CC is a recipient of a Vanier Canada Graduate Scholarship from the Canadian Institutes of Health Research (funding reference number CGV 121171) and is a trainee on the Canadian Institutes of Health Research Drug Safety and Effectiveness Network team grant (funding reference number 116573). BH is funded by a New Investigator award from the Canadian Institutes of Health Research and the Drug Safety and Effectiveness Network. This research was partly supported by funding from CADTH as part of a project to develop Excel-based tools to support the conduct of health technology assessments. This research was also supported by Cornerstone Research Group.
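    NetMetaXL itself is an Excel/VBA front end for WinBUGS, but one of the summaries it automates, the league table of all pairwise comparisons, follows directly from the network meta-analysis consistency relation logOR(A vs B) = d_A - d_B, where d_k is the basic parameter for treatment k versus a common reference. A minimal sketch of building such a table (treatment names and values are invented, not taken from the publication):

        import math

        # Illustrative basic parameters: posterior mean log odds ratios of each
        # treatment versus the reference ("placebo"); these numbers are invented.
        d = {"placebo": 0.0, "CRT": -0.35, "CRT+ICD": -0.55}

        def league_table(d):
            """Matrix of pairwise odds ratios implied by consistency:
            OR(a vs b) = exp(d_a - d_b), with treatment names on the diagonal."""
            names = list(d)
            return [[a if a == b else f"{math.exp(d[a] - d[b]):.2f}" for b in names]
                    for a in names]

        for row in league_table(d):
            print("\t".join(row))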

    The Number of Patients and Events Required to Limit the Risk of Overestimation of Intervention Effects in Meta-Analysis—A Simulation Study

    BACKGROUND: Meta-analyses including a limited number of patients and events are prone to yield overestimated intervention effect estimates. While many assume bias is the cause of such overestimation, theoretical considerations suggest that random error may be an equally or more frequent cause. The independent impact of random error on meta-analyzed intervention effects has not previously been explored. It has been suggested that surpassing the optimal information size (i.e., the required meta-analysis sample size) provides sufficient protection against overestimation due to random error, but this claim has not yet been validated.

    METHODS: We simulated a comprehensive array of meta-analysis scenarios in which no intervention effect existed (i.e., relative risk reduction (RRR) = 0%) or in which a small but possibly unimportant effect existed (RRR = 10%). We constructed different scenarios by varying the control group risk, the degree of heterogeneity, and the distribution of trial sample sizes. For each scenario, we calculated the probability of observing overestimates of RRR > 20% and RRR > 30% for each cumulative 500 patients and 50 events. We calculated the cumulative number of patients and events required to reduce the probability of overestimation of the intervention effect to 10%, 5%, and 1%. We also calculated the optimal information size for each simulated scenario and explored whether meta-analyses that surpassed their optimal information size were sufficiently protected against overestimation of intervention effects due to random error.

    RESULTS: The risk of overestimation of intervention effects was usually high when the number of patients and events was small, and this risk decreased exponentially as the number of patients and events increased. The number of patients and events required to limit the risk of overestimation depended considerably on the underlying simulation settings. Surpassing the optimal information size generally provided sufficient protection against overestimation.

    CONCLUSIONS: Random errors are a frequent cause of overestimation of intervention effects in meta-analyses. Surpassing the optimal information size will provide sufficient protection against overestimation.
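    A minimal sketch (not the authors' simulation code) of the core idea: simulate many cumulative meta-analyses with no true effect (RRR = 0%) and track how often the pooled estimate overstates the effect by more than 20% as patients accumulate. The control-group risk, trial size, and replication count are illustrative assumptions, and the identical simulated trials are pooled crudely by summing events rather than by inverse-variance weighting.

        import numpy as np

        rng = np.random.default_rng(0)

        def prob_overestimation(n_meta=2000, n_trials=20, trial_n=100,
                                p_event=0.2, threshold=0.20):
            """Fraction of simulated cumulative meta-analyses whose pooled RRR
            exceeds `threshold` after 1, 2, ..., n_trials trials, when the
            true RRR is 0% (no intervention effect)."""
            exceed = np.zeros(n_trials)
            for _ in range(n_meta):
                e_ctrl = rng.binomial(trial_n, p_event, n_trials)
                e_trt = rng.binomial(trial_n, p_event, n_trials)  # no true effect
                # Cumulative pooled relative risk (0.5 added to avoid division by zero).
                rr = (np.cumsum(e_trt) + 0.5) / (np.cumsum(e_ctrl) + 0.5)
                exceed += (1.0 - rr) > threshold  # pooled RRR overestimated
            return exceed / n_meta

        # The probability of overestimation falls steeply as patients accumulate.
        print(np.round(prob_overestimation(), 3))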

    Estimating the Power of Indirect Comparisons: A Simulation Study

    Indirect comparisons are becoming increasingly popular for evaluating medical treatments that have not been compared head-to-head in randomized clinical trials (RCTs). While indirect methods have grown in popularity and acceptance, little is known about the fragility of confidence interval estimation and hypothesis testing based on this method. We present the findings of a simulation study that examined the fragility of indirect confidence interval estimation and hypothesis testing based on the adjusted indirect method. Our results suggest that, for the settings considered in this study, indirect confidence interval estimation suffers from under-coverage, while indirect hypothesis testing suffers from low power in the presence of moderate to large between-study heterogeneity. In addition, the risk of overestimation is large when the indirect comparison of interest relies on just one trial for one of the two direct comparisons. Indirect comparisons typically suffer from low power, and the risk of imprecision is increased when comparisons are unbalanced.
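    The adjusted indirect method the abstract examines is commonly attributed to Bucher et al.: the indirect estimate of A versus B via a common comparator C is the difference of the two direct log odds ratios, and their variances add. That added variance is what drives the low power discussed above. A minimal sketch with invented inputs:

        import math

        def adjusted_indirect(log_or_ac, se_ac, log_or_bc, se_bc, z=1.96):
            """Adjusted indirect comparison of A vs B via common comparator C:
            point estimate is the difference of the direct log odds ratios, and
            the standard errors combine as the root of the summed variances."""
            log_or_ab = log_or_ac - log_or_bc
            se_ab = math.sqrt(se_ac**2 + se_bc**2)
            ci = (math.exp(log_or_ab - z * se_ab), math.exp(log_or_ab + z * se_ab))
            return math.exp(log_or_ab), ci

        # Illustrative direct results: A vs C, OR 0.80 (SE of log OR 0.10);
        # B vs C, OR 0.90 (SE of log OR 0.12).
        or_ab, ci = adjusted_indirect(math.log(0.80), 0.10, math.log(0.90), 0.12)
        print(f"indirect OR(A vs B) = {or_ab:.2f}, 95% CI {ci[0]:.2f} to {ci[1]:.2f}")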

    Estimating required information size by quantifying diversity in random-effects model meta-analyses

    BACKGROUND: There is increasing awareness that meta-analyses require a sufficiently large information size to detect or reject an anticipated intervention effect. The required information size in a meta-analysis may be calculated from an anticipated a priori intervention effect or from an intervention effect suggested by trials with low risk of bias.

    METHODS: Information size calculations need to consider the total model variance in a meta-analysis to control type I and type II errors. Here, we derive an adjusting factor for the required information size under any random-effects model meta-analysis.

    RESULTS: We devise a measure of diversity (D²) in a meta-analysis, defined as the relative variance reduction when the meta-analysis model is changed from a random-effects to a fixed-effect model. D² is the percentage that the between-trial variability constitutes of the sum of the between-trial variability and a sampling error estimate that takes the required information size into account. D² differs from the intuitively obvious adjusting factor based on the common quantification of heterogeneity, the inconsistency (I²), which may underestimate the required information size. D² and I² are therefore compared and interpreted using several simulations and clinical examples. In addition, we show mathematically that diversity is equal to or greater than inconsistency, that is, D² ≥ I², for all meta-analyses.

    CONCLUSION: We conclude that D² seems a better alternative than I² for capturing model variation in any random-effects meta-analysis, regardless of the choice of between-trial variance estimator that constitutes the model. Furthermore, D² can readily adjust the required information size in any random-effects model meta-analysis.
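    A minimal sketch, not the authors' code, of computing D² as defined above alongside I², from trial effect estimates and standard errors. It uses the DerSimonian-Laird estimator of the between-trial variance tau² (one common choice; the abstract's conclusion holds regardless of the estimator), and the input numbers are invented.

        import numpy as np

        def diversity_and_inconsistency(effects, ses):
            """Return (D2, I2) for trial effects (e.g. log risk ratios) and
            standard errors, with DerSimonian-Laird between-trial variance."""
            effects, ses = np.asarray(effects, float), np.asarray(ses, float)
            w = 1.0 / ses**2                           # fixed-effect weights
            mu_f = np.sum(w * effects) / np.sum(w)
            q = np.sum(w * (effects - mu_f) ** 2)      # Cochran's Q
            k = len(effects)
            tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
            i2 = max(0.0, (q - (k - 1)) / q) if q > 0 else 0.0
            v_f = 1.0 / np.sum(w)                      # pooled variance, fixed effect
            v_r = 1.0 / np.sum(1.0 / (ses**2 + tau2))  # pooled variance, random effects
            d2 = (v_r - v_f) / v_r                     # relative variance reduction
            return d2, i2

        d2, i2 = diversity_and_inconsistency([0.10, -0.30, 0.25, -0.10],
                                             [0.15, 0.20, 0.10, 0.18])
        print(f"D2 = {d2:.2f}, I2 = {i2:.2f}")  # the paper shows D2 >= I2
        # The diversity-adjusted required information size then scales the
        # fixed-effect requirement by 1 / (1 - D2).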