
    Strictness of Leniency Programs and Cartels of Asymmetric Firms

    This paper studies the effects of leniency programs on the behavior of firms participating in illegal cartel agreements. The main contribution of the paper is that we consider asymmetric firms. In general, firms differ in size and operate in several different markets. In our model, they form a cartel in one market only. This asymmetry results in additional costs in case of disclosure of the cartel, caused by an asymmetric reduction of sales in other markets due to a negative reputation effect. This modeling framework can also be applied to international cartels, where firms are subject to different punishment procedures under the laws of their countries, or to situations where, following an application for leniency, firms incur costs other than the fine itself, and where these costs depend on the individual characteristics of the firm. Moreover, following the rules of existing leniency programs, we analyze the effects of the strictness of a leniency program, which reflects the likelihood of complete exemption from the fine even when many firms self-report simultaneously. Our main results are, first, that leniency programs work better for small (less diversified) companies, in the sense that a lower rate of law enforcement is needed to induce self-reporting by less diversified firms. At the same time, big (more diversified) firms are less likely to start a cartel in the first place, given the possibility of self-reporting in the future. Second, the more cartelized the economy, the less strict the rules of leniency programs should be.
    Keywords: Antitrust Policy; Antitrust Law; Self-reporting; Leniency Programs

    Last Time Buy and Control Policies With Phase-Out Returns: A Case Study in Plant Control Systems

    This research combines spare parts management and reverse logistics. At the end of the product life cycle, products in the field (the so-called installed base) can usually be serviced either by new parts, obtained from a Last Time Buy, or by repaired failed parts. This paper, however, introduces a third source: phase-out returns obtained from customers that replace their systems. These returned parts may serve other customers that have not yet replaced their systems. Phase-out return flows have higher volumes and higher repair yields than failed parts and are cheaper to obtain than new parts. This phenomenon has been ignored in the literature thus far, but its relevance will grow as product replacement rates increase. We present a generic model, applied in a case study with real-life data from ConRepair, a third-party service provider in plant control systems (mainframes). Volumes of demand for spares, defect returns, and phase-out returns are interrelated, because the same installed base is involved. In contrast with the existing literature, this paper explicitly models the operational control of both failed and phase-out returns, which proves far from trivial given the nonstationary nature of the problem. We have to consider subintervals within the total planning interval to optimize both the Last Time Buy and the control policies well. Given the novelty of the problem, we limit ourselves to a single-customer, single-item approach. Our heuristic solution methods prove efficient and close to optimal when validated. The resulting control policies in the case study are also counterintuitive. Contrary to (management) expectations, exogenous variables prove to be more important to the repair firm (which we show by sensitivity analysis), and optimizing the endogenous control policy benefits the customers. The Last Time Buy volume does not make the decisive difference; far more important is the disposal-versus-repair policy. The PUSH control policy is outperformed by PULL, which exploits demand information and waits longer to decide between repair and disposal. The paper concludes by mapping a number of extensions for future research, as it represents a larger class of problems.
    Keywords: spare parts; reverse logistics; phase-out; PUSH-PULL repair; nonstationary; Last Time Buy; business case
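The core trade-off in the abstract above, buying a large final stock versus relying on repaired phase-out returns, can be illustrated with a toy Monte-Carlo sketch. This is not the paper's model: the demand rates, repair yields, and costs below are invented for illustration only.

```python
import math
import random

def _poisson(lam, rng):
    # Knuth's inversion method; adequate for small rates
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def expected_shortfall(ltb_qty, periods=24, runs=400, demand_rate=4.0,
                       phaseout_rate=1.5, repair_yield=0.85, seed=7):
    """Average unmet spare-part demand over the service horizon when the
    Last Time Buy quantity is ltb_qty and repaired phase-out returns
    replenish the stock.  All parameter values are hypothetical."""
    rng = random.Random(seed)  # same demand paths for every candidate quantity
    total_short = 0
    for _ in range(runs):
        stock, short = ltb_qty, 0
        for _ in range(periods):
            # phase-out returns arrive; only a fraction survives repair
            stock += sum(rng.random() < repair_yield
                         for _ in range(_poisson(phaseout_rate, rng)))
            demand = _poisson(demand_rate, rng)
            served = min(stock, demand)
            stock -= served
            short += demand - served
        total_short += short
    return total_short / runs

# pick the LTB quantity minimising hypothetical purchase + shortage costs
unit_cost, penalty = 25.0, 80.0
best_q = min(range(0, 81, 5),
             key=lambda q: unit_cost * q + penalty * expected_shortfall(q))
```

Because the same seed is reused, every candidate quantity is evaluated on identical demand paths (common random numbers), which keeps the search stable; the paper's subinterval-based optimization of repair-versus-disposal policies is a far richer version of this idea.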

    A scheme with two large extra dimensions confronted with neutrino physics

    We investigate a particle physics model in a six-dimensional spacetime, where two extra dimensions form a torus. Particles with Standard Model charges are confined by interactions with a scalar field to four four-dimensional branes: two vortices accommodating ordinary-type fermions and two antivortices accommodating mirror fermions. We investigate the phenomenological implications of this multibrane structure by confronting the model with neutrino physics data.
    Comment: LaTeX, 24 pages, 9 figures, minor changes in the text

    A generalization of moderated statistics to data adaptive semiparametric estimation in high-dimensional biology

    The widespread availability of high-dimensional biological data has made the simultaneous screening of numerous biological characteristics a central statistical problem in computational biology. While the dimensionality of such datasets continues to increase, the problem of teasing out the effects of biomarkers in studies measuring baseline confounders while avoiding model misspecification remains only partially addressed. Efficient estimators constructed from data-adaptive estimates of the data-generating distribution provide an avenue for avoiding model misspecification; however, in the context of high-dimensional problems requiring simultaneous estimation of numerous parameters, standard variance estimators have proven unstable, resulting in unreliable Type-I error control under standard multiple testing corrections. We formulate a general approach for applying empirical Bayes shrinkage to asymptotically linear estimators of parameters defined in the nonparametric model. The proposal applies existing shrinkage estimators to the estimated variance of the influence function, allowing for increased inferential stability in high-dimensional settings. A methodology for nonparametric variable importance analysis for use with high-dimensional biological datasets with modest sample sizes is introduced, and the proposed technique is demonstrated to be robust in small samples even when relying on data-adaptive estimators that eschew parametric forms. Use of the proposed variance moderation strategy in constructing stabilized variable importance measures of biomarkers is demonstrated by application to an observational study of occupational exposure. The result is a data-adaptive approach for robustly uncovering stable associations in high-dimensional data with limited sample sizes.
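The shrinkage step this abstract refers to can be sketched with the standard empirical-Bayes posterior-variance formula (as in limma-style moderated statistics), applied here to hypothetical influence-function variance estimates. In practice the prior degrees of freedom `d0` and prior variance `s0_sq` would be estimated from the full ensemble of tests, not fixed by hand as below.

```python
def moderated_variances(raw_vars, d, d0, s0_sq):
    """Empirical-Bayes shrinkage of per-parameter variance estimates.
    Each raw variance (with d residual degrees of freedom) is pulled
    toward a prior value s0_sq carrying d0 prior degrees of freedom,
    stabilising extreme estimates from noisy high-dimensional screens."""
    return [(d0 * s0_sq + d * s2) / (d0 + d) for s2 in raw_vars]

# an implausibly small variance is pulled up toward the prior,
# an implausibly large one is pulled down
shrunk = moderated_variances([0.02, 1.5], d=10, d0=4, s0_sq=0.5)
```

Each moderated value is a weighted average of the raw estimate and the prior, so ordering across parameters is preserved while outlying variances (and hence spuriously large test statistics) are damped.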

    Robust and Flexible Estimation of Stochastic Mediation Effects: A Proposed Method and Example in a Randomized Trial Setting

    Causal mediation analysis can improve understanding of the mechanisms underlying epidemiologic associations. However, the utility of natural direct and indirect effect estimation has been limited by the assumption that no confounder of the mediator-outcome relationship is affected by prior exposure, an assumption frequently violated in practice. We build on recent work that identified alternative estimands that do not require this assumption and propose a flexible and doubly robust semiparametric targeted minimum loss-based estimator for data-dependent stochastic direct and indirect effects. The proposed method treats the intermediate confounder affected by prior exposure as a time-varying confounder and intervenes stochastically on the mediator using a distribution that conditions on baseline covariates and marginalizes over the intermediate confounder. In addition, we assume the stochastic intervention is given, conditional on observed data, which results in a simpler estimator and weaker identification assumptions. We demonstrate the estimator's finite-sample and robustness properties in a simple simulation study. We apply the method to an example from the Moving to Opportunity experiment. In this application, randomization to receive a housing voucher is the treatment/instrument that influenced moving to a low-poverty neighborhood, which is the intermediate confounder. We estimate the data-dependent stochastic direct effect of randomization to the voucher group on adolescent marijuana use not mediated by change in school district, and the stochastic indirect effect mediated by change in school district. We find no evidence of mediation. Our estimator is easy to implement in standard statistical software, and we provide annotated R code to further lower implementation barriers.
    Comment: 24 pages, 2 tables, 2 figures

    Effect of breastfeeding on gastrointestinal infection in infants: A targeted maximum likelihood approach for clustered longitudinal data

    The PROmotion of Breastfeeding Intervention Trial (PROBIT) cluster-randomized a program encouraging breastfeeding to new mothers in hospital centers. The original studies indicated that this intervention successfully increased the duration of breastfeeding and lowered rates of gastrointestinal tract infections in newborns. Additional scientific and popular interest lies in determining the causal effect of longer breastfeeding on gastrointestinal infection. In this study, we estimate the expected infection count under various lengths of breastfeeding in order to estimate the effect of breastfeeding duration on infection. Due to the presence of baseline and time-dependent confounding, specialized "causal" estimation methods are required. We demonstrate the doubly robust method of Targeted Maximum Likelihood Estimation (TMLE) in the context of this application and review some related methods and the adjustments required to account for clustering. We compare TMLE (implemented both parametrically and using a data-adaptive algorithm) to other causal methods for this example. In addition, we conduct a simulation study to determine (1) the effectiveness of controlling for clustering indicators when cluster-specific confounders are unmeasured and (2) the importance of using data-adaptive TMLE.
    Comment: Published in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org) at http://dx.doi.org/10.1214/14-AOAS727
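At its core, the targeting step of TMLE for a simple point-treatment effect (ignoring the longitudinal structure and clustering handled in the paper) fits a one-parameter logistic fluctuation of an initial outcome regression along the "clever covariate". The following is a minimal sketch with a known randomization probability, as in a trial; the constant initial regression and the simulated data are illustrative only, not the paper's implementation.

```python
import math
import random

def expit(x):
    return 1.0 / (1.0 + math.exp(-x))

def logit(p):
    p = min(max(p, 1e-6), 1 - 1e-6)  # guard against 0/1 predictions
    return math.log(p / (1.0 - p))

def tmle_ate(W, A, Y, q_init, g=0.5):
    """One TMLE targeting step for the average treatment effect with a
    binary outcome and known treatment probability g (randomized trial).
    q_init(a, w) is the initial outcome-regression estimate."""
    H = [a / g - (1 - a) / (1 - g) for a in A]          # clever covariate
    off = [logit(q_init(a, w)) for a, w in zip(A, W)]   # offset = initial fit
    eps = 0.0
    for _ in range(50):  # Newton-Raphson on the fluctuation score
        p = [expit(o + eps * h) for o, h in zip(off, H)]
        score = sum(h * (y - pi) for h, y, pi in zip(H, Y, p))
        info = sum(h * h * pi * (1 - pi) for h, pi in zip(H, p))
        if info == 0.0:
            break
        step = score / info
        eps += step
        if abs(step) < 1e-10:
            break
    # targeted predictions under A=1 and A=0, averaged over the sample
    q1 = [expit(logit(q_init(1, w)) + eps / g) for w in W]
    q0 = [expit(logit(q_init(0, w)) - eps / (1 - g)) for w in W]
    return sum(a - b for a, b in zip(q1, q0)) / len(Y)

# toy randomized trial: true risk is 0.3 + 0.4*A, so the true effect is 0.4
rng = random.Random(1)
n = 2000
W = [rng.random() for _ in range(n)]
A = [1 if rng.random() < 0.5 else 0 for _ in range(n)]
Y = [1 if rng.random() < 0.3 + 0.4 * a else 0 for a in A]
est = tmle_ate(W, A, Y, q_init=lambda a, w: 0.5, g=0.5)
```

Even with the deliberately crude constant initial regression, the targeting step keeps the estimate consistent because the treatment probability is known; this is the kind of robustness the TMLE abstracts in this listing rely on.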

    Estimating treatment importance in multidrug-resistant tuberculosis using Targeted Learning : an observational individual patient data network meta-analysis

    Persons with multidrug-resistant tuberculosis (MDR-TB) have a disease resulting from a strain of tuberculosis (TB) that does not respond to at least isoniazid and rifampicin, the two most effective anti-TB drugs. MDR-TB is always treated with multiple antimicrobial agents. Our data consist of individual patient data from 31 international observational studies with varying prescription practices, access to medications, and distributions of antibiotic resistance. In this study, we develop identifiability criteria for the estimation of a global treatment importance metric in a context where not all medications are observed in all studies. Under stronger causal assumptions, this treatment importance metric can be interpreted as the effect of adding a medication to the existing treatments. We then use this metric to rank 15 observed antimicrobial agents in terms of their estimated add-on value. Using the concept of transportability, we propose an implementation of targeted maximum likelihood estimation, a doubly robust and locally efficient plug-in estimator, to estimate the treatment importance metric. A clustered sandwich estimator is adopted to compute variance estimates and produce confidence intervals. Simulation studies are conducted to assess the performance of our estimator, verify the double robustness property, and assess the appropriateness of the variance estimation approach.
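The clustered sandwich variance mentioned at the end of this abstract has a compact generic form for any asymptotically linear estimator: sum the estimated influence-function values within each cluster, then scale the sum of squared cluster totals. A minimal sketch with made-up values; the real estimator would use the influence function of the specific TMLE.

```python
from collections import defaultdict

def clustered_sandwich_variance(ic_values, cluster_ids):
    """Variance estimate for an asymptotically linear estimator from its
    (mean-zero) estimated influence-function values, treating observations
    in the same cluster as dependent:
        Var ~= (1/n^2) * sum over clusters j of (sum of IC_i in j)^2
    With singleton clusters this reduces to the usual IC-based variance."""
    n = len(ic_values)
    sums = defaultdict(float)
    for ic, cid in zip(ic_values, cluster_ids):
        sums[cid] += ic
    return sum(s * s for s in sums.values()) / (n * n)

# within-cluster dependence changes the answer even for identical values:
independent = clustered_sandwich_variance([1.0, -1.0, 2.0, -2.0],
                                          ["a", "b", "c", "d"])
paired = clustered_sandwich_variance([1.0, -1.0, 2.0, -2.0],
                                     ["a", "a", "b", "b"])
```

In the paired case the perfectly offsetting values within each cluster cancel, so the clustered variance is smaller than the independence-based one; positive within-cluster correlation would instead inflate it, which is why ignoring clustering typically understates uncertainty.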

    How does star formation proceed in the circumnuclear starburst ring of NGC 6951?

    Gas inflowing along stellar bars is often stalled at the location of circumnuclear rings, which form an effective reservoir for massive star formation and thus shape the central regions of galaxies. However, exactly how star formation proceeds within these circumnuclear starburst rings is a subject of debate. Two main scenarios for this process have been put forward. In the first, the onset of star formation is regulated by the total amount of gas present in the ring, with star formation starting, once a mass threshold has been reached, at a `random' position within the ring, like `popcorn'. In the second, star formation preferentially takes place near the locations where the gas enters the ring; this scenario has been dubbed `pearls-on-a-string'. Here we combine new optical IFU data covering the full stellar bar with existing multi-wavelength data to study in detail the 580 pc radius circumnuclear starburst ring in the nearby spiral galaxy NGC 6951. Using HST archival data together with Sauron and Oasis IFU data, we derive the ages and stellar masses of the star clusters as well as the total stellar content of the central region. Adding information on the molecular gas distribution, stellar and gaseous dynamics, and extinction, we find that the circumnuclear ring in NGC 6951 is ~1-1.5 Gyr old and has been forming stars for most of that time. We see evidence for preferred sites of star formation within the ring, consistent with the `pearls-on-a-string' scenario, when focusing on the youngest stellar populations. Due to the ring's longevity, this signature is washed out when older stellar populations are included in the analysis.
    Comment: accepted for publication in A&A, 15 pages

    The Effect of Biographical Characteristics, Quality of Work Life, and Training on Employee Loyalty, with Job Satisfaction as an Intervening Variable

    The aims of this study were to analyze (1) the effect of biographical characteristics on employee loyalty, both directly and indirectly through job satisfaction; (2) the effect of quality of work life on employee loyalty, both directly and indirectly through job satisfaction; and (3) the effect of training on employee loyalty, both directly and indirectly through job satisfaction. The sample comprised 42 employees, selected using a saturated (census) sampling technique. Data were collected with a questionnaire and analyzed using multiple linear regression. The results show that (1) biographical characteristics have a significant effect on employee loyalty, both directly and through job satisfaction; (2) quality of work life has a significant effect on employee loyalty, both directly and through job satisfaction; and (3) training has a significant effect on employee loyalty, both directly and through job satisfaction.
