
    An Efficient Interpolation Technique for Jump Proposals in Reversible-Jump Markov Chain Monte Carlo Calculations

    Selection among alternative theoretical models given an observed data set is an important challenge in many areas of physics and astronomy. Reversible-jump Markov chain Monte Carlo (RJMCMC) is an extremely powerful technique for performing Bayesian model selection, but it suffers from a fundamental difficulty: it requires jumps between model parameter spaces, but cannot efficiently explore both parameter spaces at once. Thus, a naive jump between parameter spaces is unlikely to be accepted in the MCMC algorithm and convergence is correspondingly slow. Here we demonstrate an interpolation technique that uses samples from single-model MCMCs to propose inter-model jumps from an approximation to the single-model posterior of the target parameter space. The interpolation technique, based on a kD-tree data structure, is adaptive and efficient in modest dimensionality. We show that our technique leads to improved convergence over naive jumps in an RJMCMC, and compare it to other proposals in the literature for improving the convergence of RJMCMCs. We also demonstrate the use of the same interpolation technique as a way to construct efficient "global" proposal distributions for single-model MCMCs without prior knowledge of the structure of the posterior distribution, and discuss improvements that permit the method to be used efficiently in higher-dimensional spaces. Comment: minor revision to match the published version.
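    To make the technique concrete, the sketch below builds a simple kD-tree over stored single-model posterior samples and uses it as an inter-model jump proposal: pick a stored sample at random, then draw uniformly within its leaf box, so that the proposal density tracks the approximated posterior. The leaf size, box construction and all class/method names are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

class KDTreeProposal:
    """Approximate a target-model posterior from stored MCMC samples with a
    kD-tree of axis-aligned boxes, then use it as an inter-model jump proposal.
    Minimal sketch only: leaf size and box bounds are illustrative choices."""

    def __init__(self, samples, leaf_size=8):
        self.samples = np.asarray(samples)            # shape (N, d)
        self.N, self.d = self.samples.shape
        self.leaves = []                              # list of (indices, lo, hi)
        self._build(np.arange(self.N),
                    self.samples.min(axis=0), self.samples.max(axis=0),
                    leaf_size, depth=0)

    def _build(self, idx, lo, hi, leaf_size, depth):
        if len(idx) <= leaf_size:
            self.leaves.append((idx, lo.copy(), hi.copy()))
            return
        dim = depth % self.d                          # cycle through dimensions
        med = np.median(self.samples[idx, dim])
        left = idx[self.samples[idx, dim] <= med]
        right = idx[self.samples[idx, dim] > med]
        if len(left) == 0 or len(right) == 0:         # degenerate split: stop here
            self.leaves.append((idx, lo.copy(), hi.copy()))
            return
        hi_left, lo_right = hi.copy(), lo.copy()
        hi_left[dim] = med
        lo_right[dim] = med
        self._build(left, lo, hi_left, leaf_size, depth + 1)
        self._build(right, lo_right, hi, leaf_size, depth + 1)

    def draw(self, rng):
        """Pick a stored sample uniformly, then draw uniformly inside its leaf
        box, so densely sampled posterior regions are proposed more often."""
        i = rng.integers(self.N)
        for idx, lo, hi in self.leaves:
            if i in idx:
                return rng.uniform(lo, hi)

    def logpdf(self, x):
        """Proposal density, (fraction of samples in x's leaf) / (leaf volume),
        as needed in the reversible-jump Metropolis-Hastings ratio."""
        for idx, lo, hi in self.leaves:
            if np.all(x >= lo) and np.all(x <= hi):
                return np.log(len(idx) / self.N) - np.log(np.prod(hi - lo))
        return -np.inf                                # outside the sampled region
```

    In an RJMCMC step, one such proposal would be built from the target model's single-model chain, and its logpdf would enter the acceptance ratio for both the forward and reverse jumps.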

    Strategic Planning Constraints within a Fast Pace Changing Organizational Context

    The purpose of this study is to identify how fast-paced change in the organizational context impacts strategic planning, and to better understand the main constraints that organizations face in their strategic planning process.

    Can a Post-Discharge Telephone Call Reduce Hospital Readmission after Colorectal Surgery? A Prospective Study

    BACKGROUND: Hospital readmission after major colorectal surgery is a major economic burden and a benchmark of quality care by government agencies. We hypothesized that a post-discharge telephone follow-up (TFU) could reduce readmission after abdominal colorectal surgery. METHODS: Consecutive patients undergoing abdominal colorectal surgery over the 4-month period ending October 2016 were prospectively evaluated. A structured TFU call during the 4-day period after hospital discharge, evaluating the patient's clinical status and possible interventions to avoid readmission, was conducted by a second-year medical student supervised by two board-certified colorectal surgeons. Readmission rates were compared to a control group of patients who underwent abdominal colorectal surgery by the same surgeons over the prior 12-month period and did not receive TFU. Low-complexity surgery was defined as small bowel resection, right colectomy, or creation or revision of ileostomy or colostomy. High-complexity surgery included left or total colectomy, or proctectomy with or without diversion. Groups were compared using Fisher's exact test. RESULTS: The TFU patient group (n=74) and control patient group (n=134) were well matched in all clinical and operative characteristics except for case complexity. TFU group patients were more likely to undergo low-complexity surgery (n=41; 55%) compared to control group patients (n=35; 26%) (p=0.001). Readmission rates in the TFU patient group (n=9; 12%) and control patient group (n=26; 19%) were comparable (p=0.25). For patients undergoing high-complexity surgery, readmission rates were not statistically different between TFU patients (n=6; 18%) and control patients (n=14; 14%). For patients undergoing low-complexity surgery, readmission rates were significantly lower in the TFU patient group (n=3; 7%) compared to the control patient group (n=12; 34%) (p=0.004). CONCLUSIONS: A simple, post-discharge, medical student-led phone call significantly reduced the rate of readmission after low-complexity but not high-complexity colorectal surgery. Readmission after high-complexity colorectal surgery appears unpreventable. We recommend early post-discharge telephone follow-up to reduce readmission after abdominal colorectal surgery.
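    The low-complexity comparison reported above (3 of 41 TFU patients versus 12 of 35 controls readmitted) can be checked with a standard Fisher's exact test; the 2x2 table layout below is an assumption about how the counts were tabulated, and the computed p-value may differ slightly from the reported one.

```python
from scipy.stats import fisher_exact

# Low-complexity surgery, counts taken from the abstract:
# rows = groups, columns = (readmitted, not readmitted).
tfu     = [3, 41 - 3]     # TFU group: 3 of 41 readmitted
control = [12, 35 - 12]   # control group: 12 of 35 readmitted

odds_ratio, p_value = fisher_exact([tfu, control], alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")  # expected to be close to the reported p=0.004
```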

    On the initial estimate of interface forces in FETI methods

    The Balanced Domain Decomposition (BDD) method and the Finite Element Tearing and Interconnecting (FETI) method are two commonly used non-overlapping domain decomposition methods. Because of their strong theoretical and numerical similarities, the two methods are generally considered equally efficient. However, in some particular cases, such as structures with strong heterogeneities, FETI requires a large number of iterations to compute the solution compared to BDD. In this paper, the poor efficiency of FETI in these cases is traced back to poor initial estimates of the interface stresses. To improve the estimation of the interface forces, a novel strategy for splitting interface forces between neighboring substructures is proposed; the additional computational cost incurred is not significant. This yields a new initialization for the FETI method and restores numerical efficiency, making FETI comparable to BDD even for problems where it previously performed poorly. Various simple test problems are presented to discuss the efficiency of the proposed strategy and to illustrate the resulting numerical equivalence between the BDD and FETI solvers.
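    The abstract does not spell out the splitting strategy itself, so the snippet below only illustrates the general idea with a generic, stiffness-weighted heuristic: apportion an interface load between two neighbouring substructures in proportion to their interface stiffness, so that strongly heterogeneous interfaces start from a more physical initial force estimate. This is an illustrative stand-in, not the method proposed in the paper.

```python
import numpy as np

def stiffness_weighted_split(f_interface, k_diag_A, k_diag_B):
    """Split an interface load between two neighbouring substructures in
    proportion to the diagonal interface stiffness of each side.  Generic,
    superlumped-style heuristic for an initial interface-force estimate,
    NOT the splitting strategy proposed in the paper."""
    k_diag_A = np.asarray(k_diag_A, dtype=float)
    k_diag_B = np.asarray(k_diag_B, dtype=float)
    w_A = k_diag_A / (k_diag_A + k_diag_B)    # stiffer side carries a larger share
    return w_A * f_interface, (1.0 - w_A) * f_interface

# Example: a soft substructure (k ~ 1) joined to a stiff one (k ~ 1e3).
f = np.array([10.0, 10.0])
f_A, f_B = stiffness_weighted_split(f, k_diag_A=[1.0, 1.0], k_diag_B=[1e3, 1e3])
print(f_A, f_B)   # nearly all of the load is attributed to the stiff side
```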

    EM localization and separation using interaural level and phase cues

    We describe a system for localizing and separating multiple sound sources from a reverberant two-channel recording. It consists of a probabilistic model of interaural level and phase differences and an EM algorithm for finding the maximum-likelihood parameters of this model. By assigning points in the interaural spectrogram probabilistically to sources with the best-fitting parameters, and then estimating the parameters of the sources from the points assigned to them, the system is able to separate and localize more sound sources than there are available channels. It is also able to estimate frequency-dependent level differences of sources in a mixture that correspond well to those measured in isolation. In experiments in simulated anechoic and reverberant environments, the proposed system improved the signal-to-noise ratio of target sources by 2.7 and 3.4 dB more than two comparable algorithms on average.
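    As a rough illustration of the EM machinery described above, the sketch below fits a Gaussian mixture over the interaural level difference (ILD) at each time-frequency point and returns soft masks assigning points to sources. It is an ILD-only simplification: the paper's model also uses interaural phase differences with frequency-dependent parameters, and the function and parameter names here are illustrative.

```python
import numpy as np

def em_ild_separation(ild, n_src=2, n_iter=50, seed=0):
    """Cluster time-frequency points by interaural level difference with a
    Gaussian-mixture EM and return soft source masks (ILD-only sketch)."""
    rng = np.random.default_rng(seed)
    x = ild.ravel()                                   # one ILD value per T-F point
    mu = rng.choice(x, n_src)                         # initial per-source ILD means
    var = np.full(n_src, x.var() + 1e-6)
    pi = np.full(n_src, 1.0 / n_src)

    for _ in range(n_iter):
        # E-step: posterior probability that each T-F point belongs to each source.
        log_lik = (-0.5 * (x[:, None] - mu) ** 2 / var
                   - 0.5 * np.log(2 * np.pi * var) + np.log(pi))
        log_lik -= log_lik.max(axis=1, keepdims=True)
        gamma = np.exp(log_lik)
        gamma /= gamma.sum(axis=1, keepdims=True)

        # M-step: re-estimate each source's ILD mean, variance and weight.
        nk = gamma.sum(axis=0) + 1e-12
        mu = (gamma * x[:, None]).sum(axis=0) / nk
        var = (gamma * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
        pi = nk / nk.sum()

    return mu, var, gamma.reshape(*ild.shape, n_src)  # masks on the spectrogram grid
```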

    Double Compact Objects II: Cosmological Merger Rates

    The development of advanced gravitational wave (GW) observatories, such as Advanced LIGO and Advanced Virgo, provides impetus to refine theoretical predictions for what these instruments might detect. In particular, with the range increasing by an order of magnitude, the search for GW sources is extending beyond the "local" Universe and out to cosmological distances. Double compact objects (neutron star-neutron star (NS-NS), black hole-neutron star (BH-NS) and black hole-black hole (BH-BH) systems) are considered to be the most promising gravitational wave sources. In addition, NS-NS and/or BH-NS systems are thought to be the progenitors of gamma-ray bursts (GRBs), and may also be associated with kilonovae. In this paper we present the merger event rates of these objects as a function of cosmological redshift. We provide the results for four cases, each one investigating a different important evolution parameter of binary stars. Each case is also presented for two metallicity evolution scenarios. We find that (i) in most cases NS-NS systems dominate the merger rates in the local Universe, while BH-BH mergers dominate at high redshift; (ii) BH-NS mergers are less frequent than other sources per unit volume, for all time; and (iii) natal kicks may alter the observable properties of populations in a significant way, allowing the underlying models of binary evolution and compact object formation to be easily distinguished. This is the second paper in a series of three. The third paper will focus on calculating the detection rates of mergers by gravitational wave telescopes. Comment: 8 pages, 10 figures, second in series, accepted for ApJ.
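    As a generic illustration of how a cosmological merger-rate history is often assembled (not the population-synthesis calculation used in the paper), the sketch below convolves an analytic star-formation history with a power-law delay-time distribution in a flat LambdaCDM cosmology. The star-formation fit, delay-time slope and cosmological parameters are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import quad, trapezoid

H0 = 70.0 * 1.022e-3          # Hubble constant: km/s/Mpc converted to 1/Gyr
OM, OL = 0.3, 0.7             # illustrative flat LambdaCDM parameters

def E(z):
    return np.sqrt(OM * (1 + z) ** 3 + OL)

def age(z):
    """Age of the Universe at redshift z, in Gyr."""
    return quad(lambda zp: 1.0 / ((1 + zp) * E(zp)), z, np.inf)[0] / H0

def sfr(z):
    """Madau & Dickinson (2014)-style star-formation rate density (Msun/yr/Mpc^3)."""
    return 0.015 * (1 + z) ** 2.7 / (1 + ((1 + z) / 2.9) ** 5.6)

def dtd(tau_gyr, t_min=0.01, slope=-1.0):
    """Power-law delay-time distribution between binary formation and merger."""
    tau = np.asarray(tau_gyr, dtype=float)
    return np.where(tau > t_min, tau ** slope, 0.0)

def merger_rate_density(z_merge, z_grid=np.linspace(0.0, 15.0, 400)):
    """Relative merger-rate density at z_merge: integrate star formation at
    z_f > z_merge, weighted by the delay between formation and merger."""
    t_merge = age(z_merge)
    zf = z_grid[z_grid > z_merge]
    t_form = np.array([age(z) for z in zf])
    dt_dz = 1.0 / ((1 + zf) * E(zf)) / H0             # |dt/dz| in Gyr
    return trapezoid(sfr(zf) * dtd(t_merge - t_form) * dt_dz, zf)

print([merger_rate_density(z) for z in (0.0, 1.0, 2.0, 5.0)])
# Arbitrary normalisation: only the shape of the rate with redshift is meaningful here.
```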

    The Formation and Gravitational-Wave Detection of Massive Stellar Black-Hole Binaries

    If binaries consisting of two 100 Msun black holes exist, they would serve as extraordinarily powerful gravitational-wave sources, detectable to redshifts of z=2 with the advanced LIGO/Virgo ground-based detectors. Large uncertainties about the evolution of massive stars preclude definitive rate predictions for mergers of these massive black holes. We show that rates as high as hundreds of detections per year, or as low as no detections whatsoever, are both possible. It was thought that the only way to produce these massive binaries was via dynamical interactions in dense stellar systems. This view has been challenged by the recent discovery of several stars with mass above 150 Msun in the R136 region of the Large Magellanic Cloud. Current models predict that when stars of this mass leave the main sequence, their expansion is insufficient to allow common envelope evolution to efficiently reduce the orbital separation. The resulting black hole-black hole binary remains too wide to be able to coalesce within a Hubble time. If this assessment is correct, isolated very massive binaries do not evolve to be gravitational-wave sources. However, other formation channels exist. For example, the high multiplicity of massive stars, and their common formation in relatively dense stellar associations, opens up dynamical channels for massive black hole mergers (e.g., via Kozai cycles or repeated binary-single interactions). We identify key physical factors that shape the population of very massive black hole-black hole binaries. Advanced gravitational-wave detectors will provide important constraints on the formation and evolution of very massive stars. Comment: ApJ accepted; extended description of modeling.

    Agent-based dynamics in disaggregated growth models

    This paper presents an agent-based model of disaggregated economic systems with endogenous growth features, named Lagon GeneriC. The model is intended as a proof of concept that dynamically complete and highly disaggregated agent-based models make it possible to treat economies as complex dynamical systems. It is used here for "theory generation", investigating the extension of Gintis' results on the dynamics of general equilibrium to a framework with capital accumulation. Keywords: agent-based models, economic growth.
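    For readers unfamiliar with the approach, the toy loop below shows the bare bones of capital accumulation in a disaggregated, agent-based setting: heterogeneous agents produce, save a fixed share and let the rest of their capital depreciate. It is a generic illustration only, not the Lagon GeneriC model, and all names and parameter values are arbitrary.

```python
import numpy as np

def toy_agent_growth(n_agents=100, n_steps=200, alpha=0.3, s=0.2, delta=0.05, seed=0):
    """Toy disaggregated growth loop: each agent holds capital, produces with a
    Cobb-Douglas technology, saves a share s and depreciates at rate delta.
    Generic illustration only, not the model described in the paper."""
    rng = np.random.default_rng(seed)
    k = rng.lognormal(mean=0.0, sigma=0.5, size=n_agents)    # initial capital stocks
    a = rng.lognormal(mean=0.0, sigma=0.2, size=n_agents)    # agent-level productivity
    mean_capital = []
    for _ in range(n_steps):
        y = a * k ** alpha              # each agent's output this period
        k = (1 - delta) * k + s * y     # accumulation: saving minus depreciation
        mean_capital.append(k.mean())
    return np.array(mean_capital)

print(toy_agent_growth()[-1])   # aggregate capital settles near its steady state
```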