
    Optimal Dorfman Group Testing For Symmetric Distributions

    We study Dorfman's classical group testing protocol in a novel setting where individual specimen statuses are modeled as exchangeable random variables. We are motivated by infectious disease screening. In that case, specimens which arrive together for testing often originate from the same community and so their statuses may exhibit positive correlation. Dorfman's protocol screens a population of n specimens for a binary trait by partitioning it into nonoverlapping groups, testing these, and only individually retesting the specimens of each positive group. The partition is chosen to minimize the expected number of tests under a probabilistic model of specimen statuses. We relax the typical assumption that these are independent and identically distributed and instead model them as exchangeable random variables. In this case, their joint distribution is symmetric in the sense that it is invariant under permutations. We give a characterization of such distributions in terms of a function q where q(h) is the marginal probability that any group of size h tests negative. We use this interpretable representation to show that the set partitioning problem arising in Dorfman's protocol can be reduced to an integer partitioning problem and efficiently solved. We apply these tools to an empirical dataset from the COVID-19 pandemic. The methodology helps explain the unexpectedly high empirical efficiency reported by the original investigators.
    (Comment: 20 pages w/o references, 2 figures)
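    As a concrete illustration of the reduction described above (a sketch under standard assumptions, not the paper's implementation): under exchangeability the expected number of tests for a group depends only on its size h, namely one pooled test plus h individual retests when the pool is positive, so choosing group sizes becomes an integer partition problem that a short dynamic program can solve. The function q below is a placeholder for the paper's exchangeable model.

```python
# Hypothetical sketch: optimal Dorfman group sizes when the group cost depends
# only on group size. q(h) is the marginal probability that a pooled group of
# size h tests negative; the toy q below is a placeholder, not the paper's model.

def q(h: int) -> float:
    p = 0.05                      # illustrative marginal prevalence
    return (1 - p) ** h           # i.i.d. special case; swap in a correlated q(h)

def group_cost(h: int) -> float:
    """Expected tests for one group of size h under Dorfman's protocol."""
    if h == 1:
        return 1.0                        # a singleton needs no retest
    return 1.0 + h * (1.0 - q(h))         # pooled test, plus h retests if positive

def min_expected_tests(n: int, max_group: int = 32):
    """Dynamic program over integer partitions of n specimens."""
    best = [0.0] + [float("inf")] * n
    choice = [0] * (n + 1)
    for m in range(1, n + 1):
        for h in range(1, min(m, max_group) + 1):
            c = best[m - h] + group_cost(h)
            if c < best[m]:
                best[m], choice[m] = c, h
    sizes, m = [], n                      # recover the optimal group sizes
    while m > 0:
        sizes.append(choice[m])
        m -= choice[m]
    return best[n], sorted(sizes, reverse=True)

print(min_expected_tests(100))
```

    Replacing the toy q with one estimated from correlated specimen data is what would let the group sizes exploit the positive correlation noted in the abstract.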

    Sequential updating of a new dynamic pharmacokinetic model for caffeine in premature neonates

    Caffeine treatment is widely used in nursing care to reduce the risk of apnoea in premature neonates. To check the therapeutic efficacy of the treatment against apnoea, caffeine concentration in blood is an important indicator. The present study was aimed at building a pharmacokinetic model as a basis for a medical decision support tool. In the proposed model, time dependence of physiological parameters is introduced to describe the rapid growth of neonates. To take into account the large variability in the population, the pharmacokinetic model is embedded in a population structure. The whole model is inferred within a Bayesian framework. To update caffeine concentration predictions as data on an incoming patient are collected, we propose a fast method that can be used in a medical context. This involves the sequential updating of model parameters (at individual and population levels) via a stochastic particle algorithm. Our model provides better predictions than those obtained with previously published models. We show, through an example, that sequential updating improves predictions of caffeine concentration in blood (reduced bias and shorter credibility intervals). The update of the pharmacokinetic model using body mass and caffeine concentration data is studied; it shows how informative caffeine concentration data are in contrast to body mass data. This study provides the methodological basis to predict caffeine concentration in blood after a given treatment, provided data are collected on the treated neonate.
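    The sequential particle update can be pictured with a minimal sketch: keep a cloud of parameter particles for the incoming patient, reweight them by the likelihood of each new concentration measurement, and resample. The one-compartment model, lognormal priors, dose, and measurement values below are illustrative placeholders, not the model or data from the study.

```python
# Hypothetical sketch: sequential particle updating of individual PK parameters
# as caffeine concentration measurements arrive. One-compartment model with
# first-order elimination; every number below is an illustrative placeholder.

import numpy as np

rng = np.random.default_rng(0)
N = 5000                                                    # number of particles

# Placeholder priors for (clearance CL [L/h], volume V [L]).
particles = np.column_stack([
    rng.lognormal(mean=np.log(0.01), sigma=0.5, size=N),    # CL
    rng.lognormal(mean=np.log(0.8),  sigma=0.3, size=N),    # V
])
weights = np.full(N, 1.0 / N)

def predicted_conc(theta, dose, t):
    """C(t) after a single bolus dose under a one-compartment model."""
    cl, v = theta[:, 0], theta[:, 1]
    return (dose / v) * np.exp(-(cl / v) * t)

def update(particles, weights, dose, t, observed_conc, sigma_obs=0.2):
    """Reweight and resample the particles when a new measurement arrives."""
    pred = predicted_conc(particles, dose, t)
    loglik = -0.5 * ((np.log(observed_conc) - np.log(pred)) / sigma_obs) ** 2
    w = weights * np.exp(loglik - loglik.max())
    w /= w.sum()
    idx = rng.choice(len(w), size=len(w), p=w)               # resample
    jitter = rng.lognormal(0.0, 0.02, size=particles[idx].shape)
    return particles[idx] * jitter, np.full(len(w), 1.0 / len(w))

# Example: one 10 mg dose, concentration measurements at 24 h and 72 h (made up).
for t, c_obs in [(24.0, 9.0), (72.0, 6.5)]:
    particles, weights = update(particles, weights, dose=10.0, t=t,
                                observed_conc=c_obs)

print("posterior mean CL, V:", particles.mean(axis=0))
```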

    Three Essays on Public Procurement

    The following dissertation presents three essays on the theory and empirics of public procurement—the process by which government defines its needs for goods and services and acquires them using contracts. The objective of this dissertation is to address three unresolved questions in the literature regarding how the characteristics of products governments procure, and the environments in which they are bought and sold, shape government and non-governmental actors’ decision making at different points in the procurement process. The first essay develops an expanded theory of government’s decision to directly deliver public services—social welfare, energy utilities, select forms of security provision, and other services for citizens—or contract out these responsibilities to third parties. The essay takes as its point of departure that transaction cost economics—the theory that a product is “made” or “bought” based on the ease or difficulty with which it can be defined, produced, and exchanged via a contract—does not adequately account for the environmental context within which governments select among alternative service delivery modes. The essay rectifies this deficiency by drawing on resource dependence theory, a complementary theory arguing that the make-or-buy decision turns on the nature of the public service marketplace: the number of alternative sellers with which a government can do business, and the amount of revenue sellers derive from this government vis-à-vis their other customers. The argument is that, combined, these factors shape the degree of power government can exercise in a contracting relationship, directly influencing the choice to make or buy a service as well as moderating the impact of service-specific characteristics. This argument is specified in a set of hypotheses and a model for testing in future empirical research.

    The second essay examines how the characteristics of products government chooses to buy (rather than make) influence competition among sellers vying for its business. Drawing from transaction cost economics, the essay argues product complexity—defined and operationalized in terms of asset specificity, or the degree of relationship-specific physical and human capital investments required to produce and deliver a product—is a key determinant of competition. More specifically, the essay argues (i) at higher levels of complexity, and thus of asset specificity, sellers may deem the risks of doing business with the government too high to warrant submitting a bid, but (ii) while lower levels of complexity may decrease these risks, they may also discourage competition by creating a collective action dilemma: for a simpler product, individual sellers may not submit a bid because they believe the competition will be too intense, and their probability of winning too low. This reasoning points to two effects—a project risk effect (simpler products invite more bids) and a win probability effect (simpler products invite fewer bids)—and implies competing hypotheses for how complexity influences competition. The essay presents an econometric test of these hypotheses using a sample of information technology procurements drawn from U.S. federal procurement data, finding that the effects mostly offset one another. The effects likely operate with greater force (in one direction or another) in larger, program-based procurements (e.g., of major weapons or information systems) that can span many years and involve multiple individual contracts for development, production, maintenance, and upgrades. Thus, a more complete theoretical story linking complexity and competition would likely need to make its propositions contingent on the depth and duration of the underlying business relationship, as well as the nature of the product being procured.

    The third essay examines the conditions under which government adopts and implements alternative strategies to procure products after it has selected and awarded its business to one from a competing set of sellers. Specifically, the essay examines the conditions under which government implements a knowledge-based procurement strategy predicated on incremental delivery of product capabilities and a sequential approach to product development and production (typically seen as a best practice), or a strategy predicated on delivering product capabilities in a single-step fashion and using a concurrent approach to development and production activities. The essay starts from the observation that procurements executed in accordance with knowledge-based principles consistently feature strong leadership—individuals purported to be pivotal in ensuring procurements adhere to a strategy anchored in knowledge—and posits that leader commitment influences adoption of the knowledge-based approach through a “credible commitment” mechanism. In essence, tenured leaders serve an advocacy role for the procurements they oversee, ensuring the procurements receive sufficient support and protecting them from policymakers wishing to commit resources to other projects (including projects whose failure to follow a knowledge-based strategy could invite future problems but which, at least in the short run, appear to take less time and provide more capability). In this way, sustained leadership provides teams tasked with managing procurements with incentives to “stay the course”: maintaining adherence to a knowledge-based strategy, pursuing modest capability objectives, and taking the time necessary for sequential development and production. The essay samples and examines a set of four successfully executed United States weapon system procurements to probe the plausibility of the credible commitment mechanism, finding and presenting evidence that leaders do influence employment of knowledge-based strategies in part through this channel.

    On Monte Carlo methods for the Dirichlet process mixture model, and the selection of its precision parameter prior

    Two issues commonly faced by users of Dirichlet process mixture models are: 1) how to appropriately select a hyperprior for the precision parameter alpha, and 2) the typically slow mixing of the MCMC chain produced by conditional Gibbs samplers based on the stick-breaking representation, as opposed to marginal collapsed Gibbs samplers based on the Polya urn, which have smaller integrated autocorrelation times. In this thesis, we analyse the most common approaches to hyperprior selection for alpha, we identify their limitations, and we propose a new methodology to overcome them. To address slow mixing, we first revisit three label-switching Metropolis moves from the literature (Hastie et al., 2015; Papaspiliopoulos and Roberts, 2008), improve them, and introduce a fourth move. Second, we revisit two i.i.d. sequential importance samplers which operate in the collapsed space (Liu, 1996; S. N. MacEachern et al., 1999), and we develop a new sequential importance sampler for the stick-breaking parameters of Dirichlet process mixtures, which operates in the stick-breaking space and which has minimal integrated autocorrelation time. Third, we introduce the i.i.d. transcoding algorithm which, conditional on a partition of the data, can infer which specific stick in the stick-breaking construction each observation originated from. We use it as a building block to develop the transcoding sampler, which removes the need for label-switching Metropolis moves in the conditional stick-breaking sampler, as it uses the better performing marginal sampler (or any other sampler) to drive the MCMC chain, and augments its exchangeable partition posterior with conditional i.i.d. stick-breaking parameter inferences after the fact, thereby inheriting its shorter autocorrelation times.
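    As background for the representations contrasted above, here is a short sketch of the truncated stick-breaking construction that the conditional samplers operate on; the precision alpha and truncation level are arbitrary illustrative choices, and this is textbook material rather than the thesis's new samplers.

```python
# Background sketch: truncated stick-breaking construction of Dirichlet process
# mixture weights, the representation underlying conditional Gibbs samplers.
# The precision alpha and the truncation level are arbitrary illustrative values.

import numpy as np

def stick_breaking(alpha: float, truncation: int, rng=None):
    """Draw mixture weights w_1, ..., w_K from a truncated DP(alpha)."""
    if rng is None:
        rng = np.random.default_rng()
    v = rng.beta(1.0, alpha, size=truncation)            # stick proportions
    v[-1] = 1.0                                           # close off the truncation
    remaining = np.concatenate([[1.0], np.cumprod(1.0 - v[:-1])])
    return v * remaining                                  # w_k = v_k * prod_{j<k}(1 - v_j)

weights = stick_breaking(alpha=2.0, truncation=50)
print(weights[:5], weights.sum())    # sums to 1 because the last stick is closed
```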

    Informed Bayesian survival analysis
