60 research outputs found

    Protection or Harm? Suppressing Substance-Use Data

    What if it were impossible to closely study a disease affecting 1 in 11 Americans over 11 years of age — a disease that’s associated with more than 60,000 deaths in the United States each year, that tears families apart, and that costs society hundreds of billions of dollars? What if the affected population included vulnerable and underserved patients and those more likely than most Americans to have costly and deadly communicable diseases, including HIV/AIDS? What if we could not thoroughly evaluate policies designed to reduce costs or improve care for such patients?

    Five minutes with The Incidental Economist Austin Frakt: “Only 0.04% of published papers in health are reported on by the media, so blogs and other social media can help”

    Health economist Austin Frakt, editor of The Incidental Economist, takes five minutes to talk to LSE Impact blog editor Danielle Moran about how his research blog has increased his exposure and grown into a credible source in academic, media, and policy circles.

    Beyond Capitation: How New Payment Experiments Seek to Find the 'Sweet Spot' in Amount of Risk Providers and Payers Bear

    A key issue in the decades-long struggle over US health care spending is how to distribute liability for expenses across all market participants, from insurers to providers. The rise and abandonment in the 1990s of capitation payments—lump-sum, per-person payments to health care providers to provide all care for a specified individual or group—offers a stark example of how difficult it is for providers to assume meaningful financial responsibility for patient care. This article chronicles the expansion and decline of the capitation model in the 1990s. We offer lessons learned and assess the extent to which these lessons have been applied in the development of contemporary forms of provider cost sharing, particularly accountable care organizations, which in effect constitute a search for the “sweet spot,” or appropriate place on a spectrum between providers and payers with respect to the degree of risk they absorb.

    Internal multiscale autoregressive processes, stochastic realization, and covariance extension

    No full text
    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1999. Includes bibliographical references (p. 209-223) and index. By Austin B. Frakt.

    The focus of this thesis is on the identification of multiscale autoregressive (MAR) models for stochastic processes from second-order statistical characterizations. The class of MAR processes constitutes a rich and powerful stochastic modeling framework that admits efficient statistical inference algorithms. To harness the utility of MAR processes requires that the phenomena of interest be effectively modeled in the framework. This thesis addresses this challenge and develops MAR model identification theory and algorithms that overcome some of the limitations of previous approaches (e.g., model inconsistency and computational complexity) and that extend the breadth of applicability of the framework.

    One contribution of this thesis is the resolution of the problem of model inconsistency. This is achieved through a new parameterization of so-called internal MAR processes. This new parameterization admits a computationally efficient, scale-recursive approach to model realization. The efficiency of this approach stems both from its scale-recursive structure and from a novel application of the estimation-theoretic concept of predictive efficiency.

    Another contribution of this thesis is to provide a unification of the MAR and wavelet frameworks. This unification leads to wavelet-based stochastic models that are fundamentally different from conventional ones.

    A limitation of previous MAR model identification approaches is that they require a complete second-order characterization of the process to be modeled. Relaxing this assumption leads to the problem of covariance extension, in which unknown covariance elements are inferred from known ones. This thesis makes two contributions in this area. First, the classical covariance extension algorithm (Levinson's algorithm) is generalized to address a wider range of extension problems. Second, this algorithm is applied to the problem of designing a MAR model from a partially known covariance matrix.

    The final contribution of this thesis is the development of techniques for incorporating nonlocal variables (e.g., multiresolution measurements) into a MAR model. These techniques are more powerful than those previously developed and lead to computational efficiencies in model realization and statistical inference.
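    The covariance-extension contribution above generalizes the classical Levinson recursion, which fits an autoregressive predictor to known covariance lags and can then extrapolate the unknown ones. As a point of reference, the sketch below shows that classical building block (the standard Levinson-Durbin recursion and the maximum-entropy extension it induces); it illustrates the prior art only, not the thesis's algorithm, which handles a wider class of extension problems.

    ```python
    import numpy as np

    def levinson_durbin(r):
        """Fit AR predictor coefficients to autocovariance lags r[0..p]
        via the classical Levinson-Durbin recursion (O(p^2) time)."""
        p = len(r) - 1
        phi = np.zeros(p + 1)  # phi[1..k] holds the order-k predictor coefficients
        err = r[0]             # prediction-error variance at order 0
        for k in range(1, p + 1):
            # Reflection (partial-correlation) coefficient at order k.
            kappa = (r[k] - np.dot(phi[1:k], r[k - 1:0:-1])) / err
            # Order update: phi_j <- phi_j - kappa * phi_{k-j}; then set phi_k.
            phi[1:k] = phi[1:k] - kappa * phi[k - 1:0:-1]
            phi[k] = kappa
            err *= 1.0 - kappa ** 2
        return phi[1:], err

    def extend_covariance(r, n_extra):
        """Maximum-entropy covariance extension: extrapolate unknown lags by
        running the fitted AR predictor forward over the known ones."""
        r = [float(x) for x in r]
        phi, _ = levinson_durbin(np.asarray(r))
        p = len(phi)
        for _ in range(n_extra):
            r.append(float(np.dot(phi, r[-1:-p - 1:-1])))  # r[n] = sum_j phi_j r[n-j]
        return np.array(r)
    ```

    For example, extend_covariance([1.0, 0.5, 0.25], 3) continues the sequence geometrically (0.125, 0.0625, 0.03125), the maximum-entropy completion of an AR(1)-consistent covariance.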

    Multiscale hypothesis testing with application to anomaly characterization from tomographic projections

    No full text
    Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1996. Includes bibliographical references (p. 151-155). By Austin B. Frakt.

    Making Health Care More Productive

    No full text

    Image Indexing and Retrieval from Digital Libraries

    No full text
    As digital image libraries grow, methods for indexing and searching such libraries based on image content become more important. In this paper we focus on three specific approaches to the representation of images for content-based image indexing and retrieval [13, 20, 29], although others are discussed briefly. The work of [13] represents one of many methods that blend color histograms with some spatial information. The work of [20] is a multiresolution approach that indexes images in a compressed domain. Several methods are discussed in [29]: one for objects, one for shapes, and one for textures. Their common feature is that they explicitly capture the dominant geometric features of images.

    1 Introduction

    Today, digital images are being generated at rates and stored in volumes far too large for manual indexing or retrieval. For example:
    • the FBI receives 40,000 fingerprint images per day [12];
    • by the year 2000, the NASA Earth Observing System will generate terabytes…
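    To make the first of these representation approaches concrete, here is a minimal sketch of color-histogram indexing and retrieval. It illustrates the general idea rather than code from [13] or the other cited papers; the bin count, the joint-RGB binning, and the histogram-intersection similarity are assumptions chosen for the example.

    ```python
    import numpy as np

    def color_histogram(image, bins=8):
        """Coarse joint RGB histogram of an (H, W, 3) uint8 image:
        quantize each channel into `bins` levels, count joint occurrences,
        and normalize so the histogram sums to 1."""
        q = (image.astype(np.uint32) * bins) // 256          # per-channel bin index
        idx = (q[..., 0] * bins + q[..., 1]) * bins + q[..., 2]
        hist = np.bincount(idx.ravel(), minlength=bins ** 3).astype(float)
        return hist / hist.sum()

    def histogram_intersection(h1, h2):
        """Similarity in [0, 1]; 1 means identical color distributions."""
        return float(np.minimum(h1, h2).sum())

    def rank_library(query, library, bins=8):
        """Return library indices ranked by color similarity to the query."""
        qh = color_histogram(query, bins)
        scores = [histogram_intersection(qh, color_histogram(im, bins)) for im in library]
        return sorted(range(len(library)), key=lambda i: -scores[i])
    ```

    A real index would precompute and store the histograms rather than recompute them per query, and methods like [13] additionally encode where in the image each color occurs; the sketch keeps only the global color statistics.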