
    The Long-Term Returns of Obesity Prevention Policies

    This study illustrates the importance for policymakers of long-term budget impact analyses of preventive health policies, specifically those aimed at obesity prevention. The study recommends that the Congressional Budget Office (CBO), the agency responsible for estimating costs of proposed federal legislation, develop the capacity to estimate the costs of these policies over a 75-year horizon. Obesity rates have doubled among adults in the last twenty years and tripled among children in a single generation. Evidence suggests that by 2040 roughly half the adult population may be obese. Obesity increases the risk of type 2 diabetes, high blood pressure, heart disease, certain types of cancer, stroke, and many other diseases and conditions. These associated conditions carry high financial costs and can be devastating to quality of life. Health care spending due to obesity is estimated to be as high as $210 billion annually, or 21 percent of total national health care spending. When also accounting for the nonmedical costs of obesity, the overall annual cost is estimated to be $450 billion. The Institute of Medicine and other scientific bodies have identified evidence-based strategies for addressing the childhood obesity epidemic. One impediment to pursuing obesity prevention policies at the federal level lies in how their budgetary impacts are assessed. CBO generally uses a ten-year budget window, but effective preventive health measures can have long-run budgetary impacts that differ greatly from their ten-year projections. In fact, very little of the federal savings they induce may be captured in the first decade, especially if an intervention is geared toward children or young adults and yields meaningful impacts on health care costs for individuals receiving Medicare decades in the future. In addition to distorting policymakers' understanding of the net cost of preventive health policies, a narrow budget window also fails to distinguish between effective and ineffective interventions. Because a ten-year window misses most or all of the savings from an effective obesity prevention policy, a ten-year cost estimate for such a policy would not differ from a ten-year estimate for an ineffective one. This study constructs an illustrative model of the long-term budget impact of obesity prevention policies, accounting for the Medicaid, Medicare, Social Security, and tax effects of preventing obesity. The model demonstrates the complexities involved in reaching a long-term cost estimate. Using four obesity prevention policies and programs as examples, the model generates lifetime (i.e., 75-year) per-capita savings estimates for different types of people. In so doing, it makes it possible to quantify the discrepancy between 75-year and ten-year cost estimates of a policy to prevent obesity
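
    As a rough illustration of the budget-window argument above (not the study's actual model), the following sketch compares the present value of per-capita federal savings captured in a ten-year versus a 75-year window for a hypothetical childhood intervention whose savings arrive mostly after Medicare eligibility; every parameter value is an invented placeholder.

    # Illustrative sketch (not the study's model): why a ten-year budget window can
    # miss nearly all federal savings from a childhood obesity prevention policy.
    # Every parameter value below is a hypothetical placeholder.
    def discounted_savings(annual_savings_by_age, start_age, horizon_years, discount_rate=0.03):
        """Present value of per-capita federal savings over a budget window."""
        return sum(
            annual_savings_by_age(start_age + t) / (1 + discount_rate) ** t
            for t in range(horizon_years)
        )

    # Hypothetical savings profile for a child reached at age 10: negligible federal
    # savings until Medicare eligibility at 65, then a constant annual offset.
    def savings_profile(age):
        return 1500.0 if age >= 65 else 50.0  # dollars per person per year (illustrative)

    ten_year = discounted_savings(savings_profile, start_age=10, horizon_years=10)
    lifetime = discounted_savings(savings_profile, start_age=10, horizon_years=75)

    print(f"10-year window: ${ten_year:,.0f} per capita")
    print(f"75-year window: ${lifetime:,.0f} per capita")
    print(f"Share of lifetime savings visible in the first decade: {ten_year / lifetime:.1%}")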

    Optimum spectral window for imaging of art with optical coherence tomography

    Optical Coherence Tomography (OCT) has been shown to have potential for important applications in the field of art conservation and archaeology due to its ability to image subsurface microstructures non-invasively. However, its depth of penetration in painted objects is limited due to the strong scattering properties of artists' paints. VIS-NIR (400 nm – 2400 nm) reflectance spectra of a wide variety of paints made with historic artists' pigments have been measured. The best spectral window in which to use OCT for imaging the subsurface structure of paintings was found to be around 2.2 μm. The same spectral window would also be most suitable for direct infrared imaging of preparatory sketches under the paint layers. The reflectance spectra from a large sample of chemically verified pigments provide information on the spectral transparency of historic artists' pigments/paints as well as a reference set of spectra for pigment identification. The results of the paper suggest that broadband sources at ~2 μm are highly desirable for OCT applications in art and potentially in materials science in general
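
    A minimal sketch of the kind of analysis described above, assuming hypothetical transmittance data derived from reflectance spectra: rank candidate OCT source bands by the median spectral transparency of a set of paints. The data, the band list and the simple transparency proxy are illustrative assumptions, not the paper's measurements.

    import numpy as np

    # Hypothetical input standing in for the measured spectra: wavelengths (nm) and a
    # (paints x wavelengths) matrix of apparent transmittance values derived from
    # VIS-NIR reflectance measurements.
    wavelengths = np.arange(400, 2401, 10)
    rng = np.random.default_rng(0)
    transmittance = np.clip(rng.normal(0.3, 0.2, size=(50, wavelengths.size)), 0.0, 1.0)

    # Candidate OCT source bands (centre wavelength +/- half-width, in nm).
    candidate_bands = [(930, 50), (1310, 60), (1550, 60), (2200, 100)]

    def band_score(center, half_width):
        """Median transmittance across paints, averaged over the band."""
        in_band = (wavelengths >= center - half_width) & (wavelengths <= center + half_width)
        return np.median(transmittance[:, in_band].mean(axis=1))

    for center, half_width in candidate_bands:
        print(f"{center} nm band: median paint transmittance {band_score(center, half_width):.2f}")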

    Determinants of participation in a web-based health risk assessment and consequences for health promotion programs

    Background: The health risk assessment (HRA) is a type of health promotion program frequently offered at the workplace. Insight into the underlying determinants of participation is needed to evaluate and implement these interventions. Objective: To analyze whether individual characteristics including demographics, health behavior, self-rated health, and work-related factors are associated with participation and nonparticipation in a Web-based HRA. Methods: Determinants of participation and nonparticipation were investigated in a cross-sectional study among individuals employed at five Dutch organizations. Multivariate logistic regression was performed to identify determinants of participation and nonparticipation in the HRA after controlling for organization and all other variables. Results: Of the 8431 employees who were invited, 31.9% (2686/8431) enrolled in the HRA. The online questionnaire was completed by 27.2% (1564/5745) of the nonparticipants. Determinants of participation were some periods of stress at home or work in the preceding year (OR 1.62, 95% CI 1.08-2.42), a decreasing number of weekdays on which at least 30 minutes were spent on moderate to vigorous physical activity (OR per day of physical activity 0.84, 95% CI 0.79-0.90), and increasing alcohol consumption. Determinants of nonparticipation were less-than-positive self-rated health (poor/very poor vs very good, OR 0.25, 95% CI 0.08-0.81) and tobacco use (at least weekly vs none, OR 0.65, 95% CI 0.46-0.90). Conclusions: This study showed that with regard to isolated health behaviors (insufficient physical activity, excess alcohol consumption, and stress), those who could benefit most from the HRA were more likely to participate. However, tobacco users and those who rated their health as less than positive were less likely to participate.
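
    A sketch of the type of analysis reported above, a multivariate logistic regression of participation on individual characteristics while controlling for organization, using the Python statsmodels package. The variable names and the randomly generated placeholder records are assumptions for illustration, not the study's dataset.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Randomly generated placeholder records standing in for the invited employees.
    rng = np.random.default_rng(42)
    n = 2000
    df = pd.DataFrame({
        "participated": rng.integers(0, 2, n),
        "organization": rng.integers(1, 6, n),        # five organizations
        "stress_at_home_or_work": rng.integers(0, 2, n),
        "days_moderate_pa": rng.integers(0, 8, n),    # days with >= 30 min of activity
        "alcohol_units_week": rng.poisson(5, n),
        "self_rated_health": rng.integers(1, 6, n),   # 1 = very poor ... 5 = very good
        "tobacco_use": rng.integers(0, 2, n),
    })

    # Multivariate logistic regression of participation, controlling for organization.
    result = smf.logit(
        "participated ~ C(organization) + stress_at_home_or_work + days_moderate_pa"
        " + alcohol_units_week + C(self_rated_health) + C(tobacco_use)",
        data=df,
    ).fit(disp=False)

    # Odds ratios with 95% confidence intervals, as reported in the abstract.
    ci = result.conf_int()
    print(pd.DataFrame({
        "OR": np.exp(result.params),
        "CI_low": np.exp(ci[0]),
        "CI_high": np.exp(ci[1]),
    }).round(2))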

    An international comparative analysis and roadmap to sustainable biosimilar markets

    Background: Although biosimilar uptake has increased (at a variable pace) in many countries, there have been recent concerns about the long-term sustainability of biosimilar markets. The aim of this manuscript is to assess the sustainability of policies across the biosimilar life cycle in selected countries with a view to proposing recommendations for supporting biosimilar sustainability. Methods: The study conducted a comparative analysis across 17 countries from North America, South America, Asia-Pacific, Europe and the Gulf Cooperation Council. Biosimilar policies were identified and their sustainability was assessed based on country-specific reviews of the scientific and grey literature, validation by industry experts and 23 international and local non-industry experts, and two advisory board meetings with these non-industry experts. Results: Given that European countries tend to have more experience with biosimilars and more developed policy frameworks, they generally have higher sustainability scores than the other selected countries. Existing approaches to biosimilar manufacturing and R&D, policies guaranteeing safe and high-quality biosimilars, exemption from the requirement to apply health technology assessment to biosimilars, and initiatives counteracting biosimilar misconceptions are considered sustainable. However, biosimilar contracting approaches, as well as biosimilar education and understanding, could be improved in all selected countries. Also, similar policies are sometimes perceived to be sustainable in some markets but not in others. More generally, the sustainability of the biosimilar landscape depends on the nature of the healthcare system, existing pharmaceutical market access policies, and the country's experience with biosimilar use and biosimilar policies. This suggests that there is no single biosimilar policy toolkit that ensures sustainability; the appropriate policy mix varies from country to country. Conclusion: This study proposes a set of elements that should underpin sustainable biosimilar policy development over time in a country. At first, biosimilar policies should guarantee the safety and quality of biosimilars, healthy levels of supply, and a level of cost savings. As a country gains experience with biosimilars, policies need to optimise uptake and combat any misconceptions about biosimilars. Finally, a country should implement biosimilar policies that foster competition, expand treatment options and ensure a sustainable market environment

    Readout of a quantum processor with high dynamic range Josephson parametric amplifiers

    We demonstrate a high dynamic range Josephson parametric amplifier (JPA) in which the active nonlinear element is implemented using an array of rf-SQUIDs. The device is matched to the 50 Ω environment with a Klopfenstein-taper impedance transformer and achieves a bandwidth of 250-300 MHz, with input saturation powers up to -95 dBm at 20 dB gain. A 54-qubit Sycamore processor was used to benchmark these devices, providing a calibration for readout power, an estimate of amplifier added noise, and a platform for comparison against standard impedance-matched parametric amplifiers with a single dc-SQUID. We find that the high power rf-SQUID array design has no adverse effect on system noise, readout fidelity, or qubit dephasing, and we estimate an upper bound on amplifier added noise at 1.6 times the quantum limit. Lastly, amplifiers with this design show no degradation in readout fidelity due to gain compression, which can occur in multi-tone multiplexed readout with traditional JPAs. Comment: 9 pages, 8 figures
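
    A back-of-the-envelope sketch relating the quoted figures, assuming a typical readout frequency of 5 GHz (not given in the abstract) and reading "1.6 times the quantum limit" against the standard half-photon added-noise limit for phase-preserving amplifiers; both assumptions are labelled in the code.

    import math

    h = 6.62607015e-34   # Planck constant, J*s
    f_readout = 5e9      # assumed readout frequency in Hz (not stated in the abstract)

    # The -95 dBm input saturation power expressed in watts and as a photon flux.
    p_sat_watts = 10 ** ((-95 - 30) / 10)
    photon_flux = p_sat_watts / (h * f_readout)
    print(f"Saturation power: {p_sat_watts:.2e} W, i.e. about {photon_flux:.1e} photons/s at 5 GHz")

    # "1.6 times the quantum limit" read as added input-referred noise quanta,
    # taking the half-photon limit for a phase-preserving amplifier.
    quantum_limit_quanta = 0.5
    print(f"Added-noise upper bound: ~{1.6 * quantum_limit_quanta:.1f} photons referred to the input")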

    Measurement-Induced State Transitions in a Superconducting Qubit: Within the Rotating Wave Approximation

    Superconducting qubits typically use a dispersive readout scheme, where a resonator is coupled to a qubit such that its frequency is qubit-state dependent. Measurement is performed by driving the resonator, where the transmitted resonator field yields information about the resonator frequency and thus the qubit state. Ideally, we could use arbitrarily strong resonator drives to achieve a target signal-to-noise ratio in the shortest possible time. However, experiments have shown that when the average resonator photon number exceeds a certain threshold, the qubit is excited out of its computational subspace, which we refer to as a measurement-induced state transition. These transitions degrade readout fidelity, and constitute leakage which precludes further operation of the qubit in, for example, error correction. Here we study these transitions using a transmon qubit by experimentally measuring their dependence on qubit frequency, average photon number, and qubit state, in the regime where the resonator frequency is lower than the qubit frequency. We observe signatures of resonant transitions between levels in the coupled qubit-resonator system that exhibit noisy behavior when measured repeatedly in time. We provide a semi-classical model of these transitions based on the rotating wave approximation and use it to predict the onset of state transitions in our experiments. Our results suggest the transmon is excited to levels near the top of its cosine potential following a state transition, where the charge dispersion of higher transmon levels explains the observed noisy behavior of state transitions. Moreover, occupation in these higher energy levels poses a major challenge for fast qubit reset
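
    For scale, a quick numerical sketch of the textbook dispersive-readout quantities (dispersive shift and critical photon number) that frame the photon-number thresholds discussed above. The coupling, detuning and anharmonicity values are assumed typical transmon numbers, not the experiment's parameters, and the paper's semi-classical model goes beyond these simple estimates.

    import math

    # Assumed typical transmon/resonator parameters (angular frequencies, rad/s).
    g = 2 * math.pi * 100e6        # qubit-resonator coupling
    delta = 2 * math.pi * 1.5e9    # qubit minus resonator frequency (resonator below qubit)
    alpha = 2 * math.pi * (-200e6) # transmon anharmonicity

    # Textbook dispersive shift and the critical photon number beyond which the
    # dispersive approximation starts to break down.
    chi = g**2 * alpha / (delta * (delta + alpha))
    n_crit = delta**2 / (4 * g**2)

    print(f"chi / 2*pi = {chi / (2 * math.pi) / 1e6:.2f} MHz")
    print(f"n_crit     = {n_crit:.0f} photons")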

    Overcoming leakage in scalable quantum error correction

    Leakage of quantum information out of computational states into higher energy states represents a major challenge in the pursuit of quantum error correction (QEC). In a QEC circuit, leakage builds over time and spreads through multi-qubit interactions. This leads to correlated errors that degrade the exponential suppression of logical error with scale, challenging the feasibility of QEC as a path towards fault-tolerant quantum computation. Here, we demonstrate the execution of a distance-3 surface code and distance-21 bit-flip code on a Sycamore quantum processor where leakage is removed from all qubits in each cycle. This shortens the lifetime of leakage and curtails its ability to spread and induce correlated errors. We report a ten-fold reduction in steady-state leakage population on the data qubits encoding the logical state and an average leakage population of less than 1×10⁻³ throughout the entire device. The leakage removal process itself efficiently returns leakage population back to the computational basis, and adding it to a code circuit prevents leakage from inducing correlated error across cycles, restoring a fundamental assumption of QEC. With this demonstration that leakage can be contained, we resolve a key challenge for practical QEC at scale. Comment: Main text: 7 pages, 5 figures
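
    A toy per-cycle rate model, with invented injection and removal rates, illustrating why removing leakage every cycle caps the steady-state leakage population; it is not the experiment's model and does not use its measured rates.

    # Toy rate model: leakage is injected with probability p_inject each QEC cycle
    # and removed with probability p_remove. Both rates are invented placeholders.
    def steady_state_leakage(p_inject, p_remove, n_cycles=300):
        leaked = 0.0
        for _ in range(n_cycles):
            leaked = leaked * (1 - p_remove) + (1 - leaked) * p_inject
        return leaked

    p_inject = 2e-3  # hypothetical per-cycle leakage probability
    scenarios = [
        ("relaxation only (hypothetical)", 2e-2),
        ("with per-cycle leakage removal", 3e-1),
    ]
    for label, p_remove in scenarios:
        print(f"{label:34s}: steady-state leakage ~ {steady_state_leakage(p_inject, p_remove):.1e}")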

    Suppressing quantum errors by scaling a surface code logical qubit

    Practical quantum computing will require error rates that are well below what is achievable with physical qubits. Quantum error correction offers a path to algorithmically-relevant error rates by encoding logical qubits within many physical qubits, where increasing the number of physical qubits enhances protection against physical errors. However, introducing more qubits also increases the number of error sources, so the density of errors must be sufficiently low in order for logical performance to improve with increasing code size. Here, we report the measurement of logical qubit performance scaling across multiple code sizes, and demonstrate that our system of superconducting qubits has sufficient performance to overcome the additional errors from increasing qubit number. We find our distance-5 surface code logical qubit modestly outperforms an ensemble of distance-3 logical qubits on average, both in terms of logical error probability over 25 cycles and logical error per cycle (2.914% ± 0.016% compared to 3.028% ± 0.023%). To investigate damaging, low-probability error sources, we run a distance-25 repetition code and observe a 1.7×10⁻⁶ logical error per round floor set by a single high-energy event (1.6×10⁻⁷ when excluding this event). We are able to accurately model our experiment, and from this model we can extract error budgets that highlight the biggest challenges for future systems. These results mark the first experimental demonstration where quantum error correction begins to improve performance with increasing qubit number, illuminating the path to reaching the logical error rates required for computation. Comment: Main text: 6 pages, 4 figures. v2: Update author list, references, Fig. S12, Table I
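
    Simple arithmetic connecting the per-cycle logical error rates quoted above to a cumulative error probability over 25 cycles, using the common convention that the logical qubit depolarises toward 50%; this is an illustrative back-of-the-envelope check, not the paper's fitting procedure.

    # Per-cycle logical error rates quoted above, and the cumulative logical error
    # probability they imply after 25 cycles under the convention
    # P(t) = (1 - (1 - 2*eps)**t) / 2 for a logical qubit depolarising toward 50%.
    def cumulative_logical_error(eps_per_cycle, cycles=25):
        return (1 - (1 - 2 * eps_per_cycle) ** cycles) / 2

    for label, eps in [("distance-5 surface code", 0.02914), ("distance-3 ensemble", 0.03028)]:
        print(f"{label:24s}: {eps:.3%} per cycle -> {cumulative_logical_error(eps):.1%} after 25 cycles")

    # Ratio of per-cycle error rates (d3 / d5): a rough proxy for the modest
    # improvement from growing the code in this experiment.
    print(f"Per-cycle error ratio d3/d5: {0.03028 / 0.02914:.3f}")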

    Measurement-induced entanglement and teleportation on a noisy quantum processor

    Measurement has a special role in quantum theory: by collapsing the wavefunction it can enable phenomena such as teleportation and thereby alter the "arrow of time" that constrains unitary evolution. When integrated in many-body dynamics, measurements can lead to emergent patterns of quantum information in space-time that go beyond established paradigms for characterizing phases, either in or out of equilibrium. On present-day NISQ processors, the experimental realization of this physics is challenging due to noise, hardware limitations, and the stochastic nature of quantum measurement. Here we address each of these experimental challenges and investigate measurement-induced quantum information phases on up to 70 superconducting qubits. By leveraging the interchangeability of space and time, we use a duality mapping to avoid mid-circuit measurement and to access different manifestations of the underlying phases -- from entanglement scaling to measurement-induced teleportation -- in a unified way. We obtain finite-size signatures of a phase transition with a decoding protocol that correlates the experimental measurement record with classical simulation data. The phases display sharply different sensitivity to noise, which we exploit to turn an inherent hardware limitation into a useful diagnostic. Our work demonstrates an approach to realize measurement-induced physics at scales that are at the limits of current NISQ processors