
    Identification and cost of adverse events in metastatic breast cancer in taxane- and capecitabine-based regimens.

    Purpose: We sought to compare the economic impact of treatment-related adverse events (AEs) in patients with metastatic breast cancer (mBC) receiving taxane- or capecitabine-based treatment regimens as first- or second-line (FL or SL) therapy in the US. Methods: We used healthcare claims data from the Truven Health Analytics MarketScan® Commercial Databases to conduct a retrospective cohort study comparing the economic impact of AEs among taxane- and capecitabine-treated mBC patients in the US. We selected women diagnosed with mBC between 2008 and 2010 who received a taxane or capecitabine as FL or SL chemotherapy. Costs related to hospitalization, outpatient services, emergency department visits, chemotherapy and other medications were tabulated and combined to determine total healthcare costs. The incremental monthly costs associated with the presence of AEs, compared with no AEs, were estimated using generalized linear models, controlling for age and Charlson Comorbidity Index. Results: We identified 15,443 mBC patients meeting the inclusion criteria. Adjusted total monthly costs were significantly higher in those who experienced AEs than in those without AEs in both lines of treatment (FL incremental cost: taxanes $1,142, capecitabine $1,817; SL incremental cost: taxanes $1,448, capecitabine $4,437). Total costs increased with the number of AEs and were driven primarily by increased hospitalization among those with AEs. Conclusions: Adverse events in taxane- or capecitabine-treated mBC patients are associated with significant increases in costs. Selecting treatment options associated with fewer AEs may reduce costs and improve outcomes in these patients.

    Stochastic Assembly of Bacteria in Microwell Arrays Reveals the Importance of Confinement in Community Development

    Citation: Hansen, R. H., Timm, A. C., Timm, C. M., Bible, A. N., Morrell-Falvey, J. L., Pelletier, D. A., . . . Retterer, S. T. (2016). Stochastic Assembly of Bacteria in Microwell Arrays Reveals the Importance of Confinement in Community Development. PLoS ONE, 11(5), 18. doi:10.1371/journal.pone.0155080. The structure and function of microbial communities are deeply influenced by the physical and chemical architecture of the local microenvironment and the abundance of its community members. The complexity of this natural parameter space has made characterization of the key drivers of community development difficult. To facilitate these characterizations, we have developed a microwell platform designed to screen microbial growth and interactions across a wide variety of physical and initial conditions. Assembly of microbial communities into microwells was achieved using a novel biofabrication method that exploits well feature sizes for control of inoculum levels. Wells with incrementally smaller features created populations with increasingly larger variations in inoculum levels. This allowed for reproducible growth measurement in large (20 μm diameter) wells, and screening for favorable growth conditions in small (5 and 10 μm diameter) wells. We demonstrate the utility of this approach for screening and discovery using 5 μm wells to assemble P. aeruginosa colonies across a broad distribution of inoculum levels, and identify those conditions that promote the highest probability of survival and growth under spatial confinement. Multi-member community assembly was also characterized to demonstrate the broad potential of this platform for studying the role of member abundance in microbial competition, mutualism and community succession.
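A minimal sketch of why smaller wells yield noisier inocula, assuming cells settle into wells as a Poisson process with mean proportional to well area (a standard model for stochastic seeding; the seeding density below is a hypothetical number, not from the paper).

```python
# Relative inoculum variability under Poisson seeding: the coefficient of
# variation of a Poisson count with mean lambda is 1/sqrt(lambda), so it
# grows as the well (and hence the expected inoculum) shrinks.
import math

def loading_cv(diameter_um, density_cells_per_um2=0.02):
    """Coefficient of variation of the inoculum for a circular well."""
    area = math.pi * (diameter_um / 2.0) ** 2
    mean_cells = density_cells_per_um2 * area   # Poisson mean
    return 1.0 / math.sqrt(mean_cells)

for d in (5, 10, 20):
    print(f"{d:2d} um well: CV = {loading_cv(d):.2f}")
```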

    Mapping spacetimes with LISA: inspiral of a test-body in a `quasi-Kerr' field

    The future LISA detector will constitute the prime instrument for high-precision gravitational wave observations. LISA is expected to provide information on the properties of spacetime in the vicinity of massive black holes which reside in galactic nuclei. Such black holes can capture stellar-mass compact objects, which afterwards slowly inspiral, radiating gravitational waves. The body's orbital motion and the associated waveform carry information about the spacetime metric of the massive black hole, and it is possible to extract this information and experimentally identify (or not!) a Kerr black hole. In this paper we lay the foundations for a practical `spacetime-mapping' framework. Our work is based on the assumption that the massive body is not necessarily a Kerr black hole, and that the vacuum exterior spacetime is stationary and axisymmetric, described by a metric which deviates slightly from the Kerr metric. We first provide a simple recipe for building such a `quasi-Kerr' metric by adding to the Kerr metric the deviation in the value of the quadrupole moment. We then study geodesic motion in this metric, focusing on equatorial orbits. We proceed by computing `kludge' waveforms, which we compare with their Kerr counterparts. We find that a modest deviation from the Kerr metric is sufficient to produce a significant mismatch between the waveforms, provided we fix the orbital parameters. This result suggests that an attempt to use Kerr waveform templates for studying EMRIs around a non-Kerr object might result in a serious loss of signal-to-noise ratio and total number of detected events. The waveform comparisons also unveil a `confusion' problem, that is, the possibility of matching a true non-Kerr waveform with a Kerr template of different orbital parameters. Comment: 19 pages, 6 figures
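Schematically, the construction described above can be written as follows (a sketch in units G = c = 1; the specific deviation functions used in the paper are not reproduced here, and the quadrupole parametrization shown is one common convention in which ε measures the deviation of the mass quadrupole from its Kerr value):

```latex
% Quasi-Kerr metric: Kerr plus a small stationary, axisymmetric deviation
g_{ab} = g_{ab}^{\mathrm{Kerr}} + \epsilon\, h_{ab}, \qquad |\epsilon| \ll 1,
% chosen so that only the mass quadrupole moment M_2 differs from its
% Kerr value -M a^2 (with a = J/M the spin parameter):
M_2 = -M a^2 - \epsilon M^3 .
```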

    Early-universe constraints on a time-varying fine structure constant

    Higher-dimensional theories have the remarkable feature of predicting a time (and hence redshift) dependence of the `fundamental' four-dimensional constants on cosmological timescales. In this paper we update the bounds on a possible variation of the fine structure constant alpha at the time of BBN (z = 10^10) and CMB (z = 10^3). Using the recently released high-resolution CMB anisotropy data and the latest estimates of primordial abundances of 4He and D, we do not find evidence for a varying alpha at more than the one-sigma level at either epoch. Comment: 5 pages, 1 figure, minor misprints corrected, references added. The analysis has been updated using new BOOMERanG and DASI data on CMB anisotropy

    Energy Release During Disk Accretion onto a Rapidly Rotating Neutron Star

    The energy release L_s on the surface of a neutron star (NS) with a weak magnetic field and the energy release L_d in the surrounding accretion disk depend on two independent parameters that determine the star's state (for example, mass M and cyclic rotation frequency f) and are proportional to the accretion rate. We derive simple approximation formulas illustrating the dependence of the efficiency of energy release in an extended disk and in a boundary layer near the NS surface on the frequency and sense of rotation for various NS equations of state. Such formulas are obtained for the quadrupole moment of a NS, for the gap between its surface and the marginally stable orbit, for the rotation frequency in an equatorial Keplerian orbit and in the marginally stable circular orbit, and for the rate of NS spinup via disk accretion. In the case of NS and disk counterrotation, the energy release during accretion can reach $0.67\dot{M}c^2$. The sense of NS rotation is a factor that strongly affects the observed ratio of nuclear energy release during bursts to gravitational energy release between bursts in X-ray bursters. The possible existence of binary systems with NS and disk counterrotation in the Galaxy is discussed. Based on the static criterion for stability, we present a method of constructing the dependence of gravitational mass M on the Kerr rotation parameter j and on the total baryon mass (rest mass) m for a rigidly rotating neutron star. We show that all global NS characteristics can be expressed in terms of the function M(j, m) and its derivatives. Comment: 42 pages, 12 figures, to appear in Astronomy Letters, 2000, v.26, p.69
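The rotation dependence of the accretion efficiency can be illustrated with the well-known Bardeen–Press–Teukolsky expressions for the marginally stable (ISCO) orbit of the Kerr metric, in units G = c = M = 1. This is only a sketch of the effect: the paper's own formulas additionally involve the NS equation of state and the energy released at the stellar surface, which is why counterrotation there can reach the much larger value quoted above.

```python
# ISCO radius and disk accretion efficiency for the Kerr metric
# (Bardeen, Press & Teukolsky 1972), units G = c = M = 1.
import math

def r_isco(a, prograde=True):
    """Marginally stable circular orbit radius for dimensionless spin a."""
    z1 = 1 + (1 - a**2) ** (1/3) * ((1 + a) ** (1/3) + (1 - a) ** (1/3))
    z2 = math.sqrt(3 * a**2 + z1**2)
    s = -1 if prograde else 1
    return 3 + z2 + s * math.sqrt((3 - z1) * (3 + z1 + 2 * z2))

def disk_efficiency(a, prograde=True):
    """Fraction of rest-mass energy released inspiraling to the ISCO."""
    r = r_isco(a, prograde)
    return 1 - math.sqrt(1 - 2 / (3 * r))   # 1 - E_isco (exact at the ISCO)

print(f"a=0:             {disk_efficiency(0):.3f}")         # ~5.7%
print(f"a=1, prograde:   {disk_efficiency(1, True):.3f}")   # ~42%
print(f"a=1, retrograde: {disk_efficiency(1, False):.3f}")  # ~3.8%
```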

    The social value of a QALY: raising the bar or barring the raise?

    Background: Since the inception of the National Institute for Health and Clinical Excellence (NICE) in England, there have been questions about the empirical basis for the cost-per-QALY threshold used by NICE and whether QALYs gained by different beneficiaries of health care should be weighted equally. The Social Value of a QALY (SVQ) project, reported in this paper, was commissioned to address these two questions. The results of SVQ were released during a time of considerable debate about the NICE threshold, and authors with differing perspectives have drawn on the SVQ results to support their cases. As these discussions continue, and given the selective use of results by those involved, it is important, therefore, not only to present a summary overview of SVQ, but also for those who conducted the research to contribute to the debate as to its implications for NICE. Discussion: The issue of the threshold was addressed in two ways: first, by combining, via a set of models, the current UK Value of a Prevented Fatality (used in transport policy) with data on fatality age, life expectancy and age-related quality of life; and, second, via a survey designed to test the feasibility of combining respondents’ answers to willingness to pay and health state utility questions to arrive at values of a QALY. Modelling resulted in values of £10,000-£70,000 per QALY. Via survey research, most methods of aggregating the data resulted in values of a QALY of £18,000-£40,000, although others resulted in implausibly high values. An additional survey, addressing the issue of weighting QALYs, used two methods, one indicating that QALYs should not be weighted and the other that greater weight could be given to QALYs gained by some groups. Summary: Although we conducted only a feasibility study and a modelling exercise, neither present compelling evidence for moving the NICE threshold up or down. 
Some preliminary evidence suggests that it could be moved up for some types of QALY and down for others. While many members of the public appear to be open to the possibility of using somewhat different QALY weights for different groups of beneficiaries, we do not yet have any secure evidence base for introducing such a system.
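The first modelling route described above can be sketched as dividing a Value of a Prevented Fatality by the discounted, quality-of-life-weighted life years that preventing a fatality saves. All inputs below (the VPF figure, remaining life years, QoL weight, and discount rate) are illustrative assumptions, not the project's actual data or results.

```python
# Back-of-envelope VPF-to-QALY-value conversion (illustrative numbers only).
def discounted_qalys(years_remaining, qol_weight=0.8, discount_rate=0.035):
    """Sum of discounted, quality-of-life-weighted life years."""
    return sum(qol_weight / (1 + discount_rate) ** t
               for t in range(years_remaining))

vpf = 1_600_000.0  # GBP; roughly the order of the UK transport VPF (assumed)
value_per_qaly = vpf / discounted_qalys(40)
print(f"~GBP {value_per_qaly:,.0f} per QALY")
```

Varying the assumed age at death, QoL weight, and discount rate is what produces a range of values rather than a single number.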

    Evidence for Quantum Interference in SAMs of Arylethynylene Thiolates in Tunneling Junctions with Eutectic Ga-In (EGaIn) Top-Contacts

    This paper compares the current density (J) versus applied bias (V) of self-assembled monolayers (SAMs) of three different ethynylthiophenol-functionalized anthracene derivatives of approximately the same thickness with linear-conjugation (AC), cross-conjugation (AQ), and broken-conjugation (AH) using liquid eutectic Ga-In (EGaIn) supporting a native skin (~1 nm thick) of Ga2O3 as a nondamaging, conformal top-contact. This skin imparts non-Newtonian rheological properties that distinguish EGaIn from other top-contacts; however, it may also have limited the maximum values of J observed for AC. The measured values of J for AH and AQ are not significantly different (J ≈ 10^-1 A/cm2 at V = 0.4 V). For AC, however, J is 1 (using log averages) or 2 (using Gaussian fits) orders of magnitude higher than for AH and AQ. These values are in good qualitative agreement with gDFTB calculations on single AC, AQ, and AH molecules chemisorbed between Au contacts that predict currents, I, that are 2 orders of magnitude higher for AC than for AH at 0 < |V| < 0.4 V. The calculations predict a higher value of I for AQ than for AH; however, the magnitude is highly dependent on the position of the Fermi energy, which cannot be calculated precisely. In this sense, the theoretical predictions and experimental conclusions agree that linearly conjugated AC is significantly more conductive than either cross-conjugated AQ or broken-conjugated AH, and that AQ and AH cannot necessarily be easily differentiated from each other. These observations are ascribed to quantum interference effects. The agreement between the theoretical predictions on single molecules and the measurements on SAMs suggests that molecule-molecule interactions do not play a significant role in the transport properties of AC, AQ, and AH.
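The two summary statistics quoted above ("log averages" and "Gaussian fits") can be sketched on synthetic data: the plain mean of log10|J| is pulled down by the low-current outliers that defective junctions produce, whereas a Gaussian fit to the histogram of log10|J| tracks the main peak. The data below are illustrative, not the paper's measurements.

```python
# Comparing the log-average of |J| with a Gaussian fit to the log10|J|
# histogram, on synthetic data with a small low-current outlier population.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)
log_j = np.concatenate([rng.normal(0.0, 0.4, 400),    # main population
                        rng.normal(-2.5, 0.3, 20)])   # hypothetical defects

log_mean = log_j.mean()  # "log average", skewed by outliers

def gauss(x, amp, mu, sigma):
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

counts, edges = np.histogram(log_j, bins=40)
centers = 0.5 * (edges[:-1] + edges[1:])
(amp, mu_fit, sigma_fit), _ = curve_fit(
    gauss, centers, counts, p0=[counts.max(), np.median(log_j), 0.5])

print(f"log-average:  10^{log_mean:.2f} A/cm^2")
print(f"Gaussian fit: 10^{mu_fit:.2f} A/cm^2")
```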

    Quantitative evaluation of oligonucleotide surface concentrations using polymerization-based amplification

    Quantitative evaluation of minimal polynucleotide concentrations has become a critical analysis among a myriad of applications in molecular diagnostic technology. The development of high-throughput, nonenzymatic assays that are sensitive, quantitative, and yet feasible for point-of-care testing is thus beneficial for routine implementation. Here, we develop a nonenzymatic method for quantifying surface concentrations of labeled DNA targets by coupling regulated amounts of polymer growth to complementary biomolecular binding on array-based biochips. Polymer film thickness measurements in the 20–220 nm range vary logarithmically with labeled DNA surface concentrations over two orders of magnitude, with a lower limit of quantitation of 60 molecules/μm2 (∼10^6 target molecules). In an effort to develop this amplification method towards compatibility with fluorescence-based methods of characterization, the incorporation of fluorescent nanoparticles into the polymer films is also evaluated. The resulting gains in fluorescent signal enable quantification using detection instrumentation amenable to point-of-care settings.
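A minimal sketch of how a logarithmic thickness-versus-concentration calibration like the one described above can be fitted and inverted to quantify an unknown sample. The calibration points below are invented only to span the stated 20–220 nm and two-orders-of-magnitude ranges; they are not the paper's data.

```python
# Fit thickness = a + b*log10(concentration), then invert to quantify.
import numpy as np

conc = np.array([60, 200, 600, 2000, 6000])    # molecules/um^2 (assumed)
thickness = np.array([20, 70, 120, 170, 220])  # nm (assumed)

b, a = np.polyfit(np.log10(conc), thickness, 1)

def quantify(thickness_nm):
    """Invert the calibration: measured film thickness -> concentration."""
    return 10 ** ((thickness_nm - a) / b)

print(f"slope: {b:.1f} nm per decade")
print(f"120 nm film -> ~{quantify(120):.0f} molecules/um^2")
```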

    Methods to study splicing from high-throughput RNA Sequencing data

    The development of novel high-throughput sequencing (HTS) methods for RNA (RNA-Seq) has provided a very powerful means to study splicing under multiple conditions at unprecedented depth. However, the complexity of the information to be analyzed has turned this into a challenging task. In the last few years, a plethora of tools has been developed, allowing researchers to process RNA-Seq data to study the expression of isoforms and splicing events, and their relative changes under different conditions. We provide an overview of the methods available to study splicing from short RNA-Seq data. We group the methods according to the different questions they address: 1) Assignment of the sequencing reads to their likely gene of origin. This is addressed by methods that map reads to the genome and/or to the available gene annotations. 2) Recovering the sequence of splicing events and isoforms. This is addressed by transcript reconstruction and de novo assembly methods. 3) Quantification of events and isoforms. Either after reconstructing transcripts or using an annotation, many methods estimate the expression level or the relative usage of isoforms and/or events. 4) Providing an isoform or event view of differential splicing or expression. These include methods that compare relative event/isoform abundance or isoform expression across two or more conditions. 5) Visualizing splicing regulation. Various tools facilitate the visualization of the RNA-Seq data in the context of alternative splicing. In this review, we do not describe the specific mathematical models behind each method. Our aim is rather to provide an overview that could serve as an entry point for users who need to decide on a suitable tool for a specific analysis. We also attempt to propose a classification of the tools according to the operations they do, to facilitate the comparison and choice of methods. Comment: 31 pages, 1 figure, 9 tables. Small corrections added
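The event-level quantification mentioned in points 3 and 4 above is commonly expressed as percent spliced-in (PSI). A minimal sketch, not any specific tool's implementation: the read counts below are hypothetical, and real tools additionally normalize for the number of mappable positions contributing to each count.

```python
# Percent spliced-in (PSI): fraction of reads supporting exon inclusion.
def psi(inclusion_reads, exclusion_reads):
    """PSI = inclusion / (inclusion + exclusion); NaN if no reads."""
    total = inclusion_reads + exclusion_reads
    return float("nan") if total == 0 else inclusion_reads / total

# Differential splicing between two conditions is often reported as delta-PSI:
delta_psi = psi(80, 20) - psi(30, 70)
print(round(delta_psi, 3))
```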