
    Programmed Death-Ligand 1 Expression in Lung Cancer and Paired Brain Metastases – a Single-Center Study in 190 Patients.

    Expression of programmed death-ligand 1 (PD-L1) is the only routinely used tissue biomarker for predicting response to programmed cell death protein 1/PD-L1 inhibitors. It remains unclear whether PD-L1 expression is preserved in brain metastases (BMs). In this single-center, retrospective study, we evaluated PD-L1 expression using the SP263 assay in consecutively resected BMs of lung carcinomas and paired primary tumors, diagnosed from 2000 to 2015, with correlation to clinicopathological and molecular tumor and patient characteristics. PD-L1 tumor proportion score (TPS) could be evaluated on whole tissue slides in 191 BMs and 84 paired primary lung carcinomas. PD-L1 TPS was less than 1% in 113 of 191 (59.2%), 1% to 49% in 34 of 191 (17.8%), and greater than or equal to 50% in 44 of 191 (23.0%) BMs. TPS was concordant between BMs and paired primary lung carcinomas in most cases, with discordance at the clinically relevant cutoffs of 1% and 50% in 18 of 84 patients (21.4%). Four of the 18 discordant cases had no shared mutations between the primary lung carcinoma and the BM. Intratumoral heterogeneity, as assessed using tissue microarray cores, was significant only at the primary site (p = 0.002, Wilcoxon signed-rank test), with higher PD-L1 TPS at the infiltration front (mean = 40.4%, interquartile range: 0%-90%). Neither TPS greater than or equal to 1% nor TPS greater than or equal to 50% nor discordance between the primary lung carcinoma and BMs had prognostic significance for overall survival or BM-specific overall survival. PD-L1 expression was mostly concordant between primary lung carcinomas and their BMs, and between resections of BMs and stereotactic biopsies, as mirrored by tissue microarray cores. Differences in PD-L1 TPS existed primarily in cases with TPS greater than 10%, where human assessment also tends to be most error prone.
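    The cutoff logic above can be made concrete. Below is a minimal sketch, not taken from the study, of how TPS values are binned at the clinically relevant 1% and 50% cutoffs and how a primary/BM pair is flagged as discordant; all function names are illustrative.

        # Minimal sketch (not from the study): bin PD-L1 TPS values at the
        # clinically relevant cutoffs (1% and 50%) and flag primary/BM pairs
        # whose bins disagree. Function names are illustrative.
        def tps_bin(tps: float) -> str:
            """Classify a PD-L1 tumor proportion score (0-100) into the
            three clinically used categories."""
            if tps < 1:
                return "<1%"
            if tps < 50:
                return "1-49%"
            return ">=50%"

        def discordant(primary_tps: float, bm_tps: float) -> bool:
            """A pair is discordant if the primary tumor and the brain
            metastasis fall into different cutoff-defined categories."""
            return tps_bin(primary_tps) != tps_bin(bm_tps)

        print(discordant(40, 60))  # True: the pair crosses the 50% cutoff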

    Zero-point quantum fluctuations in cosmology

    We re-examine the classic problem of the renormalization of zero-point quantum fluctuations in a Friedmann-Robertson-Walker background. We discuss a number of issues that arise when regularizing the theory with a momentum-space cutoff, and show explicitly how introducing non-covariant counter-terms allows one to obtain covariant results for the renormalized vacuum energy-momentum tensor. We clarify some confusion in the literature concerning the equation of state of vacuum fluctuations. Further, we point out that the general structure of the effective action becomes richer if the theory contains a scalar field phi with mass m smaller than the Hubble parameter H(t). Such an ultra-light particle cannot be integrated out completely to get the effective action. Apart from the volume term and the Einstein-Hilbert term, which are reabsorbed into renormalizations of the cosmological constant and Newton's constant, the effective action in general also has a term proportional to F(phi)R, for some function F(phi). As a result, vacuum fluctuations of ultra-light scalar fields naturally lead to models where the dark energy density has the form rho_{DE}(t) = rho_X(t) + rho_Z(t), where rho_X is the component that accelerates the Hubble expansion at late times and rho_Z(t) is an extra contribution proportional to H^2(t). We perform a detailed comparison of such models with CMB, SNIa and BAO data.
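    For readability, the dark-energy decomposition stated in the abstract can be written out in LaTeX; the proportionality constant gamma below is an illustrative placeholder, not a quantity fixed by the abstract.

        % Dark-energy decomposition as described above; \gamma is an
        % illustrative proportionality constant.
        \begin{align}
          \rho_{\mathrm{DE}}(t) &= \rho_X(t) + \rho_Z(t), \\
          \rho_Z(t) &= \gamma\, H^2(t), \qquad m \ll H(t).
        \end{align}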

    LNCS

    Static program analyzers are increasingly effective at checking correctness properties of programs and reporting any errors found, often in the form of error traces. However, developers still spend a significant amount of time on debugging. This involves processing long error traces in an effort to localize a bug to a relatively small part of the program and to identify its cause. In this paper, we present a technique for automated fault localization that, given a program and an error trace, efficiently narrows down the cause of the error to a few statements. These statements are then ranked by their suspiciousness. Our technique relies only on the semantics of the given program and does not require any test cases or user guidance. In experiments on a set of C benchmarks, we show that our technique is effective at quickly isolating the cause of the error while outperforming other state-of-the-art fault-localization techniques.
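    The technique above is semantics-based and needs no tests. For contrast, the sketch below shows the classic spectrum-based alternative it is implicitly measured against: the Ochiai metric, which ranks a statement as more suspicious the more it is covered by failing rather than passing tests. The coverage data is invented for illustration.

        # Spectrum-based fault localization with the Ochiai metric (shown for
        # contrast; this is NOT the semantics-based technique of the paper).
        from math import sqrt

        def ochiai(failed_cov: int, passed_cov: int, total_failed: int) -> float:
            """Suspiciousness of one statement: failed_cov/passed_cov are the
            numbers of failing/passing tests that cover it."""
            denom = sqrt(total_failed * (failed_cov + passed_cov))
            return failed_cov / denom if denom else 0.0

        # (statement, covered-by-failing, covered-by-passing); 2 failing tests in total
        coverage = [("s1", 2, 5), ("s2", 2, 0), ("s3", 0, 4)]
        for stmt, f, p in sorted(coverage, key=lambda s: -ochiai(s[1], s[2], 2)):
            print(stmt, round(ochiai(f, p, 2), 3))  # s2 ranks highest (1.0)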

    Diagnosis, management, and outcomes of patients with syncope and bundle branch block

    Although patients with syncope and bundle branch block (BBB) are at high risk of developing atrioventricular block, syncope may be due to other aetiologies. We performed a prospective, observational study of the clinical outcomes of patients with syncope and BBB following a systematic diagnostic approach. Patients with ≥1 syncope in the last 6 months and a QRS duration ≥120 ms were prospectively studied following a three-phase diagnostic strategy: Phase I, initial evaluation; Phase II, electrophysiological study (EPS); and Phase III, insertion of an implantable loop recorder (ILR). Overall, 323 patients (left ventricular ejection fraction 56 ± 12%) were studied. An aetiological diagnosis was established in 267 (82.7%) patients (102 at initial evaluation, 113 upon EPS, and 52 upon ILR), with the following aetiologies: bradyarrhythmia (202), carotid sinus syndrome (20), ventricular tachycardia (18), neurally mediated (9), orthostatic hypotension (4), drug-induced (3), secondary to cardiopulmonary disease (2), supraventricular tachycardia (1), bradycardia-tachycardia (1), and non-arrhythmic (7). A pacemaker was implanted in 220 patients (68.1%), an implantable cardioverter defibrillator in 19 (5.8%), and radiofrequency catheter ablation was performed in 3. Twenty patients (6%) had died after a mean follow-up of 19.2 ± 8.2 months. In patients with syncope, BBB, and a mean left ventricular ejection fraction of 56 ± 12%, a systematic diagnostic approach achieves a high rate of aetiological diagnosis and allows specific treatment to be selected.
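    The three-phase strategy is a stop-on-first-diagnosis cascade, which the following sketch makes explicit; the phase functions are illustrative stubs, not the study's actual diagnostic criteria.

        # Minimal sketch (not from the study) of the three-phase diagnostic
        # cascade: phases run in order and the workup stops at the first
        # established diagnosis. Phase logic is stubbed out for illustration.
        def diagnose(patient, phases):
            """Run diagnostic phases in order; return the first diagnosis found."""
            for name, test in phases:
                result = test(patient)
                if result is not None:
                    return name, result
            return None  # undiagnosed after all three phases

        phases = [
            ("Phase I: initial evaluation", lambda p: p.get("initial_dx")),
            ("Phase II: EPS",               lambda p: p.get("eps_dx")),
            ("Phase III: ILR",              lambda p: p.get("ilr_dx")),
        ]
        print(diagnose({"eps_dx": "bradyarrhythmia"}, phases))
        # -> ('Phase II: EPS', 'bradyarrhythmia')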

    Drop test: A new method to measure the particle adhesion force

    Measurement of the adhesive force is of great interest in a large number of applications, such as powder coating and the processing of cohesive powders. Established measurement methods such as atomic force microscopy (AFM) and the centrifugal method are costly and time consuming, so for engineering applications there is a need for a quick test method. The drop test method has been designed and developed for this purpose. In this method, particles adhered to a substrate mounted on a stub are subjected to a tensile force by dropping the stub from a set height so that it impacts against a stopper ring. From the balance of the detachment force and the adhesive force at a critical particle size, above which particles are detached and below which they remain on the substrate, the interfacial specific energy is calculated. A model of adhesion is required to estimate the adhesive force between the particles and the surface, and in this work we use the JKR theory. The detachment force is estimated from Newton's second law of motion, using a particle mass estimated from particle size and density together with the calculated particle acceleration. A number of materials, such as silanised glass beads, Avicel, α-lactose monohydrate and starch, have been tested, and the adhesive force and energy between the particle and the substrate surface have been quantified. Consistent values of the interface energy with a narrow error band are obtained, independent of the impact velocity. As the latter is varied, different particle sizes detach; nevertheless, similar values of the interface energy are obtained, an indication that the technique is robust, as it is in fact based on microscopic observations of many particles. The trends of the results obtained with the drop test method are similar to those reported by other researchers using established methods such as AFM and the centrifuge method.
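    Under the stated assumptions, the force balance can be written out directly. The sketch below equates the Newtonian detachment force on a spherical particle with the standard JKR pull-off force, (3/2)*pi*Gamma*R, and solves for the interface energy Gamma at the critical particle diameter; the numerical values are illustrative, not the paper's data.

        # Drop-test force balance under JKR adhesion (illustrative sketch):
        #   detachment: F_det = m*a = (pi/6)*rho*d^3 * a   (Newton's 2nd law)
        #   adhesion:   F_adh = (3/2)*pi*Gamma*(d/2)       (JKR pull-off)
        # Setting F_det = F_adh at the critical diameter d gives
        #   Gamma = (2/9)*rho*d^2*a.
        def interface_energy(d_crit: float, density: float, accel: float) -> float:
            """Interface energy Gamma [J/m^2] from the critical diameter [m],
            particle density [kg/m^3] and impact deceleration [m/s^2]."""
            return 2.0 / 9.0 * density * d_crit**2 * accel

        # Illustrative values: 50 um critical diameter, glass-like density,
        # 1e5 m/s^2 deceleration on impact.
        print(interface_energy(50e-6, 2500.0, 1e5))  # ~0.14 J/m^2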

    Danger Invariants

    Static analysers search for overapproximating proofs of safety, commonly known as safety invariants. Conversely, static bug finders (e.g. Bounded Model Checking) give evidence for the failure of an assertion in the form of a counterexample trace. As opposed to safety invariants, the size of a counterexample depends on the depth of the bug, i.e., the length of the execution trace prior to the error state, which also determines the computational effort required to find it. We propose a way of expressing danger proofs that is independent of the depth of bugs. Essentially, such danger proofs constitute a compact representation of a counterexample trace, which we call a danger invariant. Danger invariants summarise sets of traces that are guaranteed to be able to reach an error state. Our conjecture is that such danger proofs will enable the design of bug-finding analyses whose computational effort is independent of the depth of bugs, and which thus find deep bugs more efficiently. As an exemplar of an analysis that uses danger invariants, we design a bug-finding technique based on a synthesis engine. We implemented this technique and computed danger invariants for intricate programs taken from SV-COMP 2016.
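    A toy example, invented here for illustration, shows why such a proof can be depth-independent: the bug below sits a million loop iterations deep, so a counterexample trace must spell out every iteration, whereas a danger invariant certifies reachability of the error in constant size.

        # Illustrative toy (not from the paper): a bug 1,000,000 iterations deep.
        def run(n: int = 1_000_000) -> None:
            x, i = 0, 0
            while i < n:
                x += 2
                i += 1
            assert x != 2 * n, "error state reached"  # fails on every run

        # A danger invariant for the loop head, e.g.  x == 2*i and i <= n,
        # together with the ranking function n - i (showing the loop exits),
        # guarantees that at exit i == n, hence x == 2*n and the assertion
        # fails: a depth-independent certificate that the error is reachable.
        try:
            run()
        except AssertionError as e:
            print(e)  # "error state reached"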

    The M235T Polymorphism in the AGT Gene and CHD Risk: Evidence of a Hardy-Weinberg Equilibrium Violation and Publication Bias in a Meta-Analysis

    BACKGROUND: The M235T polymorphism in the AGT gene has been related to an increased risk of hypertension. This finding may also suggest an increased risk of coronary heart disease (CHD). METHODOLOGY/PRINCIPAL FINDINGS: A case-cohort study was conducted in 1,732 unrelated middle-aged women (210 CHD cases and 1,522 controls) from a prospective cohort of 15,236 initially healthy Dutch women. We applied a Cox proportional hazards model to study the association of the polymorphism with acute myocardial infarction (AMI) (n = 71) and CHD. In the case-cohort study, no increased risk for CHD was found under the additive genetic model (hazard ratio [HR] = 1.20; 95% confidence interval [CI], 0.86 to 1.68; P = 0.28). This result was not changed by adjustment (HR = 1.17; 95% CI, 0.83 to 1.64; P = 0.38), nor by using dominant, recessive, and pairwise genetic models. Analyses of AMI risk under the additive genetic model also showed no statistically significant association (crude HR = 1.14; 95% CI, 0.93 to 1.39; P = 0.20). To evaluate the association further, a comprehensive systematic review and meta-analysis were undertaken of all studies published up to February 2007 (searched through PubMed/MEDLINE, Web of Science and EMBASE). The meta-analysis (38 studies with 13,284 cases and 18,722 controls) showed a per-allele odds ratio (OR) of 1.08 (95% CI, 1.01 to 1.15; P = 0.02). Moderate to large heterogeneity was identified between studies; Hardy-Weinberg equilibrium (HWE) violation and the mean age of cases were statistically significant sources of the observed variation. In the stratum of studies without HWE violation, there was no effect. An asymmetric funnel plot, Egger's test (P = 0.066), and the Begg-Mazumdar test (P = 0.074) were all suggestive of publication bias. CONCLUSIONS/SIGNIFICANCE: The pooled OR of the present meta-analysis, including our own data, presented evidence of an increase in the risk of CHD conferred by the M235T variant of the AGT gene. However, the relevance of this weakly positive overall association remains uncertain, because it may be due to various residual biases, including HWE violation and publication bias.
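    The HWE check used to stratify studies is a short chi-square test, sketched below with invented genotype counts; the function name and data are illustrative only.

        # Hardy-Weinberg equilibrium check (illustrative): compare observed
        # M235T genotype counts with counts expected from allele frequencies.
        from scipy.stats import chi2

        def hwe_pvalue(obs_mm: int, obs_mt: int, obs_tt: int) -> float:
            """Chi-square p-value for departure from HWE (1 df, biallelic locus)."""
            n = obs_mm + obs_mt + obs_tt
            p = (2 * obs_mm + obs_mt) / (2 * n)  # frequency of the M allele
            q = 1.0 - p
            expected = [p * p * n, 2 * p * q * n, q * q * n]
            observed = [obs_mm, obs_mt, obs_tt]
            stat = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
            return chi2.sf(stat, df=1)

        print(hwe_pvalue(400, 800, 300))  # invented control-group counts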

    Cell-based screen for altered nuclear phenotypes reveals senescence progression in polyploid cells after Aurora kinase B inhibition.

    Cellular senescence is a widespread stress response and is widely considered to be an alternative cancer therapeutic goal. Unlike apoptosis, senescence comprises a diverse set of subphenotypes, depending on which of its associated effector programs are engaged. Here we establish a simple and sensitive cell-based prosenescence screen with detailed validation assays, and characterize it using a focused tool-compound kinase inhibitor library. We identify a series of compounds that induce different types of senescence, including a unique phenotype associated with irregularly shaped nuclei and the progressive accumulation of G1 tetraploidy in human diploid fibroblasts. Downstream analyses show that all of the compounds that induce tetraploid senescence inhibit Aurora kinase B (AURKB). AURKB is the catalytic component of the chromosome passenger complex, which is involved in correct chromosome alignment and segregation, the spindle assembly checkpoint, and cytokinesis. Although aberrant mitosis and senescence have been linked, a specific characterization of AURKB in the context of senescence is still required. This proof-of-principle study suggests that our protocol is capable of amplifying tetraploid senescence, which is otherwise observed in only a small subpopulation of cells in oncogenic RAS-induced senescence, and provides additional justification for AURKB as a cancer therapeutic target. This work was supported by the University of Cambridge, Cancer Research UK, and Hutchison Whampoa; by Cancer Research UK grants A6691 and A9892 (M.N., N.K., C.J.T., D.C.B., C.J.C., L.S.G., and M.S.); and by a fellowship from the Uehara Memorial Foundation (M.S.). This is the author accepted manuscript; the final version is available from the American Society for Cell Biology via http://dx.doi.org/10.1091/mbc.E15-01-000