
    Clinical and pathologic characteristics of T-cell lymphoma with a leukemic phase in a raccoon dog (Nyctereutes procyonoides)

    A 7.5-year-old raccoon dog (Nyctereutes procyonoides) from the Henry Doorly Zoo (Omaha, Nebraska) was presented to the veterinary hospital for lethargy and weight loss. On physical examination, splenomegaly and hepatomegaly were noted on palpation and were confirmed by radiographic evaluation. Radiography also demonstrated a mass in the cranial mediastinum. A complete blood cell count revealed marked leukocytosis (115,200 cells/µL), with a predominance of lymphoid cells. The animal was euthanized due to a poor prognosis. Necropsy revealed splenomegaly, hepatomegaly, and a large multiloculated mass in the cranial mediastinum. The histologic and immunohistochemical diagnosis was multicentric T-cell lymphoma with a leukemic phase.

    High-Dimensional Similarity Search with Quantum-Assisted Variational Autoencoder

    Recent progress in quantum algorithms and hardware indicates the potential importance of quantum computing in the near future. However, finding suitable application areas remains an active area of research. Quantum machine learning is touted as a potential approach to demonstrate quantum advantage within both the gate-model and the adiabatic schemes. For instance, the Quantum-assisted Variational Autoencoder (QVAE) has been proposed as a quantum enhancement to the discrete VAE. We extend previous work and study the real-world applicability of a QVAE by presenting a proof-of-concept for similarity search in large-scale high-dimensional datasets. While exact and fast similarity search algorithms are available for low-dimensional datasets, scaling to high-dimensional data is non-trivial. We show how to construct a space-efficient search index based on the latent space representation of a QVAE. Our experiments show a correlation between the Hamming distance in the embedded space and the Euclidean distance in the original space on the Moderate Resolution Imaging Spectroradiometer (MODIS) dataset. Further, we find real-world speedups compared to linear search and demonstrate memory-efficient scaling to half a billion data points.
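    The indexing idea described above lends itself to a compact sketch. The following is not the authors' implementation: the 64-bit code length, the synthetic codes standing in for encoder output, and the function names are all assumptions. It only illustrates how binary latent codes can be packed into bytes for a space-efficient index and queried by Hamming distance.

```python
# Minimal sketch of a Hamming-distance search index over binary latent codes.
import numpy as np

def build_index(binary_codes: np.ndarray) -> np.ndarray:
    """Pack {0,1} latent codes of shape (n, d) into bytes for a compact index."""
    return np.packbits(binary_codes.astype(np.uint8), axis=1)

def hamming_search(index: np.ndarray, query_code: np.ndarray, k: int = 10):
    """Return indices of the k entries closest to the query in Hamming distance."""
    q = np.packbits(query_code.astype(np.uint8))
    xor = np.bitwise_xor(index, q)              # differing bits, packed per byte
    dists = np.unpackbits(xor, axis=1).sum(axis=1)  # popcount per row
    return np.argsort(dists)[:k]

# Usage with stand-in codes (a trained QVAE encoder would produce these).
rng = np.random.default_rng(0)
codes = rng.integers(0, 2, size=(100_000, 64))  # hypothetical 64-bit codes
index = build_index(codes)                       # 8 bytes per point
neighbours = hamming_search(index, codes[42], k=5)
```

    Packing 64-bit codes into 8 bytes per point is what makes scaling to very large datasets memory-efficient; the reported correlation between Hamming and Euclidean distances is what makes the retrieved neighbours meaningful.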

    The social value of a QALY: raising the bar or barring the raise?

    Background: Since the inception of the National Institute for Health and Clinical Excellence (NICE) in England, there have been questions about the empirical basis for the cost-per-QALY threshold used by NICE and whether QALYs gained by different beneficiaries of health care should be weighted equally. The Social Value of a QALY (SVQ) project, reported in this paper, was commissioned to address these two questions. The results of SVQ were released during a time of considerable debate about the NICE threshold, and authors with differing perspectives have drawn on the SVQ results to support their cases. As these discussions continue, and given the selective use of results by those involved, it is important not only to present a summary overview of SVQ, but also for those who conducted the research to contribute to the debate as to its implications for NICE. Discussion: The issue of the threshold was addressed in two ways: first, by combining, via a set of models, the current UK Value of a Prevented Fatality (used in transport policy) with data on fatality age, life expectancy and age-related quality of life; and, second, via a survey designed to test the feasibility of combining respondents' answers to willingness to pay and health state utility questions to arrive at values of a QALY. Modelling resulted in values of £10,000-£70,000 per QALY. Via survey research, most methods of aggregating the data resulted in values of a QALY of £18,000-£40,000, although others resulted in implausibly high values. An additional survey, addressing the issue of weighting QALYs, used two methods, one indicating that QALYs should not be weighted and the other that greater weight could be given to QALYs gained by some groups. Summary: Although we conducted only a feasibility study and a modelling exercise, neither presents compelling evidence for moving the NICE threshold up or down. Some preliminary evidence would indicate it could be moved up for some types of QALY and down for others. While many members of the public appear to be open to the possibility of using somewhat different QALY weights for different groups of beneficiaries, we do not yet have any secure evidence base for introducing such a system.
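    The first, model-based route amounts to dividing a Value of a Prevented Fatality by the QALYs an average fatality forgoes. The back-of-envelope sketch below is illustrative only: the £1.6m VPF, 40 remaining life years, 0.8 quality weight, and 3.5% discount rate are assumptions for the example, not the paper's inputs.

```python
# Illustrative value-per-QALY calculation: VPF divided by QALYs forgone.
def discounted_qalys(life_years_remaining: float, quality_weight: float,
                     discount_rate: float = 0.035) -> float:
    """Sum of annual QALYs over the remaining years, discounted to present value."""
    return sum(quality_weight / (1 + discount_rate) ** t
               for t in range(int(life_years_remaining)))

vpf = 1_600_000                                          # hypothetical VPF (GBP)
plain = discounted_qalys(40, 0.8, discount_rate=0.0)     # 32 QALYs undiscounted
disc = discounted_qalys(40, 0.8)                         # about 17.7 QALYs at 3.5%
print(f"Undiscounted: £{vpf / plain:,.0f} per QALY")     # about £50,000
print(f"Discounted at 3.5%: £{vpf / disc:,.0f} per QALY")  # about £90,000
```

    Even this toy version shows how strongly the implied value depends on discounting and the other inputs, which is consistent with the wide £10,000-£70,000 spread the modelling produced.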

    Coordinated optimization of visual cortical maps (II) Numerical studies

    It is an attractive hypothesis that the spatial structure of visual cortical architecture can be explained by the coordinated optimization of multiple visual cortical maps representing orientation preference (OP), ocular dominance (OD), spatial frequency, or direction preference. In part (I) of this study we defined a class of analytically tractable coordinated optimization models and solved representative examples in which a spatially complex organization of the orientation preference map is induced by inter-map interactions. We found that attractor solutions near symmetry breaking threshold predict a highly ordered map layout and require a substantial OD bias for OP pinwheel stabilization. Here we examine in numerical simulations whether such models exhibit biologically more realistic spatially irregular solutions at a finite distance from threshold and when transients towards attractor states are considered. We also examine whether model behavior qualitatively changes when the spatial periodicities of the two maps are detuned and when considering more than two feature dimensions. Our numerical results support the view that neither minimal energy states nor intermediate transient states of our coordinated optimization models successfully explain the spatially irregular architecture of the visual cortex. We discuss several alternative scenarios and additional factors that may improve the agreement between model solutions and biological observations.
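    A minimal sketch of the kind of simulation involved, assuming a generic Ginzburg-Landau-style relaxation rather than the paper's exact energy functional: a complex orientation-preference field z(x) and a real ocular-dominance field o(x) are relaxed by gradient descent with a cross-map interaction of strength alpha.

```python
# Coupled relaxation of an OP map (complex z) and an OD map (real o).
import numpy as np

n, steps, dt, alpha = 64, 200, 0.1, 0.2   # grid size, iterations, step, coupling
rng = np.random.default_rng(1)
z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) * 0.1
o = rng.standard_normal((n, n)) * 0.1

def laplacian(f):
    """Five-point Laplacian with periodic boundaries."""
    return (np.roll(f, 1, 0) + np.roll(f, -1, 0)
            + np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4 * f)

for _ in range(steps):
    # Each map relaxes toward pattern formation; the alpha terms couple them.
    dz = laplacian(z) + z * (1 - np.abs(z) ** 2) - alpha * z * o ** 2
    do = laplacian(o) + o * (1 - o ** 2) - alpha * o * np.abs(z) ** 2
    z, o = z + dt * dz, o + dt * do

# Pinwheels are the zeros of z; their density, spacing, and (ir)regularity are
# the kinds of quantities such numerical studies compare against real maps.
```

    The question the paper asks of models in this family is whether states reached far from threshold, or transients on the way to attractors, reproduce the irregular layouts seen in cortex, rather than the overly ordered minimal-energy solutions.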

    The additional value of patient-reported health status in predicting 1-year mortality after invasive coronary procedures: A report from the Euro Heart Survey on Coronary Revascularisation

    Objective: Self-perceived health status may be helpful in identifying patients at high risk for adverse outcomes. The Euro Heart Survey on Coronary Revascularization (EHS-CR) provided an opportunity to explore whether impaired health status was a predictor of 1-year mortality in patients with coronary artery disease (CAD) undergoing angiographic procedures. Methods: Data from the EHS-CR, which included 5619 patients from 31 member countries of the European Society of Cardiology, were used. Inclusion criteria for the current study were completion of a self-report measure of health status, the EuroQol Questionnaire (EQ-5D), at discharge and information on 1-year follow-up, resulting in a study population of 3786 patients. Results: The 1-year mortality was 3.2% (n = 120). Survivors reported fewer problems on the five dimensions of the EQ-5D as compared with non-survivors. The analyses adjusted for a broad range of potential confounders that reached p<0.10 in the unadjusted analyses. In the adjusted analyses, problems with self-care (OR 3.45; 95% CI 2.14 to 5.59) and a low rating (≤60) on health status (OR 2.41; 95% CI 1.47 to 3.94) were the most powerful independent predictors of mortality among the 22 clinical variables included in the analysis. Furthermore, patients who reported no problems on all five dimensions had significantly lower 1-year mortality rates (OR 0.47; 95% CI 0.28 to 0.81). Conclusions: This analysis shows that impaired health status is associated with a 2-3-fold increased risk of all-cause mortality in patients with CAD, independent of other conventional risk factors. These results highlight the importance of including patients' subjective experience of their own health status in the evaluation strategy to optimise risk stratification and management in clinical practice.
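    A schematic of the kind of adjusted analysis reported, on synthetic data: variable names are assumed, and the generating coefficients are chosen so the recovered odds ratios echo the reported 3.45 (self-care) and 2.41 (low rating); this is not the survey's dataset or code.

```python
# Logistic regression of 1-year mortality on EQ-5D items, odds ratios via exp().
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 3786
X = np.column_stack([
    rng.integers(0, 2, n),           # problems with self-care (EQ-5D item)
    rng.integers(0, 2, n),           # health-status rating <= 60
    rng.normal(65, 10, n),           # age, standing in for clinical confounders
])
# Synthetic outcome: log-odds built so exp(1.24) ~ 3.45 and exp(0.88) ~ 2.41.
logit = -4.5 + 1.24 * X[:, 0] + 0.88 * X[:, 1] + 0.02 * (X[:, 2] - 65)
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

model = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
odds_ratios = np.exp(model.params[1:])   # coefficients back-transformed to ORs
print(dict(zip(["self_care", "low_rating", "age"], odds_ratios.round(2))))
```

    In the real analysis the adjustment set covered 22 clinical variables rather than the single stand-in here; the point of the sketch is only the mechanics of reading odds ratios off an adjusted logistic model.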

    On the Origins of Suboptimality in Human Probabilistic Inference

    Humans have been shown to combine noisy sensory information with previous experience (priors), in qualitative and sometimes quantitative agreement with the statistically optimal predictions of Bayesian integration. However, when the prior distribution becomes more complex than a simple Gaussian, such as skewed or bimodal, training takes much longer and performance appears suboptimal. It is unclear whether such suboptimality arises from an imprecise internal representation of the complex prior, or from additional constraints in performing probabilistic computations on complex distributions, even when accurately represented. Here we probe the sources of suboptimality in probabilistic inference using a novel estimation task in which subjects are exposed to an explicitly provided distribution, thereby removing the need to remember the prior. Subjects had to estimate the location of a target given a noisy cue and a visual representation of the prior probability density over locations, which changed on each trial. Different classes of priors were examined (Gaussian, unimodal, bimodal). Subjects' performance was in qualitative agreement with the predictions of Bayesian Decision Theory although generally suboptimal. The degree of suboptimality was modulated by statistical features of the priors but was largely independent of the class of the prior and level of noise in the cue, suggesting that suboptimality in dealing with complex statistical features, such as bimodality, may be due to a problem of acquiring the priors rather than computing with them. We performed a factorial model comparison across a large set of Bayesian observer models to identify additional sources of noise and suboptimality. Our analysis rejects several models of stochastic behavior, including probability matching and sample-averaging strategies. Instead we show that subjects' response variability was mainly driven by a combination of a noisy estimation of the parameters of the priors, and by variability in the decision process, which we represent as a noisy or stochastic posterior.
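    A minimal sketch of the Bayesian observer at the core of such an analysis, with assumed numbers throughout (prior modes, cue noise, and the power kappa are illustrative, not the authors' fitted values): the explicit bimodal prior is combined with a Gaussian likelihood from the noisy cue, and one caricature of a stochastic posterior is obtained by sampling from a power-transformed posterior.

```python
# Bayesian observer: posterior = prior * likelihood, estimate = posterior mean.
import numpy as np

x = np.linspace(-10, 10, 2001)           # candidate target locations
sigma_cue = 1.5                          # assumed cue noise level

def gaussian(x, mu, sd):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2)

prior = 0.5 * gaussian(x, -3.0, 1.0) + 0.5 * gaussian(x, 3.0, 1.0)  # bimodal
cue = 1.0                                # the noisy cue observed on this trial
likelihood = gaussian(cue, x, sigma_cue)

posterior = prior * likelihood
posterior /= posterior.sum()
bdt_estimate = (x * posterior).sum()     # posterior mean: optimal for squared loss

# Caricature of a "stochastic posterior": sharpen or flatten by a power kappa,
# then sample the response instead of reporting the optimal estimate.
kappa = 0.6
noisy_posterior = posterior ** kappa
noisy_posterior /= noisy_posterior.sum()
response = np.random.default_rng(2).choice(x, p=noisy_posterior)
```

    The factorial model comparison in the paper pits variants of this pipeline against each other, for example noise in the prior's parameters versus noise in the decision stage, to locate where subjects' variability actually enters.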