
    Quantum Computing

    Quantum mechanics---the theory describing the fundamental workings of nature---is famously counterintuitive: it predicts that a particle can be in two places at the same time, and that two remote particles can be inextricably and instantaneously linked. These predictions have been the topic of intense metaphysical debate ever since the theory's inception early last century. However, supreme predictive power combined with direct experimental observation of some of these unusual phenomena leaves little doubt as to its fundamental correctness. In fact, without quantum mechanics we could not explain the workings of a laser, nor indeed how a fridge magnet operates. Over the last several decades quantum information science has emerged to seek answers to the question: can we gain some advantage by storing, transmitting and processing information encoded in systems that exhibit these unique quantum properties? Today it is understood that the answer is yes. Many research groups around the world are working towards one of the most ambitious goals humankind has ever embarked upon: a quantum computer that promises to exponentially improve computational power for particular tasks. A number of physical systems, spanning much of modern physics, are being developed for this task---ranging from single particles of light to superconducting circuits---and it is not yet clear which, if any, will ultimately prove successful. Here we describe the latest developments for each of the leading approaches and explain what the major challenges are for the future. Comment: 26 pages, 7 figures, 291 references. Early draft of Nature 464, 45-53 (4 March 2010). Published version is more up-to-date and has several corrections, but is half the length with far fewer references.

    Treatment of rats with a self-selected hyperlipidic diet, increases the lipid content of the main adipose tissue sites in a proportion similar to that of the lipids in the rest of organs and tissues

    Adipose tissue (AT) is distributed as large differentiated masses and as smaller depots covering vessels and organs, as well as interspersed within them. The differences in cell type and size make AT one of the most dispersed and complex organs. Lipid storage is partly shared by other tissues such as muscle and liver. We aimed to obtain an approximate estimate of the size of the lipid reserves stored outside the main fat depots. Both male and female rats were made overweight by 4 weeks of feeding a cafeteria diet. Total lipid content was analyzed in brain, liver, gastrocnemius muscle, four white AT sites (subcutaneous, perigonadal, retroperitoneal and mesenteric), two brown AT sites (interscapular and perirenal) and in a pool of the rest of organs and tissues (after discarding gut contents). Organ lipid content was estimated and tabulated for each individual rat. Food intake was measured daily. There was a surprisingly high proportion of lipid not accounted for by the main macroscopic AT sites, even when brain, liver and the main brown AT (BAT) sites were discounted. Muscle contained about 8% of body lipid, liver 1-1.4%, the four white AT sites 28-63%, and the rest of the body (including muscle) 38-44%. There was a good correlation between AT lipid and body lipid, but lipid in "other organs" was also highly correlated with body lipid; brain lipid was not. Irrespective of dietary intake, accumulation of body fat was uniform both in the main lipid storage and handling organs (the large AT masses, but also liver and muscle) and in the "rest" of tissues. These storage sites, in specialized (adipose) or non-specialized (liver, muscle) tissues, reacted in parallel to a hyperlipidic diet challenge. We postulate that body lipid stores are handled and regulated in a coordinated way, by a more centralized and comprehensive mechanism than usually assumed.
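
    A minimal sketch of the kind of per-rat tabulation and correlation described above, using entirely hypothetical values rather than the study's measurements: each storage compartment's lipid is expressed as a share of body lipid and correlated with total body lipid.

        # Hypothetical per-rat lipid data (g); not the study's measurements.
        import numpy as np
        import pandas as pd

        rng = np.random.default_rng(1)
        n = 16
        body = rng.normal(60.0, 10.0, n)                      # total body lipid per rat

        df = pd.DataFrame({
            "body_lipid": body,
            "white_AT": 0.45 * body + rng.normal(0, 2, n),    # four white AT sites pooled
            "other_organs": 0.40 * body + rng.normal(0, 2, n),
            "brain": rng.normal(1.8, 0.1, n),                 # essentially constant
        })

        # Share of body lipid in each compartment and its correlation with body lipid
        for site in ["white_AT", "other_organs", "brain"]:
            share = 100 * df[site] / df["body_lipid"]
            r = df[site].corr(df["body_lipid"])
            print(f"{site}: {share.mean():.0f}% of body lipid, r = {r:.2f}")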

    Comparison of machine learning and semi-quantification algorithms for (I123)FP-CIT classification: the beginning of the end for semi-quantification?

    Background Semi-quantification methods are well established in the clinic for assisted reporting of (I123) Ioflupane images. Arguably, these are limited diagnostic tools. Recent research has demonstrated the potential for improved classification performance offered by machine learning algorithms. A direct comparison between methods is required to establish whether a move towards widespread clinical adoption of machine learning algorithms is justified. This study compared three machine learning algorithms with a range of semi-quantification methods, using the Parkinson’s Progression Markers Initiative (PPMI) research database and a locally derived clinical database for validation. The machine learning algorithms were based on support vector machine classifiers with three different sets of features: (1) voxel intensities; (2) principal components of image voxel intensities; and (3) striatal binding ratios (SBRs) from the putamen and caudate. Semi-quantification methods were based on SBRs from both putamina, with and without consideration of the caudates. Normal limits for the SBRs were defined through four different methods: (1) the minimum of age-matched controls; (2) the mean minus 1/1.5/2 standard deviations of age-matched controls; (3) linear regression of normal patient data against age (minus 1/1.5/2 standard errors); and (4) selection of the optimum operating point on the receiver operating characteristic curve from normal and abnormal training data. Each machine learning and semi-quantification technique was evaluated with stratified, nested 10-fold cross-validation, repeated 10 times. Results The mean accuracy of the semi-quantitative methods for classification of local data into Parkinsonian and non-Parkinsonian groups varied from 0.78 to 0.87, contrasting with 0.89 to 0.95 for classifying PPMI data into healthy controls and Parkinson’s disease groups. The machine learning algorithms gave mean accuracies between 0.88 and 0.92 for local data and between 0.95 and 0.97 for PPMI data. Conclusions Classification performance was lower for the local database than the research database for both semi-quantitative and machine learning algorithms. However, for both databases, the machine learning methods generated equal or higher mean accuracies (with lower variance) than any of the semi-quantification approaches. The gain in performance from using machine learning algorithms compared to semi-quantification was relatively small and may be insufficient, when considered in isolation, to offer significant advantages in the clinical context.
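
    As an illustration of the evaluation scheme described above (stratified, nested 10-fold cross-validation repeated 10 times around a support vector machine classifier), the following scikit-learn sketch uses synthetic placeholders for the SBR feature matrix and labels; it is not the study's code, and the hyper-parameter grid is an assumption.

        # Nested, repeated, stratified cross-validation of an SVM classifier.
        # X and y are synthetic stand-ins for SBR features and diagnostic labels.
        import numpy as np
        from sklearn.svm import SVC
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.model_selection import (GridSearchCV, RepeatedStratifiedKFold,
                                              StratifiedKFold, cross_val_score)

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 4))        # e.g. left/right putamen and caudate SBRs
        y = rng.integers(0, 2, size=200)     # 0 = non-Parkinsonian, 1 = Parkinsonian

        inner_cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
        outer_cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=10, random_state=0)

        # Inner loop tunes the SVM; outer loop estimates classification accuracy.
        model = GridSearchCV(
            make_pipeline(StandardScaler(), SVC(kernel="rbf")),
            param_grid={"svc__C": [0.1, 1, 10], "svc__gamma": ["scale", 0.1]},
            cv=inner_cv,
        )
        scores = cross_val_score(model, X, y, cv=outer_cv, scoring="accuracy")
        print(f"mean accuracy {scores.mean():.2f} +/- {scores.std():.2f}")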

    The RAPID-CTCA trial (Rapid Assessment of Potential Ischaemic Heart Disease with CTCA) - a multicentre parallel-group randomised trial to compare early computerised tomography coronary angiography versus standard care in patients presenting with suspected or confirmed acute coronary syndrome: study protocol for a randomised controlled trial.

    BACKGROUND: Emergency department attendances with chest pain requiring assessment for acute coronary syndrome (ACS) are a major global health issue. Standard assessment includes history, examination, electrocardiogram (ECG) and serial troponin testing. Computerised tomography coronary angiography (CTCA) enables additional anatomical assessment of patients for coronary artery disease (CAD) but has only been studied in very low-risk patients. This trial aims to investigate the effect of early CTCA upon interventions, event rates and health care costs in patients with suspected/confirmed ACS who are at intermediate risk. METHODS/DESIGN: Participants will be recruited in about 35 tertiary and district general hospitals in the UK. Patients ≥18 years old with symptoms of suspected/confirmed ACS and at least one of the following will be included: (1) ECG abnormalities, e.g. ST-segment depression >0.5 mm; (2) history of ischaemic heart disease; (3) troponin elevation above the 99th centile of the normal reference range or increase in high-sensitivity troponin meeting European Society of Cardiology criteria for 'rule-in' of myocardial infarction (MI). The early use of ≥64-slice CTCA as part of routine assessment will be compared to standard care. The primary endpoint will be all-cause death or recurrent type 1 or type 4b MI at 1 year, measured as the time to such an event. A number of secondary clinical, process and safety endpoints will be collected and analysed. Cost effectiveness will be estimated in terms of the lifetime incremental cost per quality-adjusted life year gained. We plan to recruit 2424 evaluable patients (1212 per arm; 2500 allowing for ~3% drop-out) to have 90% power to detect a 20% versus 15% difference in 1-year death or recurrent type 1 or type 4b MI, two-sided p < 0.05. Analysis will be on an intention-to-treat basis. The relationship between intervention and the primary outcome will be analysed using Cox proportional hazards regression adjusted for study site (used to stratify the randomisation), age, baseline Global Registry of Acute Coronary Events score, previous CAD and baseline troponin level. The results will be expressed as a hazard ratio with the corresponding 95% confidence intervals and p value. DISCUSSION: The Rapid Assessment of Potential Ischaemic Heart Disease with CTCA (RAPID-CTCA) trial will recruit 2500 participants across about 35 hospital sites. It will be the first study to investigate the role of CTCA in the early assessment of patients with suspected or confirmed ACS who are at intermediate risk, including patients who have raised troponin measurements during initial assessment. TRIAL REGISTRATION: ISRCTN19102565. Registered on 3 October 2014. ClinicalTrials.gov: NCT02284191.
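
    The quoted sample size can be checked with a standard normal-approximation formula for comparing two proportions (the protocol's exact calculation method is not stated here); under the stated assumptions of 20% versus 15% event rates, 90% power and a two-sided alpha of 0.05, the sketch below reproduces roughly 1212 evaluable patients per arm.

        # Two-proportion sample-size check (illustrative; not from the protocol).
        from math import ceil, sqrt
        from scipy.stats import norm

        p1, p2 = 0.20, 0.15          # anticipated 1-year event rates in the two arms
        alpha, power = 0.05, 0.90    # two-sided significance level and power
        z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)

        p_bar = (p1 + p2) / 2        # pooled event rate under the null hypothesis
        n_per_arm = ((z_a * sqrt(2 * p_bar * (1 - p_bar))
                      + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2) / (p1 - p2) ** 2
        print(ceil(n_per_arm))       # ~1212 evaluable patients per arm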

    Factors contributing to delays in diagnosis of breast cancers in Ghana, West Africa

    BACKGROUND: Late diagnoses and poor prognoses of breast cancer are common throughout Africa. METHODS: To identify responsible factors, we utilized data from a population-based case-control study involving 1,184 women with breast malignancies conducted in three hospitals in Accra and Kumasi, Ghana. Interviews focused on potential breast cancer risk factors as well as factors that might contribute to presentation delays. We calculated odds ratios (OR) and 95% confidence intervals (CI) comparing malignancies with biopsy masses larger than 5 cm (62.4% of the 1,027 cases with measurable lesions) to smaller lesions. RESULTS: In multivariate analyses, strong predictors of larger masses were limited education (OR=1.96, 95% CI 1.32–2.90 for <primary vs. ≥senior secondary school), being separated/divorced or widowed (1.75, 1.18–2.60 and 2.25, 1.43–3.55, respectively, vs. currently married), delay in seeking care after onset of symptoms (2.64, 1.77–3.95 for ≥12 vs. ≤2 months), care having initially been sought from someone other than a doctor/nurse (1.86, 0.85–4.09), and frequent use of herbal medications/treatments (1.51, 0.95–2.43 for ≥3x/day usage vs. none). Particularly high risks associated with these factors were found among less educated women; for example, women with less than junior secondary schooling who delayed seeking care for breast symptoms for 6 months or longer were at nearly 4 times the risk of more educated women who promptly sought assistance. CONCLUSIONS: Our findings suggest that additional communication, particularly among less educated women, could promote earlier breast cancer diagnoses. Involvement of individuals other than medical practitioners, including traditional healers, may be helpful in this process.
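
    The adjusted estimates above come from multivariate (logistic regression) models; as a minimal illustration of how an unadjusted odds ratio and its 95% confidence interval are derived from a 2x2 table, the sketch below uses hypothetical counts, not the study's data.

        # Unadjusted odds ratio with a 95% CI from a 2x2 table (hypothetical counts).
        from math import exp, log, sqrt

        # rows: delayed care >=12 months vs <=2 months; cols: mass >5 cm vs smaller
        a, b = 120, 60    # delayed:  large mass, smaller lesion
        c, d = 300, 400   # prompt:   large mass, smaller lesion

        odds_ratio = (a * d) / (b * c)
        se_log_or = sqrt(1 / a + 1 / b + 1 / c + 1 / d)
        lo = exp(log(odds_ratio) - 1.96 * se_log_or)
        hi = exp(log(odds_ratio) + 1.96 * se_log_or)
        print(f"OR = {odds_ratio:.2f}, 95% CI {lo:.2f}-{hi:.2f}")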

    Annex to Quirke et al. Quality assurance in pathology in colorectal cancer screening and diagnosis: annotations of colorectal lesions

    Multidisciplinary, evidence-based European Guidelines for quality assurance in colorectal cancer screening and diagnosis have recently been developed by experts in a pan-European project coordinated by the International Agency for Research on Cancer. The full guideline document includes a chapter on pathology with pan-European recommendations which take into account the diversity and heterogeneity of health care systems across the EU. The present paper is based on the annex to the pathology chapter, which describes in greater depth some of the issues raised in the chapter, particularly details of special interest to pathologists. It is presented here to make the relevant discussion known to a wider scientific audience.