
    Potential savings without compromising the quality of care

    SUMMARY Aims: This study was designed to analyse the association between adherence to guidelines for rational drug use and surrogate outcome markers for hypertension, diabetes and hypercholesterolaemia. Methods: The study used a cross-sectional ecological design. Data from dispensed prescriptions and medical records were analysed from 24 primary healthcare centres with a combined registered population of 330,000 patients in 2006. Guideline adherence was determined by calculating the proportion of the prescribed volume of antidiabetic agents, antihypertensives and lipid-lowering agents accounted for by the 14 drugs included in the guidelines for these three areas. Patient outcome was assessed using surrogate marker data on HbA1c, blood pressure (BP) and s-cholesterol. The association between guideline adherence and outcome measures was analysed by logistic regression. Results: The proportion of guideline antidiabetic drugs relative to all antidiabetic drugs prescribed varied between 80% and 97% among the practices, the ratio of angiotensin-converting enzyme (ACE) inhibitors to all renin–angiotensin drugs between 40% and 77%, and the ratio of simvastatin to all statins between 58% and 90%. The proportion of patients reaching targets for HbA1c, BP and s-cholesterol varied between 34% and 66%, 36% and 57%, and 46% and 71%, respectively. No significant associations were found between adherence to the guidelines and outcome. Expenditure on antihypertensives and lipid-lowering drugs could potentially be reduced by 10% and 50%, respectively, if all practices adhered to the guidelines as closely as the top-performing practices. Conclusion: A substantial amount of money can be saved in primary care without compromising the quality of care by using recommended first-line drugs for the treatment of diabetes, hypertension and hypercholesterolaemia.
    What's known
    ‱ There are substantial price differences between branded and off-patent drugs for the treatment of diabetes, hypertension and hypercholesterolaemia.
    ‱ There is wide variation in adherence to prescribing targets in primary healthcare.
    ‱ Knowledge is limited on the relation between adherence to prescribing targets or guidelines, patient outcomes and the potential savings that could be achieved.
    What's new
    ‱ No significant associations were found at practice level between adherence to the guidelines and outcomes in terms of patients reaching target levels for surrogate markers.
    ‱ A substantial amount of money can be saved in primary care without compromising the quality of care by using recommended off-patent drugs for the treatment of diabetes, hypertension and hypercholesterolaemia.
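
    The practice-level (ecological) analysis lends itself to a compact illustration. Below is a minimal sketch, with entirely hypothetical data and assumed column names, of how adherence (the guideline share of prescribed volume) could be related to target attainment with a binomial logistic regression of the kind the study describes; it uses pandas and statsmodels and is not the authors' actual code.

```python
# Hypothetical illustration of the ecological analysis: adherence is the
# guideline drugs' share of the prescribed volume in a class, and the outcome
# is the count of patients at / not at target per practice. All numbers and
# column names are made up for this sketch.
import pandas as pd
import statsmodels.api as sm

practices = pd.DataFrame({
    "guideline_volume": [820, 900, 760, 880, 840, 950],  # e.g. DDDs of recommended statin
    "total_volume":     [1000, 950, 980, 1000, 960, 1000],  # DDDs of all statins
    "at_target":        [520, 610, 455, 580, 505, 600],   # patients reaching s-cholesterol target
    "not_at_target":    [480, 340, 545, 420, 455, 400],
})
practices["adherence"] = practices["guideline_volume"] / practices["total_volume"]

# Binomial GLM with a logit link: (successes, failures) modelled on adherence.
X = sm.add_constant(practices[["adherence"]])
y = practices[["at_target", "not_at_target"]]
model = sm.GLM(y, X, family=sm.families.Binomial()).fit()
print(model.summary())  # a non-significant adherence coefficient would mirror the study's finding
```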

    Do patients have worse outcomes in heart failure than in cancer? A primary care-based cohort study with 10-year follow-up in Scotland

    Aims: To evaluate whether the survival of patients with heart failure (HF) in the community is better than that of patients diagnosed with the four most common cancers in men and women, in a contemporary primary care cohort in Scotland. Methods and Results: The data were obtained from the Primary Care Clinical Informatics Unit, from a database of 1.75 million people registered with 393 general practices in Scotland. Sex-specific survival modelling was undertaken using Cox proportional hazards models, adjusted for potential confounders. A total of 56,658 patients were eligible for inclusion, with 147,938 person-years of follow-up (median follow-up 2.04 years). In men, heart failure (reference group; 5-year survival 37.7%) had worse mortality outcomes than prostate cancer (HR 0.61, 95% CI 0.57-0.65; 5-year survival 49.0%) and bladder cancer (HR 0.88, 95% CI 0.81-0.96; 5-year survival 36.5%), but better than lung cancer (HR 3.86, 95% CI 3.65-4.07; 5-year survival 2.8%) and colorectal cancer (HR 1.23, 95% CI 1.16-1.31; 5-year survival 25.9%). In women, patients with HF (reference group; 5-year survival 31.9%) had worse mortality outcomes than patients with breast cancer (HR 0.55, 95% CI 0.51-0.59; 5-year survival 61.0%), but better outcomes than lung cancer (HR 3.82, 95% CI 3.60-4.05; 5-year survival 3.6%), ovarian cancer (HR 1.98, 95% CI 1.80-2.17; 5-year survival 19%) and colorectal cancer (HR 1.21, 95% CI 1.13-1.29; 5-year survival 28.4%). Conclusions: Despite advances in management, heart failure remains as 'malignant' as some of the common cancers in both men and women.
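
    For readers unfamiliar with this kind of survival modelling, here is a minimal sketch using the lifelines package's CoxPHFitter on toy data, with heart failure as the implicit reference group; the columns, covariates and values are assumptions for illustration, not the study's dataset.

```python
# Sketch of Cox proportional hazards modelling: diagnosis dummies are coded
# against heart failure as the reference group, adjusted for age.
# All values here are toy data.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "years":   [2.1, 0.4, 5.0, 1.2, 3.3, 0.6, 4.7, 2.9],  # follow-up time
    "died":    [1,   1,   0,   1,   0,   1,   0,   1],     # 1 = death observed
    "dx_lung": [0,   1,   0,   1,   0,   0,   1,   0],     # lung cancer vs. HF (reference)
    "age":     [71,  68,  75,  80,  66,  72,  69,  77],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="years", event_col="died")
cph.print_summary()  # exp(coef) for dx_lung is the hazard ratio relative to HF
```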

    Prognosis of heart failure recorded in primary care, acute hospital admissions, or both: a population-based linked electronic health record cohort study in 2.1 million people

    Aims: The prognosis of patients hospitalized for worsening heart failure (HF) is well described, but not that of patients managed solely in non-acute settings such as primary care or secondary outpatient care. We assessed the distribution and prognostic differences for patients with HF recorded in primary care (including secondary outpatient care) (PC), in hospital admissions alone, or in both contexts. Methods and Results: This study was part of the CALIBER programme, comprising linked data from primary care, hospital admissions, and death certificates for 2.1 million inhabitants of England. We identified 89,554 patients with incident HF, of whom 23,547 (26%) were recorded in PC but never hospitalised, 30,629 (34%) in hospital admissions but not known in PC, 23,681 (26%) in both, and 11,697 (13%) in death certificates only. The highest prescription rates of ACE inhibitors, beta-blockers, and mineralocorticoid receptor antagonists were found in patients known in both contexts. The respective 5-year survival in the first three groups was 43.9% (95% CI 43.2-44.6%), 21.7% (95% CI 21.1-22.2%), and 39.8% (95% CI 39.2-40.5%), compared with 88.1% (95% CI 87.9-88.3%) in the age- and sex-matched general population. Conclusion: In the general population, one in four patients with HF will not be hospitalised for worsening HF within a median follow-up of 1.7 years, yet they still have a poor five-year prognosis. Patients admitted to hospital with worsening HF but not known to have HF in primary care have the worst prognosis and management. Mitigating the prognostic burden of HF requires greater consistency across primary and secondary care in the identification, profiling and treatment of patients.
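
    The four mutually exclusive groups can be derived from linked records with simple set operations. A minimal sketch, with hypothetical patient identifiers standing in for the real CALIBER linkage:

```python
# Sketch of classifying incident HF patients by where the diagnosis is
# recorded, mirroring the four groups in the abstract. The identifier sets
# are hypothetical; the real linkage is far richer.
primary_care = {"p1", "p2", "p3"}   # HF coded in primary / outpatient care
hospital     = {"p3", "p4"}         # HF coded in hospital admissions
death_cert   = {"p5"}               # HF appearing on a death certificate

pc_only    = primary_care - hospital
hosp_only  = hospital - primary_care
both       = primary_care & hospital
death_only = death_cert - primary_care - hospital

for name, group in [("PC only", pc_only), ("Hospital only", hosp_only),
                    ("Both", both), ("Death certificate only", death_only)]:
    print(f"{name}: {len(group)}")
```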

    Latency reduction by dynamic channel estimator selection in C-RAN networks using fuzzy logic

    Due to a dramatic increase in the number of mobile users, operators are forced to expand their networks accordingly. The Cloud Radio Access Network (C-RAN) was introduced to tackle the problems of the current generation of mobile networks and to support future 5G networks. However, many challenges arise from the centralised structure of C-RAN. Accurate acquisition of channel state information for large numbers of remote radio heads and user equipment is one of the main challenges in this architecture. To minimize the time required to acquire channel information in C-RAN and to reduce end-to-end latency, this paper proposes a dynamic channel estimator selection algorithm. The idea is to assign different channel estimation algorithms to the users of the mobile network based on their link status (specifically, relative to an SNR threshold). For automatic and adaptive selection of channel estimators, a fuzzy logic algorithm is employed as a decision maker that selects the best SNR threshold using bit error rate measurements. The results demonstrate a reduction in estimation time with only a small loss in data throughput. The benefit of the proposed algorithm is also observed to increase at high SNR values.
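
    A minimal sketch of the decision-making idea: triangular membership functions map a BER measurement to an SNR threshold, which then splits users between a cheap and an accurate estimator. The membership functions, rule base, centroid values and estimator names (LS, MMSE) are illustrative assumptions, not the paper's exact design.

```python
# Fuzzy selection of an SNR threshold from BER feedback, then per-user
# estimator assignment. Everything numeric here is an assumed toy design.
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_snr_threshold(ber):
    """Map a measured BER to an SNR threshold (dB) by weighted-average defuzzification."""
    low  = tri(ber, -0.01, 0.00, 0.02)   # low BER: link is healthy
    med  = tri(ber,  0.00, 0.02, 0.05)
    high = tri(ber,  0.02, 0.05, 0.20)   # high BER: link is degraded
    # Rule base: worse BER -> higher threshold -> more users get the accurate estimator.
    num = low * 5.0 + med * 12.0 + high * 20.0
    den = low + med + high
    return num / den if den else 12.0

def select_estimator(user_snr_db, ber):
    # Fast, simple estimator (e.g. LS) above the threshold; slower, more
    # accurate one (e.g. MMSE) below it, trading latency against accuracy.
    return "LS" if user_snr_db >= fuzzy_snr_threshold(ber) else "MMSE"

print(select_estimator(user_snr_db=15.0, ber=0.01))  # healthy link -> "LS"
```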

    Effects of High-Intensity Interval Training versus Continuous Training on Physical Fitness, Cardiovascular Function and Quality of Life in Heart Failure Patients

    Introduction: Physical fitness is an important prognostic factor in heart failure (HF). To improve fitness, different types of exercise have been explored, with recent focus on high-intensity interval training (HIT). We comprehensively compared the effects of HIT versus continuous training (CT) on physical fitness, cardiovascular function and structure, and quality of life in HF patients in NYHA class II-III, hypothesizing that HIT leads to superior improvements compared with CT. Methods: Twenty HF patients (male:female 19:1, 64±8 years, ejection fraction 38±6%) were allocated to 12 weeks of HIT (10 × 1 minute at 90% of maximal workload, alternated with 2.5 minutes at 30% of maximal workload) or CT (30 minutes at 60-75% of maximal workload). Before and after the intervention, we examined physical fitness (incremental cycling test), cardiac function and structure (echocardiography), vascular function and structure (ultrasound) and quality of life (SF-36, Minnesota Living with Heart Failure Questionnaire (MLHFQ)). Results: Training improved maximal workload, peak oxygen uptake (VO2peak) relative to predicted VO2peak, oxygen uptake at the anaerobic threshold, and maximal oxygen pulse (all P<0.05), with no differences between HIT and CT (N.S.). We found no major changes in resting cardiovascular function and structure. The SF-36 physical function score improved after training (P<0.05), whilst the SF-36 total score and MLHFQ did not change (N.S.). Conclusion: Training induced significant improvements in parameters of physical fitness, although no evidence for superiority of HIT over CT was demonstrated. No major effect of training was found on cardiovascular structure and function or quality of life in HF patients in NYHA class II-III.
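
    The two protocols reduce to simple workload schedules. A sketch, assuming only that targets are expressed as fractions of each patient's maximal workload (Wmax); the wattage and the CT midpoint are illustrative choices:

```python
# The two exercise protocols written as (minutes, fraction of Wmax) schedules.
# Purely illustrative arithmetic over the numbers given in the abstract.
def hit_schedule():
    """10 x (1 min at 90% Wmax + 2.5 min at 30% Wmax)."""
    return [(1.0, 0.90), (2.5, 0.30)] * 10

def ct_schedule():
    """30 min continuous at 60-75% Wmax (midpoint assumed here)."""
    return [(30.0, 0.675)]

def session_work(schedule, wmax_watts):
    # Total mechanical work target in watt-minutes.
    return sum(mins * frac * wmax_watts for mins, frac in schedule)

print(session_work(hit_schedule(), wmax_watts=150))  # HIT session
print(session_work(ct_schedule(), wmax_watts=150))   # CT session
```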

    Knowledge transfer between datasets in deep architectures for information retrieval

    Recent approaches to IR include neural networks that generate query and document vector representations. The representations are used as the basis for document retrieval and can encode semantic features if trained on large datasets, an ability that sets them apart from classical IR approaches such as TF-IDF, which instead rely on the co-occurrence of keywords in queries and documents. However, the datasets needed to train these networks are not available to the owners of most search services in use today, since those services do not have enough users. Methods for enabling the use of neural IR models in data-poor environments are therefore of interest. In this work, a bag-of-trigrams neural IR architecture is used in a transfer learning procedure in an attempt to increase performance on a target dataset by pre-training on external datasets. The target dataset is WikiQA, and the external datasets are Quora's Question Pairs, Reuters' RCV1 and SQuAD. Considering individual model performance, pre-training on Question Pairs and fine-tuning on WikiQA yields the best individual models. Considering average performance, however, pre-training on the chosen external datasets results in lower performance on the target dataset, both when all datasets are used together and when they are used individually, with the average varying by external dataset: among the pre-trained networks, RCV1 gives the lowest and Question Pairs the highest average performance. Surprisingly, an untrained, randomly initialized network performs well, beating all pre-trained networks on average and performing on par with BM25. The best-performing model on average is a neural IR model trained on the target dataset without prior pre-training.

    Understanding LTE with Matlab: from mathematical modeling to simulation and prototyping

    An introduction to the technical details of the Physical Layer of the LTE standard with MATLABÂź. LTE (Long Term Evolution) and LTE-Advanced are among the latest mobile communications standards, designed to realize the dream of a truly global, fast, all-IP-based, secure broadband mobile access technology. This book examines the Physical Layer (PHY) of the LTE standards by incorporating three conceptual elements: an overview of the theory behind key enabling technologies; a concise discussion of the standard's specifications; and the MATLABÂź algorithms needed to simulate the standard.

    • 

    corecore