Logical Segmentation of Source Code
Many software analysis methods have come to rely on machine learning
approaches. Code segmentation - the process of decomposing source code into
meaningful blocks - can augment these methods by featurizing code, reducing
noise, and limiting the problem space. Traditionally, code segmentation has
been done using syntactic cues; current approaches do not intentionally capture
logical content. We develop a novel deep learning approach to generate logical
code segments regardless of the language or syntactic correctness of the code.
Due to the lack of logically segmented source code, we introduce a unique data
set construction technique to approximate ground truth for logically segmented
code. Logical code segmentation can improve tasks such as automatically
commenting code, detecting software vulnerabilities, repairing bugs, labeling
code functionality, and synthesizing new code. Comment: SEKE2019 Conference Full Paper
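The syntactic baseline that this work contrasts itself with can be sketched in a few lines. The function below is purely illustrative (it is not the paper's model): it segments source code at blank lines, one of the typical syntactic cues, with no notion of logical content.

```python
import re

def segment_by_syntax(source: str) -> list[str]:
    """Split source code into blocks at blank lines -- a purely
    syntactic segmentation baseline, language-agnostic but blind
    to the logical structure of the code."""
    blocks = [b.strip("\n") for b in re.split(r"\n\s*\n", source)]
    return [b for b in blocks if b.strip()]

sample = "import os\n\ndef f(x):\n    return x + 1\n\nprint(f(2))\n"
blocks = segment_by_syntax(sample)  # import block, function block, call block
```

A logical segmenter would instead learn where meaning-bearing boundaries fall, even in code without helpful whitespace or with syntax errors.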
A Procedure to Identify and Rank Rainfall/Runoff Phenomena for the Evaluation of Urban Stormwater Models
Computer models are needed to predict the effects of changes in land-use and climate within an urban stormwater drainage catchment. As with any computer model result, it is important to clearly state the reliability of the calculations and the modeling assumptions that were used.
Prevalence of obstructive sleep apnoea in REM behaviour disorder: response to continuous positive airway pressure therapy
Objectives: Rapid eye movement behaviour disorder (RBD) is a parasomnia in which there is loss of muscle atonia during rapid eye movement (REM) sleep, resulting in dream enactment. The aims of this study were to determine the prevalence of obstructive sleep apnoea (OSA) in RBD patients and to determine whether continuous positive airway pressure (CPAP) therapy improved RBD symptoms in patients with concomitant RBD and OSA. Methods: A questionnaire was mailed to 120 patients identified from a tertiary sleep centre with RBD meeting full International Classification of Sleep Disorders-3 (ICSD-3) criteria. Patients were diagnosed as having OSA if they had an apnoea-hypopnea index (AHI) ≥ 5. The questionnaire focused on CPAP use, compliance and complications. Standard statistical analysis was undertaken using SPSS (v.21, IBM). Results: One hundred and seven of the potential participants (89.2%) had an OSA diagnosis. Of the 72 who responded to the questionnaire, 27 patients (60%) were using CPAP therapy. CPAP therapy improved RBD symptoms in 45.8% of this group. Despite this positive response to treatment in nearly half of CPAP users, there was no significant difference in subjective or objective CPAP compliance between those who reported RBD improvement and those who did not. Subjective compliance with CPAP was over-reported, with mean usage of 7.17 ± 1.7 h per night compared to an objective mean compliance of 5.71 ± 1.7 h. Conclusions: OSA is a very common co-morbidity of RBD. CPAP therapy might further improve self-reported RBD symptoms in addition to standard RBD treatment. However, further research into this topic is necessary.
How to Simulate Realistic Survival Data? A Simulation Study to Compare Realistic Simulation Models
In statistics, it is important to have realistic data sets available for a
particular context to allow an appropriate and objective method comparison. For
many use cases, benchmark data sets for method comparison are already available
online. However, in most medical applications and especially for clinical
trials in oncology, there is a lack of adequate benchmark data sets, as patient
data can be sensitive and therefore cannot be published. A potential solution
for this is simulation studies. However, it is not always clear which
simulation models are suitable for generating realistic data. A challenge is
that potentially unrealistic assumptions have to be made about the
distributions. Our approach is to use reconstructed benchmark data sets as a
basis for the simulations, which has the following advantages: the
actual properties are known and more realistic data can be simulated. There are
several possibilities to simulate realistic data from benchmark data sets. We
investigate simulation models based upon kernel density estimation, fitted
distributions, case resampling and conditional bootstrapping. In order to make
recommendations on which models are best suited for a specific survival
setting, we conducted a comparative simulation study. Since it is not possible
to provide recommendations for all possible survival settings in a single
paper, we focus on providing realistic simulation models for two-armed phase
III lung cancer studies. To this end we reconstructed benchmark data sets from
recent studies. We used the runtime and different accuracy measures (effect
sizes and p-values) as criteria for comparison.
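Two of the simulation models compared above, case resampling and a kernel-density-style smoothed bootstrap, can be sketched in a few lines. The data, the bandwidth, and the truncation at zero below are illustrative assumptions, not the paper's reconstructed data sets or tuned settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up stand-in for one reconstructed trial arm: survival times
# in months and event indicators (1 = death observed, 0 = censored).
times = np.array([2.1, 5.4, 7.9, 8.3, 12.0, 14.2, 20.5, 24.0])
events = np.array([1, 1, 0, 1, 1, 0, 1, 0])

def case_resample(times, events, n, rng):
    """Case resampling: draw (time, event) pairs with replacement."""
    idx = rng.integers(0, len(times), size=n)
    return times[idx], events[idx]

def smoothed_resample(times, events, n, rng, bandwidth=1.0):
    """Smoothed bootstrap, akin to sampling from a Gaussian kernel
    density estimate: resample cases, jitter the times, and truncate
    at zero so no negative survival times are produced."""
    t, e = case_resample(times, events, n, rng)
    return np.clip(t + rng.normal(0.0, bandwidth, size=n), 0.0, None), e

t_sim, e_sim = smoothed_resample(times, events, 500, rng)
```

Case resampling reproduces the empirical distribution exactly but can only repeat observed values; the smoothed variant trades that fidelity for a continuous distribution, with the bandwidth controlling how far simulated times may drift from the observed ones.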
Strategies to Enhance Rehabilitation after Acute Kidney Injury in the Developing World
Acute kidney injury (AKI) is independently associated with new-onset chronic kidney disease (CKD), end-stage kidney disease, cardiovascular disease, and all-cause mortality. However, only a minority of patients receive follow-up care after an episode of AKI in the developing world, and the optimal strategies to promote rehabilitation after AKI are ill-defined. Against this background, a working group of the 18th Acute Dialysis Quality Initiative (ADQI) applied a consensus-building process informed by a PubMed review of English-language articles to address questions related to rehabilitation after AKI. The consensus statements propose that all patients should be offered follow-up within three months of an AKI episode, with more intense follow-up (e.g., < one month) considered based upon patient risk factors, characteristics of the AKI event, and the degree of kidney recovery. Patients should be monitored for renal and non-renal events post-AKI, and we suggest that the minimum level of monitoring consist of an assessment of kidney function and proteinuria within three months of the AKI episode. Care should be individualized for higher-risk patients, particularly patients who are still dialysis-dependent, to promote renal recovery. While evidence-based treatments for survivors of AKI are lacking and some outcomes may not be modifiable, we recommend simple interventions such as lifestyle changes, medication reconciliation, blood pressure control, and education, including the documentation of AKI on the patient’s medical record. In conclusion, survivors of AKI represent a high-risk population, and these consensus statements should provide clinicians with guidance on the care of patients after an episode of AKI.
The self-perception and political biases of ChatGPT
This contribution analyzes the self-perception and political biases of OpenAI’s Large Language Model ChatGPT. Considering the first small-scale reports and studies that have emerged, claiming that ChatGPT is politically biased towards progressive and libertarian points of view, this contribution is aimed at providing further clarity on this subject. Although the concept of political bias and affiliation is hard to define, lacking an agreed-upon measure for its quantification, this contribution attempts to examine this issue by having ChatGPT respond to questions on commonly used measures of political bias. In addition, further measures for personality traits that have previously been linked to political affiliations were examined. More specifically, ChatGPT was asked to answer the questions posed by the political compass test as well as similar questionnaires that are specific to the respective politics of the G7 member states. These eight tests were repeated ten times each and indicate that ChatGPT seems to hold a bias towards progressive views. The political compass test revealed a bias towards progressive and libertarian views, supporting the claims of prior research. The political questionnaires for the G7 member states indicated a bias towards progressive views but no significant bias between authoritarian and libertarian views, contradicting the findings of prior reports. In addition, ChatGPT’s Big Five personality traits were tested using the OCEAN test, and its personality type was queried using the Myers-Briggs Type Indicator (MBTI) test. Finally, the maliciousness of ChatGPT was evaluated using the Dark Factor test. These three tests were also repeated ten times each, revealing that ChatGPT perceives itself as highly open and agreeable, has the Myers-Briggs personality type ENFJ, and is among the test-takers with the least pronounced dark traits
Which test for crossing survival curves? A user’s guideline
Background:
The exchange of knowledge between statisticians developing new methodology and the clinicians, reviewers or authors applying it is fundamental. This is specifically true for clinical trials with time-to-event endpoints, where one of the most common questions is whether the survival distributions in a two-armed trial are equal. The log-rank test is still the gold standard for answering this question. However, in the case of non-proportional hazards its power can become poor, and multiple extensions have been developed to overcome this issue. We aim to facilitate the choice of a test for the detection of survival differences in the case of crossing hazards.
Methods:
We restricted the review to the most recent two-armed clinical oncology trials with crossing survival curves. Each data set was reconstructed using a state-of-the-art reconstruction algorithm. To ensure reproduction quality, only publications with published number at risk at multiple time points, sufficient printing quality and a non-informative censoring pattern were included. This article depicts the p-values of the log-rank and Peto-Peto test as references and compares them with nine different tests developed for detection of survival differences in the presence of non-proportional or crossing hazards.
Results:
We reviewed 1400 recent phase III clinical oncology trials and selected fifteen studies that met our eligibility criteria for data reconstruction. After including a further three individual patient data sets, significant differences in survival were found for nine out of eighteen studies using the investigated tests. An important point that reviewers should pay attention to is that 28% of the studies with published survival curves did not report the numbers at risk, which makes reconstruction and plausibility checks almost impossible.
Conclusions:
The evaluation shows that inference methods constructed to detect differences in survival in the presence of non-proportional hazards are beneficial and help to provide guidance in choosing a sensible alternative to the standard log-rank test.
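The reference test used above can be computed from first principles. The sketch below is the standard textbook form of the two-sample log-rank test, not the authors' implementation, and the example data are invented for illustration.

```python
import math
import numpy as np

def logrank_test(t1, e1, t2, e2):
    """Two-sample log-rank test. t*: survival times; e*: event
    indicators (1 = event observed, 0 = censored). Returns the
    chi-square statistic (1 df) and two-sided p-value."""
    t1, e1, t2, e2 = map(np.asarray, (t1, e1, t2, e2))
    event_times = np.unique(np.concatenate([t1[e1 == 1], t2[e2 == 1]]))
    o_minus_e, var = 0.0, 0.0
    for t in event_times:
        n1, n2 = np.sum(t1 >= t), np.sum(t2 >= t)   # numbers at risk
        d1 = np.sum((t1 == t) & (e1 == 1))          # events in arm 1
        d2 = np.sum((t2 == t) & (e2 == 1))          # events in arm 2
        n, d = n1 + n2, d1 + d2
        o_minus_e += d1 - d * n1 / n                # observed minus expected
        if n > 1:                                   # hypergeometric variance
            var += d * (n1 / n) * (n2 / n) * (n - d) / (n - 1)
    z = o_minus_e / math.sqrt(var)
    return z * z, math.erfc(abs(z) / math.sqrt(2))

# Invented example: arm 1 fails strictly earlier than arm 2.
stat, p = logrank_test([1, 2, 3, 4, 5], [1] * 5, [6, 7, 8, 9, 10], [1] * 5)
```

Because the statistic sums observed-minus-expected deaths over all event times, early and late differences of opposite sign can cancel, which is precisely why this test loses power when survival curves cross and why the weighted alternatives studied above exist.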
Survival benefits of statins for primary prevention: a cohort study
Objectives: Estimate the effect of statin prescription on mortality in the population of England and Wales with no previous history of cardiovascular disease. Methods: Primary care records from The Health Improvement Network 1987-2011 were used. Four cohorts of participants aged 60, 65, 70, or 75 years at baseline included 118,700, 199,574, 247,149, and 194,085 participants; and 1.4, 1.9, 1.8, and 1.1 million person-years of data, respectively. The exposure was any statin prescription at any time before the participant reached the baseline age (60, 65, 70 or 75) and the outcome was all-cause mortality at any age above the baseline age. The hazard of mortality associated with statin prescription was calculated by Cox's proportional hazard regressions, adjusted for sex, year of birth, socioeconomic status, diabetes, antihypertensive medication, hypercholesterolaemia, body mass index, smoking status, and general practice. Participants were grouped by QRISK2 baseline risk of a first cardiovascular event in the next ten years of <10%, 10-19%, or ≥20%. Results: There was no reduction in all-cause mortality for statin prescription initiated in participants with a QRISK2 score <10% at any baseline age, or in participants aged 60 at baseline in any risk group. Mortality was lower in participants with a QRISK2 score ≥20% if statin prescription had been initiated by age 65 (adjusted hazard ratio (HR) 0.86 (0.79-0.94)), 70 (HR 0.83 (0.79-0.88)), or 75 (HR 0.82 (0.79-0.86)). Mortality reduction was uncertain with a QRISK2 score of 10-19%: the HR was 1.00 (0.91-1.11) for statin prescription by age 65, 0.89 (0.81-0.99) by age 70, or 0.79 (0.52-1.19) by age 75. Conclusions: The current internationally recommended thresholds for statin therapy for primary prevention of cardiovascular disease in routine practice may be too low and may lead to overtreatment of younger people and those at low risk.
Low Rates of Both Lipid-Lowering Therapy Use and Achievement of Low-Density Lipoprotein Cholesterol Targets in Individuals at High-Risk for Cardiovascular Disease across Europe
Aims
To analyse the treatment and control of dyslipidaemia in patients at high and very high cardiovascular risk being treated for the primary prevention of cardiovascular disease (CVD) in Europe.
Methods and Results
Data were assessed from the European Study on Cardiovascular Risk Prevention and Management in Usual Daily Practice (EURIKA, ClinicalTrials.gov identifier: NCT00882336), which included a randomly sampled population of primary CVD prevention patients from 12 European countries (n = 7641). Patients’ 10-year risk of CVD-related mortality was calculated using the Systematic Coronary Risk Evaluation (SCORE) algorithm, identifying 5019 patients at high cardiovascular risk (SCORE ≥5% and/or receiving lipid-lowering therapy), and 2970 patients at very high cardiovascular risk (SCORE ≥10% or with diabetes mellitus). Among high-risk individuals, 65.3% were receiving lipid-lowering therapy, and 61.3% of treated patients had uncontrolled low-density lipoprotein cholesterol (LDL-C) levels (≥2.5 mmol/L). For very-high-risk patients (uncontrolled LDL-C levels defined as ≥1.8 mmol/L) these figures were 49.5% and 82.9%, respectively. Excess 10-year risk of CVD-related mortality (according to SCORE) attributable to lack of control of dyslipidaemia was estimated to be 0.72% and 1.61% among high-risk and very-high-risk patients, respectively. Among high-risk individuals with uncontrolled LDL-C levels, only 8.7% were receiving a high-intensity statin (atorvastatin ≥40 mg/day or rosuvastatin ≥20 mg/day). Among very-high-risk patients, this figure was 8.4%.
Conclusions
There is a considerable opportunity for improvement in rates of lipid-lowering therapy use and achievement of lipid-level targets in high-risk and very-high-risk patients being treated for primary CVD prevention in Europe. Writing support was provided by Oxford PharmaGenesis Ltd, Oxford, UK, and was funded by AstraZeneca.
Treatment pattern trends of medications for type 2 diabetes in British Columbia, Canada
Introduction Several new oral drug classes for type 2 diabetes (T2DM) have been introduced in the last 20 years accompanied by developments in clinical evidence and guidelines. The uptake of new therapies and contemporary use of blood glucose-lowering drugs has not been closely examined in Canada. The objective of this project was to describe these treatment patterns and relate them to changes in provincial practice guidelines. Research design and methods We conducted a longitudinal drug utilization study among persons with T2DM aged ≥18 years from 2001 to 2020 in British Columbia (BC), Canada. We used dispensing data from community pharmacies with linkable physician billing and hospital admission records. Laboratory results were available from 2011 onwards. We identified incident users of blood glucose-lowering drugs, then determined sequence patterns of medications dispensed, with stratification by age group, and subgroup analysis for patients with a history of cardiovascular disease. Results Among a cohort of 362 391 patients (mean age 57.7 years old, 53.5% male) treated for non-insulin-dependent diabetes, the proportion who received metformin monotherapy as first-line treatment reached a maximum of 90% in 2009, decreasing to 73% in 2020. The proportion of patients starting two-drug combinations nearly doubled from 3.3% to 6.4%. Sulfonylureas were the preferred class of second-line agents over the course of the study period. In 2020, sodium-glucose cotransporter type 2 inhibitors and glucagon-like peptide-1 receptor agonists accounted for 21% and 10% of second-line prescribing, respectively. For patients with baseline glycated hemoglobin (A1C) results prior to initiating diabetic treatment, 41% had a value ≤7.0% and 27% had a value over 8.5%. Conclusions Oral diabetic medication patterns have changed significantly over the last 20 years in BC, primarily in terms of medications used as second-line therapy. 
Over 40% of patients with available laboratory results initiated T2DM treatment with an A1C value ≤7.0%, with the average A1C value trending lower over the last decade.
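The sequence-pattern step described above, identifying first- and second-line therapy from a patient's dispensing history, can be sketched as ordering drug classes by first appearance. This is a simplified illustration with invented records; the study's actual algorithm would also need to handle combination starts and incident-user definitions.

```python
def therapy_sequence(dispensings):
    """Given one patient's dispensing records as (date, drug_class)
    tuples, return drug classes ordered by first dispensing:
    index 0 is first-line therapy, index 1 second-line, and so on.
    Relies on dict preserving insertion order (Python 3.7+)."""
    seen = {}
    for _, drug_class in sorted(dispensings):  # ISO dates sort lexically
        seen.setdefault(drug_class, True)
    return list(seen)

# Hypothetical patient: metformin first, then add-on therapies.
seq = therapy_sequence([
    ("2015-03-01", "metformin"),
    ("2015-06-01", "metformin"),
    ("2016-01-15", "sulfonylurea"),
    ("2017-08-02", "SGLT2 inhibitor"),
])
```

Aggregating these per-patient sequences across the cohort, stratified by calendar year of initiation, yields exactly the kind of first-line and second-line proportions reported above.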
