
    Finite difference time domain modeling of steady state scattering from jet engines with moving turbine blades

    The approach chosen to model steady-state scattering from jet engines with moving turbine blades is based upon the Finite Difference Time Domain (FDTD) method. The FDTD method is a numerical electromagnetics technique based upon the direct solution in the time domain of Maxwell's time-dependent curl equations throughout a volume. One of the strengths of this method is the ability to model objects with complicated shape and/or material composition. General time domain functions may be used as source excitations. For example, a plane wave excitation may be specified as a pulse containing many frequencies and at any incidence angle to the scatterer. A best fit to the scatterer is accomplished using cubical cells in the standard Cartesian implementation of the FDTD method. The material composition of the scatterer is determined by specifying its electrical properties at each cell on the scatterer. Thus, the FDTD method is a suitable choice for problems with complex geometries evaluated at multiple frequencies. It is assumed that the reader is familiar with the FDTD method.
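    The update scheme the abstract describes can be illustrated with a minimal one-dimensional FDTD loop: a leapfrog update of the electric and magnetic fields driven by a broadband Gaussian pulse. This is only a sketch of the general method, not the paper's three-dimensional jet-engine model; the grid size, time-step count and source parameters below are arbitrary illustrative values.

```python
# Minimal 1D FDTD sketch: leapfrog update of Maxwell's curl equations on a
# uniform grid with an additive Gaussian pulse source (broadband excitation).
# Illustrative only; the paper's 3D cubical-cell model is not reproduced here.
import numpy as np

nz, nt = 200, 500           # number of grid cells and time steps (arbitrary)
ez = np.zeros(nz)           # electric field samples
hy = np.zeros(nz)           # magnetic field samples
imp0 = 377.0                # impedance of free space

for t in range(nt):
    # update H from the spatial difference of E (one curl equation)
    hy[:-1] += (ez[1:] - ez[:-1]) / imp0
    # update E from the spatial difference of H (the other curl equation)
    ez[1:] += (hy[1:] - hy[:-1]) * imp0
    # inject a Gaussian pulse at the centre of the grid
    ez[nz // 2] += np.exp(-((t - 30.0) / 10.0) ** 2)

print(f"peak |Ez| after {nt} steps: {np.abs(ez).max():.3f}")
```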

    Epigenetic modelling of former, current and never smokers

    BACKGROUND: DNA methylation (DNAm) performs excellently in the discrimination of current and former smokers from never smokers, where AUCs > 0.9 are regularly reported using a single CpG site (cg05575921; AHRR). However, there is a paucity of DNAm models that attempt to distinguish current, former and never smokers as individual classes. Derivation of a robust DNAm model that accurately distinguishes between current, former and never smokers would be particularly valuable to epidemiological research (as a more accurate smoking definition vs. self-report) and could potentially translate to clinical settings. Therefore, we appraise four DNAm models of ternary smoking status (that is, current, former and never smokers): methylation at cg05575921 (AHRR model), weighted scores from 13 CpGs created by Maas et al. (Maas model), weighted scores from a LASSO model of candidate smoking CpGs from the literature (candidate CpG LASSO model), and weighted scores from a LASSO model supplied with genome-wide 450K data (agnostic LASSO model). Discrimination is assessed by AUC, whilst classification accuracy is assessed by accuracy and kappa, derived from confusion matrices. RESULTS: We find that DNAm can classify ternary smoking status with reasonable accuracy, including when applied to external data. Ternary classification using only DNAm far exceeds the classification accuracy of simply assigning all samples to the most prevalent class (63.7% vs. 36.4%). Further, we develop a DNAm classifier which performs well in discriminating current from former smokers (agnostic LASSO model AUC in external validation data: 0.744). Finally, across our DNAm models, we show evidence of enrichment for biological pathways and human phenotype ontologies relevant to smoking, such as haemostasis, molybdenum cofactor synthesis, body fatness and social behaviours, providing evidence of the generalisability of our classifiers. CONCLUSIONS: Our findings suggest that DNAm can classify ternary smoking status with close to 65% accuracy. Both the ternary smoking status classifiers and the current versus former smoking status classifiers address the present lack of former smoker classification in the epigenetic literature, which is essential if DNAm classifiers are to adequately relate to real-world populations. To improve performance further, additional focus on improving discrimination of current from former smokers is necessary. SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1186/s13148-021-01191-6.
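    A hedged sketch of the fourth model type described above: an L1-penalised (LASSO-style) multinomial classifier over genome-wide methylation values, evaluated with a confusion matrix and kappa. File names, column names and hyperparameters are assumptions for illustration only; this is not the authors' exact pipeline.

```python
# Sketch of a LASSO-style ternary smoking classifier on DNAm beta-values.
# Input files and column names are hypothetical placeholders.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import cohen_kappa_score, confusion_matrix
from sklearn.model_selection import train_test_split

betas = pd.read_csv("methylation_betas.csv", index_col=0)           # samples x CpG sites
status = pd.read_csv("smoking_status.csv", index_col=0)["status"]   # current / former / never

X_train, X_test, y_train, y_test = train_test_split(
    betas, status, stratify=status, test_size=0.3, random_state=0)

# L1 penalty gives sparse (LASSO-like) CpG selection within a multinomial model
clf = LogisticRegression(penalty="l1", solver="saga", C=0.1, max_iter=5000)
clf.fit(X_train, y_train)

pred = clf.predict(X_test)
print(confusion_matrix(y_test, pred, labels=["current", "former", "never"]))
print("kappa:", cohen_kappa_score(y_test, pred))
```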

    Polytypic Genetic Programming

    Program synthesis via heuristic search often requires a great deal of boilerplate code to adapt program APIs to the search mechanism. In addition, the majority of existing approaches are not type-safe, i.e. they can fail at runtime because the search mechanisms lack the strict type information often available to the compiler. In this article, we describe Polytope, a Scala framework that uses polytypic programming, a relatively recent advance in program abstraction. Polytope requires a minimum of boilerplate code and supports a form of strong typing in which type rules are automatically enforced by the compiler, even for search operations such as mutation, which are applied at runtime. By operating directly on language-native expressions, it provides an embeddable optimization procedure for existing code. We give a tutorial example of the specific polytypic approach we adopt and compare both runtime efficiency and required lines of code against the well-known EpochX GP framework, showing comparable performance in the former and the complete elimination of boilerplate for the latter.
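    The core idea, type-safe mutation, can be sketched outside Scala as well: a subtree may only be replaced by an expression of the same type. The toy representation below is a Python approximation with hypothetical helpers, preserving types by construction at runtime; Polytope's contribution is that the equivalent rules are enforced statically by the Scala compiler via polytypic programming.

```python
# Toy illustration of type-safe mutation: a subtree is only ever replaced by a
# new random expression of the same type. This approximates at runtime what
# Polytope enforces at compile time; all names here are hypothetical.
import random
from dataclasses import dataclass, field

@dataclass
class Node:
    value: object                    # operator symbol or literal
    type_: type                      # result type of this subtree (int, bool, ...)
    children: list = field(default_factory=list)

def random_expr(type_: type) -> Node:
    """Generate a fresh random leaf of the requested type (illustrative only)."""
    if type_ is int:
        return Node(random.randint(0, 9), int)
    return Node(random.random() < 0.5, bool)

def mutate(node: Node) -> Node:
    """Replace one randomly chosen subtree with a new subtree of the same type."""
    if not node.children or random.random() < 0.3:
        return random_expr(node.type_)       # type preserved by construction
    i = random.randrange(len(node.children))
    node.children[i] = mutate(node.children[i])
    return node

tree = Node("+", int, [Node(1, int), Node(2, int)])
print(mutate(tree))                          # still an int-typed expression
```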

    Normalized Affymetrix expression data are biased by G-quadruplex formation

    Probes with runs of four or more guanines (G-stacks) in their sequences can exhibit a level of hybridization that is unrelated to the expression levels of the mRNA that they are intended to measure. This is most likely caused by the formation of G-quadruplexes, structures that probes with G-stacks are capable of forming, in which inter-probe guanines form Hoogsteen hydrogen bonds. We demonstrate that for a specific microarray data set using the Human HG-U133A Affymetrix GeneChip and RMA normalization there is significant bias in the expression levels, the fold change and the correlations between expression levels. These effects grow more pronounced as the number of G-stack probes in a probe set increases. Approximately 14% of the probe sets are directly affected. The analysis was repeated for a number of other normalization pipelines and two, FARMS and PLIER, minimized the bias to some extent. We estimate that ∼15% of the data sets deposited in the GEO database are susceptible to the effect. The inclusion of G-stack probes in the affected data sets can bias key parameters used in the selection and clustering of genes. The benefit of eliminating these probes from analyses of such affected data sets outweighs the accompanying increase in noise in the signal. © 2011 The Author(s)
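    The screening step implied above reduces to a simple sequence test: flag any probe whose sequence contains a run of four or more guanines, then mark probe sets containing such probes. A minimal sketch follows; the input file and column names are assumptions, not the authors' data.

```python
# Flag probes containing a G-stack (four or more consecutive guanines) and
# summarise how many probe sets are affected. File/column names are assumed.
import pandas as pd

probes = pd.read_csv("hgu133a_probe_sequences.csv")    # assumed columns: probeset_id, sequence

# any run of >= 4 G's necessarily contains the substring "GGGG"
probes["has_g_stack"] = probes["sequence"].str.contains("GGGG")

affected = probes.groupby("probeset_id")["has_g_stack"].any()
print(f"{affected.mean():.1%} of probe sets contain at least one G-stack probe")
```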

    Exploring Fitness and Edit Distance of Mutated Python Programs

    Genetic Improvement (GI) is the process of using computational search techniques to improve existing software, e.g. in terms of execution time, power consumption or correctness. As in most heuristic search algorithms, the search is guided by fitness, with GI searching the space of program variants of the original software. The relationship between the program space and fitness is seldom simple and often quite difficult to analyse. This paper makes a preliminary analysis of GI’s fitness-distance measure on program repair with three small Python programs. Each program undergoes incremental mutations while the change in fitness, measured as the proportion of tests passed, is monitored. We conclude that the fitness of these programs often does not change with single mutations, and we also confirm the inherent discreteness of bug-fixing fitness functions. Although our findings cannot be assumed to be general for other software, they provide us with interesting directions for further investigation.
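    The fitness signal studied above is simply the proportion of unit tests a program variant passes. The toy variant and tests below are assumptions for illustration (not the paper's three Python programs), but they show why single mutations often leave fitness unchanged: most tests are insensitive to any one edit.

```python
# Fitness of a program variant = fraction of its tests that pass.
# The variant and tests are toy examples, not the paper's subject programs.

def abs_variant(x):
    # a single mutation has removed the sign flip on the negative branch
    return x if x > 0 else x

tests = [
    lambda f: f(3) == 3,
    lambda f: f(-3) == 3,
    lambda f: f(0) == 0,
]

def fitness(program) -> float:
    """Proportion of tests passed; exceptions count as failures."""
    passed = 0
    for test in tests:
        try:
            passed += bool(test(program))
        except Exception:
            pass
    return passed / len(tests)

print(fitness(abs_variant))   # ~0.67: only the negative-input test detects the mutation
```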

    Epigenetic biomarkers of ageing are predictive of mortality risk in a longitudinal clinical cohort of individuals diagnosed with oropharyngeal cancer

    Background: Epigenetic clocks are biomarkers of ageing derived from DNA methylation levels at a subset of CpG sites. The difference between age predicted by these clocks and chronological age, termed “epigenetic age acceleration”, has been shown to predict age-related disease and mortality. We aimed to assess the prognostic value of epigenetic age acceleration and a DNA methylation-based mortality risk score with all-cause mortality in a prospective clinical cohort of individuals with head and neck cancer: Head and Neck 5000. We investigated two markers of intrinsic epigenetic age acceleration (IEAA-Horvath and IEAA-Hannum), one marker of extrinsic epigenetic age acceleration (EEAA), one optimised to predict physiological dysregulation (AgeAccelPheno), one optimised to predict lifespan (AgeAccelGrim) and a DNA methylation-based predictor of mortality (ZhangScore). Cox regression models were first used to estimate adjusted hazard ratios (HR) and 95% confidence intervals (CI) for associations of epigenetic age acceleration with all-cause mortality in people with oropharyngeal cancer (n = 408; 105 deaths). The added prognostic value of epigenetic markers compared to a clinical model including age, sex, TNM stage and HPV status was then evaluated. Results: IEAA-Hannum and AgeAccelGrim were associated with mortality risk after adjustment for clinical and lifestyle factors (HRs per standard deviation [SD] increase in age acceleration = 1.30 [95% CI 1.07, 1.57; p = 0.007] and 1.40 [95% CI 1.06, 1.83; p = 0.016], respectively). There was weak evidence that the addition of AgeAccelGrim to the clinical model improved 3-year mortality prediction (area under the receiver operating characteristic curve: 0.80 vs. 0.77; p value for difference = 0.069). Conclusion: In the setting of a large, clinical cohort of individuals with head and neck cancer, our study demonstrates the potential of epigenetic markers of ageing to enhance survival prediction in people with oropharyngeal cancer, beyond established prognostic factors. Our findings have potential uses in both clinical and non-clinical contexts: to aid treatment planning and improve patient stratification.
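    A hedged sketch of the primary analysis described above: a Cox proportional hazards model relating a per-SD epigenetic age-acceleration measure to all-cause mortality, adjusted for the clinical covariates named in the abstract. The input file, column names and encodings are assumptions, not the Head and Neck 5000 data.

```python
# Cox model: hazard of all-cause death vs. per-SD epigenetic age acceleration,
# adjusted for age, sex, TNM stage and HPV status (assumed numeric encodings).
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("oropharyngeal_cohort.csv")   # hypothetical cohort extract

# standardise the age-acceleration measure so the HR is per SD increase
df["age_accel_sd"] = (df["age_accel_grim"] - df["age_accel_grim"].mean()) / df["age_accel_grim"].std()

cph = CoxPHFitter()
cph.fit(
    df[["follow_up_years", "died", "age_accel_sd", "age", "sex", "tnm_stage", "hpv_status"]],
    duration_col="follow_up_years",
    event_col="died",
)
cph.print_summary()   # hazard ratio per SD with 95% CI for each covariate
```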

    Clinical judgement by primary care physicians for the diagnosis of all-cause dementia or cognitive impairment in symptomatic people

    Background: In primary care, general practitioners (GPs) unavoidably reach a clinical judgement about a patient as part of their encounter with patients, and so clinical judgement can be an important part of the diagnostic evaluation. Typically, clinical decision-making about what to do next for a patient incorporates clinical judgement about the diagnosis with severity of symptoms and patient factors, such as their ideas and expectations for treatment. When evaluating patients for dementia, many GPs report using their own judgement to evaluate cognition, using information that is immediately available at the point of care, to decide whether someone has or does not have dementia, rather than more formal tests. Objectives: To determine the diagnostic accuracy of GPs’ clinical judgement for diagnosing cognitive impairment and dementia in symptomatic people presenting to primary care. To investigate the heterogeneity of test accuracy in the included studies. Search methods: We searched MEDLINE (Ovid SP), Embase (Ovid SP), PsycINFO (Ovid SP), Web of Science Core Collection (ISI Web of Science), and LILACS (BIREME) on 16 September 2021. Selection criteria: We selected cross-sectional and cohort studies from primary care where clinical judgement was determined by a GP either prospectively (after consulting with a patient who has presented to a specific encounter with the doctor) or retrospectively (based on knowledge of the patient and review of the medical notes, but not relating to a specific encounter with the patient). The target conditions were dementia and cognitive impairment (mild cognitive impairment and dementia) and we included studies with any appropriate reference standard such as the Diagnostic and Statistical Manual of Mental Disorders (DSM), International Classification of Diseases (ICD), aetiological definitions, or expert clinical diagnosis. Data collection and analysis: Two review authors screened titles and abstracts for relevant articles and extracted data separately, with differences resolved by consensus discussion. We used QUADAS-2 to evaluate the risk of bias and concerns about applicability in each study using anchoring statements. We performed meta-analysis using the bivariate method. Main results: We identified 18,202 potentially relevant articles, of which 12,427 remained after de-duplication. We assessed 57 full-text articles and extracted data on 11 studies (17 papers), of which 10 studies had quantitative data. We included eight studies in the meta-analysis for the target condition dementia and four studies for the target condition cognitive impairment. Most studies were at low risk of bias as assessed with the QUADAS-2 tool, except for the flow and timing domain where four studies were at high risk of bias, and the reference standard domain where two studies were at high risk of bias. Most studies had low concern about applicability to the review question in all QUADAS-2 domains. Average age ranged from 73 years to 83 years (weighted average 77 years). The percentage of female participants in studies ranged from 47% to 100%. The percentage of people with a final diagnosis of dementia was between 2% and 56% across studies (a weighted average of 21%). For the target condition dementia, in individual studies sensitivity ranged from 34% to 91% and specificity ranged from 58% to 99%.
In the meta-analysis for dementia as the target condition, in eight studies in which a total of 826 of 2790 participants had dementia, the summary diagnostic accuracy of clinical judgement of general practitioners was sensitivity 58% (95% confidence interval (CI) 43% to 72%), specificity 89% (95% CI 79% to 95%), positive likelihood ratio 5.3 (95% CI 2.4 to 8.2), and negative likelihood ratio 0.47 (95% CI 0.33 to 0.61). For the target condition cognitive impairment, in individual studies sensitivity ranged from 58% to 97% and specificity ranged from 40% to 88%. The summary diagnostic accuracy of clinical judgement of general practitioners in four studies in which a total of 594 of 1497 participants had cognitive impairment was sensitivity 84% (95% CI 60% to 95%), specificity 73% (95% CI 50% to 88%), positive likelihood ratio 3.1 (95% CI 1.4 to 4.7), and negative likelihood ratio 0.23 (95% CI 0.06 to 0.40). It was impossible to draw firm conclusions in the analysis of heterogeneity because there were small numbers of studies. For specificity, the data were compatible with studies that used ICD-10, or that applied retrospective judgement, having higher reported specificity than studies that used DSM definitions or prospective judgement. In contrast, for sensitivity, studies that used a prospective index test may have had higher sensitivity than studies that used a retrospective index test. Authors' conclusions: Clinical judgement of GPs is more specific than sensitive for the diagnosis of dementia. It would be necessary to use additional tests to confirm the diagnosis for either target condition, or to confirm the absence of the target conditions, but clinical judgement may inform the choice of further testing. Many people whom a GP judges as having dementia will have the condition. People with false negative diagnoses are likely to have less severe disease, and some could be identified by using more formal testing in people whom GPs judge as not having dementia. Some false positives may require similar practical support to those with dementia, but some, such as people with depression, may suffer delayed intervention for an alternative treatable pathology.
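    The summary measures reported above follow directly from a 2x2 table of index-test results against the reference standard. The worked example below uses illustrative counts chosen to roughly reproduce the pooled dementia estimates (sensitivity 58%, specificity 89%, LR+ 5.3, LR- 0.47); they are not data from any included study.

```python
# Diagnostic accuracy from a 2x2 confusion table: sensitivity, specificity and
# likelihood ratios. Counts are illustrative only, not study data.
def diagnostic_accuracy(tp, fp, fn, tn):
    sens = tp / (tp + fn)                      # proportion of cases detected
    spec = tn / (tn + fp)                      # proportion of non-cases cleared
    return {
        "sensitivity": round(sens, 2),
        "specificity": round(spec, 2),
        "LR+": round(sens / (1 - spec), 2),    # positive likelihood ratio
        "LR-": round((1 - sens) / spec, 2),    # negative likelihood ratio
    }

# e.g. 83 people with dementia (48 judged positive) and 200 without (22 judged positive)
print(diagnostic_accuracy(tp=48, fp=22, fn=35, tn=178))
# -> {'sensitivity': 0.58, 'specificity': 0.89, 'LR+': 5.26, 'LR-': 0.47}
```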

    6G Opportunities Arising from Internet of Things Use Cases: A Review Paper

    The race for the 6th generation of wireless networks (6G) has begun. Researchers around the world have started to explore the best solutions for the challenges that the previous generations have experienced. To provide readers with a clear map of current developments, several review papers have shared their vision and critically evaluated the state of the art. However, most of this work is based on general observations and a big-picture vision, and lacks consideration of the practical implementation challenges of Internet of Things (IoT) use cases. This review takes a novel approach: we present a sample of IoT use cases that is representative of a wide variety of IoT implementations. The chosen use cases come from the most research-active sectors that can benefit from 6G and its enabling technologies: healthcare, smart grid, transport, and Industry 4.0. Additionally, we identify some of the practical challenges and lessons learned in the implementation of these use cases. The review highlights each use case’s main requirements and how they overlap with the key drivers for the future generation of wireless networks.