
    Time-Pattern Profiling from Smart Meter Data to Detect Outliers in Energy Consumption

    Smart meters have become a core part of the Internet of Things, and their sensing network is growing globally. For example, over 15 million smart meters operate across homes and businesses in the UK. One of the main advantages of the smart meter installation is its link to a reduction in carbon emissions. Research shows that, when provided with accurate, real-time energy usage readings, consumers are more likely to turn off unneeded appliances and change other behavioural patterns around the home (e.g., lighting, thermostat adjustments). In addition, the smart meter rollout reduces the number of vehicle callouts for the collection of consumption readings from analogue meters and generally promotes renewable sources of energy supply. Capturing and mining the data from this fully maintained (and highly accurate) sensing network provides a wealth of information for utility companies and data scientists to build applications that can further support a reduction in energy usage. This research focuses on modelling trends in domestic energy consumption using density-based classifiers. The technique estimates the volume of outliers (e.g., periods of anomalously high energy consumption) within a social class grouping. To achieve this, Density-Based Spatial Clustering of Applications with Noise (DBSCAN), Ordering Points to Identify the Clustering Structure (OPTICS) and Local Outlier Factor (LOF) are used to detect unusual energy consumption within naturally occurring groups with similar characteristics. DBSCAN and OPTICS detected 53 and 208 outliers respectively, and LOF detected 218, on a dataset comprising 1,058,534 readings from 1026 homes.
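    The density-based comparison above can be sketched with scikit-learn; this is a minimal illustration on synthetic (invented) readings, not the paper's smart meter dataset or parameter settings:

```python
# Sketch: flagging anomalously high consumption with the three
# density-based methods named above, on synthetic data.
import numpy as np
from sklearn.cluster import DBSCAN, OPTICS
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(0)
# 200 typical readings for a group of similar homes (arbitrary units)...
normal = rng.normal(loc=0.5, scale=0.1, size=(200, 2))
# ...plus 5 periods of anomalously high consumption.
anomalies = rng.normal(loc=2.0, scale=0.1, size=(5, 2))
X = np.vstack([normal, anomalies])

# DBSCAN labels points not density-reachable from any core point as noise (-1).
labels_db = DBSCAN(eps=0.3, min_samples=10).fit_predict(X)
# OPTICS orders points by reachability; unclustered points are also labelled -1.
labels_op = OPTICS(min_samples=10).fit_predict(X)
# LOF flags points whose local density is much lower than their neighbours' (-1).
pred_lof = LocalOutlierFactor(n_neighbors=20).fit_predict(X)

print((labels_db == -1).sum(), (labels_op == -1).sum(), (pred_lof == -1).sum())
```

    Because each method defines density differently, the three outlier counts generally disagree, mirroring the 53/208/218 split reported above.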

    Effects of gestational age at birth on cognitive performance: a function of cognitive workload demands

    Objective: Cognitive deficits have been inconsistently described for late or moderately preterm children but are consistently found in very preterm children. This study investigates the association between cognitive workload demands of tasks and cognitive performance in relation to gestational age at birth. Methods: Data were collected as part of a prospective geographically defined whole-population study of neonatal at-risk children in Southern Bavaria. At 8;5 years, n = 1326 children (gestation range: 23–41 weeks) were assessed with the K-ABC and a Mathematics Test. Results: Cognitive scores of preterm children decreased as cognitive workload demands of tasks increased. The relationship between gestation and task workload was curvilinear and more pronounced the higher the cognitive workload: GA2 (quadratic term) on low cognitive workload: R2 = .02, p<0.001; moderate cognitive workload: R2 = .09, p<0.001; and high cognitive workload tasks: R2 = .14, p<0.001. Specifically, disproportionately lower scores were found for very (<32 weeks gestation) and moderately (32–33 weeks gestation) preterm children the higher the cognitive workload of the tasks. Early biological factors such as gestation and neonatal complications explained more of the variance in high (12.5%) compared with moderate (8.1%) and low cognitive workload tasks (1.7%). Conclusions: The cognitive workload model may help to explain variations in findings on the relationship of gestational age with cognitive performance in the literature. The findings have implications for routine cognitive follow-up, educational intervention, and basic research into neuro-plasticity and brain reorganization after preterm birth.

    The COMET Handbook: version 1.0

    The selection of appropriate outcomes is crucial when designing clinical trials in order to compare the effects of different interventions directly. For the findings to influence policy and practice, the outcomes need to be relevant and important to key stakeholders including patients and the public, health care professionals and others making decisions about health care. It is now widely acknowledged that insufficient attention has been paid to the choice of outcomes measured in clinical trials. Researchers are increasingly addressing this issue through the development and use of a core outcome set, an agreed standardised collection of outcomes which should be measured and reported, as a minimum, in all trials for a specific clinical area. Accumulating work in this area has identified the need for guidance on the development, implementation, evaluation and updating of core outcome sets. This Handbook, developed by the COMET Initiative, brings together current thinking and methodological research regarding those issues. We recommend a four-step process to develop a core outcome set. The aim is to update the contents of the Handbook as further research is identified.

    tropiTree: an NGS-based EST-SSR resource for 24 tropical tree species

    The development of genetic tools for non-model organisms has been hampered by cost, but advances in next-generation sequencing (NGS) have created new opportunities. In ecological research, this raises the prospect of developing molecular markers to simultaneously study important genetic processes such as gene flow in multiple non-model plant species within complex natural and anthropogenic landscapes. Here, we report the use of bar-coded multiplexed paired-end Illumina NGS for the de novo development of expressed sequence tag-derived simple sequence repeat (EST-SSR) markers at low cost for a range of 24 tree species. Each chosen tree species is important in complex tropical agroforestry systems where little is currently known about many genetic processes. An average of more than 5,000 EST-SSRs was identified for each of the 24 sequenced species, whereas prior to analysis 20 of the species had fewer than 100 nucleotide sequence citations. To make results available to potential users in a suitable format, we have developed an open-access, interactive online database, tropiTree (http://bioinf.hutton.ac.uk/tropiTree), which has a range of visualisation and search facilities, and which is a model for the efficient presentation and application of NGS data.

    Empirical Investigations of Reference Point Based Methods When Facing a Massively Large Number of Objectives: First Results

    EMO 2017: 9th International Conference on Evolutionary Multi-Criterion Optimization, 19-22 March 2017, Münster, Germany. This is the author accepted manuscript; the final version is available from Springer Verlag via the DOI in this record. Multi-objective optimization with more than three objectives has become one of the most active topics in evolutionary multi-objective optimization (EMO). However, most existing studies limit their experiments to at most 15 or 20 objectives, although the methods are claimed to be capable of handling as many objectives as possible. To broaden insight into the behavior of EMO methods when facing a massively large number of objectives, this paper presents some preliminary empirical investigations on several established scalable benchmark problems with 25, 50, 75 and 100 objectives. In particular, the paper focuses on the behavior of the currently pervasive reference point based EMO methods, although other methods can also be used. The experimental results demonstrate that reference point based EMO methods can be viable for problems with a massively large number of objectives, given an appropriate choice of the distance measure. In addition, sufficient population diversity should be maintained on each weight vector, or in a local niche, in order to provide enough selection pressure. To the best of our knowledge, this is the first time an EMO methodology has been considered for solving problems with a massively large number of conflicting objectives. This work was partially supported by EPSRC (Grant No. EP/J017515/1).
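    The abstract does not detail its distance measure; one common choice in reference point based EMO methods (e.g., the perpendicular-distance association used by NSGA-III) is sketched below. The function names and data are illustrative, not taken from the paper:

```python
import numpy as np

def perpendicular_distance(f, w):
    """Distance from objective vector f to the reference line through
    the origin in direction w (hypothetical helper)."""
    f, w = np.asarray(f, float), np.asarray(w, float)
    w = w / np.linalg.norm(w)          # unit direction of the line
    return np.linalg.norm(f - np.dot(f, w) * w)

def associate(F, W):
    """For each (normalized) solution in F, the index of its nearest
    reference line in W; selection then preserves diversity per line."""
    return np.array([np.argmin([perpendicular_distance(f, w) for w in W])
                     for f in F])

# Three solutions and three reference directions in a 2-objective space.
W = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
F = np.array([[0.9, 0.1], [0.1, 0.9], [0.5, 0.5]])
print(associate(F, W))  # here each solution maps to its own reference line
```

    Maintaining diversity on each reference line in this way is what supplies the per-niche selection pressure the abstract argues is needed at high objective counts.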

    New architecture for reconfigurable WDM-PON networks based on SOA gating array

    A new architecture for reconfigurable WDM-PON networks for dynamic capacity allocation is proposed and experimentally demonstrated. The architecture, based on an SOA gating array, allows a 1 µs switching time for the re-allocation of wavelength resources.

    Living risk prediction algorithm (QCOVID) for risk of hospital admission and mortality from coronavirus 19 in adults: national derivation and validation cohort study.

    OBJECTIVE: To derive and validate a risk prediction algorithm to estimate hospital admission and mortality outcomes from coronavirus disease 2019 (covid-19) in adults. DESIGN: Population based cohort study. SETTING AND PARTICIPANTS: QResearch database, comprising 1205 general practices in England with linkage to covid-19 test results, Hospital Episode Statistics, and death registry data. 6.08 million adults aged 19-100 years were included in the derivation dataset and 2.17 million in the validation dataset. The derivation and first validation cohort period was 24 January 2020 to 30 April 2020. The second temporal validation cohort covered the period 1 May 2020 to 30 June 2020. MAIN OUTCOME MEASURES: The primary outcome was time to death from covid-19, defined as death due to confirmed or suspected covid-19 as per the death certification or death occurring in a person with confirmed severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection in the period 24 January to 30 April 2020. The secondary outcome was time to hospital admission with confirmed SARS-CoV-2 infection. Models were fitted in the derivation cohort to derive risk equations using a range of predictor variables. Performance, including measures of discrimination and calibration, was evaluated in each validation time period. RESULTS: 4384 deaths from covid-19 occurred in the derivation cohort during follow-up and 1722 in the first validation cohort period and 621 in the second validation cohort period. The final risk algorithms included age, ethnicity, deprivation, body mass index, and a range of comorbidities. The algorithm had good calibration in the first validation cohort. For deaths from covid-19 in men, it explained 73.1% (95% confidence interval 71.9% to 74.3%) of the variation in time to death (R2); the D statistic was 3.37 (95% confidence interval 3.27 to 3.47), and Harrell's C was 0.928 (0.919 to 0.938). 
Similar results were obtained for women, for both outcomes, and in both time periods. In the top 5% of patients with the highest predicted risks of death, the sensitivity for identifying deaths within 97 days was 75.7%. People in the top 20% of predicted risk of death accounted for 94% of all deaths from covid-19. CONCLUSION: The QCOVID population based risk algorithm performed well, showing very high levels of discrimination for deaths and hospital admissions due to covid-19. The absolute risks presented, however, will change over time in line with the prevailing SARS-CoV-2 infection rate and the extent of social distancing measures in place, so they should be interpreted with caution. The model can, however, be recalibrated for different time periods and has the potential to be dynamically updated as the pandemic evolves.
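As a toy illustration of the Harrell's C statistic reported above (this is not the paper's implementation, and the data are invented): C is the fraction of comparable patient pairs in which the patient who dies earlier was assigned the higher predicted risk, so 0.5 is chance-level discrimination and 1.0 is perfect ranking.

```python
def harrells_c(times, events, risks):
    """Toy Harrell's C: among comparable pairs (the earlier time is an
    observed death, event=1), the fraction where the earlier death had
    the higher predicted risk; ties in risk count one half."""
    concordant = tied = comparable = 0
    for i in range(len(times)):
        for j in range(len(times)):
            # A pair is comparable only if subject i is observed to die
            # before subject j's (possibly censored) time.
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1
                elif risks[i] == risks[j]:
                    tied += 1
    return (concordant + 0.5 * tied) / comparable

# Perfect ranking: earlier deaths always carry the higher predicted risk.
print(harrells_c([1, 2, 3, 4], [1, 1, 1, 1], [4, 3, 2, 1]))  # prints 1.0
```

On this scale the reported C of 0.928 indicates that for roughly 93% of comparable pairs the model ranked the earlier death as higher risk.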

    Increase in Non-AIDS Related Conditions as Causes of Death among HIV-Infected Individuals in the HAART Era in Brazil

    Background. In 1996, Brazil became the first developing country to provide free and universal access to HAART. Although a decrease in overall mortality has been documented, there are no published data on the impact of HAART on causes of death among HIV-infected individuals in Brazil. We assessed temporal trends of mortality due to cardiovascular diseases (CVD), diabetes mellitus (DM) and other conditions generally not associated with HIV infection among persons with and without HIV infection in Brazil between 1999 and 2004. Methodology/Principal Findings. Odds ratios were used to compare causes of death in individuals who had HIV/AIDS listed on any field of the death certificate with those who did not. Logistic regression models were fitted with generalized estimating equations to account for spatial correlation; co-variables were added to the models to control for potential confounding. Of 5,856,056 deaths reported in Brazil between 1999 and 2004, 67,249 (1.15%) had HIV/AIDS listed on the death certificate, and non-HIV-related conditions were listed on 16.3% of those certificates in 1999, increasing to 24.1% by 2004 (p<0.001). The adjusted average yearly increases were 8% and 0.8% for CVD (p<0.001), and 12% and 2.8% for DM (p<0.001), for those who did and did not have HIV/AIDS listed on the death certificate, respectively. Similar results were found for these conditions as underlying causes of death. Conclusions/Significance. In Brazil between 1999 and 2004, conditions usually considered not to be related to HIV infection became increasingly more common as reported causes of death among individuals who had HIV/AIDS listed on the death certificate than among those who did not. This observation has important programmatic implications for developing countries that are scaling up access to antiretroviral therapy. © 2008 Pacheco et al.