14 research outputs found

    An integrated flexibility optimizer for economic gains of local energy communities — A case study for a University campus

    No full text
    With a capacity-based network tariff structure, consumers are encouraged to reduce their connection capacity to avoid higher costs. However, overloading beyond the administrative grid connection capacity limit results in an increased connection capacity, so prosumers have to pay a higher electricity bill for the rest of the year. It is therefore important to optimize the energy generation and consumption profiles of local energy communities (LECs) while respecting the comfort level of occupants. This work aims to reduce overloading of the grid connection and increase the utilization of local renewable energy sources (RES), thereby avoiding a year-long penalty triggered by occasional overloading during peak hours, even if it occurs only once a year. The present work proposes a novel data-driven flexibility optimizer for day-ahead scheduling of energy profiles in LECs, considering photovoltaic (PV) generation, heat pumps (HPs), and cooling loads. The methodology has been developed to explore the flexibility potential of a university campus network, which includes both electrical and heating/cooling systems, in an integrated way. A two-layer optimization strategy is developed to safeguard the occupants' comfort level. Simulations were performed for two complete months, covering winter and summer scenarios. A peak demand reduction of 16% was observed, with negligible differences in energy usage between the proposed and baseline cases. Two types of flexibility indicators are estimated to give deeper insight into the performance. Economic gains of 9% and 16% are estimated, depending on the type and voltage level of the connection.
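
    The abstract describes day-ahead scheduling that caps peak grid import while preserving occupant comfort. The snippet below is a minimal sketch of that idea as a single linear program, not the paper's two-layer data-driven optimizer; all load, PV, and heat-pump profiles and limits are illustrative assumptions.

```python
# Minimal day-ahead peak-shaving sketch (not the paper's two-layer optimizer).
# All profiles and limits below are illustrative assumptions.
import numpy as np
from scipy.optimize import linprog

T = 24                                                   # hourly day-ahead horizon
base = 40 + 10 * np.sin(np.linspace(0, 2 * np.pi, T))    # fixed campus load [kW]
pv = np.clip(30 * np.sin(np.linspace(-np.pi / 2, 3 * np.pi / 2, T)), 0, None)  # PV [kW]
heat_demand = 120.0                                      # daily energy the heat pumps must deliver [kWh]
hp_max = 15.0                                            # per-hour heat-pump electrical limit [kW]

# Decision variables: x = [hp_0 .. hp_23, P], where P is the peak grid import.
c = np.zeros(T + 1)
c[-1] = 1.0                                              # minimize the peak P

# Peak constraints: base[t] + hp[t] - pv[t] <= P for every hour t
A_ub = np.zeros((T, T + 1))
A_ub[np.arange(T), np.arange(T)] = 1.0
A_ub[:, -1] = -1.0
b_ub = pv - base

# Comfort proxy: the heat pumps must deliver the full daily demand
A_eq = np.zeros((1, T + 1))
A_eq[0, :T] = 1.0
b_eq = [heat_demand]

bounds = [(0, hp_max)] * T + [(0, None)]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print("scheduled peak import [kW]:", round(res.x[-1], 1))
```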

    Voxelwise statistical methods to localize practice variation in brain tumor surgery

    Get PDF
    Purpose: During resections of brain tumors, neurosurgeons have to weigh the risk of residual tumor against damage to brain functions. Different perspectives on these risks result in practice variation. We present statistical methods to localize differences in extent of resection between institutions, which should make it possible to reveal brain regions affected by such practice variation. Methods: Synthetic data were generated by simulating spheres for brain, tumors, and resection cavities, and an effect region in which the likelihood of surgical avoidance could be varied between institutions. Three statistical methods were investigated: a non-parametric permutation-based approach, Fisher’s exact test, and a full Bayesian Markov chain Monte Carlo (MCMC) model. For all three methods, the false discovery rate (FDR) was determined as a function of the cut-off value for the q-value or the highest density interval, and receiver operating characteristic and precision-recall curves were created. Sensitivity to variations in the parameters of the synthetic model was investigated. Finally, all three methods were applied to retrospectively collected data of 77 brain tumor resections in two academic hospitals. Results: Fisher’s method provided an accurate estimate of the observed FDR in the synthetic data, whereas the permutation approach was too liberal and underestimated the FDR. AUC values were similar for the Fisher and Bayes methods, and superior to the permutation approach. Fisher’s method deteriorated and became too liberal for reduced tumor size, a smaller effect region, a lower overall extent of resection, fewer patients per cohort, and a smaller discrepancy in surgical avoidance probabilities between the surgical practices. In the retrospective patient data, all three methods identified a similar effect region, with a lower estimated FDR for Fisher’s method than for the permutation method. Conclusions: Differences in surgical practice may be detected using voxel statistics. Fisher’s test provides a fast method to localize differences but could underestimate the true FDR. Bayesian MCMC is more flexible and easily extendable, and leads to similar results, but at increased computational cost.
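
    As a rough illustration of the Fisher-based approach described above, the sketch below runs a per-voxel 2x2 Fisher's exact test between two simulated cohorts and converts the p-values to Benjamini-Hochberg q-values. It is not the authors' pipeline; cohort sizes, voxel counts, and avoidance probabilities are made-up assumptions.

```python
# Minimal sketch of a voxelwise Fisher's exact test with BH q-values.
# The cohort sizes, voxel grid, and resection maps are simulated assumptions.
import numpy as np
from scipy.stats import fisher_exact
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
n_a, n_b, n_vox = 40, 37, 500            # patients per institution, voxels in the tumor region
p_resect_a = np.full(n_vox, 0.7)         # institution A resects most voxels
p_resect_b = np.full(n_vox, 0.7)
p_resect_b[:50] = 0.3                    # institution B avoids the first 50 voxels ("effect region")

resected_a = rng.random((n_a, n_vox)) < p_resect_a
resected_b = rng.random((n_b, n_vox)) < p_resect_b

p_values = np.empty(n_vox)
for v in range(n_vox):
    a_res, b_res = resected_a[:, v].sum(), resected_b[:, v].sum()
    table = [[a_res, n_a - a_res],       # 2x2 table: institution x (resected / spared)
             [b_res, n_b - b_res]]
    p_values[v] = fisher_exact(table)[1]

# Benjamini-Hochberg q-values; thresholding them controls the FDR
reject, q_values, _, _ = multipletests(p_values, alpha=0.1, method="fdr_bh")
print("voxels flagged:", reject.sum(), "of which in the true effect region:", reject[:50].sum())
```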

    Earliest radiological progression in glioblastoma by multidisciplinary consensus review

    Get PDF
    Background: Detection of glioblastoma progression is important for clinical decision-making on cessation or initiation of therapy, for enrollment in clinical trials, and for response measurement in time and location. The RANO criteria are considered the standard for the timing of progression. To evaluate local treatment, we aim to find the most accurate progression location. We determined the differences in progression-free survival (PFS) and in tumor volumes at progression (Vprog) for three definitions of progression. Methods: In a consecutive cohort of 73 patients with newly diagnosed glioblastoma between 1/1/2012 and 31/12/2013, progression was established according to three definitions. We determined (1) earliest radiological progression (ERP) by retrospective multidisciplinary consensus review using all available imaging and follow-up, (2) clinical practice progression (CPP) from multidisciplinary tumor board conclusions, and (3) progression by the RANO criteria. Results: ERP was established in 63 patients (86%), CPP in 64 (88%), and RANO progression in 42 (58%). Of the 63 patients who had died, 37 (59%) did so with prior RANO progression, compared to 57 (90%) for both ERP and CPP. The median overall survival was 15.3 months. The median PFS was 8.8 months for ERP, 9.5 months for CPP, and 11.8 months for RANO. PFS by ERP was shorter than by CPP (HR 0.57, 95% CI 0.38–0.84, p = 0.004) and by RANO (HR 0.29, 95% CI 0.19–0.43, p < 0.001). Vprog was significantly smaller for ERP (median 8.8 mL) than for CPP (17 mL) and RANO (22 mL). Conclusion: PFS and Vprog vary considerably between progression definitions. Earliest radiological progression by retrospective consensus review should be considered to accurately localize progression and to address confounding by lead time bias in clinical trial enrollment.
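
    To make the PFS comparison concrete, here is a minimal sketch (assuming the lifelines package and purely synthetic durations) that estimates median PFS under two progression definitions with Kaplan-Meier fits; it does not reproduce the paper's paired hazard-ratio analysis.

```python
# Illustrative Kaplan-Meier estimate of median PFS under two progression
# definitions; the durations below are synthetic, not the study data.
import numpy as np
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(1)
n = 73
pfs_erp = rng.exponential(scale=12.7, size=n)             # months to earliest radiological progression
pfs_rano = pfs_erp + rng.exponential(scale=4.0, size=n)   # RANO progression detected later
observed = rng.random(n) < 0.9                            # ~10% of patients censored

for label, durations in [("ERP", pfs_erp), ("RANO", pfs_rano)]:
    kmf = KaplanMeierFitter()
    kmf.fit(durations, event_observed=observed, label=label)
    print(label, "median PFS [months]:", round(kmf.median_survival_time_, 1))
```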

    Accurate MR Image Registration to Anatomical Reference Space for Diffuse Glioma

    No full text
    To summarize the distribution of glioma location within a patient population, registration of individual MR images to anatomical reference space is required. In this study, we quantified the accuracy of MR image registration to anatomical reference space with linear and non-linear transformations, using estimated tumor targets of glioblastoma and lower-grade glioma and anatomical landmarks at pre- and post-operative time points, for six commonly used registration packages (FSL, SPM5, DARTEL, ANTs, Elastix, and NiftyReg). Routine clinical pre- and post-operative, post-contrast T1-weighted images of 20 patients with glioblastoma and 20 with lower-grade glioma were collected. The 2009a Montreal Neurological Institute brain template was used as anatomical reference space. Tumors were manually segmented in patient space, and the corresponding healthy tissue was delineated as a target volume in the anatomical reference space. Accuracy of the tumor alignment was quantified using the Dice score and the Hausdorff distance. To measure the accuracy of general brain alignment, anatomical landmarks were placed in patient space and in anatomical reference space, and the landmark distance after registration was quantified. Lower-grade gliomas were registered more accurately than glioblastomas. Registration accuracy for pre- and post-operative MR images did not differ. SPM5 and DARTEL registered tumors most accurately, and FSL least accurately. Non-linear transformations resulted in more accurate general brain alignment than linear transformations, but tumor alignment was similar between linear and non-linear transformations. We conclude that linear transformation suffices to summarize glioma locations in anatomical reference space.
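
    For reference, the two overlap metrics named above can be computed as in the sketch below, which uses toy 3D binary masks rather than the study's registered tumor volumes; the grid size and sphere offsets are arbitrary assumptions.

```python
# Minimal sketch of the Dice score and Hausdorff distance on toy 3D masks.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice_score(a, b):
    """2*|A∩B| / (|A|+|B|) for boolean arrays."""
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hausdorff_distance(a, b):
    """Symmetric Hausdorff distance between the voxel coordinates of two masks."""
    pa, pb = np.argwhere(a), np.argwhere(b)
    return max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])

# Two overlapping spheres on a 50^3 grid stand in for target and registered tumor.
grid = np.stack(np.meshgrid(*[np.arange(50)] * 3, indexing="ij"))
target = ((grid - 25) ** 2).sum(axis=0) < 10 ** 2
registered = ((grid - np.array([28, 25, 25]).reshape(3, 1, 1, 1)) ** 2).sum(axis=0) < 10 ** 2

print("Dice:", round(dice_score(target, registered), 3))
print("Hausdorff [voxels]:", round(hausdorff_distance(target, registered), 1))
```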

    INTEGRAL: ICT-platform based Distributed Control in electricity grids with a large share of Distributed Energy Resources and Renewable Energy Sources

    No full text
    The European project INTEGRAL aims to build and demonstrate an industry-quality reference solution for DER aggregation-level control and coordination, based on commonly available ICT components, standards, and platforms. To achieve this, the Integrated ICT-platform based Distributed Control (IIDC) is introduced. The project also includes three field test site installations, in the Netherlands, Spain, and France, covering normal, critical, and emergency grid conditions.

    Robust Deep Learning-based Segmentation of Glioblastoma on Routine Clinical MRI Scans Using Sparsified Training

    No full text
    Purpose: To improve the robustness of deep learning-based glioblastoma segmentation in a clinical setting with sparsified datasets. Materials and Methods: In this retrospective study, preoperative T1-weighted, T2-weighted, T2-weighted fluid-attenuated inversion recovery, and postcontrast T1-weighted MRI from 117 patients (median age, 64 years; interquartile range [IQR], 55-73 years; 76 men) included within the Multimodal Brain Tumor Image Segmentation (BraTS) dataset, plus a clinical dataset (2012-2013) with similar imaging modalities of 634 patients (median age, 59 years; IQR, 49-69 years; 382 men) with glioblastoma from six hospitals, were used. Expert tumor delineations on the postcontrast images were available, but for various clinical datasets, one or more sequences were missing. The convolutional neural network, DeepMedic, was trained on combinations of complete and incomplete data with and without site-specific data. Sparsified training was introduced, which randomly simulated missing sequences during training. The effects of sparsified training and center-specific training were tested using Wilcoxon signed rank tests for paired measurements. Results: A model trained exclusively on BraTS data reached a median Dice score of 0.81 for segmentation on BraTS test data but only 0.49 on the clinical data. Sparsified training improved performance (adjusted P < .05), even when excluding test data with missing sequences, to a median Dice score of 0.67. Inclusion of site-specific data during sparsified training led to higher model performance, with Dice scores greater than 0.8, on par with a model based on all complete and incomplete data. For the model using BraTS and clinical training data, inclusion of site-specific data or sparsified training was of no consequence. Conclusion: Accurate and automatic segmentation of glioblastoma on clinical scans is feasible using a model based on large, heterogeneous, and partially incomplete datasets. Sparsified training may boost the performance of a smaller model based on public and site-specific data.
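
    The core idea of sparsified training is to randomly blank out input sequences during training so the network learns to cope with missing modalities. The sketch below shows one way such an augmentation could look; the channel order, drop probability, and the rule of always keeping post-contrast T1 are assumptions for illustration, not the authors' exact scheme.

```python
# Minimal sketch of "sparsified training" as a channel-dropout augmentation.
import numpy as np

SEQUENCES = ("T1", "T1c", "T2", "FLAIR")     # channel order is an assumption

def sparsify(volume, keep="T1c", p_drop=0.5, rng=np.random.default_rng()):
    """Zero each channel of a (C, D, H, W) volume with probability p_drop,
    always keeping the channel named by `keep`."""
    out = volume.copy()
    for c, name in enumerate(SEQUENCES):
        if name != keep and rng.random() < p_drop:
            out[c] = 0.0                     # the network sees this sequence as missing
    return out

# Example: apply the augmentation to one random training volume.
x = np.random.rand(len(SEQUENCES), 32, 64, 64).astype(np.float32)
x_sparse = sparsify(x)
print("channels kept:", [s for c, s in enumerate(SEQUENCES) if x_sparse[c].any()])
```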

    Smart4RES: Next generation solutions for renewable energy forecasting and applications with focus on distribution grids

    No full text
    This paper presents the solutions for renewable energy forecasting proposed by the Horizon 2020 project Smart4RES. The ambition of the project is twofold: (1) substantially increase the performance of short-term forecasting models for Renewable Energy Sources (RES) production and the associated weather forecasting, and (2) optimize decisions subject to RES uncertainty in power systems and electricity markets. Developments are based on the latest advances in meteorology and original uses of data science (combination of multiple data sources, data-driven approaches for trading and grid management). Finally, solutions such as flexibility forecasting of distributed resources and data markets are oriented towards value for power system stakeholders. Although the project covers a broad scope, in this paper we focus on a selection of use cases that concern the integration of renewables in distribution grids.

    Highlight results of the Smart4RES project on weather modelling and forecasting dedicated to renewable energy applications

    No full text
    In this presentation we detail highlight results obtained from the research work within the European Horizon 2020 project Smart4RES (http://www.smart4res.eu). The project, which started in 2019 and runs until 2023, aims at better modelling and forecasting of the weather variables necessary to optimise the integration of weather-dependent renewable energy (RES) production (i.e. wind, solar, run-of-the-river hydro) into power systems and electricity markets. Smart4RES gathers experts from several disciplines, from meteorology and renewable generation to market and grid integration. It aims to contribute to the pathway towards energy systems with very high RES penetration by 2030 and beyond, through thematic objectives including: improvement of weather and RES forecasting; streamlined extraction of optimal value through new forecasting products, data marketplaces, and novel business models; new data-driven optimization and decision-aid tools for market and grid management applications; and validation of new models in living labs and assessment of forecasting value versus costly remedies to hedge uncertainties (e.g. storage). In this presentation we will focus on our results on models that improve the forecasting of weather variables, with a focus on extreme situations and on innovative measuring settings (e.g. a network of sky cameras). Results will also be presented on the development of a seamless approach able to couple outputs from different ensemble numerical weather prediction (NWP) models with different temporal resolutions. Advances on the contribution of ultra-high-resolution NWP based on Large Eddy Simulation will be presented, with evaluation results on real case studies such as the island of Rhodes in Greece. When it comes to forecasting the power output of RES plants, mainly wind and solar, the focus is on improving predictability using multiple sources of data. The proposed modelling approaches aim to efficiently combine high-dimensional input (various types of satellite images, numerical weather predictions, spatially distributed measurements, etc.). A priority has been to propose models that can generate probabilistic forecasts for multiple time frames in a seamless way. Thus, the objective is not only to improve accuracy and uncertainty estimation, but also to simplify complex forecasting modelling chains for applications that use forecasts at different time frames (e.g. a virtual power plant (VPP), with or without storage, that participates in multiple markets). Our results show that the proposed seamless models achieve these performance objectives. Results will also be presented on how these approaches can be extended to aggregations of RES plants, which is relevant for forecasting VPP production. How to cite: Kariniotakis, G., Camal, S., and the Smart4RES team: Highlight results of the Smart4RES project on weather modelling and forecasting dedicated to renewable energy applications, EGU General Assembly 2022, Vienna, Austria, 23–27 May 2022, EGU22-12923, 2022.
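
    As a generic illustration of the probabilistic forecasting idea mentioned above, the sketch below fits quantile-regression models (gradient boosting with a pinball loss) to synthetic wind-speed/power data and produces a P10/P50/P90 forecast; it is not a Smart4RES model, and all data and parameters are assumptions.

```python
# Illustrative probabilistic wind-power forecast via quantile regression.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(42)
wind_speed = rng.uniform(0, 25, size=2000)                                  # NWP wind speed [m/s]
power = np.clip((wind_speed / 12) ** 3, 0, 1) + rng.normal(0, 0.05, 2000)   # normalized output
X, y = wind_speed.reshape(-1, 1), np.clip(power, 0, 1)

quantiles = [0.1, 0.5, 0.9]
forecasts = {}
for q in quantiles:
    model = GradientBoostingRegressor(loss="quantile", alpha=q, n_estimators=200)
    model.fit(X, y)
    forecasts[q] = model.predict(np.array([[8.0]]))[0]                      # forecast at 8 m/s

print({f"P{int(q * 100)}": round(v, 2) for q, v in forecasts.items()})
```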