
    A Weighted Markov Chain Application to Predict Consumer Price Index When Facing Pandemic Covid-19

    Covid-19 is an infectious disease caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). The pandemic reduced people's purchasing power, while economic growth is an indicator of the success of a country's economic development. The resulting inflation and deflation are commonly measured through the Consumer Price Index. This study aimed to apply a weighted Markov chain method to predict the future consumer price index. The prediction results were satisfactory, with chances of 84.34% and, for the 12th month, 78.54%.
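The core of a weighted Markov chain forecast can be sketched as follows. The CPI state definitions, the autocorrelation-based lag weights, and the toy data are illustrative assumptions; the paper's exact weighting scheme is not given in the abstract.

```python
import numpy as np

def transition_matrix(states, n_states):
    """Estimate a 1-step transition matrix from a state sequence."""
    P = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        P[a, b] += 1
    row_sums = P.sum(axis=1, keepdims=True)
    return np.divide(P, row_sums, out=np.zeros_like(P), where=row_sums > 0)

def weighted_markov_predict(states, n_states, max_lag=3):
    """Predict the next-state distribution as a weighted sum of k-step
    forecasts launched from the k-th most recent state.  The lag weights
    here come from the absolute autocorrelation of the state sequence
    (one common choice, assumed for illustration)."""
    s = np.asarray(states)
    P = transition_matrix(s, n_states)
    acf = np.array([abs(np.corrcoef(s[:-k], s[k:])[0, 1])
                    for k in range(1, max_lag + 1)])
    w = acf / acf.sum()
    dist = np.zeros(n_states)
    for k in range(1, max_lag + 1):
        dist += w[k - 1] * np.linalg.matrix_power(P, k)[s[-k]]
    return dist

# Toy CPI states: 0 = deflation, 1 = stable, 2 = inflation
cpi_states = [1, 2, 2, 1, 0, 1, 2, 1, 1, 2, 0, 1]
print(weighted_markov_predict(cpi_states, 3))
```

The predicted vector is a proper probability distribution over the three CPI states; the most likely state is its argmax.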

    Data-driven Feature Description of Heat Wave Effect on Distribution System

    In recent years, the effects of climate change have become increasingly evident. In particular, urban regions, where underground cables are more common, are experiencing the strong effects of extremely high temperatures combined with low humidity. This phenomenon, known in the literature as a “heat wave”, should be properly evaluated to highlight its effect on system operation and planning, as well as to schedule appropriate maintenance interventions. This paper presents a three-step procedure that characterizes the heat wave phenomenon in terms of its “most significant features” and, on this basis, classifies days as “critical” or “non-critical”. The weather conditions of the city of Turin (Italy) and the faults that have affected the local network over the last 10 years are considered. This approach will help system operators integrate weather information into distribution system operation and planning procedures.
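The final classification step might look like the following sketch, which flags runs of hot, dry days as "critical". The temperature and humidity thresholds, the run length, and the toy data are placeholders; the paper derives its significant features and criteria from the Turin weather and fault records.

```python
import numpy as np

def flag_critical_days(t_max, rh_min, t_thresh=35.0, rh_thresh=30.0,
                       min_run=3):
    """Flag 'critical' days: runs of at least `min_run` consecutive days
    with high maximum temperature and low relative humidity.  Thresholds
    are illustrative assumptions, not the paper's data-driven values."""
    hot_dry = (np.asarray(t_max) >= t_thresh) & (np.asarray(rh_min) <= rh_thresh)
    critical = np.zeros_like(hot_dry)
    run = 0
    for i, h in enumerate(hot_dry):
        run = run + 1 if h else 0
        if run >= min_run:
            # Mark the whole qualifying run, not just its last day
            critical[i - min_run + 1:i + 1] = True
    return critical

# Toy daily series: one 3-day heat wave, one 2-day spell that is too short
t_max = [31, 36, 37, 38, 33, 36, 37, 30]
rh_min = [45, 28, 25, 27, 40, 26, 24, 50]
print(flag_critical_days(t_max, rh_min))
```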

    Quantifying Forecast Uncertainty in the Energy Domain

    This dissertation focuses on quantifying forecast uncertainty in the energy domain, especially for the electricity and natural gas industries. Accurate forecasts help the energy industry minimize production costs. However, inaccurate weather forecasts, unusual human behavior, sudden changes in economic conditions, unpredictable availability of renewable sources (wind and solar), etc., introduce uncertainties into the energy demand-supply chain. In the current smart grid era, total electricity demand from non-renewable sources is influenced by the uncertainty of the renewable sources. Thus, quantifying forecast uncertainty has become important for improving the quality of forecasts and decision making. In the natural gas industry, the task of gas controllers is to guide the hourly natural gas flow in such a way that it remains within certain daily maximum and minimum flow limits to avoid penalties. Due to the inherent uncertainties in natural gas forecasts, setting such limits a day or more in advance is difficult. Probabilistic forecasts (cumulative distribution functions), which quantify forecast uncertainty, are a useful tool to guide gas controllers in making such difficult decisions. Three methods (parametric, semi-parametric, and non-parametric) are presented in this dissertation to generate 168-hour-horizon probabilistic forecasts for two real utilities (electricity and natural gas) in the US. Probabilistic forecasting is used as a tool to solve a real-life problem in the natural gas industry. A benchmark was created based on the existing solution, which assumes that the forecast error is normally distributed. Two new probabilistic forecasting methods are implemented in this work without the normality assumption. There is no single widely accepted technique for evaluating probabilistic forecasts, which is one reason for the lack of interest in using them.
Existing scoring rules are complicated, dataset dependent, and place less emphasis on reliability (how well the empirical distribution matches the observed distribution) than on sharpness (the distance between quantiles of a CDF). A graphical way to evaluate probabilistic forecasts, along with two new scoring rules, is offered in this work. The non-parametric and semi-parametric probabilistic forecasting methods outperformed the benchmark method on unusual days (days that are difficult to forecast) as well as on other days.
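As a concrete example of scoring probabilistic forecasts, the pinball (quantile) loss below is one widely used rule; it is not one of the dissertation's two new scoring rules, and the demand values and forecast quantiles are toy numbers.

```python
import numpy as np

def pinball_loss(y_true, q_forecasts, quantiles):
    """Average pinball (quantile) loss over all observations and
    quantile levels; lower is better.  q_forecasts[i, j] is the forecast
    of quantile level quantiles[j] for observation i."""
    y = np.asarray(y_true, dtype=float)[:, None]
    q = np.asarray(q_forecasts, dtype=float)
    tau = np.asarray(quantiles, dtype=float)[None, :]
    diff = y - q
    # Under-forecasts are penalized by tau, over-forecasts by (1 - tau)
    return np.mean(np.maximum(tau * diff, (tau - 1) * diff))

# Toy example: two hourly demand observations, three forecast quantiles
quantiles = [0.1, 0.5, 0.9]
obs = [100.0, 120.0]
qf = [[90.0, 101.0, 115.0],
      [105.0, 118.0, 130.0]]
print(pinball_loss(obs, qf, quantiles))   # prints about 1.0833
```

Averaging this score over all 168 horizon hours gives a single number for comparing probabilistic forecasting methods.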

    Uncertainty quantification for an electric motor inverse problem - tackling the model discrepancy challenge

    In the context of complex applications in the engineering sciences, the solution of identification problems still poses a fundamental challenge. In terms of Uncertainty Quantification (UQ), the identification problem can be stated as a separation task for structural model uncertainty and parameter uncertainty. This thesis provides new insights and methods to tackle this challenge and demonstrates these developments on an industrial benchmark use case combining simulation and real-world measurement data. While significant progress has been made in the development of methods for model parameter inference, most of these methods still operate under the assumption of a perfect model. For a full, unbiased quantification of uncertainties in inverse problems, it is crucial to consider all sources of uncertainty. The present work develops methods for the inference of deterministic and aleatoric model parameters from noisy measurement data, with explicit consideration of model discrepancy and additional quantification of the associated uncertainties using a Bayesian approach. A further important ingredient is surrogate modeling with Polynomial Chaos Expansion (PCE), which enables sampling from Bayesian posterior distributions with complex simulation models. On this basis, a novel identification strategy for the separation of different sources of uncertainty is presented. The discrepancy is approximated by orthogonal functions with iterative determination of the optimal model complexity, mitigating the identifiability problems inherent to this task. The model discrepancy quantification is complemented by studies that statistically approximate the numerical approximation error. Additionally, strategies for approximating aleatoric parameter distributions via hierarchical surrogate-based sampling are developed. The proposed method, based on Approximate Bayesian Computation (ABC) with summary statistics, estimates the posterior in a computationally efficient way, in particular for large data sets.
Furthermore, the combination with divergence-based subset selection provides a novel methodology for UQ in stochastic inverse problems, inferring both model discrepancy and aleatoric parameter distributions. Detailed analysis in numerical experiments and successful application to the challenging industrial benchmark problem -- an electric motor test bench -- validate the proposed methods.
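The core accept/reject step of ABC with summary statistics can be sketched as follows. The toy inverse problem, prior, summaries, and tolerance are illustrative assumptions, and the sketch omits the PCE surrogate and divergence-based subset selection described above.

```python
import numpy as np

rng = np.random.default_rng(0)

def abc_rejection(observed, simulate, prior_sample, summary,
                  n_draws=5000, eps=0.5):
    """Plain ABC rejection sampling: keep prior draws whose simulated
    summary statistics fall within eps of the observed summaries."""
    s_obs = summary(observed)
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample()
        s_sim = summary(simulate(theta))
        if np.linalg.norm(s_sim - s_obs) < eps:
            accepted.append(theta)
    return np.array(accepted)

# Toy inverse problem: infer the mean of noisy data from two summaries
data = rng.normal(2.0, 1.0, size=200)
post = abc_rejection(
    observed=data,
    simulate=lambda th: rng.normal(th, 1.0, size=200),
    prior_sample=lambda: rng.uniform(-5, 5),
    summary=lambda x: np.array([np.mean(x), np.std(x)]),
)
print(post.mean(), len(post))   # posterior mean near 2.0
```

In the thesis the expensive simulator is replaced by a PCE surrogate, which is what makes such sampling affordable for the electric motor model.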

    Fusion of Data from Heterogeneous Sensors with Distributed Fields of View and Situation Evaluation for Advanced Driver Assistance Systems

    In order to develop a driver assistance system for pedestrian protection, pedestrians in the environment of a truck are detected by radars and a camera and tracked across distributed fields of view using a Joint Integrated Probabilistic Data Association filter. A robust approach for predicting the system vehicle's trajectory is presented. It serves the computation of a probabilistic collision risk based on reachable sets, where different sources of uncertainty are taken into account.
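A Monte Carlo version of such a reachable-set collision risk can be sketched as follows. The Gaussian trajectory uncertainty, the disc-shaped pedestrian reachable sets, and all numbers are simplifying assumptions for illustration, not the paper's actual model.

```python
import numpy as np

def collision_probability(mean_traj, cov, reachable_discs,
                          n_samples=10000, rng=None):
    """Sample perturbed ego trajectories from a Gaussian around the
    predicted mean and count how often any time step lands inside the
    pedestrian's reachable set (modeled here as growing discs)."""
    rng = rng or np.random.default_rng(0)
    hits = 0
    for _ in range(n_samples):
        traj = mean_traj + rng.multivariate_normal(
            np.zeros(2), cov, size=len(mean_traj))
        for (cx, cy, r), p in zip(reachable_discs, traj):
            if (p[0] - cx) ** 2 + (p[1] - cy) ** 2 <= r * r:
                hits += 1
                break   # one collision per sampled trajectory
    return hits / n_samples

# Ego vehicle moving along x; pedestrian reachable discs grow over time
mean_traj = np.array([[0., 0.], [1., 0.], [2., 0.], [3., 0.]])
discs = [(2.0, 0.6, 0.2), (2.0, 0.5, 0.4), (2.0, 0.4, 0.6), (2.0, 0.3, 0.8)]
cov = np.diag([0.05, 0.05])
print(collision_probability(mean_traj, cov, discs))
```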

    Advanced forecasting methods for renewable generation and loads in modern power systems

    The PhD Thesis deals with the problem of forecasting in power systems, a broad topic that today covers many needs and is universally acknowledged to require further deep research effort. After a brief discussion of the classification of forecasting systems and of the methods currently available in the literature for forecasting electrical variables, stressing the pros and cons of each approach, the PhD Thesis provides four contributions to the state of the art on forecasting in power systems where the literature is somewhat weak. The first contribution is a Bayesian probabilistic method to forecast photovoltaic (PV) power in short-term scenarios. Parameters of the predictive distributions are estimated by means of an exogenous linear regression model and through Bayesian inference from past observations. The second contribution is a probabilistic competitive ensemble method, again for forecasting PV power in short-term scenarios. The idea is to improve the quality of the forecasts obtained from several individual probabilistic predictors by combining them in a probabilistic competitive approach based on a linear pooling of predictive cumulative distribution functions. A multi-objective optimization method is proposed to guarantee high sharpness and reliability of the predictive distribution. The third contribution is the development of a deterministic industrial load forecasting method suitable for short-term scenarios, at both aggregated and single-load levels, and for both active and reactive powers. The method is based on multiple linear regression and support vector regression models, selected by means of 10-fold cross-validation or lasso analysis. The fourth contribution provides advanced probability density functions (PDFs) for the statistical characterization of Extreme Wind Speeds (EWS).
In particular, the PDFs proposed in the PhD Thesis are an Inverse Burr distribution and a mixture of Inverse Burr and Inverse Weibull distributions. The mixture increases the versatility of the tool, although it also increases the number of parameters to be estimated. This complicates the parameter estimation process, since traditional techniques such as maximum likelihood estimation suffer from convergence problems. Therefore, an expectation-maximization procedure is specifically developed for the parameter estimation. All of the contributions presented in the PhD Thesis are tested on actual data and compared to state-of-the-art benchmarks to assess the suitability of each proposal.
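The expectation-maximization pattern used for such mixtures can be sketched as follows, with a two-component exponential mixture standing in for the Inverse Burr and Inverse Weibull pair (whose M-step has no closed form, which is exactly why a dedicated EM procedure is developed in the thesis). The data are synthetic.

```python
import numpy as np

def exp_pdf(x, lam):
    return lam * np.exp(-lam * x)

def em_two_exponentials(x, n_iter=200):
    """EM for a two-component exponential mixture: E-step computes the
    posterior membership probabilities (responsibilities), M-step
    updates the weights and rates in closed form."""
    lam = np.array([1.0 / np.quantile(x, 0.25), 1.0 / np.quantile(x, 0.75)])
    w = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibilities of each component for each point
        dens = np.stack([w[k] * exp_pdf(x, lam[k]) for k in range(2)])
        r = dens / dens.sum(axis=0)
        # M-step: weights are mean responsibilities; rates are
        # responsibility-weighted inverse means
        w = r.mean(axis=1)
        lam = r.sum(axis=1) / (r * x).sum(axis=1)
    return w, lam

rng = np.random.default_rng(1)
x = np.concatenate([rng.exponential(1.0, 300), rng.exponential(5.0, 700)])
w, lam = em_two_exponentials(x)
print(w, 1.0 / lam)   # mixture weights and component means
```

Swapping in the Inverse Burr and Inverse Weibull densities keeps the E-step identical but turns the M-step into a numerical optimization.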

    Information Theory and Its Application in Machine Condition Monitoring

    Condition monitoring of machinery is one of the most important aspects of many modern industries. With the rapid advancement of science and technology, machines are becoming increasingly complex. Moreover, an exponential increase in demand is driving an ever-higher requirement for machine output. As a result, in most modern industries machines have to work 24 hours a day. All these factors cause machine health to deteriorate at a higher rate than before. Breakdown of key components of a machine, such as bearings, gearboxes or rollers, can have catastrophic effects in terms of both financial and human costs. From this perspective, it is important not only to detect a fault at its earliest point of inception but also to design the overall monitoring process, including fault classification, fault severity assessment and remaining useful life (RUL) prediction, for better planning of the maintenance schedule. Information theory is one of the pioneering contributions of modern science and has evolved into various forms and algorithms over time. Due to its ability to address the non-linearity and non-stationarity of machine health deterioration, it has become a popular choice among researchers. Information theory is an effective technique for extracting features of machines under different health conditions. In this context, this book discusses the potential applications, research results and latest developments of information theory-based condition monitoring of machinery.
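A minimal example of an information-theoretic condition-monitoring feature is the Shannon entropy of a signal's amplitude histogram; the signals below are synthetic stand-ins for healthy and faulty vibration data, not from any real machine.

```python
import numpy as np

def signal_entropy(signal, n_bins=32):
    """Shannon entropy (in bits) of a signal's amplitude histogram, a
    common information-theoretic feature: a developing fault changes the
    amplitude distribution and hence shifts the entropy."""
    hist, _ = np.histogram(signal, bins=n_bins)
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins (0 * log 0 = 0)
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(2)
t = np.linspace(0, 1, 4096)
healthy = np.sin(2 * np.pi * 50 * t) + 0.1 * rng.standard_normal(t.size)
faulty = healthy + 0.8 * rng.standard_normal(t.size)  # extra broadband noise
print(signal_entropy(healthy), signal_entropy(faulty))
```

In practice such entropy values are computed per time window and fed to a classifier or RUL model as one feature among many.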

    Machine cosmology: investigating the dark sector through novel inference methods

    Cosmology has experienced an influx of new theory and observations during the last few decades, pushed forward by the ever-increasing capabilities of current and upcoming large-scale surveys, by growing computational and methodological capabilities, and by new theoretical work fueled by these factors. Observational measurements often carry uncertainties from noise or random processes, and inference methods are concerned with inverse probability: the quest to explore the underlying distributions of data. Over the same time frame, Bayesian statistics has thus quickly found itself in a central role in cosmological analysis, as the field is rife with inverse problems such as hypothesis testing, model selection, and parameter estimation. More recently, inference models from the field of machine learning have also experienced a surge in applications to cosmology. We delve into the utility of such inference methods for challenges in cosmology at different degrees of granularity, focusing on the dark sector of our Universe and traveling from the largest scales to more local problems in the process. Starting in the area of cosmological parameter estimation, we develop a novel parallel-iterative parameter estimation method rooted in Bayesian nonparametrics and recent developments in variational inference from the field of machine learning in Chapter 2. In doing so, we propose, implement, and test a new approach to fast high-dimensional parameter estimation in an embarrassingly parallel manner. For this work, we make use of large-scale supercomputing facilities to speed up the functional extraction of cosmological parameter posteriors based on data from the Dark Energy Survey. Next, we concentrate on the dark energy equation of state in Chapter 3, stress-testing its imprint on type Ia supernovae measurements through an introduced random curve generator for smooth function perturbation.
We then investigate the robustness of standard model analyses based on such data with regard to deviations from a cosmological constant in the form of a redshift-dependent equation of state. With regard to large-scale structure, we show the advantages of density ridges, as curvilinear principal curves extracted from Dark Energy Survey weak lensing data, for cosmic trough identification in Chapter 4. Denoising large-scale structure in this way allows for a more fine-grained identification of structural components in the cosmic web. We also compare the results of our extended version of the subspace-constrained mean shift algorithm to curvelet denoising as an alternative method, as well as to trough structure from measurements of the foreground matter density field. Lastly, in the area of galaxy formation and evolution, we combine analytic formalisms and machine learning methods in a hybrid prediction framework in Chapter 5. We use a two-step process to populate dark matter haloes taken from the SIMBA cosmological simulation with baryonic galaxy properties of interest. For this purpose, we use the equilibrium model of galaxy evolution as a precursory module to enable an improved prediction of the remaining baryonic properties, as a way to quickly complete cosmological simulations.
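The mean-shift update at the heart of the subspace-constrained mean shift (SCMS) algorithm can be sketched as follows; full SCMS additionally projects each step onto the subspace of the density Hessian's smallest-eigenvalue directions so that points converge to ridges rather than modes, a projection omitted in this sketch. The filament data are synthetic.

```python
import numpy as np

def mean_shift_step(points, x, bandwidth=0.3):
    """One Gaussian-kernel mean-shift update: move x toward the local
    density mode, i.e. to the kernel-weighted mean of the data."""
    d2 = np.sum((points - x) ** 2, axis=1)
    w = np.exp(-0.5 * d2 / bandwidth ** 2)
    return w @ points / w.sum()

# Toy "filament": noisy points along the line y = 0.5 x, plus a tracer
rng = np.random.default_rng(3)
t = rng.uniform(0, 1, 500)
pts = np.column_stack([t, 0.5 * t]) + 0.05 * rng.standard_normal((500, 2))
x = np.array([0.5, 0.6])
for _ in range(20):
    x = mean_shift_step(pts, x)
print(x)   # tracer has moved close to the dense filament
```

Running many tracer points with the subspace-constrained version of this update traces out the density ridges used for trough identification.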

    Radar Target Classification using Recursive Knowledge-Based Methods

