9 research outputs found

    Power Systems Stability through Piecewise Monotonic Data Approximations – Part 2: Adaptive Number of Monotonic Sections and Performance of L1PMA, L2WPMA, and L2CXCV in Overhead Medium-Voltage Broadband over Power Lines Networks

    This second paper investigates the role of the number of monotonic sections during the mitigation of measurement differences in overhead medium-voltage broadband over power lines (OV MV BPL) transfer functions. The performance of two well-known piecewise monotonic data approximations that depend on the number of monotonic sections (i.e., L1PMA and L2WPMA) is assessed against the applied measurement differences and compared with L2CXCV, a piecewise monotonic data approximation that does not consider monotonic sections. The contribution of this paper is twofold. First, the definition of the optimal number of monotonic sections is examined further so that the accuracy of L1PMA can be significantly enhanced; the goal is to render the piecewise monotonic data approximations that are based on the optimal number of monotonic sections the leading approximations against those without monotonic sections. Second, a generic framework for defining an adaptive number of monotonic sections is proposed for a given OV MV BPL topology.
    Citation: Lazaropoulos, A. G. (2017). Power Systems Stability through Piecewise Monotonic Data Approximations – Part 2: Adaptive Number of Monotonic Sections and Performance of L1PMA, L2WPMA, and L2CXCV in Overhead Medium-Voltage Broadband over Power Lines Networks. Trends in Renewable Energy, 3(1), 33-60. DOI: 10.17737/tre.2017.3.1.003
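    As a hedged sketch of the idea (not the authors' code), the following Python splits the data into monotonic sections of alternating trend, fits each section by least squares isotonic regression, and chooses the number of sections k by a hypothetical stopping rule: the smallest k whose error is within a tolerance of the error for k + 1 sections. Section boundaries are found by brute force, so the sketch is only practical for short records; the function names and the stopping rule are illustrative inventions.

```python
# Sketch of a least-squares piecewise monotonic approximation with an
# adaptively chosen number of monotonic sections (illustrative only).
from itertools import combinations

import numpy as np
from sklearn.isotonic import IsotonicRegression


def fit_k_sections(y, k, first_increasing=True):
    """Best fit with k alternating monotonic sections, by brute-force splits."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    best_fit, best_sse = None, np.inf
    for cuts in combinations(range(1, n), k - 1):
        bounds = (0, *cuts, n)
        fit = np.empty(n)
        increasing = first_increasing
        for a, b in zip(bounds[:-1], bounds[1:]):
            if b - a < 2:
                fit[a:b] = y[a:b]  # a single point is trivially monotonic
            else:
                fit[a:b] = IsotonicRegression(increasing=increasing).fit_transform(
                    np.arange(a, b), y[a:b])
            increasing = not increasing
        sse = float(np.sum((fit - y) ** 2))
        if sse < best_sse:
            best_fit, best_sse = fit, sse
    return best_fit, best_sse


def adaptive_fit(y, k_max=4, tol=0.05):
    """Smallest k whose SSE is within tol of the SSE for k + 1 (assumed rule)."""
    best = {k: min((fit_k_sections(y, k, s) for s in (True, False)),
                   key=lambda r: r[1])
            for k in range(1, k_max + 1)}
    for k in range(1, k_max):
        if best[k][1] <= (1.0 + tol) * best[k + 1][1]:
            return k, best[k][0]
    return k_max, best[k_max][0]
```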

    Business Analytics and IT in Smart Grid – Part 2: The Qualitative Mitigation Impact of Piecewise Monotonic Data Approximations on the iSHM Class Map Footprints of Overhead Low-Voltage Broadband over Power Lines Topologies Contaminated by Measurement Differences

    Business analytics and IT infrastructure preserve the integrity of smart grid (SG) operation against the flood of big data, which may be susceptible to faults such as measurement differences. In [1], the impact of measurement differences that follow continuous uniform distributions (CUDs) of different magnitudes was investigated via initial Statistical Hybrid Model (iSHM) footprints during the operation of overhead low-voltage broadband over power lines (OV LV BPL) networks. In this companion paper, the mitigation efficiency of piecewise monotonic data approximations, such as L1PMA and L2WPMA, is qualitatively assessed in terms of iSHM footprints when the aforementioned measurement difference CUDs of different intensities are applied.
    Citation: Lazaropoulos, A. G. (2020). Business Analytics and IT in Smart Grid – Part 2: The Qualitative Mitigation Impact of Piecewise Monotonic Data Approximations on the iSHM Class Map Footprints of Overhead Low-Voltage Broadband over Power Lines Topologies Contaminated by Measurement Differences. Trends in Renewable Energy, 6, 177-203. DOI: 10.17737/tre.2020.6.2.0011

    Smart Energy and Spectral Efficiency (SE) of Distribution Broadband over Power Lines (BPL) Networks – Part 2: L1PMA, L2WPMA and L2CXCV for SE against Measurement Differences in Overhead Medium-Voltage BPL Networks

    This second paper assesses the performance of piecewise monotonic data approximations, such as L1PMA, L2WPMA, and L2CXCV, against measurement differences during spectral efficiency (SE) calculations in overhead medium-voltage broadband over power lines (OV MV BPL) networks. In this case study paper, the assessment of these three known piecewise monotonic data approximations, which serve as countermeasure techniques against measurement differences, is extended to the SE computations. The indicative BPL topologies of the first paper are again considered, while the 3-30 MHz frequency band of BPL operation is assumed.
    Citation: Lazaropoulos, A. G. (2018). Smart Energy and Spectral Efficiency (SE) of Distribution Broadband over Power Lines (BPL) Networks – Part 2: L1PMA, L2WPMA and L2CXCV for SE against Measurement Differences in Overhead Medium-Voltage BPL Networks. Trends in Renewable Energy, 4, 185-212. DOI: 10.17737/tre.2018.4.2.007
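    As a rough illustration of the kind of SE computation the paper extends, the sketch below averages the Shannon capacity per subcarrier over the 3-30 MHz band; the transfer function, noise level, and injected power spectral density are placeholders, not values from the paper.

```python
# Hedged illustration of an SE computation over the 3-30 MHz BPL band;
# channel, noise, and injected PSD values are placeholders.
import numpy as np

f = np.linspace(3e6, 30e6, 512)         # subcarrier frequencies, 3-30 MHz
H_dB = -20.0 - 0.8e-6 * f               # hypothetical channel attenuation in dB
psd_tx_dBm_per_Hz = -50.0               # assumed flat injected PSD
noise_dBm_per_Hz = -105.0               # assumed flat noise PSD

snr_dB = psd_tx_dBm_per_Hz + H_dB - noise_dBm_per_Hz
se = np.mean(np.log2(1.0 + 10.0 ** (snr_dB / 10.0)))  # bit/s/Hz, Shannon bound
print(f"average SE over the band: {se:.2f} bit/s/Hz")
```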

    Power Systems Stability through Piecewise Monotonic Data Approximations – Part 1: Comparative Benchmarking of L1PMA, L2WPMA and L2CXCV in Overhead Medium-Voltage Broadband over Power Lines Networks

    This first paper assesses the performance of three well-known piecewise monotonic data approximations (i.e., L1PMA, L2WPMA, and L2CXCV) during the mitigation of measurement differences in overhead medium-voltage broadband over power lines (OV MV BPL) transfer functions. The contribution of this paper is threefold. First, based on the inherent piecewise monotonicity of OV MV BPL transfer functions, L2WPMA and L2CXCV are outlined and applied during the determination of theoretical and measured OV MV BPL transfer functions. Second, L1PMA, L2WPMA, and L2CXCV are comparatively benchmarked by using the performance metrics of the percent error sum (PES) and the fault PES, which assess the efficiency and accuracy of the three piecewise monotonic data approximations during the determination of transmission BPL transfer functions. Third, the performance of L1PMA, L2WPMA, and L2CXCV is assessed with respect to the nature of the faults, i.e., faults that follow either a continuous uniform distribution (CUD) or a normal distribution (ND) of different magnitudes. The goal of this set of two papers is the establishment of a more effective identification and restoration of measurement differences during the determination of OV MV BPL coupling transfer functions, which may significantly help towards a more stable and self-healing power system.
    Citation: Lazaropoulos, A. G. (2017). Power Systems Stability through Piecewise Monotonic Data Approximations – Part 1: Comparative Benchmarking of L1PMA, L2WPMA and L2CXCV in Overhead Medium-Voltage Broadband over Power Lines Networks. Trends in Renewable Energy, 3(1), 2-32. DOI: 10.17737/tre.2017.3.1.002
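    The paper defines PES precisely; purely as a hedged illustration, a PES-style metric could be computed as the mean absolute percent deviation of the approximated transfer function from the theoretical one:

```python
# Hedged PES-style metric: mean absolute percent deviation of the approximated
# transfer function from the theoretical one (the paper's exact definition
# may differ). Theoretical values are assumed nonzero.
import numpy as np

def pes(H_theoretical, H_approx):
    H_t = np.asarray(H_theoretical, dtype=float)
    H_a = np.asarray(H_approx, dtype=float)
    return 100.0 * np.mean(np.abs((H_a - H_t) / H_t))
```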

    Main Line Fault Localization Methodology (MLFLM) in Smart Grid – The Underground Medium- and Low-Voltage Broadband over Power Lines Networks Case

    This paper assesses the performance of the main line fault localization methodology (MLFLM) when its application is extended to underground medium- and low-voltage broadband over power lines (UN MV and UN LV BPL) networks, namely UN distribution BPL networks. The paper focuses on the localization of main distribution line faults across UN MV and UN LV BPL networks. By extending the MLFLM procedure, which has successfully been applied to overhead medium-voltage (OV MV) BPL networks, the performance of MLFLM is investigated with respect to the nature of the main distribution line faults, the intensity of the measurement differences, and the fault location across the main distribution lines of the underground distribution power grid (either the MV or the LV grid).
    Citation: Lazaropoulos, A. G. (2018). Main Line Fault Localization Methodology (MLFLM) in Smart Grid – The Underground Medium- and Low-Voltage Broadband over Power Lines Networks Case. Trends in Renewable Energy, 4, 15-42. DOI: 10.17737/tre.2018.4.1.004

    Evaluated data file for neutron irradiation of Ta-181 at energies up to 200 MeV

    A new evaluated data file for 181Ta irradiated with neutrons at energies up to 200 MeV has been prepared. The data evaluation has been carried out using the results of calculations, measured data, systematics predictions, and covariance information. Calculations have been performed using a special version of the TALYS code that implements the geometry-dependent hybrid model and models for non-equilibrium light cluster emission. The TEFAL code and the FOX code from the BEKED package have been used for formatting the data.

    Least squares convex-concave data smoothing

    We consider n noisy measurements of a smooth (unknown) function, which suggest that the graph of the function consists of one convex and one concave section. Due to the noise, the sequence of the second divided differences of the data exhibits more sign changes than are expected in the second derivative of the underlying function. We address the problem of smoothing the data so as to minimize the sum of squares of residuals subject to the condition that the sequence of successive second divided differences of the smoothed values changes sign at most once. It is a nonlinear problem, since the position of the sign change is also an unknown of the optimization process. We state a characterization theorem, which shows that the smoothed values can be derived by at most 2n-2 quadratic programming calculations on subranges of data. Then, we develop an algorithm that solves the problem in about O(n²) computer operations by employing several techniques, including B-splines, the use of active sets, quadratic programming, and updating methods. A Fortran program has been written and some of its numerical results are presented. Applications of the smoothing technique may be found in scientific, economic, and engineering calculations, when a potential shape for the underlying function is an S-curve. Generally, the smoothing calculation may arise from processes that show initially increasing and then decreasing rates of change.
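    To make the optimization problem concrete, the following naive sketch solves it by brute force: for every candidate position of the sign change it solves a small quadratic program with SLSQP and keeps the best fit. It assumes equally spaced abscissae (so plain second differences stand in for divided differences) and is nowhere near as efficient as the paper's O(n²) algorithm.

```python
# Naive sketch of the convex/concave least-squares smoothing problem: try every
# candidate sign-change position j and solve a small QP with SLSQP. Practical
# only for small n.
import numpy as np
from scipy.optimize import minimize


def convex_concave_fit(y):
    y = np.asarray(y, dtype=float)
    n = len(y)
    best_s, best_sse = None, np.inf
    for j in range(0, n - 1):  # second differences up to index j are nonnegative
        cons = [{"type": "ineq",
                 "fun": (lambda s, i=i, sgn=(1.0 if i <= j else -1.0):
                         sgn * (s[i + 1] - 2.0 * s[i] + s[i - 1]))}
                for i in range(1, n - 1)]
        res = minimize(lambda s: np.sum((s - y) ** 2), y,
                       jac=lambda s: 2.0 * (s - y),
                       constraints=cons, method="SLSQP")
        if res.success and res.fun < best_sse:
            best_s, best_sse = res.x, res.fun
    return best_s, best_sse
```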

    Additive models with shape constraints

    In many practical situations, when analyzing the dependence of a response variable on one or more explanatory variables, it is essential to assume that the relationship of interest obeys certain shape constraints, such as monotonicity, or monotonicity and convexity/concavity. In this thesis a new approach to shape-preserving smoothing within generalized additive models has been developed. In contrast with previous quadratic programming based methods, the project develops intermediate rank penalized smoothers with shape constraints based on re-parameterized B-splines, with penalties based on the P-spline ideas of Eilers and Marx (1996). Smoothing under monotonicity constraints, and under monotonicity together with convexity/concavity, is considered for univariate smooths, as is smoothing of bivariate functions with monotonicity restrictions on both covariates or on only one of them. The proposed shape constrained smoothing has been incorporated into generalized additive models with a mixture of unconstrained and shape restricted smooth terms (mono-GAM). A fitting procedure for mono-GAM is developed. Since a major challenge of any flexible regression method is its implementation in a computationally efficient and stable manner, issues such as convergence, rank deficiency of the working model matrix, initialization, and others have been thoroughly dealt with. The question of the limiting posterior distribution of the model parameters is settled, which allows us to construct Bayesian confidence intervals for the mono-GAM smooth terms by means of the delta method. The performance of these confidence intervals is examined by assessing realized coverage probabilities in simulation studies. The proposed modelling approach has been implemented in an R package, monogam. The model setup is the same as in mgcv(gam) with the addition of shape constrained smooths; in order to be consistent with the unconstrained GAM, the package provides key functions similar to those associated with mgcv(gam). Performance and timing comparisons of mono-GAM with alternative methods have been undertaken. The simulation studies show that the new method has practical advantages over the alternatives considered. Applications of mono-GAM to various data sets are presented which demonstrate its ability to model many practical situations.
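    A minimal sketch of the monotone smoother idea described above, transcribed to Python rather than the thesis' R package monogam: following the re-parameterized B-spline construction, the coefficients are written as a starting value plus cumulative sums of exponentials, which forces them (and hence the spline) to be nondecreasing, while a squared-difference penalty on the coefficients plays the role of the P-spline penalty. The knot layout, penalty form, and smoothing parameter are assumptions for illustration.

```python
# Sketch of a monotone (nondecreasing) penalized B-spline smoother via the
# cumulative-exponential re-parameterization; settings are illustrative.
import numpy as np
from scipy.interpolate import BSpline
from scipy.optimize import minimize

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 1.0, 120))
y = 1.0 / (1.0 + np.exp(-8.0 * (x - 0.5))) + rng.normal(0.0, 0.05, x.size)

deg, n_basis = 3, 12
knots = np.r_[[0.0] * deg, np.linspace(0.0, 1.0, n_basis - deg + 1), [1.0] * deg]
B = BSpline.design_matrix(x, knots, deg).toarray()  # (120, 12) basis matrix
lam = 1e-3                                          # assumed smoothing parameter

def beta_of(theta):
    # beta_1 = theta_0, beta_{j+1} = beta_j + exp(theta_j): coefficients, and
    # hence the spline, are forced to be nondecreasing
    return theta[0] + np.r_[0.0, np.cumsum(np.exp(theta[1:]))]

def objective(theta):
    beta = beta_of(theta)
    return np.sum((y - B @ beta) ** 2) + lam * np.sum(np.diff(beta, 2) ** 2)

theta0 = np.r_[y.min(), np.full(n_basis - 1, np.log((y.max() - y.min()) / n_basis))]
fit = B @ beta_of(minimize(objective, theta0, method="BFGS").x)
```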

    L2CXCV: A Fortran 77 package for least squares convex/concave data smoothing

    Fortran 77 software is given for least squares smoothing of data values contaminated by random errors, subject to one sign change in the second divided differences of the smoothed values, where the location of the sign change is also an unknown of the optimization problem. A highly useful description of the constraints is that they follow from the assumption of initially increasing and subsequently decreasing rates of change, or vice versa, of the process considered. The underlying algorithm partitions the data into two disjoint sets of adjacent data and calculates the required fit by solving a strictly convex quadratic programming problem for each set. The piecewise linear interpolant to the fit is convex on the first set and concave on the other. The partition into suitable sets is achieved by a finite iterative algorithm, which is made quite efficient because of the interactions of the quadratic programming problems on consecutive data. The algorithm obtains the solution by employing no more quadratic programming calculations over subranges of data than twice the number of the divided difference constraints. The quadratic programming technique makes use of active sets and takes advantage of a B-spline representation of the smoothed values that allows some efficient updating procedures. The entire code required to implement the method is 2920 Fortran lines. The package has been tested on a variety of data sets and has performed very efficiently, terminating in an overall number of active set changes over subranges of data that is only proportional to the number of data. The results suggest that the package can be used for very large numbers of data values. Some examples with output are provided to help new users and exhibit certain features of the software. Important applications of the smoothing technique may be found in calculating a sigmoid approximation, which is a common topic in various application contexts in disciplines like physics, economics, biology, and engineering. Distribution material that includes single and double precision versions of the code, driver programs, technical details of the implementation of the software package, and test examples that demonstrate the use of the software is available in an accompanying ASCII file.
    Title of program: L2CXCV
    Catalogue identifier: ADXM_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADXM_v1_0
    Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
    Computer: PC Intel Pentium, Sun Sparc Ultra 5, Hewlett-Packard HP UX 11.0
    Operating system: Windows 98/2000, Unix/Solaris 7, Unix/HP UX 11.0
    Programming language used: Fortran 77
    Memory required to execute with typical data: O(n), where n is the number of data
    No. of bits in a byte: 8
    No. of lines in distributed program, including test data, etc.: 29,349
    No. of bytes in distributed program, including test data, etc.: 1,276,663
    No. of processors used: 1
    Has the code been vectorized or parallelized?: No
    Distribution format: tar.gz
    Separate documentation available: Yes
    Nature of physical problem: Analysis of processes that show initially increasing and then decreasing rates of change (sigmoid shape), as, for example, in heat curves, reactor stability conditions, evolution curves, photoemission yields, growth models, utility functions, etc.; identifying an unknown convex/concave (sigmoid) function from some measurements of its values that contain random errors, and identifying the inflection point of this sigmoid function.
    Method of solution: Univariate data smoothing by minimizing the sum of the squares of the residuals (least squares approximation) subject to the condition that the second order divided differences of the smoothed values change sign at most once. Ideally, this is the number of sign changes in the second derivative of the underlying function. The remarkable property of the smoothed values is that they consist of one separate section of optimal components that give nonnegative second divided differences (convexity) and one separate section of optimal components that give nonpositive second divided differences (concavity). The solution process finds the joint of the sections (that is, the inflection point estimate of the underlying function) automatically. The underlying method is iterative, each iteration solving a structured strictly convex quadratic programming problem in order to obtain a convex or a concave section over a subrange of data.
    Restrictions on the complexity of the problem: The number of data, n, is not limited in the software package, but is limited to 2000 in the main driver. The total work of the method requires 2n-2 structured quadratic programming calculations over subranges of data, which in practice does not exceed O(n²) computer operations.
    Typical running times: CPU time on a PC with an Intel 733 MHz processor operating in Windows 98: about 2 s to smooth n = 1000 noisy measurements that follow the shape of the sine function over one period.
    Summary: L2CXCV is a package of Fortran 77 subroutines for least squares smoothing of n univariate data values contaminated by random errors, subject to one sign change in the second divided differences of the smoothed values, where the location of the sign change is unknown. The piecewise linear interpolant to the smoothed values gives a convex/concave fit to the data. The underlying algorithm is based on the property that in this best convex/concave fit, the convex and the concave sections are both optimal and separate. The algorithm is iterative, each iteration solving a strictly convex quadratic programming problem for the best convex fit to the first k data, starting from the best convex fit to the first k-1 data. By reversing the order and sign of the data, the algorithm obtains the best concave fit to the last n-k data. It then chooses as the optimal position of the required sign change (which defines the inflection point of the fit) that k for which the convex and concave components to the first k and the last n-k data, respectively, form a convex/concave vector that gives the least sum of squares of residuals. In effect, the algorithm requires at most 2n-2 quadratic programming calculations over subranges of data. The package employs a quadratic programming technique that takes advantage of a B-spline representation of the smoothed values and makes use of some efficient O(k) updating procedures, where k is the number of data in a subrange. The package has been tested on a variety of data sets and has performed very efficiently, terminating in an overall number of active set changes that is about n, thus exhibiting quadratic performance in n. The Fortran codes have been designed to minimize the use of computing resources, and attention has been given to computer rounding error details, which are essential to the robustness of the software package. Numerical examples with output are provided to help the use of the software and exhibit certain features of the method.
    Distribution material that includes driver programs, technical details of the installation of the package, and test examples that demonstrate the use of the software is available in an ASCII file that accompanies this work. © 2006 Elsevier B.V. All rights reserved.
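    A small verification helper in the spirit of the constraint the package enforces (a sketch, not part of the distributed Fortran code): it counts the sign changes in the second divided differences of the smoothed values; a valid convex/concave fit must yield at most one, and its location estimates the inflection point.

```python
# Sketch of a check on the constraint the package enforces: a convex/concave
# fit must have at most one sign change in its second divided differences.
import numpy as np

def second_divided_differences(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    first = np.diff(y) / np.diff(x)            # first divided differences
    return np.diff(first) / (x[2:] - x[:-2])   # second divided differences

def count_sign_changes(d, tol=1e-10):
    signs = np.sign(d[np.abs(d) > tol])        # ignore near-zero entries
    return int(np.sum(signs[1:] != signs[:-1]))
```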