618 research outputs found

    A new approach to the fault location problem: using the fault's transient intermediate frequency response

    The fault location problem has been tackled mainly through impedance-based techniques, the travelling wave principle and, more recently, machine learning algorithms. These techniques require both current and voltage measurements. Impedance-based methods can yield multiple solutions; the travelling wave approach usually requires high sampling rates and synchronized measurements together with sophisticated identification algorithms; and machine learning techniques require training data and re-tuning for different grid topologies. In this work we propose a new fault location method based on the system's transient intermediate frequency response immediately after a fault occurs. This transient response is characterized by the travelling wave phenomenon together with intermediate frequencies of oscillation in the range of 5 to 500 kHz. These intermediate oscillation frequencies are associated with the natural response of the cable/line system to the fault event, and they depend on the faulted section and on the fault location within that section. The proposed fault location methodology leverages that dependency by first identifying these intermediate frequencies for different fault location scenarios in a given network. This process is performed offline using a linear time-invariant (LTI) representation of the network. To compute this LTI representation, an impedance representation in the modal domain is established for cable/line sections as part of this work, capturing the frequency dependence and distributed nature of their electrical parameters. The offline methodology identifies the intermediate frequencies for different fault location scenarios and then fits the fault location dependence of each intermediate frequency with a polynomial regression. An online methodology is also proposed to perform the fault location in real time by solving the polynomial regressions computed offline, using measurements of the intermediate frequencies present in the frequency spectrum of transient signals. The fault location is thus obtained from voltage or current measurements of the fault's transient response at different locations in the network, together with simple signal processing techniques such as the Fast Fourier Transform. The full method is tested with an EMT simulation in PSCAD, using the detailed frequency-dependent model for underground cables together with realistic load models in a low-voltage distribution network test system.
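
    As a rough illustration of the offline/online split described above (the mode frequencies, cable length, polynomial degree and sampling rate below are made-up placeholders, not values from the paper), the offline stage can be sketched as a polynomial fit of mode frequency versus fault location, and the online stage as picking the dominant 5-500 kHz peak from an FFT of a transient record and solving that polynomial for the location:

```python
import numpy as np

# Offline stage (hypothetical data): for one intermediate-frequency mode, assume
# simulations of the LTI network model gave the dominant oscillation frequency f
# for a set of candidate fault locations x along one cable section.
x_train = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])             # fault location, km
f_train = np.array([240e3, 185e3, 150e3, 126e3, 109e3, 96e3])  # mode frequency, Hz
coeffs = np.polyfit(x_train, f_train, deg=3)     # polynomial regression f(x)

# Online stage: estimate the dominant intermediate frequency of a measured
# transient (a synthetic decaying oscillation stands in for a real record).
fs = 2e6                                         # sampling rate, Hz
t = np.arange(0, 2e-3, 1 / fs)
v = np.exp(-t / 4e-4) * np.sin(2 * np.pi * 150e3 * t)

spectrum = np.abs(np.fft.rfft(v * np.hanning(len(v))))
freqs = np.fft.rfftfreq(len(v), 1 / fs)
band = (freqs > 5e3) & (freqs < 500e3)           # intermediate-frequency band
f_meas = freqs[band][np.argmax(spectrum[band])]

# Solve f(x) = f_meas for the location: keep real roots inside the section.
poly = coeffs.copy()
poly[-1] -= f_meas
roots = np.roots(poly)
x_hat = [r.real for r in roots
         if abs(r.imag) < 1e-6 and x_train.min() <= r.real <= x_train.max()]
print(f"measured mode: {f_meas/1e3:.1f} kHz, estimated location(s): {x_hat} km")
```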

    Analysis of an On-Line Stability Monitoring Approach for DC Microgrid Power Converters

    An online approach to evaluate and monitor the stability margins of dc microgrid power converters is presented in this paper. The discussed online stability monitoring technique is based on Middlebrook's loop-gain measurement technique, adapted to digitally controlled power converters. In this approach, a perturbation is injected into a specific digital control loop of the converter and, from the measured loop gain, its crossover frequency and phase margin are continuously evaluated and monitored. The complete analytical derivation of the model is reported, together with detailed design aspects. In addition, the case of multiple power converters connected to the same dc bus, all equipped with the stability monitoring unit, is also investigated. An experimental microgrid prototype is implemented to validate the theoretical analysis and simulation results, and to evaluate the effectiveness of the digital implementation of the technique for different control loops. The obtained results confirm the expected performance of the stability monitoring tool in steady-state and transient operating conditions. The proposed method can be extended to generic control loops in power converters operating in dc microgrids.
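
    The injection-based measurement can be sketched as follows (a minimal, loosely Middlebrook-style toy, not the paper's implementation; the first-order plant, PI gains and sampling rate are assumptions): a small sinusoid d is added between the controller output u_c and the modulator input u_m = u_c + d, the loop gain at the injection frequency is recovered as T = -U_c / U_m, and the crossover frequency and phase margin follow from a frequency sweep:

```python
import numpy as np

fs, Ts = 20e3, 1 / 20e3             # control/sampling rate of the digital loop
tau = 1e-3                          # assumed first-order plant time constant
a = np.exp(-Ts / tau)               # discretized plant pole
Kp, Ki = 2.0, 3000.0                # assumed PI controller gains

def loop_gain_at(f, n_cycles=200, amp=1e-3):
    """Simulate the loop with a perturbation at f and extract T at that frequency."""
    n = int(round(n_cycles * fs / f))           # samples for n_cycles periods
    k = np.arange(n)
    y = integ = 0.0
    uc, um = np.zeros(n), np.zeros(n)
    for i in range(n):
        e = -y                                  # regulation loop, reference = 0
        integ += e * Ts
        uc[i] = Kp * e + Ki * integ             # controller output
        um[i] = uc[i] + amp * np.sin(2 * np.pi * f * i * Ts)  # injection point
        y = a * y + (1 - a) * um[i]             # first-order plant update
    w = np.exp(-1j * 2 * np.pi * f * k * Ts)    # single-bin DFT correlation
    h = slice(n // 2, n)                        # drop the start-up transient
    Uc, Um = np.sum(uc[h] * w[h]), np.sum(um[h] * w[h])
    return -Uc / Um                             # loop gain seen at the injection point

freqs = np.logspace(np.log10(50), np.log10(5e3), 40)
T = np.array([loop_gain_at(f) for f in freqs])

# Crossover: where |T| passes through 1; phase margin = 180 deg + arg T there.
mag, ph = np.abs(T), np.degrees(np.angle(T))
ix = np.argmin(np.abs(mag - 1.0))
print(f"crossover ~ {freqs[ix]:.0f} Hz, phase margin ~ {180 + ph[ix]:.1f} deg")
```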

    Parameterized macromodeling of passive and active dynamical systems

    The abstract is provided in the attachment.

    Dynamic operability assessment : a mathematical programming approach based on Q-parametrization

    The ability of a process plant to guarantee high product quality, in terms of low variability, is emerging as a defining feature when distinguishing between alternative suppliers. The extent to which this can be achieved is termed a plant's dynamic operability and is a function of both the plant design and the control system design. In the limit, however, the closed-loop performance is determined by the properties inherent in the plant. This realization of the interrelationship between a plant design and its achievable closed-loop performance has motivated research toward systematic techniques for screening inherently inferior designs. Pioneering research in the early 1980s identified right-half-plane transmission zeros, time delays, input constraints and model uncertainty as factors that limit the achievable closed-loop performance of a process. Quantifying the performance-limiting effect of combinations of these factors has proven to be a challenging problem, as reflected in the literature. The aim of this thesis is to develop a systematic procedure for dynamic operability assessment in the presence of combinations of performance-limiting factors. The approach adopted in this thesis is based on the Q-parametrization of stabilizing linear feedback controllers and involves posing dynamic operability assessment as a mathematical programming problem. In the proposed formulation, a convex objective function, reflecting a measure of closed-loop performance, is optimized over all stable Q, subject to a set of constraints on the closed-loop behavior, which for many specifications of interest is convex. A discrete-time formulation is chosen so as to allow for the convenient handling of time delays and time-domain constraints. An important feature of the approach is that, due to the convexity, global optimality is guaranteed. Furthermore, the fact that Q parametrizes all stabilizing linear feedback controllers implies that the performance at the optimum represents the best possible performance for any such controller. The results are thus not biased by controller type or tuning, apart from the requirement that the controller be linear.
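
    The core mechanism, that closed-loop maps are affine in Q so convex performance specifications become convex constraints on Q, can be sketched for a stable discrete-time SISO plant as below (a toy with an assumed first-order plant, an FIR Q, and cvxpy as the convex solver; none of these choices are the thesis's case study):

```python
import numpy as np
import cvxpy as cp   # assumes cvxpy is available; any convex QP solver would do

N = 60                                   # horizon / FIR length for Q
# Assumed plant: stable first-order lag with one sample of delay.
p = 0.2 * (0.8 ** np.arange(N))          # impulse response of P(z)
p = np.concatenate(([0.0], p))[:N]       # add the one-step delay

# Lower-triangular Toeplitz matrix so that the convolution (p*q)[0:N] = Pmat @ q.
Pmat = np.array([[p[i - j] if i >= j else 0.0 for j in range(N)] for i in range(N)])

q = cp.Variable(N)                       # FIR coefficients of Q(z); FIR => stable
step = np.ones(N)

# For a stable plant, T = P*Q and u = Q*r, both affine in q.
t_imp = Pmat @ q                         # impulse response of the closed-loop map
e = step - cp.cumsum(t_imp)              # step-tracking error
u = cp.cumsum(q)                         # control effort for a unit step

constraints = [cp.abs(u) <= 5.0,         # input constraint
               cp.abs(e[40:]) <= 0.02]   # settling requirement after 40 samples
prob = cp.Problem(cp.Minimize(cp.sum_squares(e)), constraints)
prob.solve()
print("best achievable tracking cost over all stabilizing linear controllers:", prob.value)
```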

    New Approaches in Automation and Robotics

    The book New Approaches in Automation and Robotics offers, in 22 chapters, a collection of recent developments in automation, robotics and control theory. It is dedicated to researchers in science and industry, students, and practicing engineers who wish to update and enhance their knowledge of modern methods and innovative applications. The authors and editor of this book wish to motivate people, especially undergraduate students, to get involved with the interesting field of robotics and mechatronics. We hope that the ideas and concepts presented in this book are useful for your own work and can contribute to problem solving in similar applications as well. It is clear, however, that the wide area of automation and robotics can only be highlighted in places, not completely covered, by a single book.

    Dynamical Systems in Spiking Neuromorphic Hardware

    Dynamical systems are universal computers. They can perceive stimuli, remember, learn from feedback, plan sequences of actions, and coordinate complex behavioural responses. The Neural Engineering Framework (NEF) provides a general recipe for formulating models of such systems as coupled sets of nonlinear differential equations and compiling them onto recurrently connected spiking neural networks, akin to a programming language for spiking models of computation. The Nengo software ecosystem supports the NEF and compiles such models onto neuromorphic hardware. In this thesis, we analyze the theory driving the success of the NEF and expose several core principles underpinning its correctness, scalability, completeness, robustness, and extensibility. We also derive novel theoretical extensions to the framework that enable it to far more effectively leverage a wide variety of dynamics in digital hardware, and to exploit the device-level physics in analog hardware. At the same time, we propose a novel set of spiking algorithms that recruit an optimal nonlinear encoding of time, which we call the Delay Network (DN). Backpropagation across stacked layers of DNs dramatically outperforms stacked Long Short-Term Memory (LSTM) networks, a state-of-the-art deep recurrent architecture, in accuracy and training time on a continuous-time memory task and a chaotic time-series prediction benchmark. The basic component of this network is shown to function on state-of-the-art spiking neuromorphic hardware, including Braindrop and Loihi. This implementation approaches the energy efficiency of the human brain in the former case, and the precision of conventional computation in the latter case.
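
    The "compile a differential equation onto a recurrent network" step rests on the NEF's dynamics principle: to realize dx/dt = Ax + Bu through first-order low-pass synapses with time constant tau, use the mapped matrices A' = tau*A + I and B' = tau*B. The sketch below is a toy rate-based version of that mapping for a one-dimensional integrator (random ReLU tuning curves and least-squares decoders are illustrative assumptions, not Nengo or the thesis's spiking implementation):

```python
import numpy as np

dt, tau = 1e-3, 0.1
A = np.array([[0.0]])          # desired dynamics: pure integrator, dx/dt = u
B = np.array([[1.0]])
Ap = tau * A + np.eye(1)       # recurrent transform  A' = tau*A + I
Bp = tau * B                   # input transform      B' = tau*B

n_neurons = 200
rng = np.random.default_rng(0)
encoders = rng.choice([-1.0, 1.0], size=(n_neurons, 1))   # preferred directions
gains = rng.uniform(0.5, 2.0, size=n_neurons)
biases = rng.uniform(-1.0, 1.0, size=n_neurons)
rate = lambda x: np.maximum(0.0, gains * (encoders @ x) + biases)  # ReLU tuning

# Decoders: least-squares fit so the decoded activity approximates the input value.
xs = np.linspace(-1, 1, 200).reshape(1, -1)
acts = np.maximum(0.0, gains[:, None] * (encoders @ xs) + biases[:, None])
decoders = np.linalg.lstsq(acts.T, xs.T, rcond=None)[0]   # shape (n_neurons, 1)

x_hat = np.zeros(1)
for k in range(2000):
    u = np.array([1.0 if k < 1000 else 0.0])              # 1 s step input, then hold
    a = rate(Ap @ x_hat + Bp @ u)                          # recurrent + input drive
    decoded = decoders.T @ a
    x_hat = x_hat + (dt / tau) * (decoded - x_hat)         # synaptic low-pass filter
print("integrated value after 2 s:", x_hat[0])             # roughly 1.0, up to decode drift
```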

    Modeling and Analysis of Power Processing Systems (MAPPS), initial phase 2

    The overall objective of the program is to provide the engineering tools to reduce the analysis, design, and development effort, and thus the cost, of achieving the required performance of switching regulators and dc-dc converter systems. The program was both tutorial and application oriented. Various analytical methods were described in detail and supplemented with examples, and those with standardization appeal were reduced to computer-based subprograms. Major program efforts included small- and large-signal control-dependent performance analysis and simulation, control circuit design, power circuit design and optimization, system configuration study, and system performance simulation. Techniques including the discrete time domain, the conventional frequency domain, Lagrange multipliers, nonlinear programming, and control design synthesis were employed in these efforts. To enhance interactive communication between the modeling and analysis subprograms and the user, a working prototype of the Data Management Program was also developed to facilitate expansion as future subprogram capabilities increase.
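
    In the spirit of the frequency-domain, small-signal analyses the program describes, a minimal modern equivalent is shown below (the averaged buck-converter control-to-output transfer function with assumed component values, not a model from the report):

```python
import numpy as np
from scipy import signal

# Averaged small-signal control-to-output transfer function of a buck converter
# in continuous conduction mode: Gvd(s) = Vin / (L*C*s^2 + (L/R)*s + 1).
Vin, L, C, R = 28.0, 50e-6, 220e-6, 3.0        # assumed, illustrative values
Gvd = signal.TransferFunction([Vin], [L * C, L / R, 1.0])

w = 2 * np.pi * np.logspace(1, 5, 400)          # 10 Hz .. 100 kHz
w, mag_db, phase_deg = signal.bode(Gvd, w)

f0 = 1.0 / (2 * np.pi * np.sqrt(L * C))         # LC resonant frequency
idx = np.argmax(mag_db < 0)                     # first frequency below 0 dB
print(f"LC resonance ~ {f0:.0f} Hz; |Gvd| falls below 0 dB near {w[idx]/(2*np.pi):.0f} Hz")
```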