
    A Scalable Correlator Architecture Based on Modular FPGA Hardware, Reusable Gateware, and Data Packetization

    A new generation of radio telescopes is achieving unprecedented levels of sensitivity and resolution, as well as increased agility and field-of-view, by employing high-performance digital signal processing hardware to phase and correlate large numbers of antennas. The computational demands of these imaging systems scale in proportion to BMN^2, where B is the signal bandwidth, M is the number of independent beams, and N is the number of antennas. The specifications of many new arrays lead to demands in excess of tens of PetaOps per second. To meet this challenge, we have developed a general-purpose correlator architecture that uses standard 10-Gbit Ethernet switches to pass data between flexible hardware modules containing Field Programmable Gate Array (FPGA) chips. These chips are programmed using open-source signal processing libraries we have developed to be flexible, scalable, and chip-independent. This work reduces the time and cost of implementing a wide range of signal processing systems, with correlators foremost among them, and facilitates upgrading to new generations of processing technology. We present several correlator deployments, including a 16-antenna, 200-MHz bandwidth, 4-bit, full-Stokes-parameter application deployed on the Precision Array for Probing the Epoch of Reionization.
    Comment: Accepted to Publications of the Astronomical Society of the Pacific. 31 pages.
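    The BMN^2 scaling can be made concrete with a short back-of-the-envelope calculation. The sketch below is purely illustrative: the bandwidth, beam count, antenna count, and operations-per-complex-multiply-accumulate figures are assumptions, not the deployments described above.

```python
# Back-of-the-envelope estimate of correlator compute load, assuming the
# B*M*N^2 scaling quoted in the abstract. All numbers are illustrative.

def correlator_ops_per_second(bandwidth_hz, n_beams, n_antennas, ops_per_cmac=8):
    """Rough operations/s for the cross-multiply stage of a correlator.

    bandwidth_hz : processed signal bandwidth B
    n_beams      : number of independent beams M
    n_antennas   : number of antennas N
    ops_per_cmac : real operations per complex multiply-accumulate (assumed)
    """
    return bandwidth_hz * n_beams * n_antennas**2 * ops_per_cmac

if __name__ == "__main__":
    # Hypothetical large array: 1 GHz bandwidth, 1 beam, 1000 antennas.
    ops = correlator_ops_per_second(1e9, 1, 1000)
    print(f"{ops / 1e15:.1f} PetaOps/s")  # ~8 PetaOps/s under these assumptions
```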

    Journey of an intruder through the fluidisation and jamming transitions of a dense granular medium

    We study experimentally the motion of an intruder dragged into an amorphous monolayer of horizontally vibrated grains at high packing fractions. This motion exhibits two transitions. The first transition separates a continuous-motion regime, at comparatively low packing fractions and large dragging forces, from an intermittent-motion regime at high packing fractions and low dragging forces. Associated with these different motions, we observe a transition from a linear rheology to a stiffer response; we therefore call this first transition "fluidisation". A second transition is observed within the intermittent regime, where the intruder's motion consists of intermittent bursts separated by long waiting times. We observe a peak in the relative fluctuations of the intruder's displacements and a critical scaling of the burst amplitude distributions. This transition occurs at the jamming point characterized in a previous study and defined as the point where the static pressure (i.e. the pressure measured in the absence of vibration) vanishes. Investigating the motion of the surrounding grains, we show that below the fluidisation transition there is a permanent wake of free volume behind the intruder. This transition is marked by a change in the reorganization patterns around the intruder, which evolve from compact aggregates in the flowing regime to long-range branched shapes in the intermittent regime, suggesting an increasing role of stress fluctuations. Remarkably, the distributions of the kinetic energy of these reorganization patterns also exhibit a critical scaling at the jamming transition.
    Comment: 12 pages, 11 figures

    Review of trends and targets of complex systems for power system optimization

    Optimization systems (OSs) allow operators of electrical power systems (PSs) to operate PSs optimally and to create optimal PS development plans. The integration of OSs into PSs is a major current trend, and the demand for PS optimization tools and for PS OS experts is growing. The aim of this review is to define the current dynamics and trends in PS optimization research and to present several papers that clearly and comprehensively describe PS OSs whose characteristics correspond to the identified main trends in this research area. The current dynamics and trends were determined by analyzing a database of 255 papers presenting PS OSs, published from December 2015 to July 2019. Eleven main characteristics of current PS OSs were identified. The statistical analyses show that four characteristics of PS OSs are currently the most frequently presented in research papers: OSs for minimizing the price of electricity or reducing PS operation costs, OSs for optimizing the operation of renewable energy sources, OSs for regulating power consumption during the optimization process, and OSs for regulating the operation of energy storage systems during the optimization process. Finally, each of the identified characteristics of current PS OSs is briefly described. In the analysis, all PS OSs presented in the observed time period were considered, regardless of the part of the PS whose operation was optimized, the voltage level of the optimized PS part, or the optimization goal of the PS OS.

    Event-based Green Scheduling of Radiant Systems in Buildings

    This paper addresses the problem of peak power demand reduction for intermittent operation of radiant systems in buildings. Uncoordinated operation of the circulation pumps of a multi-zone hydronic radiant system can cause temporally correlated electricity demand surges when multiple pumps are activated simultaneously. Under a demand-based electricity pricing policy, this uncoordinated behavior can result in high electricity costs and expensive system operation. We previously presented Green Scheduling, a periodic scheduling approach for reducing the peak power demand of electric radiant heating systems while maintaining indoor thermal comfort. This paper develops an event-based state feedback scheduling strategy that, unlike periodic scheduling, directly takes disturbances into account and is thus better suited to building systems. The effectiveness of the new strategy is demonstrated through simulation in MATLAB.
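    As a rough illustration of the coordination idea behind such scheduling (not the paper's state feedback law), the sketch below caps the number of simultaneously active circulation pumps. The zone interface, hysteresis thresholds, and queueing policy are all assumptions.

```python
# Sketch of event-based coordination of zone circulation pumps: pumps are
# requested only when a zone crosses its comfort band, and a coordinator
# bounds how many pumps run at once to limit peak electric demand.

from dataclasses import dataclass
from collections import deque

@dataclass
class Zone:
    name: str
    setpoint: float      # desired temperature [deg C]
    deadband: float      # hysteresis half-width [deg C]
    temp: float          # current temperature [deg C]
    pump_on: bool = False

class EventCoordinator:
    def __init__(self, zones, max_active_pumps):
        self.zones = zones
        self.max_active = max_active_pumps   # peak-demand cap (assumed)
        self.queue = deque()                 # zones waiting for a pump slot

    def on_temperature_event(self, zone, new_temp):
        """Called only when a zone crosses its comfort band (event-based)."""
        zone.temp = new_temp
        if new_temp < zone.setpoint - zone.deadband and not zone.pump_on:
            self._request_pump(zone)
        elif new_temp > zone.setpoint + zone.deadband and zone.pump_on:
            self._release_pump(zone)

    def _request_pump(self, zone):
        active = sum(z.pump_on for z in self.zones)
        if active < self.max_active:
            zone.pump_on = True              # grant immediately
        else:
            self.queue.append(zone)          # defer to avoid a demand surge

    def _release_pump(self, zone):
        zone.pump_on = False
        if self.queue:                       # hand the freed slot to a waiting zone
            self.queue.popleft().pump_on = True

# Tiny usage example with invented numbers
zones = [Zone(f"zone{i}", setpoint=21.0, deadband=0.5, temp=21.0) for i in range(4)]
coord = EventCoordinator(zones, max_active_pumps=2)
coord.on_temperature_event(zones[0], 20.3)   # zone 0 drops below its band
```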

    A review of tools, models and techniques for long-term assessment of distribution systems using OpenDSS and parallel computing

    Many distribution system studies require long-term evaluations (e.g. for one year or more): energy loss minimization, reliability assessment, or optimal rating of distributed energy resources should be based on long-term simulations of the distribution system. This paper summarizes the work carried out by the authors to perform long-term studies of large distribution systems using an OpenDSS-MATLAB environment and parallel computing. The paper details the tools, models, and procedures used by the authors in the optimal allocation of distributed resources, reliability assessment of distribution systems with and without distributed generation, optimal rating of energy storage systems, and impact analysis of the solid-state transformer. Since in most cases the developed procedures were implemented for application on a multicore installation, a summary of the capabilities required for parallel computing applications is also included. The approaches chosen for carrying out those studies used the traditional Monte Carlo method, clustering techniques, or genetic algorithms. Custom-made models for application with OpenDSS were required in some studies: a summary of the characteristics of those models and their implementation is also included.
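    The parallel Monte Carlo pattern mentioned above can be sketched as follows. `run_yearly_simulation` is a hypothetical placeholder for code that drives OpenDSS (e.g. via its COM or opendssdirect interface) through a year of power flows; the sample size and dummy result are illustrative assumptions.

```python
# Sketch of parallelizing independent Monte Carlo runs of a year-long
# distribution-system simulation across CPU cores.

from multiprocessing import Pool
import random

def run_yearly_simulation(seed):
    rng = random.Random(seed)
    # Placeholder: sample yearly load/generation profiles with this rng,
    # solve 8760 hourly power flows in OpenDSS, and accumulate losses.
    annual_energy_loss_mwh = rng.uniform(900, 1100)   # dummy result
    return annual_energy_loss_mwh

if __name__ == "__main__":
    n_runs = 200                          # Monte Carlo sample size (assumed)
    with Pool() as pool:                  # one worker per available core
        losses = pool.map(run_yearly_simulation, range(n_runs))
    print(f"mean annual losses: {sum(losses) / len(losses):.1f} MWh")
```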

    Controlling coexisting attractors of an impacting system via linear augmentation

    This paper studies the control of coexisting attractors in an impacting system via a recently developed control law based on linear augmentation. Special attention is given to two control issues in the framework of multistable engineering systems, namely, switching between coexisting attractors without altering the system's main parameters and avoiding grazing-induced chaotic responses. The effectiveness of the proposed control scheme is confirmed numerically for the case of a periodically excited, soft impact oscillator. Our analysis shows how path-following techniques for non-smooth systems can be used to determine the optimal control parameters in terms of the energy expenditure due to the control signal and the transient behavior of the control error, an approach that can be applied to a broad range of engineering problems.
    The second author has been supported by the 'DRESDEN Fellowship Programm' of the TU Dresden.
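    A minimal sketch of the linear augmentation idea applied to a soft impact oscillator is given below. The oscillator model, the way the augmentation state u is fed back into the velocity equation, and every parameter value are assumptions chosen for illustration, not the model or gains studied in the paper.

```python
# Sketch of linear augmentation control of a soft impact oscillator: the
# nonlinear oscillator is coupled to an auxiliary linear state u, and tuning
# (k_aug, eps, b) can steer the response between coexisting attractors.

import numpy as np
from scipy.integrate import solve_ivp

# Soft impact oscillator parameters (assumed, nondimensional)
zeta, beta, gap = 0.01, 29.0, 1.26      # damping, impact stiffness ratio, gap
a, omega = 0.7, 0.8                     # forcing amplitude and frequency

# Linear augmentation parameters (assumed): decay k_aug, coupling eps, goal b
k_aug, eps, b = 1.0, 0.5, 0.0

def rhs(t, s):
    x, v, u = s
    impact = beta * (x - gap) * (x > gap)          # one-sided soft constraint
    dx = v
    dv = a * np.sin(omega * t) - 2*zeta*v - x - impact + eps * u
    du = -k_aug * u - eps * (x - b)                # augmentation dynamics
    return [dx, dv, du]

sol = solve_ivp(rhs, (0.0, 500.0), [0.1, 0.0, 0.0], max_step=0.05)
print("final state:", sol.y[:, -1])
```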

    Active actuator fault-tolerant control of a wind turbine benchmark model

    This paper describes the design of an active fault-tolerant control scheme applied to the actuator of a wind turbine benchmark. The methodology is based on adaptive filters obtained via the nonlinear geometric approach, which provides a useful decoupling property with respect to the uncertainty affecting the wind turbine system. The controller accommodation scheme exploits the on-line estimate of the actuator fault signal generated by the adaptive filters. The nonlinearity of the wind turbine model is described by the mapping from the tip-speed ratio and the blade pitch angles to the power conversion ratio. This mapping represents the aerodynamic uncertainty; it is usually not known in analytical form but is generally represented by approximate two-dimensional maps (i.e. look-up tables). Therefore, this paper suggests a scheme to estimate the power conversion ratio in analytical form by means of a two-dimensional polynomial, which is subsequently used for designing the active fault-tolerant control scheme. The wind turbine power generating unit of a grid is considered as a benchmark to illustrate the design procedure, including the nonlinear disturbance decoupling method, and to show the viability of the proposed approach. Extensive simulations of the benchmark process are used to assess the features of the developed actuator fault-tolerant control scheme in the presence of modelling and measurement errors. Comparisons with different fault-tolerant schemes highlight the advantages and drawbacks of the proposed methodology.
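    The polynomial approximation of the power conversion map can be illustrated with a least-squares fit over a look-up table. In the sketch below the table values, grid, and polynomial degree are invented placeholders, not the benchmark's actual power coefficient data.

```python
# Sketch of fitting a two-dimensional polynomial to a power-coefficient
# look-up table Cp(lambda, beta), yielding an analytical stand-in for the map.

import numpy as np

# Hypothetical look-up table grid: tip-speed ratio and pitch angle [deg]
tsr = np.linspace(2.0, 12.0, 21)
pitch = np.linspace(0.0, 20.0, 11)
L, P = np.meshgrid(tsr, pitch, indexing="ij")
Cp_table = 0.5 * np.exp(-((L - 8.0) / 4.0)**2) * np.exp(-P / 15.0)  # dummy data

def design_matrix(lam, beta, deg=4):
    """Monomials lam^i * beta^j with i + j <= deg."""
    cols = [lam**i * beta**j
            for i in range(deg + 1) for j in range(deg + 1 - i)]
    return np.column_stack(cols)

# Least-squares fit of the polynomial coefficients to the flattened table
A = design_matrix(L.ravel(), P.ravel())
coeffs, *_ = np.linalg.lstsq(A, Cp_table.ravel(), rcond=None)

def Cp_poly(lam, beta, deg=4):
    """Analytical (polynomial) approximation of the Cp map."""
    return design_matrix(np.atleast_1d(lam), np.atleast_1d(beta), deg) @ coeffs

print("max fit error:", np.max(np.abs(Cp_poly(L.ravel(), P.ravel()) - Cp_table.ravel())))
```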