
    Application of the Probabilistic Technologies to Power Plant Design

    The report presents an approach to the selection and justification of power equipment, current-carrying parts and switching devices using the existing and proposed probabilistic method of selection of the borders of input and output data (SBID). The SBID method yields the complete probabilistic characteristics, or probability distribution laws (PDL), of output quantities expressed as functional dependencies on their arguments (the input data), given the probabilistic characteristics of those arguments. Any task, including the calculation of electrical quantities in operating modes and transient conditions of a power system, can be expressed as such a dependence on the input data. Applying the SBID method to these dependencies yields the PDL of the resulting quantities. These distributions make it possible to calculate the risks of overload and destruction of power components in operating modes, so that the parameters of power equipment, current-carrying parts and switching devices can be selected on the basis of minimum specified risks.
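
    The SBID method itself derives the output distribution laws from the borders of the input distributions; the sketch below illustrates the same propagation idea with a plain Monte-Carlo approximation in Python. The load current, ambient temperature, heating model and temperature limit are hypothetical placeholders, not values from the report.

        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical input data: load current and ambient temperature,
        # each with an assumed probability distribution.
        load_current_a = rng.normal(loc=850.0, scale=60.0, size=100_000)  # A
        ambient_temp_c = rng.normal(loc=30.0, scale=5.0, size=100_000)    # deg C

        def conductor_temperature(i_a, t_amb):
            """Illustrative functional dependence of an output quantity on the input data."""
            return t_amb + 4.0e-5 * i_a ** 2  # simplified heating model

        # Empirical probability distribution (PDL) of the output quantity.
        t_cond = conductor_temperature(load_current_a, ambient_temp_c)

        # Risk of overload = probability that the output exceeds an equipment limit.
        limit_c = 70.0
        overload_risk = np.mean(t_cond > limit_c)
        print(f"P(conductor temperature > {limit_c} C) = {overload_risk:.4f}")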

    Active power management of islanded interconnected distributed generation

    Abstract: This paper proposes an active power management scheme for distributed generation operating in islanded mode. A power system is complex in its constitution, operation and management. Because of the scarcity of energy sources and the growing demand in most electrical power systems worldwide, the exploitation of renewable, weather-dependent resources continues to attract research. When distributed generation operates in islanded mode, without a connection to the main grid, its operation and control become more difficult and uncertain because of this dependence on the weather. Using optimisation theory, this paper addresses the management of interconnected microgrids operating in islanded mode. Matlab is used to solve all of the optimisation problems.
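
    A minimal sketch of the kind of optimisation involved: economic dispatch of active power across a few interconnected microgrid sources supplying an islanded load. The paper formulates and solves its problems in Matlab; the Python/SciPy linear program below, with made-up costs, limits and load, only illustrates the structure of such a problem.

        import numpy as np
        from scipy.optimize import linprog

        # Hypothetical data for three interconnected microgrid sources
        # operating in islanded mode (illustrative values only).
        cost = np.array([40.0, 55.0, 30.0])   # generation cost, $/MWh
        p_max = np.array([2.0, 1.5, 1.0])     # available active power, MW
        p_min = np.zeros(3)
        total_load = 3.2                      # islanded demand, MW

        # Minimise total cost subject to power balance and unit limits.
        res = linprog(
            c=cost,
            A_eq=np.ones((1, 3)), b_eq=[total_load],
            bounds=list(zip(p_min, p_max)),
            method="highs",
        )
        print("dispatch (MW):", res.x, "total cost:", res.fun)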

    Deep neural network configuration sensitivity analysis in wind power forecasting

    The trend toward increasing integration of wind farms into the power system is a challenge for transmission and distribution system operators and electricity market operators. The variability of electricity generation from wind farms increases the flexibility required for the reliable and stable operation of the power system. Operating a power system with a high share of renewables requires advanced generation and consumption forecasting methods to ensure reliable and economical operation. Installed wind power capacities require advanced techniques to monitor and control such data-rich power systems. The rapid development of advanced artificial neural networks and data processing capabilities offers numerous potential applications. The effectiveness of deep recurrent neural networks with long short-term memory in learning complex temporal sequence-to-sequence dependencies is continually being demonstrated. This paper presents the application of deep learning methods to wind power production forecasting. The models are trained on historical wind farm generation measurements and numerical weather prediction (NWP) forecasts for the areas of Croatian wind farms. Furthermore, the accuracy of the proposed models is compared with that of currently used forecasting tools.
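
    As a concrete illustration, a forecaster of the kind described can be built from a few recurrent layers. The Keras/TensorFlow sketch below uses hypothetical dimensions and random placeholder arrays in place of the historical measurements and NWP forecasts; it is not the paper's actual model configuration.

        import numpy as np
        import tensorflow as tf

        # Hypothetical shapes: 48 past hours of features (measured power plus
        # NWP variables) mapped to the next 24 hours of wind power.
        n_lookback, n_features, n_horizon = 48, 6, 24

        model = tf.keras.Sequential([
            tf.keras.layers.Input(shape=(n_lookback, n_features)),
            tf.keras.layers.LSTM(64, return_sequences=True),
            tf.keras.layers.LSTM(32),
            tf.keras.layers.Dense(n_horizon),  # normalised power per hour ahead
        ])
        model.compile(optimizer="adam", loss="mae")

        # Placeholder data standing in for historical SCADA and NWP records.
        x = np.random.rand(1000, n_lookback, n_features).astype("float32")
        y = np.random.rand(1000, n_horizon).astype("float32")
        model.fit(x, y, epochs=2, batch_size=32, verbose=0)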

    A Subset of the CERN Virtual Machine File System: Fast Delivering of Complex Software Stacks for Supercomputing Resources

    Delivering a reproducible environment along with complex and up-to-date software stacks on thousands of distributed and heterogeneous worker nodes is a critical task. The CernVM File System (CVMFS) has been designed to help various communities deploy software on worldwide distributed computing infrastructures by decoupling the software from the operating system. However, installing this file system requires the collaboration of the system administrators of the remote resources and HTTP connectivity to fetch dependencies from external sources. Supercomputers, which offer tremendous computing power, generally have more restrictive policies than grid sites and do not easily provide the conditions needed to exploit CVMFS. Different solutions have been developed to tackle the issue, but they are often specific to a scientific community and do not address the problem as a whole. In this paper, we provide a generic utility to assist any community in installing complex software dependencies on supercomputers with no external connectivity. The approach consists in capturing the dependencies of the applications of interest, building a subset of those dependencies, testing it in a given environment, and deploying it to a remote computing resource. We demonstrate this approach on a real use case by exporting Gauss, a Monte-Carlo simulation program from the LHCb experiment, to Mare Nostrum, one of the top supercomputers in the world. We provide steps to encapsulate the minimum required files and deliver a light and easy-to-update subset of CVMFS: 12.4 gigabytes instead of the 5.2 terabytes of the whole LHCb repository.
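
    The capture-and-subset step can be pictured with a short script: given a trace of the files an application actually opened under the CVMFS mount point, copy them into a standalone tree that can be shipped to a machine without external connectivity. The paths and trace file below are hypothetical, and the actual utility described in the paper does considerably more (catalogue handling, testing and deployment).

        import shutil
        from pathlib import Path

        # Hypothetical inputs: an access trace (one absolute path per line)
        # collected while running the application, and a target subset tree.
        cvmfs_root = Path("/cvmfs/lhcb.cern.ch")
        trace_file = Path("accessed_paths.txt")
        subset_root = Path("cvmfs_subset/lhcb.cern.ch")

        for line in trace_file.read_text().splitlines():
            src = Path(line.strip())
            if not src.is_file() or not src.is_relative_to(cvmfs_root):
                continue
            dst = subset_root / src.relative_to(cvmfs_root)
            dst.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dst, follow_symlinks=False)  # keep symlinks as links

        # The resulting tree can then be archived and unpacked on the worker
        # node, e.g. bind-mounted at /cvmfs, with no HTTP access required.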

    The H.E.S.S. central data acquisition system

    The High Energy Stereoscopic System (H.E.S.S.) is a system of Imaging Atmospheric Cherenkov Telescopes (IACTs) located in the Khomas Highland in Namibia. It measures cosmic gamma rays of very high energies (VHE; >100 GeV) using the Earth's atmosphere as a calorimeter. The H.E.S.S. Array entered Phase II in September 2012 with the inauguration of a fifth telescope that is larger and more complex than the other four. This paper gives an overview of the current H.E.S.S. central data acquisition (DAQ) system, with particular emphasis on the upgrades made to integrate the fifth telescope into the array. First, the various requirements for the central DAQ are discussed; then the general design principles employed to fulfil these requirements are described. Finally, the performance, stability and reliability of the H.E.S.S. central DAQ are presented. One of the major accomplishments is that less than 0.8% of observation time has been lost due to central DAQ problems since 2009. Comment: 17 pages, 8 figures, published in Astroparticle Physics.

    A Validation Framework for the Long Term Preservation of High Energy Physics Data

    The study group on data preservation in high energy physics, DPHEP, is moving to a new collaboration structure, which will focus on the implementation of preservation projects such as those described in the group's large-scale report published in 2012. One such project is the development of a validation framework, which checks the compatibility of evolving computing environments and technologies with the experiments' software for as long as possible, with the aim of substantially extending the lifetime of the analysis software and hence the usability of the data. The framework is designed to automatically test and validate the software and data of an experiment against changes and upgrades to the computing environment, as well as changes to the experiment software itself. Technically, this is realised using a framework capable of hosting a number of virtual machine images, built with different configurations of operating systems and the relevant software, including any necessary external dependencies. Comment: Proceedings of a poster presented at CHEP 2013, Amsterdam, October 14-18, 2013.
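
    The control flow of such a framework can be sketched as: for each preserved environment image, instantiate it, run the experiment's validation suite against reference results, and record whether it still passes. The image names, test script and use of containers below are hypothetical stand-ins; the framework described in the paper hosts virtual machine images rather than containers.

        import subprocess

        # Hypothetical environment images and validation command.
        environments = [
            "slc5-experiment-sw-2010",
            "slc6-experiment-sw-2012",
            "centos7-experiment-sw-2016",
        ]
        validation_cmd = ["./run_analysis_tests.sh", "--reference", "benchmark_2012"]

        results = {}
        for image in environments:
            # Start the environment (container stand-in) and run the suite
            # against the preserved data and reference results.
            proc = subprocess.run(
                ["docker", "run", "--rm", image, *validation_cmd],
                capture_output=True, text=True,
            )
            results[image] = "OK" if proc.returncode == 0 else "FAILED"

        for image, status in results.items():
            print(f"{image}: {status}")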