
    A Holistic Approach to Forecasting Wholesale Energy Market Prices

    Electricity market price predictions enable energy market participants to shape their consumption or supply while meeting their economic and environmental objectives. By utilizing the basic properties of the supply-demand matching process performed by grid operators, known as Optimal Power Flow (OPF), we develop a methodology to recover the energy market's structure and predict the resulting nodal prices using only publicly available data, specifically the grid-wide generation type mix, system load, and historical prices. Our methodology uses the latest advancements in statistical learning to cope with high-dimensional and sparse real power grid topologies, as well as scarce public market data, while exploiting structural characteristics of the underlying OPF mechanism. Rigorous validation using Southwest Power Pool (SPP) market data reveals a strong correlation between the grid-level mix and the corresponding market prices, resulting in accurate day-ahead predictions of real-time prices. The proposed approach comes remarkably close to the state-of-the-art industry benchmark while assuming a fully decentralized, market-participant perspective. Finally, we recognize the limitations of the proposed and other evaluated methodologies in predicting large price spikes. (Comment: 14 pages, 14 figures. Accepted for publication in IEEE Transactions on Power Systems.)
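
    To make the idea concrete, a minimal sketch of such a prediction step might use a sparse linear model (Lasso) mapping grid-wide generation-mix and load features to nodal prices. The feature layout, data, and hyperparameters below are illustrative assumptions, not the paper's actual SPP pipeline.

```python
# Illustrative sketch only: a sparse linear model mapping public grid-level
# features (generation mix by fuel type, system load) to predictions of
# real-time nodal prices. Shapes and values are assumptions, not SPP data.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_hours, n_features, n_nodes = 2000, 12, 5   # hourly samples, mix+load features, price nodes

X = rng.random((n_hours, n_features))        # grid-wide generation mix fractions and system load
Y = X @ rng.normal(size=(n_features, n_nodes)) + 0.1 * rng.normal(size=(n_hours, n_nodes))

X_train, X_test, Y_train, Y_test = train_test_split(X, Y, shuffle=False)

# One sparse regression per price node; the L1 penalty copes with the
# high-dimensional, sparse structure the abstract refers to.
models = [Lasso(alpha=0.01).fit(X_train, Y_train[:, j]) for j in range(n_nodes)]
preds = np.column_stack([m.predict(X_test) for m in models])
print("mean absolute error per node:", np.abs(preds - Y_test).mean(axis=0))
```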

    McRunjob: A High Energy Physics Workflow Planner for Grid Production Processing

    McRunjob is a powerful grid workflow manager used to manage the generation of large numbers of production processing jobs in High Energy Physics. In use at both the DZero and CMS experiments, McRunjob has been used to manage large Monte Carlo production processing since 1999 and is being extended to regular production processing for analysis and reconstruction. Described at CHEP 2001, McRunjob converts core metadata into jobs submittable in a variety of environments. The powerful core metadata description language includes methods for converting the metadata into persistent forms, job descriptions, multi-step workflows, and data provenance information. The language features allow for structure in the metadata by including full expressions, namespaces, functional dependencies, site-specific parameters in a grid environment, and ontological definitions. It also has simple control structures for parallelization of large jobs. McRunjob features a modular design which allows for easy expansion to new job description languages or new application-level tasks. (Comment: CHEP 2003 serial number TUCT00.)
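
    The metadata-to-job conversion can be pictured with a small, hypothetical sketch: namespaced metadata rendered into a multi-step job script. The field names and script format below are invented for illustration and are not McRunjob's actual metadata language.

```python
# Hypothetical sketch of the core idea: turning structured, namespaced metadata
# into a concrete job description. Names and fields are invented for
# illustration and are not McRunjob's actual metadata language.
metadata = {
    "mc.generator":  {"executable": "pythia", "events": 10000, "seed": 42},
    "mc.simulation": {"executable": "geant", "input": "gen.out"},
    "site":          {"scratch_dir": "/scratch/run01"},
}

def to_job_script(meta: dict) -> str:
    """Render namespaced metadata as a simple multi-step shell job."""
    lines = ["#!/bin/sh", f"cd {meta['site']['scratch_dir']}"]
    for namespace, params in meta.items():
        if "executable" not in params:
            continue  # skip namespaces that do not describe a processing step
        args = " ".join(f"--{k} {v}" for k, v in params.items() if k != "executable")
        lines.append(f"{params['executable']} {args}")
    return "\n".join(lines)

print(to_job_script(metadata))
```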

    High-resolution mapping of GDP using multi-scale feature fusion by integrating remote sensing and POI data

    © 2024. High-resolution spatial distribution maps of GDP are essential for accurately analyzing economic development, industrial layout, and urbanization processes. However, the currently accessible gridded GDP datasets are limited in number and resolution. Furthermore, high-resolution GDP mapping remains a challenge due to the complex sectoral structure of GDP, which encompasses agriculture, industry, and services. Meanwhile, multi-source data with high spatial resolution can effectively reflect the level of regional economic development. Therefore, we propose a multi-scale fusion residual network (Res-FuseNet) designed to estimate GDP grid density by integrating remote sensing and POI data. Specifically, Res-FuseNet extracts multi-scale features of remote sensing and POI data relevant to the different sectors. It constructs a joint representation of the multi-source data through a fusion mechanism and accurately estimates GDP grid density for the three sectors using residual connections. Subsequently, high-resolution GDP grid data are obtained by correcting and overlaying the grid density of each sector using county-level statistical GDP data. A 100-meter gridded GDP map of the urban agglomeration in the middle reaches of the Yangtze River in 2020 was successfully generated using this method. The experimental results confirm that Res-FuseNet significantly outperforms machine learning models and the baseline model in training across the different sectors and at the town level. The R² values for the three sectors are 0.69, 0.91, and 0.99, respectively, while the town-level evaluation also exhibits high accuracy (R² = 0.75). Res-FuseNet provides an innovative high-resolution mapping method, and the generated high-resolution GDP grid data reveal the distribution characteristics of different sector structures and fine-scale economic disparities within cities, offering robust support for sustainable development.
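
    The correction-and-overlay step lends itself to a short sketch: per-sector grid densities are rescaled so that their sum within each county matches the county's statistical GDP, then the sectors are summed into a single grid. Array shapes and values below are assumptions, not the study's data.

```python
# Illustrative sketch of the correction-and-overlay step: model-predicted
# per-sector grid densities are rescaled to match county statistics, then
# overlaid into one GDP grid. Shapes and values are assumptions.
import numpy as np

rng = np.random.default_rng(1)
n_cells, n_sectors, n_counties = 10_000, 3, 20        # 100 m cells, three sectors, toy counties
county_id = rng.integers(0, n_counties, size=n_cells)  # county membership of each cell
pred_density = rng.random((n_cells, n_sectors))        # predicted GDP density per cell and sector
county_gdp = rng.random((n_counties, n_sectors)) * 1e9 # county-level statistical GDP per sector

corrected = np.zeros_like(pred_density)
for c in range(n_counties):
    mask = county_id == c
    for s in range(n_sectors):
        total = pred_density[mask, s].sum()
        if total > 0:
            # scale so that cell values within the county sum to the statistic
            corrected[mask, s] = pred_density[mask, s] * county_gdp[c, s] / total

gdp_grid = corrected.sum(axis=1)                       # overlay the three sectors
print("total mapped GDP:", gdp_grid.sum(), "vs statistics:", county_gdp.sum())
```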

    Hybrid Multicore/vectorisation technique applied to the elastic wave equation on a staggered grid

    In modern physics it has become common to find the solution of a problem by numerically solving a set of PDEs. Whether solving them on a finite difference grid or by a finite element approach, the main calculations are often applied to a stencil structure. In the last decade it has become usual to work with so-called big data problems where calculations are very heavy and accelerators and modern architectures are widely used. Although CPU and GPU clusters are often used to solve such problems, parallelisation of any calculation ideally starts from single-processor optimisation. Unfortunately, it is impossible to vectorise a stencil-structured loop with high-level instructions. In this paper we suggest a new approach to rearranging the data structure which makes it possible to apply high-level vectorisation instructions to a stencil loop and which results in significant acceleration. The suggested method allows further acceleration if shared-memory APIs are used. We show the effectiveness of the method by applying it to an elastic wave propagation problem on a finite difference grid. We have chosen the Intel architecture for the test problem and OpenMP (Open Multi-Processing), since they are extensively used in many applications.
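
    A toy one-dimensional analogue of the underlying idea, not the paper's actual data-layout transform: rewriting a stencil loop over contiguous shifted slices lets whole chunks of the array be processed with vector operations instead of one element per iteration.

```python
# Toy 1D analogue: an element-wise stencil loop versus the same update expressed
# over contiguous shifted slices, which maps naturally onto SIMD/vector hardware.
import numpy as np

u = np.random.rand(100_000)

def stencil_loop(u):
    out = np.empty_like(u)
    out[0], out[-1] = u[0], u[-1]
    for i in range(1, len(u) - 1):            # scalar loop: one element per iteration
        out[i] = 0.25 * u[i - 1] + 0.5 * u[i] + 0.25 * u[i + 1]
    return out

def stencil_vectorised(u):
    out = np.empty_like(u)
    out[0], out[-1] = u[0], u[-1]
    # contiguous shifted slices: the whole interior is updated with vector operations
    out[1:-1] = 0.25 * u[:-2] + 0.5 * u[1:-1] + 0.25 * u[2:]
    return out

assert np.allclose(stencil_loop(u), stencil_vectorised(u))
```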

    Secure Data Management and Transmission Infrastructure for the Future Smart Grid

    The power grid has played a crucial role since its inception in the Industrial Age. It has evolved from a wide network supplying energy to a number of incorporated areas into the largest cyber-physical system, and its security and reliability are crucial to any country's economy and stability [1]. With the emergence of new technologies and the growing pressure of global warming, the aging power grid can no longer meet the requirements of modern industry, which has led to the proposal of the 'smart grid'. In a smart grid, both electricity and control information flow through a massively distributed power network, so it is essential that the smart grid deliver real-time data over a communication network. Using smart meters, the Advanced Metering Infrastructure (AMI) can measure energy consumption, monitor loads, collect data and forward the information to collectors. The smart grid is an intelligent network that combines technologies not only from power engineering but also from information, telecommunications and control. Its best-known architecture is the three-layer structure, which divides the smart grid into three layers, each with its own duty; together they monitor and optimize the operations of all functional units from power generation to the end customers [2]. To enhance the security level of the future smart grid, deploying a highly secure data transmission scheme on critical nodes is an effective and practical approach. A critical node is a communication node in a cyber-physical network that can be developed to meet certain requirements; it also has firewalls and intrusion detection capability, making it suitable for a time-critical network system such as the future smart grid. The deployment of such a scheme can be tricky for different network topologies. A simple and general approach is to install it on every node in the network, i.e. to treat all nodes as critical nodes, but this costs time, energy and money and is clearly not the best solution. We therefore propose a multi-objective evolutionary algorithm to search for critical nodes; such a new scheme is needed for the smart grid. In addition, optimal planning in the power grid for embedding large systems can effectively ensure that every power station and substation operates safely and that anomalies are detected in time, making it a reliable way to meet increasing security challenges. The evolutionary framework finds an optimum without calculating the gradient of the objective function, while decomposition is used to explore solutions evenly in the decision space. Furthermore, constraint-handling techniques place critical nodes at optimal locations so as to enhance system security even under constraints of limited resources and/or hardware. The high-quality experimental results validate the efficiency and applicability of the proposed approach, and there is good reason to believe that the new algorithm is promising for real-world multi-objective optimization problems drawn from the power grid security domain. In this thesis, a cloud-based information infrastructure is proposed to deal with the big data storage and computation problems of the future smart grid, its challenges and limitations are addressed, and a new secure data management and transmission strategy addressing the increasing security challenges of the future smart grid is given as well.
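
    A minimal sketch of the critical-node search idea, under illustrative assumptions (a toy random graph, cost versus unprotected-links objectives, and a simple Pareto archive) rather than the thesis's actual decomposition-based algorithm:

```python
# Minimal multi-objective evolutionary sketch of critical-node placement:
# candidate solutions are subsets of communication nodes, trading off
# deployment cost against the number of unprotected links. Toy graph,
# objectives and operators are illustrative assumptions only.
import random

random.seed(0)
N = 30
edges = [(i, j) for i in range(N) for j in range(i + 1, N) if random.random() < 0.1]

def objectives(sol):
    cost = sum(sol)                                            # nodes to secure
    uncovered = sum(1 for a, b in edges if not (sol[a] or sol[b]))
    return cost, uncovered                                     # both minimised

def dominates(f, g):
    return all(x <= y for x, y in zip(f, g)) and f != g

def mutate(sol):
    child = sol[:]
    child[random.randrange(N)] ^= 1                            # flip one node in or out
    return child

population = [[random.randint(0, 1) for _ in range(N)] for _ in range(40)]
for _ in range(200):                                           # gradient-free evolutionary loop
    population.append(mutate(random.choice(population)))
    fits = [objectives(s) for s in population]
    # keep only non-dominated solutions (a simple Pareto archive)
    population = [s for s, f in zip(population, fits)
                  if not any(dominates(g, f) for g in fits)]

print(sorted(objectives(s) for s in population))
```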

    The connection between migration and regional structure in Finland around 1990 - a GIS viewpoint

    The connection between migration and regional structure in Finland in the early 1990s is discussed on the basis of Geographic Information Systems (GIS) data from Statistics Finland, compiled for map coordinate grid cells of 1 x 1 km. The results indicate that data of this kind enable a more detailed typology to be drawn up for migration. At the regional level, this allows the defining of places of "passing through", which gain population from other local government districts but lose population through migration within their own district. The connection between migration and regional structure is manifested in the fact that flows both between and within local government districts mainly involve the more urbanised population centres and areas with high levels of unemployment.

    Optimization of a Parallel CFD Code and Its Performance Evaluation on Tianhe-1A

    This paper describes performance tuning experiences with a parallel CFD code to enhance its performance and flexibility on large-scale parallel computers. The code solves the incompressible Navier-Stokes equations based on the novel Slightly Compressible Model on three-dimensional structured grids. High-level loop transformations and argument-based code specialization are used to optimize its uniprocessor performance. Static arrays are converted into dynamically allocated arrays to improve flexibility. The grid generator is coupled with the flow solver so that they can exchange grid data in memory. A detailed performance evaluation is performed. The results show that our uniprocessor optimizations improve the performance of the flow solver by a factor of 1.38 to 3.93 on the Tianhe-1A supercomputer. The in-memory grid data exchange optimization speeds up application startup by nearly two orders of magnitude. The optimized code exhibits excellent parallel scalability on realistic test cases. On 4,096 CPU cores, it achieves a strong-scaling parallel efficiency of 77.39% and a maximum performance of 4.01 Tflops.
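
    For reference, one common definition of the reported strong-scaling figure is E(p) = T(1) / (p · T(p)) for a fixed problem size; the timings in the snippet below are hypothetical placeholders chosen only to reproduce the quoted 77.39%, not measurements from Tianhe-1A.

```python
# Strong-scaling parallel efficiency: fraction of ideal speedup achieved on
# `cores` cores for a fixed problem size. Timings below are hypothetical.
def strong_scaling_efficiency(t_baseline: float, t_parallel: float, cores: int) -> float:
    """E(p) = T(1) / (p * T(p))."""
    return t_baseline / (cores * t_parallel)

# e.g. a hypothetical run: baseline takes 3170 s, 4096 cores take 1.0 s
print(f"{strong_scaling_efficiency(3170.0, 1.0, 4096):.2%}")   # -> 77.39%
```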