Towards the integration of modern power systems into a cyber–physical framework
The cyber–physical system (CPS) architecture provides a novel framework for analyzing and expanding research and innovation results that are essential in managing, controlling and operating complex, large-scale industrial systems under a holistic insight. Power systems constitute such characteristically large industrial structures. The main challenge in deploying a power system as a CPS lies in how to combine and incorporate multi-disciplinary, core, and advanced technologies into the social, environmental, economic and engineering aspects specific to this case. In order to substantially contribute towards this target, in this paper a specific CPS scheme is proposed that clearly describes how a dedicated cyber layer is deployed to manage and interact with comprehensive multiple physical layers, like those found in a large-scale modern power system architecture. In particular, the measurement, communication, computation, control mechanisms, and tools installed at different hierarchical frames that are required to consider and modulate the social/environmental necessities, as well as the electricity market management, the regulation of the electric grid, and the power injection/absorption of the controlled main devices and distributed energy resources, are all incorporated in a common CPS framework. Furthermore, a methodology for investigating and analyzing the dynamics of the different levels of the CPS architecture (from physical devices, electricity and communication networks to market, environmental and social mechanisms) is provided, together with the necessary modelling tools and the assumptions made in order to close the loop between the physical and the cyber layers. An example of a real-world industrial micro-grid that describes the main aspects of the proposed CPS-based design for modern electricity grids is also presented at the end of the paper to further explain and visualize the proposed framework.
IndexMAC: A Custom RISC-V Vector Instruction to Accelerate Structured-Sparse Matrix Multiplications
Structured sparsity has been proposed as an efficient way to prune the complexity of modern Machine Learning (ML) applications and to simplify the handling of sparse data in hardware. The acceleration of ML models - for both training and inference - relies primarily on equivalent matrix multiplications that can be executed efficiently on vector processors or custom matrix engines. The goal of this work is to incorporate the simplicity of structured sparsity into vector execution, thereby accelerating the corresponding matrix multiplications. Toward this objective, a new vector index-multiply-accumulate instruction is proposed, which enables the implementation of low-cost indirect reads from the vector register file. This reduces unnecessary memory traffic and increases data locality. The proposed new instruction was integrated into a decoupled RISC-V vector processor with negligible hardware cost. Extensive evaluation demonstrates significant speedups of 1.80x-2.14x, as compared to state-of-the-art vectorized kernels, when executing layers of varying sparsity from state-of-the-art Convolutional Neural Networks (CNNs).
Comment: DATE 202
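The indirect-read semantics of such an index-multiply-accumulate can be sketched in software. The following is a minimal illustration of the idea (not the paper's ISA encoding or microarchitecture), assuming a compressed per-row storage of nonzero values together with their column indices:

```python
import numpy as np

def index_mac_row(values, indices, dense_col):
    """Emulate the index-multiply-accumulate: each stored nonzero gathers
    its operand through an indirect read (its column index), then MACs."""
    acc = 0.0
    for v, idx in zip(values, indices):
        acc += v * dense_col[idx]      # indirect read + multiply-accumulate
    return acc

def sparse_matmul(sp_values, sp_indices, B):
    """Structured-sparse A (per-row nonzero values + indices) times dense B."""
    C = np.zeros((len(sp_values), B.shape[1]))
    for i in range(len(sp_values)):
        for j in range(B.shape[1]):
            C[i, j] = index_mac_row(sp_values[i], sp_indices[i], B[:, j])
    return C

# A = [[0, 2, 0, 3], [1, 0, 4, 0]] stored with 2 nonzeros per row of 4
sp_values  = [[2.0, 3.0], [1.0, 4.0]]
sp_indices = [[1, 3], [0, 2]]
B = np.arange(8.0).reshape(4, 2)
C = sparse_matmul(sp_values, sp_indices, B)
```

Only the stored nonzeros are ever multiplied, and the zero entries of A are never fetched, which is the source of the reduced memory traffic.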
Bidirectional dc/dc power converters with current limitation based on nonlinear control design
A new nonlinear controller for bidirectional dc/dc power converters that guarantees output voltage regulation with an inherent current limitation is proposed in this paper. In contrast to traditional single or cascaded PI controllers with a saturation unit, which can lead to integrator windup and instability, the proposed controller is based on a rigorous nonlinear mathematical analysis and, using Lyapunov stability theory, it is proven that the current of the converter is always limited without the need for additional saturation units or limiters. The proposed concept introduces a virtual resistance at the input of the converter and a controllable voltage that can take both positive and negative values, leading to bidirectional power flow capability. The dynamics of this voltage are proven to remain bounded and, with a suitable choice of the voltage bound and the virtual resistance, the upper limit for the converter current is guaranteed at all times, even during transients. Simulation results for a bidirectional converter equipped with the proposed controller are presented to verify the current-limiting capability and the desired voltage regulation.
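The current-limiting idea can be illustrated with a simplified first-order input model, L·di/dt = V_in − e − r_v·i, where r_v is the virtual resistance and the controllable voltage e is saturated to ±e_max; the bound i ≤ (V_in + e_max)/r_v then follows. All parameter values below are illustrative, not taken from the paper:

```python
import numpy as np

# Illustrative parameters (not taken from the paper)
V_in, L = 48.0, 1e-3           # source voltage [V], inductance [H]
r_v, e_max = 2.0, 20.0         # virtual resistance [ohm], voltage bound [V]
i_max = (V_in + e_max) / r_v   # analytic current limit: 34 A

def simulate(e_cmd, t_end=0.05, dt=1e-6):
    """Forward-Euler integration of L di/dt = V_in - e - r_v*i, with the
    controllable voltage e saturated to [-e_max, e_max]."""
    i, trace = 0.0, []
    for _ in range(int(t_end / dt)):
        e = np.clip(e_cmd, -e_max, e_max)
        i += dt * (V_in - e - r_v * i) / L
        trace.append(i)
    return np.array(trace)

# Worst case for the current: the most negative admissible voltage.
# The current settles at the analytic bound and never exceeds it.
trace = simulate(e_cmd=-100.0)
```

Because the bound comes from the structure of the dynamics rather than from clipping the current itself, no separate limiter is needed, which mirrors the claim in the abstract.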
Feature extraction and identification techniques for the alignment of perturbation simulations with power plant measurements
In this work, a methodology is proposed for the comparison of measured and simulated neutron noise signals in nuclear power plants, with the simulation sets having been generated by the CORE SIM+ diffusion-based reactor noise simulator. More specifically, the method relies on the computation of the Cross-Power Spectral Density of the detector signals and its subsequent comparison with the simulated counterparts, which involves specific frequency values corresponding to the signals’ high energy content. The different simulated perturbations considered are (i) axially traveling perturbations, (ii) fuel assembly vibrations, (iii) core barrel vibrations, and finally (iv) generic “absorber of variable strength” types. The reactor core used for the current study is a German 4-loop pre-Konvoi Pressurized Water Reactor.
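As a rough illustration of the core quantity, the Cross-Power Spectral Density of two detector-like signals can be computed with Welch averaging; the sampling rate and the shared 1 Hz component below are synthetic stand-ins, not plant data or CORE SIM+ output:

```python
import numpy as np
from scipy.signal import csd

fs = 62.5                       # sampling rate [Hz] (illustrative)
t = np.arange(0, 200, 1 / fs)
rng = np.random.default_rng(0)

# Two synthetic "detector" signals sharing a 1 Hz component, standing in
# for a measured signal and its simulated counterpart.
common = np.sin(2 * np.pi * 1.0 * t)
x = common + 0.5 * rng.standard_normal(t.size)
y = 0.8 * common + 0.5 * rng.standard_normal(t.size)

# Cross-Power Spectral Density via Welch averaging; the CPSD peak marks
# the shared high-energy frequency used for the comparison.
f, Pxy = csd(x, y, fs=fs, nperseg=1024)
f_peak = f[np.argmax(np.abs(Pxy))]
```

The comparison described in the abstract would then be carried out at such high-energy frequencies, measured versus simulated.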
Value creation from M&As: New evidence
M&A deals create more value for acquiring-firm shareholders post-2009 than ever before. Public acquisitions fuel positive and statistically significant abnormal returns for acquirers, while stock-for-stock deals no longer destroy value. Mega deals, priced at least 62mil around the announcement of such deals; a 542mil pointing to overall value creation from M&As on a large scale. Our results are robust to different measures and controls and appear to be linked with profound improvements in the quality of corporate governance among acquiring firms in the aftermath of the 2009 financial crisis.
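The announcement abnormal returns discussed above are conventionally measured with a market-model event study; the sketch below uses synthetic returns and illustrative parameters, not the paper's sample or results:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic daily returns: a market index, and an acquirer whose 3-day
# announcement window carries an injected abnormal component.
mkt = 0.0005 + 0.01 * rng.standard_normal(250)
firm = 0.0001 + 1.1 * mkt + 0.008 * rng.standard_normal(250)
event = slice(245, 248)                 # (-1, +1) announcement window
firm[event] += 0.01                     # +1% abnormal return per day

# Market-model parameters estimated on the pre-event period
est = slice(0, 200)
beta, alpha = np.polyfit(mkt[est], firm[est], 1)

# Abnormal returns and the cumulative abnormal return (CAR)
ar = firm[event] - (alpha + beta * mkt[event])
car = ar.sum()
```

Averaging such CARs across a sample of deals, and comparing pre- and post-2009 subsamples, is the standard route to statements like those in the abstract.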
Transnasal endoscopy: no gagging no panic!
BACKGROUND: Transnasal endoscopy (TNE) is performed with an ultrathin scope via the nasal passages and is increasingly used. This review covers the technical characteristics, tolerability, safety and acceptability of TNE, as well as its diagnostic accuracy, use as a screening tool and therapeutic applications. It includes practical advice from an ear, nose and throat (ENT) specialist to optimise TNE practice, identify ENT pathology and manage complications. METHODS: A Medline search was performed using the terms “transnasal”, “ultrathin”, “small calibre”, “endoscopy”, “EGD” to identify relevant literature. RESULTS: There is increasing evidence that TNE is better tolerated than standard endoscopy as measured using visual analogue scales; the main area of discomfort is nasal, during insertion of the TN endoscope, which seems remediable with adequate topical anaesthesia. The diagnostic yield has been found to be similar for detection of Barrett's oesophagus, gastric cancer and GORD-associated diseases. There are some potential issues regarding the accuracy of TNE in detecting small early gastric malignant lesions, especially those in the proximal stomach. TNE is feasible and safe in a primary care population and is ideal for screening for upper gastrointestinal pathology. It has an advantage as a diagnostic tool in the elderly and those with multiple comorbidities due to fewer adverse effects on the cardiovascular system. It has significant advantages for therapeutic procedures, especially negotiating upper oesophageal strictures and insertion of nasoenteric feeding tubes. CONCLUSIONS: TNE is well tolerated and a valuable diagnostic tool. Further evidence is required to establish its accuracy for the diagnosis of early and small gastric malignancies. There is an emerging role for TNE in therapeutic endoscopy, which needs further study.
Structure of nanoparticles embedded in micellar polycrystals
We investigate by scattering techniques the structure of water-based soft composite materials comprising a crystal made of Pluronic block-copolymer micelles arranged in a face-centered cubic lattice and a small amount (at most 2% by volume) of silica nanoparticles, of size comparable to that of the micelles. The copolymer is thermosensitive: it is hydrophilic and fully dissolved in water at low temperature (T ~ 0°C), and self-assembles into micelles at room temperature, where the block-copolymer is amphiphilic. We use contrast-matching small-angle neutron scattering experiments to probe independently the structure of the nanoparticles and that of the polymer. We find that the nanoparticles do not perturb the crystalline order. In addition, a structure peak is measured for the silica nanoparticles dispersed in the polycrystalline samples. This implies that the samples are spatially heterogeneous and comprise, without macroscopic phase separation, silica-poor and silica-rich regions. We show that the nanoparticle concentration in the silica-rich regions is about tenfold the average concentration. These regions are grain boundaries between crystallites, where the nanoparticles concentrate, as shown by static light scattering and by light microscopy imaging of the samples. We show that the temperature rate at which the sample is prepared strongly influences the segregation of the nanoparticles in the grain boundaries.
Comment: accepted for publication in Langmuir
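Contrast matching works by tuning the solvent's scattering length density (SLD) to that of one component so that it becomes invisible to neutrons. A minimal sketch for silica in an H2O/D2O mixture, using approximate literature SLD values (not values taken from the paper):

```python
# Approximate neutron scattering length densities, in 1e-6 A^-2
# (standard literature values, not from the paper):
sld_sio2, sld_h2o, sld_d2o = 3.47, -0.56, 6.36

# Linear mixing: x * sld_d2o + (1 - x) * sld_h2o = sld_sio2
x_d2o = (sld_sio2 - sld_h2o) / (sld_d2o - sld_h2o)
# roughly 58% D2O matches the silica, leaving only the polymer visible
```

Running the experiment at the silica match point isolates the polymer structure, and vice versa at the polymer match point, which is how the two structures are probed independently.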
Pricing Rainfall Based Futures Using Genetic Programming
Rainfall derivatives are in their infancy, having started trading on the Chicago Mercantile Exchange (CME) in 2011. Being a relatively new class of financial instruments, they have no generally recognised pricing framework within the literature. In this paper, we propose a novel framework for pricing contracts using Genetic Programming (GP). Our framework generates a risk-neutral density of the rainfall predictions produced by GP, supported by Markov chain Monte Carlo and the Esscher transform. Moreover, instead of having a single rainfall model for all contracts, we propose a separate rainfall model for each contract. We compare our framework, with and without the proposed contract-specific models, against the pricing performance of the two most commonly used methods, namely Markov chain extended with rainfall prediction (MCRP) and burn analysis (BA), across contracts available on the CME. Our goal is twofold: (i) to show that by improving the predictive accuracy of the rainfall process, the accuracy of pricing also increases; and (ii) to show that contract-specific models can further improve pricing accuracy. Results show that both goals are met, as GP is capable of pricing rainfall futures contracts closer to the CME than MCRP and BA. This shows that our novel framework for using GP is successful, which is a significant step forward in pricing rainfall derivatives.
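A minimal sketch of the Esscher-transform step: reweighting Monte Carlo rainfall samples into a risk-neutral measure and pricing a hypothetical call-type contract. The distribution, parameter values and payoff below are illustrative assumptions, not the paper's models:

```python
import numpy as np

rng = np.random.default_rng(42)

# Monte Carlo samples of a seasonal rainfall index (gamma-distributed here
# purely for illustration, standing in for the GP/MCMC predictions)
rain = rng.gamma(shape=4.0, scale=25.0, size=100_000)    # mean ~100 mm

theta, r, T = -0.002, 0.02, 0.5   # Esscher parameter, rate, maturity (assumed)

# Esscher transform: tilt the physical density by exp(theta * x) and
# renormalise, yielding risk-neutral weights for each sample.
w = np.exp(theta * rain)
w /= w.sum()

def price_call(strike, tick=1.0):
    """Discounted risk-neutral expectation of a call-type rainfall payoff."""
    payoff = tick * np.maximum(rain - strike, 0.0)
    return np.exp(-r * T) * np.sum(w * payoff)

price = price_call(strike=100.0)
```

A negative theta shifts probability mass toward low rainfall, so the risk-neutral mean sits below the physical mean; the choice of theta encodes the market price of rainfall risk.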
Investigation of aggregation effects in vegetation condition monitoring at a national scale
Monitoring vegetation condition is an important issue in the Mediterranean region, in terms of both securing food and preventing fires. Vegetation indices (VIs), mathematical transformations of reflectance bands, have played an important role in vegetation monitoring, as they depict the abundance and health of vegetation. Instead of storing raster VI maps, aggregated statistics can be derived and used in long-term monitoring. The aggregation schemes (zonations) used in Greece are the forest service units, the fire service units and the administrative units. The purpose of this work was to explore the effect of the Modifiable Areal Unit Problem (MAUP) in vegetation condition monitoring at the above-mentioned aggregation schemes, using 16-day Normalized Difference Vegetation Index (NDVI) composites acquired by the MODIS (Moderate Resolution Imaging Spectroradiometer) satellite sensor. The effects of aggregation in the context of the MAUP were examined by analyzing variance, from which the among-polygon variation (objects' heterogeneity) and the within-polygon variation (pixels' homogeneity) were derived. Significant differences in objects' heterogeneity were observed when aggregating at the three aggregation schemes; therefore, there is a MAUP effect in monitoring vegetation condition on a nationwide scale in Greece with NDVI. Monitoring using the fire service units yields significantly higher pixels' homogeneity, so there is indication that it is the most appropriate scheme for monitoring vegetation condition on a nationwide scale in Greece with NDVI. Results were consistent between the two major types of vegetation, natural and agricultural. According to the statistical validation, conclusions based on the examined years (2003 and 2004) are justified.
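The among-polygon versus within-polygon split described above is a one-way ANOVA variance decomposition; a minimal sketch on synthetic NDVI pixels and zones (all values illustrative, not MODIS data):

```python
import numpy as np

def variance_decomposition(ndvi, zone_id):
    """One-way ANOVA split of total NDVI variability into among-polygon
    (objects' heterogeneity) and within-polygon (pixels' homogeneity)
    sums of squares for a given aggregation scheme."""
    grand = ndvi.mean()
    among = within = 0.0
    for z in np.unique(zone_id):
        vals = ndvi[zone_id == z]
        among += vals.size * (vals.mean() - grand) ** 2
        within += ((vals - vals.mean()) ** 2).sum()
    return among, within

# Synthetic pixels from three zones with distinct mean NDVI (illustrative)
rng = np.random.default_rng(7)
zone_id = np.repeat([0, 1, 2], 500)
ndvi = np.concatenate([rng.normal(m, 0.05, 500) for m in (0.3, 0.5, 0.7)])

among, within = variance_decomposition(ndvi, zone_id)
```

Comparing these two components across the three zonations is what reveals which scheme yields the most homogeneous polygons, and hence the MAUP effect.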