Compressing networks with super nodes
Community detection is a commonly used technique for identifying groups in a
network based on similarities in connectivity patterns. To facilitate community
detection in large networks, we recast the network to be partitioned into a
smaller network of 'super nodes', each super node comprising one or more nodes
in the original network. To define the seeds of our super nodes, we apply the
'CoreHD' ranking from dismantling and decycling. We test our approach through
the analysis of two common methods for community detection: modularity
maximization with the Louvain algorithm and maximum likelihood optimization for
fitting a stochastic block model. Our results highlight that applying community
detection to the compressed network of super nodes is significantly faster
while successfully producing partitions that are more aligned with the local
network connectivity, more stable across multiple (stochastic) runs within and
between community detection algorithms, and overlap well with the results
obtained using the full network.
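The compression step described above can be sketched in a few lines of plain Python. This is our own minimal illustration, not the authors' implementation: seeds are chosen by a simplified CoreHD-style ranking (repeatedly taking the highest-degree node of the current 2-core), the remaining nodes are attached to their nearest seed by multi-source breadth-first search, and each group is contracted into one super node. Function names, tie-breaking, and the BFS assignment rule are all assumptions made for the sketch.

```python
from collections import defaultdict

def core_hd_seeds(adj, n_seeds):
    """Pick super-node seeds with a simplified CoreHD-style ranking:
    repeatedly take the highest-degree node of the current 2-core."""
    adj = {u: set(vs) for u, vs in adj.items()}  # work on a copy
    seeds = []
    for _ in range(n_seeds):
        while True:  # prune to the 2-core: drop nodes of degree < 2
            low = [u for u, vs in adj.items() if len(vs) < 2]
            if not low:
                break
            for u in low:
                for v in adj.pop(u):
                    if v in adj:
                        adj[v].discard(u)
        if not adj:
            break
        u = max(adj, key=lambda n: len(adj[n]))  # highest degree in 2-core
        seeds.append(u)
        for v in adj.pop(u):
            adj[v].discard(u)
    return seeds

def compress(adj, seeds):
    """Grow super nodes from the seeds by multi-source BFS, then contract
    each group; returns node -> seed labels and the super-node graph."""
    label = {s: s for s in seeds}
    frontier = list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in adj[u]:
                if v not in label:
                    label[v] = label[u]
                    nxt.append(v)
        frontier = nxt
    super_adj = defaultdict(set)
    for u, vs in adj.items():
        for v in vs:
            if label[u] != label[v]:  # keep only edges between groups
                super_adj[label[u]].add(label[v])
    return label, dict(super_adj)
```

Any community detection method (Louvain, SBM fitting) would then be run on the much smaller super-node graph returned by `compress`.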
A new method for the spectroscopic identification of stellar non-radial pulsation modes. I. The method and numerical tests
We present the Fourier parameter fit method, a new method for
spectroscopically identifying stellar radial and non-radial pulsation modes
based on the high-resolution time-series spectroscopy of absorption-line
profiles. In contrast to previous methods, this one permits a quantification of
the statistical significance of the computed solutions. The application of
genetic algorithms in seeking solutions makes it possible to search through a
large parameter space. The mode identification is carried out by minimizing
chi-square, using the observed amplitude and phase across the line profile and
their modeled counterparts. Computations of the theoretical line profiles are
based on a stellar displacement field, described as a superposition of
spherical harmonics and including the first-order effects of the Coriolis
force. We performed numerical tests of the method on a grid of different mono- and
multi-mode models for 0 <= l <= 4 in order to explore its capabilities and
limitations. Our results show that whereas the azimuthal order m can be
unambiguously identified for low-order modes, the error in l is in the range
of ±1. The value of m can be determined with higher precision than with other
spectroscopic mode identification methods. Improved values for the inclination
can be obtained from the analysis of non-axisymmetric pulsation modes. The new
method is ideally suited to intermediately rotating Delta Scuti and Beta Cephei
stars.

Comment: 12 pages, 14 figures
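The core of the mode identification described above is a chi-square comparison of observed and modeled amplitude and phase across the line profile. The sketch below is a hypothetical simplification: the genetic-algorithm search is replaced by exhaustive evaluation over a tiny candidate grid, and the model profiles, uncertainties, and function names are invented for illustration.

```python
import math

def best_mode(obs_amp, obs_phase, models, sig_amp=0.1, sig_phase=0.1):
    """Return the (l, m) pair whose modeled amplitude/phase profile
    minimizes chi-square against the observed profile.

    models maps (l, m) -> (amplitudes, phases) across the line profile;
    in practice these come from theoretical line-profile computations."""
    def chi2(mod_amp, mod_phase):
        total = 0.0
        for ao, po, am, pm in zip(obs_amp, obs_phase, mod_amp, mod_phase):
            # wrap the phase difference to [-pi, pi] before squaring
            dphi = math.atan2(math.sin(po - pm), math.cos(po - pm))
            total += ((ao - am) / sig_amp) ** 2 + (dphi / sig_phase) ** 2
        return total
    return min(models, key=lambda lm: chi2(*models[lm]))
```

Because each candidate receives a chi-square value, the statistical significance of competing (l, m) solutions can be compared directly, which is the advantage the method claims over earlier approaches.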
Power quality and electromagnetic compatibility: special report, session 2
The scope of Session 2 (S2) has been defined as follows by the Session Advisory Group and the Technical Committee: Power Quality (PQ), with the more general concept of electromagnetic compatibility (EMC) and with some related safety problems in electricity distribution systems.
Special focus is put on voltage continuity (supply reliability, problem of outages) and voltage quality (voltage level, flicker, unbalance, harmonics). This session will also look at electromagnetic compatibility (mains frequency to 150 kHz), electromagnetic interferences and electric and magnetic fields issues. Also addressed in this session are electrical safety and immunity concerns (lightning issues, step, touch and transferred voltages).
The aim of this special report is to present a synthesis of the present concerns in PQ&EMC, based on all selected papers of Session 2 and related papers from other sessions (152 papers in total). The report is divided into the following four blocks:
Block 1: Electric and Magnetic Fields, EMC, Earthing systems
Block 2: Harmonics
Block 3: Voltage Variation
Block 4: Power Quality Monitoring
Two Round Tables will be organised:
- Power quality and EMC in the Future Grid (CIGRE/CIRED WG C4.24, RT 13)
- Reliability Benchmarking - why should we do it? What should be done in the future? (RT 15)
State-of-the-art on research and applications of machine learning in the building life cycle
Fueled by big data, powerful and affordable computing resources, and advanced algorithms, machine learning has been explored and applied to buildings research for the past decades and has demonstrated its potential to enhance building performance. This study systematically surveyed how machine learning has been applied at different stages of the building life cycle. By conducting a literature search on the Web of Knowledge platform, we found 9579 papers in this field and selected 153 papers for an in-depth review. The number of published papers is increasing year by year, with a focus on building design, operation, and control. However, no study was found using machine learning in building commissioning. There are successful pilot studies on fault detection and diagnosis of HVAC equipment and systems, load prediction, energy baseline estimation, load shape clustering, occupancy prediction, and learning occupant behaviors and energy use patterns. None of the existing studies has been adopted broadly by the building industry, due to common challenges including (1) lack of large-scale labeled data to train and validate the model, (2) lack of model transferability, which prevents a model trained on one data-rich building from being used in another building with limited data, (3) lack of strong justification of the costs and benefits of deploying machine learning, and (4) performance that might not be reliable and robust for the stated goals, as a method might work for some buildings but not generalize to others. Findings from the study can inform future machine learning research to improve occupant comfort, energy efficiency, demand flexibility, and resilience of buildings, as well as inspire young researchers in the field to explore multidisciplinary approaches that integrate building science, computing science, data science, and social science.
Efficient Prediction Designs for Random Fields
For estimation and prediction of random fields it is increasingly
acknowledged that the kriging variance may be a poor representative of the true
uncertainty. Experimental designs based on more elaborate criteria that are
appropriate for empirical kriging are then often non-space-filling and very
costly to determine. In this paper, we investigate the possibility of using a
compound criterion inspired by an equivalence theorem type relation to build
designs quasi-optimal for the empirical kriging variance, when space-filling
designs become unsuitable. Two algorithms are proposed: one relies on
stochastic optimization to explicitly identify the Pareto front, while the
second uses the surrogate criterion as a local heuristic to choose the points
at which the (costly) true empirical kriging variance is effectively computed. We
illustrate the performance of the algorithms presented on both a simple
simulated example and a real oceanographic dataset.
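The second algorithm's idea, using a cheap surrogate criterion to shortlist candidate sites and paying the cost of the true criterion only on that shortlist, can be sketched generically. This is not the paper's compound criterion: in the sketch both criteria are stand-ins passed in by the caller (the usage example below uses a simple maximin distance for both), and all names are our own.

```python
def greedy_design(candidates, cheap_crit, costly_crit, n_points, shortlist=3):
    """Greedy design construction: rank candidate sites by a cheap
    surrogate criterion (larger is better), then evaluate the costly
    criterion (e.g. an empirical kriging variance) only on a short list,
    and add the short-listed point that maximizes it."""
    design, pool = [], list(candidates)
    for _ in range(n_points):
        # sort the whole pool by the cheap surrogate, best first
        pool.sort(key=lambda x: cheap_crit(x, design), reverse=True)
        # pay the costly criterion only on the top few candidates
        best = max(pool[:shortlist], key=lambda x: costly_crit(x, design))
        design.append(best)
        pool.remove(best)
    return design

# Usage: 1-D candidate sites with a maximin-distance stand-in for both
# criteria; with an empty design the criterion falls back to x itself.
crit = lambda x, design: min((abs(x - d) for d in design), default=x)
design = greedy_design(range(10), crit, crit, 3)
```

With a genuinely expensive `costly_crit`, the number of expensive evaluations per added point drops from the full pool size to `shortlist`, which is the point of the heuristic.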
Particle filter-based Gaussian process optimisation for parameter inference
We propose a novel method for maximum likelihood-based parameter inference in
nonlinear and/or non-Gaussian state space models. The method is an iterative
procedure with three steps. At each iteration a particle filter is used to
estimate the value of the log-likelihood function at the current parameter
iterate. Using these log-likelihood estimates, a surrogate objective function
is created by utilizing a Gaussian process model. Finally, we use a heuristic
procedure to obtain a revised parameter iterate, providing an automatic
trade-off between exploration and exploitation of the surrogate model. The
method is profiled on two state space models, with good performance in terms of
both accuracy and computational cost.

Comment: Accepted for publication in the proceedings of the 19th World Congress
of the International Federation of Automatic Control (IFAC), Cape Town, South
Africa, August 2014. 6 pages, 4 figures
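The first step of the iterative procedure, a particle filter estimate of the log-likelihood at the current parameter iterate, can be sketched for a scalar model. The bootstrap filter below is a minimal illustration under an assumed AR(1)-plus-noise model; the model, parameter names, and multinomial resampling scheme are our choices, not necessarily those used in the paper.

```python
import math
import random

def pf_loglik(y, n_particles, phi, sigma_v, sigma_e, rng):
    """Bootstrap particle filter estimate of the log-likelihood of the
    scalar state space model (illustrative assumption):
        x[t] = phi * x[t-1] + v[t],  v[t] ~ N(0, sigma_v^2)
        y[t] = x[t] + e[t],          e[t] ~ N(0, sigma_e^2)
    """
    x = [rng.gauss(0.0, sigma_v) for _ in range(n_particles)]
    ll = 0.0
    for yt in y:
        # propagate each particle through the state dynamics
        x = [phi * xi + rng.gauss(0.0, sigma_v) for xi in x]
        # weight particles by the Gaussian observation density
        w = [math.exp(-0.5 * ((yt - xi) / sigma_e) ** 2)
             / (sigma_e * math.sqrt(2 * math.pi)) for xi in x]
        # the average unnormalized weight estimates p(y[t] | y[1:t-1])
        ll += math.log(sum(w) / n_particles)
        # multinomial resampling
        x = rng.choices(x, weights=w, k=n_particles)
    return ll

ll = pf_loglik([0.3, -0.5, 0.8, 0.1], 200, 0.7, 1.0, 1.0, random.Random(1))
```

Evaluations of `pf_loglik` at successive parameter iterates would then feed the Gaussian process surrogate, whose exploration/exploitation heuristic proposes the next iterate.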