Optimal load shedding for microgrids with unlimited DGs
In recent years, the increasing demand for electrical supply has motivated the search for new alternatives for supplying electrical power. Studies of microgrid systems with Distributed Generations (DGs) embedded in the system are rapidly increasing. A microgrid system is basically designed to operate either in islanding mode or interconnected with the main grid system. In either condition, the system must have a reliable power supply and operate at low transmission power loss. During an emergency state, such as a power outage due to electrical or mechanical faults in the system, it is important for the system to shed load in order to maintain system stability and security. In order to reduce the transmission loss, it is very important to calculate the best size of the DGs as well as to find the best positions for locating them. The Analytical Hierarchy Process (AHP) has been applied to find and calculate the load shedding priorities based on the decision alternatives that have been made. The main objective of this project is to optimize load shedding in the microgrid system with unlimited DGs by applying an optimization technique, the Gravitational Search Algorithm (GSA). The technique is used to optimize the placement and sizing of the DGs, as well as the load shedding itself. Several load shedding schemes have been proposed and studied in this project: load shedding with a fixed priority index, without a priority index, and with a dynamic priority index. The proposed technique was tested on the IEEE 69-bus test distribution system.
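For readers unfamiliar with GSA, the optimizer named in the abstract can be illustrated with a minimal sketch; the 2-D sphere objective, agent counts, bounds, and constants below are illustrative placeholders, not the project's power-flow objective or settings.

```python
import math
import random

def gsa_minimize(f, dim, n_agents=20, iters=100, g0=10.0, alpha=20.0, seed=1):
    # Agents attract one another with a "gravitational" force whose strength
    # follows fitness-derived masses and a decaying gravitational constant G.
    rng = random.Random(seed)
    x = [[rng.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(n_agents)]
    v = [[0.0] * dim for _ in range(n_agents)]
    best_x, best_f = None, float("inf")
    for t in range(iters):
        fit = [f(xi) for xi in x]
        lo, hi = min(fit), max(fit)
        if lo < best_f:
            best_f, best_x = lo, list(x[fit.index(lo)])
        # Normalised masses: lower (better) fitness -> larger mass.
        m = [(fi - hi) / (lo - hi - 1e-12) for fi in fit]
        M = [mi / (sum(m) + 1e-12) for mi in m]
        G = g0 * math.exp(-alpha * t / iters)
        for i in range(n_agents):
            acc = [0.0] * dim
            for j in range(n_agents):
                if i == j:
                    continue
                R = math.dist(x[i], x[j])
                for d in range(dim):
                    acc[d] += rng.random() * G * M[j] * (x[j][d] - x[i][d]) / (R + 1e-12)
            for d in range(dim):
                v[i][d] = rng.random() * v[i][d] + acc[d]
                x[i][d] += v[i][d]
    return best_x, best_f

# Toy run: minimise a 2-D sphere function (minimum 0 at the origin).
xb, fb = gsa_minimize(lambda p: sum(c * c for c in p), dim=2)
```

In the paper's setting the objective would instead score a candidate DG placement/sizing and load-shedding plan via a power-flow calculation.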
Detecting gravitational radiation from neutron stars using a six-parameter adaptive MCMC method
We present a Markov chain Monte Carlo technique for detecting gravitational
radiation from a neutron star in laser interferometer data. The algorithm can
estimate up to six unknown parameters of the target, including the rotation
frequency and frequency derivative, using reparametrization, delayed rejection
and simulated annealing. We highlight how a simple extension of the method,
distributed over multiple computer processors, will allow for a search over a
narrow frequency band. The ultimate goal of this research is to search for
sources at known locations but with uncertain spin parameters, such as may be found in SN1987A.
Comment: Submitted to Classical and Quantum Gravity for the GWDAW-8 proceedings
A burst search for gravitational waves from binary black holes
Compact binary coalescence (CBC) is one of the most promising sources of
gravitational waves. These sources are usually searched for with matched
filters which require accurate calculation of the GW waveforms and generation
of large template banks. We present a complementary search technique based on
algorithms used in un-modeled searches. Initially designed for detection of
un-modeled bursts, which can span a very large set of waveform morphologies,
the search algorithm presented here is constrained for targeted detection of
the smaller subset of CBC signals. The constraint is based on the assumption of
elliptical polarisation for signals received at the detector. We expect that
the algorithm is sensitive to CBC signals in a wide range of masses, mass
ratios, and spin parameters. In preparation for the analysis of data from the
fifth LIGO-Virgo science run (S5), we performed preliminary studies of the
algorithm on test data. We present the sensitivity of the search to different
types of simulated CBC waveforms. Also, we discuss how to extend the results of
the test run into a search over all of the current LIGO-Virgo data set.
Comment: 12 pages, 4 figures, 2 tables, submitted for publication in CQG in the special issue for the conference proceedings of GWDAW13; corrected some typos, addressed some minor reviewer comments, one section restructured, and references updated and corrected
Implementation of barycentric resampling for continuous wave searches in gravitational wave data
We describe an efficient implementation of a coherent statistic for
continuous gravitational wave searches from neutron stars. The algorithm works
by transforming the data taken by a gravitational wave detector from a moving
Earth bound frame to one that sits at the Solar System barycenter. Many
practical difficulties arise in the implementation of this algorithm, some of
which have not been discussed previously. These difficulties include
constraints of small computer memory, discreteness of the data, losses due to
interpolation and gaps in real data. This implementation is considerably more
efficient than previous implementations of these kinds of searches on Laser Interferometer Gravitational-Wave Observatory (LIGO) detector data.
Comment: 10 pages, 3 figures
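The frame transformation described above amounts to evaluating the detector time series at barycentre-corrected sample times. A minimal sketch, assuming a constant delay for illustration (a real search derives a time-varying delay from solar-system ephemerides, and must also handle the data gaps and interpolation losses the abstract mentions):

```python
import math

def resample_to_barycenter(samples, dt, delay):
    # Evaluate the detector series at barycentre-corrected times via linear
    # interpolation; out-of-range samples are zero-padded, as real data gaps are.
    n, out = len(samples), []
    for k in range(n):
        t = k * dt + delay          # corrected arrival time of output sample k
        i = int(t // dt)
        frac = t / dt - i
        if 0 <= i < n - 1:
            out.append((1.0 - frac) * samples[i] + frac * samples[i + 1])
        else:
            out.append(0.0)
    return out

# A sinusoid delayed by half a sample is recovered with the expected phase shift.
sig = [math.sin(2.0 * math.pi * 0.05 * k) for k in range(100)]
shifted = resample_to_barycenter(sig, dt=1.0, delay=0.5)
```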
A Metropolis-Hastings algorithm for extracting periodic gravitational wave signals from laser interferometric detector data
The Markov chain Monte Carlo methods offer practical procedures for detecting
signals characterized by a large number of parameters and under conditions of
low signal-to-noise ratio. We present a Metropolis-Hastings algorithm capable
of inferring the spin and orientation parameters of a neutron star from its
periodic gravitational wave signature seen by laser interferometric detectors.
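The accept/reject machinery underlying such an algorithm can be sketched in a few lines; this is the plain Metropolis-Hastings form with a symmetric Gaussian proposal and a one-parameter toy posterior, illustrative only, not the multi-parameter variant the paper develops.

```python
import math
import random

def metropolis_hastings(log_post, x0, n_steps=5000, step=0.5, seed=0):
    # Symmetric Gaussian random-walk proposal; accept with prob min(1, p(y)/p(x)).
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    chain = []
    for _ in range(n_steps):
        y = x + rng.gauss(0.0, step)
        lq = log_post(y)
        if math.log(rng.random()) < lq - lp:
            x, lp = y, lq
        chain.append(x)
    return chain

# Toy posterior: Gaussian centred on a "true" frequency parameter of 2.0.
chain = metropolis_hastings(lambda f: -0.5 * (f - 2.0) ** 2, x0=0.0)
mean = sum(chain[1000:]) / len(chain[1000:])   # posterior mean after burn-in
```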
An all-sky search algorithm for continuous gravitational waves from spinning neutron stars in binary systems
Rapidly spinning neutron stars with non-axisymmetric mass distributions are
expected to generate quasi-monochromatic continuous gravitational waves. While
many searches for unknown, isolated spinning neutron stars have been carried
out, there have been no previous searches for unknown sources in binary
systems. Since current search methods for unknown, isolated neutron stars are
already computationally limited, expanding the parameter space searched to
include binary systems is a formidable challenge. We present a new hierarchical
binary search method called TwoSpect, which exploits the periodic orbital
modulations of the continuous waves by searching for patterns in doubly
Fourier-transformed data. We will describe the TwoSpect search pipeline,
including its mitigation of detector noise variations and corrections for
Doppler frequency modulation caused by changing detector velocity. Tests on
Gaussian noise and on a set of simulated signals will be presented.
Comment: 22 pages, 10 figures, 1 table, submitted to Classical and Quantum Gravity
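The "doubly Fourier-transformed data" idea can be demonstrated on a toy signal: Fourier-transform each short data segment, track the power in the carrier bin, then Fourier-transform that power sequence, which peaks at the orbital modulation frequency. All sizes, bins, and the naive DFT below are illustrative, not TwoSpect's actual parameters.

```python
import cmath
import math

def dft(x):
    # Naive discrete Fourier transform, adequate for this toy's small sizes.
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * math.pi * j * k / n) for k in range(n))
            for j in range(n)]

# A carrier whose amplitude is modulated with an "orbital" period of 16 segments.
seg_len, n_seg, carrier_bin, orb_period = 32, 64, 4, 16
powers = []
for s in range(n_seg):
    amp = 1.0 + 0.5 * math.cos(2.0 * math.pi * s / orb_period)
    seg = [amp * math.cos(2.0 * math.pi * carrier_bin * k / seg_len)
           for k in range(seg_len)]
    powers.append(abs(dft(seg)[carrier_bin]) ** 2)   # power in the carrier bin

# Second Fourier transform: the spectrum of the power sequence.
second = [abs(c) for c in dft(powers)]
peak_bin = max(range(1, n_seg // 2), key=lambda j: second[j])
```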
A Bayesian Approach to the Detection Problem in Gravitational Wave Astronomy
The analysis of data from gravitational wave detectors can be divided into
three phases: search, characterization, and evaluation. The evaluation of the
detection - determining whether a candidate event is astrophysical in origin or
some artifact created by instrument noise - is a crucial step in the analysis.
The on-going analyses of data from ground based detectors employ a frequentist
approach to the detection problem. A detection statistic is chosen, for which
background levels and detection efficiencies are estimated from Monte Carlo
studies. This approach frames the detection problem in terms of an infinite
collection of trials, with the actual measurement corresponding to some
realization of this hypothetical set. Here we explore an alternative, Bayesian
approach to the detection problem, that considers prior information and the
actual data in hand. Our particular focus is on the computational techniques
used to implement the Bayesian analysis. We find that the Parallel Tempered
Markov Chain Monte Carlo (PTMCMC) algorithm is able to address all three phases
of the analysis in a coherent framework. The signals are found by locating the
posterior modes, the model parameters are characterized by mapping out the
joint posterior distribution, and finally, the model evidence is computed by
thermodynamic integration. As a demonstration, we consider the detection
problem of selecting between models describing the data as instrument noise, or
instrument noise plus the signal from a single compact galactic binary. The
evidence ratios, or Bayes factors, computed by the PTMCMC algorithm are found
to be in close agreement with those computed using a Reversible Jump Markov
Chain Monte Carlo algorithm.
Comment: 19 pages, 12 figures, revised to address referee's comments
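The thermodynamic-integration step can be checked on a toy model with a known answer: ln Z = integral over beta in [0,1] of the tempered-posterior average of ln L. The Gaussian likelihood/prior and ladder size below are illustrative assumptions, and direct Gaussian draws stand in for the PTMCMC chains.

```python
import math
import random

# Toy model: likelihood N(x; 0, 1) and prior N(x; 0, s^2). Each posterior
# tempered as L^beta * prior is Gaussian, so it can be drawn from directly;
# a real PTMCMC estimates these averages from its parallel chains.
rng = random.Random(0)
s = 2.0                                  # prior standard deviation (illustrative)
betas = [i / 40.0 for i in range(41)]    # temperature ladder, beta = 0 .. 1
avg_lnL = []
for b in betas:
    var = 1.0 / (b + 1.0 / (s * s))      # tempered-posterior variance
    draws = [rng.gauss(0.0, math.sqrt(var)) for _ in range(4000)]
    lnL = [-0.5 * x * x - 0.5 * math.log(2.0 * math.pi) for x in draws]
    avg_lnL.append(sum(lnL) / len(lnL))

# Trapezoid rule over the ladder gives the log evidence.
lnZ = sum(0.5 * (avg_lnL[i] + avg_lnL[i + 1]) * (betas[i + 1] - betas[i])
          for i in range(len(betas) - 1))
lnZ_exact = -0.5 * math.log(2.0 * math.pi * (1.0 + s * s))  # analytic check
```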
Use of the MultiNest algorithm for gravitational wave data analysis
We describe an application of the MultiNest algorithm to gravitational wave
data analysis. MultiNest is a multimodal nested sampling algorithm designed to
efficiently evaluate the Bayesian evidence and return posterior probability
densities for likelihood surfaces containing multiple secondary modes. The
algorithm employs a set of live points which are updated by partitioning the
set into multiple overlapping ellipsoids and sampling uniformly from within
them. This set of live points climbs up the likelihood surface through nested
iso-likelihood contours and the evidence and posterior distributions can be
recovered from the point set evolution. The algorithm is model-independent in
the sense that the specific problem being tackled enters only through the
likelihood computation, and does not change how the live point set is updated.
In this paper, we consider the use of the algorithm for gravitational wave data
analysis by searching a simulated LISA data set containing two non-spinning
supermassive black hole binary signals. The algorithm is able to rapidly
identify all the modes of the solution and recover the true parameters of the
sources to high precision.
Comment: 18 pages, 4 figures, submitted to Class. Quantum Grav; v2 includes various changes in light of referee's comments
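The live-point evolution MultiNest builds on can be illustrated with a minimal 1-D nested-sampling sketch; the rejection-sampling replacement step here is a crude stand-in for MultiNest's ellipsoidal decomposition, and the Gaussian likelihood on a uniform prior is an illustrative toy.

```python
import math
import random

def nested_sampling(loglike, n_live=100, n_iter=600, seed=3):
    # Live points on a uniform [0,1] prior climb nested iso-likelihood contours;
    # evidence is accumulated from the weights of the discarded worst points.
    rng = random.Random(seed)
    live = [rng.random() for _ in range(n_live)]
    logZ, X_prev = float("-inf"), 1.0
    for i in range(1, n_iter + 1):
        worst = min(live, key=loglike)
        Lw = loglike(worst)
        X = math.exp(-i / n_live)             # expected prior-volume shrinkage
        w = Lw + math.log(X_prev - X)         # log weight of the shed point
        m = max(logZ, w)                      # log-sum-exp accumulation of Z
        logZ = m + math.log(math.exp(logZ - m) + math.exp(w - m))
        X_prev = X
        live.remove(worst)
        while True:                           # draw a replacement above the contour
            y = rng.random()
            if loglike(y) > Lw:
                live.append(y)
                break
    return logZ

# Toy Gaussian likelihood centred at 0.5; analytic ln Z ~ ln(0.1*sqrt(2*pi)).
logZ = nested_sampling(lambda x: -0.5 * ((x - 0.5) / 0.1) ** 2)
```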
Spectral Line Removal in the LIGO Data Analysis System (LDAS)
High power in narrow frequency bands, spectral lines, are a feature of an
interferometric gravitational wave detector's output. Some lines are coherent
between interferometers, in particular, the 2 km and 4 km LIGO Hanford
instruments. This is of concern to data analysis techniques, such as the
stochastic background search, that use correlations between instruments to
detect gravitational radiation. Several techniques of `line removal' have been
proposed. Where a line is attributable to a measurable environmental
disturbance, a simple linear model may be fitted to predict, and subsequently
subtract away, that line. This technique has been implemented (as the command
oelslr) in the LIGO Data Analysis System (LDAS). We demonstrate its application
to LIGO S1 data.
Comment: 11 pages, 5 figures, to be published in the CQG GWDAW02 proceedings
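The fit-and-subtract idea behind line removal can be sketched as a scalar least-squares coupling between a witness channel and the data; the oelslr command fits a more general linear model, and the frequencies and coupling below are illustrative.

```python
import math

def remove_line(data, witness):
    # Least-squares fit of a scalar coupling a between witness and data,
    # then subtract a * witness from the data.
    a = sum(d * w for d, w in zip(data, witness)) / sum(w * w for w in witness)
    return [d - a * w for d, w in zip(data, witness)]

# A 60 Hz-style line coupled into the data on top of the content of interest.
n, fs = 1024, 1024.0
witness = [math.sin(2.0 * math.pi * 60.0 * k / fs) for k in range(n)]
signal = [math.sin(2.0 * math.pi * 10.0 * k / fs) for k in range(n)]
data = [s + 3.0 * w for s, w in zip(signal, witness)]

cleaned = remove_line(data, witness)
residual = max(abs(c - s) for c, s in zip(cleaned, signal))
```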
Hydra: An Adaptive-Mesh Implementation of PPPM-SPH
We present an implementation of Smoothed Particle Hydrodynamics (SPH) in an
adaptive-mesh PPPM algorithm. The code evolves a mixture of purely
gravitational particles and gas particles. The code retains the desirable
properties of previous PPPM-SPH implementations: speed under light clustering, naturally periodic boundary conditions, and accurate pairwise forces. Under heavy clustering the cycle time of the new code is only 2-3 times slower than for a uniform particle distribution, overcoming the principal disadvantage of previous implementations: a dramatic loss of efficiency as clustering
develops. A 1000 step simulation with 65,536 particles (half dark, half gas)
runs in one day on a Sun Sparc10 workstation. The choice of time integration
scheme is investigated in detail. A simple single-step Predictor-Corrector type integrator is most efficient. A method for generating an initial distribution of particles by allowing a uniform temperature gas of SPH particles to relax within a periodic box is presented. The average SPH density that results varies by \%. We present a modified form of the Layzer-Irvine equation which includes the thermal contribution of the gas
together with radiative cooling. Tests of sound waves, shocks, spherical infall
and collapse are presented. Appropriate timestep constraints sufficient to
ensure both energy and entropy conservation are discussed. A cluster
simulation, repeating Thomas and
Comment: 29 pp, uuencoded Postscript
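The single-step Predictor-Corrector integrator the abstract favours can be illustrated on a harmonic oscillator; this Heun-type form is an assumption for illustration, not necessarily the exact scheme used in Hydra.

```python
import math

def pc_step(x, v, accel, dt):
    # Predict the position with the current acceleration, then correct velocity
    # and position using the average of old and re-evaluated accelerations.
    a0 = accel(x)
    x_pred = x + v * dt + 0.5 * a0 * dt * dt   # predictor
    a1 = accel(x_pred)
    v_new = v + 0.5 * (a0 + a1) * dt           # corrector
    x_new = x + 0.5 * (v + v_new) * dt
    return x_new, v_new

# Harmonic oscillator over one period: total energy should stay close to the
# exact value 0.5 for this step size.
x, v, dt = 1.0, 0.0, 0.01
for _ in range(int(2.0 * math.pi / dt)):
    x, v = pc_step(x, v, lambda q: -q, dt)
energy = 0.5 * v * v + 0.5 * x * x
```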