
    A method to search for long duration gravitational wave transients from isolated neutron stars using the generalized FrequencyHough

    We describe a method to detect gravitational waves lasting O(hours-days) emitted by young, isolated neutron stars, such as those that could form after a supernova or a binary neutron star merger, using advanced LIGO/Virgo data. The method is based on a generalization of the FrequencyHough (FH), a pipeline that performs hierarchical searches for continuous gravitational waves by mapping points in the time/frequency plane of the detector to lines in the frequency/spindown plane of the source. We show that signals whose spindowns are related to their frequencies by a power law can be transformed to coordinates in which their behavior is always linear, and can therefore be searched for by the FH. We estimate the sensitivity of our search across different braking indices, and describe the portion of the parameter space we could explore in a search using varying fast Fourier Transform (FFT) lengths. Comment: 15 figures
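
    As a concrete illustration of the coordinate change the abstract refers to (a sketch, not the paper's implementation): assuming a power-law spindown $\dot f = -k f^n$ with an illustrative braking index $n$ and spindown constant $k$, the quantity $x = f^{1-n}$ evolves linearly in time and is therefore suitable for a Hough transform over straight lines.

```python
import numpy as np

# Minimal sketch of the power-law-to-linear coordinate change.  The braking
# index n, spindown constant k and initial frequency f0 are illustrative
# values, not parameters from the paper.
n = 5.0          # braking index
k = 1e-14        # spindown constant in  f_dot = -k * f**n
f0 = 1000.0      # initial GW frequency [Hz]
t = np.linspace(0.0, 86400.0, 1000)   # one day of observation [s]

# Closed-form solution of f_dot = -k f**n for n != 1:
#   f(t)**(1-n) = f0**(1-n) + k*(n-1)*t
f = (f0**(1.0 - n) + k * (n - 1.0) * t) ** (1.0 / (1.0 - n))

# In the transformed coordinate x = f**(1-n) the evolution is exactly linear,
# so a Hough transform that maps (t, x) points to straight lines can be reused.
x = f ** (1.0 - n)
slope = np.polyfit(t, x, 1)[0]
print(f"fitted slope = {slope:.3e},  expected k*(n-1) = {k * (n - 1):.3e}")
```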

    Searching for periodic sources with LIGO. II: Hierarchical searches

    The detection of quasi-periodic sources of gravitational waves requires the accumulation of signal-to-noise over long observation times. If not removed, Earth-motion induced Doppler modulations and intrinsic variations of the gravitational-wave frequency make the signals impossible to detect. These effects can be corrected (removed) using a parameterized model for the frequency evolution. We compute the number of independent corrections $N_p(\Delta T, N)$ required for incoherent search strategies which use stacked power spectra: a demodulated time series is divided into $N$ segments of length $\Delta T$, each segment is FFTed, the power is computed, and the $N$ spectra are summed up. We estimate that the sensitivity of an all-sky search that uses incoherent stacks is a factor of 2-4 better than would be achieved using coherent Fourier transforms; incoherent methods are computationally efficient at exploring large parameter spaces. A two-stage hierarchical search yields another 20-60% improvement in sensitivity in all-sky searches for old (>= 1000 yr), slow pulsars and for young (<= 40 yr), fast (<= 1000 Hz) pulsars. Assuming 10^{12} flops of effective computing power for data analysis, enhanced LIGO interferometers should be sensitive to: (i) Galactic core pulsars with gravitational ellipticities of $\epsilon \gtrsim 5 \times 10^{-6}$ at 200 Hz, (ii) gravitational waves emitted by the unstable r-modes of newborn neutron stars out to distances of ~8 Mpc, and (iii) neutron stars in LMXBs with X-ray fluxes exceeding $2 \times 10^{-8}$ erg/(cm^2 s). Moreover, gravitational waves from the neutron star in Sco X-1 should be detectable if the interferometer is operated in a signal-recycled, narrow-band configuration. Comment: 22 pages, 13 figures
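
    The stacked-power-spectra step described above can be sketched in a few lines of Python; the sampling rate, segment length $\Delta T$, number of segments $N$ and injected signal below are illustrative choices, not values from the paper.

```python
import numpy as np

# Split a demodulated time series into N segments of length Delta T,
# FFT each segment, take the power, and sum the N spectra.
fs = 256.0                    # sampling rate [Hz] (illustrative)
delta_t = 64.0                # segment length Delta T [s]
n_seg = 20                    # number of segments N
samples_per_seg = int(fs * delta_t)

rng = np.random.default_rng(0)
t = np.arange(n_seg * samples_per_seg) / fs
f_signal = 50.25              # demodulated signal frequency [Hz]
data = np.sin(2 * np.pi * f_signal * t) + 5.0 * rng.standard_normal(t.size)

stacked = np.zeros(samples_per_seg // 2 + 1)
for i in range(n_seg):
    seg = data[i * samples_per_seg:(i + 1) * samples_per_seg]
    stacked += np.abs(np.fft.rfft(seg)) ** 2    # incoherent sum of segment powers

freqs = np.fft.rfftfreq(samples_per_seg, d=1.0 / fs)
print(f"loudest bin: {freqs[np.argmax(stacked)]:.2f} Hz (injected {f_signal} Hz)")
```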

    Gravitational waves from Sco X-1: A comparison of search methods and prospects for detection with advanced detectors

    The low-mass X-ray binary Scorpius X-1 (Sco X-1) is potentially the most luminous source of continuous gravitational-wave radiation for interferometers such as LIGO and Virgo. For low-mass X-ray binaries this radiation would be sustained by active accretion of matter from the binary companion. With the Advanced Detector Era fast approaching, work is underway to develop an array of robust tools for maximizing the science and detection potential of Sco X-1. We describe the plans and progress of a project designed to compare the numerous independent search algorithms currently available. We employ a mock-data challenge in which the search pipelines are tested for their relative proficiencies in parameter estimation, computational efficiency, robustness, and most importantly, search sensitivity. The mock-data challenge data contain an ensemble of 50 Sco X-1 type signals, simulated within a frequency band of 50-1500 Hz. Simulated detector noise was generated assuming the expected best strain sensitivity of Advanced LIGO and Advanced Virgo ($4 \times 10^{-24}$ Hz$^{-1/2}$). A distribution of signal amplitudes was then chosen so as to allow a useful comparison of search methodologies. A factor of 2 in strain separates the quietest detected signal, at $6.8 \times 10^{-26}$ strain, from the torque-balance limit at a spin frequency of 300 Hz, although this limit could range from $1.2 \times 10^{-25}$ (25 Hz) to $2.2 \times 10^{-26}$ (750 Hz) depending on the unknown spin frequency of Sco X-1. With future improvements to the search algorithms and using advanced detector data, our expectations for probing below the theoretical torque-balance strain limit are optimistic. Comment: 33 pages, 11 figures
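
    The torque-balance numbers quoted above are mutually consistent under the usual $f^{-1/2}$ scaling of the torque-balance strain with spin frequency at fixed X-ray flux; the short check below anchors that assumed scaling to the 25 Hz value from the abstract.

```python
import numpy as np

# Consistency check of the quoted torque-balance strains, assuming
# h_tb proportional to f_spin**(-1/2) at fixed X-ray flux (assumption of
# this sketch).  The anchor value comes from the abstract.
h_anchor, f_anchor = 1.2e-25, 25.0     # strain at 25 Hz spin frequency

def torque_balance(f_spin_hz):
    """Torque-balance strain limit scaled from the 25 Hz anchor value."""
    return h_anchor * np.sqrt(f_anchor / f_spin_hz)

for f in (25.0, 300.0, 750.0):
    print(f"{f:6.0f} Hz : h_tb ~ {torque_balance(f):.2e}")
# Reproduces 1.2e-25 (25 Hz), ~3.4e-26 (300 Hz, i.e. half the quietest
# detected signal of 6.8e-26) and ~2.2e-26 (750 Hz).
```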

    Low cost multimedia sensor networks for obtaining lighting maps

    In many applications, video streams, images, audio streams and scalar data are commonly used. In these fields, one of the most important magnitudes to be collected and controlled is the light intensity at different spots. It is therefore extremely important to be able to deploy a network of light sensors, usually integrated in a more general Wireless Multimedia Sensor Network (WMSN). Light control systems have increasing applications in many places such as streets, roads, buildings and theaters. In these situations, having a dense grid of sensing spots significantly enhances measuring precision and control performance. When a large number of measuring spots is required, the cost of each sensor becomes a very important concern. In this paper the use of very low cost light sensors is proposed, and it is shown how to overcome their limited performance by directionally correcting their results. A correction factor is derived for several lighting conditions. The proposed method is first applied to measure light at a single spot. Additionally, a prototype sensor network is employed to draw the lighting map of a surface. Finally, the sensor grid is employed to estimate the position and power of a set of light sources in a certain region of interest (street, building, …). These three applications show that using low cost sensors instead of luxmeters is a feasible approach to estimating illuminance levels in a room and to deriving light source maps. The errors obtained when measuring spot illuminance or estimating lamp emittance are quite acceptable for many practical applications. Telefonica Chair "Intelligence in Networks" of the University of Seville (Spain)
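
    As a rough illustration of the source-estimation application (not the paper's correction method or algorithm), the sketch below fits the position and power of a single hypothetical lamp to a grid of illuminance readings using an ideal point-source model; the geometry, lamp power and noise level are all made up.

```python
import numpy as np

# Estimate the position and power of one lamp from a grid of illuminance
# readings, assuming an ideal point source: E = P * h / (4*pi*d**3) on a
# horizontal plane (inverse-square law times the cosine of incidence).
rng = np.random.default_rng(1)
h = 3.0                                          # lamp height above sensors [m]
xs, ys = np.meshgrid(np.arange(0.0, 10.0, 1.0), np.arange(0.0, 6.0, 1.0))
true_xy, true_P = np.array([4.2, 2.7]), 2000.0   # hypothetical lamp

def illuminance(P, lamp_xy):
    d2 = (xs - lamp_xy[0])**2 + (ys - lamp_xy[1])**2 + h**2
    return P * h / (4.0 * np.pi * d2**1.5)

# Simulated low-cost sensor readings with 5% multiplicative noise
readings = illuminance(true_P, true_xy) * (1 + 0.05 * rng.standard_normal(xs.shape))

# Brute-force the lamp position; for each candidate the best-fit power is the
# linear least-squares solution P = sum(E*g)/sum(g*g), with g the unit-power model.
best = None
for cx in np.arange(0.0, 10.01, 0.1):
    for cy in np.arange(0.0, 6.01, 0.1):
        g = illuminance(1.0, (cx, cy))
        P = np.sum(readings * g) / np.sum(g * g)
        resid = np.sum((readings - P * g)**2)
        if best is None or resid < best[0]:
            best = (resid, cx, cy, P)
print(f"estimated lamp at ({best[1]:.1f}, {best[2]:.1f}) m with power {best[3]:.0f}")
```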

    Emission Corrections for Hydrogen Features of the Graves et al. 2007 Sloan Digital Sky Survey Averages of Early-Type, Non-LINER Galaxies

    For purposes of stellar population analysis, emission corrections for Balmer series indices on the Lick index system in Sloan Digital Sky Survey (SDSS) stacked quiescent galaxy spectra are derived, along with corrections for continuum shape and gross stellar content, as a function of the Mg $b$ Lick index strength. These corrections are obtained by comparing the observed Lick index measurements of the SDSS with new observed measurements of 13 Virgo Cluster galaxies, and checked with model grids. From the H$\alpha$-Mg $b$ diagram a linear correction for the observed measurement is constructed using best-fit trend lines. Corrections for H$\beta$, H$\gamma$ and H$\delta$ are constructed using stellar population models to predict continuum shape changes as a function of Mg $b$ and Balmer series emission intensities typical of H II regions. The corrections themselves are fairly secure, but the interpretation of the H$\delta$ and H$\gamma$ indices is complicated by the fact that they are sensitive to elemental abundances other than hydrogen.
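
    A minimal sketch of what a trend-line-based emission correction of this general kind could look like, using synthetic placeholder index values rather than the SDSS or Virgo measurements, and not the authors' actual calibration.

```python
import numpy as np

# Toy trend-line correction: fit a line to an (assumed) emission-free
# reference relation between Mg b and the H-alpha index, then estimate the
# emission fill-in of an observed spectrum as its deficit relative to the line.
# All index values are synthetic placeholders.
mgb_ref = np.array([3.2, 3.6, 4.0, 4.4, 4.8])      # reference Mg b indices
halpha_ref = np.array([1.9, 1.7, 1.5, 1.3, 1.1])   # reference H-alpha indices

slope, intercept = np.polyfit(mgb_ref, halpha_ref, 1)   # best-fit trend line

def halpha_emission_correction(mgb_obs, halpha_obs):
    """Emission fill-in estimated as the deficit relative to the trend line."""
    expected = slope * mgb_obs + intercept
    return expected - halpha_obs   # positive value = emission filling in the line

print(halpha_emission_correction(4.0, 1.2))   # toy stacked-spectrum measurement
```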

    Paper II: Calibration of the Swift ultraviolet/optical telescope

    The Ultraviolet/Optical Telescope (UVOT) is one of three instruments onboard the Swift observatory. The photometric calibration has been published, and this paper follows up with details on other aspects of the calibration, including a measurement of the point spread function with an assessment of its orbital variation and the effect on photometry. A correction for large-scale variations in sensitivity over the field of view is described, as well as a model of the coincidence loss which is used to assess the coincidence correction in extended regions. We have provided a correction for the detector distortion and measured the resulting internal astrometric accuracy of the UVOT, also giving the absolute accuracy with respect to the International Celestial Reference System. We have compiled statistics on the background count rates, and discuss the sources of the background, including instrumental scattered light. In each case we describe any impact on UVOT measurements, whether any correction is applied in the standard pipeline data processing, and whether further steps are recommended. Comment: Accepted for publication in MNRAS. 15 pages, 21 figures, 4 tables
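
    For context, the textbook coincidence-loss correction for a photon-counting detector is sketched below; the roughly 11 ms frame time is an assumed approximate value, and the published UVOT calibration uses a more detailed treatment than this idealized form.

```python
import math

# Idealized coincidence-loss correction for a photon-counting detector:
# the measured rate saturates at one count per readout frame, and the
# incident rate is recovered by inverting C_raw = (1 - exp(-C_true*ft)) / ft.
FRAME_TIME = 0.011   # seconds per readout frame (assumed approximate value)

def coincidence_corrected_rate(raw_rate):
    """Invert C_raw = (1 - exp(-C_true * ft)) / ft for the incident rate."""
    if raw_rate * FRAME_TIME >= 1.0:
        raise ValueError("raw rate saturates the detector (>= 1 count per frame)")
    return -math.log(1.0 - raw_rate * FRAME_TIME) / FRAME_TIME

for raw in (1.0, 10.0, 50.0, 80.0):      # counts per second
    print(f"raw {raw:5.1f} c/s -> corrected {coincidence_corrected_rate(raw):6.1f} c/s")
```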

    Submillimeter Polarimetry with PolKa, a reflection-type modulator for the APEX telescope

    Imaging polarimetry is an important tool for the study of cosmic magnetic fields. In our Galaxy, polarization levels of a few up to ~10% are measured in the submillimeter dust emission from molecular clouds and in the synchrotron emission from supernova remnants. Only a few techniques exist to image the distribution of polarization angles, as a means of tracing the plane-of-sky projection of the magnetic field orientation. At submillimeter wavelengths, polarization is either measured as the differential total power of polarization-sensitive bolometer elements, or by modulating the polarization of the signal. Bolometer arrays such as LABOCA at the APEX telescope are used to observe the continuum emission from fields as large as ~0.2 degrees in diameter. Here we present PolKa, a polarimeter for LABOCA with a reflection-type waveplate of at least 90% efficiency. The modulation efficiency depends mainly on the sampling and on the angular velocity of the waveplate. For the data analysis the concept of generalized synchronous demodulation is introduced. The instrumental polarization towards a point source is at the level of ~0.1%, increasing to a few percent at the -10 dB contour of the main beam. A method to correct for its effect in observations of extended sources is presented. Our map of the polarized synchrotron emission from the Crab nebula is in agreement with structures observed at radio and optical wavelengths. The linear polarization measured in OMC1 agrees with results from previous studies, while the high sensitivity of LABOCA enables us to also map the polarized emission of the Orion Bar, a prototypical photon-dominated region.
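
    A minimal sketch of synchronous demodulation for an idealized rotating half-wave plate (not the generalized scheme or the PolKa hardware parameters): the Stokes parameters are recovered from the modulated total power by linear least squares, with illustrative modulation frequency, Stokes values and noise.

```python
import numpy as np

# Idealized rotating half-wave-plate polarimeter: the detected power is
# modeled as d(t) = [I + Q*cos(4*omega*t) + U*sin(4*omega*t)] / 2 and the
# Stokes parameters (I, Q, U) are recovered by linear least squares.
rng = np.random.default_rng(2)
omega = 2 * np.pi * 1.0                 # waveplate rotation rate [rad/s]
t = np.arange(0.0, 10.0, 0.01)          # 10 s sampled at 100 Hz
I_true, Q_true, U_true = 10.0, 0.5, -0.3

signal = 0.5 * (I_true + Q_true * np.cos(4 * omega * t) + U_true * np.sin(4 * omega * t))
data = signal + 0.05 * rng.standard_normal(t.size)

# Design matrix for the three unknowns (I, Q, U)
A = 0.5 * np.column_stack([np.ones_like(t), np.cos(4 * omega * t), np.sin(4 * omega * t)])
I_fit, Q_fit, U_fit = np.linalg.lstsq(A, data, rcond=None)[0]
print(f"I = {I_fit:.2f}  Q = {Q_fit:.2f}  U = {U_fit:.2f}")
```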

    An improved algorithm for narrow-band searches of continuous gravitational waves

    Continuous gravitational-wave signals, emitted by asymmetric spinning neutron stars, are among the main targets of current detectors like Advanced LIGO and Virgo. For sources, like pulsars, whose rotational parameters are measured through electromagnetic observations, typical searches assume that the gravitational-wave frequency is at a given, known fixed ratio with respect to the star's rotational frequency. For instance, for a neutron star rotating around one of its principal axes of inertia, the gravitational signal frequency would be exactly twice the rotational frequency of the star. It is possible, however, that this assumption is wrong. This is why search algorithms have been developed that are able to take into account a possible small mismatch between the gravitational-wave frequency and the frequency inferred from electromagnetic observations. In this paper we present an improved pipeline to perform such narrow-band searches for continuous gravitational waves from neutron stars, about three orders of magnitude faster than previous implementations. The algorithm that we have developed is based on the 5-vectors framework and is able to perform a fully coherent search over a frequency band of width $\mathcal{O}$(Hz) and over hundreds of spin-down values, running in a few hours on a standard workstation. This new algorithm opens the possibility of long coherence time searches for objects whose rotational parameters are highly uncertain. Comment: 19 pages, 8 figures, 6 tables, submitted to CQG
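
    A toy sketch of the narrow-band idea (not the 5-vector statistic or the authors' pipeline): after removing a trial spindown offset from heterodyned data, a single FFT scans all residual frequency offsets at once, and the loop repeats this over a small grid of spindown values; every number below is illustrative.

```python
import numpy as np

# Toy narrow-band search over residual frequency and spindown offsets around
# the electromagnetically inferred parameters of a heterodyned signal.
rng = np.random.default_rng(3)
fs = 1.0                                     # 1 Hz sampling of heterodyned data
t = np.arange(0.0, 10 * 86400.0, 1.0 / fs)   # ten days of data [s]

df_true, dfdot_true = 3.1e-4, -2.0e-10       # true residual offsets [Hz], [Hz/s]
phase = 2 * np.pi * (df_true * t + 0.5 * dfdot_true * t**2)
data = np.exp(1j * phase) + 2.0 * (rng.standard_normal(t.size)
                                   + 1j * rng.standard_normal(t.size))

freqs = np.fft.fftfreq(t.size, d=1.0 / fs)
best = (0.0, None, None)
for dfdot in np.linspace(-5e-10, 5e-10, 21):     # trial spindown offsets [Hz/s]
    # remove the trial spindown, then one FFT scans all frequency offsets
    despun = data * np.exp(-2j * np.pi * 0.5 * dfdot * t**2)
    spec = np.abs(np.fft.fft(despun))**2
    k = int(np.argmax(spec))
    if spec[k] > best[0]:
        best = (spec[k], freqs[k], dfdot)
print(f"recovered df = {best[1]:.2e} Hz, dfdot = {best[2]:.2e} Hz/s")
```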

    Global estimations of wind energy potential considering seasonal air density changes

    The literature typically considers a constant annual average air density when computing the wind energy potential of a given location. In this work, the recent ERA5 reanalysis is used to obtain global seasonal estimates of wind energy production that include seasonally varying air density. Thus, the errors due to the use of a constant air density are quantified. First, seasonal air density changes are studied at the global scale. Then, wind power density errors due to seasonal air density changes are computed. Finally, winter and summer energy production errors due to neglecting the changes in air density are computed by implementing the power curve of the National Renewable Energy Laboratory's 5 MW turbine. Results show relevant deviations for three variables (air density, wind power density, and energy production), mainly in the middle-high latitudes (Hudson Bay, Siberia, Patagonia, Australia, etc.). Locations with variations from -6% to 6% between summer and winter are identified in the Northern Hemisphere. Additionally, simulations with the aeroelastic code FAST for the studied turbine show that instantaneous power production can differ by more than 20% below the rated wind speed when days with realistically high and low air density values are compared for the same turbulent wind speed. This work was funded by the Spanish Government's MINECO project CGL2016-76561-R (AEI/FEDER, EU) and the University of the Basque Country (UPV/EHU-funded project GIU17/02). The ECMWF ERA5 data used in this study were obtained from the Copernicus Climate Data Store. All the calculations were carried out in the framework of R Core Team (2016). More can be learnt about R, a language and an environment for statistical computing, at the website of the R Foundation for Statistical Computing, Vienna, Austria (https://www.R-project.org/)
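
    Because wind power density is 0.5 ρ v³, the relative error from using a constant annual-mean air density is directly proportional to the seasonal density deviation; the sketch below illustrates this with made-up density and wind-speed values, not ERA5 data.

```python
# Wind power density with seasonal vs constant annual-mean air density.
# Density and wind-speed values are illustrative placeholders.
v = 8.0                                       # mean wind speed [m/s]
rho_winter, rho_summer = 1.29, 1.17           # seasonal air densities [kg/m^3]
rho_annual = 0.5 * (rho_winter + rho_summer)  # constant value often used

def wind_power_density(rho, v):
    return 0.5 * rho * v**3                   # [W/m^2]

for label, rho in (("winter", rho_winter), ("summer", rho_summer)):
    true_wpd = wind_power_density(rho, v)
    approx_wpd = wind_power_density(rho_annual, v)
    err = 100.0 * (approx_wpd - true_wpd) / true_wpd
    print(f"{label}: true {true_wpd:.0f} W/m^2, "
          f"constant-density {approx_wpd:.0f} W/m^2, error {err:+.1f}%")
```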