34 research outputs found

    Luminous red galaxies in the Kilo Degree Survey: selection with broad-band photometry and weak lensing measurements

    We use the overlap between multiband photometry of the Kilo-Degree Survey (KiDS) and spectroscopic data based on the Sloan Digital Sky Survey (SDSS) and Galaxy And Mass Assembly (GAMA) to infer the colour-magnitude relation of red-sequence galaxies. We then use this inferred relation to select luminous red galaxies (LRGs) in the redshift range 0.1 < z < 0.7 over the entire KiDS Data Release 3 footprint. We construct two samples of galaxies with different constant comoving densities and different luminosity thresholds. The selected red galaxies have photometric redshifts with typical photo-z errors of σ_z ∼ 0.014(1+z) that are nearly uniform with respect to observational systematics. This makes them an ideal set of galaxies for lensing and clustering studies. As an example, we use the KiDS-450 cosmic shear catalogue to measure the mean tangential shear signal around the selected LRGs. We detect a significant weak lensing signal for lenses out to z ∼ 0.7.
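The red-sequence selection described above can be sketched in a few lines: galaxies are kept if their colour lies within a few times the intrinsic scatter of a colour-magnitude relation. This is a minimal illustration only; the slope, intercept, pivot magnitude, and scatter below are made-up numbers, not the relation fitted in the paper.

```python
import numpy as np

def red_sequence_mask(colour, mag, slope=-0.04, intercept=1.0,
                      m_pivot=19.0, scatter=0.05, n_sigma=2.0):
    """Boolean mask of galaxies consistent with a linear red sequence.

    The model colour at each magnitude is
        c_model(m) = slope * (m - m_pivot) + intercept,
    and a galaxy passes if |colour - c_model| < n_sigma * scatter.
    All parameter values here are illustrative.
    """
    c_model = slope * (mag - m_pivot) + intercept
    return np.abs(colour - c_model) < n_sigma * scatter

mag = np.array([18.0, 19.0, 20.0])
colour = np.array([1.05, 0.70, 0.97])
mask = red_sequence_mask(colour, mag)
# first and third galaxies sit near the relation; the second is too blue
```

In practice the relation is conditioned on redshift, so the selection is applied per redshift slice with redshift-dependent parameters.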

    Towards emulating cosmic shear data: Revisiting the calibration of the shear measurements for the Kilo-Degree Survey

    Exploiting the full statistical power of future cosmic shear surveys will necessitate improvements to the accuracy with which the gravitational lensing signal is measured. We present a framework for calibrating shear with image simulations that demonstrates the importance of including realistic correlations between galaxy morphology, size and, more importantly, photometric redshifts. This realism is essential so that selection and shape measurement biases can be calibrated accurately for a tomographic cosmic shear analysis. We emulate Kilo-Degree Survey (KiDS) observations of the COSMOS field using morphological information from Hubble Space Telescope imaging, faithfully reproducing the measured galaxy properties from KiDS observations of the same field. We calibrate our shear measurements from lensfit, and find through a range of sensitivity tests that lensfit is robust and unbiased within the allowed 2 per cent tolerance of our study. Our results show that the calibration has to be performed by selecting the tomographic samples in the simulations, consistent with the actual cosmic shear analysis, because the joint distributions of galaxy properties are found to vary with redshift. Ignoring this redshift variation could result in misestimating the shear bias by an amount that exceeds the allowed tolerance. To improve the calibration for future cosmic shear analyses, it will be essential to also correctly account for the measurement of photometric redshifts, which requires simulating multi-band observations. Comment: 31 pages, 17 figures and 2 tables. Accepted for publication in A&A. Matches the published version.

    Organised Randoms: learning and correcting for systematic galaxy clustering patterns in KiDS using self-organising maps

    We present a new method for the mitigation of observational systematic effects in angular galaxy clustering via corrective random galaxy catalogues. Real and synthetic galaxy data, from the Kilo Degree Survey's (KiDS) 4th Data Release (KiDS-1000) and the Full-sky Lognormal Astro-fields Simulation Kit (FLASK) package respectively, are used to train self-organising maps (SOMs) to learn the multivariate relationships between observed galaxy number density and up to six systematic-tracer variables, including seeing, Galactic dust extinction, and Galactic stellar density. We then create `organised' randoms, i.e. random galaxy catalogues with spatially variable number densities, mimicking the learnt systematic density modes in the data. Using realistically biased mock data, we show that these organised randoms consistently subtract spurious density modes from the two-point angular correlation function w(ϑ), correcting biases of up to 12σ in the mean clustering amplitude to as low as 0.1σ, over a high signal-to-noise angular range of 7-100 arcmin. Their performance is also validated for angular clustering cross-correlations in a bright, flux-limited subset of KiDS-1000, comparing against an analogous sample constructed from highly complete spectroscopic redshift data. Each organised random catalogue object is a `clone' carrying the properties of a real galaxy, and is distributed throughout the survey footprint according to the parent galaxy's position in systematics space. Thus, sub-sample randoms are readily derived from a single master random catalogue via the same selection as applied to the real galaxies. 
Our method is expected to improve in performance with increased survey area, galaxy number density, and systematic contamination, making organised randoms extremely promising for current and future clustering analyses of faint samples. Comment: 18 pages (6 appendix pages), 12 figures (8 appendix figures), submitted to A&A.
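The core idea above, learning how galaxy density varies with a systematic tracer and imprinting that variation on the randoms, can be sketched without a full SOM. In this toy version a coarse 1D histogram in systematics space stands in for the self-organising map, and the single tracer (seeing) and depletion law are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# mock "galaxies": uniform in seeing, then depleted where seeing is poor
seeing_gal = rng.uniform(0.5, 1.2, 5000)
seeing_gal = seeing_gal[rng.uniform(size=5000) < (1.3 - seeing_gal)]

# uniform randoms over the same footprint / systematics range
seeing_rand = rng.uniform(0.5, 1.2, 20000)

# learn the density-vs-systematic relation on a coarse grid
bins = np.linspace(0.5, 1.2, 8)
n_gal, _ = np.histogram(seeing_gal, bins)
n_rand, _ = np.histogram(seeing_rand, bins)
density = n_gal / np.maximum(n_rand, 1)   # relative density per cell
density /= density.mean()

# each random inherits the learnt density at its place in systematics space,
# turning the uniform catalogue into an "organised" one
cell = np.clip(np.digitize(seeing_rand, bins) - 1, 0, len(density) - 1)
weights = density[cell]
```

Weighted (or resampled) randoms built this way trace the spurious density modes, so they cancel out of the clustering estimator instead of biasing w(ϑ).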

    Clustering of red-sequence galaxies in the fourth data release of the Kilo-Degree Survey

    We present a sample of luminous red-sequence galaxies to study the large-scale structure in the fourth data release of the Kilo-Degree Survey. The selected galaxies are defined by a red-sequence template, in the form of a data-driven model of the colour-magnitude relation conditioned on redshift. In this work, the red-sequence template is built using the broad-band optical + near-infrared photometry of KiDS-VIKING and the overlapping spectroscopic data sets. The selection process involves estimating the red-sequence redshifts, assessing the purity of the sample, and estimating the underlying redshift distributions of the redshift bins. After performing the selection, we mitigate the impact of survey properties on the observed number density of galaxies by assigning photometric weights to the galaxies. We measure the angular two-point correlation function of the red galaxies in four redshift bins, and constrain the large-scale bias of our red-sequence sample assuming a fixed ΛCDM cosmology. We find consistent linear biases for two luminosity-threshold samples (dense and luminous). We find that our constraints are well characterized by the passive evolution model. Comment: submitted to A&A.
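The angular two-point correlation function measured above is typically estimated with the Landy-Szalay estimator, w(θ) = (DD − 2DR + RR)/RR, built from normalised data-data, data-random, and random-random pair counts. The brute-force flat-sky sketch below illustrates the estimator only; real analyses use tree codes, curved-sky separations, and the photometric weights described in the abstract.

```python
import numpy as np

def landy_szalay(data, rand, bins):
    """w(theta) = (DD - 2*DR + RR) / RR with normalised pair counts."""
    def auto(p):
        # unordered pairs within one catalogue
        i, j = np.triu_indices(len(p), k=1)
        d = np.hypot(p[i, 0] - p[j, 0], p[i, 1] - p[j, 1])
        return np.histogram(d, bins)[0] / (len(p) * (len(p) - 1) / 2)

    def cross(p, q):
        # all ordered pairs between the two catalogues
        d = np.hypot(p[:, None, 0] - q[None, :, 0],
                     p[:, None, 1] - q[None, :, 1]).ravel()
        return np.histogram(d, bins)[0] / (len(p) * len(q))

    dd, rr, dr = auto(data), auto(rand), cross(data, rand)
    safe = np.where(rr > 0, rr, 1.0)
    return np.where(rr > 0, (dd - 2 * dr + rr) / safe, 0.0)

rng = np.random.default_rng(1)
data = rng.uniform(0, 1, (300, 2))   # unclustered mock "galaxies"
rand = rng.uniform(0, 1, (600, 2))   # randoms over the same footprint
bins = np.linspace(0.05, 0.5, 6)
w = landy_szalay(data, rand, bins)   # consistent with zero for this mock
```

Because the randoms share the survey geometry, edge effects largely cancel, which is why the quality of the random catalogue (see the organised-randoms entry above) matters so much.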

    A Seismic Performance Classification Framework to Provide Increased Seismic Resilience

    Several performance measures are being used in modern seismic engineering applications, suggesting that seismic performance could be classified in a number of ways. This paper reviews a range of performance measures currently being adopted and then proposes a new seismic performance classification framework based on expected annual losses (EAL). The motivation for an EAL-based performance framework stems from the observation that, in addition to limiting lives lost during earthquakes, changes are needed to improve the resilience of our societies, and it is proposed that increased resilience in developed countries could be achieved by limiting monetary losses. In order to set suitable preliminary values of EAL for performance classification, values of EAL reported in the literature are reviewed. Uncertainties in current EAL estimates are discussed, and then an EAL-based seismic performance classification framework is proposed. It is proposed that the EAL should be computed on a storey-by-storey basis, in recognition that the EAL for different storeys of a building could vary significantly and that a single building may have multiple owners.
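The storey-by-storey EAL computation suggested above amounts to integrating each storey's mean loss ratio over the annual rate of exceedance of the ground-motion intensity, then summing the storeys. The sketch below is purely illustrative: the hazard curve and the linear vulnerability functions are invented numbers, not values from the paper.

```python
import numpy as np

# toy hazard: intensity measure (e.g. PGA in g) vs annual exceedance rate
im = np.linspace(0.05, 1.0, 50)
annual_rate = 1e-2 * (im / 0.1) ** -2.0      # illustrative lambda(im), decreasing

def storey_eal(loss_ratio):
    """One storey's EAL: integrate loss ratio against d(lambda).

    The rate falls with intensity, so the increments are negative and the
    sign is flipped to return a positive expected annual loss.
    """
    dlam = np.diff(annual_rate)
    mid = 0.5 * (loss_ratio[1:] + loss_ratio[:-1])   # trapezoidal rule
    return float(-(mid * dlam).sum())

loss_ground = np.clip(1.2 * im, 0, 1)   # more vulnerable ground storey (toy)
loss_upper = np.clip(0.6 * im, 0, 1)    # stiffer upper storey (toy)
building_eal = storey_eal(loss_ground) + storey_eal(loss_upper)
```

Keeping the integration per storey makes it straightforward to report a separate EAL to each owner, which is the motivation given in the abstract.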

    Asymmetric multi-antenna coded caching for location-dependent content delivery

    Efficient usage of in-device storage and computation capabilities is a key enabler for data-intensive applications such as immersive digital experiences. This paper proposes a location-dependent, multi-antenna coded caching-based content delivery scheme tailored specifically for wireless immersive viewing applications. First, a novel memory allocation process prioritizes content relevant to the identified wireless bottleneck areas. This enables a trade-off between local and global caching gains and results in unequal fractions of location-dependent multimedia content cached by each user. Then, a novel packet generation process is carried out during the subsequent delivery phase, given the asymmetric cache placement. During this phase, the number of packets transmitted to each user is the same, while the sizes of the packets are proportional to the corresponding location-dependent cache ratios. In this regard, each user is served with location-specific content using joint multicast beamforming and a multi-rate modulation scheme that simultaneously benefits from global caching and spatial multiplexing gains. Numerical experiments and mathematical analysis demonstrate significant performance gains compared to the state of the art.
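The asymmetric packetisation described above (equal packet counts, sizes scaled by each user's location-dependent cache ratio) can be sketched as follows. This is one plausible reading of the scheme, with hypothetical user names, ratios, and file size chosen only for illustration.

```python
# hypothetical location-dependent cache ratios learnt during placement
file_bits = 1_000_000
cache_ratios = {"user_A": 0.5, "user_B": 0.25, "user_C": 0.1}
n_packets = 4                                 # identical count for all users

def packet_sizes(ratio):
    """n_packets equal pieces whose total size scales with the cache ratio."""
    return [ratio * file_bits / n_packets] * n_packets

sizes = {user: packet_sizes(r) for user, r in cache_ratios.items()}
# every user gets 4 packets; user_A's packets are 5x larger than user_C's
```

Keeping the packet count identical lets all users be served in the same set of coded multicast transmissions, while the size asymmetry carries the location dependence.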

    Non-symmetric multi-antenna coded caching for location-dependent content delivery

    Immersive viewing, as the next-generation interface for human-computer interaction, is emerging as a wireless application. A genuinely wireless immersive experience necessitates immense data delivery with ultra-low latency, raising stringent requirements for future wireless networks. In this regard, efficient usage of in-device storage and computation capabilities is a potential candidate for addressing these requirements. In addition, recent advancements in multi-antenna transmission have significantly enhanced wireless communication. Hence, this paper proposes a novel location-based multi-antenna coded cache placement and delivery scheme. We first formulate a linear programming cache allocation problem to provide a uniform quality of experience in different network locations; then, cache placement is done for each location independently. Subsequently, based on the users’ spatial realizations, a transmission vector is created considering the diverse available memory at each user. Moreover, a weighted max-min optimization is used for the beamformers to support different transmission rates. Finally, numerical results are used to show the performance of the proposed scheme.
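The cache-allocation step above aims at a uniform quality of experience across locations. A hedged toy version of that max-min objective (not the paper's actual LP formulation) is: distribute a total cache budget so that the minimum "effective rate" (link rate plus cached share) is maximised, which for this simple model is solvable by bisection on a common water level.

```python
def equalising_alloc(rates, budget, iters=60):
    """Cache per location maximising min(rate + cache): a toy max-min LP.

    Bisects on the common level L; the budget needed to lift every
    location up to L is sum(max(0, L - rate)).
    """
    lo, hi = min(rates), max(rates) + budget
    for _ in range(iters):
        level = (lo + hi) / 2
        need = sum(max(0.0, level - r) for r in rates)
        lo, hi = (level, hi) if need <= budget else (lo, level)
    return [max(0.0, lo - r) for r in rates]

# hypothetical per-location wireless rates; worse spots receive more cache
alloc = equalising_alloc([4.0, 2.0, 1.0], budget=3.0)
```

Here the well-served location gets no cache while the bottleneck locations are lifted to a common effective rate, mirroring the uniform-QoE goal of the abstract.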

    Low-subpacketization multi-antenna coded caching for dynamic networks

    Multi-antenna coded caching combines a global caching gain, proportional to the total cache size in the network, with an additional spatial multiplexing gain that stems from multiple transmitting antennas. However, classic centralized coded caching schemes are not suitable for dynamic networks, as they require prior knowledge of the number of users to indicate what data should be cached at each user during the placement phase. On the other hand, fully decentralized schemes provide gains comparable to their centralized counterparts only when the number of users is very large. In this paper, we propose a novel multi-antenna coded caching scheme for dynamic networks where, instead of defining individual cache contents, we associate users with a limited set of predefined caching profiles. Then, during the delivery phase, we aim at achieving a combined caching and spatial multiplexing gain, comparable to a large extent with the ideal case of fully centralized schemes. The resulting scheme imposes small subpacketization and beamforming overheads, is robust under dynamic network conditions, and incurs a small finite-SNR performance loss compared with centralized schemes.
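The caching-profile idea above decouples cache placement from the user population: the placement phase fixes a small set of profiles, and users are merely mapped onto them as they arrive. The assignment rule below (simple round-robin) and the profile contents are illustrative assumptions, not the paper's scheme.

```python
# a small, fixed set of predefined caching profiles (placeholder contents)
n_profiles = 3
profile_content = {p: f"subfile_group_{p}" for p in range(n_profiles)}

def assign_profile(user_id):
    """Map a dynamic user population onto the fixed profiles (round-robin).

    Users joining or leaving never trigger a new placement phase; the
    delivery phase serves profile groups rather than individual caches.
    """
    return user_id % n_profiles

# six active users cover every profile; a seventh user reuses profile 1
covered = {assign_profile(u) for u in range(6)}
```

Because delivery operates on profile groups, the multicast coding opportunities survive user churn, which is what keeps the subpacketization small.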

    Low-complexity multicast beamforming for multi-stream multi-group communications

    In this paper, assuming a multi-antenna transmitter and multi-antenna receivers, we consider a multicast beamformer design for the weighted max-min-fairness (WMMF) problem in a multi-stream multi-group communication setup. Unlike the single-stream scenario, the WMMF objective in this setup is not equivalent to maximizing the minimum weighted SINR, due to the summation over the rates of multiple streams. Therefore, the non-convex problem at hand is first approximated with a convex one and then solved using Karush-Kuhn-Tucker (KKT) conditions. Then, a practically appealing closed-form solution is derived for both transmit and receive beamformers as a function of the dual variables. Finally, we use an iterative solution based on the sub-gradient method to solve for the mutually coupled and interdependent dual variables. The proposed solution does not rely on generic solvers and does not require any bisection loop for finding the achievable rates of the various streams. As a result, it significantly outperforms the state of the art in terms of computational cost and convergence speed.