
    Robust Optical Richness Estimation with Reduced Scatter

    Reducing the scatter between cluster mass and optical richness is a key goal for cluster cosmology from photometric catalogs. We consider various modifications to the red-sequence matched filter richness estimator of Rozo et al. (2009), and evaluate their impact on the scatter in X-ray luminosity at fixed richness. Most significantly, we find that deeper luminosity cuts can reduce the recovered scatter, with sigma_lnLX|lambda = 0.63 +/- 0.02 for clusters with M_500c >~ 1.6e14 h_70^-1 M_sun. The corresponding scatter in mass at fixed richness is sigma_lnM|lambda ~ 0.2-0.3 depending on the richness, comparable to that for total X-ray luminosity. We find that including blue galaxies in the richness estimate increases the scatter, as does weighting galaxies by their optical luminosity. We further demonstrate that our richness estimator is very robust. Specifically, the filter employed when estimating richness can be calibrated directly from the data, without requiring a priori calibration of the red sequence. We also demonstrate that the recovered richness is robust to up to 50% uncertainties in the galaxy background, as well as to the choice of photometric filter employed, so long as the filters span the 4000 A break of red-sequence galaxies. Consequently, our richness estimator can be used to compare richness estimates of different clusters, even if they do not share the same photometric data. Appendix 1 includes "easy-bake" instructions for implementing our optimal richness estimator, and we are releasing an implementation of the code that works with SDSS data, as well as an augmented maxBCG catalog with the lambda richness measured for each cluster.
    Comment: Submitted to ApJ. 20 pages in emulateapj format.
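The matched-filter richness described above is defined by a fixed-point condition: lambda equals the summed membership probabilities of the galaxies, where each probability depends on lambda itself. A minimal sketch of that iteration follows; the filter values, background densities, and the plain fixed-point loop are illustrative stand-ins, not the released SDSS code (which also applies radius and luminosity cuts).

```python
import numpy as np

def richness(u, b, lam0=10.0, tol=1e-8, max_iter=200):
    """Solve the matched-filter fixed point lambda = sum_i p_i, where
    p_i = lam*u_i / (lam*u_i + b_i) is galaxy i's membership probability.
    u_i: cluster filter value per galaxy; b_i: local background density.
    Toy version of the estimator's core equation."""
    u, b = np.asarray(u, float), np.asarray(b, float)
    lam = lam0
    for _ in range(max_iter):
        p = lam * u / (lam * u + b)   # membership probabilities at current lambda
        new = p.sum()
        if abs(new - lam) < tol:
            break
        lam = new
    return lam, p

# Synthetic cluster field: 30 likely members (filter >> background)
# plus 100 field galaxies (background >> filter).
rng = np.random.default_rng(0)
u = np.concatenate([rng.uniform(0.5, 1.0, 30), rng.uniform(1e-4, 1e-3, 100)])
b = np.concatenate([rng.uniform(0.01, 0.05, 30), rng.uniform(0.5, 1.0, 100)])
lam, p = richness(u, b)
```

The recovered lambda lands near the true member count because field galaxies receive probabilities close to zero, which is why the estimator tolerates large background errors.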

    Orientation bias of optically selected galaxy clusters and its impact on stacked weak-lensing analyses

    Weak-lensing measurements of the averaged shear profiles of galaxy clusters binned by some proxy for cluster mass are commonly converted to cluster mass estimates under the assumption that these cluster stacks have spherical symmetry. In this paper, we test whether this assumption holds for optically selected clusters binned by estimated optical richness. Using mock catalogues created from N-body simulations populated realistically with galaxies, we ran a suite of optical cluster finders and estimated their optical richness. We binned galaxy clusters by true cluster mass and estimated optical richness and measured the ellipticity of these stacks. We find that the processes of optical cluster selection and richness estimation are biased, leading to stacked structures that are elongated along the line of sight. We show that weak lensing alone cannot measure the size of this orientation bias. Weak-lensing masses of stacked optically selected clusters are overestimated by up to 3–6 per cent when clusters can be uniquely associated with haloes. This effect is large enough to lead to significant biases in the cosmological parameters derived from large surveys like the Dark Energy Survey, if not calibrated via simulations or fitted simultaneously. This bias probably also contributes to the observed discrepancy between the observed and predicted Sunyaev–Zel’dovich signal of optically selected clusters.

    Constraining the Mass-Richness Relationship of redMaPPer Clusters with Angular Clustering

    The potential of using cluster clustering for calibrating the mass-observable relation of galaxy clusters has been recognized theoretically for over a decade. Here, we demonstrate the feasibility of this technique to achieve high precision mass calibration using redMaPPer clusters in the Sloan Digital Sky Survey North Galactic Cap. By including cross-correlations between several richness bins in our analysis we significantly improve the statistical precision of our mass constraints. The amplitude of the mass-richness relation is constrained to 7% statistical precision. However, the error budget is systematics dominated, reaching an 18% total error that is dominated by theoretical uncertainty in the bias-mass relation for dark matter halos. We perform a detailed treatment of the effects of assembly bias on our analysis, finding that the contribution of such effects to our parameter uncertainties is somewhat greater than that of measurement noise. We confirm the results from Miyatake et al. (2015) that the clustering amplitude of redMaPPer clusters depends on galaxy concentration, and provide additional evidence in support of this effect being due to some form of assembly bias. The results presented here demonstrate the power of cluster clustering for mass calibration and cosmology provided the current theoretical systematics can be ameliorated.
    Comment: 18 pages, 9 figures.
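The mass calibration step described above rests on inverting a bias-mass relation b(M): a measured clustering amplitude implies a bias, and the bias implies a mass. A minimal sketch, assuming a toy power-law b(M) (the exponent, pivot, and amplitude here are illustrative, not the calibrated halo-bias fit, e.g. Tinker et al., used in such analyses):

```python
import numpy as np
from scipy.optimize import brentq

def bias(M, b0=2.0, M0=1e14, alpha=0.5):
    """Toy power-law bias-mass relation (illustrative stand-in)."""
    return b0 * (M / M0) ** alpha

def mass_from_bias(b_obs, lo=1e12, hi=1e16):
    """Invert b(M) = b_obs for M by bracketed root finding."""
    return brentq(lambda M: bias(M) - b_obs, lo, hi)

# A measured cluster correlation amplitude w ~ b^2 * w_dm yields
# b_obs = sqrt(w / w_dm); here we just check the inversion round-trips.
M_true = 3e14
M_rec = mass_from_bias(bias(M_true))
```

The abstract's point about systematics follows directly from this picture: any theoretical error in b(M) propagates straight into the recovered mass, which is why that term dominates the 18% total error budget.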

    SunPy: Python for Solar Physics. An implementation for local correlation tracking

    The Python programming language has experienced great progress and growing use in the scientific community in recent years, with a direct impact on solar physics. Python is a very mature language, and almost any fundamental feature you might need is already implemented in a library or module. SunPy is a community effort to develop, taking advantage of Python, tools for the processing and analysis of solar data. In this work we present a particular development, based on Python, for the analysis of proper motions in time series of images through the local correlation tracking algorithm. A graphical user interface allows the user to select different parameters for the computation, visualization and analysis of flow fields.
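The core of local correlation tracking is cross-correlating a subwindow of one frame against the next frame and reading the displacement off the correlation peak. A minimal sketch of that step, assuming FFT-based correlation and integer-pixel accuracy (real LCT codes add windowed apodization and subpixel peak interpolation):

```python
import numpy as np

def lct_shift(win1, win2):
    """Estimate the displacement of win2 relative to win1 from the peak
    of their circular cross-correlation, computed via FFT."""
    f1 = np.fft.fft2(win1 - win1.mean())
    f2 = np.fft.fft2(win2 - win2.mean())
    cc = np.fft.ifft2(np.conj(f1) * f2).real   # cc[k] = sum_n w1[n] * w2[n+k]
    iy, ix = np.unravel_index(np.argmax(cc), cc.shape)
    # Wrap peak coordinates into the signed shift range (-N/2, N/2].
    ny, nx = win1.shape
    dy = iy if iy <= ny // 2 else iy - ny
    dx = ix if ix <= nx // 2 else ix - nx
    return int(dy), int(dx)

# Synthetic test: the second frame is the first one shifted by (3, -2).
rng = np.random.default_rng(1)
frame1 = rng.standard_normal((64, 64))
frame2 = np.roll(frame1, (3, -2), axis=(0, 1))
```

Applying this per subwindow over a grid of positions yields the flow field the abstract refers to.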

    The Effects of Halo Assembly Bias on Self-Calibration in Galaxy Cluster Surveys

    Self-calibration techniques for analyzing galaxy cluster counts utilize the abundance and the clustering amplitude of dark matter halos. These properties simultaneously constrain cosmological parameters and the cluster observable-mass relation. It was recently discovered that the clustering amplitude of halos depends not only on the halo mass, but also on various secondary variables, such as the halo formation time and the concentration; these dependences are collectively termed assembly bias. Applying a modified Fisher matrix formalism, we explore whether these secondary variables have a significant impact on the study of dark energy properties using the self-calibration technique in current (SDSS) and near-future (DES, SPT, and LSST) cluster surveys. The impact of the secondary dependence is determined by (1) the scatter in the observable-mass relation and (2) the correlation between observable and secondary variables. We find that for optical surveys, the secondary dependence does not significantly influence an SDSS-like survey; however, it may affect a DES-like survey (given the high scatter currently expected from optical clusters) and an LSST-like survey (even for low scatter values and low correlations). For an SZ survey such as SPT, the impact of secondary dependence is insignificant if the scatter is 20% or lower but can be enhanced by the potential high scatter values introduced by a highly correlated background. Accurate modeling of the assembly bias is necessary for cluster self-calibration in the era of precision cosmology.
    Comment: 13 pages, 5 figures, replaced to match published version.
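The Fisher-matrix machinery behind such forecasts can be sketched compactly: for Poisson-distributed counts in mass bins, F_ab = sum over bins of (dN/d theta_a)(dN/d theta_b)/N. The count model below is a deliberately simple power law whose amplitude and slope stand in for cosmological and scaling-relation parameters; it is a toy, not the paper's modified formalism.

```python
import numpy as np

def counts(theta, mass_bins):
    """Toy expected cluster counts per mass bin: N(M) = A * M^(-alpha)."""
    A, alpha = theta
    return A * mass_bins ** (-alpha)

def fisher(theta, mass_bins, eps=1e-6):
    """Poisson Fisher matrix F_ab = sum_bins (dN/da)(dN/db) / N,
    with derivatives taken by central differences."""
    theta = np.asarray(theta, float)
    N = counts(theta, mass_bins)
    grads = []
    for a in range(len(theta)):
        tp, tm = theta.copy(), theta.copy()
        tp[a] += eps
        tm[a] -= eps
        grads.append((counts(tp, mass_bins) - counts(tm, mass_bins)) / (2 * eps))
    g = np.array(grads)                      # (n_params, n_bins)
    return g @ np.diag(1.0 / N) @ g.T

F = fisher([100.0, 1.5], np.array([1.0, 2.0, 4.0, 8.0]))
sigma = np.sqrt(np.diag(np.linalg.inv(F)))  # marginalized 1-sigma forecasts
```

Adding clustering-amplitude observables, secondary-variable dependences, and their correlations enlarges this matrix; the abstract's conclusions follow from how those extra rows and columns degrade the dark energy constraints.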

    Generalized Task-Parameterized Skill Learning

    Programming by demonstration has recently gained much attention due to its user-friendly and natural way to transfer human skills to robots. In order to facilitate the learning of multiple demonstrations and meanwhile generalize to new situations, a task-parameterized Gaussian mixture model (TP-GMM) has been recently developed. This model has achieved reliable performance in areas such as human-robot collaboration and dual-arm manipulation. However, the crucial task frames and associated parameters in this learning framework are often set by the human teacher, which raises three problems that have not been addressed yet: (i) task frames are treated equally, without considering their individual importance, (ii) task parameters are defined without taking into account additional task constraints, such as robot joint limits and motion smoothness, and (iii) a fixed number of task frames are pre-defined regardless of whether some of them may be redundant or even irrelevant for the task at hand. In this paper, we generalize the task-parameterized learning by addressing the aforementioned problems. Moreover, we provide a novel learning perspective which allows the robot to refine and adapt previously learned skills in a low dimensional space. Several examples are studied in both simulated and real robotic systems, showing the applicability of our approach.
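The TP-GMM reproduction step the abstract builds on maps each frame's local Gaussian into the global frame via its transform (A, b) and multiplies the resulting Gaussians, so frames with tighter local covariance pull the result harder. A minimal sketch of that product (equal-weight frames; the paper's contribution is precisely to go beyond this):

```python
import numpy as np

def gaussian_product(frames):
    """Product of per-frame Gaussians mapped into the global frame.
    Each frame is (A, b, mu, cov): the local Gaussian N(mu, cov) becomes
    N(A mu + b, A cov A^T) globally, and the product combines precisions."""
    Lam = 0.0   # accumulated precision
    eta = 0.0   # accumulated precision-weighted mean
    for A, b, mu, cov in frames:
        cov_g = A @ cov @ A.T
        mu_g = A @ mu + b
        P = np.linalg.inv(cov_g)
        Lam = Lam + P
        eta = eta + P @ mu_g
    cov_out = np.linalg.inv(Lam)
    return cov_out @ eta, cov_out

# Two identity frames with unit covariances and means [0,0] and [2,0]:
# the product sits halfway, with halved covariance.
I = np.eye(2)
mu, cov = gaussian_product([(I, np.zeros(2), np.zeros(2), I),
                            (I, np.zeros(2), np.array([2.0, 0.0]), I)])
```

Problem (i) in the abstract corresponds to the implicit equal weighting here: every frame contributes its full precision regardless of relevance.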

    Hybrid Probabilistic Trajectory Optimization Using Null-Space Exploration

    In the context of learning from demonstration, human examples are usually imitated in either Cartesian or joint space. However, this treatment might result in undesired movement trajectories in either space. This is particularly important for motion skills such as striking, which typically imposes motion constraints in both spaces. In order to address this issue, we consider a probabilistic formulation of dynamic movement primitives, and apply it to adapt trajectories in Cartesian and joint spaces simultaneously. The probabilistic treatment allows the robot to capture the variability of multiple demonstrations and facilitates the mixture of trajectory constraints from both spaces. In addition to this proposed hybrid space learning, the robot often needs to consider additional constraints such as motion smoothness and joint limits. On the basis of Jacobian-based inverse kinematics, we propose to exploit the robot null space so as to unify trajectory constraints from Cartesian and joint spaces while satisfying additional constraints. Evaluations of hand-shaking and striking tasks carried out with a humanoid robot demonstrate the applicability of our approach.
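The null-space exploitation mentioned above has a standard differential-kinematics form: the pseudoinverse solution tracks the Cartesian task, and any secondary joint-space motion is projected through (I - J+J) so it cannot disturb the end-effector. A minimal sketch (the toy Jacobian below is illustrative; the paper embeds this in a probabilistic trajectory optimization):

```python
import numpy as np

def nullspace_ik_step(J, dx, dq_secondary):
    """One differential-IK step: track Cartesian velocity dx with the
    pseudoinverse solution, and add a secondary joint-space motion
    (e.g. toward smoother or limit-avoiding postures) projected into
    the null space of J."""
    J_pinv = np.linalg.pinv(J)
    N = np.eye(J.shape[1]) - J_pinv @ J     # null-space projector
    return J_pinv @ dx + N @ dq_secondary

# Redundant 3-joint arm with a 2-D task: one degree of freedom remains
# free for the secondary objective.
J = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 1.0]])
dx = np.array([0.1, -0.2])
dq0 = np.array([0.5, -0.3, 0.2])            # hypothetical secondary motion
dq = nullspace_ik_step(J, dx, dq0)
```

Because N annihilates everything J can see, J @ dq still equals dx exactly while the redundant joint motion absorbs the secondary constraint.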

    Kernelized movement primitives

    Imitation learning has been studied widely as a convenient way to transfer human skills to robots. This learning approach is aimed at extracting relevant motion patterns from human demonstrations and subsequently applying these patterns to different situations. Despite the many advancements that have been achieved, solutions for coping with unpredicted situations (e.g., obstacles and external perturbations) and high-dimensional inputs are still largely absent. In this paper, we propose a novel kernelized movement primitive (KMP), which allows the robot to adapt the learned motor skills and fulfill a variety of additional constraints arising over the course of a task. Specifically, KMP is capable of learning trajectories associated with high-dimensional inputs owing to the kernel treatment, which in turn renders a model with fewer open parameters in contrast to methods that rely on basis functions. Moreover, we extend our approach by exploiting local trajectory representations in different coordinate systems that describe the task at hand, endowing KMP with reliable extrapolation capabilities in broader domains. We apply KMP to the learning of time-driven trajectories as a special case, where a compact parametric representation describing a trajectory and its first-order derivative is utilized. In order to verify the effectiveness of our method, several examples of trajectory modulations and extrapolations associated with time inputs, as well as trajectory adaptations with high-dimensional inputs, are provided.

    An Uncertainty-Aware Minimal Intervention Control Strategy Learned from Demonstrations

    Motivated by the desire to have robots physically present in human environments, in recent years we have witnessed an emergence of different approaches for learning active compliance. Some of the most compelling solutions exploit a minimal intervention control principle, correcting deviations from a goal only when necessary, and among those that follow this concept, several probabilistic techniques have stood out from the rest. However, these approaches are prone to requiring several task demonstrations for proper gain estimation and to generating unpredictable robot motions in the face of uncertainty. Here we present a Programming by Demonstration approach for uncertainty-aware impedance regulation, aimed at making the robot compliant, and safe to interact with, when the uncertainty about its predicted actions is high. Moreover, we propose a data-efficient strategy, based on the energy observed during demonstrations, to achieve minimal intervention control when the uncertainty is low. The approach is validated in an experimental scenario, where a human collaboratively moves an object with a 7-DoF torque-controlled robot.
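The uncertainty-aware regulation idea above amounts to a gain schedule: map the predictive variance of the learned policy to impedance stiffness, stiffening when confident and softening when uncertain. A minimal sketch with a linear schedule; the bounds, scale, and linear mapping are illustrative assumptions, not the paper's energy-based gain estimation:

```python
import numpy as np

def stiffness_from_uncertainty(var, k_min=5.0, k_max=200.0, var_max=1.0):
    """Map predictive variance to impedance stiffness: low uncertainty ->
    stiff, accurate tracking; high uncertainty -> compliant, safe to
    interact with. Values are hypothetical gains, clipped at var_max."""
    alpha = np.clip(var / var_max, 0.0, 1.0)
    return k_max - alpha * (k_max - k_min)
```

In a torque-controlled loop this stiffness would multiply the position error, so the robot automatically yields exactly where its learned model is least trustworthy.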