
    Non-negative matrix factorization for self-calibration of photometric redshift scatter in weak lensing surveys

    Photo-z error is one of the major sources of systematics degrading the accuracy of weak lensing cosmological inferences. Zhang et al. (2010) proposed a self-calibration method combining galaxy-galaxy correlations and galaxy-shear correlations between different photo-z bins. Fisher matrix analysis shows that it can determine the rate of photo-z outliers at the 0.01-1% level using photometric data alone, without relying on any prior knowledge. In this paper, we develop a new algorithm to implement this method by solving a constrained nonlinear optimization problem arising in the self-calibration process. Based on the techniques of fixed-point iteration and non-negative matrix factorization, the proposed algorithm can efficiently and robustly reconstruct the scattering probabilities between the true-z and photo-z bins. The algorithm has been tested extensively by applying it to mock data from simulated stage IV weak lensing projects. We find that the algorithm recovers the scatter rates at the level of 0.01-1%, and the true mean redshifts of photo-z bins at the level of 0.001, which may satisfy the requirements of future lensing surveys. Comment: 12 pages, 6 figures. Accepted for publication in ApJ. Updated to match the published version.
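
    One building block named in the abstract is non-negative matrix factorization itself. The paper's full algorithm adds fixed-point iteration and the constraint that scattering probabilities per photo-z bin sum to one; as a minimal sketch of only the NMF ingredient, the classic Lee-Seung multiplicative updates (all matrix sizes and data here are toy values, not from the paper) look like this:

```python
import numpy as np

def nmf_multiplicative(V, k, n_iter=500, seed=0):
    """Lee-Seung multiplicative updates minimizing ||V - W @ H||_F^2.

    Generic NMF only: the paper's self-calibration algorithm wraps a
    routine like this in a fixed-point iteration with additional
    probability (sum-to-one) constraints on the factors.
    """
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, k)) + 1e-6   # strictly positive initialization
    H = rng.random((k, m)) + 1e-6
    eps = 1e-12                     # guards against division by zero
    for _ in range(n_iter):
        # updates preserve non-negativity and monotonically
        # decrease the Frobenius reconstruction error
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# toy check: factor a small non-negative matrix with k = 2
V = np.array([[1.0, 2.0], [2.0, 4.1], [3.0, 6.0]])
W, H = nmf_multiplicative(V, k=2)
err = np.linalg.norm(V - W @ H)
```

Because the updates are multiplicative, any factor entry initialized positive stays positive, which is what makes this family of updates attractive for reconstructing probabilities.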

    Reliable dual-redundant sensor failure detection and identification for the NASA F-8 DFBW aircraft

    A technique was developed which provides reliable failure detection and identification (FDI) for a dual-redundant subset of the flight control sensors onboard the NASA F-8 digital fly-by-wire (DFBW) aircraft. The technique was successfully applied to simulated sensor failures on the real-time F-8 digital simulator and to sensor failures injected on telemetry data from a test flight of the F-8 DFBW aircraft. For failure identification, the technique utilized the analytic redundancy which exists as functional and kinematic relationships among the various quantities measured by the different control sensor types. The technique can be used not only in a dual-redundant sensor system, but also in a more highly redundant system after FDI by conventional voting techniques has reduced the number of unfailed sensors of a particular type to two. In addition, the technique can be easily extended to the case in which only one sensor of a particular type is available.
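
    The core idea, using an analytically derived value to break the tie when two like sensors disagree, can be sketched as follows. This is an illustrative toy, not the paper's actual detection logic; the function name, threshold, and numbers are assumptions:

```python
def identify_failed_sensor(m_a, m_b, analytic_estimate, threshold):
    """Dual-redundant FDI sketch.

    With only two like sensors, a disagreement detects a failure but
    cannot identify the failed unit. An analytic estimate, derived
    from kinematic/functional relations among *other* sensor types,
    casts the tie-breaking vote.
    """
    if abs(m_a - m_b) <= threshold:
        return None                      # no failure detected
    # failure detected: blame whichever sensor lies farther
    # from the analytically derived value
    if abs(m_a - analytic_estimate) > abs(m_b - analytic_estimate):
        return "A"
    return "B"

# example: sensor A drifts to 5.0 while B agrees with the
# analytic estimate of the same quantity
failed = identify_failed_sensor(5.0, 1.1, analytic_estimate=1.0,
                                threshold=0.5)
```

The same comparison against the analytic estimate is what allows the extension mentioned in the abstract, monitoring a single remaining sensor of a given type.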

    Phoenix-XNS - A Miniature Real-Time Navigation System for LEO Satellites

    The paper describes the development of a miniature GPS receiver with integrated real-time navigation system for orbit determination of satellites in low Earth orbit (LEO). The Phoenix-XNS receiver is based on a commercial-off-the-shelf (COTS) single-frequency GPS receiver board that has been qualified for use in a moderate space environment. Its firmware is specifically designed for space applications and accounts for the high signal dynamics in the acquisition and tracking process. The supplementary eXtended Navigation System (XNS) employs an elaborate force model and a 24-state Kalman filter to provide a smooth and continuous reduced-dynamics navigation solution even in case of restricted GPS availability. Through the use of the GRAPHIC code-carrier combination, ionospheric path delays can be fully eliminated in the filter, which overcomes the main limitation of conventional single-frequency receivers. Tests conducted in a signal simulator test bed have demonstrated a filtered navigation solution accuracy of better than 1 m (3D rms).
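
    The GRAPHIC combination mentioned in the abstract exploits the fact that the ionosphere delays the code measurement by +I but advances the carrier phase by -I, so their average is ionosphere-free (at the cost of a phase-ambiguity bias, ignored in this toy sketch; the numbers below are illustrative, not from the paper):

```python
def graphic(code_range, carrier_phase_range):
    """GRoup And PHase Ionospheric Correction (GRAPHIC):
    average of code and carrier-phase ranges. The equal-and-opposite
    ionospheric terms (+I on code, -I on phase) cancel."""
    return 0.5 * (code_range + carrier_phase_range)

# toy single-frequency measurements (metres):
rho = 20_000_000.0   # true geometric range
iono = 5.0           # ionospheric path delay
code = rho + iono    # code pseudorange: delayed by the ionosphere
phase = rho - iono   # carrier phase range: advanced (ambiguity ignored)
iono_free = graphic(code, phase)
```

Carrying this combination in the Kalman filter is what lets a single-frequency receiver avoid the ionospheric error that would otherwise dominate its orbit solution.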

    AIS-BN: An Adaptive Importance Sampling Algorithm for Evidential Reasoning in Large Bayesian Networks

    Stochastic sampling algorithms, while an attractive alternative to exact algorithms in very large Bayesian network models, have been observed to perform poorly in evidential reasoning with extremely unlikely evidence. To address this problem, we propose an adaptive importance sampling algorithm, AIS-BN, that shows promising convergence rates even under extreme conditions and seems to outperform the existing sampling algorithms consistently. Three sources of this performance improvement are (1) two heuristics for initialization of the importance function that are based on the theoretical properties of importance sampling in finite-dimensional integrals and the structural advantages of Bayesian networks, (2) a smooth learning method for the importance function, and (3) a dynamic weighting function for combining samples from different stages of the algorithm. We tested the performance of the AIS-BN algorithm along with two state-of-the-art general-purpose sampling algorithms, likelihood weighting (Fung and Chang, 1989; Shachter and Peot, 1989) and self-importance sampling (Shachter and Peot, 1989). We used in our tests three large real Bayesian network models available to the scientific community: the CPCS network (Pradhan et al., 1994), the PathFinder network (Heckerman, Horvitz, and Nathwani, 1990), and the ANDES network (Conati, Gertner, VanLehn, and Druzdzel, 1997), with evidence as unlikely as 10^-41. While the AIS-BN algorithm always performed better than the other two algorithms, in the majority of the test cases it achieved orders of magnitude improvement in precision of the results. Improvement in speed given a desired precision is even more dramatic, although we are unable to report numerical results here, as the other algorithms almost never achieved the precision reached even by the first few iterations of the AIS-BN algorithm.
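
    Why unlikely evidence ruins naive sampling, and why a well-adapted importance function fixes it, can be seen in a one-dimensional toy (not a Bayesian network, and not the AIS-BN algorithm itself): estimating the rare tail probability P(X > 4) for X ~ N(0, 1). Sampling from the prior almost never hits the event, whereas a proposal centred on the rare region gives a low-variance weighted estimate:

```python
import numpy as np

rng = np.random.default_rng(1)

def log_norm_pdf(x, mu=0.0):
    """Log density of N(mu, 1)."""
    return -0.5 * (x - mu) ** 2 - 0.5 * np.log(2 * np.pi)

n = 100_000
# importance sampling: draw from a proposal q = N(4, 1) that
# concentrates mass on the rare event {x > 4}
x = rng.normal(4.0, 1.0, n)
# importance weights p(x) / q(x), computed in log space for stability
w = np.exp(log_norm_pdf(x) - log_norm_pdf(x, mu=4.0))
p_hat = np.mean(w * (x > 4.0))   # unbiased estimate of P(X > 4)

# naive Monte Carlo for comparison: with p ~ 3.2e-5, most runs
# of this size see only a handful of hits (or none at all)
p_naive = np.mean(rng.normal(0.0, 1.0, n) > 4.0)
```

AIS-BN's contribution is to *learn* such a proposal automatically inside a Bayesian network, starting from heuristic initializations and smoothly updating the importance function, then combining samples from the learning stages with a dynamic weighting function.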

    Interplanetary Guidance System Requirements Study. Volume 2 - Computer Program Descriptions. Part 2 - Performance Assessment of Midcourse Guidance Systems

    Mathematical model, program description, and user's guide to digital computer programs for interplanetary mission guidance and control.

    Response Surface Methodology's Steepest Ascent and Step Size Revisited

    Response Surface Methodology (RSM) searches for the input combination maximizing the output of a real system or its simulation. RSM is a heuristic that locally fits first-order polynomials and estimates the corresponding steepest ascent (SA) paths. However, SA is scale-dependent, and its step size is selected intuitively. To tackle these two problems, this paper derives novel techniques combining mathematical statistics and mathematical programming. Technique 1, called 'adapted' SA (ASA), accounts for the covariances between the components of the estimated local gradient. ASA is scale-independent. The step-size problem is solved tentatively. Technique 2 does follow the SA direction, but with a step size inspired by ASA. Mathematical properties of the two techniques are derived and interpreted; numerical examples illustrate these properties. The search directions of the two techniques are explored in Monte Carlo experiments. These experiments show that, in general, ASA gives a better search direction than SA.
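
    The classic SA step that the paper improves on can be sketched in a few lines: fit a first-order polynomial to simulation outputs observed at a small local design, and take the estimated gradient as the search direction. The design, response surface, and noise level below are toy assumptions, and the sketch shows only classic SA, not the paper's ASA correction:

```python
import numpy as np

rng = np.random.default_rng(2)

# toy simulation output y = 3*x1 + 1*x2 + noise, observed at a
# 2^2 factorial design plus a centre point around the current input
X = np.array([[-1.0, -1.0], [1.0, -1.0],
              [-1.0,  1.0], [1.0,  1.0], [0.0, 0.0]])
y = 3.0 * X[:, 0] + 1.0 * X[:, 1] + rng.normal(0.0, 0.1, len(X))

# first-order polynomial fit y ~ b0 + b1*x1 + b2*x2 by OLS
A = np.column_stack([np.ones(len(X)), X])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
grad = beta[1:]                      # estimated local gradient

# classic steepest-ascent direction: the gradient itself.
# This is scale-dependent (rescale x2's units and the direction
# changes), which is exactly the defect ASA addresses by folding
# in the covariance matrix of the gradient estimate.
sa_dir = grad / np.linalg.norm(grad)
```

The step size along `sa_dir` is the second open choice; in classic RSM it is picked intuitively, and the paper's Technique 2 replaces that intuition with an ASA-inspired rule.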

    Driving with Style: Inverse Reinforcement Learning in General-Purpose Planning for Automated Driving

    Behavior and motion planning play an important role in automated driving. Traditionally, behavior planners instruct local motion planners with predefined behaviors. Due to the high scene complexity in urban environments, unpredictable situations may occur in which behavior planners fail to match predefined behavior templates. Recently, general-purpose planners have been introduced, combining behavior and local motion planning. These general-purpose planners allow behavior-aware motion planning given a single reward function. However, two challenges arise: first, this function has to map a complex feature space into rewards; second, the reward function has to be manually tuned by an expert, which becomes a tedious task. In this paper, we propose an approach that relies on human driving demonstrations to automatically tune reward functions. This study offers important insights into the driving style optimization of general-purpose planners with maximum entropy inverse reinforcement learning. We evaluate our approach based on the expected value difference between learned and demonstrated policies. Furthermore, we compare the similarity of human-driven trajectories with optimal policies of our planner under learned and expert-tuned reward functions. Our experiments show that we are able to learn reward functions exceeding the level of manual expert tuning without prior domain knowledge. Comment: Appeared at IROS 2019. Accepted version. Added/updated footnote, minor correction in preliminaries.
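
    The maximum entropy IRL principle behind this approach, adjust a linear reward r(a) = w . phi(a) until the soft-max policy's expected features match the empirical features of the demonstrations, can be illustrated in a tiny one-step setting. The three actions, their features, and the demonstration counts are toy assumptions; the paper applies the same gradient to full driving trajectories in its planner:

```python
import numpy as np

# three candidate actions with hand-picked 2-d feature vectors
phi = np.array([[1.0, 0.0],
                [0.0, 1.0],
                [1.0, 1.0]])
# "expert" demonstrations: observed action indices
demos = np.array([2, 0, 1, 2])
mu_expert = phi[demos].mean(axis=0)   # empirical feature expectations

w = np.zeros(2)   # reward weights to be learned
for _ in range(2000):
    # MaxEnt policy: soft-max over rewards r(a) = w . phi(a)
    logits = phi @ w
    p = np.exp(logits - logits.max())
    p /= p.sum()
    mu_policy = p @ phi               # expected features under policy
    # gradient of the MaxEnt log-likelihood: feature matching
    w += 0.1 * (mu_expert - mu_policy)
```

At convergence the learned reward makes the planner's action distribution reproduce the demonstrated feature statistics, which is the sense in which the demonstrations "tune" the reward function.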