    Efficient Second-Order Shape-Constrained Function Fitting

    We give an algorithm to compute a one-dimensional shape-constrained function that best fits given data in weighted $L_\infty$ norm. We give a single algorithm that works for a variety of commonly studied shape constraints including monotonicity, Lipschitz-continuity and convexity, and more generally, any shape constraint expressible by bounds on first- and/or second-order differences. Our algorithm computes an approximation with additive error $\varepsilon$ in $O(n \log \frac{U}{\varepsilon})$ time, where $U$ captures the range of input values. We also give a simple greedy algorithm that runs in $O(n)$ time for the special case of unweighted $L_\infty$ convex regression. These are the first (near-)linear-time algorithms for second-order-constrained function fitting. To achieve these results, we use a novel geometric interpretation of the underlying dynamic programming problem. We further show that a generalization of the corresponding problems to directed acyclic graphs (DAGs) is as difficult as linear programming. Comment: accepted for WADS 2019; v2 fixes various typos.
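
    Not the paper's algorithm, but a minimal sketch of the same binary-search-over-$\varepsilon$ structure for the unweighted $L_\infty$ convex-regression special case: a convex sequence lies within $\pm\varepsilon$ of the data exactly when the greatest convex minorant of the upper band $y+\varepsilon$ stays above the lower band $y-\varepsilon$. Function names and the example data below are invented for illustration.

```python
import numpy as np

def convex_minorant(upper):
    """Greatest convex minorant of the points (i, upper[i]), evaluated at every i."""
    n = len(upper)
    hull = []  # indices of the lower convex hull vertices, left to right
    for i in range(n):
        while len(hull) >= 2:
            a, b = hull[-2], hull[-1]
            # drop b if it lies on or above the chord from a to i
            if (upper[b] - upper[a]) * (i - a) >= (upper[i] - upper[a]) * (b - a):
                hull.pop()
            else:
                break
        hull.append(i)
    # piecewise-linear interpolation through the hull vertices
    return np.interp(np.arange(n), hull, [upper[j] for j in hull])

def feasible(y, eps):
    """Is there a convex sequence within eps of y in the (unweighted) L_infinity sense?"""
    return bool(np.all(convex_minorant(y + eps) >= y - eps))

def linf_convex_fit(y, tol=1e-6):
    """Binary search on the error, giving the O(n log(U/tol)) flavor of the result."""
    lo, hi = 0.0, float(np.ptp(y))      # U: the range of the input values
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if feasible(y, mid) else (mid, hi)
    return hi, convex_minorant(y + hi)  # achievable error and one witness fit

y = np.array([3.0, 1.0, 2.5, 2.0, 4.0, 7.0])
eps, fit = linf_convex_fit(y)
print(eps, fit)
```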

    A modal approach to hyper-redundant manipulator kinematics

    This paper presents novel and efficient kinematic modeling techniques for “hyper-redundant” robots. This approach is based on a “backbone curve” that captures the robot's macroscopic geometric features. The inverse kinematic, or “hyper-redundancy resolution,” problem reduces to determining the time-varying behavior of the backbone curve. To solve the inverse kinematics problem efficiently, the authors introduce a “modal” approach, in which the set of intrinsic backbone curve shape functions is restricted to a modal form. The singularities of the modal approach, modal non-degeneracy conditions, and modal switching are considered. For discretely segmented morphologies, the authors introduce “fitting” algorithms that determine the actuator displacements that cause the discrete manipulator to adhere to the backbone curve. These techniques are demonstrated with planar and spatial mechanism examples, and have also been implemented on a 30-degree-of-freedom robot prototype.
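
    As a rough illustration of the modal idea (not the authors' formulation), the sketch below restricts a planar backbone curve's curvature to two assumed mode shapes and solves for the modal coefficients that place the curve's tip at a desired point; the mode functions, discretization, and target are all made up for the example.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid
from scipy.optimize import least_squares

S = np.linspace(0.0, 1.0, 200)                   # arclength samples, unit-length backbone
MODES = [np.ones_like(S), np.sin(np.pi * S)]     # assumed curvature shape functions

def backbone(a):
    """Integrate curvature kappa(s) = sum_i a_i * mode_i(s) to get the planar curve."""
    kappa = sum(ai * mi for ai, mi in zip(a, MODES))
    theta = cumulative_trapezoid(kappa, S, initial=0.0)   # heading angle along the curve
    x = cumulative_trapezoid(np.cos(theta), S, initial=0.0)
    y = cumulative_trapezoid(np.sin(theta), S, initial=0.0)
    return x, y

def solve_modal_ik(target, a0=(0.1, 0.1)):
    """Find modal coefficients whose backbone tip reaches the target point."""
    def residual(a):
        x, y = backbone(a)
        return [x[-1] - target[0], y[-1] - target[1]]
    return least_squares(residual, a0).x

a = solve_modal_ik(target=(0.7, 0.4))
x, y = backbone(a)
print("modal coefficients:", a, "tip:", (x[-1], y[-1]))
```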

    B-spline techniques for volatility modeling

    This paper is devoted to the application of B-splines to volatility modeling, specifically the calibration of the leverage function in stochastic local volatility models and the parameterization of an arbitrage-free implied volatility surface calibrated to sparse option data. We use an extension of classical B-splines obtained by including basis functions with infinite support. We first revisit the application of shape-constrained B-splines to the estimation of conditional expectations, not merely from a scatter plot but also from the given marginal distributions. An application is the Monte Carlo calibration of stochastic local volatility models by Markov projection. We then present a new technique for the calibration of an implied volatility surface to sparse option data. We use a B-spline parameterization of the Radon-Nikodym derivative of the underlying's risk-neutral probability density with respect to a roughly calibrated base model. We show that this method provides smooth arbitrage-free implied volatility surfaces. Finally, we sketch a Galerkin method with B-spline finite elements for solving the partial differential equation satisfied by the Radon-Nikodym derivative. Comment: 25 pages.
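
    For the shape-constrained estimation step, here is a minimal sketch (assumed data, knots, and constraint; ordinary clamped B-splines rather than the paper's extended basis with infinite support) of a monotone B-spline least-squares fit of a conditional expectation from a scatter plot: reparameterizing the coefficients through nonnegative increments keeps the fitted curve nondecreasing.

```python
import numpy as np
from scipy.interpolate import BSpline
from scipy.optimize import lsq_linear

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 1.0, 300))               # scatter of (x, y) observations
y = np.sqrt(x) + 0.05 * rng.standard_normal(x.size)   # noisy monotone target

k = 3                                                  # cubic B-splines
t = np.r_[[0.0] * (k + 1), np.linspace(0.1, 0.9, 9), [1.0] * (k + 1)]  # clamped knots
n_coef = len(t) - k - 1
B = BSpline.design_matrix(x, t, k).toarray()           # basis evaluated at the data (SciPy >= 1.8)

# c = L @ theta with L lower-triangular of ones: theta[0] is free, theta[1:] >= 0,
# so the coefficients are nondecreasing and hence the spline is nondecreasing.
L = np.tril(np.ones((n_coef, n_coef)))
lb = np.r_[-np.inf, np.zeros(n_coef - 1)]
theta = lsq_linear(B @ L, y, bounds=(lb, np.inf)).x
c = L @ theta

fit = BSpline(t, c, k)
print(np.all(np.diff(fit(np.linspace(0.0, 1.0, 400))) >= -1e-9))  # monotonicity check
```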

    AGN and their host galaxies in the local Universe: two mass independent Eddington ratio distribution functions characterize black hole growth

    We use a phenomenological model to show that black hole growth in the local Universe (z < 0.1) can be described by two separate, mass-independent Eddington ratio distribution functions (ERDFs). We assume that black holes can be divided into two independent groups: those with radiatively efficient accretion, primarily hosted by optically blue and green galaxies, and those with radiatively inefficient accretion, which are mainly found in red galaxies. With observed galaxy stellar mass functions as input, we show that the observed AGN luminosity functions can be reproduced by using mass-independent, broken-power-law-shaped ERDFs. We use the observed hard X-ray and 1.4 GHz radio luminosity functions to constrain the ERDF for radiatively efficient and inefficient AGN, respectively. We also test alternative ERDF shapes and mass-dependent models. Our results are consistent with a mass-independent AGN fraction and AGN hosts being randomly drawn from the galaxy population. We argue that the ERDF is not shaped by galaxy-scale effects, but by how efficiently material can be transported from the inner few parsecs to the accretion disc. Our results are incompatible with the simplest form of mass quenching where massive galaxies host higher-accretion-rate AGN. Furthermore, if reaching a certain Eddington ratio is a sufficient condition for maintenance mode, it can occur in all red galaxies, not just the most massive ones. Comment: 33 pages, 15 figures, accepted for publication in ApJ; Fig. 6 shows the main result.
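
    A toy numerical sketch of the model's central step, with illustrative rather than fitted parameter values: convolve a Schechter stellar mass function with a mass-independent, broken-power-law ERDF, under an assumed black hole mass scaling, to predict an AGN luminosity function.

```python
import numpy as np

log_mstar = np.linspace(9.0, 12.0, 300)         # log10 stellar mass [M_sun]
dlogm = log_mstar[1] - log_mstar[0]

def schechter(log_m, log_mknee=10.8, phi_star=1e-3, alpha=-1.2):
    """Galaxy stellar mass function dN/dlogM [Mpc^-3 dex^-1] (assumed parameters)."""
    x = 10.0 ** (log_m - log_mknee)
    return np.log(10.0) * phi_star * x ** (alpha + 1) * np.exp(-x)

def erdf(log_lam, log_lam_break=-2.0, delta1=0.5, delta2=2.0, norm=0.1):
    """Mass-independent, broken-power-law Eddington ratio distribution (per dex)."""
    x = 10.0 ** (log_lam - log_lam_break)
    return norm / (x ** delta1 + x ** delta2)

# assumed black hole mass scaling and Eddington luminosity
log_mbh = log_mstar - 2.75                      # M_BH ~ 10^-2.75 M_*  (assumption)
log_ledd = 38.1 + log_mbh                       # log10 of 1.26e38 (M_BH/M_sun) erg/s

log_l = np.linspace(41.0, 47.0, 61)             # log10 AGN luminosity [erg/s]
phi_agn = np.array([
    np.sum(schechter(log_mstar) * erdf(ll - log_ledd)) * dlogm
    for ll in log_l
])                                               # predicted dN/dlogL [Mpc^-3 dex^-1]

for ll, ph in zip(log_l[::15], phi_agn[::15]):
    print(f"log L = {ll:5.1f}   phi = {ph:.2e}")
```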

    A New Look at Massive Clusters: weak lensing constraints on the triaxial dark matter halos of Abell 1689, Abell 1835, & Abell 2204

    Measuring the 3D distribution of mass on galaxy cluster scales is a crucial test of the LCDM model, providing constraints on the nature of dark matter. Recent work investigating mass distributions of individual galaxy clusters (e.g. Abell 1689) using weak and strong gravitational lensing has revealed potential inconsistencies between the predictions of structure formation models relating halo mass to concentration and those relationships as measured in massive clusters. However, such analyses employ simple spherical halo models while a growing body of work indicates that triaxial 3D halo structure is both common and important in parameter estimates. We recently introduced a Markov Chain Monte Carlo (MCMC) method to fit fully triaxial models to weak lensing data that gives parameter and error estimates that fully incorporate the true shape uncertainty present in nature. In this paper we apply that method to weak lensing data obtained with the ESO/MPG Wide-Field Imager for galaxy clusters A1689, A1835, and A2204, under a range of Bayesian priors derived from theory and from independent X-ray and strong lensing observations. For Abell 1689, using a simple strong lensing prior we find marginalized mean parameter values M_200 = (0.83 +- 0.16)x10^15 M_solar/h and C = 12.2 +- 6.7, which are marginally consistent with the mass-concentration relation predicted in LCDM. The large error contours that accompany our triaxial parameter estimates more accurately represent the true extent of our limited knowledge of the structure of galaxy cluster lenses, and make clear the importance of combining many constraints from theoretical, lensing (strong, flexion), or other observational (X-ray, SZ, dynamical) data to confidently measure cluster mass profiles. (Abridged) Comment: 21 pages, 10 figures, accepted for publication in MNRAS.
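
    The sketch below illustrates only the statistical machinery, a random-walk Metropolis sampler over (M200, c); the triaxial halo shear prediction of the paper is replaced by a deliberately simple placeholder model and synthetic data, so only the MCMC logic, not the lensing physics, carries over.

```python
import numpy as np

rng = np.random.default_rng(1)

def model_shear(r, m200, c):
    """Placeholder tangential-shear model (NOT the triaxial NFW profile of the paper)."""
    return 0.1 * m200 ** (2.0 / 3.0) * np.log(1.0 + c * r) / (c * r)

# synthetic "observations" with known truth and Gaussian shape noise
r = np.linspace(0.2, 3.0, 25)                     # projected radius [Mpc/h]
truth = (0.8, 12.0)                               # (M200 in 1e15 M_sun/h, concentration)
sigma = 0.01
data = model_shear(r, *truth) + sigma * rng.standard_normal(r.size)

def log_post(theta):
    m200, c = theta
    if not (0.05 < m200 < 5.0 and 1.0 < c < 30.0):  # flat priors
        return -np.inf
    resid = data - model_shear(r, m200, c)
    return -0.5 * np.sum((resid / sigma) ** 2)

def metropolis(theta0, steps=20000, prop=(0.05, 0.8)):
    theta = np.array(theta0, float)
    lp = log_post(theta)
    chain = []
    for _ in range(steps):
        cand = theta + prop * rng.standard_normal(2)
        lp_cand = log_post(cand)
        if np.log(rng.uniform()) < lp_cand - lp:      # accept/reject step
            theta, lp = cand, lp_cand
        chain.append(theta.copy())
    return np.array(chain)

chain = metropolis((1.0, 10.0))[5000:]                # drop burn-in
print("posterior mean:", chain.mean(axis=0), "std:", chain.std(axis=0))
```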

    Analysis of Scarp Profiles: Evaluation of Errors in Morphologic Dating

    Morphologic analysis of scarp degradation can be used quantitatively to determine relative ages of different scarps formed in cohesionless materials, under the same climatic conditions. Scarps of tectonic origin as well as wavecut or rivercut terraces can be treated as topographic impulses that are attenuated by surface erosional processes. This morphological evolution can be modelled as the convolution of the initial shape with an erosion (or degradation) function whose width increases with time. Such modeling applies well to scarps less than 10 m high, formed in unconsolidated fanglomerates. To a good approximation, the degradation function is Gaussian with a variance measuring the degree of rounding of the initial shape. This geometric parameter can be called the degradation coefficient. A synthetic experiment shows that the degradation coefficient can be obtained by least-squares fitting of profiles levelled perpendicular to the scarp. Gravitational collapse of the free face is accounted for by assuming initial scarp slopes at the angle of repose of the cohesionless materials (30°–35°). Uncertainties in the measured profiles result in an uncertainty in the degradation coefficient that can be evaluated graphically. Because the degradation coefficient is sensitive to the regional slope and to three-dimensional processes (gullying, loess accumulation, stream incision, etc.), a reliable and accurate determination of the degradation coefficient requires several long profiles across the same scarp. The linear diffusion model of scarp degradation is a Gaussian model in which the degradation coefficient is proportional to numerical age. In that case, absolute dating requires only determination of the proportionality constant, called the mass diffusivity constant. For Holocene scarps a few meters high, in loose alluvium under arid climatic conditions, mass diffusivity constants generally range between 1 and 6 m^2/kyr. Morphologic analysis is a reliable method to compare ages of different scarps in a given area, and it can provide approximate absolute ages of Holocene scarplike landforms.
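
    A schematic sketch of the procedure described above, on synthetic data with assumed parameter values: convolve an angle-of-repose initial scarp with a Gaussian degradation function and fit its variance, the degradation coefficient, to a surveyed profile by least squares.

```python
import numpy as np
from scipy.optimize import curve_fit

def degraded_profile(x, tau, half_height=2.0, repose_deg=32.0, far_slope=0.02):
    """Angle-of-repose initial scarp convolved with a Gaussian of variance tau,
    the degradation coefficient (tau = 2*kappa*t under linear diffusion)."""
    xi = np.linspace(x.min() - 50.0, x.max() + 50.0, 4001)   # padded work grid [m]
    dx = xi[1] - xi[0]
    slope = np.tan(np.radians(repose_deg))
    initial = np.clip(xi * slope, -half_height, half_height) + far_slope * xi
    kernel = np.exp(-xi ** 2 / (2.0 * tau)) / np.sqrt(2.0 * np.pi * tau)
    degraded = np.convolve(initial, kernel, mode="same") * dx
    return np.interp(x, xi, degraded)

# synthetic "survey" of a scarp profile with a known degradation coefficient
x_obs = np.linspace(-30.0, 30.0, 61)                         # distance along profile [m]
rng = np.random.default_rng(2)
z_obs = degraded_profile(x_obs, tau=6.0) + 0.05 * rng.standard_normal(x_obs.size)

tau_fit, tau_cov = curve_fit(degraded_profile, x_obs, z_obs, p0=[3.0], bounds=(0.1, 100.0))
print("degradation coefficient ~", tau_fit[0], "m^2, +/-", np.sqrt(tau_cov[0, 0]))
# under linear diffusion tau grows linearly with age (tau = 2*kappa*t for diffusivity kappa)
```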

    Optimization Methods for Inverse Problems

    Optimization plays an important role in solving many inverse problems. Indeed, the task of inversion often either involves or is fully cast as the solution of an optimization problem. In this light, the non-linear, non-convex, and large-scale nature of many of these inversions alone gives rise to some very challenging optimization problems. The inverse problem community has long been developing various techniques for solving such optimization tasks. However, other, seemingly disjoint communities, such as that of machine learning, have developed, almost in parallel, interesting alternative methods which might have stayed under the radar of the inverse problem community. In this survey, we aim to change that. In doing so, we first discuss current state-of-the-art optimization methods widely used in inverse problems. We then survey recent related advances in addressing similar challenges in problems faced by the machine learning community, and discuss their potential advantages for solving inverse problems. By highlighting the similarities among the optimization challenges faced by the inverse problem and the machine learning communities, we hope that this survey can serve as a bridge in bringing together these two communities and encourage cross-fertilization of ideas. Comment: 13 pages.
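
    As a concrete, if toy, instance of the kind of optimization problem the survey has in mind: recover a 1-D signal from blurred, noisy data by minimizing a Tikhonov-regularized least-squares objective with plain gradient descent; the blur model and regularization weight are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
x = np.linspace(0.0, 1.0, n)
m_true = (x > 0.3).astype(float) - 0.5 * (x > 0.7)       # piecewise-constant "model"

# forward operator: Gaussian blur as a dense matrix
width = 0.03
A = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * width ** 2))
A /= A.sum(axis=1, keepdims=True)
d = A @ m_true + 0.01 * rng.standard_normal(n)           # observed (blurred, noisy) data

alpha = 1e-3                                             # Tikhonov weight (assumed)
def grad(m):
    # gradient of 0.5*||A m - d||^2 + 0.5*alpha*||m||^2
    return A.T @ (A @ m - d) + alpha * m

L = np.linalg.norm(A, 2) ** 2 + alpha                    # Lipschitz constant of the gradient
m = np.zeros(n)
for _ in range(500):                                     # fixed-step gradient descent
    m -= (1.0 / L) * grad(m)

print("relative error:", np.linalg.norm(m - m_true) / np.linalg.norm(m_true))
```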