    Variational methods in simultaneous optimum interpolation and initialization

    The duality between optimum interpolation and variational objective analysis is reviewed. This duality is used to set up a variational approach to objective analysis that uses prior information concerning the atmospheric spectral energy distribution in the variational problem. In the wind analysis example, the wind field is partitioned into divergent and nondivergent parts, and a control parameter governing the relative energy in the two parts is estimated by generalized cross validation from the observational data being analyzed, along with a bandwidth parameter. A variational approach to combining objective analysis and initialization in a single step is proposed. In a simple example of this approach, data, forecast, and prior information concerning the atmospheric energy distribution are combined into a single variational problem. This problem has (at least) one bandwidth parameter, one partitioning parameter governing the relative energy in fast and slow modes, and one parameter governing the relative weight to be given to observational and forecast data.
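    To make the structure concrete, here is one schematic form such a combined variational problem can take; the notation is ours and purely illustrative, not the paper's exact formulation:

```latex
\[
J(\mathbf{v}) \;=\; \sum_{i=1}^{n} \bigl( y_i - \mathcal{H}_i \mathbf{v} \bigr)^2
\;+\; \lambda \Bigl[ \delta \, \| \mathbf{v}_{\chi} \|_m^2
\;+\; (1 - \delta) \, \| \mathbf{v}_{\psi} \|_m^2 \Bigr]
\]
```

    Here v = v_psi + v_chi is the partition of the wind into nondivergent and divergent parts, lambda is the bandwidth (smoothing) parameter, delta is the partitioning parameter, and both would be chosen by generalized cross validation; the combined analysis-initialization version would add a forecast-misfit term with its own relative weight.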

    Vector splines on the sphere with application to the estimation of vorticity and divergence from discrete, noisy data

    Vector smoothing splines on the sphere are defined, and their theoretical properties are briefly reviewed. The appropriate Hilbert space norms used in a specific meteorological application are described and justified via a duality theorem. Numerical procedures for computing the splines, as well as the cross validation estimate of two smoothing parameters, are given. A Monte Carlo study is described which indicates the accuracy with which upper air vorticity and divergence can be estimated using measured wind vectors from the North American radiosonde network.
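    For orientation, the vorticity and divergence in question are tied to the horizontal wind through the standard Helmholtz decomposition (a textbook relation, not a detail taken from the paper):

```latex
\[
\mathbf{v} \;=\; \mathbf{k} \times \nabla\psi \;+\; \nabla\chi,
\qquad
\zeta \;=\; \Delta\psi,
\qquad
D \;=\; \Delta\chi
\]
```

    Here psi is the streamfunction, chi the velocity potential, zeta the vorticity, D the divergence, and Delta the surface Laplacian on the sphere; a fitted vector spline delivers psi and chi, from which zeta and D follow by differentiation.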

    Design criteria and eigensequence plots for satellite computed tomography

    The use of the degrees of freedom for signal is proposed as a design criterion for comparing different designs for satellite and other measuring systems. It is also proposed that certain eigensequence plots be examined at the design stage, along with appropriate estimates of the parameter lambda, which plays the role of a noise-to-signal ratio. The degrees of freedom for signal and the eigensequence plots may be determined using prior information in the spectral domain, which is presently available, together with a description of the system and simulated data for estimating lambda. This work extends the 1972 work of Weinreb and Crosby.
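    As a concrete reading of the criterion, the degrees of freedom for signal takes the familiar ridge-type form sum_i b_i / (b_i + lambda) over the system's eigenvalues. The following minimal Python sketch (the function name and the toy eigensequences are ours, for illustration only) shows how it could be computed and compared across candidate designs:

```python
import numpy as np

def df_signal(eigenvalues, lam):
    """Degrees of freedom for signal: sum_i b_i / (b_i + lam),
    where b_i are the eigenvalues of the (prior-weighted) design
    and lam plays the role of the noise-to-signal ratio."""
    b = np.asarray(eigenvalues, dtype=float)
    return float(np.sum(b / (b + lam)))

# hypothetical eigensequences for two candidate designs
design_a = 1.0 / np.arange(1, 51) ** 2      # fast spectral decay
design_b = 1.0 / np.arange(1, 51) ** 0.5    # slow spectral decay
for lam in (1e-3, 1e-1):
    print(lam, df_signal(design_a, lam), df_signal(design_b, lam))
```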

    Important lessons on FGM/C abandonment from four research studies in Egypt

    Female genital mutilation/cutting (FGM/C) continues to be a widespread practice in Egypt. According to the 2014 Egypt Demographic and Health Survey, the prevalence of FGM/C was 92 percent among ever-married women aged 15–49. Moreover, Egypt continues to witness a drastic surge in the medicalization of FGM/C, with 74 percent of women aged 19 years and younger circumcised by medical practitioners, compared to 55 percent in 1995. This policy brief provides key results and recommendations from four studies conducted by the Population Council/Egypt under the Evidence to End FGM/C project, in coordination with Egypt’s National Population Council. The four studies investigated the process through which families reach a decision on FGM/C; the impact of FGM/C campaigns on perspectives surrounding the practice; the characteristics of abandoners and the challenges they face in maintaining their position; and the drivers of the medicalization of the practice. The ultimate goal of the studies, conducted between 2016 and 2019, is to assist the National Taskforce for Ending Female Genital Mutilation/Circumcision in developing evidence-based policies and programs to accelerate the abandonment of FGM/C.

    Surface Brightness Profiles of Galactic Globular Clusters from Hubble Space Telescope Images

    The Hubble Space Telescope allows us to study the central surface brightness profiles of globular clusters in unprecedented detail. We have mined the HST archives to obtain 38 WFPC2 images of Galactic globular clusters with adequate exposure times and filters, which we use to measure their central structure. We outline a reliable method for obtaining surface brightness profiles from integrated light, which we test on an extensive set of simulated images. Most clusters have central surface brightness about 0.5 mag brighter than previous measurements made from ground-based data, with the largest differences around 2 magnitudes. Including the uncertainties in the slope estimates, the surface brightness slope distribution is consistent with half of the sample having flat cores and the remaining half showing a gradual decline with slopes between 0 and -0.8 (dlog(Sigma)/dlog r). We deproject the surface brightness profiles in a non-parametric way to obtain luminosity density profiles. The distribution of luminosity density logarithmic slopes shows similar features, with half of the sample between -0.4 and -1.8. These results are in contrast to our theoretical bias that the central regions of globular clusters are either isothermal (i.e., flat central profiles) or very steep (i.e., luminosity density slope ~ -1.6) for core-collapse clusters. With only 50% of our sample having central profiles consistent with isothermal cores, King models appear to represent most globular clusters poorly in their cores.
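    For context, a non-parametric deprojection of a surface brightness profile typically rests on the Abel inversion under spherical symmetry; this is the standard relation, and the abstract does not spell out the paper's exact numerical scheme:

```latex
\[
\nu(r) \;=\; -\frac{1}{\pi} \int_{r}^{\infty}
\frac{d\Sigma}{dR} \, \frac{dR}{\sqrt{R^2 - r^2}}
\]
```

    Here Sigma(R) is the surface brightness at projected radius R and nu(r) is the luminosity density at three-dimensional radius r.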

    Detection of trend changes in time series using Bayesian inference

    Change points in time series are perceived as isolated singularities where two regular trends of a given signal do not match. The detection of such transitions is of fundamental interest for understanding a system's internal dynamics. In practice, observational noise makes it difficult to detect such change points in time series. In this work we develop a Bayesian method to estimate the location of the singularities and to produce confidence intervals. We validate the ability and sensitivity of our inference method by estimating change points in synthetic data sets. As an application, we use our algorithm to analyze the annual flow volume of the Nile River at Aswan from 1871 to 1970, where we confirm a well-established, significant transition point within the time series.
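    A minimal sketch of the idea, assuming a single change point, piecewise-constant segment means, and Gaussian noise (the paper works with trends, i.e., piecewise-linear fits, and derives credible intervals from the same kind of posterior); all names here are ours:

```python
import numpy as np

def changepoint_posterior(y, sigma=1.0):
    """Posterior over the location of a single change point in a
    piecewise-constant Gaussian signal, under a uniform location prior.
    Segment means are profiled out (plugged in at their MLEs) as a
    simple stand-in for full marginalization."""
    n = len(y)
    log_post = np.full(n, -np.inf)
    for k in range(2, n - 2):                      # candidate split points
        left, right = y[:k], y[k:]
        rss = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        log_post[k] = -rss / (2.0 * sigma ** 2)
    log_post -= log_post.max()                     # stabilize the exponentials
    post = np.exp(log_post)
    return post / post.sum()

# e.g. two noisy regimes with a jump at index 50:
# y = np.concatenate([np.random.normal(0, 1, 50), np.random.normal(2, 1, 50)])
# post = changepoint_posterior(y); print(post.argmax())
```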

    Fast stable direct fitting and smoothness selection for Generalized Additive Models

    Existing computationally efficient methods for penalized likelihood GAM fitting employ iterative smoothness selection on working linear models (or working mixed models). Such schemes fail to converge for a non-negligible proportion of models, with failure being particularly frequent in the presence of concurvity. If smoothness selection is instead performed by optimizing 'whole model' criteria, these problems disappear, but until now attempts to do this have employed finite-difference-based optimization schemes which are computationally inefficient and can suffer from false convergence. This paper develops the first computationally efficient method for direct GAM smoothness selection. It is highly stable, but by careful structuring achieves a computational efficiency that leads, in simulations, to lower mean computation times than schemes based on working-model smoothness selection. The method also offers a reliable way of fitting generalized additive mixed models.
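    To illustrate what 'whole model' smoothness selection means in the simplest setting, here is a minimal Python sketch that optimizes the GCV score of the full penalized fit directly. Everything here (the names, the single penalty, the Gaussian response) is our simplification for illustration, not the paper's method, which handles full penalized-likelihood GAMs with multiple smoothing parameters stably and efficiently:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def gcv_select(X, y, S):
    """Choose the smoothing parameter by direct optimization of the
    whole-model GCV score, rather than iterating smoothness selection
    on working models. Gaussian response, one penalty matrix S."""
    n = len(y)

    def gcv(log_lam):
        lam = np.exp(log_lam)
        A = X @ np.linalg.solve(X.T @ X + lam * S, X.T)  # influence (hat) matrix
        edf = np.trace(A)                                # effective degrees of freedom
        rss = float(np.sum((y - A @ y) ** 2))
        return n * rss / (n - edf) ** 2                  # GCV criterion

    res = minimize_scalar(gcv, bounds=(-12.0, 12.0), method="bounded")
    lam = np.exp(res.x)
    beta = np.linalg.solve(X.T @ X + lam * S, X.T @ y)
    return beta, lam
```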

    Statistical Mechanics of Learning: A Variational Approach for Real Data

    Using a variational technique, we generalize the statistical physics approach of learning from random examples to make it applicable to real data. We demonstrate the validity and relevance of our method by computing approximate estimators for generalization errors that are based on training data alone.

    On the uniqueness of the surface sources of evoked potentials

    The uniqueness of a surface density of sources localized inside a spatial region R and producing a given electric potential distribution on its boundary B_0 is revisited. The situation in which R is filled with various metallic subregions, each having a definite constant value of the electric conductivity, is considered. It is argued that knowledge of the potential on all of B_0 fully determines the surface density of sources over a wide class of surfaces supporting them. The class can be defined as a union of an arbitrary but finite number of open or closed surfaces. The only restriction upon them is that none of the closed surfaces contains another of the closed or open surfaces nested inside it.

    P-splines with derivative based penalties and tensor product smoothing of unevenly distributed data

    The P-splines of Eilers and Marx (1996) combine a B-spline basis with a discrete quadratic penalty on the basis coefficients to produce a reduced rank spline-like smoother. P-splines have three properties that make them very popular as reduced rank smoothers: i) the basis and the penalty are sparse, enabling efficient computation, especially for Bayesian stochastic simulation; ii) it is possible to flexibly 'mix-and-match' the order of B-spline basis and penalty, rather than the order of the penalty controlling the order of the basis as in spline smoothing; iii) it is very easy to set up the B-spline basis functions and penalties. The discrete penalties are somewhat less interpretable in terms of function shape than the traditional derivative based spline penalties, but tend towards penalties proportional to traditional spline penalties in the limit of large basis size. However, part of the point of P-splines is not to use a large basis size. In addition, the spline basis functions arise from solving functional optimization problems involving derivative based penalties, so moving to discrete penalties for smoothing may not always be desirable. The purpose of this note is to point out that the three properties of basis-penalty sparsity, mix-and-match penalization, and ease of setup are readily obtainable with B-splines subject to derivative based penalization. The penalty setup typically requires a few lines of code, rather than the two lines typically required for P-splines, but this one-off disadvantage seems to be the only one associated with using derivative based penalties. As an example application, it is shown how basis-penalty sparsity enables efficient computation with tensor product smoothers of scattered data.
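    As a rough illustration of the 'few lines of code' the note refers to, here is one way to build a derivative based penalty matrix for a B-spline basis in Python. The function name and the crude midpoint-rule quadrature are our choices for a self-contained sketch; exact per-interval quadrature is the more careful option:

```python
import numpy as np
from scipy.interpolate import BSpline

def derivative_penalty(knots, degree=3, order=2, ngrid=2000):
    """Penalty matrix S with S[i, j] = integral of the product of the
    order-th derivatives of basis functions B_i and B_j, approximated
    by a midpoint rule on a fine grid. S is banded (sparse) because
    B-splines have local support."""
    n = len(knots) - degree - 1                  # number of basis functions
    a, b = knots[degree], knots[-degree - 1]     # interior (data) interval
    h = (b - a) / ngrid
    x = np.linspace(a, b, ngrid, endpoint=False) + h / 2   # midpoints
    D = np.empty((ngrid, n))
    for i in range(n):
        c = np.zeros(n); c[i] = 1.0
        D[:, i] = BSpline(knots, c, degree).derivative(order)(x)
    return h * D.T @ D                           # quadrature of B_i'' * B_j''
```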