
    Quadratic programming and penalized regression

    Quadratic programming is a versatile tool for calculating estimates in penalized regression. It can be used to produce estimates based on L1 roughness penalties, as in total variation denoising. In particular, it can calculate estimates when the roughness penalty is the total variation of a derivative of the estimate. Combining two roughness penalties, the total variation and the total variation of the third derivative, results in an estimate that has a continuous second derivative while controlling the number of spurious local extreme values. A multiresolution criterion may be included in a quadratic program to achieve local smoothing without having to specify smoothing parameters. Copyright © Taylor & Francis Group, LLC
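
    The criterion described here can be posed as a quadratic program by splitting each L1 roughness term into nonnegative parts. Below is a minimal sketch of the total variation denoising case; the signal, the smoothing parameter, and the use of cvxpy are illustrative assumptions, not the paper's own code or data.

```python
# A minimal sketch of total variation denoising posed as a quadratic
# program. Signal, smoothing parameter, and solver are assumptions.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
truth = np.where(x < 0.5, 0.0, 1.0)              # piecewise-constant signal
y = truth + 0.1 * rng.standard_normal(x.size)    # noisy observations

f = cp.Variable(y.size)                          # the estimate
lam = 1.0                                        # roughness weight (hand-picked)

# Split each first difference into nonnegative parts u - v, so the L1
# roughness penalty becomes a linear objective term and the whole problem
# is a quadratic program. Penalizing fourth-order differences instead
# (cp.diff(f, 4)) would correspond to the total variation of the third
# derivative discussed in the abstract.
u = cp.Variable(y.size - 1, nonneg=True)
v = cp.Variable(y.size - 1, nonneg=True)
problem = cp.Problem(
    cp.Minimize(cp.sum_squares(y - f) + lam * cp.sum(u + v)),
    [cp.diff(f) == u - v],
)
problem.solve()
estimate = f.value
```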

    Dietary patterns obtained through principal components analysis: The effect of input variable quantification

    Principal components analysis (PCA) is a popular method for deriving dietary patterns. A number of decisions must be made throughout the analytic process, including how to quantify the input variables of the PCA. The present study aims to compare the effect of using different input variables on the patterns extracted by PCA from 3-d diet diary data collected from 7473 children, aged 10 years, in the Avon Longitudinal Study of Parents and Children. Four options were examined: weight consumed of each food group (g/d), energy-adjusted weight, percentage contribution to energy of each food group, and binary intake (consumed/not consumed). Four separate PCAs were performed, one for each intake measurement. Three or four dietary patterns were obtained from each analysis, with at least one component that described 'more healthy' and 'less healthy' diets and one component that described a diet with high consumption of meat, potatoes and vegetables. There were no obvious differences between the patterns derived using percentage energy or energy-adjusted weight as the measurement and those derived using gram weights. Using binary input variables yielded a component that loaded positively on reduced-fat and reduced-sugar foods. The present results suggest that food intakes quantified by gram weights or as binary variables both result in meaningful dietary patterns, and each method has distinct advantages: weight takes into account the amount of each food consumed, while binary intake appears to describe general food preferences, which are potentially easier to modify and useful in public health settings. © 2012 The Authors
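
    As a concrete illustration of the analytic choice at issue, the sketch below extracts components from a toy food-group matrix under two of the four quantifications compared (gram weights and binary consumed/not consumed). The data, food-group count, component number, and use of scikit-learn are assumptions, not the study's pipeline.

```python
# A minimal sketch, not the study's pipeline: dietary patterns by PCA
# under two input-variable quantifications. All data here are invented.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
# Toy intakes (g/d) for 500 children and 20 food groups, zero-inflated so
# that "not consumed" actually occurs.
grams = rng.gamma(2.0, 30.0, size=(500, 20)) * (rng.random((500, 20)) < 0.7)

def dietary_patterns(X, n_components=4):
    """Standardize each input variable, then return PCA loadings."""
    Z = StandardScaler().fit_transform(X)
    pca = PCA(n_components=n_components).fit(Z)
    return pca.components_, pca.explained_variance_ratio_

loadings_g, evr_g = dietary_patterns(grams)                      # gram weights
loadings_b, evr_b = dietary_patterns((grams > 0).astype(float))  # binary intake
```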

    A Chandra X-ray Study of Cygnus A - II. The Nucleus

    We report Chandra ACIS and quasi-simultaneous RXTE observations of the nearby, powerful radio galaxy Cygnus A, with the present paper focusing on the properties of the active nucleus. In the Chandra observation, the hard (> a few keV) X-ray emission is spatially unresolved, with a size \lesssim 1 arcsec (1.5 kpc, H_0 = 50 km s^-1 Mpc^-1), and coincides with the radio and near-infrared nuclei. In contrast, the soft (< 2 keV) emission exhibits a bipolar nebulosity that aligns with the optical bipolar continuum and emission-line structures and approximately with the radio jet. In particular, the soft X-ray emission corresponds very well with the [O III] \lambda 5007 and H\alpha + [N II] \lambda\lambda 6548, 6583 nebulosity imaged with HST. At the location of the nucleus there is only weak soft X-ray emission, an effect that may be intrinsic or may result from a dust lane that crosses the nucleus perpendicular to the source axis. The spectra of the various X-ray components have been obtained by simultaneous fits to the six detectors. The compact nucleus is detected to 100 keV and is well described by a heavily absorbed power-law spectrum with \Gamma_h = 1.52^{+0.12}_{-0.12} (similar to other narrow-line radio galaxies) and equivalent hydrogen column N_H (nuc) = 2.0^{+0.1}_{-0.2} \times 10^{23} cm^-2. (Abstract truncated.)
    Comment: To be published in the Astrophysical Journal, v564 January 1, 2002 issue; 34 pages, 11 figures (1 color
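
    The quoted fit is a power law attenuated by photoelectric absorption, F(E) ∝ E^-Γ exp(-N_H σ(E)). The sketch below evaluates that model with the best-fit values from the abstract; the crude E^(-8/3) cross-section scaling is a stand-in for a tabulated cross-section (e.g. Morrison & McCammon 1983), not the paper's fitting code.

```python
# A minimal sketch of the absorbed power-law model describing the nucleus.
# GAMMA and N_H are the abstract's best-fit values; sigma_photo is a very
# rough assumed approximation to the photoelectric cross-section.
import numpy as np

GAMMA = 1.52                 # best-fit photon index
N_H = 2.0e23                 # equivalent hydrogen column, cm^-2

def sigma_photo(E_keV):
    """Very rough photoelectric cross-section per H atom, cm^2."""
    return 2.4e-22 * E_keV ** (-8.0 / 3.0)

def absorbed_power_law(E_keV, norm=1.0):
    """Photon flux density: norm * E^-Gamma * exp(-N_H * sigma(E))."""
    return norm * E_keV ** (-GAMMA) * np.exp(-N_H * sigma_photo(E_keV))

E = np.logspace(0, 2, 200)   # 1-100 keV, the band over which the nucleus
flux = absorbed_power_law(E) # is detected; soft X-rays are wiped out
```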

    Live-birth rate associated with repeat in vitro fertilization treatment cycles

    © 2015 American Medical Association. All rights reserved. Importance The likelihood of achieving a live birth with repeat in vitro fertilization (IVF) is unclear, yet treatment is commonly limited to 3 or 4 embryo transfers. Objective To determine the live-birth rate per initiated ovarian stimulation IVF cycle and with repeated cycles. Design, Setting, and Participants Prospective study of 156 947 UK women who received 257 398 IVF ovarian stimulation cycles between 2003 and 2010 and were followed up until June 2012. Exposures In vitro fertilization, with a cycle defined as an episode of ovarian stimulation and all subsequent separate fresh and frozen embryo transfers. Main Outcomes and Measures Live-birth rate per IVF cycle and the cumulative live-birth rates across all cycles in all women and by age and treatment type. Optimal, prognosis-adjusted, and conservative cumulative live-birth rates were estimated, reflecting 0%, 30%, and 100%, respectively, of women who discontinued due to poor prognosis and having a live-birth rate of 0 had they continued. Results Among the 156 947 women, the median age at start of treatment was 35 years (interquartile range, 32-38; range, 18-55), and the median duration of infertility for all 257 398 cycles was 4 years (interquartile range, 2-6; range
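
    The three estimates differ only in the fraction of discontinuing women assumed to have a zero live-birth rate had they continued (0%, 30%, 100%). A minimal sketch of that logic follows; the per-cycle counts are invented for illustration and this is a reconstruction of the stated assumptions, not the authors' code.

```python
# Cumulative live-birth rate under the optimal / prognosis-adjusted /
# conservative assumptions described in the abstract. Counts are made up.
def cumulative_lbr(starters, births, zero_prognosis_fraction):
    """starters[i], births[i]: women beginning cycle i+1 and the live
    births achieved in it. zero_prognosis_fraction: share of women who
    discontinue assumed to have a live-birth rate of 0 had they continued
    (0.0 -> optimal, 0.3 -> prognosis-adjusted, 1.0 -> conservative)."""
    no_birth = 1.0                        # probability of no live birth so far
    total, rates = 0.0, []
    for i, (n, b) in enumerate(zip(starters, births)):
        p = b / n                         # observed per-cycle rate
        total += p * no_birth
        no_birth *= 1.0 - p
        if i + 1 < len(starters):         # observed discontinuation rate
            q = 1.0 - starters[i + 1] / (n - b)
            no_birth *= 1.0 - zero_prognosis_fraction * q
        rates.append(total)
    return rates

starters, births = [1000, 600, 300], [300, 150, 60]   # hypothetical counts
for f, label in [(0.0, "optimal"), (0.3, "adjusted"), (1.0, "conservative")]:
    print(label, [round(r, 3) for r in cumulative_lbr(starters, births, f)])
```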

    Adaptive homodyne measurement of optical phase

    We present an experimental demonstration of the power of real-time feedback in quantum metrology, confirming a theoretical prediction by Wiseman regarding the superior performance of an adaptive homodyne technique for single-shot measurement of optical phase. For phase measurements performed on weak coherent states with no prior knowledge of the signal phase, we show that the variance of adaptive homodyne estimation approaches closer to the fundamental quantum uncertainty limit than any previously demonstrated technique.
    Our results underscore the importance of real-time feedback for reaching quantum performance limits in coherent telecommunication, precision measurement and information processing.
    Comment: RevTex4, color PDF figures (separate files), submitted to PR
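
    A Monte Carlo sketch in the spirit of Wiseman's adaptive scheme follows: the local-oscillator phase is steered in real time by the running estimate. The idealized noise model (unit efficiency, photocurrent increment dI = 2α cos(θ − Φ) dt + dW) and all parameters are assumptions made for illustration, not the experiment's apparatus or analysis.

```python
# A minimal Monte Carlo sketch, not the experiment: adaptive homodyne
# phase estimation with real-time feedback on the local-oscillator phase.
import numpy as np

rng = np.random.default_rng(2)

def adaptive_phase_estimate(theta, alpha=1.0, steps=2000):
    dt = 1.0 / steps
    A = 0.0 + 0.0j                   # weighted integral of the photocurrent
    Phi = 0.0                        # local-oscillator phase
    for _ in range(steps):
        dI = 2.0 * alpha * np.cos(theta - Phi) * dt \
             + np.sqrt(dt) * rng.standard_normal()
        A += np.exp(1j * Phi) * dI
        Phi = np.angle(A) + np.pi / 2    # feedback: track the phase quadrature
    return np.angle(A)

theta_true = 0.7                     # fixed phase, unknown to the estimator
estimates = np.array([adaptive_phase_estimate(theta_true) for _ in range(200)])
errors = np.angle(np.exp(1j * (estimates - theta_true)))   # wrap to (-pi, pi]
print("RMS phase error:", errors.std())
```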

    KwaZulu-Natal coastal erosion events of 2006/2007 and 2011: A predictive tool?

    Severe coastal erosion occurred along the KwaZulu-Natal coastline between mid-May and November 2011. Analysis of this erosion event, and comparison with the previous coastal erosion events of 2006/2007, offered the opportunity to extend our understanding of when and where coastal erosion strikes. The swells that drove the erosion hotspots of the 2011 erosion season were relatively low (significant wave heights were between 2 m and 4.5 m) but of long duration. Although swell height was important, swell-propagation direction and particularly swell duration played the dominant role in driving the 2011 erosion event. Two erosion hotspot types were noted: sandy beaches underlain by shallow bedrock, and thick sandy beaches. The former are triggered by high swells (as in March 2007) and by austral winter erosion events (such as those in 2006, 2007 and 2011). The latter become evident later in the austral winter erosion cycle. Both types were associated with subtidal shore-normal channels seaward of megacusps, themselves linked to megarip current heads. The 2011 coastal erosion event occurred during a year in which the lunar perigee sub-harmonic cycle (a ±4.4-year cycle) peaked, a pattern which appears to have recurred on the KwaZulu-Natal coast. If this pattern holds, severe coastal erosion may be expected in 2015. Evidence indicates that coastal erosion is driven by the lunar nodal cycle peak, but adjacent lunar perigee sub-harmonic peaks can also cause severe coastal erosion. Knowing where and when coastal erosion may occur is vital for coastal managers and planners.

    Perspectives on open access high resolution digital elevation models to produce global flood hazard layers

    Global flood hazard models have recently become a reality, thanks to the release of open access global digital elevation models, the development of simplified and highly efficient flow algorithms, and the steady increase in computational power. In this commentary we argue that although the availability of open access global terrain data has been critical in enabling the development of such models, the relatively poor resolution and precision of these data now significantly limit our ability to estimate flood inundation and risk for the majority of the planet’s surface. The difficulty of deriving an accurate ‘bare-earth’ terrain model, owing to the interaction of vegetation and urban structures with satellite-based remote sensors, means that global terrain data are often poorest in the areas where people and property (and thus vulnerability) are most concentrated. Furthermore, the current generation of open access global terrain models is over a decade old, and many large floodplains, particularly those in developing countries, have undergone significant change in this time. There is therefore a pressing need for a new generation of high-resolution, high-vertical-precision open access global digital elevation models to allow significantly improved global flood hazard models to be developed.

    Nonparametric Regression on a Graph

    The 'Signal plus Noise' model for nonparametric regression can be extended to the case of observations taken at the vertices of a graph. This model includes many familiar regression problems. This article discusses the use of the edges of a graph to measure roughness in penalized regression. Distance between estimate and observation is measured at every vertex in the L2 norm, and roughness is penalized on every edge in the L1 norm. Thus the ideas of total variation penalization can be extended to a graph. The resulting minimization problem presents special computational challenges, so we describe a new and fast algorithm and demonstrate its use with examples. The examples include image analysis, a simulation applicable to discrete spatial variation, and classification. In our examples, penalized regression improves upon kernel smoothing in terms of identifying local extreme values on planar graphs. In all examples we use fully automatic procedures for setting the smoothing parameters. Supplemental materials are available online. © 2011 American Statistical Association
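
    The criterion described here is a minimization over vertex values with an L2 fidelity term and an L1 roughness term per edge. The sketch below solves it on a toy grid graph with a generic convex solver (cvxpy) standing in for the authors' fast special-purpose algorithm; the smoothing parameter is fixed by hand rather than set automatically as in the paper, and the graph and data are invented.

```python
# A minimal sketch of penalized regression on a graph: L2 fidelity at
# every vertex plus an L1 (total-variation) penalty on every edge.
import numpy as np
import cvxpy as cp

n = 10                                # vertices of a 10 x 10 grid graph
idx = lambda r, c: r * n + c
edges = [(idx(r, c), idx(r, c + 1)) for r in range(n) for c in range(n - 1)]
edges += [(idx(r, c), idx(r + 1, c)) for r in range(n - 1) for c in range(n)]

rng = np.random.default_rng(3)
truth = np.array([1.0 if i % n < n // 2 else 0.0 for i in range(n * n)])
y = truth + 0.3 * rng.standard_normal(n * n)     # noisy vertex observations

f = cp.Variable(n * n)
lam = 0.8                             # hand-picked smoothing parameter
roughness = cp.sum(cp.abs(cp.hstack([f[u] - f[v] for u, v in edges])))
cp.Problem(cp.Minimize(cp.sum_squares(f - y) + lam * roughness)).solve()
estimate = f.value                    # denoised values at the vertices
```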