    Efficiency and Consistency for Regularization Parameter Selection in Penalized Regression: Asymptotics and Finite-Sample Corrections

    This paper studies the asymptotic and finite-sample performance of penalized regression methods when different selectors of the regularization parameter are used, under the assumption that the true model is, or is not, included among the candidate models. In the latter setting, we relax assumptions in the existing theory to show that several classical information criteria are asymptotically efficient selectors of the regularization parameter. In both settings, we assess the finite-sample performance of these as well as other common selectors and demonstrate that their performance can suffer due to sensitivity to the number of variables included in the full model. As alternatives, we propose two corrected information criteria which are shown to outperform the existing procedures while still maintaining the desired asymptotic properties. In the non-true-model world, we relax the assumption made in the literature that the true error variance is known or that a consistent estimator is available, to prove that Akaike's information criterion (AIC), Cp and generalized cross-validation (GCV) are themselves asymptotically efficient selectors of the regularization parameter, and we study their performance in finite samples. In classical regression, AIC tends to select overly complex models when the dimension of the maximum candidate model is large relative to the sample size. Simulation studies suggest that AIC suffers from the same shortcomings when used in penalized regression. We therefore propose the use of the classical AICc as an alternative. In the true-model world, a similar investigation into the finite-sample properties of BIC reveals analogous overfitting tendencies and leads us to further propose the use of a corrected BIC (BICc). In their respective settings (whether the true model is, or is not, among the candidate models), BICc and AICc have the desired asymptotic properties, and we use simulations to assess their performance, as well as that of other selectors, in finite samples for penalized regressions fit using the smoothly clipped absolute deviation (SCAD) and least absolute shrinkage and selection operator (Lasso) penalty functions. We find that AICc and 10-fold cross-validation outperform the other selectors in terms of squared error loss, and that BICc avoids the tendency of BIC to select overly complex models when the dimension of the maximum candidate model is large relative to the sample size. NYU Stern School of Business, Statistics Working Papers Series.
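
    As a rough, hedged illustration of the kind of procedure discussed here (not the paper's exact method), the sketch below scores a Lasso path with AICc, using the number of nonzero coefficients as a plug-in estimate of the degrees of freedom; the simulated data, the lambda grid, and that degrees-of-freedom plug-in are assumptions made for this example.

```python
# Illustrative sketch (not the paper's exact procedure): select the Lasso
# regularization parameter by minimizing AICc along a path of lambda values.
# The degrees-of-freedom plug-in (number of nonzero coefficients) and the
# simulated data are assumptions made for this example.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 100, 20
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:3] = [2.0, -1.5, 1.0]                   # sparse "true" signal
y = X @ beta + rng.standard_normal(n)

def aicc(rss, n, k):
    """AICc with unknown error variance: n*log(RSS/n) + 2k + 2k(k+1)/(n-k-1)."""
    return n * np.log(rss / n) + 2 * k + 2 * k * (k + 1) / (n - k - 1)

lambdas = np.logspace(-3, 0, 50)
scores = []
for lam in lambdas:
    fit = Lasso(alpha=lam, max_iter=50_000).fit(X, y)
    rss = np.sum((y - fit.predict(X)) ** 2)
    k = np.count_nonzero(fit.coef_) + 1        # +1 for the intercept
    scores.append(aicc(rss, n, k))

best_lambda = lambdas[int(np.argmin(scores))]
print(f"AICc-selected lambda: {best_lambda:.4f}")
```

    The same loop structure works for other selectors simply by swapping the scoring function.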

    Long-term calorie restriction in humans is not associated with indices of delayed immunologic aging: A descriptive study.

    BACKGROUND: Delayed immunologic aging is purported to be a major mechanism through which calorie restriction (CR) exerts its anti-aging effects in non-human species. However, in non-obese humans, the effect of CR on the immune system has been understudied relative to its effects on the cardiometabolic system. OBJECTIVE: To examine whether CR is associated with delayed immunologic aging in non-obese humans. METHODS: We tested whether long-term CR practitioners (average 10.03 years of CR) evidenced decreased expression of T cell immunosenescence markers and longer immune cell telomeres compared to gender-, race/ethnicity-, age-, and education-matched "healthy" Body Mass Index (BMI) and "overweight"/"obese" BMI groups. RESULTS: Long-term human CR practitioners had lower BMI (p < 0.001) and fasting glucose (p < 0.001), as expected. They showed frequencies of pre-senescent cells (CD8+CD28- T cells and CD57- and PD-1-expressing T cells) similar to those of the comparison groups. Even after adjusting for covariates, including cytomegalovirus status, we observed shorter peripheral blood mononuclear cell telomeres in the CR group (p = 0.012) and no difference in granulocyte telomeres between groups (p = 0.42). CONCLUSIONS: We observed no clear evidence that CR as it is currently practiced in humans delays immune aging related to telomere length or T cell immunosenescence markers.

    Lessons from a Marine Spatial Planning data management process for Ireland

    Peer-reviewed. This paper presents a framework of ten components for delivering a data management process for the storage and management of data used in Marine Spatial Planning (MSP) in Ireland. The work includes a data process flow and a recommended solution architecture; the architecture comprises a central data catalogue and a spatial storage system. The components of the process are presented so as to maximise the reuse potential of any dataset within an MSP context. The terms ‘Suitability’ and ‘Readiness’ in the MSP context are offered as both formal and considered assessments of data, as is the applicability of a data stewardship maturity matrix. How data held in such a storage system can be published externally to potential consumers of these data is also explored. The process provides a means of managing data and metadata so that data lineage is preserved by carrying information about the origin of, and the processing applied to, the data, allowing the quality and relevance of geospatial datasets to be evaluated for use in MSP decisions in Ireland. The process was piloted in the National Marine Planning Framework for Ireland during the development of draft map products; feedback from the public consultation is ongoing and is not presented.
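
    As a loose, hypothetical sketch of the kind of metadata the process argues should travel with each dataset, the record below groups origin, processing lineage, suitability/readiness assessments, and stewardship maturity scores into a single catalogue entry; every field name and value here is an assumption for illustration, not a schema defined by the paper.

```python
# Hypothetical catalogue record illustrating lineage, suitability, and
# readiness metadata travelling with a dataset. All field names and values
# are assumptions for illustration only, not the paper's schema.
dataset_record = {
    "title": "Example seabed habitat layer",
    "origin": {
        "provider": "hypothetical survey programme",
        "collected": "2018-06",
        "processing": ["quality control", "reprojection to a common CRS"],
    },
    "lineage": "raw survey -> QC -> gridded product",       # provenance summary
    "suitability": "assessed against the intended MSP use case",
    "readiness": "catalogued, documented, and published",
    "stewardship_maturity": {"documentation": 3, "accessibility": 2},  # matrix scores
}

print(dataset_record["lineage"])
```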

    Time to revisit the passive overconsumption hypothesis? Humans show sensitivity to calories in energy-rich meals

    BACKGROUND: A possible driver of obesity is insensitivity (passive overconsumption) to food energy density (ED, kcal/g); however, it is unclear whether this insensitivity applies to all meals. OBJECTIVES: We assessed the influence of ED on energy intake (kcal) across a broad and continuous range of EDs comprising non-covertly manipulated, real-world meals. We also allowed for the possibility that the association between energy intake and ED is nonlinear. METHODS: We completed a secondary analysis of 1519 meals that occurred in a controlled environment as part of a study conducted by Hall and colleagues to assess the effects of food ultra-processing on energy intake. To establish the generalizability of the findings, the analyses were repeated on 32,162 meals collected from free-living humans using data from the UK National Diet and Nutrition Survey (NDNS). Segmented regressions were performed to establish ED “breakpoints” at which the association between consumed meal ED and mean-centered meal caloric intake (kcal) changed. RESULTS: Significant breakpoints were found in both the Hall et al. data set (1.41 kcal/g) and the NDNS data set (1.75 and 2.94 kcal/g). Centered meal caloric intake did not increase linearly with consumed meal ED, and this pattern was captured by a two-component (“volume” and “calorie content” [biologically derived from the sensing of fat, carbohydrate, and protein]) model of physical meal size (g), in which volume is the dominant signal for lower energy-dense foods and calorie content is the dominant signal for higher energy-dense foods. CONCLUSIONS: These analyses reveal that, on some level, humans are sensitive to the energy content of meals and adjust meal size to minimize the acute aversive effects of overconsumption. Future research should consider the relative importance of volume and calorie-content signals, and how individual differences impact everyday dietary behavior and energy balance.
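
    As a hedged sketch of the breakpoint idea rather than the authors' implementation, the example below fits a two-segment (piecewise-linear) model of meal energy intake against energy density, scanning candidate breakpoints and keeping the one with the smallest residual sum of squares; the simulated meals, the hinge parameterisation, and the breakpoint grid are all assumptions for this example.

```python
# Illustrative segmented-regression sketch (not the authors' code): find the
# energy-density breakpoint at which the slope of meal energy intake changes,
# by scanning candidate breakpoints and keeping the best least-squares fit.
# The simulated meals and the breakpoint grid are assumptions for this example.
import numpy as np

rng = np.random.default_rng(1)
ed = rng.uniform(0.5, 4.0, 2000)              # meal energy density, kcal/g
true_bp = 1.5
intake = np.where(ed < true_bp, 300 * ed, 300 * true_bp + 60 * (ed - true_bp))
intake += rng.normal(0, 40, ed.size)          # meal energy intake (kcal), with noise

def rss_for_breakpoint(bp):
    # Design matrix: intercept, ED, and the hinge term max(ED - bp, 0)
    X = np.column_stack([np.ones_like(ed), ed, np.clip(ed - bp, 0, None)])
    coef, *_ = np.linalg.lstsq(X, intake, rcond=None)
    return np.sum((intake - X @ coef) ** 2)

grid = np.linspace(0.8, 3.5, 200)
best_bp = grid[int(np.argmin([rss_for_breakpoint(bp) for bp in grid]))]
print(f"Estimated breakpoint: {best_bp:.2f} kcal/g")
```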

    Workplace Turbulence and Workforce Preparedness

    The year 1973 marked a divide in the postwar economy. During the 25 years between 1948 and 1973, private sector productivity increased at an annual rate of 2.9%. Productivity improvement after 1973 fell well below this long-term trend, leveling off at about 0.6% a year until 1981 and rising to only 1.6% a year between 1981 and 1987. A similar pattern is reflected in the real wages of the workforce. The conventional interpretation of this difference in the U.S. economy before and after 1973 is that it reflects the combined influence of the OPEC oil shock and the influx into the labor market of inexperienced workers born in the postwar baby boom, possibly reinforced by growth in regulatory costs. However, when the productivity data are analyzed in a growth accounting framework, these economic factors can account for only about two thirds of the productivity decline. What then explains the balance of the shortfall in productivity? Many analysts have pointed to the intangible effects on managers of increased economic uncertainty since 1973—growing business cautiousness, increased emphasis on short-term financial objectives, and inadequate entrepreneurial incentives. But economic change and uncertainty can also affect productivity through their impact on jobs and workers.

    Comparison of Body Composition Measurements using a New Caliper, Two Established Calipers, Hydrostatic Weighing, and BodPod

    Purposes: (1) To compare the Lafayette Instruments (LI) skinfold caliper to the Lange (L) and Harpenden (H) calipers using a diverse subject population. (2) To determine the validity of the LI caliper in a subset of subjects by comparing body compositions derived from skinfold thicknesses to those measured by hydrostatic weighing (HW) and air displacement plethysmography (ADP). (3) To compare measurements obtained by experienced (EX) and inexperienced (IX) technicians using all three calipers. Methods: Skinfold measurements were performed by both EX and IX technicians using the three different calipers on 21 younger (21.2 ± 1.5 yrs) and 20 older (59.2 ± 4 yrs) subjects. Body compositions were calculated using the Jackson-Pollock seven-site and three-site formulas. HW and ADP tests were performed on a subset of subjects (10 younger, 10 older). Results: No significant differences existed between LI and L or H when measurements were made by EX. Further, the LI-EX measurements were highly correlated with both H-EX and L-EX. No significant differences existed in the subgroup between LI-EX and HW or ADP. Skinfold determinations made by EX and IX were similar. Conclusions: The similar body compositions determined using LI, H, and L suggest that LI measures body composition as effectively as H and L, and the high correlations between the three calipers support this conclusion. The similar results between LI and HW/ADP in the subgroup suggest that the LI caliper may be a valid method of measuring body composition. Overall, performance by IX was similar to that of EX, suggesting similar ease of use for all three calipers.
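
    For context on how skinfold sums are typically converted to percent body fat, the sketch below applies the widely cited Jackson-Pollock seven-site body-density equation for men followed by the Siri conversion; the sample inputs are made up, and this may not be the exact formula variant used in the study.

```python
# Illustrative sketch of a standard skinfold-to-body-fat calculation:
# Jackson-Pollock seven-site body-density equation (commonly cited male
# coefficients) followed by the Siri conversion. The sample inputs are made up
# and this is not necessarily the exact formula variant used in the study.
def body_density_jp7_male(sum_of_7_skinfolds_mm: float, age_years: float) -> float:
    s = sum_of_7_skinfolds_mm
    return 1.112 - 0.00043499 * s + 0.00000055 * s**2 - 0.00028826 * age_years

def percent_body_fat_siri(body_density: float) -> float:
    return 495.0 / body_density - 450.0

# Example: a made-up 21-year-old subject with a 95 mm seven-site skinfold sum
bd = body_density_jp7_male(95.0, 21.0)
print(f"Body density: {bd:.4f} g/cm^3, %BF: {percent_body_fat_siri(bd):.1f}%")
```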

    Probing the extragalactic fast transient sky at minute timescales with DECam

    Searches for optical transients are usually performed with a cadence of days to weeks, optimised for supernova discovery. The optical fast transient sky is still largely unexplored, with only a few surveys to date having placed meaningful constraints on the detection of extragalactic transients evolving at sub-hour timescales. Here, we present the results of deep searches for dim, minute-timescale extragalactic fast transients using the Dark Energy Camera, a core facility of our all-wavelength and all-messenger Deeper, Wider, Faster programme. We used continuous 20 s exposures to systematically probe timescales down to 1.17 minutes at magnitude limits g > 23 (AB), detecting hundreds of transient and variable sources. Nine candidates passed our strict criteria on duration and non-stellarity, all of which could be classified as flare stars based on deep multi-band imaging. Searches for fast radio burst and gamma-ray counterparts during simultaneous multi-facility observations yielded no counterparts to the optical transients. Also, no long-term variability was detected with pre-imaging and follow-up observations using the SkyMapper optical telescope. We place upper limits on minute-timescale fast optical transient rates for a range of depths and timescales. Finally, we demonstrate that optical g-band light curve behaviour alone cannot discriminate between confirmed extragalactic fast transients, such as prompt GRB flashes, and Galactic stellar flares. (Published in MNRAS.)
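
    To illustrate how a rate upper limit of this kind is commonly expressed (not necessarily the authors' exact calculation), the sketch below converts zero confirmed extragalactic detections over an assumed areal exposure into a 95% Poisson upper limit; the field of view and time on sky are placeholder numbers, not the survey's actual values.

```python
# Hedged illustration (not the paper's exact calculation): a 95% Poisson upper
# limit on the event rate when zero extragalactic fast transients are confirmed
# over a given areal exposure. The 3 deg^2 field of view and 10 hours on sky
# are placeholder assumptions, not the survey's actual numbers.
import math

confidence = 0.95
n_expected_max = -math.log(1.0 - confidence)          # ~3.0 events for zero detections

field_of_view_deg2 = 3.0                              # assumed DECam-like field of view
hours_on_sky = 10.0                                   # assumed total exposure
areal_exposure = field_of_view_deg2 * hours_on_sky    # deg^2 * hours

rate_upper_limit = n_expected_max / areal_exposure    # events / (deg^2 * hour)
print(f"Rate < {rate_upper_limit:.3f} events per deg^2 per hour (95% CL)")
```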

    The extended tails of Palomar 5: A ten degree arc of globular cluster tidal debris

    Using wide-field photometric data from the Sloan Digital Sky Survey (SDSS), we recently showed that the Galactic globular cluster Palomar 5 is in the process of being tidally disrupted. Its tidal tails were initially detected in a 2.5 degree wide band along the celestial equator. A new analysis of SDSS data for a larger field now reveals that the tails of Pal 5 have a much larger spatial extent and can be traced over an arc of 10 degrees across the sky, corresponding to a projected length of 4 kpc at the distance of the cluster. The number of former cluster stars found in the tails adds up to about 1.2 times the number of stars in the cluster. The radial profile of stellar surface density in the tails approximately follows a power law r^gamma with -1.5 < gamma < -1.2. The stream of debris from Pal 5 is significantly curved, which demonstrates its acceleration by the Galactic potential. The cluster is presently near the apocenter but has repeatedly undergone disk crossings in the inner part of the Galaxy, leading to strong tidal shocks. Our results suggest that the observed debris originates mostly from mass loss within the last 2 Gyr. The cluster is likely to be destroyed after the next disk crossing, which will happen in about 100 Myr. (Abridged. 44 pages, including 14 figures, with Figs. 1, 3 and 14 at reduced resolution; accepted for publication in the Astronomical Journal.)
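
    The quoted 4 kpc projected length is consistent with simple small-angle arithmetic, assuming the commonly cited heliocentric distance of roughly 23 kpc to Pal 5 (a literature value, not stated in this abstract):

```latex
% Small-angle estimate, assuming d ~ 23 kpc for Pal 5 (a literature value)
L \approx d\,\theta
  \approx 23\ \mathrm{kpc} \times \left(10^{\circ} \times \tfrac{\pi}{180^{\circ}}\right)
  \approx 23\ \mathrm{kpc} \times 0.175\ \mathrm{rad}
  \approx 4\ \mathrm{kpc}
```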

    Innate and adaptive humoral responses coat distinct commensal bacteria with immunoglobulin A

    Immunoglobulin A (IgA) is prominently secreted at mucosal surfaces and coats a fraction of the intestinal microbiota. However, the commensal bacteria bound by IgA are poorly characterized, and the type of humoral immunity they elicit remains elusive. We used bacterial flow cytometry coupled with 16S rRNA gene sequencing (IgA-Seq) in murine models of immunodeficiency to identify IgA-bound bacteria and elucidate mechanisms of commensal IgA targeting. We found that residence in the small intestine, rather than bacterial identity, dictated induction of specific IgA. Most commensals elicited strong T-independent (TI) responses that originated from the orphan B1b lineage and from B2 cells, but excluded natural antibacterial B1a specificities. Atypical commensals, including segmented filamentous bacteria and Mucispirillum, evaded TI responses but elicited T-dependent IgA. These data demonstrate exquisite targeting of distinct commensal bacteria by multiple layers of humoral immunity and reveal a specialized function of the B1b lineage in TI mucosal IgA responses.