
    Generalized Additive Models for Location Scale and Shape (GAMLSS) in R

    GAMLSS is a general framework for fitting regression-type models in which the distribution of the response variable does not have to belong to the exponential family and can include highly skewed and kurtotic continuous and discrete distributions. GAMLSS allows all the parameters of the distribution of the response variable to be modelled as linear, non-linear or smooth functions of the explanatory variables. This paper starts by defining the statistical framework of GAMLSS, then describes the current implementation of GAMLSS in R, and finally gives four data examples to demonstrate how GAMLSS can be used for statistical modelling.
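
    The paper's implementation is the R gamlss package; purely as a rough sketch of the core idea in another language, the Python snippet below fits a model in which both the location and the (log-)scale of the response are linear in a covariate, by maximum likelihood. The normal response, the linear predictors and all values are illustrative assumptions, not the paper's models or data.

```python
# Minimal sketch of the GAMLSS idea (the paper's implementation is the R
# 'gamlss' package; this Python version is an illustrative assumption).
# Both the location mu and the scale sigma of the response are modelled
# as functions of an explanatory variable x and fitted by maximum likelihood.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
y = 1.0 + 2.0 * x + rng.normal(scale=0.2 + 0.5 * x)  # heteroscedastic data

def neg_log_lik(theta):
    b0, b1, c0, c1 = theta
    mu = b0 + b1 * x              # linear predictor for the location
    sigma = np.exp(c0 + c1 * x)   # log link keeps the scale positive
    return -norm.logpdf(y, loc=mu, scale=sigma).sum()

fit = minimize(neg_log_lik, x0=np.zeros(4))
print(fit.x)  # estimates of (b0, b1, c0, c1)
```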

    The repeatability of the abbreviated (4-h) Oral Fat Tolerance Test and influence of prior acute aerobic exercise

    Purpose: The Oral Fat Tolerance Test (OFTT) is regarded as a repeatable measure used to assess postprandial triglyceride (TAG) levels, with higher levels observed in cardio-metabolic disorders. Acute aerobic exercise intervention before OFTT reduces the TAG response, but the repeatability of this effect is unknown. The aim of this study was to determine the repeatability of the abbreviated 4-h OFTT with and without immediate prior aerobic exercise. Methods: On four separate days, healthy adult male participants underwent two 4-h OFTTs (n = 10) and another two 4-h OFTTs with 1 h of standardised moderate-intensity aerobic exercise performed immediately before meal ingestion (n = 11). The OFTT meal composition included 75.4 g total fat, 21.7 g carbohydrate and 13.7 g protein. Venous blood was sampled at baseline and hourly up to 4 h after OFTT meal ingestion, and TAG area under the curve (AUC) was calculated. Results: Nonparametric Bland–Altman analysis of 4-h TAG AUC revealed that 9 of 10 repeat measurements fell within ±15% of the median TAG AUC for the OFTT. By contrast, two of 11 repeat measurements fell within ±15% of the median TAG AUC for the OFTT undertaken with 1 h of prior aerobic exercise. Conclusions: The 4-h OFTT is a repeatable test of postprandial TAG responses in healthy men. However, aerobic exercise performed immediately before OFTT considerably increases the variability of TAG AUC. These findings have implications for the interpretation of research studies investigating exercise intervention performed immediately before OFTT. Future studies should also investigate the repeatability of exercise performed 8–24 h before OFTT.
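
    As a hypothetical worked example of the outcome measure, the sketch below computes a 4-h TAG AUC from hourly samples by the trapezoidal rule and applies the ±15%-of-median repeatability criterion described above. All concentration values are invented for illustration, not the study's data.

```python
# Hypothetical example: 4-h TAG area under the curve (AUC) from hourly
# samples via the trapezoidal rule, then the ±15%-of-median repeatability
# check. The concentrations below are invented, not the study's data.
import numpy as np

t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])           # hours after the OFTT meal
tag_test = np.array([1.1, 1.6, 2.0, 1.8, 1.4])    # mmol/L, first OFTT
tag_retest = np.array([1.0, 1.7, 2.1, 1.7, 1.3])  # mmol/L, repeat OFTT

def trapezoid_auc(t, y):
    """Area under the curve by the trapezoidal rule (mmol/L.h)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t)))

auc_test = trapezoid_auc(t, tag_test)
auc_retest = trapezoid_auc(t, tag_retest)
median_auc = np.median([auc_test, auc_retest])
diff_pct = 100.0 * (auc_retest - auc_test) / median_auc

print(f"AUC test/retest: {auc_test:.2f} / {auc_retest:.2f} mmol/L.h")
print(f"difference: {diff_pct:+.1f}% of median; "
      f"within ±15% band: {abs(diff_pct) <= 15.0}")
```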

    Transoral laser surgery for laryngeal carcinoma: has Steiner achieved a genuine paradigm shift in oncological surgery?

    Transoral laser microsurgery applies to the piecemeal removal of malignant tumours of the upper aerodigestive tract using the CO2 laser under the operating microscope. This method of surgery is being increasingly popularised as a single-modality treatment of choice in early laryngeal cancers (T1 and T2) and occasionally in the more advanced forms of the disease (T3 and T4), predominantly within the supraglottis. Thomas Kuhn, the American physicist turned philosopher and historian of science, coined the phrase 'paradigm shift' in his groundbreaking book The Structure of Scientific Revolutions. He argued that the arrival of a new and often incompatible idea forms the core of a new paradigm, the birth of an entirely new way of thinking. This article discusses whether Steiner and colleagues truly brought about a paradigm shift in oncological surgery. By rejecting the principle of en bloc resection and replacing it with the belief that not only is it oncologically safe to cut through the substance of the tumour but that in doing so one can actually achieve better results, Steiner was able to truly revolutionise the management of laryngeal cancer. Even though within this article the repercussions of his insight are limited to oncological surgery of the upper aerodigestive tract, his willingness to question other people's dogma makes his contribution a genuine paradigm shift.

    Distribution of contaminants in the environment and wildlife habitat use: a case study with lead and waterfowl on the Upper Texas Coast

    The magnitude and distribution of lead contamination remain unknown in wetland systems. Anthropogenic deposition of lead may be contributing to negative population-level effects in waterfowl and other organisms that depend on dynamic wetland habitats, particularly if they are unable to detect and differentiate levels of environmental contamination by lead. Detection of lead and behavioral response to elevated lead levels by waterfowl are poorly understood, but necessary to characterize the risk of lead-contaminated habitats. We measured the relationship between lead contamination of wetland soils and habitat use by mottled ducks (Anas fulvigula) on the Upper Texas Coast, USA. Mottled ducks have historically experienced disproportionate negative effects from lead exposure, and exhibit a unique nonmigratory life history that increases risk of exposure when inhabiting contaminated areas. We used spatial interpolation to estimate lead in wetland soils of the Texas Chenier Plain National Wildlife Refuge Complex. Soil lead levels varied across the refuge complex (0.01–1085.51 ppm), but greater lead concentrations frequently corresponded to areas with high densities of transmittered mottled ducks. We used soil lead concentration data and MaxEnt species distribution models to quantify relationships among various habitat factors and locations of mottled ducks. Use of habitats with greater lead concentration increased during years of a major disturbance. Because mottled ducks use habitats with high concentrations of lead during periods of stress, have greater risk of exposure following major disturbance to the coastal marsh system, and have no innate mechanism for avoiding the threat of lead exposure, we suggest the potential presence of an ecological trap in otherwise quality habitat that warrants further quantification at a population scale for mottled ducks.
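
    The abstract does not name the interpolation scheme used, so as an illustrative stand-in the sketch below estimates soil lead at an unsampled location by inverse-distance weighting over a handful of hypothetical sample points; coordinates and concentrations are invented.

```python
# Illustrative sketch of spatial interpolation of soil lead concentrations
# by inverse-distance weighting (IDW is an assumption; the study's actual
# method is not stated in the abstract). All coordinates and values are
# invented for illustration.
import numpy as np

# Sampled points: (x, y) in arbitrary map units, lead concentration in ppm
pts = np.array([[0.0, 0.0], [1.0, 0.2], [0.3, 1.1], [1.2, 1.0]])
lead = np.array([5.0, 120.0, 30.0, 800.0])

def idw(query, pts, vals, power=2.0, eps=1e-12):
    """Inverse-distance-weighted estimate at a query point."""
    d = np.linalg.norm(pts - query, axis=1)
    if d.min() < eps:              # query coincides with a sample point
        return float(vals[d.argmin()])
    w = 1.0 / d**power
    return float(np.sum(w * vals) / np.sum(w))

print(idw(np.array([0.6, 0.6]), pts, lead))  # estimated ppm at a new point
```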

    Lifting the Veil on Obscured Accretion: Active Galactic Nuclei Number Counts and Survey Strategies for Imaging Hard X-Ray Missions

    Finding and characterizing the population of active galactic nuclei (AGNs) that produces the X-ray background (XRB) is necessary to connect the history of accretion to observations of galaxy evolution at longer wavelengths. The year 2012 will see the deployment of the first hard X-ray imaging telescope, which, through deep extragalactic surveys, will be able to measure the AGN population at the energies where the XRB peaks (~20-30 keV). Here, we present predictions of AGN number counts in three hard X-ray bandpasses: 6-10 keV, 10-30 keV, and 30-60 keV. Separate predictions are presented for the number counts of Compton thick AGNs, the most heavily obscured active galaxies. The number counts are calculated for five different models of the XRB that differ in the assumed hard X-ray luminosity function, the evolution of the Compton thick AGNs, and the underlying AGN spectral model. The majority of the hard X-ray number counts will be Compton thin AGNs, but there is a greater than tenfold increase in the Compton thick number counts from the 6-10 keV to the 10-30 keV band. The Compton thick population shows enough variation that a hard X-ray number counts measurement will constrain the models. The computed number counts are used to consider various survey strategies for the NuSTAR mission, assuming a total exposure time of 6.2 Ms. We find that multiple surveys will allow a measurement of Compton thick evolution. The predictions presented here should be useful for all future imaging hard X-ray missions.
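
    As a hedged sketch of how such number counts are typically computed (generic notation, assumed here rather than quoted from the paper), the cumulative counts follow from integrating the luminosity function over the volume in which a source of a given luminosity lies above the band's flux limit:

```latex
% Schematic cumulative number counts (generic notation, not quoted from the
% paper): sources brighter than flux S in a band are counted by integrating
% the hard X-ray luminosity function \Phi over redshift and over all
% luminosities visible above the flux limit; k(z) is the band k-correction.
N(>S) = \int_0^{\infty} \frac{dV}{dz}
        \int_{L_{\min}(S,z)}^{\infty} \Phi(L_X, z)\, dL_X \, dz,
\qquad
L_{\min}(S,z) = 4\pi d_L^2(z)\, S\, k(z)
```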

    Cosmic downsizing of powerful radio galaxies to low radio luminosities

    At bright radio powers ($P_{\rm 1.4\,GHz} > 10^{25}$ W/Hz) the space density of the most powerful sources peaks at higher redshift than that of their weaker counterparts. This paper establishes whether this luminosity-dependent evolution persists for sources an order of magnitude fainter than those previously studied, by measuring the steep-spectrum radio luminosity function (RLF) across the range $10^{24} < P_{\rm 1.4\,GHz} < 10^{28}$ W/Hz, out to high redshift. A grid-based modelling method is used, in which no assumptions are made about the RLF shape and high-redshift behaviour. The inputs to the model are the same as in Rigby et al. (2011): redshift distributions from radio source samples, together with source counts and determinations of the local luminosity function. However, to improve coverage of the radio power vs. redshift plane at the lowest radio powers, a new faint radio sample is introduced. This covers 0.8 sq. deg. in the Subaru/XMM-Newton Deep Field, to a 1.4 GHz flux density limit of $S_{\rm 1.4\,GHz} \geq 100\,\mu$Jy, with 99% redshift completeness. The modelling results show that the previously seen high-redshift declines in space density persist to $P_{\rm 1.4\,GHz} < 10^{25}$ W/Hz. At $P_{\rm 1.4\,GHz} > 10^{26}$ W/Hz the redshift of the peak space density increases with luminosity, whilst at lower radio luminosities the position of the peak remains constant within the uncertainties. This 'cosmic downsizing' behaviour is found to be similar to that seen at optical wavelengths for quasars, and is interpreted as representing the transition from radiatively efficient to inefficient accretion modes in the steep-spectrum population. This conclusion is supported by constructing simple models for the space density evolution of these two different radio galaxy classes; these are able to successfully reproduce the observed variation in peak redshift. Comment: 7 pages, 6 figures; accepted for publication in Astronomy & Astrophysics.
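
    Schematically, and with notation assumed here rather than taken from the paper, a grid-based approach treats the space density in each luminosity-redshift cell as a free parameter, constrained to reproduce the observed source counts and the redshift distributions of the input samples:

```latex
% Schematic view of a grid-based RLF fit (generic notation, not quoted from
% the paper): \rho_{ij} is the free space density in cell (L_i, z_j),
% S_{ij} the flux of a source of power L_i at redshift z_j, \Delta V_j the
% comoving volume of redshift bin j, and \Theta the unit step function.
n(>S) = \sum_{i,j} \rho_{ij}\, \Delta V_j\, \Theta\big(S_{ij} - S\big),
\qquad
N_k(z_j) = \sum_i \rho_{ij}\, \Delta V_j\, \Theta\big(S_{ij} - S_{\lim,k}\big)
```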

    Single-Degree-of-Freedom response of finite targets subjected to blast loading – The influence of clearing

    When evaluating the dynamic response of a structure subjected to a high explosive detonation, it is common to simplify both the target properties and the form of the blast pressure load: a standard approach is to model the target as an equivalent Single-Degree-of-Freedom (SDOF) system with the blast load idealised as a pulse which decays linearly with time. Whilst this method is suitable for cases where the reflecting surface is large, it is well known that for smaller targets, the propagation of a rarefaction 'clearing' wave from the edges of the target may cause a premature reduction in the magnitude of the blast pressure and hence reduce the total impulse acting on the structure. In this article, a simple method for calculating clearing relief, based on an acoustic approximation of the rarefaction wave, is coupled with an SDOF model to investigate the influence of clearing on the dynamic response of elastic targets. Response spectra are developed for a range of target sizes and blast events that may be of interest to the engineer, enabling the effects of blast wave clearing to be evaluated and situations where blast wave clearing may increase the peak displacement of the target to be determined. When the natural period of the target is large compared to the duration of loading, the reduction in positive phase impulse leads to significantly lower values of peak displacement when compared to an identical system subjected to a triangular blast load. For systems where the natural period is comparable to the duration of the loading, the early onset of negative pressure (attributed to blast wave clearing) can coincide with the rebound of the target and result in greater peak displacements. It is concluded that blast wave clearing should be evaluated and its influence quantified in order to ensure that blast resistant designs are efficient and safe.
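
    A minimal numerical sketch of the idealised baseline model described above (an undamped elastic SDOF system under a linearly decaying triangular pulse, without the clearing correction) is given below; all parameter values are illustrative assumptions, not the paper's cases.

```python
# Sketch of the idealised SDOF model from the abstract: an undamped elastic
# system m*x'' + k*x = F(t) driven by a blast pulse that decays linearly to
# zero over the positive-phase duration td (no clearing correction applied).
# Parameter values are illustrative, not taken from the paper.
import numpy as np
from scipy.integrate import solve_ivp

m, k = 100.0, 4.0e5    # mass (kg) and stiffness (N/m)
F0, td = 5.0e3, 0.01   # peak reflected force (N) and positive phase (s)

def force(t):
    """Triangular pulse: peak F0 at t=0, linear decay to zero at t=td."""
    return F0 * (1.0 - t / td) if t < td else 0.0

def rhs(t, y):
    x, v = y
    return [v, (force(t) - k * x) / m]

sol = solve_ivp(rhs, (0.0, 0.2), [0.0, 0.0], max_step=1e-4)
print("peak displacement (m):", np.abs(sol.y[0]).max())
```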

    Energy self-sufficiency, grid demand variability and consumer costs: Integrating solar PV, Stirling engine CHP and battery storage

    Global uptake of solar PV has risen significantly over the past four years, motivated by increased economic feasibility and the desire for electricity self-sufficiency. However, significant uptake of solar PV could cause grid balancing issues. A system comprising Stirling engine combined heat and power, solar PV and battery storage (SECHP-PV-battery) may further improve self-sufficiency, satisfying both heat and electricity demand as well as mitigating potential negative grid effects. This paper presents the results of a simulation of 30 households with different energy demand profiles using this system, in order to determine: the degree of household electricity self-sufficiency achieved; resultant grid demand profiles; and the consumer economic costs and benefits. The results indicate that, even though PV and SECHP collectively produced 30% more electricity than the average demand of 3300 kWh/yr, households still had to import 28% of their electricity demand from the grid with a 6 kWh battery. This work shows that SECHP is much more effective in increasing self-sufficiency than PV, with the households consuming on average 49% of electricity generated (not including battery contribution), compared to 28% for PV. The addition of a 6 kWh battery to PV and SECHP improves the grid demand profile by 28% in terms of grid demand ramp-up requirement and 40% for ramp-downs. However, the variability of the grid demand profile is still greater than for the conventional system comprising a standard gas boiler and electricity from the grid. These moderate improvements must be weighed against the consumer cost: with current incentives, the system is only financially beneficial for households with high electricity demand (>4300 kWh/yr). A capital grant of 24% of the installed cost of the whole micro-generation system is required to make the system financially viable for households with an average electricity demand (3300 kWh/yr).
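
    A toy hourly energy balance in the spirit of the simulation described above is sketched below: generation from PV and SECHP first meets demand, surplus charges the battery, and any remaining deficit is imported from the grid. The profile shapes, the battery parameters and the dispatch rule are illustrative assumptions, not the paper's model.

```python
# Toy hourly balance for one household with PV, SECHP and a 6 kWh battery.
# All profiles, efficiencies and the dispatch rule are invented for
# illustration; they are not the paper's simulation inputs.
import numpy as np

hours = np.arange(24)
demand = 0.3 + 0.4 * np.exp(-(hours - 19.0)**2 / 8.0)        # kWh per hour
pv = np.clip(np.sin((hours - 6.0) / 12.0 * np.pi), 0, None) * 0.8
sechp = np.where((hours < 9) | (hours > 16), 0.5, 0.1)       # heat-led CHP

cap, soc, eff = 6.0, 3.0, 0.9   # battery capacity (kWh), charge, efficiency
imported = 0.0
for d, g in zip(demand, pv + sechp):
    net = g - d
    if net >= 0.0:                    # surplus: charge the battery
        soc = min(cap, soc + net * eff)
    else:                             # deficit: discharge, then import
        take = min(soc, -net)
        soc -= take
        imported += -net - take

print(f"grid import: {imported:.1f} kWh/day "
      f"({100.0 * imported / demand.sum():.0f}% of demand)")
```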