
    Constraining the Structure of GRB Jets Through the log(N)-log(S) Distribution

    A general formalism is developed for calculating the luminosity function and the expected number N of observed GRBs above a peak photon flux S for any GRB jet structure. This new formalism directly provides the true GRB rate without the need for a 'correction factor'. We apply it to the uniform jet (UJ) and universal structured jet (USJ) models for the structure of GRB jets and perform fits to the observed log(N)-log(S) distribution from the GUSBAD catalog, which contains 2204 BATSE bursts. A core angle θ_c and an outer edge at θ_max are introduced for the structured jet, and a finite range of half-opening angles θ_min ≤ θ_j ≤ θ_max is assumed for the uniform jets. The efficiency ε_γ for producing gamma-rays and the energy per solid angle ε in the jet are allowed to vary with θ_j (the viewing angle θ_obs) in the UJ (USJ) model: ε_γ ∝ θ^(-b) and ε ∝ θ^(-a). We find that a single power-law luminosity function provides a good fit to the data. Such a luminosity function arises naturally in the USJ model, while in the UJ model it implies a power-law probability distribution for θ_j, P(θ_j) ∝ θ_j^(-q). The value of q cannot be directly determined from the fit to the observed log(N)-log(S) distribution, and an additional assumption on the value of a or b is required. Alternatively, an independent estimate of the true GRB rate would enable one to determine a, b and q. The implied values of θ_c (or θ_min) and θ_max are close to the current observational limits. The true GRB rate for the USJ model is found to be R_GRB(z=0) = 0.86 (+0.14/-0.05) Gpc^(-3) yr^(-1).
    Comment: 9 pages, 5 figures, 1 table; accepted for publication in Ap
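As a sanity check on the bright end of such counts, one can verify numerically that a power-law luminosity function in a uniform, static Euclidean volume yields the classic N(>S) ∝ S^(-3/2) behaviour; it is the deviation of the observed counts from this slope at faint fluxes that carries the structural information such fits exploit. The luminosity range and slope below are illustrative, not values from the paper:

```python
import numpy as np

def n_above_s(s_grid, alpha=2.0, l_min=1.0, l_max=1e4, n_l=4000):
    """Relative number of sources brighter than flux S, for a power-law
    luminosity function phi(L) ~ L**-alpha and a uniform source density
    in static Euclidean space. A source of luminosity L is visible out
    to d_max = sqrt(L / (4*pi*S)), so its contribution scales as d_max**3.
    """
    l = np.logspace(np.log10(l_min), np.log10(l_max), n_l)
    phi = l ** -alpha                       # unnormalised luminosity function
    counts = []
    for s in s_grid:
        integrand = phi * (l / (4 * np.pi * s)) ** 1.5
        # trapezoidal integration over luminosity
        counts.append(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(l)))
    return np.array(counts)

s = np.logspace(-2, 2, 40)
n = n_above_s(s)
# slope of log N(>S) against log S: -3/2 in this Euclidean limit,
# independent of the luminosity-function slope alpha
slope = np.polyfit(np.log10(s), np.log10(n), 1)[0]
```

Because S factors out of the integral, the -3/2 slope holds for any luminosity function in this limit; cosmological expansion and jet structure are what bend the faint end.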

    The Use of Structured Imagery and Dispositional Measurement to Assess Situational Use of Mindfulness Skills

    The recent proliferation of studies on mindfulness has produced varying theoretical models, each based in part on how mindfulness is assessed. These models agree, however, that mindfulness encompasses moment-to-moment or situational experiences. Incongruence between dispositional and situational assessment would be problematic for theory and empirical research. In particular, it remains to be established whether situational measurement is an accurate method for mindfulness assessment and whether dispositional measures are able to accurately detect mindfulness skills in various situations. The association between dispositional and situational mindfulness processes (i.e., situational attention awareness and emotion acceptance) was examined in two studies. In Study 1 (N = 148), independent groups who reported high and low levels of dispositional mindfulness skills were compared on a continuous measure of situational mindfulness skills. In Study 2 (N = 317), dispositional mindfulness questionnaires were used to predict situational use of mindfulness skills. Results suggest not only that situational measures accurately detect use of mindfulness skills, but also that dispositional measures can predict one's use of situational mindfulness skills. Findings from both studies were consistent across both positive and negative situations. Moreover, neither neuroticism nor extraversion was shown to have a moderating effect on the relationship between dispositional and situational use of mindfulness skills. The implications of these findings for clinical practice and future investigations pertaining to measurement validity in this area are discussed.

    Life expectancy in Australian seniors with or without cognitive impairment: the Australian Diabetes, Obesity and Lifestyle Study Wave 3

    Objective: To determine the prevalence of cognitive impairment (CI) and to estimate life expectancy with and without cognitive impairment in the Australian population over age 60. Method: Adults aged 60 and older participating in the 12-year follow-up of the Australian Diabetes, Obesity and Lifestyle Study (AusDiab) were included in the sample (n=1666). The mean age was 69.5 years, and 46.3% of the sample was male. The Mini-Mental State Examination was used to assess cognitive impairment. Logistic regression analysis was used to determine the effect of predictor variables (age, gender, education), measured at baseline, on cognitive impairment status. The Sullivan method was used to estimate total life expectancy (TLE), cognitively impaired life expectancy (CILE) and cognitive impairment-free life expectancy (CIFLE). Results: Odds of CI were greater for males than females (OR 2.1, 95% confidence interval: 1.2-3.7) and among Australians with low education levels compared with Australians with high education levels (OR 2.1, 95% confidence interval: 1.2-3.7). The odds of CI also increased each year with age (OR 1.1, 95% confidence interval: 1.0-1.1). It was found that in all age groups females have greater TLE and CIFLE when compared to their male counterparts.
    This research was supported by the Australian Research Council Centre of Excellence in Population Ageing Research (project number CE110001029). KJA is funded by NHMRC Fellowship #1002560. We acknowledge support from the NHMRC Dementia Collaborative Research Centres. The AusDiab study, co-coordinated by the Baker IDI Heart and Diabetes Institute, gratefully acknowledges the support and assistance given by: K Anstey, B Atkins, B Balkau, E Barr, A Cameron, S Chadban, M de Courten, D Dunstan, A Kavanagh, D Magliano, S Murray, N Owen, K Polkinghorne, J Shaw, T Welborn, P Zimmet and all the study participants. Also, for funding or logistical support, we are grateful to: National Health and Medical Research Council (NHMRC grants 233200 and 1007544), Australian Government Department of Health and Aging, Abbott Australasia Pty Ltd, Alphapharm Pty Ltd, Amgen Australia, AstraZeneca, Bristol-Myers Squibb, City Health Centre-Diabetes Service-Canberra, Department of Health and Community Services-Northern Territory, Department of Health and Human Services-Tasmania, Department of Health-New South Wales, Department of Health-Western Australia, Department of Health-South Australia, Department of Human Services-Victoria, Diabetes Australia, Diabetes Australia Northern Territory, Eli Lilly Australia, Estate of the Late Edward Wilson, GlaxoSmithKline, Jack Brockhoff Foundation, Janssen-Cilag, Kidney Health Australia, Marian & FH Flack Trust, Menzies Research Institute, Merck Sharp & Dohme, Novartis Pharmaceuticals, Novo Nordisk Pharmaceuticals, Pfizer Pty Ltd, Pratt Foundation, Queensland Health, Roche Diagnostics Australia, Royal Prince Alfred Hospital, Sydney, Sanofi Aventis, sanofi-synthelabo, and the Victorian Government's OIS Program.
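The Sullivan method used in the study weights life-table person-years by age-specific prevalence to partition total life expectancy into impaired and impairment-free components. A minimal sketch, with entirely made-up inputs rather than the AusDiab estimates:

```python
import numpy as np

# Sullivan method sketch. The age bands, person-years (L_x) and CI
# prevalence values are invented for illustration only; they are
# NOT the AusDiab estimates.
L_x  = np.array([480, 450, 400, 330, 230, 180]) * 1000  # person-years lived per 5-year band
prev = np.array([0.02, 0.04, 0.07, 0.12, 0.20, 0.30])   # proportion cognitively impaired
l_60 = 100_000                                          # life-table radix: survivors at age 60

TLE   = L_x.sum() / l_60                 # total life expectancy at age 60
CIFLE = ((1 - prev) * L_x).sum() / l_60  # cognitive-impairment-free years
CILE  = (prev * L_x).sum() / l_60        # cognitively impaired years
# The two components partition total life expectancy exactly: TLE = CIFLE + CILE
```

The method's appeal is that it needs only cross-sectional prevalence plus an ordinary period life table, which is why it suits survey data such as AusDiab.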

    Celebrity, Death, and Taxes: Michael Jackson's Estate

    The untimely death of Michael Jackson this past June presents an opportunity to reassess certain thorny estate tax issues that may arise when a celebrity dies owning valuable intellectual property. Elsewhere we have debated hypothetical, tax-motivated changes to state laws relating to postmortem publicity rights. This article focuses on existing legislation, like California's, that makes publicity rights both devisable and descendible. Federal transfer taxes are levied on intangible property as well as tangible assets, and therefore apply to intellectual property, including a celebrity's right of publicity and copyrights retained by an artist in his or her creations. Using Michael Jackson's estate as an example, and focusing primarily on publicity rights, we examine two questions that any estate planner representing a celebrity client ought to consider. First, how should a personal representative value intellectual property for estate tax purposes? Second, what strategies are available to lessen the estate tax burden associated with certain intellectual property rights?

    Does environmental enrichment promote recovery from stress in rainbow trout?

    The EC Directive on animal experimentation suggests that animals should have enrichment to improve welfare, yet relatively little research has been conducted on the impact of enrichment in fish. Studies on zebrafish have been contradictory, and other fish species may require species-specific enrichments. Salmonids are important experimental models given their relevance to aquaculture and natural ecosystems. This study sought to establish how an enriched environment may promote better welfare in rainbow trout (Oncorhynchus mykiss) by enhancing their recovery from invasive procedures. Trout were held individually in either barren or enriched (gravel, plants and an area of cover) conditions, and recovery rates after a potentially painful event and a standard stressor were investigated by recording parameters such as behaviour, opercular beat rate and plasma cortisol concentrations. Fish were randomly assigned to one of four treatment groups: Control, where the fish were left undisturbed; Sham, where fish were anaesthetised but underwent no invasive procedure; Pain, where a subcutaneous injection of acetic acid was administered to the frontal lips during anaesthesia; and Stress, where fish were subjected to one minute of air emersion. Video recordings were made prior to treatment and then at 30-minute intervals afterwards to determine whether fish in enriched conditions recovered more rapidly than those in barren tanks. Preliminary analyses suggest that enriched fish may be less stressed; these findings thus have important implications for the husbandry and welfare of captive rainbow trout, but may also affect the outcome of experimental studies depending upon whether enrichment was adopted.

    The Abdominal Aortic Aneurysm Statistically Corrected Operative Risk Evaluation (AAA SCORE) for predicting mortality after open and endovascular interventions

    Background: Accurate adjustment of surgical outcome data for risk is vital in an era of surgeon-level reporting. Current risk prediction models for abdominal aortic aneurysm (AAA) repair are suboptimal. We aimed to develop a reliable risk model for in-hospital mortality after intervention for AAA, using rigorous contemporary statistical techniques to handle missing data. Methods: Using data collected during a 15-month period in the United Kingdom National Vascular Database, we applied multiple imputation methodology together with stepwise model selection to generate preoperative and perioperative models of in-hospital mortality after AAA repair, using two thirds of the available data. Model performance was then assessed on the remaining third of the data by receiver operating characteristic curve analysis and compared with existing risk prediction models. Model calibration was assessed by Hosmer-Lemeshow analysis. Results: A total of 8088 AAA repair operations were recorded in the National Vascular Database during the study period, of which 5870 (72.6%) were elective procedures. Both preoperative and perioperative models showed excellent discrimination, with areas under the receiver operating characteristic curve of .89 and .92, respectively. This was significantly better than any of the existing models (area under the receiver operating characteristic curve for best comparator model, .84 and .88; P < .001 and P = .001, respectively). Discrimination remained excellent when only elective procedures were considered. There was no evidence of miscalibration by Hosmer-Lemeshow analysis. Conclusions: We have developed accurate models to assess risk of in-hospital mortality after AAA repair. These models were carefully developed with rigorous statistical methodology and significantly outperform existing methods for both elective cases and overall AAA mortality. These models will be invaluable for both preoperative patient counseling and accurate risk adjustment of published outcome data.
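The validation strategy described (a two-thirds development set, a one-third hold-out set, and discrimination measured by the area under the ROC curve) can be sketched as follows. The synthetic cohort and risk-factor coefficients below are stand-ins, since the National Vascular Database itself is not public:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 8088                                   # same cohort size as the study
X = rng.normal(size=(n, 5))                # 5 hypothetical preoperative risk factors
# Invented true risk model: low baseline mortality, three informative factors
logit = -3.0 + X @ np.array([0.8, 0.5, 0.3, 0.0, 0.0])
y = rng.random(n) < 1 / (1 + np.exp(-logit))  # in-hospital death indicator

# Two-thirds development set, one-third validation set, as in the paper
X_dev, X_val, y_dev, y_val = train_test_split(X, y, test_size=1/3, random_state=0)
model = LogisticRegression().fit(X_dev, y_dev)
auc = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])  # discrimination
```

Assessing the AUC only on data the model never saw guards against the optimism that in-sample discrimination statistics would show; the paper's multiple-imputation step for missing predictors is omitted here for brevity.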

    Nutrient Digestibility of Condensed Algal Residue Solubles in Beef Cattle Finishing Diets

    Condensed algal residue solubles (CARS) were evaluated in finishing cattle diets. Six treatments were evaluated (2 × 3 factorial arrangement): CARS inclusion in the diet at 0, 5, or 10% of diet dry matter, with 0 or 20% wet distillers grains. The remainder of the diets consisted of 57.5-87.5% dry rolled corn, 7.5% sorghum silage and 5% supplement. Increasing wet distillers grains in the diet had no effect on dry matter and organic matter intake but decreased dry matter and organic matter digestibility. Increasing CARS inclusion in the diet resulted in lower dry matter and organic matter intake with no effect on dry matter and organic matter digestibility. Replacing up to 10% dry rolled corn with CARS in diets with or without wet distillers grains had little effect on digestibility of finishing beef cattle diets.

    Glacial melt under a porous debris layer

    In this paper we undertake a quantitative analysis of the dynamic process by which ice underneath a dry porous debris layer melts. We show that the incorporation of debris-layer airflow into a theoretical model of glacial melting can capture the empirically observed features of the so-called Østrem curve (a plot of the melt rate as a function of debris depth). Specifically, we show that the turning point in the Østrem curve can be caused by two distinct mechanisms: the increase in the proportion of ice that is debris-covered and/or a reduction in the evaporative heat flux as the debris layer thickens. This second effect causes an increased melt rate because the reduction in (latent) energy used for evaporation increases the amount of energy available for melting. Our model provides an explicit prediction for the melt rate and the temperature distribution within the debris layer, and provides insight into the relative importance of the two effects responsible for the maximum in the Østrem curve. We use the data of Nicholson and Benn (2006) to show that our model is consistent with existing empirical measurements
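The two competing mechanisms can be illustrated with a toy parameterisation (not the paper's actual model): an evaporative term that shuts off as the debris thickens, multiplied by a conductive insulation term. All parameter values are invented solely to reproduce the characteristic shape of the Østrem curve:

```python
import numpy as np

def melt_rate(h, q0=1.0, q_evap=0.6, h_e=0.02, h_c=0.05):
    """Relative melt rate under a dry porous debris layer of thickness h (m).

    Toy model only; q0, q_evap, h_e and h_c are made-up parameters.
    - Rising limb: as h grows, evaporation at the surface is cut off
      (exp term), freeing latent energy for melting.
    - Falling limb: thicker debris insulates the ice (1/(1 + h/h_c) factor).
    """
    energy_for_melt = q0 - q_evap * np.exp(-h / h_e)   # evaporation shuts off
    insulation = 1.0 / (1.0 + h / h_c)                 # conductive damping
    return energy_for_melt * insulation

h = np.linspace(0.0, 0.5, 501)       # debris thickness grid, 0-50 cm
m = melt_rate(h)
h_peak = h[np.argmax(m)]             # turning point of the toy Ostrem curve
```

With these parameters the curve peaks at a thickness of a few centimetres and then decays, qualitatively matching the empirical Østrem shape; the paper's full model additionally resolves airflow and the temperature profile within the layer.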

    Automatic analysis (aa): efficient neuroimaging workflows and parallel processing using Matlab and XML.

    Recent years have seen neuroimaging data sets becoming richer, with larger cohorts of participants, a greater variety of acquisition techniques, and increasingly complex analyses. These advances have made data analysis pipelines complicated to set up and run (increasing the risk of human error) and time consuming to execute (restricting what analyses are attempted). Here we present an open-source framework, automatic analysis (aa), to address these concerns. Human efficiency is increased by making code modular and reusable, and managing its execution with a processing engine that tracks what has been completed and what needs to be (re)done. Analysis is accelerated by optional parallel processing of independent tasks on cluster or cloud computing resources. A pipeline comprises a series of modules that each perform a specific task. The processing engine keeps track of the data, calculating a map of upstream and downstream dependencies for each module. Existing modules are available for many analysis tasks, such as SPM-based fMRI preprocessing, individual and group level statistics, voxel-based morphometry, tractography, and multi-voxel pattern analyses (MVPA). However, aa also allows for full customization, and encourages efficient management of code: new modules may be written with only a small code overhead. aa has been used by more than 50 researchers in hundreds of neuroimaging studies comprising thousands of subjects. It has been found to be robust, fast, and efficient, for simple single-subject studies up to multimodal pipelines on hundreds of subjects. It is attractive to both novice and experienced users. aa can reduce the amount of time neuroimaging laboratories spend performing analyses and reduce errors, expanding the range of scientific questions it is practical to address.
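The core idea of a processing engine that maps module dependencies and skips already-completed stages can be sketched in a few lines. The module names and "done set" bookkeeping here are illustrative only, not aa's actual Matlab/XML interface:

```python
from graphlib import TopologicalSorter

# Each module declares the modules it depends on and a task to run.
# Stage names mimic a typical fMRI preprocessing chain for illustration.
modules = {
    "realign":    {"needs": [],            "task": lambda: "motion-corrected"},
    "normalise":  {"needs": ["realign"],   "task": lambda: "in standard space"},
    "smooth":     {"needs": ["normalise"], "task": lambda: "smoothed"},
    "firstlevel": {"needs": ["smooth"],    "task": lambda: "stats"},
}

done = {"realign"}  # pretend this stage completed in an earlier session

def run_pipeline(modules, done):
    # Order modules so every dependency runs before its dependents
    order = TopologicalSorter(
        {name: spec["needs"] for name, spec in modules.items()}
    ).static_order()
    executed = []
    for name in order:
        if name in done:
            continue                  # engine skips what is already complete
        modules[name]["task"]()
        done.add(name)
        executed.append(name)
    return executed

ran = run_pipeline(modules, done)     # only the three outstanding stages run
```

Independent branches of such a dependency graph can be dispatched in parallel, which is the basis of the cluster/cloud acceleration the abstract mentions.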