
    Anelastic sensitivity kernels with parsimonious storage for adjoint tomography and full waveform inversion

We introduce a technique to compute exact anelastic sensitivity kernels in the time domain using parsimonious disk storage. The method is based on a reordering of the time loop of time-domain forward/adjoint wave propagation solvers combined with the use of a memory buffer. It avoids instabilities that occur when time-reversing dissipative wave propagation simulations. The total number of required time steps is unchanged compared to usual acoustic or elastic approaches. The cost is reduced by a factor of 4/3 compared to the case in which anelasticity is partially accounted for by accommodating the effects of physical dispersion. We validate our technique by performing a test in which we compare the K_α sensitivity kernel to the exact kernel obtained by saving the entire forward calculation. This benchmark confirms that our approach is also exact. We illustrate the importance of including full attenuation in the calculation of sensitivity kernels by showing significant differences with physical-dispersion-only kernels.
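
The storage pattern the abstract describes can be approximated with a generic checkpointing loop: the forward run keeps only periodic snapshots in a small buffer, and the adjoint sweep regenerates the forward field from the nearest checkpoint instead of time-reversing the dissipative solver. The sketch below is a minimal illustration of that idea, not the authors' exact time-loop reordering; `forward_step` and `adjoint_step` are toy placeholders for the real PDE updates.

```python
import numpy as np

def forward_step(u, t):
    # toy stand-in for one time step of the dissipative forward solver
    return 0.99 * u + np.sin(0.1 * t)

def adjoint_step(v, u_fwd):
    # toy stand-in for one adjoint time step; it needs the forward field
    return 0.99 * v + u_fwd

def kernel_with_checkpoints(u0, n_steps, every):
    # forward pass: keep only every `every`-th state in a small buffer
    buffer, u = {}, u0
    for t in range(n_steps):
        if t % every == 0:
            buffer[t] = u
        u = forward_step(u, t)
    # adjoint pass: walk backwards, regenerating the forward state from the
    # nearest checkpoint rather than time-reversing the dissipative solver
    v = np.zeros_like(u0)
    kernel = np.zeros_like(u0)
    for t in reversed(range(n_steps)):
        base = (t // every) * every
        u_fwd = buffer[base]
        for s in range(base, t):            # recompute forward state at time t
            u_fwd = forward_step(u_fwd, s)
        v = adjoint_step(v, u_fwd)
        kernel += u_fwd * v                 # toy kernel accumulation
    return kernel

print(kernel_with_checkpoints(np.zeros(4), n_steps=100, every=10)[:2])
```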

    Impact of Selective Mapping Strategies on Automated Laboratory Result Notification to Public Health Authorities

Automated electronic laboratory reporting (ELR) for public health has many potential advantages, but requires mapping local laboratory test codes to a standard vocabulary such as LOINC. Mapping only the most frequently reported tests provides one way to prioritize the effort and mitigate the resource burden. We evaluated the implications of selective mapping on ELR for public health by comparing reportable conditions from an operational ELR system with the codes in the LOINC Top 2000. Laboratory result codes in the LOINC Top 2000 accounted for 65.3% of the reportable condition volume. However, by also including the 129 most frequent LOINC codes that identified reportable conditions in our system but were not present in the LOINC Top 2000, this set would cover 98% of the reportable condition volume. Our study highlights the ways that our approach to implementing vocabulary standards impacts secondary data uses such as public health reporting.
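
The coverage arithmetic in the abstract is easy to reproduce: given per-code message volumes, the coverage of a mapped code set is the covered volume divided by the total volume. A minimal sketch under that reading; the LOINC codes and volumes below are invented for illustration.

```python
def coverage(report_counts, mapped_codes):
    """Share of reportable-condition message volume covered by a code set."""
    total = sum(report_counts.values())
    covered = sum(n for code, n in report_counts.items() if code in mapped_codes)
    return covered / total

# invented per-code message volumes for reportable conditions
report_counts = {"1111-1": 5400, "2222-2": 1200, "3333-3": 300}
top2000 = {"1111-1"}                  # codes that fall in the LOINC Top 2000
extra = {"2222-2", "3333-3"}          # frequent local codes outside the Top 2000

print(f"{coverage(report_counts, top2000):.1%}")          # Top 2000 alone
print(f"{coverage(report_counts, top2000 | extra):.1%}")  # after adding extras
```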

    Development and testing of a risk indexing framework to determine field-scale critical source areas of faecal bacteria on grassland.

This paper draws on lessons from a UK case study in the management of diffuse microbial pollution from grassland farm systems in the Taw catchment, south-west England. We report on the development and preliminary testing of a field-scale faecal indicator organism (FIO) risk indexing tool (FIORIT). This tool aims to prioritise those fields most vulnerable in terms of their risk of contributing FIOs to water. FIORIT risk indices were related to recorded microbial water quality parameters (faecal coliforms [FC] and intestinal enterococci [IE]) to provide a concurrent on-farm evaluation of the tool. There was a significant upward trend in Log[FC] and Log[IE] values with FIORIT risk score classification (r² = 0.87 and 0.70, respectively; P < 0.01 for both FIOs). The FIORIT was then applied to 162 representative grassland fields through different seasons for ten farms in the case study catchment to determine the distribution of on-farm spatial and temporal risk. The high risk fields made up only a small proportion (1%, 2%, 2% and 3% for winter, spring, summer and autumn, respectively) of the total number of fields assessed, and less than 10% of the total area, but the likelihood of hydrological connection of high FIO source areas to receiving watercourses makes them a priority for mitigation efforts. The FIORIT provides a preliminary and evolving mechanism through which we can combine risk assessment with risk communication to end-users, and it provides a framework for prioritising future empirical research. Continued testing of the FIORIT across different geographical areas under both low and high flow conditions is now needed to develop it into a robust long-term indexing tool.
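
The concurrent evaluation reported above amounts to regressing log-transformed FIO concentrations on FIORIT risk classification. A minimal sketch of that check; the risk classes and coliform counts below are invented for illustration.

```python
import numpy as np

# invented FIORIT risk classes (1 = very low ... 5 = high) paired with
# faecal coliform counts (cfu per 100 ml) from receiving waters
risk_class = np.array([1, 1, 2, 2, 3, 3, 4, 4, 5, 5])
fc = np.array([40, 60, 150, 220, 900, 1200, 4000, 5200, 20000, 31000])

log_fc = np.log10(fc)
slope, intercept = np.polyfit(risk_class, log_fc, 1)   # trend of Log[FC] with class
pred = slope * risk_class + intercept
r2 = 1 - ((log_fc - pred) ** 2).sum() / ((log_fc - log_fc.mean()) ** 2).sum()
print(f"slope = {slope:.2f} log units per class, r2 = {r2:.2f}")
```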

    Learning from the Crowd in Terminology Mapping: The LOINC Experience

National policies in the United States require the use of standard terminology for data exchange between clinical information systems. However, most electronic health record systems continue to use local and idiosyncratic ways of representing clinical observations. To improve mappings between local terms and standard vocabularies, we sought to make existing mappings (wisdom) from health care organizations (the Crowd) available to individuals engaged in mapping processes. We developed new functionality to display counts of the local terms and organizations that had previously mapped to a given Logical Observation Identifiers Names and Codes (LOINC) code. Further, we enabled users to view the details of those mappings, including the local term names and the organizations that created the mappings. Users can also contribute their local mappings to a shared mapping repository. In this article, we describe the new functionality and its availability to implementers who desire resources to make mapping more efficient and effective.
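
The crowd functionality described can be pictured as a simple aggregation over a shared mapping repository: for each LOINC code, count the distinct local terms and organizations that mapped to it. A minimal sketch; the organizations and local term names are invented, and 2345-7 (serum glucose) and 2951-2 (serum sodium) are standard LOINC codes used here only as examples.

```python
from collections import defaultdict

# invented repository rows: (organization, local term, mapped LOINC code)
mappings = [
    ("Hospital A", "GLU SERPL", "2345-7"),
    ("Hospital B", "GLUCOSE BLD", "2345-7"),
    ("Clinic C", "GLU", "2345-7"),
    ("Hospital A", "NA SERPL", "2951-2"),
]

by_code = defaultdict(lambda: {"terms": set(), "orgs": set()})
for org, term, code in mappings:
    by_code[code]["terms"].add(term)
    by_code[code]["orgs"].add(org)

# the counts a mapper would see next to each candidate LOINC code
for code, d in sorted(by_code.items()):
    print(code, "-", len(d["terms"]), "local terms from",
          len(d["orgs"]), "organizations")
```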

    Identification of gene pathways implicated in Alzheimer's disease using longitudinal imaging phenotypes with sparse regression

We present a new method for the detection of gene pathways associated with a multivariate quantitative trait, and use it to identify causal pathways associated with an imaging endophenotype characteristic of longitudinal structural change in the brains of patients with Alzheimer's disease (AD). Our method, known as pathways sparse reduced-rank regression (PsRRR), uses group lasso penalised regression to jointly model the effects of genome-wide single nucleotide polymorphisms (SNPs), grouped into functional pathways using prior knowledge of gene-gene interactions. Pathways are ranked in order of importance using a resampling strategy that exploits finite sample variability. Our application study uses whole genome scans and MR images from 464 subjects in the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. A total of 66,182 SNPs are mapped to 185 gene pathways from the KEGG pathways database. Voxel-wise imaging signatures characteristic of AD are obtained by analysing 3D patterns of structural change at 6, 12 and 24 months relative to baseline. High-ranking, AD endophenotype-associated pathways in our study include those describing chemokine, JAK-STAT and insulin signalling pathways, and tight junction interactions. All of these have been previously implicated in AD biology. In a secondary analysis, we investigate SNPs and genes that may be driving pathway selection, and identify a number of previously validated AD genes, including CR1, APOE and TOMM40.
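
At the core of PsRRR is group-penalised regression, in which SNPs enter or leave the model pathway by pathway rather than one at a time. The sketch below is a toy proximal-gradient group lasso on simulated data, not the authors' PsRRR algorithm (which is reduced-rank, multivariate, and ranked by resampling); it only illustrates how the group penalty zeroes out entire SNP groups.

```python
import numpy as np

def group_lasso(X, y, groups, lam, lr=0.1, iters=500):
    """Toy proximal-gradient group lasso; `groups` lists column-index arrays."""
    n, p = X.shape
    w = np.zeros(p)
    for _ in range(iters):
        w -= lr * X.T @ (X @ w - y) / n       # gradient step on squared error
        for g in groups:                      # group soft-thresholding (prox)
            norm = np.linalg.norm(w[g])
            w[g] = 0.0 if norm == 0 else max(0.0, 1 - lr * lam / norm) * w[g]
    return w

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 9))
y = X[:, :3] @ np.array([1.0, -2.0, 1.5]) + 0.1 * rng.standard_normal(100)
groups = [np.arange(0, 3), np.arange(3, 6), np.arange(6, 9)]  # three "pathways"
print(np.round(group_lasso(X, y, groups, lam=1.0), 2))
# only the first group (the true signal "pathway") stays clearly non-zero
```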

    A general theory of intertemporal decision-making and the perception of time

Animals and humans make decisions based on their expected outcomes. Since relevant outcomes are often delayed, perceiving delays and choosing between earlier versus later rewards (intertemporal decision-making) is an essential component of animal behavior. The myriad observations made in experiments studying intertemporal decision-making and time perception have not yet been rationalized within a single theory. Here we present a theory, Training-Integrated Maximized Estimation of Reinforcement Rate (TIMERR), that explains a wide variety of behavioral observations made in intertemporal decision-making and the perception of time. Our theory postulates that animals make intertemporal choices to optimize expected reward rates over a limited temporal window; this window includes a past integration interval (over which experienced reward rate is estimated) and the expected delay to future reward. Using this theory, we derive a mathematical expression for the subjective representation of time. A unique contribution of our work is in finding that the past integration interval directly determines the steepness of temporal discounting and the nonlinearity of time perception. In so doing, our theory provides a single framework to understand both intertemporal decision-making and time perception.
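
On a minimal reading of the rule stated in the abstract, an option is valued by the expected reward rate over the window formed by the past integration interval plus the delay to the future reward. A sketch under that assumption; the rewards, delays, and window lengths are invented, and the paper's exact formulation may differ.

```python
def timerr_value(reward, delay, past_rate, T):
    # expected reward rate over the window: past integration interval T
    # plus the delay to the future reward (as described in the abstract)
    return (past_rate * T + reward) / (T + delay)

# invented choice between a small-sooner and a large-later reward, with no
# reward experienced in the past window (past_rate = 0)
for T in (2.0, 20.0):
    soon = timerr_value(reward=2.0, delay=1.0, past_rate=0.0, T=T)
    late = timerr_value(reward=5.0, delay=10.0, past_rate=0.0, T=T)
    print(f"T = {T}: choose", "large-later" if late > soon else "small-sooner")
# a short past window (small T) yields steeper discounting: the small-sooner
# option wins at T = 2 but loses at T = 20
```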

    Mapping the State of Financial Stability

The paper uses the Self-Organizing Map for mapping the state of financial stability and visualizing the sources of systemic risks on a two-dimensional plane, as well as for predicting systemic financial crises. The Self-Organizing Financial Stability Map (SOFSM) enables a two-dimensional representation of a multidimensional financial stability space and thus allows disentangling the individual sources impacting systemic risks. The SOFSM can be used to monitor macro-financial vulnerabilities by locating a country in the financial stability cycle: in the pre-crisis, crisis, post-crisis or tranquil state. In addition, the SOFSM performs better than or equally well as a logit model in classifying in-sample data and predicting out-of-sample the global financial crisis that started in 2007. Model robustness is tested by varying the thresholds of the models, the policymaker's preferences, and the forecasting horizon.
Keywords: systemic financial crisis; systemic risk; self-organizing maps; visualisation; prediction; macroprudential supervision
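
The underlying tool is a standard self-organizing map. The sketch below trains a generic SOM with the MiniSom library on simulated indicator data to show the mechanics of projecting observations onto a two-dimensional map; it is not the SOFSM itself, and the indicator matrix is invented.

```python
import numpy as np
from minisom import MiniSom  # pip install minisom

rng = np.random.default_rng(1)
# invented standardized macro-financial indicators per country-quarter
# (e.g. credit growth, asset-price gap, leverage, current account)
X = rng.standard_normal((500, 4))

som = MiniSom(8, 6, input_len=4, sigma=1.5, learning_rate=0.5, random_seed=0)
som.train_random(X, 5000)

# locate a new observation on the 2-D map; the historical crisis-cycle
# labels of nearby units would indicate the country's current state
print("mapped to unit", som.winner(X[0]))
```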

    Predicting the need for aged care services at the small area level: the CAREMOD spatial microsimulation model

Most industrialised societies face rapid population ageing over the next two decades, including sharp increases in the number of people aged 85 years and over. As a result, the supply of and demand for aged care services has assumed increasing policy prominence. Information about the likely spatial distribution of the need for aged care services is critical for planners and policy makers. This article describes the development of a regional microsimulation model of the need for aged care in New South Wales, a state of Australia. It details the methods involved in reweighting the 1998 Survey of Disability, Ageing and Carers, a national-level dataset, against the 2001 Census to produce synthetic small area estimates at the statistical local area level. Validation shows that survey variables not constrained in the weighting process can provide unreliable local estimates. A proposed solution to this problem is outlined, involving record cloning, value imputation and alignment. Indicative disability estimates arising from this process are then discussed.
Keywords: disability; ageing; spatial analysis; aged care; cloning; imputation; alignment; NATSEM
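
Reweighting a national survey against census benchmarks is commonly done with iterative proportional fitting: weights are rescaled, one constraint variable at a time, until weighted totals match the small-area benchmarks. A minimal sketch under that assumption; the constraint variables and counts are invented, and the article's own procedure additionally involves record cloning, imputation and alignment, which are not shown.

```python
import numpy as np

def ipf_weights(sample, benchmarks, iters=50):
    """Iterative proportional fitting: rescale survey weights until weighted
    category totals match small-area census benchmarks."""
    n = len(next(iter(sample.values())))
    w = np.ones(n)
    for _ in range(iters):
        for var, targets in benchmarks.items():
            for cat, target in targets.items():
                mask = sample[var] == cat
                total = w[mask].sum()
                if total > 0:
                    w[mask] *= target / total
    return w

# invented survey records (rows) and census benchmarks for one small area
sample = {"age85plus": np.array([0, 0, 1, 0, 1]),
          "disabled":  np.array([1, 0, 1, 1, 0])}
benchmarks = {"age85plus": {0: 300.0, 1: 40.0},
              "disabled":  {0: 180.0, 1: 160.0}}
print(np.round(ipf_weights(sample, benchmarks), 1))
```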

    Fast, Exact Bootstrap Principal Component Analysis for p>1 million

Many have suggested a bootstrap procedure for estimating the sampling variability of principal component analysis (PCA) results. However, when the number of measurements per subject (p) is much larger than the number of subjects (n), the challenge of calculating and storing the leading principal components from each bootstrap sample can be computationally infeasible. To address this, we outline methods for fast, exact calculation of bootstrap principal components, eigenvalues, and scores. Our methods leverage the fact that all bootstrap samples occupy the same n-dimensional subspace as the original sample. As a result, all bootstrap principal components are limited to the same n-dimensional subspace and can be efficiently represented by their low-dimensional coordinates in that subspace. Several uncertainty metrics can be computed solely based on the bootstrap distribution of these low-dimensional coordinates, without calculating or storing the p-dimensional bootstrap components. Fast bootstrap PCA is applied to a dataset of sleep electroencephalogram (EEG) recordings (p = 900, n = 392), and to a dataset of brain magnetic resonance images (MRIs) (p ≈ 3 million, n = 352). For the brain MRI dataset, our method allows standard errors for the first 3 principal components based on 1000 bootstrap samples to be calculated on a standard laptop in 47 minutes, as opposed to approximately 4 days with standard methods.
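
The subspace trick described in the abstract can be sketched in a few lines of NumPy: take one SVD of the full data, represent every bootstrap sample by its n x n low-dimensional coordinates, and map only the summary statistics back to p dimensions. A minimal sketch for the first component's loading standard errors; the dimensions and data are simulated, and this follows the abstract's description rather than the authors' exact implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
p, n, B = 5000, 100, 200                    # measurements, subjects, bootstraps
X = rng.standard_normal((p, n))
X -= X.mean(axis=1, keepdims=True)          # center across subjects

U, d, Vt = np.linalg.svd(X, full_matrices=False)  # X = U diag(d) Vt; U is p x n
DVt = d[:, None] * Vt                       # n x n: the data in subspace coords

coords = np.empty((B, n))                   # bootstrap PC1, low-dim coordinates
for b in range(B):
    idx = rng.integers(0, n, size=n)        # resample subjects (columns)
    Ua, _, _ = np.linalg.svd(DVt[:, idx], full_matrices=False)  # small SVD only
    u1 = Ua[:, 0]
    if b > 0 and u1 @ coords[0] < 0:        # align arbitrary sign flips
        u1 = -u1
    coords[b] = u1

# each bootstrap PC1 equals U @ coords[b], so element-wise loading variances
# are diag(U C U^T), computed without storing any p-dimensional bootstrap PC
C = np.cov(coords, rowvar=False)
L = np.linalg.cholesky(C + 1e-10 * np.eye(n))
se_pc1 = np.sqrt(((U @ L) ** 2).sum(axis=1))
print(se_pc1[:5])                           # SEs for the first five loadings
```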