
    On the Taylor Tower of Relative K-theory

    For a functor with smash product F and an F-bimodule P, we construct an invariant W(F;P), an analog of TR(F) with coefficients. We study the structure of this invariant and its finite-stage approximations W_n(F;P), and conclude that for F the FSP associated to a ring R and P the FSP associated to the simplicial R-bimodule M[X] (with M a simplicial R-bimodule and X a simplicial set), the functor sending X to W_n(R;M[X]) is the nth stage of the Goodwillie calculus Taylor tower of the functor sending X to the reduced K-theory spectrum of R with coefficients in M[X]. Thus the functor sending X to W(R;M[X]) is the full Taylor tower, which converges to the reduced K-theory of R with coefficients in M[X] for connected X. We prove the equivalence between the relative K-theory of R with coefficients in M[-] and W(R;M[-]) using Goodwillie calculus: we construct a natural transformation between the two functors, both of which are 0-analytic, and show that it induces an equivalence on derivatives at any connected X. (Comment: 66 pages, plain TeX.)
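    In symbols, the main claim can be stated as follows (using the standard notation P_n for the nth Goodwillie polynomial approximation; this display is a paraphrase of the statement above, not notation taken from the paper):

        \[
          W_n(R; M[X]) \simeq P_n\!\left(\widetilde{K}(R; M[-])\right)(X),
          \qquad
          W(R; M[X]) \simeq \operatorname{holim}_n W_n(R; M[X]),
        \]
        \[
          W(R; M[X]) \simeq \widetilde{K}(R; M[X]) \quad \text{for connected } X.
        \]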

    The Reflection Component from Cygnus X-1 in the Soft State Measured by NuSTAR and Suzaku

    The black hole binary Cygnus X-1 was observed in late 2012 with the Nuclear Spectroscopic Telescope Array (NuSTAR) and Suzaku, providing spectral coverage over the ~1-300 keV range. The source was in the soft state, with multi-temperature blackbody, power-law, and reflection components along with absorption from highly ionized material in the system. The high throughput of NuSTAR allows for a very high quality measurement of the complex iron line region as well as the rest of the reflection component. The iron line is clearly broadened and is well described by a relativistic blurring model, providing an opportunity to constrain the black hole spin. Although the spin constraint depends somewhat on which continuum model is used, we obtain a_* > 0.83 for all models that provide a good description of the spectrum. However, none of our spectral fits give a disk inclination consistent with the most recently reported binary values for Cyg X-1. This may indicate a >13° misalignment between the orbital plane and the inner accretion disk (i.e., a warped accretion disk), or that there is missing physics in the spectral models.
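    The link between a broadened iron line and spin runs through the innermost stable circular orbit (ISCO), which shrinks as spin grows. A minimal sketch of that relation using the standard Bardeen-Press-Teukolsky expression (illustrative only, not code from the paper):

        import numpy as np

        def r_isco(a):
            """Prograde ISCO radius in gravitational radii (GM/c^2)
            for dimensionless spin a in [0, 1) (Bardeen, Press & Teukolsky 1972)."""
            z1 = 1 + (1 - a**2) ** (1/3) * ((1 + a) ** (1/3) + (1 - a) ** (1/3))
            z2 = np.sqrt(3 * a**2 + z1**2)
            return 3 + z2 - np.sqrt((3 - z1) * (3 + z1 + 2 * z2))

        # a_* = 0 gives 6 r_g; the lower bound a_* > 0.83 puts the disk
        # inner edge inside ~2.7 r_g, where relativistic broadening is strong.
        print(r_isco(0.0))   # 6.0
        print(r_isco(0.83))  # ~2.7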

    Cognitive and autonomic dysfunction measures in normal controls, white coat and borderline hypertension

    Background: White coat hypertension (WCHT) is a clinically significant condition with distinct haemodynamic features and functional changes. We aimed to compare cognitive function and autonomic dysfunction variables (heart rate variability) between subjects with normal blood pressure (controls), WCHT, and borderline hypertension (BLH).
    Methods: We performed a cross-sectional study in a cohort of 69 subjects (mean age ± SD: 38.2 ± 10.8 years) comprising comparable numbers of normal controls and subjects with WCHT and BLH. We measured clinic and 24-hour ambulatory blood pressure (ABPM), cognitive function parameters, and heart rate variability (HRV). All subjects underwent 24-hour ambulatory electrocardiography monitoring, which was analysed for HRV measurements, as well as routine echocardiography (ECHO).
    Results: Comparison between the three groups revealed significant (p < 0.04) differences in mean daytime ABPM (systolic and diastolic). On the state anxiety inventory (SAI), both WCHT and BLH subjects had significantly (p < 0.006) higher anxiety levels than the control group. WCHT subjects scored significantly (p < 0.004) lower than the other two groups on memory tasks and performed significantly (p < 0.001) worse on memory tests, whereas BLH subjects had significantly (p < 0.001) lower reaction times. We found a significant (p < 0.05) difference in 24-hour RMSSD and SDNN between the three groups, and 24-hour RMSSD correlated significantly with computerised CANTAB scores. Echocardiographic assessment revealed no significant differences in LV mass indices or diastolic function.
    Conclusions: WCHT and BLH subjects showed lower cognitive performance and higher levels of anxiety than controls. Autonomic function, as reflected by HRV indices, was lower in WCHT and BLH subjects than in controls, though not significantly so. Our results suggest that WCHT may not be a benign condition, as it may contribute to the overall risk of cardiovascular disease and LV damage. Longitudinal studies of patients with WCHT should clarify whether this condition is transient, persistent, or progressive.
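    For reference, the two HRV indices compared above have simple standard definitions over the series of normal-to-normal (NN) inter-beat intervals. A minimal sketch of those formulas (standard definitions, not the study's analysis code; the interval series is hypothetical):

        import numpy as np

        def sdnn(nn_ms):
            """Standard deviation of NN intervals (ms) over the recording."""
            return float(np.std(nn_ms, ddof=1))

        def rmssd(nn_ms):
            """Root mean square of successive NN-interval differences (ms)."""
            diffs = np.diff(nn_ms)
            return float(np.sqrt(np.mean(diffs ** 2)))

        # Hypothetical NN series in milliseconds:
        nn = np.array([812, 790, 805, 821, 798, 810], dtype=float)
        print(sdnn(nn), rmssd(nn))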

    A modified load apportionment model for identifying point and diffuse source nutrient inputs to rivers from stream monitoring data

    Determining point source (PS) and diffuse source (DS) nutrient inputs to rivers is essential for assessing and developing mitigation strategies to reduce the excessive nutrient loads that induce eutrophication. However, the application of mechanistic watershed models to assess nutrient inputs is limited by large data requirements and intensive model calibration efforts. Simple export coefficient models and statistical models also require extensive primary watershed attribute information, and they cannot address seasonal patterns of nutrient delivery. In practice, monitoring efforts to identify all PSs within a watershed are very difficult owing to time and economic limitations. To overcome these issues, a modified load apportionment model (LAM), based on the fundamental hydrological differences between PS and DS pollution, was developed relating the river nutrient load to nutrient inputs from PS, DS, and upstream inflow sources while adjusting for in-stream nutrient retention processes. Estimates of PS and DS inputs can be obtained through Bayesian calibration of the five model parameters from commonly available stream monitoring data. The model accounts for in-stream nutrient retention processes, temporal changes in PS and DS inputs, and nutrient contributions from upstream inflow waters, as well as the uncertainty associated with load estimations. The efficacy of this modified LAM was demonstrated for total nitrogen (TN) source apportionment using a 6-year record of monthly water quality data for the ChangLe River in eastern China. To attain the target river TN concentration (2 mg/L), the required input load reductions for PS, DS, and upstream inflow were estimated. This modified LAM is applicable to both district-based and catchment-based water quality management strategies with limited data requirements, providing a simple, effective, and economical tool for apportioning PS and DS nutrient inputs to rivers.
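    Load apportionment models of this family exploit the fact that point-source loads are roughly constant (and so are diluted at high flow) while diffuse loads rise with discharge. A minimal sketch of the classic two-term power-law form (the paper's modified version adds upstream-inflow and retention terms and is calibrated by Bayesian methods; the parameter values here are illustrative):

        import numpy as np

        def lam_load(q, a, b, c, d):
            """Total in-stream load as a function of discharge q.
            a * q**b : point-source term (b <= 1, diluted as flow rises)
            c * q**d : diffuse-source term (d > 1, mobilised by runoff)"""
            return a * q**b + c * q**d

        # Hypothetical parameters; the point-source share dominates at low flow.
        q = np.linspace(0.5, 20.0, 5)        # discharge, m^3/s
        load = lam_load(q, a=2.0, b=0.2, c=0.05, d=1.6)
        ps_share = (2.0 * q**0.2) / load     # fraction attributable to point sources
        print(np.round(ps_share, 2))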

    Recommendations for enterovirus diagnostics and characterisation within and beyond Europe.

    Enteroviruses (EV) can cause severe neurological and respiratory infections and occasionally lead to devastating outbreaks, as previously demonstrated by EV-A71 and EV-D68 in Europe. However, these infections are still often underdiagnosed, and EV typing data are not currently collected at the European level. In order to improve EV diagnostics, collate data on severe EV infections, and monitor the circulation of EV types, we have established the European Non-Polio Enterovirus Network (ENPEN). The first task of this cross-border network has been to ensure prompt and adequate diagnosis of these infections in Europe, and hence we present recommendations for non-polio EV detection and typing based on the consensus view of this multidisciplinary team, which includes experts from over 20 European countries. We recommend that respiratory and stool samples, in addition to cerebrospinal fluid (CSF) and blood samples, be submitted for EV testing from patients with suspected neurological infections. This is vital, since viruses like EV-D68 are rarely detectable in CSF or stool samples. Furthermore, reverse transcriptase PCR (RT-PCR) targeting the 5' noncoding region (5'NCR) should be used for the diagnosis of EVs because of its sensitivity, specificity, and short turnaround time. Sequencing of the VP1 capsid protein gene is recommended for EV typing; EV typing cannot be based on 5'NCR sequences, owing to frequent recombination events, and should not rely on virus isolation. Effective and standardized laboratory diagnostics and characterisation of circulating virus strains are the first steps towards effective and continuous surveillance activities, which in turn will provide better estimates of the EV disease burden.

    Introduction to special issue:New Times Revisited: Britain in the 1980s

    The authors in this volume are collectively engaged with a historical puzzle: what happens if we examine the decade once we step out of the shadows cast by Thatcher? That is, does the 1980s as a significant and meaningful periodisation (equivalent to that of the 1960s) still work if Thatcher becomes but one part of the story rather than the story itself? The essays in this collection suggest that the 1980s makes sense only as a political period. They situate the 1980s within various longer-term trajectories that show the events of the decade to be as much the consequence as the cause of bigger, long-term historical processes. This introduction contextualises the collection within the wider literature before explaining the collective and individual contributions made.

    Automatic Pill Identification from Pillbox Images

    There is a vital need for fast and accurate recognition of medicinal tablets and capsules. Efforts to date have centered on automatic segmentation and on color and shape identification. Our system combines these steps with preprocessing before imprint recognition. Using the National Library of Medicine Pillbox database, regression analysis applied to automatic color and shape recognition enables successful pill identification. Measured error rates for the segmentation and color-recognition subtasks on this database are 1.9% and 2.2%, respectively. Imprint recognition with optical character recognition (OCR) is key to exact pill identification but remains a challenging problem, so overall recognition accuracy is not yet known.
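    As an illustration of the color and shape steps, the following generic OpenCV sketch segments a pill, extracts a dominant color, and computes a simple shape cue (not the authors' pipeline; the thresholding choice and input image are hypothetical):

        import cv2
        import numpy as np

        img = cv2.imread("pill.jpg")                      # hypothetical input image
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

        # Segment the pill from a plain background with Otsu thresholding.
        _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

        # Dominant color: mean HSV over the segmented region.
        hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
        mean_hsv = cv2.mean(hsv, mask=mask)[:3]

        # Shape cue: circularity = 4*pi*area / perimeter^2 (1.0 for a circle).
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        c = max(contours, key=cv2.contourArea)
        area = cv2.contourArea(c)
        perim = cv2.arcLength(c, True)
        circularity = 4 * np.pi * area / (perim ** 2)

        print(mean_hsv, circularity)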

    A Conceptual Architecture for Reproducible On-demand Data Integration for Complex Diseases

    Eosinophilic esophagitis is a complex, emerging condition characterized by poorly defined phenotypes and associated with both genetic and environmental factors. Understanding such diseases requires researchers to seamlessly navigate across multiple scales (e.g., metabolome, proteome, genome, phenome, exposome) and models (sources using different stores, formats, and semantics), interrogate existing knowledge bases, and obtain results in formats of choice to answer different types of research questions. All of this must be done in a way that supports reproducibility and sharability of the methods used for selecting data sources, designing research queries, executing queries, and understanding results and their quality. We present a higher-level formalization for building multi-source data platforms on demand, based on the principles of meta-process modeling, that provides reproducible and sharable data query and interrogation workflows and artifacts. A framework based on these formalizations consists of a layered abstraction of processes supporting administrative and research end users (illustrated in the sketch below):
    Top layer (meta-process): an extendable library of computable generic process concepts (PC), stored in a metadata repository (MDR) [1], that describe steps/phases in the translational research life cycle.
    Middle layer (process): methods to generate on-demand queries by assembling instantiated PC into query processes and rules. Researchers design query processes using PC and evaluate their feasibility and validity by leveraging metadata content in the MDR.
    Bottom layer (execution): interaction with a hyper-generalized federation platform (e.g., OpenFurther [1]) that performs complex interrogation and integration queries requiring consideration of interdependencies and precedence across the selected sources.
    This framework can be implemented using process exchange formats (e.g., DAX, BPMN) and scientific workflow systems (e.g., Pegasus [2], Apache Taverna [3]). All content (PC, rules, and workflows) and the assembly and execution mechanisms are sharable. The content, design, and development of the framework are informed by user-centered design methodology and consist of researcher- and integration-centric components that provide robust and reproducible workflows.
    References:
    1. Gouripeddi R, Facelli JC, et al. FURTHeR: An Infrastructure for Clinical, Translational and Comparative Effectiveness Research. AMIA Annual Fall Symposium. 2013; Washington, DC.
    2. The Pegasus Project. 2016; https://pegasus.isi.edu/.
    3. Apache Software Foundation. Apache Taverna. 2016; https://taverna.incubator.apache.org/.
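    A minimal sketch of the three-layer idea in code (all names are hypothetical; the real framework uses an MDR and a federation platform such as OpenFurther rather than in-memory stubs):

        from dataclasses import dataclass, field

        # Top layer: generic process concepts stored in a metadata repository.
        @dataclass
        class ProcessConcept:
            name: str
            parameters: list

        MDR = {
            "select_cohort": ProcessConcept("select_cohort", ["phenotype"]),
            "link_omics": ProcessConcept("link_omics", ["scale"]),
        }

        # Middle layer: researchers assemble instantiated concepts into a query process.
        @dataclass
        class QueryProcess:
            steps: list = field(default_factory=list)

            def add(self, concept_name, **bindings):
                concept = MDR[concept_name]                   # validate against the MDR
                assert set(bindings) <= set(concept.parameters)
                self.steps.append((concept_name, bindings))
                return self

        # Bottom layer: execution hands the assembled process to a federation platform.
        def execute(process):
            for name, bindings in process.steps:
                print(f"dispatching {name} with {bindings}")  # stub for a federated query

        qp = QueryProcess().add("select_cohort", phenotype="EoE").add("link_omics", scale="proteome")
        execute(qp)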

    Streamlining Study Design and Statistical Analysis for Quality Improvement and Research Reproducibility

    Research Overview: This summarizes current and future work on streamlining the processes and methods involved in study design and statistical analysis in order to ensure the quality of statistical methods and the reproducibility of research. Objectives/Goals: Key factors causing irreproducibility of research include inappropriate study design methodologies and statistical analysis. In modern statistical practice, irreproducibility can arise from statistical issues (false discoveries, p-hacking, overuse/misuse of p-values, low power, poor experimental design) and computational issues (data, code, and software management). Addressing these requires understanding the processes and workflows practiced by an organization and developing metrics to quantify reproducibility. Methods/Study Population: Within the Foundation of Discovery - Population Health Research, Center for Clinical and Translational Science, University of Utah, we are undertaking a project to streamline study design and statistical analysis workflows and processes. As a first step, we met with key stakeholders to understand current practices by eliciting example statistical projects, and then developed process information models for different types of statistical needs using Lucidchart. We then reviewed these with the Foundation's leadership and the Standards Committee to arrive at ideal workflows and models, and defined key measurement points (such as those around study design, the analysis plan, the final report, requirements for quality checks, and double coding) for assessing reproducibility. As next steps, we are using our findings to embed analytical and infrastructural approaches within statisticians' workflows. These will include data and code dissemination platforms such as Box, Bitbucket, and GitHub; documentation platforms such as Confluence; and workflow tracking platforms such as Jira. These tools will simplify and automate the capture of communications as a statistician works through a project. Data-intensive processes will use process-workflow management platforms such as Activiti, Pegasus, and Taverna. Results/Anticipated Results: We anticipate strategies for sharing and publishing study protocols, data, code, and results across the spectrum, active collaboration with the research team, and automation of key steps, along with decision support. Discussion/Significance of Impact: This analysis of statistical processes, and of the computational methods to automate them, helps ensure the quality of statistical methods and the reproducibility of research.
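    As a toy illustration of the measurement-point idea, a script of this kind could score a project directory against required artifacts (the file layout and scoring are entirely hypothetical; the project's actual metrics are not described at this level of detail):

        from pathlib import Path

        # Hypothetical required artifacts mirroring the measurement points above.
        REQUIRED = {
            "analysis_plan.md": "analysis plan agreed before data access",
            "final_report.md": "final statistical report",
            "code": "versioned analysis code (e.g., pushed to Bitbucket/GitHub)",
            "qc_checklist.md": "completed quality-check / double-coding record",
        }

        def reproducibility_score(project_dir):
            """Fraction of required artifacts present in a project directory."""
            root = Path(project_dir)
            present = [name for name in REQUIRED if (root / name).exists()]
            for name, description in REQUIRED.items():
                status = "ok     " if name in present else "MISSING"
                print(f"[{status}] {name}: {description}")
            return len(present) / len(REQUIRED)

        print(reproducibility_score("example_project"))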